Sample records for distributed database concurrency

  1. The Raid distributed database system

    NASA Technical Reports Server (NTRS)

    Bhargava, Bharat; Riedl, John

    1989-01-01

    Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.

  2. Advanced technologies for scalable ATLAS conditions database access on the grid

    NASA Astrophysics Data System (ADS)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
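The pilot-query idea described above can be sketched as follows. This is an illustrative sketch only, not the ATLAS utility library's actual API; `fetch_with_pilot`, `FakeServer`, and the probe query are hypothetical names.

```python
import time

def fetch_with_pilot(server, real_query, max_wait=5.0, poll=0.01):
    """Peak-load avoidance via a pilot query: before submitting the
    expensive real query, send a cheap pilot probe. Only when the
    server answers the pilot promptly is the real query submitted."""
    waited = 0.0
    while waited < max_wait:
        if server.ping("SELECT 1"):   # cheap pilot query
            return server.execute(real_query)
        time.sleep(poll)              # back off while server is overloaded
        waited += poll
    raise TimeoutError("server overloaded; real query never submitted")

class FakeServer:
    """Toy server that is 'overloaded' for the first n pilot probes."""
    def __init__(self, busy_probes):
        self.busy_probes = busy_probes
    def ping(self, _query):
        if self.busy_probes > 0:
            self.busy_probes -= 1
            return False
        return True
    def execute(self, query):
        return f"result of {query}"

result = fetch_with_pilot(FakeServer(busy_probes=2), "SELECT payload")
```

The real query is deferred until the server demonstrates it has capacity, so peak loads spread out instead of piling onto a saturated server.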

  3. Virtual time and time warp on the JPL hypercube [operating system implementation for distributed simulation]

    NASA Technical Reports Server (NTRS)

    Jefferson, David; Beckman, Brian

    1986-01-01

    This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
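The virtual-time idea above can be illustrated with a minimal rollback sketch. This is a simplified model assuming checkpoint-and-restore semantics, not the Time Warp Operating System's implementation; re-execution of undone events after rollback is omitted.

```python
class TimeWarpProcess:
    """Minimal Time Warp sketch: process events in virtual-time order,
    checkpoint state, and roll back when a 'straggler' message arrives
    with a timestamp earlier than the local virtual time."""
    def __init__(self):
        self.lvt = 0                  # local virtual time
        self.state = 0
        self.snapshots = [(0, 0)]     # (virtual time, state) checkpoints

    def handle(self, ts, value):
        if ts < self.lvt:             # straggler: roll back optimistically
            self.rollback(ts)
        self.state += value
        self.lvt = ts
        self.snapshots.append((ts, self.state))

    def rollback(self, ts):
        # restore the latest checkpoint taken before the straggler's timestamp
        while self.snapshots[-1][0] >= ts:
            self.snapshots.pop()
        self.lvt, self.state = self.snapshots[-1]
        # re-execution of undone later events is omitted in this sketch

p = TimeWarpProcess()
p.handle(10, 5)
p.handle(20, 3)
p.handle(15, 2)   # straggler: forces rollback past virtual time 20
```

Because processes execute optimistically and undo work only when a straggler arrives, the same mechanism applies to distributed simulation and to optimistic database concurrency control.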

  4. Distributed Database Control and Allocation. Volume 2. Performance Analysis of Concurrency Control Algorithms.

    DTIC Science & Technology

    1983-10-01

    Concurrency Control Algorithms. Computer Corporation of America. Wente K. Lin, Philip A. Bernstein, Nathan Goodman and Jerry Nolte. APPROVED FOR PUBLIC ... This report has been reviewed by the RADC Public Affairs Office (PA) and is releasable to the National Technical Information Service (NTIS). At NTIS it will be releasable to the general public, including foreign nations. RADC-TR-83-226, Vol II (of three) has been reviewed and is

  5. Distributed Database Control and Allocation. Volume 1. Frameworks for Understanding Concurrency Control and Recovery Algorithms.

    DTIC Science & Technology

    1983-10-01

    an Aborti, it forwards the operation directly to the recovery system. When the recovery system acknowledges that the operation has been processed, the ... Aborti: Write Ti into the abort list. Then undo all of Ti's writes by reading their before-images from the audit trail and writing them back into the stable database. [Ack] Then, delete Ti from the active list. Restart: Process Aborti for each Ti on the active list. [Ack] In this algorithm
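The abort procedure sketched in the fragment above (write the transaction to the abort list, restore before-images from the audit trail, remove it from the active list) can be expressed as a short sketch; the data-structure names mirror the fragment but the code is illustrative, not the report's algorithm verbatim.

```python
def abort_transaction(ti, active_list, abort_list, audit_trail, database):
    """Undo-based abort: record the transaction in the abort list,
    restore before-images from the audit trail, then delete the
    transaction from the active list."""
    abort_list.append(ti)
    # the audit trail holds (txn, key, before_image) entries; undo in
    # reverse order so the earliest before-image is restored last
    for txn, key, before_image in reversed(audit_trail):
        if txn == ti:
            database[key] = before_image
    active_list.remove(ti)

# toy run: T1 overwrote x and y; aborting restores their before-images
db = {"x": 99, "y": 42}
trail = [("T1", "x", 1), ("T1", "y", 2)]
active, aborted = ["T1"], []
abort_transaction("T1", active, aborted, trail, db)
```

Restart processing, as the fragment notes, is then just this routine applied to every transaction still on the active list.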

  6. Building a highly available and intrusion tolerant Database Security and Protection System (DSPS).

    PubMed

    Cai, Liang; Yang, Xiao-Hu; Dong, Jin-Xiang

    2003-01-01

    Database Security and Protection System (DSPS) is a security platform for fighting malicious DBMS. The security and performance are critical to DSPS. The authors suggested a key management scheme by combining the server group structure to improve availability and the key distribution structure needed by proactive security. This paper detailed the implementation of proactive security in DSPS. After thorough performance analysis, the authors concluded that the performance difference between the replicated mechanism and proactive mechanism becomes smaller and smaller with increasing number of concurrent connections; and that proactive security is very useful and practical for large, critical applications.

  7. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both geospatial raster database management systems and raster data processing platforms from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale, high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  8. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes have been created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which have been performed on them. Similar appropriate results available in the literature have also been considered. Relational and non-relational NoSQL database systems show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends largely on each particular situation and specific problem.

  9. Characterizing Distributed Concurrent Engineering Teams: A Descriptive Framework for Aerospace Concurrent Engineering Design Teams

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Debarati; Hihn, Jairus; Warfield, Keith

    2011-01-01

    As aerospace missions grow larger and more technically complex in the face of ever tighter budgets, it will become increasingly important to use concurrent engineering methods in the development of early conceptual designs because of their ability to facilitate rapid assessments and trades in a cost-efficient manner. To successfully accomplish these complex missions with limited funding, it is also essential to effectively leverage the strengths of individuals and teams across government, industry, academia, and international agencies by increased cooperation between organizations. As a result, the existing concurrent engineering teams will need to increasingly engage in distributed collaborative concurrent design. This paper is an extension of a recent white paper written by the Concurrent Engineering Working Group, which details the unique challenges of distributed collaborative concurrent engineering. This paper includes a short history of aerospace concurrent engineering, and defines the terms 'concurrent', 'collaborative' and 'distributed' in the context of aerospace concurrent engineering. In addition, a model for the levels of complexity of concurrent engineering teams is presented to provide a way to conceptualize information and data flow within these types of teams.

  10. CUDASW++ 3.0: accelerating Smith-Waterman protein database search by coupling CPU and GPU SIMD instructions.

    PubMed

    Liu, Yongchao; Wirawan, Adrianto; Schmidt, Bertil

    2013-04-04

    The maximal sensitivity for local alignments makes the Smith-Waterman algorithm a popular choice for protein sequence database search based on pairwise alignment. However, the algorithm is compute-intensive due to a quadratic time complexity. Corresponding runtimes are further compounded by the rapid growth of sequence databases. We present CUDASW++ 3.0, a fast Smith-Waterman protein database search algorithm, which couples CPU and GPU SIMD instructions and carries out concurrent CPU and GPU computations. For the CPU computation, this algorithm employs SSE-based vector execution units as accelerators. For the GPU computation, we have investigated for the first time a GPU SIMD parallelization, which employs CUDA PTX SIMD video instructions to gain more data parallelism beyond the SIMT execution model. Moreover, sequence alignment workloads are automatically distributed over CPUs and GPUs based on their respective compute capabilities. Evaluation on the Swiss-Prot database shows that CUDASW++ 3.0 gains a performance improvement over CUDASW++ 2.0 of up to 2.9 and 3.2 times, with a maximum performance of 119.0 and 185.6 GCUPS, on a single-GPU GeForce GTX 680 and a dual-GPU GeForce GTX 690 graphics card, respectively. In addition, our algorithm has demonstrated significant speedups over other top-performing tools: SWIPE and BLAST+. CUDASW++ 3.0 is written in CUDA C++ and PTX assembly languages, targeting GPUs based on the Kepler architecture. This algorithm obtains significant speedups over its predecessor: CUDASW++ 2.0, by benefiting from the use of CPU and GPU SIMD instructions as well as the concurrent execution on CPUs and GPUs. The source code and the simulated data are available at http://cudasw.sourceforge.net.
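For reference, the quadratic dynamic-programming recurrence that CUDASW++ accelerates with SIMD is the standard Smith-Waterman local-alignment score; a plain scalar sketch (linear gap penalty, simple match/mismatch scoring rather than a substitution matrix) looks like this:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Plain O(len(a) * len(b)) Smith-Waterman local-alignment score.
    Each cell takes the max of 0, a diagonal match/mismatch step,
    and a gap step from above or from the left."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best
```

The clamp to zero is what makes the alignment local: a high-scoring region is found regardless of flanking mismatches, which is also why the inner loop is so amenable to the SIMD vectorization the abstract describes.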

  11. Designing and Implementing a Distributed System Architecture for the Mars Rover Mission Planning Software (Maestro)

    NASA Technical Reports Server (NTRS)

    Goldgof, Gregory M.

    2005-01-01

    Distributed systems allow scientists from around the world to plan missions concurrently, while being updated on the revisions of their colleagues in real time. However, permitting multiple clients to simultaneously modify a single data repository can quickly lead to data corruption or inconsistent states between users. Since our message broker, the Java Message Service, does not ensure that messages will be received in the order they were published, we must implement our own numbering scheme to guarantee that changes to mission plans are performed in the correct sequence. Furthermore, distributed architectures must ensure that as new users connect to the system, they synchronize with the database without missing any messages or falling into an inconsistent state. Robust systems must also guarantee that all clients will remain synchronized with the database even in the case of multiple client failure, which can occur at any time due to lost network connections or a user's own system instability. The final design for the distributed system behind the Mars rover mission planning software fulfills all of these requirements and upon completion will be deployed to MER at the end of 2005 as well as Phoenix (2007) and MSL (2009).
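The sequence-numbering scheme described above (reordering messages that a broker may deliver out of order) can be sketched as follows; this is an illustrative model, not the Maestro implementation, and `OrderedApplier` is a hypothetical name.

```python
class OrderedApplier:
    """Buffer out-of-order messages and apply them strictly by
    sequence number, so changes to a shared plan are performed in
    the order they were published."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}     # seq -> message, held until its turn
        self.applied = []     # messages applied, in sequence order

    def receive(self, seq, msg):
        self.pending[seq] = msg
        # apply every consecutive message that is now available
        while self.next_seq in self.pending:
            self.applied.append(self.pending.pop(self.next_seq))
            self.next_seq += 1

a = OrderedApplier()
for seq, msg in [(1, "b"), (0, "a"), (3, "d"), (2, "c")]:
    a.receive(seq, msg)
```

A newly connecting client can use the same counter to detect gaps: if its `next_seq` lags the server's, it knows exactly which updates to request before joining, which addresses the synchronization requirement in the abstract.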

  12. Probabilistic simulation of concurrent engineering of propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Technology readiness and the available infrastructure is assessed for timely computational simulation of concurrent engineering for propulsion systems. Results for initial coupled multidisciplinary, fabrication-process, and system simulators are presented including uncertainties inherent in various facets of engineering processes. An approach is outlined for computationally formalizing the concurrent engineering process from cradle-to-grave via discipline dedicated workstations linked with a common database.

  13. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
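The underlying idea of O(1) local detection during a forward move can be sketched as below. Note this sketch keeps an explicit `prev` field for clarity; the paper's Virtual Double Linked List instead derives the check from a stored, computed virtual backpointer, which this code does not reproduce.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.next = None
        self.prev = None

def forward_move_checked(node):
    """O(1) local error detection during a forward move: before
    following node.next, verify that the successor's backpointer
    points back to node. A single corrupted pointer is caught at
    the moment of traversal, with no global scan."""
    nxt = node.next
    if nxt is not None and nxt.prev is not node:
        raise RuntimeError(f"pointer error detected at node {node.key!r}")
    return nxt

# build a 3-node list a <-> b <-> c, then corrupt one backpointer
a, b, c = Node("a"), Node("b"), Node("c")
a.next, b.prev, b.next, c.prev = b, a, c, b
assert forward_move_checked(a) is b   # consistent link passes the check
c.prev = a                            # simulate a memory error
```

Because every move validates only the two adjacent pointers, the check adds constant overhead per traversal step, which is what makes concurrent auditing of a shared structure affordable.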

  14. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  15. Issues in Real-Time Data Management.

    DTIC Science & Technology

    1991-07-01

    2. Multiversion concurrency control [5] interprets write operations as the creation of new versions of the items (in contrast to the update-in-...features of optimistic (deferred writing, delayed selection of serialization order) and multiversion concurrency control. They do not present any..."Multiversion Concurrency Control - Theory and Algorithms". ACM Transactions on Database Systems 8, 4 (December 1983), 465-484. 6. Buchman, A. P
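The multiversion interpretation of writes mentioned in the fragment above (each write creates a new version rather than updating in place) can be sketched minimally; `MVCCStore` is an illustrative toy, not any particular system's implementation.

```python
class MVCCStore:
    """Multiversion sketch: each write appends a (timestamp, value)
    version instead of overwriting; a read at time t sees the latest
    version not newer than t, so readers never block writers."""
    def __init__(self):
        self.versions = {}    # key -> sorted list of (write_ts, value)

    def write(self, key, ts, value):
        self.versions.setdefault(key, []).append((ts, value))
        self.versions[key].sort()

    def read(self, key, ts):
        visible = [v for wts, v in self.versions.get(key, []) if wts <= ts]
        return visible[-1] if visible else None

s = MVCCStore()
s.write("x", 1, "old")
s.write("x", 5, "new")
```

A transaction with read timestamp 3 still sees `"old"` even after the later write, which is exactly why multiversion schemes can grant reads unconditionally where update-in-place schemes must block.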

  16. Comparing host and target environments for distributed Ada programs

    NASA Technical Reports Server (NTRS)

    Paulk, Mark C.

    1986-01-01

    The Ada programming language provides a means of specifying logical concurrency by using multitasking. Extending the Ada multitasking concurrency mechanism into a physically concurrent distributed environment which imposes its own requirements can lead to incompatibilities. These problems are discussed. Using distributed Ada for a target system may be appropriate, but when using the Ada language in a host environment, a multiprocessing model may be more suitable than retargeting an Ada compiler for the distributed environment. The tradeoffs between multitasking on distributed targets and multiprocessing on distributed hosts are discussed. Comparisons of the multitasking and multiprocessing models indicate different areas of application.

  17. Barista: A Framework for Concurrent Speech Processing by USC-SAIL

    PubMed Central

    Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G.; Narayanan, Shrikanth S.

    2016-01-01

    We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0. PMID:27610047

  18. Barista: A Framework for Concurrent Speech Processing by USC-SAIL.

    PubMed

    Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G; Narayanan, Shrikanth S

    2014-05-01

    We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0.

  19. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
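The source association step named above can be illustrated with a naive nearest-neighbor crossmatch; this sketch is purely illustrative of the problem (the GWAC pipeline performs it inside MonetDB with an optimized RANGE-JOIN, not in Python), and all names here are hypothetical.

```python
def associate(new_sources, catalog, radius):
    """Naive source association: link each newly extracted source to
    the nearest catalog source within `radius`, or to None when no
    catalog source is close enough. O(N*M) by brute force; the
    pipeline's RANGE-JOIN exists to avoid exactly this cost."""
    links = {}
    for sid, (x, y) in new_sources.items():
        best, best_d2 = None, radius * radius
        for cid, (cx, cy) in catalog.items():
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= best_d2:
                best, best_d2 = cid, d2
        links[sid] = best
    return links

links = associate({"s1": (1.0, 1.0), "s2": (9.0, 9.0)},
                  {"c1": (1.1, 1.0), "c2": (5.0, 5.0)},
                  radius=0.5)
```

Each matched source extends an existing light curve and each unmatched source opens a new one, which is how per-exposure detections accumulate into the long-term catalog.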

  20. A concurrent distributed system for aircraft tactical decision generation

    NASA Technical Reports Server (NTRS)

    Mcmanus, John W.

    1990-01-01

    A research program investigating the use of AI techniques to aid in the development of a tactical decision generator (TDG) for within visual range (WVR) air combat engagements is discussed. The application of AI programming and problem-solving methods in the development and implementation of a concurrent version of the computerized logic for air-to-air warfare simulations (CLAWS) program, a second-generation TDG, is presented. Concurrent computing environments and programming approaches are discussed, and the design and performance of prototype concurrent TDG system (Cube CLAWS) are presented. It is concluded that the Cube CLAWS has provided a useful testbed to evaluate the development of a distributed blackboard system. The project has shown that the complexity of developing specialized software on a distributed, message-passing architecture such as the Hypercube is not overwhelming, and that reasonable speedups and processor efficiency can be achieved by a distributed blackboard system. The project has also highlighted some of the costs of using a distributed approach to designing a blackboard system.

  1. Performance Studies on Distributed Virtual Screening

    PubMed Central

    Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.

    2014-01-01

    Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those structures that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting maximizing the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
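The chunking-with-overhead trade-off discussed above can be sketched as follows; both functions are illustrative, and the Amdahl-style speedup model is an assumption of this sketch, not the paper's measured model.

```python
def make_chunks(items, n_workers):
    """Split a dataset into near-equal chunks, one per worker: since
    individual docking runs are independent, each chunk can be
    processed concurrently on a DCI."""
    n = max(1, min(n_workers, len(items)))
    size, rem = divmod(len(items), n)
    chunks, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

def estimated_speedup(n_chunks, per_chunk_overhead, total_work):
    """Toy model: speedup saturates once the fixed per-chunk overhead
    (submission, staging, structure preparation) dominates the
    shrinking per-chunk compute time."""
    return total_work / (total_work / n_chunks + per_chunk_overhead)

chunks = make_chunks(list(range(10)), 3)
```

Under this model the optimal splitting is the largest chunk count whose marginal gain still exceeds the added overhead, which is the quantity the performance studies estimate empirically.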

  2. KeyWare: an open wireless distributed computing environment

    NASA Astrophysics Data System (ADS)

    Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir

    1995-12-01

    Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWareTM) based on intelligent agents within a multiple client multiple server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline facilitates seamless extensions of LAN-based management tools to include wireless nodes. A set of object oriented tools and methodologies enables direct asynchronous invocation of agent-based services supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.

  3. Concurrent Engineering Working Group White Paper Distributed Collaborative Design: The Next Step in Aerospace Concurrent Engineering

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Chattopadhyay, Debarati; Karpati, Gabriel; McGuire, Melissa; Panek, John; Warfield, Keith; Borden, Chester

    2011-01-01

    As aerospace missions grow larger and more technically complex in the face of ever tighter budgets, it will become increasingly important to use concurrent engineering methods in the development of early conceptual designs because of their ability to facilitate rapid assessments and trades of performance, cost and schedule. To successfully accomplish these complex missions with limited funding, it is essential to effectively leverage the strengths of individuals and teams across government, industry, academia, and international agencies by increased cooperation between organizations. As a result, the existing concurrent engineering teams will need to increasingly engage in distributed collaborative concurrent design. The purpose of this white paper is to identify a near-term vision for the future of distributed collaborative concurrent engineering design for aerospace missions as well as discuss the challenges to achieving that vision. The white paper also documents the advantages of creating a working group to investigate how to engage the expertise of different teams in joint design sessions while enabling organizations to maintain their organizations competitive advantage.

  4. Assessing variability in chemical acute toxicity of unionid mussels: Influence of intra- and inter-laboratory testing, life stage, and species

    USGS Publications Warehouse

    Raimondo, Sandy; Lilavois, Crystal R.; Lee, Larisa; Augspurger, Tom; Wang, Ning; Ingersoll, Christopher G.; Bauer, Candice R.; Hammer, Edward J.; Barron, Mace G.

    2016-01-01

    We developed a toxicity database for unionid mussels to examine the extent of intra- and inter-laboratory variability in acute toxicity tests with mussel larvae (glochidia) and juveniles; the extent of differential sensitivity of the two life stages; and the variation in sensitivity among commonly tested mussels (Lampsilis siliquoidea, Utterbackia imbecillis, Villosa iris), commonly tested cladocerans (Daphnia magna, Ceriodaphnia dubia) and fish (Oncorhynchus mykiss, Pimephales promelas, Lepomis macrochirus). The results of these analyses indicate intra-laboratory variability for median effect concentrations (EC50) averaged about 2 fold for both life stages, while inter-laboratory variability averaged 3.6 fold for juvenile mussels and 6.3 fold for glochidia. The EC50s for juveniles and glochidia were within a factor of 2 of each other for 50% of paired records across chemicals, with juveniles more sensitive than glochidia by more than 2 fold for 33% of the comparisons made between life stages. There was a high concurrence of the sensitivity of commonly tested L. siliquoidea, U. imbecillis, and V. iris to that of other mussels. However, this concurrence decreases as the taxonomic distance of the commonly tested cladocerans and fish to mussels increases. The compiled mussel database and determination of data variability will advance risk assessments by including more robust species sensitivity distributions, interspecies correlation estimates, and availability of taxon-specific empirically derived application factors for risk assessment.

  5. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to markup and annotate images. In such environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, propose and evaluate a solution by using a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services-based protocols for exchanging image data, and a database based versioning scheme for efficient archive of image revision history. Our experiments show promising results of our methods, and our work provides a guideline for building enterprise level high performance medical image management systems.

  6. Multicenter evaluation of signalment and comorbid conditions associated with aortic thrombotic disease in dogs.

    PubMed

    Winter, Randolph L; Budke, Christine M

    2017-08-15

    OBJECTIVE To assess signalment and concurrent disease processes in dogs with aortic thrombotic disease (ATD). DESIGN Retrospective case-control study. ANIMALS Dogs examined at North American veterinary teaching hospitals from 1985 through 2011 with medical records submitted to the Veterinary Medical Database. PROCEDURES Medical records were reviewed to identify dogs with a diagnosis of ATD (case dogs). Five control dogs without a diagnosis of ATD were then identified for every case dog. Data were collected regarding dog age, sex, breed, body weight, and concurrent disease processes. RESULTS ATD was diagnosed in 291 of the 984,973 (0.03%) dogs included in the database. The odds of a dog having ATD did not differ significantly by sex, age, or body weight. Compared with mixed-breed dogs, Shetland Sheepdogs had significantly higher odds of ATD (OR, 2.59). Protein-losing nephropathy (64/291 [22%]) was the most commonly recorded concurrent disease in dogs with ATD. CONCLUSIONS AND CLINICAL RELEVANCE Dogs with ATD did not differ significantly from dogs without ATD in most signalment variables. Contrary to previous reports, cardiac disease was not a common concurrent diagnosis in dogs with ATD.

  7. The Hierarchical Database Decomposition Approach to Database Concurrency Control.

    DTIC Science & Technology

    1984-12-01

    approach, we postulate a model of transaction behavior under two-phase locking as shown in Figure 39(a) and a model of that under multiversion ...transaction put in the block queue until it is reactivated. Under multiversion timestamping, however, the request is always granted. Once the request
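    The contrast the excerpt draws can be sketched in a toy model (simplified and hypothetical, not the report's formal model): under two-phase locking a read that conflicts with a held write lock must wait in the block queue, while under multiversion timestamping the read is always granted from an older committed version.

```python
# Toy contrast: blocking reads under two-phase locking vs. always-granted
# reads under multiversion timestamping (class names are illustrative).

class TwoPhaseLockingItem:
    def __init__(self, value):
        self.value, self.write_locked = value, False

    def read(self):
        if self.write_locked:
            return None          # request denied: caller joins the block queue
        return self.value

class MultiversionItem:
    def __init__(self, value):
        self.versions = [(0, value)]       # (write timestamp, value)

    def write(self, ts, value):
        self.versions.append((ts, value))  # writers create new versions

    def read(self, ts):
        # Always granted: latest version with write timestamp <= reader's timestamp.
        return max(v for v in self.versions if v[0] <= ts)[1]

locked = TwoPhaseLockingItem("x0")
locked.write_locked = True           # a concurrent writer holds the lock
mv = MultiversionItem("x0")
mv.write(5, "x1")                    # a concurrent writer adds a version at ts=5
print(locked.read())                 # None -> the reader must wait
print(mv.read(3))                    # "x0" -> the reader proceeds on the old version
```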

  8. Concurrent risk-reduction surgery in patients with increased lifetime risk for breast and ovarian cancer: an analysis of the National Surgical Quality Improvement Program (NSQIP) database.

    PubMed

    Elmi, Maryam; Azin, Arash; Elnahas, Ahmad; McCready, David R; Cil, Tulin D

    2018-05-14

    Patients with genetic susceptibility to breast and ovarian cancer are eligible for risk-reduction surgery. Surgical morbidity of risk-reduction mastectomy (RRM) with concurrent bilateral salpingo-oophorectomy (BSO) is unknown. Outcomes in these patients were compared to patients undergoing RRM without BSO using a large multi-institutional database. A retrospective cohort analysis was conducted using the American College of Surgeons' National Surgical Quality Improvement Program (NSQIP) 2007-2016 datasets, comparing postoperative morbidity between patients undergoing RRM and patients undergoing RRM with concurrent BSO. Patients with genetic susceptibility to breast/ovarian cancer undergoing risk-reduction surgery were identified. The primary outcome was 30-day postoperative major morbidity. Secondary outcomes included surgical site infections, reoperations, readmissions, length of stay, and venous thromboembolic events. A multivariate analysis was performed to determine predictors of postoperative morbidity and the adjusted effect of concurrent BSO on morbidity. Of the 5470 patients undergoing RRM, 149 (2.7%) underwent concurrent BSO. The overall rates of major morbidity and postoperative infections were 4.5% and 4.6%, respectively. There was no significant difference in the rate of postoperative major morbidity (4.5% vs 4.7%, p = 0.91) or any of the secondary outcomes between patients undergoing RRM without BSO and those undergoing RRM with concurrent BSO. Multivariable analysis showed Body Mass Index (OR 1.05; p < 0.001) and smoking (OR 1.78; p = 0.003) to be the only predictors associated with major morbidity. Neither immediate breast reconstruction (OR 1.02; p = 0.93) nor concurrent BSO (OR 0.94; p = 0.89) was associated with increased postoperative major morbidity. This study demonstrated that RRM with concurrent BSO was not associated with significant additional morbidity when compared to RRM without BSO.
Therefore, this joint approach may be considered for select patients at risk for both breast and ovarian cancer.

  9. Substantial increase in concurrent droughts and heatwaves in the United States

    PubMed Central

    Mazdiyasni, Omid; AghaKouchak, Amir

    2015-01-01

    A combination of climate events (e.g., low precipitation and high temperatures) may cause a significant impact on the ecosystem and society, although individual events involved may not be severe extremes themselves. Analyzing historical changes in concurrent climate extremes is critical to preparing for and mitigating the negative effects of climatic change and variability. This study focuses on the changes in concurrences of heatwaves and meteorological droughts from 1960 to 2010. Despite an apparent hiatus in rising temperature and no significant trend in droughts, we show a substantial increase in concurrent droughts and heatwaves across most parts of the United States, and a statistically significant shift in the distribution of concurrent extremes. Although commonly used trend analysis methods do not show any trend in concurrent droughts and heatwaves, a unique statistical approach discussed in this study exhibits a statistically significant change in the distribution of the data. PMID:26324927

  10. Substantial increase in concurrent droughts and heatwaves in the United States.

    PubMed

    Mazdiyasni, Omid; AghaKouchak, Amir

    2015-09-15

    A combination of climate events (e.g., low precipitation and high temperatures) may cause a significant impact on the ecosystem and society, although individual events involved may not be severe extremes themselves. Analyzing historical changes in concurrent climate extremes is critical to preparing for and mitigating the negative effects of climatic change and variability. This study focuses on the changes in concurrences of heatwaves and meteorological droughts from 1960 to 2010. Despite an apparent hiatus in rising temperature and no significant trend in droughts, we show a substantial increase in concurrent droughts and heatwaves across most parts of the United States, and a statistically significant shift in the distribution of concurrent extremes. Although commonly used trend analysis methods do not show any trend in concurrent droughts and heatwaves, a unique statistical approach discussed in this study exhibits a statistically significant change in the distribution of the data.

  11. A privacy preserving protocol for tracking participants in phase I clinical trials.

    PubMed

    El Emam, Khaled; Farah, Hanna; Samet, Saeed; Essex, Aleksander; Jonker, Elizabeth; Kantarcioglu, Murat; Earle, Craig C

    2015-10-01

    Some phase 1 clinical trials offer strong financial incentives for healthy individuals to participate in their studies. There is evidence that some individuals enroll in multiple trials concurrently. This creates safety risks and introduces data quality problems into the trials. Our objective was to construct a privacy preserving protocol to track phase 1 participants to detect concurrent enrollment. A protocol using secure probabilistic querying against a database of trial participants that allows for screening during telephone interviews and on-site enrollment was developed. The match variables consisted of demographic information. The accuracy (sensitivity, precision, and negative predictive value) of the matching and its computational performance in seconds were measured under simulated environments. Accuracy was also compared to non-secure matching methods. The protocol's performance scales linearly with the database size. At the largest database size of 20,000 participants, a query takes under 20 s on a 64-core machine. Sensitivity, precision, and negative predictive value of the queries were consistently at or above 0.9, and were very similar to non-secure versions of the protocol. The protocol provides a reasonable solution to the concurrent enrollment problems in phase 1 clinical trials, and is able to ensure that personal information about participants is kept secure. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
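    The general idea of matching demographic variables without exposing raw values can be sketched with a simplified keyed-hash stand-in (the paper's actual protocol uses secure probabilistic querying, which is cryptographically stronger; the field names and shared key below are hypothetical):

```python
import hashlib
import hmac

# Each site keys its demographic match variables with a shared secret (HMAC),
# so the central registry compares digests without ever seeing raw identifiers.
# This is a toy stand-in, not the paper's protocol.
SECRET_KEY = b"shared-trial-network-key"   # hypothetical pre-shared key

def protect(record):
    """HMAC each match variable; only keyed digests leave the site."""
    return {field: hmac.new(SECRET_KEY, value.lower().encode(),
                            hashlib.sha256).hexdigest()
            for field, value in record.items()}

def match_score(a, b):
    """Fraction of protected match variables that agree between two records."""
    shared = set(a) & set(b)
    return sum(a[f] == b[f] for f in shared) / len(shared)

enrolled = protect({"initials": "JD", "birth_year": "1985", "sex": "M"})
candidate = protect({"initials": "jd", "birth_year": "1985", "sex": "M"})
print(match_score(enrolled, candidate))   # 1.0 -> flag possible concurrent enrollment
```

    Normalizing case before hashing gives a small amount of the "probabilistic" tolerance the paper describes; a real deployment would add phonetic encodings and field weighting.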

  12. Trends of Concurrent Ankle Arthroscopy at the Time of Operative Treatment of Ankle Fracture: A National Database Review.

    PubMed

    Ackermann, Jakob; Fraser, Ethan J; Murawski, Christopher D; Desai, Payal; Vig, Khushdeep; Kennedy, John G

    2016-04-01

    The purpose of this study was to report trends associated with concurrent ankle arthroscopy at the time of operative treatment of ankle fracture. The current procedural terminology (CPT) billing codes were used to search the PearlDiver Patient Record Database and identify all patients who were treated for acute ankle fracture in the United States. The Medicare Standard Analytic Files were searchable between 2005 and 2011 and the United Healthcare Orthopedic Dataset from 2007 to 2011. Annual trends were expressed only between 2007 and 2011, as this was the common time period between both databases. Demographic factors were identified for all procedures, as well as costs using the Medicare data set. In total, 32,307 patients underwent open reduction internal fixation (ORIF) of an ankle fracture, of whom 313 (1.0%) had an ankle arthroscopy performed simultaneously. Of those 313 cases, 70 (22.4%) patients received microfracture treatment. Between 2005 and 2011, 85,203 patients were treated for an ankle fracture, whether via ORIF or closed treatment. Of these, a total of 566 patients underwent arthroscopic treatment within 7 years. The prevalence of arthroscopy after ankle fracture decreased significantly, by 45%, from 2007 to 2011 (P < .0001). When ORIF and microfracture were performed concurrently, the total average charge for both procedures dropped to $4253.00 and the average reimbursement to $818.00, compared with approximately $4964.00 and $1069.00, respectively, when they were performed sequentially. Despite good evidence in favor of arthroscopy at the time of ankle fracture treatment, it appears that only a small proportion of surgeons in the United States perform these procedures concurrently. Therapeutic, Level IV: Retrospective. © 2015 The Author(s).

  13. Actors: A Model of Concurrent Computation in Distributed Systems.

    DTIC Science & Technology

    1985-06-01

    Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is... Actors: A Model of Concurrent Computation in Distributed Systems, Gul A. Agha, MIT Artificial Intelligence Laboratory. This document has been approved for public release and sale.

  14. Improving generalized inverted index lock wait times

    NASA Astrophysics Data System (ADS)

    Borodin, A.; Mirvoda, S.; Porshnev, S.; Ponomareva, O.

    2018-01-01

    Concurrent operations on tree-like data structures are a cornerstone of any database system. They are intended to improve read/write performance and are usually implemented via some form of locking. Deadlock-free methods of concurrency control are known as tree locking protocols. These protocols provide basic operations (verbs) and algorithms (orders of operation invocation) for applying them to any tree-like data structure. The algorithms operate on data managed by a storage engine, and storage engines differ greatly among RDBMS implementations. In this paper, we discuss a tree locking protocol implementation for the Generalized Inverted Index (GIN) applied to the multiversion concurrency control (MVCC) storage engine inside the PostgreSQL RDBMS. We then introduce improvements to the locking protocol and provide usage statistics from evaluating our improvement in a very-high-load environment at one of the world's largest IT companies.
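    The basic discipline behind deadlock-free tree locking protocols can be sketched as lock coupling ("crabbing"): a descent holds the parent's lock until the child's lock is acquired, so concurrent operations never observe a half-updated edge and always acquire locks top-down, which rules out lock-order cycles. This is a generic illustration, not GIN's actual protocol.

```python
import threading

# Minimal lock-coupling sketch: hold the parent's lock until the child's lock
# is taken, then release the parent. All descents acquire locks root-to-leaf,
# so no two descents can wait on each other in a cycle (deadlock freedom).

class Node:
    def __init__(self, key, payload=None):
        self.key, self.payload = key, payload
        self.children = {}                 # key -> Node
        self.lock = threading.Lock()

def descend(root, path):
    """Follow `path` from root to a node, coupling locks parent-to-child."""
    node = root
    node.lock.acquire()
    for key in path:
        child = node.children[key]
        child.lock.acquire()               # take the child's lock first...
        node.lock.release()                # ...then release the parent's
        node = child
    node.lock.release()
    return node

root = Node("root")
root.children["a"] = Node("a")
root.children["a"].children["b"] = Node("b", payload="posting list")
print(descend(root, ["a", "b"]).payload)
```

    In a real index the per-node lock would protect page splits and posting-list updates; MVCC then lets readers skip locking entirely for committed versions.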

  15. Studying Venus using a GIS database

    NASA Technical Reports Server (NTRS)

    Price, Maribeth; Suppe, John

    1993-01-01

    A Geographic Information System (GIS) can significantly enhance geological studies on Venus because it facilitates concurrent analysis of many sources of data, as demonstrated by our work on topographic and deformation characteristics of tesserae. We are creating a database of structures referenced to real-world coordinates to encourage the archival of Venusian studies in digital format and to foster quantitative analysis of many combinations of data. Contributions to this database from all aspects of Venusian science are welcome.

  16. Does Sensitivity to Magnitude Depend on the Temporal Distribution of Reinforcement?

    ERIC Educational Resources Information Center

    Grace, Randolph C.; Bragason, Orn

    2005-01-01

    Our research addressed the question of whether sensitivity to relative reinforcer magnitude in concurrent chains depends on the distribution of reinforcer delays when the terminal-link schedules are equal. In Experiment 1, 12 pigeons responded in a two-component procedure. In both components, the initial links were concurrent variable-interval 40…

  17. Enhancing navigation in biomedical databases by community voting and database-driven text classification

    PubMed Central

    Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph

    2009-01-01

    Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. 
Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well with concurrent change events, and can be adapted to add text classification capability to other biomedical databases. The system can be accessed at . PMID:19799796
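    The core mechanism, bagged classifiers whose vote fraction doubles as a class probability estimate for the heat map, can be sketched with a toy example (the data, the single numeric feature, and the threshold "stumps" below are hypothetical simplifications of the paper's bagged decision trees over text features):

```python
import random
from statistics import mean

# Toy bagging sketch: an ensemble of bootstrap-trained threshold classifiers
# whose vote fraction approximates P(class 1 | x), usable as a confidence
# value for a retrieval heat map. Not the paper's actual feature set or model.

def train_stump(sample):
    """'Train' a threshold split on (score, label) pairs: midpoint of class means."""
    pos = [s for s, y in sample if y == 1]
    neg = [s for s, y in sample if y == 0]
    if not pos or not neg:
        return 0.5            # degenerate bootstrap sample: fall back to midpoint
    return (mean(pos) + mean(neg)) / 2

def bagged_probability(data, x, n_estimators=25, rng=random.Random(0)):
    """Fraction of bootstrap-trained stumps voting 'relevant' for score x."""
    votes = 0
    for _ in range(n_estimators):
        sample = [rng.choice(data) for _ in data]     # bootstrap resample
        votes += x >= train_stump(sample)
    return votes / n_estimators

# score: e.g. a cancer-relevance feature of an abstract; label 1 = related
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
print(bagged_probability(data, 0.85))   # near 1 -> confident 'related' cell
print(bagged_probability(data, 0.15))   # near 0 -> confident 'not related'
```

    Community votes would enter this picture as label corrections on `data`, after which the database-driven process retrains the ensemble over all entries.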

  18. Concurrent chemo-radiotherapy with S-1 as an alternative therapy for elderly Chinese patients with non-metastatic esophageal squamous cancer: evidence based on a systematic review and meta-analysis.

    PubMed

    Song, Guo-Min; Tian, Xu; Liu, Xiao-Ling; Chen, Hui; Zhou, Jian-Guo; Bian, Wei; Chen, Wei-Qing

    2017-06-06

    This systematic review and meta-analysis aims to systematically assess the effects of concurrent chemo-radiotherapy (CRT) compared with radiotherapy (RT) alone for elderly Chinese patients with non-metastatic esophageal squamous cancer. We searched the PubMed, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL), China Biomedical Literature Database (CBM), and China National Knowledge Infrastructure (CNKI) databases. We retrieved randomized controlled trials on concurrent CRT with Gimeracil and Oteracil Potassium (S-1) compared with RT alone for elderly Chinese patients with non-metastatic esophageal squamous cancer performed until August 2016. Eight eligible studies involving 536 patients were subjected to meta-analysis. As a response rate measure, a relative risk (RR) of 1.37 [95% confidence intervals (CIs): 1.24, 1.53; P = 0.00], which reached statistical significance, was estimated when concurrent CRT with S-1 was performed compared with RT alone. Sensitivity analysis on response rate confirmed the robustness of the pooled result. The RR values of 1.44 (95% CIs: 1.22, 1.70; P = 0.00) and 1.77 (95% CIs: 1.26, 2.48; P = 0.00) estimated for 1- and 2-year survival rate indices, respectively, were also statistically significant. The incidence of adverse events was similar in both groups. This review concluded that concurrent CRT with S-1 can improve the efficacy and prolong the survival period of elderly Chinese patients with non-metastatic esophageal squamous cancer and does not significantly increase the acute adverse effects of RT alone.

  19. The Application of Security Concepts to the Personnel Database for the Indonesian Navy.

    DTIC Science & Technology

    1983-09-01

    Postgraduate School, Monterey, California, June 1982. Since 1977, the Indonesian Navy Data Center (DISPULAHTAL) has collected and processed personnel data to... personnel data processing in the Indonesian Navy. ... II. THE PRESENT PERSONNEL DATABASE SYSTEM. The present database... LEVEL; USER PROCESSING: CONCURRENT MULTI-USER/LEVEL ... users; S ... secret; C ... classified; U ... unclassified

  20. Towards PCC for Concurrent and Distributed Systems (Work in Progress)

    NASA Technical Reports Server (NTRS)

    Henriksen, Anders S.; Filinski, Andrzej

    2009-01-01

    We outline some conceptual challenges in extending the PCC paradigm to a concurrent and distributed setting, and sketch a generalized notion of module correctness based on viewing communication contracts as economic games. The model supports compositional reasoning about modular systems and is meant to apply not only to certification of executable code, but also of organizational workflows.

  1. A Comparison of the Functional Distribution of Language in Bilingual Classrooms Following Language Separation vs. Concurrent Instructional Approaches.

    ERIC Educational Resources Information Center

    Milk, Robert D.

    This study analyzes how two bilingual classroom language distribution approaches affect classroom language use patterns. The two strategies, separate instruction in the two languages vs. the new concurrent language usage approach (NCA) allowing use of both languages with strict guidelines for language alternation, are observed on videotapes of a…

  2. CHANGING ATTITUDES ABOUT CONCURRENCY AMONG YOUNG AFRICAN AMERICANS: RESULTS OF A RADIO CAMPAIGN

    PubMed Central

    Adimora, Adaora A.; Schoenbach, Victor J.; Cates, Joan R.; Cope, Anna B.; Ramirez, Catalina; Powell, Wizdom; Agans, Robert P.

    2018-01-01

    We created and evaluated an 8-month campaign of provocative radio ads to change attitudes about concurrent (overlapping) sexual partnerships among young African Americans. Using focus groups, vignette-based items, and factor analysis, we created a concurrency attitude scale and compared its score distributions in independent samples of African Americans, ages 18-34 years, interviewed by telephone before (n=678) and after (n=479) the campaign. Pre-and post-campaign samples reflected similar response rates (pre: 32.6%; post: 31.8%) and distributions of personal characteristics. Reported exposure to concurrency messages was greater after the campaign (pre: 6.3%, post: 30.9%), and mean scores became less accepting of concurrency (pre: 3.40 (95% confidence interval: 3.23, 3.57); post: 2.62 (2.46, 2.78)). Score differences were not a function of differences in composition of the two samples (adjusted means: pre: 3.37 (3.21, 3.53); post: 2.62 (2.47, 2.76)). Findings demonstrate that a carefully targeted, intensive mass media campaign can change attitudes about concurrency, which should facilitate behavior change. PMID:28825864

  3. Monogamy relations of concurrence for any dimensional quantum systems

    NASA Astrophysics Data System (ADS)

    Zhu, Xue-Na; Li-Jost, Xianqing; Fei, Shao-Ming

    2017-11-01

    We study monogamy relations for arbitrary dimensional multipartite systems. Monogamy relations based on concurrence and concurrence of assistance for any m_1 ⊗ m_2 ⊗ ⋯ ⊗ m_N quantum states are derived, which give rise to restrictions on the entanglement distributions among the subsystems. In addition, we give a lower bound of concurrence for four-partite mixed states. The approach can be readily generalized to arbitrary multipartite systems.

  4. Assessing the validity of sales self-efficacy: a cautionary tale.

    PubMed

    Gupta, Nina; Ganster, Daniel C; Kepes, Sven

    2013-07-01

    We developed a focused, context-specific measure of sales self-efficacy and assessed its incremental validity against the broad Big 5 personality traits with department store salespersons, using (a) both a concurrent and a predictive design and (b) both objective sales measures and supervisory ratings of performance. We found that in the concurrent study, sales self-efficacy predicted objective and subjective measures of job performance more than did the Big 5 measures. Significant differences between the predictability of subjective and objective measures of performance were not observed. Predictive validity coefficients were generally lower than concurrent validity coefficients. The results suggest that there are different dynamics operating in concurrent and predictive designs and between broad and contextualized measures; they highlight the importance of distinguishing between these designs and measures in meta-analyses. The results also point to the value of focused, context-specific personality predictors in selection research. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  5. Design, Development and Utilization Perspectives on Database Management Systems

    ERIC Educational Resources Information Center

    Shneiderman, Ben

    1977-01-01

    This paper reviews the historical development of integrated data base management systems and examines competing approaches. Topics include management and utilization, implementation and design, query languages, security, integrity, privacy and concurrency. (Author/KP)

  6. Generalized monogamy relations of concurrence for N -qubit systems

    NASA Astrophysics Data System (ADS)

    Zhu, Xue-Na; Fei, Shao-Ming

    2015-12-01

    We present a different kind of monogamous relations based on concurrence and concurrence of assistance. For N-qubit systems ABC_1…C_{N-2}, the monogamy relations satisfied by the concurrence of N-qubit pure states under the partition AB and C_1…C_{N-2}, as well as under the partition ABC_1 and C_2…C_{N-2}, are established, which gives rise to restrictions on the entanglement distribution and trade-off among the subsystems.
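    For orientation, the three-qubit prototype that such partitioned monogamy relations generalize is the Coffman–Kundu–Wootters (CKW) inequality for the squared concurrence:

```latex
% CKW monogamy inequality for a three-qubit state \rho_{ABC}: the entanglement
% of A with the pair BC bounds the sum of A's pairwise entanglements.
C^{2}_{A|BC} \;\ge\; C^{2}_{AB} + C^{2}_{AC}
```

    The N-qubit relations in the paper play the same role for the coarser partitions AB | C_1…C_{N-2} and ABC_1 | C_2…C_{N-2}.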

  7. Not all nonnormal distributions are created equal: Improved theoretical and measurement precision.

    PubMed

    Joo, Harry; Aguinis, Herman; Bradley, Kyle J

    2017-07-01

    We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
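    The likelihood-comparison idea underlying distribution pitting can be sketched on synthetic data (a simplification with hypothetical data; the paper's method adds falsification-based decision rules across its four distribution categories): fit two candidate families by maximum likelihood and keep the one with the higher log-likelihood.

```python
import math
import random
import statistics

# Toy sketch: pit an exponential-tail candidate against a symmetric (normal)
# candidate on a sample of simulated individual output, via MLE log-likelihoods.

def loglik_normal(xs):
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)          # MLE standard deviation
    return sum(-0.5 * math.log(2 * math.pi * sd ** 2)
               - (x - mu) ** 2 / (2 * sd ** 2) for x in xs)

def loglik_exponential(xs):
    lam = 1 / statistics.fmean(xs)      # MLE rate parameter
    return sum(math.log(lam) - lam * x for x in xs)

rng = random.Random(42)
output = [rng.expovariate(1.0) for _ in range(500)]   # right-skewed "output" data

winner = ("exponential tail" if loglik_exponential(output) > loglik_normal(output)
          else "symmetric")
print(winner)
```

    On genuinely right-skewed data the exponential family wins decisively, mirroring the paper's finding that exponential-tail shapes dominate most samples.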

  8. Language and System Support for Concurrent Programming

    DTIC Science & Technology

    1990-04-01

    language. We give suggestions on how to avoid polling programs, and suggest changes to the rendezvous facilities to eliminate the polling bias. The... concerned with support for concurrent programming provided to the application programmer by operating systems and programming... of concurrent programming has widened from "pure" operating system applications to a multitude of real-time and distributed programs. Since

  9. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, G. H.

    1985-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
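    The serial core of the method of conjugate gradients can be sketched as follows (a minimal illustration; in the paper, each vector and matrix-vector operation is distributed across hypercube processors by an element-based spatial decomposition):

```python
# Conjugate gradients for A x = b with A symmetric positive definite.
# Serial sketch of the kernel the paper parallelizes.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]                      # residual b - A x, with x = 0 initially
    p = r[:]                      # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)   # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # conjugate direction
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # small SPD stiffness-like matrix
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)                          # approx [1/11, 7/11]
```

    The only global communication a distributed version needs is in `dot` (a reduction) and in exchanging boundary values for `matvec`, which is why CG maps so well onto message-passing machines.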

  10. A Concurrent Distributed System for Aircraft Tactical Decision Generation

    NASA Technical Reports Server (NTRS)

    McManus, John W.

    1990-01-01

    A research program investigating the use of artificial intelligence (AI) techniques to aid in the development of a Tactical Decision Generator (TDG) for Within Visual Range (WVR) air combat engagements is discussed. The application of AI programming and problem solving methods in the development and implementation of a concurrent version of the Computerized Logic For Air-to-Air Warfare Simulations (CLAWS) program, a second generation TDG, is presented. Concurrent computing environments and programming approaches are discussed and the design and performance of a prototype concurrent TDG system are presented.

  11. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, B. H.

    1984-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90% for sufficiently large problems.

  12. A Database to Support Ecosystems Services Research in Lakes of the Northeastern United States

    EPA Science Inventory

    Northeastern lakes provide valuable ecosystem services that benefit residents and visitors and are increasingly important for provisioning of recreational opportunities and amenities. Concurrently, however, population growth threatens lakes by, for instance, increasing nutrient...

  13. The compatibility of concurrent high intensity interval training and resistance training for muscular strength and hypertrophy: a systematic review and meta-analysis.

    PubMed

    Sabag, Angelo; Najafi, Abdolrahman; Michael, Scott; Esgin, Tuguy; Halaki, Mark; Hackett, Daniel

    2018-04-16

    The purpose of this systematic review and meta-analysis is to assess the effect of concurrent high intensity interval training (HIIT) and resistance training (RT) on strength and hypertrophy. Five electronic databases were searched using terms related to HIIT, RT, and concurrent training. Effect size (ES), calculated as the standardised difference in means, was used to examine the effect of concurrent HIIT and RT compared to RT alone on muscle strength and hypertrophy. Sub-analyses were performed to assess region-specific strength and hypertrophy, HIIT modality (cycling versus running), and inter-modal rest responses. Compared to RT alone, concurrent HIIT and RT led to similar changes in muscle hypertrophy and upper body strength. Concurrent HIIT and RT resulted in a smaller increase in lower body strength compared to RT alone (ES = -0.248, p = 0.049). Sub-analyses showed a trend for lower body strength to be negatively affected by cycling HIIT (ES = -0.377, p = 0.074) but not running (ES = -0.176, p = 0.261). The data suggest concurrent HIIT and RT does not negatively impact hypertrophy or upper body strength, and that any possible negative effect on lower body strength may be ameliorated by incorporating running-based HIIT and longer inter-modal rest periods.

  14. Children concurrently wasted and stunted: A meta‐analysis of prevalence data of children 6–59 months from 84 countries

    PubMed Central

    Khara, Tanya; Mwangome, Martha; Ngari, Moses

    2017-01-01

    Abstract Children can be stunted and wasted at the same time. Having both deficits greatly elevates risk of mortality. The analysis aimed to estimate the prevalence and burden of children aged 6–59 months concurrently wasted and stunted. Data from Demographic and Health Survey (DHS) and Multiple Indicator Cluster Survey (MICS) datasets from 84 countries were analysed. Overall prevalence for being wasted, stunted, and concurrently wasted and stunted among children 6 to 59 months was calculated. A pooled prevalence of concurrence was estimated and reported by gender, age, United Nations regions, and contextual categories. Burden was calculated using population figures from the global joint estimates database. The pooled prevalence of concurrence in the 84 countries was 3.0%, 95% CI [2.97, 3.06], ranging from 0% to 8.0%. Nine countries reported a concurrence prevalence greater than 5%. The estimated burden was 5,963,940 children. Prevalence of concurrence was highest in the 12‐ to 24‐month age group, 4.2%, 95% CI [4.1, 4.3], and was significantly higher among boys, 3.54%, 95% CI [3.47, 3.61], compared to girls, 2.46%, 95% CI [2.41, 2.52]. Fragile and conflict‐affected states reported significantly higher concurrence, 3.6%, 95% CI [3.5, 3.6], than those defined as stable, 2.24%, 95% CI [2.18, 2.30]. This analysis represents the first multiple-country estimation of the prevalence and burden of children concurrently wasted and stunted. Given the high risk of mortality associated with concurrence, the findings indicate a need to report on this condition as well as investigate whether these children are being reached through existing programmes. PMID:28944990
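    The arithmetic of a pooled prevalence with a normal-approximation 95% CI can be sketched with hypothetical survey counts (the paper's 84-country analysis uses its own survey weighting, so this is only the basic shape of the calculation):

```python
import math

# Pool per-country counts into an overall prevalence with a Wald-type 95% CI.
# The counts below are invented for illustration.

def pooled_prevalence(surveys):
    """surveys: list of (concurrent_cases, children_surveyed) per country."""
    cases = sum(c for c, n in surveys)
    total = sum(n for c, n in surveys)
    p = cases / total
    se = math.sqrt(p * (1 - p) / total)            # normal-approximation SE
    return p, (p - 1.96 * se, p + 1.96 * se)

p, (ci_low, ci_high) = pooled_prevalence([(300, 10_000), (150, 5_000), (90, 3_000)])
print(round(p * 100, 2))                            # pooled prevalence in percent
```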

  15. Acute kidney injury associated with concomitant vancomycin and piperacillin/tazobactam administration: a systematic review and meta-analysis.

    PubMed

    Chen, Xiao-Yu; Xu, Ri-Xiang; Zhou, Xin; Liu, Yang; Hu, Cheng-Yang; Xie, Xue-Feng

    2018-05-11

    As a tricyclic glycopeptide antibiotic used to treat acute infections, vancomycin (VAN) is often administered with piperacillin/tazobactam (PT) to treat various infections in clinical practice. However, whether the combination of these two drugs, compared to VAN alone, can cause an increased risk of acute kidney injury (AKI) remains controversial. This study aims to identify the correlation between the development of AKI and the combined use of VAN and PT. We conducted a meta-analysis of eight observational cohort studies (a total of 10,727 participants received VAN and PT versus VAN and other β-lactams). PubMed, Chinese Biological Medicine Database (CBM), China National Knowledge Infrastructure (CNKI) Database, Wan Fang Digital Periodicals Database (WFDP), and China Science Citation Database (CSCD) were searched through April 2017 using "vancomycin" and "piperacillin" and "tazobactam" as well as "acute kidney injury" or "acute renal failure" or "AKI" or "ARF" or "nephrotoxicity." Two reviewers extracted the data and assessed the risk of bias. A correlation was found between the development of AKI and concurrent use of VAN and PT compared with concomitant VAN and β-lactams (OR 1.57; 95% CI, 1.13-2.01; I² = 76.4%, p < 0.001). Similar findings were obtained in an analysis of studies comparing concurrent VAN and PT use with concurrent VAN and β-lactam (cefepime) use (OR 1.50; 95% CI, 1.07-1.93; I² = 80.5%, p < 0.001). Exclusion of fair-quality and low-quality articles did not change the results (OR 1.49; 95% CI, 1.06-1.92; I² = 84.1%, p < 0.001). Regarding β-lactam therapy in clinical practice, an elevated risk of AKI due to the combined use of VAN and PT should be considered.

  16. Estimation of flood-frequency characteristics of small urban streams in North Carolina

    USGS Publications Warehouse

    Robbins, J.C.; Pope, B.F.

    1996-01-01

    A statewide study was conducted to develop methods for estimating the magnitude and frequency of floods of small urban streams in North Carolina. This type of information is critical in the design of bridges, culverts and water-control structures, establishment of flood-insurance rates and flood-plain regulation, and for other uses by urban planners and engineers. Concurrent records of rainfall and runoff data collected in small urban basins were used to calibrate rainfall-runoff models. Historic rainfall records were used with the calibrated models to synthesize a long-term record of annual peak discharges. The synthesized record of annual peak discharges was used in a statistical analysis to determine flood-frequency distributions. These frequency distributions were used with distributions from previous investigations to develop a database for 32 small urban basins in the Blue Ridge-Piedmont, Sand Hills, and Coastal Plain hydrologic areas. The study basins ranged in size from 0.04 to 41.0 square miles. Data describing the size and shape of the basin, level of urban development, and climate and rural flood characteristics also were included in the database. Estimation equations were developed by relating flood-frequency characteristics to basin characteristics in a generalized least-squares regression analysis. The most significant basin characteristics are drainage area, impervious area, and rural flood discharge. The model error and prediction errors for the estimating equations were less than those for the national flood-frequency equations previously reported. Resulting equations, which have prediction errors generally less than 40 percent, can be used to estimate flood-peak discharges for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals for small urban basins across the State assuming negligible, sustainable, in-channel detention or basin storage.
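    The regression step above relates log flood-peak discharge to log basin characteristics. As a simplified sketch, ordinary least squares on synthetic log-space data stands in for the report's generalized least-squares procedure; the variable names and coefficient values are hypothetical, not the study's fitted equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32  # mirrors the 32-basin database size

# Hypothetical log-transformed basin characteristics.
log_da = rng.uniform(-3, 4, n)   # log drainage area
log_ia = rng.uniform(0, 4, n)    # log impervious area
log_rq = rng.uniform(2, 8, n)    # log rural flood discharge

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), log_da, log_ia, log_rq])

# Noise-free synthetic "observed" log peak discharges.
true_beta = np.array([1.2, 0.7, 0.3, 0.5])
log_q = X @ true_beta

# OLS fit; exact data, so the coefficients are recovered exactly.
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
```

    A generalized least-squares fit additionally weights basins by the error covariance of their at-site flood estimates, which is why the report's prediction errors differ from a plain OLS fit.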

  17. LIFE CYCLE MANAGEMENT OF MUNICIPAL SOLID WASTE

    EPA Science Inventory

    This is a large, complex project in which a number of different research activities are taking place concurrently to collect data, develop cost and LCI methodologies, construct a database and decision support tool, and conduct case studies with communities to support the life cyc...

  18. Briefer assessment of social network drinking: A test of the Important People Instrument-5 (IP-5).

    PubMed

    Hallgren, Kevin A; Barnett, Nancy P

    2016-12-01

    The Important People instrument (IP; Longabaugh et al., 2010) is one of the most commonly used measures of social network drinking. Although its reliability and validity are well-supported, the length of the instrument may limit its use in many settings. The present study evaluated whether a briefer, 5-person version of the IP (IP-5) adequately reproduces scores from the full IP. College freshmen (N = 1,053) reported their own past-month drinking, alcohol-related consequences, and information about drinking in their close social networks at baseline and 1 year later. From this we derived network members' drinking frequency, percentage of drinkers, and percentage of heavy drinkers, assessed for up to 10 (full IP) or 5 (IP-5) network members. We first modeled the expected concordance between full-IP scores and scores from simulated shorter IP instruments by sampling smaller subsets of network members from full IP data. Then, using quasi-experimental methods, we administered the full IP and IP-5 and compared the 2 instruments' score distributions and concurrent and year-lagged associations with participants' alcohol consumption and consequences. Most of the full-IP variance was reproduced from simulated shorter versions of the IP (ICCs ≥ 0.80). The full IP and IP-5 yielded similar score distributions, concurrent associations with drinking (r = 0.22 to 0.52), and year-lagged associations with drinking. The IP-5 retains most of the information about social network drinking from the full IP. The shorter instrument may be useful in clinical and research settings that require frequent measure administration, yielding greater temporal resolution for monitoring social network drinking. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Patterns of and Motivations for Concurrent Use of Video Games and Substances

    PubMed Central

    Ream, Geoffrey L.; Elliott, Luther C.; Dunlap, Eloise

    2011-01-01

    “Behavioral addictions” share biological mechanisms with substance dependence, and “drug interactions” have been observed between certain substances and self-reinforcing behaviors. This study examines correlates of patterns of and motivations for playing video games while using or feeling the effects of a substance (concurrent use). Data were drawn from a nationally-representative survey of adult Americans who “regularly” or “occasionally” played video games and had played for at least one hour in the past seven days (n = 3,380). Only recent concurrent users’ data were included in analyses (n = 1,196). Independent variables included demographics, substance use frequency and problems, game genre of concurrent use (identified by looking titles up in an industry database), and general game playing variables including problem video game play (PVP), consumer involvement, enjoyment, duration, and frequency of play. Exploratory factor analysis identified the following dimensions underlying patterns of and motivations for concurrent use: pass time or regulate negative emotion, enhance an already enjoyable or positive experience, and use of video games and substances to remediate each other’s undesirable effects. Multivariate regression analyses indicated PVP and hours/day of video game play were associated with most patterns/motivations, as were caffeine, tobacco, alcohol, marijuana, and painkiller use problems. This suggests that concurrent use with some regular situational pattern or effect-seeking motivation is part of the addictive process underlying both PVP and substance dependence. Various demographic, game playing, game genre of concurrent use, and substance use variables were associated with specific motivations/patterns, indicating that all are important in understanding concurrent use. PMID:22073024

  20. Patterns of and motivations for concurrent use of video games and substances.

    PubMed

    Ream, Geoffrey L; Elliott, Luther C; Dunlap, Eloise

    2011-10-01

    "Behavioral addictions" share biological mechanisms with substance dependence, and "drug interactions" have been observed between certain substances and self-reinforcing behaviors. This study examines correlates of patterns of and motivations for playing video games while using or feeling the effects of a substance (concurrent use). Data were drawn from a nationally-representative survey of adult Americans who "regularly" or "occasionally" played video games and had played for at least one hour in the past seven days (n = 3,380). Only recent concurrent users' data were included in analyses (n = 1,196). Independent variables included demographics, substance use frequency and problems, game genre of concurrent use (identified by looking titles up in an industry database), and general game playing variables including problem video game play (PVP), consumer involvement, enjoyment, duration, and frequency of play. Exploratory factor analysis identified the following dimensions underlying patterns of and motivations for concurrent use: pass time or regulate negative emotion, enhance an already enjoyable or positive experience, and use of video games and substances to remediate each other's undesirable effects. Multivariate regression analyses indicated PVP and hours/day of video game play were associated with most patterns/motivations, as were caffeine, tobacco, alcohol, marijuana, and painkiller use problems. This suggests that concurrent use with some regular situational pattern or effect-seeking motivation is part of the addictive process underlying both PVP and substance dependence. Various demographic, game playing, game genre of concurrent use, and substance use variables were associated with specific motivations/patterns, indicating that all are important in understanding concurrent use.

  1. Position Regarding Concurrent Review Under PSD and the Offset Policy

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  2. A PDA-based system for online recording and analysis of concurrent events in complex behavioral processes.

    PubMed

    Held, Jürgen; Manser, Tanja

    2005-02-01

    This article outlines how a Palm- or Newton-based PDA (personal digital assistant) system for online event recording was used to record and analyze concurrent events. We describe the features of this PDA-based system, called the FIT-System (flexible interface technique), and its application to the analysis of concurrent events in complex behavioral processes--in this case, anesthesia work processes. The patented FIT-System has a unique user interface design allowing the user to design an interface template with a pencil and paper or using a transparency film. The template usually consists of a drawing or sketch that includes icons or symbols that depict the observer's representation of the situation to be observed. In this study, the FIT-System allowed us to create a design for fast, intuitive online recording of concurrent events using a set of 41 observation codes. An analysis of concurrent events leads to a description of action density, and our results revealed a characteristic distribution of action density during the administration of anesthesia in the operating room. This distribution indicated the central role of the overlapping operations in the action sequences of medical professionals as they deal with the varying requirements of this complex task. We believe that the FIT-System for online recording of concurrent events in complex behavioral processes has the potential to be useful across a broad spectrum of research areas.

  3. Surprise responses in the human brain demonstrate statistical learning under high concurrent cognitive demand

    NASA Astrophysics Data System (ADS)

    Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett

    2016-06-01

    The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than those in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.

  4. Scale out databases for CERN use cases

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Lanza Garcia, Daniel; Surdy, Kacper

    2015-12-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database.

  5. Evaluation of concurrent priority queue algorithms. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Q.

    1991-02-01

    The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues such as the vertex cover problem and the single source shortest path problem are tested.
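    As a point of reference for the fine-grained schemes evaluated in the report, a minimal coarse-grained concurrent priority queue can be sketched: a binary heap guarded by a single mutex. This is the simple baseline such designs compete with (its one lock is the contention bottleneck that distributed locking avoids); the class and method names are illustrative, not the thesis's implementation.

```python
import heapq
import threading

class ConcurrentPriorityQueue:
    """Coarse-grained thread-safe min-priority queue: a binary heap
    protected by one lock, so every operation serializes on that lock."""

    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()
        self._not_empty = threading.Condition(self._lock)

    def insert(self, priority, item):
        with self._lock:
            heapq.heappush(self._heap, (priority, item))
            self._not_empty.notify()

    def delete_min(self):
        """Block until an item is available, then pop the smallest."""
        with self._not_empty:
            while not self._heap:
                self._not_empty.wait()
            return heapq.heappop(self._heap)
```

    Fine-grained designs like the parallel Fibonacci heap instead associate locks with individual nodes or sub-structures, letting concurrent inserts and delete-mins proceed on disjoint parts of the heap.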

  6. Incidental category learning and cognitive load in a multisensory environment across childhood.

    PubMed

    Broadbent, H J; Osborne, T; Rea, M; Peng, A; Mareschal, D; Kirkham, N Z

    2018-06-01

    Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which a concurrent unisensory or multisensory cognitive load task would interfere with or support multisensory learning remains unclear. This study examined the role of concurrent task modality on incidental category learning in 6- to 10-year-olds. Participants were engaged in a multisensory learning task while also performing either a unisensory (visual or auditory only) or multisensory (audiovisual) concurrent task (CT). We found that engaging in an auditory CT led to poorer performance on incidental category learning compared with an audiovisual or visual CT, across groups. In 6-year-olds, category test performance was at chance in the auditory-only CT condition, suggesting auditory concurrent tasks may interfere with learning in younger children, but the addition of visual information may serve to focus attention. These findings provide novel insight into the use of multisensory concurrent information on incidental learning. Implications for the deployment of multisensory learning tasks within education across development and developmental changes in modality dominance and ability to switch flexibly across modalities are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Request for Concurrence as to Applicability of PSD and NSPS Regulations to Marblehead Lime Company Proposed Lime Plant

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  8. Computer Sciences and Data Systems, volume 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  9. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology to find concurrently material spatial distribution and material anisotropy repartition is proposed for orthotropic, linear and elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by its elasticity tensor invariants by change of frame. A global structural stiffness maximization problem written as a compliance minimization problem is treated, and a volume constraint is applied. The compliance minimization can be put into a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of density and anisotropy distribution of a cantilever beam and a bridge are presented.

  10. 7 CFR 253.1 - General purpose and scope.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Distribution Program and the Food Stamp Program on Indian reservations when such concurrent operation is... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION ADMINISTRATION OF THE FOOD DISTRIBUTION PROGRAM...

  11. 7 CFR 253.1 - General purpose and scope.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Distribution Program and the Food Stamp Program on Indian reservations when such concurrent operation is... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION ADMINISTRATION OF THE FOOD DISTRIBUTION PROGRAM...

  12. Sex ratio, poverty, and concurrent partnerships among men and women in the United States: a multilevel analysis.

    PubMed

    Adimora, Adaora A; Schoenbach, Victor J; Taylor, Eboni M; Khan, Maria R; Schwartz, Robert J; Miller, William C

    2013-11-01

    Social and economic contextual factors may promote concurrent sexual partnerships, which can accelerate population HIV transmission and are more common among African Americans than U.S. Whites. We investigated the relationship between contextual factors and concurrency. We analyzed past 12-month concurrency prevalence in the 2002 National Survey of Family Growth and its contextual database in relation to county sex ratio (among respondent's racial and ethnic group), percentage in poverty (among respondent's racial and ethnic group), and violent crime rate. Analyses examined counties with balanced (0.95-1.05 males/female) or low (<0.9) sex ratios. Concurrency prevalence was greater (odds ratio [OR]; 95% confidence interval [CI]) in counties with low sex ratios (OR, 1.67; 95% CI, 1.17-2.39), more poverty (OR, 1.18; 95% CI, 0.98-1.42 per 10 percentage-point increase), and higher crime rates (OR, 1.04; 95% CI, 1.00-1.09 per 1000 population/year). Notably, 99.5% of Whites and 93.7% of Hispanics, but only 7.85% of Blacks, lived in balanced sex ratio counties; about 5% of Whites, half of Hispanics, and three-fourths of Blacks resided in counties with >20% same-race poverty. The dramatic Black-White differences in contextual factors in the United States and their association with sexual concurrency could contribute to the nation's profound racial disparities in HIV infection. Copyright © 2013 Elsevier Inc. All rights reserved.
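    The odds ratios with confidence intervals reported above come from multilevel models; as a simpler illustration of the underlying quantity, an odds ratio from a 2×2 table with a Wald interval on the log-odds scale can be sketched. The function name and the counts in the usage note are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]]
    (exposed cases/non-cases, unexposed cases/non-cases)
    with a Wald 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper
```

    For instance, 30 of 100 exposed versus 20 of 100 unexposed reporting concurrency gives OR = (30·80)/(70·20) ≈ 1.71; an interval excluding 1 indicates a statistically significant association.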

  13. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.

  14. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    NASA Astrophysics Data System (ADS)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  15. Database Management Systems: A Case Study of Faculty of Open Education

    ERIC Educational Resources Information Center

    Kamisli, Zehra

    2004-01-01

    We live in the information and the microelectronic age, where technological advancements become a major determinant of our lifestyle. Such advances in technology cannot possibly be made or sustained without concurrent advancement in management systems (5). The impact of computer technology on organizations and society is increasing as new…

  16. Developments in the treatment of Chiari type 1 malformations over the past decade

    PubMed Central

    Pyne, Alexandra; Horn, Samantha R.; Poorman, Gregory W.; Janjua, Muhammad B.; Vasquez-Montes, Dennis; Bortz, Cole A.; Segreto, Frank A.; Frangella, Nicholas J.; Siow, Matthew Y.; Sure, Akhila; Zhou, Peter L.; Moon, John Y.; Diebo, Bassel G.; Vira, Shaleen N.

    2018-01-01

    Background Chiari malformations type 1 (CM-1), a developmental anomaly of the posterior fossa, usually presents in adolescence or early adulthood. There are few studies on the national incidence of CM-1, taking into account outcomes based on concurrent diagnoses. To quantify trends in treatment and associated diagnoses, as retrospective review of the Kid’s Inpatient Database (KID) from 2003-2012 was conducted. Methods Patients aged 0–20 with primary diagnosis of CM-1 in the KID database were identified. Demographics and concurrent diagnoses were analyzed using chi-squared and t-tests for categorical and numerical variables, respectively. Trends in diagnosis, treatments, and outcomes were analyzed using analysis of variance (ANOVA). Results Five thousand four hundred and thirty-eight patients were identified in the KID database with a primary diagnosis of CM-1 (10.5 years, 55% female). CM-1 primary diagnoses have increased over time (45 to 96 per 100,000). CM-1 patients had the following concurrent diagnoses: 23.8% syringomyelia/syringobulbia, 11.5% scoliosis, 5.9% hydrocephalus, 2.2% tethered cord syndrome. Eighty-three point four percent of CM-1 patients underwent surgical treatment, and rate of surgical treatment for CM-1 increased from 2003–2012 (66% to 72%, P<0.001) though complication rate decreased (7% to 3%, P<0.001) and mortality rates remained constant. Seventy percent of surgeries involved decompression-only, which increased neurologic complications compared to fusions (P=0.039). Cranial decompressions decreased from 2003–2012 (42.2–30.5%) while spinal decompressions increased (73.1–77.4%). Fusion rates have increased over time (0.45% to 1.8%) and are associated with higher complications than decompression-only (11.9% vs. 4.7%). Seven point four percent of patients experienced at least one peri-operative complication (nervous system, dysphagia, respiratory most common). 
Patients with concurrent hydrocephalus had increased nervous system, respiratory, and urinary complications (P<0.006), and syringomyelia increased the rate of respiratory complications (P=0.037). Conclusions CM-1 diagnoses have increased in the last decade. Despite the decrease in overall complication rates, fusions are becoming more common and are associated with higher peri-operative complication rates. Commonly associated diagnoses, including syringomyelia and hydrocephalus, can dramatically increase complication rates. PMID:29732422

  17. Computer Science Research in Europe.

    DTIC Science & Technology

    1984-08-29

    most attention, multi-databases and their structure, and (3) the dependencies between distributed systems and multi-databases. Newcastle University, UK: Having completed a multi-database system for distributed data management, the University of Newcastle is now working on a real... INRIA: A project called SIRIUS was established in 1977 at the INRIA, addressing the communications requirements of distributed database systems and protocols for checking the

  18. Association Between Hospital Case Volume and the Use of Bronchoscopy and Esophagoscopy During Head and Neck Cancer Diagnostic Evaluation

    PubMed Central

    Sun, Gordon H.; Aliu, Oluseyi; Moloci, Nicholas M.; Mondschein, Joshua K.; Burke, James F.; Hayward, Rodney A.

    2013-01-01

    Background There are no clinical guidelines on best practices for the use of bronchoscopy and esophagoscopy in diagnosing head and neck cancer. This retrospective cohort study examined variation in the use of bronchoscopy and esophagoscopy across hospitals in Michigan. Patients and Methods We identified 17,828 head and neck cancer patients in the 2006–2010 Michigan State Ambulatory Surgery Databases. We used hierarchical, mixed-effect logistic regression to examine whether a hospital’s risk-adjusted rate of concurrent bronchoscopy or esophagoscopy was associated with its case volume (<100, 100–999, or ≥1000 cases/hospital) for those undergoing diagnostic laryngoscopy. Results Of 9,218 patients undergoing diagnostic laryngoscopy, 1,191 (12.9%) received concurrent bronchoscopy and 1,675 (18.2%) underwent concurrent esophagoscopy. The median hospital rate of bronchoscopy was 2.7% (range 0–61.1%), and low-volume (OR 27.1 [95% CI 1.9, 390.7]) and medium-volume (OR 28.1 [95% CI 2.0, 399.0]) hospitals were more likely to perform concurrent bronchoscopy compared to high-volume hospitals. The median hospital rate of esophagoscopy was 5.1% (range 0–47.1%), and low-volume (OR 9.8 [95% CI 1.5, 63.7]) and medium-volume (OR 8.5 [95% CI 1.3, 55.0]) hospitals were significantly more likely to perform concurrent esophagoscopy relative to high-volume hospitals. Conclusions Head and neck cancer patients undergoing diagnostic laryngoscopy are much more likely to undergo concurrent bronchoscopy and esophagoscopy at low- and medium-volume hospitals than at high-volume hospitals. Whether this represents over-use of concurrent procedures or appropriate care that leads to earlier diagnosis and better outcomes merits further investigation. PMID:24114146

  19. Software engineering aspects of real-time programming concepts

    NASA Astrophysics Data System (ADS)

    Schoitsch, Erwin

    1986-08-01

    Real-time programming is a discipline of great importance not only in process control, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process-synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other. The second part deals with structuring and modularization of technical processes to build reliable and maintainable real time systems. Software-quality and software engineering aspects are considered throughout the paper.

  20. Distribution Grid Integration Unit Cost Database | Solar Research | NREL

    Science.gov Websites

NREL's Distribution Grid Integration Unit Cost Database contains unit cost information for different components that may be used to integrate distributed PV onto the grid. It includes information from the California utility unit cost guides on traditional…

  1. Inferring rupture characteristics using new databases for 3D slab geometry and earthquake rupture models

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Plescia, S. M.; Moore, G.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center has recently published a database of finite fault models for globally distributed M7.5+ earthquakes since 1990. Concurrently, we have also compiled a database of three-dimensional slab geometry models for all global subduction zones, to update and replace Slab1.0. Here, we use these two new and valuable resources to infer characteristics of earthquake rupture and propagation in subduction zones, where the vast majority of large-to-great-sized earthquakes occur. For example, we can test questions that are fairly prevalent in seismological literature. Do large ruptures preferentially occur where subduction zones are flat (e.g., Bletery et al., 2016)? Can `flatness' be mapped to understand and quantify earthquake potential? Do the ends of ruptures correlate with significant changes in slab geometry, and/or bathymetric features entering the subduction zone? Do local subduction zone geometry changes spatially correlate with areas of low slip in rupture models (e.g., Moreno et al., 2012)? Is there a correlation between average seismogenic zone dip, and/or seismogenic zone width, and earthquake size? (e.g., Hayes et al., 2012; Heuret et al., 2011). These issues are fundamental to the understanding of earthquake rupture dynamics and subduction zone seismogenesis, and yet many are poorly understood or are still debated in scientific literature. We attempt to address these questions and similar issues in this presentation, and show how these models can be used to improve our understanding of earthquake hazard in subduction zones.

  2. Validity and reliability of Internet-based physiotherapy assessment for musculoskeletal disorders: a systematic review.

    PubMed

    Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard

    2017-04-01

Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, covering articles published between January 2000 and May 2015. Studies that examined the validity, inter- and intra-rater reliabilities of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of the reliability and validity studies, respectively. Results A total of 898 hits were retrieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored concurrent validity together with inter- and intra-rater reliabilities, while two studies examined only concurrent validity. The reviewed studies were of moderate to good methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.

  3. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

One of the biggest problems facing NASA today is providing scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing are especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.

  4. Maintaining consistency in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications using virtual synchrony to interoperate with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.

  5. Decision Support Systems for Research and Management in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Rodriquez, Luis F.

    2004-01-01

Decision support systems (DSS) have been implemented in many applications, including strategic planning for battlefield scenarios, corporate decision making for business planning, production planning and control systems, and recommendation generators like those on Amazon.com®. Such tools are reviewed as models for developing a similar tool for NASA's ALS Program. DSS are considered concurrently with the development of the OPIS system, a database designed for chronicling research and development in ALS. By utilizing the OPIS database, it is anticipated that decision support can be provided to increase the quality of decisions by ALS managers and researchers.

  6. BESIII Physical Analysis on Hadoop Platform

    NASA Astrophysics Data System (ADS)

    Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing

    2014-06-01

In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. Jobs running on a traditional cluster with a data-to-computing structure have to read large volumes of data via the network to the computing nodes for analysis, making I/O latency a bottleneck for the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability and high fault tolerance, and it can benefit us in dealing with Big Data. This paper introduces the idea of using the MapReduce model for BESIII physics analysis and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis but also reduces the cost of system building. Moreover, this paper establishes an event pre-selection system based on the event-level metadata (TAGs) database to optimize the data analysis procedure.
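The TAG-based pre-selection described above can be sketched in MapReduce terms: a map step filters events using lightweight TAG metadata, and a reduce step groups the surviving event IDs. The records, field names, and the cut below are hypothetical illustrations, not the BESIII schema.

```python
# Minimal MapReduce-style sketch of event pre-selection on TAG metadata.
from collections import defaultdict

# Hypothetical TAG records: (event_id, channel, n_charged_tracks)
TAGS = [
    (1, "J/psi", 4), (2, "J/psi", 2), (3, "psi(2S)", 4),
    (4, "J/psi", 4), (5, "psi(2S)", 1),
]

def map_phase(tag):
    """Emit (channel, event_id) only for events passing the pre-selection cut."""
    event_id, channel, n_tracks = tag
    if n_tracks >= 4:                      # example cut on a TAG-level quantity
        yield (channel, event_id)

def reduce_phase(pairs):
    """Group selected event IDs by channel, as a reducer would."""
    selected = defaultdict(list)
    for channel, event_id in pairs:
        selected[channel].append(event_id)
    return dict(selected)

pairs = [kv for tag in TAGS for kv in map_phase(tag)]
print(reduce_phase(pairs))  # {'J/psi': [1, 4], 'psi(2S)': [3]}
```

Because the cut runs on compact TAGs rather than full event records, only events surviving the map step need their full data read, which is the latency saving the abstract describes.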

  7. Work Coordination Engine

    NASA Technical Reports Server (NTRS)

    Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed

    2009-01-01

The Work Coordination Engine (WCE) is a Java application integrated into the Service Management Database (SMDB), which coordinates the dispatching and monitoring of a work order system. WCE de-queues work orders from SMDB and orchestrates the dispatching of work to a registered set of software worker applications distributed over a set of local, or remote, heterogeneous computing systems. WCE monitors the execution of work orders once dispatched, and accepts the results of the work order by storing to the SMDB persistent store. The software leverages a relational database, Java Messaging System (JMS), and Web Services using Simple Object Access Protocol (SOAP) technologies to implement an efficient work-order dispatching mechanism capable of coordinating the work of multiple computer servers on various platforms working concurrently on different, or similar, types of data or algorithmic processing. Existing (legacy) applications can be wrapped with a proxy object so that no changes to the applications are needed to make them available for integration into the work order system as "workers." WCE automatically reschedules work orders that fail to be executed by one server to a different server if available. From initiation to completion, the system manages the execution state of work orders and workers via a well-defined set of events, states, and actions. It allows for configurable work-order execution timeouts by work-order type. This innovation eliminates a current processing bottleneck by providing a highly scalable, distributed work-order system used to quickly generate products needed by the Deep Space Network (DSN) to support space flight operations. WCE is driven by asynchronous messages delivered via JMS indicating the availability of new work or workers. It runs completely unattended in support of the lights-out operations concept in the DSN.
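The automatic-rescheduling behaviour described above can be sketched as follows. This is a minimal illustration with hypothetical class and server names, not the WCE API: a dispatcher hands a work order to a worker and, if that worker fails, retries the order on the next registered worker.

```python
# Minimal sketch of dispatch-with-reschedule, as in a work-order system.
class Worker:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def execute(self, order):
        if not self.healthy:
            raise RuntimeError(f"{self.name} failed on {order}")
        return f"{order} done by {self.name}"

def dispatch(order, workers):
    """Try each registered worker in turn; return the first successful result."""
    for worker in workers:
        try:
            return worker.execute(order)
        except RuntimeError:
            continue  # reschedule the order on the next available worker
    raise RuntimeError(f"no worker could complete {order}")

workers = [Worker("server-a", healthy=False), Worker("server-b")]
print(dispatch("WO-42", workers))  # WO-42 done by server-b
```

In the real system the "try the next worker" decision would be driven by asynchronous JMS events and persisted state transitions in SMDB rather than an in-process loop.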

  8. Visual attention and emotional memory: recall of aversive pictures is partially mediated by concurrent task performance.

    PubMed

    Pottage, Claire L; Schaefer, Alexandre

    2012-02-01

    The emotional enhancement of memory is often thought to be determined by attention. However, recent evidence using divided attention paradigms suggests that attention does not play a significant role in the formation of memories for aversive pictures. We report a study that investigated this question using a paradigm in which participants had to encode lists of randomly intermixed negative and neutral pictures under conditions of full attention and divided attention followed by a free recall test. Attention was divided by a highly demanding concurrent task tapping visual processing resources. Results showed that the advantage in recall for aversive pictures was still present in the DA condition. However, mediation analyses also revealed that concurrent task performance significantly mediated the emotional enhancement of memory under divided attention. This finding suggests that visual attentional processes play a significant role in the formation of emotional memories. PsycINFO Database Record (c) 2012 APA, all rights reserved

  9. Height-diameter allometry of tropical forest trees

    Treesearch

    T.R. Feldpausch; L. Banin; O.L. Phillips; T.R. Baker; S.L. Lewis; C.A. Quesada; K. Affum-Baffoe; E.J.M.M. Arets; N.J. Berry; M. Bird; E.S. Brondizio; P de Camargo; J. Chave; G. Djagbletey; T.F. Domingues; M. Drescher; P.M. Fearnside; M.B. Franca; N.M. Fyllas; G. Lopez-Gonzalez; A. Hladik; N. Higuchi; M.O. Hunter; Y. Iida; K.A. Salim; A.R. Kassim; M. Keller; J. Kemp; D.A. King; J.C. Lovett; B.S. Marimon; B.H. Marimon-Junior; E. Lenza; A.R. Marshall; D.J. Metcalfe; E.T.A. Mitchard; E.F. Moran; B.W. Nelson; R. Nilus; E.M. Nogueira; M. Palace; S. Patiño; K.S.-H. Peh; M.T. Raventos; J.M. Reitsma; G. Saiz; F. Schrodt; B. Sonke; H.E. Taedoumg; S. Tan; L. White; H. Woll; J. Lloyd

    2011-01-01

    Tropical tree height-diameter (H:D) relationships may vary by forest type and region making large-scale estimates of above-ground biomass subject to bias if they ignore these differences in stem allometry. We have therefore developed a new global tropical forest database consisting of 39 955 concurrent H and D measurements encompassing 283 sites in 22 tropical...

  10. Distribution of G concurrence of random pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2006-12-15

The average entanglement of random pure states of an N×N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N→∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N×K bipartite system, in the joint limit N,K→∞ with K/N fixed.
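For reference, the G concurrence mentioned above is commonly defined as the rescaled Nth root of the determinant of the reduced state (a standard definition consistent with the abstract, not quoted from the paper itself):

```latex
% G concurrence: the geometric mean of the Schmidt coefficients, rescaled by N
G(\rho) \;=\; N \,\bigl(\det \rho\bigr)^{1/N}
% In the limit N \to \infty, P(G) concentrates at G_{*} = 1/e.
```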

  11. Forensic DNA databases in Western Balkan region: retrospectives, perspectives, and initiatives

    PubMed Central

    Marjanović, Damir; Konjhodžić, Rijad; Butorac, Sara Sanela; Drobnič, Katja; Merkaš, Siniša; Lauc, Gordan; Primorac, Damir; Anđelinović, Šimun; Milosavljević, Mladen; Karan, Željko; Vidović, Stojko; Stojković, Oliver; Panić, Bojana; Vučetić Dragović, Anđelka; Kovačević, Sandra; Jakovski, Zlatko; Asplen, Chris; Primorac, Dragan

    2011-01-01

    The European Network of Forensic Science Institutes (ENFSI) recommended the establishment of forensic DNA databases and specific implementation and management legislations for all EU/ENFSI members. Therefore, forensic institutions from Bosnia and Herzegovina, Serbia, Montenegro, and Macedonia launched a wide set of activities to support these recommendations. To assess the current state, a regional expert team completed detailed screening and investigation of the existing forensic DNA data repositories and associated legislation in these countries. The scope also included relevant concurrent projects and a wide spectrum of different activities in relation to forensics DNA use. The state of forensic DNA analysis was also determined in the neighboring Slovenia and Croatia, which already have functional national DNA databases. There is a need for a ‘regional supplement’ to the current documentation and standards pertaining to forensic application of DNA databases, which should include regional-specific preliminary aims and recommendations. PMID:21674821

  12. Forensic DNA databases in Western Balkan region: retrospectives, perspectives, and initiatives.

    PubMed

    Marjanović, Damir; Konjhodzić, Rijad; Butorac, Sara Sanela; Drobnic, Katja; Merkas, Sinisa; Lauc, Gordan; Primorac, Damir; Andjelinović, Simun; Milosavljević, Mladen; Karan, Zeljko; Vidović, Stojko; Stojković, Oliver; Panić, Bojana; Vucetić Dragović, Andjelka; Kovacević, Sandra; Jakovski, Zlatko; Asplen, Chris; Primorac, Dragan

    2011-06-01

    The European Network of Forensic Science Institutes (ENFSI) recommended the establishment of forensic DNA databases and specific implementation and management legislations for all EU/ENFSI members. Therefore, forensic institutions from Bosnia and Herzegovina, Serbia, Montenegro, and Macedonia launched a wide set of activities to support these recommendations. To assess the current state, a regional expert team completed detailed screening and investigation of the existing forensic DNA data repositories and associated legislation in these countries. The scope also included relevant concurrent projects and a wide spectrum of different activities in relation to forensics DNA use. The state of forensic DNA analysis was also determined in the neighboring Slovenia and Croatia, which already have functional national DNA databases. There is a need for a 'regional supplement' to the current documentation and standards pertaining to forensic application of DNA databases, which should include regional-specific preliminary aims and recommendations.

  13. Concurrent Learning of Control in Multi agent Sequential Decision Tasks

    DTIC Science & Technology

    2018-04-17

Concurrent Learning of Control in Multi-agent Sequential Decision Tasks. The overall objective of this project was to develop multi-agent reinforcement learning (MARL) approaches for intelligent agents to autonomously learn distributed control policies in decentralized partially observable…

  14. Concurrent infection with sibling Trichinella species in a natural host.

    PubMed

    Pozio, E; Bandi, C; La Rosa, G; Järvis, T; Miller, I; Kapel, C M

    1995-10-01

Random amplified polymorphic DNA (RAPD) analysis of individual Trichinella muscle larvae, collected from several sylvatic and domestic animals in Estonia, revealed concurrent infection of a racoon dog with Trichinella nativa and Trichinella britovi. This finding provides strong support for their taxonomic ranking as sibling species. These 2 species appear uniformly distributed among sylvatic animals throughout Estonia, while Trichinella spiralis appears restricted to the domestic habitat.

  15. All Together Now: Concurrent Learning of Multiple Structures in an Artificial Language

    ERIC Educational Resources Information Center

    Romberg, Alexa R.; Saffran, Jenny R.

    2013-01-01

    Natural languages contain many layers of sequential structure, from the distribution of phonemes within words to the distribution of phrases within utterances. However, most research modeling language acquisition using artificial languages has focused on only one type of distributional structure at a time. In two experiments, we investigated adult…

  16. The effects of curiosity-evoking events on activity enjoyment.

    PubMed

    Isikman, Elif; MacInnis, Deborah J; Ülkümen, Gülden; Cavanaugh, Lisa A

    2016-09-01

Whereas prior literature has studied the positive effects of curiosity-evoking events that are integral to focal activities, we explore whether and how a curiosity-evoking event that is incidental to a focal activity induces negative outcomes for enjoyment. Four experiments and 1 field study demonstrate that curiosity about an event that is incidental to an activity in which individuals are engaged significantly affects enjoyment of that concurrent activity. This occurs because curiosity diverts attention away from the concurrent activity and focuses it on the curiosity-evoking event. Thus, curiosity regarding an incidental event decreases enjoyment of a positive focal activity but increases enjoyment of a negative focal activity. PsycINFO Database Record (c) 2016 APA, all rights reserved

  17. Requiem for a Data Base System.

    DTIC Science & Technology

    1979-01-18

1) … were defined; 2) the final syntax and semantics of QUEL were defined; 3) protection was figured out; 4) EQUEL was designed; 5) concurrency control and … features which were not thought about in the initial design (such as concurrency control and recovery) and began worrying about distributed data … made in progress rather than on eventual corrections. Some attention is also given to the role of structured design in a data base system implementation.

  18. Tighter entanglement monogamy relations of qubit systems

    NASA Astrophysics Data System (ADS)

    Jin, Zhi-Xiang; Fei, Shao-Ming

    2017-03-01

Monogamy relations characterize the distributions of entanglement in multipartite systems. We investigate monogamy relations related to the concurrence C and the entanglement of formation E. We present new entanglement monogamy relations satisfied by the α-th power of the concurrence for all α ≥ 2, and by the α-th power of the entanglement of formation for all α ≥ √2. These monogamy relations are shown to be tighter than the existing ones.
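The canonical example of such a relation is the Coffman–Kundu–Wootters inequality for the squared concurrence of a three-qubit state; the α-th power relations discussed above take the same general form. The lines below show this standard form for illustration, not the paper's tightened bounds:

```latex
% CKW monogamy for the squared concurrence in a three-qubit state:
C^{2}_{A|BC} \;\ge\; C^{2}_{AB} + C^{2}_{AC}
% and its \alpha-th power generalization, valid for \alpha \ge 2:
C^{\alpha}_{A|BC} \;\ge\; C^{\alpha}_{AB} + C^{\alpha}_{AC}
```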

  19. Automatic Management of Parallel and Distributed System Resources

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

    1990-01-01

    Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

  20. Data Mining on Distributed Medical Databases: Recent Trends and Future Directions

    NASA Astrophysics Data System (ADS)

    Atilgan, Yasemin; Dogan, Firat

    As computerization in healthcare services increase, the amount of available digital data is growing at an unprecedented rate and as a result healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates the data mining applications on centralized medical databases, and how they are used for diagnostic and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally the paper focuses on data mining studies on distributed medical databases.

  1. Production and distribution of scientific and technical databases - Comparison among Japan, US and Europe

    NASA Astrophysics Data System (ADS)

    Onodera, Natsuo; Mizukami, Masayuki

This paper estimates several quantitative indices on the production and distribution of scientific and technical databases based on various recent publications, and attempts to compare the indices internationally. The raw data used for the estimation are drawn mainly from the Database Directory (published by MITI) for database production and from some domestic and foreign study reports for database revenues. The ratios of the indices among Japan, the US and Europe for database usage are similar to those for general scientific and technical activities such as population and R&D expenditures. But Japanese contributions to the production, revenue and cross-country distribution of databases are still lower than those of the US and European countries. An international comparison of relative database activities between the public and private sectors is also discussed.

  2. Analysis of DIRAC's behavior using model checking with process algebra

    NASA Astrophysics Data System (ADS)

    Remenska, Daniela; Templon, Jeff; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Graciani Diaz, Ricardo; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof

    2012-12-01

DIRAC is the grid solution developed to support LHCb production activities as well as user data analysis. It consists of distributed services and agents delivering the workload to the grid resources. Services maintain database back-ends to store dynamic state information of entities such as jobs, queues, staging requests, etc. Agents use polling to check and possibly react to changes in the system state. Each agent's logic is relatively simple; the main complexity lies in their cooperation. Agents run concurrently, and collaborate using the databases as shared memory. The databases can be accessed directly by the agents if running locally or through a DIRAC service interface if necessary. This shared-memory model causes entities to occasionally get into inconsistent states. Tracing and fixing such problems becomes formidable due to the inherent parallelism present. We propose more rigorous methods to cope with this. Model checking is one such technique for analysis of an abstract model of a system. Unlike conventional testing, it allows full control over the parallel processes' execution, and supports exhaustive state-space exploration. We used the mCRL2 language and toolset to model the behavior of two related DIRAC subsystems: the workload and storage management system. Based on process algebra, mCRL2 allows defining custom data types as well as functions over these. This makes it suitable for modeling the data manipulations made by DIRAC's agents. By visualizing the state space and replaying scenarios with the toolkit's simulator, we have detected race conditions and deadlocks in these systems, which, in several cases, were confirmed to occur in reality. Several properties of interest were formulated and verified with the tool. Our future direction is automating the translation from DIRAC to a formal model.
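The kind of inconsistency that arises when agents use a database as shared memory can be shown with a deterministic lost-update sketch (hypothetical job states, not DIRAC code): two agents both read the same state before either writes, so the second write silently overwrites the first transition.

```python
# Lost-update race between two polling agents sharing a state store.
shared_db = {"job-1": "WAITING"}

def agent_step(read_state, new_state):
    """One agent's read-check-write cycle: transition only if still WAITING."""
    if read_state == "WAITING":
        return new_state        # the agent decides on a transition...
    return read_state           # ...otherwise it leaves the state alone

# Interleaving: both agents poll and see 'WAITING' before either writes.
seen_a = shared_db["job-1"]
seen_b = shared_db["job-1"]
shared_db["job-1"] = agent_step(seen_a, "STAGING")   # agent A writes
shared_db["job-1"] = agent_step(seen_b, "RUNNING")   # agent B overwrites A
print(shared_db["job-1"])  # RUNNING -- the STAGING transition was lost
```

Model checking explores every such interleaving exhaustively, which is how the mCRL2 analysis can surface races that ad hoc testing rarely triggers.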

  3. Applying Agile Methods to the Development of a Community-Based Sea Ice Observations Database

    NASA Astrophysics Data System (ADS)

    Pulsifer, P. L.; Collins, J. A.; Kaufman, M.; Eicken, H.; Parsons, M. A.; Gearheard, S.

    2011-12-01

    Local and traditional knowledge and community-based monitoring programs are increasingly being recognized as an important part of establishing an Arctic observing network, and understanding Arctic environmental change. The Seasonal Ice Zone Observing Network (SIZONet, http://www.sizonet.org) project has implemented an integrated program for observing seasonal ice in Alaska. Observation and analysis by local sea ice experts helps track seasonal and inter-annual variability of the ice cover and its use by coastal communities. The ELOKA project (http://eloka-arctic.org) is collaborating with SIZONet on the development of a community accessible, Web-based application for collecting and distributing local observations. The SIZONet project is dealing with complicated qualitative and quantitative data collected from a growing number of observers in different communities while concurrently working to design a system that will serve a wide range of different end users including Arctic residents, scientists, educators, and other stakeholders with a need for sea ice information. The benefits of linking and integrating knowledge from communities and university-based researchers are clear, however, development of an information system in this multidisciplinary, multi-participant context is challenging. Participants are geographically distributed, have different levels of technical expertise, and have varying goals for how the system will be used. As previously reported (Pulsifer et al. 2010), new technologies have been used to deal with some of the challenges presented in this complex development context. In this paper, we report on the challenges and innovations related to working as a multi-disciplinary software development team. Specifically, we discuss how Agile software development methods have been used in defining and refining user needs, developing prototypes, and releasing a production level application. 
We provide an overview of the production application that includes discussion of a hybrid architecture that combines a traditional relational database, schema-less database, advanced free text search, and the preliminary framework for Semantic Web support. The current version of the SIZONet web application is discussed in relation to the high-value features defined as part of the Agile approach. Preliminary feedback indicates a system that meets the needs of multiple user groups.

  4. Nonuniform Changes in the Distribution of Visual Attention from Visual Complexity and Action: A Driving Simulation Study.

    PubMed

    Park, George D; Reed, Catherine L

    2015-02-01

    Researchers acknowledge the interplay between action and attention, but typically consider action as a response to successful attentional selection or the correlation of performance on separate action and attention tasks. We investigated how concurrent action with spatial monitoring affects the distribution of attention across the visual field. We embedded a functional field of view (FFOV) paradigm with concurrent central object recognition and peripheral target localization tasks in a simulated driving environment. Peripheral targets varied across 20-60 deg eccentricity at 11 radial spokes. Three conditions assessed the effects of visual complexity and concurrent action on the size and shape of the FFOV: (1) with no background, (2) with driving background, and (3) with driving background and vehicle steering. The addition of visual complexity slowed task performance and reduced the FFOV size but did not change the baseline shape. In contrast, the addition of steering produced not only shrinkage of the FFOV, but also changes in the FFOV shape. Nonuniform performance decrements occurred in proximal regions used for the central task and for steering, independent of interference from context elements. Multifocal attention models should consider the role of action and account for nonhomogeneities in the distribution of attention. © 2015 SAGE Publications.

  5. Performance related issues in distributed database systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

The key elements of research performed during the year-long effort of this project are: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC toward building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.

  6. Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries.

    PubMed

    McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E; Madhavan, Subha

    2012-06-01

    Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy.
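A rule-based validator of the kind described above combines simple allowable-value checks with crosschecks over related fields. The sketch below uses hypothetical field names and rules for illustration; it is not the registries' software.

```python
# Minimal rule-based record validator: allowable-value checks plus a
# cross-field consistency rule, each paired with a human-readable description.
RULES = [
    ("sex in allowed set",
     lambda r: r["sex"] in {"M", "F", "U"}),
    ("age at diagnosis is plausible",
     lambda r: 0 <= r["age_dx"] <= 110),
    ("diagnosis age cannot exceed current age",   # cross-field consistency
     lambda r: r["age_dx"] <= r["age_now"]),
]

def validate(record):
    """Return the descriptions of all rules the record violates."""
    return [name for name, check in RULES if not check(record)]

good = {"sex": "F", "age_dx": 52, "age_now": 60}
bad = {"sex": "X", "age_dx": 70, "age_now": 60}
print(validate(good))  # []
print(validate(bad))   # ['sex in allowed set', 'diagnosis age cannot exceed current age']
```

Reporting rule descriptions rather than a bare pass/fail is what lets submitting centers locate and correct errors, supporting the error-rate decline the abstract reports.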

  7. Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries

    PubMed Central

    McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E

    2012-01-01

    Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy. PMID:22323393

  8. 7 CFR 281.1 - General purpose and scope.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM ADMINISTRATION OF THE FOOD STAMP PROGRAM ON INDIAN... Program on Indian reservations either separately or concurrently with the Food distribution program. In order to assure that the Food Stamp Program is responsive to the needs of Indians on reservations, State...

  9. Identification and Classification of Common Risks in Space Science Missions

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Chattopadhyay, Debarati; Hanna, Robert A.; Port, Daniel; Eggleston, Sabrina

    2010-01-01

    Due to the highly constrained schedules and budgets that NASA missions must contend with, the identification and management of cost, schedule, and risk in the earliest stages of the lifecycle is critical. At the Jet Propulsion Laboratory (JPL), it is the concurrent engineering teams that first address these items in a systematic manner. Foremost among these concurrent engineering teams is Team X. Started in 1995, Team X has carried out over 1000 studies, dramatically reducing the time and cost involved, and has been the model for other concurrent engineering teams both within NASA and throughout the larger aerospace community. The ability to do integrated risk identification and assessment was first introduced into Team X in 2001; since that time, the mission risks identified in each study have been kept in a database. In this paper we describe how the Team X risk process is evolving, highlighting the strengths and weaknesses of the different approaches. The paper focuses especially on the identification and classification of common risks that have arisen during Team X studies of space-based science missions.

  10. Effects of networking on career success: a longitudinal study.

    PubMed

    Wolff, Hans-Georg; Moser, Klaus

    2009-01-01

    Previous research has reported effects of networking, defined as building, maintaining, and using relationships, on career success. However, empirical studies have relied exclusively on concurrent or retrospective designs that rest upon strong assumptions about the causal direction of this relation and depict a static snapshot of the relation at a given point in time. This study provides a dynamic perspective on the effects of networking on career success and reports results of a longitudinal study. Networking was assessed with 6 subscales that resulted from combining measures of the facets of (a) internal versus external networking and (b) building versus maintaining versus using contacts. Objective (salary) and subjective (career satisfaction) measures of career success were obtained for 3 consecutive years. Multilevel analyses showed that networking is related to concurrent salary and that it is related to the growth rate of salary over time. Networking is also related to concurrent career satisfaction. As satisfaction remained stable over time, no effects of networking on the growth of career satisfaction were found. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  11. Family and Other Impacts on Retention

    DTIC Science & Technology

    1992-04-01

    provide the Army with an invaluable database for evaluating and designing policies and programs to enhance Army retention objectives. These programs... policy, as well as other aspects of the military force. Concurrently, continuing economic growth in the private sector will result in higher levels...work on retention and on the broader body of research on job satisfaction and job turnover. More recently, there has been both policy and theoretical

  12. Consolidated Environmental Resource Database Information Process (CERDIP)

    DTIC Science & Technology

    2015-11-19

    Secretary of the Army for Installations, Energy and Environment [OASA(IE&E)] ESOH 5850 21st Street, Bldg 211, Second Floor Fort Belvoir, VA 22060-5938...Elizabeth J. Keysar 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) National Defense Center for Energy and Environment Operated by Concurrent...Markup Language NDCEE National Defense Center for Energy and Environment NFDD National Geospatial–Intelligence Agency Feature Data Dictionary

  13. Programming your way out of the past: ISIS and the META Project

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Marzullo, Keith

    1989-01-01

    The ISIS distributed programming system and the META Project are described. The ISIS programming toolkit is an aid to low-level programming that makes it easy to build fault-tolerant distributed applications that exploit replication and concurrent execution. The META Project is reexamining high-level mechanisms such as the filesystem, shell language, and administration tools in distributed systems.

  14. WLN's Database: New Directions.

    ERIC Educational Resources Information Center

    Ziegman, Bruce N.

    1988-01-01

    Describes features of the Western Library Network's database, including the database structure, authority control, contents, quality control, and distribution methods. The discussion covers changes in distribution necessitated by increasing telecommunications costs and the development of optical data disk products. (CLB)

  15. Linear monogamy of entanglement in three-qubit systems

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Gao, Fei; Wen, Qiao-Yan

    2015-11-01

    For any three-qubit quantum system ABC, Oliveira et al. numerically found that both the concurrence and the entanglement of formation (EoF) obey linear monogamy relations in pure states. They also conjectured that the linear monogamy relations can be saturated when the focus qubit A is maximally entangled with the joint qubits BC. In this work, we prove analytically that both the concurrence and EoF obey linear monogamy relations in an arbitrary three-qubit state. Furthermore, we verify that all three-qubit pure states are maximally entangled in the bipartition A|BC when they saturate the linear monogamy relations. We also study the distribution of the concurrence and EoF. More specifically, when the amount of entanglement between A and B equals that of A and C, we show that the sum of EoF itself saturates the linear monogamy relation, while the sum of the squared EoF is minimal. Unlike EoF, the concurrence and the squared concurrence both saturate the linear monogamy relations when the entanglement between A and B equals that of A and C.
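
    The quantities discussed can be checked numerically. The sketch below computes the standard Wootters concurrence and verifies the squared-concurrence monogamy relation C²(A|BC) ≥ C²(AB) + C²(AC) on the three-qubit W state, where it is saturated; this is background illustration of the definitions, not the paper's analytic proof:

```python
# Check squared-concurrence monogamy, C_{A|BC}^2 >= C_AB^2 + C_AC^2,
# on the W state using the standard Wootters formula.
import numpy as np

def ptrace(rho, keep, dims):
    """Partial trace over the subsystems not listed in `keep`."""
    rho = rho.reshape(dims * 2)
    for i in sorted(set(range(len(dims))) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=i, axis2=i + rho.ndim // 2)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def concurrence(rho2):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho2 @ yy @ rho2.conj() @ yy))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Three-qubit W state: (|100> + |010> + |001>)/sqrt(3)
psi = np.zeros(8)
psi[[4, 2, 1]] = 1 / np.sqrt(3)
rho = np.outer(psi, psi)

C_AB = concurrence(ptrace(rho, [0, 1], [2, 2, 2]))
C_AC = concurrence(ptrace(rho, [0, 2], [2, 2, 2]))
rho_A = ptrace(rho, [0], [2, 2, 2])
# Bipartite concurrence of a pure state across A|BC.
C_A_BC = np.sqrt(2 * (1 - np.trace(rho_A @ rho_A).real))

print(round(C_AB, 4), round(C_AC, 4), round(C_A_BC, 4))  # 0.6667 0.6667 0.9428
```

    For the W state C_AB = C_AC = 2/3 and C(A|BC) = 2√2/3, so the squared relation holds with equality, consistent with the A-B / A-C symmetry discussed in the abstract.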

  16. Linear monogamy of entanglement in three-qubit systems.

    PubMed

    Liu, Feng; Gao, Fei; Wen, Qiao-Yan

    2015-11-16

    For any three-qubit quantum system ABC, Oliveira et al. numerically found that both the concurrence and the entanglement of formation (EoF) obey linear monogamy relations in pure states. They also conjectured that the linear monogamy relations can be saturated when the focus qubit A is maximally entangled with the joint qubits BC. In this work, we prove analytically that both the concurrence and EoF obey linear monogamy relations in an arbitrary three-qubit state. Furthermore, we verify that all three-qubit pure states are maximally entangled in the bipartition A|BC when they saturate the linear monogamy relations. We also study the distribution of the concurrence and EoF. More specifically, when the amount of entanglement between A and B equals that of A and C, we show that the sum of EoF itself saturates the linear monogamy relation, while the sum of the squared EoF is minimal. Unlike EoF, the concurrence and the squared concurrence both saturate the linear monogamy relations when the entanglement between A and B equals that of A and C.

  17. Linear monogamy of entanglement in three-qubit systems

    PubMed Central

    Liu, Feng; Gao, Fei; Wen, Qiao-Yan

    2015-01-01

    For any three-qubit quantum system ABC, Oliveira et al. numerically found that both the concurrence and the entanglement of formation (EoF) obey linear monogamy relations in pure states. They also conjectured that the linear monogamy relations can be saturated when the focus qubit A is maximally entangled with the joint qubits BC. In this work, we prove analytically that both the concurrence and EoF obey linear monogamy relations in an arbitrary three-qubit state. Furthermore, we verify that all three-qubit pure states are maximally entangled in the bipartition A|BC when they saturate the linear monogamy relations. We also study the distribution of the concurrence and EoF. More specifically, when the amount of entanglement between A and B equals that of A and C, we show that the sum of EoF itself saturates the linear monogamy relation, while the sum of the squared EoF is minimal. Unlike EoF, the concurrence and the squared concurrence both saturate the linear monogamy relations when the entanglement between A and B equals that of A and C. PMID:26568265

  18. Low cost management of replicated data in fault-tolerant distributed systems

    NASA Technical Reports Server (NTRS)

    Joseph, Thomas A.; Birman, Kenneth P.

    1990-01-01

    Many distributed systems replicate data for fault tolerance or availability. In such systems, a logical update on a data item results in a physical update on a number of copies. The synchronization and communication required to keep the copies of replicated data consistent introduce a delay when operations are performed. A technique is described that relaxes the usual degree of synchronization, permitting replicated data items to be updated concurrently with other operations, while at the same time ensuring that correctness is not violated. The additional concurrency thus obtained results in better response time when performing operations on replicated data. How this technique performs in conjunction with a roll-back and a roll-forward failure recovery mechanism is also discussed.
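
    One classic way to relax synchronization while preserving correctness is causally ordered delivery of updates: a replica buffers an update until everything that causally precedes it has been applied, so independent updates proceed concurrently. The sketch below uses vector clocks; it is an illustrative mechanism in the spirit of this line of work, not the paper's exact protocol:

```python
# Causal delivery sketch with vector clocks: buffer an update until its
# causal predecessors are applied. Illustrative only; not the paper's
# exact replication protocol.

class Replica:
    def __init__(self, n_sites, site_id):
        self.vc = [0] * n_sites      # updates applied, per originating site
        self.site_id = site_id
        self.buffer = []             # (sender, timestamp, op) not yet deliverable
        self.log = []                # applied operations, in delivery order

    def _ready(self, sender, ts):
        # Deliverable when it is the next update from `sender` and we have
        # already applied everything the sender had seen from other sites.
        return (ts[sender] == self.vc[sender] + 1 and
                all(ts[k] <= self.vc[k] for k in range(len(ts)) if k != sender))

    def receive(self, sender, ts, op):
        self.buffer.append((sender, ts, op))
        progress = True
        while progress:              # drain every message that became ready
            progress = False
            for msg in list(self.buffer):
                s, t, o = msg
                if self._ready(s, t):
                    self.buffer.remove(msg)
                    self.log.append(o)
                    self.vc[s] = t[s]
                    progress = True

r = Replica(2, site_id=1)
# Site 0's second update arrives first; it is held until the first
# arrives, so causal order is preserved without global locking.
r.receive(0, [2, 0], "w2")
r.receive(0, [1, 0], "w1")
print(r.log)  # ['w1', 'w2']
```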

  19. Parallel State Space Construction for a Model Checking Based on Maximality Semantics

    NASA Astrophysics Data System (ADS)

    El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine

    2009-03-01

    The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space; an interesting one among them is alpha-equivalence reduction. A distributed-memory execution environment offers yet another choice. The main contribution of this paper is to show that the parallel state-space construction algorithm proposed in [5], which is based on interleaving semantics with LTS as the semantic model, can easily be adapted to a distributed implementation of alpha-equivalence reduction for maximality-based labeled transition systems.
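
    The core idea of parallel state-space construction can be sketched as follows: states are assigned to workers by a hash function, each worker explores its own frontier, and newly generated successors are forwarded to their owners. The transition relation below is a toy example, and the sequential round-robin loop stands in for true distributed execution; this is not the paper's MLTS-specific algorithm:

```python
# Hash-partitioned state-space construction, simulated sequentially.
# Each worker owns the states that hash to it and forwards successors
# to their owners. Toy transition relation; illustrative only.

def successors(state):
    # Hypothetical system: independently flip each of three bits.
    a, b, c = state
    return [((a + 1) % 2, b, c), (a, (b + 1) % 2, c), (a, b, (c + 1) % 2)]

def owner(state, n_workers):
    return hash(state) % n_workers

def parallel_reachability(initial, n_workers=3):
    visited = [set() for _ in range(n_workers)]   # per-worker state tables
    inbox = [[] for _ in range(n_workers)]
    inbox[owner(initial, n_workers)].append(initial)
    while any(inbox):
        for w in range(n_workers):                # one "round" per worker
            frontier, inbox[w] = inbox[w], []
            for s in frontier:
                if s in visited[w]:
                    continue
                visited[w].add(s)
                for t in successors(s):
                    inbox[owner(t, n_workers)].append(t)
    return set().union(*visited)

states = parallel_reachability((0, 0, 0))
print(len(states))  # 8: every 3-bit state is reachable
```

    Because each state has exactly one owner, no worker stores the full state space, which is precisely how distributed memory relieves the physical-memory bottleneck described above.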

  20. Software For Drawing Design Details Concurrently

    NASA Technical Reports Server (NTRS)

    Crosby, Dewey C., III

    1990-01-01

    Software system containing five computer-aided-design programs enables more than one designer to work on same part or assembly at same time. Reduces time necessary to produce design by implementing concept of parallel or concurrent detailing, in which all detail drawings documenting three-dimensional model of part or assembly produced simultaneously, rather than sequentially. Keeps various detail drawings consistent with each other and with overall design by distributing changes in each detail to all other affected details.

  1. Superconcurrency: A Form of Distributed Heterogeneous Supercomputing

    DTIC Science & Technology

    1991-05-01

    and Nathaniel J. Davis IV, An Overview of the PASM Parallel Processing System, in Computer Architecture, edited by D. D. Gajski, V. M. Milutinovic, H...concurrency Research Team has been ... in the next few months, ... optimally configured suites of the development of the Distributed, e.g., an

  2. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is providing scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases, which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  3. Predictive and concurrent validity of the Braden scale in long-term care: a meta-analysis.

    PubMed

    Wilchesky, Machelle; Lungu, Ovidiu

    2015-01-01

    Pressure ulcer prevention is an important long-term care (LTC) quality indicator. While the Braden Scale is a recommended risk assessment tool, there is a paucity of information specifically pertaining to its validity within the LTC setting. We therefore undertook a systematic review and meta-analysis comparing Braden Scale predictive and concurrent validity within this context. We searched the Medline, EMBASE, PsychINFO, and PubMed databases from 1985 to 2014 for studies containing the requisite information to analyze tool validity. Our initial search yielded 3,773 articles. Eleven datasets emanating from nine published studies describing 40,361 residents met all meta-analysis inclusion criteria and were analyzed using random effects models. Pooled sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 86%, 38%, 28%, and 93%, respectively. Specificity was poorer in concurrent samples as compared with predictive samples (38% vs. 72%), while PPV was low in both sample types (25 and 37%). Though random effects model results showed that the Scale had good overall predictive ability [RR, 4.33; 95% CI, 3.28-5.72], none of the concurrent samples were found to have "optimal" sensitivity and specificity. In conclusion, the appropriateness of the Braden Scale in LTC is questionable given its low specificity and PPV, in particular in concurrent validity studies. Future studies should further explore the extent to which the apparent low validity of the Scale in LTC is due to the choice of cutoff point and/or preventive strategies implemented by LTC staff as a matter of course. © 2015 by the Wound Healing Society.
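
    Pooled sensitivity and specificity translate into predictive values only once a prevalence is fixed (Bayes' rule). The sketch below shows that relationship; the prevalence used is an illustrative assumption, since the abstract does not report a pooled prevalence:

```python
# Predictive values from sensitivity/specificity at a given prevalence
# (Bayes' rule). The prevalence below is an illustrative assumption.

def predictive_values(sens, spec, prevalence):
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Pooled values from the meta-analysis; prevalence of 22% is assumed.
ppv, npv = predictive_values(sens=0.86, spec=0.38, prevalence=0.22)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")  # PPV=0.28, NPV=0.91
```

    The low specificity (38%) is what drags the PPV down despite high sensitivity, which is the abstract's central concern about the Scale in LTC.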

  4. Concurrent versus sequential sorafenib therapy in combination with radiation for hepatocellular carcinoma.

    PubMed

    Wild, Aaron T; Gandhi, Nishant; Chettiar, Sivarajan T; Aziz, Khaled; Gajula, Rajendra P; Williams, Russell D; Kumar, Rachit; Taparra, Kekoa; Zeng, Jing; Cades, Jessica A; Velarde, Esteban; Menon, Siddharth; Geschwind, Jean F; Cosgrove, David; Pawlik, Timothy M; Maitra, Anirban; Wong, John; Hales, Russell K; Torbenson, Michael S; Herman, Joseph M; Tran, Phuoc T

    2013-01-01

    Sorafenib (SOR) is the only systemic agent known to improve survival for hepatocellular carcinoma (HCC). However, SOR prolongs survival by less than 3 months and does not alter symptomatic progression. To improve outcomes, several phase I-II trials are currently examining SOR with radiation (RT) for HCC utilizing heterogeneous concurrent and sequential treatment regimens. Our study provides preclinical data characterizing the effects of concurrent versus sequential RT-SOR on HCC cells both in vitro and in vivo. Concurrent and sequential RT-SOR regimens were tested for efficacy among 4 HCC cell lines in vitro by assessment of clonogenic survival, apoptosis, cell cycle distribution, and γ-H2AX foci formation. Results were confirmed in vivo by evaluating tumor growth delay and performing immunofluorescence staining in a hind-flank xenograft model. In vitro, concurrent RT-SOR produced radioprotection in 3 of 4 cell lines, whereas sequential RT-SOR produced decreased colony formation among all 4. Sequential RT-SOR increased apoptosis compared to RT alone, while concurrent RT-SOR did not. Sorafenib induced reassortment into less radiosensitive phases of the cell cycle through G1-S delay and cell cycle slowing. More double-strand breaks (DSBs) persisted 24 h post-irradiation for RT alone versus concurrent RT-SOR. In vivo, sequential RT-SOR produced the greatest tumor growth delay, while concurrent RT-SOR was similar to RT alone. More persistent DSBs were observed in xenografts treated with sequential RT-SOR or RT alone versus concurrent RT-SOR. Sequential RT-SOR additionally produced a greater reduction in xenograft tumor vascularity and mitotic index than either concurrent RT-SOR or RT alone. In conclusion, sequential RT-SOR demonstrates greater efficacy against HCC than concurrent RT-SOR both in vitro and in vivo. These results may have implications for clinical decision-making and prospective trial design.

  5. Concurrent versus Sequential Sorafenib Therapy in Combination with Radiation for Hepatocellular Carcinoma

    PubMed Central

    Chettiar, Sivarajan T.; Aziz, Khaled; Gajula, Rajendra P.; Williams, Russell D.; Kumar, Rachit; Taparra, Kekoa; Zeng, Jing; Cades, Jessica A.; Velarde, Esteban; Menon, Siddharth; Geschwind, Jean F.; Cosgrove, David; Pawlik, Timothy M.; Maitra, Anirban; Wong, John; Hales, Russell K.; Torbenson, Michael S.; Herman, Joseph M.; Tran, Phuoc T.

    2013-01-01

    Sorafenib (SOR) is the only systemic agent known to improve survival for hepatocellular carcinoma (HCC). However, SOR prolongs survival by less than 3 months and does not alter symptomatic progression. To improve outcomes, several phase I-II trials are currently examining SOR with radiation (RT) for HCC utilizing heterogeneous concurrent and sequential treatment regimens. Our study provides preclinical data characterizing the effects of concurrent versus sequential RT-SOR on HCC cells both in vitro and in vivo. Concurrent and sequential RT-SOR regimens were tested for efficacy among 4 HCC cell lines in vitro by assessment of clonogenic survival, apoptosis, cell cycle distribution, and γ-H2AX foci formation. Results were confirmed in vivo by evaluating tumor growth delay and performing immunofluorescence staining in a hind-flank xenograft model. In vitro, concurrent RT-SOR produced radioprotection in 3 of 4 cell lines, whereas sequential RT-SOR produced decreased colony formation among all 4. Sequential RT-SOR increased apoptosis compared to RT alone, while concurrent RT-SOR did not. Sorafenib induced reassortment into less radiosensitive phases of the cell cycle through G1-S delay and cell cycle slowing. More double-strand breaks (DSBs) persisted 24 h post-irradiation for RT alone versus concurrent RT-SOR. In vivo, sequential RT-SOR produced the greatest tumor growth delay, while concurrent RT-SOR was similar to RT alone. More persistent DSBs were observed in xenografts treated with sequential RT-SOR or RT alone versus concurrent RT-SOR. Sequential RT-SOR additionally produced a greater reduction in xenograft tumor vascularity and mitotic index than either concurrent RT-SOR or RT alone. In conclusion, sequential RT-SOR demonstrates greater efficacy against HCC than concurrent RT-SOR both in vitro and in vivo. These results may have implications for clinical decision-making and prospective trial design. PMID:23762417

  6. High fold computer disk storage DATABASE for fast extended analysis of γ-rays events

    NASA Astrophysics Data System (ADS)

    Stézowski, O.; Finck, Ch.; Prévost, D.

    1999-03-01

    Recently, spectacular technical developments have been achieved to increase the resolving power of large γ-ray spectrometers. With these new eyes, physicists are able to study the intricate nature of atomic nuclei. Concurrently, more and more complex multidimensional analyses are needed to investigate very weak phenomena. In this article, we first present a software package (DATABASE) allowing high-fold coincidence γ-ray events to be stored on hard disk. Then a non-conventional method of analysis, the anti-gating procedure, is described. Two physical examples are given to explain how it can be used, and Monte Carlo simulations have been performed to test the validity of the method.

  7. Distributed Leadership in a Maltese College: The Voices of Those among Whom Leadership Is "Distributed" and Who Concurrently Narrate Themselves as Leadership "Distributors"

    ERIC Educational Resources Information Center

    Mifsud, Denise

    2017-01-01

    In the unfolding Maltese education scenario of decentralization and school networking, I explore distributed leadership as it occurs at the college level through the leaders' narrative and performance in an investigation of the power relations among the different-tiered leaders. This article uses data from the case study of a Maltese college…

  8. Concurrent Transmission Based on Channel Quality in Ad Hoc Networks: A Game Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Gao, Xinbo; Li, Xiaoji; Pei, Qingqi

    In this paper, a decentralized concurrent transmission strategy for a shared channel in ad hoc networks is proposed based on game theory. First, a static concurrent-transmission game is used to select candidate transmitters by a channel-quality threshold and to maximize overall throughput while accounting for channel-quality variation. To reach the NES (Nash equilibrium solution), the selfish behavior of nodes attempting to improve their channel gain unilaterally is evaluated. The game is thus distributed: each node decides for itself whether to transmit concurrently with others, depending on the NES. Second, since there are always some nodes with channel gain below the NES, defined here as hunger nodes, a hunger-suppression scheme is proposed that adjusts the price function with interference reservation and forward relay, giving hunger nodes fair transmission opportunities. Finally, inspired by stock trading, a dynamic scheme for determining the concurrent-transmission threshold is implemented to make the static game practical. Numerical results show that the proposed scheme increases concurrent transmission opportunities for active nodes while greatly reducing the number of hunger nodes, with the least increase of threshold achieved through interference reservation; the results also show good network goodput for the proposed model.
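
    A toy version of the threshold mechanism conveys the underlying trade-off: raising the channel-quality threshold concentrates transmissions among high-gain nodes but creates more hunger nodes. The gains, noise level, and interference model below are illustrative, not the paper's game formulation:

```python
# Toy threshold-based concurrent transmission: nodes whose channel gain
# meets the threshold transmit together; nodes below it are "hunger"
# nodes. Parameters are illustrative, not the paper's model.
import math
import random

random.seed(1)
gains = [random.uniform(0.1, 1.0) for _ in range(20)]  # per-node channel gains
noise = 0.05

def evaluate(threshold):
    tx = [g for g in gains if g >= threshold]          # concurrent transmitters
    hunger = len(gains) - len(tx)
    total = sum(tx)
    # Each transmitter's SINR: own gain over noise plus the others' interference.
    throughput = sum(math.log2(1 + g / (noise + (total - g))) for g in tx)
    return throughput, hunger

for th in (0.2, 0.5, 0.8):
    tp, hungry = evaluate(th)
    print(f"threshold={th}: throughput={tp:.2f}, hunger nodes={hungry}")
```

    Sweeping the threshold like this illustrates why a dynamic threshold-determination scheme is needed: the best operating point moves with the current gain distribution.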

  9. Concurrent Use of Hypnotic Drugs and Chinese Herbal Medicine Therapies among Taiwanese Adults with Insomnia Symptoms: A Population-Based Study.

    PubMed

    Lee, Kuei-Hua; Tsai, Yueh-Ting; Lai, Jung-Nien; Lin, Shun-Ku

    2013-01-01

    Background. The increased practice of traditional Chinese medicine (TCM) worldwide has raised concerns regarding herb-drug interactions. The purpose of our study is to analyze the concurrent use of Chinese herbal products (CHPs) among Taiwanese insomnia patients taking hypnotic drugs. Methods. The usage, frequency of services, and CHP prescribed among 53,949 insomnia sufferers were evaluated from a random sample of 1 million beneficiaries in the National Health Insurance Research Database. A logistic regression method was used to identify the factors that were associated with the coprescription of a CHP and a hypnotic drug. Cox proportional hazards regressions were performed to calculate the hazard ratios (HRs) of hip fracture between the two groups. Results. More than 1 of every 3 hypnotic users also used a CHP concurrently. Jia-Wei-Xiao-Yao-San (Augmented Rambling Powder) and Suan-Zao-Ren-Tang (Zizyphus Combination) were the 2 most commonly used CHPs that were coadministered with hypnotic drugs. The HR of hip fracture for hypnotic-drug users who used a CHP concurrently was 0.57-fold (95% CI = 0.47-0.69) that of hypnotic-drug users who did not use a CHP. Conclusion. Exploring potential CHP-drug interactions and integrating both healthcare approaches might be beneficial for the overall health and quality of life of insomnia sufferers.

  10. Environmental Conditions Associated with Elevated Vibrio parahaemolyticus Concentrations in Great Bay Estuary, New Hampshire.

    PubMed

    Urquhart, Erin A; Jones, Stephen H; Yu, Jong W; Schuster, Brian M; Marcinkiewicz, Ashley L; Whistler, Cheryl A; Cooper, Vaughn S

    2016-01-01

    Reports from state health departments and the Centers for Disease Control and Prevention indicate that the annual number of reported human vibriosis cases in New England has increased in the past decade. Concurrently, there has been a shift in both the spatial distribution and seasonal detection of Vibrio spp. throughout the region based on limited monitoring data. To determine environmental factors that may underlie these emerging conditions, this study focuses on a long-term database of Vibrio parahaemolyticus concentrations in oyster samples generated from data collected from the Great Bay Estuary, New Hampshire over a period of seven consecutive years. Oyster samples from two distinct sites were analyzed for V. parahaemolyticus abundance, noting significant relationships with various biotic and abiotic factors measured during the same period of study. We developed a predictive modeling tool capable of estimating the likelihood of V. parahaemolyticus presence in coastal New Hampshire oysters. Results show that including chlorophyll a concentration in an empirical model otherwise employing only temperature and salinity variables offers improved predictive capability for modeling the likelihood of V. parahaemolyticus in the Great Bay Estuary.
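
    The model form described (a logistic likelihood in temperature and salinity, optionally augmented with chlorophyll a) can be sketched as below; all coefficients are hypothetical placeholders, not the study's fitted values:

```python
# Shape of the empirical model described: logistic likelihood of
# V. parahaemolyticus presence. All coefficients are hypothetical
# placeholders, not the study's fitted values.
import math

def vp_likelihood(temp_c, salinity_ppt, chl_a, use_chl=True):
    z = -8.0 + 0.35 * temp_c + 0.10 * salinity_ppt  # hypothetical fit
    if use_chl:
        z += 0.20 * chl_a                            # hypothetical chl-a term
    return 1.0 / (1.0 + math.exp(-z))                # logistic link

# Warm, moderately saline water with a summer chlorophyll-a bloom.
print(round(vp_likelihood(22, 25, 6), 3))
print(round(vp_likelihood(22, 25, 6, use_chl=False), 3))
```

    With a positive chlorophyll-a coefficient, the augmented model raises the predicted likelihood during blooms relative to the temperature-and-salinity-only model, which is the qualitative improvement the abstract reports.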

  11. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects because they are difficult to evaluate through analysis and time-consuming to evaluate through simulation. Here, a technique that combines simulation and analysis is used to closely illustrate the impact of deadlocks and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.
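
    A minimal Monte Carlo in the spirit of the combined simulation/analysis approach estimates how often two concurrent transactions conflict under static locking with shared (S) and exclusive (X) locks; the database size, transaction length, and write fraction are illustrative parameters:

```python
# Estimate the probability that two concurrent transactions conflict
# under static locking with shared (S) and exclusive (X) locks.
# Parameters are illustrative.
import random

random.seed(42)
DB_SIZE, TXN_LEN, WRITE_FRAC, TRIALS = 100, 5, 0.3, 20000

def lock_set():
    """A transaction's statically declared locks: item -> S or X."""
    items = random.sample(range(DB_SIZE), TXN_LEN)
    return {i: ("X" if random.random() < WRITE_FRAC else "S") for i in items}

def conflicts(a, b):
    # S/S on the same item is compatible; any overlap involving X conflicts.
    return any(i in b and (a[i] == "X" or b[i] == "X") for i in a)

hits = sum(conflicts(lock_set(), lock_set()) for _ in range(TRIALS))
print(f"estimated conflict probability: {hits / TRIALS:.3f}")
```

    Analysis gives a cross-check on the simulation: with 5-item transactions over 100 items, two transactions overlap with probability about 0.23, and an overlapping item conflicts unless both locks are shared, so the estimate should land near 0.12 for these parameters.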

  12. Heterogeneous distributed query processing: The DAVID system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1985-01-01

    The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.

  13. Sharing the Burden and Risk: An Operational Assessment of the Reserve Components in Operation Iraqi Freedom

    DTIC Science & Technology

    2016-10-01

    Manpower Data Center (DMDC) for data extracts identifying monthly deployments from September 2001 through December 2014. This data would answer questions... Manpower Data Center (DMDC) databases captured which service members were mobilized and deployed. Government history offices, lessons learned...develop MOEs and MOPs to conduct assessments. 1. Data Extracts Concurrent with engagement efforts, IDA queried the Defense Manpower Data Center (DMDC

  14. Architecture Knowledge for Evaluating Scalable Databases

    DTIC Science & Technology

    2015-01-16

    problems, arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. Architects must...longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable “NoSQL” databases [11] have emerged...This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly

  15. Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California

    USGS Publications Warehouse

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2006-01-01

    The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  16. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement because of the contradictions between large-scale spatial data and limited network bandwidth, and between short-lived sessions and long transaction processing. The differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are provided, comprising a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (containing session beans and entity beans) are explained. In addition, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database on the Internet is presented.

  17. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
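Frontier's scaling strategy, caching the results of read-mostly SQL queries close to the readers (in production it layers standard HTTP proxy caches such as Squid in front of the database), can be sketched in a few lines. The class below is a minimal, hypothetical read-through cache with time-to-live expiry; it is not Frontier's actual API.

```python
import time

class QueryCache:
    """Minimal read-through result cache with TTL expiry, sketching the
    Frontier-style idea of caching read-mostly SQL query results close to
    the readers. Illustrative only; not Frontier's actual API."""

    def __init__(self, backend, ttl_seconds=300.0, clock=time.monotonic):
        self.backend = backend        # callable: SQL string -> result rows
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}              # query -> (expiry_time, result)
        self.hits = 0
        self.misses = 0

    def query(self, sql):
        entry = self._store.get(sql)
        if entry is not None and entry[0] > self.clock():
            self.hits += 1            # served from cache, no database load
            return entry[1]
        self.misses += 1
        result = self.backend(sql)    # fall through to the origin database
        self._store[sql] = (self.clock() + self.ttl, result)
        return result
```

With N concurrent readers issuing the same Conditions query, the origin database sees roughly one query per TTL window instead of N; reads stale by at most the TTL are the price, which suits slowly-changing Conditions data.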

  18. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  19. SEASONAL VARIATION OF THE PARTICLE SIZE DISTRIBUTION OF POLYCYCLIC AROMATIC HYDROCARBONS AND OF MAJOR AEROSOL SPECIES IN CLAREMONT, CALIFORNIA. (R827352C020)

    EPA Science Inventory

    As part of the Southern California Particle Center and Supersite (SCPCS) activities, we measured, during all seasons, particle size distributions of 12 priority pollutant polycyclic aromatic hydrocarbons (PAHs), concurrently with elemental carbon (EC), organic carbon (OC), sul...

  20. Multilevel multi-informant structure of the authoritative school climate survey.

    PubMed

    Konold, Timothy; Cornell, Dewey; Huang, Francis; Meyer, Patrick; Lacey, Anna; Nekvasil, Erin; Heilbrun, Anna; Shukla, Kathan

    2014-09-01

    The Authoritative School Climate Survey was designed to provide schools with a brief assessment of 2 key characteristics of school climate--disciplinary structure and student support--that are hypothesized to influence 2 important school climate outcomes--student engagement and prevalence of teasing and bullying in school. The factor structure of these 4 constructs was examined with exploratory and confirmatory factor analyses in a statewide sample of 39,364 students (Grades 7 and 8) attending 423 schools. Notably, the analyses used a multilevel structural approach to model the nesting of students in schools for purposes of evaluating factor structure, demonstrating convergent and concurrent validity and gauging the structural invariance of concurrent validity coefficients across gender. These findings provide schools with a core group of school climate measures guided by authoritative discipline theory. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  1. Software-safety and software quality assurance in real-time applications Part 2: Real-time structures and languages

    NASA Astrophysics Data System (ADS)

    Schoitsch, Erwin

    1988-07-01

Our society depends more and more on the reliability of embedded (real-time) computer systems, even in everyday life. Considering the complexity of the real world, this might become a severe threat. Real-time programming is a discipline important not only in process control and data acquisition systems, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, and interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other and with respect to their potential for quality and safety.
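The semaphore-based process-communication constructs surveyed here can be illustrated with the classic bounded-buffer pattern. A minimal sketch using counting semaphores, with Python's threading module standing in for the real-time languages discussed:

```python
import threading

# Classic bounded-buffer producer/consumer communication built from
# counting semaphores, one of the synchronization constructs the survey
# compares (Python's threading module stands in for a real-time language).
class BoundedBuffer:
    def __init__(self, capacity):
        self.items = []
        self.mutex = threading.Lock()               # mutual exclusion on the list
        self.slots = threading.Semaphore(capacity)  # counts free slots
        self.filled = threading.Semaphore(0)        # counts filled slots

    def put(self, item):
        self.slots.acquire()        # block while the buffer is full
        with self.mutex:
            self.items.append(item)
        self.filled.release()       # wake a waiting consumer

    def get(self):
        self.filled.acquire()       # block while the buffer is empty
        with self.mutex:
            item = self.items.pop(0)
        self.slots.release()        # wake a waiting producer
        return item
```

The two semaphores count free and filled slots, so producers and consumers block exactly when they must; the mutex protects the shared list itself.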

  2. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    NASA Astrophysics Data System (ADS)

    Viegas, F.; Malon, D.; Cranshaw, J.; Dimitrov, G.; Nowak, M.; Nairz, A.; Goossens, L.; Gallas, E.; Gamboa, C.; Wong, A.; Vinek, E.

    2010-04-01

The TAG files store summary event quantities that allow a quick selection of interesting events. These data will be produced at a nominal rate of 200 Hz and uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management in many respects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to CERN's and the remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy for cleaning and archiving old TAG data.
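The TAG idea, a relational table of per-event summary quantities queried before any bulky event data are touched, can be shown in miniature. The column names below are invented for illustration; they are not ATLAS's actual TAG schema:

```python
import sqlite3

# The TAG idea in miniature: per-event summary quantities in a relational
# table let interesting events be selected with a cheap query before any
# bulky event data are read. Column names are invented for illustration;
# this is not ATLAS's actual TAG schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag (event_id INTEGER, n_muons INTEGER, missing_et REAL)")
conn.executemany("INSERT INTO tag VALUES (?, ?, ?)", [
    (1, 0, 12.0), (2, 2, 55.5), (3, 1, 80.2), (4, 2, 5.0),
])
# Select events with at least two muons and significant missing energy.
selected = [row[0] for row in conn.execute(
    "SELECT event_id FROM tag WHERE n_muons >= 2 AND missing_et > 20 ORDER BY event_id")]
```

Only the events surviving the summary-level cut need their full event records fetched afterwards.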

  3. Brief Report: Databases in the Asia-Pacific Region: The Potential for a Distributed Network Approach.

    PubMed

    Lai, Edward Chia-Cheng; Man, Kenneth K C; Chaiyakunapruk, Nathorn; Cheng, Ching-Lan; Chien, Hsu-Chih; Chui, Celine S L; Dilokthornsakul, Piyameth; Hardy, N Chantelle; Hsieh, Cheng-Yang; Hsu, Chung Y; Kubota, Kiyoshi; Lin, Tzu-Chieh; Liu, Yanfang; Park, Byung Joo; Pratt, Nicole; Roughead, Elizabeth E; Shin, Ju-Young; Watcharathanakij, Sawaeng; Wen, Jin; Wong, Ian C K; Yang, Yea-Huei Kao; Zhang, Yinghong; Setoguchi, Soko

    2015-11-01

    This study describes the availability and characteristics of databases in Asian-Pacific countries and assesses the feasibility of a distributed network approach in the region. A web-based survey was conducted among investigators using healthcare databases in the Asia-Pacific countries. Potential survey participants were identified through the Asian Pharmacoepidemiology Network. Investigators from a total of 11 databases participated in the survey. Database sources included four nationwide claims databases from Japan, South Korea, and Taiwan; two nationwide electronic health records from Hong Kong and Singapore; a regional electronic health record from western China; two electronic health records from Thailand; and cancer and stroke registries from Taiwan. We identified 11 databases with capabilities for distributed network approaches. Many country-specific coding systems and terminologies have been already converted to international coding systems. The harmonization of health expenditure data is a major obstacle for future investigations attempting to evaluate issues related to medical costs.

  4. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects because they are difficult to evaluate through analysis and time-consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlocks and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.
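Deadlock among lock-holding transactions, one of the effects the paper models, is conventionally detected as a cycle in the wait-for graph: an edge T1 -> T2 means T1 waits for a lock held by T2. A minimal DFS-based sketch:

```python
# Deadlock detection on the wait-for graph: an edge T1 -> T2 means
# transaction T1 waits for a lock held by T2, and a cycle means deadlock.
# Minimal depth-first-search sketch (illustrative, not from the paper).
def has_deadlock(wait_for):
    """wait_for: dict mapping each transaction to the set it waits on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}

    def dfs(t):
        color[t] = GRAY                          # on the current DFS path
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(u, WHITE) == WHITE and dfs(u):
                return True
        color[t] = BLACK                         # fully explored, no cycle here
        return False

    return any(color[t] == WHITE and dfs(t) for t in list(wait_for))
```

A lock manager can run this check whenever a transaction blocks, then abort one transaction on the cycle to break the deadlock.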

  5. A Database for Decision-Making in Training and Distributed Learning Technology

    DTIC Science & Technology

    1998-04-01

developer must answer these questions: ♦ Who will develop the courseware? Should we outsource? ♦ What media should we use? How much will it cost? ♦ What...to develop, the database can be useful for answering staffing questions and planning transitions to technology-assisted courses. The database...of distributed learning curricula in comparison to traditional methods. To develop a military-wide distributed learning plan, the existing course

  6. Database searching and accounting of multiplexed precursor and product ion spectra from the data independent analysis of simple and complex peptide mixtures.

    PubMed

    Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J

    2009-03-01

    A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, both in simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses on both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques for automatically determining the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system" type protein digest mixtures. The method was validated independently by data dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data independent acquisition strategies with high sensitivity and specificity. 
They also illustrate a more comprehensive analysis of the samples studied, providing approximately 20% more protein identifications compared to a more conventional data-directed approach using the same identification criteria, with a concurrent increase in both sequence coverage and the number of modified peptides.
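The decoy-database technique the algorithm uses to estimate false-positive identification rates can be sketched simply: at any score threshold, the false discovery rate is approximated by the ratio of decoy matches to target matches above that threshold. The function below is an illustrative sketch, not the paper's exact estimator:

```python
# Target-decoy estimation of the false-positive identification rate.
# Each match is a (score, is_decoy) pair; decoy hits above a threshold
# estimate how many target hits above it are false, so
# FDR ~= decoys_above / targets_above. Sketch only, not the paper's code.
def fdr_at_threshold(matches, threshold):
    targets = sum(1 for s, is_decoy in matches if s >= threshold and not is_decoy)
    decoys = sum(1 for s, is_decoy in matches if s >= threshold and is_decoy)
    return decoys / targets if targets else 0.0
```

Scanning thresholds from high to low and stopping when the estimated FDR exceeds a chosen limit (say 1%) yields the reported identification list.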

  7. Hypsometry and the distribution of high-alpine lakes in the European Alps

    NASA Astrophysics Data System (ADS)

    Prasicek, Günther; Otto, Jan-Christoph; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

Climate change strongly affects alpine landscapes. Cold-climate processes shape the terrain in a typical way, and ice-free overdeepenings in cirques and glacial valleys as well as different types of moraines favor the formation of lakes. These water bodies act as sediment sinks and high-alpine water storage but may also favor outburst and flooding events. Glacier retreat worldwide is associated with an increasing number and size of high-alpine lakes, which implies a concurrent expansion of sediment retention and natural hazard potential. Rising temperatures are regarded as the major cause of this development, but other factors such as the distribution of area over elevation and glacier erosional and depositional dynamics may play an important role as well. While models of ice flow and glacial erosion are employed to understand the impact of glaciers on mountain landscapes, comprehensive datasets and analyses on the distribution of existing high-alpine lakes are lacking. In this study we present an exhaustive database of natural lakes in the European Alps and analyze lake distribution with respect to hypsometry. We find that the distribution of lake number and lake area over elevation only weakly coincides with hypsometry. Unsurprisingly, the largest lakes are often tectonically influenced and located at the fringe of the mountain range and in prominent inter-montane basins. With increasing elevation, however, the number of lakes, lake area and total area decrease until a local minimum is reached around the equilibrium line altitude (ELA) of the last glacial maximum (LGM). Above the LGM ELA, total area further decreases, but lake number and area increase again. A local maximum in lake area coincides with an absolute maximum in lake number between the ELAs of the LGM and the Little Ice Age, around 2500 m. We conclude that glacial erosional and depositional dynamics control the distribution and size of high-alpine lakes and thus demand exceptional attention when predicting future lake development.

  8. Distribution System Upgrade Unit Cost Database

    DOE Data Explorer

    Horowitz, Kelsey

    2017-11-30

This database contains unit cost information for different components that may be used to integrate distributed photovoltaic (D-PV) systems onto distribution systems. Some of these upgrades and costs may also apply to the integration of other distributed energy resources (DER). Which components are required, and how many of each, is system-specific and should be determined by analyzing the effects of distributed PV at a given penetration level on the circuit of interest, in combination with engineering assessments of the efficacy of different solutions to increase the ability of the circuit to host additional PV as desired. The current state of the distribution system should always be considered in these types of analysis. The data in this database were collected from a variety of utilities, PV developers, technology vendors, and published research reports. Where possible, we have included information on the source of each data point and relevant notes. In some cases where the data provided are sensitive or proprietary, we were not able to specify the source, but provide other information that may be useful to the user (e.g. year, location where equipment was installed). NREL has carefully reviewed these sources prior to inclusion in this database. Additional information about the database, data sources, and assumptions is included in the "Unit_cost_database_guide.doc" file included in this submission. This guide provides important information on what costs are included in each entry. Please refer to this guide before using the unit cost database for any purpose.

  9. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed

    O'Neill, M A; Hilgetag, C C

    2001-08-29

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.
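The CANTOR approach, rearranging database objects under a user-defined cost function with stochastic optimization so that incomplete or inconsistent data do not trap the search in poor local minima, can be sketched with simulated annealing. Everything below (names, cooling schedule, move set) is illustrative, not CANTOR's actual implementation:

```python
import math, random

# CANTOR-style structural optimization sketched as simulated annealing:
# repeatedly propose swapping two objects in the arrangement, accept
# improvements always and worsenings with a temperature-dependent
# probability, and remember the best arrangement seen. Illustrative only.
def anneal(arrangement, cost, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    best = current = list(arrangement)
    best_c = cur_c = cost(current)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9       # linear cooling schedule
        i, j = rng.randrange(len(current)), rng.randrange(len(current))
        cand = list(current)
        cand[i], cand[j] = cand[j], cand[i]        # propose a swap
        c = cost(cand)
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            current, cur_c = cand, c               # accept the move
        if cur_c < best_c:
            best, best_c = list(current), cur_c
    return best, best_c
```

The cost function is the user-supplied part, e.g. a measure of how badly the current arrangement violates known anatomical-connectivity constraints; the annealer itself is generic.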

  10. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed Central

    O'Neill, M A; Hilgetag, C C

    2001-01-01

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement. PMID:11545702

  11. Clinical and pathological features of kidney transplant patients with concurrent polyomavirus nephropathy and rejection-associated endarteritis

    PubMed Central

    McGregor, Stephanie M; Chon, W James; Kim, Lisa; Chang, Anthony; Meehan, Shane M

    2015-01-01

    AIM: To describe the clinicopathologic features of concurrent polyomavirus nephropathy (PVN) and endarteritis due to rejection in renal allografts. METHODS: We searched our electronic records database for cases with transplant kidney biopsies demonstrating features of both PVN and acute rejection (AR). PVN was defined by the presence of typical viral cytopathic effect on routine sections and positive polyomavirus SV40 large-T antigen immunohistochemistry. AR was identified by endarteritis (v1 by Banff criteria). All cases were subjected to chart review in order to determine clinical presentation, treatment course and outcomes. Outcomes were recorded with a length of follow-up of at least one year or time to nephrectomy. RESULTS: Of 94 renal allograft recipients who developed PVN over an 11-year period at our institution, we identified 7 (7.4%) with viral cytopathic changes, SV40 large T antigen staining, and endarteritis in the same biopsy specimen, indicative of concurrent PVN and AR. Four arose after reduction of immunosuppression (IS) (for treatment of PVN in 3 and tuberculosis in 1), and 3 patients had no decrease of IS before developing simultaneous concurrent disease. Treatment consisted of reduced oral IS and leflunomide for PVN, and anti-rejection therapy. Three of 4 patients who developed endarteritis in the setting of reduced IS lost their grafts to rejection. All 3 patients with simultaneous PVN and endarteritis cleared viremia and were stable at 1 year of follow up. Patients with endarteritis and PVN arising in a background of reduced IS had more severe rejection and poorer outcome. CONCLUSION: Concurrent PVN and endarteritis may be more frequent than is currently appreciated and may occur with or without prior reduction of IS. PMID:26722657

  12. The role of aging in intra-item and item-context binding processes in visual working memory.

    PubMed

    Peterson, Dwight J; Naveh-Benjamin, Moshe

    2016-11-01

    Aging is accompanied by declines in both working memory and long-term episodic memory processes. Specifically, important age-related memory deficits are characterized by performance impairments exhibited by older relative to younger adults when binding distinct components into a single integrated representation, despite relatively intact memory for the individual components. While robust patterns of age-related binding deficits are prevalent in studies of long-term episodic memory, observations of such deficits in visual working memory (VWM) may depend on the specific type of binding process being examined. For instance, a number of studies indicate that processes involved in item-context binding of items to occupied spatial locations within visual working memory are impaired in older relative to younger adults. Other findings suggest that intra-item binding of visual surface features (e.g., color, shape), compared to memory for single features, within visual working memory, remains relatively intact. Here, we examined each of these binding processes in younger and older adults under both optimal conditions (i.e., no concurrent load) and concurrent load (e.g., articulatory suppression, backward counting). Experiment 1 revealed an age-related intra-item binding deficit for surface features under no concurrent load but not when articulatory suppression was required. In contrast, in Experiments 2 and 3, we observed an age-related item-context binding deficit regardless of the level of concurrent load. These findings reveal that the influence of concurrent load on distinct binding processes within VWM, potentially those supported by rehearsal, is an important factor mediating the presence or absence of age-related binding deficits within VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Semi-automatic feedback using concurrence between mixture vectors for general databases

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed-Chaker; Richard, Noel; Colot, Olivier; Fernandez-Maloigne, Christine

    2001-12-01

This paper describes how a query system can exploit basic knowledge by employing semi-automatic relevance feedback to refine queries and runtimes. For general databases, it is often useless to invoke complex attributes, because we do not have sufficient information about the images in the database. Moreover, these images can be topologically very different from one another, and an attribute that is powerful for one database category may be very weak for the other categories. The idea is to use very simple features, such as color histograms, correlograms and Color Coherence Vectors (CCV), to fill the signature vector. Then a number of mixture vectors are prepared, depending on the number of highly distinctive categories in the database, where a mixture vector contains the weight of each attribute that will be used to compute a similarity distance. We post a query in the database using each of the previously defined mixture vectors in turn. We then retain the first N images for each vector in order to make a mapping using the following information: is image I present in the results of several mixture vectors? What is its rank in the results? This information allows us to switch the system to either unsupervised relevance feedback or the user's feedback (supervised feedback).
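The concurrence idea described above can be sketched as rank fusion: run the query once per mixture vector, keep the top-N results of each, and score images by how often and how highly they recur across the result lists. The weighting and scoring below are an illustrative choice, not the paper's exact formulation:

```python
# Rank fusion across mixture vectors: each mixture vector weights the
# simple attributes differently; images recurring highly across several
# weightings are promoted. Scoring scheme is illustrative only.
def weighted_distance(query, image, weights):
    # Weighted L1 distance between feature vectors under one mixture vector.
    return sum(w * abs(q - v) for w, q, v in zip(weights, query, image))

def concurrence_ranking(query, db, mixtures, top_n=3):
    """db: dict name -> feature vector; mixtures: list of weight vectors."""
    votes = {}
    for weights in mixtures:
        ranked = sorted(db, key=lambda name: weighted_distance(query, db[name], weights))
        for rank, name in enumerate(ranked[:top_n]):
            votes[name] = votes.get(name, 0.0) + (top_n - rank)  # higher rank, more weight
    return sorted(votes, key=votes.get, reverse=True)
```

Images absent from every top-N list simply receive no votes, so the fused ranking only contains candidates that at least one weighting considered relevant.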

  14. Quantum Trajectories and Their Statistics for Remotely Entangled Quantum Bits

    NASA Astrophysics Data System (ADS)

    Chantasri, Areeya; Kimchi-Schwartz, Mollie E.; Roch, Nicolas; Siddiqi, Irfan; Jordan, Andrew N.

    2016-10-01

    We experimentally and theoretically investigate the quantum trajectories of jointly monitored transmon qubits embedded in spatially separated microwave cavities. Using nearly quantum-noise-limited superconducting amplifiers and an optimized setup to reduce signal loss between cavities, we can efficiently track measurement-induced entanglement generation as a continuous process for single realizations of the experiment. The quantum trajectories of transmon qubits naturally split into low and high entanglement classes. The distribution of concurrence is found at any given time, and we explore the dynamics of entanglement creation in the state space. The distribution exhibits a sharp cutoff in the high concurrence limit, defining a maximal concurrence boundary. The most-likely paths of the qubits' trajectories are also investigated, resulting in three probable paths, gradually projecting the system to two even subspaces and an odd subspace, conforming to a "half-parity" measurement. We also investigate the most-likely time for the individual trajectories to reach their most entangled state, and we find that there are two solutions for the local maximum, corresponding to the low and high entanglement routes. The theoretical predictions show excellent agreement with the experimental entangled-qubit trajectory data.
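For a pure two-qubit state a|00> + b|01> + c|10> + d|11>, the concurrence used here to quantify entanglement reduces to Wootters' formula C = 2|ad - bc|, so Bell states reach the maximal boundary C = 1 and product states give C = 0. A short numerical check:

```python
import math

# Concurrence of a pure two-qubit state a|00> + b|01> + c|10> + d|11>:
# C = 2|ad - bc| (Wootters). The amplitudes are normalized first so any
# unnormalized input is accepted.
def concurrence(a, b, c, d):
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2 + abs(c) ** 2 + abs(d) ** 2)
    a, b, c, d = (x / norm for x in (a, b, c, d))
    return 2.0 * abs(a * d - b * c)
```

The Bell state (|00> + |11>)/sqrt(2) gives C = 1, while a product state such as (|0> + |1>)(|0> + |1>)/2 gives C = 0, matching the low/high entanglement classes the trajectories split into.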

  15. Resident database interfaces to the DAVID system, a heterogeneous distributed database management system

    NASA Technical Reports Server (NTRS)

    Moroh, Marsha

    1988-01-01

    A methodology for building interfaces of resident database management systems to a heterogeneous distributed database management system under development at NASA, the DAVID system, was developed. The feasibility of that methodology was demonstrated by construction of the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented. The work performed and the results are summarized.

  16. Multilevel Atomicity - A New Correctness Criterion for Database Concurrency Control.

    DTIC Science & Technology

    1982-09-01

Research Office Contract #DAAG29-79-C-0155, Office of Naval Research Contract #N00014-79-C-0873, and Advanced Research Projects Agency of the Department...steps of V. Since the transactions need not be straight-line programs, but can branch in complicated ways, I am forced to describe separately the places...not know whether these specializations provide efficient implementations. This question is a topic for future study. The new programming language

  17. A Programming Language Supporting First-Class Parallel Environments

    DTIC Science & Technology

    1989-01-01

Symmetric Lisp later in the thesis. 1.5.1.2 Procedures as Data - Comparison with Lisp Classical Lisp[48, 54] has been altered and extended in many ways... management problems. A resource manager controls access to one or more resources shared by concurrently executing processes. Database transaction systems...symmetric languages are related to languages based on more classical models? 3. What are the kinds of uniformity that the symmetric model supports and what

  18. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power from the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and to enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize its main kernel scheduling disciplines. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
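
    The stream mechanism can be illustrated with a toy scheduler. The following Python sketch is only a simplified illustration of the scheduling constraints named above (in-order execution within a stream, overlap across streams up to a resource limit); the `max_concurrent` cap and the duration units are assumptions, not the authors' model.

```python
import heapq

def simulate_streams(kernels, max_concurrent):
    """Toy CUDA-stream scheduler.

    kernels: list of (stream_id, duration) in issue order.
    Kernels in the same stream run back-to-back; kernels from
    different streams may overlap, capped by max_concurrent (a crude
    stand-in for GPU resource occupancy).  Returns the makespan."""
    stream_ready = {}   # stream_id -> finish time of its last kernel
    running = []        # min-heap of in-flight kernel finish times
    clock = 0.0
    for stream_id, duration in kernels:
        # If the device is "full", wait for the earliest kernel to end.
        while len(running) >= max_concurrent:
            clock = max(clock, heapq.heappop(running))
        start = max(clock, stream_ready.get(stream_id, 0.0))
        finish = start + duration
        stream_ready[stream_id] = finish
        heapq.heappush(running, finish)
    return max(stream_ready.values(), default=0.0)
```

    For instance, two one-second kernels overlap fully when issued to two different streams on a two-slot device, but serialize when placed in the same stream.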

  19. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel, and research theme, and diving information such as dive number, name of submersible, and position of diving point. They are submitted by the chief scientists of research cruises in Microsoft Excel® spreadsheet format and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of each cruise. Furthermore, these metadata are distributed with observational data, images, and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, caused by manual metadata entry into individual websites by administrators. The other is that data types and representations of metadata differ between websites. To solve these problems, we have developed a cruise metadata organizer (CMO), which allows cruise metadata to be propagated from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility with respect to changes in metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software.
Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then daily differential uptake of metadata from the XML database to databases in several distribution websites is automatically processed using a convertor defined by the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.
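
    The daily differential uptake described above can be sketched in a few lines. This Python fragment is a hypothetical illustration, not CMO code; the record layout and the `updated` high-water-mark field are assumptions.

```python
def differential_uptake(source, target, last_sync):
    """Copy only the cruise-metadata records whose 'updated' stamp is
    newer than the previous sync, returning the new high-water mark
    to use as last_sync on the next daily run."""
    high_water = last_sync
    for cruise_id, record in source.items():
        if record["updated"] > last_sync:
            target[cruise_id] = dict(record)   # push changed record
            high_water = max(high_water, record["updated"])
    return high_water

# Hypothetical example: only the record changed after the last sync
# (time 3) is propagated to a distribution website's database.
management_db = {"KR11-01": {"updated": 5, "vessel": "Kairei"},
                 "YK10-02": {"updated": 2, "vessel": "Yokosuka"}}
site_db = {}
mark = differential_uptake(management_db, site_db, last_sync=3)
```

    Each distribution website then needs only the records that actually changed since its previous pull, which is the point of the EAI-driven differential transfer.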

  20. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).

  1. Bridging the Gap between the Data Base and User in a Distributed Environment.

    ERIC Educational Resources Information Center

    Howard, Richard D.; And Others

    1989-01-01

    The distribution of databases physically separates users from the administrators who perform database administration. By drawing on the work of social scientists in reliability and validity, a set of concepts and a list of questions to ensure data quality were developed. (Author/MLW)

  2. Nonclassical features of a trimodal excited coherent Greenberger-Horne-Zeilinger (GHZ)-type state

    NASA Astrophysics Data System (ADS)

    Merlin, J.; Ahmed, A. B. M.; Mohammed, S. Naina

    2017-06-01

    We examine the influence of photon excitation on each mode of a Glauber coherent GHZ-type tripartite state. Concurrence is adopted as the entanglement measure for bipartite entangled states. The pairwise concurrence is calculated and used as a quantifier of intermodal entanglement. The entanglement distribution among the three modes is investigated using the tangle as a measure, and the residual entanglement is also calculated. The effect of the photon-addition process on quadrature squeezing is investigated, and the higher-order squeezing capacity of the photon-addition process is also shown.

  3. A Web-based open-source database for the distribution of hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Ferwerda, J. G.; Jones, S. D.; Du, Pei-Jun

    2006-10-01

    With the coming of age of field spectroscopy as a non-destructive means to collect information on the physiology of vegetation, there is a need for the storage of signatures and, more importantly, their metadata. Without the proper organisation of metadata, the signatures themselves are of limited use. In order to facilitate re-distribution of data, a database for the storage and distribution of hyperspectral signatures and their metadata was designed. The database was built using open-source software and can be used by the hyperspectral community to share their data. Data are uploaded through a simple web-based interface. The database recognizes major file formats by ASD, GER, and International Spectronics. The database source code is available for download through the hyperspectral.info web domain, and we invite suggestions for additions and modifications to the database through the online forums on the same website.

  4. Analysis of quantitative data obtained from toxicity studies showing non-normal distribution.

    PubMed

    Kobayashi, Katsumi

    2005-05-01

    The data obtained from toxicity studies are examined for homogeneity of variance but usually not for normal distribution. In this study I examined the measured items of a carcinogenicity/chronic toxicity study in rats for both homogeneity of variance and normal distribution. Many hematology and biochemistry items showed non-normal distributions. For testing the normal distribution of data obtained from toxicity studies, the data of the concurrent control group may be examined, and for data that show a non-normal distribution, robust non-parametric tests may be applied.

  5. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    DTIC Science & Technology

    1983-10-01

    Multiversion Data 2-18 2.7.1 Multiversion Timestamping 2-20 2.7.2 Multiversion Locking 2-20 2.8 Combining the Techniques 2-22 3. Database Recovery Algorithms...See [THEM79, GIFF79] for details. 2.7 Multiversion Data Let us return to a database system model where each logical data item is stored at one DM...In a multiversion database each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each
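
    The multiversion idea in this excerpt (each write produces a new version rather than overwriting) can be sketched as follows; this is a generic illustration of multiversion timestamping, not the report's algorithms.

```python
import bisect

class MultiversionStore:
    """Each Write wi[x] appends a timestamped version of x, so the
    value of x is a set of versions; a read at timestamp ts sees the
    latest version written at or before ts."""

    def __init__(self):
        self.versions = {}   # item -> sorted list of (timestamp, value)

    def write(self, ts, item, value):
        bisect.insort(self.versions.setdefault(item, []), (ts, value))

    def read(self, ts, item):
        stamps = [t for t, _ in self.versions.get(item, [])]
        i = bisect.bisect_right(stamps, ts) - 1   # rightmost stamp <= ts
        return self.versions[item][i][1] if i >= 0 else None

db = MultiversionStore()
db.write(1, "x", "a")   # w1[x] creates version x1
db.write(3, "x", "b")   # w3[x] creates version x3
```

    A reader at timestamp 2 still sees version x1, so old readers are never blocked or invalidated by later writes.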

  6. Concurrent aerobic plus resistance exercise versus aerobic exercise alone to improve health outcomes in paediatric obesity: a systematic review and meta-analysis.

    PubMed

    García-Hermoso, Antonio; Ramírez-Vélez, Robinson; Ramírez-Campillo, Rodrigo; Peterson, Mark D; Martínez-Vizcaíno, Vicente

    2018-02-01

    To determine if the combination of aerobic and resistance exercise is superior to aerobic exercise alone for the health of obese children and adolescents. Systematic review with meta-analysis. Computerised search of 3 databases (MEDLINE, EMBASE, and Cochrane Controlled Trials Registry). Studies that compared the effect of supervised concurrent exercise versus aerobic exercise interventions, with anthropometric and metabolic outcomes in paediatric obesity (6-18 years old). The mean differences (MD) of the parameters from preintervention to postintervention between groups were pooled using a random-effects model. 12 trials with 555 youths were included in the meta-analysis. Compared with aerobic exercise alone, concurrent exercise resulted in greater reductions in body mass (MD=-2.28 kg), fat mass (MD=-3.49% and MD=-4.34 kg), and low-density lipoprotein cholesterol (MD=-10.20 mg/dL), as well as greater increases in lean body mass (MD=2.20 kg) and adiponectin level (MD=2.59 μg/mL). Differences were larger for longer-term programmes (>24 weeks). Concurrent aerobic plus resistance exercise improves body composition, metabolic profiles, and inflammatory state in the obese paediatric population. CRD42016039807.

  7. Concurrent renal amyloidosis and thymoma resulting in a fatal ventricular thrombus in a dog

    PubMed Central

    Loewen, Jennifer M.; Cianciolo, Rachel E.; Zhang, Liwen; Yaeger, Michael; Ward, Jessica L.; Smith, Jodi D.

    2018-01-01

    Thymoma‐associated nephropathies have been reported in people but not in dogs. In this report, we describe a dog with thymoma and concurrent renal amyloidosis. A 7‐year‐old castrated male Weimaraner was presented for progressive anorexia, lethargy, and tachypnea. The dog was diagnosed with azotemia, marked proteinuria, and a thymoma that was surgically removed. Postoperatively, the dog developed a large left ventricular thrombus and was euthanized. Necropsy confirmed the presence of a left ventricular thrombus and histopathology revealed renal amyloidosis. We speculate that the renal amyloidosis occurred secondary to the thymoma, with amyloidosis in turn leading to nephrotic syndrome, hypercoagulability, and ventricular thrombosis. This case illustrates the potential for thymoma‐associated nephropathies to occur in dogs and that dogs suspected to have thymoma should have a urinalysis and urine protein creatinine ratio performed as part of the pre‐surgical database. PMID:29485186

  8. The Design and Implementation of a Relational to Network Query Translator for a Distributed Database Management System.

    DTIC Science & Technology

    1985-12-01

    RELATIONAL TO NETWORK QUERY TRANSLATOR FOR A DISTRIBUTED DATABASE MANAGEMENT SYSTEM - THESIS - Kevin H. Mahoney, Captain, USAF, AFIT/GCS/ENG/85D-7...NETWORK QUERY TRANSLATOR FOR A DISTRIBUTED DATABASE MANAGEMENT SYSTEM - THESIS Presented to the Faculty of the School of Engineering of the Air Force...Institute of Technology, Air University, In Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Systems - Kevin H. Mahoney

  9. Distributed Episodic Exploratory Planning (DEEP)

    DTIC Science & Technology

    2008-12-01

    API). For DEEP, Hibernate offered the following advantages: • Abstracts SQL by utilizing HQL so any database with a Java Database Connectivity... Hibernate SQL ICCRTS International Command and Control Research and Technology Symposium JDB Java Distributed Blackboard JDBC Java Database Connectivity...selected because of its opportunistic reasoning capabilities and implemented in Java for platform independence. Java was chosen for ease of

  10. Monte Carlo simulations of product distributions and contained metal estimates

    USGS Publications Warehouse

    Gettings, Mark E.

    2013-01-01

    Estimation of the product distribution of two factors was simulated by conventional Monte Carlo techniques using factor distributions that were independent (uncorrelated). Several simulations using uniform factor distributions show that the product distribution has a central peak approximately centered at the product of the medians of the factor distributions. Factor distributions that are peaked, such as the Gaussian (normal), produce an even more peaked product distribution. Piecewise analytic solutions can be obtained for independent factor distributions and yield insight into the properties of the product distribution. As an example, porphyry copper grades and tonnages are now available in at least one public database, and their distributions were analyzed. Although both grade and tonnage can be approximated with lognormal distributions, they are not exactly fit by them. The grade shows some nonlinear correlation with tonnage for the published database. Sampling by deposit from available databases of grade, tonnage, and geological details specifies both grade and tonnage for that deposit. Any correlation between grade and tonnage is then preserved, and the observed distribution of grades and tonnages can be used with no assumption of distribution form.
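
    The conventional Monte Carlo procedure described here is easy to reproduce. The sketch below uses two independent uniform factors as a stand-in for grade and tonnage; the ranges and sample size are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def product_samples(sample_a, sample_b, n=100_000, seed=1):
    """Draw n independent pairs from two factor distributions and
    return the products (a Monte Carlo estimate of the product
    distribution)."""
    rng = random.Random(seed)
    return [sample_a(rng) * sample_b(rng) for _ in range(n)]

# Two independent uniform factors on [1, 3]: both medians are 2, so
# the product distribution peaks roughly near 2 * 2 = 4.
products = product_samples(lambda r: r.uniform(1, 3),
                           lambda r: r.uniform(1, 3))
median_product = statistics.median(products)
```

    Swapping the uniform samplers for lognormal ones (as with observed grade-tonnage data) sharpens the central peak, as the abstract notes for peaked factor distributions.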

  11. The Development of Design Guides for the Implementation of Multiprocessing Element Systems.

    DTIC Science & Technology

    1985-09-01

    Conclusions ... 30; 4. Implementation of CHILL Signals Communication Primitives on a Distributed System ... 31; 4.1 Architecture of a Distributed System ... 32; 4.2 Algorithm for the SEND Signal Operation ... 35; 4.3 Algorithm for the...elements operating concurrently. Such multi-processing-element systems are clearly going to be complex, and it is important that the designers of such

  12. Concurrent MR-NIR Imaging for Breast Cancer Diagnosis

    DTIC Science & Technology

    2007-06-01

    DISTRIBUTION / AVAILABILITY STATEMENT Approved for Public Release; Distribution Unlimited 13. SUPPLEMENTARY NOTES – Original contains colored plates ...stand-alone NIR system. This information includes hemoglobin, water and lipid concentration, optical scatter power and oxygen saturation images, and ICG...absorption coefficients of each voxel by a system of linear equations. The shape of the breast was approximated as a cylinder and the Kirchhoff

  13. Comparison of Lidar Backscatter with Particle Distribution and GOES-7 Data in Hurricane Juliette

    NASA Technical Reports Server (NTRS)

    Jarzembski, Maurice A.; Srivastava, Vandana; McCaul, Eugene W., Jr.; Jedlovec, Gary J.; Atkinson, Robert J.; Pueschel, Rudolf F.; Cutten, Dean R.

    1997-01-01

    Measurements of calibrated backscatter, using two continuous-wave Doppler lidars operating at wavelengths of 9.1 and 10.6 micrometers, were obtained along with cloud particle size distributions in Hurricane Juliette on 21 September 1995 at an altitude of approximately 11.7 km. Agreement between the backscatter from the two lidars and with the cloud particle size distribution is excellent. Features in backscatter and particle number density compare well with concurrent GOES-7 infrared images.

  14. How to ensure sustainable interoperability in heterogeneous distributed systems through architectural approach.

    PubMed

    Pape-Haugaard, Louise; Frank, Lars

    2011-01-01

    A major obstacle to ensuring ubiquitous information is the use of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible architectural framework for obtaining sustainability across disparate systems, i.e., heterogeneous databases, concluding with a discussion. It is shown that, through a method of using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for sustainable interoperability.
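
    The relaxed-ACID idea can be illustrated with a saga-style sketch: global atomicity is traded for compensatable steps, so the federation converges back to a consistent state after a failure. The minimal Python fragment below is an assumption-laden illustration, not the authors' framework; the two dictionary "databases" and the step functions are hypothetical.

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order against disparate
    systems; on failure, run the compensations of the completed steps
    in reverse, restoring consistency without a global lock."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        return False
    return True

hospital = {"stock": 5}      # hypothetical local system
pharmacy = {"stock": 0}      # hypothetical remote system

def unavailable():
    raise RuntimeError("remote system unavailable")

ok = run_saga([
    (lambda: hospital.update(stock=hospital["stock"] - 1),
     lambda: hospital.update(stock=hospital["stock"] + 1)),
    (unavailable, lambda: None),   # second step fails
])
```

    The failed transfer leaves both stores exactly as they started: consistency is restored eventually by compensation rather than held continuously by a distributed transaction.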

  15. Data Parallel Bin-Based Indexing for Answering Queries on Multi-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke; Wu, Kesheng; Bethel, E. Wes

    2009-06-02

    The multi-core trend in CPUs and general purpose graphics processing units (GPUs) offers new opportunities for the database community. The increase of cores at exponential rates is likely to affect virtually every server and client in the coming decade, and presents database management systems with a huge, compelling disruption that will radically change how processing is done. This paper presents a new parallel indexing data structure for answering queries that takes full advantage of the increasing thread-level parallelism emerging in multi-core architectures. In our approach, our Data Parallel Bin-based Index Strategy (DP-BIS) first bins the base data, and then partitions and stores the values in each bin as a separate, bin-based data cluster. In answering a query, the procedures for examining the bin numbers and the bin-based data clusters offer the maximum possible level of concurrency; each record is evaluated by a single thread and all threads are processed simultaneously in parallel. We implement and demonstrate the effectiveness of DP-BIS on two multi-core architectures: a multi-core CPU and a GPU. The concurrency afforded by DP-BIS allows us to fully utilize the thread-level parallelism provided by each architecture--for example, our GPU-based DP-BIS implementation simultaneously evaluates over 12,000 records with an equivalent number of concurrently executing threads. In comparing DP-BIS's performance across these architectures, we show that the GPU-based DP-BIS implementation requires significantly less computation time to answer a query than the CPU-based implementation. We also demonstrate in our analysis that DP-BIS provides better overall performance than the commonly utilized CPU and GPU-based projection index. Finally, due to data encoding, we show that DP-BIS accesses significantly smaller amounts of data than index strategies that operate solely on a column's base data; this smaller data footprint is critical for parallel processors that possess limited memory resources (e.g., GPUs).
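
    The bin-based strategy can be sketched compactly: interior bins of a range query qualify wholesale, and only the two boundary bins need per-record candidate checks (the work DP-BIS hands to one thread per record). The Python below is a sequential toy illustration of that idea, not the DP-BIS implementation; the sample values and bin edges are made up.

```python
from bisect import bisect_right

class BinIndex:
    """Bin the base data and store each bin's values as a separate
    cluster; bin b holds values v with edges[b-1] <= v < edges[b]."""

    def __init__(self, values, bin_edges):
        self.edges = sorted(bin_edges)
        self.clusters = [[] for _ in range(len(self.edges) + 1)]
        for rowid, v in enumerate(values):
            self.clusters[bisect_right(self.edges, v)].append((rowid, v))

    def range_query(self, lo, hi):
        first = bisect_right(self.edges, lo)
        last = bisect_right(self.edges, hi)
        hits = []
        for b in range(first, last + 1):
            if first < b < last:
                # interior bin: every record qualifies, no value check
                hits.extend(rowid for rowid, _ in self.clusters[b])
            else:
                # boundary bin: candidate-check each record's value
                hits.extend(rowid for rowid, v in self.clusters[b]
                            if lo <= v <= hi)
        return sorted(hits)

idx = BinIndex([5, 12, 15, 20, 25, 3], bin_edges=[10, 20])
```

    In the parallel setting, each candidate check is independent, so every record in a boundary bin can be evaluated by its own thread.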

  16. Database System Design and Implementation for Marine Air-Traffic-Controller Training

    DTIC Science & Technology

    2017-06-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA - THESIS - Approved for public release. Distribution is unlimited. DATABASE SYSTEM DESIGN AND...thesis 4. TITLE AND SUBTITLE: DATABASE SYSTEM DESIGN AND IMPLEMENTATION FOR MARINE AIR-TRAFFIC-CONTROLLER TRAINING 5. FUNDING NUMBERS 6. AUTHOR(S...12b. DISTRIBUTION CODE 13. ABSTRACT (maximum 200 words) This project focused on the design, development, and implementation of a centralized

  17. Interactive and Versatile Navigation of Structural Databases.

    PubMed

    Korb, Oliver; Kuhn, Bernd; Hert, Jérôme; Taylor, Neil; Cole, Jason; Groom, Colin; Stahl, Martin

    2016-05-12

    We present CSD-CrossMiner, a novel tool for pharmacophore-based searches in crystal structure databases. Intuitive pharmacophore queries describing, among others, protein-ligand interaction patterns, ligand scaffolds, or protein environments can be built and modified interactively. Matching crystal structures are overlaid onto the query and visualized as soon as they are available, enabling the researcher to quickly modify a hypothesis on the fly. We exemplify the utility of the approach by showing applications relevant to real-world drug discovery projects, including the identification of novel fragments for a specific protein environment or scaffold hopping. The ability to concurrently search protein-ligand binding sites extracted from the Protein Data Bank (PDB) and small organic molecules from the Cambridge Structural Database (CSD) using the same pharmacophore query further emphasizes the flexibility of CSD-CrossMiner. We believe that CSD-CrossMiner closes an important gap in mining structural data and will allow users to extract more value from the growing number of available crystal structures.

  18. Time-varying Concurrent Risk of Extreme Droughts and Heatwaves in California

    NASA Astrophysics Data System (ADS)

    Sarhadi, A.; Diffenbaugh, N. S.; Ausin, M. C.

    2016-12-01

    Anthropogenic global warming has changed the nature and the risk of extreme climate phenomena such as droughts and heatwaves. The concurrence of these climatic extremes may intensify their undesirable consequences for human health and their destructive effects on water resources. The present study assesses the risk of concurrent extreme droughts and heatwaves under the dynamic nonstationary conditions arising from climate change in California. To do so, a generalized, fully Bayesian, time-varying multivariate risk framework is proposed that evolves through time in a dynamic, human-induced environment. In this methodology, an extreme Bayesian dynamic (Gumbel) copula is developed to model the time-varying dependence structure between the two climate extremes. The time-varying extreme marginals are first modeled using a Generalized Extreme Value (GEV) distribution. Bayesian Markov Chain Monte Carlo (MCMC) inference is integrated to estimate the parameters of the nonstationary marginals and the copula using a Gibbs sampling method. The modeled marginals and copula are then used to develop a fully Bayesian, time-varying joint return period concept for the estimation of concurrent risk. Here we argue that climate change has increased the chance of concurrent droughts and heatwaves over recent decades in California. We also demonstrate that a time-varying multivariate perspective should be incorporated to assess the realistic concurrent risk of these extremes for water resources planning and management in this area under a changing climate. The proposed methodology can be applied to other stochastic compound climate extremes that are under the influence of climate change.
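
    The role of a Gumbel copula in a concurrent ("AND") risk estimate can be made concrete with a stationary miniature. The exceedance probabilities and dependence parameter below are illustrative assumptions; the study's actual marginals are time-varying GEV fits with Bayesian-estimated parameters.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v) = P(U <= u, V <= v); theta >= 1, with
    theta = 1 reducing to independence."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period(p_drought, p_heatwave, theta):
    """Return period (years) of a year in which BOTH extremes are
    exceeded, via P(X > x, Y > y) = 1 - u - v + C(u, v)."""
    u, v = 1.0 - p_drought, 1.0 - p_heatwave
    p_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return 1.0 / p_both
```

    With two 10-year extremes, independence (theta = 1) gives a 100-year joint event, while positive dependence (theta > 1) makes the concurrent event markedly more frequent; a dependence structure that strengthens over time therefore raises concurrent risk even if the marginals are unchanged.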

  19. Noise-Assisted Concurrent Multipath Traffic Distribution in Ad Hoc Networks

    PubMed Central

    Murata, Masayuki

    2013-01-01

    The concept of biologically inspired networking has been introduced to tackle unpredictable and unstable situations in computer networks, especially in wireless ad hoc networks, where network conditions change continuously, creating a need for robust and adaptive control methods. Unfortunately, existing methods often rely heavily on detailed knowledge of each network component and on preconfigured, that is, fine-tuned, parameters. In this paper, we utilize a new concept, called attractor perturbation (AP), which enables controlling network performance using only end-to-end information. Based on AP, we propose a concurrent multipath traffic distribution method, which aims at lowering the average end-to-end delay by adjusting only the transmission rate on each path. We demonstrate through simulations that, by utilizing the attractor perturbation relationship, the proposed method achieves a lower average end-to-end delay than other methods which do not take fluctuations into account. PMID:24319375

  20. Application of the actor model to large scale NDE data analysis

    NASA Astrophysics Data System (ADS)

    Coughlin, Chris

    2018-03-01

    The Actor model of concurrent computation discretizes a problem into a series of independent units or actors that interact only through the exchange of messages. Without direct coupling between individual components, an Actor-based system is inherently concurrent and fault-tolerant. These traits lend themselves to so-called "Big Data" applications in which the volume of data to analyze requires a distributed multi-system design. For a practical demonstration of the Actor computational model, a system was developed to assist with the automated analysis of Nondestructive Evaluation (NDE) datasets using the open source Myriad Data Reduction Framework. A machine learning model trained to detect damage in two-dimensional slices of C-Scan data was deployed in a streaming data processing pipeline. To demonstrate the flexibility of the Actor model, the pipeline was deployed on a local system and re-deployed as a distributed system without recompiling, reconfiguring, or restarting the running application.
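
    A minimal actor can be written in a few lines. The sketch below is a generic Python illustration of the model's message-passing discipline (not Myriad code); the damage-detector behavior and its 0.8 threshold are hypothetical.

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox drained by one worker thread,
    so state is never shared and interaction is only via messages."""

    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self._behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)          # asynchronous; sender never waits

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:            # poison pill stops the actor
                return
            self._behavior(msg)

# Hypothetical damage detector for 1-D scan slices: flag a slice when
# any amplitude exceeds a threshold, reporting through a result queue.
results = queue.Queue()
detector = Actor(lambda s: results.put("damaged" if max(s) > 0.8 else "ok"))
detector.send([0.1, 0.9, 0.2])
detector.send([0.2, 0.3, 0.1])
detector.send(None)
```

    Because actors touch only their own state and communicate only by messages, the same pipeline can be spread across machines by replacing the local mailbox with a network transport, which is the redeployment property the abstract describes.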

  1. A Characterization of Student Reflections in an Introductory Pharmacy Practice Experience Discussion Course.

    PubMed

    Dinkins, Melissa M; Haltom, Wesley R

    2018-04-01

    Objective. To characterize weekly student reflections in an introductory pharmacy practice experience (IPPE) discussion course meeting concurrently with IPPE rotations in institutional pharmacy. Methods. A qualitative analysis was conducted to identify themes within weekly reflective statements submitted by second year pharmacy students (P2) enrolled in an IPPE rotation and concurrent discussion course. Weekly reflections from the 2015-2016 offering of the course were reviewed by investigators to identify common themes via an iterative process. Subsequently, investigators coded each submission into one of the identified categories. Initial agreement between investigators was assessed using the Cohen kappa coefficient. Discrepancies between coding were resolved through discussion to reach consensus. Results. A total of 402 reflection assignments were reviewed from 85 P2 students enrolled in the IPPE course. Ten themes were identified, with the most common themes being interprofessional teamwork, pharmacist and technician roles and responsibilities, and policies and procedures. Substantial initial agreement between investigators was found, with the most discrepancies arising within the themes of medication distribution and pharmacy administration/organizational structure. Conclusion. Student reflections on IPPEs centered on 10 key topics, primarily related to distributive, legal, and regulatory functions of institutional pharmacy practice. Structuring an IPPE rotation longitudinally in an academic term, with a concurrent discussion course, builds a framework for regular student reflection.

  2. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the network equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, exercising the MPI_Allgather collective for spike exchange after each interval across distributed memory systems. Increasing the number of processors yields concurrency and better performance, but it adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates an improved communication methodology to decrease spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication; the use of a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. The approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models.
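
    The recursive doubling pattern itself is easy to sketch. The Python below simulates only the communication schedule (which rank exchanges with which, and how many rounds it takes), not NEURON's MPI/RMA machinery; a power-of-two rank count is assumed for simplicity.

```python
def allgather_recursive_doubling(blocks):
    """Simulated recursive-doubling allgather: in round k, rank r
    swaps everything gathered so far with rank r XOR 2**k, so every
    rank holds all blocks after log2(n) rounds (a naive ring needs
    n - 1 rounds)."""
    n = len(blocks)
    assert n > 0 and n & (n - 1) == 0, "power-of-two rank count assumed"
    gathered = [{r: blocks[r]} for r in range(n)]   # rank -> known blocks
    step, rounds = 1, 0
    while step < n:
        snapshot = [dict(g) for g in gathered]      # simultaneous exchange
        for r in range(n):
            gathered[r].update(snapshot[r ^ step])  # receive from partner
        step <<= 1
        rounds += 1
    return gathered, rounds

# Four ranks, one spike block each: every rank has all blocks in 2 rounds.
gathered, rounds = allgather_recursive_doubling(["s0", "s1", "s2", "s3"])
```

    The payload doubles each round while the round count stays logarithmic, which is why the pattern pairs well with one-sided RMA transfers.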

  3. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the network equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, exercising the MPI_Allgather collective for spike exchange after each interval across distributed memory systems. Increasing the number of processors yields concurrency and better performance, but it adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates an improved communication methodology to decrease spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication; the use of a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. The approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models. PMID:27413363

  4. Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw

    2000-01-01

    Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other, so that they may be executed concurrently on separate processors, and, on the other hand, that some operations in a GA cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared to a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, raising efficiency to over 99 percent.

  5. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system, is studied. Six probabilistic models are developed, with expressions for the number of rollbacks under each model. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. From this it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.
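    To illustrate the kind of comparison this record describes, the sketch below pits a Monte-Carlo simulation of rollbacks in a partitioned database against a simple probabilistic model. All parameters (site count, replication degree, access pattern) are illustrative assumptions, not taken from the paper:

```python
import random
from math import comb

def rollback_rate(num_sites=10, part_size=6, copies=2, items=50,
                  accesses=4, trials=5000, seed=7):
    """Monte-Carlo estimate of the rollback probability: a transaction
    in the surviving partition rolls back if any item it accesses has
    no replica on a reachable site. All parameters are illustrative."""
    rng = random.Random(seed)
    rollbacks = 0
    for _ in range(trials):
        # place `copies` replicas of each item on distinct random sites
        placement = [rng.sample(range(num_sites), copies) for _ in range(items)]
        reachable = set(range(part_size))   # sites 0..part_size-1 stay connected
        txn = rng.sample(range(items), accesses)
        if any(not (set(placement[i]) & reachable) for i in txn):
            rollbacks += 1
    return rollbacks / trials

rate = rollback_rate()
# simple probabilistic model: an item is unreachable iff all of its
# replicas landed on the disconnected sites
p_unreachable = comb(10 - 6, 2) / comb(10, 2)
model = 1 - (1 - p_unreachable) ** 4
assert abs(rate - model) < 0.05
```

    Richer models in the spirit of the paper would condition on more system information (actual placement, commutativity of the transaction mix) instead of treating items as independent.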

  6. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.

  7. Multispectral analysis of ocean dumped materials

    NASA Technical Reports Server (NTRS)

    Johnson, R. W.

    1977-01-01

    Experiments conducted in the Atlantic coastal zone indicated that plumes resulting from ocean dumping of acid wastes and sewage sludge have unique spectral characteristics. Remotely sensed wide area synoptic coverage provided information on these pollution features that was not readily available from other sources. Aircraft remotely sensed photographic and multispectral scanner data were interpreted by two methods. First, qualitative analyses in which pollution features were located, mapped, and identified without concurrent sea truth and, second, quantitative analyses in which concurrently collected sea truth was used to calibrate the remotely sensed data and to determine quantitative distributions of one or more parameters in a plume.

  8. An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Ferraro, Robert D.

    1996-01-01

    A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.
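    A serial sketch of the recursive inertial bisection step on 2-D element centroids (an assumed simplification: the paper's partitioner runs this concurrently on distributed-memory machines and handles general meshes):

```python
import math

def inertial_bisect(centroids, depth):
    """Recursively bisect 2-D element centroids along the principal
    axis of inertia, yielding 2**depth balanced parts."""
    if depth == 0:
        return [centroids]
    n = len(centroids)
    cx = sum(p[0] for p in centroids) / n
    cy = sum(p[1] for p in centroids) / n
    # components of the 2x2 inertia (covariance) tensor about the centroid
    sxx = sum((p[0] - cx) ** 2 for p in centroids)
    syy = sum((p[1] - cy) ** 2 for p in centroids)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in centroids)
    # principal-axis angle of that tensor
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    axis = (math.cos(theta), math.sin(theta))
    # order by projection onto the principal axis and split at the median
    ordered = sorted(centroids,
                     key=lambda p: (p[0] - cx) * axis[0] + (p[1] - cy) * axis[1])
    half = n // 2
    return (inertial_bisect(ordered[:half], depth - 1)
            + inertial_bisect(ordered[half:], depth - 1))

# 16 centroids on an elongated strip split into 4 balanced parts
pts = [(float(i), 0.1 * (i % 3)) for i in range(16)]
parts = inertial_bisect(pts, 2)
assert [len(p) for p in parts] == [4, 4, 4, 4]
```

    In the paper, elements and nodes then migrate to match this centroid partition, using a data-request template for the unpredictable incoming messages.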

  9. Environmental Conditions Associated with Elevated Vibrio parahaemolyticus Concentrations in Great Bay Estuary, New Hampshire

    PubMed Central

    Urquhart, Erin A.; Jones, Stephen H.; Yu, Jong W.; Schuster, Brian M.; Marcinkiewicz, Ashley L.; Whistler, Cheryl A.; Cooper, Vaughn S.

    2016-01-01

    Reports from state health departments and the Centers for Disease Control and Prevention indicate that the annual number of reported human vibriosis cases in New England has increased in the past decade. Concurrently, there has been a shift in both the spatial distribution and seasonal detection of Vibrio spp. throughout the region, based on limited monitoring data. To determine environmental factors that may underlie these emerging conditions, this study focuses on a long-term database of Vibrio parahaemolyticus concentrations in oyster samples collected from the Great Bay Estuary, New Hampshire, over a period of seven consecutive years. Oyster samples from two distinct sites were analyzed for V. parahaemolyticus abundance, noting significant relationships with various biotic and abiotic factors measured during the same period of study. We developed a predictive modeling tool capable of estimating the likelihood of V. parahaemolyticus presence in coastal New Hampshire oysters. Results show that adding chlorophyll a concentration to an empirical model otherwise employing only temperature and salinity variables offers improved predictive capability for modeling the likelihood of V. parahaemolyticus in the Great Bay Estuary. PMID:27144925
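    The modeling approach can be illustrated with a logistic model on synthetic data. The predictors below stand in for temperature, salinity, and chlorophyll a, and every coefficient is fabricated for illustration; nothing here comes from the study's data:

```python
import math
import random

def fit_logistic(X, y, lr=0.05, epochs=500):
    """Plain batch-gradient-descent logistic regression: an intercept
    plus one weight per environmental predictor."""
    w = [0.0] * (len(X[0]) + 1)
    n = len(X)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += p - yi
            for j, xj in enumerate(xi, start=1):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

rng = random.Random(0)
# synthetic standardized predictors standing in for temperature,
# salinity and chlorophyll a; presence is made more likely when the
# water is warm and chlorophyll-rich (fabricated coefficients)
X = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(200)]
y = [1 if 1.5 * t + 0.5 * s + 1.0 * c + rng.gauss(0, 0.5) > 0 else 0
     for t, s, c in X]
w = fit_logistic(X, y)
assert w[1] > 0 and w[3] > 0   # temperature and chlorophyll weights positive
```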

  10. Development, Validation, and Fairness of a Biographical Data Questionnaire for the Air Traffic Control Specialist Occupation

    DTIC Science & Technology

    2012-12-01

    Development and validation. ABA, BQ, and criterion data were extracted from the AT-SAT concurrent, criterion-related validation database. Overall, 1,232...dependent on responses to the other instrument. A subset of 260 controllers in the AT-SAT dataset had full and complete ABA, BQ, and criterion data (i.e... SAT cases with ABA, BQ, and criterion data (n=260) was very small, making fairness analyses with the validation sample impractical. However, the

  11. The Implementation of a Multi-Backend Database System (MDBS). Part I. Software Engineering Strategies and Efforts Towards a Prototype MDBS.

    DTIC Science & Technology

    1983-06-01

    for DEC PDP-11 systems. MAINSAIL was developed and is marketed with a set of integrated tools for program development. The syntax of the language is...stack, and to test for stack-full and stack-empty conditions. This technique is useful in enforcing data integrity and in controlling concurrent...and market MAINSAIL. The language is distinguished by its portability. The same compiler and runtime system, both written in MAINSAIL, are the basis

  12. Modulation of task demands suggests that semantic processing interferes with the formation of episodic associations.

    PubMed

    Long, Nicole M; Kahana, Michael J

    2017-02-01

    Although episodic and semantic memory share overlapping neural mechanisms, it remains unclear how our pre-existing semantic associations modulate the formation of new, episodic associations. When freely recalling recently studied words, people rely on both episodic and semantic associations, shown through temporal and semantic clustering of responses. We asked whether orienting participants toward semantic associations interferes with or facilitates the formation of episodic associations. We compared electroencephalographic (EEG) activity recorded during the encoding of subsequently recalled words that were either temporally or semantically clustered. Participants studied words with or without a concurrent semantic orienting task. We identified a neural signature of successful episodic association formation whereby high-frequency EEG activity (HFA, 44-100 Hz) overlying left prefrontal regions increased for subsequently temporally clustered words, but only for those words studied without a concurrent semantic orienting task. To confirm that this disruption in the formation of episodic associations was driven by increased semantic processing, we measured the neural correlates of subsequent semantic clustering. We found that HFA increased for subsequently semantically clustered words only for lists with a concurrent semantic orienting task. This dissociation suggests that increased semantic processing of studied items interferes with the neural processes that support the formation of novel episodic associations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Concurrent chemoradiotherapy versus radiotherapy alone for locoregionally advanced nasopharyngeal carcinoma in the era of intensity-modulated radiotherapy: a meta-analysis.

    PubMed

    He, Yan; Guo, Tao; Guan, Hui; Wang, Jingjing; Sun, Yu; Peng, Xingchen

    2018-01-01

    In this study, we attempted to compare the efficacy and toxicity of concurrent chemoradiotherapy (CCRT) with radiotherapy alone (RT) for locoregionally advanced nasopharyngeal carcinoma (LANPC) in the era of intensity-modulated radiotherapy (IMRT) by meta-analysis. We searched databases, and all randomized controlled trials meeting the inclusion criteria were utilized for a meta-analysis with RevMan 5.3 based on the Cochrane methodology. Fifteen studies were found suitable based on the inclusion criteria. CCRT not only significantly improved the overall response rate (risk ratio [RR]=0.53, 95% CI 0.43-0.66) and the complete response rate (RR=0.60, 95% CI 0.51-0.71) but also contributed to longer overall survival. The incidence of grade 3-4 adverse events in the CCRT group increased for hematologic toxicity (RR 2.25, 95% CI 1.54-3.29), radiation-induced oral mucositis (RR 1.64, 95% CI 1.14-2.35), and radiodermatitis (RR 1.80, 95% CI 1.13-2.88). Compared with IMRT alone, CCRT provided survival benefit with acceptable toxicity in patients with LANPC. However, multicenter randomized controlled trials and long-term follow-up are needed to evaluate the eventual efficacy and toxicity of concurrent chemotherapy plus IMRT.
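    Risk ratios with 95% CIs of the kind quoted above are conventionally combined by inverse-variance pooling on the log scale. A sketch with hypothetical study counts (not the trials in this meta-analysis):

```python
import math

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooling of risk ratios on the log
    scale; returns (pooled RR, lower 95% CI, upper 95% CI)."""
    num = den = 0.0
    for events_t, n_t, events_c, n_c in studies:
        log_rr = math.log((events_t / n_t) / (events_c / n_c))
        # standard large-sample variance of a log risk ratio
        var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c
        num += log_rr / var
        den += 1 / var
    est = num / den
    se = math.sqrt(1 / den)
    return tuple(math.exp(v) for v in (est, est - 1.96 * se, est + 1.96 * se))

# hypothetical (events, total) per arm for three studies
rr, lo, hi = pooled_rr([(30, 100, 20, 100), (45, 150, 30, 150), (12, 80, 10, 80)])
assert lo < rr < hi and rr > 1.0
```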

  14. Metformin association with lower prostate cancer recurrence in type 2 diabetes: a systematic review and meta-analysis.

    PubMed

    Hwang, In Cheol; Park, Sang Min; Shin, Doosup; Ahn, Hong Yup; Rieken, Malte; Shariat, Shahrokh F

    2015-01-01

    Accumulating evidence suggests that metformin possesses anticarcinogenic properties, and its use is associated with favorable outcomes in several cancers. However, it remains unclear whether metformin influences prognosis in prostate cancer (PCa) with concurrent type 2 diabetes (T2D). We searched PubMed, EMBASE, and the Cochrane Library from database inception to April 16, 2014 without language restrictions to identify studies investigating the effect of metformin treatment on outcomes of PCa with concurrent T2D. We conducted a meta-analysis to quantify the risk of recurrence, progression, cancer-specific mortality, and all-cause mortality. Summary relative risks (RRs) with corresponding 95% confidence intervals (CIs) were calculated. Publication bias was assessed by Begg's rank correlation test. A total of eight studies fulfilled the eligibility criteria. We found that diabetic PCa patients who did not use metformin were at increased risk of cancer recurrence (RR, 1.20; 95%CI, 1.00-1.44), compared with those who used metformin. A similar trend was observed for other outcomes, but their relationships did not reach statistical significance. Funnel plot asymmetry was not observed among studies reporting recurrence (p=0.086). Our results suggest that metformin may improve outcomes in PCa patients with concurrent T2D. Well-designed large studies and collaborative basic research are warranted.

  15. Concurrent administration of anticancer chemotherapy drug and herbal medicine on the perspective of pharmacokinetics.

    PubMed

    Cheng, Yung-Yi; Hsieh, Chen-Hsi; Tsai, Tung-Hu

    2018-04-01

    With an increasing number of cancer patients seeking an improved quality of life, complementary and alternative therapies are becoming more common ways to achieve such improvements. The potential risks of concurrent administration are serious and must be addressed. However, comprehensive evidence for the risks and benefits of combining anticancer drugs with traditional herbs is rare. Pharmacokinetic investigations are an efficient way to understand the influence of concomitant remedies. Therefore, this study aimed to collect the results of pharmacokinetic studies relating to the concurrent use of cancer chemotherapy and complementary and alternative therapies. According to the National Health Insurance (NHI) database in Taiwan and several publications, the three most commonly prescribed formulations for cancer patients are Xiang-Sha-Liu-Jun-Zi-Tang, Jia-Wei-Xiao-Yao-San and Bu-Zhong-Yi-Qi-Tang. The three most commonly prescribed single herbs for cancer patients are Hedyotis diffusa, Scutellaria barbata, and Astragalus membranaceus. Few studies have discussed herb-drug interactions involving these herbs from a pharmacokinetics perspective. Here, we reviewed Jia-Wei-Xiao-Yao-San, Long-Dan-Xie-Gan-Tang, Curcuma longa and milk thistle to provide information based on pharmacokinetic evidence for healthcare professionals to use in educating patients about the risks of the concomitant use of various remedies. Copyright © 2018. Published by Elsevier B.V.

  16. Concurrent and lagged effects of registered nurse turnover and staffing on unit-acquired pressure ulcers.

    PubMed

    Park, Shin Hye; Boyle, Diane K; Bergquist-Beringer, Sandra; Staggs, Vincent S; Dunton, Nancy E

    2014-08-01

    We examined the concurrent and lagged effects of registered nurse (RN) turnover on unit-acquired pressure ulcer rates and whether RN staffing mediated the effects. Quarterly unit-level data were obtained from the National Database of Nursing Quality Indicators for 2008 to 2010. A total of 10,935 unit-quarter observations (2,294 units, 465 hospitals) were analyzed. This longitudinal study used multilevel regressions and tested time-lagged effects of study variables on outcomes. The lagged effect of RN turnover on unit-acquired pressure ulcers was significant, while there was no concurrent effect. For every 10 percentage-point increase in RN turnover in a quarter, the odds of a patient having a pressure ulcer increased by 4 percent in the next quarter. Higher RN turnover in a quarter was associated with lower RN staffing in the current and subsequent quarters. Higher RN staffing was associated with lower pressure ulcer rates, but it did not mediate the relationship between turnover and pressure ulcers. We suggest that RN turnover is an important factor that affects pressure ulcer rates and RN staffing needed for high-quality patient care. Given the high RN turnover rates, hospital and nursing administrators should prepare for its negative effect on patient outcomes. © Health Research and Educational Trust.

  17. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power from the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize its main kernel scheduling disciplines. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545
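    A toy leftover-resource model of concurrent kernel scheduling, in the spirit of the stream scheduling disciplines this record anatomizes. Resource units and durations are illustrative, and this is a sketch of one plausible discipline, not the paper's measured model:

```python
def schedule(kernels, total_resources=100):
    """Toy CUDA-stream scheduling model: kernels launched in order run
    concurrently while their combined resource occupancy fits on the
    device; a launch that does not fit waits for earlier kernels to
    release resources (a 'leftover resource' discipline)."""
    running = []        # list of (finish_time, resources_held)
    clock = 0.0
    timeline = {}
    for name, res, dur in kernels:
        while sum(r for _, r in running) + res > total_resources:
            clock = min(f for f, _ in running)        # advance to next finish
            running = [(f, r) for f, r in running if f > clock]
        start = clock
        running.append((start + dur, res))
        timeline[name] = (start, start + dur)
    return timeline

# A and B co-reside (40 + 50 <= 100); C (30 units) must wait for B to finish
t = schedule([("A", 40, 10.0), ("B", 50, 8.0), ("C", 30, 5.0)])
assert t["A"][0] == 0.0 and t["B"][0] == 0.0
assert t["C"] == (8.0, 13.0)
```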

  18. Concurrent and Lagged Effects of Registered Nurse Turnover and Staffing on Unit-Acquired Pressure Ulcers

    PubMed Central

    Park, Shin Hye; Boyle, Diane K; Bergquist-Beringer, Sandra; Staggs, Vincent S; Dunton, Nancy E

    2014-01-01

    Objective We examined the concurrent and lagged effects of registered nurse (RN) turnover on unit-acquired pressure ulcer rates and whether RN staffing mediated the effects. Data Sources/Setting Quarterly unit-level data were obtained from the National Database of Nursing Quality Indicators for 2008 to 2010. A total of 10,935 unit-quarter observations (2,294 units, 465 hospitals) were analyzed. Methods This longitudinal study used multilevel regressions and tested time-lagged effects of study variables on outcomes. Findings The lagged effect of RN turnover on unit-acquired pressure ulcers was significant, while there was no concurrent effect. For every 10 percentage-point increase in RN turnover in a quarter, the odds of a patient having a pressure ulcer increased by 4 percent in the next quarter. Higher RN turnover in a quarter was associated with lower RN staffing in the current and subsequent quarters. Higher RN staffing was associated with lower pressure ulcer rates, but it did not mediate the relationship between turnover and pressure ulcers. Conclusions We suggest that RN turnover is an important factor that affects pressure ulcer rates and RN staffing needed for high-quality patient care. Given the high RN turnover rates, hospital and nursing administrators should prepare for its negative effect on patient outcomes. PMID:24476194

  19. Secondary task for full flight simulation incorporating tasks that commonly cause pilot error: Time estimation

    NASA Technical Reports Server (NTRS)

    Rosch, E.

    1975-01-01

    The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates are associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.

  20. GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.

    2010-01-01

    The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real-time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined, HyperText Transfer Protocol (HTTP)-based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.

  1. Concurrent Image Processing Executive (CIPE). Volume 2: Programmer's guide

    NASA Technical Reports Server (NTRS)

    Williams, Winifred I.

    1990-01-01

    This manual is intended as a guide for application programmers using the Concurrent Image Processing Executive (CIPE). CIPE is intended to become the support system software for a prototype high performance science analysis workstation. In its current configuration CIPE utilizes a JPL/Caltech Mark 3fp Hypercube with a Sun-4 host. CIPE's design is capable of incorporating other concurrent architectures as well. CIPE provides a programming environment that shields application programmers from various user interfaces, file transactions, and architectural complexities. A programmer may choose to write applications to use only the Sun-4 or to use the Sun-4 with the hypercube. A hypercube program will use the hypercube's data processors and optionally the Weitek floating point accelerators. The CIPE programming environment provides a simple set of subroutines to activate user interface functions, specify data distributions, activate hypercube resident applications, and communicate parameters to and from the hypercube.

  2. Influence of regional-scale anthropogenic emissions on CO2 distributions over the western North Pacific

    NASA Astrophysics Data System (ADS)

    Vay, S. A.; Woo, J.-H.; Anderson, B. E.; Thornhill, K. L.; Blake, D. R.; Westberg, D. J.; Kiley, C. M.; Avery, M. A.; Sachse, G. W.; Streets, D. G.; Tsutsumi, Y.; Nolf, S. R.

    2003-10-01

    We report here airborne measurements of atmospheric CO2 over the western North Pacific during the March-April 2001 Transport and Chemical Evolution over the Pacific (TRACE-P) mission. The CO2 spatial distributions were notably influenced by cyclogenesis-triggered transport of regionally polluted continental air masses. Examination of the CO2 to C2H2/CO ratio indicated rapid outflow of combustion-related emissions in the free troposphere below 8 km. Although the highest CO2 mixing ratios were measured within the Pacific Rim region, enhancements were also observed further east over the open ocean at locations far removed from surface sources. Near the Asian continent, discrete plumes encountered within the planetary boundary layer contained up to 393 ppmv of CO2. Coincident enhancements in the mixing ratios of C2Cl4, C2H2, and C2H4 measured concurrently revealed combustion and industrial sources. To elucidate the source distributions of CO2, an emissions database for Asia was examined in conjunction with the chemistry and 5-day backward trajectories that revealed the WNW/W sector of northeast Asia was a major contributor to these pollution events. Comparisons of NOAA/CMDL and JMA surface data with measurements obtained aloft showed a strong latitudinal gradient that peaked between 35° and 40°N. We estimated a net CO2 flux from the Asian continent of approximately 13.93 Tg C day⁻¹ for late winter/early spring with the majority of the export (79%) occurring in the lower free troposphere (2-8 km). The apportionment of the flux between anthropogenic and biospheric sources was estimated at 6.37 Tg C day⁻¹ and 7.56 Tg C day⁻¹, respectively.
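    The reported apportionment is internally consistent, as a quick check shows (the roughly 11 Tg C per day figure for the 2-8 km band is derived here from the abstract's numbers, not stated in it):

```python
anthropogenic = 6.37      # Tg C per day
biospheric = 7.56         # Tg C per day
net_flux = 13.93          # Tg C per day (late winter/early spring)
lower_ft_share = 0.79     # fraction exported in the 2-8 km band

# the two source terms sum to the reported net flux
assert abs((anthropogenic + biospheric) - net_flux) < 0.005
# implied lower-free-troposphere export: roughly 11 Tg C per day
assert round(net_flux * lower_ft_share, 1) == 11.0
```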

  3. Distribution Characteristics of Air-Bone Gaps – Evidence of Bias in Manual Audiometry

    PubMed Central

    Margolis, Robert H.; Wilson, Richard H.; Popelka, Gerald R.; Eikelboom, Robert H.; Swanepoel, De Wet; Saly, George L.

    2015-01-01

    Objective Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Results Automated audiometry produced air-bone gaps that were normally distributed suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and show evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps. PMID:26627469

  4. Private database queries based on counterfactual quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Li; Guo, Fen-Zhuo; Gao, Fei; Liu, Bin; Wen, Qiao-Yan

    2013-08-01

    Based on the fundamental concept of quantum counterfactuality, we propose a protocol to achieve quantum private database queries, which is a theoretical study of how counterfactuality can be employed beyond counterfactual quantum key distribution (QKD). By adding crucial detecting apparatus to the device of QKD, the privacy of both the distrustful user and the database owner can be guaranteed. Furthermore, the proposed private-database-query protocol makes full use of the low efficiency in the counterfactual QKD, and by adjusting the relevant parameters, the protocol obtains excellent flexibility and extensibility.

  5. Surviving the Glut: The Management of Event Streams in Cyberphysical Systems

    NASA Astrophysics Data System (ADS)

    Buchmann, Alejandro

    Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects imply collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de

  6. Determination of hyporheic travel time distributions and other parameters from concurrent conservative and reactive tracer tests by local-in-global optimization

    NASA Astrophysics Data System (ADS)

    Knapp, Julia L. A.; Cirpka, Olaf A.

    2017-06-01

    The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free model gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, which are hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
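    A stdlib-only sketch of the local-in-global idea: an outer global search over a nonlinear parameter with, nested inside, the linear least-squares solve to which a Gauss-Newton step reduces when the model is linear in the shape-free bin weights. The synthetic exponential kernel, bin layout, and random outer search are illustrative assumptions, far simpler than the paper's transport model and MCMC sampler:

```python
import math
import random

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def basis(theta, t, tau):
    """Illustrative return kernel for the travel-time bin at lag tau."""
    return math.exp(-theta * (t - tau)) if t >= tau else 0.0

taus = [0, 2, 4]                      # shape-free travel-time bins
ts = list(range(20))
w_true, theta_true = [0.5, 0.3, 0.2], 0.3
y = [sum(w * basis(theta_true, t, tau) for w, tau in zip(w_true, taus)) for t in ts]

best = None
rng = random.Random(3)
for _ in range(200):                  # outer: global search over theta
    theta = rng.uniform(0.05, 1.0)
    B = [[basis(theta, t, tau) for tau in taus] for t in ts]
    # inner: the model is linear in the bin weights, so one
    # Gauss-Newton step is exactly the normal-equations solve
    AtA = [[sum(B[k][i] * B[k][j] for k in range(len(ts))) for j in range(3)]
           for i in range(3)]
    Aty = [sum(B[k][i] * y[k] for k in range(len(ts))) for i in range(3)]
    w = solve3(AtA, Aty)
    resid = sum((y[k] - sum(B[k][j] * w[j] for j in range(3))) ** 2
                for k in range(len(ts)))
    if best is None or resid < best[0]:
        best = (resid, theta, w)

_, theta_hat, w_hat = best
assert abs(theta_hat - theta_true) < 0.1
assert all(abs(a - b) < 0.2 for a, b in zip(w_hat, w_true))
```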

  7. New model for distributed multimedia databases and its application to networking of museums

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

    This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected through B-ISDNs, and describes an example of networking museums on the basis of the proposed system. The system introduces the new concept of the 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of image databases as one logical database. A user terminal issues a content-retrieval request to the retrieval manager located nearest to it on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server that stores the designated contents. In this scheme, the logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment; the generated parameters are then used to select the most suitable data transfer path on the network, so that the chosen combination fits the distributed multimedia database system.

  8. ARACHNID: A prototype object-oriented database tool for distributed systems

    NASA Technical Reports Server (NTRS)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
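    The geographical decomposition can be sketched as cell bucketing: a box query scans only the cells it overlaps rather than the whole catalog. The cell size, the synthetic catalog, and the flat RA/Dec treatment are illustrative simplifications (ARACHNID distributes the pieces across processors with only weak data linkage between them):

```python
import random
from collections import defaultdict

CELL = 10.0  # degrees per cell side (illustrative granularity)

def cell_of(ra, dec):
    return (int(ra // CELL), int(dec // CELL))

def build_index(sources):
    """Geographic decomposition: bucket catalog sources by sky cell.
    In ARACHNID each piece would live on its own processor; here the
    pieces are just dictionary buckets."""
    index = defaultdict(list)
    for ra, dec in sources:
        index[cell_of(ra, dec)].append((ra, dec))
    return index

def box_query(index, ra_lo, ra_hi, dec_lo, dec_hi):
    """Scan only the cells overlapping the query box."""
    hits = []
    for i in range(int(ra_lo // CELL), int(ra_hi // CELL) + 1):
        for j in range(int(dec_lo // CELL), int(dec_hi // CELL) + 1):
            for ra, dec in index.get((i, j), []):
                if ra_lo <= ra <= ra_hi and dec_lo <= dec <= dec_hi:
                    hits.append((ra, dec))
    return hits

rng = random.Random(5)
catalog = [(rng.uniform(0, 360), rng.uniform(-60, 60)) for _ in range(2000)]
catalog.append((45.0, 0.0))                   # one known in-box source
index = build_index(catalog)
found = box_query(index, 40.0, 55.0, -10.0, 10.0)
brute = [s for s in catalog if 40.0 <= s[0] <= 55.0 and -10.0 <= s[1] <= 10.0]
assert sorted(found) == sorted(brute) and (45.0, 0.0) in found
```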

  9. Model-Driven Test Generation of Distributed Systems

    NASA Technical Reports Server (NTRS)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin

    2012-01-01

    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.

  10. Systematic review of the concurrent and predictive validity of MRI biomarkers in OA

    PubMed Central

    Hunter, D.J.; Zhang, W.; Conaghan, Philip G.; Hirko, K.; Menashe, L.; Li, L.; Reichmann, W.M.; Losina, E.

    2012-01-01

    SUMMARY Objective To summarize literature on the concurrent and predictive validity of MRI-based measures of osteoarthritis (OA) structural change. Methods An online literature search was conducted of the OVID, EMBASE, CINAHL, PsychInfo and Cochrane databases for articles published up to the time of the search, April 2009. 1338 abstracts obtained with this search were preliminarily screened for relevance by two reviewers. Of these, 243 were selected for data extraction for this analysis on validity as well as separate reviews on discriminate validity and diagnostic performance. Of these, 142 manuscripts included data pertinent to concurrent validity and 61 to the predictive validity review. For this analysis we extracted data on criterion (concurrent and predictive) validity from both longitudinal and cross-sectional studies for all synovial joint tissues as they relate to MRI measurement in OA. Results Concurrent validity of MRI in OA has been examined against symptoms, radiography, histology/pathology, arthroscopy, CT, and alignment. The relation of bone marrow lesions, synovitis and effusion to pain was moderate to strong. There was a weak or no relation of cartilage morphology or meniscal tears to pain. The relation of cartilage morphology to radiographic OA and radiographic joint space was inconsistent. There was a higher frequency of meniscal tears, synovitis and other features in persons with radiographic OA. The relation of cartilage to other constructs, including histology and arthroscopy, was stronger. Predictive validity of MRI in OA has been examined for the ability to predict total knee replacement (TKR), change in symptoms, and radiographic as well as MRI progression. Quantitative cartilage volume change and the presence of cartilage defects or bone marrow lesions are potential predictors of TKR. Conclusion MRI has inherent strengths and unique advantages in its ability to visualize multiple individual tissue pathologies relating to pain and also to predict clinical outcome. The complex disease of OA, which involves an array of tissue abnormalities, is best imaged using this tool. PMID:21396463

  11. Content Based Image Retrieval based on Wavelet Transform coefficients distribution

    PubMed Central

    Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice

    2007-01-01

    In this paper we propose a content-based image retrieval method for diagnosis aid in medical fields. We characterize images without explicitly extracting significant features, instead building signatures from the distribution of wavelet transform coefficients. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet basis is proposed. Retrieval efficiency is given for different databases, including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% in some cases using an optimization process. PMID:18003013
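A toy sketch of the signature idea, assuming a one-level Haar transform and a simple histogram model of coefficient magnitudes; the paper's actual signatures, distribution models, and distance weights differ:

```python
# Sketch: characterize a signal by the distribution of its wavelet detail
# coefficients (no explicit feature extraction), then rank database entries
# by a weighted distance between signatures.

def haar_details(signal):
    # One-level Haar wavelet: pairwise differences capture local detail.
    return [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def signature(signal, bins=(0.5, 1.0, 2.0)):
    # Normalized histogram of |detail| magnitudes models the coefficient
    # distribution.
    details = haar_details(signal)
    sig = [0] * (len(bins) + 1)
    for d in details:
        for k, edge in enumerate(bins):
            if abs(d) < edge:
                sig[k] += 1
                break
        else:
            sig[-1] += 1
    total = max(len(details), 1)
    return [c / total for c in sig]

def weighted_distance(sig_a, sig_b, weights=None):
    weights = weights or [1.0] * len(sig_a)
    return sum(w * abs(a - b) for w, a, b in zip(weights, sig_a, sig_b))

query = signature([1, 1, 2, 2, 8, 1, 3, 3])
db = {"smooth": signature([1, 1, 1, 1, 1, 1, 1, 1]),
      "textured": signature([9, 0, 8, 1, 7, 2, 9, 0])}
best = min(db, key=lambda name: weighted_distance(query, db[name]))
print(best)  # -> smooth
```

The weights give a handle for emphasizing the histogram bins that discriminate best for a given database, which is the role the paper assigns to its weighted distance.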

  12. Design considerations, architecture, and use of the Mini-Sentinel distributed data system.

    PubMed

    Curtis, Lesley H; Weiner, Mark G; Boudreau, Denise M; Cooper, William O; Daniel, Gregory W; Nair, Vinit P; Raebel, Marsha A; Beaulieu, Nicolas U; Rosofsky, Robert; Woodworth, Tiffany S; Brown, Jeffrey S

    2012-01-01

    We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA. A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data. As of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center. This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.
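The distributed-query pattern described above can be sketched as follows; the data model fields and drug names are hypothetical, but the shape matches the abstract: each partner executes code locally against its own extract and returns only summaries, so member-level data never leaves the partner site:

```python
# Sketch: distributed querying over a common data model. Each data partner
# computes a local summary; the coordinating center sees only aggregates.

def local_summary(partner_rows, drug):
    # Runs at the partner site against its own common-data-model extract.
    exposed = [r for r in partner_rows if r["drug"] == drug]
    return {"members": len(exposed),
            "person_months": sum(r["months"] for r in exposed)}

def distributed_query(partners, drug):
    # The coordinating center aggregates the returned summaries only.
    totals = {"members": 0, "person_months": 0}
    for rows in partners.values():
        s = local_summary(rows, drug)
        totals["members"] += s["members"]
        totals["person_months"] += s["person_months"]
    return totals

partners = {
    "site_a": [{"drug": "warfarin", "months": 12},
               {"drug": "statin", "months": 6}],
    "site_b": [{"drug": "warfarin", "months": 24}],
}
print(distributed_query(partners, "warfarin"))
# -> {'members': 2, 'person_months': 36}
```

This is why the common data model matters: the same summary program can be "developed centrally and executed locally" only if every partner's extract has identical structure.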

  13. Integrating a local database into the StarView distributed user interface

    NASA Technical Reports Server (NTRS)

    Silberberg, D. P.

    1992-01-01

    A distributed user interface to the Space Telescope Data Archive and Distribution Service (DADS), known as StarView, is being developed. The DADS architecture consists of the data archive as well as a relational database catalog describing the archive. StarView is a client/server system in which the user interface is the front-end client to the DADS catalog and archive servers. Users query the DADS catalog from the StarView interface. Query commands are transmitted via a network and evaluated by the database. The results are returned via the network and displayed on StarView forms. Based on the results, users decide which data sets to retrieve from the DADS archive. Archive requests are packaged by StarView and sent to DADS, which returns the requested data sets to the users. The advantages of distributed client/server user interfaces over traditional one-machine systems are well known. Since users run software on machines separate from the database, overall client response time is much faster. Also, since the server processes only database requests, database response time is much faster. Disadvantages inherent in this architecture are slow overall database access due to network delays, the lack of a 'get previous row' command, and the need to resubmit refinements of a previously issued query to the database server even though the domain of values has already been returned by the previous query. This architecture also does not allow users to cross-correlate DADS catalog data with other catalogs. Clearly, a distributed user interface would be more powerful if it overcame these disadvantages. A local database is being integrated into StarView to overcome them. When a query is made through a StarView form, which is often composed of fields from multiple tables, it is translated to an SQL query and issued to the DADS catalog. At the same time, a local database table is created to contain the resulting rows of the query. The returned rows are displayed on the form as well as inserted into the local database table. Identical results are produced by reissuing the query to either the DADS catalog or the local table. Relational databases do not provide a 'get previous row' function because of the inherent complexity of retrieving previous rows of multiple-table joins. However, since this function is easily implemented on a single table, StarView uses the local table to retrieve the previous row. Also, StarView issues subsequent query refinements to the local table instead of the DADS catalog, eliminating the network transmission overhead. Finally, other catalogs can be imported into the local database for cross-correlation with local tables. Overall, it is believed that this is a more powerful architecture for distributed database user interfaces.
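The local-table idea can be sketched with an in-memory SQLite cache; the table layout and rows are hypothetical stand-ins for a catalog result set, not StarView's actual schema:

```python
# Sketch: cache the rows returned by a remote catalog query in a local
# single table, so query refinements and "get previous row" run locally
# with no network round trip.
import sqlite3

class LocalResultCache:
    def __init__(self, rows):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE result (id INTEGER, exposure REAL)")
        self.db.executemany("INSERT INTO result VALUES (?, ?)", rows)
        self.rows = rows
        self.cursor_pos = 0

    def refine(self, where_sql, params=()):
        # Refinement runs against the local table, not the remote catalog.
        return self.db.execute(
            "SELECT id, exposure FROM result WHERE " + where_sql,
            params).fetchall()

    def previous_row(self):
        # Trivial on a single local table; hard on a remote multi-table join.
        self.cursor_pos = max(self.cursor_pos - 1, 0)
        return self.rows[self.cursor_pos]

# Rows as returned once by the remote catalog query (hypothetical data).
cache = LocalResultCache([(1, 30.0), (2, 120.0), (3, 45.0)])
print(cache.refine("exposure > ?", (40.0,)))  # -> [(2, 120.0), (3, 45.0)]
```

Because the refinement's value domain is already in the cache, only a genuinely new query needs to go back to the remote server, which is exactly the saving the abstract claims.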

  14. Effects of concurrent drug therapy on technetium /sup 99m/Tc gluceptate biodistribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hinkle, G.H.; Basmadjian, G.P.; Peek, C.

    Drug interactions with /sup 99m/Tc gluceptate resulting in altered biodistribution were studied using chart review and animal tests. Charts of nine patients who had abnormal gallbladder uptake of technetium /sup 99m/Tc gluceptate during a two-year period were reviewed to obtain data such as concurrent drug therapy, primary diagnosis, and laboratory values. Adult New Zealand white rabbits were then used for testing the biodistribution of technetium /sup 99m/Tc gluceptate when administered concurrently with possibly interacting drugs identified in the chart review--penicillamine, penicillin G potassium, penicillin V potassium, acetaminophen, and trimethoprim-sulfamethoxazole. Chart review revealed no conclusive patterns of altered biodistribution associated with other factors. The data did suggest the possibility that the five drugs listed above might cause increased hepatobiliary clearance of the radiopharmaceutical. Animal tests showed that i.v. penicillamine caused substantial distribution of radioactivity into the gallbladder and small bowel. Minimally increased gallbladder radioactivity occurred when oral acetaminophen and trimethoprim-sulfamethoxazole were administered concurrently. Oral and i.v. penicillins did not increase gallbladder activity. Penicillamine may cause substantial alteration of the biodistribution of technetium /sup 99m/Tc gluceptate.

  15. Evidence for two attentional components in visual working memory.

    PubMed

    Allen, Richard J; Baddeley, Alan D; Hitch, Graham J

    2014-11-01

    How does executive attentional control contribute to memory for sequences of visual objects, and what does this reveal about storage and processing in working memory? Three experiments examined the impact of a concurrent executive load (backward counting) on memory for sequences of individually presented visual objects. Experiments 1 and 2 found disruptive concurrent load effects of equivalent magnitude on memory for shapes, colors, and colored shape conjunctions (as measured by single-probe recognition). These effects were present only for Items 1 and 2 in a 3-item sequence; the final item was always impervious to this disruption. This pattern of findings was precisely replicated in Experiment 3 when using a cued verbal recall measure of shape-color binding, with error analysis providing additional insights concerning attention-related loss of early-sequence items. These findings indicate an important role for executive processes in maintaining representations of earlier encountered stimuli in an active form alongside privileged storage of the most recent stimulus. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. The articulatory in-out effect resists oral motor interference.

    PubMed

    Lindau, Berit; Topolinski, Sascha

    2018-02-01

    People prefer words with inward directed consonantal patterns (e.g., MENIKA) compared to outward patterns (KENIMA), because inward (outward) articulation movements resemble positive (negative) mouth actions such as swallowing (spitting). This effect might rely on covert articulation simulations, or subvocalizations, since it occurs also under silent reading. We tested to what degree these underlying articulation simulations are disturbed by oral motor interference. In 3 experiments (total N = 465) we interfered with these articulation simulations by employing concurrent oral exercises that induce oral motor noise while judging inward and outward words (chewing gum, Experiment 1; executing meaningless tongue movements, Experiment 2; concurrent verbalizations, Experiment 3). Across several word stimulus types, the articulatory in-out effect was not modulated by these tasks. This finding introduces a theoretically interesting case, because in contrast to many previous demonstrations regarding other motor-preference effects, the covert simulations in this effect are not susceptible to selective motor interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Process evaluation distributed system

    NASA Technical Reports Server (NTRS)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). A data display module, in communication with the database server, includes a website for viewing collected process data in a desired metrics form and provides for editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module minimizes the requirement for manual input of the collected process data.

  18. Heterogeneous distributed databases: A case study

    NASA Technical Reports Server (NTRS)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  19. Kalman approach to accuracy management for interoperable heterogeneous model abstraction within an HLA-compliant simulation

    NASA Astrophysics Data System (ADS)

    Leskiw, Donald M.; Zhau, Junmei

    2000-06-01

    This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
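For reference, the scalar form of the Kalman update that the abstract maps onto model management, with the interpretation suggested there: the variance P stands in for model accuracy, and H for the transformation between two levels of detail. This is a generic textbook sketch, not the authors' simulation framework:

```python
# Scalar Kalman update: fuse an estimate x (variance P) with an
# observation z (variance R) related through measurement model H.

def kalman_update(x, P, z, R, H=1.0):
    y = z - H * x            # innovation between the two abstraction levels
    S = H * P * H + R        # innovation variance
    K = P * H / S            # Kalman gain: weights the more accurate source
    x_new = x + K * y        # updated estimate
    P_new = (1 - K * H) * P  # updated (reduced) uncertainty
    return x_new, P_new

# A coarse model quantity (large variance, i.e. low accuracy) is
# reconciled with a more detailed one (small variance).
x, P = kalman_update(x=10.0, P=4.0, z=12.0, R=1.0)
print(round(x, 2), round(P, 2))  # -> 11.6 0.8
```

The appeal of this framing for interoperability is that each model abstraction carries an explicit accuracy (its variance), so fusing disparate levels of detail reduces to a well-understood estimation step.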

  20. Database Search Strategies & Tips. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 17 articles presenting strategies and tips for searching databases online appear in this collection, which is one in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  1. Automated Analysis of Zooplankton Size and Taxonomic Composition.

    DTIC Science & Technology

    1998-07-01

    Gallager, and P. Alatalo. Seasonal evolution of plankton and particle distributions across Georges Bank as measured using the Video Plankton Recorder... pteropods, and larvaceans estimated from concurrent Video Plankton Recorder and MOCNESS tows in the stratified region of Georges Bank. Deep Sea Res

  2. Concurrent Breakpoints

    DTIC Science & Technology

    2011-12-18

    Proceedings of the SIGMETRICS Symposium on Parallel and Distributed Tools, pages 48–59, 1998. [8] A. Dinning and E. Schonberg. Detecting access...multi-threaded programs. ACM Trans. Comput. Syst., 15(4):391–411, 1997. [38] E. Schonberg. On-the-fly detection of access anomalies. In Proceedings

  3. jSPyDB, an open source database-independent tool for data management

    NASA Astrophysics Data System (ADS)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-12-01

    Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open source, they provide interfaces only to a specific kind of database, and they are platform-dependent and very CPU- and memory-consuming. jSPyDB is a free web-based tool written in Python and JavaScript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users are allowed to create their own customized views for better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced since we do not give users the possibility to directly execute arbitrary SQL statements.

  4. Resources | Division of Cancer Prevention

    Cancer.gov

    Manual of Operations Version 3, 12/13/2012 (PDF, 162KB) Database Sources Consortium for Functional Glycomics databases Design Studies Related to the Development of Distributed, Web-based European Carbohydrate Databases (EUROCarbDB) |

  5. A Petri net controller for distributed hierarchical systems. Thesis

    NASA Technical Reports Server (NTRS)

    Peck, Joseph E.

    1991-01-01

    The solutions to a wide variety of problems are often best organized as a distributed hierarchical system. These systems can be graphically and mathematically modeled through the use of Petri nets, which can easily represent synchronous, asynchronous, and concurrent operations. This thesis presents a controller implementation based on Petri nets and a design methodology for the interconnection of distributed Petri nets. Two case studies are presented in which the controller operates a physical system, the Center for Intelligent Robotic Systems for Space Exploration Dual Arm Robotic Testbed.
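The Petri net mechanics behind this kind of controller can be sketched with a minimal place/transition net; the two-arm scenario below is a hypothetical illustration loosely echoing the dual-arm testbed, not the thesis's actual controller:

```python
# Minimal Petri net sketch: places hold tokens, a transition fires when all
# its input places are marked, and independent transitions model concurrency
# while a shared transition models synchronization.

class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)    # place -> token count
        self.transitions = transitions  # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(name + " is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two arms working concurrently, then synchronizing on a shared step.
net = PetriNet(
    marking={"arm1_idle": 1, "arm2_idle": 1},
    transitions={
        "arm1_grasp": (["arm1_idle"], ["arm1_done"]),
        "arm2_grasp": (["arm2_idle"], ["arm2_done"]),
        "sync": (["arm1_done", "arm2_done"], ["assembled"]),
    })
net.fire("arm1_grasp")
net.fire("arm2_grasp")  # could equally have fired first: true concurrency
net.fire("sync")        # synchronization requires both arms to be done
print(net.marking["assembled"])  # -> 1
```

The firing rule is what lets one formalism cover synchronous, asynchronous, and concurrent operations: independent transitions may fire in any order, while a transition with multiple input places acts as a synchronization barrier.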

  6. Analysis and Design of a Distributed System for Management and Distribution of Natural Language Assertions

    DTIC Science & Technology

    2010-09-01

    [No abstract is available in this record; the indexed text consists only of front-matter fragments: table-of-contents and list-of-figures entries for the SCIL architecture, plus acronym-list entries for LAN (Local Area Network), ODBC (Open Database Connectivity), and SCIL (Social-Cultural Content in Language).]

  7. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  8. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be transferable across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  9. Design and verification of distributed logic controllers with application of Petri nets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał

    2015-12-31

    The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that, it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified and, if the validation is successful, the system can finally be implemented.

  10. Design and verification of distributed logic controllers with application of Petri nets

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika

    2015-12-01

    The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that, it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified and, if the validation is successful, the system can finally be implemented.

  11. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    PubMed

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. To verify the suggested platform, scalability as the number of concurrent lookups increased was evaluated in a real test bed. Tracking latency and the traffic load ratio in the proposed tracking architecture were also evaluated.

  12. Determining conserved metabolic biomarkers from a million database queries.

    PubMed

    Kurczy, Michael E; Ivanisevic, Julijana; Johnson, Caroline H; Uritboonthai, Winnie; Hoang, Linh; Fang, Mingliang; Hicks, Matthew; Aldebot, Anthony; Rinehart, Duane; Mellander, Lisa J; Tautenhahn, Ralf; Patti, Gary J; Spilker, Mary E; Benton, H Paul; Siuzdak, Gary

    2015-12-01

    Metabolite databases provide a unique window into metabolome research, allowing the most commonly searched biomarkers to be catalogued. Omic-scale metabolite profiling, or metabolomics, is finding increased utility in biomarker discovery, largely driven by improvements in analytical technologies and the concurrent developments in bioinformatics. However, the successful translation of biomarkers into clinical or biologically relevant indicators is limited. With the aim of improving the discovery of translatable metabolite biomarkers, we present search analytics for over one million METLIN metabolite database queries. The most common metabolites found in METLIN were cross-correlated against XCMS Online, the widely used cloud-based data processing and pathway analysis platform. Analysis of the METLIN and XCMS common metabolite data has two primary implications: these metabolites might indicate a conserved metabolic response to stressors, and the data may be used to gauge the relative uniqueness of potential biomarkers. METLIN can be accessed at https://metlin.scripps.edu. Contact: siuzdak@scripps.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Antarctic Meteorite Classification and Petrographic Database Enhancements

    NASA Technical Reports Server (NTRS)

    Todd, N. S.; Satterwhite, C. E.; Righter, K.

    2012-01-01

    The Antarctic Meteorite collection, which comprises over 18,700 meteorites, is one of the largest collections of meteorites in the world. These meteorites have been collected since the late 1970s as part of a three-agency agreement between NASA, the National Science Foundation, and the Smithsonian Institution [1]. Samples collected each season are analyzed at NASA's Meteorite Lab and the Smithsonian Institution, and results are published twice a year in the Antarctic Meteorite Newsletter, which has been in publication since 1978. Each newsletter lists the samples collected and processed and provides more in-depth details on selected samples of importance to the scientific community. Data about these meteorites are also published on the NASA Curation website [2] and made available through the Meteorite Classification Database, allowing scientists to search by a variety of parameters. This paper describes enhancements that have been made to the database and to the data and photo acquisition process to provide the meteorite community with faster access to meteorite data, concurrent with the twice-yearly publication of the Antarctic Meteorite Newsletter.

  14. Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert

    2002-01-01

    The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) method, intended for optimization of engineering systems conducted by distributed specialty groups working concurrently in a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems, in which the local design variables are numerous, and a single system-level optimization whose design variables are relatively few. The subtasks are fully autonomous in their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by response surfaces to be accessed by the system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method's merits and demerits and recommendations for further research.
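The bi-level pattern can be sketched in a few lines; the toy weight model, variable names, and line search below are hypothetical, and the subsystem response surfaces of BLISS are replaced by direct calls for brevity:

```python
# Sketch of bi-level optimization: for each trial value of the (few)
# system-level variables, a subsystem autonomously optimizes its own (many)
# local variables; the system level then searches only the shared variables.

def subsystem_weight(span, thickness):
    # Toy local analysis: weight trades off span and panel thickness.
    return span * thickness + 1.0 / thickness

def subsystem_optimum(span):
    # Inner loop: eliminate the local variable (thickness) for a given
    # system-level variable (span) by a crude line search.
    candidates = [t / 10 for t in range(1, 50)]
    return min(subsystem_weight(span, t) for t in candidates)

def system_level_optimum(spans):
    # Outer loop: the system optimizer sees only the shared variable and
    # the subsystem's already-optimized response.
    return min(spans, key=subsystem_optimum)

best_span = system_level_optimum([5.0, 10.0, 20.0])
print(best_span)  # -> 5.0
```

Because each inner optimization depends only on the system-level trial point, the subsystem evaluations are independent and can run concurrently on separate processors, which is the parallelism the method is designed to exploit.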

  15. Preliminary demonstration using localized skin temperature elevation as observed with thermal imaging as an indicator of fat-specific absorption during focused-field radiofrequency therapy.

    PubMed

    Key, Douglas J

    2014-07-01

    This study incorporates concurrent thermal camera imaging as a means both of safely extending the length of each treatment session within skin surface temperature tolerances and of demonstrating not only the homogeneous nature of skin surface heating but also the distribution of that heating pattern as a reflection of the localization of subcutaneous fat. Five subjects were selected because of a desire to reduce abdomen and flank fullness. Full-treatment-field thermal camera images were captured at 15-minute intervals, specifically at 15, 30, and 45 minutes into active treatment, with the purpose of monitoring skin temperature and avoiding any patterns of skin temperature excess. Peak areas of heating corresponded anatomically to the patients' areas of greatest fat excess, i.e., visible "pinchable" fat. Preliminary observations of high-resolution thermal camera imaging used concurrently with focused-field RF therapy show peak skin heating patterns overlying the areas of greatest fat excess.

  16. The medical communications officer. A resource for data collection, quality management and medical control.

    PubMed

    Gunderson, Michael; Barnard, Jeff; McPherson, John; Kearns, Conrad T

    2002-08-01

    Pinellas County EMS' Medical Communications Officers provide a wide variety of services to patients, field clinicians, managers and their medical director. The concurrent data collection processes used in the MCO program for performance measurement of resuscitation efforts, intubations, submersion incidents and aeromedical transports for trauma cases have been very effective in the integration of data from multiple computer databases and telephone follow-ups with field crews and receiving emergency department staffs. This has facilitated significant improvements in the performance of these and many other aspects of our EMS system.

  17. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case study of remote teleoperation that assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
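
    The paper's scheduling-theory analysis is not reproduced in the abstract; as a hedged illustration of the kind of hard-real-time test such methods build on, the classic Liu and Layland rate-monotonic utilization bound for periodic tasks can be sketched as follows (the task set is invented, not taken from the case study):

```python
# Liu & Layland sufficient schedulability test for rate-monotonic
# scheduling of periodic hard-real-time tasks.  A task set of n tasks
# is schedulable if total utilization <= n * (2^(1/n) - 1).
# The example task set below is illustrative only.

def rm_schedulable(tasks):
    """tasks: list of (worst_case_compute_time, period) pairs.
    Returns True if the Liu & Layland sufficient bound holds."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three hypothetical control loops: (execution time, period) in ms.
tasks = [(10, 100), (20, 150), (30, 350)]
print(rm_schedulable(tasks))  # True: utilization ~0.32 is under the bound
```

Note the bound is sufficient but not necessary: a task set failing it may still be schedulable and requires an exact response-time analysis.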

  18. Toward unification of taxonomy databases in a distributed computer environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified with a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results and in investigating future research directions from existing research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existing taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.
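
    The abstract does not give the paper's SYBASE schema; as a minimal sketch of what cross-data-bank mismatch detection looks like in a relational setting, the following uses SQLite with invented table and column names (bank_a, bank_b, name, parent) to flag a taxon assigned different parents by two banks:

```python
# Sketch of mismatch detection between two taxonomy data banks.
# Schema and data are invented for illustration; the paper's actual
# SYBASE implementation is not reproduced here.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE bank_a (name TEXT PRIMARY KEY, parent TEXT);
CREATE TABLE bank_b (name TEXT PRIMARY KEY, parent TEXT);
INSERT INTO bank_a VALUES ('Homo sapiens', 'Homo'), ('Homo', 'Hominidae');
INSERT INTO bank_b VALUES ('Homo sapiens', 'Homo'), ('Homo', 'Homininae');
""")

# A mismatch: the same taxon carries different parents in the two banks.
rows = con.execute("""
    SELECT a.name, a.parent, b.parent
    FROM bank_a a JOIN bank_b b ON a.name = b.name
    WHERE a.parent <> b.parent
""").fetchall()
print(rows)  # [('Homo', 'Hominidae', 'Homininae')]
```

A repair system would then resolve each flagged row against the current classification rules rather than deleting it outright.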

  19. High-Cost Users of Prescription Drugs: A Population-Based Analysis from British Columbia, Canada.

    PubMed

    Weymann, Deirdre; Smolina, Kate; Gladstone, Emilie J; Morgan, Steven G

    2017-04-01

    The objective was to examine variation in pharmaceutical spending and patient characteristics across prescription drug user groups. Data were drawn from British Columbia's population-based linked administrative health and sociodemographic databases (N = 3,460,763). We classified individuals into empirically derived prescription drug user groups based on pharmaceutical spending patterns outside hospitals from 2007 to 2011. We examined variation in patient characteristics, mortality, and health services usage and applied hierarchical clustering to determine patterns of concurrent drug use identifying high-cost patients. Approximately 1 in 20 British Columbians had persistently high prescription costs for 5 consecutive years, accounting for 42 percent of 2011 province-wide pharmaceutical spending. Less than 1 percent of the population experienced discrete episodes of high prescription costs; an additional 2.8 percent transitioned to or from high-cost episodes of unknown duration. Persistent high-cost users were more likely to concurrently use multiple chronic medications; episodic and transitory users spent more on specialized medicines, including outpatient cancer drugs. Cluster analyses revealed heterogeneity in concurrent medicine use within high-cost groups. Whether low, moderate, or high, costs of prescription drugs for most individuals are persistent over time. Policies controlling high-cost use should focus on reducing polypharmacy and encouraging price competition in drug classes used by ordinary and high-cost users alike. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.

  20. Uncertain Associations of Major Bleeding and Concurrent Use of Antiplatelet Agents and Chinese Medications: A Nested Case-Crossover Study

    PubMed Central

    Yam, Felix K.

    2017-01-01

    Despite evidence that some commonly used Chinese medications (CMs) have antiplatelet/anticoagulant effects, many patients still use antiplatelets combined with CMs. We conducted a nested case-crossover study to examine the associations between the concomitant use of antiplatelets and CMs and major bleeding, using a population-based health database in Taiwan. Among the cohort of 79,463 outpatients prescribed antiplatelets (e.g., aspirin and clopidogrel) continuously, 1,209 patients hospitalized with newly occurring bleeding in 2012 and 2013 were included. The recruited patients served as their own controls to compare different times of exposure to prespecified CMs (e.g., Asian ginseng and dong quai) and antiplatelet agents. The periods of case, control 1, and control 2 were defined as 1–4 weeks, 6–9 weeks, and 13–16 weeks before hospitalization, respectively. Conditional logistic regression analyses found that concurrent use of antiplatelet drugs with any of the prespecified CMs in the case period did not significantly increase the risk of bleeding over that in the control periods (OR = 1.00, 95% CI 0.51 to 1.95 and OR = 1.13, 95% CI 0.65 to 1.97). The study showed no strong relationships between hospitalization for major bleeding events and concurrent use of antiplatelet drugs with the prespecified CMs. PMID:28831288

  1. Word length, set size, and lexical factors: Re-examining what causes the word length effect.

    PubMed

    Guitard, Dominic; Gabel, Andrew J; Saint-Aubin, Jean; Surprenant, Aimée M; Neath, Ian

    2018-04-19

    The word length effect, better recall of lists of short (fewer syllables) than of long (more syllables) words, has been termed a benchmark effect of working memory. Despite this, experiments on the word length effect can yield quite different results depending on set size and stimulus properties. Seven experiments are reported that address these two issues. Experiment 1 replicated the finding of a preserved word length effect under concurrent articulation for large stimulus sets, which contrasts with the abolition of the word length effect by concurrent articulation for small stimulus sets. Experiment 2, however, demonstrated that when the short and long words are equated on more dimensions, concurrent articulation abolishes the word length effect for large stimulus sets. Experiment 3 shows a standard word length effect when output time is equated, but Experiments 4-6 show no word length effect when short and long words are equated on increasingly many dimensions that previous demonstrations have overlooked. Finally, Experiment 7 compared recall of small- and large-neighborhood words that were equated on all the dimensions used in Experiment 6 (except for those directly related to neighborhood size), and a neighborhood size effect was still observed. We conclude that lexical factors, rather than word length per se, are better predictors of when the word length effect will occur. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. [Learning virtual routes: what does verbal coding do in working memory?].

    PubMed

    Gyselinck, Valérie; Grison, Élise; Gras, Doriane

    2015-03-01

    Two experiments were run to complete our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In Experiment 1, a dual-task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge, as assessed by the visual recognition of landmarks, was not impaired by any of the concurrent tasks. Route knowledge, assessed by recognition of directions, was impaired both by a tapping task and by a concurrent articulation task. Interestingly, the pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone-decision task as a nonsemantic auditory task. Results show that these new concurrent tasks impaired landmark knowledge and route knowledge differently. The results can be interpreted as showing that the coding of route knowledge could be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  3. Absolute Reliability and Concurrent Validity of Hand Held Dynamometry and Isokinetic Dynamometry in the Hip, Knee and Ankle Joint: Systematic Review and Meta-analysis

    PubMed Central

    Chamorro, Claudio; Armijo-Olivo, Susan; De la Fuente, Carlos; Fuentes, Javiera; Javier Chirosa, Luis

    2017-01-01

    Abstract The purpose of the study is to establish absolute reliability and concurrent validity between hand-held dynamometers (HHDs) and isokinetic dynamometers (IDs) in lower-extremity peak torque assessment. The Medline, Embase, and CINAHL databases were searched for studies related to psychometric properties in muscle dynamometry. Studies reporting the standard error of measurement, SEM (%), or limit of agreement, LOA (%), expressed as a percentage of the mean, were considered to establish absolute reliability, while studies using the intra-class correlation coefficient (ICC) were considered to establish concurrent validity between dynamometers. In total, 17 studies were included in the meta-analysis. The COSMIN checklist classified them as between fair and poor. Using HHDs, knee extension LOA (%) was 33.59%, 95% confidence interval (CI) 23.91 to 43.26, and ankle plantar flexion LOA (%) was 48.87%, CI 35.19 to 62.56. Using IDs, hip adduction and extension, knee flexion and extension, and ankle dorsiflexion showed LOA (%) under 15%. Lower hip, knee, and ankle LOA (%) were obtained using an ID compared to an HHD. The ICC between devices ranged from 0.62, CI 0.37 to 0.87, for ankle dorsiflexion to 0.94, CI 0.91 to 0.98, for hip adduction. Very high correlations were found for hip adductors and hip flexors, and moderate correlations for knee flexors/extensors and ankle plantar/dorsiflexors. PMID:29071305

  4. On the feasibility of concurrent human TMS-EEG-fMRI measurements

    PubMed Central

    Reithler, Joel; Schuhmann, Teresa; de Graaf, Tom; Uludağ, Kâmil; Goebel, Rainer; Sack, Alexander T.

    2013-01-01

    Simultaneously combining the complementary assets of EEG, functional MRI (fMRI), and transcranial magnetic stimulation (TMS) within one experimental session provides synergistic results, offering insights into brain function that go beyond the scope of each method when used in isolation. The steady increase of concurrent EEG-fMRI, TMS-EEG, and TMS-fMRI studies further underlines the added value of such multimodal imaging approaches. Whereas concurrent EEG-fMRI enables monitoring of brain-wide network dynamics with high temporal and spatial resolution, the combination with TMS provides insights into causal interactions within these networks. Thus the simultaneous use of all three methods would allow studying fast, spatially accurate, and distributed causal interactions in the perturbed system and their functional relevance for intact behavior. Concurrent EEG-fMRI, TMS-EEG, and TMS-fMRI experiments are already technically challenging, and the three-way combination of TMS-EEG-fMRI might yield additional difficulties in terms of hardware strain or signal quality. The present study explored the feasibility of concurrent TMS-EEG-fMRI studies by performing safety and quality assurance tests based on phantom and human data, combining existing commercially available hardware. Results revealed that combined TMS-EEG-fMRI measurements were technically feasible, safe in terms of induced temperature changes, allowed functional MRI acquisition with image quality comparable to that during concurrent EEG-fMRI or TMS-fMRI, and provided artifact-free EEG before and from 300 ms after TMS pulse application. Based on these empirical findings, we discuss the conceptual benefits of this novel complementary approach to investigating the working human brain and list a number of precautions and caveats to be heeded when setting up such multimodal imaging facilities with current hardware. PMID:23221407

  5. Design of special purpose database for credit cooperation bank business processing network system

    NASA Astrophysics Data System (ADS)

    Yu, Yongling; Zong, Sisheng; Shi, Jinfa

    2011-12-01

    As e-finance becomes widespread in cities, its construction is shifting to the vast rural market, where it is developing quickly and in depth. Developing a business processing network system suited to rural credit cooperative banks makes business processing convenient and has good application prospects. In this paper, we analyse the necessity of adopting a special-purpose distributed database in a credit cooperation bank system, give a corresponding distributed database system structure, and design the special-purpose database and its interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.

  6. Intraoperative floppy iris syndrome and its association with various concurrent medications, bulbus length, patient age and gender.

    PubMed

    Wahl, Michael; Tipotsch-Maca, Saskia M; Vecsei-Marlovits, Pia V

    2017-01-01

    To evaluate the association between intraoperative floppy iris syndrome (IFIS) and concurrent medications containing selective alpha1A receptor antagonists as well as nonselective alpha1-adrenergic receptor antagonists, bulbus length, patient age, and gender. We performed a prospective data acquisition of IFIS occurrence and grading, and a retrospective evaluation of concurrent medications, bulbus length, patient age, and gender of all patients undergoing cataract surgery over a 6-month period. IFIS was observed in 119 of 947 cases (12.6%). 31 of those 119 patients (26.1%) had a concurrent medication with a drug that is associated with a higher risk of causing IFIS. Tamsulosin was the drug most commonly associated with IFIS (n = 11), followed by a combination of drugs (n = 7), doxazosin (n = 4), quetiapine (n = 4), finasteride (n = 2), prothipendyl (n = 2), and mianserin (n = 1). Bulbus length and age did not show any significant association with the occurrence or grade of IFIS. Gender distribution among IFIS cases was 57.1% male (n = 68) and 42.9% female (n = 51). The occurrence of IFIS has to be expected with a variety of concurrent medications. The number of IFIS cases and the percentage of females in this series are higher compared to previous reports. These observations might be due to rising awareness among surgeons or to an increasing number of causative medications on the market.

  7. Towards an integrated European strong motion data distribution

    NASA Astrophysics Data System (ADS)

    Luzi, Lucia; Clinton, John; Cauzzi, Carlo; Puglia, Rodolfo; Michelini, Alberto; Van Eck, Torild; Sleeman, Reinhoud; Akkar, Sinan

    2013-04-01

    Recent decades have seen a significant increase in the quality and quantity of strong motion data collected in Europe, as dense, often real-time, continuously monitored broadband strong motion networks have been constructed in many nations. There has been a concurrent increase in demand for access to strong motion data, not only from researchers for engineering and seismological studies, but also from civil authorities and seismic networks for the rapid assessment of ground motion and shaking intensity following significant earthquakes (e.g., ShakeMaps). Aside from a few notable exceptions on the national scale, databases providing access to strong motion data have not kept pace with these developments. In the framework of the EC infrastructure project NERA (2010-2014), which integrates key research infrastructures in Europe for monitoring earthquakes and assessing their hazard and risk, the network activity NA3 deals with the networking of acceleration networks and strong motion data. Within the NA3 activity two infrastructures are being constructed: i) a Rapid Response Strong Motion (RRSM) database, which, following a strong event, automatically parameterises all available on-scale waveform data within the European Integrated waveform Data Archives (EIDA) and makes the waveforms easily available to the seismological community within minutes of an event; and ii) a European Strong Motion (ESM) database of accelerometric records, with associated metadata relevant to the earthquake engineering and seismology research communities, using standard, manual processing that reflects the state of the art and research needs in these fields. These two separate repositories form the core infrastructures being built to distribute strong motion data in Europe in order to guarantee rapid and long-term availability of high-quality waveform data to both the international scientific community and the hazard mitigation communities. These infrastructures will provide access to strong motion data in an eventual EPOS seismological service. A working group on strong motion data is being created at ORFEUS in 2013. This body, consisting of experts in strong motion data collection, processing, and research from across Europe, will provide the umbrella organisation that will 1) have the political clout to negotiate data-sharing agreements with strong motion data providers and 2) manage the software during the transition from the end of NERA to the EPOS community. We expect the community providing data to the RRSM and ESM to grow gradually, under the supervision of ORFEUS, and eventually to include strong motion data from networks in all European countries that can have an open data policy.

  8. Concurrent topological design of composite structures and materials containing multiple phases of distinct Poisson's ratios

    NASA Astrophysics Data System (ADS)

    Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min

    2018-04-01

    Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure for a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme for the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint on the total mass.

  9. A cognitive approach to classifying perceived behaviors

    NASA Astrophysics Data System (ADS)

    Benjamin, Dale Paul; Lyons, Damian

    2010-04-01

    This paper describes our work on integrating distributed, concurrent control in a cognitive architecture, and using it to classify perceived behaviors. We are implementing the Robot Schemas (RS) language in Soar. RS is a CSP-type programming language for robotics that controls a hierarchy of concurrently executing schemas. The behavior of every RS schema is defined using port automata. This provides precision to the semantics and also a constructive means of reasoning about the behavior and meaning of schemas. Our implementation uses Soar operators to build, instantiate and connect port automata as needed. Our approach is to use comprehension through generation (similar to NLSoar) to search for ways to construct port automata that model perceived behaviors. The generality of RS permits us to model dynamic, concurrent behaviors. A virtual world (Ogre) is used to test the accuracy of these automata. Soar's chunking mechanism is used to generalize and save these automata. In this way, the robot learns to recognize new behaviors.

  10. Concurrent planning and execution for a walking robot

    NASA Astrophysics Data System (ADS)

    Simmons, Reid

    1990-07-01

    The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance was increased in excess of 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.

  11. Experimental evaluation of dynamic data allocation strategies in a distributed database with changing workloads

    NASA Technical Reports Server (NTRS)

    Brunstrom, Anna; Leutenegger, Scott T.; Simha, Rahul

    1995-01-01

    Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns, but is often inadequate for frequently changing workloads. In this paper we address how to dynamically reallocate data for partitionable distributed databases with changing access patterns. Rather than complicated and expensive optimization algorithms, a simple heuristic is presented and shown, via an implementation study, to improve system throughput by 30 percent in a local area network based system. Based on artificial wide area network delays, we show that dynamic reallocation can improve system throughput by a factor of two and a half for wide area networks. We also show that individual site load must be taken into consideration when reallocating data, and provide a simple policy that incorporates load in the reallocation decision.
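
    The abstract does not spell out the heuristic; as a hedged sketch of a load-aware reallocation policy in the same spirit, the rule below migrates a partition to its dominant accessing site unless that site is already overloaded. The rule, names, and threshold are assumptions for illustration, not the paper's policy:

```python
# Illustrative load-aware reallocation heuristic (not the paper's).
# Rule: move a partition to the site issuing most of its accesses,
# but only if that site's utilization is below a threshold.

def reallocate(partition_site, access_counts, site_load, max_load=0.8):
    """partition_site: {partition: current site}
    access_counts: {partition: {site: access count}}
    site_load: {site: utilization in [0, 1]}
    Returns a new {partition: site} placement."""
    placement = dict(partition_site)
    for part, counts in access_counts.items():
        hottest = max(counts, key=counts.get)
        # Migrate only when another site dominates and is not overloaded.
        if hottest != placement[part] and site_load[hottest] <= max_load:
            placement[part] = hottest
    return placement

placement = reallocate(
    {"p1": "A", "p2": "A"},
    {"p1": {"A": 5, "B": 50}, "p2": {"A": 40, "B": 40}},
    {"A": 0.5, "B": 0.6},
)
print(placement)  # p1 moves to B; p2 stays on A
```

The load check is what the paper's results argue for: chasing access locality alone can pile hot partitions onto one site and lose the throughput gain.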

  12. Development of a web-based video management and application processing system

    NASA Astrophysics Data System (ADS)

    Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting

    2001-07-01

    How to facilitate efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia databases and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) Concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) A versatile video retrieval mechanism which employs a hybrid approach by integrating a query-based (database) mechanism with content-based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects, and also offers an improved mechanism to describe the visual content of videos by a content-based analysis method. (3) A query profiling database which records the 'histories' of various clients' query activities; such profiles can be used to provide the default query template when a similar query is encountered by the same kind of user. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.
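
    The abstract names the user roles and says they carry different priorities for database operations, without giving the mechanism. A minimal sketch of role-priority arbitration over an edit lock is below; the numeric priorities and preemption rule are assumptions for illustration, not VideoMAP's actual protocol:

```python
# Sketch of role-based priority arbitration for video editing.
# Role names come from the abstract; the numeric priorities and the
# "higher priority preempts" rule are illustrative assumptions.

PRIORITY = {
    "Video Administrator": 3,
    "Video Producer": 2,
    "Video Editor": 1,
    "Video Query Client": 0,  # query-only users get lowest priority
}

class VideoLock:
    def __init__(self):
        self.holder = None  # (role, priority) or None

    def acquire(self, role):
        p = PRIORITY[role]
        if self.holder is None or p > self.holder[1]:
            self.holder = (role, p)  # take over from lower-priority holder
            return True
        return False  # refused: an equal/higher-priority user holds it

lock = VideoLock()
print(lock.acquire("Video Editor"))        # granted
print(lock.acquire("Video Query Client"))  # refused
print(lock.acquire("Video Producer"))      # granted, preempts the editor
```

A production system would queue refused requests and roll back the preempted editor's uncommitted changes rather than drop them.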

  13. Polygamy of entanglement in multipartite quantum systems

    NASA Astrophysics Data System (ADS)

    Kim, Jeong San

    2009-08-01

    We show that bipartite entanglement distribution (or entanglement of assistance) in multipartite quantum systems is by nature polygamous. We first provide an analytical upper bound for the concurrence of assistance in bipartite quantum systems and derive a polygamy inequality of multipartite entanglement in arbitrary-dimensional quantum systems.
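
    For orientation, the polygamy inequalities in this literature are commonly written in a form like the following, where $C$ denotes concurrence and $C^a$ the concurrence of assistance; this is a hedged sketch of the standard notation, and the paper should be consulted for its exact statement and dimensional assumptions:

```latex
% Three-party form (dual to the CKW monogamy inequality):
C^{2}_{A(BC)} \;\le\; \left(C^{a}_{AB}\right)^{2} + \left(C^{a}_{AC}\right)^{2}
% Multipartite generalization for an n-party state:
C^{2}_{A(B_{1}\cdots B_{n-1})} \;\le\; \sum_{i=1}^{n-1} \left(C^{a}_{AB_{i}}\right)^{2}
```

Whereas monogamy bounds how much entanglement one party can share pairwise with the rest, polygamy says the assisted entanglement across the cut can be covered by the pairwise concurrences of assistance.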

  14. Global Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav

    2015-11-01

    Global Arrays (GA) is a distributed-memory programming model that combines shared-memory-style programming with one-sided communication to create a set of tools that deliver high performance with ease of use. GA exposes a relatively straightforward programming abstraction, while supporting fully distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes, which was being developed concurrently.
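
    The real GA API is a C/Fortran library; the Python mock below only illustrates the programming model the abstract describes: a logically shared array whose chunks are owned by different processes, accessed with one-sided put/get that does not require the owner's participation. The class and its methods are invented for illustration:

```python
# Conceptual mock of Global Arrays-style one-sided access.
# Not the GA library API: this class only illustrates the model of a
# logically shared array partitioned into per-process chunks, where
# any caller puts/gets a block without the owner posting a receive.

class GlobalArray:
    def __init__(self, n, nprocs):
        self.chunk_len = n // nprocs
        self.chunks = {p: [0.0] * self.chunk_len for p in range(nprocs)}

    def _locate(self, i):
        # Map a global index to (owner process, local index).
        return divmod(i, self.chunk_len)

    def put(self, lo, values):
        for k, v in enumerate(values):
            owner, local = self._locate(lo + k)
            self.chunks[owner][local] = v  # one-sided write into owner's chunk

    def get(self, lo, hi):
        return [self.chunks[o][l]
                for o, l in map(self._locate, range(lo, hi))]

ga = GlobalArray(n=8, nprocs=4)
ga.put(3, [1.0, 2.0, 3.0])   # spans the chunks owned by ranks 1 and 2
print(ga.get(2, 7))          # [0.0, 1.0, 2.0, 3.0, 0.0]
```

The point of the model is exactly this put/get spanning ownership boundaries: the caller addresses the global index space, and the library resolves locality.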

  15. Building the Infrastructure of Resource Sharing: Union Catalogs, Distributed Search, and Cross-Database Linkage.

    ERIC Educational Resources Information Center

    Lynch, Clifford A.

    1997-01-01

    Union catalogs and distributed search systems are two ways users can locate materials in print and electronic formats. This article examines the advantages and limitations of both approaches and argues that they should be considered complementary rather than competitive. Discusses technologies creating linkage between catalogs and databases and…

  16. DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY (DSSTOX) DATABASE NETWORK: MAKING PUBLIC TOXICITY DATA RESOURCES MORE ACCESSIBLE AND USABLE FOR DATA EXPLORATION AND SAR DEVELOPMENT

    EPA Science Inventory


    Distributed Structure-Searchable Toxicity (DSSTox) Database Network: Making Public Toxicity Data Resources More Accessible and Usable for Data Exploration and SAR Development

    Many sources of public toxicity data are not currently linked to chemical structure, are not ...

  17. Score Distributions of the Balance Outcome Measure for Elder Rehabilitation (BOOMER) in Community-Dwelling Older Adults With Vertebral Fracture.

    PubMed

    Brown, Zachary M; Gibbs, Jenna C; Adachi, Jonathan D; Ashe, Maureen C; Hill, Keith D; Kendler, David L; Khan, Aliya; Papaioannou, Alexandra; Prasad, Sadhana; Wark, John D; Giangregorio, Lora M

    2017-11-28

    We sought to evaluate the Balance Outcome Measure for Elder Rehabilitation (BOOMER) in community-dwelling women 65 years and older with vertebral fracture and to describe score distributions and potential ceiling and floor effects. This was a secondary analysis of baseline data from the Build Better Bones with Exercise randomized controlled trial using the BOOMER. A total of 141 women with osteoporosis and radiographically confirmed vertebral fracture were included. Concurrent validity and internal consistency were assessed in comparison to the Short Physical Performance Battery (SPPB). Normality and ceiling/floor effects of total BOOMER scores and component test items were also assessed. Exploratory analyses of assistive aid use and falls history were performed. Tests for concurrent validity demonstrated moderate correlation between total BOOMER and SPPB scores. The BOOMER component tests showed modest internal consistency. A substantial ceiling effect and nonnormal score distributions were present for total BOOMER scores in the overall sample and among those not using assistive aids, although scores were normally distributed for those using assistive aids. The static standing with eyes closed test demonstrated the greatest ceiling effect of the component tests, with 92% of participants achieving a maximal score. While the BOOMER compares well with the SPPB in community-dwelling women with vertebral fractures, researchers or clinicians considering using the BOOMER in similar or higher-functioning populations should be aware of the potential for ceiling effects.

  18. SPANG: a SPARQL client supporting generation and reuse of queries for distributed RDF databases.

    PubMed

    Chiba, Hirokazu; Uchiyama, Ikuo

    2017-02-08

    Toward improved interoperability of distributed biological databases, an increasing number of datasets have been published in the standardized Resource Description Framework (RDF). Although the powerful SPARQL Protocol and RDF Query Language (SPARQL) provides a basis for exploiting RDF databases, writing SPARQL code is burdensome for users including bioinformaticians. Thus, an easy-to-use interface is necessary. We developed SPANG, a SPARQL client that has unique features for querying RDF datasets. SPANG dynamically generates typical SPARQL queries according to specified arguments. It can also call SPARQL template libraries constructed in a local system or published on the Web. Further, it enables combinatorial execution of multiple queries, each with a distinct target database. These features facilitate easy and effective access to RDF datasets and integrative analysis of distributed data. SPANG helps users to exploit RDF datasets by generation and reuse of SPARQL queries through a simple interface. This client will enhance integrative exploitation of biological RDF datasets distributed across the Web. This software package is freely available at http://purl.org/net/spang.
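
    The abstract says SPANG generates typical SPARQL queries from specified arguments; as a hedged sketch of that idea (the generated pattern is generic SPARQL, not SPANG's actual output, and the example URIs are invented), a query template can be filled from arguments like this:

```python
# Sketch of argument-driven SPARQL generation, in the spirit described.
# The output is a generic SPARQL SELECT pattern; the class/predicate
# URIs below are illustrative placeholders, not real vocabulary terms.

def make_query(subject_class, predicate, limit=10):
    """Build a SPARQL query listing subjects of a class and one
    property value per subject."""
    return (
        f"SELECT ?s ?o WHERE {{\n"
        f"  ?s a <{subject_class}> ;\n"
        f"     <{predicate}> ?o .\n"
        f"}} LIMIT {limit}"
    )

q = make_query("http://example.org/Gene", "http://example.org/encodes")
print(q)
```

Template libraries then become a catalog of such functions, so a user supplies arguments instead of hand-writing SPARQL for each endpoint.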

  19. ChEMBL web services: streamlining access to drug discovery data and utilities

    PubMed Central

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P.

    2015-01-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. PMID:25883136
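
    Since the abstract documents the data-service base URL (https://www.ebi.ac.uk/chembl/api/data), a small sketch of composing endpoint URLs against it follows; the resource name and query parameters used here are illustrative assumptions and should be checked against the live web-service documentation before use:

```python
# Sketch of composing ChEMBL data-service URLs from the base given in
# the abstract.  The "molecule" resource and the format/limit
# parameters are assumptions for illustration.
from urllib.parse import urlencode

BASE = "https://www.ebi.ac.uk/chembl/api/data"

def endpoint(resource, **params):
    """Build a request URL for one RESTful resource."""
    url = f"{BASE}/{resource}"
    if params:
        url += "?" + urlencode(sorted(params.items()))
    return url

url = endpoint("molecule", format="json", limit=5)
print(url)
```

Keeping URL construction in one helper makes it easy to point the same workflow at the utility service base instead of the data service.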

  20. A Data-Based Console Logger for Mission Operations Team Coordination

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Malin, Jane T.; Jenks, Kenneth; Overland, David; Oliver, Patrick; Zhang, Jiajie; Gong, Yang; Zhang, Tao

    2005-01-01

    Concepts and prototypes [1,2] are discussed for a data-based console logger (D-Logger) to meet new challenges for coordination among flight controllers arising from new exploration mission concepts. The challenges include communication delays, increased crew autonomy, multiple concurrent missions, reduced-size flight support teams that include multidisciplinary flight controllers during quiescent periods, and migrating some flight support activities to flight controller offices. A spiral development approach has been adopted, making simple, but useful functions available early and adding more extensive support later. Evaluations have guided the development of the D-Logger from the beginning and continue to provide valuable user influence about upcoming requirements. D-Logger is part of a suite of tools designed to support future operations personnel and crew. While these tools can be used independently, when used together, they provide yet another level of support by interacting with one another. Recommendations are offered for the development of similar projects.

  1. Plans for the extreme ultraviolet explorer data base

    NASA Technical Reports Server (NTRS)

    Marshall, Herman L.; Dobson, Carl A.; Malina, Roger F.; Bowyer, Stuart

    1988-01-01

    The paper presents an approach for storage and fast access to data that will be obtained by the Extreme Ultraviolet Explorer (EUVE), a satellite payload scheduled for launch in 1991. The EUVE telescopes will be operated remotely from the EUVE Science Operation Center (SOC) located at the University of California, Berkeley. The EUVE science payload consists of three scanning telescopes carrying out an all-sky survey in the 80-800 A spectral region and a Deep Survey/Spectrometer telescope performing a deep survey in the 80-250 A spectral region. Guest Observers will remotely access the EUVE spectrometer database at the SOC. The EUVE database will consist of about 2 × 10^10 bytes of information in a very compact form, very similar to the raw telemetry data. A history file will be built concurrently giving telescope parameters, command history, attitude summaries, engineering summaries, anomalous events, and ephemeris summaries.

  2. Compressing DNA sequence databases with coil.

    PubMed

    White, W Timothy J; Hendy, Michael D

    2008-05-20

    Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  3. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794
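The gzip baseline the abstract compares against (standard Lempel-Ziv compression of a flat file of sequences) can be reproduced with a quick ratio measurement on synthetic EST-like data. coil itself is not invoked here, and the record format is illustrative:

```python
# Measure the gzip compression ratio on a synthetic "flat file" of many short,
# highly similar sequences, mimicking EST data. This reproduces the baseline
# the coil paper compares against; coil itself is a separate tool.
import gzip
import random

random.seed(1)
base = "".join(random.choice("ACGT") for _ in range(400))
records = []
for i in range(200):
    seq = list(base)
    for _ in range(5):  # a few point mutations per record
        pos = random.randrange(len(seq))
        seq[pos] = random.choice("ACGT")
    records.append(f">EST_{i}\n{''.join(seq)}\n")
flat = "".join(records).encode()

compressed = gzip.compress(flat, compresslevel=9)
ratio = len(compressed) / len(flat)
print(f"flat: {len(flat)} B, gzip: {len(compressed)} B, ratio: {ratio:.3f}")
```

Even on data this redundant, general-purpose Lempel-Ziv compression leaves room for improvement, which is the gap that sequence-aware schemes such as edit-tree coding aim to close.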

  4. Benchmarking distributed data warehouse solutions for storing genomic variant information

    PubMed Central

    Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.

    2017-01-01

    Abstract Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not been sufficiently explored in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management System (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated content of genomic variants and phenotypic data. Next, we have benchmarked the performance of a number of combinations of distributed storages and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) has been used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet with Spark 2 query engines provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where low-latency response is expected, while still offering decent performance for running analytical queries.
    In summary, research and clinical applications that require the storage and analysis of variants from thousands of samples can benefit from the scalability and performance of distributed data warehouse solutions. Database URL: https://github.com/ZSI-Bio/variantsdwh PMID:29220442
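The "simple genome range query" workload described above can be sketched against a single-node stand-in. SQLite replaces the distributed engines, and the schema below is an illustrative assumption, not the benchmark's actual table layout:

```python
# A minimal stand-in for a genome range query over a variants table.
# SQLite here substitutes for the distributed engines benchmarked in the
# paper; the schema and column names are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE variants ("
    " sample_id TEXT, chrom TEXT, pos INTEGER, ref TEXT, alt TEXT)"
)
con.executemany(
    "INSERT INTO variants VALUES (?, ?, ?, ?, ?)",
    [
        ("S1", "chr1", 10500, "A", "G"),
        ("S1", "chr2", 20000, "C", "T"),
        ("S2", "chr1", 11200, "G", "C"),
        ("S2", "chr1", 90000, "T", "A"),
    ],
)
# An index on (chrom, pos) is what makes range scans cheap in a row store;
# columnar engines achieve a similar effect via partitioning and sorting.
con.execute("CREATE INDEX idx_pos ON variants(chrom, pos)")

rows = con.execute(
    "SELECT sample_id, pos, ref, alt FROM variants"
    " WHERE chrom = ? AND pos BETWEEN ? AND ?",
    ("chr1", 10000, 12000),
).fetchall()
print(rows)  # variants on chr1 within the queried window
```

It is exactly this low-latency, small-result query shape on which the paper reports Kudu's sub-second advantage over the analytical back-ends.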

  5. Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1990-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.

  6. Collection Fusion Using Bayesian Estimation of a Linear Regression Model in Image Databases on the Web.

    ERIC Educational Resources Information Center

    Kim, Deok-Hwan; Chung, Chin-Wan

    2003-01-01

    Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…

  7. Geodata Modeling and Query in Geographic Information Systems

    NASA Technical Reports Server (NTRS)

    Adam, Nabil

    1996-01-01

    Geographic information systems (GIS) deal with collecting, modeling, managing, analyzing, and integrating spatial (locational) and non-spatial (attribute) data required for geographic applications. Examples of spatial data are digital maps, administrative boundaries, and road networks; examples of non-spatial data are census counts, land elevations, and soil characteristics. GIS shares common areas with a number of other disciplines such as computer-aided design, computer cartography, database management, and remote sensing. None of these disciplines, however, can by itself fully meet the requirements of a GIS application. Examples of such requirements include: the ability to use locational data to produce high quality plots, perform complex operations such as network analysis, enable spatial searching and overlay operations, support spatial analysis and modeling, and provide data management functions such as efficient storage, retrieval, and modification of large datasets; independence, integrity, and security of data; and concurrent access by multiple users. It is to these data management issues that we devote our discussion in this monograph. Traditionally, database management technology has been developed for business applications. Such applications require, among other things, capturing the data requirements of high-level business functions and developing machine-level implementations; supporting multiple views of data and yet providing integration that would minimize redundancy and maintain data integrity and security; providing a high-level language for data definition and manipulation; allowing concurrent access by multiple users; and processing user transactions in an efficient manner. The demands on database management systems have been for speed, reliability, efficiency, cost effectiveness, and user-friendliness.
    Significant progress has been made in all of these areas over the last two decades, to the point that many generalized database platforms are now available for developing data intensive applications that run in real-time. While continuous improvement is still being made at a very fast and competitive pace, new application areas such as computer aided design, image processing, VLSI design, and GIS have been identified by many as the next generation of database applications. These new application areas pose serious challenges to the currently available database technology. At the core of these challenges is the nature of the data that is manipulated. In traditional database applications, the database objects do not have any spatial dimension, and as such, can be thought of as point data in a multi-dimensional space. For example, each instance of an entity EMPLOYEE will have a unique value corresponding to every attribute such as employee id, employee name, employee address and so on. Thus, every Employee instance can be thought of as a point in a multi-dimensional space where each dimension is represented by an attribute. Furthermore, all operations on such data are one-dimensional. Thus, users may retrieve all entities satisfying one or more constraints. Examples of such constraints include employees with addresses in a certain area code, or salaries within a certain range. Even though constraints can be specified on multiple attributes (dimensions), the search for such data is essentially orthogonal across these dimensions.
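The "point data" view of traditional records described above can be made concrete: each entity instance is a point in attribute space, and a query applies an independent (orthogonal) constraint per dimension. The entity and attribute names below are illustrative:

```python
# Each record is a point in attribute space; querying applies one independent
# constraint per attribute (dimension). Names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Employee:
    emp_id: int
    name: str
    area_code: str
    salary: float

employees = [
    Employee(1, "Ada", "201", 72000.0),
    Employee(2, "Grace", "415", 95000.0),
    Employee(3, "Edsger", "201", 58000.0),
]

def select(records, area_code=None, salary_range=None):
    """Apply orthogonal constraints, one per attribute dimension."""
    out = records
    if area_code is not None:
        out = [e for e in out if e.area_code == area_code]
    if salary_range is not None:
        lo, hi = salary_range
        out = [e for e in out if lo <= e.salary <= hi]
    return out

hits = select(employees, area_code="201", salary_range=(60000, 100000))
print([e.name for e in hits])
```

Spatial objects break this model: a polygon cannot be reduced to a single point per dimension, so containment and overlay queries cannot be expressed as independent per-attribute filters, which is the challenge GIS poses to traditional databases.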

  8. Aircraft Emission Inventories Projected in Year 2015 for a High Speed Civil Transport (HSCT) Universal Airline Network

    NASA Technical Reports Server (NTRS)

    Baughcum, Steven L.; Henderson, Stephen C.

    1995-01-01

    This report describes the development of a three-dimensional database of aircraft fuel burn and emissions (fuel burned, NOx, CO, and hydrocarbons) from projected fleets of high speed civil transports (HSCT's) on a universal airline network. Inventories for 500 and 1000 HSCT fleets, as well as the concurrent subsonic fleets, were calculated. The objective of this work was to evaluate the changes in geographical distribution of the HSCT emissions as the fleet size grew from 500 to 1000 HSCT's. For this work, a new expanded HSCT network was used and flights projected using a market penetration analysis rather than assuming equal penetration as was done in the earlier studies. Emission inventories on this network were calculated for both Mach 2.0 and Mach 2.4 HSCT fleets with NOx cruise emission indices of approximately 5 and 15 grams NOx/kg fuel. These emissions inventories are available for use by atmospheric scientists conducting the Atmospheric Effects of Stratospheric Aircraft (AESA) modeling studies. Fuel burned and emissions of nitrogen oxides (NOx as NO2), carbon monoxide, and hydrocarbons have been calculated on a 1 degree latitude x 1 degree longitude x 1 kilometer altitude grid and delivered to NASA as electronic files.

  9. Does reimportation reduce price differences for prescription drugs? Lessons from the European Union.

    PubMed

    Kyle, Margaret K; Allsbrook, Jennifer S; Schulman, Kevin A

    2008-08-01

    To examine the effect of parallel trade on patterns of price dispersion for prescription drugs in the European Union. Longitudinal data from an IMS Midas database of prices and units sold for drugs in 36 categories in 30 countries from 1993 through 2004. The main outcome measures were mean price differentials and other measures of price dispersion within European Union countries compared with within non-European Union countries. We identified drugs subject to parallel trade using information provided by IMS and by checking membership lists of parallel import trade associations and lists of approved parallel imports. Parallel trade was not associated with substantial reductions in price dispersion in European Union countries. In descriptive and regression analyses, about half of the price differentials exceeded 50 percent in both European Union and non-European Union countries over time, and price distributions among European Union countries did not show a dramatic change concurrent with the adoption of parallel trade. In regression analysis, we found that although price differentials decreased after 1995 in most countries, they decreased less in the European Union than elsewhere. Parallel trade for prescription drugs does not automatically reduce international price differences. Future research should explore how other regulatory schemes might lead to different results elsewhere.

  10. Appraisal of levels and patterns of occupational exposure to 1,3-butadiene.

    PubMed

    Scarselli, Alberto; Corfiati, Marisa; Di Marzi, Davide; Iavicoli, Sergio

    2017-09-01

    Objectives 1,3-butadiene is classified as carcinogenic to humans by inhalation, and an association with leukemia has been observed in several epidemiological studies. The aim of this study was to evaluate data about occupational exposure levels to 1,3-butadiene in the Italian workforce. Methods Airborne concentrations of 1,3-butadiene were extracted from the Italian database on occupational exposure to carcinogens in the period 1996-2015. Descriptive statistics were calculated for exposure-related variables. An analysis through a linear mixed model was performed to determine factors influencing the exposure level. The probability of exceeding the exposure limit was predicted using a mixed-effects logistic model. Concurrent exposures with other occupational carcinogens were investigated using two-step cluster analysis. Results The total number of exposure measurements selected was 23 885, with an overall arithmetic mean of 0.12 mg/m3. The economic sector with the highest number of measurements was manufacturing of chemicals (18 744). The most predictive variables of exposure level were the occupational group and its interaction with the measurement year. The highest likelihood of exceeding the exposure limit was found in the manufacture of coke and refined petroleum products. Concurrent exposures were frequently detected, mainly with benzene, acrylonitrile and ethylene dichloride, and three main clusters were identified. Conclusions Exposure to 1,3-butadiene occurs in a wide variety of activity sectors and occupational groups. The use of several statistical analysis methods applied to occupational exposure databases can help to identify exposure situations at high risk for workers' health and better target preventive interventions and research projects.

  11. Measuring spirituality and religiosity in clinical research: a systematic review of instruments available in the Portuguese language.

    PubMed

    Lucchetti, Giancarlo; Lucchetti, Alessandra Lamas Granero; Vallada, Homero

    2013-01-01

    Despite numerous spirituality and/or religiosity (S/R) measurement tools for use in research worldwide, there is little information on S/R instruments in the Portuguese language. The aim of the present study was to map out the S/R scales available for research in the Portuguese language. Systematic review of studies found in databases. A systematic review was conducted in three phases. Phases 1 and 2: articles in Portuguese, Spanish and English, published up to November 2011, dealing with the Portuguese translation and/or validation of S/R measurement tools for clinical research, were selected from six databases. Phase 3: the instruments were grouped according to authorship, cross-cultural adaptation, internal consistency, concurrent and discriminative validity and test-retest procedures. Twenty instruments were found. Forty-five percent of these evaluated religiosity, 40% spirituality, 10% religious/spiritual coping and 5% S/R. Among these, 90% had been produced in (n = 3) or translated to (n = 15) Brazilian Portuguese and two (10%) solely to European Portuguese. Nevertheless, the majority of the instruments had not undergone in-depth psychometric analysis. Only 40% of the instruments presented concurrent validity, 45% discriminative validity and 15% a test-retest procedure. The characteristics of each instrument were analyzed separately, yielding advantages, disadvantages and psychometric properties. Currently, 20 instruments for measuring S/R are available in the Portuguese language. Most have been translated (n = 15) or developed (n = 3) in Brazil and present good internal consistency. Nevertheless, few instruments have been assessed regarding all their psychometric qualities.

  12. DISTRIBUTION OF AQUATIC OFF-CHANNEL HABITATS AND ASSOCIATED RIPARIAN VEGETATION, WILLAMETTE RIVER, OREGON, USA

    EPA Science Inventory

    The extent of aquatic off-channel habitats such as secondary and side channels, sloughs, and alcoves, have been reduced more than 50% since the 1850s along the upper main stem of the Willamette River, Oregon, USA. Concurrently, the hydrogeomorphic potential, and associated flood...

  13. Dental Manpower Fact Book.

    ERIC Educational Resources Information Center

    Ake, James N.; Johnson, Donald W.

    Statistical data on many aspects of dental and allied dental personnel supply, distribution, characteristics, and education and on certain other aspects of dental services are presented and discussed. The data on dentist supply show the national trend in the supply of active dentists since 1950 and the concurrent changes in dentist-to-population…

  14. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    PubMed

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of data, 2) the database model and architecture, describing how data will be presented to end users and how new data is integrated, 3) the life cycle of the database, in order to ensure long term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that form the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long term scalability to the system and resilience to further changes that may occur in the source systems feeding the data warehouse.

  15. The Make 2D-DB II package: conversion of federated two-dimensional gel electrophoresis databases into a relational format and interconnection of distributed databases.

    PubMed

    Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D

    2003-08-01

    The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.

  16. Establishment of an international database for genetic variants in esophageal cancer.

    PubMed

    Vihinen, Mauno

    2016-10-01

    The establishment of a database has been suggested in order to collect, organize, and distribute genetic information about esophageal cancer. The World Organization for Specialized Studies on Diseases of the Esophagus and the Human Variome Project will be in charge of a central database of information about esophageal cancer-related variations from publications, databases, and laboratories; in addition to genetic details, clinical parameters will also be included. The aim will be to get all the central players in research, clinical, and commercial laboratories to contribute. The database will follow established recommendations and guidelines. The database will require a team of dedicated curators with different backgrounds. Numerous layers of systematics will be applied to facilitate computational analyses. The data items will be extensively integrated with other information sources. The database will be distributed as open access to ensure exchange of the data with other databases. Variations will be reported in relation to reference sequences on three levels (DNA, RNA, and protein) whenever applicable. In the first phase, the database will concentrate on genetic variations including both somatic and germline variations for susceptibility genes. Additional types of information can be integrated at a later stage. © 2016 New York Academy of Sciences.

  17. A model for the distributed storage and processing of large arrays

    NASA Technical Reports Server (NTRS)

    Mehrota, P.; Pratt, T. W.

    1983-01-01

    A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.
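The block-distribution idea behind such a model can be sketched in a few lines. This is a pure-Python illustration of a contiguous block mapping of array indices to nodes, not the paper's Ada package:

```python
# Sketch of distributing a large logical array across the local memories of a
# multiprocessor: contiguous index blocks, one per node. Function names are
# illustrative, not taken from the paper's Ada package.
def block_distribute(n, num_nodes):
    """Map indices 0..n-1 to nodes in contiguous, nearly equal blocks."""
    base, extra = divmod(n, num_nodes)
    layout = {}
    start = 0
    for node in range(num_nodes):
        size = base + (1 if node < extra else 0)
        layout[node] = range(start, start + size)
        start += size
    return layout

def owner(i, layout):
    """Which node stores element i?"""
    for node, idx_range in layout.items():
        if i in idx_range:
            return node
    raise IndexError(i)

layout = block_distribute(10, 3)  # blocks of size 4, 3, 3
print({node: list(r) for node, r in layout.items()})
```

Making the owner of each element computable from the layout is what lets a language runtime route accesses to the right local memory, which is the performance reality such a semantic model needs to expose.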

  18. Retrospective Cohort Analysis of Chest Injury Characteristics and Concurrent Injuries in Patients Admitted to Hospital in the Wenchuan and Lushan Earthquakes in Sichuan, China

    PubMed Central

    Yuan, Yong; Zhao, Yong-Fan

    2014-01-01

    Background The aim of this study was to compare retrospectively the characteristics of chest injuries and frequencies of other, concurrent injuries in patients after earthquakes of different seismic intensity. Methods We compared the cause, type, and body location of chest injuries as well as the frequencies of other, concurrent injuries in patients admitted to our hospital after the Wenchuan and Lushan earthquakes in Sichuan, China. We explored possible relationships between seismic intensity and the causes and types of injuries, and we assessed the ability of the Injury Severity Score, New Injury Severity Score, and Chest Injury Index to predict respiratory failure in chest injury patients. Results The incidence of chest injuries was 9.9% in the stronger Wenchuan earthquake and 22.2% in the less intensive Lushan earthquake. The most frequent cause of chest injuries in both earthquakes was being accidentally struck. Injuries due to falls were less prevalent in the stronger Wenchuan earthquake, while injuries due to burial were more prevalent. The distribution of types of chest injury did not vary significantly between the two earthquakes, with rib fractures and pulmonary contusions the most frequent types. Spinal and head injuries concurrent with chest injuries were more prevalent in the less violent Lushan earthquake. All three trauma scoring systems showed poor ability to predict respiratory failure in patients with earthquake-related chest injuries. Conclusions Previous studies may have underestimated the incidence of chest injury in violent earthquakes. The distributions of types of chest injury did not differ between these two earthquakes of different seismic intensity. Earthquake severity and interval between rescue and treatment may influence the prevalence and types of injuries that co-occur with the chest injury. Trauma evaluation scores on their own are inadequate predictors of respiratory failure in patients with earthquake-related chest injuries. 
PMID:24816485

  19. Retrospective cohort analysis of chest injury characteristics and concurrent injuries in patients admitted to hospital in the Wenchuan and Lushan earthquakes in Sichuan, China.

    PubMed

    Zheng, Xi; Hu, Yang; Yuan, Yong; Zhao, Yong-Fan

    2014-01-01

    The aim of this study was to compare retrospectively the characteristics of chest injuries and frequencies of other, concurrent injuries in patients after earthquakes of different seismic intensity. We compared the cause, type, and body location of chest injuries as well as the frequencies of other, concurrent injuries in patients admitted to our hospital after the Wenchuan and Lushan earthquakes in Sichuan, China. We explored possible relationships between seismic intensity and the causes and types of injuries, and we assessed the ability of the Injury Severity Score, New Injury Severity Score, and Chest Injury Index to predict respiratory failure in chest injury patients. The incidence of chest injuries was 9.9% in the stronger Wenchuan earthquake and 22.2% in the less intensive Lushan earthquake. The most frequent cause of chest injuries in both earthquakes was being accidentally struck. Injuries due to falls were less prevalent in the stronger Wenchuan earthquake, while injuries due to burial were more prevalent. The distribution of types of chest injury did not vary significantly between the two earthquakes, with rib fractures and pulmonary contusions the most frequent types. Spinal and head injuries concurrent with chest injuries were more prevalent in the less violent Lushan earthquake. All three trauma scoring systems showed poor ability to predict respiratory failure in patients with earthquake-related chest injuries. Previous studies may have underestimated the incidence of chest injury in violent earthquakes. The distributions of types of chest injury did not differ between these two earthquakes of different seismic intensity. Earthquake severity and interval between rescue and treatment may influence the prevalence and types of injuries that co-occur with the chest injury. Trauma evaluation scores on their own are inadequate predictors of respiratory failure in patients with earthquake-related chest injuries.

  20. Analysis of Lunar Highland Regolith Samples from Apollo 16 Drive Core 64001/2 and Lunar Regolith Simulants - An Expanding Comparative Database

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Doug; Wentworth, Susan J.; Botha, Pieter WSK; Butcher, Alan R.; McKay, David; Horsch, Hanna; Benedictus, Aukje; Gottlieb, Paul

    2008-01-01

    We present modal data from QEMSCAN(registered TradeMark) beam analysis of Apollo 16 samples from drive core 64001/2. The analyzed lunar samples are thin sections 64002,6019 (5.0-8.0 cm depth) and 64001,6031 (50.0-53.1 cm depth) and sieved grain mounts 64002,262 and 64001,374 from depths corresponding to the thin sections, respectively. We also analyzed lunar highland regolith simulants NU-LHT-1M, -2M, and OB-1, low-Ti mare simulants JSC-1, -lA, -1AF, and FJS-1, and high-Ti mare simulant MLS-1. The preliminary results comprise the beginning of an internally consistent database of lunar regolith and regolith simulant mineral and glass information. This database, combined with previous and concurrent studies on phase chemistry and bulk chemistry, and with data on particle shape and size distribution, will serve to guide lunar scientists and engineers in choosing simulants for their applications. These results are modal% by phase rather than by particle type, so they are not directly comparable to most previously published lunar data that report lithic fragments, monomineralic particles, agglutinates, etc. Of the highland simulants, OB-1 has an integrated modal composition closer than NU-LHT-1M to that of the 64001/2 samples. However, this and other studies show that NU-LHT-1M and -2M have minor and trace mineral (e.g., Fe-Ti oxides and phosphates) populations and mineral and glass chemistry closer to these lunar samples. The finest fractions (0-20 microns) in the sieved lunar samples are enriched in glass relative to the integrated compositions by approx. 30% for 64002,262 and approx. 15% for 64001,374. Plagioclase, pyroxene, and olivine are depleted in these finest fractions. This could be important to lunar dust mitigation efforts and astronaut health; none of the analyzed simulants show this trend. Contrary to previously reported modal analyses of monomineralic grains in lunar regolith, these area% modal analyses do not show a systematic increase in plagioclase/pyroxene as size fraction decreases.

  1. The VLBA correlator: Real-time in the distributed era

    NASA Technical Reports Server (NTRS)

    Wells, D. C.

    1992-01-01

    The correlator is the signal processing engine of the Very Long Baseline Array (VLBA). Radio signals are recorded on special wideband (128 Mb/s) digital recorders at the 10 telescopes, with sampling times controlled by hydrogen maser clocks. The magnetic tapes are shipped to the Array Operations Center in Socorro, New Mexico, where they are played back simultaneously into the correlator. Real-time software and firmware control the playback drives to achieve synchronization, compute models of the wavefront delay, control the numerous modules of the correlator, and record FITS files of the fringe visibilities at the back-end of the correlator. In addition to the more than 3000 custom VLSI chips which handle the massive data flow of the signal processing, the correlator contains a total of more than 100 programmable computers with 8-, 16-, and 32-bit CPUs. Code is downloaded into front-end CPUs depending on operating mode. Low-level code is assembly language; high-level code is C running under a real-time OS. We use VxWorks on Motorola MVME147 CPUs. Code development is on a complex of SPARC workstations connected to the RT CPUs by Ethernet. The overall management of the correlation process is dependent on a database management system. We use Ingres running on a Sparcstation-2. We transfer logging information from the database of the VLBA Monitor and Control System to our database using Ingres/NET. Job scripts are computed and are transferred to the real-time computers using NFS, and correlation job execution logs and status flow back by the same route. Operator status and control displays use windows on workstations, interfaced to the real-time processes by network protocols. The extensive network protocol support provided by VxWorks is invaluable. The VLBA Correlator's dependence on network protocols is an example of the radical transformation of the real-time world over the past five years. Real-time is becoming more like conventional computing. Paradoxically, 'conventional' computing is also adopting practices from the real-time world: semaphores, shared memory, light-weight threads, and concurrency. This appears to be a convergence of thinking.

  2. A design for the geoinformatics system

    NASA Astrophysics Data System (ADS)

    Allison, M. L.

    2002-12-01

    Informatics integrates and applies information technologies with scientific and technical disciplines. A geoinformatics system targets the spatially based sciences. The system is not a master database, but will collect pertinent information from disparate databases distributed around the world. Seamless interoperability of databases promises quantum leaps in productivity not only for scientific researchers but also for many areas of society including business and government. The system will incorporate: acquisition of analog and digital legacy data; efficient information and data retrieval mechanisms (via data mining and web services); accessibility to and application of visualization, analysis, and modeling capabilities; online workspace, software, and tutorials; GIS; integration with online scientific journal aggregates and digital libraries; access to real time data collection and dissemination; user-defined automatic notification and quality control filtering for selection of new resources; and application to field techniques such as mapping. In practical terms, such a system will provide the ability to gather data over the Web from a variety of distributed sources, regardless of computer operating systems, database formats, and servers. Search engines will gather data about any geographic location, above, on, or below ground, covering any geologic time, and at any scale or detail. A distributed network of digital geolibraries can archive permanent copies of databases at risk of being discontinued and those that continue to be maintained by the data authors. The geoinformatics system will generate results from widely distributed sources to function as a dynamic data network. Instead of posting a variety of pre-made tables, charts, or maps based on static databases, the interactive dynamic system creates these products on the fly, each time an inquiry is made, using the latest information in the appropriate databases. 
Thus, in the dynamic system, a map generated today may differ from one created yesterday and one to be created tomorrow, because the databases used to make it are constantly (and sometimes automatically) being updated.

  3. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  4. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame-based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge-based, cooperative decision making model utilizing both rule-based and procedural experts.
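    The idea of a frame network shared by concurrently executing tasks can be shown in miniature. The sketch below is a lock-protected frame store with 'is-a' slot inheritance; it is illustrative only, written in Python rather than CLIPS or C++, and all class and method names are invented, not taken from the paper.

```python
import threading

class FrameStore:
    """Toy frame-based semantic network shared by concurrent tasks.

    Frames are dicts of slots; a frame may point at a parent frame via an
    'is-a' slot, and slot lookups follow that link (inheritance). A single
    lock stands in for whatever concurrency control a real distributed
    implementation would use.
    """

    def __init__(self):
        self._frames = {}
        self._lock = threading.Lock()

    def put(self, name, slots):
        # Store a copy so callers cannot mutate frames behind the lock.
        with self._lock:
            self._frames[name] = dict(slots)

    def get(self, name, slot):
        with self._lock:
            frame = self._frames[name]
            # Follow 'is-a' links until the slot is found or the chain ends.
            while slot not in frame and "is-a" in frame:
                frame = self._frames[frame["is-a"]]
            return frame.get(slot)
```

A rule-based expert and a procedural expert could then share one `FrameStore` instance, each reading and asserting slots concurrently.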

  5. The reliability and concurrent validity of the Scoliosis Research Society-22r patient questionnaire compared with the Child Health Questionnaire-CF87 patient questionnaire for adolescent spinal deformity.

    PubMed

    Glattes, R Christopher; Burton, Douglas C; Lai, Sue Min; Frasier, Elizabeth; Asher, Marc A

    2007-07-15

    This is a clinic-based cross-sectional study involving 2 health-related quality-of-life (HRQL) questionnaires. To compare the score distribution and reliability of the spinal deformity specific Scoliosis Research Society-22r (SRS-22r) questionnaire and the established generic Child Health Questionnaire-CF87 (CHQ-CF87), and to assess the concurrent validity of the SRS-22r using the CHQ-CF87 in an adolescent spine deformity population. Different questionnaires are commonly thought to be necessary to assess the HRQL of adolescent and adult populations. But since spinal deformities usually begin in the second decade of life, longitudinal follow-up with the same HRQL instrument is desirable. The SRS-22r HRQL has recently been validated for score distribution and internal consistency in a spinal deformity population ranging in age from 7 to 78 years. The SRS-22r and CHQ-CF87 HRQLs were completed by 70 orthopedic spinal deformity outpatients 8 to 18 years of age, of whom 54 returned mailed retest questionnaires at an average of 24 days later. The ceiling effect averaged 27% for the SRS-22r and 36% for the CHQ-CF87. Respective values for internal consistency (Cronbach alpha) were 0.81 and 0.82, and for test-retest reproducibility the intraclass correlations (ICC) were 0.73 and 0.61. Concurrent validity was r > or = 0.68 for relevant function, pain, and mental health domains. The SRS Self-Image and particularly the Satisfaction/Dissatisfaction with Management domains did not correlate well with any CHQ-CF87 domains (r = 0.50 and 0.30, respectively). In a spinal deformity population 8 to 18 years of age, the score distribution, reliability, internal consistency, and reproducibility of the SRS-22r were at least as good as those of the CHQ-CF87. The SRS-22r function, pain, and mental health domains were concurrently valid in comparison to relevant CHQ-CF87 domains, but the SRS-22r self-image and satisfaction/dissatisfaction domains were not, thereby providing health-related quality-of-life information not provided by the CHQ-CF87.

  6. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    PubMed

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.
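    The LDSF strategy lends itself to a short sketch: search with a deliberately wide tolerance, estimate the error distribution from high-confidence hits, then re-filter everything with the resulting statistical MET. The function name, tuple layout, and score cutoff below are assumptions for illustration; they are not FTDR's actual interface.

```python
import statistics

def ldsf_filter(matches, large_met_ppm=20.0, k=2.0):
    """Illustrative LDSF pass ('Large MET search, small MET filtration').

    `matches` is a list of (observed_mz, theoretical_mz, score) tuples from
    a database search run with a wide mass error tolerance. High-scoring
    matches estimate the empirical error distribution; the small,
    data-driven MET (mean +/- k*SD) is then applied to all matches.
    """
    # Signed relative mass error in parts per million.
    def ppm(obs, theo):
        return (obs - theo) / theo * 1e6

    # Step 1: keep only matches inside the initial, generous tolerance.
    wide = [(o, t, s) for o, t, s in matches if abs(ppm(o, t)) <= large_met_ppm]

    # Step 2: estimate the error distribution from high-confidence hits
    # (the 0.9 score cutoff is an arbitrary placeholder).
    high_conf = [ppm(o, t) for o, t, s in wide if s >= 0.9]
    mu = statistics.mean(high_conf)
    sd = statistics.stdev(high_conf)

    # Step 3: filter all matches with the small statistical tolerance.
    return [(o, t, s) for o, t, s in wide if abs(ppm(o, t) - mu) <= k * sd]
```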

  7. Fully Distributed Monitoring Architecture Supporting Multiple Trackees and Trackers in Indoor Mobile Asset Management Application

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-01-01

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated. PMID:24662407

  8. Pushing typists back on the learning curve: Memory chunking in the hierarchical control of skilled typewriting.

    PubMed

    Yamaguchi, Motonori; Logan, Gordon D

    2016-12-01

    Hierarchical control of skilled performance depends on the ability of higher level control to process several lower level units as a single chunk. The present study investigated the development of hierarchical control of skilled typewriting, focusing on the process of memory chunking. In the first 3 experiments, skilled typists typed words or nonwords under concurrent memory load. Memory chunks developed and consolidated into long-term memory when the same typing materials were repeated in 6 consecutive trials, but chunks did not develop when repetitions were spaced. However, when concurrent memory load was removed during training, memory chunks developed more efficiently with longer lags between repetitions than shorter lags. From these results, it is proposed that memory chunking requires 2 representations of the same letter string to be maintained simultaneously in short-term memory: 1 representation from the current trial, and the other from an earlier trial that is either retained from the immediately preceding trial or retrieved from long-term memory (i.e., study state retrieval). (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. The use of a prescription drug monitoring program to develop algorithms to identify providers with unusual prescribing practices for controlled substances.

    PubMed

    Ringwalt, Christopher; Schiro, Sharon; Shanahan, Meghan; Proescholdbell, Scott; Meder, Harold; Austin, Anna; Sachdeva, Nidhi

    2015-10-01

    The misuse, abuse and diversion of controlled substances have reached epidemic proportions in the United States. Contributing to this problem are providers who over-prescribe these substances. Using one state's prescription drug monitoring program, we describe a series of metrics we developed to identify providers manifesting unusual and uncustomary prescribing practices. We then present the results of a preliminary effort to assess the concurrent validity of these algorithms, using death records from the state's vital records database pertaining to providers who wrote prescriptions to patients who then died of a medication or drug overdose within 30 days. Metrics manifesting the strongest concurrent validity with providers identified from these records related to those who co-prescribed benzodiazepines (e.g., valium) and high levels of opioid analgesics (e.g., oxycodone), as well as those who wrote temporally overlapping prescriptions. We conclude with a discussion of a variety of uses to which these metrics may be put, as well as problems and opportunities related to their use.

  10. Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs

    PubMed Central

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-01-01

    In this paper, we present a graph-based concurrent brain tumor segmentation and atlas-to-diseased-patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. PMID:24717540

  11. Advanced abdominal pregnancy: an increasingly challenging clinical concern for obstetricians

    PubMed Central

    Huang, Ke; Song, Lei; Wang, Longxia; Gao, Zhiying; Meng, Yuanguang; Lu, Yanping

    2014-01-01

    Advanced abdominal pregnancy is rare. The low incidence, high misdiagnosis rate, and lack of specific clinical signs and symptoms explain why no standard diagnostic and treatment options are available for advanced abdominal pregnancy. We managed a case of abdominal pregnancy in a woman who was pregnant for the first time. The case was further complicated by a concurrent singleton intrauterine pregnancy; the twin pregnancy was not detected until 20 weeks of gestation. At 26 weeks of gestational age, MRI confirmed an abdominal pregnancy combined with an intrauterine pregnancy. The pregnancy was terminated by cesarean section at 33 + 5 weeks of gestation. We collected the relevant case data and reviewed the English-language literature on advanced abdominal pregnancy in the PubMed, ProQuest, and OVID databases. We compared and analyzed pregnancy history, gestational age at diagnosis, placental implantation site, course of treatment and surgical procedures, associated complication rates, post-operative drug treatment regimens, and follow-up results, with the aim of providing guidance for other physicians who might encounter similar cases. PMID:25337188

  12. The importance of data quality for generating reliable distribution models for rare, elusive, and cryptic species

    Treesearch

    Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey

    2017-01-01

    The availability of spatially referenced environmental data and species occurrence records in online databases enable practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...

  13. DSSTOX WEBSITE LAUNCH: IMPROVING PUBLIC ACCESS TO DATABASES FOR BUILDING STRUCTURE-TOXICITY PREDICTION MODELS

    EPA Science Inventory

    DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
    Ann M. Richard
    US Environmental Protection Agency, Research Triangle Park, NC, USA

    Distributed: Decentralized set of standardized, field-delimited databases,...

  14. PROGRESS REPORT ON THE DSSTOX DATABASE NETWORK: NEWLY LAUNCHED WEBSITE, APPLICATIONS, FUTURE PLANS

    EPA Science Inventory

    Progress Report on the DSSTox Database Network: Newly Launched Website, Applications, Future Plans

    Progress will be reported on development of the Distributed Structure-Searchable Toxicity (DSSTox) Database Network and the newly launched public website that coordinates and...

  15. Image Databases.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    Different kinds of pictorial databases are described with respect to aims, user groups, search possibilities, storage, and distribution. Some specific examples are given for databases used for the following purposes: (1) labor markets for artists; (2) document management; (3) telling a story; (4) preservation (archives and museums); (5) research;…

  16. Practical Quantum Private Database Queries Based on Passive Round-Robin Differential Phase-shift Quantum Key Distribution.

    PubMed

    Li, Jian; Yang, Yu-Guang; Chen, Xiu-Bo; Zhou, Yi-Hua; Shi, Wei-Min

    2016-08-19

    A novel quantum private database query protocol is proposed, based on passive round-robin differential phase-shift quantum key distribution. Compared with previous quantum private database query protocols, the present protocol has the following unique merits: (i) the user Alice can obtain one and only one key bit, so that both the efficiency and security of the present protocol can be ensured, and (ii) it does not require changing the length difference of the two arms in a Mach-Zehnder interferometer and simply chooses two pulses passively to interfere, so that it is much simpler and more practical. The present protocol is also proved to be secure in terms of user security and database security.

  17. Domain Regeneration for Cross-Database Micro-Expression Recognition

    NASA Astrophysics Data System (ADS)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples are from two different micro-expression databases. Under this setting, the training and testing samples would have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). By using TSRG, we are able to re-generate the samples from the target micro-expression database, and the re-generated target samples share the same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.

  18. Classroom-level adversity: Associations with children's internalizing and externalizing behaviors across elementary school.

    PubMed

    Abry, Tashia; Bryce, Crystal I; Swanson, Jodi; Bradley, Robert H; Fabes, Richard A; Corwyn, Robert F

    2017-03-01

    Concerns regarding the social-behavioral maladjustment of U.S. youth have spurred efforts among educators and policymakers to identify and remedy educational contexts that exacerbate children's anxiety, depression, aggression, and misconduct. However, investigations of the influence of collective classroom student characteristics on individuals' social-behavioral functioning are few. The present study examined concurrent and longitudinal relations between adversity factors facing the collective classroom student group and levels of children's internalizing and externalizing behaviors across the elementary school years, and whether the pattern of relations differed for girls and boys. First-, third-, and fifth-grade teachers reported on the extent to which adversity-related factors (e.g., home/family life, academic readiness, social readiness, English proficiency, tardiness/absenteeism, student mobility, health) presented a challenge in their classrooms (i.e., classroom-level adversity [CLA]). Mothers reported on their child's internalizing and externalizing behavior at each grade. Autoregressive, lagged panel models controlled for prior levels of internalizing and externalizing behavior, mothers' education, family income-to-needs, and class size. For all children at each grade, CLA was concurrently and positively associated with externalizing behavior. For first-grade girls, but not boys, CLA was also concurrently and positively associated with internalizing behavior. Indirect effects suggested CLA influenced later internalizing and externalizing behavior through its influence on maladjustment in a given year. Discussion highlights possible methods of intervention to reduce CLA or the negative consequences associated with being in a higher-adversity classroom. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. A novel concurrent pictorial choice model of mood-induced relapse in hazardous drinkers.

    PubMed

    Hardy, Lorna; Hogarth, Lee

    2017-12-01

    This study tested whether a novel concurrent pictorial choice procedure, inspired by animal self-administration models, is sensitive to the motivational effect of negative mood induction on alcohol-seeking in hazardous drinkers. Forty-eight hazardous drinkers (scoring ≥7 on the Alcohol Use Disorders Identification Test) recruited from the community completed measures of alcohol dependence, depression, and drinking coping motives. Baseline alcohol-seeking was measured by percent choice to enlarge alcohol- versus food-related thumbnail images in two-alternative forced-choice trials. Negative and positive mood was then induced in succession by means of self-referential affective statements and music, and percent alcohol choice was measured after each induction in the same way as baseline. Baseline alcohol choice correlated with alcohol dependence severity, r = .42, p = .003, drinking coping motives (in two questionnaires, r = .33, p = .02 and r = .46, p = .001), and depression symptoms, r = .31, p = .03. Alcohol choice was increased by negative mood over baseline (p < .001, ηp2 = .280), and matched baseline following positive mood (p = .54, ηp2 = .008). The negative mood-induced increase in alcohol choice was not related to gender, alcohol dependence, drinking to cope, or depression symptoms (ps ≥ .37). The concurrent pictorial choice measure is a sensitive index of the relative value of alcohol, and provides an accessible experimental model to study negative mood-induced relapse mechanisms in hazardous drinkers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Impact of congenital anomalies and treatment location on the outcomes of infants hospitalized with herpes simplex virus (HSV).

    PubMed

    Lorch, Scott A; Millman, Andrea M; Shah, Samir S

    2010-03-01

    Herpes simplex virus (HSV) is a rare but costly reason for hospitalization in infants under 60 days of age. The impact of coexisting comorbid conditions and treatment location on hospital outcome is poorly understood. Objective: Determine patient and hospital factors associated with poor outcomes or death in infants hospitalized with HSV. Design: Retrospective cohort study using the 2003 Kids' Inpatient Database (KID). Setting: U.S. hospitals. Patients: Infants under 60 days of age with a diagnosis of HSV. Exposures: Treatment at different types of hospitals, younger age at admission, and presence of congenital anomalies. Main outcome measures: Serious complications, in-hospital death. A total of 10% of the 1587 identified HSV hospitalizations had a concurrent congenital anomaly. A total of 267 infants had a serious complication and 50 died. After controlling for clinical and hospital characteristics, concurrent congenital anomalies were associated with higher odds of a serious complication (adjusted odds ratio [OR], 3.34; 95% confidence interval [CI], 2.00-5.56) and higher odds of death (adjusted OR, 4.17; 95% CI, 1.74-10.0). Similar results were found for infants admitted under 7 days of age. Although different hospital types had statistically similar clinical outcomes after controlling for case-mix differences, treatment at a children's hospital was associated with an 18% reduction in length of stay (LOS). Infants with concurrent congenital anomalies infected with HSV were at increased risk for serious complications or death. Health resource use may be improved through identification and adoption of care practiced at children's hospitals.

  1. Concurrent validity and sensitivity to change of Direct Behavior Rating Single-Item Scales (DBR-SIS) within an elementary sample.

    PubMed

    Smith, Rhonda L; Eklund, Katie; Kilgus, Stephen P

    2018-03-01

    The purpose of this study was to evaluate the concurrent validity, sensitivity to change, and teacher acceptability of Direct Behavior Rating single-item scales (DBR-SIS), a brief progress monitoring measure designed to assess student behavioral change in response to intervention. Twenty-four elementary teacher-student dyads implemented a daily report card intervention to promote positive student behavior during prespecified classroom activities. During both baseline and intervention, teachers completed DBR-SIS ratings of 2 target behaviors (i.e., Academic Engagement, Disruptive Behavior) whereas research assistants collected systematic direct observation (SDO) data in relation to the same behaviors. Five change metrics (i.e., absolute change, percent of change from baseline, improvement rate difference, Tau-U, and standardized mean difference; Gresham, 2005) were calculated for both DBR-SIS and SDO data, yielding estimates of the change in student behavior in response to intervention. Mean DBR-SIS scores were predominantly moderately to highly correlated with SDO data within both baseline and intervention, demonstrating evidence of the former's concurrent validity. DBR-SIS change metrics were also significantly correlated with SDO change metrics for both Disruptive Behavior and Academic Engagement, yielding evidence of the former's sensitivity to change. In addition, teacher Usage Rating Profile-Assessment (URP-A) ratings indicated they found DBR-SIS to be acceptable and usable. Implications for practice, study limitations, and areas of future research are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
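    Three of the five change metrics named above reduce to one-line formulas over the baseline and intervention ratings. The sketch below uses the conventional definitions (with the standardized mean difference scaled by the baseline SD); it is not the study's code, and the function name is invented.

```python
import statistics

def change_metrics(baseline, intervention):
    """Compute illustrative change indices from two phases of ratings.

    `baseline` and `intervention` are lists of repeated ratings of one
    target behavior (e.g., daily DBR-SIS scores).
    """
    mb = statistics.mean(baseline)
    mi = statistics.mean(intervention)
    sd_b = statistics.stdev(baseline)
    return {
        # Raw difference between phase means.
        "absolute_change": mi - mb,
        # Change expressed as a percentage of the baseline mean.
        "percent_change": (mi - mb) / mb * 100,
        # Standardized mean difference using the baseline SD.
        "smd": (mi - mb) / sd_b,
    }
```

The remaining two indices in the abstract (improvement rate difference and Tau-U) are overlap-based, nonparametric statistics and need the trial-by-trial data rather than phase means.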

  2. Conducting Privacy-Preserving Multivariable Propensity Score Analysis When Patient Covariate Information Is Stored in Separate Locations.

    PubMed

    Bohn, Justin; Eddings, Wesley; Schneeweiss, Sebastian

    2017-03-15

    Distributed networks of health-care data sources are increasingly being utilized to conduct pharmacoepidemiologic database studies. Such networks may contain data that are not physically pooled but instead are distributed horizontally (separate patients within each data source) or vertically (separate measures within each data source) in order to preserve patient privacy. While multivariable methods for the analysis of horizontally distributed data are frequently employed, few practical approaches have been put forth to deal with vertically distributed health-care databases. In this paper, we propose 2 propensity score-based approaches to vertically distributed data analysis and test their performance using 5 example studies. We found that these approaches produced point estimates close to what could be achieved without partitioning. We further found a performance benefit (i.e., lower mean squared error) for sequentially passing a propensity score through each data domain (called the "sequential approach") as compared with fitting separate domain-specific propensity scores (called the "parallel approach"). These results were validated in a small simulation study. This proof-of-concept study suggests a new multivariable analysis approach to vertically distributed health-care databases that is practical, preserves patient privacy, and warrants further investigation for use in clinical research applications that rely on health-care databases. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
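
    The "sequential approach" can be sketched as a data flow: domain 1 fits a propensity model on its own covariates, and only the resulting score (not the covariates) crosses the privacy boundary to domain 2, which refits with the score as an extra predictor. A toy numpy sketch, assuming a simple gradient-ascent logistic regression; the update rule and simulated data are illustrative, not the paper's implementation:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=3000):
    """Plain logistic regression fit by gradient ascent on the log-likelihood."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def score(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(0)
n = 500
X1 = rng.normal(size=(n, 2))      # covariates held only by domain 1
X2 = rng.normal(size=(n, 2))      # covariates held only by domain 2
logit = 0.8 * X1[:, 0] - 0.5 * X2[:, 1]
treat = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Domain 1: fit on its covariates only, export just the score.
s1 = score(fit_logistic(X1, treat), X1)
# Domain 2: refit using the imported score plus its own covariates.
Z = np.column_stack([s1, X2])
ps = score(fit_logistic(Z, treat), Z)
```

    The "parallel approach" would instead fit separate domain-specific scores and combine them afterwards; the paper reports lower mean squared error for the sequential variant.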

  3. FIBER AND INTEGRATED OPTICS. OPTOELECTRONICS: Some characteristics of formation of volume dynamic holograms by concurrent waves propagating in resonant atomic media

    NASA Astrophysics Data System (ADS)

    Kirilenko, A. K.

    1989-07-01

    An investigation was made of the transient process of formation of volume dynamic holograms by light within the spectral limits of the D2 resonant absorption line of sodium. The observed asymmetry of the spectral distribution of the gain of the signal waves in the case of a concurrent interaction between four beams was attributed to different mechanisms of the interaction, the main ones being a four-wave interaction in the long-wavelength wing and transient two-beam energy transfer in the short-wavelength wing. The results obtained were used to recommend an experimental method for the determination of the relative contributions of these processes to the amplification of signal waves.

  4. System for Performing Single Query Searches of Heterogeneous and Dispersed Databases

    NASA Technical Reports Server (NTRS)

    Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)

    2017-01-01

    The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with an Application Programming Interface hardware which includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.

  5. Organization and dissemination of multimedia medical databases on the WWW.

    PubMed

    Todorovski, L; Ribaric, S; Dimec, J; Hudomalj, E; Lunder, T

    1999-01-01

    In the paper, we focus on the problem of building and disseminating multimedia medical databases on the World Wide Web (WWW). The current results of the ongoing project of building a prototype dermatology images database and its WWW presentation are presented. The dermatology database is part of an ambitious plan concerning an organization of a network of medical institutions building distributed and federated multimedia databases of a much wider scale.

  6. Projecting insect voltinism under high and low greenhouse gas emission conditions

    Treesearch

    Shi Chen; Shelby J. Fleischer; Patrick C. Tobin; Michael C. Saunders

    2011-01-01

    We develop individual-based Monte Carlo methods to explore how climate change can alter insect voltinism under varying greenhouse gas emissions scenarios by using input distributions of diapause termination or spring emergence, development rate, and diapause initiation, linked to daily temperature and photoperiod. We show concurrence of these projections with a field...

  7. Interpreting beyond Syntactics: A Semiotic Learning Model for Computer Programming Languages

    ERIC Educational Resources Information Center

    May, Jeffrey; Dhillon, Gurpreet

    2009-01-01

    In the information systems field there are numerous programming languages that can be used in specifying the behavior of concurrent and distributed systems. In the literature it has been argued that a lack of pragmatic and semantic consideration decreases the effectiveness of such specifications. In other words, to simply understand the syntactic…

  8. 26 CFR 1.642(c)-5 - Definition of pooled income fund.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... contribution to the fund or of the purchase price of those assets purchased by the fund. This definition of... the income interest is designated, such beneficiaries may enjoy their shares of income concurrently... beneficiaries to whom the income is payable and the share of income distributable to each person so specified...

  9. Distributed Memory Compiler Methods for Irregular Problems - Data Copy Reuse and Runtime Partitioning

    DTIC Science & Technology

    1991-09-01

    addition, support for Saltz was provided by NSF from NSF Grant ASC-8819374. 1. Introduction. Over the past few years, we have developed methods needed to... network. In Third Conf. on Hypercube Concurrent Computers and Applications, pages 241-27278, 1988. [17] G. Fox, S. Hiranandani, K. Kennedy, C. Koelbel

  10. Tighter monogamy relations of quantum entanglement for multiqubit W-class states

    NASA Astrophysics Data System (ADS)

    Jin, Zhi-Xiang; Fei, Shao-Ming

    2018-01-01

    Monogamy relations characterize the distributions of entanglement in multipartite systems. We investigate monogamy relations for multiqubit generalized W-class states. We present new analytical monogamy inequalities for the concurrence of assistance, which are shown to be tighter than the existing ones. Furthermore, analytical monogamy inequalities are obtained for the negativity of assistance.
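
    The quantity underlying such monogamy relations is, for a two-qubit state ρ, Wootters' concurrence C(ρ) = max(0, λ₁ − λ₂ − λ₃ − λ₄), where the λᵢ are the square roots of the eigenvalues of ρρ̃ in decreasing order and ρ̃ = (σy⊗σy) ρ* (σy⊗σy). A minimal numpy sketch of that formula (the plain two-qubit concurrence, not the multiqubit assistance quantities studied in the paper):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lams = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lams = np.sort(lams)[::-1]
    return max(0.0, lams[0] - lams[1] - lams[2] - lams[3])

bell = np.zeros((4, 1)); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
product = np.zeros((4, 1)); product[0] = 1.0                  # |00>
```

    A maximally entangled Bell state has concurrence 1, a product state 0; monogamy inequalities bound how such pairwise values can be shared across many qubits.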

  11. Recent advances on terrain database correlation testing

    NASA Astrophysics Data System (ADS)

    Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art

    1998-08-01

    Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to a lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.

  12. Development, deployment and operations of ATLAS databases

    NASA Astrophysics Data System (ADS)

    Vaniachine, A. V.; Schmitt, J. G. v. d.

    2008-07-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  13. PIGD: a database for intronless genes in the Poaceae.

    PubMed

    Yan, Hanwei; Jiang, Cuiping; Li, Xiaoyu; Sheng, Lei; Dong, Qing; Peng, Xiaojian; Li, Qian; Zhao, Yang; Jiang, Haiyang; Cheng, Beijiu

    2014-10-01

    Intronless genes are a feature of prokaryotes; however, they are widespread and unequally distributed among eukaryotes and represent an important resource to study the evolution of gene architecture. Although many databases on exons and introns exist, there is currently no cohesive database that collects intronless genes in plants into a single database. In this study, we present the Poaceae Intronless Genes Database (PIGD), a user-friendly web interface to explore information on intronless genes from different plants. Five Poaceae species, Sorghum bicolor, Zea mays, Setaria italica, Panicum virgatum and Brachypodium distachyon, are included in the current release of PIGD. Gene annotations and sequence data were collected and integrated from different databases. The primary focus of this study was to provide gene descriptions and gene product records. In addition, functional annotations, subcellular localization prediction and taxonomic distribution are reported. PIGD allows users to readily browse, search and download data. BLAST and comparative analyses are also provided through this online database, which is available at http://pigd.ahau.edu.cn/. PIGD provides a solid platform for the collection, integration and analysis of intronless genes in the Poaceae. As such, this database will be useful for subsequent bio-computational analysis in comparative genomics and evolutionary studies.

  14. Multi-hazard Assessment and Scenario Toolbox (MhAST): A Framework for Analyzing Compounding Effects of Multiple Hazards

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Moftakhari, H.; AghaKouchak, A.

    2017-12-01

    Many natural hazards are driven by multiple forcing variables, and concurrent/consecutive extreme events significantly increase the risk of infrastructure/system failure. It is a common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST) that comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of scenarios of return period and design levels and their likelihoods. Contribution of this study is four-fold: 1. comprehensive analysis of marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas, 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities, 3. weighted average scenario analysis based on an expected event, and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.
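
    One quantity such a toolbox reports, the "AND" joint return period (both drivers exceed their design levels at once), can be illustrated with a Monte Carlo draw from a Gaussian copula. This is a generic sketch of the concept, not MhAST's actual 26-copula machinery:

```python
import math, random

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def and_return_period(u, v, rho, n=200_000, seed=42):
    """Return period (in events) of {U > u AND V > v} under a Gaussian
    copula with correlation rho, estimated by Monte Carlo."""
    rng = random.Random(seed)
    a = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + a * rng.gauss(0, 1)
        if normal_cdf(z1) > u and normal_cdf(z2) > v:
            hits += 1
    return n / hits if hits else float("inf")

# Two positively dependent drivers, each at its 99th-percentile design level:
T_and = and_return_period(0.99, 0.99, rho=0.6)
```

    Under independence the joint return period would be 10,000 events; positive dependence between the drivers makes joint exceedance more likely and shortens it, which is exactly why univariate analysis of a single "ruling" driver can understate compound risk.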

  15. Application-Level Interoperability Across Grids and Clouds

    NASA Astrophysics Data System (ADS)

    Jha, Shantenu; Luckow, Andre; Merzky, Andre; Erdely, Miklos; Sehgal, Saurabh

    Application-level interoperability is defined as the ability of an application to utilize multiple distributed heterogeneous resources. Such interoperability is becoming increasingly important with increasing volumes of data, multiple sources of data as well as resource types. The primary aim of this chapter is to understand different ways in which application-level interoperability can be provided across distributed infrastructure. We achieve this by (i) using the canonical wordcount application, based on an enhanced version of MapReduce that scales-out across clusters, clouds, and HPC resources, (ii) establishing how SAGA enables the execution of wordcount application using MapReduce and other programming models such as Sphere concurrently, and (iii) demonstrating the scale-out of ensemble-based biomolecular simulations across multiple resources. We show user-level control of the relative placement of compute and data and also provide simple performance measures and analysis of SAGA-MapReduce when using multiple, different, heterogeneous infrastructures concurrently for the same problem instance. Finally, we discuss Azure and some of the system-level abstractions that it provides and show how it is used to support ensemble-based biomolecular simulations.

  16. Virtual Queue in a Centralized Database Environment

    NASA Astrophysics Data System (ADS)

    Kar, Amitava; Pal, Dibyendu Kumar

    2010-10-01

    Today is the era of the Internet. Whether one is gathering knowledge, planning a holiday, or booking a ticket, almost everything can be done over the Internet. This paper calculates various queuing measures for bookings or purchases made through the Internet, subject to limits on the number of tickets or seats available. Such transactions involve many database activities, such as reads and writes. Treating the time at which a service is requested as the arrival, and the time taken to provide the required information as the service, the paper estimates the arrival and service distributions and derives the corresponding queuing measures. For simplicity, the database is modeled as a centralized database, since the alternative of a distributed database would considerably complicate the calculation.
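
    For the classical M/M/1 model (Poisson arrivals at rate λ, exponential service at rate μ, λ < μ), the usual queuing measures are closed-form. A sketch assuming M/M/1 applies; the paper estimates its own arrival and service distributions, which need not be exponential:

```python
def mm1_measures(lam, mu):
    """Steady-state M/M/1 measures; requires lam < mu for stability."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                       # server utilization
    return {
        "utilization": rho,
        "L":  rho / (1 - rho),           # mean number in system
        "Lq": rho * rho / (1 - rho),     # mean number waiting
        "W":  1 / (mu - lam),            # mean time in system
        "Wq": rho / (mu - lam),          # mean waiting time before service
    }

# 2 booking requests per second served at 5 per second:
m = mm1_measures(2.0, 5.0)
```

    Little's law (L = λW) ties the occupancy and time measures together and is a quick sanity check on any such calculation.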

  17. Multireference - Møller-Plesset Perturbation Theory Results on Levels and Transition Rates in Al-like Ions of Iron Group Elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santana, J A; Ishikawa, Y; Träbert, E

    2009-02-26

    Ground configuration and low-lying levels of Al-like ions contribute to a variety of laboratory and solar spectra, but the available information in databases is neither complete nor necessarily correct. We have performed multireference Møller-Plesset perturbation theory calculations that approach spectroscopic accuracy in order to check the information that databases hold on the 40 lowest levels of Al-like ions of iron group elements (K through Ge), and to provide input for the interpretation of concurrent experiments. Our results indicate problems with the database holdings on the lowest quartet levels in the lighter elements of the range studied. The results of our calculations of the decay rates of five long-lived levels (3s²3p ²P°₃/₂, the 3s3p² ⁴P J levels, and 3s3p3d ⁴F°₉/₂) are compared with lifetime data from beam-foil, electron beam ion trap and heavy-ion storage ring experiments.

  18. Digital Video of Live-Scan Fingerprint Data

    National Institute of Standards and Technology Data Gateway

    NIST Digital Video of Live-Scan Fingerprint Data (PC database for purchase)   NIST Special Database 24 contains MPEG-2 (Moving Picture Experts Group) compressed digital video of live-scan fingerprint data. The database is being distributed for use in developing and testing of fingerprint verification systems.

  19. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  20. Embodying a cognitive model in a mobile robot

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Lonsdale, Deryle

    2006-10-01

    The ADAPT project is a collaboration of researchers in robotics, linguistics and artificial intelligence at three universities to create a cognitive architecture specifically designed to be embodied in a mobile robot. There are major respects in which existing cognitive architectures are inadequate for robot cognition. In particular, they lack support for true concurrency and for active perception. ADAPT addresses these deficiencies by modeling the world as a network of concurrent schemas, and modeling perception as problem solving. Schemas are represented using the RS (Robot Schemas) language, and are activated by spreading activation. RS provides a powerful language for distributed control of concurrent processes. Also, the formal semantics of RS provides the basis for the semantics of ADAPT's use of natural language. We have implemented the RS language in Soar, a mature cognitive architecture originally developed at CMU and used at a number of universities and companies. Soar's subgoaling and learning capabilities enable ADAPT to manage the complexity of its environment and to learn new schemas from experience. We describe the issues faced in developing an embodied cognitive architecture, and our implementation choices.

  1. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.
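
    The host-to-hypercube data management described above ultimately reduces to deciding which slice of an image each node owns. A minimal sketch of a block row distribution; the actual CIPE scheme, which also redistributes and tracks data sets, is not detailed in the abstract:

```python
def block_rows(n_rows, n_nodes):
    """Assign contiguous row ranges [start, stop) to each node,
    with sizes differing by at most one row."""
    base, extra = divmod(n_rows, n_nodes)
    ranges, start = [], 0
    for node in range(n_nodes):
        stop = start + base + (1 if node < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

# A 1024-row image over an 8-node cube:
layout = block_rows(1024, 8)
```

    Redistribution after an operation that changes the data layout is then just computing a new set of ranges and exchanging the rows that moved.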

  2. THE 6-MINUTE WALK TEST AND OTHER CLINICAL ENDPOINTS IN DUCHENNE MUSCULAR DYSTROPHY: RELIABILITY, CONCURRENT VALIDITY, AND MINIMAL CLINICALLY IMPORTANT DIFFERENCES FROM A MULTICENTER STUDY

    PubMed Central

    McDonald, Craig M; Henricson, Erik K; Abresch, R Ted; Florence, Julaine; Eagle, Michelle; Gappmaier, Eduard; Glanzman, Allan M; Spiegel, Robert; Barth, Jay; Elfring, Gary; Reha, Allen; Peltz, Stuart W

    2013-01-01

    Introduction: An international clinical trial enrolled 174 ambulatory males ≥5 years old with nonsense mutation Duchenne muscular dystrophy (nmDMD). Pretreatment data provide insight into reliability, concurrent validity, and minimal clinically important differences (MCIDs) of the 6-minute walk test (6MWT) and other endpoints. Methods: Screening and baseline evaluations included the 6-minute walk distance (6MWD), timed function tests (TFTs), quantitative strength by myometry, the PedsQL, heart rate–determined energy expenditure index, and other exploratory endpoints. Results: The 6MWT proved feasible and reliable in a multicenter context. Concurrent validity with other endpoints was excellent. The MCID for 6MWD was 28.5 and 31.7 meters based on 2 statistical distribution methods. Conclusions: The ratio of MCID to baseline mean is lower for 6MWD than for other endpoints. The 6MWD is an optimal primary endpoint for Duchenne muscular dystrophy (DMD) clinical trials that are focused therapeutically on preservation of ambulation and slowing of disease progression. Muscle Nerve 48: 357–368, 2013 PMID:23674289

  3. Concurrent Mission and Systems Design at NASA Glenn Research Center: The Origins of the COMPASS Team

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Oleson, Steven R.; Sarver-Verhey, Timothy R.

    2012-01-01

    Established at the NASA Glenn Research Center (GRC) in 2006 to meet the need for rapid mission analysis and multi-disciplinary systems design for in-space and human missions, the Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team is a multidisciplinary, concurrent engineering group whose primary purpose is to perform integrated systems analysis, but it is also capable of designing any system that involves one or more of the disciplines present in the team. The authors were involved in the development of the COMPASS team and its design process, and are continuously making refinements and enhancements. The team was unofficially started in the early 2000s as part of the distributed team known as Team JIMO (Jupiter Icy Moons Orbiter) in support of the multi-center collaborative JIMO spacecraft design during Project Prometheus. This paper documents the origins of a concurrent mission and systems design team at GRC and how it evolved into the COMPASS team, including defining the process, gathering the team and tools, building the facility, and performing studies.

  4. Increasing insect reactions in Alaska: is this related to changing climate?

    PubMed

    Demain, Jeffrey G; Gessner, Bradford D; McLaughlin, Joseph B; Sikes, Derek S; Foote, J Timothy

    2009-01-01

    In 2006, Fairbanks, AK, reported its first cases of fatal anaphylaxis as a result of Hymenoptera stings concurrent with an increase in insect reactions observed throughout the state. This study was designed to determine whether Alaska medical visits for insect reactions have increased. We conducted a retrospective review of three independent patient databases in Alaska to identify trends of patients seeking medical care for adverse reactions after insect-related events. For each database, an insect reaction was defined as a claim for the International Classification of Diseases, Ninth Edition (ICD-9), codes E905.3, E906.4, and 989.5. Increases in insect reactions in each region were compared with temperature changes in the same region. Each database revealed a statistically significant trend in patients seeking care for insect reactions. Fairbanks Memorial Hospital Emergency Department reported a fourfold increase in patients in 2006 compared with previous years (1992-2005). The Allergy, Asthma, and Immunology Center of Alaska reported a threefold increase in patients from 1999-2002 to 2003-2007. A retrospective review of the Alaska Medicaid database from 1999 to 2006 showed increases in medical claims for insect reactions among all regions, with the largest percentage of increases occurring in the most northern areas. Increases in insect reactions in Alaska have occurred after increases in annual and winter temperatures, and these findings may be causally related.

  5. Security in the CernVM File System and the Frontier Distributed Database Caching System

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
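
    The integrity guarantee both systems rely on, independent of transport security, is that content fetched through an untrusted http proxy is checked against a secure hash published through a trusted, signed channel. A minimal sketch of that check; SHA-256 stands in for whichever digest either system actually uses, and the catalog signature step is omitted:

```python
import hashlib

def verify(payload: bytes, expected_sha256_hex: str) -> bool:
    """Accept content only if its SHA-256 digest matches the trusted catalog entry."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256_hex

# Publisher side: the digest is distributed via a signed catalog.
catalog_entry = hashlib.sha256(b"conditions-data-v1").hexdigest()

# Client side: data arriving through any untrusted cache is re-hashed locally.
ok = verify(b"conditions-data-v1", catalog_entry)
tampered = verify(b"conditions-data-v2", catalog_entry)
```

    Because the check is on content rather than on the connection, any number of intermediate http caches can serve the data without being trusted.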

  6. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have been actively developed, building on advances in high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system that can effectively perform cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval. Retrieval can be performed effectively through cooperative processing among multiple domains. Communication language and protocols are also defined in the system and are used in every communication action within it. A language interpreter in each machine translates the communication language into the internal language used by that machine. Using the language interpreter, internal modules such as the DBMS and user interface modules can be selected freely. A concept of a 'content-set' is also introduced, defined as a package of mutually related contents that the system handles as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content-set. To verify the function of the proposed system, a networked electronic museum was built experimentally. The results of this experiment indicate that the proposed system can effectively retrieve the target contents under the control of a number of distributed domains. They also indicate that the system can work effectively even as it grows large.

  7. The distribution of common construction materials at risk to acid deposition in the United States

    NASA Astrophysics Data System (ADS)

    Lipfert, Frederick W.; Daum, Mary L.

    Information on the geographic distribution of various types of exposed materials is required to estimate the economic costs of damage to construction materials from acid deposition. This paper focuses on the identification, evaluation and interpretation of data describing the distributions of exterior construction materials, primarily in the United States. This information could provide guidance on how data needed for future economic assessments might be acquired in the most cost-effective ways. Materials distribution surveys from 16 cities in the U.S. and Canada and five related databases from government agencies and trade organizations were examined. Data on residential buildings are more commonly available than on nonresidential buildings; little geographically resolved information on distributions of materials in infrastructure was found. Survey results generally agree with the appropriate ancillary databases, but the usefulness of the databases is often limited by their coarse spatial resolution. Information on those materials which are most sensitive to acid deposition is especially scarce. Since a comprehensive error analysis has never been performed on the data required for an economic assessment, it is not possible to specify the corresponding detailed requirements for data on the distributions of materials.

  8. Estimating trans-seasonal variability in water column biomass for a highly migratory, deep diving predator.

    PubMed

    O'Toole, Malcolm D; Lea, Mary-Anne; Guinet, Christophe; Hindell, Mark A

    2014-01-01

    The deployment of animal-borne electronic tags is revolutionizing our understanding of how pelagic species respond to their environment by providing in situ oceanographic information such as temperature, salinity, and light measurements. These tags, deployed on pelagic animals, provide data that can be used to study the ecological context of their foraging behaviour and surrounding environment. Satellite-derived measures of ocean colour reveal temporal and spatial variability of surface chlorophyll-a (a useful proxy for phytoplankton distribution). However, this information can be patchy in space and time, resulting in poor correspondence with marine animal behaviour. Alternatively, light data collected by animal-borne tag sensors can be used to estimate chlorophyll-a distribution. Here, we use light level and depth data to generate a phytoplankton index that matches daily seal movements. Time-depth-light recorders (TDLRs) were deployed on 89 southern elephant seals (Mirounga leonina) over a period of 6 years (1999-2005). TDLR data were used to calculate integrated light attenuation of the top 250 m of the water column (LA(250)), which provided an index of phytoplankton density at the daily scale that was concurrent with the movement and behaviour of seals throughout their entire foraging trip. These index values were consistent with typical seasonal chl-a patterns as measured from 8-day Sea-viewing Wide Field-of-view Sensor (SeaWiFS) images. The availability of data recorded by the TDLRs was far greater than that of concurrent remotely sensed chl-a at higher latitudes and during winter months. Improving the spatial and temporal availability of phytoplankton information concurrent with animal behaviour has ecological implications for understanding the movement of deep diving predators in relation to lower trophic levels in the Southern Ocean.
Light attenuation profiles recorded by animal-borne electronic tags can be used more broadly and routinely to estimate lower trophic distribution at sea in relation to deep diving predator foraging behaviour.
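
    The core calculation above, integrating light attenuation over the top 250 m of the water column from paired depth and light samples, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the function name, the log-light attenuation estimate, and the sample profile are all assumptions.

```python
import math

def light_attenuation_index(profile, max_depth=250.0):
    """Integrated light attenuation over the top `max_depth` m of a dive.

    `profile` is a list of (depth_m, light_level) samples ordered by
    increasing depth, as a TDLR might record them. Attenuation between
    two depths is the drop in log-transformed light per metre; the index
    integrates that over the profile, so denser phytoplankton (faster
    light loss) yields a larger value.
    """
    # Keep only samples in the surface layer with measurable light.
    samples = [(d, l) for d, l in profile if d <= max_depth and l > 0]
    if len(samples) < 2:
        return None  # not enough data to estimate attenuation
    total = 0.0
    for (d0, l0), (d1, l1) in zip(samples, samples[1:]):
        if d1 == d0:
            continue
        # Local attenuation coefficient k ~ -d(ln L)/dz over the segment.
        k = (math.log(l0) - math.log(l1)) / (d1 - d0)
        total += k * (d1 - d0)  # integrate k over depth
    return total

# Example: light decays exponentially with depth (k = 0.05 per metre),
# so the integral over 250 m should come out near 0.05 * 250 = 12.5.
profile = [(z, 1000.0 * math.exp(-0.05 * z)) for z in range(0, 251, 10)]
print(round(light_attenuation_index(profile), 3))  # 12.5
```

    A daily index concurrent with seal movements would then be obtained by computing this quantity for each day's deepest dive profile.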

  9. Exploiting virtual synchrony in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, Thomas A.

    1987-01-01

    Applications of a virtually synchronous environment are described for distributed programming, which underlies a collection of distributed programming tools in the ISIS2 system. A virtually synchronous environment allows processes to be structured into process groups, and makes events like broadcasts to the group as an entity, group membership changes, and even migration of an activity from one place to another appear to occur instantaneously, in other words, synchronously. A major advantage to this approach is that many aspects of a distributed application can be treated independently without compromising correctness. Moreover, user code that is designed as if the system were synchronous can often be executed concurrently. It is argued that this approach to building distributed and fault tolerant software is more straightforward, more flexible, and more likely to yield correct solutions than alternative approaches.

  10. Maritime Operations in Disconnected, Intermittent, and Low-Bandwidth Environments

    DTIC Science & Technology

    2013-06-01

    of a Dynamic Distributed Database (DDD) is a core element enabling the distributed operation of networks and applications, as described in this...document. The DDD is a database containing all the relevant information required to reconfigure the applications, routing, and other network services...optimize application configuration. Figure 5 gives a snapshot of entries in the DDD. In current testing, the DDD is replicated using Domino

  11. MARRVEL: Integration of Human and Model Organism Genetic Resources to Facilitate Functional Annotation of the Human Genome.

    PubMed

    Wang, Julia; Al-Ouran, Rami; Hu, Yanhui; Kim, Seon-Young; Wan, Ying-Wooi; Wangler, Michael F; Yamamoto, Shinya; Chao, Hsiao-Tuan; Comjean, Aram; Mohr, Stephanie E; Perrimon, Norbert; Liu, Zhandong; Bellen, Hugo J

    2017-06-01

    One major challenge encountered with interpreting human genetic variants is the limited understanding of the functional impact of genetic alterations on biological processes. Furthermore, there remains an unmet demand for an efficient survey of the wealth of information on human homologs in model organisms across numerous databases. To efficiently assess the large volume of publicly available information, it is important to provide a concise summary of the most relevant information in a rapid, user-friendly format. To this end, we created MARRVEL (model organism aggregated resources for rare variant exploration). MARRVEL is a publicly available website that integrates information from six human genetic databases and seven model organism databases. For any given variant or gene, MARRVEL displays information from OMIM, ExAC, ClinVar, Geno2MP, DGV, and DECIPHER. Importantly, it curates model organism-specific databases to concurrently display a concise summary regarding the human gene homologs in budding and fission yeast, worm, fly, fish, mouse, and rat on a single webpage. Experiment-based information on tissue expression, protein subcellular localization, biological process, and molecular function for the human gene and homologs in the seven model organisms is arranged into a concise output. Hence, rather than visiting multiple separate databases for variant and gene analysis, users can obtain important information by searching once through MARRVEL. Altogether, MARRVEL dramatically improves efficiency of and accessibility to data collection and facilitates analysis of human genes and variants by cross-disciplinary integration of 18 million records available in public databases to facilitate clinical diagnosis and basic research. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  12. The Influence of Recurrent Modes of Climate Variability on the Occurrence of Monthly Temperature Extremes Over South America

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Detzer, Judah; Mechoso, Carlos R.; Lee, Huikyo; Barkhordarian, Armineh

    2017-10-01

    The associations between extreme temperature months and four prominent modes of recurrent climate variability are examined over South America. Associations are computed as the percent of extreme temperature months concurrent with the upper and lower quartiles of the El Niño-Southern Oscillation (ENSO), the Atlantic Niño, the Pacific Decadal Oscillation (PDO), and the Southern Annular Mode (SAM) index distributions, stratified by season. The relationship is strongest for ENSO, with nearly every extreme temperature month concurrent with the upper or lower quartile of its distribution in portions of northwestern South America during some seasons. The likelihood of extreme warm temperatures is enhanced over parts of northern South America when the Atlantic Niño index is in the upper quartile, while cold extremes are often associated with the lower quartile. Concurrent precipitation anomalies may contribute to these relations. The PDO shows weak associations during December, January, and February, while in June, July, and August its relationship with extreme warm temperatures closely matches that of ENSO. This may be due to the positive relationship between the PDO and ENSO, rather than the PDO acting as an independent physical mechanism. Over Patagonia, the SAM is highly influential during spring and fall, with warm and cold extremes being associated with positive and negative phases of the SAM, respectively. Composites of sea level pressure anomalies for extreme temperature months over Patagonia suggest an important role of local synoptic scale weather variability in addition to a favorable SAM for the occurrence of these extremes.
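
    The association measure described above, the percent of extreme temperature months whose climate-mode index falls in the upper or lower quartile of its distribution, can be sketched as follows. The function name and the simple rank-based quartile cut-offs are illustrative assumptions, not the authors' exact method.

```python
def quartile_concurrence(extreme_months, index_by_month):
    """Percent of extreme-temperature months whose climate-mode index
    falls in the upper or lower quartile of its distribution.

    `extreme_months` is a set of month identifiers flagged as extreme;
    `index_by_month` maps every month in the record to the mode index
    (e.g. an ENSO index value) for that month.
    """
    values = sorted(index_by_month.values())
    n = len(values)
    lower_q = values[n // 4]        # ~25th percentile of the index
    upper_q = values[(3 * n) // 4]  # ~75th percentile of the index
    hits = sum(1 for m in extreme_months
               if index_by_month[m] <= lower_q or index_by_month[m] >= upper_q)
    return 100.0 * hits / len(extreme_months)

# Example: 12 months of a toy index; extremes occurred in months 1 and 12,
# which sit at the tails of the index distribution.
index = {m: float(m) for m in range(1, 13)}
print(quartile_concurrence({1, 12}, index))  # 100.0
```

    In the study, this statistic would be computed separately for each season, grid cell, and climate mode.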

  13. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
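
    A minimal sketch of the LINDA-style tuple space communication the abstract mentions, under the assumption of a single-process, in-memory store (the model described layers this over a database and may distribute it). Class and method names are illustrative; `in` is spelled `in_` because it is a Python keyword.

```python
class TupleSpace:
    """Minimal LINDA-style tuple space: agents communicate by writing
    tuples with out() and matching them with rd() (read) or in_()
    (read and remove). None in a pattern acts as a wildcard."""

    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        """Return the first matching tuple without removing it."""
        for t in self.tuples:
            if self._match(pattern, t):
                return t
        return None

    def in_(self, pattern):
        """Return and remove the first matching tuple."""
        t = self.rd(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

# One knowledge agent publishes a fact; another consumes it.
space = TupleSpace()
space.out(("sensor", "bus-A", 120.5))
print(space.in_(("sensor", "bus-A", None)))  # ('sensor', 'bus-A', 120.5)
print(space.rd(("sensor", "bus-A", None)))   # None -- tuple was removed
```

    Because agents interact only through the space, they need not know each other's identities, which is what makes the mechanism attractive for loosely coupled distributed knowledge agents.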

  14. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    NASA Astrophysics Data System (ADS)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed: a local ontology is produced by building a variable precision concept lattice for each subsystem. A distributed generation algorithm for the variable precision concept lattice based on an ontology heterogeneous database is then proposed, drawing on the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as the standard, a case study is carried out to verify the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis shows that the algorithm can automatically carry out the construction of a distributed concept lattice over heterogeneous data sources.

  15. Practical Quantum Private Database Queries Based on Passive Round-Robin Differential Phase-shift Quantum Key Distribution

    PubMed Central

    Li, Jian; Yang, Yu-Guang; Chen, Xiu-Bo; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    A novel quantum private database query protocol is proposed, based on passive round-robin differential phase-shift quantum key distribution. Compared with previous quantum private database query protocols, the present protocol has the following unique merits: (i) the user Alice can obtain one and only one key bit, so that both the efficiency and the security of the protocol are ensured, and (ii) it does not require changing the length difference of the two arms of a Mach-Zehnder interferometer, instead passively choosing two pulses to interfere, which makes it much simpler and more practical. The present protocol is also proved to be secure in terms of both user security and database security. PMID:27539654

  16. Information resources at the National Center for Biotechnology Information.

    PubMed Central

    Woodsmall, R M; Benson, D A

    1993-01-01

    The National Center for Biotechnology Information (NCBI), part of the National Library of Medicine, was established in 1988 to perform basic research in the field of computational molecular biology as well as to build and distribute molecular biology databases. The basic research has led to new algorithms and analysis tools for interpreting genomic data and has been instrumental in the discovery of human disease genes for neurofibromatosis and Kallmann syndrome. The principal database responsibility is the National Institutes of Health (NIH) genetic sequence database, GenBank. NCBI, in collaboration with international partners, builds, distributes, and provides online and CD-ROM access to over 112,000 DNA sequences. Another major program is the integration of multiple sequence databases and related bibliographic information, and the development of network-based retrieval systems for Internet access. PMID:8374583

  17. Prescription patterns in asthma patients initiating salmeterol in UK general practice: a retrospective cohort study using the General Practice Research Database (GPRD).

    PubMed

    DiSantostefano, Rachael L; Davis, Kourtney J

    2011-06-01

    An association between use of salmeterol, a long-acting β(2)-agonist (LABA), and rare serious asthma events or asthma mortality was observed in two large clinical trials. This has resulted in heightened scrutiny of LABAs and comprehensive reviews by regulatory agencies. The aim of this retrospective observational cohort study was to better characterize salmeterol medication use patterns in the UK. We describe asthma prescription patterns in a cohort of patients (n = 17,745) in the General Practice Research Database who initiated treatment with salmeterol-containing prescriptions between 2003 and 2006, including salmeterol and salmeterol/fluticasone propionate in a single device. Prescription patterns by medication class, including concurrent prescription of salmeterol with inhaled corticosteroids (ICS), were described using 6-month intervals in the 1-year period before and after the salmeterol-containing index prescription. In the 0- to 6-month and 7- to 12-month periods prior to initiation of the salmeterol-containing prescription, the cohort experienced worsening of asthma, measured by an increase in the proportion of patients with prescriptions for short-acting β-agonists [SABA] (73-89%), ICS (70-81%) and systemic corticosteroids (14-28%). Nearly all patients prescribed salmeterol were concurrently prescribed ICS (≥95% within 90 days). In the 12 months following initiation of the salmeterol-containing prescription, a decrease in asthma prescriptions was observed. These results support the appropriate prescribing of salmeterol-containing medications, as per recommendations in asthma treatment guidelines in the UK. Salmeterol was consistently prescribed as an add-on asthma controller with an ICS for most patients, and was associated with improvements in asthma control, as indicated by decreases in SABA and systemic corticosteroid prescriptions following salmeterol introduction.

  18. Evaluation and validity of a LORETA normative EEG database.

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-04-01

    To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2 second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of the Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using it. Log10 and Box-Cox transforms approximated a Gaussian distribution with 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, a right sensory motor hematoma, and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved with LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
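
    The normative-database construction described above, log-transforming values so they better approximate a Gaussian distribution and then expressing a new subject as a Z-score against the normative mean and standard deviation, can be sketched for a single (pixel, frequency) cell. The function names and sample values are illustrative assumptions, not the authors' implementation, and the Box-Cox alternative is omitted for brevity.

```python
import math

def build_norms(subject_values):
    """Normative statistics for one (pixel, frequency) cell.

    `subject_values` are raw current-density values, one per normal
    subject. A log10 transform is applied to approximate a Gaussian
    distribution before computing the normative mean and sample SD.
    """
    logged = [math.log10(v) for v in subject_values]
    n = len(logged)
    mean = sum(logged) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logged) / (n - 1))
    return mean, sd

def z_score(value, mean, sd):
    """Deviation of a new subject from the normative database."""
    return (math.log10(value) - mean) / sd

# Toy norms: log10 of the three values is 1, 2, 3 (mean 2, SD 1).
mean, sd = build_norms([10.0, 100.0, 1000.0])
print(round(z_score(1000.0, mean, sd), 6))  # 1.0, i.e. one SD above the mean
```

    In the full database this pair of statistics is stored for every gray matter pixel and frequency, and a pathology shows up as a spatial cluster of large |Z| values.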

  19. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    NASA Astrophysics Data System (ADS)

    Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.

    2003-09-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software, integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analyses. The images (18×24 cm2, digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones carry a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense, or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is made using neural network software that provides a selection of areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as a "second reader" will also be presented.

  20. Apollo2Go: a web service adapter for the Apollo genome viewer to enable distributed genome annotation.

    PubMed

    Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus F X

    2007-08-30

    Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from ftp://ftpmips.gsf.de/plants/apollo_webservice.

  1. Apollo2Go: a web service adapter for the Apollo genome viewer to enable distributed genome annotation

    PubMed Central

    Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus FX

    2007-01-01

    Background Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. Results To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. Conclusion This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from . PMID:17760972

  2. Content-based image retrieval in medical applications for picture archiving and communication systems

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.; Guld, Mark O.; Thies, Christian; Fischer, Benedikt; Keysers, Daniel; Kohnen, Michael; Schubert, Henning; Wein, Berthold B.

    2003-05-01

    Picture archiving and communication systems (PACS) aim to efficiently provide radiologists with all images in a quality suitable for diagnosis. Modern standards for digital imaging and communication in medicine (DICOM) comprise alphanumerical descriptions of study, patient, and technical parameters. Currently, this is the only information used to select relevant images within a PACS. Since textual descriptions insufficiently describe the great variety of details in medical images, content-based image retrieval (CBIR) is expected to have a strong impact when integrated into PACS. However, existing CBIR approaches usually are limited to a distinct modality, organ, or diagnostic study. In this state-of-the-art report, we present first results implementing a general approach to content-based image retrieval in medical applications (IRMA) and discuss its integration into PACS environments. Usually, a PACS consists of a DICOM image server and several DICOM-compliant workstations, which are used by radiologists for reading the images and reporting the findings. Basic IRMA components are the relational database, the scheduler, and the web server, which all may be installed on the DICOM image server, and the IRMA daemons running on distributed machines, e.g., the radiologists' workstations. These workstations can also host the web-based front-ends of IRMA applications. Integrating CBIR and PACS, a special focus is put on (a) location and access transparency for data, methods, and experiments, (b) replication transparency for methods in development, (c) concurrency transparency for job processing and feature extraction, (d) system transparency at method implementation time, and (e) job distribution transparency when issuing a query. Transparent integration will have a certain impact on diagnostic quality, supporting both evidence-based medicine and case-based reasoning.

  3. The Integrated Medical Model: A Probabilistic Simulation Model for Predicting In-Flight Medical Risks

    NASA Technical Reports Server (NTRS)

    Keenan, Alexandra; Young, Millennia; Saile, Lynn; Boley, Lynn; Walton, Marlei; Kerstman, Eric; Shah, Ronak; Goodenow, Debra A.; Myers, Jerry G.

    2015-01-01

    The Integrated Medical Model (IMM) is a probabilistic model that uses simulation to predict mission medical risk. Given a specific mission and crew scenario, medical events are simulated using Monte Carlo methodology to provide estimates of resource utilization, probability of evacuation, probability of loss of crew, and the amount of mission time lost due to illness. Mission and crew scenarios are defined by mission length, extravehicular activity (EVA) schedule, and crew characteristics including: sex, coronary artery calcium score, contacts, dental crowns, history of abdominal surgery, and EVA eligibility. The Integrated Medical Evidence Database (iMED) houses the model inputs for one hundred medical conditions using in-flight, analog, and terrestrial medical data. Inputs include incidence, event durations, resource utilization, and crew functional impairment. Severity of conditions is addressed by defining statistical distributions on the dichotomized best and worst-case scenarios for each condition. The outcome distributions for conditions are bounded by the treatment extremes of the fully treated scenario in which all required resources are available and the untreated scenario in which no required resources are available. Upon occurrence of a simulated medical event, treatment availability is assessed, and outcomes are generated depending on the status of the affected crewmember at the time of onset, including any pre-existing functional impairments or ongoing treatment of concurrent conditions. The main IMM outcomes, including probability of evacuation and loss of crew life, time lost due to medical events, and resource utilization, are useful in informing mission planning decisions. 
To date, the IMM has been used to assess mission-specific risks with and without certain crewmember characteristics, to determine the impact of eliminating certain resources from the mission medical kit, and to design medical kits that maximally benefit crew health while meeting mass and volume constraints.
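
    The Monte Carlo approach described above can be sketched in miniature. The condition names, incidence figures, and the reduction of outcomes to a single evacuation probability are invented for illustration; the real IMM draws its inputs from the iMED and also models resource utilization, event durations, functional impairment, and treated versus untreated outcome bounds.

```python
import random

def simulate_mission(conditions, mission_days, trials=10000, seed=1):
    """Toy Monte Carlo in the spirit of the IMM: each condition has a
    daily incidence and a probability of requiring evacuation when it
    occurs. Returns the estimated probability of at least one
    evacuation over the mission.

    `conditions` maps name -> (daily_incidence, p_evac_if_occurs).
    """
    rng = random.Random(seed)  # seeded for a reproducible estimate
    evacuations = 0
    for _ in range(trials):
        evac = False
        for _day in range(mission_days):
            for incidence, p_evac in conditions.values():
                if rng.random() < incidence:      # condition occurs today
                    if rng.random() < p_evac:     # outcome forces evacuation
                        evac = True
        if evac:
            evacuations += 1
    return evacuations / trials

# Hypothetical inputs: two conditions with made-up incidence rates.
conditions = {"renal_stone": (0.0005, 0.3), "dental_abscess": (0.001, 0.1)}
p = simulate_mission(conditions, mission_days=180, trials=2000)
print(f"P(evacuation) over 180 days ~ {p:.3f}")
```

    Repeating such runs while varying the medical kit contents or crew characteristics is, in spirit, how the model informs the trade-off decisions listed above.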

  4. The Integrated Medical Model: A Probabilistic Simulation Model Predicting In-Flight Medical Risks

    NASA Technical Reports Server (NTRS)

    Keenan, Alexandra; Young, Millennia; Saile, Lynn; Boley, Lynn; Walton, Marlei; Kerstman, Eric; Shah, Ronak; Goodenow, Debra A.; Myers, Jerry G., Jr.

    2015-01-01

    The Integrated Medical Model (IMM) is a probabilistic model that uses simulation to predict mission medical risk. Given a specific mission and crew scenario, medical events are simulated using Monte Carlo methodology to provide estimates of resource utilization, probability of evacuation, probability of loss of crew, and the amount of mission time lost due to illness. Mission and crew scenarios are defined by mission length, extravehicular activity (EVA) schedule, and crew characteristics including: sex, coronary artery calcium score, contacts, dental crowns, history of abdominal surgery, and EVA eligibility. The Integrated Medical Evidence Database (iMED) houses the model inputs for one hundred medical conditions using in-flight, analog, and terrestrial medical data. Inputs include incidence, event durations, resource utilization, and crew functional impairment. Severity of conditions is addressed by defining statistical distributions on the dichotomized best and worst-case scenarios for each condition. The outcome distributions for conditions are bounded by the treatment extremes of the fully treated scenario in which all required resources are available and the untreated scenario in which no required resources are available. Upon occurrence of a simulated medical event, treatment availability is assessed, and outcomes are generated depending on the status of the affected crewmember at the time of onset, including any pre-existing functional impairments or ongoing treatment of concurrent conditions. The main IMM outcomes, including probability of evacuation and loss of crew life, time lost due to medical events, and resource utilization, are useful in informing mission planning decisions. 
To date, the IMM has been used to assess mission-specific risks with and without certain crewmember characteristics, to determine the impact of eliminating certain resources from the mission medical kit, and to design medical kits that maximally benefit crew health while meeting mass and volume constraints.

  5. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (CONCURRENT VERSION)

    NASA Technical Reports Server (NTRS)

    Pearson, R. W.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. 
ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. 
The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and Open Windows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by Massachusetts Institute of Technology.

  6. Constructing distributed Hippocratic video databases for privacy-preserving online patient training and counseling.

    PubMed

    Peng, Jinye; Babaguchi, Noboru; Luo, Hangzai; Gao, Yuli; Fan, Jianping

    2010-07-01

    Digital video now plays an important role in supporting more profitable online patient training and counseling, and integration of patient training videos from multiple competitive organizations in the health care network will result in better offerings for patients. However, privacy concerns often prevent multiple competitive organizations from sharing and integrating their patient training videos. In addition, patients with infectious or chronic diseases may not want the online patient training organizations to identify who they are or even which video clips they are interested in. Thus, there is an urgent need to develop more effective techniques to protect both video content privacy and access privacy. In this paper, we have developed a new approach to construct a distributed Hippocratic video database system for supporting more profitable online patient training and counseling. First, a new database modeling approach is developed to support concept-oriented video database organization and automatically assign a degree of privacy of the video content for each database level. Second, a new algorithm is developed to protect video content privacy at the level of individual video clips by automatically filtering out privacy-sensitive human objects. To integrate the patient training videos from multiple competitive organizations into a centralized video database indexing structure, a privacy-preserving video sharing scheme is developed to support privacy-preserving distributed classifier training and to prevent statistical inferences from the videos that are shared for cross-validation of video classifiers. Our experiments on large-scale video databases have provided convincing results.

  7. Controlled Substance Reconciliation Accuracy Improvement Using Near Real-Time Drug Transaction Capture from Automated Dispensing Cabinets.

    PubMed

    Epstein, Richard H; Dexter, Franklin; Gratch, David M; Perino, Michael; Magrann, Jerry

    2016-06-01

    Accurate accounting of controlled drug transactions by inpatient hospital pharmacies is a requirement in the United States under the Controlled Substances Act. At many hospitals, manual distribution of controlled substances from pharmacies is being replaced by automated dispensing cabinets (ADCs) at the point of care. Despite the promise of improved accountability, a high prevalence (15%) of controlled substance discrepancies between ADC records and anesthesia information management systems (AIMS) has been published, with a similar incidence (15.8%; 95% confidence interval [CI], 15.3% to 16.2%) noted at our institution. Most reconciliation errors are clerical. In this study, we describe a method to capture drug transactions in near real-time from our ADCs, compare them with documentation in our AIMS, and evaluate subsequent improvement in reconciliation accuracy. ADC controlled substance transactions are transmitted to a hospital interface server, parsed, reformatted, and sent to a software script written in Perl. The script extracts the data and writes them to a SQL Server database. Concurrently, controlled drug totals for each patient under care are documented in the AIMS and compared with the balance of the ADC transactions (i.e., vending, transferring, wasting, and returning drug). Every minute, a reconciliation report is made available to anesthesia providers over the hospital intranet from AIMS workstations. The report lists all patients, the current provider, the balance of ADC transactions, the totals from the AIMS, the difference, and whether the case was still ongoing or had concluded. Accuracy and latency of the ADC transaction capture process were assessed via simulation and by comparison with pharmacy database records, maintained by the vendor on a central server located remotely from the hospital network. 
For assessment of reconciliation accuracy over time, data were collected from our AIMS from January 2012 to June 2013 (Baseline), July 2013 to April 2014 (Next Day Reports), and May 2014 to September 2015 (Near Real-Time Reports) and reconciled against pharmacy records from the central pharmacy database maintained by the vendor. Control chart (batch means) methods were used between successive epochs to determine if improvement had taken place. During simulation, 100% of 10,000 messages, transmitted at a rate of 1295 per minute, were accurately captured and inserted into the database. Latency (transmission time to local database insertion time) was 46.3 ± 0.44 milliseconds (SEM). During acceptance testing, only 1 of 1384 transactions analyzed had a difference between the near real-time process and what was in the central database; this was for a "John Doe" patient whose name had been changed subsequent to data capture. Once a transaction was entered at the ADC workstation, 84.9% (n = 18 bins; 95% CI, 78.4% to 91.3%) of these transactions were available in the database on the AIMS server within 2 minutes. Within 5 minutes, 98.2% (n = 18 bins; 95% CI, 97.2% to 99.3%) were available. Among 145,642 transactions present in the central pharmacy database, only 24 were missing from the local database table (mean = 0.018%; 95% CI, 0.002% to 0.034%). Implementation of near real-time reporting improved the controlled substance reconciliation error rate compared to the previous Next Day Reports epoch, from 8.8% to 5.2% (difference = -3.6%; 95% CI, -4.3% to -2.8%; P < 10). Errors were distributed among staff, with 50% of discrepancies accounted for by 12.4% of providers and 80% accounted for by 28.5% of providers executing transactions during the Near Real-Time Reports epoch. The near real-time system for the capture of transactional data flowing over the hospital network was highly accurate, reliable, and exhibited acceptable latency. 
    This methodology can be used by other institutions to implement similar capture of transaction data from their drug ADCs. Reconciliation accuracy improved significantly as a result of implementation. Our approach may be of particular utility at facilities with limited pharmacy resources to audit anesthesia records for controlled substance administration and reconcile them against dispensing records.
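    The per-patient reconciliation described above, netting ADC vend/waste/return/transfer transactions and comparing the balance against AIMS-documented totals, can be sketched as follows. This is an illustrative Python sketch, not the authors' Perl/SQL Server implementation; the field names and transaction sign conventions are assumptions.

```python
from collections import defaultdict

# Assumed sign conventions: vending adds drug to the provider's balance;
# returning, wasting, and transferring remove it. Illustrative only.
SIGN = {"vend": +1, "return": -1, "waste": -1, "transfer": -1}

def adc_balance(transactions):
    """Net drug balance per (patient, drug) implied by ADC transactions."""
    bal = defaultdict(float)
    for t in transactions:
        bal[(t["patient"], t["drug"])] += SIGN[t["type"]] * t["mg"]
    return dict(bal)

def discrepancies(transactions, aims_totals, tol=0.01):
    """List (patient, drug, adc_mg, aims_mg) tuples where the ADC balance
    and the AIMS-documented total disagree beyond a small tolerance."""
    bal = adc_balance(transactions)
    out = []
    for key in sorted(set(bal) | set(aims_totals)):
        adc, aims = bal.get(key, 0.0), aims_totals.get(key, 0.0)
        if abs(adc - aims) > tol:
            out.append((*key, adc, aims))
    return out
```

    A near real-time report would simply rerun `discrepancies` each minute over the transactions accumulated so far.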

  8. Database Development for Ocean Impacts: Imaging, Outreach, and Rapid Response

    DTIC Science & Technology

    2012-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Database Development for Ocean Impacts: Imaging, Outreach, and Rapid Response. (Report documentation page; only a fragment of the abstract survives: … hoses (Applied Ocean Physics & Engineering Department, WHOI) to evaluate wear and locate … in mooring optical cables used in the Right Whale monitoring …)

  9. Relativistic quantum private database queries

    NASA Astrophysics Data System (ADS)

    Sun, Si-Jia; Yang, Yu-Guang; Zhang, Ming-Ou

    2015-04-01

    Recently, Jakobi et al. (Phys Rev A 83, 022301, 2011) suggested the first practical private database query protocol (J-protocol) based on the Scarani et al. (Phys Rev Lett 92, 057901, 2004) quantum key distribution protocol. Unfortunately, the J-protocol is just a cheat-sensitive private database query protocol. In this paper, we present an idealized relativistic quantum private database query protocol based on Minkowski causality and the properties of quantum information. Also, we prove that the protocol is secure in terms of the user security and the database security.

  10. The Network Configuration of an Object Relational Database Management System

    NASA Technical Reports Server (NTRS)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  11. Effects of cacheing on multitasking efficiency and programming strategy on an ELXSI 6400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.; Benner, R.E.

    1985-12-01

    The impact of a cache/shared memory architecture, and, in particular, the cache coherency problem, upon concurrent algorithm and program development is discussed. In this context, a simple set of programming strategies are proposed which streamline code development and improve code performance when multitasking in a cache/shared memory or distributed memory environment.

  12. Concurrent infections with Cryptosporidium spp., Giardia duodenalis, Enterocytozoon bieneusi, and Blastocystis spp. in naturally infected dairy cattle from birth to two years of age

    USDA-ARS?s Scientific Manuscript database

    Fecal specimens were collected directly at weekly and then monthly intervals from each of 30 dairy calves from birth to 24 months to determine the prevalence and age distribution of Cryptosporidium spp., Giardia duodenalis assemblages, Enterocytozoon bieneusi genotypes, and Blastocystis spp. subtypes...

  13. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the way in which the Cascade-Correlation algorithm was parallelized so that it could be run under the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
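    The core imputation idea can be sketched as follows: draw each missing entry from the empirical distribution of the observed values in its column, and repeat to generate multiple completed instances. This is a minimal illustration only; the actual MCBDG method also propagates an empirical or assumed error distribution, which this sketch omits.

```python
import random

def empirical_complete(table, n_instances, seed=0):
    """Generate n_instances completed copies of `table` (rows of equal
    length; None marks a missing entry). Each missing value is sampled
    from the observed values of its own column. Assumes every column has
    at least one observed value."""
    rng = random.Random(seed)
    observed = [[v for v in col if v is not None] for col in zip(*table)]
    instances = []
    for _ in range(n_instances):
        inst = [[v if v is not None else rng.choice(observed[j])
                 for j, v in enumerate(row)] for row in table]
        instances.append(inst)
    return instances
```

    Training a learner on each instance and pooling the results then reflects the uncertainty introduced by the missing data.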

  15. GlobTherm, a global database on thermal tolerances for aquatic and terrestrial organisms.

    PubMed

    Bennett, Joanne M; Calosi, Piero; Clusella-Trullas, Susana; Martínez, Brezo; Sunday, Jennifer; Algar, Adam C; Araújo, Miguel B; Hawkins, Bradford A; Keith, Sally; Kühn, Ingolf; Rahbek, Carsten; Rodríguez, Laura; Singer, Alexander; Villalobos, Fabricio; Ángel Olalla-Tárraga, Miguel; Morales-Castilla, Ignacio

    2018-03-13

    How climate affects species distributions is a longstanding question receiving renewed interest owing to the need to predict the impacts of global warming on biodiversity. Is climate change forcing species to live near their critical thermal limits? Are these limits likely to change through natural selection? These and other important questions can be addressed with models relating geographical distributions of species with climate data, but inferences made with these models are highly contingent on non-climatic factors such as biotic interactions. Improved understanding of climate change effects on species will require extensive analysis of thermal physiological traits, but such data are both scarce and scattered. To overcome current limitations, we created the GlobTherm database. The database contains experimentally derived species' thermal tolerance data currently comprising over 2,000 species of terrestrial, freshwater, intertidal and marine multicellular algae, plants, fungi, and animals. The GlobTherm database will be maintained and curated by iDiv with the aim to keep expanding it, and enable further investigations on the effects of climate on the distribution of life on Earth.

  16. Distributed data collection for a database of radiological image interpretations

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  17. Library Micro-Computing, Vol. 2. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 19 articles pertaining to library microcomputing appear in this collection, the second of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  18. Web Database Development: Implications for Academic Publishing.

    ERIC Educational Resources Information Center

    Fernekes, Bob

    This paper discusses the preliminary planning, design, and development of a pilot project to create an Internet accessible database and search tool for locating and distributing company data and scholarly work. Team members established four project objectives: (1) to develop a Web accessible database and decision tool that creates Web pages on the…

  19. Prevalence and geographical distribution of Usher syndrome in Germany.

    PubMed

    Spandau, Ulrich H M; Rohrschneider, Klaus

    2002-06-01

    To estimate the prevalence of Usher syndrome in Heidelberg and Mannheim and to map its geographical distribution in Germany. Usher syndrome patients were ascertained through the databases of the Low Vision Department at the University of Heidelberg, and of the patient support group Pro Retina. Ophthalmic and audiologic examinations and medical records were used to classify patients into one of the subtypes. The database of the University of Heidelberg contains 247 Usher syndrome patients, 63 with Usher syndrome type 1 (USH1) and 184 with Usher syndrome type 2 (USH2). The USH1:USH2 ratio in the Heidelberg database was 1:3. The Pro Retina database includes 248 Usher syndrome patients, 21 with USH1 and 227 with USH2. The total number of Usher syndrome patients was 424, with 75 USH1 and 349 USH2 patients; 71 patients were in both databases. The prevalence of Usher syndrome in Heidelberg and suburbs was calculated to be 6.2 per 100,000 inhabitants. There seems to be a homogeneous distribution in Germany for both subtypes. Knowledge of the high prevalence of Usher syndrome, with up to 5,000 patients in Germany, should lead to increased awareness and timely diagnosis by ophthalmologists and otologists. It should also ensure that these patients receive good support through hearing and vision aids.

  20. Geologic map and map database of parts of Marin, San Francisco, Alameda, Contra Costa, and Sonoma counties, California

    USGS Publications Warehouse

    Blake, M.C.; Jones, D.L.; Graymer, R.W.; digital database by Soule, Adam

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (mageo.txt, mageo.pdf, or mageo.ps), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  1. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  2. ChEMBL web services: streamlining access to drug discovery data and utilities.

    PubMed

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P

    2015-07-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
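    The RESTful pattern the ChEMBL web services follow, one URL per entity with the response format as a suffix, can be illustrated with a small helper. This is a sketch: the base URL is taken from the abstract above, the `molecule`/`target` resource names follow the documented pattern, and the helper function and example IDs are illustrative assumptions.

```python
# Base of the ChEMBL data web services (see the docs URL in the abstract).
BASE = "https://www.ebi.ac.uk/chembl/api/data"

def resource_url(resource: str, chembl_id: str, fmt: str = "json") -> str:
    """Build a RESTful URL for a single ChEMBL entity record."""
    return f"{BASE}/{resource}/{chembl_id}.{fmt}"

# Example: a molecule record as JSON.
url = resource_url("molecule", "CHEMBL25")
print(url)  # https://www.ebi.ac.uk/chembl/api/data/molecule/CHEMBL25.json

# Fetching is then a plain HTTP GET, e.g.:
# import urllib.request, json
# with urllib.request.urlopen(url) as resp:
#     record = json.load(resp)
```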

  3. LOLAweb: a containerized web server for interactive genomic locus overlap enrichment analysis.

    PubMed

    Nagraj, V P; Magee, Neal E; Sheffield, Nathan C

    2018-06-06

    The past few years have seen an explosion of interest in understanding the role of regulatory DNA. This interest has driven large-scale production of functional genomics data and analytical methods. One popular analysis is to test for enrichment of overlaps between a query set of genomic regions and a database of region sets. In this way, new genomic data can be easily connected to annotations from external data sources. Here, we present an interactive interface for enrichment analysis of genomic locus overlaps using a web server called LOLAweb. LOLAweb accepts a set of genomic ranges from the user and tests it for enrichment against a database of region sets. LOLAweb renders results in an R Shiny application to provide interactive visualization features, enabling users to filter, sort, and explore enrichment results dynamically. LOLAweb is built and deployed in a Linux container, making it scalable to many concurrent users on our servers and also enabling users to download and run LOLAweb locally.
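    The underlying computation, counting how many query regions overlap a database region set and testing that count for enrichment against a universe of regions, can be sketched as follows. This is a simplified illustration of the general LOLA-style test, not LOLAweb's actual code; a hypergeometric right tail stands in for the tool's scoring.

```python
from math import comb

def overlaps(query, region_set):
    """Count query intervals overlapping any interval in region_set.
    Intervals are (chrom, start, end) tuples; naive O(n*m) scan."""
    def hit(a, b):
        return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]
    return sum(any(hit(q, r) for r in region_set) for q in query)

def enrichment_p(k, K, n, N):
    """Right-tail hypergeometric P(X >= k): of N universe regions,
    K fall in the database set and n in the query; k overlap both."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

    Ranking database region sets by this p-value (or an odds ratio from the same 2x2 table) gives the kind of sortable enrichment results LOLAweb renders interactively.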

  4. Design of Instant Messaging System of Multi-language E-commerce Platform

    NASA Astrophysics Data System (ADS)

    Yang, Heng; Chen, Xinyi; Li, Jiajia; Cao, Yaru

    2017-09-01

    This paper studies the message subsystem of an instant messaging system built on a multi-language e-commerce platform, with the aim of designing instant messaging for a multi-language environment, presenting information with national characteristics, and applying national languages to e-commerce. To provide an attractive, user-friendly front-end interface for the message system while reducing development cost, the mature jQuery framework is adopted. At the back end, the high-performance Tomcat server processes user requests, a MySQL database persistently stores user data, and an Oracle database serves as the message buffer for system optimization. AJAX is used on the client to pull the newest data from the server at a specified interval. In practical application, the system exhibits strong reliability, good extensibility, short response times, high throughput, and high user concurrency.

  5. The Effects of Cost Sharing on Adherence to Medications Prescribed for Concurrent Use: Do Definitions Matter?

    PubMed

    Sacks, Naomi C; Burgess, James F; Cabral, Howard J; McDonnell, Marie E; Pizer, Steven D

    2015-08-01

    Accurate estimates of the effects of cost sharing on adherence to medications prescribed for use together, also called concurrent adherence, are important for researchers, payers, and policymakers who want to reduce barriers to adherence for chronic condition patients prescribed multiple medications concurrently. But measure definition consensus is lacking, and the effects of different definitions on estimates of cost-related nonadherence are unevaluated. To (a) compare estimates of cost-related nonadherence using different measure definitions and (b) provide guidance for analyses of the effects of cost sharing on concurrent adherence. This is a retrospective cohort study of Medicare Part D beneficiaries aged 65 years and older who used multiple oral antidiabetics concurrently in 2008 and 2009. We compared patients with standard coverage, which contains cost-sharing requirements in deductible (100%), initial (25%), and coverage gap (100%) phases, to patients with a low-income subsidy (LIS) and minimal cost-sharing requirements. Data source was the IMS Health Longitudinal Prescription Database. Patients with standard coverage were propensity matched to controls with LIS coverage. Propensity score was developed using logistic regression to model likelihood of Part D standard enrollment, controlling for sociodemographic and health status characteristics. For analysis, 3 definitions were used for unadjusted and adjusted estimates of adherence: (1) patients adherent to All medications; (2) patients adherent on Average; and (3) patients adherent to Any medication. Analyses were conducted using the full study sample and then repeated in analytic subgroups where patients used (a) 1 or more costly branded oral antidiabetics or (b) inexpensive generics only. We identified 12,771 propensity matched patients with Medicare Part D standard (N = 6,298) or LIS (N = 6,473) coverage who used oral antidiabetics in 2 or more of the same classes in 2008 and 2009. 
In this sample, estimates of the effects of cost sharing on concurrent adherence varied by measure definition, coverage type, and proportion of patients using more costly branded drugs. Adherence rates ranged from 37% (All: standard patients using 1+ branded) to 97% (Any: LIS using generics only). In adjusted estimates, standard patients using branded drugs had 0.63 (95% CI = 0.57-0.70) and 0.70 (95% CI = 0.63-0.77) times the odds of concurrent adherence using All and Average definitions, respectively. The Any subgroup was not significant (OR = 0.89, 95% CI = 0.87-1.17). Estimates also varied in the full-study sample (All: OR = 0.79, 95% CI = 0.74-0.85; Average: OR = 0.83, 95% CI = 0.77-0.89) and generics-only subgroup, although cost-sharing effects were smaller. The Any subgroup generated no significant estimates. Different concurrent adherence measure definitions lead to markedly different findings of the effects of cost sharing on concurrent adherence, with All and Average subgroups sensitive to these effects. However, when more study patients use inexpensive generics, estimates of these effects on adherence to branded medications with higher cost-sharing requirements may be diluted. When selecting a measure definition, researchers, payers, and policy analysts should consider the range of medication prices patients face, use a measure sensitive to the effects of cost sharing on adherence, and perform subgroup analyses for patients prescribed more medications for which they must pay more, since these patients are most vulnerable to cost-related nonadherence.
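    The three concurrent-adherence definitions compared above can be made concrete with a small sketch. This is illustrative only: the 0.8 proportion-of-days-covered (PDC) cutoff is the conventional adherence threshold and is assumed here, and the drug names are hypothetical.

```python
def adherence_flags(pdc_by_drug, threshold=0.8):
    """Classify a patient's concurrent adherence three ways from per-drug
    PDC values, following the paper's All/Average/Any labels."""
    pdcs = list(pdc_by_drug.values())
    return {
        "All": all(p >= threshold for p in pdcs),       # adherent to every drug
        "Average": sum(pdcs) / len(pdcs) >= threshold,  # mean PDC meets cutoff
        "Any": any(p >= threshold for p in pdcs),       # adherent to at least one
    }
```

    A patient covered 90% of days on one drug but 50% on another counts as adherent under Any but not under All or Average, which is why the three definitions yield such different cost-sharing estimates.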

  6. Income distribution patterns from a complete social security database

    NASA Astrophysics Data System (ADS)

    Derzsy, N.; Néda, Z.; Santos, M. A.

    2012-11-01

    We analyze the income distribution of employees for 9 consecutive years (2001-2009) using a complete social security database for an economically important district of Romania. The database contains detailed information on more than half a million taxpayers, including their monthly salaries from all employers for which they worked. Besides studying the characteristic distribution functions in the high and low/medium income limits, the database allows a detailed dynamical study by following the time evolution of taxpayers' incomes. To our knowledge, this is the first extensive study of this kind (a previous Japanese taxpayer survey was limited to two years). In the high income limit we prove once again the validity of Pareto's law, obtaining a perfect scaling over four orders of magnitude in the rank for all the studied years. The obtained Pareto exponents are quite stable, with values around α≈2.5, in spite of the fact that during this period the economy developed rapidly and a financial-economic crisis hit Romania in 2007-2008. For the low and medium income categories we confirmed the exponential-type income distribution. Following the income of employees in time, we found that the top of the income distribution is a highly dynamical region with strong fluctuations in rank. In this region, the observed dynamics is consistent with a multiplicative random growth hypothesis. Contrary to previous results obtained for Japanese employees, we find that the logarithmic growth rate is not independent of income.

  7. Extending GIS Technology to Study Karst Features of Southeastern Minnesota

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Tipping, R. G.; Alexander, E. C.; Alexander, S. C.

    2001-12-01

    This paper summarizes ongoing research on karst feature distribution in southeastern Minnesota. The main goals of this interdisciplinary research are: 1) to look for large-scale patterns in the rate and distribution of sinkhole development; 2) to conduct statistical tests of hypotheses about the formation of sinkholes; 3) to create management tools for land-use managers and planners; and 4) to deliver geomorphic and hydrogeologic criteria for making scientifically valid land-use policies and ethical decisions in karst areas of southeastern Minnesota. Existing county and sub-county karst feature datasets of southeastern Minnesota have been assembled into a large GIS-based database capable of analyzing the entire data set. The central database management system (DBMS) is a relational GIS-based system interacting with three modules: GIS, statistical, and hydrogeologic. ArcInfo and ArcView were used to generate a series of 2D and 3D maps depicting karst feature distributions in southeastern Minnesota. IRIS Explorer™ was used to produce 3D maps and animations using data exported from the GIS-based database. Nearest-neighbor analysis has been used to test sinkhole distributions in different topographic and geologic settings. All current nearest-neighbor analyses indicate that sinkholes in southeastern Minnesota are not evenly distributed in this area (i.e., they tend to be clustered). More detailed statistical methods such as cluster analysis, histograms, probability estimation, correlation, and regression have been used to study the spatial distributions of some mapped karst features of southeastern Minnesota. A sinkhole probability map for Goodhue County has been constructed based on sinkhole distribution, bedrock geology, depth to bedrock, GIS buffer analysis, and nearest-neighbor analysis. 
    A series of karst features for Winona County, including sinkholes, springs, seeps, stream sinks, and outcrops, has been mapped and entered into the Karst Feature Database of Southeastern Minnesota. The Karst Feature Database of Winona County is being expanded to include all the mapped karst features of southeastern Minnesota. Air photos from the 1930s to the 1990s of the Spring Valley Cavern area in Fillmore County were scanned and geo-referenced into our GIS system. This technology has proved very useful for identifying sinkholes and studying the rate of sinkhole development.
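    The nearest-neighbor test for clustering used above can be sketched with the classic Clark-Evans index: the mean observed nearest-neighbor distance divided by the value expected under complete spatial randomness. This is an illustrative implementation without edge correction, not the study's GIS workflow.

```python
import math

def clark_evans_ratio(points, area):
    """Clark-Evans index R for points (x, y) in a region of given area.
    Expected mean NN distance under randomness is 0.5 / sqrt(n / area).
    R < 1 suggests clustering, R ~ 1 randomness, R > 1 regularity.
    Naive O(n^2) nearest-neighbor scan, no edge correction."""
    n = len(points)
    mean_nn = sum(min(math.dist(p, q) for j, q in enumerate(points) if j != i)
                  for i, p in enumerate(points)) / n
    expected = 0.5 / math.sqrt(n / area)
    return mean_nn / expected
```

    Applied to sinkhole coordinates, an index well below 1 supports the clustering conclusion reported for southeastern Minnesota.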

  8. Distributed processor allocation for launching applications in a massively connected processors complex

    DOEpatents

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.

  9. Practical private database queries based on a quantum-key-distribution protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakobi, Markus; Humboldt-Universitaet zu Berlin, D-10117 Berlin; Simon, Christoph

    2011-02-15

    Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions in order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.

  10. A revision of the distribution of sea kraits (Reptilia, Laticauda) with an updated occurrence dataset for ecological and conservation research

    PubMed Central

    Gherghel, Iulian; Papeş, Monica; Brischoux, François; Sahlean, Tiberiu; Strugariu, Alexandru

    2016-01-01

    The genus Laticauda (Reptilia: Elapidae), commonly known as sea kraits, comprises eight species of marine amphibious snakes distributed along the shores of the Western Pacific Ocean and the Eastern Indian Ocean. We review the information available on the geographic range of sea kraits and analyze their distribution patterns. Generally, we found that south and south-west of Japan, Philippines Archipelago, parts of Indonesia, and Vanuatu have the highest diversity of sea krait species. Further, we compiled the information available on sea kraits’ occurrences from a variety of sources, including museum records, field surveys, and the scientific literature. The final database comprises 694 occurrence records, with Laticauda colubrina having the highest number of records and Laticauda schistorhyncha the lowest. The occurrence records were georeferenced and compiled as a database for each sea krait species. This database can be freely used for future studies. PMID:27110155

  11. A revision of the distribution of sea kraits (Reptilia, Laticauda) with an updated occurrence dataset for ecological and conservation research.

    PubMed

    Gherghel, Iulian; Papeş, Monica; Brischoux, François; Sahlean, Tiberiu; Strugariu, Alexandru

    2016-01-01

    The genus Laticauda (Reptilia: Elapidae), commonly known as sea kraits, comprises eight species of marine amphibious snakes distributed along the shores of the Western Pacific Ocean and the Eastern Indian Ocean. We review the information available on the geographic range of sea kraits and analyze their distribution patterns. Generally, we found that south and south-west Japan, the Philippine Archipelago, parts of Indonesia, and Vanuatu have the highest diversity of sea krait species. Further, we compiled the information available on sea kraits' occurrences from a variety of sources, including museum records, field surveys, and the scientific literature. The final database comprises 694 occurrence records, with Laticauda colubrina having the highest number of records and Laticauda schistorhyncha the lowest. The occurrence records were georeferenced and compiled as a database for each sea krait species. This database can be freely used for future studies.

  12. NASA's computer science research program

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  13. ACMES: fast multiple-genome searches for short repeat sequences with concurrent cross-species information retrieval

    PubMed Central

    Reneker, Jeff; Shyu, Chi-Ren; Zeng, Peiyu; Polacco, Joseph C.; Gassmann, Walter

    2004-01-01

    We have developed a web server for the life sciences community to use to search for short repeats of DNA sequence of length between 3 and 10 000 bases within multiple species. This search employs a unique and fast hash function approach. Our system also applies information retrieval algorithms to discover knowledge of cross-species conservation of repeat sequences. Furthermore, we have incorporated a part of the Gene Ontology database into our information retrieval algorithms to broaden the coverage of the search. Our web server and tutorial can be found at http://acmes.rnet.missouri.edu. PMID:15215469
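
    The hash-based indexing idea behind such a repeat search can be sketched as follows. This is an illustrative reimplementation of the general technique, not the ACMES code; the function name and parameters are invented for the example.

```python
from collections import defaultdict

def find_short_repeats(sequence, k):
    """Index every k-mer by position via a hash table, then keep
    the k-mers that occur more than once (i.e., short repeats)."""
    positions = defaultdict(list)
    for i in range(len(sequence) - k + 1):
        positions[sequence[i:i + k]].append(i)
    return {kmer: pos for kmer, pos in positions.items() if len(pos) > 1}

repeats = find_short_repeats("ACGTACGTTTACGT", 4)
# "ACGT" occurs at offsets 0, 4, and 10; "TACG" at offsets 3 and 9
```

    A single pass over each genome builds the index, so the cost is linear in sequence length; cross-species comparison would then intersect the repeat sets of several such indexes.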

  14. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the used database and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper is the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrent to this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.

  15. Mugshot Identification Database (MID)

    National Institute of Standards and Technology Data Gateway

    NIST Mugshot Identification Database (MID) (Web, free access)   NIST Special Database 18 is being distributed for use in development and testing of automated mugshot identification systems. The database consists of three CD-ROMs, containing a total of 3248 images of variable size using lossless compression. A newer version of the compression/decompression software on the CD-ROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  16. Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model

    DTIC Science & Technology

    2014-04-01

    time savings in the Java coding development process. Appendices A and B describe address setup procedures for installing the MySQL database...development environment is required: • The open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC...compliant DBMS • MySQL JDBC Driver library that comes as a plug-in with the Netbeans distribution • The latest Java Development Kit with the latest

  17. Does Reimportation Reduce Price Differences for Prescription Drugs? Lessons from the European Union

    PubMed Central

    Kyle, Margaret K; Allsbrook, Jennifer S; Schulman, Kevin A

    2008-01-01

    Objective To examine the effect of parallel trade on patterns of price dispersion for prescription drugs in the European Union. Data Sources Longitudinal data from an IMS Midas database of prices and units sold for drugs in 36 categories in 30 countries from 1993 through 2004. Study Design The main outcome measures were mean price differentials and other measures of price dispersion within European Union countries compared with within non-European Union countries. Data Collection/Extraction Methods We identified drugs subject to parallel trade using information provided by IMS and by checking membership lists of parallel import trade associations and lists of approved parallel imports. Principal Findings Parallel trade was not associated with substantial reductions in price dispersion in European Union countries. In descriptive and regression analyses, about half of the price differentials exceeded 50 percent in both European Union and non-European Union countries over time, and price distributions among European Union countries did not show a dramatic change concurrent with the adoption of parallel trade. In regression analysis, we found that although price differentials decreased after 1995 in most countries, they decreased less in the European Union than elsewhere. Conclusions Parallel trade for prescription drugs does not automatically reduce international price differences. Future research should explore how other regulatory schemes might lead to different results elsewhere. PMID:18355258

  18. Impact of tumour bed boost integration on acute and late toxicity in patients with breast cancer: A systematic review.

    PubMed

    Hamilton, Daniel George; Bale, Rebecca; Jones, Claire; Fitzgerald, Emma; Khor, Richard; Knight, Kellie; Wasiak, Jason

    2016-06-01

    The purpose of this systematic review was to summarise the evidence from studies investigating the integration of tumour bed boosts into whole breast irradiation for patients with Stage 0-III breast cancer, with a focus on its impact on acute and late toxicities. A comprehensive systematic electronic search through the Ovid MEDLINE, EMBASE and PubMed databases from January 2000 to January 2015 was conducted. Studies were considered eligible if they investigated the efficacy of hypo- or normofractionated whole breast irradiation with the inclusion of a daily concurrent boost. The primary outcomes of interest were the degree of observed acute and late toxicity following radiotherapy treatment. Methodological quality assessment was performed on all included studies using either the Newcastle-Ottawa Scale or a previously published investigator-derived quality instrument. The search identified 35 articles, of which 17 satisfied our eligibility criteria. Thirteen and eleven studies reported on acute and late toxicities respectively. Grade 3 acute skin toxicity ranged from 1 to 7% whilst moderate to severe fibrosis and telangiectasia were both limited to 9%. Reported toxicity profiles were comparable to historical data at similar time-points. Studies investigating the delivery of concurrent boosts with whole breast radiotherapy courses report safe short- to medium-term toxicity profiles and cosmesis rates. Whilst the quality of evidence and length of follow-up supporting these findings are limited, sufficient evidence has been generated to consider concurrent boost techniques as an alternative to conventional sequential techniques. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Spine Surgery Outcomes in Elderly Patients Versus General Adult Patients in the United States: A MarketScan Analysis.

    PubMed

    Lagman, Carlito; Ugiliweneza, Beatrice; Boakye, Maxwell; Drazin, Doniel

    2017-07-01

    To compare spine surgery outcomes in elderly patients (80-103 years old) versus general adult patients (18-79 years old) in the United States. Truven Health Analytics MarketScan Research Databases (2000-2012) were queried. Patients with a diagnosis of degenerative disease of the spine without concurrent spinal stenosis, spinal stenosis without concurrent degenerative disease, or degenerative disease with concurrent spinal stenosis and who had undergone decompression without fusion, fusion without decompression, or decompression with fusion procedures were included. Indirect outcome measures included length of stay, in-hospital mortality, in-hospital and 30-day complications, and discharge disposition. Patients (N = 155,720) were divided into elderly (n = 10,232; 6.57%) and general adult (n = 145,488; 93.4%) populations. Mean length of stay was longer in elderly patients versus general adult patients (3.62 days vs. 3.11 days; P < 0.0001). In-hospital mortality was more common in elderly patients versus general adult patients (0.31% vs. 0.06%; P < 0.0001). In-hospital and 30-day complications were more common in elderly patients versus general adult patients (11.3% vs. 7.15% and 17.8% vs. 12.6%; P < 0.0001). Nonroutine discharge was more common in elderly patients versus general adult patients (33.7% vs. 16.2%; P < 0.0001). Our results revealed significantly longer hospital stays, higher in-hospital mortality, and more in-hospital and 30-day complications after decompression without fusion, fusion without decompression, or decompression with fusion procedures in elderly patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Manufacturing process and material selection in concurrent collaborative design of MEMS devices

    NASA Astrophysics Data System (ADS)

    Zha, Xuan F.; Du, H.

    2003-09-01

    In this paper we present a knowledge-intensive approach and system for selecting suitable manufacturing processes and materials for microelectromechanical systems (MEMS) devices in a concurrent collaborative design environment. In the paper, fundamental issues in MEMS manufacturing process and material selection, such as the concurrent design framework, manufacturing process and material hierarchies, and selection strategy, are first addressed. Then, a fuzzy decision support scheme for a multi-criteria decision-making problem is proposed for estimating, ranking and selecting possible manufacturing processes, materials and their combinations. A Web-based prototype advisory system for MEMS manufacturing process and material selection, WebMEMS-MASS, is developed based on the client-knowledge server architecture and framework to help the designer find good processes and materials for MEMS devices. The system, as one of the important parts of an advanced simulation and modeling tool for MEMS design, is a concept-level process and material selection tool, which can be used as a standalone application or a Java applet via the Web. The running sessions of the system are inter-linked with webpages of tutorials and reference pages to explain the facets, fabrication processes and material choices, and calculations and reasoning in selection are performed using process capability and material property data from a remote Web-based database and interactive knowledge base that can be maintained and updated via the Internet. The use of the developed system, including an operation scenario, user support, and integration with a MEMS collaborative design system, is presented. Finally, an illustrative example is provided.
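
    The fuzzy multi-criteria ranking step can be illustrated with a minimal sketch. The paper's exact scheme is not reproduced here; the triangular fuzzy ratings, centroid defuzzification, criteria, weights, and candidate names below are all assumptions chosen for the example.

```python
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (a, b, c) by its centroid."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def fuzzy_score(ratings, weights):
    """Weighted aggregate of one triangular fuzzy rating per criterion."""
    return sum(w * centroid(r) for r, w in zip(ratings, weights)) / sum(weights)

# Hypothetical candidate processes rated on (cost, resolution, throughput),
# each rating a triangular fuzzy number on a 0-10 scale.
candidates = {
    "bulk micromachining":    [(6, 7, 8), (4, 5, 6), (5, 6, 7)],
    "surface micromachining": [(4, 5, 6), (7, 8, 9), (6, 7, 8)],
}
weights = [0.3, 0.5, 0.2]  # criterion importances (hypothetical)
ranked = sorted(candidates, key=lambda c: fuzzy_score(candidates[c], weights),
                reverse=True)
# ranked lists the candidate processes from best to worst aggregate score
```

    Fuzzy ratings let designers express uncertain process-capability judgments ("roughly 7 out of 10") while still producing a crisp ranking for selection.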

  1. Effects of long-term AA attendance and spirituality on the course of depressive symptoms in individuals with alcohol use disorder.

    PubMed

    Wilcox, Claire E; Pearson, Matthew R; Tonigan, J Scott

    2015-06-01

    Alcohol use disorder (AUD) is associated with depression. Although attendance at Alcoholics Anonymous (AA) meetings predicts reductions in drinking, results have been mixed about the salutary effects of AA on reducing depressive symptoms. In this single-group study, early AA affiliates (n = 253) were recruited, consented, and assessed at baseline, 3, 6, 9, 12, 18, and 24 months. Lagged growth models were used to investigate the predictive effect of AA attendance on depression, controlling for concurrent drinking and treatment attendance. Depression was measured using the Beck Depression Inventory (BDI), which was administered at baseline, 3, 6, 12, 18, and 24 months. Additional predictors of depression tested included spiritual gains (Religious Background and Behavior questionnaire [RBB]) and completion of 12-step work (Alcoholics Anonymous Inventory [AAI]). Eighty-five percent of the original sample provided follow-up data at 24 months. Overall, depression decreased over the 24-month follow-up period. AA attendance predicted later reductions in depression (slope = -3.40, p = .01) even after controlling for concurrent drinking and formal treatment attendance. Finally, increased spiritual gains (RBB) also predicted later reductions in depression (slope = -0.10, p = .02) after controlling for concurrent drinking, treatment, and AA attendance. In summary, reductions in alcohol consumption partially explained decreases in depression in this sample of early AA affiliates, and other factors such as AA attendance and increased spiritual practices also accounted for reductions in depression beyond that explained by drinking. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  2. Diagnostic cost groups (DCGs) and concurrent utilization among patients with substance abuse disorders.

    PubMed

    Rosen, Amy K; Loveland, Susan A; Anderson, Jennifer J; Hankin, Cheryl S; Breckenridge, James N; Berlowitz, Dan R

    2002-08-01

    To assess the performance of Diagnostic Cost Groups (DCGs) in explaining variation in concurrent utilization for a defined subgroup, patients with substance abuse (SA) disorders, within the Department of Veterans Affairs (VA). A 60 percent random sample of veterans who used health care services during Fiscal Year (FY) 1997 was obtained from VA administrative databases. Patients with SA disorders (13.3 percent) were identified from primary and secondary ICD-9-CM diagnosis codes. Concurrent risk adjustment models were fitted and tested using the DCG/HCC model. Three outcome measures were defined: (1) "service days" (the sum of a patient's inpatient and outpatient visit days), (2) mental health/substance abuse (MH/SA) service days, and (3) ambulatory provider encounters. To improve model performance, we ran three DCG/HCC models with additional indicators for patients with SA disorders. To create a single file of veterans who used health care services in FY 1997, we merged records from all VA inpatient and outpatient files. Adding indicators for patients with mild/moderate SA disorders did not appreciably improve the R-squares for any of the outcome measures. When indicators were added for patients with severe SA who were in the most costly category, the explanatory ability of the models was modestly improved for all three outcomes. Modifying the DCG/HCC model with additional markers for SA modestly improved homogeneity and model prediction. Because considerable variation still remained after modeling, we conclude that health care systems should evaluate "off-the-shelf" risk adjustment systems before applying them to their own populations.

  3. Diagnostics Cost Groups and Concurrent Utilization among Patients

    PubMed Central

    Rosen, Amy K; Loveland, Susan A; Anderson, Jennifer J; Hankin, Cheryl S; Breckenridge, James N; Berlowitz, Dan R

    2002-01-01

    Objective To assess the performance of Diagnostic Cost Groups (DCGs) in explaining variation in concurrent utilization for a defined subgroup, patients with substance abuse (SA) disorders, within the Department of Veterans Affairs (VA). Data Sources A 60 percent random sample of veterans who used health care services during Fiscal Year (FY) 1997 was obtained from VA administrative databases. Patients with SA disorders (13.3 percent) were identified from primary and secondary ICD-9-CM diagnosis codes. Study Design Concurrent risk adjustment models were fitted and tested using the DCG/HCC model. Three outcome measures were defined: (1) “service days” (the sum of a patient's inpatient and outpatient visit days), (2) mental health/substance abuse (MH/SA) service days, and (3) ambulatory provider encounters. To improve model performance, we ran three DCG/HCC models with additional indicators for patients with SA disorders. Data Collection To create a single file of veterans who used health care services in FY 1997, we merged records from all VA inpatient and outpatient files. Principal Findings Adding indicators for patients with mild/moderate SA disorders did not appreciably improve the R-squares for any of the outcome measures. When indicators were added for patients with severe SA who were in the most costly category, the explanatory ability of the models was modestly improved for all three outcomes. Conclusions Modifying the DCG/HCC model with additional markers for SA modestly improved homogeneity and model prediction. Because considerable variation still remained after modeling, we conclude that health care systems should evaluate “off-the-shelf” risk adjustment systems before applying them to their own populations. PMID:12236385

  4. Erectile dysfunction--an observable marker of diabetes mellitus? A large national epidemiological study.

    PubMed

    Sun, Peter; Cameron, Ann; Seftel, Allen; Shabsigh, Ridwan; Niederberger, Craig; Guay, Andre

    2006-09-01

    We examined whether men with erectile dysfunction are more likely to have diabetes mellitus than men without erectile dysfunction, and whether erectile dysfunction can be used as an observable early marker of diabetes mellitus. Using a nationally representative managed care claims database from 51 health plans and 28 million members in the United States, we conducted a retrospective cohort study to compare the prevalence rates of diabetes mellitus between men with erectile dysfunction (285,436) and men without erectile dysfunction (1,584,230) during 1995 to 2001. Logistic regression models were used to isolate the effect of erectile dysfunction on the likelihood of having diabetes mellitus with adjustment for age, region and 7 concurrent diseases. The diabetes mellitus prevalence rates were 20.0% in men with erectile dysfunction and 7.5% in men without erectile dysfunction. With adjustment for age, region and concurrent diseases, the odds ratio of having diabetes mellitus between men with erectile dysfunction and without erectile dysfunction was 1.60 (p <0.0001). With adjustment for regions and concurrent diseases, the age specific odds ratios ranged from 2.94 (p <0.0001, age 26 to 35) to 1.05 (p = 0.1717, age 76 to 85). Men with erectile dysfunction were more than twice as likely to have diabetes mellitus as men without erectile dysfunction. Erectile dysfunction is an observable marker of diabetes mellitus, strongly so for men 45 years old or younger and likely for men 46 to 65 years old, but it is not a marker for men older than 66 years.
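
    As a worked check on the reported figures, the crude (unadjusted) odds ratio implied by the two prevalences can be computed directly; it is larger than the paper's odds ratio of 1.60 because the latter is adjusted for age, region, and 7 concurrent diseases.

```python
def odds_ratio(p_exposed, p_unexposed):
    """Crude odds ratio from two prevalence proportions."""
    odds_exposed = p_exposed / (1 - p_exposed)
    odds_unexposed = p_unexposed / (1 - p_unexposed)
    return odds_exposed / odds_unexposed

# Prevalences reported in the abstract: 20.0% with ED, 7.5% without ED.
crude_or = odds_ratio(0.20, 0.075)
# crude_or ≈ 3.08, versus the adjusted odds ratio of 1.60
```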

  5. DNApod: DNA polymorphism annotation database from next-generation sequence read archives.

    PubMed

    Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu

    2017-01-01

    With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information.

  6. DNApod: DNA polymorphism annotation database from next-generation sequence read archives

    PubMed Central

    Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu

    2017-01-01

    With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information. PMID:28234924

  7. Association Between Use of Non–Vitamin K Oral Anticoagulants With and Without Concurrent Medications and Risk of Major Bleeding in Nonvalvular Atrial Fibrillation

    PubMed Central

    Chang, Shang-Hung; Chou, I-Jun; Yeh, Yung-Hsin; Chiou, Meng-Jiun; Wen, Ming-Shien; Kuo, Chi-Tai; See, Lai-Chu

    2017-01-01

    Importance Non–vitamin K oral anticoagulants (NOACs) are commonly prescribed with other medications that share metabolic pathways that may increase major bleeding risk. Objective To assess the association between use of NOACs with and without concurrent medications and risk of major bleeding in patients with nonvalvular atrial fibrillation. Design, Setting, and Participants Retrospective cohort study using data from the Taiwan National Health Insurance database and including 91 330 patients with nonvalvular atrial fibrillation who received at least 1 NOAC prescription of dabigatran, rivaroxaban, or apixaban from January 1, 2012, through December 31, 2016, with final follow-up on December 31, 2016. Exposures NOAC with or without concurrent use of atorvastatin; digoxin; verapamil; diltiazem; amiodarone; fluconazole; ketoconazole, itraconazole, voriconazole, or posaconazole; cyclosporine; erythromycin or clarithromycin; dronedarone; rifampin; or phenytoin. Main Outcomes and Measures Major bleeding, defined as hospitalization or emergency department visit with a primary diagnosis of intracranial hemorrhage or gastrointestinal, urogenital, or other bleeding. Adjusted incidence rate differences between person-quarters (exposure time for each person during each quarter of the calendar year) of NOAC with or without concurrent medications were estimated using Poisson regression and inverse probability of treatment weighting using the propensity score. Results Among 91 330 patients with nonvalvular atrial fibrillation (mean age, 74.7 years [SD, 10.8]; men, 55.8%; NOAC exposure: dabigatran, 45 347 patients; rivaroxaban, 54 006 patients; and apixaban, 12 886 patients), 4770 major bleeding events occurred during 447 037 person-quarters with NOAC prescriptions. The most common medications co-prescribed with NOACs over all person-quarters were atorvastatin (27.6%), diltiazem (22.7%), digoxin (22.5%), and amiodarone (21.1%). 
    Concurrent use of amiodarone, fluconazole, rifampin, or phenytoin with NOACs was associated with significantly higher adjusted incidence rates of major bleeding per 1000 person-years than NOAC use alone: 38.09 for NOAC use alone vs 52.04 for amiodarone (difference, 13.94 [99% CI, 9.76-18.13]); 102.77 for NOAC use alone vs 241.92 for fluconazole (difference, 138.46 [99% CI, 80.96-195.97]); 65.66 for NOAC use alone vs 103.14 for rifampin (difference, 36.90 [99% CI, 1.59-72.22]); and 56.07 for NOAC use alone vs 108.52 for phenytoin (difference, 52.31 [99% CI, 32.18-72.44]; P < .01 for all comparisons). Compared with NOAC use alone, the adjusted incidence rate for major bleeding was significantly lower for concurrent use of atorvastatin, digoxin, and erythromycin or clarithromycin and was not significantly different for concurrent use of verapamil; diltiazem; cyclosporine; ketoconazole, itraconazole, voriconazole, or posaconazole; and dronedarone. Conclusions and Relevance Among patients taking NOACs for nonvalvular atrial fibrillation, concurrent use of amiodarone, fluconazole, rifampin, or phenytoin, compared with the use of NOACs alone, was associated with increased risk of major bleeding. Physicians prescribing NOAC medications should consider the potential risks associated with concomitant use of other drugs. PMID:28973247

  8. Database Creation and Statistical Analysis: Finding Connections Between Two or More Secondary Storage Device

    DTIC Science & Technology

    2017-09-01

    NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA. THESIS: DATABASE CREATION AND STATISTICAL ANALYSIS: FINDING CONNECTIONS BETWEEN TWO OR MORE SECONDARY... Approved for public release. Distribution is unlimited. 1.1 Problem and Motivation; 1.2 DOD Applicability; 1.3 Research

  9. Checkpointing and Recovery in Distributed and Database Systems

    ERIC Educational Resources Information Center

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…

  10. Library Micro-Computing, Vol. 1. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 18 articles pertaining to library microcomputing appear in this collection, the first of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1) an integrated library…

  11. An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning

    PubMed Central

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-01-01

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197
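
    The kurtosis-based model choice described above can be sketched as follows. This is an illustrative reading of the method, assuming Fisher (excess) kurtosis and a hypothetical threshold, not the paper's exact algorithm or parameters.

```python
import numpy as np
from scipy.stats import kurtosis

def choose_rss_model(rss, threshold=-1.0):
    """Pick a distribution model for the RSS samples at one reference point.

    Two well-separated peaks drive excess kurtosis strongly negative
    (toward -2 in the limit), while a single Gaussian gives roughly 0,
    so a strongly negative value suggests the double-peak model.
    """
    k = kurtosis(rss)  # Fisher definition: 0 for a normal distribution
    return "double-peak" if k < threshold else "gaussian"

rng = np.random.default_rng(0)
unimodal = rng.normal(-70.0, 3.0, 1000)                  # stable RSS (dBm)
bimodal = np.concatenate([rng.normal(-60.0, 2.0, 500),   # e.g. door open
                          rng.normal(-80.0, 2.0, 500)])  # vs. door closed
```

    In the paper's scheme, reference points classified as double-peak would then be fitted with the improved double-peak Gaussian; only the model choice is sketched here.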

  12. An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning.

    PubMed

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-08-21

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase.

  13. An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations

    NASA Technical Reports Server (NTRS)

    Singh, Jatinder; Taylor, Stephen

    1997-01-01

This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization for the inviscid convective term is accomplished using an upwind scheme. A localized reconstruction of the flow variables is performed, which is second-order accurate. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method which has second-order temporal accuracy. This is adapted for concurrent execution using another proven methodology based on concurrent graph abstraction. The solver operates on heterogeneous network architectures, which may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors, and distributed-memory multi-computers. The unstructured grid is generated using commercial grid generation tools. The grid is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques, both to balance load and communication requirements and to deal with differing memory constraints. These ideas are again based on heat diffusion. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant-section wing at subsonic, transonic, and supersonic conditions. These results are compared with experimental data and numerical results of other researchers. Performance studies are under way for a variety of network topologies.

  14. Privacy-Aware Location Database Service for Granular Queries

    NASA Astrophysics Data System (ADS)

    Kiyomoto, Shinsaku; Martin, Keith M.; Fukushima, Kazuhide

    Future mobile markets are expected to increasingly embrace location-based services. This paper presents a new system architecture for location-based services, which consists of a location database and distributed location anonymizers. The service is privacy-aware in the sense that the location database always maintains a degree of anonymity. The location database service permits three different levels of query and can thus be used to implement a wide range of location-based services. Furthermore, the architecture is scalable and employs simple functions that are similar to those found in general database systems.

  15. Maintaining Consistency in Distributed Systems

    DTIC Science & Technology

    1991-11-01

type of concurrency is readily controlled using synchronization tools such as monitors or semaphores, which are a standard part of most threads...suggested that these issues are often best solved using traditional synchronization constructs, such as monitors and semaphores, and that...data structures would normally arise within individual programs, and be controlled using mutual exclusion constructs, such as semaphores and monitors

  16. MELD: A Logical Approach to Distributed and Parallel Programming

    DTIC Science & Technology

    2012-03-01

AUTHOR(S): Seth Copen Goldstein; Flavio Cruz ...Comp. Sci., vol. 50, pp. 1–102, 1987. [33] P. López, F. Pfenning, J. Polakow, and K. Watkins, "Monadic concurrent linear logic programming," in...

  17. Translating Data into Action: A Data Team Model as the Seed of Comprehensive District Change

    ERIC Educational Resources Information Center

    Ruffner, Karen Blake

    2010-01-01

    Educational reform is not easy. As school leaders search for a format that leads to improvement on many fronts concurrently, data teams is one such promising practice. The data team design not only involves sensemaking of data as evidence of effective teaching and learning, but also builds a professional learning community, distributes leadership,…

  18. Tighter monogamy relations in multiqubit systems

    NASA Astrophysics Data System (ADS)

    Jin, Zhi-Xiang; Li, Jun; Li, Tao; Fei, Shao-Ming

    2018-03-01

    Monogamy relations characterize the distributions of entanglement in multipartite systems. We investigate monogamy relations related to the concurrence C , the entanglement of formation E , negativity Nc, and Tsallis-q entanglement Tq. Monogamy relations for the α th power of entanglement have been derived, which are tighter than the existing entanglement monogamy relations for some classes of quantum states. Detailed examples are presented.
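As an illustration of the kind of relation such results tighten (not the specific bounds derived in this paper), the standard Coffman-Kundu-Wootters monogamy inequality for the squared concurrence of a three-qubit state, and the αth-power form it generalizes to for α ≥ 2, read:

```latex
% CKW monogamy inequality for the squared concurrence
C^{2}_{A|BC} \;\ge\; C^{2}_{AB} + C^{2}_{AC}
% and its power-extended form, valid for \alpha \ge 2
C^{\alpha}_{A|BC} \;\ge\; C^{\alpha}_{AB} + C^{\alpha}_{AC}
```

Tighter relations of this type replace the right-hand side with a larger expression that still lower-bounds the left-hand side for the stated classes of states.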

  19. Preattentive representation of feature conjunctions for concurrent spatially distributed auditory objects.

    PubMed

    Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István

    2005-09-01

    The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of detecting acoustic deviance, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of a very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.

  20. Saguaro: a distributed operating system based on pools of servers. Annual report, 1 January 1984-31 December 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, G.R.

    1986-03-03

Prototypes of components of the Saguaro distributed operating system were implemented and the design of the entire system refined based on the experience. The philosophy behind Saguaro is to support the illusion of a single virtual machine while taking advantage of the concurrency and robustness that are possible in a network architecture. Within the system, these advantages are realized by the use of pools of server processes and decentralized allocation protocols. Potential concurrency and robustness are also made available to the user through low-cost mechanisms to control placement of executing commands and files, and to support semi-transparent file replication and access. Another unique aspect of Saguaro is its extensive use of a type system to describe user data such as files and to specify the types of arguments to commands and procedures. This enables the system to assist in type checking and leads to a user interface in which command-specific templates are available to facilitate command invocation. A mechanism, channels, is also provided to enable users to construct applications containing general graphs of communicating processes.

  1. Nuclear Forensics Analysis with Missing and Uncertain Data

    DOE PAGES

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    2015-10-05

We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
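The core imputation idea (drawing each missing value from a distribution built from the observed data for that property, then emitting multiple completed instances of the table) can be sketched roughly as follows. This is a simplified illustration of the general technique, not the actual MCBDG implementation, which also models uncertainty through an error distribution:

```python
import numpy as np

def generate_completed_databases(table, n_instances=5, rng=None):
    """For each column, draw every missing entry (NaN) from the empirical
    distribution of that column's observed entries, producing n_instances
    independently completed copies of the table."""
    rng = rng or np.random.default_rng()
    completed = []
    for _ in range(n_instances):
        filled = table.copy()
        for j in range(table.shape[1]):
            col = table[:, j]
            missing = np.isnan(col)
            observed = col[~missing]
            # sample replacements from the observed values of this property
            filled[missing, j] = rng.choice(observed, size=missing.sum())
        completed.append(filled)
    return completed

data = np.array([[1.0, 10.0],
                 [2.0, np.nan],
                 [np.nan, 30.0],
                 [4.0, 40.0]])
copies = generate_completed_databases(data, n_instances=3, rng=np.random.default_rng(1))
print(len(copies), np.isnan(copies[0]).any())  # 3 False
```

Each completed copy can then be fed to the downstream learner, so the variability across copies reflects the uncertainty introduced by the missing entries.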

  2. A multidisciplinary database for global distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, P.J.

The issue of selenium toxicity in the environment has been documented in the scientific literature for over 50 years. Recent studies reveal a complex connection between selenium and human and animal populations. This article introduces a bibliographic citation database on selenium in the environment developed for global distribution via the Internet by the University of Wyoming Libraries. The database incorporates material from commercial sources, print abstracts, indexes, and U.S. government literature, resulting in a multidisciplinary resource. Relevant disciplines include biology, medicine, veterinary science, botany, chemistry, geology, pollution, aquatic sciences, ecology, and others. It covers the years 1985-1996 for most subject material, with additional years being added as resources permit.

  3. Antarctic ice sheet thickness estimation using the horizontal-to-vertical spectral ratio method with single-station seismic ambient noise

    NASA Astrophysics Data System (ADS)

    Yan, Peng; Li, Zhiwei; Li, Fei; Yang, Yuande; Hao, Weifeng; Bao, Feng

    2018-03-01

We report on a successful application of the horizontal-to-vertical spectral ratio (H / V) method, generally used to investigate the subsurface velocity structures of the shallow crust, to estimate the Antarctic ice sheet thickness for the first time. Using three-component, five-day long, seismic ambient noise records gathered from more than 60 temporary seismic stations located on the Antarctic ice sheet, the ice thickness measured at each station has comparable accuracy to the Bedmap2 database. Preliminary analysis revealed that 60 out of 65 seismic stations on the ice sheet obtained clear peak frequencies (f0) related to the ice sheet thickness in the H / V spectrum. Thus, assuming that an isotropic ice layer lies atop a high-velocity half-space bedrock, the ice sheet thickness can be calculated by a simple approximation formula. About half of the calculated ice sheet thicknesses were consistent with the Bedmap2 ice thickness values. To further improve the reliability of the ice thickness measurements, two types of models were built to fit the observed H / V spectrum through non-linear inversion. The two model types represent the isotropic structures of single- and two-layer ice sheets, the latter depicting the non-uniform, layered characteristics of the ice sheet widely distributed in Antarctica. The inversion results suggest that the ice thicknesses derived from the two-layer ice models were in good agreement with the Bedmap2 ice thickness database, and that ice thickness differences between the two were within 300 m at almost all stations. Our results support previous findings that the Antarctic ice sheet is stratified. Extensive data processing indicates that the time length of seismic ambient noise records can be shortened to two hours for reliable ice sheet thickness estimation using the H / V method. This study extends the application fields of the H / V method and provides an effective and independent way to measure ice sheet thickness in Antarctica.
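The "simple approximation formula" mentioned in the abstract is, in the usual single-layer H / V setting, the quarter-wavelength resonance relation f0 = vs / (4h). A minimal sketch, where the assumed ice shear-wave velocity is illustrative rather than a value taken from the study:

```python
def ice_thickness_from_hv_peak(f0_hz, vs_ice=1900.0):
    """Quarter-wavelength approximation: a layer of shear-wave velocity vs
    over a high-velocity half-space resonates at f0 = vs / (4 h), so the
    layer thickness is h = vs / (4 f0). vs_ice ~1900 m/s is a typical shear
    velocity assumed here for glacial ice (not a value from the paper)."""
    return vs_ice / (4.0 * f0_hz)

# e.g. a 0.25 Hz H/V peak implies roughly 1.9 km of ice under these assumptions
print(ice_thickness_from_hv_peak(0.25))  # 1900.0
```

The non-linear inversion of the full H / V spectrum described in the abstract refines this single-number estimate, which is why the two-layer models can resolve the stratified structure the simple formula cannot.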

  4. Pushing typists back on the learning curve: Memory chunking improves retrieval of prior typing episodes.

    PubMed

    Yamaguchi, Motonori; Randle, James M; Wilson, Thomas L; Logan, Gordon D

    2017-09-01

    Hierarchical control of skilled performance depends on chunking of several lower-level units into a single higher-level unit. The present study examined the relationship between chunking and recognition of trained materials in the context of typewriting. In 3 experiments, participants were trained with typing nonwords and were later tested on their recognition of the trained materials. In Experiment 1, participants typed the same words or nonwords in 5 consecutive trials while performing a concurrent memory task. In Experiment 2, participants typed the materials with lags between repetitions without a concurrent memory task. In both experiments, recognition of typing materials was associated with better chunking of the materials. Experiment 3 used the remember-know procedure to test the recollection and familiarity components of recognition. Remember judgments were associated with better chunking than know judgments or nonrecognition. These results indicate that chunking is associated with explicit recollection of prior typing episodes. The relevance of the existing memory models to chunking in typewriting was considered, and it is proposed that memory chunking improves retrieval of trained typing materials by integrating contextual cues into the memory traces. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs.

    PubMed

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-05-01

    In this paper, we present a graph-based concurrent brain tumor segmentation and atlas to diseased patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed to the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min marginal energies. State of the art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. A critical review of complementary and alternative medicine use among people with arthritis: a focus upon prevalence, cost, user profiles, motivation, decision-making, perceived benefits and communication.

    PubMed

    Yang, Lu; Sibbritt, David; Adams, Jon

    2017-03-01

    A critical review of complementary and alternative medicine (CAM) use among people with arthritis was conducted focusing upon prevalence and profile of CAM users as well as their motivation, decision-making, perceived benefits and communication with healthcare providers. A comprehensive search of peer-reviewed literature published from 2008 to 2015 was undertaken via CINAHL, Medline and AMED databases. The initial search identified 4331 articles, of which 49 articles met selection criteria. The review shows a high prevalence of CAM use (often multiple types and concurrent to conventional medical care) among those with arthritis which is not restricted to any particular geographic or social-economic status. A large proportion of arthritis sufferers using CAM consider these medicines to be somewhat or very effective but almost half do not inform their healthcare provider about their CAM use. It is suggested that rheumatologists and others providing health care for patients with arthritis should be cognizant of the high prevalence of CAM use and the challenges associated with possible concurrent use of CAM and conventional medicine among their patients.

  7. An approach for access differentiation design in medical distributed applications built on databases.

    PubMed

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application poses new essential problems for software. In particular, protection tools that are sufficient separately become deficient during integration, due to specific additional links and relationships not considered formerly. For example, it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools if the object is stored as a record in DB tables. The solution of the problem should be found only within the more general application framework; appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model, as well as tools for its mapping to a DBMS, are suggested. Remote users connected via global networks are considered too.

  8. A Utility Maximizing and Privacy Preserving Approach for Protecting Kinship in Genomic Databases.

    PubMed

    Kale, Gulce; Ayday, Erman; Tastan, Oznur

    2017-09-12

Rapid and low-cost sequencing of genomes has enabled widespread use of genomic data in research studies and personalized customer applications, where genomic data is shared in public databases. Although the identities of the participants are anonymized in these databases, sensitive information about individuals can still be inferred. One such piece of information is kinship. We define two routes by which kinship privacy can leak and propose a technique to protect kinship privacy against these risks while maximizing the utility of shared data. The method involves systematic identification of minimal portions of genomic data to mask as new participants are added to the database. Choosing the proper positions to hide is cast as an optimization problem in which the number of positions to mask is minimized subject to privacy constraints that ensure the familial relationships are not revealed. We evaluate the proposed technique on real genomic data. Results indicate that concurrent sharing of data pertaining to a parent and an offspring results in high risks to kinship privacy, whereas sharing data from further relatives together is often safer. We also show that the arrival order of family members has a high impact on the level of privacy risk and on the utility of sharing data. Available at: https://github.com/tastanlab/Kinship-Privacy. erman@cs.bilkent.edu.tr or oznur.tastan@cs.bilkent.edu.tr. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  9. A Response Surface Methodology for Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Altus, Troy David; Sobieski, Jaroslaw (Technical Monitor)

    2002-01-01

The report describes a new method for optimization of engineering systems, such as aerospace vehicles, whose design must harmonize a number of subsystems and various physical phenomena, each represented by a separate computer code, e.g., aerodynamics, structures, propulsion, performance, etc. To represent the system internal couplings, the codes receive output from other codes as part of their inputs. The system analysis and optimization task is decomposed into subtasks that can be executed concurrently, each subtask conducted using local state and design variables and holding constant a set of the system-level design variables. The subtask results are stored in the form of Response Surfaces (RS), fitted in the space of the system-level variables, to be used as the subtask surrogates in a system-level optimization whose purpose is to optimize the system objective(s) and to reconcile the system internal couplings. By virtue of decomposition and execution concurrency, the method enables a broad workfront in the organization of an engineering project involving a number of specialty groups that might be geographically dispersed, and it exploits the contemporary computing technology of massively concurrent and distributed processing. The report includes a demonstration test case of a supersonic business jet design.

  10. Development and validation of a novel patient-reported treatment satisfaction measure for hyperfunctional facial lines: facial line satisfaction questionnaire.

    PubMed

    Pompilus, Farrah; Burgess, Somali; Hudgens, Stacie; Banderas, Benjamin; Daniels, Selena

    2015-12-01

    Facial lines or wrinkles are among the most visible signs of aging, and minimally invasive cosmetic procedures are becoming increasingly popular. The aim of this study was to develop and validate the Facial Line Satisfaction Questionnaire (FLSQ) for use in adults with upper facial lines (UFL). A literature review, concept elicitation interviews (n = 33), and cognitive debriefing interviews (n = 23) of adults with UFL were conducted to develop the FLSQ. The FLSQ comprises Baseline and Follow-up versions and was field-tested with 150 subjects in a US observational study designed to assess its psychometric performance. Analyses included acceptability (item and scale distribution [i.e. missingness, floor, and ceiling effects]), reliability, and validity (including concurrent validity). In total, 69 concepts were elicited during patient interviews. Following cognitive debriefing interviews, the FLSQ-Baseline version included 11 items and the Follow-up version included 13 items. Response rates for the FLSQ were 100% and 73% at baseline and follow-up, respectively; no items had excessive missing data. Questionnaire scale scores were normally distributed. Most domain scores demonstrated good internal consistency reliability (Cronbach's α ≥ 0.70). Most items within their respective domains exhibited good convergent (item-scale correlations > 0.40) and discriminant (items had higher correlation with their hypothesized scales than other scales) validity. Concurrent validity correlation coefficients of the FLSQ domain scores with the associated concurrent measures were acceptable (range: r = 0.40-0.70). Six FLSQ items demonstrated reliability and validity as stand-alone items outside their domains. The FLSQ is a valid questionnaire for assessing treatment expectations, satisfaction, impact, and preference in adults with UFL. © 2015 The Authors. Journal of Cosmetic Dermatology Published by Wiley Periodicals, Inc.

  11. Microdistribution of fluorescently-labeled monoclonal antibody in a peritoneal dissemination model of ovarian cancer

    NASA Astrophysics Data System (ADS)

    Kosaka, Nobuyuki; Ogawa, Mikako; Paik, David S.; Paik, Chang H.; Choyke, Peter L.; Kobayashi, Hisataka

    2010-02-01

The microdistribution of therapeutic monoclonal antibodies within a tumor is important for determining clinical response. Nonuniform microdistribution predicts therapy failure. Herein, we developed a semiquantitative method for measuring microdistribution of an antibody within a tumor using in situ fluorescence microscopy and sought to modulate the microdistribution by altering the route and timing of antibody dosing. The microdistribution of a fluorescently-labeled antibody, trastuzumab (50-μg and 150-μg intraperitoneal injection (i.p.), and 100-μg intravenous injection (i.v.)), was evaluated in a peritoneal dissemination mouse model of ovarian cancer. In addition, we evaluated the microdistribution of concurrently-injected (30-μg i.p. and 100-μg i.v.) or serial (two doses of 30-μg i.p.) trastuzumab using in situ multicolor fluorescence microscopy. After the administration of 50-μg i.p. and 100-μg i.v. trastuzumab, fluorescence imaging showed no significant difference in the central to peripheral signal ratio (C/P ratio) and demonstrated a peripheral-dominant accumulation, whereas administration of 150-μg i.p. trastuzumab showed relatively uniform, central-dominant accumulation. With concurrent i.p.-i.v. injections, trastuzumab showed a slightly higher C/P ratio than concurrently-injected i.p. trastuzumab. Moreover, in the serial injection study, the second injection of trastuzumab distributed more centrally than the first injection, while no difference was observed in the control group. Our results suggest that injection routes do not affect the microdistribution pattern of antibody in small peritoneal disseminations. However, increasing the dose results in a more uniform antibody distribution within peritoneal nodules. Furthermore, the serial i.p. injection of antibody can modify the microdistribution within tumor nodules. This work has implications for the optimal delivery of antibody based cancer therapies.

  12. A Web application for the management of clinical workflow in image-guided and adaptive proton therapy for prostate cancer treatments.

    PubMed

    Yeung, Daniel; Boes, Peter; Ho, Meng Wei; Li, Zuofeng

    2015-05-08

    Image-guided radiotherapy (IGRT), based on radiopaque markers placed in the prostate gland, was used for proton therapy of prostate patients. Orthogonal X-rays and the IBA Digital Image Positioning System (DIPS) were used for setup correction prior to treatment and were repeated after treatment delivery. Following a rationale for margin estimates similar to that of van Herk,(1) the daily post-treatment DIPS data were analyzed to determine if an adaptive radiotherapy plan was necessary. A Web application using ASP.NET MVC5, Entity Framework, and an SQL database was designed to automate this process. The designed features included state-of-the-art Web technologies, a domain model closely matching the workflow, a database-supporting concurrency and data mining, access to the DIPS database, secured user access and roles management, and graphing and analysis tools. The Model-View-Controller (MVC) paradigm allowed clean domain logic, unit testing, and extensibility. Client-side technologies, such as jQuery, jQuery Plug-ins, and Ajax, were adopted to achieve a rich user environment and fast response. Data models included patients, staff, treatment fields and records, correction vectors, DIPS images, and association logics. Data entry, analysis, workflow logics, and notifications were implemented. The system effectively modeled the clinical workflow and IGRT process.

  13. Technology and Its Use in Education: Present Roles and Future Prospects

    ERIC Educational Resources Information Center

    Courville, Keith

    2011-01-01

    (Purpose) This article describes two current trends in Educational Technology: distributed learning and electronic databases. (Findings) Topics addressed in this paper include: (1) distributed learning as a means of professional development; (2) distributed learning for content visualization; (3) usage of distributed learning for educational…

  14. MIPS: a database for protein sequences, homology data and yeast genome information.

    PubMed Central

    Mewes, H W; Albermann, K; Heumann, K; Liebl, S; Pfeiffer, F

    1997-01-01

The MIPS group (Martinsried Institute for Protein Sequences) at the Max-Planck-Institute for Biochemistry, Martinsried near Munich, Germany, collects, processes and distributes protein sequence data within the framework of the tripartite association of the PIR-International Protein Sequence Database. MIPS contributes nearly 50% of the data input to the PIR-International Protein Sequence Database. The database is distributed on CD-ROM together with PATCHX, an exhaustive supplement of unique, unverified protein sequences from external sources compiled by MIPS. Through its WWW server (http://www.mips.biochem.mpg.de/) MIPS permits internet access to sequence databases, homology data and to yeast genome information. (i) Sequence similarity results from the FASTA program are stored in the FASTA database for all proteins from PIR-International and PATCHX. The database is dynamically maintained and permits instant access to FASTA results. (ii) Starting with FASTA database queries, proteins have been classified into families and superfamilies (PROT-FAM). (iii) The HPT (hashed position tree) data structure developed at MIPS is a new approach for rapid sequence and pattern searching. (iv) MIPS provides access to the sequence and annotation of the complete yeast genome, the functional classification of yeast genes (FunCat) and its graphical display, the 'Genome Browser'. A CD-ROM based on the JAVA programming language providing dynamic interactive access to the yeast genome and the related protein sequences has been compiled and is available on request. PMID:9016498

15. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    National Institute of Standards and Technology Data Gateway

SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase)   This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  16. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
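The query-feedback idea (continuously re-fitting selectivity estimates from observed executions, e.g. by least squares) can be sketched as follows. The class name, the polynomial model, and the defaults are illustrative assumptions, not the prototype's actual design:

```python
import numpy as np

class FeedbackSelectivityEstimator:
    """Toy sketch of query feedback for selectivity estimation: after each
    query executes, the optimizer records (predicate value, observed
    selectivity) and re-fits a least-squares polynomial, so estimates track
    the actual value distribution instead of static off-line statistics."""

    def __init__(self, degree=2):
        self.degree = degree
        self.values, self.observed = [], []
        self.coeffs = None

    def feedback(self, value, selectivity):
        """Record one observed (value, selectivity) pair and refit the curve."""
        self.values.append(value)
        self.observed.append(selectivity)
        if len(self.values) > self.degree:              # enough points to fit
            self.coeffs = np.polyfit(self.values, self.observed, self.degree)

    def estimate(self, value, default=0.1):
        """Predict selectivity for a predicate value; fall back before feedback."""
        if self.coeffs is None:
            return default
        return float(np.clip(np.polyval(self.coeffs, value), 0.0, 1.0))

est = FeedbackSelectivityEstimator()
for v in range(0, 100, 10):            # feedback from executed queries,
    est.feedback(v, v / 100.0)         # where the true selectivity is v/100
print(round(est.estimate(55), 2))  # 0.55
```

Splines, as mentioned in the abstract, would replace the single global polynomial with piecewise fits, which behave better when the value distribution is irregular.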

  17. The StarLite Project: Prototyping Real-Time Software

    DTIC Science & Technology

    1991-10-01

    multiversion data objects using the prototyping environment. Section 5 concludes the paper. 2. Message-Based Simulation When prototyping distributed...phase locking and priority-based synchronization algorithms, and between a multiversion database and its corresponding single-version database, through...its deadline, since the transaction is only aborted in the validation phase. 4.5. A Multiversion Database System To illustrate the effectiveness of the

  18. Distributing the ERIC Database on SilverPlatter Compact Disc--A Brief Case History.

    ERIC Educational Resources Information Center

    Brandhorst, Ted

    This description of the development of the Education Resources Information Center (ERIC) compact disc by two companies, SilverPlatter and ORI, Inc., provides background information on ERIC and the ERIC database, discusses reasons for choosing to put the ERIC database on compact discs, and describes the formulation of an ERIC CD-ROM team as part of…

  19. Chesapeake Bay Program Water Quality Database

    EPA Pesticide Factsheets

    The Chesapeake Information Management System (CIMS), designed in 1996, is an integrated, accessible information management system for the Chesapeake Bay Region. CIMS is an organized, distributed library of information and software tools designed to increase basin-wide public access to Chesapeake Bay information. The information delivered by CIMS includes technical and public information, educational material, environmental indicators, policy documents, and scientific data. Through the use of relational databases, web-based programming, and web-based GIS a large number of Internet resources have been established. These resources include multiple distributed on-line databases, on-demand graphing and mapping of environmental data, and geographic searching tools for environmental information. Baseline monitoring data, summarized data and environmental indicators that document ecosystem status and trends, confirm linkages between water quality, habitat quality and abundance, and the distribution and integrity of biological populations are also available. One of the major features of the CIMS network is the Chesapeake Bay Program's Data Hub, providing users access to a suite of long-term water quality and living resources databases. Chesapeake Bay mainstem and tidal tributary water quality, benthic macroinvertebrates, toxics, plankton, and fluorescence data can be obtained for a network of over 800 monitoring stations.

  20. The phytophthora genome initiative database: informatics and analysis for distributed pathogenomic research.

    PubMed

    Waugh, M; Hraber, P; Weller, J; Wu, Y; Chen, G; Inman, J; Kiphart, D; Sobral, B

    2000-01-01

    The Phytophthora Genome Initiative (PGI) is a distributed collaboration to study the genome and evolution of a particularly destructive group of plant pathogenic oomycetes, with the goal of understanding the mechanisms of infection and resistance. NCGR provides informatics support for the collaboration as well as a centralized data repository. In the pilot phase of the project, several investigators prepared Phytophthora infestans and Phytophthora sojae EST and Phytophthora sojae BAC libraries and sent them to another laboratory for sequencing. Data from sequencing reactions were transferred to NCGR for analysis and curation. An analysis pipeline transforms raw data by performing simple analyses (e.g., vector removal and similarity searching) whose results are stored and can be retrieved by investigators using a web browser. Here we describe the database and access tools, provide an overview of the data therein, and outline future plans. This resource has provided a unique opportunity for the distributed, collaborative study of a genus from which relatively little sequence data are available. Results may lead to insight into how better to control these pathogens. The homepage of PGI can be accessed at http://www.ncgr.org/pgi, with database access through the database access hyperlink.

  1. Towards G2G: Systems of Technology Database Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Bell, David

    2005-01-01

    We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing and modifying components of technology database systems. G2G will interoperate information and knowledge distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include the creation of an integrated approach to sustain effective management of technology investments that supports the ability of various technology database systems to be independently managed. The integration technology will comply with emerging open standards. Applications can thus be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will use NASA's breakthrough in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database Systems.

  2. Comparison of Measurement And Modeling Of Current Profile Changes Due To Neutral Beam Ion Redistribution During TAE Avalanches in NSTX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darrow, Douglas

    Brief "avalanches" of toroidal Alfven eigenmodes (TAEs) are observed in NSTX plasmas with several different n numbers simultaneously present. These affect the neutral beam ion distribution, as evidenced by a concurrent drop in the neutron rate and, sometimes, beam ion loss. Guiding center orbit modeling has shown that the modes can transiently render portions of the beam ion phase space stochastic. The resulting redistribution of beam ions can also create a broader beam-driven current profile and produce other changes in the beam ion distribution function.

  3. Biokinetics and effects of titania nano-material after inhalation and i.v. injection

    NASA Astrophysics Data System (ADS)

    Landsiedel, Robert; Fabian, Eric; Ma-Hock, Lan; Wiench, Karin; van Ravenzwaay, Bennard

    2009-05-01

    Within the NanoSafe2 project we developed a special inhalation model to investigate the deposition of inhaled particles in the lung and their further distribution in the body. Concurrently, the effects of the inhaled materials in the lung were examined. The results for nano-Titania were compared to results from inhalation studies with micron-sized (non-nano) Titania particles and with quartz particles (DQ12, known to be potent lung toxicants). To build a PBPK model for nano-Titania, the tissue distribution of the material was also examined following intravenous (i.v.) administration.

  4. Quantum partial search for uneven distribution of multiple target items

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Korepin, Vladimir

    2018-06-01

    Quantum partial search is an approximate search algorithm. It aims to find a target block (the block that contains the target items), and it runs slightly faster than full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. Efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. Using a perturbation method, we find that the algorithm runs fastest when the target items are evenly distributed in the database.
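
For intuition about the query counting, here is a toy classical simulation of full Grover amplitude amplification (not the partial-search variant itself, which stops a few iterations earlier because it only needs to identify the block, not the item). The state vector is updated exactly: each oracle query phase-flips the target amplitudes, then the diffusion step inverts all amplitudes about their mean.

```python
# Toy exact simulation of Grover iterations; one oracle query per iteration.
import math

def grover_success(N, M, iterations):
    amp = [1 / math.sqrt(N)] * N          # uniform superposition over N items
    targets = range(M)                     # first M items are the targets
    for _ in range(iterations):
        for t in targets:                  # oracle: phase-flip targets
            amp[t] = -amp[t]
        mean = sum(amp) / N                # diffusion: invert about the mean
        amp = [2 * mean - a for a in amp]
    return sum(amp[t] ** 2 for t in targets)  # success probability

N, M = 1024, 4
optimal = round((math.pi / 4) * math.sqrt(N / M))  # ~ (pi/4) sqrt(N/M) queries
print(optimal, round(grover_success(N, M, optimal), 3))   # → 13 0.986
```

The simulation reproduces the standard result that roughly (π/4)√(N/M) queries suffice for near-certain success; partial search trades a small loss of precision (block rather than item) for a constant saving in queries.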

  5. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    The general inadequacy of Ada for programming systems that must survive processor loss was shown. A solution to the problem was proposed in which there are no syntactic changes to Ada. The approach was evaluated using a full-scale, realistic application. The application used was the Advanced Transport Operating System (ATOPS), an experimental computer control system developed for a modified Boeing 737 aircraft. The ATOPS system is a full-authority, real-time avionics system providing a large variety of advanced features. Methods of building fault tolerance into concurrent systems were explored. A set of criteria by which the proposed method will be judged was examined. Extensive interaction with personnel from Computer Sciences Corporation and NASA Langley occurred to determine the requirements of the ATOPS software. Backward error recovery in concurrent systems was assessed.

  6. Study on Big Database Construction and its Application of Sample Data Collected in CHINA'S First National Geographic Conditions Census Based on Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.

    2018-04-01

    In the project of China's First National Geographic Conditions Census, millions of sample data files were collected all over the country for interpreting land cover from remote sensing images; the number of data files exceeds 12,000,000 and has grown in the follow-on project of National Geographic Conditions Monitoring. At present, storing such big data in a database such as Oracle is the most effective approach, but a suitable method is still needed for the management and application of the sample data. This paper studies a database construction method based on a relational database combined with a distributed file system, in which the vector data and the file data are saved in different physical locations. The key issues and their solutions are discussed. On this basis, it studies application methods for the sample data and analyzes several use cases, which lay the foundation for the sample data's application. In particular, sample data located in Shaanxi province are selected to verify the method. At the same time, it takes the 10 first-level classes defined in the land cover classification system as examples and analyzes the spatial distribution and density characteristics of each kind of sample data. The results verify that the database construction method based on a relational database with a distributed file system is useful and applicable for searching, analyzing, and the wider application of sample data. Furthermore, sample data collected in the project of China's First National Geographic Conditions Census could be useful in Earth observation and land cover quality assessment.
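
The relational-database-plus-distributed-file-system split can be sketched as follows: metadata and vector attributes live in SQL tables, while the bulky file payloads stay on a file system and are referenced by path. This is a minimal illustration, not the project's schema; all table names, columns, and the `dfs://` path convention are invented.

```python
# Sketch: relational metadata with file-system pointers for bulky payloads.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sample (
        id INTEGER PRIMARY KEY,
        land_cover_class TEXT,      -- first-level class, e.g. 'cropland'
        lon REAL, lat REAL,
        photo_path TEXT             -- pointer into the distributed file system
    )""")
conn.execute(
    "INSERT INTO sample VALUES (?, ?, ?, ?, ?)",
    (1, "cropland", 108.9, 34.2, "dfs://samples/shaanxi/000001.jpg"),
)
row = conn.execute(
    "SELECT photo_path FROM sample WHERE land_cover_class = 'cropland'"
).fetchone()
print(row[0])   # the database answers the query; the file is fetched from the FS
```

The database stays small enough to index and query efficiently (spatial distribution, density by class), while the file system scales to millions of photo files.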

  7. An open access database for the evaluation of heart sound algorithms.

    PubMed

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  8. Kounis syndrome due to antibiotics: A global overview from pharmacovigilance databases.

    PubMed

    Renda, Francesca; Marotta, Elena; Landoni, Giovanni; Belletti, Alessandro; Cuconato, Virginia; Pani, Luca

    2016-12-01

    Kounis syndrome (KS) is characterized by the concurrent presence of anaphylactic and cardiac components. Available evidence suggests that antibiotics are frequently associated with KS. We therefore analyzed KS cases associated with antibiotic use from the two largest pharmacovigilance databases. Two pharmacovigilance databases, EudraVigilance and VigiLyze, were searched for cases reporting the adverse reaction "Kounis Syndrome" with antibiotics as the suspected active substance. We analyzed the period from December 1st, 2001 to February 16th, 2016. For the most reported active substance, the proportional reporting ratio (PRR) was calculated. A total of 10 cases of KS associated with antibiotic use were retrieved from the EudraVigilance database. Mean patient age was 58.2 years and 70% were male. The most frequently reported suspected antibiotic was the combination amoxicillin/clavulanic acid (four cases). The VigiLyze database reported 13 KS cases associated with antibiotics. Mean age was 56 years and 61% of patients were male. The most frequently reported antibiotic was again the combination amoxicillin/clavulanic acid (five cases). Seven duplicate cases were identified, leaving a total of 16 cases of KS, six of them associated with amoxicillin/clavulanic acid use. The PRR value for amoxicillin/clavulanic acid against other kinds of antibiotics was 2.62 considering EudraVigilance data and 1.61 considering VigiLyze data. This analysis provided a complete picture of the cases of KS associated with antibiotic use and identified a possible association between amoxicillin/clavulanic acid and KS. Since the number of cases is low, especially considering its wide use, further analyses are needed to confirm the association. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
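
The PRR used in the analysis is the standard disproportionality measure from a 2x2 contingency table: the event rate among reports of the suspect drug, divided by the event rate among reports of all other drugs. The counts below are illustrative, not the EudraVigilance or VigiLyze figures.

```python
# Proportional reporting ratio from a 2x2 pharmacovigilance contingency table.
def prr(a, b, c, d):
    """a: drug+event, b: drug+other events, c: other drugs+event, d: rest."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 6 KS reports among 1000 for the drug,
# 10 among 5000 for all other drugs.
print(round(prr(6, 994, 10, 4990), 2))   # → 3.0
```

A PRR above 1 means the event is reported disproportionately often with the suspect drug; values like the 2.62 reported here are signals for further analysis, not proof of causation.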

  9. Development of geotechnical analysis and design modules for the Virginia Department of Transportation's geotechnical database.

    DOT National Transportation Integrated Search

    2005-01-01

    In 2003, an Internet-based Geotechnical Database Management System (GDBMS) was developed for the Virginia Department of Transportation (VDOT) using distributed Geographic Information System (GIS) methodology for data management, archival, retrieval, ...

  10. DSSTox and Chemical Information Technologies in Support of PredictiveToxicology

    EPA Science Inventory

    The EPA NCCT Distributed Structure-Searchable Toxicity (DSSTox) Database project initially focused on the curation and publication of high-quality, standardized, chemical structure-annotated toxicity databases for use in structure-activity relationship (SAR) modeling. In recent y...

  11. A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories

    NASA Astrophysics Data System (ADS)

    Brown, Christa L.

    National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.

  12. Electron-Impact Ionization Cross Section Database

    National Institute of Standards and Technology Data Gateway

    SRD 107 Electron-Impact Ionization Cross Section Database (Web, free access)   This is a database primarily of total ionization cross sections of molecules by electron impact. The database also includes cross sections for a small number of atoms and energy distributions of ejected electrons for H, He, and H2. The cross sections were calculated using the Binary-Encounter-Bethe (BEB) model, which combines the Mott cross section with the high-incident energy behavior of the Bethe cross section. Selected experimental data are included.
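
For the reader's convenience, the per-orbital BEB cross section is reproduced here from the standard Kim-Rudd formulation (not quoted from the database documentation itself), for an orbital with binding energy B, orbital kinetic energy U, occupation number N, and incident energy T:

```latex
\sigma_{\mathrm{BEB}}(t) = \frac{S}{t+u+1}
\left[\frac{\ln t}{2}\left(1-\frac{1}{t^{2}}\right) + 1 - \frac{1}{t} - \frac{\ln t}{t+1}\right],
\qquad t=\frac{T}{B},\quad u=\frac{U}{B},\quad S=4\pi a_{0}^{2}\,N\left(\frac{R}{B}\right)^{2}
```

Here t and u are the incident and orbital kinetic energies in units of B, R is the Rydberg energy, and a_0 is the Bohr radius; the total ionization cross section is the sum of this expression over the occupied orbitals.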

  13. Distributed Access View Integrated Database (DAVID) system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project with the Small Business Innovative Research (SBIR) program is explained.

  14. Central Appalachian basin natural gas database: distribution, composition, and origin of natural gases

    USGS Publications Warehouse

    Román Colón, Yomayra A.; Ruppert, Leslie F.

    2015-01-01

    The U.S. Geological Survey (USGS) has compiled a database consisting of three worksheets of central Appalachian basin natural gas analyses and isotopic compositions from published and unpublished sources of 1,282 gas samples from Kentucky, Maryland, New York, Ohio, Pennsylvania, Tennessee, Virginia, and West Virginia. The database includes field and reservoir names, well and State identification number, selected geologic reservoir properties, and the composition of natural gases (methane; ethane; propane; butane, iso-butane [i-butane]; normal butane [n-butane]; iso-pentane [i-pentane]; normal pentane [n-pentane]; cyclohexane, and hexanes). In the first worksheet, location and American Petroleum Institute (API) numbers from public or published sources are provided for 1,231 of the 1,282 gas samples. A second worksheet of 186 gas samples was compiled from published sources and augmented with public location information and contains carbon, hydrogen, and nitrogen isotopic measurements of natural gas. The third worksheet is a key for all abbreviations in the database. The database can be used to better constrain the stratigraphic distribution, composition, and origin of natural gas in the central Appalachian basin.

  15. RAId_DbS: Peptide Identification using Database Searches with Realistic Statistics

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2007-01-01

    Background The key to mass-spectrometry-based proteomics is peptide identification. A major challenge in peptide identification is to obtain realistic E-values when assigning statistical significance to candidate peptides. Results Using a simple scoring scheme, we propose a database search method with theoretically characterized statistics. Taking into account possible skewness in the random variable distribution and the effect of finite sampling, we provide a theoretical derivation for the tail of the score distribution. For every experimental spectrum examined, we collect the scores of peptides in the database and find good agreement between the collected score statistics and our theoretical distribution. Using Student's t-tests, we quantify the degree of agreement between the theoretical distribution and the collected score statistics. The t-tests may be used to measure the reliability of the reported statistics. When combined with the P-value reported for a peptide hit under a score distribution model, this new measure prevents exaggerated statistics. Another feature of RAId_DbS is its capability of detecting multiple co-eluted peptides. The peptide identification performance and statistical accuracy of RAId_DbS are assessed and compared with several other search tools. The executables and data related to RAId_DbS are freely available upon request. PMID:17961253
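
The E-value logic can be illustrated in a few lines: collect the scores of all candidate database peptides for one spectrum, model the score tail, and report E = N * P(score >= s). A plain Gaussian tail is used below purely for illustration; RAId_DbS derives a skew- and finite-sampling-corrected tail instead, and the function names here are invented.

```python
# Illustrative E-value from a background score distribution (Gaussian tail
# assumed for simplicity; not the paper's corrected tail).
import math, random

def evalue(scores, s):
    n = len(scores)
    mu = sum(scores) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in scores) / (n - 1))
    p = 0.5 * math.erfc((s - mu) / (sd * math.sqrt(2)))  # P(X >= s)
    return n * p                                          # expected random hits

random.seed(0)
background = [random.gauss(10.0, 2.0) for _ in range(10000)]
print(evalue(background, 20.0) < 0.01)   # → True: a 5-sigma hit is significant
```

An E-value well below 1 means a score that high is unlikely to arise by chance among the N candidates; using the empirically collected scores (rather than an assumed family) is what keeps the reported statistics realistic.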

  16. Incentive-Based Voltage Regulation in Distribution Networks: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Xinyang; Chen, Lijun; Dall'Anese, Emiliano

    This paper considers distribution networks featuring distributed energy resources, and designs incentive-based mechanisms that allow the network operator and end-customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Two different network-customer coordination mechanisms that require different amounts of information shared between the network operator and end-customers are developed to identify a solution of a well-defined social-welfare maximization problem. Notably, the signals broadcast by the network operator assume the connotation of prices/incentives that induce the end-customers to adjust the generated/consumed powers in order to avoid the violation of the voltage constraints. Stability of the proposed schemes is analytically established and numerically corroborated.

  17. Incentive-Based Voltage Regulation in Distribution Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Baker, Kyri A; Zhou, Xinyang

    This paper considers distribution networks featuring distributed energy resources, and designs incentive-based mechanisms that allow the network operator and end-customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Two different network-customer coordination mechanisms that require different amounts of information shared between the network operator and end-customers are developed to identify a solution of a well-defined social-welfare maximization problem. Notably, the signals broadcast by the network operator assume the connotation of prices/incentives that induce the end-customers to adjust the generated/consumed powers in order to avoid the violation of the voltage constraints. Stability of the proposed schemes is analytically established and numerically corroborated.

  18. Analysis of Lunar Highland Regolith Samples From Apollo 16 Drive Core 64001/2 and Lunar Regolith Simulants - an Expanding Comparative Database

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Wentworth, Susan; McKay, Dave S.; Botha, Pieter; Butcher, Alan R.; Horsch, Hanna E.; Benedictus, Aukje; Gottlieb, Paul

    2008-01-01

    This slide presentation reviews the work to analyze the lunar highland regolith samples that came from the Apollo 16 core sample 64001/2 and simulants of lunar regolith, and build a comparative database. The work is part of a larger effort to compile an internally consistent database on lunar regolith (Apollo Samples) and lunar regolith simulants. This is in support of a future lunar outpost. The work is to characterize existing lunar regolith and simulants in terms of particle type, particle size distribution, particle shape distribution, bulk density, and other compositional characteristics, and to evaluate the regolith simulants by the same properties in comparison to the Apollo sample lunar regolith.

  19. Specification and Verification of Secure Concurrent and Distributed Software Systems

    DTIC Science & Technology

    1992-02-01

    primitive search strategies work for operating systems that contain relatively few operations. As the number of operations increases, so does the...others have granted him access to, etc. The burden of security falls on the operating system, although appropriate hardware support can minimize the...Guttag, J. Horning, and R. Levin. Synchronization primitives for a multiprocessor: a formal specification. Symposium on Operating System Principles

  20. Concepts of Concurrent Programming

    DTIC Science & Technology

    1990-04-01

    to the material presented. Carriero89 Carriero, N., and Gelernter, D. "How to Write Parallel Programs: A Guide to the Perplexed." ACM...between the architectures on which programs can be executed and the application domains from which problems are drawn. Our goal is to show how programs...Sept. 1989), 251-510. Abstract: There are four papers: 1. Programming Languages for Distributed Computing Systems (52); 2. How to Write Parallel

  1. Future War: An Assessment of Aerospace Campaigns in 2010,

    DTIC Science & Technology

    1996-01-01

    theoretician: "The impending sixth generation of warfare, with its centerpiece of superior data-processing to support precision smart weaponry, will radically...tions concept of "smart push, warrior pull." If JFACC were colocated with the worldwide intelligence manager, unit taskings and the applicable...intelligence information could be distributed concurrently ("smart push"). Intelligence officers sitting alongside the operational tasking officers would

  2. Next-Generation Undersea Warfare and Undersea Distributed Networked Systems

    DTIC Science & Technology

    2007-01-31

    Probability of false alarm R5 Redeployment, refueling, repositioning, replacement, and recovery ROE Rules of engagement RSTA Reconnaissance, surveillance...and decision aids) at a given point, considering mission, tasks, rules of engagement (ROE), objectives, and other appropriate factors. "Manning within...trajectories are important and must occur concurrently; they must, however, be governed by different rule sets. II Mission Capability Centric

  3. Cauldrons: An Abstraction for Concurrent Problem Solving. Revision.

    DTIC Science & Technology

    1986-09-01

    abstractions are the cauldron, a mechanism for organizing inference into distinct reasoning contexts; the frame, a way of modularly describing the components...reasoning systems. 1. Main Points This paper develops three mechanisms for organizing large distributed reasoning systems: Cauldrons -- A chunk of...introductions. The goal-node/frame mechanism supports a useful abstraction over the bare cauldrons implementation, providing a protocol for organizing

  4. Distributed Structure-Searchable Toxicity (DSSTox) Database

    EPA Pesticide Factsheets

    The Distributed Structure-Searchable Toxicity network provides a public forum for publishing downloadable, structure-searchable, standardized chemical structure files associated with chemical inventories or toxicity data sets of environmental relevance.

  5. UNSODA UNSATURATED SOIL HYDRAULIC DATABASE USER'S MANUAL VERSION 1.0

    EPA Science Inventory

    This report contains general documentation and serves as a user manual of the UNSODA program. UNSODA is a database of unsaturated soil hydraulic properties (water retention, hydraulic conductivity, and soil water diffusivity), basic soil properties (particle-size distribution, b...

  6. Yaquina Bay, Oregon, Intertidal Sediment Temperature Database, 1998 - 2006.

    EPA Science Inventory

    Detailed, long term sediment temperature records were obtained and compiled in a database to determine the influence of daily, monthly, seasonal and annual temperature variation on eelgrass distribution across the intertidal habitat in Yaquina Bay, Oregon. Both currently and hi...

  7. The Starlite Project

    DTIC Science & Technology

    1990-09-01

    conflicts. The current prototyping tool also provides a multiversion data object control mechanism. From a series of experiments, we found that the...performance of a multiversion distributed database system is quite sensitive to the size of read-sets and write-sets of transactions. A multiversion database...510-512. (18) Son, S. H. and N. Haghighi, "Performance Evaluation of Multiversion Database Systems," Sixth IEEE International Conference on Data

  8. Database interfaces on NASA's heterogeneous distributed database system

    NASA Technical Reports Server (NTRS)

    Huang, S. H. S.

    1986-01-01

    The purpose of the ORACLE interface is to enable the DAVID program to submit queries and transactions to databases running under the ORACLE DBMS. The interface package is made up of several modules. The progress of these modules is described below. The two approaches used in implementing the interface are also discussed. Detailed discussion of the design of the templates is shown and concluding remarks are presented.

  9. An Incentive-based Online Optimization Framework for Distribution Grids

    DOE PAGES

    Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun; ...

    2017-10-09

    This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.
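
A stripped-down sketch of the incentive loop described above (not the paper's algorithm): a linearized feeder model v = v0 + R p maps customer power injections p to voltages; the operator performs dual ascent on the upper voltage limit and broadcasts the resulting prices, and each customer re-solves its own quadratic profit problem given the price. The sensitivity matrix R, limits, and utilities below are invented illustrative numbers.

```python
# Dual-ascent sketch of incentive-based voltage regulation on a 2-node feeder.
R = [[0.5, 0.3], [0.3, 0.8]]     # voltage sensitivity to injections (invented)
v0, v_max = 1.00, 1.05           # no-injection voltage, upper limit (p.u.)
p_ref = [0.2, 0.2]               # injection each customer would prefer
mu = [0.0, 0.0]                  # dual prices on the voltage limit
step = 0.5

for _ in range(500):
    # operator broadcasts node prices built from the duals
    price = [sum(R[i][j] * mu[i] for i in range(2)) for j in range(2)]
    # customer j maximizes -(p - p_ref)^2 - price*p  =>  p = p_ref - price/2
    p = [p_ref[j] - price[j] / 2 for j in range(2)]
    # operator measures voltages and raises prices where the limit is violated
    v = [v0 + sum(R[i][j] * p[j] for j in range(2)) for i in range(2)]
    mu = [max(0.0, mu[i] + step * (v[i] - v_max)) for i in range(2)]

print([round(x, 3) for x in v])   # → [1.05, 1.05]: both voltages at the limit
```

The customers never reveal their utilities and the operator never dictates setpoints; the prices alone drive the injections to the point where the voltage constraints bind, which is the coordination property the paper establishes for its (more general, time-varying) setting.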

  10. An Incentive-based Online Optimization Framework for Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun

    This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.

  11. Estimation of the processes controlling variability in phytoplankton pigment distributions on the southeastern U.S. continental shelf

    NASA Technical Reports Server (NTRS)

    Mcclain, Charles R.; Ishizaka, Joji; Hofmann, Eileen E.

    1990-01-01

    Five coastal-zone-color-scanner images from the southeastern U.S. continental shelf are combined with concurrent moored current meter measurements to assess the processes controlling the variability in chlorophyll concentration and distribution in this region. An equation governing the space and time distribution of a nonconservative quantity such as chlorophyll is used in the calculations. The terms of the equation, estimated from observations, show that advective, diffusive, and local processes contribute to the plankton distributions and vary with time and location. The results from this calculation are compared with similar results obtained using a numerical physical-biological model with circulation fields derived from an optimal interpolation of the current meter observations and it is concluded that the two approaches produce different estimates of the processes controlling phytoplankton variability.
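
The governing equation is not written out in this record; in its commonly used form (a sketch only — the paper's exact advective, diffusive, and source terms may differ), the balance for a nonconservative tracer such as chlorophyll reads:

```latex
% Advection-diffusion balance for a nonconservative tracer C (chlorophyll):
% local change = advection + turbulent diffusion + biological source/sink
\frac{\partial C}{\partial t}
  = -\,\mathbf{u}\cdot\nabla C
  + \nabla\cdot\!\left(K\,\nabla C\right)
  + S(C)
```

Estimating each term from observations, as the authors do, amounts to partitioning the observed chlorophyll variability among these three contributions.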

  12. Spatial distribution of pulmonary blood flow in dogs in increased force environments

    NASA Technical Reports Server (NTRS)

    Greenleaf, J. F.; Ritman, E. L.; Chevalier, P. A.; Sass, D. J.; Wood, E. H.

    1978-01-01

    Spatial distribution of pulmonary blood flow during 2- to 3-min exposures to 6-8 Gy acceleration was studied, using radioactive microspheres in dogs, and compared to previously reported 1 Gy control distributions. Isotope distributions were measured by scintiscanning individual 1-cm-thick cross sections of excised, fixed lungs. Results indicate: (1) the fraction of cardiac output traversing left and right lungs did not change systematically with the duration and magnitude of acceleration; but (2) the fraction is strongly affected by the occurrence or absence of fast deep breaths, which cause an increase or decrease, respectively, in blood flow through the dependent lung; and (3) Gy acceleration caused a significant increase in relative pulmonary vascular resistance (PVR) in nondependent and dependent regions of the lung concurrent with a decrease in PVR in the midsagittal region of the thorax.

  13. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework

    PubMed Central

    2012-01-01

    Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909

  14. Restoration, Enhancement, and Distribution of the ATLAS-1 Imaging Spectrometric Observatory (ISO) Space Science Data Set

    NASA Technical Reports Server (NTRS)

    Germany, G. A.

    2001-01-01

    The primary goal of the funded task was to restore and distribute the ISO ATLAS-1 space science data set with enhanced software and database utilities. The first year was primarily dedicated to physically transferring the data from its original format to its initial CD archival format. The remainder of the first year was devoted to the verification of the restored data set and database. The second year was devoted to the enhancement of the data set, especially the development of IDL utilities and redesign of the database and search interface as needed. This period was also devoted to distribution of the rescued data set, principally the creation and maintenance of a web interface to the data set. The final six months was dedicated to working with NSSDC to create a permanent, off-site archive of the data set and supporting utilities. This time was also used to resolve last-minute quality and design issues.

  15. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    PubMed

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
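
The map/reduce decomposition behind a search engine like Hydra can be sketched in miniature: map each spectrum against database peptides within a precursor-mass tolerance, shuffle by spectrum, and reduce to the best match. The peptide table, masses, and shared-peak scorer below are illustrative stand-ins (Hydra's actual scorer is the K-score), not the project's code:

```python
from collections import defaultdict

# Toy MapReduce-style sketch of the search pattern described above.
PEPTIDES = {               # peptide -> (precursor mass, indexed fragment peaks)
    "PEPTIDEK": (916.5, {147.1, 263.1, 376.2}),
    "SAMPLER":  (789.4, {147.1, 234.1, 321.2}),
}

def mapper(spectrum_id, precursor_mass, peaks, tol=0.5):
    """Emit (spectrum_id, (score, peptide)) for each candidate in tolerance."""
    for pep, (mass, frags) in PEPTIDES.items():
        if abs(mass - precursor_mass) <= tol:
            score = len(frags & peaks)          # toy score: shared peak count
            yield spectrum_id, (score, pep)

def reducer(spectrum_id, candidates):
    """Keep the single best-scoring peptide per spectrum."""
    return spectrum_id, max(candidates)

# Simulate the shuffle (grouping) phase of MapReduce locally.
spectra = [("s1", 916.3, {147.1, 376.2, 500.0}),
           ("s2", 789.6, {147.1, 234.1, 321.2})]
grouped = defaultdict(list)
for sid, mass, peaks in spectra:
    for key, value in mapper(sid, mass, peaks):
        grouped[key].append(value)

results = dict(reducer(k, v) for k, v in grouped.items())
print(results)   # best (score, peptide) per spectrum id
```

On Hadoop, `mapper` and `reducer` would run as distributed tasks with the framework performing the grouping step, which is why throughput scales with the number of processors in the cluster.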

  16. Effects of concurrent exposure to tributyltin and 1,1-dichloro-2,2 bis (p-chlorophenyl) ethylene (p,p'-DDE) on immature male Wistar rats.

    PubMed

    Makita, Yuji; Omura, Minoru; Tanaka, Akiyo; Kiyohara, Chikako

    2005-12-01

    Tributyltin and 1, 1-dichloro-2, 2 bis (p-chlorophenyl) ethylene (p,p'-DDE) have been ubiquitously distributed over the world. In Japan, p,p'-DDE and tributyltin are ingested through marine products, in which these substances are accumulated through bio-concentration and the food chain. However, the consequence of potential combined hazards of these substances remains unknown. Therefore, the effects of concurrent exposure to 125 ppm p,p'-DDE and 25 ppm tributyltin were investigated in immature male Wistar rats by oral administration during puberty. In this study, tributyltin promoted the growth of pubertal male rats, while p,p'-DDE itself did not affect the growth but inhibited the growth enhancement by tributyltin. Furthermore, tributyltin reduced thymus weight but p,p'-DDE also prevented this weight reduction. Neither development of male sexual accessory organs nor sexual maturation was affected even by concurrent exposure to p,p'-DDE and tributyltin. No significant changes of serum testosterone, luteinizing hormone, follicle-stimulating hormone concentrations, and epididymal sperm numbers were observed with the administration of p,p'-DDE and/or tributyltin. These results indicate that sexual maturation, male reproductive organ development and sperm production is scarcely affected in immature male Wistar rats even by concurrent exposure to p,p'-DDE and tributyltin at a daily dose of ca. 2 mg/kg tributyltin and 10 mg/kg p,p'-DDE. Moreover, the simultaneous administration of p,p'-DDE with tributyltin counterbalanced the effects that were attributed to tributyltin alone.

  17. Chemoradiotherapy enhanced the efficacy of radiotherapy in nasopharyngeal carcinoma patients: a network meta-analysis

    PubMed Central

    He, Jian; Wu, Ping; Tang, Yaoyun; Liu, Sulai; Xie, Chubo; Luo, Shi; Zeng, Junfeng; Xu, Jing; Zhao, Suping

    2017-01-01

    Objective A Bayesian network meta-analysis (NMA) was conducted to estimate the overall survival (OS) and complete response (CR) performance in nasopharyngeal carcinoma (NPC) patients who have been given the treatment of radiotherapy, concurrent chemoradiotherapy (C), adjuvant chemotherapy (A), neoadjuvant chemotherapy (N), concurrent chemoradiotherapy with adjuvant chemotherapy (C+A), concurrent chemoradiotherapy with neoadjuvant chemotherapy (C+N) and neoadjuvant chemotherapy with adjuvant chemotherapy (N+A). Methods Literature search was conducted in electronic databases. Hazard ratios (HRs) accompanied by their 95% confidence intervals (95% CIs) or 95% credible intervals (95% CrIs) were applied to measure the relative survival benefit between two comparators, while odds ratios (ORs) with their 95% CIs or CrIs were given to present CR data from individual studies. Results In total, 52 qualified studies with 10,081 patients were included in this NMA. In conventional meta-analysis (MA), patients with C+N exhibited an average increase of 9% in the 3-year OS in relation to those with C+A. As for the NMA results, five therapies were associated with a significantly reduced HR when compared with the control group concerning 5-year OS. C, C+A and N+A also presented a decreased HR compared with A. There was continuity among 1-year, 3-year and 5-year OS status. Cluster analysis suggested that the three chemoradiotherapy-based regimens formed the most competitive group, located in the upper right corner of the cluster plot. Conclusion In view of survival rate and complete response, the NMA results revealed that C, C+A and C+N showed excellent efficacy; accordingly, these three therapies should be considered first-line treatments on the basis of this NMA. PMID:28418901

  18. Predictive Factors for Prophylactic Percutaneous Endoscopic Gastrostomy (PEG) Tube Placement and Use in Head and Neck Patients Following Intensity-Modulated Radiation Therapy (IMRT) Treatment: Concordance, Discrepancies, and the Role of Gabapentin.

    PubMed

    Yang, Wuyang; McNutt, Todd R; Dudley, Sara A; Kumar, Rachit; Starmer, Heather M; Gourin, Christine G; Moore, Joseph A; Evans, Kimberly; Allen, Mysha; Agrawal, Nishant; Richmon, Jeremy D; Chung, Christine H; Quon, Harry

    2016-04-01

    The prophylactic placement of a percutaneous endoscopic gastrostomy (PEG) tube in the head and neck cancer (HNC) patient is controversial. We sought to identify factors associated with prophylactic PEG placement and actual PEG use. Since 2010, data regarding PEG placement and use were prospectively recorded in a departmental database from January 2010 to December 2012. HNC patients treated with intensity-modulated radiation therapy (IMRT) were retrospectively evaluated from 2010 to 2012. Variables potentially associated with patient post-radiation dysphagia from previous literature, and our experience was evaluated. We performed multivariate logistic regression on these variables with PEG placement and PEG use, respectively, to compare the difference of association between the two arms. We identified 192 HNC patients treated with IMRT. Prophylactic PEG placement occurred in 121 (63.0 %) patients, with PEG use in 97 (80.2 %) patients. PEG placement was associated with male gender (p < .01), N stage ≥ N2 (p < .05), pretreatment swallowing difficulties (p < .01), concurrent chemotherapy (p < .01), pretreatment KPS ≥80 (p = .01), and previous surgery (p = .02). Concurrent chemotherapy (p = .03) was positively associated with the use of PEG feeding by the patient, whereas pretreatment KPS ≥80 (p = .03) and prophylactic gabapentin use (p < .01) were negatively associated with PEG use. The analysis suggests there were discrepancies between prophylactic PEG tube placement and actual use. Favorable pretreatment KPS, no pretreatment dysphagia, no concurrent chemotherapy, and the use of gabapentin were significantly associated with reduced PEG use. This analysis may help refine the indications for prophylactic PEG placement.

  19. Pregnancy, prescription medicines and the potential risk of herb-drug interactions: a cross-sectional survey.

    PubMed

    McLay, James S; Izzati, Naila; Pallivalapila, Abdul R; Shetty, Ashalatha; Pande, Binita; Rore, Craig; Al Hail, Moza; Stewart, Derek

    2017-12-19

    Pregnant women are routinely prescribed medicines while self-medicating with herbal natural products (HNPs) to treat predominantly pregnancy-related conditions. The aim of this study was to assess the potential for herb-drug interactions (HDIs) in pregnant women and to explore possible herb-drug interactions and their potential clinical significance. A cross-sectional survey of women during early pregnancy or immediately postpartum in North-East Scotland. Outcome measures included prescription medicine use (excluding vitamins) and potential HDIs assessed using the Natural Medicines Comprehensive Database. The survey was completed by 889 respondents (73% response rate). 45.3% (403) reported the use of at least one prescription medicine, excluding vitamins. Of those taking prescription medicines, 44.9% (181) also reported concurrent use of at least one HNP (range 1-12). A total of 91 different prescription medicines were reported by respondents using HNPs. Thirty-four herb-drug interactions were identified in 23 (12.7%) women with the potential to increase the risk of postpartum haemorrhage, alter maternal haemodynamics, and enhance maternal/fetal CNS depression. Almost all were rated as moderate (93.9%), one as potentially major (ginger and nifedipine) and only one as minor (ondansetron and chamomile). Almost half of pregnant women in this study were prescribed medicines excluding vitamins and minerals, and almost half of these used HNPs. Potential moderate to severe HDIs were identified in an eighth of the study cohort. Healthcare professionals should be aware that the concurrent use of HNPs and prescription medicines during pregnancy is common and carries potential risks.

  20. Concurrency and climate change signal in Scottish flooding

    NASA Astrophysics Data System (ADS)

    Harding, A. E.; Butler, A.; Goody, N.; Bertram, D.; Baggaley, N.; Tett, S. F.

    2013-12-01

    The Scottish Environment Protection Agency maintains a database of river gauging stations and intensity rain-gauges with a 3-hourly resolution that covers the majority of Scotland. Both SEPA and a number of other Scottish agencies are invested in climate change attribution in this data set. SEPA's main interest lies in trend detection and changes in river level ('stage') data throughout Scotland. Emergency response teams are more concerned with the concurrency of multiple flood events that might stretch their ability to respond effectively. Unfortunately, much of the rainfall signal within SEPA's river-gauge data is altered by land use changes, modified by artificial interventions such as reservoirs, compromised by tidal flow, or obscured by measurement issues. Data reduction techniques, indices of extreme rainfall, and hydrology-driven discrimination have been employed to produce a reduced set of flood-relevant information for 24-hour 'flashy' events. Links between this set and North Atlantic circulation have been explored, as have patterns of mutual occurrence across Scotland and location- and seasonally-dependent trends through time. Both frontal systems and summer convective storms have been characterised in terms of subsequent flood-inducing flow regime, their changing behaviour over the last fifty years, and their spatial extent. This is the first stage of an ongoing project that will intelligently expand to take less robust river and rain-gauge stations into account through statistical analysis and hydrological modelling. It is also the first study of its type to analyse a nation-scale dataset of both rainfall and river flow from multiple catchments for flood event concurrency. As rainfall events are expected to intensify across much of Europe, this kind of research is likely to have an increasing degree of relevance for policy-makers. This project demonstrates that productive, policy-relevant and mutually-rewarding partnerships are already underway.
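
The notion of flood-event concurrency used here can be made concrete with a small sketch: flag, per station, the days exceeding that station's own high-flow threshold, then count the days on which several stations are in flood at once. The station names and flow series below are synthetic, and the 90th-percentile threshold is an illustrative choice:

```python
import numpy as np

# Flag days on which a station's flow exceeds its own high-flow threshold
# (here its 90th percentile), then count days on which several stations
# exceed simultaneously. Station names and daily series are made up.
rng = np.random.default_rng(1)
days = 365
stations = {name: rng.gamma(2.0, 10.0, size=days)   # synthetic daily flows
            for name in ("Tay", "Spey", "Clyde")}

# Per-station exceedance of its own 90th-percentile threshold
exceed = np.array([flows > np.quantile(flows, 0.9)
                   for flows in stations.values()])

simultaneous = exceed.sum(axis=0)          # stations in flood on each day
concurrent_days = int((simultaneous >= 2).sum())
print(concurrent_days, "days with >= 2 stations in concurrent high flow")
```

If high flows were independent across catchments, roughly 365 × 3 × 0.1² ≈ 11 such overlap days would be expected; a markedly higher count is the kind of signal that matters for emergency-response capacity.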

  1. ROS1 fusions rarely overlap with other oncogenic drivers in non-small cell lung cancer

    PubMed Central

    Lin, Jessica J.; Ritterhouse, Lauren L.; Ali, Siraj M.; Bailey, Mark; Schrock, Alexa B.; Gainor, Justin F.; Ferris, Lorin A.; Mino-Kenudson, Mari; Miller, Vincent A.; Iafrate, Anthony J.; Lennerz, Jochen K.; Shaw, Alice T.

    2017-01-01

    Introduction Chromosomal rearrangements involving the ROS proto-oncogene 1 receptor tyrosine kinase gene (ROS1) define a distinct molecular subset of non-small cell lung cancer (NSCLC) with sensitivity to ROS1 inhibitors. Recent reports have suggested a significant overlap between ROS1 fusions and other oncogenic driver alterations, including mutations in epidermal growth factor receptor (EGFR) and KRAS proto-oncogene (KRAS). Methods We identified patients at our institution with ROS1-rearranged NSCLC who had undergone testing for genetic alterations in additional oncogenes, including EGFR, KRAS, and anaplastic lymphoma kinase (ALK). Clinicopathologic features and genetic testing results were reviewed. We also examined a separate database of ROS1-rearranged NSCLCs identified through a commercial FoundationOne assay. Results Among 62 patients with ROS1-rearranged NSCLC evaluated at our institution, none harbored concurrent ALK fusions (0%) or EGFR activating mutations (0%). KRAS mutations were detected in two cases (3.2%), one of which harbored a concurrent non-canonical KRAS I24N mutation of unknown biological significance. In a separate ROS1 FISH-positive case, targeted sequencing failed to confirm a ROS1 fusion, but instead identified a KRAS G13D mutation. No concurrent mutations in BRAF, ERBB2, PIK3CA, AKT1, or MAP2K1 were detected. Analysis of an independent dataset of 166 ROS1-rearranged NSCLCs identified by FoundationOne demonstrated rare cases with co-occurring driver mutations in EGFR (1/166) and KRAS (3/166), and no cases with co-occurring ROS1 and ALK rearrangements. Conclusions ROS1 rearrangements rarely overlap with alterations in EGFR, KRAS, ALK, or other targetable oncogenes in NSCLC. PMID:28088512

  2. "Effects of networking on career success: A longitudinal study": Correction to Wolff and Moser (2009).

    PubMed

    2017-02-01

    Reports an error in "Effects of networking on career success: A longitudinal study" by Hans-Georg Wolff and Klaus Moser (Journal of Applied Psychology, 2009[Jan], Vol 94[1], 196-206). In the article, results from a confirmatory factor analysis on subjective career success in the Measures section contained an error in the reported Chi-square (i.e., χ²(5, N = 257) = 9.17). This error does not alter any conclusions or substantive statements in the original article. The correct fit indices are "χ²(5, N = 257) = 9.67, p = .08, RMSEA = 0.059, CFI = 1.00." (The following abstract of the original article appeared in record 2009-00697-007.) Previous research has reported effects of networking, defined as building, maintaining, and using relationships, on career success. However, empirical studies have relied exclusively on concurrent or retrospective designs that rest upon strong assumptions about the causal direction of this relation and depict a static snapshot of the relation at a given point in time. This study provides a dynamic perspective on the effects of networking on career success and reports results of a longitudinal study. Networking was assessed with 6 subscales that resulted from combining measures of the facets of (a) internal versus external networking and (b) building versus maintaining versus using contacts. Objective (salary) and subjective (career satisfaction) measures of career success were obtained for 3 consecutive years. Multilevel analyses showed that networking is related to concurrent salary and that it is related to the growth rate of salary over time. Networking is also related to concurrent career satisfaction. As satisfaction remained stable over time, no effects of networking on the growth of career satisfaction were found. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Concurrent and prognostic utility of subtyping anorexia nervosa along dietary and negative affect dimensions.

    PubMed

    Forbush, Kelsie T; Hagan, Kelsey E; Salk, Rachel H; Wildes, Jennifer E

    2017-03-01

    Bulimia nervosa can be reliably classified into subtypes based on dimensions of dietary restraint and negative affect. Community and clinical studies have shown that dietary-negative affect subtypes have greater test-retest reliability and concurrent and predictive validity compared to subtypes based on the Diagnostic and Statistical Manual of Mental Disorders (DSM). Although dietary-negative affect subtypes have shown utility for characterizing eating disorders that involve binge eating, this framework may have broader implications for understanding restrictive eating disorders. The purpose of this study was to test the concurrent and predictive validity of dietary-negative affect subtypes among patients with anorexia nervosa (AN; N = 194). Latent profile analysis was used to identify subtypes of AN based on dimensions of dietary restraint and negative affect. Chi-square and multivariate analysis of variance were used to characterize baseline differences between identified subtypes. Structural equation modeling was used to test whether dietary-negative affect subtypes would outperform DSM categories in predicting clinically relevant outcomes. Results supported a 2-profile model that replicated dietary-negative affect subtypes: Latent Profile 1 (n = 68) had clinically elevated scores on restraint only; Latent Profile 2 (n = 126) had elevated scores on both restraint and negative affect. Validation analyses showed that membership in the dietary-negative affect profile was associated with greater lifetime psychiatric comorbidity and psychosocial impairment compared to the dietary class. Dietary-negative affect subtypes only outperformed DSM categories in predicting quality-of-life impairment at 1-year follow-up. Findings highlight the clinical utility of subtyping AN based on dietary restraint and negative affect for informing future treatment-matching or personalized medicine strategies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. The evaluation of β-adrenoceptor blocking agents in patients with COPD and congestive heart failure: a nationwide study

    PubMed Central

    Lin, Tien-Yu; Huang, Yaw-Bin; Chen, Chung-Yu

    2017-01-01

    Objective β-Blockers are safe and improve survival in patients with both congestive heart failure (CHF) and COPD. However, the superiority of different types of β-blockers is still unclear among patients with CHF and COPD. The association between β-blockers and CHF exacerbation as well as COPD exacerbation remains unclear. The objective of this study was to compare the outcome of different β-blockers in patients with concurrent CHF and COPD. Patients and methods We used the National Health Insurance Research Database in Taiwan to conduct a retrospective cohort study. The inclusion criteria for CHF were patients who were >20 years old and were diagnosed with CHF between January 1, 2005 and December 31, 2012. COPD patients included those who had outpatient visit claims ≥2 times within 365 days or 1 claim for hospitalization with a COPD diagnosis. A time-dependent Cox proportional hazards regression model was applied to evaluate the effectiveness of β-blockers in the study population. Results We identified 1,872 patients with concurrent CHF and COPD. Only high-dose bisoprolol significantly reduced the risk of death and slightly decreased the hospitalization rate due to CHF exacerbation (death: adjusted hazard ratio [aHR] =0.51, 95% confidence interval [CI] =0.29–0.89; hospitalization rate due to CHF exacerbation: aHR =0.48, 95% CI =0.23–1.00). No association was observed between β-blocker use and COPD exacerbation. Conclusion In patients with concurrent CHF and COPD, β-blockers reduced mortality, CHF exacerbation, and the need for hospitalization. Bisoprolol was found to reduce mortality and CHF exacerbation compared to carvedilol and metoprolol. PMID:28894360

  5. An exploratory analysis of emotion dynamics between mothers and adolescents during conflict discussions.

    PubMed

    Main, Alexandra; Paxton, Alexandra; Dale, Rick

    2016-09-01

    Dynamic patterns of influence between parents and children have long been considered key to understanding family relationships. Despite this, most observational research on emotion in parent-child interactions examines global behaviors at the expense of exploring moment-to-moment fluctuations in emotions that are important for relational outcomes. Using recurrence quantification analysis (RQA) and growth curve analysis, this investigation explored emotion dynamics during parent-adolescent conflict interactions, focusing not only on concurrently shared emotional states but also on time-lagged synchrony of parents' and adolescents' emotions relative to one another. Mother-adolescent dyads engaged in a 10-min conflict discussion and reported on their satisfaction with the process and outcome of the discussion. Emotions were coded using the Specific Affect Coding System (SPAFF) and were collapsed into the following categories: negativity, positivity, and validation/interest. RQA and growth curve analyses revealed that negative and positive emotions were characterized by a concurrently synchronous pattern across all dyads, with the highest recurrence rates occurring around simultaneity. However, lower levels of concurrent synchrony of negative emotions were associated with higher discussion satisfaction. We also found that patterns of negativity differed with age: Mothers led negativity in dyads with younger adolescents, and adolescents led negativity in dyads with older adolescents. In contrast to negative and positive emotions, validation/interest showed the time-lagged pattern characteristic of turn-taking, and more highly satisfied dyads showed stronger patterns of time-lagged coordination in validation/interest. Our findings underscore the dynamic nature of emotions in parent-adolescent interactions and highlight the important contributions of these moment-to-moment dynamics toward overall interaction quality. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Software defined radio (SDR) architecture for concurrent multi-satellite communications

    NASA Astrophysics Data System (ADS)

    Maheshwarappa, Mamatha R.

    SDRs have emerged as a viable approach for space communications over the last decade by delivering low-cost hardware and flexible software solutions. The flexibility introduced by the SDR concept not only allows the realisation of concurrent multiple standards on one platform, but also promises to ease the implementation of one communication standard on differing SDR platforms by signal porting. This technology would facilitate implementing reconfigurable nodes for parallel satellite reception in Mobile/Deployable Ground Segments and Distributed Satellite Systems (DSS) for amateur radio/university satellite operations. This work outlines the recent advances in embedded technologies that can enable new communication architectures for concurrent multi-satellite or satellite-to-ground missions that pose multi-link challenges. This research proposes a novel concept to run advanced parallelised SDR back-end technologies in a Commercial-Off-The-Shelf (COTS) embedded system that can support multi-signal processing for multi-satellite scenarios simultaneously. The initial SDR implementation could support only one receiver chain due to system saturation. However, the design was optimised to facilitate multiple signals within the limited resources available on an embedded system at any given time. This was achieved by providing a VHDL solution alongside the existing Python and C/C++ programming languages, with parallelisation to accelerate performance whilst maintaining flexibility. The improvement in performance was validated at every stage through profiling. Various cases of concurrent multiple signals with different standards such as frequency (with Doppler effect) and symbol rates were simulated in order to validate the novel architecture proposed in this research. The architecture also allows the system to be reconfigured by changing communication standards in soft real-time. The chosen COTS solution provides a generic software methodology for both ground and space applications that will remain unaltered despite new evolutions in hardware, and supports concurrent multi-standard, multi-channel and multi-rate telemetry signals.

  7. Kristin Munch | NREL

    Science.gov Websites

    Information Management System, Materials Research Society Fall Meeting (2013), Photovoltaics Informatics. Expertise includes scientific data management, database and data systems design, database clusters, storage systems integration, and distributed data analytics. She has used her experience in laboratory data management systems, lab

  8. Development of the interconnectivity and enhancement (ICE) module in the Virginia Department of Transportation's Geotechnical Database Management System Framework.

    DOT National Transportation Integrated Search

    2007-01-01

    An Internet-based, spatiotemporal Geotechnical Database Management System (GDBMS) Framework was implemented at the Virginia Department of Transportation (VDOT) in 2002 to manage geotechnical data using a distributed Geographical Information System (G...

  9. Bibliographic Databases Outside of the United States.

    ERIC Educational Resources Information Center

    McGinn, Thomas P.; And Others

    1988-01-01

    Eight articles describe the development, content, and structure of databases outside of the United States. Features discussed include library involvement, authority control, shared cataloging services, union catalogs, thesauri, abstracts, and distribution methods. Countries and areas represented are Latin America, Australia, the United Kingdom,…

  10. Eta-Sub-Earth Projection from Kepler Data

    NASA Technical Reports Server (NTRS)

    Traub, Wesley A.

    2012-01-01

    Outline of talk: (1) The Kepler database (2) Biases (3) The radius distribution (4) The period distribution (5) Projecting from the sample to the population (6) Extrapolating the period distribution (7) The Habitable Zone (8) Calculating the number of terrestrial, HZ planets (10) Conclusions

  11. Host range, host ecology, and distribution of more than 11800 fish parasite species

    USGS Publications Warehouse

    Strona, Giovanni; Palomares, Maria Lourdes D.; Bailly, Nicholas; Galli, Paolo; Lafferty, Kevin D.

    2013-01-01

    Our data set includes 38 008 fish parasite records (for Acanthocephala, Cestoda, Monogenea, Nematoda, Trematoda) compiled from the scientific literature, Internet databases, and museum collections paired to the corresponding host ecological, biogeographical, and phylogenetic traits (maximum length, growth rate, life span, age at maturity, trophic level, habitat preference, geographical range size, taxonomy). The data focus on host features, because specific parasite traits are not consistently available across records. For this reason, the data set is intended as a flexible resource able to extend the principles of ecological niche modeling to the host–parasite system, providing researchers with the data to model parasite niches based on their distribution in host species and the associated host features. In this sense, the database offers a framework for testing general ecological, biogeographical, and phylogenetic hypotheses based on the identification of hosts as parasite habitat. Potential applications of the data set are, for example, the investigation of species–area relationships or the taxonomic distribution of host-specificity. The provided host–parasite list is that currently used by Fish Parasite Ecology Software Tool (FishPEST, http://purl.oclc.org/fishpest), which is a website that allows researchers to model several aspects of the relationships between fish parasites and their hosts. The database is intended for researchers who wish to have more freedom to analyze the database than currently possible with FishPEST. However, for readers who have not seen FishPEST, we recommend using this as a starting point for interacting with the database.

  12. Accessing and distributing EMBL data using CORBA (common object request broker architecture).

    PubMed

    Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P

    2000-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems.

  13. Accessing and distributing EMBL data using CORBA (common object request broker architecture)

    PubMed Central

    Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip

    2000-01-01

    Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259

  14. Geologic database for digital geology of California, Nevada, and Utah: an application of the North American Data Model

    USGS Publications Warehouse

    Bedford, David R.; Ludington, Steve; Nutt, Constance M.; Stone, Paul A.; Miller, David M.; Miller, Robert J.; Wagner, David L.; Saucedo, George J.

    2003-01-01

    The USGS is creating an integrated national database for digital state geologic maps that includes stratigraphic, age, and lithologic information. The majority of the conterminous 48 states have digital geologic base maps available, often at scales of 1:500,000. This product is a prototype, and is intended to demonstrate the types of derivative maps that will be possible with the national integrated database. This database permits the creation of a number of types of maps via simple or sophisticated queries, maps that may be useful in a number of areas, including mineral-resource assessment, environmental assessment, and regional tectonic evolution. This database is distributed with three main parts: a Microsoft Access 2000 database containing geologic map attribute data, an Arc/Info (Environmental Systems Research Institute, Redlands, California) Export format file containing points representing designation of stratigraphic regions for the Geologic Map of Utah, and an ArcView 3.2 (Environmental Systems Research Institute, Redlands, California) project containing scripts and dialogs for performing a series of generalization and mineral resource queries. IMPORTANT NOTE: Spatial data for the respective state geologic maps is not distributed with this report. The digital state geologic maps for the states involved in this report are separate products, and two of them are produced by individual state agencies, which may be legally and/or financially responsible for this data. However, the spatial datasets for maps discussed in this report are available to the public. Questions regarding the distribution, sale, and use of individual state geologic maps should be sent to the respective state agency. We do provide suggestions for obtaining and formatting the spatial data to make it compatible with data in this report. See section ‘Obtaining and Formatting Spatial Data’ in the PDF version of the report.

  15. An Overview of ARL’s Multimodal Signatures Database and Web Interface

    DTIC Science & Technology

    2007-12-01

    ActiveX components, which hindered distribution due to license agreements and run-time license software to use such components. g. Proprietary...Overview The database consists of multimodal signature data files in the HDF5 format. Generally, each signature file contains all the ancillary...only contains information in the database, Web interface, and signature files that is releasable to the public. The Web interface consists of static

  16. DESPIC: Detecting Early Signatures of Persuasion in Information Cascades

    DTIC Science & Technology

    2015-08-27

    over NoSQL Databases, Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014). 26-MAY-14, . : , P...over NoSQL Databases. Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014). Chicago, IL, USA...distributed NoSQL databases including HBase and Riak, we finalized the requirements of the optimal computational architecture to support our framework

  17. Verification of the databases EXFOR and ENDF

    NASA Astrophysics Data System (ADS)

    Berton, Gottfried; Damart, Guillaume; Cabellos, Oscar; Beauzamy, Bernard; Soppera, Nicolas; Bossant, Manuel

    2017-09-01

    The objective of this work is the verification of large experimental (EXFOR) and evaluated nuclear reaction databases (JEFF, ENDF, JENDL, TENDL…). The work is applied to neutron reactions in EXFOR data, including threshold reactions, isomeric transitions, angular distributions, and data in the resonance region for both isotopes and natural elements. Finally, the resonance integrals compiled in the EXFOR database are also compared with those derived from the evaluated libraries.

  18. Temporal texture of associative encoding modulates recall processes.

    PubMed

    Tibon, Roni; Levy, Daniel A

    2014-02-01

    Binding aspects of an experience that are distributed over time is an important element of episodic memory. In the current study, we examined how the temporal complexity of an experience may govern the processes required for its retrieval. We recorded event-related potentials during episodic cued recall following paired-associate learning of concurrently and sequentially presented object-picture pairs. Cued recall success effects over anterior and posterior areas were apparent in several time windows. In anterior locations, these recall success effects were similar for concurrently and sequentially encoded pairs. However, in posterior sites clustered over parietal scalp, the effect was larger for the retrieval of sequentially encoded pairs. We suggest that anterior aspects of the mid-latency recall success effects may reflect working-with-memory operations or direct-access recall processes, while more posterior aspects reflect recollective processes which are required for retrieval of episodes of greater temporal complexity. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Comparison of concurrent strain gage- and pressure transducer-measured flight loads on a lifting reentry vehicle and correlation with wind tunnel predictions

    NASA Technical Reports Server (NTRS)

    Tang, M. H.; Sefic, W. J.; Sheldon, R. G.

    1978-01-01

    Concurrent strain gage and pressure transducer measured flight loads on a lifting reentry vehicle are compared and correlated with wind tunnel-predicted loads. Subsonic, transonic, and supersonic aerodynamic loads are presented for the left fin and control surfaces of the X-24B lifting reentry vehicle. Typical left fin pressure distributions are shown. The effects of variations in angle of attack, angle of sideslip, and Mach number on the left fin loads and rudder hinge moments are presented in coefficient form. Also presented are the effects of variations in angle of attack and Mach number on the upper flap, lower flap, and aileron hinge-moment coefficients. The effects of variations in lower flap hinge moments due to changes in lower flap deflection and Mach number are presented in terms of coefficient slopes.

  20. Concurrent engineering research center

    NASA Technical Reports Server (NTRS)

    Callahan, John R.

    1995-01-01

    The projects undertaken by The Concurrent Engineering Research Center (CERC) at West Virginia University are reported and summarized. CERC's participation in the Department of Defense's Defense Advanced Research Projects Agency initiative relating to technology needed to improve the product development process is described, particularly in the area of advanced weapon systems. The efforts committed to improving collaboration among the diverse and distributed health care providers are reported, along with the research activities for NASA in Independent Software Verification and Validation. CERC also takes part in the electronic respirator certification initiated by The National Institute for Occupational Safety and Health, as well as in the efforts to find a solution to the problem of producing environment-friendly end-products for product developers worldwide. The 3M Fiber Metal Matrix Composite Model Factory Program is discussed. CERC technologies, facilities, and personnel-related issues are described, along with its library and technical services and recent publications.

  1. Concurrent application of TMS and near-infrared optical imaging: methodological considerations and potential artifacts

    PubMed Central

    Parks, Nathan A.

    2013-01-01

    The simultaneous application of transcranial magnetic stimulation (TMS) with non-invasive neuroimaging provides a powerful method for investigating functional connectivity in the human brain and the causal relationships between areas in distributed brain networks. TMS has been combined with numerous neuroimaging techniques including, electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET). Recent work has also demonstrated the feasibility and utility of combining TMS with non-invasive near-infrared optical imaging techniques, functional near-infrared spectroscopy (fNIRS) and the event-related optical signal (EROS). Simultaneous TMS and optical imaging affords a number of advantages over other neuroimaging methods but also involves a unique set of methodological challenges and considerations. This paper describes the methodology of concurrently performing optical imaging during the administration of TMS, focusing on experimental design, potential artifacts, and approaches to controlling for these artifacts. PMID:24065911

  2. Experimental determination of entanglement with a single measurement.

    PubMed

    Walborn, S P; Souto Ribeiro, P H; Davidovich, L; Mintert, F; Buchleitner, A

    2006-04-20

    Nearly all protocols requiring shared quantum information--such as quantum teleportation or key distribution--rely on entanglement between distant parties. However, entanglement is difficult to characterize experimentally. All existing techniques for doing so, including entanglement witnesses or Bell inequalities, disclose the entanglement of some quantum states but fail for other states; therefore, they cannot provide satisfactory results in general. Such methods are fundamentally different from entanglement measures that, by definition, quantify the amount of entanglement in any state. However, these measures suffer from the severe disadvantage that they typically are not directly accessible in laboratory experiments. Here we report a linear optics experiment in which we directly observe a pure-state entanglement measure, namely concurrence. Our measurement set-up includes two copies of a quantum state: these 'twin' states are prepared in the polarization and momentum degrees of freedom of two photons, and concurrence is measured with a single, local measurement on just one of the photons.
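    For a pure two-qubit state |ψ⟩ = a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩, concurrence has the closed form C = 2|ad − bc|. A minimal numerical sketch of that formula (illustrative only; it is not the paper's two-copy optical measurement scheme):

    ```python
    from math import sqrt

    def concurrence(state):
        """Concurrence C = 2|ad - bc| for a pure two-qubit state
        [a, b, c, d] written in the |00>, |01>, |10>, |11> basis."""
        a, b, c, d = (complex(x) for x in state)
        return 2 * abs(a * d - b * c)

    bell = [1 / sqrt(2), 0, 0, 1 / sqrt(2)]  # maximally entangled Bell state
    product = [1, 0, 0, 0]                   # separable |00>
    print(concurrence(bell))     # ~1.0 (up to float rounding)
    print(concurrence(product))  # 0.0
    ```

    A separable state gives C = 0 and a maximally entangled state gives C = 1, which is the quantity the experiment reads out with its single local measurement.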

  3. Examining the Effect of the Die Angle on Tool Load and Wear in the Extrusion Process

    NASA Astrophysics Data System (ADS)

    Nowotyńska, Irena; Kut, Stanisław

    2014-04-01

    Tool durability is a crucial factor in every manufacturing process, including extrusion. Striving for higher product quality should be accompanied by long tool life and reduced production costs. This article presents comparative research on the load and wear of dies with various working-cone angles during concurrent extrusion. The numerical calculations of tool load during concurrent extrusion were performed with the MSC MARC software, using the finite element method (FEM). The Archard model, implemented in the software via FEM, was used to determine and compare die wear. The tool deformations and stress distributions were determined from the analyses performed, and the die wear depth at various working-cone angles was determined. A properly shaped die not only affects the properties of the extruded material but also controls loads, elastic deformation, and tool life.
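    The Archard model used in the study relates worn volume to load, sliding distance, and hardness. A minimal sketch of the classical relation V = K·F·s/H; all numeric values below are illustrative placeholders, not parameters from the study:

    ```python
    def archard_wear_volume(k, load_n, sliding_dist_m, hardness_pa):
        """Classical Archard relation: V = K * F * s / H, where K is a
        dimensionless wear coefficient, F the normal load (N), s the
        sliding distance (m), and H the hardness (Pa)."""
        return k * load_n * sliding_dist_m / hardness_pa

    # Illustrative values only (hypothetical, not taken from the article):
    v = archard_wear_volume(k=1e-4, load_n=5e4, sliding_dist_m=100.0,
                            hardness_pa=6e9)
    print(f"worn volume ~ {v:.2e} m^3")
    ```

    In an FEM implementation the same relation is applied per contact element, with the local contact pressure and incremental sliding distance in place of the global F and s.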

  4. Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2012-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
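    The two-level structure described above can be sketched schematically: one slab of the volume per node (distributed level), with the cores of each node splitting that slab (shared level), and a final compositing reduction. This is a toy sketch in which threads stand in for both MPI ranks and cores so it runs in one process; it is not the paper's ray-caster:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def render_chunk(cells):
        # Stand-in for per-core ray-casting work over a block of cells.
        return sum(cells)

    def node_task(slab, cores=4):
        # Shared-memory level: the cores of one node split the node's slab.
        pieces = [slab[i::cores] for i in range(cores)]
        with ThreadPoolExecutor(max_workers=cores) as pool:
            return sum(pool.map(render_chunk, pieces))

    def hybrid_render(data, nodes=4):
        # Distributed-memory level: one slab per node (MPI ranks in a real
        # run; threads here only to keep the sketch single-process).
        slabs = [data[i::nodes] for i in range(nodes)]
        with ThreadPoolExecutor(max_workers=nodes) as pool:
            partials = pool.map(node_task, slabs)
        return sum(partials)  # final compositing/reduction step

    print(hybrid_render(list(range(1000))))  # -> 499500
    ```

    The performance argument in the abstract corresponds to the outer level: fewer distributed participants means fewer messages in the compositing step, which dominates at high concurrency.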

  5. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph-theoretical model for describing the data and control flow associated with the execution of large-grained algorithms in a spatially distributed computing environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
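    The core idea of executing a large-grained algorithm as a data flow graph, where a node fires once all of its inputs are available, can be sketched generically. This is a hypothetical illustration of data-flow firing rules, not the actual ATAMM specification:

    ```python
    from collections import deque

    def execute_dataflow(graph, ops, inputs):
        """Fire each node once all of its predecessors have produced output.
        graph: node -> list of predecessor nodes; ops: node -> callable;
        inputs: values for the source nodes."""
        indeg = {n: len(preds) for n, preds in graph.items()}
        succs = {n: [] for n in graph}
        for n, preds in graph.items():
            for p in preds:
                succs[p].append(n)
        ready = deque(n for n, d in indeg.items() if d == 0)
        values = dict(inputs)
        while ready:
            n = ready.popleft()
            if n not in values:  # source nodes already carry input values
                values[n] = ops[n](*(values[p] for p in graph[n]))
            for s in succs[n]:
                indeg[s] -= 1
                if indeg[s] == 0:  # all inputs ready: node may fire
                    ready.append(s)
        return values

    # Hypothetical two-operation graph: c = a + b, then d = 2 * c
    g = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
    ops = {"c": lambda x, y: x + y, "d": lambda x: 2 * x}
    print(execute_dataflow(g, ops, {"a": 3, "b": 4})["d"])  # -> 14
    ```

    In a multiprocessor setting, independent nodes in the ready queue are exactly the concurrency the architecture can exploit.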

  6. An inventory of Arctic Ocean data in the World Ocean Database

    NASA Astrophysics Data System (ADS)

    Zweng, Melissa M.; Boyer, Tim P.; Baranova, Olga K.; Reagan, James R.; Seidov, Dan; Smolyar, Igor V.

    2018-03-01

    The World Ocean Database (WOD) contains over 1.3 million oceanographic casts (where cast refers to an oceanographic profile or set of profiles collected concurrently at more than one depth between the ocean surface and ocean bottom) collected in the Arctic Ocean basin and its surrounding marginal seas. The data, collected from 1849 to the present, come from many submitters and countries, and were collected using a variety of instruments and platforms. These data, along with the derived products World Ocean Atlas (WOA) and the Arctic Regional Climatologies, are exceptionally useful - the data are presented in a standardized, easy to use format and include metadata and quality control information. Collecting data in the Arctic Ocean is challenging, and coverage in space and time ranges from excellent to nearly non-existent. WOD continues to compile a comprehensive collection of Arctic Ocean profile data, ideal for oceanographic, environmental and climatic analyses (https://doi.org/10.7289/V54Q7S16).

  7. Read Code Quality Assurance

    PubMed Central

    Schulz, Erich; Barrett, James W.; Price, Colin

    1998-01-01

    As controlled clinical vocabularies assume an increasing role in modern clinical information systems, so the issue of their quality demands greater attention. In order to meet the resulting stringent criteria for completeness and correctness, a quality assurance system comprising a database of more than 500 rules is being developed and applied to the Read Thesaurus. The authors discuss the requirement to apply quality assurance processes to their dynamic editing database in order to ensure the quality of exported products. Sources of errors include human, hardware, and software factors as well as new rules and transactions. The overall quality strategy includes prevention, detection, and correction of errors. The quality assurance process encompasses simple data specification, internal consistency, inspection procedures and, eventually, field testing. The quality assurance system is driven by a small number of tables and UNIX scripts, with “business rules” declared explicitly as Structured Query Language (SQL) statements. Concurrent authorship, client-server technology, and an initial failure to implement robust transaction control have all provided valuable lessons. The feedback loop for error management needs to be short. PMID:9670131
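    A quality-assurance rule declared as a SQL statement, as described above, can be sketched with an in-memory database. The table layout and rule below are invented for illustration and are not the actual Read Thesaurus schema:

    ```python
    import sqlite3

    # Hypothetical thesaurus table and one QA "business rule": every active
    # concept must have a non-empty preferred term.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE concept (code TEXT, preferred_term TEXT, "
                 "active INTEGER)")
    conn.executemany("INSERT INTO concept VALUES (?, ?, ?)",
                     [("H33..", "Asthma", 1), ("X101.", "", 1)])

    rule = ("SELECT code FROM concept WHERE active = 1 "
            "AND (preferred_term IS NULL OR preferred_term = '')")
    violations = [row[0] for row in conn.execute(rule)]
    print(violations)  # -> ['X101.']
    ```

    Declaring each rule as a query keeps the rule base data-driven: adding a check means adding a row to a rules table, not changing editor code, which matches the short feedback loop the authors call for.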

  8. Bioinformatics analysis of transcriptome dynamics during growth in angus cattle longissimus muscle.

    PubMed

    Moisá, Sonia J; Shike, Daniel W; Graugnard, Daniel E; Rodriguez-Zas, Sandra L; Everts, Robin E; Lewin, Harris A; Faulkner, Dan B; Berger, Larry L; Loor, Juan J

    2013-01-01

    Transcriptome dynamics in the longissimus muscle (LM) of young Angus cattle were evaluated at 0, 60, 120, and 220 days from early-weaning. Bioinformatic analysis was performed using the dynamic impact approach (DIA) by means of Kyoto Encyclopedia of Genes and Genomes (KEGG) and Database for Annotation, Visualization and Integrated Discovery (DAVID) databases. Between 0 to 120 days (growing phase) most of the highly-impacted pathways (eg, ascorbate and aldarate metabolism, drug metabolism, cytochrome P450 and Retinol metabolism) were inhibited. The phase between 120 to 220 days (finishing phase) was characterized by the most striking differences with 3,784 differentially expressed genes (DEGs). Analysis of those DEGs revealed that the most impacted KEGG canonical pathway was glycosylphosphatidylinositol (GPI)-anchor biosynthesis, which was inhibited. Furthermore, inhibition of calpastatin and activation of tyrosine aminotransferase ubiquitination at 220 days promotes proteasomal degradation, while the concurrent activation of ribosomal proteins promotes protein synthesis. Therefore, the balance of these processes likely results in a steady-state of protein turnover during the finishing phase. Results underscore the importance of transcriptome dynamics in LM during growth.

  9. Read Code quality assurance: from simple syntax to semantic stability.

    PubMed

    Schulz, E B; Barrett, J W; Price, C

    1998-01-01

    As controlled clinical vocabularies assume an increasing role in modern clinical information systems, so the issue of their quality demands greater attention. In order to meet the resulting stringent criteria for completeness and correctness, a quality assurance system comprising a database of more than 500 rules is being developed and applied to the Read Thesaurus. The authors discuss the requirement to apply quality assurance processes to their dynamic editing database in order to ensure the quality of exported products. Sources of errors include human, hardware, and software factors as well as new rules and transactions. The overall quality strategy includes prevention, detection, and correction of errors. The quality assurance process encompasses simple data specification, internal consistency, inspection procedures and, eventually, field testing. The quality assurance system is driven by a small number of tables and UNIX scripts, with "business rules" declared explicitly as Structured Query Language (SQL) statements. Concurrent authorship, client-server technology, and an initial failure to implement robust transaction control have all provided valuable lessons. The feedback loop for error management needs to be short.

  10. Intelligent distributed medical image management

    NASA Astrophysics Data System (ADS)

    Garcia, Hong-Mei C.; Yun, David Y.

    1995-05-01

    The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute- based and content-based retrievals.

  11. rAvis: an R-package for downloading information stored in Proyecto AVIS, a citizen science bird project.

    PubMed

    Varela, Sara; González-Hernández, Javier; Casabella, Eduardo; Barrientos, Rafael

    2014-01-01

    Citizen science projects store an enormous amount of information about species distribution, diversity and characteristics. Researchers are now beginning to make use of this rich collection of data. However, access to these databases is not always straightforward. Apart from the largest and international projects, citizen science repositories often lack specific Application Programming Interfaces (APIs) to connect them to the scientific environments. Thus, it is necessary to develop simple routines to allow researchers to take advantage of the information collected by smaller citizen science projects, for instance, programming specific packages to connect them to popular scientific environments (like R). Here, we present rAvis, an R-package to connect R-users with Proyecto AVIS (http://proyectoavis.com), a Spanish citizen science project with more than 82,000 bird observation records. We develop several functions to explore the database, to plot the geographic distribution of the species occurrences, and to generate personal queries to the database about species occurrences (number of individuals, distribution, etc.) and birdwatcher observations (number of species recorded by each collaborator, UTMs visited, etc.). This new R-package will allow scientists to access this database and to exploit the information generated by Spanish birdwatchers over the last 40 years.

  12. rAvis: An R-Package for Downloading Information Stored in Proyecto AVIS, a Citizen Science Bird Project

    PubMed Central

    Varela, Sara; González-Hernández, Javier; Casabella, Eduardo; Barrientos, Rafael

    2014-01-01

    Citizen science projects store an enormous amount of information about species distribution, diversity and characteristics. Researchers are now beginning to make use of this rich collection of data. However, access to these databases is not always straightforward. Apart from the largest and international projects, citizen science repositories often lack specific Application Programming Interfaces (APIs) to connect them to the scientific environments. Thus, it is necessary to develop simple routines to allow researchers to take advantage of the information collected by smaller citizen science projects, for instance, programming specific packages to connect them to popular scientific environments (like R). Here, we present rAvis, an R-package to connect R-users with Proyecto AVIS (http://proyectoavis.com), a Spanish citizen science project with more than 82,000 bird observation records. We develop several functions to explore the database, to plot the geographic distribution of the species occurrences, and to generate personal queries to the database about species occurrences (number of individuals, distribution, etc.) and birdwatcher observations (number of species recorded by each collaborator, UTMs visited, etc.). This new R-package will allow scientists to access this database and to exploit the information generated by Spanish birdwatchers over the last 40 years. PMID:24626233

  13. A TEX86 surface sediment database and extended Bayesian calibration

    NASA Astrophysics Data System (ADS)

    Tierney, Jessica E.; Tingley, Martin P.

    2015-06-01

    Quantitative estimates of past temperature changes are a cornerstone of paleoclimatology. For a number of marine sediment-based proxies, the accuracy and precision of past temperature reconstructions depends on a spatial calibration of modern surface sediment measurements to overlying water temperatures. Here, we present a database of 1095 surface sediment measurements of TEX86, a temperature proxy based on the relative cyclization of marine archaeal glycerol dialkyl glycerol tetraether (GDGT) lipids. The dataset is archived in a machine-readable format with geospatial information, fractional abundances of lipids (if available), and metadata. We use this new database to update surface and subsurface temperature calibration models for TEX86 and demonstrate the applicability of the TEX86 proxy to past temperature prediction. The TEX86 database confirms that surface sediment GDGT distribution has a strong relationship to temperature, which accounts for over 70% of the variance in the data. Future efforts, made possible by the data presented here, will seek to identify variables with secondary relationships to GDGT distributions, such as archaeal community composition.
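    The strong linear relationship between the GDGT-based index and temperature can be illustrated with an ordinary least-squares fit. This sketch uses synthetic data and plain OLS, not the paper's Bayesian calibration; the slope and intercept (~60 °C per TEX86 unit, −10 °C) are invented for the example:

    ```python
    import random

    def fit_line(xs, ys):
        """Ordinary least squares for y = a + b*x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        return my - b * mx, b

    random.seed(0)
    tex86 = [0.3 + 0.5 * i / 99 for i in range(100)]          # synthetic index
    sst = [-10 + 60 * t + random.gauss(0, 2) for t in tex86]  # synthetic SSTs
    a, b = fit_line(tex86, sst)
    print(f"intercept ~ {a:.1f}, slope ~ {b:.1f}")
    ```

    A spatial calibration of this kind, fit on surface sediments against overlying water temperatures, is what allows down-core index values to be inverted into past temperature estimates.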

  14. Database interfaces on NASA's heterogeneous distributed database system

    NASA Technical Reports Server (NTRS)

    Huang, Shou-Hsuan Stephen

    1987-01-01

    The purpose of the Distributed Access View Integrated Database (DAVID) interface module (Module 9: Resident Primitive Processing Package) is to provide data transfer between local DAVID systems and resident Database Management Systems (DBMSs). The results of current research are summarized, and a detailed description of the interface module is provided. Several Pascal templates were constructed, and the Resident Processor program was also developed. Even though it is designed for the Pascal templates, it can be modified without much difficulty for templates in other languages, such as C. The Resident Processor itself can be written in any programming language. Since the Module 5 routines are not yet ready, there is no way to test the interface module. However, simulation shows that the database access programs produced by the Resident Processor work according to the specifications.

  15. A Multiagent System for Dynamic Data Aggregation in Medical Research

    PubMed Central

    Urovi, Visara; Barba, Imanol; Aberer, Karl; Schumacher, Michael Ignaz

    2016-01-01

    The collection of medical data for research purposes is a challenging and long-lasting process. In an effort to accelerate and facilitate this process we propose a new framework for dynamic aggregation of medical data from distributed sources. We use agent-based coordination between medical and research institutions. Our system employs principles of peer-to-peer network organization and coordination models to search over already constructed distributed databases and to identify the potential contributors when a new database has to be built. Our framework takes into account both the requirements of a research study and current data availability. This leads to better definition of database characteristics such as schema, content, and privacy parameters. We show that this approach enables a more efficient way to collect data for medical research. PMID:27975063

  16. A Comparison of PETSc Library and HPF Implementations of an Archetypal PDE Computation

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush

    1997-01-01

    Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation: a nonlinear, structured-grid partial differential equation boundary value problem, using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSc and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required of the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSc library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to expose the latent concurrency to the compiler. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.
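
    The affine global-to-local index mapping mentioned above can be sketched for the simplest case, a 1-D block partition of the grid across processors (an assumed simplification; the actual problem is multi-dimensional and PETSc provides these mappings through its own data structures):

    ```python
    # Minimal sketch of an affine global-to-local index mapping for a 1-D
    # block domain decomposition, as used in SPMD-style PDE codes.

    def block_partition(n_global: int, n_procs: int, rank: int):
        """Return (start, end) of this rank's contiguous block (end exclusive).
        Remainder points are spread over the lowest-numbered ranks."""
        base, extra = divmod(n_global, n_procs)
        start = rank * base + min(rank, extra)
        size = base + (1 if rank < extra else 0)
        return start, start + size

    def global_to_local(g: int, start: int) -> int:
        """Affine map: local index = global index minus the block offset."""
        return g - start

    start, end = block_partition(100, 4, 1)  # rank 1 of 4, 100 grid points
    print(start, end)                        # → 25 50
    print(global_to_local(30, start))        # → 5
    ```

    The mapping is "affine" in exactly this sense: local = global + constant offset per subdomain, which is why a geometrically simple decomposition keeps the bookkeeping cheap.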

  17. Design of a network for concurrent message passing systems

    NASA Astrophysics Data System (ADS)

    Song, Paul Y.

    1988-08-01

    We describe the design of the network design frame (NDF), a self-timed routing chip for a message-passing concurrent computer. The NDF uses a partitioned data path, low-voltage output drivers, and a distributed token-passing arbiter to provide a bandwidth of 450 Mbits/s into the network. Wormhole routing and bidirectional virtual channels provide low-latency communication: less than 2 µs to deliver a 216-bit message across the diameter of a 1K-node mesh-connected machine. To support concurrent software systems, the NDF provides two logical networks, one for user messages and one for system messages; the two networks share the same set of physical wires. To facilitate the development of network nodes, the NDF is a design frame: its circuitry is integrated into the pad frame of a chip, leaving the center of the chip uncommitted. We define an analytic framework in which to study the effects of network size, network buffering capacity, bidirectional channels, and traffic on this class of networks. The response of the network to various combinations of these parameters is obtained through extensive simulation of the network model. Through simulation, we are able to observe the macro behavior of the network as opposed to the micro behavior of the NDF routing controller.
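
    The per-hop routing decision in a mesh-connected wormhole network of this kind can be sketched with simple dimension-order (X-then-Y) routing (an illustrative assumption about the routing function; the abstract does not state which algorithm the NDF implements, and virtual-channel and flow-control details are omitted):

    ```python
    # Hedged sketch: dimension-order (X-then-Y) routing decision for a 2-D
    # mesh, in the spirit of wormhole routers; the routing policy is assumed.

    def route_step(cur, dst):
        """Return the next hop from cur toward dst, correcting X first, then Y."""
        (cx, cy), (dx, dy) = cur, dst
        if cx != dx:
            return (cx + (1 if dx > cx else -1), cy)
        if cy != dy:
            return (cx, cy + (1 if dy > cy else -1))
        return cur  # already at the destination

    hops = []
    node = (0, 0)
    while node != (2, 1):
        node = route_step(node, (2, 1))
        hops.append(node)
    print(hops)  # → [(1, 0), (2, 0), (2, 1)]
    ```

    Because each flit follows the header along this fixed path, per-hop latency stays small and message latency grows only with distance plus message length, which is the property behind the sub-2 µs diameter-crossing figure.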

  18. Weather Radar Studies

    DTIC Science & Technology

    1988-03-31

    ...radar operation and data-collection activities, a large data-analysis effort has been under way in support of automatic wind-shear detection algorithm ... REDUCTION AND ALGORITHM DEVELOPMENT: A. General-Purpose Software; B. Concurrent Computer Systems; C. Sun Workstations; D. Radar Data Analysis ... 1. Algorithm Verification; 2. Other Studies; 3. Translations; 4. Outside Distributions; E. Mesonet/LLWAS Data Analysis; 1. 1985 Data ...

  19. Concurrent development of fault management hardware and software in the SSM/PMAD. [Space Station Module/Power Management And Distribution

    NASA Technical Reports Server (NTRS)

    Freeman, Kenneth A.; Walsh, Rick; Weeks, David J.

    1988-01-01

    Space Station issues in fault management are discussed. The system background is described with attention given to design guidelines and power hardware. A contractually developed fault management system, FRAMES, is integrated with the energy management functions, the control switchgear, and the scheduling and operations management functions. The constraints that shaped the FRAMES system and its implementation are considered.

  20. Systematic and Scalable Testing of Concurrent Programs

    DTIC Science & Technology

    2013-12-16

    The evaluation of CHESS [107] checked eight different programs, ranging from process management libraries to a distributed execution engine to a research ... tool (§3.1) targets systematic testing of scheduling nondeterminism in multithreaded components of the Omega cluster management system [129], while ... tool for systematic testing of multithreaded components of the Omega cluster management system [129]. In particular, §3.1.1 defines a model for ...
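
    The core idea of systematic concurrency testing in the CHESS style is to enumerate thread interleavings deterministically rather than rely on chance scheduling. A minimal sketch (the two-thread non-atomic counter is an invented example, not a program from the cited evaluation): explore every schedule of two short thread scripts and collect the observable outcomes.

    ```python
    # Minimal sketch of systematic schedule exploration: enumerate every
    # interleaving of two thread scripts and run each one deterministically.

    def interleavings(a, b):
        """Yield all merges of sequences a and b preserving each one's order."""
        if not a:
            yield list(b); return
        if not b:
            yield list(a); return
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

    def run(schedule):
        """Execute one schedule of two non-atomic increments (read, then write)."""
        shared = {"x": 0}
        regs = {}
        for tid, op in schedule:
            if op == "read":
                regs[tid] = shared["x"]
            else:  # "write": store this thread's stale read, plus one
                shared["x"] = regs[tid] + 1
        return shared["x"]

    t1 = [("t1", "read"), ("t1", "write")]
    t2 = [("t2", "read"), ("t2", "write")]
    results = {run(s) for s in interleavings(t1, t2)}
    print(sorted(results))  # → [1, 2] : the lost-update bug appears as 1
    ```

    Exhaustive enumeration like this explodes combinatorially, which is why tools such as CHESS bound the number of preemptions per schedule to keep the search tractable.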
