Sample records for extremely large databases

  1. Applications of Precipitation Feature Databases from GPM core and constellation Satellites

    NASA Astrophysics Data System (ADS)

    Liu, C.

    2017-12-01

    Using the observations from the Global Precipitation Measurement (GPM) mission's core and constellation satellites, global precipitation was quantitatively described from the perspective of precipitation systems and their properties. This presentation will introduce the development of precipitation feature databases and several scientific questions that have been tackled using these databases, including global snow precipitation, extremely intense convection, hail storms, extreme precipitation, and microphysical properties derived with dual-frequency radars at the top of convective cores. As more observations from constellation satellites become available, it is anticipated that the precipitation feature approach will help to address a large variety of scientific questions in the future. For anyone who is interested, all the current precipitation feature databases are freely open to the public at: http://atmos.tamucc.edu/trmm/.

  2. A k-Vector Approach to Sampling, Interpolation, and Approximation

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Rogers, Jonathan

    2013-12-01

    The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
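
    As a rough sketch of the idea (not the authors' implementation; names and parameters below are hypothetical), a k-vector can be built in a few lines: sort the data once, lay a straight line across the sorted values, and precompute how many elements fall at or below the line at each index. A range query then reduces to two line evaluations and one array slice:

      import numpy as np

      def build_k_vector(values):
          # Sort once; the line is chosen so it passes just below the
          # smallest and just above the largest sorted value.
          s = np.sort(values)
          n = len(s)
          eps = 1e-9 * (s[-1] - s[0] + 1.0)
          m = (s[-1] - s[0] + 2 * eps) / (n - 1)      # slope of mapping line
          q = s[0] - eps - m                          # intercept: line(1) < s[0]
          line = m * np.arange(1, n + 1) + q
          k = np.searchsorted(s, line, side="right")  # k[i] = #{s <= line(i+1)}
          return s, k, m, q

      def range_search(s, k, m, q, lo, hi):
          # Invert the line instead of bisecting: cost is independent of n.
          n = len(s)
          i = int(np.floor((lo - q) / m)) - 1         # one bin of slack per side
          j = int(np.ceil((hi - q) / m)) + 1          # keeps boundary ties safe
          i, j = min(max(i, 1), n), min(max(j, 1), n)
          cand = s[k[i - 1]:k[j - 1]]                 # candidates via k-vector
          return cand[(cand >= lo) & (cand <= hi)]    # trim the boundary bins

      rng = np.random.default_rng(0)
      s, k, m, q = build_k_vector(rng.uniform(0.0, 1000.0, 100_000))
      print(range_search(s, k, m, q, 250.0, 250.25))

    Because the candidate slice is located by inverting the line rather than by binary search, the per-query cost does not grow with the database size, which is the property the paper exploits.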

  3. The National Extreme Events Data and Research Center (NEED)

    NASA Astrophysics Data System (ADS)

    Gulledge, J.; Kaiser, D. P.; Wilbanks, T. J.; Boden, T.; Devarakonda, R.

    2014-12-01

    The Climate Change Science Institute at Oak Ridge National Laboratory (ORNL) is establishing the National Extreme Events Data and Research Center (NEED), with the goal of transforming how the United States studies and prepares for extreme weather events in the context of a changing climate. NEED will encourage the myriad, distributed extreme events research communities to move toward the adoption of common practices and will develop a new database compiling global historical data on weather- and climate-related extreme events (e.g., heat waves, droughts, hurricanes, etc.) and related information about impacts, costs, recovery, and available research. Currently, extreme event information is not easy to access and is largely incompatible and inconsistent across web sites. NEED's database development will take into account differences in time frames, spatial scales, treatments of uncertainty, and other parameters and variables, and leverage informatics tools developed at ORNL (i.e., the Metadata Editor [1] and Mercury [2]) to generate standardized, robust documentation for each database along with a web-searchable catalog. In addition, NEED will facilitate convergence on commonly accepted definitions and standards for extreme events data and will enable integrated analyses of coupled threats, such as hurricanes/sea-level rise/flooding and droughts/wildfires. Our goal and vision is that NEED will become the premier integrated resource for the general study of extreme events. References: [1] Devarakonda, Ranjeet, et al. "OME: Tool for generating and managing metadata to handle BigData." Big Data (Big Data), 2014 IEEE International Conference on. IEEE, 2014. [2] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94.

  4. Fractures from trampolines: results from a national database, 2002 to 2011.

    PubMed

    Loder, Randall T; Schultz, William; Sabatino, Meagan

    2014-01-01

    No previous study has specifically analyzed trampoline fracture patterns across a large population; the purpose of this study was to determine such patterns. We queried the National Electronic Injury Surveillance System database for trampoline injuries between 2002 and 2011, and the patients were analyzed by age, sex, race, anatomic location of the injury, geographical location of the injury, and disposition from the emergency department (ED). Statistical analyses were performed with SUDAAN 10 software. Estimated expenses were determined using 2010 data. There were an estimated 1,002,735 ED visits for trampoline-related injuries; 288,876 (29.0%) sustained fractures. The average age of those with fractures was 9.5 years; 92.7% were aged 16 years or younger; 51.7% were male; 95.1% occurred at home; and 9.9% were admitted. The fractures were located in the upper extremity (59.9%), lower extremity (35.7%), and axial skeleton (4.4%: spine 1.0%, skull/face 2.9%, rib/sternum 0.5%). Those with axial skeleton fractures were older (16.5 y) than those with upper extremity (8.7 y) or lower extremity (10.0 y) fractures (P<0.0001) and were more frequently male (67.9%). Lower extremity fractures occurred more frequently in females (54.0%) (P<0.0001). The forearm (37%) and elbow (19%) were the most common sites in the upper extremity; elbow fractures were most frequently admitted (20.0%). The tibia/fibula (39.5%) and ankle (31.5%) were the most common sites in the lower extremity; femur fractures were most frequently admitted (57.9%). The cervical (36.4%) and lumbar (24.7%) regions were the most common locations in the spine; cervical fractures were the most frequently admitted (75.6%). The total ED expense for all trampoline injuries over this 10-year period was $1.002 billion, of which $408 million was for fractures. Trampoline fractures most frequently involve the upper extremity, followed by the lower extremity, and >90% occur in children. The financial burden to society is large. Further efforts at prevention are needed.

  5. Large Scale Landslide Database System Established for the Reservoirs in Southern Taiwan

    NASA Astrophysics Data System (ADS)

    Tsai, Tsai-Tsung; Tsai, Kuang-Jung; Shieh, Chjeng-Lun

    2017-04-01

    Typhoon Morakot, which severely struck southern Taiwan, awakened public awareness of large-scale landslide disasters. Large-scale landslides produce large quantities of sediment, which negatively affect the operating functions of reservoirs. In order to reduce the risk of these disasters within the study area, the establishment of a database for hazard mitigation and disaster prevention is necessary. Real-time data and numerous archives of engineering data, environmental information, photographs, and video not only help people make appropriate decisions, but are also a valuable resource to be processed and given added value. This study defined basic data formats and standards for the various types of data collected for these reservoirs and then provided a management platform based on those formats and standards. Meanwhile, for practicality and convenience, the large-scale landslide disaster database system was built with the ability both to provide and to receive information, so that users can work with it on different types of devices. IT technology progresses extremely quickly, and even the most modern system can become outdated at any time; to provide long-term service, the system therefore reserves the possibility of user-defined data formats/standards and a user-defined system structure. The system established by this study is based on the HTML5 standard and uses responsive web design, so that users can easily operate and further develop this large-scale landslide disaster database system.

  6. VizieR Online Data Catalog: 2nd and 3rd parameters of HB of globular clusters (Gratton+, 2010)

    NASA Astrophysics Data System (ADS)

    Gratton, R. G.; Carretta, E.; Bragaglia, A.; Lucatello, S.; D'Orazi, V.

    2010-05-01

    The second parameter (the first being metallicity) defining the distribution of stars on the horizontal branch (HB) of globular clusters (GCs) has long been one of the major open issues in our understanding of the evolution of normal stars. Large photometric and spectroscopic databases are now available: they include large and homogeneous sets of colour-magnitude diagrams, cluster ages, and homogeneous data about chemical compositions from our FLAMES survey. We use these databases to re-examine this issue. We use the photometric data to derive median and extreme (i.e., the values including 90% of the distribution) colours and magnitudes of stars along the HB for about a hundred GCs. We transform these into median and extreme masses of stars on the HB, using the models developed by the Pisa group, and taking into account evolutionary effects. We compare these masses with those expected at the tip of the red giant branch (RGB) to derive the total mass lost by the stars. (11 data files).

  7. Compressing DNA sequence databases with coil.

    PubMed

    White, W Timothy J; Hendy, Michael D

    2008-05-20

    Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
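
    A toy baseline helps explain why generic Lempel-Ziv tools underperform here: DNA's four-letter alphabet already fits in 2 bits per base, a floor that byte-oriented gzip does not necessarily reach on sequence text. The sketch below is plain 2-bit packing, shown only for contrast; it is not coil's edit-tree coding:

      import gzip
      import random

      def pack_2bit(seq):
          # Pack A/C/G/T into 2 bits per base.
          code = {"A": 0, "C": 1, "G": 2, "T": 3}
          out, acc, bits = bytearray(), 0, 0
          for base in seq:
              acc, bits = (acc << 2) | code[base], bits + 2
              if bits == 8:
                  out.append(acc)
                  acc, bits = 0, 0
          if bits:
              out.append(acc << (8 - bits))  # pad the final partial byte
          return bytes(out)

      seq = "".join(random.choice("ACGT") for _ in range(100_000))
      # Compare raw, gzip-compressed, and 2-bit-packed sizes in bytes.
      print(len(seq), len(gzip.compress(seq.encode())), len(pack_2bit(seq)))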

  8. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794

  9. Nuclear Data and Reaction Rate Databases in Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Lippuner, Jonas

    2018-06-01

    Astrophysical simulations and models require a large variety of micro-physics data, such as equation of state tables, atomic opacities, properties of nuclei, and nuclear reaction rates. Some of the required data is experimentally accessible, but the extreme conditions present in many astrophysical scenarios cannot be reproduced in the laboratory and thus theoretical models are needed to supplement the empirical data. Collecting data from various sources and making them available as a database in a unified format is a formidable task. I will provide an overview of the data requirements in astrophysics with an emphasis on nuclear astrophysics. I will then discuss some of the existing databases, the science they enable, and their limitations. Finally, I will offer some thoughts on how to design a useful database.

  10. Characterization and prediction of extreme events in turbulence

    NASA Astrophysics Data System (ADS)

    Fonda, Enrico; Iyer, Kartik P.; Sreenivasan, Katepalli R.

    2017-11-01

    Extreme events in Nature such as tornadoes, large floods and strong earthquakes are rare but can have devastating consequences. The predictability of these events is very limited at present. Extreme events in turbulence are the very large, intermittent events at small scales. We examine events in the energy dissipation rate and enstrophy which are several tens, hundreds, or even thousands of times the mean value. To this end we use our DNS database of homogeneous and isotropic turbulence with Taylor Reynolds numbers spanning a decade, computed with different small-scale resolutions and different box sizes, and study the predictability of these events using machine learning. We start with an aggressive data augmentation to virtually increase the number of these rare events by two orders of magnitude and train a deep convolutional neural network to predict their occurrence in an independent data set. The goal of the work is to explore whether extreme events can be predicted with greater assurance than can be done by conventional methods (e.g., D.A. Donzis & K.R. Sreenivasan, J. Fluid Mech. 647, 13-26, 2010).
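
    The workflow can be caricatured in a few lines of PyTorch; everything below (field shapes, the event threshold, the flip-based augmentation, and the tiny network) is a synthetic stand-in, not the authors' DNS data or architecture:

      import torch
      import torch.nn as nn

      x = torch.randn(512, 1, 32, 32)            # stand-in DNS sub-domains
      y = (x.amax(dim=(1, 2, 3)) > 3.8).long()   # toy rare-event labels (~7%)

      # Crude augmentation: replicate the rare class with mirrored copies
      # so both classes appear at comparable rates during training.
      rare = (y == 1).nonzero(as_tuple=True)[0]
      aug = torch.flip(x[rare], dims=[3])
      x = torch.cat([x, aug])
      y = torch.cat([y, torch.ones(len(aug), dtype=torch.long)])

      model = nn.Sequential(
          nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Flatten(), nn.Linear(32 * 8 * 8, 2),
      )
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      for epoch in range(5):                     # full-batch toy training loop
          opt.zero_grad()
          loss = loss_fn(model(x), y)
          loss.backward()
          opt.step()
          print(epoch, float(loss))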

  11. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed for information extraction from such large data sets. This calls for an approach to data management that emphasizes the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.
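
    As a concrete instance of a multidimensional access method of the kind alluded to above, a k-d tree answers spatial range queries without scanning the whole catalogue; the data and column names here are invented for illustration:

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      coords = rng.uniform(0.0, 360.0, size=(1_000_000, 2))  # mock positions
      mags = rng.uniform(10.0, 25.0, size=1_000_000)         # mock metadata

      tree = cKDTree(coords)        # build the multidimensional index once

      # All objects within radius 0.5 of a query position, no full scan:
      idx = tree.query_ball_point([120.0, 45.0], r=0.5)
      print(len(idx), mags[idx][:5])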

  12. Big Data and Total Hip Arthroplasty: How Do Large Databases Compare?

    PubMed

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael A; Lux, Nathan R; Bozic, Kevin J; Callaghan, John J

    2018-01-01

    Use of large databases for orthopedic research has become extremely popular in recent years. Each database varies in the methods used to capture data and the population it represents. The purpose of this study was to evaluate how these databases differ in reported demographics, comorbidities, and postoperative complications for primary total hip arthroplasty (THA) patients. Primary THA patients were identified within the National Surgical Quality Improvement Program (NSQIP), the Nationwide Inpatient Sample (NIS), Medicare Standard Analytic Files (MED), and the Humana administrative claims database (HAC). NSQIP definitions for comorbidities and complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes to query the other databases. Demographics, comorbidities, and postoperative complications were compared. The number of patients from each database was 22,644 in HAC, 371,715 in MED, 188,779 in NIS, and 27,818 in NSQIP. Age and gender distributions were clinically similar. Overall, there was variation in the prevalence of comorbidities and the rates of postoperative complications between databases. As an example, NSQIP recorded more than twice the prevalence of obesity found in NIS, and HAC and MED recorded more than twice the prevalence of diabetes found in NSQIP. Rates of deep infection and stroke 30 days after THA differed more than 2-fold between databases. Among databases commonly used in orthopedic research, there is considerable variation in complication rates following THA depending upon the database used for analysis. It is important to consider these differences when critically evaluating database research. Additionally, with the advent of bundled payments, these differences must be considered in risk adjustment models. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. The Global Streamflow Indices and Metadata archive (G-SIM): A compilation of global streamflow time series indices and meta-data

    NASA Astrophysics Data System (ADS)

    Do, Hong; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth; Seneviratne, Sonia

    2017-04-01

    In-situ observations of daily streamflow with global coverage are a crucial asset for understanding large-scale freshwater resources, which are an essential component of the Earth system and a prerequisite for societal development. Here we present the Global Streamflow Indices and Metadata archive (G-SIM), a collection of indices derived from more than 20,000 daily streamflow time series across the globe. These indices are designed to support global assessments of change in wet and dry extremes, and have been compiled from 12 free-to-access online databases (seven national databases and five international collections). The G-SIM archive also includes significant metadata to support a detailed understanding of streamflow dynamics, including drainage area shapefiles and many essential catchment properties such as land cover type, soil and topographic characteristics. The automated procedures used in data handling and quality control make G-SIM a reproducible, extendible archive that can be utilised for many purposes in large-scale hydrology. Some potential applications include the identification of observational trends in hydrological extremes, the assessment of climate change impacts on streamflow regimes, and the validation of global hydrological models.
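
    To make the notion of streamflow indices concrete, the sketch below computes two illustrative examples on synthetic daily data: an annual-maximum (wet-extreme) index and a 7-day low-flow (dry-extreme) index. The choices and names are illustrative, not the archive's definitions:

      import numpy as np
      import pandas as pd

      days = pd.date_range("1980-01-01", "1989-12-31", freq="D")
      rng = np.random.default_rng(1)
      q = pd.Series(rng.gamma(2.0, 50.0, len(days)), index=days)  # mock flow

      annual_max = q.groupby(q.index.year).max()                  # wet extreme
      min7 = q.rolling(7).mean().groupby(q.index.year).min()      # dry extreme
      print(annual_max.head())
      print(min7.head())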

  14. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems involving gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware that exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.

  15. Extreme Precipitation and High-Impact Landslides

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia; Adler, Robert; Huffman, George; Peters-Lidard, Christa

    2012-01-01

    It is well known that extreme or prolonged rainfall is the dominant trigger of landslides; however, there remain large uncertainties in characterizing the distribution of these hazards and meteorological triggers at the global scale. Researchers have evaluated the spatiotemporal distribution of extreme rainfall and landslides at local and regional scales primarily using in situ data, yet few studies have mapped rainfall-triggered landslide distribution globally due to the dearth of landslide data and consistent precipitation information. This research uses a newly developed Global Landslide Catalog (GLC) and a 13-year satellite-based precipitation record from Tropical Rainfall Measuring Mission (TRMM) data. For the first time, these two unique products provide the foundation to quantitatively evaluate the co-occurrence of precipitation and rainfall-triggered landslides globally. The GLC, available from 2007 to the present, contains information on reported rainfall-triggered landslide events around the world using online media reports, disaster databases, etc. When evaluating this database, we observed that 2010 had a large number of high-impact landslide events relative to previous years. This study considers how variations in extreme and prolonged satellite-based rainfall are related to the distribution of landslides over the same time scales for three active landslide areas: Central America, the Himalayan Arc, and central-eastern China. Several test statistics confirm that TRMM rainfall generally scales with the observed increase in landslide reports and fatal events for 2010 and previous years over each region. These findings suggest that the co-occurrence of satellite precipitation and landslide reports may serve as a valuable indicator for characterizing the spatiotemporal distribution of landslide-prone areas in order to establish a global rainfall-triggered landslide climatology. This research also considers the sources of this extreme rainfall, citing teleconnections from ENSO as likely contributors to regional precipitation variability. This work demonstrates the potential for using satellite-based precipitation estimates to identify potentially active landslide areas at the global scale in order to improve landslide cataloging and quantify landslide triggering at daily, monthly, and yearly time scales.

  16. Evaluating sub-seasonal skill in probabilistic forecasts of Atmospheric Rivers and associated extreme events

    NASA Astrophysics Data System (ADS)

    Subramanian, A. C.; Lavers, D.; Matsueda, M.; Shukla, S.; Cayan, D. R.; Ralph, M.

    2017-12-01

    Atmospheric rivers (ARs) - elongated plumes of intense moisture transport - are a primary source of hydrological extremes, water resources and impactful weather along the West Coast of North America and Europe. There is strong demand in the water management, societal infrastructure and humanitarian sectors for reliable sub-seasonal forecasts, particularly of extreme events, such as floods and droughts so that actions to mitigate disastrous impacts can be taken with sufficient lead-time. Many recent studies have shown that ARs in the Pacific and the Atlantic are modulated by large-scale modes of climate variability. Leveraging the improved understanding of how these large-scale climate modes modulate the ARs in these two basins, we use the state-of-the-art multi-model forecast systems such as the North American Multi-Model Ensemble (NMME) and the Subseasonal-to-Seasonal (S2S) database to help inform and assess the probabilistic prediction of ARs and related extreme weather events over the North American and European West Coasts. We will present results from evaluating probabilistic forecasts of extreme precipitation and AR activity at the sub-seasonal scale. In particular, results from the comparison of two winters (2015-16 and 2016-17) will be shown, winters which defied canonical El Niño teleconnection patterns over North America and Europe. We further extend this study to analyze probabilistic forecast skill of AR events in these two basins and the variability in forecast skill during certain regimes of large-scale climate modes.

  17. HBLAST: Parallelised sequence similarity--A Hadoop MapReducable basic local alignment search tool.

    PubMed

    O'Driscoll, Aisling; Belogrudov, Vladislav; Carroll, John; Kropp, Kai; Walsh, Paul; Ghazal, Peter; Sleator, Roy D

    2015-04-01

    The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost-prohibitive High Performance Computing (HPC) resources. As such, parallelised solutions have been proposed, but many exhibit scalability limitations and are incapable of effectively processing "Big Data" - the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets, but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of "divide and conquer" for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using "virtual partitioning". HBlast presents improved scalability over existing solutions and a well-balanced computational workload, while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap, memory-constrained hardware has significant implications for in-field clinical diagnostic testing, enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples. Copyright © 2015 Elsevier Inc. All rights reserved.
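
    The "virtual partitioning" idea can be sketched independently of Hadoop: each (query chunk, database chunk) pair becomes one map task, and a reduce step keeps the best hit per query. In the toy code below, score() is a naive stand-in for BLAST, and the chunk counts are arbitrary:

      from itertools import product

      def score(query, subject):
          return sum(a == b for a, b in zip(query, subject))  # naive similarity

      queries = {"q1": "ACGTACGT", "q2": "TTGACCAA"}
      database = ["ACGTACGA", "TTGACCAT", "GGGGCCCC", "ACGTTTTT"]

      def partition(items, n):
          return [items[i::n] for i in range(n)]     # n virtual chunks

      map_out = []                                   # "map" phase
      for q_chunk, db_chunk in product(partition(list(queries), 2),
                                       partition(database, 2)):
          for name in q_chunk:
              for subj in db_chunk:
                  map_out.append((name, (score(queries[name], subj), subj)))

      best = {}                                      # "reduce" phase
      for name, hit in map_out:
          best[name] = max(best.get(name, hit), hit)
      print(best)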

  18. Breast Imaging in the Era of Big Data: Structured Reporting and Data Mining.

    PubMed

    Margolies, Laurie R; Pandey, Gaurav; Horowitz, Eliot R; Mendelson, David S

    2016-02-01

    The purpose of this article is to describe structured reporting and the development of large databases for use in data mining in breast imaging. The results of millions of breast imaging examinations are reported with structured tools based on the BI-RADS lexicon. Much of these data are stored in accessible media. Robust computing power creates great opportunity for data scientists and breast imagers to collaborate to improve breast cancer detection and optimize screening algorithms. Data mining can create knowledge, but the questions asked and their complexity require extremely powerful and agile databases. New data technologies can facilitate outcomes research and precision medicine.

  19. Morphology-based Query for Galaxy Image Databases

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2017-02-01

    Galaxies of rare morphology are of paramount scientific interest, as they carry important information about the past, present, and future Universe. Once a rare galaxy is identified, studying it more effectively requires a set of galaxies of similar morphology, allowing generalization and statistical analysis that cannot be done when N=1. Databases generated by digital sky surveys can contain a very large number of galaxy images, and therefore once a rare galaxy of interest is identified it is possible that more instances of the same morphology are also present in the database. However, when a researcher identifies a certain galaxy of rare morphology in the database, it is virtually impossible to mine the database manually in the search for galaxies of similar morphology. Here we propose a computer method that can automatically search databases of galaxy images and identify galaxies that are morphologically similar to a certain user-defined query galaxy. That is, the researcher provides an image of a galaxy of interest, and the pattern recognition system automatically returns a list of galaxies that are visually similar to the target galaxy. The algorithm uses a comprehensive set of descriptors, allowing it to support different types of galaxies, and it is not limited to a finite set of known morphologies. While the list of returned galaxies is neither clean nor complete, it contains a far higher frequency of galaxies of the morphology of interest, providing a substantial reduction of the data. Such algorithms can be integrated into data management systems of autonomous digital sky surveys such as the Large Synoptic Survey Telescope (LSST), where the number of galaxies in the database is extremely large. The source code of the method is available at http://vfacstaff.ltu.edu/lshamir/downloads/udat.
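
    In outline, such content-based retrieval computes a numerical descriptor for every image and ranks the database by distance to the query's descriptor. The two features below are invented for illustration and are far simpler than the paper's comprehensive descriptor set:

      import numpy as np

      def descriptor(img):
          # Two toy morphology features: light-weighted mean radius and
          # the fraction of above-average pixels.
          total = img.sum() + 1e-9
          yy, xx = np.indices(img.shape)
          cy, cx = (yy * img).sum() / total, (xx * img).sum() / total
          r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
          return np.array([(r * img).sum() / total, (img > img.mean()).mean()])

      rng = np.random.default_rng(2)
      db = rng.random((1000, 64, 64))                 # stand-in galaxy cutouts
      feats = np.stack([descriptor(im) for im in db])
      mu, sd = feats.mean(0), feats.std(0)
      feats = (feats - mu) / sd                       # normalise each feature

      qf = (descriptor(db[42]) - mu) / sd             # the query galaxy
      dists = np.linalg.norm(feats - qf, axis=1)
      print(np.argsort(dists)[:10])                   # query itself ranks first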

  20. Hierarchical Data Distribution Scheme for Peer-to-Peer Networks

    NASA Astrophysics Data System (ADS)

    Bhushan, Shashi; Dave, M.; Patel, R. B.

    2010-11-01

    In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g., music files, video files) or on providing file-system-like capabilities. P2P is a powerful paradigm, which provides a large-scale and cost-effective mechanism for data sharing, and a P2P system may be used for storing data globally. But can a conventional database be implemented on a P2P system? Successful implementations of conventional databases on P2P systems are yet to be reported. In this paper we present a mathematical model for the replication of partitions and a hierarchy-based data distribution scheme for P2P networks. We also analyze the resource utilization and throughput of the P2P system with respect to availability when a conventional database is implemented over the P2P system with a variable query rate. Simulation results show that database partitions placed on peers with a higher availability factor perform better. Degradation index, throughput, and resource utilization are the parameters evaluated with respect to the availability factor.

  1. Extreme Gleason Upgrading From Biopsy to Radical Prostatectomy: A Population-based Analysis.

    PubMed

    Winters, Brian R; Wright, Jonathan L; Holt, Sarah K; Lin, Daniel W; Ellis, William J; Dalkin, Bruce L; Schade, George R

    2016-10-01

    To examine the risk factors associated with the odds of extreme Gleason upgrading at radical prostatectomy (RP) (defined as a Gleason prognostic group score increase of ≥2), we utilized a large, population-based cancer registry. The Surveillance, Epidemiologic, and End Results database was queried (2010-2011) for all patients diagnosed with Gleason 3 + 3 or 3 + 4 on prostate needle biopsy. Available clinicopathologic factors and the odds of upgrading and extreme upgrading at RP were evaluated using multivariate logistic regression. A total of 12,459 patients were identified, with a median age of 61 (interquartile range: 56-65) and a diagnostic prostate-specific antigen (PSA) of 5.5 ng/mL (interquartile range: 4.3-7.5). Upgrading was observed in 34% of men, including 44% of 7402 patients with Gleason 3 + 3 and 19% of 5057 patients with Gleason 3 + 4 disease. Age, clinical stage, diagnostic PSA, and % prostate needle biopsy cores positive were independently associated with odds of any upgrading at RP. In baseline Gleason 3 + 3 disease, extreme upgrading was observed in 6%, with increasing age, diagnostic PSA, and >50% core positivity associated with increased odds. In baseline Gleason 3 + 4 disease, extreme upgrading was observed in 4%, with diagnostic PSA and palpable disease remaining predictive. Positive surgical margins were significantly higher in patients with extreme upgrading at RP (P < .001). Gleason upgrading at RP is common in this large population-based cohort, including extreme upgrading in a clinically significant portion. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Comparison of injury severity between AIS 2005 and AIS 1990 in a large injury database

    PubMed Central

    Barnes, J; Hassan, A; Cuerden, R; Cookson, R; Kohlhofer, J

    2009-01-01

    The aim of this study is to investigate the differences in car occupant injury severity recorded in AIS 2005 compared to AIS 1990 and to outline the likely effects on future data analysis findings. Occupant injury data in the UK Cooperative Crash Injury Study (CCIS) database were coded for the period February 2006 to November 2007 using both AIS 1990 and AIS 2005. Data for 1,994 occupants with over 6,000 coded injuries were reviewed at the AIS and MAIS levels of severity and by body region to determine changes between the two coding methodologies. Overall, there was a general trend for fewer injuries to be coded at AIS 4+ severity and more injuries to be coded at AIS 2 severity. When these injury trends were reviewed in more detail, it was found that the body regions contributing the most to these changes in severity were the head, thorax, and extremities. This is one of the first studies to examine the implications for large databases of changing to an updated method for coding injuries. PMID:20184835

  3. Fullerene data mining using bibliometrics and database tomography

    PubMed

    Kostoff; Braun; Schubert; Toothman; Humenik

    2000-01-01

    Database tomography (DT) is a textual database analysis system consisting of two major components: (1) algorithms for extracting multiword phrase frequencies and phrase proximities (physical closeness of the multiword technical phrases) from any type of large textual database, to augment (2) interpretative capabilities of the expert human analyst. DT was used to derive technical intelligence from a fullerenes database derived from the Science Citation Index and the Engineering Compendex. Phrase frequency analysis by the technical domain experts provided the pervasive technical themes of the fullerenes database, and phrase proximity analysis provided the relationships among the pervasive technical themes. Bibliometric analysis of the fullerenes literature supplemented the DT results with author/journal/institution publication and citation data. Comparisons of fullerenes results with past analyses of similarly structured near-earth space, chemistry, hypersonic/supersonic flow, aircraft, and ship hydrodynamics databases are made. One important finding is that many of the normalized bibliometric distribution functions are extremely consistent across these diverse technical domains and could reasonably be expected to apply to broader chemical topics than fullerenes that span multiple structural classes. Finally, lessons learned about integrating the technical domain experts with the data mining tools are presented.
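
    The two DT components can be illustrated on a toy corpus: multiword-phrase (here, bigram) frequencies surface the pervasive themes, and co-occurrence counts within the same record approximate phrase proximity. Real DT uses far larger phrase inventories plus expert review:

      from collections import Counter
      from itertools import combinations

      abstracts = [
          "carbon nanotube synthesis and fullerene chemistry",
          "fullerene chemistry of carbon cages",
          "carbon nanotube electronic structure",
      ]

      def bigrams(text):
          toks = text.split()
          return list(zip(toks, toks[1:]))

      freq = Counter(bg for a in abstracts for bg in bigrams(a))
      print(freq.most_common(3))         # the pervasive technical themes

      proximity = Counter()              # phrase pairs sharing a record
      for a in abstracts:
          for p, s in combinations(sorted(set(bigrams(a))), 2):
              proximity[(p, s)] += 1
      print(proximity.most_common(2))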

  4. Perforator-based propeller flaps reliability in upper extremity soft tissue reconstruction: a systematic review.

    PubMed

    Vitse, J; Bekara, F; Bertheuil, N; Sinna, R; Chaput, B; Herlin, C

    2017-02-01

    Current data on upper extremity propeller flaps are scarce and do not allow an assessment of the safety of this technique. A systematic literature review was conducted searching the PubMed, EMBASE, and Cochrane Library electronic databases, and the selection process was adapted from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The final analysis included ten relevant articles involving 117 flaps. The majority of flaps were used for the hand, distal wrist, and elbow. The radial artery perforator and ulnar artery perforator were the most frequently used flaps. Venous congestion occurred in 7% of flaps and complete necrosis in 3%. No difference in complication rates was found across flap sites. Perforator-based propeller flaps appear to be an interesting procedure for covering soft tissue defects involving the upper extremities, even for large defects, but the procedure requires experience and close monitoring. Level of evidence: II.

  5. GenomeVista

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poliakov, Alexander; Couronne, Olivier

    2002-11-04

    Aligning large vertebrate genomes that are structurally complex poses a variety of problems not encountered on smaller scales. Such genomes are rich in repetitive elements and contain multiple segmental duplications, which increases the difficulty of identifying truly orthologous DNA segments in alignments. The sizes of the sequences make many alignment algorithms designed for comparing single proteins extremely inefficient when processing large genomic intervals. We integrated both local and global alignment tools and developed a suite of programs for automatically aligning large vertebrate genomes and identifying conserved non-coding regions in the alignments. Our method uses the BLAT local alignment program to find anchors on the base genome to identify regions of possible homology for a query sequence. These regions are postprocessed to find the best candidates, which are then globally aligned using the AVID global alignment program. In the last step, conserved non-coding segments are identified using VISTA. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90% of known coding exons in the human genome. The GenomeVISTA software is a suite of Perl programs built on a MySQL database platform. The scheduler gets control data from the database, builds a queue of jobs, and dispatches them to a PC cluster for execution. The main program, running on each node of the cluster, processes individual sequences. A Perl library acts as an interface between the database and the above programs. The use of a separate library allows the programs to function independently of the database schema. The library also improves on the standard Perl MySQL database interface package by providing auto-reconnect functionality and improved error handling.

  6. Towards communication-efficient quantum oblivious key distribution

    NASA Astrophysics Data System (ADS)

    Panduranga Rao, M. V.; Jakobi, M.

    2013-01-01

    Symmetrically private information retrieval, a fundamental problem in the field of secure multiparty computation, is defined as follows: a database D of N bits held by Bob is queried by a user Alice who is interested in the bit Db in such a way that (1) Alice learns Db and only Db and (2) Bob does not learn anything about Alice's choice b. While solutions to this problem in the classical domain rely largely on unproven computational complexity theoretic assumptions, it is also known that perfect solutions that guarantee both database and user privacy are impossible in the quantum domain. Jakobi et al. [Phys. Rev. A 83, 022301 (2011)] proposed a protocol for oblivious transfer using well-known quantum key distribution (QKD) techniques to establish an oblivious key to solve this problem. Their solution provided a good degree of database and user privacy (using physical principles like the impossibility of perfectly distinguishing nonorthogonal quantum states and the impossibility of superluminal communication) while being loss-resistant and implementable with commercial QKD devices (due to the use of the Scarani-Acin-Ribordy-Gisin 2004 protocol). However, their quantum oblivious key distribution (QOKD) protocol requires a communication complexity of O(N log N). Since modern databases can be extremely large, it is important to reduce this communication as much as possible. In this paper, we first suggest a modification of their protocol wherein the number of qubits that need to be exchanged is reduced to O(N). A subsequent generalization reduces the quantum communication complexity even further, in such a way that only a few hundred qubits need to be transferred even for very large databases.

  7. Lessons Learned Implementing DOORS in a Citrix Environment

    NASA Technical Reports Server (NTRS)

    Bussman, Marie

    2005-01-01

    NASA's James Webb Space Telescope (JWST) Project is a large multi-national project with geographically dispersed contractors that all need access to the Project's requirements database. Initially, the project utilized multiple DOORS databases with the built-in partitions feature to exchange modules amongst the various contractor sites. As the requirements databases matured, the use of partitions became extremely difficult. There were many issues, such as incompatible versions of DOORS, an inefficient mechanism for sharing modules, security concerns, performance issues, and inconsistent document import and export formats. Deployment of the client software with limited IT resources available was also an issue. The solution chosen by JWST was to integrate the use of a Citrix environment with the DOORS database to address most of the project's concerns. The Citrix solution allowed a single requirements database in a secure environment via a web interface. The Citrix environment allows JWST to upgrade to the most current version of DOORS without having to coordinate multiple sites and user upgrades. The single requirements database eliminates a multitude of configuration management concerns and facilitated the standardization of documentation formats. This paper discusses the obstacles and the lessons learned throughout the installation, implementation, usage, and deployment of a centralized DOORS database solution.

  8. De novo comparative transcriptome analysis of genes involved in fruit morphology of pumpkin cultivars with extreme size difference and development of EST-SSR markers.

    PubMed

    Xanthopoulou, Aliki; Ganopoulos, Ioannis; Psomopoulos, Fotis; Manioudaki, Maria; Moysiadis, Theodoros; Kapazoglou, Aliki; Osathanunkul, Maslin; Michailidou, Sofia; Kalivas, Apostolos; Tsaftaris, Athanasios; Nianiou-Obeidat, Irini; Madesis, Panagiotis

    2017-07-30

    The genetic basis of fruit size and shape was investigated for the first time in Cucurbita species and genetic loci associated with fruit morphology have been identified. Although extensive genomic resources are available at present for tomato (Solanum lycopersicum), cucumber (Cucumis sativus), melon (Cucumis melo) and watermelon (Citrullus lanatus), genomic databases for Cucurbita species are limited. Recently, our group reported the generation of pumpkin (Cucurbita pepo) transcriptome databases from two contrasting cultivars with extreme fruit sizes. In the current study we used these databases to perform comparative transcriptome analysis in order to identify genes with potential roles in fruit morphology and fruit size. Differential Gene Expression (DGE) analysis between cv. 'Munchkin' (small-fruit) and cv. 'Big Moose' (large-fruit) revealed a variety of candidate genes associated with fruit morphology with significant differences in gene expression between the two cultivars. In addition, we have set the framework for generating EST-SSR markers, which discriminate different C. pepo cultivars and show transferability to related Cucurbitaceae species. The results of the present study will contribute to both further understanding the molecular mechanisms regulating fruit morphology and furthermore identifying the factors that determine fruit size. Moreover, they may lead to the development of molecular marker tools for selecting genotypes with desired morphological traits. Copyright © 2017. Published by Elsevier B.V.

  9. Changes in extremely hot days under stabilized 1.5 and 2.0 °C global warming scenarios as simulated by the HAPPI multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Wehner, Michael; Stone, Dáithí; Mitchell, Dann; Shiogama, Hideo; Fischer, Erich; Graff, Lise S.; Kharin, Viatcheslav V.; Lierhammer, Ludwig; Sanderson, Benjamin; Krishnan, Harinarayan

    2018-03-01

    The Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI) experimental protocol provides a multi-model database for comparing the effects of stabilizing anthropogenic global warming at 1.5 °C over preindustrial levels with stabilization at 2.0 °C over these levels. The HAPPI experiment is based upon large ensembles of global atmospheric models forced by sea surface temperature and sea ice concentrations plausible for these stabilization levels. This paper examines changes in extremes of high temperatures averaged over three consecutive days. Changes in this measure of extreme temperature are also compared to changes in hot season temperatures. We find that over land this measure of extreme high temperature increases from about 0.5 to 1.5 °C over present-day values in the 1.5 °C stabilization scenario, depending on location and model. We further find an additional 0.25 to 1.0 °C increase in extreme high temperatures over land in the 2.0 °C stabilization scenario. Results from the HAPPI models are consistent with similar results from the one available fully coupled climate model. However, a complicating factor in interpreting extreme temperature changes across the HAPPI models is their diversity of aerosol forcing changes.
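
    The paper's measure, the maximum of daily temperature averaged over three consecutive days, is simple to compute; the sketch below does so for one synthetic grid-cell year (the data and the index are illustrative, not HAPPI output):

      import numpy as np

      rng = np.random.default_rng(3)
      doy = np.arange(365)
      t = 15 + 10 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 3, 365)

      t3 = np.convolve(t, np.ones(3) / 3, mode="valid")  # 3-day running mean
      print("hottest 3-day spell:", round(float(t3.max()), 2))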

  10. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important molecular tools used in many areas of research. A large number of microsatellite markers is required for whole-genome surveys in the fields of molecular ecology, quantitative genetics, and genomics. It is therefore essential to select versatile, low-cost, efficient, and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing small-insert genomic DNA libraries was inefficient, while the microsatellite-enriched strategy greatly improved isolation efficiency. Although mining public databases is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers that way, mainly due to the limited sequence data for non-model species deposited in public databases. Based on these results, we recommend two methods, microsatellite-enriched library construction and FIASCO-colony hybridization, for large-scale microsatellite marker development. Both methods derive from the microsatellite-enriched strategy. The results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  11. A daily wetness index from satellite gravity for near-real time global monitoring of hydrological extremes

    NASA Astrophysics Data System (ADS)

    Gouweleeuw, Ben; Kvas, Andreas; Gruber, Christian; Mayer-Gürr, Torsten; Flechtner, Frank; Hasan, Mehedi; Güntner, Andreas

    2017-04-01

    Since April 2002, the Gravity Recovery and Climate Experiment (GRACE) satellite mission has been churning out water storage anomaly data, which have been shown to be a unique descriptor of large-scale hydrological extreme events. Nonetheless, efforts to assess the comprehensive information from GRACE on total water storage variations for near-real time flood or drought monitoring have been limited so far, primarily due to its coarse temporal (weekly to monthly) and spatial (>150,000 km2) resolution and the roughly 2-month latency of standard products. Pending the status of the aging GRACE satellite mission, the Horizon 2020-funded EGSIEM (European Gravity Service for Improved Emergency Management) project is scheduled to launch a 6-month near-real time test run of GRACE gravity field data from April 2017 onward, which will provide daily gridded data with a latency of 5 days. This fast availability allows the monitoring of total water storage variations related to hydrological extreme events as they occur, as opposed to a 'confirmation after occurrence', which is the current situation. This contribution proposes a global GRACE-derived gridded wetness indicator, expressed as a gravity anomaly in dimensionless units of standard deviation. Results of a retrospective evaluation (April 2002-December 2015) of the proposed index against databases of hydrological extremes will be presented. It is shown that signals for large extreme floods related to heavy/monsoonal rainfall are picked up well in the Southern Hemisphere and lower Northern Hemisphere (Africa, S-America, Australia, S-Asia), while extreme floods in the Northern Hemisphere (Russia) related to snow melt often are not. The latter is possibly related to a lack of mass movement over longer distances, e.g. when melt water is not drained due to river ice blocking.
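
    The proposed index, a storage anomaly in dimensionless units of standard deviation, amounts to a per-grid-cell z-score against the climatology. A minimal sketch on synthetic data (array shapes and names are illustrative):

      import numpy as np

      rng = np.random.default_rng(4)
      tws = rng.normal(0.0, 5.0, size=(168, 90, 180))  # months x lat x lon

      clim_mean = tws.mean(axis=0)                     # climatological mean
      clim_std = tws.std(axis=0)                       # climatological spread
      wetness = (tws[-1] - clim_mean) / clim_std       # latest month, in sigmas
      print(wetness.shape, round(float(wetness.max()), 2))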

  12. ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes.

    PubMed

    Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim

    2010-03-01

    Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith-Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. The database can be accessed through http://proteinworlddb.org
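
    For reference, the Smith-Waterman recurrence at the heart of these comparisons is compact, even though running it all-against-all on 4 million sequences required grid-scale computing. Below is a textbook score-only version with a linear gap penalty, not the project's tuned implementation:

      import numpy as np

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
          # H[i, j] = best local alignment score ending at a[i-1], b[j-1].
          H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
          return int(H.max())      # the local alignment can end anywhere

      print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))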

  13. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  14. Upper extremity deep venous thrombosis after port insertion: What are the risk factors?

    PubMed

    Tabatabaie, Omidreza; Kasumova, Gyulnara G; Kent, Tara S; Eskander, Mariam F; Fadayomi, Ayotunde B; Ng, Sing Chau; Critchlow, Jonathan F; Tawa, Nicholas E; Tseng, Jennifer F

    2017-08-01

    Totally implantable venous access devices (ports) are widely used, especially for cancer chemotherapy. Although their use has been associated with upper extremity deep venous thrombosis, the risk factors of upper extremity deep venous thrombosis in patients with a port are not studied adequately. The Healthcare Cost and Utilization Project's Florida State Ambulatory Surgery and Services Database was queried between 2007 and 2011 for patients who underwent outpatient port insertion, identified by Current Procedural Terminology code. Patients were followed in the State Ambulatory Surgery and Services Database, State Inpatient Database, and State Emergency Department Database for upper extremity deep venous thrombosis occurrence. The cohort was divided into a test cohort and a validation cohort based on the year of port placement. A multivariable logistic regression model was developed to identify risk factors for upper extremity deep venous thrombosis in patients with a port. The model then was tested on the validation cohort. Of the 51,049 patients in the derivation cohort, 926 (1.81%) developed an upper extremity deep venous thrombosis. On multivariate analysis, independently significant predictors of upper extremity deep venous thrombosis included age <65 years (odds ratio = 1.22), Elixhauser score of 1 to 2 compared with zero (odds ratio = 1.17), end-stage renal disease (versus no kidney disease; odds ratio = 2.63), history of any deep venous thrombosis (odds ratio = 1.77), all-cause 30-day revisit (odds ratio = 2.36), African American race (versus white; odds ratio = 1.86), and other nonwhite races (odds ratio = 1.35). Additionally, compared with genitourinary malignancies, patients with gastrointestinal (odds ratio = 1.55), metastatic (odds ratio = 1.76), and lung cancers (odds ratio = 1.68) had greater risks of developing an upper extremity deep venous thrombosis. This study identified major risk factors of upper extremity deep venous thrombosis. Further studies are needed to evaluate the appropriateness of thromboprophylaxis in patients at greater risk of upper extremity deep venous thrombosis. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Crash analysis of lower extremity injuries in children restrained in forward-facing car seats during front and rear impacts.

    PubMed

    Bennett, Tellen D; Kaufman, Robert; Schiff, Melissa; Mock, Charles; Quan, Linda

    2006-09-01

    The mechanism, crash characteristics, and spectrum of lower extremity injuries in children restrained in forward-facing car seats during front and rear impacts have not been described. We identified in two databases children who sustained lower extremity injuries while restrained in forward-facing car seats. To identify the mechanism, we analyzed crash reconstructions from three frontal-impact cases from the Crash Injury Research and Engineering Network. To further describe the crash and injury characteristics we evaluated children between 1 and 4 years of age with lower extremity injuries from front or rear impacts in the National Automotive Sampling System (NASS) Crashworthiness Data System (CDS) database. Crash reconstruction data demonstrated that the likely mechanism of lower extremity injury was contact between the legs and the front seatbacks. In the CDS database, we identified 15 children with lower extremity injuries in a forward-facing child seat, usually (13 out of 15) placed in the rear seat, incurred in frontal impacts (11 out of 15). Several (5 out of 15) children were in unbelted or improperly secured forward-facing car seats. Injury Severity Scores varied widely (5-50). Children in forward-facing car seats involved in severe front or rear crashes may incur a range of lower extremity injury from impact with the car interior component in front of them. Crash scene photography can provide useful information about anatomic sites at risk for injury and alert emergency department providers to possible subtle injury.

  16. The development of personality extremity from childhood to adolescence: relations to internalizing and externalizing problems.

    PubMed

    Van den Akker, Alithe L; Prinzie, Peter; Deković, Maja; De Haan, Amaranta D; Asscher, Jessica J; Widiger, Thomas

    2013-12-01

    This study investigated the development of personality extremity (deviation from the average midpoint of all 5 personality dimensions together) across childhood and adolescence, as well as relations between personality extremity and adjustment problems. For 598 children (mean age at Time 1 = 7.5 years), mothers and fathers reported on the Big Five personality dimensions 4 times across 8 years. Children's vector length in a 5-dimensional configuration of the Big Five dimensions represented personality extremity. Mothers, fathers, and teachers reported children's internalizing and externalizing problems at the 1st and final measurement. In a cohort-sequential design, we modeled personality extremity in children and adolescents from ages 6 to 17 years. Growth mixture modeling revealed a similar solution for both mother and father reports: a large group with relatively short vectors that were stable over time (mother reports: 80.3%; father reports: 84.7%) and 2 smaller groups with relatively long vectors (i.e., extreme personality configurations). One group started out relatively extreme and decreased over time (mother reports: 13.2%; father reports: 10.4%), whereas the other group started out only slightly higher than the short-vector group but increased across time (mother reports: 6.5%; father reports: 4.9%). Children who belonged to the increasingly extreme class experienced more internalizing and externalizing problems in late adolescence, controlling for previous levels of adjustment problems and the Big Five personality dimensions. Personality extremity may be important to consider when identifying children at risk for adjustment problems. PsycINFO Database Record (c) 2013 APA, all rights reserved.
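
    Since the abstract defines extremity as the length of a child's profile vector measured from the scale midpoint, the computation itself is compact; a minimal sketch (the midpoint and ratings below are invented, and the actual questionnaire scale is not given in the abstract):

        import numpy as np

        midpoint = 4.0  # assumed midpoint of a hypothetical 1-7 rating scale
        big_five = np.array([5.8, 3.1, 6.5, 2.2, 6.9])  # O, C, E, A, N ratings

        # Extremity = Euclidean length of the vector from the midpoint
        extremity = float(np.linalg.norm(big_five - midpoint))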

  17. Attempting to physically explain space-time correlation of extremes

    NASA Astrophysics Data System (ADS)

    Bernardara, Pietro; Gailhard, Joel

    2010-05-01

    Spatial and temporal clustering of hydro-meteorological extreme events is well documented, and the statistical parameters characterizing their local frequencies of occurrence show clear spatial patterns. Thus, in order to robustly assess hydro-meteorological hazard, statistical models need to be able to take spatial and temporal dependencies into account. Statistical models that consider long-term correlation for quantifying and qualifying temporal and spatial dependencies are available, such as the multifractal approach. Furthermore, the development of regional frequency analysis techniques makes it possible to estimate the frequency of occurrence of extreme events while accounting for spatial patterns in the behaviour of extreme quantiles. However, in order to understand the origin of spatio-temporal clustering, a physical explanation should be sought. Here, statistical evidence of spatio-temporal correlation and of spatial patterns in extreme behaviour is presented for a large database of more than 400 rainfall and discharge series in France. In particular, the spatial distribution of multifractal and Generalized Pareto distribution parameters shows clear correlation patterns in the frequency of occurrence of extremes. It is then shown that the identification of atmospheric circulation patterns (weather types) can physically explain the temporal clustering of extreme rainfall events (seasonality) and the spatial pattern of their frequency of occurrence. Moreover, coupling this information with hydrological modeling of a watershed (as in the Schadex approach), an explanation of the spatio-temporal distribution of extreme discharge can also be provided. We finally show that a hydro-meteorological approach (such as the Schadex approach) can explain and take into account the space and time dependencies of hydro-meteorological extreme events.
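
    As an illustration of the frequency-analysis building block the abstract refers to, a minimal peaks-over-threshold fit of a Generalized Pareto distribution to a daily rainfall series (the data are synthetic and the threshold choice arbitrary; the authors' actual estimation procedure is not described here):

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(0)
        daily_rain = rng.gamma(shape=0.4, scale=8.0, size=20000)  # synthetic series

        threshold = np.quantile(daily_rain, 0.98)
        excesses = daily_rain[daily_rain > threshold] - threshold

        # Fit the GPD to the threshold excesses, location fixed at zero
        shape, loc, scale = genpareto.fit(excesses, floc=0)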

  18. Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices.

    PubMed

    Yang, Xin; Cheng, Kwang-Ting

    2014-06-01

    The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile augmented reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still have only limited capabilities, which greatly restricts their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Learning-based Local Difference Binary (LLDB). LLDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. To select an optimized set of grid cell pairs, we densely sample grid cells from an image patch and then leverage a modified AdaBoost algorithm to automatically extract a small set of critical ones, with the goal of maximizing the Hamming distance between mismatches while minimizing it between matches. Experimental results demonstrate that LLDB is extremely fast to compute and, owing to its high robustness and distinctiveness, fast to match against a large database. Compared to state-of-the-art binary descriptors primarily designed for speed, LLDB has similar efficiency for descriptor construction, while achieving greater accuracy and faster matching speed when matching over a large database with 2.3M descriptors on mobile devices.
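
    A hedged sketch of the core idea: binarize mean-intensity comparisons over grid-cell pairs and match by Hamming distance (LLDB additionally uses gradient tests and learns the pair set with AdaBoost; here random pairs stand in for the learned selection):

        import numpy as np

        def binary_descriptor(patch, pairs, grid=4):
            """Bit i is 1 if the mean intensity of cell pairs[i][0] exceeds
            that of cell pairs[i][1]."""
            h, w = patch.shape
            means = np.array([patch[r*h//grid:(r+1)*h//grid,
                                    c*w//grid:(c+1)*w//grid].mean()
                              for r in range(grid) for c in range(grid)])
            return (means[pairs[:, 0]] > means[pairs[:, 1]]).astype(np.uint8)

        def hamming(d1, d2):
            """Descriptors are compared by Hamming distance."""
            return int(np.count_nonzero(d1 != d2))

        rng = np.random.default_rng(0)
        pairs = rng.choice(16, size=(128, 2))  # stand-in for AdaBoost-selected pairs
        desc = binary_descriptor(rng.random((32, 32)), pairs)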

  19. ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes

    PubMed Central

    Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim

    2010-01-01

    Motivation: Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used, resulting in a certain lack of accuracy. In order to improve and validate the results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith–Waterman algorithm. This extremely computationally intensive approach was made possible with the help of World Community Grid™, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, the NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. Availability: The database can be accessed through http://proteinworlddb.org Contact: otto@fiocruz.br PMID:20089515
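
    For readers unfamiliar with the method, a minimal Smith–Waterman local-alignment scorer with a linear gap penalty (production comparisons like those behind ProteinWorldDB use substitution matrices such as BLOSUM and affine gap penalties; the parameters here are illustrative only):

        def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
            """Best local alignment score between sequences a and b."""
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                    H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                    best = max(best, H[i][j])
            return best

        score = smith_waterman_score("HEAGAWGHEE", "PAWHEAE")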

  20. Andromeda: a peptide search engine integrated into the MaxQuant environment.

    PubMed

    Cox, Jürgen; Neuhauser, Nadin; Michalski, Annette; Scheltema, Richard A; Olsen, Jesper V; Mann, Matthias

    2011-04-01

    A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra Andromeda is also accessible via a web server. We demonstrate the flexibility of the system by implementing the capability to identify cofragmented peptides, significantly improving the total number of identified peptides.
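
    A hedged sketch of the kind of probabilistic score such engines compute: the chance of matching at least k of n theoretical fragment peaks by luck, assuming each peak matches a random spectrum with probability p (Andromeda's actual scoring involves further detail, such as per-mass-window peak depths and modification handling):

        import math
        from scipy.stats import binom

        def peptide_match_score(n_theoretical, n_matched, p_random=0.04):
            """-10*log10 P(X >= n_matched) with X ~ Binomial(n_theoretical, p_random)."""
            p_tail = binom.sf(n_matched - 1, n_theoretical, p_random)
            return float("inf") if p_tail <= 0 else -10.0 * math.log10(p_tail)

        score = peptide_match_score(n_theoretical=40, n_matched=12)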

  1. NSDNA: a manually curated database of experimentally supported ncRNAs associated with nervous system diseases

    PubMed Central

    Wang, Jianjian; Cao, Yuze; Zhang, Huixue; Wang, Tianfeng; Tian, Qinghua; Lu, Xiaoyu; Lu, Xiaoyan; Kong, Xiaotong; Liu, Zhaojun; Wang, Ning; Zhang, Shuai; Ma, Heping; Ning, Shangwei; Wang, Lihua

    2017-01-01

    The Nervous System Disease NcRNAome Atlas (NSDNA) (http://www.bio-bigdata.net/nsdna/) is a manually curated database that provides comprehensive experimentally supported associations about nervous system diseases (NSDs) and noncoding RNAs (ncRNAs). NSDs represent a common group of disorders, some of which are characterized by high morbidity and disabilities. The pathogenesis of NSDs at the molecular level remains poorly understood. ncRNAs are a large family of functionally important RNA molecules. Increasing evidence shows that diverse ncRNAs play a critical role in various NSDs. Mining and summarizing NSD–ncRNA association data can help researchers discover useful information. Hence, we developed an NSDNA database that documents 24 713 associations between 142 NSDs and 8593 ncRNAs in 11 species, curated from more than 1300 articles. This database provides a user-friendly interface for browsing and searching and allows for data downloading flexibility. In addition, NSDNA offers a submission page for researchers to submit novel NSD–ncRNA associations. It represents an extremely useful and valuable resource for researchers who seek to understand the functions and molecular mechanisms of ncRNA involved in NSDs. PMID:27899613

  2. Toxicity tests aiming to protect Brazilian aquatic systems: current status and implications for management.

    PubMed

    Martins, Samantha Eslava; Bianchini, Adalto

    2011-07-01

    The current status of toxicity tests performed with Brazilian native species was evaluated through a survey of the scientific data available in the literature. The information gathered was processed and an electronic toxicology database (http://www.inct-ta.furg.br/bd_toxicologico.php) was generated. This database provides valuable information for researchers to select aquatic species that are sensitive or tolerant to a large variety of aquatic pollutants. Furthermore, the toxicology database allows researchers to select species representative of an ecosystem of interest. Analysis of the toxicology database showed that ecotoxicological assays have significantly improved in Brazil over the last decade, in spite of the still relatively low number of tests performed and the restricted number of native species tested. This is because most of the research is carried out in a few laboratories concentrated in certain regions of Brazil, especially the South and Southeast. Considering the extremely rich biodiversity and the large variety of aquatic ecosystems in Brazil, this finding points to the urgent need for ecotoxicological studies of other groups of aquatic animals, such as insects, foraminifera, cnidarians, worms, and amphibians. This would help to derive more realistic water quality criteria (WQC) values, which would better protect the different aquatic ecosystems in Brazil. Finally, the toxicology database presents solid, science-based information, which can encourage and drive the environmental regulatory agencies in Brazil to derive WQC based on native species. In this context, the present paper discusses the historical evolution of ecotoxicological studies in Brazil and how they have contributed to the improvement of Brazilian federal and regional environmental regulations.

  3. Aviation Safety Issues Database

    NASA Technical Reports Server (NTRS)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders, such as the Commercial Aviation Safety Team (CAST), have already used the database. This broader interest was the genesis for making the database publicly accessible and for writing this report.

  4. Measures of dependence for multivariate Lévy distributions

    NASA Astrophysics Data System (ADS)

    Boland, J.; Hurd, T. R.; Pivato, M.; Seco, L.

    2001-02-01

    Recent statistical analysis of a number of financial databases is summarized. Increasing agreement is found that logarithmic equity returns show a certain type of asymptotic behavior of the largest events, namely that the probability density functions have power law tails with an exponent α≈3.0. This behavior does not vary much over different stock exchanges or over time, despite large variations in trading environments. The present paper proposes a class of multivariate distributions which generalizes the observed qualities of univariate time series. A new consequence of the proposed class is the "spectral measure" which completely characterizes the multivariate dependences of the extreme tails of the distribution. This measure on the unit sphere in M-dimensions, in principle completely general, can be determined empirically by looking at extreme events. If it can be observed and determined, it will prove to be of importance for scenario generation in portfolio risk management.
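
    A minimal sketch of one standard way to estimate such a tail exponent, the Hill estimator applied to the k largest absolute returns (the paper's own estimation method is not given in this abstract, and the data below are simulated):

        import numpy as np

        def hill_alpha(returns, k=200):
            """Hill estimate of the power-law tail exponent from the k largest
            order statistics of |returns|."""
            x = np.sort(np.abs(np.asarray(returns)))[::-1]
            return k / np.sum(np.log(x[:k] / x[k]))

        rng = np.random.default_rng(0)
        sample = rng.standard_t(df=3, size=100_000)  # t(3) has tail exponent 3
        alpha_hat = hill_alpha(sample)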

  5. Toward high-throughput genotyping: dynamic and automatic software for manipulating large-scale genotype data using fluorescently labeled dinucleotide markers.

    PubMed

    Li, J L; Deng, H; Lai, D B; Xu, F; Chen, J; Gao, G; Recker, R R; Deng, H W

    2001-07-01

    To efficiently manipulate large amounts of genotype data generated with fluorescently labeled dinucleotide markers, we developed a Microsoft database management system. The system offers several advantages. First, it accommodates the dynamic nature of the accumulation of genotype data during the genotyping process; some data need to be confirmed or replaced through repeat lab procedures. Using the system, raw genotype data can be imported easily and continuously and incorporated into the database during a genotyping process that may continue over an extended period of time in large projects. Second, almost all of the procedures are automatic, including autocomparison of the raw data read by different technicians from the same gel, autoadjustment among the allele fragment-size data from cross-runs or cross-platforms, autobinning of alleles, and autocompilation of genotype data for suitable programs to perform inheritance checks in pedigrees. Third, the system provides functions to track electrophoresis gel files to locate the gel or sample sources of any resultant genotype data, which is extremely helpful for double-checking the consistency of raw and final data and for directing repeat experiments. In addition, the user-friendly graphic interface renders processing of large amounts of data much less labor-intensive. Furthermore, the system has built-in mechanisms to detect some genotyping errors and to assess the quality of genotype data, which are then summarized in automatically generated statistical reports. The system can easily handle >500,000 genotype data entries, a number more than sufficient for typical whole-genome linkage studies. The modules and programs we developed can be extended to other database platforms, such as Microsoft SQL Server, if the capability to handle still greater quantities of genotype data simultaneously is desired.

  6. High temperature, high pressure equation of state density correlations and viscosity correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tapriyal, D.; Enick, R.; McHugh, M.

    2012-07-31

    Global increase in oil demand and depleting reserves have driven the need to find new oil resources. To find these untapped reservoirs, oil companies are exploring various remote and harsh locations, such as the deep waters of the Gulf of Mexico, remote arctic regions, and unexplored deep deserts. Further, the depth of new oil/gas wells being drilled has increased considerably to tap these new resources. With the increase in well depth, bottomhole temperature and pressure are also increasing to extreme values (i.e., up to 500 F and 35,000 psi). The density and viscosity of natural gas and crude oil at reservoir conditions are critical fundamental properties required for accurate assessment of the amount of recoverable petroleum within a reservoir and for modeling the flow of these fluids within the porous media. These properties are also used to design appropriate drilling and production equipment such as blowout preventers, risers, etc. With the present state of the art, there is no accurate database for these fluid properties at extreme conditions. As we have begun to expand this experimental database, it has become apparent that there are neither equations of state for density nor transport models for viscosity that can be used to predict these fundamental properties of multi-component hydrocarbon mixtures over a wide range of temperature and pressure. Presently, oil companies are using correlations based on lower-temperature and lower-pressure databases that exhibit unsatisfactory predictive capability at extreme conditions (with errors as great as ±50%). For oil companies committed to safely producing these resources, accurately predicting flow rates, and assuring the integrity of the flow, the absence of an extensive experimental database at extreme conditions, and of models capable of predicting these properties over an extremely wide range of temperature and pressure (including extreme conditions), makes their task even more daunting.

  7. [New population curves in Spanish extremely preterm neonates].

    PubMed

    García-Muñoz Rodrigo, F; García-Alix Pérez, A; Figueras Aloy, J; Saavedra Santana, P

    2014-08-01

    Most anthropometric reference data for extremely preterm infants used in Spain are outdated, are based on non-Spanish populations, or are derived from small hospital-based samples that failed to include neonates of borderline viability. To develop gender-specific, population-based curves for birth weight, length, and head circumference in extremely preterm Caucasian infants, using a large contemporary sample of Spanish singletons. Anthropometric data from neonates ≤28 weeks of gestational age were collected between January 2002 and December 2010 using the Spanish database SEN1500. Gestational age was estimated according to obstetric data (early pregnancy ultrasound). The data were analyzed with the SPSS 20 package, and centile tables were created for males and females using the Cole and Green LMS method. This study presents the first population-based growth curves for extremely preterm infants, including those of borderline viability, in Spain. A sexual dimorphism is evident for all of the studied parameters, starting at early gestation. These new gender-specific, population-based data could be useful for improving growth assessments of extremely preterm infants in our country, for the development of epidemiological studies, for the evaluation of temporal trends, and for clinical or public health interventions seeking to optimize fetal growth. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier España. All rights reserved.
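
    The Cole and Green LMS method summarizes each reference curve by three age-specific parameters: a Box-Cox power (L), a median (M) and a coefficient of variation (S). A minimal sketch of converting a measurement to a z-score and centile (the LMS values below are invented for illustration and are not taken from the study):

        import math
        from scipy.stats import norm

        def lms_zscore(x, L, M, S):
            """z = ((x/M)**L - 1) / (L*S) for L != 0, else ln(x/M) / S."""
            if L == 0:
                return math.log(x / M) / S
            return ((x / M) ** L - 1) / (L * S)

        # Hypothetical LMS parameters for birth weight at one gestational age
        z = lms_zscore(x=980.0, L=1.1, M=900.0, S=0.18)
        centile = 100 * norm.cdf(z)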

  8. Very large hail occurrence in Poland from 2007 to 2015

    NASA Astrophysics Data System (ADS)

    Pilorz, Wojciech

    2015-10-01

    Very large hail is defined as a hailstone of 5 cm or more in diameter. The phenomenon is rare, but its significant consequences, not only for agriculture but also for automobiles, households, and people outdoors, make it essential to examine. Hail occurrence is closely tied to the frequency and type of storms, with supercell storms carrying the greatest hail risk. The geographical distribution of hailstorms was compared with the geographical distribution of storms in Poland, and similarities were found. The area with the largest number of storms is southeastern Poland, and analysis of European Severe Weather Database (ESWD) data showed that most very large hail reports occurred in this part of the country. The probable reason is that tropical air masses persist longest over southeastern Poland. The spatial analysis also shows more hail incidents over the Upper Silesia, Lesser Poland, Subcarpathia, and Świętokrzyskie regions. The information source on hail occurrence was the ESWD, an open database to which anyone can add reports and in which reports meeting given search criteria can be found. In total, 69 hailstorms in the period 2007-2015 were examined, producing 121 very large hail reports; there is a large disproportion in the number of hailstorms and hail reports between individual years. The very large hail season in Poland begins in May and ends in September, peaking in July. Most hail occurs between 12:00 and 17:00 UTC, but there were some cases of very large (one extremely large) hail at night and in the early morning hours. Although very large hail is a spectacular phenomenon, its local character means that many events likely go unreported, and this potentially high information loss rate is the most significant problem in hail research.

  9. LIPS database with LIPService: a microscopic image database of intracellular structures in Arabidopsis guard cells.

    PubMed

    Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro

    2013-05-16

    Intracellular configuration is an important feature of cell status. Recent advances in microscopic imaging techniques allow us to easily obtain large numbers of microscopic images of intracellular structures. In this circumstance, automated microscopic image recognition techniques are of extreme importance to future phenomics/visible screening approaches. However, there was no benchmark microscopic image dataset for intracellular organelles in a specified plant cell type. We previously established the Live Images of Plant Stomata (LIPS) database, a publicly available collection of optical-section images of various intracellular structures of plant guard cells, as a model system of environmental signal perception and transduction. Here we report recent updates to the LIPS database and the establishment of a new database interface, LIPService. We updated the LIPS dataset and established LIPService to promote efficient inspection of intracellular structure configurations. Cell nuclei, microtubules, actin microfilaments, mitochondria, chloroplasts, endoplasmic reticulum, peroxisomes, endosomes, Golgi bodies, and vacuoles can be filtered using probe names or morphometric parameters such as stomatal aperture. In addition to the serial optical-section images of the original LIPS database, new volume-rendering data for easy web browsing of three-dimensional intracellular structures have been released to allow easy inspection of their configurations or relationships with cell status/morphology. We also demonstrated the utility of the new LIPS image database for automated organelle recognition of images from another plant cell image database using image clustering analyses. The updated LIPS database provides a benchmark image dataset for representative intracellular structures in Arabidopsis guard cells, and the newly released LIPService allows users to inspect the relationship between organellar three-dimensional configurations and morphometric parameters.

  10. Black Swan Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    Emanuel, K.; Lin, N.

    2012-12-01

    Virtually all assessments of tropical cyclone risk are based on historical records, which are limited to a few hundred years at most. Yet stronger TCs may occur in the future and at places that have not been affected historically. Such events lie outside the realm of historically based expectations and may have extreme impacts; their occurrences are also often made explainable after the fact (e.g., Hurricane Katrina). We nickname such potential future TCs, characterized by rarity, extreme impact, and retrospective predictability, "black swans" (Nassim Nicholas Taleb, 2007). As, by definition, black swan TCs have yet to happen, statistical methods that rely solely on historical track data cannot predict their occurrence. Global climate models lack the capability to predict intense storms, even with a resolution as high as 14 km (Emanuel et al. 2010). Also, most dynamic downscaling methods (e.g., Bender et al. 2010) are still limited in horizontal resolution and are too expensive to implement to generate enough events to include rare ones. In this study, we apply a simpler statistical/deterministic hurricane model (Emanuel et al. 2006) to simulate large numbers of synthetic storms under a given (observed or projected) climate condition. The method has been shown to generate realistic extremes in various basins (Emanuel et al. 2008 and 2010). We also apply a hydrodynamic model (ADCIRC; Luettich et al. 1992) to simulate the storm surges generated by these storms. We then search the generated large databases for black swan TCs, in terms of joint wind and surge damage potential. Heavy rainfall is another important TC hazard and will be considered in a future study. We focus on three areas: Tampa Bay in the U.S., the Persian Gulf, and Darwin in Australia. Tampa Bay is highly vulnerable to storm surge as it is surrounded by shallow water and low-lying lands, much of which may be inundated by a storm tide of 6 m. High surges are generated by storms with a broad spectrum of characteristics in our synthetic database, although no large surge has been recorded historically, as only one moderate storm has passed by the area. Tampa black swans are identified as those that move northward parallel to the west Florida coast with high intensities and resonate with the Florida-shelf edge waves to generate extreme surges of up to 10 m in Tampa Bay. The Arabian Sea area has sea surface temperatures warm enough to support the development of severe TCs, but TC development has been limited by low humidity and high wind shear, and only one recorded TC (super cyclonic storm Gonu in 2007) moved close to the Persian Gulf, making landfall in Oman and Iran. Our analysis shows that black swan TCs can originate within the Persian Gulf and make landfall with high intensities in populous places; extreme surges over 9 m for Abu Dhabi and Doha and over 7 m for Dubai are possible. Darwin experienced immense devastation from Cyclone Tracy in 1974, but the damage was mainly due to the strong winds (the surge was only about 1.6 m). Our analysis includes extremely intense black swan TCs that make landfall just south of Darwin, generating surges above 10 m; these results may prompt the city to reconsider its TC risk. We are currently analyzing the joint probability of the extreme wind and surge of these black swan TCs to more clearly assess their full damage potentials.

  11. NSDNA: a manually curated database of experimentally supported ncRNAs associated with nervous system diseases.

    PubMed

    Wang, Jianjian; Cao, Yuze; Zhang, Huixue; Wang, Tianfeng; Tian, Qinghua; Lu, Xiaoyu; Lu, Xiaoyan; Kong, Xiaotong; Liu, Zhaojun; Wang, Ning; Zhang, Shuai; Ma, Heping; Ning, Shangwei; Wang, Lihua

    2017-01-04

    The Nervous System Disease NcRNAome Atlas (NSDNA) (http://www.bio-bigdata.net/nsdna/) is a manually curated database that provides comprehensive experimentally supported associations about nervous system diseases (NSDs) and noncoding RNAs (ncRNAs). NSDs represent a common group of disorders, some of which are characterized by high morbidity and disabilities. The pathogenesis of NSDs at the molecular level remains poorly understood. ncRNAs are a large family of functionally important RNA molecules. Increasing evidence shows that diverse ncRNAs play a critical role in various NSDs. Mining and summarizing NSD-ncRNA association data can help researchers discover useful information. Hence, we developed an NSDNA database that documents 24 713 associations between 142 NSDs and 8593 ncRNAs in 11 species, curated from more than 1300 articles. This database provides a user-friendly interface for browsing and searching and allows for data downloading flexibility. In addition, NSDNA offers a submission page for researchers to submit novel NSD-ncRNA associations. It represents an extremely useful and valuable resource for researchers who seek to understand the functions and molecular mechanisms of ncRNA involved in NSDs. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Identifying climate analogues for precipitation extremes for Denmark based on RCM simulations from the ENSEMBLES database.

    PubMed

    Arnbjerg-Nielsen, K; Funder, S G; Madsen, H

    2015-01-01

    Climate analogues, also denoted Space-For-Time, may be used to identify regions where present climatic conditions resemble those of a past or future state of another location or region, based on robust climate variable statistics in combination with projections of how these statistics change over time. The study focuses on assessing climate analogues for Denmark based on current climate observations (the E-OBS data set) as well as the ENSEMBLES database of future climates, with the aim of projecting future precipitation extremes. Local present precipitation extremes are assessed by means of intensity-duration-frequency curves for urban drainage design for the relevant locations: France, the Netherlands, Belgium, Germany, the United Kingdom, and Denmark. Based on this approach, projected increases in extreme precipitation by 2100 of 9% and 21% are expected for 2-year and 10-year return periods, respectively. The results should be interpreted with caution, as the best region to represent future conditions for Denmark is the coastal area of northern France, for which only little information is available with respect to present precipitation extremes.

  13. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    PubMed

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the conventional coding and sorting algorithms most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
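
    A minimal sketch of match-percentage ranking of the kind described, treating each record as a sequence of per-tooth codes (the seven-code scheme is not spelled out in the abstract, so the codes and records here are hypothetical, and real systems handle missing or unknown teeth explicitly):

        def match_percentage(postmortem, antemortem):
            """Fraction of tooth positions whose codes agree."""
            agree = sum(1 for p, a in zip(postmortem, antemortem) if p == a)
            return agree / len(postmortem)

        def rank_candidates(postmortem, candidates):
            """Antemortem record IDs sorted by descending match percentage."""
            return sorted(candidates,
                          key=lambda rid: match_percentage(postmortem, candidates[rid]),
                          reverse=True)

        pm = ["V", "V", "F", "M", "V"]  # hypothetical five-tooth record
        db = {"AM-001": ["V", "V", "F", "M", "V"],
              "AM-002": ["V", "F", "F", "V", "V"]}
        best_first = rank_candidates(pm, db)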

  14. [Drug Repositioning Research Utilizing a Large-scale Medical Claims Database to Improve Survival Rates after Cardiopulmonary Arrest].

    PubMed

    Zamami, Yoshito; Niimura, Takahiro; Takechi, Kenshi; Imanishi, Masaki; Koyama, Toshihiro; Ishizawa, Keisuke

    2017-01-01

    Approximately 100,000 people suffer cardiopulmonary arrest in Japan every year, and the aging of society means that this number is expected to increase. Worldwide, millions of people develop cardiac arrest annually, making it an international issue. Although survival has improved thanks to advances in cardiopulmonary resuscitation, the rate of postresuscitation encephalopathy after the return of spontaneous circulation is high, and the proportion of patients who can return to normal life is extremely low. Treatment for postresuscitation encephalopathy is long term, and if sequelae persist, nursing care is required, causing immeasurable economic burdens as a result of ballooning medical costs. As there is at present no drug treatment to improve postresuscitation encephalopathy as a complication of cardiopulmonary arrest, the development of novel drug treatments is desirable. In recent years, new efficacy has been discovered for existing drugs used in the clinical setting, and drug repositioning has been proposed as a strategy for developing those drugs as therapeutic agents for different diseases. This review describes a large-scale database study carried out following a discovery strategy for drug repositioning, with the objective of improving survival rates after cardiopulmonary arrest, and discusses future repositioning prospects.

  15. Changes in the extreme wave heights over the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Kudryavtseva, Nadia; Soomere, Tarmo

    2017-04-01

    Storms over the Baltic Sea and northwestern Europe have a large impact on the population, offshore industry, and shipping. Understanding extreme events in sea wave heights, and how they change with climate change and variability, is critical for assessing flooding risks and for coastal protection. The BACC II Assessment of Climate Change for the Baltic Sea Basin showed that extreme-event analysis of wind waves is currently not well addressed, nor are satellite observations of wave heights. Here we discuss the analysis of all existing satellite altimetry data over the Baltic Sea Basin with respect to extremes in wave heights. In this talk we present, for the first time, an analysis of 100-yr return levels, fitted generalized Pareto and Weibull distributions, and the number and frequency of extreme wave-height events in the Baltic Sea measured by multi-mission satellite altimetry. The data span more than 23 years and provide excellent spatial coverage over the Baltic Sea, allowing detailed study of spatial variations and changes in extreme wave heights. The analysis is based on applying the Initial Distribution Method, the Annual Maxima method, and the Peak-Over-Threshold approach to satellite altimetry data, all validated against in-situ wave height measurements. We show that the 100-yr return levels of wave heights exhibit significant spatial changes over the Baltic Sea, indicating a decrease in the southern part of the Baltic Sea and an increase in adjacent areas, which can significantly affect coastal vulnerability. We compare the observed shift with storm track database data and discuss a spatial correlation and possible connection between changes in the storm tracks over the Baltic Sea and changes in the extreme wave heights.
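
    For the peak-over-threshold approach, the N-year return level follows from the fitted generalized Pareto parameters by the textbook formula (this sketch follows the standard formulation, e.g. Coles, and is not necessarily the authors' exact implementation; the numbers are invented):

        import math

        def gpd_return_level(u, sigma, xi, zeta_u, n_per_year, N=100):
            """Level exceeded on average once in N years.
            u: threshold; sigma, xi: GPD scale and shape;
            zeta_u: exceedance rate P(X > u); n_per_year: observations per year."""
            m = N * n_per_year * zeta_u  # expected exceedances in N years
            if abs(xi) < 1e-9:           # xi -> 0 limit (exponential tail)
                return u + sigma * math.log(m)
            return u + (sigma / xi) * (m ** xi - 1)

        # Hypothetical altimetry-like values: 4 m threshold, daily sampling
        z100 = gpd_return_level(u=4.0, sigma=0.8, xi=0.05, zeta_u=0.02, n_per_year=365)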

  16. A Mediterranean coastal database for assessing the impacts of sea-level rise and associated hazards

    NASA Astrophysics Data System (ADS)

    Wolff, Claudia; Vafeidis, Athanasios T.; Muis, Sanne; Lincke, Daniel; Satta, Alessio; Lionello, Piero; Jimenez, Jose A.; Conte, Dario; Hinkel, Jochen

    2018-03-01

    We have developed a new coastal database for the Mediterranean basin that is intended for coastal impact and adaptation assessment to sea-level rise and associated hazards on a regional scale. The data structure of the database relies on a linear representation of the coast with associated spatial assessment units. Using information on coastal morphology, human settlements and administrative boundaries, we have divided the Mediterranean coast into 13 900 coastal assessment units. To these units we have spatially attributed 160 parameters on the characteristics of the natural and socio-economic subsystems, such as extreme sea levels, vertical land movement and number of people exposed to sea-level rise and extreme sea levels. The database contains information on current conditions and on plausible future changes that are essential drivers for future impacts, such as sea-level rise rates and socio-economic development. Besides its intended use in risk and impact assessment, we anticipate that the Mediterranean Coastal Database (MCD) constitutes a useful source of information for a wide range of coastal applications.

  17. A Mediterranean coastal database for assessing the impacts of sea-level rise and associated hazards

    PubMed Central

    Wolff, Claudia; Vafeidis, Athanasios T.; Muis, Sanne; Lincke, Daniel; Satta, Alessio; Lionello, Piero; Jimenez, Jose A.; Conte, Dario; Hinkel, Jochen

    2018-01-01

    We have developed a new coastal database for the Mediterranean basin that is intended for coastal impact and adaptation assessment to sea-level rise and associated hazards on a regional scale. The data structure of the database relies on a linear representation of the coast with associated spatial assessment units. Using information on coastal morphology, human settlements and administrative boundaries, we have divided the Mediterranean coast into 13 900 coastal assessment units. To these units we have spatially attributed 160 parameters on the characteristics of the natural and socio-economic subsystems, such as extreme sea levels, vertical land movement and number of people exposed to sea-level rise and extreme sea levels. The database contains information on current conditions and on plausible future changes that are essential drivers for future impacts, such as sea-level rise rates and socio-economic development. Besides its intended use in risk and impact assessment, we anticipate that the Mediterranean Coastal Database (MCD) constitutes a useful source of information for a wide range of coastal applications. PMID:29583140

  18. Characteristics of atmospheric circulation patterns associated with extreme temperatures over North America in observations and climate models

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.

    Motivated by a desire to understand the physical mechanisms involved in future anthropogenic changes in extreme temperature events, the key atmospheric circulation patterns associated with extreme daily temperatures over North America in the current climate are identified. Several novel metrics are used to systematically identify and describe these patterns for the entire continent. The orientation, physical characteristics, and spatial scale of these circulation patterns vary based on latitude, season, and proximity to important geographic features (e.g., mountains, coastlines). The anomaly patterns associated with extreme cold events tend to be similar to, but opposite in sign of, those associated with extreme warm events, especially within the westerlies, and tend to scale with temperature in the same locations. The influence of the Pacific North American (PNA) pattern, the Northern Annular Mode (NAM), and the El Niño-Southern Oscillation (ENSO) on extreme temperature days and months shows that associations between extreme temperatures and the PNA and NAM are stronger than associations with ENSO. In general, the association with extremes tends to be stronger on monthly than on daily time scales. Extreme temperatures are associated with the PNA and NAM in locations typically influenced by these circulation patterns; however, many extremes still occur on days when the amplitude and polarity of these patterns do not favor their occurrence. In winter, synoptic-scale, transient weather disturbances are important drivers of extreme temperature days; however, these smaller-scale events are often concurrent with amplified PNA or NAM patterns. Associations are weaker in summer, when other physical mechanisms affecting the surface energy balance, such as anomalous soil moisture content, are associated with extreme temperatures. Analysis of historical runs from seventeen climate models from the CMIP5 database suggests that most models simulate realistic circulation patterns associated with extreme temperature days in most places. Model-simulated patterns tend to resemble observed patterns better in winter than in summer and at 500 hPa than at the surface. There is substantial variability among the suite of models analyzed, and most models simulate circulation patterns more realistically away from influential features such as large bodies of water and complex topography.

  19. 3D WebGIS and Visualization Issues for Architectures and Large Sites

    NASA Astrophysics Data System (ADS)

    De Amicis, R.; Conti, G.; Girardi, G.; Andreolli, M.

    2011-09-01

    Traditionally, within the field of archaeology and, more generally, within the cultural heritage domain, Geographical Information Systems (GIS) have mostly been used as support for cataloguing activities, essentially operating as gateways to large geo-referenced archives of specialised cultural heritage information. Additionally, GIS have proved essential in helping cultural heritage institutions improve the management of their historical information, providing the means to detect otherwise hard-to-discover spatial patterns and supplying the computational tools necessary to perform spatial clustering, proximity, and orientation analysis. This paper presents a platform developed to address both of the aforementioned issues by allowing geo-referenced cataloguing of multimedia resources of cultural relevance as well as user-friendly access through an interactive 3D geobrowser which operates as a single point of access to the available digital repositories. The solution was showcased in the context of the "Festival dell'economia" (Festival of Economics), a major event recently held in Trento, Italy, where it allowed visitors to interactively access an extremely large repository of information, and the associated metadata, available across the area of the Autonomous Province of Trento, Italy. Within the event, this extremely large repository was made accessible, via the network, through web services, from a 3D interactive geobrowser developed by the authors. The 3D scene was enriched with a number of Points of Interest (POIs) linking to information available within various databases. The software package was deployed on a complex hardware set-up composed of a large composite panoramic screen covering a horizontal field of view of 240 degrees.

  20. Overcoming Dietary Assessment Challenges in Low-Income Countries: Technological Solutions Proposed by the International Dietary Data Expansion (INDDEX) Project.

    PubMed

    Coates, Jennifer C; Colaiezzi, Brooke A; Bell, Winnie; Charrondiere, U Ruth; Leclercq, Catherine

    2017-03-16

    An increasing number of low-income countries (LICs) exhibit high rates of malnutrition coincident with rising rates of overweight and obesity. Individual-level dietary data are needed to inform effective responses, yet dietary data from large-scale surveys conducted in LICs remain extremely limited. This discussion paper first seeks to highlight the barriers to collection and use of individual-level dietary data in LICs. Second, it introduces readers to new technological developments and research initiatives to remedy this situation, led by the International Dietary Data Expansion (INDDEX) Project. Constraints to conducting large-scale dietary assessments include significant costs, time burden, technical complexity, and limited investment in dietary research infrastructure, including the necessary tools and databases required to collect individual-level dietary data in large surveys. To address existing bottlenecks, the INDDEX Project is developing a dietary assessment platform for LICs, called INDDEX24, consisting of a mobile application integrated with a web database application, which is expected to facilitate seamless data collection and processing. These tools will be subject to rigorous testing including feasibility, validation, and cost studies. To scale up dietary data collection and use in LICs, the INDDEX Project will also invest in food composition databases, an individual-level dietary data dissemination platform, and capacity development activities. Although the INDDEX Project activities are expected to improve the ability of researchers and policymakers in low-income countries to collect, process, and use dietary data, the global nutrition community is urged to commit further significant investments in order to adequately address the range and scope of challenges described in this paper.

  1. Overcoming Dietary Assessment Challenges in Low-Income Countries: Technological Solutions Proposed by the International Dietary Data Expansion (INDDEX) Project

    PubMed Central

    Coates, Jennifer C.; Colaiezzi, Brooke A.; Bell, Winnie; Charrondiere, U. Ruth; Leclercq, Catherine

    2017-01-01

    An increasing number of low-income countries (LICs) exhibit high rates of malnutrition coincident with rising rates of overweight and obesity. Individual-level dietary data are needed to inform effective responses, yet dietary data from large-scale surveys conducted in LICs remain extremely limited. This discussion paper first seeks to highlight the barriers to collection and use of individual-level dietary data in LICs. Second, it introduces readers to new technological developments and research initiatives to remedy this situation, led by the International Dietary Data Expansion (INDDEX) Project. Constraints to conducting large-scale dietary assessments include significant costs, time burden, technical complexity, and limited investment in dietary research infrastructure, including the necessary tools and databases required to collect individual-level dietary data in large surveys. To address existing bottlenecks, the INDDEX Project is developing a dietary assessment platform for LICs, called INDDEX24, consisting of a mobile application integrated with a web database application, which is expected to facilitate seamless data collection and processing. These tools will be subject to rigorous testing including feasibility, validation, and cost studies. To scale up dietary data collection and use in LICs, the INDDEX Project will also invest in food composition databases, an individual-level dietary data dissemination platform, and capacity development activities. Although the INDDEX Project activities are expected to improve the ability of researchers and policymakers in low-income countries to collect, process, and use dietary data, the global nutrition community is urged to commit further significant investments in order to adequately address the range and scope of challenges described in this paper. PMID:28300759

  2. Propensity and Risk Assessment for Solar Particle Events: Consideration of Integral Fluence at High Proton Energies

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Hayat, Matthew J.; Feiveson, Alan H.; Cucinotta, Francis A.

    2008-01-01

    For future space missions of longer duration, exposure to large solar particle events (SPEs) with high energy levels is the major concern during extra-vehicular activities (EVAs) on the lunar and Mars surface. The expected SPE propensity for large proton fluence was estimated from a non-homogeneous Poisson model using the historical database of measurements of protons with energy >30 MeV, Phi(sub 30). The database includes a continuous data set for the past 5 solar cycles. The resultant SPE risk analysis for a specific mission period was made, including the 95% confidence level. In addition to the total particle intensity of an SPE, the detailed energy spectra of protons, especially at high energy levels, were recognized as an extremely important parameter for risk assessment, since a significant cancer risk remains from those energetic particles in large events. Using all the recorded proton fluences of SPEs for energies >60 and >100 MeV, Phi(sub 60) and Phi(sub 100), respectively, the expected propensities of SPEs abundant in high energy protons were estimated from the same non-homogeneous Poisson model and the representative cancer risk was analyzed. The dependencies of risk on different energy spectra, e.g., between soft and hard SPEs, were evaluated. Finally, we describe approaches to improve radiation protection of astronauts and optimize mission planning for future space missions.
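
    The occurrence side of such a model reduces, for a fixed event rate, to elementary Poisson arithmetic; a minimal sketch (the rate below is hypothetical, and the paper's model is non-homogeneous, so the rate would vary with solar-cycle phase rather than stay constant):

        import math

        def prob_at_least_one(rate_per_year, mission_years):
            """P(N >= 1) for a Poisson process with the given constant rate."""
            return 1.0 - math.exp(-rate_per_year * mission_years)

        # Hypothetical: 6 large SPEs per year near solar maximum, 90-day mission
        p = prob_at_least_one(rate_per_year=6.0, mission_years=90 / 365.25)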

  3. Natural scene logo recognition by joint boosting feature selection in salient regions

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Sun, Jun; Naoi, Satoshi; Minagawa, Akihiro; Hotta, Yoshinobu

    2011-01-01

    Logos are considered valuable intellectual property and a key component of the goodwill of a business. In this paper, we propose a natural scene logo recognition method that is segmentation-free and capable of processing images extremely rapidly while achieving high recognition rates. The classifiers for each logo are trained jointly, rather than independently; in this way, common features can be shared across multiple classes for better generalization. To deal with the large range of aspect ratios across different logos, a set of salient regions of interest (ROIs) is extracted to describe each class. We ensure that the selected ROIs are both individually informative and pairwise weakly dependent through a Class Conditional Entropy Maximization criterion. Experimental results on a large logo database demonstrate the effectiveness and efficiency of the proposed method.

  4. A regressive storm model for extreme space weather

    NASA Astrophysics Data System (ADS)

    Terkildsen, Michael; Steward, Graham; Neudegg, Dave; Marshall, Richard

    2012-07-01

    Extreme space weather events, while rare, pose significant risk to society in the form of impacts on critical infrastructure such as power grids, and the disruption of high end technological systems such as satellites and precision navigation and timing systems. There has been an increased focus on modelling the effects of extreme space weather, as well as improving the ability of space weather forecast centres to identify, with sufficient lead time, solar activity with the potential to produce extreme events. This paper describes the development of a data-based model for predicting the occurrence of extreme space weather events from solar observation. The motivation for this work was to develop a tool to assist space weather forecasters in early identification of solar activity conditions with the potential to produce extreme space weather, and with sufficient lead time to notify relevant customer groups. Data-based modelling techniques were used to construct the model, and an extensive archive of solar observation data used to train, optimise and test the model. The optimisation of the base model aimed to eliminate false negatives (missed events) at the expense of a tolerable increase in false positives, under the assumption of an iterative improvement in forecast accuracy during progression of the solar disturbance, as subsequent data becomes available.

  5. A web-based quantitative signal detection system on adverse drug reaction in China.

    PubMed

    Li, Chanjuan; Xia, Jielai; Deng, Jianxiong; Chen, Wenge; Wang, Suzhen; Jiang, Jing; Chen, Guanquan

    2009-07-01

    To establish a web-based quantitative signal detection system for adverse drug reactions (ADRs) based on spontaneous reporting to the Guangdong province drug-monitoring database in China. Using the Microsoft Visual Basic and Active Server Pages programming languages and SQL Server 2000, a web-based system with three software modules was programmed to perform data preparation and association detection, and to generate reports. The information component (IC), an internationally recognized measure of disproportionality for quantitative signal detection, was integrated into the system, and its capacity for signal detection was tested with ADR reports collected from 1 January 2002 to 30 June 2007 in Guangdong. A total of 2,496 associations, including known signals, were mined from the test database. Signals (e.g., cefradine-induced hematuria) were found early using IC analysis. In addition, 291 drug-ADR associations were alerted for the first time in the second quarter of 2007. The system, the first of its kind in China, can be used to detect significant associations in the Guangdong drug-monitoring database and could be an extremely useful adjunct to expert assessment of very large numbers of spontaneously reported ADRs.
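
    The point estimate of the information component is a log ratio of observed to expected co-reporting; a minimal sketch from a 2x2 contingency table (BCPNN implementations add Bayesian shrinkage and credibility intervals, omitted here, and the counts are invented):

        import math

        def information_component(n_joint, n_drug, n_adr, n_total):
            """IC = log2( P(drug, adr) / (P(drug) * P(adr)) )."""
            p_joint = n_joint / n_total
            p_drug = n_drug / n_total
            p_adr = n_adr / n_total
            return math.log2(p_joint / (p_drug * p_adr))

        ic = information_component(n_joint=12, n_drug=800, n_adr=300, n_total=100_000)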

  6. Young students, satellites aid understanding of climate-biosphere link

    NASA Astrophysics Data System (ADS)

    White, Michael A.; Schwartz, Mark D.; Running, Steven W.

    Data collected by young students from kindergarten through high school are being combined with satellite data to develop a more consistent understanding of the intimate connection between climate dynamics and the terrestrial biosphere. Comparison of the two sets of data involving the onset of budburst among trees and other vegetation has been extremely encouraging. Surface-atmosphere interactions involving exchanges of carbon, water, and energy are strongly affected by interannual variability in the timing and length of the vegetation growing season, and satellite remote sensing has the unique ability to consistently monitor global spatiotemporal variability in growing season dynamics. But without a clear picture of how satellite information relates to ground conditions, the application of satellite growing season estimates to monitoring climate-vegetation interactions, calculating energy budgets, and large-scale ecological modeling is extremely limited. The integrated phenological analysis of field data, satellite observations, and climate advocated by Schwartz [1998], for example, has been primarily limited by the lack of geographically extensive and consistently measured phenology databases.

  7. The Prevalence of Congenital Hand and Upper Extremity Anomalies Based Upon the New York Congenital Malformations Registry.

    PubMed

    Goldfarb, Charles A; Shaw, Neil; Steffen, Jennifer A; Wall, Lindley B

    2017-03-01

    There have been few publications regarding the prevalence of congenital upper extremity anomalies and no recent reports from the United States. The purpose of this investigation was to examine the prevalence of congenital upper extremity anomalies in the total birth population of New York State over a 19-year period utilizing the New York Congenital Malformations Registry (NYCMR) database. The NYCMR includes children with at least 1 birth anomaly diagnosed by 2 years of age and listed by diagnosis code. We scrutinized these codes for specific upper extremity anomalies, including polydactyly, syndactyly, reduction defects, clubhand malformations, and syndromes with upper limb anomalies. We included children born between 1992 and 2010. There were a total of 4,883,072 live births in New York State during the study period. The overall prevalence of congenital upper extremity anomalies was 27.2 cases per 10,000 live births. Polydactyly was most common with 12,418 cases and a prevalence rate of 23.4 per 10,000 live births. The next most common anomalies included syndactyly with 627 cases affecting the hands (1498 total) and reduction defects (1111 cases). Specific syndromes were quite rare and were noted in a total of 215 live births. The prevalence of anomalies was higher in New York City compared with New York State populations at 33.0 and 21.9 per 10,000 live births, respectively. The NYCMR data demonstrate that congenital upper extremity anomalies are more common than previously reported. This is in large part due to the high prevalence of polydactyly. Although registries are imperfect, such data are helpful in monitoring prevalence rates over time, identifying potential causes or associations, and guiding health care planning and future research. Level I-diagnostic.

  8. Pseudonymisation of radiology data for research purposes

    NASA Astrophysics Data System (ADS)

    Noumeir, Rita; Lemay, Alain; Lina, Jean-Marc

    2005-04-01

    Medical image processing methods and algorithms, developed by researchers, need to be validated and tested. Test data should ideally be real clinical data, especially when that clinical data is varied and exists in large volume. Nowadays, clinical data is accessible electronically and has important value for researchers. However, the use of clinical data for research purposes must respect data confidentiality, the patient's right to privacy, and patient consent. In fact, clinical data is nominative, given that it contains information about the patient such as name, age and identification number. Evidently, clinical data should be de-identified before being exported to research databases. However, the same patient is usually followed over a long period of time, and disease progression and diagnostic evolution represent extremely valuable information for researchers as well. Our objective is to build a research database from de-identified clinical data while enabling the database to be easily incremented by exporting new pseudonymous data acquired over a long period of time. Pseudonymisation is data de-identification such that data belonging to the same individual in the clinical environment bear the same relation to each other in the de-identified research version. In this paper, we propose a software architecture that enables the implementation of a research database that can be incremented in time. We also evaluate its security and discuss its security pitfalls.

  9. The Amordad database engine for metagenomics.

    PubMed

    Behnam, Ehsan; Smith, Andrew D

    2014-10-15

    Several technical challenges in metagenomic data analysis, including assembling metagenomic sequence data or identifying operational taxonomic units, are both significant and well known. These forms of analysis are increasingly cited as conceptually flawed, given the extreme variation within traditionally defined species and rampant horizontal gene transfer. Furthermore, computational requirements of such analysis have hindered content-based organization of metagenomic data at large scale. In this article, we introduce the Amordad database engine for alignment-free, content-based indexing of metagenomic datasets. Amordad places the metagenome comparison problem in a geometric context, and uses an indexing strategy that combines random hashing with a regular nearest neighbor graph. This framework allows refinement of the database over time by continual application of random hash functions, with the effect of each hash function encoded in the nearest neighbor graph. This eliminates the need to explicitly maintain the hash functions in order for query efficiency to benefit from the accumulated randomness. Results on real and simulated data show that Amordad can support logarithmic query time for identifying similar metagenomes even as the database size reaches into the millions. Source code, licensed under the GNU General Public License (version 3), is freely available for download from http://smithlabresearch.org/amordad. Contact: andrewds@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
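
    The two ingredients the abstract names, random hashing and a nearest-neighbor structure, can be illustrated with a toy locality-sensitive hashing scheme over k-mer frequency vectors. Everything below (vector size, bit count, round count, function names) is an invented stand-in, not the Amordad implementation.

```python
import numpy as np

def random_hyperplane_hash(vectors, n_bits=8, seed=0):
    """Bucket vectors by their sign pattern against random hyperplanes (LSH)."""
    planes = np.random.default_rng(seed).normal(size=(vectors.shape[1], n_bits))
    bits = (vectors @ planes) > 0
    return [tuple(row) for row in bits]

def query(vec, db, n_rounds=5, k=3):
    """Collect candidates over several hash rounds, then rank them exactly.

    Each round stands in for one random hash function; candidates accumulate
    across rounds, loosely mimicking refinement of the index over time.
    """
    candidates = set()
    for seed in range(n_rounds):
        keys = random_hyperplane_hash(db, seed=seed)
        qkey = random_hyperplane_hash(vec[None, :], seed=seed)[0]
        candidates.update(i for i, key in enumerate(keys) if key == qkey)
    ranked = sorted(candidates or range(len(db)),
                    key=lambda i: np.linalg.norm(db[i] - vec))
    return ranked[:k]

rng = np.random.default_rng(0)
db = rng.random((1000, 64))   # toy corpus: one k-mer frequency vector per metagenome
print(query(db[42], db))      # db[42] itself should rank first
```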

  10. Individuals with chronic ankle instability exhibit dynamic postural stability deficits and altered unilateral landing biomechanics: A systematic review.

    PubMed

    Simpson, Jeffrey D; Stewart, Ethan M; Macias, David M; Chander, Harish; Knight, Adam C

    2018-06-13

    To evaluate the literature regarding unilateral landing biomechanics and dynamic postural stability in individuals with and without chronic ankle instability (CAI). Four online databases (PubMed, ScienceDirect, Scopus, and SportDiscus) were searched from the earliest records to 31 January 2018, as well as reference sections of related journal articles, to complete the systematic search. Studies investigating the influence of CAI on unilateral landing biomechanics and dynamic postural stability were systematically reviewed and evaluated. Twenty articles met the criteria and were included in the systematic review. Individuals with CAI were found to have deficits in dynamic postural stability on the affected limb, with medium to large effect sizes, and altered lower extremity kinematics, most notably at the ankle and knee, with medium to large effect sizes. Additionally, greater loading rates and peak ground reaction forces, as well as reduced ankle muscle activity, were found in individuals with CAI during unilateral jump-landing tasks. Individuals with CAI demonstrate dynamic postural stability deficits, lower extremity kinematic alterations, and reduced neuromuscular control during unilateral jump-landings. These are likely factors that contribute to recurrent lateral ankle sprain injuries during dynamic activity in individuals with CAI. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Fermi-LAT Gamma-Ray Bursts and Insights from Swift

    NASA Technical Reports Server (NTRS)

    Racusin, Judith L.

    2010-01-01

    A new revolution in Gamma-ray Burst (GRB) observations and theory has begun over the last two years since the launch of the Fermi Gamma-ray Space Telescope. The new window into high-energy gamma-rays opened by the Fermi Large Area Telescope (LAT) is providing insight into prompt emission mechanisms and possibly also afterglow physics. The LAT-detected GRBs appear to be a new unique subset of extremely energetic and bright bursts compared to the large sample detected by Swift over the last 6 years. In this talk, I will discuss the context and recent discoveries from these LAT GRBs and the large database of broadband observations collected by the Swift X-ray Telescope (XRT) and UV/Optical Telescope (UVOT). Through comparisons between the GRBs detected by Swift-BAT, GBM, and LAT, we can learn about the unique characteristics, physical differences, and relationships between each population. These population characteristics provide insight into the different physical parameters that contribute to the diversity of observational GRB properties.

  12. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations, once thought to require expensive hardware designs and/or complex, special-purpose programming, may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose “latency-tolerant” programming framework. One important developed application of the ideas underlying this framework is graph database technology supporting social network pattern matching, used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.
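
    Unstructured sparse systems of the kind circuit simulation produces are conventionally handled with sparse direct or iterative solvers; the snippet below is a generic SciPy illustration of that task, not the latency-tolerant framework the grant describes. The matrix, density, and diagonal shift are arbitrary choices.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import spsolve

# Random unstructured sparse system A x = b, shifted along the diagonal so
# the matrix is nonsingular; a circuit netlist would supply A in practice.
n = 2000
A = (sparse_random(n, n, density=1e-3, random_state=1, format="csr")
     + 10 * identity(n, format="csr"))
b = np.random.default_rng(1).random(n)

x = spsolve(A.tocsc(), b)       # sparse direct factorization and solve
print(np.abs(A @ x - b).max())  # residual should be near machine precision
```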

  13. Global Performance Characterization of the Three Burn Trans-Earth Injection Maneuver Sequence over the Lunar Nodal Cycle

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Davis, Elizabeth C.; Lee, David E.; Condon, Gerald L.; Dawn, Tim

    2009-01-01

    The Orion spacecraft will be required to perform a three-burn trans-Earth injection (TEI) maneuver sequence to return to Earth from low lunar orbit. The origin of this approach lies in the Constellation Program requirements for access to any lunar landing site location combined with anytime lunar departure. This paper documents the development of optimized databases used to rapidly model the performance requirements of the TEI three-burn sequence for an extremely large number of mission cases. It also discusses performance results for lunar departures covering a complete 18.6 year lunar nodal cycle as well as general characteristics of the optimized three-burn TEI sequence.

  14. Hot topics in biodiversity and climate change research

    PubMed Central

    Brook, Barry W.; Fordham, Damien A.

    2015-01-01

    With scientific and societal interest in biodiversity impacts of climate change growing enormously over the last decade, we analysed directions and biases in the most highly cited recent data papers in this field of research (2012 to 2014). The majority of this work relied on leveraging large databases of already collected historical information (but not paleo- or genetic data), and coupled these to new methodologies for making forward projections of shifts in species’ geographical ranges, with a focus on temperate and montane plants. A consistent finding was that the pace of climate-driven habitat change, along with increased frequency of extreme events, is outpacing the capacity of species or ecological communities to respond and adapt. PMID:26594350

  15. Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA)

    NASA Technical Reports Server (NTRS)

    Lichtwardt, Jonathan; Paciano, Eric; Jameson, Tina; Fong, Robert; Marshall, David

    2012-01-01

    With the very recent advent of NASA's Environmentally Responsible Aviation (ERA) Project, which is dedicated to designing aircraft that will reduce the impact of aviation on the environment, there is a need for research and development of methodologies to minimize fuel burn and emissions and to reduce community noise produced by regional airliners. ERA tackles airframe technology, propulsion technology, and vehicle systems integration to meet performance objectives in the time frame for the aircraft to be at a Technology Readiness Level (TRL) of 4-6 by the year 2020 (deemed N+2). The preceding project that investigated similar goals was NASA's Subsonic Fixed Wing (SFW) project. SFW focused on conducting research to improve prediction methods and technologies that will produce lower-noise, lower-emission, and higher-performing subsonic aircraft for the Next Generation Air Transportation System. The work in this investigation was performed under NASA Research Announcement (NRA) contract #NNL07AA55C, funded by Subsonic Fixed Wing. The project started in 2007 with the specific goal of conducting a large-scale wind tunnel test along with the development of new and improved predictive codes for advanced powered-lift concepts. Many of the predictive codes were used to refine the wind tunnel model outer mold line design. The goal of the large-scale wind tunnel test was to investigate powered-lift technologies and provide an experimental database to validate current and future modeling techniques. The powered-lift concept investigated was a Circulation Control (CC) wing in conjunction with over-the-wing mounted engines, which entrain the exhaust to further increase the lift generated by CC technologies alone. The NRA was a five-year effort: during the first year the objective was to select and refine CESTOL concepts and then to complete a preliminary design of a large-scale wind tunnel model; during the second, third, and fourth years the large-scale wind tunnel model was designed, manufactured, and calibrated; and during the fifth year the large-scale wind tunnel test was conducted. This technical memo describes all phases of the Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA) project and provides a brief summary of the background and modeling efforts involved in the NRA. The conceptual designs considered for this project and the decision process for the configuration adapted for a wind tunnel model are briefly discussed, as are the internal configuration of AMELIA and the internal measurements chosen to satisfy the requirement of obtaining a database of experimental data for future computational model validation. The external experimental techniques employed during the test, along with the large-scale wind tunnel test facility, are covered in great detail. Experimental measurements in the database include forces and moments, surface pressure distributions, local skin friction measurements, boundary and shear layer velocity profiles, far-field acoustic data, and noise signatures from turbofan propulsion simulators. Results and discussion of the circulation control performance, the over-the-wing mounted engines, and the combined performance are also presented in great detail.

  16. EUV spectroscopy of highly charged high Z ions in the Large Helical Device plasmas

    NASA Astrophysics Data System (ADS)

    Suzuki, C.; Koike, F.; Murakami, I.; Tamura, N.; Sudo, S.; Sakaue, H. A.; Nakamura, N.; Morita, S.; Goto, M.; Kato, D.; Nakano, T.; Higashiguchi, T.; Harte, C. S.; OʼSullivan, G.

    2014-11-01

    We present recent results on the extreme ultraviolet (EUV) spectroscopy of highly charged high Z ions in plasmas produced in the Large Helical Device (LHD) at the National Institute for Fusion Science. Tungsten, bismuth and lanthanide elements have recently been studied in the LHD in terms of their importance in fusion research and EUV light source development. In relatively low temperature plasmas, quasicontinuum emissions from open 4d or 4f subshell ions are predominant in the EUV region, while the spectra tend to be dominated by discrete lines from open 4s or 4p subshell ions in higher temperature plasmas. Comparative analyses using theoretical calculations and charge-separated spectra observed in an electron beam ion trap have been performed to achieve better agreement with the spectra measured in the LHD. As a result, databases on Z dependence of EUV spectra in plasmas have been widely extended.

  17. Temporal Wind Pairs for Space Launch Vehicle Capability Assessment and Risk Mitigation

    NASA Technical Reports Server (NTRS)

    Decker, Ryan K.; Barbre, Robert E., Jr.

    2015-01-01

    Space launch vehicles incorporate upper-level wind assessments to determine wind effects on the vehicle and for a commit to launch decision. These assessments make use of wind profiles measured hours prior to launch and may not represent the actual wind the vehicle will fly through. Uncertainty in the winds over the time period between the assessment and launch introduces uncertainty in assessment of vehicle controllability and structural integrity that must be accounted for to ensure launch safety. Temporal wind pairs are used in engineering development of allowances to mitigate uncertainty. Five sets of temporal wind pairs at various times (0.75, 1.5, 2, 3 and 4-hrs) at the United States Air Force Eastern Range and Western Range, as well as the National Aeronautics and Space Administration's Wallops Flight Facility are developed for use in upper-level wind assessments on vehicle performance. Historical databases are compiled from balloon-based and vertically pointing Doppler radar wind profiler systems. Various automated and manual quality control procedures are used to remove unacceptable profiles. Statistical analyses on the resultant wind pairs from each site are performed to determine if the observed extreme wind changes in the sample pairs are representative of extreme temporal wind change. Wind change samples in the Eastern Range and Western Range databases characterize extreme wind change. However, the small sample sizes in the Wallops Flight Facility databases yield low confidence that the sample population characterizes extreme wind change that could occur.

  18. Temporal Wind Pairs for Space Launch Vehicle Capability Assessment and Risk Mitigation

    NASA Technical Reports Server (NTRS)

    Decker, Ryan K.; Barbre, Robert E., Jr.

    2014-01-01

    Space launch vehicles incorporate upper-level wind assessments to determine wind effects on the vehicle and for a commit to launch decision. These assessments make use of wind profiles measured hours prior to launch and may not represent the actual wind the vehicle will fly through. Uncertainty in the winds over the time period between the assessment and launch introduces uncertainty in assessment of vehicle controllability and structural integrity that must be accounted for to ensure launch safety. Temporal wind pairs are used in engineering development of allowances to mitigate uncertainty. Five sets of temporal wind pairs at various times (0.75, 1.5, 2, 3 and 4-hrs) at the United States Air Force Eastern Range and Western Range, as well as the National Aeronautics and Space Administration's Wallops Flight Facility are developed for use in upper-level wind assessments on vehicle performance. Historical databases are compiled from balloon-based and vertically pointing Doppler radar wind profiler systems. Various automated and manual quality control procedures are used to remove unacceptable profiles. Statistical analyses on the resultant wind pairs from each site are performed to determine if the observed extreme wind changes in the sample pairs are representative of extreme temporal wind change. Wind change samples in the Eastern Range and Western Range databases characterize extreme wind change. However, the small sample sizes in the Wallops Flight Facility databases yield low confidence that the sample population characterizes extreme wind change that could occur.

  19. Back to the Scriptorium: Database Marketplace 2009

    ERIC Educational Resources Information Center

    Tenopir, Carol; Baker, Gayle; Grogg, Jill E.

    2009-01-01

    The 2009 database marketplace is bounded by two extremes: massive digitization projects to increase access, and retrenchment owing to budget worries. Picture medieval monks hunched over their desks in the scriptorium as they labor to copy manuscripts. A 21st-century version of this activity is being repeated daily in the world's libraries and…

  20. A Novel Observation-Guided Approach for Evaluating Mesoscale Convective Systems Simulated by the DOE ACME Model

    NASA Astrophysics Data System (ADS)

    Feng, Z.; Ma, P. L.; Hardin, J. C.; Houze, R.

    2017-12-01

    Mesoscale convective systems (MCSs) are the largest type of convective storms that develop when convection aggregates and induces mesoscale circulation features. Over North America, MCSs contribute over 60% of the total warm-season precipitation and over half of the extreme daily precipitation in the central U.S. Our recent study (Feng et al. 2016) found that the observed increases in springtime total and extreme rainfall in this region are dominated by increased frequency and intensity of long-lived MCSs*. To date, global climate models typically do not run at a resolution high enough to explicitly simulate individual convective elements and may not have adequate process representations for MCSs, resulting in a large deficiency in projecting changes in the frequency of extreme precipitation events in future climate. In this study, we developed a novel observation-guided approach specifically designed to evaluate simulated MCSs in the Department of Energy's climate model, Accelerated Climate Modeling for Energy (ACME). The ACME model has advanced treatments for convection and subgrid variability and for this study is run at 25 km and 100 km grid spacings. We constructed a robust MCS database consisting of over 500 MCSs from 3 warm seasons of observations by applying a feature-tracking algorithm to 4-km resolution merged geostationary satellite and 3-D NEXRAD radar network data over the Continental US. This high-resolution MCS database is then down-sampled to the 25 and 100 km ACME grids to re-characterize key MCS properties. The feature-tracking algorithm is adapted with the adjusted characteristics to identify MCSs from ACME model simulations. We demonstrate that this new analysis framework is useful for evaluating ACME's warm-season precipitation statistics associated with MCSs, and provides insights into the model process representations related to extreme precipitation events for future improvement. *Feng, Z., L. R. Leung, S. Hagos, R. A. Houze, C. D. Burleyson, and K. Balaguru (2016), More frequent intense and long-lived storms dominate the springtime trend in central US rainfall, Nat Commun, 7, 13429, doi: 10.1038/ncomms13429.
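
    Feature tracking of the kind described is commonly built from two generic steps: labeling contiguous pixels above a rain-rate threshold, then linking labels that overlap between consecutive times. The sketch below shows that generic idea on synthetic fields; it is not the specific algorithm applied to the satellite/NEXRAD database, and all sizes and thresholds are invented.

```python
import numpy as np
from scipy import ndimage

def label_features(field, threshold):
    """Label connected regions exceeding a rain-rate threshold."""
    labels, _ = ndimage.label(field > threshold)
    return labels

def link_times(labels_t0, labels_t1):
    """Match each feature at t0 to the t1 feature it overlaps most."""
    links = {}
    for lab in range(1, labels_t0.max() + 1):
        overlap = labels_t1[labels_t0 == lab]
        overlap = overlap[overlap > 0]
        if overlap.size:
            links[lab] = int(np.bincount(overlap).argmax())
    return links

rng = np.random.default_rng(2)
rain_t0 = ndimage.gaussian_filter(rng.random((50, 50)), 3)  # synthetic rain field
rain_t1 = np.roll(rain_t0, 2, axis=1)                       # same storms, advected

l0 = label_features(rain_t0, rain_t0.mean())
l1 = label_features(rain_t1, rain_t1.mean())
print(link_times(l0, l1))  # {t0 label: matched t1 label}
```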

  1. Traditional Chinese and western medicine for the prevention of deep venous thrombosis after lower extremity orthopedic surgery: a meta-analysis of randomized controlled trials.

    PubMed

    Zhu, Shibai; Song, Yi; Chen, Xi; Qian, Wenwei

    2018-04-10

    Chinese herbal medicine has traditionally been considered to promote blood circulation, remove obstruction in the channels, clear pathogenic heat, and drain dampness. We conducted this meta-analysis to evaluate its benefits for the prevention of deep venous thrombosis (DVT) after lower extremity orthopedic surgery. Relevant, published studies were identified using the following keywords: lower extremity orthopedic surgery, arthroplasty, joint replacement, fracture, traditional Chinese and western medicine, Chinese herbal medicine, deep venous thrombosis (DVT), and venous thromboembolism (VTE). The following databases were used to identify the literature, consisting of RCTs, with a search date of 31 May 2017: PubMed, Cochrane Library, Web of Knowledge, the Chinese National Knowledge Infrastructure Database, the Chongqing VIP Database, the Chinese Biomedical Database, and the Wanfang Database (three English and four Chinese databases). All relevant data were collected from studies meeting the inclusion criteria. The outcome variables were the incidence rate of DVT, activated partial thromboplastin time (APTT), prothrombin time (PT), and D-dimer; subcutaneous hematoma; and other reported outcomes. RevMan 5.2 software was adopted for the meta-analysis. A total of 20 published studies (1862 cases) met the inclusion criteria. The experimental group, 910 patients (48.87%), received Chinese herbal medicine or traditional Chinese and western medicine for prevention of DVT; the control group, 952 patients (51.13%), received standard western treatment. The meta-analysis showed that traditional Chinese and western medicine therapy reduced the incidence rate of DVT significantly when compared with controls (risk ratio [RR] = 0.40; 95% CI, 0.30 to 0.54; P < 0.00001), and D-dimer was lower in the experimental group (P = 0.01). In addition, the incidence rate of subcutaneous hematoma was lower in the experimental group (P < 0.0001). However, no significant difference was found in PT (P = 0.98) or APTT (P = 0.75) between the two groups. No serious adverse events were reported. Traditional Chinese and western medicine therapy may be a safe, effective prevention modality for DVT after lower extremity orthopedic surgery. Further rigorously designed, randomized trials are warranted.
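
    To make the pooled risk ratio concrete, here is a minimal sketch of Mantel-Haenszel pooling; the three trials and their counts are invented for illustration and are not the studies analyzed in this review (which used RevMan).

```python
def mantel_haenszel_rr(studies):
    """Pooled risk ratio via Mantel-Haenszel weighting.

    Each study is a tuple (events_treated, n_treated, events_control, n_control).
    """
    num = sum(a * n2 / (n1 + n2) for a, n1, c, n2 in studies)
    den = sum(c * n1 / (n1 + n2) for a, n1, c, n2 in studies)
    return num / den

# Hypothetical DVT counts from three invented trials.
trials = [(4, 50, 11, 52), (6, 80, 14, 78), (3, 45, 9, 47)]
print(f"pooled RR = {mantel_haenszel_rr(trials):.2f}")  # RR < 1 favours the treatment
```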

  2. Diversity of virus-host systems in hypersaline Lake Retba, Senegal.

    PubMed

    Sime-Ngando, Télesphore; Lucas, Soizick; Robin, Agnès; Tucker, Kimberly Pause; Colombet, Jonathan; Bettarel, Yvan; Desmond, Elie; Gribaldo, Simonetta; Forterre, Patrick; Breitbart, Mya; Prangishvili, David

    2011-08-01

    Remarkable morphological diversity of virus-like particles was observed by transmission electron microscopy in a hypersaline water sample from Lake Retba, Senegal. The majority of particles morphologically resembled hyperthermophilic archaeal DNA viruses isolated from extreme geothermal environments. Some hypersaline viral morphotypes have not been previously observed in nature, and less than 1% of observed particles had a head-and-tail morphology, which is typical for bacterial DNA viruses. Culture-independent analysis of the microbial diversity in the sample suggested the dominance of extremely halophilic archaea. Few of the 16S sequences corresponded to known archaeal genera (Haloquadratum, Halorubrum and Natronomonas), whereas the majority represented novel archaeal clades. Three sequences corresponded to a new basal lineage of the haloarchaea. Bacteria belonged to four major phyla, consistent with the known diversity in saline environments. Metagenomic sequencing of DNA from the purified virus-like particles revealed very few similarities to the NCBI non-redundant database at either the nucleotide or amino acid level. Some of the identifiable virus sequences were most similar to previously described haloarchaeal viruses, but no sequence similarities were found to archaeal viruses from extreme geothermal environments. A large proportion of the sequences had similarity to previously sequenced viral metagenomes from solar salterns. © 2010 Society for Applied Microbiology and Blackwell Publishing Ltd.

  3. Does morbid obesity negatively affect the hospital course of patients undergoing treatment of closed, lower-extremity diaphyseal long-bone fractures?

    PubMed

    Baldwin, Keith D; Matuszewski, Paul E; Namdari, Surena; Esterhai, John L; Mehta, Samir

    2011-01-03

    Obesity is prevalent in the developed world and is associated with significant costs to the health care system. The effect of morbid obesity in patients operatively treated for long-bone fractures of the lower extremity is largely unknown. The National Trauma Data Bank was accessed to determine if morbidly obese patients (body mass index >40) with lower extremity fractures have longer length of hospital stay, higher cost, greater rehabilitation admission rates, and more complications than nonobese patients. We identified patients with operatively treated diaphyseal femur (6920) and tibia (5190) fractures. Polytrauma patients and patients younger than 16 years were excluded. Morbidly obese patients were identified by ICD-9 and database comorbidity designation (femur, 131 morbidly obese; tibia, 75 morbidly obese). Patients meeting these criteria who were not morbidly obese were used as controls. Sensitivity analyses were performed to analyze patients with isolated trauma to the tibia or femur. Morbidly obese patients were more likely to be admitted to a subacute facility. Length of stay trended higher in morbidly obese patients. There was no significant relationship between obesity and inpatient mortality or inpatient complications. These trends held true when considering patients with multiple injuries and patients who had isolated long-bone injuries. Our study showed that morbidly obese patients may have greater rehabilitation needs following long-bone fractures in the lower extremity. Our study showed no difference in mortality or complications, although further studies are needed to confirm these findings. Copyright 2011, SLACK Incorporated.

  4. Hazards of Extreme Weather: Flood Fatalities in Texas

    NASA Astrophysics Data System (ADS)

    Sharif, H. O.; Jackson, T.; Bin-Shafique, S.

    2009-12-01

    The Federal Emergency Management Agency (FEMA) considers flooding “America’s Number One Natural Hazard”. Despite flood management efforts in many communities, U.S. flood damages remain high, due in large part to increasing population and property development in flood-prone areas. Floods are the leading cause of fatalities related to natural disasters in Texas, and Texas leads the nation in flash flood fatalities, with three times more fatalities (840) than the next-ranked state, Pennsylvania (265). This study examined flood fatalities that occurred in Texas between 1960 and 2008. Flood fatality statistics were extracted from three sources: flood fatality databases from the National Climatic Data Center, the Spatial Hazard Event and Loss Database for the United States, and the Texas Department of State Health Services. The data collected for flood fatalities include the date, time, gender, age, location, and weather conditions. Inconsistencies among the three databases were identified and discussed. Analysis reveals that most fatalities result from driving into flood water (about 65%). Spatial analysis indicates that more fatalities occurred in counties containing major urban centers. Hydrologic analysis of a flood event that resulted in five fatalities was performed; a hydrologic model was able to simulate the water level at a location where a vehicle was swept away by flood water, resulting in the death of the driver.

  5. Extreme precipitation and floods in the Iberian Peninsula and its socio-economic impacts

    NASA Astrophysics Data System (ADS)

    Ramos, A. M.; Pereira, S.; Trigo, R. M.; Zêzere, J. L.

    2017-12-01

    Extreme precipitation events in the Iberian Peninsula can induce floods and landslides that often have major socio-economic impacts. The DISASTER database gathered the basic information on past floods and landslides that caused social consequences in Portugal for the period 1865-2015. This database was built under the assumption that the social consequences of floods and landslides are sufficiently relevant to be reported by newspapers, which provide the data source. Three extreme historical events were analysed in detail, taking into account their wide-ranging socio-economic impacts. The December 1876 record precipitation and flood event led to all-time record flows in two large international rivers (the Tagus and the Guadiana); as a direct consequence, several Portuguese and Spanish towns and villages located on the banks of both rivers suffered serious flood damage on 7 December 1876. The 20-28 December 1909 event recorded the highest number of flood and landslide cases that occurred in Portugal in the period 1865-2015, having triggered the highest floods in 200 years at the Douro river's mouth and caused 89 fatalities in the northern regions of both Portugal and Spain. More recently, the deadliest flash-flooding event to affect Portugal since at least the early 19th century took place on 25 and 26 November 1967, causing more than 500 fatalities in the Lisbon region. We provide a detailed analysis of each of these events, including their human impacts, precipitation analyses based on historical datasets, and the associated atmospheric circulation conditions from reanalysis datasets. Acknowledgements: This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [PTDC / ATPGEO / 1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. A. M. Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/ SFRH/BPD/84328/2012). The financial support for attending this workshop was also possible through FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.

  6. Effects of temperature and precipitation variability on the risk of violence in sub-Saharan Africa, 1980–2012

    PubMed Central

    O’Loughlin, John; Linke, Andrew M.; Witmer, Frank D. W.

    2014-01-01

    Ongoing debates in the academic community and in the public policy arena continue without clear resolution about the significance of global climate change for the risk of increased conflict. Sub-Saharan Africa is generally agreed to be the region most vulnerable to such climate impacts. Using a large database of conflict events and detailed climatological data covering the period 1980–2012, we apply a multilevel modeling technique that allows for a more nuanced understanding of a climate–conflict link than has been seen heretofore. In the aggregate, high temperature extremes are associated with more conflict; however, different types of conflict and different subregions do not show a consistent relationship with temperature deviations. Precipitation deviations, both high and low, are generally not significant. The location and timing of violence are influenced less by climate anomalies (temperature or precipitation variations from normal) than by key political, economic, and geographic factors. We find important distinctions in the relationship between temperature extremes and conflict by using multiple methods of analysis and by exploiting our time-series cross-sectional dataset for disaggregated analyses. PMID:25385621

  7. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    PubMed

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years, and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that allows it to be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about metadata is difficult, and thus the success of any storage system will ultimately depend on how well the system is used by end-users. In this respect we suggest that even a minimal amount of metadata, if stored in a sensible fashion, is useful, if only at the level of individual research groups. We discuss here a simple database system, which we call 'Bookshelf', that uses Python in conjunction with a MySQL database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to a common problem among biomolecular simulation laboratories: the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/

  8. Bookshelf: a simple curation system for the storage of biomolecular simulation data

    PubMed Central

    Vohra, Shabana; Hall, Benjamin A.; Holdbrook, Daniel A.; Khalid, Syma; Biggin, Philip C.

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years, and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that allows it to be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about metadata is difficult, and thus the success of any storage system will ultimately depend on how well the system is used by end-users. In this respect we suggest that even a minimal amount of metadata, if stored in a sensible fashion, is useful, if only at the level of individual research groups. We discuss here a simple database system, which we call ‘Bookshelf’, that uses Python in conjunction with a MySQL database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to a common problem among biomolecular simulation laboratories: the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/ PMID:21169341
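
    The design point of Bookshelf, that a little well-structured metadata goes a long way, can be sketched in a few lines. The schema and field names below are invented for illustration, and SQLite stands in for the MySQL backend so the example runs without a server.

```python
import sqlite3

# A single table of per-run metadata; even this much makes later retrieval
# of a simulation's trajectory files straightforward.
con = sqlite3.connect(":memory:")  # a real deployment would target MySQL
con.execute("""CREATE TABLE simulations (
    id INTEGER PRIMARY KEY,
    system TEXT, force_field TEXT, length_ns REAL, path TEXT)""")

con.execute("INSERT INTO simulations (system, force_field, length_ns, path) "
            "VALUES (?, ?, ?, ?)",
            ("membrane protein in lipid bilayer", "GROMOS 53A6", 100.0,
             "/data/md/run1/"))

# Scriptable retrieval: all runs at least 50 ns long.
for row in con.execute("SELECT system, length_ns, path FROM simulations "
                       "WHERE length_ns >= ?", (50,)):
    print(row)
con.close()
```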

  9. When is Chemical Similarity Significant? The Statistical Distribution of Chemical Similarity Scores and Its Extreme Values

    PubMed Central

    Baldi, Pierre

    2010-01-01

    As repositories of chemical molecules continue to expand and become more open, it becomes increasingly important to develop tools to search them efficiently and to assess the statistical significance of chemical similarity scores. Here we develop a general framework for understanding, modeling, predicting, and approximating the distribution of chemical similarity scores and its extreme values in large databases. The framework can be applied to different chemical representations and similarity measures but is demonstrated here using the most common binary fingerprints with the Tanimoto similarity measure. After introducing several probabilistic models of fingerprints, including the Conditional Gaussian Uniform model, we show that the distribution of Tanimoto scores can be approximated by the distribution of the ratio of two correlated Normal random variables associated with the corresponding unions and intersections. This remains true also when the distribution of similarity scores is conditioned on the size of the query molecules in order to derive more fine-grained results and improve chemical retrieval. The corresponding extreme value distributions for the maximum scores are approximated by Weibull distributions. From these various distributions and their analytical forms, Z-scores, E-values, and p-values are derived to assess the significance of similarity scores. In addition, the framework also allows one to predict the value of standard chemical retrieval metrics, such as Sensitivity and Specificity at fixed thresholds, or ROC (Receiver Operating Characteristic) curves at multiple thresholds, and to detect outliers in the form of atypical molecules. Numerous and diverse experiments, carried out in part with large sets of molecules from ChemDB, show remarkable agreement between theory and empirical results. PMID:20540577
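
    A minimal numerical sketch of the extreme-value idea: compute Tanimoto scores of a query fingerprint against random databases, collect the maximum score per database, and fit a Weibull to those maxima to assign significance to a top score. The fingerprint density, sizes, and threshold below are arbitrary choices, not the paper's parameters.

```python
import numpy as np
from scipy import stats

def tanimoto(q, db):
    """Tanimoto similarity of a binary fingerprint q against each row of db."""
    inter = (db & q).sum(axis=1)
    union = (db | q).sum(axis=1)
    return inter / union

rng = np.random.default_rng(3)
n_bits, db_size = 256, 5000
query = rng.random(n_bits) < 0.2          # sparse binary query fingerprint

# Maximum score over many random databases gives an extreme-value sample.
maxima = [tanimoto(query, rng.random((db_size, n_bits)) < 0.2).max()
          for _ in range(200)]

shape, loc, scale = stats.weibull_min.fit(maxima)    # Weibull fit to the maxima
print(stats.weibull_min.sf(0.6, shape, loc, scale))  # p-value of a top score of 0.6
```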

  10. Extreme Value Analysis of hydro meteorological extremes in the ClimEx Large-Ensemble

    NASA Astrophysics Data System (ADS)

    Wood, R. R.; Martel, J. L.; Willkofer, F.; von Trentini, F.; Schmid, F. J.; Leduc, M.; Frigon, A.; Ludwig, R.

    2017-12-01

    Many studies show an increase in the magnitude and frequency of hydrological extreme events in the course of climate change. However, the contribution of natural variability to the magnitude and frequency of hydrological extreme events is not yet settled. A reliable estimate of extreme events is of great interest for water management and public safety. In the course of the ClimEx Project (www.climex-project.org), a new single-model large-ensemble was created by dynamically downscaling the CanESM2 large-ensemble with the Canadian Regional Climate Model version 5 (CRCM5) for a European domain and a northeastern North American domain. The ClimEx 50-Member Large-Ensemble (CRCM5 driven by the CanESM2 Large-Ensemble) makes a thorough analysis of natural variability in extreme events possible. Are current extreme value statistical methods able to account for natural variability? How large is the natural variability for, e.g., a 1/100-year return period derived from a 50-member large-ensemble for Europe and northeastern North America? These questions are addressed by applying various generalized extreme value (GEV) distributions to the ClimEx Large-Ensemble. Return levels (5-, 10-, 20-, 30-, 60- and 100-year) based on various lengths of time series (20-, 30-, 50-, 100- and 1500-year) are analyzed for the maximum one-day precipitation (RX1d), the maximum three-hourly precipitation (RX3h) and the streamflow for selected catchments in Europe. The long time series of the ClimEx Ensemble (7500 years) allows a first reliable estimate of the magnitude and frequency of certain extreme events.
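
    The return-level calculation described here reduces to fitting a GEV to a sample of annual maxima and reading off high quantiles. The sketch below uses synthetic data in place of ClimEx output; the Gumbel parameters are arbitrary.

```python
from scipy import stats

# Synthetic 50-year sample of annual maximum 1-day precipitation (mm),
# standing in for one grid cell of one ensemble member.
annual_max = stats.gumbel_r.rvs(loc=40, scale=8, size=50, random_state=4)

shape, loc, scale = stats.genextreme.fit(annual_max)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
for T in (5, 20, 100):
    level = stats.genextreme.ppf(1 - 1 / T, shape, loc, scale)
    print(f"{T:3d}-year return level: {level:.1f} mm")
```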

  11. GIS applications for military operations in coastal zones

    USGS Publications Warehouse

    Fleming, S.; Jordan, T.; Madden, M.; Usery, E.L.; Welch, R.

    2009-01-01

    In order to successfully support current and future US military operations in coastal zones, geospatial information must be rapidly integrated and analyzed to meet ongoing force structure evolution and new mission directives. Coastal zones in a military-operational environment are complex regions that include sea, land and air features that demand high-volume databases of extreme detail within relatively narrow geographic corridors. Static products in the form of analog maps at varying scales traditionally have been used by military commanders and their operational planners. The rapidly changing battlefield of 21st Century warfare, however, demands dynamic mapping solutions. Commercial geographic information system (GIS) software for military-specific applications is now being developed and employed with digital databases to provide customized digital maps of variable scale, content and symbolization tailored to the unique demands of military units. Research conducted by the Center for Remote Sensing and Mapping Science at the University of Georgia demonstrated the utility of GIS-based analysis and digital map creation when developing large-scale (1:10,000) products from littoral warfare databases. The methodology employed is discussed: selection of data sources (including high-resolution commercial images and Lidar), establishment of analysis/modeling parameters, conduct of vehicle mobility analysis, development of models, and generation of products (such as a continuous sea-land DEM and geo-visualization of changing shorelines with tidal levels). Based on observations and identified needs from the National Geospatial-Intelligence Agency, formerly the National Imagery and Mapping Agency, and the Department of Defense, prototype GIS models for military operations in sea, land and air environments were created from multiple data sets of a study area at US Marine Corps Base Camp Lejeune, North Carolina. Results of these models, along with methodologies for developing large-scale littoral warfare databases, aid the National Geospatial-Intelligence Agency in meeting littoral warfare analysis, modeling and map generation requirements for US military organizations. © 2008 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).

  12. GIS applications for military operations in coastal zones

    NASA Astrophysics Data System (ADS)

    Fleming, S.; Jordan, T.; Madden, M.; Usery, E. L.; Welch, R.

    In order to successfully support current and future US military operations in coastal zones, geospatial information must be rapidly integrated and analyzed to meet ongoing force structure evolution and new mission directives. Coastal zones in a military-operational environment are complex regions that include sea, land and air features that demand high-volume databases of extreme detail within relatively narrow geographic corridors. Static products in the form of analog maps at varying scales traditionally have been used by military commanders and their operational planners. The rapidly changing battlefield of 21st Century warfare, however, demands dynamic mapping solutions. Commercial geographic information system (GIS) software for military-specific applications is now being developed and employed with digital databases to provide customized digital maps of variable scale, content and symbolization tailored to the unique demands of military units. Research conducted by the Center for Remote Sensing and Mapping Science at the University of Georgia demonstrated the utility of GIS-based analysis and digital map creation when developing large-scale (1:10,000) products from littoral warfare databases. The methodology employed is discussed: selection of data sources (including high-resolution commercial images and Lidar), establishment of analysis/modeling parameters, conduct of vehicle mobility analysis, development of models, and generation of products (such as a continuous sea-land DEM and geo-visualization of changing shorelines with tidal levels). Based on observations and identified needs from the National Geospatial-Intelligence Agency, formerly the National Imagery and Mapping Agency, and the Department of Defense, prototype GIS models for military operations in sea, land and air environments were created from multiple data sets of a study area at US Marine Corps Base Camp Lejeune, North Carolina. Results of these models, along with methodologies for developing large-scale littoral warfare databases, aid the National Geospatial-Intelligence Agency in meeting littoral warfare analysis, modeling and map generation requirements for US military organizations.

  13. Fermi-LAT Gamma-ray Bursts and Insight from Swift

    NASA Technical Reports Server (NTRS)

    Racusin, Judith L.

    2011-01-01

    A new revolution in GRB observation and theory has begun over the last 3 years since the launch of the Fermi Gamma-ray Space Telescope. The new window into high-energy gamma-rays opened by the Fermi-LAT is providing insight into prompt emission mechanisms and possibly also afterglow physics. The LAT-detected GRBs appear to be a new unique subset of extremely energetic and bright bursts. In this talk I will discuss the context and recent discoveries from these LAT GRBs and the large database of broadband observations collected by Swift over the last 7 years, and how, through comparisons between the Swift, GBM, and LAT GRB samples, we can learn about the unique characteristics of and relationships between each population.

  14. Attributing Historical Changes in Probabilities of Record-Breaking Daily Temperature and Precipitation Extreme Events

    DOE PAGES

    Shiogama, Hideo; Imada, Yukiko; Mori, Masato; ...

    2016-08-07

    Here, we describe two unprecedentedly large (100-member), long-term (61-year) ensembles based on MRI-AGCM3.2, which were driven by historical and non-warming climate forcing. These ensembles comprise the "Database for Policy Decision making for Future climate change (d4PDF)". We compare these ensembles to large ensembles based on another climate model, as well as to observed data, to investigate the influence of anthropogenic activities on historical changes in the numbers of record-breaking events, including the annual coldest daily minimum temperature (TNn), the annual warmest daily maximum temperature (TXx) and the annual most intense daily precipitation event (Rx1day). These two climate model ensembles indicate that human activity has already had statistically significant impacts on the number of record-breaking extreme events worldwide, mainly over Northern Hemisphere land. Specifically, human activities have altered the likelihood that a wider area globally would suffer record-breaking TNn, TXx and Rx1day events than that observed over the 2001-2010 period by a factor of at least 0.6, 5.4 and 1.3, respectively. However, we also find that the estimated spatial patterns and amplitudes of anthropogenic impacts on the probabilities of record-breaking events are sensitive to the climate model and/or natural-world boundary conditions used in the attribution studies.

  15. Anomaly detection applied to a materials control and accounting database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteson, R.; Spanks, L.; Yarbro, T.

    An important component of the national mission of reducing the nuclear danger includes accurate recording of the processing and transportation of nuclear materials. Nuclear material storage facilities, nuclear chemical processing plants, and nuclear fuel fabrication facilities collect and store large amounts of data describing transactions that involve nuclear materials. To maintain confidence in the integrity of these data, it is essential to identify anomalies in the databases. Anomalous data could indicate error, theft, or diversion of material. Yet, because of the complex and diverse nature of the data, analysis and evaluation are extremely tedious. This paper describes the authors' work in the development of analysis tools to automate the anomaly detection process for the Material Accountability and Safeguards System (MASS) that tracks and records the activities associated with accountable quantities of nuclear material at Los Alamos National Laboratory. Using existing guidelines that describe valid transactions, the authors have created an expert system that identifies transactions that do not conform to the guidelines. Thus, this expert system can be used to focus the attention of the expert or inspector directly on significant phenomena.
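
    The expert-system approach amounts to encoding each valid-transaction guideline as a predicate and flagging records that violate any of them. The rules, field names, and transactions below are invented for illustration and are not MASS guidelines.

```python
# One predicate per guideline; a transaction failing any rule is flagged
# for expert review. Fields and thresholds are invented.
RULES = [
    ("nonnegative quantity", lambda t: t["grams"] >= 0),
    ("known material type",  lambda t: t["material"] in {"U-235", "Pu-239"}),
    ("balanced transfer",    lambda t: t["shipped"] == t["received"]),
]

def flag_anomalies(transactions):
    for t in transactions:
        failed = [name for name, rule in RULES if not rule(t)]
        if failed:
            yield t["id"], failed

log = [
    {"id": 1, "grams": 120.0, "material": "U-235", "shipped": 120.0, "received": 120.0},
    {"id": 2, "grams": 80.0,  "material": "U-235", "shipped": 80.0,  "received": 75.0},
]
print(list(flag_anomalies(log)))  # transaction 2 fails "balanced transfer"
```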

  16. Unbiased Protein Association Study on the Public Human Proteome Reveals Biological Connections between Co-Occurring Protein Pairs

    PubMed Central

    2017-01-01

    Mass-spectrometry-based, high-throughput proteomics experiments produce large amounts of data. While typically acquired to answer specific biological questions, these data can also be reused in orthogonal ways to reveal new biological knowledge. Here we present a novel method for such orthogonal reuse of public proteomics data. Our method elucidates biological relationships between proteins based on the co-occurrence of these proteins across human experiments in the PRIDE database. The majority of the significantly co-occurring protein pairs detected by our method were successfully mapped to existing biological knowledge. The validity of our novel method is substantiated by the extremely few pairs that can be mapped to existing knowledge when random associations are drawn between the same set of proteins. Moreover, using literature searches and the STRING database, we were able to derive meaningful biological associations for unannotated protein pairs detected using our method, further illustrating that as-yet unknown associations present highly interesting targets for follow-up analysis. PMID:28480704
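
    Whether two proteins co-occur across experiments more often than chance can be scored with a standard 2x2 contingency test; the sketch below uses Fisher's exact test with invented counts, as one plausible stand-in for the paper's significance calculation.

```python
from scipy.stats import fisher_exact

# 2x2 contingency over a set of experiments for proteins A and B:
# rows = A detected / not detected, columns = B detected / not detected.
# Counts are invented for illustration.
both, only_a, only_b, neither = 60, 15, 12, 913
odds, p = fisher_exact([[both, only_a], [only_b, neither]], alternative="greater")
print(f"odds ratio = {odds:.1f}, one-sided p = {p:.2e}")
```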

  17. Applicability of computational systems biology in toxicology.

    PubMed

    Kongsbak, Kristine; Hadrup, Niels; Audouze, Karine; Vinggaard, Anne Marie

    2014-07-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally. This is possible due to the existence of comprehensive databases containing information on networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of the specific chemical of interest can be fed into these networks to obtain additional information that can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research. © 2014 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).

  18. Extremity fractures associated with ATVs and dirt bikes: a 10-year national epidemiologic study.

    PubMed

    Lombardo, D J; Jelsema, T; Gambone, A; Weisman, M; Petersen-Fitts, G; Whaley, J D; Sabesan, V J

    2017-08-01

    Morbidity and mortality associated with all-terrain vehicles (ATVs) and dirt bikes have been studied, as has the association of helmet use with head injury. The purpose of this study is to compare and contrast the patterns of extremity fractures associated with ATVs and dirt bikes. We believe there will be unique and potentially preventable injury patterns associated with dirt bikes and three-wheeled ATVs due to the poor stability of these vehicles. Descriptive epidemiology study. The National Electronic Injury Surveillance System (NEISS) was used to acquire data for extremity fractures related to ATV (three wheels, four wheels, and number of wheels undefined) and dirt bike use from 2007 to 2012. Nationwide estimation of injury incidence was determined using NEISS weight calculations. The database yielded an estimate of 229,362 extremity fractures from 2007 to 2012. The incidence rates of extremity fractures associated with ATV and dirt bike use were 3.87 and 6.85 per 1000 participant-years. The largest proportion of all fractures occurred in the shoulder (27.2%), followed by the wrist and lower leg (13.8 and 12.4%, respectively). There were no differences in the distribution of the location of fractures among four-wheeled or unspecified ATVs. However, three-wheeled ATVs and dirt bikes had a much larger proportion of lower leg, foot, and ankle fractures compared to the other vehicle types. While upper extremity fractures were the most commonly observed in this database, three-wheeled ATVs and dirt bikes showed increased proportions of lower extremity fractures. Several organizations have previously advocated for better regulation of the sale and use of these specific vehicles due to increased risks. These findings help illustrate some of the specific risks associated with these commonly used vehicles.

  19. GTRAC: fast retrieval from compressed collections of genomic variants

    PubMed Central

    Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy

    2016-01-01

    Motivation: The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10K to 1M. Due to the large redundancies among genomic sequences of individuals from the same species, most of the medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of database lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. Results: We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples in 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. Availability and Implementation: The GTRAC algorithm is available for download at https://github.com/kedartatwawadi/GTRAC. Contact: kedart@stanford.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27587665

  20. GTRAC: fast retrieval from compressed collections of genomic variants.

    PubMed

    Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy

    2016-09-01

    The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10K to 1M. Due to the large redundancies among genomic sequences of individuals from the same species, most of the medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of database lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples in 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. The GTRAC algorithm is available for download at https://github.com/kedartatwawadi/GTRAC. Contact: kedart@stanford.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
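
    The random-access property GTRAC provides can be illustrated with a much simpler scheme: compress each sample's variant list as its own chunk and keep a byte-offset index, so one sample decompresses without touching the rest. zlib stands in for GTRAC's specialized Lempel-Ziv compressor, and the variant strings are invented.

```python
import zlib

def build_archive(records):
    """Compress each sample separately and record (offset, size) per chunk."""
    blob, index = bytearray(), []
    for rec in records:
        chunk = zlib.compress(rec.encode())
        index.append((len(blob), len(chunk)))
        blob += chunk
    return bytes(blob), index

def fetch(blob, index, i):
    """Random access: decompress only the i-th sample's chunk."""
    off, size = index[i]
    return zlib.decompress(blob[off:off + size]).decode()

samples = ["chr1:12345 A>G;chr2:99 T>C", "chr1:12345 A>G", "chr3:777 G>T"]
blob, index = build_archive(samples)
print(fetch(blob, index, 1))  # touches one chunk, not the whole archive
```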

  1. Confidence assignment for mass spectrometry based peptide identifications via the extreme value distribution.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2016-09-01

    There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
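
    The significance assignment itself reduces to evaluating the extreme value (Gumbel) distribution once its parameters are known; the sketch below shows only that final step (the score and the location/scale parameters mu and beta are invented for illustration; the paper's contribution is the efficient, database-free estimation of such statistics).

    ```python
    import math

    def gumbel_pvalue(score, mu, beta):
        """P-value of a best-match score >= `score` when the null (random-match)
        score follows a Gumbel distribution with location mu and scale beta:
        P = 1 - exp(-exp(-(score - mu) / beta))."""
        return 1.0 - math.exp(-math.exp(-(score - mu) / beta))

    # Hypothetical Gumbel parameters for one spectrum and scoring function.
    print(gumbel_pvalue(52.0, mu=30.0, beta=4.0))  # ~4.1e-3
    ```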

  2. Aging Will Amplify the Heat-related Mortality Risk under a Changing Climate: Projection for the Elderly in Beijing, China

    NASA Astrophysics Data System (ADS)

    Li, Tiantian; Horton, Radley M.; Bader, Daniel A.; Zhou, Maigeng; Liang, Xudong; Ban, Jie; Sun, Qinghua; Kinney, Patrick L.

    2016-06-01

    An aging population could substantially enhance the burden of heat-related health risks in a warming climate because of the elderly's higher susceptibility to the health effects of extreme heat. Here, we project heat-related mortality for adults 65 years and older in Beijing, China across 31 downscaled climate models and 2 representative concentration pathways (RCPs) in the 2020s, 2050s, and 2080s. Under a scenario of medium population and RCP8.5, by the 2080s, Beijing is projected to experience 14,401 heat-related deaths per year among elderly individuals, a 264.9% increase compared with the 1980s. These impacts could be moderated through adaptation. In the 2080s, even with the 30% and 50% adaptation rates assumed in our study, the increase in heat-related deaths is approximately 7.4 times and 1.3 times larger than in the 1980s, respectively, under a scenario of high population and RCP8.5. These findings could assist countries in establishing public health intervention policies for the dual problems of climate change and an aging population. Examples could include ensuring facilities with large elderly populations are protected from extreme heat (for example, through back-up power supplies and/or passive cooling) and using databases and community networks to ensure the home-bound elderly are safe during extreme heat events.

  3. Aging Will Amplify the Heat-Related Mortality Risk Under a Changing Climate: Projection for the Elderly in Beijing, China

    NASA Technical Reports Server (NTRS)

    Li, Tiantian; Horton, Radley M.; Bader, Daniel A.; Zhou, Maigeng; Liang, Xudong; Ban, Jie; Sun, Qinghua; Kinney, Patrick L.

    2016-01-01

    An aging population could substantially enhance the burden of heat-related health risks in a warming climate because of the elderly's higher susceptibility to the health effects of extreme heat. Here, we project heat-related mortality for adults 65 years and older in Beijing, China across 31 downscaled climate models and 2 representative concentration pathways (RCPs) in the 2020s, 2050s, and 2080s. Under a scenario of medium population and RCP8.5, by the 2080s, Beijing is projected to experience 14,401 heat-related deaths per year among elderly individuals, a 264.9% increase compared with the 1980s. These impacts could be moderated through adaptation. In the 2080s, even with the 30% and 50% adaptation rates assumed in our study, the increase in heat-related deaths is approximately 7.4 times and 1.3 times larger than in the 1980s, respectively, under a scenario of high population and RCP8.5. These findings could assist countries in establishing public health intervention policies for the dual problems of climate change and an aging population. Examples could include ensuring facilities with large elderly populations are protected from extreme heat (for example, through back-up power supplies and/or passive cooling) and using databases and community networks to ensure the home-bound elderly are safe during extreme heat events.

  4. Validity of the Dictionary of Occupational Titles for Assessing Upper Extremity Work Demands

    PubMed Central

    Opsteegh, Lonneke; Soer, Remko; Reinders-Messelink, Heleen A.; Reneman, Michiel F.; van der Sluis, Corry K.

    2010-01-01

    Objectives The Dictionary of Occupational Titles (DOT) is used in vocational rehabilitation to guide decisions about the ability of a person with activity limitations to perform activities at work. The DOT categorizes physical work demands into five categories. The validity of this categorization is unknown. The aim of this study was to investigate whether the DOT can validly guide decisions for patients with injuries to the upper extremities. Four hypotheses were tested. Methods A database including 701 healthy workers was used. All subjects filled out the Dutch Musculoskeletal Questionnaire, from which an Upper Extremity Work Demands (UEWD) score was derived. First, the relation between the DOT categories and the UEWD score was analysed using Spearman correlations. Second, the variance of the UEWD score within occupational groups was tested by visually inspecting boxplots and assessing the kurtosis of the distribution. Third, it was investigated whether occupations classified in one DOT category could differ significantly in UEWD scores. Fourth, it was investigated whether occupations in different DOT categories could have similar UEWD scores, using Mann-Whitney U tests (MWU). Results The relation between the DOT categories and the UEWD score was weak (rsp = 0.40; p < .01). Overlap between categories was found. Kurtosis exceeded ±1.0 in 3 occupational groups, indicating large variance. UEWD scores differed significantly within one DOT category (MWU = 1.500; p < .001). UEWD scores between DOT categories were not significantly different (MWU = 203.000; p = .49). Conclusion None of the four hypotheses could be rejected. The DOT appears to be invalid for assessing upper extremity work demands. PMID:21151934
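
    The hypothesis tests in this record are standard nonparametric statistics; the following minimal sketch reproduces the two test types with scipy on synthetic stand-in data (the category and score values are invented, not the study's).

    ```python
    import numpy as np
    from scipy.stats import spearmanr, mannwhitneyu

    rng = np.random.default_rng(1)
    dot_category = rng.integers(1, 6, size=200)        # DOT categories 1..5
    uewd = 2.0 * dot_category + rng.normal(0, 4, 200)  # noisy UEWD scores

    rho, p = spearmanr(dot_category, uewd)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")

    # Compare UEWD scores of two occupations within the same DOT category.
    occ_a, occ_b = uewd[:50], uewd[50:100]
    u, p = mannwhitneyu(occ_a, occ_b)
    print(f"Mann-Whitney U = {u:.0f}, p = {p:.3g}")
    ```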

  5. To the fringe and back: Violent extremism and the psychology of deviance.

    PubMed

    Kruglanski, Arie W; Jasko, Katarzyna; Chernikova, Marina; Dugas, Michelle; Webber, David

    2017-04-01

    We outline a general psychological theory of extremism and apply it to the special case of violent extremism (VE). Extremism is defined as motivated deviance from general behavioral norms and is assumed to stem from a shift from the balanced satisfaction of basic human needs afforded by moderation to a motivational imbalance wherein a given need dominates the others. Because motivational imbalance is difficult to sustain, only a few individuals manage to do so, rendering extreme behavior relatively rare and hence deviant. Thus, individual dynamics translate into social patterns wherein the majority of individuals practice moderation, whereas extremism is the province of the few. Both extremism and moderation require the ability to successfully carry out the activities that they demand. Ability is partially determined by the activities' difficulty, controllable in part by external agents who promote or oppose extremism. Application of this general framework to VE identifies the specific need that animates it and offers broad guidelines for addressing this pernicious phenomenon. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Large-Scale Meteorological Patterns Associated with Extreme Precipitation in the US Northeast

    NASA Astrophysics Data System (ADS)

    Agel, L. A.; Barlow, M. A.

    2016-12-01

    Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. Tropopause height provides a compact representation of large-scale circulation patterns, as it is linked to mid-level circulation, low-level thermal contrasts and low-level diabatic heating. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied to extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into a larger context. Six tropopause patterns are identified on extreme days: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns are identified for all days. Results for all days show that 6 SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns shows various degrees of anomalously strong upward motion during, and moisture transport preceding, extreme precipitation events.
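
    As a concrete illustration of the clustering step, the sketch below applies k-means to hypothetical daily tropopause-height fields flattened to vectors, using scikit-learn; the grid size, number of days and six-cluster choice are assumptions for illustration (the SOM step could be handled analogously, e.g. with the third-party minisom package).

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    # Hypothetical input: 500 extreme-precipitation days, each a flattened
    # tropopause-height field on a 20 x 30 grid (values in meters).
    fields = rng.normal(11000.0, 800.0, size=(500, 20 * 30))
    anoms = fields - fields.mean(axis=0)   # anomalies w.r.t. the day-set mean

    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(anoms)
    labels = km.labels_                    # pattern assignment for each day
    patterns = km.cluster_centers_         # six representative circulation patterns
    print(np.bincount(labels))             # number of days in each pattern
    ```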

  7. Exploring Antarctic Land Surface Temperature Extremes Using Condensed Anomaly Databases

    NASA Astrophysics Data System (ADS)

    Grant, Glenn Edwin

    Satellite observations have revolutionized the Earth Sciences and climate studies. However, data and imagery continue to accumulate at an accelerating rate, and efficient tools for data discovery, analysis, and quality checking lag behind. In particular, studies of long-term, continental-scale processes at high spatiotemporal resolutions are especially problematic. The traditional technique of downloading an entire dataset and using customized analysis code is often impractical or consumes too many resources. The Condensate Database Project was envisioned as an alternative method for data exploration and quality checking. The project's premise was that much of the data in any satellite dataset is unneeded and can be eliminated, compacting massive datasets into more manageable sizes. Dataset sizes are further reduced by retaining only anomalous data of high interest. Hosting the resulting "condensed" datasets in high-speed databases enables immediate availability for queries and exploration. Proof of the project's success relied on demonstrating that the anomaly database methods can enhance and accelerate scientific investigations. The hypothesis of this dissertation is that the condensed datasets are effective tools for exploring many scientific questions, spurring further investigations and revealing important information that might otherwise remain undetected. This dissertation uses condensed databases containing 17 years of Antarctic land surface temperature anomalies as its primary data. The study demonstrates the utility of the condensate database methods by discovering new information. In particular, the process revealed critical quality problems in the source satellite data. The results are used as the starting point for four case studies, investigating Antarctic temperature extremes, cloud detection errors, and the teleconnections between Antarctic temperature anomalies and climate indices. The results confirm the hypothesis that the condensate databases are a highly useful tool for Earth Science analyses. Moreover, the quality checking capabilities provide an important method for independent evaluation of dataset veracity.

  8. Very large database of lipids: rationale and design.

    PubMed

    Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R

    2013-11-01

    Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1,340,614 adult and 10,294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale. © 2013 Wiley Periodicals, Inc.

  9. Application of Radar-Rainfall Estimates to Probable Maximum Precipitation in the Carolinas

    NASA Astrophysics Data System (ADS)

    England, J. F.; Caldwell, R. J.; Sankovich, V.

    2011-12-01

    Extreme storm rainfall data are essential in the assessment of potential impacts on design precipitation amounts, which are used in flood design criteria for dams and nuclear power plants. Probable Maximum Precipitation (PMP) from National Weather Service Hydrometeorological Report 51 (HMR51) is currently used for design rainfall estimates in the eastern U.S. The extreme storm database associated with the report has not been updated since the early 1970s. In the past several decades, several extreme precipitation events have occurred that have the potential to alter the PMP values, particularly across the Southeast United States (e.g., Hurricane Floyd, 1999). Unfortunately, these and other large precipitation-producing storms have not been analyzed with the detail required for application in design studies. This study focuses on warm-season tropical cyclones (TCs) in the Carolinas, as these systems are the critical maximum rainfall mechanisms in the region. The goal is to discern whether recent tropical events may have reached or exceeded current PMP values. We have analyzed 10 storms using modern datasets and methodologies that provide enhanced spatial and temporal resolution relative to the point measurements used in past studies. Specifically, hourly multisensor precipitation reanalysis (MPR) data are used to estimate storm-total precipitation accumulations at various durations throughout each storm event. The accumulated grids serve as input to depth-area-duration calculations. Individual storms are then maximized using back-trajectories to determine source regions for moisture. The development of open-source software has made this process time- and resource-efficient. Based on the current methodology, two of the ten storms analyzed have the potential to challenge HMR51 PMP values. Maximized depth-area curves for Hurricane Floyd indicate exceedance at 24- and 72-hour durations for large area sizes, while Hurricane Fran (1996) appears to exceed PMP at large area sizes for short-duration, 6-hour storms. Utilizing new methods and data, however, requires careful consideration of the potential limitations and caveats associated with the analysis and further evaluation of the newer storms within the context of historical storms from HMR51. Here, we provide a brief background on extreme rainfall in the Carolinas, along with an overview of the methods employed for converting MPR to depth-area relationships. Discussion of the issues and limitations, evaluation of the various techniques, and comparison to HMR51 storms and PMP values are also presented.
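
    The depth-area step of a depth-area-duration (DAD) analysis can be sketched as follows; note the simplification that grid cells are ranked by storm-total depth regardless of contiguity, which yields an upper bound on the maximum average depth at each area size (operational DAD analyses grow contiguous areas around storm centers). The grid shape, cell area and gamma-distributed storm totals are invented for illustration.

    ```python
    import numpy as np

    def depth_area_curve(storm_total, cell_area_km2):
        """Maximum average depth vs. area under the simplifying (non-contiguous)
        approximation: for each N, average the N deepest cells."""
        depths = np.sort(storm_total.ravel())[::-1]      # deepest cells first
        n = np.arange(1, depths.size + 1)
        return n * cell_area_km2, np.cumsum(depths) / n  # (areas, avg depths)

    rng = np.random.default_rng(3)
    storm_total = rng.gamma(2.0, 40.0, size=(100, 100))  # mm, hypothetical grid
    areas, avg_depth = depth_area_curve(storm_total, cell_area_km2=16.0)
    print(areas[:3], np.round(avg_depth[:3], 1))         # small-area end of curve
    ```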

  10. Deep transcriptome annotation enables the discovery and functional characterization of cryptic small proteins

    PubMed Central

    Delcourt, Vivian; Lucier, Jean-François; Gagnon, Jules; Beaudoin, Maxime C; Vanderperre, Benoît; Breton, Marc-André; Motard, Julie; Jacques, Jean-François; Brunelle, Mylène; Gagnon-Arsenault, Isabelle; Fournier, Isabelle; Ouangraoua, Aida; Hunting, Darel J; Cohen, Alan A; Landry, Christian R; Scott, Michelle S

    2017-01-01

    Recent functional, proteomic and ribosome profiling studies in eukaryotes have concurrently demonstrated the translation of alternative open-reading frames (altORFs) in addition to annotated protein coding sequences (CDSs). We show that a large number of small proteins could in fact be coded by these altORFs. The putative alternative proteins translated from altORFs have orthologs in many species and contain functional domains. Evolutionary analyses indicate that altORFs often show more extreme conservation patterns than their CDSs. Thousands of alternative proteins are detected in proteomic datasets by reanalysis using a database containing predicted alternative proteins. This is illustrated with specific examples, including altMiD51, a 70 amino acid mitochondrial fission-promoting protein encoded in MiD51/Mief1/SMCR7L, a gene encoding an annotated protein promoting mitochondrial fission. Our results suggest that many genes are multicoding genes and code for a large protein and one or several small proteins. PMID:29083303

  11. Evaluation of pediatric upper extremity peripheral nerve injuries.

    PubMed

    Ho, Emily S

    2015-01-01

    The evaluation of motor and sensory function of the upper extremity after a peripheral nerve injury is critical to diagnose the location and extent of nerve injury as well as to document functional recovery in children. The purpose of this paper is to describe an approach to the evaluation of pediatric upper extremity peripheral nerve injuries through a critical review of currently used tests of sensory and motor function. Outcome studies on pediatric upper extremity peripheral nerve injuries in the Medline database were reviewed. The evaluation of outcome in children less than 10 years of age with an upper extremity peripheral nerve injury includes careful observation of preferred prehension patterns, examination of muscle atrophy and sudomotor function, provocative tests, manual muscle testing, and tests of sensory threshold and tactile gnosis. The evaluation of outcome in children with upper extremity peripheral nerve injuries warrants a unique approach. Copyright © 2015 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  12. Analysis on the Critical Rainfall Value For Predicting Large Scale Landslides Caused by Heavy Rainfall In Taiwan.

    NASA Astrophysics Data System (ADS)

    Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey

    2017-04-01

    The accumulated rainfall brought by Typhoon Morakot in August 2009 exceeded 2,900 mm within 3 consecutive days. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. A satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized based on their disaster type, scale, topography, major bedrock formations and geologic structures during the extremely heavy rainfall events that occurred in southern Taiwan. Characteristics and mechanisms of large scale landslides are collected on the basis of field investigation technology integrated with GPS/GIS/RS techniques. In order to decrease the risk of large scale landslides on slope land, a strategy of slope land conservation and a critical rainfall database should be established and put into practice as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanism of large scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under the extreme climate change of the past 10 years are addressed as required issues by this research. Hopefully, the results developed from this research can be used as a warning system for predicting large scale landslides in southern Taiwan. Keywords: heavy rainfall, large scale landslides, critical rainfall value

  13. Evolution of precipitation extremes in two large ensembles of climate simulations

    NASA Astrophysics Data System (ADS)

    Martel, Jean-Luc; Mailhot, Alain; Talbot, Guillaume; Brissette, François; Ludwig, Ralf; Frigon, Anne; Leduc, Martin; Turcotte, Richard

    2017-04-01

    Recent studies project significant changes in the future distribution of precipitation extremes due to global warming. It is likely that extreme precipitation intensity will increase in a future climate and that extreme events will be more frequent. In this work, annual maximum daily precipitation series from the Canadian Earth System Model (CanESM2) 50-member large ensemble (spatial resolution of 2.8°x2.8°) and the Community Earth System Model (CESM1) 40-member large ensemble (spatial resolution of 1°x1°) are used to investigate extreme precipitation over the historical (1980-2010) and future (2070-2100) periods. The use of these ensembles results in 1,500 (30 years x 50 members) and 1,200 (30 years x 40 members) simulated years, respectively, over both the historical and future periods. These large datasets allow the computation of empirical daily extreme precipitation quantiles for large return periods. Using the CanESM2 and CESM1 large ensembles, extreme daily precipitation with return periods ranging from 2 to 100 years is computed for the historical and future periods to assess the impact of climate change. Results indicate that daily precipitation extremes generally increase in the future over most land grid points and that these increases will also affect the 100-year extreme daily precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for service levels of water infrastructures and public safety. Estimated increases associated with very extreme precipitation events (e.g., 100-year events) will drastically change the likelihood and extent of flooding in a future climate. These results, although interesting, need to be extended to sub-daily durations, which are relevant for urban flood protection and urban infrastructure design (e.g., sewer networks, culverts). Models and simulations at finer spatial and temporal resolution are therefore needed.
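
    The pooling step can be stated exactly: with 30 years x 50 members = 1,500 pooled annual maxima, the empirical T-year return level is the (1 - 1/T) quantile of the pooled sample, so the 100-year level is read from roughly the 15th largest value with no distribution fitting. A minimal sketch, with Gumbel-distributed synthetic maxima standing in for model output:

    ```python
    import numpy as np

    def empirical_return_level(annual_maxima, return_period_years):
        """T-year return level as the (1 - 1/T) empirical quantile of the
        pooled annual-maximum sample."""
        return np.quantile(annual_maxima, 1.0 - 1.0 / return_period_years)

    rng = np.random.default_rng(4)
    # 30 years x 50 members = 1,500 pooled annual maxima (mm/day, synthetic).
    pooled = rng.gumbel(loc=40.0, scale=10.0, size=1500)
    for T in (2, 20, 100):
        level = empirical_return_level(pooled, T)
        print(f"{T:>3}-year return level: {level:.1f} mm/day")
    ```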

  14. Extreme Precipitation and Flooding: Exposure Characterization and the Association Between Exposure and Mortality in 108 United States Communities, 1987-2005

    NASA Astrophysics Data System (ADS)

    Severson, R. L.; Peng, R. D.; Anderson, G. B.

    2017-12-01

    There is substantial evidence that extreme precipitation and flooding are serious threats to public health and safety. These threats are predicted to increase with climate change. Epidemiological studies investigating the health effects of these events vary in the methods used to characterize exposure. Here, we compare two sources of precipitation data (National Oceanic and Atmospheric Administration (NOAA) station-based and North American Land Data Assimilation Systems (NLDAS-2) reanalysis-based) for estimating exposure to extreme precipitation, and two sources of flooding data, based on United States Geological Survey (USGS) streamflow gages and the NOAA Storm Events database. We investigate associations between each of the four exposure metrics and short-term risk of four causes of mortality (accidental, respiratory-related, cardiovascular-related, and all-cause) in the United States from 1987 through 2005. Average daily precipitation values from the two precipitation data sources were moderately correlated (Spearman's rho = 0.74); however, values from the two data sources were less correlated when comparing binary metrics of exposure to extreme precipitation days (Jaccard index (J) = 0.35). Binary metrics of daily flood exposure were poorly correlated between the two flood data sources (Spearman's rho = 0.07; J = 0.05). There was little correlation between extreme precipitation exposure and flood exposure in study communities. We did not observe evidence of a positive association between any of the four exposure metrics and risk of any of the four mortality outcomes considered. Our results suggest, given the observed lack of agreement between different extreme precipitation and flood metrics, that exposure to extreme precipitation may not serve as an effective surrogate for exposures related to flooding. Furthermore, it is possible that extreme precipitation and flood exposures may often be too localized to allow accurate exposure assessment at the community level in epidemiological studies.
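
    The two agreement metrics reported here are simple to compute; in the sketch below, synthetic daily series stand in for the NOAA and NLDAS-2 data, extreme days are flagged as the top 1% within each source, and the Jaccard index is the intersection-over-union of the binary flags.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)
    noaa = rng.gamma(0.8, 5.0, size=3000)              # daily precip, source 1
    nldas = noaa * rng.lognormal(0.0, 0.4, size=3000)  # correlated source 2

    rho, _ = spearmanr(noaa, nldas)

    # Binary "extreme precipitation day" flags: top 1% within each source.
    a = noaa >= np.quantile(noaa, 0.99)
    b = nldas >= np.quantile(nldas, 0.99)
    jaccard = (a & b).sum() / (a | b).sum()
    print(f"Spearman rho = {rho:.2f}, Jaccard = {jaccard:.2f}")
    ```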

  15. Evolution of Precipitation Extremes in Three Large Ensembles of Climate Simulations - Impact of Spatial and Temporal Resolutions

    NASA Astrophysics Data System (ADS)

    Martel, J. L.; Brissette, F.; Mailhot, A.; Wood, R. R.; Ludwig, R.; Frigon, A.; Leduc, M.; Turcotte, R.

    2017-12-01

    Recent studies indicate that the frequency and intensity of extreme precipitation will increase in a future climate due to global warming. In this study, we compare annual maximum precipitation series from three large ensembles of climate simulations at various spatial and temporal resolutions. The first two are at the global scale: the Canadian Earth System Model (CanESM2) 50-member large ensemble (CanESM2-LE) at a 2.8° resolution and the Community Earth System Model (CESM1) 40-member large ensemble (CESM1-LE) at a 1° resolution. The third ensemble is at the regional scale over both Eastern North America and Europe: the Canadian Regional Climate Model (CRCM5) 50-member large ensemble (CRCM5-LE) at a 0.11° resolution, driven at its boundaries by the CanESM2-LE. The CRCM5-LE is a new ensemble produced by the ClimEx project (http://www.climex-project.org), a Québec-Bavaria collaboration. Using these three large ensembles, changes in extreme precipitation between the historical (1980-2010) and future (2070-2100) periods are investigated. This results in 1,500 (30 years x 50 members for CanESM2-LE and CRCM5-LE) and 1,200 (30 years x 40 members for CESM1-LE) simulated years over both the historical and future periods. Using these large datasets, empirical daily (and, for CRCM5-LE, sub-daily) extreme precipitation quantiles for return periods ranging from 2 to 100 years are computed. Results indicate that daily extreme precipitation will generally increase over most land grid points of both domains according to the three large ensembles. For the CRCM5-LE, the increase in sub-daily extreme precipitation is even larger than that found for daily extreme precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for service levels of water infrastructures and public safety.

  16. Six and Three-Hourly Meteorological Observations From 223 Former U.S.S.R. Stations (NDP-048)

    DOE Data Explorer

    Razuvaev, V. N. [All-Russian Research Institute of Hydrometeorological Information, World Data Center, Russia; Apasova, E. B. [All-Russian Research Institute of Hydrometeorological Information, World Data Center, Russia; Martuganov, R. A. [All-Russian Research Institute of Hydrometeorological Information, World Data Center, Russia; Kaiser, D. P. [CDIAC, Oak Ridge National Laboratory; Marino, G. P. [CDIAC, Oak Ridge National Laboratory

    2007-11-01

    This database contains 6- and 3-hourly meteorological observations from a 223-station network of the former Soviet Union. These data have been made available through cooperation between the two principal climate data centers of the United States and Russia: the National Climatic Data Center (NCDC) in Asheville, North Carolina, and the All-Russian Research Institute of Hydrometeorological Information-World Data Centre (RIHMI-WDC) in Obninsk, Russia. The first version of this database extended through the mid-1980s (ending year dependent upon station) and was made available in 1995 by the Carbon Dioxide Information Analysis Center (CDIAC) as NDP-048. A second version of the database extended the data records through 1990. This third and current version of the database includes data through 2000 for over half of the stations (mainly in Russia), whereas the remainder of the stations have records extending through various years of the 1990s. Because of the breakup of the Soviet Union in 1991, and since RIHMI-WDC is a Russian institution, generally only Russian stations are available through 2000. The non-Russian station records in this database typically extend through 1991. Station records consist of 6- and 3-hourly observations of some 24 meteorological variables including temperature, past and present weather type, precipitation amount, cloud amount and type, sea level pressure, relative humidity, and wind direction and speed. The 6-hourly observations extend from 1936 through 1965; the 3-hourly observations extend from 1966 through 2000 (or through the latest year available). These data have undergone extensive quality assurance checks by RIHMI-WDC, NCDC, and CDIAC. The database represents a wealth of meteorological information for a large and climatologically important portion of the earth's land area, and should prove extremely useful for a wide variety of regional climate change studies.

  17. Extreme-value dependence: An application to exchange rate markets

    NASA Astrophysics Data System (ADS)

    Fernandez, Viviana

    2007-04-01

    Extreme value theory (EVT) focuses on modeling the tail behavior of a loss distribution using only extreme values rather than the whole data set. For a sample of 10 countries with dirty/free float regimes, we investigate whether paired currencies exhibit a pattern of asymptotic dependence, that is, whether an extremely large appreciation or depreciation in the nominal exchange rate of one country might transmit to another. In general, after controlling for volatility clustering and inertia in returns, we do not find evidence of extreme-value dependence between paired exchange rates. However, for asymptotically independent paired returns, we find that tail dependency of exchange rates is stronger under large appreciations than under large depreciations.
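
    A minimal sketch of the peaks-over-threshold flavor of EVT: fit a generalized Pareto distribution to losses above a high threshold and extrapolate tail probabilities. The returns are synthetic Student-t draws, and a real application would first filter out the volatility clustering this record mentions (e.g., with a GARCH model) before fitting.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(6)
    returns = rng.standard_t(df=4, size=5000) * 0.01     # heavy-tailed daily returns
    losses = -returns

    u = np.quantile(losses, 0.95)                        # high threshold
    excesses = losses[losses > u] - u
    shape, _, scale = genpareto.fit(excesses, floc=0.0)  # GPD fit to the excesses

    # Tail probability P(loss > x) for x beyond the threshold.
    x = 0.05
    p_tail = (losses > u).mean() * genpareto.sf(x - u, shape, loc=0.0, scale=scale)
    print(f"threshold = {u:.4f}, xi = {shape:.2f}, P(loss > {x}) = {p_tail:.2e}")
    ```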

  18. Component Database for the APS Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veseli, S.; Arnold, N. D.; Jarosz, D. P.

    The Advanced Photon Source Upgrade (APS-U) project will replace the existing APS storage ring with a multi-bend achromat (MBA) lattice to provide extreme transverse coherence and extreme brightness x-rays to its users. As the time to replace the existing storage ring accelerator is of critical concern, an aggressive one-year removal/installation/testing period is being planned. To aid in the management of the thousands of components to be installed in such a short time, the Component Database (CDB) application is being developed with the purpose to identify, document, track, locate, and organize components in a central database. Three major domains are being addressed: Component definitions (which together make up an exhaustive "Component Catalog"), Designs (groupings of components to create subsystems), and Component Instances ("Inventory"). Relationships between the major domains offer additional "system knowledge" to be captured that will be leveraged with future tools and applications. It is imperative to provide sub-system engineers with a functional application early in the machine design cycle. Topics discussed in this paper include the initial design and deployment of CDB, as well as future development plans.

  19. Creating a comprehensive quality-controlled dataset of severe weather occurrence in Europe

    NASA Astrophysics Data System (ADS)

    Groenemeijer, P.; Kühne, T.; Liang, Z.; Holzer, A.; Feuerstein, B.; Dotzek, N.

    2010-09-01

    Ground-truth quality-controlled data on severe weather occurrence are required for meaningful research on severe weather hazards. Such data are collected by observation networks of several authorities in Europe, most prominently the National Hydrometeorological Institutes (NHMSs). However, some events challenge the capabilities of such conventional networks by their isolated and short-lived nature. These rare and very localized but extreme events include thunderstorm wind gusts, large hail and tornadoes, and are poorly resolved by synoptic observations. Moreover, their detection by remote-sensing techniques such as radar and satellites is in development and has proven to be difficult. The fact that across Europe there are many people with a special personal or professional interest in such events, typically organized in associations, allows a different strategy to be pursued. Data delivered to the European Severe Weather Database are recorded and quality controlled by ESSL and a large number of partners, including the Hydrometeorological Institutes of Germany, Finland, Austria, Italy and Bulgaria. Additionally, nine associations of storm spotters and centres of expertise in these and other countries are involved. The two categories of organizations (NHMSs/other) each have different privileges in the quality control procedure, which involves assigning a quality level of QC0+ (plausibility checked), QC1 (confirmed by reliable sources) or QC2 (verified) to each of the reports. Within the EWENT project funded by the EU 7th Framework Programme, the RegioExakt project funded by the German Ministry of Education and Research, and with support from the German Weather Service (DWD), several enhancements of the ESWD have been and will be carried out. Completed enhancements include the creation of an interface that allows partner organizations to upload data automatically, in the case of our German partner "Skywarn Germany" in near-real time. Moreover, the database's web interface has been translated into 14 European languages. At the time of writing, a nowcast mode for the web interface, which renders the ESWD a convenient tool for meteorologists in forecast centres, is being developed. In the near future, within the EWENT project, an extension of the data set with several other isolated but extreme events, including avalanches, landslides, heavy snowfall and extremely powerful lightning flashes, is foreseen. The resulting ESWD dataset, which grows at a rate of 4,000-5,000 events per year, is used for a wide range of purposes including the validation of remote-sensing techniques, forecast verification studies, projections of the future severe storm climate, and risk assessments. Its users include scientists working for EUMETSAT, NASA, NSSL, DLR, and several reinsurance companies.

  20. Data Mining of Extremely Large Ad Hoc Data Sets to Produce Inverted Indices

    DTIC Science & Technology

    2016-06-01

    Master's thesis, Naval Postgraduate School, Monterey, California. Approved for public release; distribution is unlimited.

  1. Variability Properties of Four Million Sources in the TESS Input Catalog Observed with the Kilodegree Extremely Little Telescope Survey

    NASA Astrophysics Data System (ADS)

    Oelkers, Ryan J.; Rodriguez, Joseph E.; Stassun, Keivan G.; Pepper, Joshua; Somers, Garrett; Kafka, Stella; Stevens, Daniel J.; Beatty, Thomas G.; Siverd, Robert J.; Lund, Michael B.; Kuhn, Rudolf B.; James, David; Gaudi, B. Scott

    2018-01-01

    The Kilodegree Extremely Little Telescope (KELT) has been surveying more than 70% of the celestial sphere for nearly a decade. While the primary science goal of the survey is the discovery of transiting, large-radii planets around bright host stars, the survey has collected more than 10⁶ images, with a typical cadence of 10–30 minutes, for more than four million sources with apparent visual magnitudes in the approximate range 7 < V < 13. Here, we provide a catalog of 52,741 objects showing significant large-amplitude fluctuations likely caused by stellar variability, as well as 62,229 objects identified with likely stellar rotation periods. The detected variability ranges in rms amplitude from ∼3 mmag to ∼2.3 mag, and the detected periods range from ∼0.1 to ≳2000 days. We provide variability upper limits for all other ∼4,000,000 sources. These upper limits are principally a function of stellar brightness, but we achieve typical 1σ sensitivity on 30-minute timescales down to ∼5 mmag at V ∼ 8, and down to ∼43 mmag at V ∼ 13. We have matched our catalog to the TESS Input Catalog and the AAVSO Variable Star Index to facilitate the follow-up and classification of each source. The catalog is maintained as a living database on the Filtergraph visualization portal at the URL https://filtergraph.com/kelt_vars.
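
    The record does not spell out the period-finding method here, so the sketch below uses one standard choice for unevenly sampled survey photometry, the Lomb-Scargle periodogram as implemented in astropy; the cadence, noise level and injected 12.3-day signal are invented for illustration.

    ```python
    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(7)
    t = np.sort(rng.uniform(0.0, 300.0, 2000))   # observation times (days), uneven
    period_true = 12.3                           # days, injected rotation signal
    mag = (10.0 + 0.02 * np.sin(2 * np.pi * t / period_true)
           + rng.normal(0.0, 0.01, t.size))      # magnitudes with noise

    frequency, power = LombScargle(t, mag).autopower(maximum_frequency=5.0)
    best_period = 1.0 / frequency[np.argmax(power)]
    print(f"recovered period ~ {best_period:.2f} d")
    ```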

  2. The role of Natural Flood Management in managing floods in large scale basins during extreme events

    NASA Astrophysics Data System (ADS)

    Quinn, Paul; Owen, Gareth; ODonnell, Greg; Nicholson, Alex; Hetherington, David

    2016-04-01

    There is a strong evidence base showing the negative impacts of land use intensification and soil degradation in NW European river basins on hydrological response and flood impact downstream. However, the ability to target zones of high runoff production, and the extent to which we can manage flood risk using nature-based flood management solutions, are less well known. A move to planting more trees and having less intensively farmed landscapes is part of natural flood management (NFM) solutions, and these methods suggest that flood risk can be managed in alternative and more holistic ways. So which local NFM methods should be used, where in a large scale basin should they be deployed, and how does the flow propagate to any point downstream? More generally, how much intervention is needed, and will it compromise food production systems? If we are observing record levels of rainfall and flow, for example during Storm Desmond in December 2015 in the North West of England, what other flood management options are really needed to complement our traditional defences in large basins in the future? In this paper we will show examples of NFM interventions in the UK that have had an impact at local-scale sites. We will demonstrate the impact of interventions at the local scale, the sub-catchment (meso) scale and finally the large scale. The tools used include observations, process-based models and more generalised flood impact models. Issues of synchronisation and the design level of protection will be debated. By reworking observed rainfall and discharge (runoff) for observed extreme events in the River Eden and River Tyne during Storm Desmond, we will show how much flood protection is needed in large scale basins. The research will thus pose a number of key questions as to how floods may have to be managed in large scale basins in the future. We will seek to support a method of catchment systems engineering that holds water back across the whole landscape as a major opportunity for managing water in large scale basins in the future. The broader benefits of engineering landscapes to hold water, for pollution control, reduced sediment loss and drought minimisation, will also be shown.

  3. Protein Sequence Classification with Improved Extreme Learning Machine Algorithms

    PubMed Central

    2014-01-01

    Precisely classifying a protein sequence from a large database of biological protein sequences plays an important role in developing competitive pharmacological products. Conventional methods, which compare the unseen sequence with all identified protein sequences and return the category of the protein with the highest similarity score, are usually time-consuming. Therefore, it is urgent and necessary to build an efficient protein sequence classification system. In this paper, we study the performance of protein sequence classification using single-hidden-layer feedforward networks (SLFNs). The recent, efficient extreme learning machine (ELM) and its variants are utilized as the training algorithms. The optimally pruned ELM (OP-ELM) is first employed for protein sequence classification in this paper. To further enhance the performance, an ensemble-based SLFN structure is constructed, in which multiple SLFNs with the same number of hidden nodes and the same activation function are used as ensemble members. For each ensemble, the same training algorithm is adopted. The final category index is derived using the majority voting method. Two approaches, namely the basic ELM and the OP-ELM, are adopted for the ensemble-based SLFNs. The performance is analyzed and compared with several existing methods using datasets obtained from the Protein Information Resource center. The experimental results show the superiority of the proposed algorithms. PMID:24795876
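
    A minimal sketch of the basic ELM training step under its standard formulation: hidden-layer weights are drawn at random and left fixed, and only the output weights are learned, via a single least-squares solve (the features, class counts and hidden-layer size below are invented; the record's OP-ELM additionally prunes hidden neurons).

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def elm_train(X, Y, n_hidden=200):
        """Basic ELM: random fixed hidden layer, least-squares output weights."""
        W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (fixed)
        b = rng.normal(size=n_hidden)                  # random biases (fixed)
        H = np.tanh(X @ W + b)                         # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # learned output weights
        return W, b, beta

    def elm_predict(X, model):
        W, b, beta = model
        return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

    # Toy stand-in: 300 "sequences" as 50-dim feature vectors, 3 classes.
    X = rng.normal(size=(300, 50))
    y = rng.integers(0, 3, size=300)
    model = elm_train(X, np.eye(3)[y])                 # one-hot targets
    print((elm_predict(X, model) == y).mean())         # training accuracy
    ```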

  4. Non-thermal recombination - a neglected source of flare hard X-rays and fast electron diagnostics (Corrigendum)

    NASA Astrophysics Data System (ADS)

    Brown, J. C.; Mallik, P. C. V.; Badnell, N. R.

    2010-06-01

    Brown and Mallik (BM) recently claimed that non-thermal recombination (NTR) can be a dominant source of flare hard X-rays (HXRs) from hot coronal and chromospheric sources. However, major discrepancies between the thermal continua predicted by BM and by the Chianti database, as well as RHESSI flare data, led us to discover substantial errors in the heuristic expression used by BM to extend the Kramers expressions beyond the hydrogenic case. Here we present the relevant corrected expressions and show the key modified results. We conclude that, in most cases, NTR emission was overestimated by a factor of 1-8 by BM but is typically still large enough (as much as 20-30% of the total emission) to be very important for electron spectral inference and for the detection of electron spectral features such as low energy cut-offs, since the recombination spectra contain sharp edges. For extreme temperature regimes, and/or if the Fe abundance were as high as some values claimed, NTR could even be the dominant source of flare HXRs, reducing the electron number and energy budget problems, such as in the extreme coronal HXR source cases reported by, e.g., Krucker et al.

  5. Environmental Factors and Internal Processes Contributing to Interrupted Rapid Decay of Hurricane Joaquin (2015)

    NASA Astrophysics Data System (ADS)

    Hendricks, E. A.; Elsberry, R. L.; Velden, C.; Creasey, R.; Jorgensen, A.; Jordan, M.

    2017-12-01

    Hurricane Joaquin (2015) was the most intense Atlantic hurricane with a non-tropical origin during the satellite era. In addition to its rapid intensification, Joaquin was noteworthy for the difficulty in forecasting its post-recurvature track to the northeast after having struck the Bahama Islands. Such a track typically leads to decay as the hurricane moves poleward over colder water, and Joaquin had an extreme decay rate from 135 kt to 65 kt in only 30 h. The focus of this study is on the environmental and internal factors that interrupted this extreme decay at 1800 UTC 4 October, and then on how Joaquin re-intensified to 75 kt and maintained that intensity for 30 hours. The real-time Statistical Hurricane Intensity Prediction System (SHIPS) database is used to calculate, every six hours, six environmental variables that Hendricks et al. (2010) had found to contribute to intensity change. Only the deep-layer vertical wind shear (VWS) from SHIPS, and also from the Cooperative Institute for Meteorological Satellite Studies (CIMSS), had a well-defined relationship with both the interrupted rapid decay and the subsequent constant-intensity period. A special dataset of Atmospheric Motion Vectors (AMVs) at 15-minute intervals prepared by CIMSS is then utilized to create a continuous VWS record that documents the large (∼15 m s⁻¹) VWS throughout most of the rapid decay period, and then a rapid decrease in VWS to moderate (∼8 m s⁻¹) values at and following the rapid decay period. Horizontal distributions of these CIMSS VWSs demonstrate that during this period Joaquin was located in a region of large gradient between large VWSs to the north and near-zero VWSs to the south, which was favorable for sustaining Joaquin at hurricane intensity.

  6. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis.

    PubMed

    You, Zhu-Hong; Lei, Ying-Ke; Zhu, Lin; Xia, Junfeng; Wang, Bing

    2013-01-01

    Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods currently cover only a fraction of the complete PPI networks; furthermore, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of results on initial random weights and improves the prediction performance. When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art technique, the Support Vector Machine (SVM). Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. Besides, PCA-EELM runs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful new tool for predicting PPIs, with excellent performance and less time cost.
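
    The pipeline described here (PCA for dimension reduction, several independently initialized ELMs, majority voting) composes naturally; a self-contained sketch under invented data sizes follows, reusing the basic ELM formulation sketched earlier in this document.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(9)

    def elm_train(X, Y, n_hidden=100):
        W = rng.normal(size=(X.shape[1], n_hidden))    # random fixed hidden layer
        b = rng.normal(size=n_hidden)
        beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), Y, rcond=None)
        return W, b, beta

    def elm_predict(X, m):
        W, b, beta = m
        return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

    # Toy stand-in for sequence-derived features (binary: interact or not).
    X = rng.normal(size=(500, 300))
    y = rng.integers(0, 2, size=500)

    Xr = PCA(n_components=30).fit_transform(X)         # dimension-reduced features
    Y = np.eye(2)[y]

    models = [elm_train(Xr, Y) for _ in range(9)]      # 9 independently seeded ELMs
    votes = np.stack([elm_predict(Xr, m) for m in models])
    consensus = (votes.mean(axis=0) > 0.5).astype(int) # majority vote (odd ensemble)
    print((consensus == y).mean())                     # training accuracy
    ```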

  7. Sex Determination, Sex Chromosomes, and Karyotype Evolution in Insects.

    PubMed

    Blackmon, Heath; Ross, Laura; Bachtrog, Doris

    2017-01-01

    Insects harbor a tremendous diversity of sex determining mechanisms both within and between groups. For example, in some orders such as Hymenoptera, all members are haplodiploid, whereas Diptera contain species with homomorphic as well as male and female heterogametic sex chromosome systems or paternal genome elimination. We have established a large database on karyotypes and sex chromosomes in insects, containing information on over 13,000 species covering 29 orders of insects. This database constitutes a unique starting point to report phylogenetic patterns on the distribution of sex determination mechanisms, sex chromosomes, and karyotypes among insects and allows us to test general theories on the evolutionary dynamics of karyotypes, sex chromosomes, and sex determination systems in a comparative framework. Phylogenetic analysis reveals that male heterogamety is the ancestral mode of sex determination in insects, and transitions to female heterogamety are extremely rare. Many insect orders harbor species with complex sex chromosomes, and gains and losses of the sex-limited chromosome are frequent in some groups. Haplodiploidy originated several times within insects, and parthenogenesis is rare but evolves frequently. Providing a single source to electronically access data previously distributed among more than 500 articles and books will not only accelerate analyses of the assembled data, but also provide a unique resource to guide research on which taxa are likely to be informative to address specific questions, for example, for genome sequencing projects or large-scale comparative studies. © The American Genetic Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. The VO-Dance web application at the IA2 data center

    NASA Astrophysics Data System (ADS)

    Molinaro, Marco; Knapic, Cristina; Smareglia, Riccardo

    2012-09-01

    The Italian center for Astronomical Archives (IA2, http://ia2.oats.inaf.it) is a national infrastructure project of the Italian National Institute for Astrophysics (Istituto Nazionale di AstroFisica, INAF) that provides services for the astronomical community. Besides data hosting for the Large Binocular Telescope (LBT) Corporation, the Galileo National Telescope (Telescopio Nazionale Galileo, TNG) Consortium and other telescopes and instruments, IA2 offers proprietary and public data access through user portals (both developed and mirrored) and deploys resources complying with the Virtual Observatory (VO) standards. Archiving systems and web interfaces are developed to be extremely flexible about adding new instruments from other telescopes. VO resource publishing, along with the data access portals, implements the International Virtual Observatory Alliance (IVOA) protocols, providing astronomers with new ways of analyzing data. Given the large variety of data flavours and IVOA standards, the need for tools to easily accomplish data ingestion and data publishing arises. This paper describes the VO-Dance tool, which IA2 started developing to address VO resource publishing in a dynamic way from already existent database tables or views. The tool consists of a Java web application, potentially DBMS and platform independent, that stores the services' metadata and information internally, exposes RESTful endpoints to accept VO queries for these services, and dynamically translates calls to these endpoints into SQL queries coherent with the published table or view. In response to the call, VO-Dance translates the database answer back in a VO-compliant way.

  9. Comparison of the genomic sequence of the microminipig, a novel breed of swine, with the genomic database for conventional pig.

    PubMed

    Miura, Naoki; Kucho, Ken-Ichi; Noguchi, Michiko; Miyoshi, Noriaki; Uchiumi, Toshiki; Kawaguchi, Hiroaki; Tanimoto, Akihide

    2014-01-01

    The microminipig, which weighs less than 10 kg at an early stage of maturity, has been reported as a potential experimental model animal. Its extremely small size and other distinct characteristics suggest the possibility of a number of differences between the genome of the microminipig and that of conventional pigs. In this study, we analyzed the genomes of two healthy microminipigs using the SOLiD™ next-generation sequencing system. We then compared the obtained genomic sequences with a genomic database for the domestic pig (Sus scrofa). The mapping coverage of sequenced tags from the microminipig to conventional pig genomic sequences was greater than 96%, and we detected no clear, substantial genomic variance in these data. The results may indicate that the distinct characteristics of the microminipig derive from small-scale alterations in the genome, such as single nucleotide polymorphisms or translational modifications, rather than large-scale deletion or insertion polymorphisms. Further investigation of the entire genomic sequence of the microminipig with methods enabling deeper coverage is required to elucidate the genetic basis of its distinct phenotypic traits. Copyright © 2014 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  10. Dynamical systems proxies of atmospheric predictability and mid-latitude extremes

    NASA Astrophysics Data System (ADS)

    Messori, Gabriele; Faranda, Davide; Caballero, Rodrigo; Yiou, Pascal

    2017-04-01

    Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. Many extremes (e.g., storms, heatwaves, cold spells, heavy precipitation) are tied to specific patterns of midlatitude atmospheric circulation. The ability to identify these patterns and use them to enhance the predictability of the extremes is therefore a topic of crucial societal and economic value. We propose a novel predictability pathway for extreme events, building upon recent advances in dynamical systems theory. We use two simple dynamical systems metrics - local dimension and persistence - to identify sets of similar large-scale atmospheric flow patterns which present a coherent temporal evolution. When these patterns correspond to weather extremes, they therefore afford particularly good forward predictability. We specifically test this technique on European winter temperatures, whose variability largely depends on the atmospheric circulation in the North Atlantic region. We find that our dynamical systems approach provides predictability of large-scale temperature extremes up to one week in advance.
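
    The local-dimension metric named here has a concrete estimator in this literature (e.g., in work by Faranda and co-authors): for a given state, exceedances of the negative log-distance to all other states above a high quantile are approximately exponential, with mean equal to the reciprocal of the local dimension. A hedged sketch on a synthetic trajectory follows; the quantile choice and toy data are assumptions, and the companion persistence metric (via an extremal-index estimator) is omitted for brevity.

    ```python
    import numpy as np

    def local_dimension(states, i, q=0.98):
        """Local dimension at state i from EVT of recurrences: exceedances of
        -log(distance) above a high quantile are ~exponential with mean 1/d."""
        dist = np.linalg.norm(states - states[i], axis=1)
        g = -np.log(dist[dist > 0.0])        # closeness observable (excludes self)
        threshold = np.quantile(g, q)
        exceedances = g[g > threshold] - threshold
        return 1.0 / exceedances.mean()

    rng = np.random.default_rng(10)
    # Toy "circulation" trajectory: 5000 daily states of a 3-variable system.
    states = rng.normal(size=(5000, 3))
    print(round(local_dimension(states, i=100), 2))   # ~3 for 3-D Gaussian noise
    ```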

  11. Impact assessment of climate change on tourism in the Pacific small islands based on the database of long-term high-resolution climate ensemble experiments

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Utsumi, N.; Take, M.; Iida, A.

    2016-12-01

    This study aims to develop a new approach to assessing the impact of climate change on the small oceanic islands of the Pacific. In the new approach, changes in the probabilities of various situations are projected, taking into account the spread of projections across ensemble simulations, instead of projecting only the most probable situation. We utilized the database for Policy Decision making for Future climate change (d4PDF), a long-term, high-resolution database composed of the results of 100 ensemble experiments. A new methodology, Multi Threshold Ensemble Assessment (MTEA), was developed using the d4PDF in order to assess the impact of climate change. We focused on the impact of climate change on tourism because tourism has played an important role in the economy of the Pacific Islands. The Yaeyama Region, one of the tourist destinations in Okinawa, Japan, was selected as the case study site. Two kinds of impact were assessed: changes in the probability of extreme climate phenomena, and tourist satisfaction associated with weather. The database of long-term high-resolution climate ensemble experiments and a questionnaire survey conducted by a local government were used for the assessment. The results indicated that the strength of extreme events would increase, whereas their probability of occurrence would decrease. This change should increase the number of clear days, which could contribute to improved tourist satisfaction.

  12. Using Large Diabetes Databases for Research.

    PubMed

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There is an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and the accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate, or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  13. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases, reducing the processing time of the nearest neighbor searches used for prediction. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
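
    The data structure itself is standard; a sketch of the offline-build, online-query pattern using SciPy's cKDTree is below. The feature dimension, database size, and the use of a k-neighbour average as the predictor are illustrative stand-ins, not details taken from the Gutenberg Algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.standard_normal((100_000, 9))  # filter-bank feature vectors
targets = rng.standard_normal(100_000)        # e.g. log-PGA per database record

tree = cKDTree(features)                      # built once, offline

def predict(query, k=30):
    """Average the ground motions of the k nearest database records;
    each query costs roughly O(log n) instead of a full O(n) scan."""
    _, idx = tree.query(query, k=k)
    return targets[idx].mean()

print(predict(rng.standard_normal(9)))
```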

  14. Technical note: Space-time analysis of rainfall extremes in Italy: clues from a reconciled dataset

    NASA Astrophysics Data System (ADS)

    Libertino, Andrea; Ganora, Daniele; Claps, Pierluigi

    2018-05-01

    Like other Mediterranean areas, Italy is prone to the development of events with significant rainfall intensity, lasting for several hours. The main triggering mechanisms of these events are quite well known, but the aim of developing rainstorm hazard maps compatible with their actual probability of occurrence is still far from being reached. A systematic frequency analysis of these occasional highly intense events would require a complete countrywide dataset of sub-daily rainfall records, but this kind of information was still lacking for the Italian territory. In this work several sources of data are gathered, for assembling the first comprehensive and updated dataset of extreme rainfall of short duration in Italy. The resulting dataset, referred to as the Italian Rainfall Extreme Dataset (I-RED), includes the annual maximum rainfalls recorded in 1 to 24 consecutive hours from more than 4500 stations across the country, spanning the period between 1916 and 2014. A detailed description of the spatial and temporal coverage of the I-RED is presented, together with an exploratory statistical analysis aimed at providing preliminary information on the climatology of extreme rainfall at the national scale. Due to some legal restrictions, the database can be provided only under certain conditions. Taking into account the potentialities emerging from the analysis, a description of the ongoing and planned future work activities on the database is provided.

  15. Determination of consistent patterns of range of motion in the ankle joint with a computed tomography stress-test.

    PubMed

    Tuijthof, Gabriëlle Josephine Maria; Zengerink, Maartje; Beimers, Lijkele; Jonges, Remmet; Maas, Mario; van Dijk, Cornelis Niek; Blankevoort, Leendert

    2009-07-01

    Measuring the range of motion of the ankle joint can assist in accurate diagnosis of ankle laxity. A computed tomography-based stress-test (3D CT stress-test) was used that determines the three-dimensional position and orientation of the tibial, calcaneal and talar bones. The goal was to establish a quantitative database of the normal ranges of motion of the talocrural and subtalar joints. A clinical case of suspected subtalar instability demonstrated the relevance of the proposed method. The range of motion of the ankle joints was measured in vivo for 20 subjects using the 3D CT stress-test. Motion of the tibia and calcaneus relative to the talus for eight extreme foot positions was described by helical parameters. High consistency for finite helical axis orientation (n) and rotation (theta) was shown for: talocrural extreme dorsiflexion to extreme plantarflexion (root mean square direction deviation (eta): 5.3 degrees, theta: SD 11.0 degrees), talocrural and subtalar extreme combined eversion-dorsiflexion to combined inversion-plantarflexion (eta: 6.7 degrees, theta: SD 9.0 degrees and eta: 6.3 degrees, theta: SD 5.1 degrees), and subtalar extreme inversion to extreme eversion (eta: 6.4 degrees, theta: SD 5.9 degrees). Nearly all dorsi- and plantarflexion occurs in the talocrural joint (theta: mean 63.3 degrees (SD 11 degrees)). The inversion and internal rotation components for extreme eversion to inversion were approximately three times larger for the subtalar joint (theta: mean 22.9 degrees and 29.1 degrees) than for the talocrural joint (theta: mean 8.8 degrees and 10.7 degrees). Comparison of the ranges of motion of the pathologic ankle joint with those of the healthy subjects showed an increased inversion and axial rotation in the talocrural joint instead of in the suspected subtalar joint. The proposed diagnostic technique and the acquired database of helical parameters of ankle joint ranges of motion are suitable for application in clinical cases.
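
    For readers unfamiliar with helical parameters, the sketch below shows the standard screw decomposition of a rigid-body displacement into a rotation theta about, and a translation d along, a unit axis n; this is the quantity summarized by the eta and theta statistics above. It is generic textbook math, not the study's own code, and assumes 0 < theta < 180 degrees.

```python
import numpy as np

def finite_helical_axis(R, t):
    """Finite helical (screw) axis of a displacement given its 3x3
    rotation matrix R and translation vector t."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    # Axis direction from the skew-symmetric part of R
    n = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    d = float(n @ t)  # translation component along the axis
    return np.degrees(theta), n, d
```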

  16. Application of kernel functions for accurate similarity search in large chemical databases.

    PubMed

    Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H

    2010-04-29

    Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to do the query. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to the high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed in our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller index size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since running time efficiency must be balanced against similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash in chemical databases.
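
    The essential idea - trading pairwise graph comparison for a hash-indexed inner product of node features - can be sketched as follows. The node descriptor used here (an atom's element plus the sorted elements of its neighbours) is a deliberately crude stand-in; G-hash's actual descriptors and hash design are not specified in this abstract.

```python
from collections import Counter

def node_features(atoms, bonds):
    """One hashable feature per atom: its label plus its neighbours' labels."""
    adj = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        adj[i].append(atoms[j])
        adj[j].append(atoms[i])
    return Counter((atoms[i], tuple(sorted(adj[i]))) for i in range(len(atoms)))

def kernel(g1, g2):
    """Inner product of hashed feature counts: O(#features), not O(|V1|*|V2|)."""
    f1, f2 = node_features(*g1), node_features(*g2)
    return sum(count * f2[key] for key, count in f1.items())

# Toy molecules as (atom labels, bond index pairs)
ethanol = (["C", "C", "O"], [(0, 1), (1, 2)])
methanol = (["C", "O"], [(0, 1)])
print(kernel(ethanol, methanol))  # shared features -> nonzero similarity
```

    Because every graph collapses to a sparse count vector, the same hash table that evaluates the kernel can double as the similarity-search index.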

  17. Chest Ultrasonography in Modern Day Extreme Settings: From Military Setting and Natural Disasters to Space Flights and Extreme Sports

    PubMed Central

    Mucci, Viviana

    2018-01-01

    Chest ultrasonography (CU) is a noninvasive imaging technique able to provide an immediate diagnosis of the underlying aetiology of acute respiratory failure and traumatic chest injuries. Given recent technological advances, it is now possible to perform accurate CU in remote and adverse environments including the combat field, extreme sport settings, and environmental disasters, as well as during space missions. Today, CU is increasingly used in extreme emergency settings, as this technique has proved to be a fast diagnostic tool to assist resuscitation manoeuvres and interventional procedures in many cases. A scientific literature review is presented here. This was based on a systematic search of published literature on the following online databases: PubMed and Scopus. The following words were used: "chest sonography," "thoracic ultrasound," and "lung sonography," in different combinations with "extreme sport," "extreme environment," "wilderness," "catastrophe," and "extreme conditions." This manuscript reports the most relevant uses of CU in the extreme setting as well as technological improvements and current limitations. CU application in the extreme setting is further encouraged here. PMID:29736195

  18. A review of the risk factors for lower extremity overuse injuries in young elite female ballet dancers.

    PubMed

    Bowerman, Erin Anne; Whatman, Chris; Harris, Nigel; Bradshaw, Elizabeth

    2015-06-01

    The objective of this study was to review the evidence for selected risk factors of lower extremity overuse injuries in young elite female ballet dancers. An electronic search of key databases from 1969 to July 2013 was conducted using the keywords dancers, ballet dancers, athletes, adolescent, adolescence, young, injury, injuries, risk, overuse, lower limb, lower extremity, lower extremities, growth, maturation, menarche, alignment, and biomechanics. Thirteen published studies were retained for review. Results indicated that there is a high incidence of lower extremity overuse injuries in the target population. Primary risk factors identified included maturation, growth, and poor lower extremity alignment. Strong evidence from well-designed studies indicates that young elite female ballet dancers suffer from delayed onset of growth, maturation, menarche, and menstrual irregularities. However, there is little evidence that this deficit increases the risk of overuse injury, with the exception of stress fractures. Similarly, there is minimal evidence linking poor lower extremity alignment to increased risk of overuse injury. It is concluded that further prospective, longitudinal studies are required to clarify the relationship between growth, maturation, menarche, and lower extremity alignment, and the risk of lower extremity overuse injury in young elite female ballet dancers.

  19. Climate Change Projection for the Department of Energy's Savannah River Site

    NASA Astrophysics Data System (ADS)

    Werth, D. W.

    2014-12-01

    As per recent Department of Energy (DOE) sustainability requirements, the Savannah River National Laboratory (SRNL) is developing a climate projection for the DOE's Savannah River Site (SRS) near Aiken, SC. This will comprise data from both a statistical and a dynamical downscaling process, each interpolated to the SRS. We require variables most relevant to operational activities at the site (such as the US Forest Service's forest management program), and select temperature, precipitation, wind, and humidity as being most relevant to energy and water resource requirements, fire and forest ecology, and facility and worker safety. We then develop projections of the means and extremes of these variables, estimate the effect on site operations, and develop long-term mitigation strategies. For example, given that outdoor work while wearing protective gear is a daily facet of site operations, heat stress is of primary importance to work planning, and we use the downscaled data to estimate changes in the occurrence of high temperatures. For the statistical downscaling, we use global climate model (GCM) data from the Climate Model Intercomparison Project, version 5 (CMIP-5), which was used in the IPCC Fifth Assessment Report (AR5). GCM data from five research groups were selected, and two climate change scenarios - RCP 4.5 and RCP 8.5 - were used with observed data from site instruments and other databases to produce the downscaled projections. We apply a quantile regression downscaling method, which involves the use of the observed cumulative distribution function to correct that of the GCM. This produces a downscaled projection with an interannual variability closer to that of the observed data and allows for more extreme values in the projections, which are often absent in GCM data. The statistically downscaled data are complemented with dynamically downscaled data from the NARCCAP database, which comprises output from regional climate models forced with GCM output from the CMIP-3 database of GCM simulations. Applications of the downscaled climate projections to some of the unique operational needs of a large DOE weapons complex site are described.
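
    Empirical quantile mapping is one common way to realize the CDF-based correction described above; the sketch below matches each future GCM value to the observed value at the same quantile of the historical GCM record. The SRNL quantile-regression method may differ in detail, so treat this as an illustration of the general idea.

```python
import numpy as np

def quantile_map(gcm_hist, obs, gcm_future):
    """Bias-correct future GCM values by empirical CDF matching."""
    # Quantile of each future value within the historical GCM distribution
    ranks = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Map those quantiles onto the observed distribution
    return np.quantile(obs, ranks)
```

    Because the tails of the observed distribution are transferred to the projection, downscaled values can exceed anything in the raw GCM record, which is the behaviour noted above.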

  20. The influence of weather and fuel type on the fuel composition of the area burned by forest fires in Ontario, 1996-2006.

    PubMed

    Podur, Justin J; Martell, David L

    2009-07-01

    Forest fires are influenced by weather, fuels, and topography, but the relative influence of these factors may vary in different forest types. Compositional analysis can be used to assess the relative importance of fuels and weather in the boreal forest. Do forest or wild land fires burn more flammable fuels preferentially or, because most large fires burn in extreme weather conditions, do fires burn fuels in the proportions they are available despite differences in flammability? In the Canadian boreal forest, aspen (Populus tremuloides) has been found to burn in less than the proportion in which it is available. We used the province of Ontario's Provincial Fuels Database and fire records provided by the Ontario Ministry of Natural Resources to compare the fuel composition of area burned by 594 large (>40 ha) fires that occurred in Ontario's boreal forest region, a study area some 430,000 km2 in size, between 1996 and 2006 with the fuel composition of the neighborhoods around the fires. We found that, over the range of fire weather conditions in which large fires burned and in a study area with 8% aspen, fires burn fuels in the proportions that they are available, results which are consistent with the dominance of weather in controlling large fires.

  1. Do climate extreme events foster violent civil conflicts? A coincidence analysis

    NASA Astrophysics Data System (ADS)

    Schleussner, Carl-Friedrich; Donges, Jonathan F.; Donner, Reik V.

    2014-05-01

    Civil conflicts promoted by adverse environmental conditions represent one of the most important potential feedbacks in the global socio-environmental nexus. While the role of climate extremes as a triggering factor is often discussed, no consensus has yet been reached about the cause-and-effect relation in the observed data record. Here we present results of a rigorous statistical coincidence analysis based on the Munich Re Inc. extreme events database and the Uppsala conflict data program. We report evidence for statistically significant synchronicity between climate extremes with high economic impact and violent conflicts for various regions, although no coherent global signal emerges from our analysis. Our results indicate the importance of regional vulnerability and might aid in identifying hot-spot regions for potential climate-triggered violent social conflicts.
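
    A coincidence analysis of this kind reduces to counting how often a conflict onset falls within some window after a climate extreme and comparing that rate to a null model. The sketch below uses a uniform-random surrogate null and a 90-day window; both are illustrative assumptions, not the paper's published choices.

```python
import numpy as np

def coincidence_rate(extremes, conflicts, window=90):
    """Fraction of extreme-event days followed by a conflict onset
    within `window` days (days given as integer day numbers)."""
    conflicts = np.asarray(conflicts)
    return np.mean([np.any((conflicts >= d) & (conflicts <= d + window))
                    for d in extremes])

def p_value(extremes, conflicts, span, window=90, n_surr=10_000, seed=0):
    """Significance from surrogates with the same number of conflict
    onsets placed uniformly at random over the study period."""
    rng = np.random.default_rng(seed)
    obs = coincidence_rate(extremes, conflicts, window)
    surr = [coincidence_rate(extremes, rng.integers(0, span, len(conflicts)),
                             window) for _ in range(n_surr)]
    return obs, float(np.mean(np.asarray(surr) >= obs))
```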

  2. Genome signature analysis of thermal virus metagenomes reveals Archaea and thermophilic signatures

    PubMed Central

    Pride, David T; Schoenfeld, Thomas

    2008-01-01

    Background Metagenomic analysis provides a rich source of biological information for otherwise intractable viral communities. However, study of viral metagenomes has been hampered by its nearly complete reliance on BLAST algorithms for identification of DNA sequences. We sought to develop algorithms for examination of viral metagenomes to identify the origin of sequences independent of BLAST algorithms. We chose viral metagenomes obtained from two hot springs, Bear Paw and Octopus, in Yellowstone National Park, as they represent simple microbial populations where comparatively large contigs were obtained. Thermal spring metagenomes have high proportions of sequences without significant GenBank homology, which has hampered identification of viruses and their linkage with hosts. To analyze each metagenome, we developed a method to classify DNA fragments using genome signature-based phylogenetic classification (GSPC), where metagenomic fragments are compared to a database of oligonucleotide signatures for all previously sequenced Bacteria, Archaea, and viruses. Results From both Bear Paw and Octopus hot springs, each assembled contig had more similarity to other metagenome contigs than to any sequenced microbial genome based on GSPC analysis, suggesting a genome signature common to each of these extreme environments. While viral metagenomes from Bear Paw and Octopus share some similarity, the genome signatures from each locale are largely unique. GSPC using a microbial database predicts most of the Octopus metagenome has archaeal signatures, while bacterial signatures predominate in Bear Paw, a finding consistent with those of GenBank BLAST. When using a viral database, the majority of the Octopus metagenome is predicted to belong to the archaeal virus families Globuloviridae and Fuselloviridae, while none of the Bear Paw metagenome is predicted to belong to archaeal viruses. As expected, when microbial and viral databases are combined, each of the Octopus and Bear Paw metagenomic contigs is predicted to belong to viruses rather than to any Bacteria or Archaea, consistent with the apparent viral origin of both metagenomes. Conclusion That BLAST searches identify no significant homologs for most metagenome contigs, while GSPC suggests their origin as archaeal viruses or bacteriophages, indicates GSPC provides a complementary approach in viral metagenomic analysis. PMID:18798991

  3. Genome signature analysis of thermal virus metagenomes reveals Archaea and thermophilic signatures.

    PubMed

    Pride, David T; Schoenfeld, Thomas

    2008-09-17

    Metagenomic analysis provides a rich source of biological information for otherwise intractable viral communities. However, study of viral metagenomes has been hampered by its nearly complete reliance on BLAST algorithms for identification of DNA sequences. We sought to develop algorithms for examination of viral metagenomes to identify the origin of sequences independent of BLAST algorithms. We chose viral metagenomes obtained from two hot springs, Bear Paw and Octopus, in Yellowstone National Park, as they represent simple microbial populations where comparatively large contigs were obtained. Thermal spring metagenomes have high proportions of sequences without significant GenBank homology, which has hampered identification of viruses and their linkage with hosts. To analyze each metagenome, we developed a method to classify DNA fragments using genome signature-based phylogenetic classification (GSPC), where metagenomic fragments are compared to a database of oligonucleotide signatures for all previously sequenced Bacteria, Archaea, and viruses. From both Bear Paw and Octopus hot springs, each assembled contig had more similarity to other metagenome contigs than to any sequenced microbial genome based on GSPC analysis, suggesting a genome signature common to each of these extreme environments. While viral metagenomes from Bear Paw and Octopus share some similarity, the genome signatures from each locale are largely unique. GSPC using a microbial database predicts most of the Octopus metagenome has archaeal signatures, while bacterial signatures predominate in Bear Paw, a finding consistent with those of GenBank BLAST. When using a viral database, the majority of the Octopus metagenome is predicted to belong to the archaeal virus families Globuloviridae and Fuselloviridae, while none of the Bear Paw metagenome is predicted to belong to archaeal viruses. As expected, when microbial and viral databases are combined, each of the Octopus and Bear Paw metagenomic contigs is predicted to belong to viruses rather than to any Bacteria or Archaea, consistent with the apparent viral origin of both metagenomes. That BLAST searches identify no significant homologs for most metagenome contigs, while GSPC suggests their origin as archaeal viruses or bacteriophages, indicates GSPC provides a complementary approach in viral metagenomic analysis.
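
    Stripped to its core, genome-signature classification assigns a fragment to whichever reference has the closest oligonucleotide-frequency vector. The sketch below uses tetranucleotide frequencies and Euclidean distance; the actual GSPC signature definition and its phylogenetic placement step are not detailed in the abstract, so this is a generic illustration.

```python
from collections import Counter
from itertools import product
import math

ALL_KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def signature(seq, k=4):
    """Normalised k-mer frequency vector of a DNA sequence (ACGT only)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts[kmer] for kmer in ALL_KMERS) or 1
    return {kmer: counts[kmer] / total for kmer in ALL_KMERS}

def distance(a, b):
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in ALL_KMERS))

def classify(contig, references):
    """Assign a contig to the reference with the nearest signature;
    `references` maps names to precomputed signature dicts."""
    s = signature(contig)
    return min(references, key=lambda name: distance(s, references[name]))
```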

  4. Project Ares: A Systems Engineering and Operations Architecture for the Exploration of Mars

    DTIC Science & Technology

    1992-03-20

    increased use of automation, experiential databases, expert systems, and 'fail-soft' configurations and designs (33:252-253). Automatic communication relay and... communications satellite's lifetimes, we assume that uplink data rates on the order of 10 Kbps should suffice for command and database uploads. Current... squashed, 20-sided polyhedron configuration which should be relatively easy to obtain. Thus, two extremes for configuration exist. At one end is the site

  5. Evaluating the Large-Scale Environment of Extreme Events Using Reanalyses

    NASA Astrophysics Data System (ADS)

    Bosilovich, M. G.; Schubert, S. D.; Koster, R. D.; da Silva, A. M., Jr.; Eichmann, A.

    2014-12-01

    Extreme conditions and events have always been a long-standing concern in weather forecasting and national security. While some evidence indicates extreme weather will increase under global change scenarios, extremes are often related to the large-scale atmospheric circulation and occur infrequently. Reanalyses assimilate substantial amounts of weather data, and a primary strength of reanalysis data is the representation of the large-scale atmospheric environment. In this effort, we link the occurrences of extreme events or climate indicators to the underlying regional and global weather patterns. Now, with greater than 30 years of data, reanalyses can include multiple cases of extreme events, and thereby identify commonality among the weather to better characterize the large-scale to global environment linked to the indicator or extreme event. Since these features are certainly regionally dependent, and the indicators of climate are continually being developed, we outline various methods to analyze the reanalysis data and the development of tools to support regional evaluation of the data. Here, we provide some examples of both individual case studies and composite studies of similar events. For example, we will compare the large-scale environment for Northeastern US extreme precipitation with that of the highest mean precipitation seasons. Likewise, southerly winds can be shown to be a major contributor to very warm days in the Northeast winter. While most of our development has involved NASA's MERRA reanalysis, we are also looking forward to MERRA-2, which includes several new features that greatly improve the representation of weather and climate, especially for the regions and sectors involved in the National Climate Assessment.
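
    The composite approach mentioned here is simple enough to sketch: average a pre-computed anomaly field over all event days and screen for grid points where the composite stands out from sampling noise. Array names and the two-standard-error screen are illustrative choices, not details from this effort.

```python
import numpy as np

def composite(anom, is_event):
    """Composite a gridded anomaly field (ntime, nlat, nlon) over event
    days marked by the boolean mask `is_event`."""
    comp = anom[is_event].mean(axis=0)
    # Crude significance screen: flag grid points whose composite exceeds
    # two standard errors of the event-day sample
    stderr = anom[is_event].std(axis=0) / np.sqrt(is_event.sum())
    return comp, np.abs(comp) > 2.0 * stderr
```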

  6. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    NASA Astrophysics Data System (ADS)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are being simulated for plausible physical reasons, boosting confidence in future projections of temperature extremes. Conversely, where model skill is identified to be lower, caution should be exercised in interpreting future projections.

  7. The Belgian repository of fundamental atomic data and stellar spectra (BRASS). I. Cross-matching atomic databases of astrophysical interest

    NASA Astrophysics Data System (ADS)

    Laverick, M.; Lobel, A.; Merle, T.; Royer, P.; Martayan, C.; David, M.; Hensberge, H.; Thienpont, E.

    2018-04-01

    Context. Fundamental atomic parameters, such as oscillator strengths, play a key role in modelling and understanding the chemical composition of stars in the Universe. Despite the significant work underway to produce these parameters for many astrophysically important ions, uncertainties in these parameters remain large and can propagate throughout the entire field of astronomy. Aims: The Belgian repository of fundamental atomic data and stellar spectra (BRASS) aims to provide the largest systematic and homogeneous quality assessment of atomic data to date in terms of wavelength, atomic and stellar parameter coverage. To prepare for it, we first compiled multiple literature occurrences of many individual atomic transitions from several atomic databases of astrophysical interest and assessed their agreement. In a second step, synthetic spectra will be compared against extremely high-quality observed spectra for a large number of BAFGK spectral type stars, in order to critically evaluate the atomic data of a large number of important stellar lines. Methods: Several atomic repositories were searched and their data retrieved and formatted in a consistent manner. Data entries from all repositories were cross-matched against our initial BRASS atomic line list to find multiple occurrences of the same transition. Where possible we used a new non-parametric cross-match depending only on electronic configurations and total angular momentum values. We also checked for duplicate entries of the same physical transition within each retrieved repository, using the non-parametric cross-match. Results: We report on the number of cross-matched transitions for each repository and compare their fundamental atomic parameters. We find differences in log(gf) values of up to 2 dex or more. We also find and report that 2% of our line-list and Vienna atomic line database retrievals are composed of duplicate transitions. Finally, we provide a number of examples of atomic spectral lines with different retrieved literature log(gf) values, and discuss the impact of these uncertain log(gf) values on quantitative spectroscopy. All cross-matched atomic data and duplicate transition pairs are available to download at http://brass.sdf.org
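
    A non-parametric cross-match of this kind can be implemented as a dictionary keyed on the quantities the authors name - the electronic configurations and total angular momenta of the two levels. Field names and the log(gf)-spread report are illustrative; the BRASS pipeline's actual record format is not given here.

```python
from collections import defaultdict

def cross_match(repositories):
    """Group transition records from several repositories by configuration
    and J values; report the log(gf) spread for multiply-occurring lines.
    `repositories` maps a repository name to a list of record dicts."""
    matched = defaultdict(list)
    for repo, lines in repositories.items():
        for line in lines:
            key = (line["config_lower"], line["J_lower"],
                   line["config_upper"], line["J_upper"])
            matched[key].append((repo, line["loggf"]))
    report = {}
    for key, entries in matched.items():
        if len({repo for repo, _ in entries}) > 1:   # found in >1 repository
            gfs = [gf for _, gf in entries]
            report[key] = max(gfs) - min(gfs)        # spread in dex
    return report
```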

  8. Quality control of EUVE databases

    NASA Technical Reports Server (NTRS)

    John, L. M.; Drake, J.

    1992-01-01

    The publicly accessible databases for the Extreme Ultraviolet Explorer include: the EUVE Archive mailserver; the CEA ftp site; the EUVE Guest Observer Mailserver; and the Astronomical Data System node. The EUVE Performance Assurance team is responsible for verifying that these public EUVE databases are working properly, and that the public availability of EUVE data contained therein does not infringe any data rights which may have been assigned. In this poster, we describe the Quality Assurance (QA) procedures we have developed from the approach of QA as a service organization, thus reflecting the overall EUVE philosophy of Quality Assurance integrated into normal operating procedures, rather than imposed as an external, post facto, control mechanism.

  9. Plans for the extreme ultraviolet explorer data base

    NASA Technical Reports Server (NTRS)

    Marshall, Herman L.; Dobson, Carl A.; Malina, Roger F.; Bowyer, Stuart

    1988-01-01

    The paper presents an approach for storage and fast access to data that will be obtained by the Extreme Ultraviolet Explorer (EUVE), a satellite payload scheduled for launch in 1991. The EUVE telescopes will be operated remotely from the EUVE Science Operations Center (SOC) located at the University of California, Berkeley. The EUVE science payload consists of three scanning telescopes carrying out an all-sky survey in the 80-800 A spectral region and a Deep Survey/Spectrometer telescope performing a deep survey in the 80-250 A spectral region. Guest Observers will remotely access the EUVE spectrometer database at the SOC. The EUVE database will consist of about 2 x 10^10 bytes of information in a very compact form, very similar to the raw telemetry data. A history file will be built concurrently, giving telescope parameters, command history, attitude summaries, engineering summaries, anomalous events, and ephemeris summaries.

  10. Hypervelocity impact physics

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.; Bean, Alan J.; Darzi, Kent

    1991-01-01

    All large spacecraft are susceptible to impacts by meteoroids and orbiting space debris. These impacts occur at extremely high speed and can damage flight-critical systems, which can in turn lead to a catastrophic failure of the spacecraft. Therefore, the design of a spacecraft for a long-duration mission must take into account the possibility of such impacts and their effects on the spacecraft structure and on all of its exposed subsystem components. The work performed under the contract consisted of applied research on the effects of meteoroid/space debris impacts on candidate materials, design configurations, and support mechanisms of long-term space vehicles. Hypervelocity impact mechanics was used to analyze the damage that occurs when a space vehicle is impacted by a micrometeoroid or a space debris particle. An impact analysis of over 500 test specimens was performed to generate a hypervelocity impact damage database.

  11. [Organization, availability and possibility of analysis of disaster data of climate related origin and its impacts on health].

    PubMed

    Xavier, Diego Ricardo; Barcellos, Christovam; Barros, Heglaucio da Silva; Magalhães, Monica de Avelar Figueiredo Mafra; Matos, Vanderlei Pascoal de; Pedroso, Marcel de Moraes

    2014-09-01

    The occurrence of disasters is often related to unforeseeable natural processes. However, the analysis of major databases may highlight seasonal and long-term trends, as well as spatial patterns where risks are concentrated. In this paper, the process of acquiring and organizing climate-related disaster data collected by civil protection institutions and made available by the Brazilian Climate and Health Observatory is described. Preliminary analyses show the concentration of disasters caused by heavy rainfall events along the Brazilian coastline, especially during the summer. Droughts have longer duration and extent, affecting large areas of the south and northeast regions of the country. These data can be used to analyze and monitor the impact of extreme climatic events on health, as well as to identify vulnerability and climate determinants.

  12. The Top 50 Articles on Minimally Invasive Spine Surgery.

    PubMed

    Virk, Sohrab S; Yu, Elizabeth

    2017-04-01

    Bibliometric study of current literature. To catalog the most important minimally invasive spine (MIS) surgery articles using the number of citations as a marker of relevance. MIS surgery is a relatively new tool used by spinal surgeons. There is a dynamic and evolving field of research related to MIS techniques, clinical outcomes, and basic science research. To date, there is no comprehensive review of the most cited articles related to MIS surgery. A systematic search was performed over three widely used literature databases: Web of Science, Scopus, and Google Scholar. There were four searches performed using the terms "minimally invasive spine surgery," "endoscopic spine surgery," "percutaneous spinal surgery," and "lateral interbody surgery." The number of citations was averaged amongst the three databases to rank each article. The query of the three databases was performed in November 2015. Fifty articles were selected based upon the number of citations each averaged amongst the three databases. The most cited article was titled "Extreme Lateral Interbody Fusion (XLIF): a novel surgical technique for anterior lumbar interbody fusion" by Ozgur et al and was credited with 447, 239, and 279 citations in Google Scholar, Web of Science, and Scopus, respectively. Citations ranged from 27 to 239 for Web of Science, 60 to 279 for Scopus, and 104 to 462 for Google Scholar. There was a large variety of articles written, spanning over 14 different topics, with the majority dealing with clinical outcomes related to MIS surgery. The majority of the most cited articles were level III and level IV studies. This is likely due to the relatively recent nature of technological advances in the field. Furthermore, level I and level II studies are required in MIS surgery in the years ahead. Level of evidence: 5.

  13. Identification of large-scale meteorological patterns associated with extreme precipitation in the US northeast

    NASA Astrophysics Data System (ADS)

    Agel, Laurie; Barlow, Mathew; Feldstein, Steven B.; Gutowski, William J.

    2018-03-01

    Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. The tropopause height provides a compact representation of the upper-tropospheric potential vorticity, which is closely related to the overall evolution and intensity of weather systems. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied on extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into the overall context of patterns for all days. Six tropopause patterns are identified through KMC for extreme precipitation days: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns for all days are identified. Results for all days show that 6 SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns shows various degrees of anomalously strong moisture transport preceding, and upward motion during, extreme precipitation events.
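
    As an illustration of the KMC step, the sketch below clusters flattened tropopause-height maps from extreme-precipitation days into six patterns with scikit-learn; the grid dimensions and synthetic input are placeholders for the real reanalysis fields.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
maps = rng.standard_normal((500, 40 * 60))  # stand-in: one flattened map per extreme day

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(maps)
patterns = km.cluster_centers_.reshape(6, 40, 60)  # one composite pattern per cluster
print(np.bincount(km.labels_))                     # extreme days per pattern
```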

  14. MASSCLEANage—Stellar Cluster Ages from Integrated Colors

    NASA Astrophysics Data System (ADS)

    Popescu, Bogdan; Hanson, M. M.

    2010-11-01

    We present the recently updated and expanded MASSCLEANcolors, a database of 70 million Monte Carlo models selected to match the properties (metallicity, ages, and masses) of stellar clusters found in the Large Magellanic Cloud (LMC). This database shows the rather extreme and non-Gaussian distribution of integrated colors and magnitudes expected with different cluster age and mass, and the enormous age degeneracy of integrated colors when mass is unknown. This degeneracy could lead to catastrophic failures in estimating age with standard simple stellar population models, particularly if most of the clusters are of intermediate or low mass, as in the LMC. Utilizing the MASSCLEANcolors database, we have developed MASSCLEANage, a statistical inference package which assigns the most likely age and mass (solved simultaneously) to a cluster based only on its integrated broadband photometric properties. Finally, we use MASSCLEANage to derive the age and mass of LMC clusters based on integrated photometry alone. First, we compare our cluster ages against those obtained for the same seven clusters using more accurate integrated spectroscopy. We find improved agreement with the integrated spectroscopy ages over the original photometric ages. A close examination of our results demonstrates the necessity of solving simultaneously for mass and age to reduce degeneracies in the cluster ages derived via integrated colors. We then selected an additional subset of 30 photometric clusters with previously well-constrained ages and independently derived their ages using MASSCLEANage with the same photometry, finding very good agreement. The MASSCLEANage program is freely available under the GNU General Public License.

  15. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of a p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large-sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of extreme mean application.
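
    Reading "p-th probability truncated normal" as truncation at the p-th quantile (an interpretive assumption here), the extreme mean has the closed form E[X | X > x_p] = mu + sigma * phi(z_p) / (1 - p), i.e. the inverse Mills ratio scaled by sigma:

```python
from scipy.stats import norm

def extreme_mean(mu, sigma, p):
    """Mean of a normal distribution truncated below its p-th quantile,
    i.e. E[X | X > x_p], via the inverse Mills ratio."""
    z = norm.ppf(p)
    return mu + sigma * norm.pdf(z) / (1.0 - p)

print(extreme_mean(0.0, 1.0, 0.95))  # ~2.06 for a standard normal
```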

  16. Does Eccentric Exercise Reduce Pain and Improve Strength in Physically Active Adults With Symptomatic Lower Extremity Tendinosis? A Systematic Review

    PubMed Central

    Wasielewski, Noah J; Kotsko, Kevin M

    2007-01-01

    Objective: To critically review evidence for the effectiveness of eccentric exercise to treat lower extremity tendinoses. Data Sources: Databases used to locate randomized controlled trials (RCTs) included PubMed (1980–2006), CINAHL (1982–2006), Web of Science (1995–2006), SPORT Discus (1980–2006), Physiotherapy Evidence Database (PEDro), and the Cochrane Collaboration Database. Key words included tendon, tendonitis, tendinosis, tendinopathy, exercise, eccentric, rehabilitation, and therapy. Study Selection: The criteria for trial selection were (1) the literature was written in English, (2) the research design was an RCT, (3) the study participants were adults with a clinical diagnosis of tendinosis, (4) the outcome measures included pain or strength, and (5) eccentric exercise was used to treat lower extremity tendinosis. Data Extraction: Specific data were abstracted from the RCTs, including eccentric exercise protocol, adjunctive treatments, concurrent physical activity, and treatment outcome. Data Synthesis: The calculated post hoc statistical power of the selected studies (n = 11) was low, and the average methodologic score was 5.3/10 based on PEDro criteria. Eccentric exercise was compared with no treatment (n = 1), concentric exercise (n = 5), an alternative eccentric exercise protocol (n = 1), stretching (n = 2), night splinting (n = 1), and physical agents (n = 1). In most trials, tendinosis-related pain was reduced with eccentric exercise over time, but only in 3 studies did eccentric exercise decrease pain relative to the control treatment. Similarly, the RCTs demonstrated that strength-related measures improved over time, but none revealed significant differences relative to the control treatment. Based on the best evidence available, it appears that eccentric exercise may reduce pain and improve strength in lower extremity tendinoses, but whether eccentric exercise is more effective than other forms of therapeutic exercise for the resolution of tendinosis symptoms remains questionable. PMID:18059998

  17. Effect of Plyometric Training on Vertical Jump Performance in Female Athletes: A Systematic Review and Meta-Analysis.

    PubMed

    Stojanović, Emilija; Ristić, Vladimir; McMaster, Daniel Travis; Milanović, Zoran

    2017-05-01

    Plyometric training is an effective method to prevent knee injuries in female athletes; however, the effects of plyometric training on jump performance in female athletes are unclear. The aim of this systematic review and meta-analysis was to determine the effectiveness of plyometric training on vertical jump (VJ) performance of amateur, collegiate and elite female athletes. Six electronic databases were searched (PubMed, MEDLINE, ERIC, Google Scholar, SCIndex and ScienceDirect). The included studies were coded for the following criteria: training status, training modality and type of outcome measures. The methodological quality of each study was assessed using the Physiotherapy Evidence Database (PEDro) scale. The effects of plyometric training on VJ performance were based on the following standardised pre-post testing effect size (ES) thresholds: trivial (<0.20), small (0.21-0.60), moderate (0.61-1.20), large (1.21-2.00), very large (2.01-4.00) and extremely large (>4.00). A total of 16 studies met the inclusion criteria. The meta-analysis revealed that plyometric training had a most likely moderate effect on countermovement jump (CMJ) height performance (ES = 1.09; 95% confidence interval [CI] 0.57-1.61; I² = 75.60%). Plyometric training interventions of less than 10 weeks in duration had a most likely small effect on CMJ height performance (ES = 0.58; 95% CI 0.25-0.91). In contrast, plyometric training durations greater than 10 weeks had a most likely large effect on CMJ height (ES = 1.87; 95% CI 0.73-3.01). The effect of plyometric training on concentric-only squat jump (SJ) height was likely small (ES = 0.44; 95% CI -0.09 to 0.97). Similar effects were observed on SJ height after 6 weeks of plyometric training in amateur (ES = 0.35) and young (ES = 0.49) athletes, respectively. The effect of plyometric training on CMJ height with the arm swing was likely large (ES = 1.31; 95% CI -0.04 to 2.65). The largest plyometric training effects were observed in drop jump (DJ) height performance (ES = 3.59; 95% CI -3.04 to 10.23). Most likely extremely large plyometric training effects on DJ height performance (ES = 7.07; 95% CI 4.71-9.43) were observed following 12 weeks of plyometric training. In contrast, a possibly small positive training effect (ES = 0.30; 95% CI -0.63 to 1.23) was observed following 6 weeks of plyometric training. Plyometric training is an effective form of training to improve VJ performance (e.g. CMJ, SJ and DJ) in female athletes. The benefits of plyometric training on VJ performance are greater for interventions of longer duration (≥10 weeks).

  18. Characterizing rainfall in the Tenerife island

    NASA Astrophysics Data System (ADS)

    Díez-Sierra, Javier; del Jesus, Manuel; Losada Rodriguez, Inigo

    2017-04-01

    In many locations, rainfall data are collected through networks of meteorological stations. The data collection process is nowadays automated in many places, leading to the development of big databases of rainfall data covering extensive areas of territory. However, managers, decision makers and engineering consultants tend not to extract most of the information contained in these databases, due to the lack of specific software tools for their exploitation. Here we present the modeling and development effort put in place on the island of Tenerife to develop MENSEI-L, a software tool capable of automatically analyzing a complete rainfall database to simplify the extraction of information from observations. MENSEI-L makes use of weather-type information derived from atmospheric conditions to separate the complete time series into homogeneous groups, within which statistical distributions are fitted. Normal and extreme regimes are obtained in this manner. MENSEI-L is also able to complete missing data in the time series and to generate synthetic stations by using Kriging techniques. These techniques also serve to generate the spatial regimes of precipitation, both normal and extreme. MENSEI-L also uses weather-type information to provide a stochastic three-day probability forecast for rainfall.
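
    The abstract does not name the distributions fitted within each weather-type group; for the extreme regime, a generalized extreme value (GEV) fit to block maxima is the conventional choice, sketched below with synthetic annual maxima.

```python
import numpy as np
from scipy.stats import genextreme

# Stand-in annual rainfall maxima (mm) for one station / weather type
annual_max = np.array([42.0, 55.3, 61.2, 38.9, 70.4, 48.1, 90.2,
                       52.7, 45.5, 66.8, 58.0, 49.9, 77.3, 44.6])

shape, loc, scale = genextreme.fit(annual_max)
# Rainfall depth exceeded on average once every 50 years
r50 = genextreme.ppf(1.0 - 1.0 / 50.0, shape, loc=loc, scale=scale)
print(f"50-year return level: {r50:.1f} mm")
```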

  19. Building Simulation Modelers are we big-data ready?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan

    Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generate over 200 TB of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated, and analysis techniques that perform analysis in-situ with the simulations as they are run are discussed. The paper concludes with an example and elaboration of the tipping point where it becomes more expensive to store the output than to re-run a set of simulations.

  20. National Databases for Neurosurgical Outcomes Research: Options, Strengths, and Limitations.

    PubMed

    Karhade, Aditya V; Larsen, Alexandra M G; Cote, David J; Dubois, Heloise M; Smith, Timothy R

    2017-08-05

    Quality improvement, value-based care delivery, and personalized patient care depend on robust clinical, financial, and demographic data streams of neurosurgical outcomes. The neurosurgical literature lacks a comprehensive review of large national databases. To assess the strengths and limitations of various resources for outcomes research in neurosurgery. A review of the literature was conducted to identify surgical outcomes studies using national data sets. The databases were assessed for the availability of patient demographics and clinical variables, longitudinal follow-up of patients, strengths, and limitations. The number of unique patients contained within each data set ranged from thousands (Quality Outcomes Database [QOD]) to hundreds of millions (MarketScan). Databases with both clinical and financial data included PearlDiver, Premier Healthcare Database, Vizient Clinical Data Base and Resource Manager, and the National Inpatient Sample. Outcomes collected by databases included patient-reported outcomes (QOD); 30-day morbidity, readmissions, and reoperations (National Surgical Quality Improvement Program); and disease incidence and disease-specific survival (Surveillance, Epidemiology, and End Results-Medicare). The strengths of large databases included large numbers of rare pathologies and multi-institutional nationally representative sampling; the limitations of these databases included variable data veracity, variable data completeness, and missing disease-specific variables. The improvement of existing large national databases and the establishment of new registries will be crucial to the future of neurosurgical outcomes research. Copyright © 2017 by the Congress of Neurological Surgeons

  1. Application and measurement properties of EQ-5D to measure quality of life in patients with upper extremity orthopaedic disorders: a systematic literature review.

    PubMed

    Grobet, Cécile; Marks, Miriam; Tecklenburg, Linda; Audigé, Laurent

    2018-04-13

    The EuroQol-5 Dimension (EQ-5D) is the most widely used generic instrument to measure quality of life (QoL), yet its application in upper extremity orthopaedics as well as its measurement properties remain largely undefined. We implemented a systematic literature review to provide an overview of the application of EQ-5D in patients with upper extremity disorders and analyse its measurement properties. We searched Medline, EMBASE, Cochrane and Scopus databases for clinical studies including orthopaedic patients with surgical interventions of the upper extremity who completed the EQ-5D. For all included studies, the use of EQ-5D and quantitative QoL data were described. Validation studies of EQ-5D were assessed according to COSMIN guidelines and standard measurement properties were examined. Twenty-three studies were included in the review, 19 of which investigated patients with an intervention carried out at the shoulder region. In 15 studies, EQ-5D assessed QoL as the primary outcome. Utility index scores in non-trauma patients generally improved postoperatively, whereas trauma patients did not regain their recalled pre-injury QoL levels. EQ-5D measurement properties were reported in three articles on proximal humerus fractures and carpal tunnel syndrome. Positive ratings were seen for construct validity (Spearman correlation coefficient ≥ 0.70 with the Short Form (SF)-12 or SF-6D health surveys) and reliability (intraclass correlation coefficient ≥ 0.77) with intermediate responsiveness (standardised response means: 0.5-0.9). However, ceiling effects were identified with 16-48% of the patients scoring the maximum QoL. The methodological quality of the three articles varied from fair to good. For surgical interventions of the upper extremity, EQ-5D was mostly applied to assess QoL as a primary outcome in patients with shoulder disorders. Investigations of the measurement properties were rare, but indicate good reliability and validity as well as moderate responsiveness in patients with upper extremity conditions.

  2. LDEF meteoroid and debris database

    NASA Technical Reports Server (NTRS)

    Dardano, C. B.; See, Thomas H.; Zolensky, Michael E.

    1994-01-01

    The Long Duration Exposure Facility (LDEF) Meteoroid and Debris Special Investigation Group (M&D SIG) database is maintained at the Johnson Space Center (JSC), Houston, Texas, and consists of five data tables containing information about individual features, digitized images of selected features, and LDEF hardware (i.e., approximately 950 samples) archived at JSC. About 4000 penetrations (greater than 300 microns in diameter) and craters (greater than 500 microns in diameter) were identified and photodocumented during the disassembly of LDEF at the Kennedy Space Center (KSC), while an additional 4500 or so have subsequently been characterized at JSC. The database also contains some data submitted by various PIs, yet such data are extremely limited in extent, and investigators are encouraged to submit any and all M&D-type data to JSC for inclusion within the M&D database. Digitized stereo-image pairs are available for approximately 4500 features through the database.

  3. Probabilistic Forecast of Solar Particle Fluence for Mission Durations and Exposure Assessment in Consideration of Integral Proton Fluence at High Energies

    NASA Astrophysics Data System (ADS)

    Kim, M. Y.; Tylka, A. J.; Dietrich, W. F.; Cucinotta, F. A.

    2012-12-01

    The occasional occurrence of solar particle events (SPEs) with large amounts of energy is unpredictable, while the expected frequency is strongly influenced by solar cycle activity. The potential for exposure to large SPEs with high energy levels is the major concern during extra-vehicular activities (EVAs) on the Moon, near-Earth objects, and the Mars surface for future long-duration space missions. We estimated the propensity for SPE occurrence with large proton fluence as a function of time within a typical future solar cycle from a non-homogeneous Poisson model, using the historical database of measurements of protons with energy > 30 MeV, Φ30. The database includes a comprehensive collection of historical data sets for the past 5 solar cycles. Using all the recorded proton fluences of SPEs, total fluence distributions of Φ30, Φ60, and Φ100 were simulated, ranging from their 5th to 95th percentiles, for each mission duration. In addition to the total particle intensity of SPEs, the detailed energy spectra of protons, especially at high energy levels, were recognized as extremely important for assessing the radiation cancer risk associated with energetic particles for large events. For radiation exposure assessments of major SPEs, we used the spectral functional form of a double power law in rigidity (the so-called Band function), which has provided a satisfactory representation of the combined satellite and neutron monitor data from ~10 MeV to ~10 GeV. The dependencies of exposure risk were evaluated as a function of proton fluence at given energy thresholds of 30, 60, and 100 MeV, and overall risk prediction improved as the energy threshold increased from 30 to 60 to 100 MeV. The results can be applied to the development of approaches for improved radiation protection of astronauts, as well as the optimization of mission planning and shielding for future space missions.
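
    The Band form referred to above joins a low-rigidity power law with exponential rollover continuously onto a steeper high-rigidity power law. The sketch below writes that functional form directly; parameter names are illustrative, and per-event values come from fits to the satellite and neutron-monitor data.

```python
import numpy as np

def band_spectrum(R, J0, ga, gb, R0):
    """Event-integrated fluence spectrum vs. rigidity R [GV]: a double
    power law joined continuously at the break rigidity Rb."""
    Rb = (gb - ga) * R0                                      # break rigidity
    low = J0 * R**(-ga) * np.exp(-R / R0)                    # below the break
    high = J0 * R**(-gb) * Rb**(gb - ga) * np.exp(ga - gb)   # above the break
    return np.where(R <= Rb, low, high)
```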

  4. Cross-checking of Large Evaluated and Experimental Nuclear Reaction Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeydina, O.; Koning, A.J.; Soppera, N.

    2014-06-15

    Automated methods are presented for the verification of large experimental and evaluated nuclear reaction databases (e.g. EXFOR, JEFF, TENDL). These methods allow an assessment of the overall consistency of the data and detect aberrant values in both evaluated and experimental databases.
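
    The entry does not spell out the verification methods, so the following is only a hypothetical illustration of one building block such a cross-check might use: a robust (median/MAD) score that flags experimental points deviating strongly from an evaluated curve. All data and thresholds here are invented.

    ```python
    import numpy as np

    def flag_aberrant(xs_exp, xs_eval, threshold=5.0):
        """Flag experimental cross sections that deviate strongly from an
        evaluated curve, using a robust median/MAD score on log ratios."""
        r = np.log(np.asarray(xs_exp) / np.asarray(xs_eval))
        mad = np.median(np.abs(r - np.median(r))) or 1e-12
        score = 0.6745 * (r - np.median(r)) / mad  # ~z-score if ratios were normal
        return np.abs(score) > threshold

    # Hypothetical data: experimental points vs. an evaluated curve.
    exp_xs = np.array([0.52, 0.41, 3.10, 0.18])   # third point is an outlier
    eval_xs = np.array([0.50, 0.40, 0.31, 0.20])
    print(flag_aberrant(exp_xs, eval_xs))  # -> [False False  True False]
    ```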

  5. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    PubMed

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate novel sequence library subsets, extending seqdb_demo to store sequence similarity search results, and using the stored results to address aspects of comparative genomic analysis.
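
    As an illustration of the subset-library idea, here is a minimal sketch using Python's sqlite3 with a hypothetical, miniature schema; the actual seqdb_demo schema described in the unit may differ.

    ```python
    import sqlite3

    # Hypothetical miniature of a seqdb_demo-style schema.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE taxon   (tax_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE protein (acc TEXT PRIMARY KEY,
                          tax_id INTEGER REFERENCES taxon, seq TEXT);
    """)
    con.executemany("INSERT INTO taxon VALUES (?, ?)",
                    [(562, "Escherichia coli"), (9606, "Homo sapiens")])
    con.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                    [("P0A7G6", 562, "MIDL..."), ("P04637", 9606, "MEEP...")])

    # Build a subset library restricted to one taxon -- searching against a
    # smaller, relevant library improves the statistical significance of hits.
    rows = con.execute("""
        SELECT p.acc, p.seq FROM protein p JOIN taxon t USING (tax_id)
        WHERE t.name = 'Escherichia coli'""")
    for acc, seq in rows:
        print(f">{acc}\n{seq}")   # FASTA-formatted subset library
    ```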

  6. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    NASA Astrophysics Data System (ADS)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research uses synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluating the ability of climate models to capture key patterns associated with extreme precipitation over Portland and in better interpreting projections of future climate at impact-relevant scales.

  7. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    NASA Astrophysics Data System (ADS)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  8. Surgical research using national databases

    PubMed Central

    Leland, Hyuma; Heckmann, Nathanael

    2016-01-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research. PMID:27867945

  9. Surgical research using national databases.

    PubMed

    Alluri, Ram K; Leland, Hyuma; Heckmann, Nathanael

    2016-10-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research.

  10. Climate Impacts on Extreme Energy Consumption of Different Types of Buildings

    PubMed Central

    Li, Mingcai; Shi, Jun; Guo, Jun; Cao, Jingfu; Niu, Jide; Xiong, Mingming

    2015-01-01

    Exploring changes in building energy consumption and its relationships with climate can provide a basis for energy saving and carbon emission reduction. Heating and cooling energy consumption of different types of buildings in Tianjin city during 1981-2010 was simulated using TRNSYS software. Daily or hourly extreme energy consumption was determined by percentile methods, and the climate impact on extreme energy consumption was analyzed. The results showed that days of extreme heating consumption decreased markedly over the recent 30 years for the residential and large venue buildings, whereas days of extreme cooling consumption increased for the large venue building. No significant variations were found in the days of extreme energy consumption for the commercial building, although extreme heating energy consumption showed a decreasing trend. Daily extreme energy consumption for the large venue building had no relationship with climate parameters, whereas extreme energy consumption for the commercial and residential buildings was related to various climate parameters. Further multiple regression analysis suggested that daily extreme heating energy consumption for the commercial building was affected by maximum temperature, dry bulb temperature, solar radiation and minimum temperature, which together explain 71.5% of its variation. The daily extreme cooling energy consumption for the commercial building was related only to the wet bulb temperature (R² = 0.382). The daily extreme heating energy consumption for the residential building was affected by four climate parameters, with the dry bulb temperature having the main impact. The impacts of climate on hourly extreme heating energy consumption show a 1-3 hour delay in all three types of buildings, but no delay was found in the impacts of climate on hourly extreme cooling energy consumption for the selected buildings. PMID:25923205

  11. Climate impacts on extreme energy consumption of different types of buildings.

    PubMed

    Li, Mingcai; Shi, Jun; Guo, Jun; Cao, Jingfu; Niu, Jide; Xiong, Mingming

    2015-01-01

    Exploring changes in building energy consumption and its relationships with climate can provide a basis for energy saving and carbon emission reduction. Heating and cooling energy consumption of different types of buildings in Tianjin city during 1981-2010 was simulated using TRNSYS software. Daily or hourly extreme energy consumption was determined by percentile methods, and the climate impact on extreme energy consumption was analyzed. The results showed that days of extreme heating consumption decreased markedly over the recent 30 years for the residential and large venue buildings, whereas days of extreme cooling consumption increased for the large venue building. No significant variations were found in the days of extreme energy consumption for the commercial building, although extreme heating energy consumption showed a decreasing trend. Daily extreme energy consumption for the large venue building had no relationship with climate parameters, whereas extreme energy consumption for the commercial and residential buildings was related to various climate parameters. Further multiple regression analysis suggested that daily extreme heating energy consumption for the commercial building was affected by maximum temperature, dry bulb temperature, solar radiation and minimum temperature, which together explain 71.5% of its variation. The daily extreme cooling energy consumption for the commercial building was related only to the wet bulb temperature (R² = 0.382). The daily extreme heating energy consumption for the residential building was affected by four climate parameters, with the dry bulb temperature having the main impact. The impacts of climate on hourly extreme heating energy consumption show a 1-3 hour delay in all three types of buildings, but no delay was found in the impacts of climate on hourly extreme cooling energy consumption for the selected buildings.

  12. A secure and efficiently searchable health information architecture.

    PubMed

    Yasnoff, William A

    2016-06-01

    Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be searched sequentially, since each record must be individually accessed and decrypted. To allow reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10 million record personal grid using 500 servers vary from 7 to 33 minutes depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary searching speed. Copyright © 2016 Elsevier Inc. All rights reserved.
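
    Below is a minimal sketch of the per-record encryption and parallel sequential scan described above, using Fernet from the cryptography package as a stand-in cipher and a local process pool in place of on-demand cloud servers; this is illustrative, not the paper's implementation.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    from cryptography.fernet import Fernet

    def encrypt_record(text):
        """Each person's record gets its own key -- one stolen key exposes
        only one record, not the whole repository."""
        key = Fernet.generate_key()
        return key, Fernet(key).encrypt(text.encode())

    def matches(args):
        """Worker: decrypt a single record and test the query against it."""
        key, token, query = args
        return query in Fernet(key).decrypt(token).decode()

    if __name__ == "__main__":
        records = [encrypt_record(t) for t in
                   ["dx: hypertension", "dx: diabetes", "dx: asthma"] * 1000]
        query = "diabetes"
        # The sequential scan is the security tradeoff; fan it out across
        # workers (in the paper, hundreds of cloud servers play this role).
        with ProcessPoolExecutor() as pool:
            hits = sum(pool.map(matches, ((k, t, query) for k, t in records),
                                chunksize=100))
        print(f"{hits} matching records")
    ```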

  13. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold-standard approach to confirm the accuracy of database identifications, but is extremely time-intensive. To reduce the growing time burden of manually validating large proteomic datasets, we provide computer-aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Some Reliability Issues in Very Large Databases.

    ERIC Educational Resources Information Center

    Lynch, Clifford A.

    1988-01-01

    Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…

  15. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    NASA Astrophysics Data System (ADS)

    Stout, John Eldon

    Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it cannot be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of type IIB string theory on toric Calabi-Yau hypersurfaces with h1,1 ≤ 4 in the Kreuzer-Skarke database. While none of these examples can sustain a super-Planckian displacement--the largest possible is 0.3 Mpl--we find an alignment mechanism responsible for large displacements in random matrix models at large h1,1 >> 1, indicating that large-field inflation may be feasible in compactifications with tens or hundreds of axions. These results represent a modest step toward a complete understanding of large hierarchies and naturalness in quantum gravity.

  16. Use of large healthcare databases for rheumatology clinical research.

    PubMed

    Desai, Rishi J; Solomon, Daniel H

    2017-03-01

    Large healthcare databases, which contain data collected during routinely delivered healthcare to patients, can serve as a valuable resource for generating actionable evidence to assist medical and healthcare policy decision-making. In this review, we summarize the use of large healthcare databases in rheumatology clinical research. Large healthcare data are critical to evaluate medication safety and effectiveness in patients with rheumatologic conditions. Three major sources of large healthcare data are electronic medical records, health insurance claims, and patient registries. Each of these sources offers unique advantages, but also has some inherent limitations. To address some of these limitations and maximize the utility of these data sources for evidence generation, recent efforts have focused on linking different data sources. Innovations such as randomized registry trials, which aim to facilitate the design of low-cost randomized controlled trials built on existing infrastructure provided by large healthcare databases, are likely to make clinical research more efficient in coming years. Harnessing the power of information contained in large healthcare databases, while paying close attention to their inherent limitations, is critical to generate a rigorous evidence base for medical decision-making and ultimately enhance patient care.

  17. Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patchett, John M; Ahrens, James P; Lo, Li - Ta

    2010-10-15

    Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU- and CPU-based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software-based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.

  18. An algorithm of discovering signatures from DNA databases on a computer cluster.

    PubMed

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique in a database and not similar to any other sequence in it; they can be used as the basis for identifying different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require entire databases to be loaded into memory, restricting the amount of data they can process and making them unable to handle databases with large amounts of data. Those algorithms also use sequential models with slower discovery speeds, leaving room for efficiency improvements. In this research, we introduce a divide-and-conquer strategy for signature discovery and propose a parallel signature discovery algorithm on a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the memory limitation of existing algorithms and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases, such as the human whole-genome EST database, which existing algorithms could not process. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
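
    The following is a minimal sketch of the divide-and-conquer idea, under the simplifying assumption that a signature is a k-mer occurring exactly once in the whole database: k-mers are routed to partitions by hash, so each partition can be counted independently within a small memory budget. The published algorithm is more elaborate and runs the partitions in parallel on a cluster.

    ```python
    from collections import Counter

    def kmers(seq, k):
        for i in range(len(seq) - k + 1):
            yield seq[i:i + k]

    def find_signatures(sequences, k=8, n_parts=4):
        """Divide-and-conquer sketch: every k-mer is routed to one partition
        by hash, so each partition is counted independently (and could be
        processed in parallel, or on machines with little memory). Here a
        signature is a k-mer seen exactly once in the whole database."""
        signatures = []
        for part in range(n_parts):
            counts = Counter()
            for seq in sequences:            # in practice: streamed from disk
                counts.update(km for km in kmers(seq, k)
                              if hash(km) % n_parts == part)
            signatures += [km for km, n in counts.items() if n == 1]
        return signatures

    db = ["ACGTACGTGGCCAATT", "ACGTACGTGGCCAGTA", "TTGGCCAACGTACGTA"]
    print(sorted(find_signatures(db, k=6))[:5])
    ```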

  19. Evolving the Land Information System into a Cloud Computing Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houser, Paul R.

    The Land Information System (LIS) was developed to use advanced flexible land surface modeling and data assimilation frameworks to integrate extremely large satellite- and ground-based observations with advanced land surface models to produce continuous high-resolution fields of land surface states and fluxes. The resulting fields are extremely useful for drought and flood assessment, agricultural planning, disaster management, weather and climate forecasting, water resources assessment, and the like. We envisioned transforming the LIS modeling system into a scientific cloud-computing-aware web and data service that would allow clients to easily set up and configure the system for use in addressing large water management issues. The focus of this Phase 1 project was to determine the scientific, technical, and commercial merit and feasibility of the proposed LIS-cloud innovations that are currently barriers to broad LIS applicability. We (a) quantified the barriers to broad LIS utility and commercialization (high-performance computing, big data, user interface, and licensing issues); (b) designed the proposed LIS-cloud web service, model-data interface, database services, and user interfaces; (c) constructed a prototype LIS user interface including abstractions for simulation control, visualization, and data interaction; (d) used the prototype to conduct a market analysis and survey to determine potential market size and competition; (e) identified LIS software licensing and copyright limitations and developed solutions; and (f) developed a business plan for development and marketing of the LIS-cloud innovation. While some significant feasibility issues were found in the LIS licensing, overall a high degree of LIS-cloud technical feasibility was found.

  20. Influence of tooth position on wind instrumentalists' performance and embouchure comfort: A systematic review.

    PubMed

    van der Weijden, F N; Kuitert, R B; Berkhout, F R U; van der Weijden, G A

    2018-05-01

    To systematically search the scientific literature concerning the influence of tooth position on wind instrumentalists' performance and embouchure comfort. The PubMed, Cochrane, and Embase databases were searched up to November 2017. The main orthodontic journals were searched for papers older than the inception date of PubMed. Grey literature was sought via Google Scholar. Eligible studies were critically appraised and analysed. The searches retrieved 54 papers. Only two met the inclusion criteria. Searching the orthodontic journals and Google Scholar resulted in two additional eligible studies. All four studies had a cross-sectional design. The sample sizes ranged from 20 to 100 participants, varying from children to professional musicians. Because of large heterogeneity in outcome variables, no meta-analysis could be performed. Descriptive analysis indicates that tooth irregularities have a negative influence on the embouchure comfort and performance of a wind instrument player. A large overjet may impede the embouchure of brass musicians and may have a negative influence on trumpet player performance. A wide jaw form seems more beneficial to trumpet player performance than a small jaw form. Furthermore, players of all types of wind instruments can experience embouchure difficulties from extreme spacing or an open bite. Tooth position can influence the musical performance and embouchure comfort of wind instrumentalists. A Class I relationship without malocclusion seems appropriate for every type of wind instrument. The more extreme the malocclusion, the greater the interference with wind instrumentalists' performance and embouchure comfort. The evidence, however, is limited.

  1. Assessing the Regional Frequency, Intensity, and Spatial Extent of Tropical Cyclone Rainfall

    NASA Astrophysics Data System (ADS)

    Bosma, C.; Wright, D.; Nguyen, P.

    2017-12-01

    While the strength of a hurricane is generally classified based on its wind speed, the unprecedented rainfall-driven flooding experienced in southeastern Texas during Hurricane Harvey clearly highlights the need for better understanding of the hazards associated with extreme rainfall from hurricanes and other tropical systems. In this study, we seek to develop a framework for describing the joint probabilistic and spatio-temporal properties of extreme rainfall from hurricanes and other tropical systems. Furthermore, we argue that commonly used terminology - such as the "500-year storm" - fails to convey the true properties of tropical cyclone rainfall occurrences in the United States. To quantify the magnitude and spatial extent of these storms, a database consisting of hundreds of unique rainfall volumetric shapes (or "voxels") was created. Each voxel is a four-dimensional object, created by connecting, in both space and time, gridded rainfall observations from the daily, gauge-based NOAA CPC-Unified precipitation dataset. Individual voxels were then associated with concurrent tropical cyclone tracks from NOAA's HURDAT-2 archive, to create distinct representations of the rainfall associated with every Atlantic tropical system making landfall over (or passing near) the United States since 1948. Using these voxels, a series of threshold-excess extreme value models were created to estimate the recurrence intervals of extreme tropical cyclone rainfall, both nationally and locally, for single and multi-day timescales. This voxel database also allows for the "indexing" of past events, placing recent extremes - such as the 50+ inches of rain observed during Hurricane Harvey - into a national context and emphasizing how rainfall totals that are rare at the point scale may be more frequent from a regional perspective.
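
    As a sketch of the threshold-excess extreme value modelling mentioned above, the following fits a Generalized Pareto distribution to excesses over a high threshold and computes return levels, using synthetic stand-in rainfall. The return-level formula is the standard GPD one, not necessarily the paper's exact procedure.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Stand-in for daily rainfall totals [mm] at one grid cell (synthetic).
    rain = rng.gamma(shape=0.8, scale=25.0, size=20000)

    u = np.quantile(rain, 0.98)                  # high threshold
    exc = rain[rain > u] - u                     # threshold excesses
    xi, _, sigma = stats.genpareto.fit(exc, floc=0.0)

    rate = exc.size / rain.size                  # exceedance probability

    def return_level(m):
        """Level exceeded on average once every m observations
        (standard GPD return-level formula)."""
        return u + (sigma / xi) * ((m * rate) ** xi - 1.0)

    for m in (365 * 10, 365 * 100, 365 * 500):   # 10-, 100-, "500-year" levels
        print(f"{m / 365:>5.0f}-yr level ~ {return_level(m):7.1f} mm")
    ```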

  2. The Halophile protein database.

    PubMed

    Sharma, Naveen; Farooqi, Mohammad Samir; Chaturvedi, Krishna Kumar; Lal, Shashi Bhushan; Grover, Monendra; Rai, Anil; Pandey, Pankaj

    2014-01-01

    Halophilic archaea/bacteria adapt to different salt concentrations, namely extreme, moderate and low. These adaptations may occur as a result of modifications of protein structure and other changes in different cell organelles. Thus proteins may play an important role in the adaptation of halophilic archaea/bacteria to saline conditions. The Halophile protein database (HProtDB) is a systematic attempt to document the biochemical and biophysical properties of proteins from halophilic archaea/bacteria which may be involved in the adaptation of these organisms to saline conditions. In this database, various physicochemical properties such as molecular weight, theoretical pI, amino acid composition, atomic composition, estimated half-life, instability index, aliphatic index and grand average of hydropathicity (Gravy) have been listed. These physicochemical properties play an important role in identifying the protein structure, bonding pattern and function of the specific proteins. The database is a comprehensive, manually curated, non-redundant catalogue of proteins. It currently contains the properties of 59,897 proteins extracted from 21 different strains of halophilic archaea/bacteria. Database URL: http://webapp.cabgrid.res.in/protein/ © The Author(s) 2014. Published by Oxford University Press.
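
    Most of the listed physicochemical properties can be reproduced with Biopython's ProtParam module; the aliphatic index is not provided directly, so it is computed below from Ikai's formula. The sequence is an arbitrary example, and this is a sketch of the kind of computation behind such a database rather than its actual pipeline.

    ```python
    from Bio.SeqUtils.ProtParam import ProteinAnalysis

    seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
    pa = ProteinAnalysis(seq)

    pct = pa.get_amino_acids_percent()           # mole fractions, 0..1
    # Aliphatic index, Ikai (1980): 100*(A + 2.9*V + 3.9*(I + L)).
    aliphatic_index = 100.0 * (pct["A"] + 2.9 * pct["V"]
                               + 3.9 * (pct["I"] + pct["L"]))

    print(f"molecular weight : {pa.molecular_weight():.1f} Da")
    print(f"theoretical pI   : {pa.isoelectric_point():.2f}")
    print(f"instability index: {pa.instability_index():.2f}")
    print(f"GRAVY            : {pa.gravy():.3f}")
    print(f"aliphatic index  : {aliphatic_index:.1f}")
    ```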

  3. Does filler database size influence identification accuracy?

    PubMed

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor, filler database size, as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provides support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Time compression of soil erosion by the effect of largest daily event. A regional analysis of USLE database.

    NASA Astrophysics Data System (ADS)

    Gonzalez-Hidalgo, J. C.; Batalla, R.; Cerda, A.; de Luis, M.

    2009-04-01

    When Thornes and Brunsden wrote in 1977 "How often one hears the researcher (and no less the undergraduate) complain that after weeks of observation "nothing happened" only to learn that, the day after his departure, a flood caused unprecedented erosion and channel changes!" (Thornes and Brunsden, 1977, p. 57), they focussed on two different problems in geomorphological research: the effects of extreme events and the temporal compression of geomorphological processes. Time compression is one of the main characteristics of erosion processes. It means that a large share of the total soil eroded is produced in very short temporal intervals, i.e. a few events, mostly related to extreme events. From magnitude-frequency analysis we know that a few events, not necessarily extreme in magnitude, do a large amount of geomorphological work. Last but not least, extreme isolated events are a classical issue in geomorphology because of their specific effects, and they receive continuing attention, heightened at present by scenarios of global change. Nevertheless, the time compression of geomorphological processes can be studied not only through the analysis of extreme events and the traditional magnitude-frequency approach, but also through a complementary approach based on the effects of the largest events. The classical approach defines an extreme event as a rare event (identified by its magnitude and quantified by some deviation from a central value), while we define the largest events by rank, whatever their magnitude. In previous research on the time compression of soil erosion using the USLE soil erosion database (Gonzalez-Hidalgo et al., EGU 2007), we described a relationship between the total number of daily erosive events recorded per plot and the percentage contribution of the n largest aggregated daily events to total soil erosion. Here we offer a further refined analysis comparing different agricultural regions in the USA. To do so, we analyzed data from 594 erosion plots in the USLE database with different record periods, located in different climatic regions. Results indicate that there are no significant differences between agricultural divisions (i.e. different regional climates) in the mean contribution of the aggregated 5 largest daily erosion events, and that the differences detected can be attributed to site- and plot-specific conditions. The expected contribution of the 5 largest daily events per 100 recorded daily events is estimated at around 40% of total soil erosion. We discuss the possible causes of these results and their applicability to the design of field research on soil erosion plots.

  5. Neonatal morphine in extremely and very preterm neonates: its effect on the developing brain - a review.

    PubMed

    Schuurmans, Juliette; Benders, Manon; Lemmers, Petra; van Bel, Frank

    2015-01-01

    Preterm infants requiring intensive care experience a large number of stressful and painful procedures. Management of stress and pain is therefore an important issue. This review provides an overview of the research on the use of morphine and its neurodevelopmental effects in this vulnerable group of neonates. A structured literature search of both experimental and clinical data was performed using an electronic database (PubMed); relevant reference lists and related articles were also used. A total of 39 sources were considered relevant for this review to elucidate the effects of morphine on the developing brain. Both animal experimental and clinical data show conflicting results on the effects of neonatal morphine on neurodevelopmental outcome. However, in contrast to specific short-term neurological outcomes, long-term neurodevelopmental outcome does not seem to be adversely affected by morphine. After a careful review of the literature, no definite conclusions concerning the effects of neonatal morphine on long-term neurodevelopmental outcome in extremely premature neonates can be drawn. More prospective trials using reliable, validated pain assessment scores should be conducted to establish whether morphine has beneficial or adverse effects on long-term neurodevelopmental outcome in preterm infants.

  6. The role of categorization and scale endpoint comparisons in numerical information processing: A two-process model.

    PubMed

    Tao, Tao; Wyer, Robert S; Zheng, Yuhuang

    2017-03-01

    We propose a two-process conceptualization of numerical information processing to describe how people form impressions of a score that is described along a bounded scale. According to the model, people spontaneously categorize a score as high or low. Furthermore, they compare the numerical discrepancy between the score and the endpoint of the scale to which it is closer, if they are not confident of their categorization, and use implications of this comparison as a basis for judgment. As a result, their evaluation of the score is less extreme when the range of numbers along the scale is large (e.g., from 0 to 100) than when it is small (from 0 to 10). Six experiments support this two-process model and demonstrate its generalizability. Specifically, the magnitude of numbers composing the scale has less impact on judgments (a) when the score being evaluated is extreme, (b) when individuals are unmotivated to engage in endpoint comparison processes (i.e., they are low in need for cognition), and (c) when they are unable to do so (i.e., they are under cognitive load). Moreover, the endpoint to which individuals compare the score can depend on their regulatory focus. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Meta-Storms: efficient search for similar microbial communities based on a novel indexing scheme and similarity score for metagenomic data.

    PubMed

    Su, Xiaoquan; Xu, Jian; Ning, Kang

    2012-10-01

    Effectively comparing different microbial communities (also referred to as 'metagenomic samples' here) at a large scale has long intrigued scientists: given a set of unknown samples, find similar metagenomic samples in a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest. Any metagenomic sample could then be searched against this database to find the most similar metagenomic sample(s). However, on one hand, current databases with large numbers of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; and on the other hand, methods to measure the similarity of metagenomic data work well only for small sets of samples by pairwise comparison. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we propose a novel method, Meta-Storms, that can systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database by a fast scoring function based on quantitative phylogeny and (iv) managing the database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic datasets from the public domain and in-house facilities, and tested the Meta-Storms method on them. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and that it achieves accuracies similar to those of the current popular significance testing-based methods. The Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. ningkang@qibebt.ac.cn Supplementary data are available at Bioinformatics online.
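
    The following is not the actual Meta-Storms scoring function, only a toy illustration of the underlying idea of phylogeny-aware similarity: abundances are aggregated at successive taxonomy levels and their overlaps combined with level weights, so matches at more specific ranks count more. The lineage strings and weights are invented.

    ```python
    def aggregate(profile, level):
        """Sum leaf abundances up to a given taxonomy level. Taxa are
        ';'-joined lineage strings, e.g. 'Proteobacteria;Escherichia;coli'."""
        agg = {}
        for lineage, ab in profile.items():
            key = ";".join(lineage.split(";")[:level + 1])
            agg[key] = agg.get(key, 0.0) + ab
        return agg

    def hierarchical_similarity(a, b, weights=(0.2, 0.3, 0.5)):
        """Weighted sum of abundance overlap at each taxonomy level; deeper
        (more specific) matches receive larger weights."""
        return sum(w * sum(min(agg_a.get(t, 0.0), agg_b.get(t, 0.0))
                           for t in set(agg_a) | set(agg_b))
                   for level, w in enumerate(weights)
                   for agg_a, agg_b in [(aggregate(a, level),
                                         aggregate(b, level))])

    s1 = {"Proteobacteria;Escherichia;coli": 0.6,
          "Firmicutes;Bacillus;subtilis": 0.4}
    s2 = {"Proteobacteria;Escherichia;coli": 0.5,
          "Proteobacteria;Shigella;flexneri": 0.5}
    print(f"similarity = {hierarchical_similarity(s1, s2):.3f}")
    ```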

  8. Turbulence Generation Using Localized Sources of Energy: Direct Numerical Simulations and the Effects of Thermal Non-Equilibrium

    NASA Astrophysics Data System (ADS)

    Maqui, Agustin Francisco

    Turbulence in high-speed flows is an important problem in aerospace applications, yet extremely difficult from a theoretical, computational and experimental perspective. A main reason for the lack of complete understanding is the difficulty of generating turbulence in the lab at a range of speeds which can also include hypersonic effects such as thermal non-equilibrium. This work studies the feasibility of a new approach to generate turbulence based on laser-induced photo-excitation/dissociation of seeded molecules. A large database of incompressible and compressible direct numerical simulations (DNS) has been generated to systematically study the development and evolution of the flow towards realistic turbulence. Governing parameters and the conditions necessary for the establishment of turbulence, as well as the length and time scales associated with such a process, are identified. For both the compressible and incompressible experiments, a minimum Reynolds number is found to be necessary for the flow to evolve towards fully developed turbulence. Additionally, for incompressible cases a minimum time scale is required, while for compressible cases a minimum distance from the grid and a limit on the maximum temperature introduced are required. Through an extensive analysis of single- and two-point statistics, as well as spectral dynamics, the primary mechanisms leading to turbulence are shown. As commonly done in compressible turbulence, dilatational and solenoidal components are separated to understand the effect of acoustics on the development of turbulence. Finally, a large database of forced isotropic turbulence has been generated to study the effect of internal degrees of freedom on the evolution of turbulence.

  9. Multi-window detection for P-wave in electrocardiograms based on bilateral accumulative area.

    PubMed

    Chen, Riqing; Huang, Yingsong; Wu, Jian

    2016-11-01

    P-wave detection is one of the most challenging aspects of electrocardiogram (ECG) analysis due to the wave's low amplitude, low frequency, and variable waveforms. This work introduces a novel multi-window detection method for P-wave delineation based on the bilateral accumulative area. The bilateral accumulative area is calculated by summing the areas covered by the P-wave curve within left and right sliding windows. The onset and offset of a positive P-wave correspond to local maxima of the area detector. The position drift and the differences in area variation of local extreme points across window widths are used to systematically combine multi-window and 12-lead synchronous detection, screening the optimal boundary points from the extreme points of different window widths and adaptively matching the P-wave location. The proposed method was validated with ECG signals from various databases, including the Standard CSE Database, T-Wave Alternans Challenge Database, PTB Diagnostic ECG Database, and the St. Petersburg Institute of Cardiological Technics 12-Lead Arrhythmia Database. The average sensitivity Se was 99.44% with a positive predictivity P+ of 99.37% for P-wave detection. Standard deviations of 3.7 and 4.3 ms were achieved for the onset and offset of P-waves, respectively, which is in agreement with the accepted tolerances required by the CSE committee. Compared with well-known delineation methods, this method achieves high sensitivity and positive predictivity using a simple calculation process. The experimental results suggest that the bilateral accumulative area could be an effective detection tool for ECG signal analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
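
    Below is a much-simplified, single-window reading of the bilateral accumulative area idea (the published method combines multiple window widths and 12-lead synchronous detection): areas in left and right windows are accumulated at every sample, and extrema of their difference mark candidate wave boundaries. The signal and window width here are synthetic.

    ```python
    import numpy as np

    def window_areas(x, w):
        """Accumulated signal area in left and right windows of width w,
        computed from a cumulative sum for efficiency."""
        c = np.concatenate(([0.0], np.cumsum(x)))
        i = np.arange(len(x))
        left = c[i + 1] - c[np.maximum(i + 1 - w, 0)]
        right = c[np.minimum(i + 1 + w, len(x))] - c[i + 1]
        return left, right

    # Synthetic positive "P-wave" hump on a flat baseline (fs = 1 kHz).
    t = np.linspace(0.0, 1.0, 1000)
    sig = 0.15 * np.exp(-((t - 0.30) / 0.02) ** 2)

    left, right = window_areas(sig, w=50)
    diff = right - left            # large where the wave lies ahead/behind
    onset, offset = np.argmax(diff), np.argmin(diff)
    print(f"onset ~ t={t[onset]:.3f}s, offset ~ t={t[offset]:.3f}s")
    ```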

  10. Using SQL Databases for Sequence Similarity Searching and Analysis.

    PubMed

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit then introduces search_demo, a database that stores sequence similarity search results. The search_demo database is used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc.
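
    To make the stored-search-results idea concrete, here is a hypothetical, miniature results table and a query summarizing significant homologs per organism; the real search_demo schema may differ.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    # Hypothetical miniature of a search_demo-style results table.
    con.execute("""CREATE TABLE hit (query_acc TEXT, subj_acc TEXT,
                                     subj_organism TEXT, evalue REAL)""")
    con.executemany("INSERT INTO hit VALUES (?, ?, ?, ?)", [
        ("P0A7G6", "Q9X123", "Thermotoga maritima", 1e-40),
        ("P0A7G6", "O67999", "Aquifex aeolicus",    1e-32),
        ("P0AB80", "Q9X456", "Thermotoga maritima", 1e-08),
    ])

    # How many query proteins find a significant homolog in each organism?
    for org, n in con.execute("""
            SELECT subj_organism, COUNT(DISTINCT query_acc)
            FROM hit WHERE evalue < 1e-6
            GROUP BY subj_organism ORDER BY 2 DESC"""):
        print(f"{org:25s} {n}")
    ```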

  11. The EUVE Proposal Database

    NASA Astrophysics Data System (ADS)

    Christian, C. A.; Olson, E. C.

    1993-01-01

    The proposal database and scheduling system for the Extreme Ultraviolet Explorer is described. The proposal database has been implemented to take input for approved observations selected by the EUVE Peer Review Panel and output target information suitable for the scheduling system to digest. The scheduling system is a hybrid of the SPIKE program and EUVE software which checks spacecraft constraints, produces a proposed schedule and selects spacecraft orientations with optimal configurations for acquiring star trackers, etc. This system is used to schedule the In Orbit Calibration activities that took place this summer, following the EUVE launch in early June 1992. The strategy we have implemented has implications for the selection of approved targets, which have impacted the Peer Review process. In addition, we will discuss how the proposal database, founded on Sybase, controls the processing of EUVE Guest Observer data.

  12. Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction

    NASA Astrophysics Data System (ADS)

    Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab

    2017-11-01

    Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. The fused features are then reduced by principal component analysis-based feature selection and evaluated using an extreme learning machine classifier. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
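
    Here is a rough sketch of the fusion pipeline under stated substitutions: pywt supplies the wavelet subband, skimage's HOG supplies one descriptor, and a local binary pattern histogram stands in for BSIF (whose learned ICA filters are not bundled with common libraries); the fused vector is then reduced with PCA. The images are random stand-ins, not palmprints.

    ```python
    import numpy as np
    import pywt
    from skimage.feature import hog, local_binary_pattern
    from sklearn.decomposition import PCA

    def palm_features(img):
        """Fuse two descriptors computed on the approximation subband of a
        one-level 2-D DWT. LBP stands in here for BSIF."""
        ll, _ = pywt.dwt2(img, "db2")                 # low-frequency subband
        f_hog = hog(ll, orientations=8, pixels_per_cell=(8, 8),
                    cells_per_block=(2, 2))
        lbp = local_binary_pattern(ll, P=8, R=1.0, method="uniform")
        f_lbp, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([f_hog, f_lbp])         # feature-level fusion

    rng = np.random.default_rng(0)
    imgs = rng.random((20, 64, 64))                   # stand-in images
    X = np.stack([palm_features(im) for im in imgs])
    X_reduced = PCA(n_components=10).fit_transform(X) # PCA-based reduction
    print(X.shape, "->", X_reduced.shape)
    ```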

  13. Global Weirding? - Using Very Large Ensembles and Extreme Value Theory to assess Changes in Extreme Weather Events Today

    NASA Astrophysics Data System (ADS)

    Otto, F. E. L.; Mitchell, D.; Sippel, S.; Black, M. T.; Dittus, A. J.; Harrington, L. J.; Mohd Saleh, N. H.

    2014-12-01

    A shift in the distribution of socially relevant climate variables, such as daily minimum winter temperatures and daily precipitation extremes, has been attributed to anthropogenic climate change for various mid-latitude regions. However, while there are many process-based arguments suggesting a change also in the shape of these distributions, attribution studies demonstrating this have not yet been undertaken. Here we use a very large initial-condition ensemble of ~40,000 members simulating the European winter 2013/2014 using the distributed computing infrastructure of the weather@home project. Two separate scenarios are used: (1) current climate conditions, and (2) a counterfactual scenario of a "world that might have been" without anthropogenic forcing. Focusing specifically on extreme events, we assess how the estimated parameters of the Generalized Extreme Value (GEV) distribution vary depending on variable type, sampling frequency (daily, monthly, …) and geographical region. We find that the location parameter changes for most variables but, depending on the region and variables, we also find significant changes in the scale and shape parameters. The very large ensemble furthermore allows us to assess whether such findings in the fitted GEV distributions are consistent with an empirical analysis of the model data, and whether the most extreme data still follow a known underlying distribution that in a small sample might otherwise be thought of as an outlier. The ~40,000 member ensemble is simulated using 12 different SST patterns (1 'observed', and 11 best guesses of SSTs with no anthropogenic warming). The range in SSTs, along with the corresponding changes in the NAO and high-latitude blocking, informs on the dynamics governing some of these extreme events. While strong teleconnection patterns are not found in this particular experiment, the high number of simulated extreme events allows for a more thorough analysis of the dynamics than has been performed before. Combining extreme value theory with very large ensemble simulations therefore allows us to understand the dynamics of changes in extreme events in a way that is not possible with extreme value theory alone, and also shows in which cases statistics combined with smaller ensembles yield results as valid as very large initial-condition ensembles.
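
    A minimal sketch of the scenario-comparison step with scipy, on synthetic stand-ins for the two ensembles; note that scipy's genextreme shape parameter c corresponds to -ξ in the usual climate convention. All parameter values are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Stand-ins for winter-maximum daily precipitation from two very large
    # ensembles: "actual" and "natural" (no anthropogenic forcing) worlds.
    actual = stats.genextreme.rvs(c=-0.1, loc=22.0, scale=6.5,
                                  size=40000, random_state=rng)
    natural = stats.genextreme.rvs(c=-0.1, loc=20.0, scale=6.0,
                                   size=40000, random_state=rng)

    for name, sample in [("actual", actual), ("natural", natural)]:
        c, loc, scale = stats.genextreme.fit(sample)
        # scipy's c equals -xi in the climate convention.
        print(f"{name:8s} location={loc:6.2f}  scale={scale:5.2f}"
              f"  shape(xi)={-c:6.3f}")
    ```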

  14. A dynamical systems approach to studying midlatitude weather extremes

    NASA Astrophysics Data System (ADS)

    Messori, Gabriele; Caballero, Rodrigo; Faranda, Davide

    2017-04-01

    Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. The ability to predict these events is therefore a topic of crucial importance. Here we propose a novel predictability pathway for extreme events, by building upon recent advances in dynamical systems theory. We show that simple dynamical systems metrics can be used to identify sets of large-scale atmospheric flow patterns with similar spatial structure and temporal evolution on time scales of several days to a week. In regions where these patterns favor extreme weather, they afford a particularly good predictability of the extremes. We specifically test this technique on the atmospheric circulation in the North Atlantic region, where it provides predictability of large-scale wintertime surface temperature extremes in Europe up to 1 week in advance.
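
    One metric of this kind is the local dimension estimated from recurrences via an extreme-value argument (cf. work by Faranda and colleagues): exceedances of the negative log-distance to a given state, above a high quantile, are approximately exponential, and the inverse of their mean estimates the dimension. The sketch below applies this to synthetic points near a 2-D manifold; it is an illustration, not the paper's code.

    ```python
    import numpy as np

    def local_dimension(states, i, q=0.98):
        """Local dimension of state i: exceedances of g = -log(distance)
        above a high quantile follow an exponential law whose inverse mean
        estimates the dimension."""
        d = np.linalg.norm(states - states[i], axis=1)
        g = -np.log(d[d > 0])              # drop the zero self-distance
        u = np.quantile(g, q)
        return 1.0 / np.mean(g[g > u] - u)

    # Stand-in "atmospheric states": points on a noisy 2-D annulus in R^3,
    # so local dimensions should come out near 2.
    rng = np.random.default_rng(2)
    theta = rng.uniform(0, 2 * np.pi, 5000)
    r = rng.uniform(1, 2, 5000)
    states = np.c_[r * np.cos(theta), r * np.sin(theta),
                   0.01 * rng.standard_normal(5000)]
    print([round(local_dimension(states, i), 2) for i in range(3)])
    ```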

  15. ATLAS offline data quality monitoring

    NASA Astrophysics Data System (ADS)

    Adelman, J.; Baak, M.; Boelaert, N.; D'Onofrio, M.; Frost, J. A.; Guyot, C.; Hauschild, M.; Hoecker, A.; Leney, K. J. C.; Lytken, E.; Martinez-Perez, M.; Masik, J.; Nairz, A. M.; Onyisi, P. U. E.; Roe, S.; Schaetzel, S.; Wilson, M. G.

    2010-04-01

    The ATLAS experiment at the Large Hadron Collider reads out 100 Million electronic channels at a rate of 200 Hz. Before the data are shipped to storage and analysis centres across the world, they have to be checked to be free from irregularities which render them scientifically useless. Data quality offline monitoring provides prompt feedback from full first-pass event reconstruction at the Tier-0 computing centre and can unveil problems in the detector hardware and in the data processing chain. Detector information and reconstructed proton-proton collision event characteristics are distilled into a few key histograms and numbers which are automatically compared with a reference. The results of the comparisons are saved as status flags in a database and are published together with the histograms on a web server. They are inspected by a 24/7 shift crew who can notify on-call experts in case of problems and in extreme cases signal data taking abort.
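
    A toy version of the compare-with-reference step described above: a monitoring histogram is tested against a reference with a chi-square statistic and distilled into a status flag. The test choice and thresholds are invented for illustration and are not ATLAS's actual data quality logic.

    ```python
    import numpy as np
    from scipy import stats

    def dq_flag(observed, reference, red=1e-3, yellow=0.05):
        """Compare a monitoring histogram to a reference with a chi-square
        test and distill the result into a status flag."""
        ref = reference * observed.sum() / reference.sum()  # match totals
        chi2 = ((observed - ref) ** 2 / np.where(ref > 0, ref, 1)).sum()
        p = stats.chi2.sf(chi2, df=max((reference > 0).sum() - 1, 1))
        return ("GREEN" if p > yellow else
                "YELLOW" if p > red else "RED"), p

    rng = np.random.default_rng(3)
    reference = rng.poisson(1000, size=50).astype(float)
    good = rng.poisson(reference)            # statistically compatible run
    bad = good.copy()
    bad[10:14] = 0                           # a "dead" detector region
    print(dq_flag(good, reference))
    print(dq_flag(bad, reference))
    ```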

  16. Effect of extreme data loss on heart rate signals quantified by entropy analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Wang, Jun; Li, Jin; Liu, Dazhao

    2015-02-01

    The phenomenon of data loss always occurs in the analysis of large databases, and maintaining the stability of analysis results in the event of data loss is very important. In this paper, we used a segmentation approach to generate synthetic signals in which data segments are randomly removed from the original signal according to Gaussian and exponential distributions. The logistic map is then used for verification. Finally, two methods of measuring entropy, base-scale entropy and approximate entropy, are comparatively analyzed. Our results show the following: (1) Two key parameters, the percentage and the average length of removed data segments, can change the sequence complexity according to the logistic map tests. (2) Base-scale entropy analysis is notably stable and is not sensitive to data loss. (3) The loss percentage of HRV signals should be kept below about 30%, within which the analysis can still provide useful information in clinical applications.
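
    To make the comparison concrete, here is a standard approximate entropy implementation together with a simple random segment-deletion routine that mimics, in simplified form, the paper's data-wiping scheme; the RR-interval series is synthetic.

    ```python
    import numpy as np

    def approx_entropy(x, m=2, r_frac=0.2):
        """Standard approximate entropy ApEn(m, r) with r = r_frac * std(x)."""
        x = np.asarray(x, dtype=float)
        r = r_frac * x.std()
        def phi(m):
            emb = np.lib.stride_tricks.sliding_window_view(x, m)
            # Chebyshev distances between all template pairs (self included).
            dist = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
            return np.log((dist <= r).mean(axis=1)).mean()
        return phi(m) - phi(m + 1)

    def drop_segments(x, loss=0.3, mean_len=20, rng=None):
        """Randomly delete segments (exponential lengths) until about a
        fraction `loss` of the samples are gone -- a simple stand-in for
        the paper's wiping scheme."""
        rng = rng or np.random.default_rng()
        keep = np.ones(len(x), bool)
        while keep.mean() > 1 - loss:
            start = rng.integers(len(x))
            keep[start:start + max(1, int(rng.exponential(mean_len)))] = False
        return x[keep]

    rng = np.random.default_rng(4)
    rr = 0.8 + 0.05 * rng.standard_normal(1500)   # synthetic RR intervals [s]
    print(f"ApEn full   : {approx_entropy(rr):.3f}")
    print(f"ApEn 30% cut: {approx_entropy(drop_segments(rr, 0.3, rng=rng)):.3f}")
    ```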

  17. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  18. Computational knowledge integration in biopharmaceutical research.

    PubMed

    Ficenec, David; Osborne, Mark; Pradines, Joel; Richards, Dan; Felciano, Ramon; Cho, Raymond J; Chen, Richard O; Liefeld, Ted; Owen, James; Ruttenberg, Alan; Reich, Christian; Horvath, Joseph; Clark, Tim

    2003-09-01

    An initiative to increase biopharmaceutical research productivity by capturing, sharing and computationally integrating proprietary scientific discoveries with public knowledge is described. This initiative involves both organisational process change and multiple interoperating software systems. The software components rely on mutually supporting integration techniques. These include a richly structured ontology, statistical analysis of experimental data against stored conclusions, natural language processing of public literature, secure document repositories with lightweight metadata, web services integration, enterprise web portals and relational databases. This approach has already begun to increase scientific productivity in our enterprise by creating an organisational memory (OM) of internal research findings, accessible on the web. Through bringing together these components it has also been possible to construct a very large and expanding repository of biological pathway information linked to this repository of findings which is extremely useful in analysis of DNA microarray data. This repository, in turn, enables our research paradigm to be shifted towards more comprehensive systems-based understandings of drug action.

  19. Geochemistry of selenium.

    PubMed

    Kabata-Pendias, A

    1998-01-01

    Selenium (Se) is one of the most peculiar chemical elements in the geo- and biospheres. It partly resembles sulfur and tellurium; however, its behavior in the geosphere and its functions in the biosphere are very specific. Despite a relatively large database, its cycling in both the natural environment and in that modified by human activities requires further study. Selenium is rather concentrated in the geospheric cycle and is also bioconcentrated. The values of its accumulation ratios are: 5 for soil/sandstone, 2 for animal tissues/sandstone, and 5 for animal tissues/grain. For a specific plant/soil system, the bioconcentration factor for plants always has to be estimated because some plants can absorb extremely high concentrations of Se. Their ability to accumulate and tolerate high Se levels is related to different Se metabolisms. These plants play a significant role in geochemical prospecting and animal nutrition. This paper presents some geochemical observations toward a better understanding of the environmental properties of Se.

  20. Quantitative Analysis of Gender Stereotypes and Information Aggregation in a National Election

    PubMed Central

    Tumminello, Michele; Miccichè, Salvatore; Varho, Jan; Piilo, Jyrki; Mantegna, Rosario N.

    2013-01-01

    By analyzing a database of a questionnaire answered by a large majority of the candidates and elected representatives in a parliamentary election, we quantitatively verify that (i) female candidates on average present political profiles which are more compassionate and more concerned with social welfare issues than male candidates and (ii) the voting procedure acts as a process of information aggregation. Our results show that information aggregation proceeds along at least two distinct paths. In the first case, candidates characterize themselves with a political profile aiming to describe the profile of the majority of voters. This is typically the case for candidates of political parties which are competing for the center of the various political dimensions. In the second case, candidates choose a political profile manifesting a clear difference from opposite political profiles endorsed by candidates of a political party positioned at the opposite extreme of some political dimension. PMID:23555606

  1. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam

    2017-12-01

    This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, the effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three hyperspectral databases, recorded by different sensors (HYDICE, HyMap, and AVIRIS), were used for evaluation. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and significantly reduces computational cost.
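
    A minimal sketch of the kernel ELM at the core of such a framework follows, with a plain grid search standing in for the firefly algorithm that the paper actually uses to tune C and σ; the closed-form solution is the standard kernel ELM formulation, and the toy data and names are ours.

    ```python
    from itertools import product

    import numpy as np

    def rbf_kernel(A, B, sigma):
        # Gaussian kernel matrix between the rows of A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def kelm_fit(X, y, C, sigma, n_classes):
        # Closed-form kernel ELM: alpha = (K + I/C)^-1 T, with one-hot targets T
        T = np.eye(n_classes)[y]
        K = rbf_kernel(X, X, sigma)
        return np.linalg.solve(K + np.eye(len(X)) / C, T)

    def kelm_predict(X_train, alpha, X_test, sigma):
        return rbf_kernel(X_test, X_train, sigma).dot(alpha).argmax(axis=1)

    # Toy data; FA would explore (C, sigma) adaptively, and a validation
    # split would replace the training-set score used here for brevity.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

    best_C, best_sigma = max(
        product([1.0, 10.0, 100.0], [0.5, 1.0, 2.0]),
        key=lambda cs: (kelm_predict(X, kelm_fit(X, y, cs[0], cs[1], 2),
                                     X, cs[1]) == y).mean(),
    )
    print("selected C, sigma:", best_C, best_sigma)
    ```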

  2. Acquisition of a Biomedical Database of Acute Responses to Space Flight during Commercial Personal Suborbital Flights

    NASA Technical Reports Server (NTRS)

    Charles, John B.; Richard, Elizabeth E.

    2010-01-01

    There is currently too little reproducible data for a scientifically valid understanding of the initial responses of a diverse human population to weightlessness and other space flight factors. Astronauts on orbital space flights to date have been extremely healthy and fit, unlike the general human population. Data collection opportunities during the earliest phases of space flights to date, when the most dynamic responses may occur in response to abrupt transitions in acceleration loads, have been limited by operational restrictions on our ability to encumber the astronauts with even minimal monitoring instrumentation. The era of commercial personal suborbital space flights promises the availability of a large (perhaps hundreds per year), diverse population of potential participants with a vested interest in their own responses to space flight factors, and a number of flight providers interested in documenting and demonstrating the attractiveness and safety of the experience they are offering. Voluntary participation by even a fraction of the flying population in a uniform set of unobtrusive biomedical data collections would provide a database enabling statistical analyses of a variety of acute responses to a standardized space flight environment. This will benefit both the space life sciences discipline and the general state of human knowledge.

  3. Uncertainties in the palaeoflood record - interpreting geomorphology since 12 500 BP

    NASA Astrophysics Data System (ADS)

    Moloney, Jessica; Coulthard, Tom; Freer, Jim; Rogerson, Mike

    2017-04-01

    Recent floods in the UK have reinvigorated the national debate within academic and non-academic organisations over how we quantify risk and improve the resilience of communities to flooding. One critical aspect of that debate is to better understand and quantify the frequency of extreme floods. The research presented in this study explores the challenges and uncertainties of using longer-term palaeoflood records to improve the quantification of flood risk. The frequency of floods has been studied on short (under 100 years) and long (over 200 years) time scales. Long-term flood frequency records rely on the radiocarbon dating and interpretation of geomorphological evidence within fluvial depositional environments. However, there are limitations with the methods used to do this. Notably, the use of probability distribution functions of fluvial deposit dates does not consider any other information, such as the geomorphological context of the material and/or the type of depositional environment. This study re-analyses 776 radiocarbon-dated fluvial deposits from the UK, which have been compiled into a database, to interpret the geomorphological flood record. Initial findings indicate that even this large number of samples may be unsuitable for probabilistic methods and show an unusual sensitivity to the number of records present in the database.
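
    The probabilistic method questioned above is, in essence, a summed probability distribution over the dated deposits. A stripped-down sketch follows, treating each date as a Gaussian on the calendar axis; a real analysis would first calibrate each radiocarbon age against a calibration curve such as IntCal, a step omitted here.

    ```python
    import numpy as np

    def summed_probability(ages_bp, errors, grid=None):
        # ages_bp, errors: arrays of (simplified, uncalibrated) ages and 1-sigma errors
        if grid is None:
            grid = np.arange(0.0, 12500.0)        # calendar axis, years BP
        ages = np.asarray(ages_bp, float)[:, None]
        errs = np.asarray(errors, float)[:, None]
        pdfs = np.exp(-0.5 * ((grid[None, :] - ages) / errs) ** 2)
        pdfs /= pdfs.sum(axis=1, keepdims=True)   # each date integrates to one
        return grid, pdfs.sum(axis=0)             # summed probability distribution

    # A peak in the summed curve is then read, cautiously, as a flood-rich period;
    # the study's point is that this reading ignores geomorphological context.
    ```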

  4. Virtual Reality Rehabilitation from Social Cognitive and Motor Learning Theoretical Perspectives in Stroke Population

    PubMed Central

    Imam, Bita; Jarus, Tal

    2014-01-01

    Objectives. To identify the virtual reality (VR) interventions used for the lower extremity rehabilitation in stroke population and to explain their underlying training mechanisms using Social Cognitive (SCT) and Motor Learning (MLT) theoretical frameworks. Methods. Medline, Embase, Cinahl, and Cochrane databases were searched up to July 11, 2013. Randomized controlled trials that included a VR intervention for lower extremity rehabilitation in stroke population were included. The Physiotherapy Evidence Database (PEDro) scale was used to assess the quality of the included studies. The underlying training mechanisms involved in each VR intervention were explained according to the principles of SCT (vicarious learning, performance accomplishment, and verbal persuasion) and MLT (focus of attention, order and predictability of practice, augmented feedback, and feedback fading). Results. Eleven studies were included. PEDro scores varied from 3 to 7/10. All studies but one showed significant improvement in outcomes in favour of the VR group (P < 0.05). Ten VR interventions followed the principle of performance accomplishment. All the eleven VR interventions directed subject's attention externally, whereas nine provided training in an unpredictable and variable fashion. Conclusions. The results of this review suggest that VR applications used for lower extremity rehabilitation in stroke population predominantly mediate learning through providing a task-oriented and graduated learning under a variable and unpredictable practice. PMID:24523967

  5. Severe upper extremity injuries in frontal automobile crashes: the effects of depowered airbags.

    PubMed

    Jernigan, M Virginia; Rath, Amber L; Duma, Stefan M

    2005-03-01

    The purpose of this study was to determine the effects of depowered frontal airbags on the incidence of severe upper extremity injuries. The National Automotive Sampling System database files from 1993 to 2000 were examined in a study that included 2,413,347 occupants who were exposed to an airbag deployment in the United States. Occupants exposed to a depowered airbag deployment were significantly more likely to sustain a severe upper extremity injury (3.9%) than those occupants exposed to a full-powered airbag deployment (2.5%) (P=.01). Full-powered systems resulted in an injury distribution of 89.2% fractures and 7.9% dislocations compared with depowered systems with 55.3% fractures and 44.3% dislocations. Although depowered airbags were designed to reduce the risk of injuries, they appear to have increased the overall incidence of severe upper extremity injuries through a shift from long bone fractures to joint dislocations.

  6. Design and deployment of a large brain-image database for clinical and nonclinical research

    NASA Astrophysics Data System (ADS)

    Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.

    2004-04-01

    An efficient database is an essential component for organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The database is structured as a multi-tier client-server architecture with a Relational Database Management System, a Security Layer, an Application Layer and a User Interface. An image source adapter has been developed to handle most of the popular image formats. The database has a web-browser-based user interface and is easy to use. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' points of view. The database is suitable for validation of algorithms on large populations of cases. Medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. A prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.

  7. Characterization of extreme years in Central Europe between 2000 and 2016 according to specific vegetation characteristics based on Earth Observatory data

    NASA Astrophysics Data System (ADS)

    Kern, Anikó; Marjanović, Hrvoje; Barcza, Zoltán

    2017-04-01

    Extreme weather events frequently occur in Central Europe, affecting the state of the vegetation over large areas. Droughts and heat waves affect all plant functional types, but the response of the vegetation is not uniform and depends on other parameters, plant strategies and the antecedent meteorological conditions as well. Meteorologists struggle with the definition of extreme events and the selection of years that can be considered extreme in terms of meteorological conditions, due to the large variability of the meteorological parameters in both time and space. One way to overcome this problem is to define extreme weather based on its observed effect on plant state. The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the Leaf Area Index (LAI), the Fraction of Photosynthetically Active Radiation (FPAR) and the Gross Primary Production (GPP) are different measures of the land vegetation derived from remote sensing data, providing information about the plant state, but it is less known how weather anomalies affect these measures. We used the official vegetation products created from the measurements of the MODerate resolution Imaging Spectroradiometer (MODIS) on board the Terra satellite to select and characterize the extreme years in Central European countries during the 2000-2016 time period. The applied Collection-6 MOD13 NDVI/EVI, MOD15 LAI/FPAR and MOD17 GPP datasets have 500 m × 500 m spatial resolution covering the region of the Carpathian Basin. After quality and noise filtering (and temporal interpolation in the case of MOD13), 8-day anomaly values were derived to investigate the different years. The freely available FORESEE meteorological database was used to study climate variability in the region. Daily precipitation and maximum/minimum temperature fields on a 1/12° × 1/12° grid were resampled to the 8-day temporal and 500 m × 500 m spatial resolution of the MODIS products. To discriminate the different behavior of the various plant functional types, the MODIS (MCD12) and CORINE (CLC2012) land cover datasets were applied and handled together. Based on the determination of reliable pixels with different plant types, the responses of broadleaf forests, coniferous forests, grasslands and croplands were discriminated and investigated. Characteristic time periods were selected based on the remote sensing data to define anomalies, and the meteorological data were then used to define the critical time periods within the year that have the strongest effect on the observed anomalies. Similarities/dissimilarities between the behaviors of the different remotely sensed measures are also studied to elucidate the consistency of the indices. The results indicate that the diverse remote sensing indices typically co-vary but reveal strong plant functional type dependency. The study suggests that the selection of extreme years based on annual data is not the best choice, as shorter time periods within the years explain the anomalies to a higher degree than annual data. The results can be used to select anomalous years outside of the satellite era as well. Keywords: remote sensing; meteorology; extreme years; MODIS; NDVI; EVI; LAI; FPAR; GPP; phenology
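
    A sketch of the 8-day anomaly computation described above, assuming a quality-filtered per-pixel series arranged as years × 46 composites; the arrangement and names are ours.

    ```python
    import numpy as np

    def eight_day_anomalies(series):
        # series: array (n_years, 46) of one pixel's 8-day composite values,
        # with NaN where quality filtering removed data
        climatology = np.nanmean(series, axis=0)   # multi-year mean per composite
        return series - climatology                # anomaly per year and composite

    def standardized_anomalies(series):
        # z-scores make anomalies comparable across pixels and across indices
        clim = np.nanmean(series, axis=0)
        std = np.nanstd(series, axis=0)
        return (series - clim) / std
    ```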

  8. Phenotypic landscape of non-conventional yeast species for different stress tolerance traits desirable in bioethanol fermentation.

    PubMed

    Mukherjee, Vaskar; Radecka, Dorota; Aerts, Guido; Verstrepen, Kevin J; Lievens, Bart; Thevelein, Johan M

    2017-01-01

    Non-conventional yeasts present a huge, yet barely exploited, resource of yeast biodiversity for industrial applications. This presents a great opportunity to explore alternative ethanol-fermenting yeasts that are more adapted to some of the stress factors present in the harsh environmental conditions of second-generation (2G) bioethanol fermentation. Extremely tolerant yeast species are interesting candidates for investigating the underlying tolerance mechanisms and for identifying genes that, when transferred to existing industrial strains, could help to design more stress-tolerant cell factories. For this purpose, we performed a high-throughput phenotypic evaluation of a large collection of non-conventional yeast species to identify the tolerance limits of the different yeast species for desirable stress tolerance traits in 2G bioethanol production. Next, 12 multi-tolerant strains were selected and used in fermentations under different stressful conditions. Five of these strains, which showed desirable fermentation characteristics, were then evaluated in small-scale, semi-anaerobic fermentations with lignocellulose hydrolysates. Our results revealed the phenotypic landscape of many non-conventional yeast species which had not previously been characterized for tolerance to stress conditions relevant to bioethanol production. For each stress condition evaluated, this identified several extremely tolerant non-Saccharomyces yeasts. It also revealed multi-tolerance in several yeast species, which makes those species good candidates for investigating the molecular basis of a robust general stress tolerance. The results showed that some non-conventional yeast species have similar or even better fermentation efficiency compared to S. cerevisiae in the presence of certain stressful conditions. Prior to this study, our knowledge of extreme stress-tolerant phenotypes in non-conventional yeasts was limited to only a few species. Our work has now revealed in a systematic way the potential of non-Saccharomyces species to emerge either as alternative host species or as a source of valuable genetic information for the construction of more robust industrial S. cerevisiae bioethanol production yeasts. Striking examples include yeast species like Pichia kudriavzevii and Wickerhamomyces anomalus that show very high tolerance to diverse stress factors. This large-scale phenotypic analysis has yielded a detailed database useful as a resource for future studies to understand and benefit from the molecular mechanisms underlying the extreme phenotypes of non-conventional yeast species.

  9. Changes in extremes due to half a degree warming in observations and models

    NASA Astrophysics Data System (ADS)

    Fischer, E. M.; Schleussner, C. F.; Pfleiderer, P.

    2017-12-01

    Assessing the climate impacts of half-a-degree warming increments is high on the post-Paris science agenda. Discriminating those effects is particularly challenging for climate extremes such as heavy precipitation and heat extremes, for which model uncertainties are generally large, and for which internal variability is so important that it can easily offset or strongly amplify the forced local changes induced by half a degree of warming. Despite these challenges we provide evidence for large-scale changes in the intensity and frequency of climate extremes due to half a degree of warming. We first assess the difference in extreme climate indicators in observational data for the 1960s and 1970s versus the recent past, two periods that differ by half a degree. We identify distinct differences in the global and continental-scale occurrence of heat and heavy precipitation extremes. We show that those observed changes in heavy precipitation and heat extremes broadly agree with simulated historical differences and are informative for the projected differences between 1.5 and 2°C of warming despite different radiative forcings. We therefore argue that evidence from the observational record can inform the debate about discernible climate impacts in the light of model uncertainty by providing a conservative estimate of the implications of 0.5°C of warming. A limitation of using the observational record arises from potential non-linearities in the response of climate extremes to a certain level of warming. We test for potential non-linearities in the response of heat and heavy precipitation extremes in a large ensemble of transient climate simulations. We further quantify differences between a time-window approach in a coupled-model large ensemble and time-slice experiments using prescribed SSTs performed in the context of the HAPPI-MIP project. We thereby provide different lines of evidence that half a degree of warming leads to substantial changes in the expected occurrence of heat and heavy precipitation extremes.

  10. Role of absorbing aerosols on hot extremes in India in a GCM

    NASA Astrophysics Data System (ADS)

    Mondal, A.; Sah, N.; Venkataraman, C.; Patil, N.

    2017-12-01

    Temperature extremes and heat waves in North-Central India during the summer months of March through June are known for causing significant impacts on human health, productivity and mortality. While greenhouse gas-induced global warming is generally believed to intensify the magnitude and frequency of such extremes, aerosols are usually associated with an overall cooling, by virtue of their dominant radiation-scattering nature, in most world regions. Recently, large-scale atmospheric conditions leading to heat waves and extreme temperature conditions have been analysed for the North-Central Indian region. However, the role of absorbing aerosols, including black carbon and dust, in mediating hot extremes in the region is still not well understood. In this study, we use 30-year simulations from a chemistry-coupled atmosphere-only General Circulation Model (GCM), ECHAM6-HAM2, forced with evolving aerosol emissions in an interactive aerosol module, along with observed sea surface temperatures, to examine large-scale and mesoscale conditions during hot extremes in India. The model is first validated with observed gridded temperature and reanalysis data, and is found to represent realistically the observed variations in temperature in the North-Central region and the concurrent large-scale atmospheric conditions during high temperature extremes. During these extreme events, changes in near-surface properties include a reduction in single scattering albedo and an enhancement in the short-wave solar heating rate, compared to climatological conditions. This is accompanied by positive anomalies of black carbon and dust aerosol optical depths. We conclude that the large-scale atmospheric conditions such as the presence of anticyclones and clear skies, conducive to heat waves and high temperature extremes, are exacerbated by absorbing aerosols in North-Central India. Future air quality regulations are expected to reduce sulfate particles and their masking of GHG warming. It is concurrently important to mitigate emissions of warming black carbon particles to manage future climate change-induced hot extremes.

  11. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database), at three different sizes, were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity performed on each. Comparable results available in the literature were also considered. Relational and non-relational NoSQL database systems both show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
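
    The "very different linear slopes" finding can be pictured with a toy fit; the record counts and timings below are invented for illustration and are not the study's measurements.

    ```python
    import numpy as np

    sizes = np.array([1e4, 1e5, 1e6])          # hypothetical numbers of EHR extracts
    t_relational = np.array([0.08, 0.9, 9.5])  # invented query times (seconds)
    t_document = np.array([0.05, 0.4, 3.8])

    # Both grow roughly linearly with database size; the relational slope is steeper
    slope_rel = np.polyfit(sizes, t_relational, 1)[0]
    slope_doc = np.polyfit(sizes, t_document, 1)[0]
    print(f"relational/document slope ratio: {slope_rel / slope_doc:.1f}")
    ```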

  12. Brain Tumour Segmentation based on Extremely Randomized Forest with high-level features.

    PubMed

    Pinto, Adriano; Pereira, Sergio; Correia, Higino; Oliveira, J; Rasteiro, Deolinda M L D; Silva, Carlos A

    2015-08-01

    Gliomas are among the most common and aggressive brain tumours. Segmentation of these tumours is important for surgery and treatment planning, but also for follow-up evaluations. However, it is a difficult task, given that tumour size and location are variable and the delineation of all tumour tissue is not trivial, even with all the different modalities of Magnetic Resonance Imaging (MRI). We propose a discriminative and fully automatic method for the segmentation of gliomas, using appearance- and context-based features to feed an Extremely Randomized Forest (Extra-Trees). Some of these features are computed over a non-linear transformation of the image. The proposed method was evaluated using the publicly available Challenge database from BraTS 2013, obtaining Dice scores of 0.83, 0.78 and 0.73 for the complete tumour, the core and the enhanced regions, respectively. Our results are competitive with other results reported using the same database.
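
    A compact sketch of the classifier side of such a pipeline, using scikit-learn's ExtraTreesClassifier on per-voxel features together with the Dice score used for evaluation; the features and labels are synthetic stand-ins, not BraTS data.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier

    def dice_score(pred, truth):
        # Dice = 2|A ∩ B| / (|A| + |B|) on binary masks
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum())

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 20))                # per-voxel appearance/context features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy stand-in for tumour labels

    clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:4000], y[:4000])
    pred = clf.predict(X[4000:]).astype(bool)
    print("Dice:", dice_score(pred, y[4000:].astype(bool)))
    ```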

  13. The Regional Hydrologic Extremes Assessment System: A software framework for hydrologic modeling and data assimilation

    PubMed Central

    Das, Narendra; Stampoulis, Dimitrios; Ines, Amor; Fisher, Joshua B.; Granger, Stephanie; Kawata, Jessie; Han, Eunjin; Behrangi, Ali

    2017-01-01

    The Regional Hydrologic Extremes Assessment System (RHEAS) is a prototype software framework for hydrologic modeling and data assimilation that automates the deployment of water resources nowcasting and forecasting applications. A spatially-enabled database is a key component of the software that can ingest a suite of satellite and model datasets while facilitating the interfacing with Geographic Information System (GIS) applications. The datasets ingested are obtained from numerous space-borne sensors and represent multiple components of the water cycle. The object-oriented design of the software allows for modularity and extensibility, showcased here with the coupling of the core hydrologic model with a crop growth model. RHEAS can exploit multi-threading to scale with increasing number of processors, while the database allows delivery of data products and associated uncertainty through a variety of GIS platforms. A set of three example implementations of RHEAS in the United States and Kenya are described to demonstrate the different features of the system in real-world applications. PMID:28545077

  14. The Regional Hydrologic Extremes Assessment System: A software framework for hydrologic modeling and data assimilation.

    PubMed

    Andreadis, Konstantinos M; Das, Narendra; Stampoulis, Dimitrios; Ines, Amor; Fisher, Joshua B; Granger, Stephanie; Kawata, Jessie; Han, Eunjin; Behrangi, Ali

    2017-01-01

    The Regional Hydrologic Extremes Assessment System (RHEAS) is a prototype software framework for hydrologic modeling and data assimilation that automates the deployment of water resources nowcasting and forecasting applications. A spatially-enabled database is a key component of the software that can ingest a suite of satellite and model datasets while facilitating the interfacing with Geographic Information System (GIS) applications. The datasets ingested are obtained from numerous space-borne sensors and represent multiple components of the water cycle. The object-oriented design of the software allows for modularity and extensibility, showcased here with the coupling of the core hydrologic model with a crop growth model. RHEAS can exploit multi-threading to scale with increasing number of processors, while the database allows delivery of data products and associated uncertainty through a variety of GIS platforms. A set of three example implementations of RHEAS in the United States and Kenya are described to demonstrate the different features of the system in real-world applications.

  15. A Data Analysis Expert System For Large Established Distributed Databases

    NASA Astrophysics Data System (ADS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large, isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system addresses the central dilemma confronting most large companies and institutions in America: the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  16. Use of a primary care database to determine trends in genital chlamydia testing, diagnostic episodes and management in UK general practice, 1990–2004

    PubMed Central

    Hughes, Gwenda; Williams, Tim; Simms, Ian; Mercer, Catherine; Fenton, Kevin; Cassell, Jackie

    2007-01-01

    Objective To determine the extent of testing, diagnostic episodes and management of genital Chlamydia trachomatis (CT) infection in UK primary care using a large primary care database. Methods The incidence of CT tests, diagnostic episodes, treatments and referrals was measured for all adult patients in the General Practice Research Database between 1990 and 2004. Results Rates of CT testing in those aged 12–64 years in 2004 increased to 1439/100 000 patient years (py) in women but only 74/100 000 py in men. Testing rates were highest among 20–24‐year‐old women (5.5% tested in 2004), followed by 25–34‐year‐old women (3.7% tested in 2004). 0.5% of registered 16–24‐year‐old women were diagnosed as having CT infection in 2004. Three‐quarters of patients with a recorded diagnosis of CT had had an appropriate prescription issued in 2004, a proportion that increased from 1990 along with a decrease in referrals to genitourinary medicine. In 2004, general practitioners treated 25.0% of all recorded diagnoses of CT in women and 5.1% of those in men. Conclusions Testing for and diagnostic episodes of CT in primary care have increased since 1990. Testing continues disproportionately to target women aged >24 years. Extremely low rates of testing in men, together with high positivity, demonstrate a missed opportunity for diagnosis of CT and contact tracing in general practice. PMID:17360731

  17. MASSCLEANage: Stellar Cluster Ages from Integrated Colors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popescu, Bogdan; Hanson, M. M., E-mail: popescb@mail.uc.ed, E-mail: margaret.hanson@uc.ed

    2010-11-20

    We present the recently updated and expanded MASSCLEANcolors, a database of 70 million Monte Carlo models selected to match the properties (metallicity, ages, and masses) of stellar clusters found in the Large Magellanic Cloud (LMC). This database shows the rather extreme and non-Gaussian distribution of integrated colors and magnitudes expected with different cluster age and mass and the enormous age degeneracy of integrated colors when mass is unknown. This degeneracy could lead to catastrophic failures in estimating age with standard simple stellar population models, particularly if most of the clusters are of intermediate or low mass, as in the LMC. Utilizing the MASSCLEANcolors database, we have developed MASSCLEANage, a statistical inference package which assigns the most likely age and mass (solved simultaneously) to a cluster based only on its integrated broadband photometric properties. Finally, we use MASSCLEANage to derive the age and mass of LMC clusters based on integrated photometry alone. First, we compare our cluster ages against those obtained for the same seven clusters using more accurate integrated spectroscopy. We find improved agreement with the integrated spectroscopy ages over the original photometric ages. A close examination of our results demonstrates the necessity of solving simultaneously for mass and age to reduce degeneracies in the cluster ages derived via integrated colors. We then selected an additional subset of 30 photometric clusters with previously well-constrained ages and independently derived their ages using MASSCLEANage with the same photometry, with very good agreement. The MASSCLEANage program is freely available under the GNU General Public License.

  18. Use of Patient Registries and Administrative Datasets for the Study of Pediatric Cancer

    PubMed Central

    Rice, Henry E.; Englum, Brian R.; Gulack, Brian C.; Adibe, Obinna O.; Tracy, Elizabeth T.; Kreissman, Susan G.; Routh, Jonathan C.

    2015-01-01

    Analysis of data from large administrative databases and patient registries is increasingly being used to study childhood cancer care, although the value of these data sources remains unclear to many clinicians. Interpretation of large databases requires a thorough understanding of how the dataset was designed, how data were collected, and how to assess data quality. This review will detail the role of administrative databases and registry databases for the study of childhood cancer, tools to maximize information from these datasets, and recommendations to improve the use of these databases for the study of pediatric oncology. PMID:25807938

  19. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    NASA Astrophysics Data System (ADS)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object to a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects can be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes or the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances to each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
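
    The flavour of pivot-based randomized indexing can be sketched as follows: distances from every object to a small random pivot set are tabulated offline (the pairwise distances are known), and at query time the triangle inequality prunes most of the expensive on-line distance computations. This is a generic illustration of the idea, not the paper's specific algorithm.

    ```python
    import numpy as np

    def build_index(objects, dist, n_pivots=16, seed=0):
        rng = np.random.default_rng(seed)
        pivots = list(rng.choice(len(objects), n_pivots, replace=False))
        # Offline table: distance of every object to every pivot
        table = np.array([[dist(objects[i], objects[p]) for p in pivots]
                          for i in range(len(objects))])
        return pivots, table

    def nearest(query, objects, dist, pivots, table):
        dq = np.array([dist(query, objects[p]) for p in pivots])  # on-line pivot distances
        lower = np.abs(table - dq).max(axis=1)   # triangle-inequality lower bounds
        order = np.argsort(lower)
        best, best_d = None, float("inf")
        for i in order:
            if lower[i] >= best_d:               # no remaining candidate can win
                break
            d = dist(query, objects[i])          # expensive distance, computed lazily
            if d < best_d:
                best, best_d = i, d
        return best, best_d
    ```

    With well-spread pivots only a small, slowly growing number of true distances is evaluated per query, which is the behaviour the O(log n) analysis captures.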

  20. Lux in obscuro II: photon orbits of extremal AdS black holes revisited

    NASA Astrophysics Data System (ADS)

    Tang, Zi-Yu; Ong, Yen Chin; Wang, Bin

    2017-12-01

    A large class of spherically symmetric static extremal black hole spacetimes possesses a stable null photon sphere on the horizon. For the extremal Kerr-Newman family, the photon sphere only really coincides with the horizon in the sense clarified by Doran. The condition under which a photon orbit is stable on an asymptotically flat extremal Kerr-Newman black hole horizon has recently been clarified; it is found that a sufficiently large angular momentum destabilizes the photon orbit, whereas an electrical charge tends to stabilize it. We investigated the effect of a negative cosmological constant on this observation, and found the same behavior in the case of extremal asymptotically AdS Kerr-Newman black holes in (3+1) dimensions. In (2+1) dimensions, in the presence of an electrical charge, the angular momentum never becomes large enough to destabilize the photon orbit. We comment on the instabilities of black hole spacetimes with a stable photon orbit.
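
    For the static charged member of the family (Reissner-Nordström), the statement that a photon orbit migrates onto the horizon at extremality can be checked directly; the standard computation, reproduced here for orientation rather than taken from the paper, reads:

    ```latex
    \[
      f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2}, \qquad
      V_{\mathrm{eff}}(r) = \frac{L^2 f(r)}{r^2}, \qquad
      V_{\mathrm{eff}}'(r) = 0 \;\Longleftrightarrow\; r f'(r) = 2 f(r)
      \;\Longleftrightarrow\; r^2 - 3Mr + 2Q^2 = 0,
    \]
    \[
      r_{\pm} = \frac{3M \pm \sqrt{9M^2 - 8Q^2}}{2}
      \;\xrightarrow{\;Q \to M\;}\;
      r_{+} = 2M, \qquad r_{-} = M = r_{\mathrm{horizon}},
    \]
    ```

    so the inner photon orbit indeed coincides with the degenerate horizon in the extremal limit.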

  1. North American Extreme Temperature Events and Related Large Scale Meteorological Patterns: A Review of Statistical Methods, Dynamics, Modeling, and Trends

    NASA Technical Reports Server (NTRS)

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael G.; Gershunov, Alexander; Gutowski, William J., Jr.; Gyakum, John R.; Katz, Richard W.; hide

    2015-01-01

    The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.

  2. Experimental Study of Homogeneous Isotropic Slowly-Decaying Turbulence in Giant Grid-Wind Tunnel Set Up

    NASA Astrophysics Data System (ADS)

    Aliseda, Alberto; Bourgoin, Mickael; Eswirp Collaboration

    2014-11-01

    We present preliminary results from a recent grid turbulence experiment conducted at the ONERA wind tunnel in Modane, France. The ESWIRP Collaboration was conceived to probe the smallest scales of a canonical turbulent flow at very high Reynolds numbers. To achieve this, the largest scales of the turbulence need to be extremely large so that, even with the large separation of scales, the smallest scales remain well above the spatial and temporal resolution of the instruments. The ONERA wind tunnel in Modane (8-m-diameter test section) was chosen as a limit of the biggest large scales achievable in a laboratory setting. A giant inflatable grid (M = 0.8 m) was conceived to induce slowly-decaying homogeneous isotropic turbulence in a large region of the test section, with minimal structural risk. An international team of researchers collected hot wire anemometry, ultrasound anemometry, resonant cantilever anemometry, fast pitot tube anemometry, cold wire thermometry and high-speed particle tracking data of this canonical turbulent flow. While analysis of this large database, which will become publicly available over the next 2 years, has only started, the Taylor-scale Reynolds number is estimated to be between 400 and 800, with Kolmogorov scales as large as a few mm. The ESWIRP Collaboration is an international team of scientists formed to investigate experimentally the smallest scales of turbulence. It was funded by the European Union to take advantage of the largest wind tunnel in Europe for fundamental research.
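
    The scaling argument behind "a bigger tunnel gives bigger smallest scales" can be written out; the numbers below are rough estimates of ours, included only to show consistency with the values quoted above:

    ```latex
    \[
      \eta = \left( \frac{\nu^{3}}{\varepsilon} \right)^{1/4}, \qquad
      \frac{L}{\eta} \sim Re_{L}^{3/4}, \qquad
      Re_{\lambda}^{2} \approx \frac{20}{3}\, Re_{L}.
    \]
    ```

    Taking Re_lambda ≈ 600 gives Re_L ≈ 5 × 10^4 and L/η ≈ 4 × 10^3, so integral scales of a few metres put the Kolmogorov scale at the millimetre level, consistent with the estimate above.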

  3. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.
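
    As a minimal illustration of the step from files and directories to an indexed database, the sketch below uses Python's built-in sqlite3 for the insertional-mutagenesis strain catalog mentioned above; the schema and values are invented.

    ```python
    import sqlite3

    con = sqlite3.connect("strains.db")   # hypothetical strain catalog
    con.execute("""CREATE TABLE IF NOT EXISTS strains (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        gene TEXT,
        insertion_site INTEGER)""")
    con.execute("CREATE INDEX IF NOT EXISTS idx_gene ON strains(gene)")

    con.execute("INSERT INTO strains (name, gene, insertion_site) VALUES (?, ?, ?)",
                ("mut-001", "YFG1", 123456))
    con.commit()

    # An indexed lookup that a directory full of files cannot do efficiently
    for row in con.execute("SELECT name, insertion_site FROM strains WHERE gene = ?",
                           ("YFG1",)):
        print(row)
    ```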

  4. Polygenic determinants in extremes of high-density lipoprotein cholesterol

    PubMed Central

    Dron, Jacqueline S.; Wang, Jian; Low-Kam, Cécile; Khetarpal, Sumeet A.; Robinson, John F.; McIntyre, Adam D.; Ban, Matthew R.; Cao, Henian; Rhainds, David; Dubé, Marie-Pierre; Rader, Daniel J.; Lettre, Guillaume; Tardif, Jean-Claude

    2017-01-01

    HDL cholesterol (HDL-C) remains a superior biochemical predictor of CVD risk, but its genetic basis is incompletely defined. In patients with extreme HDL-C concentrations, we concurrently evaluated the contributions of multiple large- and small-effect genetic variants. In a discovery cohort of 255 unrelated lipid clinic patients with extreme HDL-C levels, we used a targeted next-generation sequencing panel to evaluate rare variants in known HDL metabolism genes, simultaneously with common variants bundled into a polygenic trait score. Two additional cohorts were used for validation and included 1,746 individuals from the Montréal Heart Institute Biobank and 1,048 individuals from the University of Pennsylvania. Findings were consistent between cohorts: we found rare heterozygous large-effect variants in 18.7% and 10.9% of low- and high-HDL-C patients, respectively. We also found common variant accumulation, indicated by extreme polygenic trait scores, in an additional 12.8% and 19.3% of overall cases of low- and high-HDL-C extremes, respectively. Thus, the genetic basis of extreme HDL-C concentrations encountered clinically is frequently polygenic, with contributions from both rare large-effect and common small-effect variants. Multiple types of genetic variants should be considered as contributing factors in patients with extreme dyslipidemia. PMID:28870971

  5. Polygenic determinants in extremes of high-density lipoprotein cholesterol.

    PubMed

    Dron, Jacqueline S; Wang, Jian; Low-Kam, Cécile; Khetarpal, Sumeet A; Robinson, John F; McIntyre, Adam D; Ban, Matthew R; Cao, Henian; Rhainds, David; Dubé, Marie-Pierre; Rader, Daniel J; Lettre, Guillaume; Tardif, Jean-Claude; Hegele, Robert A

    2017-11-01

    HDL cholesterol (HDL-C) remains a superior biochemical predictor of CVD risk, but its genetic basis is incompletely defined. In patients with extreme HDL-C concentrations, we concurrently evaluated the contributions of multiple large- and small-effect genetic variants. In a discovery cohort of 255 unrelated lipid clinic patients with extreme HDL-C levels, we used a targeted next-generation sequencing panel to evaluate rare variants in known HDL metabolism genes, simultaneously with common variants bundled into a polygenic trait score. Two additional cohorts were used for validation and included 1,746 individuals from the Montréal Heart Institute Biobank and 1,048 individuals from the University of Pennsylvania. Findings were consistent between cohorts: we found rare heterozygous large-effect variants in 18.7% and 10.9% of low- and high-HDL-C patients, respectively. We also found common variant accumulation, indicated by extreme polygenic trait scores, in an additional 12.8% and 19.3% of overall cases of low- and high-HDL-C extremes, respectively. Thus, the genetic basis of extreme HDL-C concentrations encountered clinically is frequently polygenic, with contributions from both rare large-effect and common small-effect variants. Multiple types of genetic variants should be considered as contributing factors in patients with extreme dyslipidemia. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.
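
    A bare-bones sketch of the polygenic trait score logic described in both records above; the effect sizes, cohort, and percentile cut-offs for calling a score "extreme" are all hypothetical.

    ```python
    import numpy as np

    def polygenic_score(dosages, betas):
        # dosages: (n_individuals, n_variants) allele counts in {0, 1, 2}
        # betas: per-variant effect sizes (e.g., from association studies)
        return np.asarray(dosages).dot(np.asarray(betas))

    def extreme_score_flags(scores, low_pct=10.0, high_pct=90.0):
        # Percentile thresholds are illustrative; studies define their own cut-offs
        lo, hi = np.percentile(scores, [low_pct, high_pct])
        return scores <= lo, scores >= hi

    rng = np.random.default_rng(0)
    dosages = rng.integers(0, 3, size=(1000, 300))   # toy cohort
    betas = rng.normal(0.0, 0.05, size=300)          # toy effect sizes
    low, high = extreme_score_flags(polygenic_score(dosages, betas))
    print(low.sum(), "low-score and", high.sum(), "high-score individuals")
    ```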

  6. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    ERIC Educational Resources Information Center

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    A collaborative system with a finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative system operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  7. North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends

    DOE PAGES

    Grotjahn, Richard; Black, Robert; Leung, Ruby; ...

    2015-05-22

    This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.

  8. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    ERIC Educational Resources Information Center

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  9. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  10. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.

  11. Global Losses and Declining Vulnerability to Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    Narita, D.; Hsiang, S. M.

    2011-12-01

    Approach: An extreme environmental event may generate different losses for different societies. If the physical exposure to an event is held fixed, then the magnitude of a society's loss defines its vulnerability to that event. Competing hypotheses suggest that social and economic developments could make vulnerability rise or fall over time, but previous studies have been unable to reject either hypothesis because they lacked accurate data on societies' physical exposure to extreme events. We address this problem for a specific type of event by reconstructing the exposure of 233 countries to every tropical cyclone (TC) on the planet between 1950 and 2008, making use of the Limited Information Cyclone Reconstruction and Integration for Climate and Economics (LICRICE) model [Hsiang, 2010]. By filling a critical data gap, this reconstruction enables us to compare how revenue losses, damages, and deaths from physically similar events change over time. Our approach contrasts with a large literature, which relies almost exclusively on self-reported data of TC damages compiled by the Emergency Events Database (EM-DAT) [OFDA/CRED, 2009]. Results: On a global scale, we find that populations rapidly mitigate certain TC risks, reducing their reported damages from a TC of low intensity by a remarkable 9.4% per year and death rates by 5.1% per year (Figure 1). However, these rapid reductions in vulnerability are not evident for the highest intensity TCs, and lost agricultural revenues, which are more difficult to observe than deaths or damages, exhibit non-declining vulnerability for events of all intensities. Because the vulnerability of agriculture has remained high while vulnerability to damages has declined rapidly, our results indicate that lost agricultural revenues have dominated TC losses since approximately 1990. References: Hsiang, S. M. (2010). Temperatures and cyclones strongly associated with economic production in the Caribbean and Central America. Proceedings of the National Academy of Sciences, 107(35):15367-15372. OFDA/CRED (2009). The International Disaster Database.

  12. A Database as a Service for the Healthcare System to Store Physiological Signal Data.

    PubMed

    Chang, Hsien-Tsung; Lin, Tsai-Huei

    2016-01-01

    Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records: 1) a large number of users; 2) a large amount of data; 3) low information variability; 4) data privacy authorization; and 5) data access by designated users, we wish to resolve physiological signal record-relevant issues by utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance.

  13. A Database as a Service for the Healthcare System to Store Physiological Signal Data

    PubMed Central

    Lin, Tsai-Huei

    2016-01-01

    Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records: 1) a large number of users; 2) a large amount of data; 3) low information variability; 4) data privacy authorization; and 5) data access by designated users, we wish to resolve physiological signal record-relevant issues by utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance. PMID:28033415
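
    The file-plus-metadata pattern described in the two records above can be sketched in a few lines; the paths, schema and access-control encoding are invented for illustration.

    ```python
    import json
    import sqlite3
    from pathlib import Path

    STORE = Path("signals")            # hypothetical flat-file signal store
    STORE.mkdir(exist_ok=True)
    con = sqlite3.connect("meta.db")   # only metadata and permissions in the DB
    con.execute("""CREATE TABLE IF NOT EXISTS records (
        user_id TEXT, kind TEXT, path TEXT, authorized TEXT)""")

    def store_signal(user_id, kind, samples, authorized):
        # Bulk samples go to a file; the database stays small and fast
        path = STORE / f"{user_id}_{kind}.json"
        path.write_text(json.dumps(samples))
        con.execute("INSERT INTO records VALUES (?, ?, ?, ?)",
                    (user_id, kind, str(path), ",".join(authorized)))
        con.commit()

    def read_signal(user_id, kind, requester):
        row = con.execute("SELECT path, authorized FROM records "
                          "WHERE user_id = ? AND kind = ?", (user_id, kind)).fetchone()
        if row and requester in row[1].split(","):   # designated users only
            return json.loads(Path(row[0]).read_text())
        raise PermissionError("not authorized")

    store_signal("u001", "heart_rate", [72, 75, 71], authorized=["u001", "dr_lee"])
    print(read_signal("u001", "heart_rate", "dr_lee"))
    ```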

  14. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis

    PubMed Central

    2013-01-01

    Background: Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and furthermore, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. Results: We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of results on initial random weights and improves the prediction performance. Conclusions: When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art Support Vector Machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method in 5-fold cross-validation. Moreover, PCA-EELM performs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful new tool for predicting PPIs with excellent performance and reduced computation time. PMID:23815620
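
    A condensed sketch of the PCA-EELM pipeline (PCA for dimension reduction, an ensemble of randomly initialized ELMs, majority voting); the ELM shown is the basic single-hidden-layer formulation, and all dimensions are placeholders rather than the paper's settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    class ELM:
        """Basic single-hidden-layer extreme learning machine."""
        def __init__(self, n_hidden=200, seed=None):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)      # random, untrained hidden layer
            T = np.eye(int(y.max()) + 1)[y]       # one-hot targets
            self.beta = np.linalg.pinv(H) @ T     # least-squares output weights
            return self

        def predict(self, X):
            return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

    def pca_eelm_predict(X_train, y_train, X_test, n_components=50, n_models=15):
        pca = PCA(n_components=n_components).fit(X_train)
        Xtr, Xte = pca.transform(X_train), pca.transform(X_test)
        votes = np.stack([ELM(seed=s).fit(Xtr, y_train).predict(Xte)
                          for s in range(n_models)])
        # Majority vote removes the dependence on any single random initialization
        return np.array([np.bincount(votes[:, j]).argmax()
                         for j in range(votes.shape[1])])
    ```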

  15. In-Flight Decision-Making by General Aviation Pilots Operating in Areas of Extreme Thunderstorms.

    PubMed

    Boyd, Douglas D

    2017-12-01

    General aviation (comprised mainly of noncommercial, light aircraft) accounts for 94% of civil aviation fatalities in the United States. Although thunderstorms are hazardous to light aircraft, little research has been undertaken on in-flight pilot decision-making regarding their avoidance. The study objectives were: 1) to determine whether the thunderstorm accident rate has declined over the last two decades; and 2) to assess in-flight (en route/landing) airman decision-making regarding adherence to FAA separation minima from thunderstorms. Thunderstorm-related accidents were identified from the NTSB database. To determine real-time thunderstorm proximity and relative position for en route/arriving aircraft, airplane locations from a flight-tracking website (FlightAware®) were overlaid on graphical weather images. Statistics employed Poisson and Chi-squared analyses. The thunderstorm-related accident rate was undiminished over the 1996-2014 period. In a prospective analysis, the majority (en route 77%, landing 93%) of flights violated the FAA-recommended separation distance from extreme convection. Of these, 79% and 69% (en route and landing, respectively) selected a route downwind of the thunderstorm rather than a less hazardous upwind flight path. Using a mathematical product of binary (separation distance, relative aircraft-thunderstorm position) and nominal (thunderstorm-free egress area) parameters, airmen were more likely to operate in the thunderstorm hazard zone for landings than for en route operations. The thunderstorm-related accident rate, carrying a 70% fatality rate, remains unabated, largely reflecting nonadherence to the FAA-recommended separation minima and selection of a more hazardous route (downwind) for circumnavigation of extreme convective weather. These findings argue for additional emphasis in ab initio pilot training/recurrency on thunderstorm hazards and safe practices (separation distance and flight path). Boyd DD. In-flight decision-making by general aviation pilots operating in areas of extreme thunderstorms. Aerosp Med Hum Perform. 2017; 88(12):1066-1072.

  16. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are universally used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement, given the contradictions between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server and a spatial data server. Moreover, the design and implementation of the components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans). Besides, experiments on the relation between spatial data volume and response time under different conditions were conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  17. A systematic review of model-based economic evaluations of diagnostic and therapeutic strategies for lower extremity artery disease.

    PubMed

    Vaidya, Anil; Joore, Manuela A; ten Cate-Hoek, Arina J; Kleinegris, Marie-Claire; ten Cate, Hugo; Severens, Johan L

    2014-01-01

    Lower extremity artery disease (LEAD) is a sign of widespread atherosclerosis that also affects the coronary, cerebral and renal arteries and is associated with an increased risk of cardiovascular events. Many economic evaluations have been published for LEAD because of its clinical, social and economic importance. The aim of this systematic review was to assess the modelling methods used in published economic evaluations in the field of LEAD. Our review appraised and compared the general characteristics, model structure and methodological quality of published models. The electronic databases MEDLINE and EMBASE were searched up to February 2013 via the OVID interface. The Cochrane Database of Systematic Reviews, the Health Technology Assessment database hosted by the National Institute for Health Research, and the National Health Service Economic Evaluation Database (NHS EED) were also searched. The methodological quality of the included studies was assessed using the Philips checklist. Sixteen model-based economic evaluations were identified and included. Eleven models compared therapeutic health technologies; three models compared diagnostic tests; and two models compared a combination of diagnostic and therapeutic options for LEAD. The results of this systematic review revealed an acceptable to low methodological quality of the included studies. Methodological diversity and insufficient information posed a challenge for valid comparison of the included studies. In conclusion, there is a need for transparent, methodologically comparable and scientifically credible model-based economic evaluations in the field of LEAD. Future modelling studies should include clinically and economically important cardiovascular outcomes to reflect the wider impact of LEAD on individual patients and on society.

  18. Drivers and seasonal predictability of extreme wind speeds in the ECMWF System 4 and a statistical model

    NASA Astrophysics Data System (ADS)

    Walz, M. A.; Donat, M.; Leckebusch, G. C.

    2017-12-01

    As extreme wind speeds are responsible for large socio-economic losses in Europe, a skillful prediction would be of great benefit for disaster prevention as well as for the actuarial community. Here we evaluate patterns of large-scale atmospheric variability and the seasonal predictability of extreme wind speeds (e.g. >95th percentile) over the European domain in the dynamical seasonal forecast system ECMWF System 4, and compare them to the predictability obtained with a statistical prediction model. The dominant patterns of atmospheric variability show distinct differences between reanalysis and ECMWF System 4, with most patterns in System 4 extended downstream in comparison to ERA-Interim. The dissimilar manifestations of the patterns within the two models lead to substantially different drivers associated with the occurrence of extreme winds in each model. While ECMWF System 4 is shown to provide some predictive power over Scandinavia and the eastern Atlantic, only very few grid cells in the European domain show significant correlations for extreme wind speeds in System 4 compared to ERA-Interim. In contrast, a statistical model predicts extreme wind speeds during boreal winter in better agreement with the observations. Our results suggest that System 4 does not capture the potential predictability of extreme winds that exists in the real world, and therefore fails to provide reliable seasonal predictions for lead months 2-4. This is likely related to the unrealistic representation of large-scale patterns of atmospheric variability. Hence our study points to potential improvements in dynamical prediction skill through improved simulation of large-scale atmospheric dynamics.

  19. TabSQL: a MySQL tool to facilitate mapping user data to public databases.

    PubMed

    Xia, Xiao-Qin; McClelland, Michael; Wang, Yipeng

    2010-06-23

    With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data.
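
    The core idea (bulk-loading tab-delimited annotation tables and joining them against user data in SQL) can be sketched as follows; TabSQL itself wraps MySQL, but the sketch uses Python's built-in sqlite3 so it runs self-contained, and all table names, columns, and rows are invented.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE annotation (gene_id TEXT, go_term TEXT)")
      con.execute("CREATE TABLE user_data (gene_id TEXT, fold_change REAL)")

      # In TabSQL the annotation rows would be imported from tab-delimited flat
      # files downloaded from GO/Ensembl/UCSC; inline rows keep this sketch
      # self-contained.
      con.executemany("INSERT INTO annotation VALUES (?, ?)",
                      [("YAL001C", "GO:0003677"), ("YAL002W", "GO:0005515")])
      con.executemany("INSERT INTO user_data VALUES (?, ?)",
                      [("YAL001C", 2.3), ("YAL002W", -1.7)])

      # A single join annotates the user's genes: the "query across user data
      # and public databases without programming" step, reduced to SQL.
      for row in con.execute("""
              SELECT u.gene_id, u.fold_change, a.go_term
              FROM user_data u JOIN annotation a USING (gene_id)
              WHERE u.fold_change > 2"""):
          print(row)   # -> ('YAL001C', 2.3, 'GO:0003677')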

  20. TabSQL: a MySQL tool to facilitate mapping user data to public databases

    PubMed Central

    2010-01-01

    Background With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. Results We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. Conclusions TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data. PMID:20573251

  1. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2002-08-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, and relational databases, as well as ACeDB. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system.

  2. Extreme water-related weather events and waterborne disease.

    PubMed

    Cann, K F; Thomas, D Rh; Salmon, R L; Wyn-Jones, A P; Kay, D

    2013-04-01

    Global climate change is expected to affect the frequency, intensity and duration of extreme water-related weather events such as excessive precipitation, floods, and drought. We conducted a systematic review to examine waterborne outbreaks following such events and explored their distribution between the different types of extreme water-related weather events. Four medical and meteorological databases (Medline, Embase, GeoRef, PubMed) and a global electronic reporting system (ProMED) were searched, from 1910 to 2010. Eighty-seven waterborne outbreaks involving extreme water-related weather events were identified and included, alongside 235 ProMED reports. Heavy rainfall and flooding were the most common events preceding outbreaks associated with extreme weather and were reported in 55·2% and 52·9% of accounts, respectively. The most common pathogens reported in these outbreaks were Vibrio spp. (21·6%) and Leptospira spp. (12·7%). Outbreaks following extreme water-related weather events were often the result of contamination of the drinking-water supply (53·7%). Differences in reporting of outbreaks were seen between the scientific literature and ProMED. Extreme water-related weather events represent a risk to public health in both developed and developing countries, but impact will be disproportionate and likely to compound existing health disparities.

  3. Epidemiology of fishing related upper extremity injuries presenting to the emergency department in the United States.

    PubMed

    Gil, Joseph A; Elia, Gregory; Shah, Kalpit N; Owens, Brett D; Got, Christopher

    2018-04-16

    Fishing injuries commonly affect the hands. The goal of this study was to quantify the incidence of fishing-related upper extremity injuries that present to emergency departments in the United States. We examined the reported cases of fishing-related upper extremity injuries in the National Electronic Injury Surveillance System database. Analysis was performed based on age, sex and the type of injury reported. The national incidence of fishing-related upper extremity injuries was 119.6 per 1 million person-years in 2014. The most common anatomic site for injury was the finger (63.3%), followed by the hand (20.3%). The most common type of injury in the upper extremity was the presence of a foreign body (70.4%). The incidence of fishing-related upper extremity injuries in males was 200 per 1 million person-years, which was significantly higher than the incidence in females (41 per 1 million person-years). The incidence of fishing-related upper extremity injuries that present to the Emergency Department was 120 per 1 million person-years. The incidence was significantly higher in males. With the widespread popularity of the activity, it is important for Emergency Physicians and Hand Surgeons to understand how to properly evaluate and manage these injuries.

  4. Extreme phenophase delays and their relationship with natural forcings in Beijing over the past 260 years.

    PubMed

    Liu, Yang; Zhang, Mingqing; Fang, Xiuqi

    2018-03-20

    By merging reconstructed phenological series from published articles and observations of the China Phenology Observation Network (CPON), the first blooming date of Amygdalus davidiana (FBA) in Beijing between 1741 and 2000 is reconstructed. The Butterworth method is used to remove the multi-year variations, generating a phenological series of annual variations in the first blooming date of A. davidiana. The extreme delay years in the phenological series are identified using the percentage threshold method. The characteristics of the extreme delays and the correspondence of these events with natural forcings are analysed. The main results are as follows. In the annual phenological series, the extreme delays mostly appeared as single years; only A.D. 1800-1801, 1816-1817 and 1983-1984 were events of two consecutive extreme years. Approximately 85% of the extreme delays occurred during the 1-2 years after large volcanic eruptions (VEI ≥ 4) in the eastern or western rim of the Pacific Ocean, and the same proportion of the extreme delays followed El Niño events. About 73% of the extreme delay years fall in the valleys of sunspot cycles or in the Dalton minimum period, in the year itself or the previous year. According to the certainty factor (CF), large eruptions have the greatest influence on the extreme delays, followed by sunspot activity and then ENSO. An extreme phenological delay year is most likely to occur after a large eruption, particularly when it occurs during an El Niño year and the preceding several years were in the descending portion or valley of a sunspot phase.

  5. Extreme phenophase delays and their relationship with natural forcings in Beijing over the past 260 years

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Zhang, Mingqing; Fang, Xiuqi

    2018-03-01

    By merging reconstructed phenological series from published articles and observations of the China Phenology Observation Network (CPON), the first blooming date of Amygdalus davidiana (FBA) in Beijing between 1741 and 2000 is reconstructed. The Butterworth method is used to remove the multi-year variations, generating a phenological series of annual variations in the first blooming date of A. davidiana. The extreme delay years in the phenological series are identified using the percentage threshold method. The characteristics of the extreme delays and the correspondence of these events with natural forcings are analysed. The main results are as follows. In the annual phenological series, the extreme delays mostly appeared as single years; only A.D. 1800-1801, 1816-1817 and 1983-1984 were events of two consecutive extreme years. Approximately 85% of the extreme delays occurred during the 1-2 years after large volcanic eruptions (VEI ≥ 4) in the eastern or western rim of the Pacific Ocean, and the same proportion of the extreme delays followed El Niño events. About 73% of the extreme delay years fall in the valleys of sunspot cycles or in the Dalton minimum period, in the year itself or the previous year. According to the certainty factor (CF), large eruptions have the greatest influence on the extreme delays, followed by sunspot activity and then ENSO. An extreme phenological delay year is most likely to occur after a large eruption, particularly when it occurs during an El Niño year and the preceding several years were in the descending portion or valley of a sunspot phase.

  6. [Privacy and public benefit in using large scale health databases].

    PubMed

    Yamamoto, Ryuichi

    2014-01-01

    In Japan, large-scale health databases have been constructed within a few years, such as the national claims and health checkup database (NDB) and the Japanese Sentinel project. However, some legal issues remain in striking an adequate balance between privacy and the public benefit of using such databases. The NDB operates under the act on health care for the elderly, but this act says nothing about using the database for the general public benefit. Researchers who use this database are therefore forced to pay great attention to anonymization and information security, which may disturb the research work itself. The Japanese Sentinel project is a national project to detect adverse drug reactions using large-scale distributed clinical databases of large hospitals. Although patients give broad consent for such future use for the public good, using insufficiently anonymized data is still under discussion. Generally speaking, researchers conducting studies for the public benefit will not infringe patients' privacy, but vague and complex legislative requirements on personal data protection may hamper the research. Medical science does not progress without the use of clinical information; therefore, legislation that balances the two and is simple and clear for both researchers and patients is strongly required. In Japan, a specific act for balancing privacy and public benefit is now under discussion. The author recommends that researchers, including those in the field of pharmacology, pay attention to, participate in the discussion of, and make suggestions for such acts and regulations.

  7. Spatial clustering and meteorological drivers of summer ozone in Europe

    NASA Astrophysics Data System (ADS)

    Carro-Calvo, Leopoldo; Ordóñez, Carlos; García-Herrera, Ricardo; Schnell, Jordan L.

    2017-04-01

    We present a regionalization of summer near-surface ozone (O3) in Europe. For this purpose we apply a K-means algorithm to a gridded MDA8 O3 (maximum daily average 8-h ozone) dataset covering a European domain [15° W - 30° E, 35°-70° N] at 1° x 1° horizontal resolution for the 1998-2012 period. This dataset was compiled by merging observations from the European Monitoring and Evaluation Programme (EMEP) and the European Environment Agency's air quality database (AirBase). The K-means method identifies sets of regions where the O3 concentrations present coherent spatiotemporal patterns and are thus expected to be driven by similar meteorological factors. After some testing, 9 regions were selected: the British Isles, North-Central Europe, Northern Scandinavia, the Baltic countries, the Iberian Peninsula, Western Europe, South-Central Europe, Eastern Europe and the Balkans. For each region we examine the synoptic situations associated with elevated ozone extremes (days exceeding the 95th percentile of the summer MDA8 O3 distribution). Our analyses reveal that there are basically two kinds of regions in Europe: (a) those in the centre and south of the continent, where ozone extremes are associated with elevated temperature within the same region, and (b) those in northern Europe, where ozone extremes are driven by southerly advection of air masses from warmer, more polluted areas. Although the observed patterns were initially identified only for days registering high O3 extremes, all summer days can be projected onto such patterns to identify the main modes of meteorological variability of O3. We have found that such modes are partly responsible for the day-to-day variability in O3 concentrations and can explain a relatively large fraction (from 44 to 88%, depending on the region) of the interannual variability of summer mean MDA8 O3 during the period of analysis. On the other hand, some major teleconnection patterns have been tested but do not seem to exert a large impact on the variability of surface O3 over most regions. The identification of these independent regions where surface ozone presents coherent behaviour and responds similarly to specific meteorological modes of variability has multiple applications. For instance, the performance of chemical transport models (CTMs) and chemistry-climate models (CCMs) can be separately assessed over such regions to identify areas with large biases that need to be corrected. Our results can also be used to test the models' sensitivity to day-to-day changes in meteorology and to climate change over specific regions.
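
    A minimal sketch of this style of regionalization, clustering grid cells by their standardized time series with K = 9 as in the study; the synthetic array stands in for the merged EMEP/AirBase grid, and the cell count and series length are arbitrary.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)

      # Synthetic stand-in for a gridded summer MDA8 O3 dataset:
      # n_cells grid cells, each with a daily time series.
      n_cells, n_days = 500, 1380          # e.g. 15 summers x 92 days
      o3 = rng.normal(50, 10, size=(n_cells, n_days))

      # Standardize each cell's series so clustering reflects co-variability,
      # not differences in mean concentration.
      z = (o3 - o3.mean(axis=1, keepdims=True)) / o3.std(axis=1, keepdims=True)

      # K-means with K=9, matching the nine regions retained in the study.
      labels = KMeans(n_clusters=9, n_init=10, random_state=0).fit_predict(z)
      print(np.bincount(labels))           # grid cells per region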

  8. High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan

    2010-10-04

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors.

  9. Dynamical analysis of extreme precipitation in the US northeast based on large-scale meteorological patterns

    NASA Astrophysics Data System (ADS)

    Agel, Laurie; Barlow, Mathew; Colby, Frank; Binder, Hanin; Catto, Jennifer L.; Hoell, Andrew; Cohen, Judah

    2018-05-01

    Previous work has identified six large-scale meteorological patterns (LSMPs) of dynamic tropopause height associated with extreme precipitation over the Northeast US, with extreme precipitation defined as the top 1% of daily station precipitation. Here, we examine the three-dimensional structure of the tropopause LSMPs in terms of circulation and factors relevant to precipitation, including moisture, stability, and synoptic mechanisms associated with lifting. Within each pattern, the link between the different factors and extreme precipitation is further investigated by comparing the relative strength of the factors between days with and without the occurrence of extreme precipitation. The six tropopause LSMPs include two ridge patterns, two eastern US troughs, and two troughs centered over the Ohio Valley, with a strong seasonality associated with each pattern. Extreme precipitation in the ridge patterns is associated with both convective mechanisms (instability combined with moisture transport from the Great Lakes and Western Atlantic) and synoptic forcing related to Great Lakes storm tracks and embedded shortwaves. Extreme precipitation associated with eastern US troughs involves intense southerly moisture transport and strong quasi-geostrophic forcing of vertical velocity. Ohio Valley troughs are associated with warm fronts and intense warm conveyor belts that deliver large amounts of moisture ahead of storms, but little direct quasi-geostrophic forcing. Factors that show the largest difference between days with and without extreme precipitation include integrated moisture transport, low-level moisture convergence, warm conveyor belts, and quasi-geostrophic forcing, with the relative importance varying between patterns.

  10. Impact of an extreme climatic event on community assembly.

    PubMed

    Thibault, Katherine M; Brown, James H

    2008-03-04

    Extreme climatic events are predicted to increase in frequency and magnitude, but their ecological impacts are poorly understood. Such events are large, infrequent, stochastic perturbations that can change the outcome of entrained ecological processes. Here we show how an extreme flood event affected a desert rodent community that has been monitored for 30 years. The flood (i) caused catastrophic, species-specific mortality; (ii) eliminated the incumbency advantage of previously dominant species; (iii) reset long-term population and community trends; (iv) interacted with competitive and metapopulation dynamics; and (v) resulted in rapid, wholesale reorganization of the community. This and a previous extreme rainfall event were punctuational perturbations-they caused large, rapid population- and community-level changes that were superimposed on a background of more gradual trends driven by climate and vegetation change. Captured by chance through long-term monitoring, the impacts of such large, infrequent events provide unique insights into the processes that structure ecological communities.

  11. Toxicity assessment of industrial chemicals and airborne contaminants: transition from in vivo to in vitro test methods: a review.

    PubMed

    Bakand, S; Winder, C; Khalil, C; Hayes, A

    2005-12-01

    Exposure to occupational and environmental contaminants is a major contributor to human health problems. Inhalation of gases, vapors, aerosols, and mixtures of these can cause a wide range of adverse health effects, ranging from simple irritation to systemic diseases. Despite significant achievements in the risk assessment of chemicals, the toxicological database, particularly for industrial chemicals, remains limited. Considering there are approximately 80,000 chemicals in commerce, and an extremely large number of chemical mixtures, in vivo testing of this large number is unachievable from both economical and practical perspectives. While in vitro methods are capable of rapidly providing toxicity information, regulatory agencies in general are still cautious about the replacement of whole-animal methods with new in vitro techniques. Although studying the toxic effects of inhaled chemicals is a complex subject, recent studies demonstrate that in vitro methods may have significant potential for assessing the toxicity of airborne contaminants. In this review, current toxicity test methods for risk evaluation of industrial chemicals and airborne contaminants are presented. To evaluate the potential applications of in vitro methods for studying respiratory toxicity, more recent models developed for toxicity testing of airborne contaminants are discussed.

  12. Large scale simulations of the mechanical properties of layered transition metal ternary compounds for fossil energy power system applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, Wai-Yim

    2014-12-31

    Advanced materials with applications in extreme conditions such as high temperature, high pressure, and corrosive environments play a critical role in the development of new technologies to significantly improve the performance of different types of power plants. Materials that are currently employed in fossil energy conversion systems are typically Ni-based alloys and stainless steels that have already reached their ultimate performance limits. Incremental improvements are unlikely to meet the more stringent requirements aimed at increased efficiency and reduced risks while addressing environmental concerns and keeping costs low. Computational studies can lead the way in the search for novel materials, or for significant improvements in existing materials, that can meet such requirements. Detailed computational studies with sufficient predictive power can provide an atomistic-level understanding of the key characteristics that lead to desirable properties. This project focuses on the comprehensive study of a new class of materials called MAX phases, or Mn+1AXn (M = a transition metal, A = Al or another group III, IV, or V element, X = C or N). The MAX phases are layered transition metal carbides or nitrides with a rare combination of metallic and ceramic properties. Due to their unique structural arrangements and special types of bonding, these thermodynamically stable alloys possess some of the most outstanding properties. We used a genomic approach to screen a large number of potential MAX phases and established a database of the structural, mechanical and electronic properties of 665 viable MAX compounds, investigating the correlations among them. This database is then used as a tool for materials informatics in the further exploration of this class of intermetallic compounds.

  13. DSM-5 alternative personality disorder model traits as maladaptive extreme variants of the five-factor model: An item-response theory analysis.

    PubMed

    Suzuki, Takakuni; Samuel, Douglas B; Pahlen, Shandell; Krueger, Robert F

    2015-05-01

    Over the past two decades, evidence has suggested that personality disorders (PDs) can be conceptualized as extreme, maladaptive variants of general personality dimensions, rather than discrete categorical entities. Recognizing this literature, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) alternative PD model in Section III defines PDs partially through 25 maladaptive traits that fall within 5 domains. Empirical evidence based on the self-report measure of these traits, the Personality Inventory for DSM-5 (PID-5), suggests that these five higher-order domains share a structure and correlate in meaningful ways with the five-factor model (FFM) of general personality. In the current study, item response theory was used to compare the DSM-5 alternative PD model traits to those from a normative FFM inventory (the International Personality Item Pool-NEO [IPIP-NEO]) in terms of their measurement precision along the latent dimensions. Within a combined sample of 3,517 participants, results strongly supported the conclusion that the DSM-5 alternative PD model traits and IPIP-NEO traits are complementary measures of 4 of the 5 FFM domains (with perhaps the exception of openness to experience vs. psychoticism). Importantly, the two measures yield largely overlapping information curves on these four domains. Differences that did emerge suggested that the PID-5 scales generally have higher thresholds and provide more information at the upper levels, whereas the IPIP-NEO generally had an advantage at the lower levels. These results support the general conceptualization that 4 domains of the DSM-5 alternative PD model traits are maladaptive, extreme versions of the FFM. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  14. High Flow Nasal Cannula Use Is Associated with Increased Morbidity and Length of Hospitalization in Extremely Low Birth Weight Infants.

    PubMed

    Taha, Dalal K; Kornhauser, Michael; Greenspan, Jay S; Dysart, Kevin C; Aghai, Zubair H

    2016-06-01

    To determine differences in the incidence of bronchopulmonary dysplasia (BPD) or death in extremely low birth weight infants managed on high flow nasal cannula (HFNC) vs continuous positive airway pressure (CPAP). This is a retrospective data analysis from the Alere Neonatal Database for infants born between January 2008 and July 2013 who weighed ≤1000 g at birth and received HFNC or CPAP. Baseline demographics, clinical characteristics, and neonatal outcomes were compared between the infants who received CPAP and HFNC, or HFNC ± CPAP. Multivariable regression analysis was performed to control for the variables that differed in bivariate analysis. A total of 2487 infants met the inclusion criteria (941 CPAP group, 333 HFNC group, and 1546 HFNC ± CPAP group). The primary outcome of BPD or death was significantly higher in the HFNC group (56.8%) compared with the CPAP group (50.4%, P < .05). Similarly, the adjusted odds of developing BPD or death were greater in the HFNC ± CPAP group compared with the CPAP group (OR 1.085, 95% CI 1.035-1.137, P = .001). The number of ventilator days, postnatal steroid use, days to room air, days to initiate or reach full oral feeds, and length of hospitalization were significantly higher in the HFNC and HFNC ± CPAP groups compared with the CPAP group. In this retrospective study, use of HFNC in extremely low birth weight infants was associated with a higher risk of death or BPD, increased respiratory morbidities, delayed oral feeding, and prolonged hospitalization. A large clinical trial is needed to evaluate the long-term safety and efficacy of HFNC in preterm infants. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. High Flow Nasal Cannula Use Is Associated with Increased Morbidity and Length of Hospitalization in Extremely Low Birth Weight Infants

    PubMed Central

    Taha, Dalal K.; Kornhauser, Michael; Greenspan, Jay S.; Dysart, Kevin C.; Aghai, Zubair H.

    2017-01-01

    Objective To determine differences in the incidence of bronchopulmonary dysplasia (BPD) or death in extremely low birth weight infants managed on high flow nasal cannula (HFNC) vs continuous positive airway pressure (CPAP). Study design This is a retrospective data analysis from the Alere Neonatal Database for infants born between January 2008 and July 2013 who weighed ≤1000 g at birth and received HFNC or CPAP. Baseline demographics, clinical characteristics, and neonatal outcomes were compared between the infants who received CPAP and HFNC, or HFNC ± CPAP. Multivariable regression analysis was performed to control for the variables that differed in bivariate analysis. Results A total of 2487 infants met the inclusion criteria (941 CPAP group, 333 HFNC group, and 1546 HFNC ± CPAP group). The primary outcome of BPD or death was significantly higher in the HFNC group (56.8%) compared with the CPAP group (50.4%, P < .05). Similarly, the adjusted odds of developing BPD or death were greater in the HFNC ± CPAP group compared with the CPAP group (OR 1.085, 95% CI 1.035–1.137, P = .001). The number of ventilator days, postnatal steroid use, days to room air, days to initiate or reach full oral feeds, and length of hospitalization were significantly higher in the HFNC and HFNC ± CPAP groups compared with the CPAP group. Conclusions In this retrospective study, use of HFNC in extremely low birth weight infants was associated with a higher risk of death or BPD, increased respiratory morbidities, delayed oral feeding, and prolonged hospitalization. A large clinical trial is needed to evaluate the long-term safety and efficacy of HFNC in preterm infants. PMID:27004673

  16. A two-step database search method improves sensitivity in peptide sequence matches for metaproteomics and proteogenomics studies.

    PubMed

    Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J

    2013-04-01

    Large databases (>10^6 sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
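
    In outline, the two-step strategy looks like the following sketch, where exact substring lookup stands in for a real database-search engine and reversed sequences serve as decoys; all sequences and names are toy values, not data from the study.

      # Step 1: a permissive first pass against the full database selects
      # candidate proteins; step 2 searches a target-decoy version of that
      # subset. Substring matching replaces a real MS/MS search engine here.

      large_db = {
          "protA": "MKTLLVAAGQR",
          "protB": "MSDQEFRKLN",
          "protC": "MNPLLQRST",
      }
      spectra_peptides = ["LLVAAG", "QEFRK", "XXXXX"]  # candidate peptides

      # Step 1: permissive search of the full database -> subset of proteins.
      subset = {name: seq for name, seq in large_db.items()
                if any(p in seq for p in spectra_peptides)}

      # Step 2: build a target-decoy version of the subset (decoys = reversed
      # sequences) and search it strictly; FDR is estimated from decoy hits.
      decoy_db = {f"DECOY_{n}": s[::-1] for n, s in subset.items()}
      second_db = {**subset, **decoy_db}

      hits = [(p, n) for p in spectra_peptides
              for n, s in second_db.items() if p in s]
      n_decoy = sum(n.startswith("DECOY_") for _, n in hits)
      print(hits, "estimated FDR ~", n_decoy / max(len(hits), 1))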

  17. Trunk restraint to promote upper extremity recovery in stroke patients: a systematic review and meta-analysis.

    PubMed

    Wee, Seng Kwee; Hughes, Ann-Marie; Warner, Martin; Burridge, Jane H

    2014-09-01

    Many stroke patients exhibit excessive compensatory trunk movements during reaching. Compensatory movement behaviors may improve upper extremity function in the short-term but be detrimental to long-term recovery. To evaluate the evidence that trunk restraint limits compensatory trunk movement and/or promotes better upper extremity recovery in stroke patients. A search was conducted through electronic databases from January 1980 to June 2013. Only randomized controlled trials (RCTs) comparing upper extremity training with and without trunk restraint were selected for review. Three review authors independently assessed the methodological quality and extracted data from the studies. Meta-analysis was conducted when there was sufficient homogenous data. Six RCTs involving 187 chronic stroke patients were identified. Meta-analysis of key outcome measures showed that trunk restraint has a moderate statistically significant effect on improving Fugl-Meyer Upper Extremity (FMA-UE) score, active shoulder flexion, and reduction in trunk displacement during reaching. There was a small, nonsignificant effect of trunk restraint on upper extremity function. Trunk restraint has a moderate effect on reduction of upper extremity impairment in chronic stroke patients, in terms of FMA-UE score, increased shoulder flexion, and reduction in excessive trunk movement during reaching. There is insufficient evidence to demonstrate that trunk restraint improves upper extremity function and reaching trajectory smoothness and straightness in chronic stroke patients. Future research on stroke patients at different phases of recovery and with different levels of upper extremity impairment is recommended. © The Author(s) 2014.

  18. Resource utilization and disability outcome assessment of combat casualties from Operation Iraqi Freedom and Operation Enduring Freedom.

    PubMed

    Masini, Brendan D; Waterman, Scott M; Wenke, Joseph C; Owens, Brett D; Hsu, Joseph R; Ficke, James R

    2009-04-01

    Injuries are common during combat operations. The high costs of extremity injuries both in resource utilization and disability are well known in the civilian sector. We hypothesized that, similarly, combat-related extremity injuries, when compared with other injures from the current conflicts in Iraq and Afghanistan, require the largest percentage of medical resources, account for the greatest number of disabled soldiers, and have greater costs of disability benefits. Descriptive epidemiologic study and cost analysis. The Department of Defense Medical Metrics (M2) database was queried for the hospital admissions and billing data of a previously published cohort of soldiers injured in Iraq and Afghanistan between October 2001 and January 2005 and identified from the Joint Theater Trauma Registry. The US Army Physical Disability Administration database was also queried for Physical Evaluation Board outcomes for these soldiers, allowing calculation of disability benefit cost. Primary body region injured was assigned using billing records that gave a primary diagnosis International Classification of Diseases Ninth Edition code, which was corroborated with Joint Theater Trauma Registry injury mechanisms and descriptions for accuracy. A total of 1333 soldiers had complete admission data and were included from 1566 battle injuries not returned to duty of 3102 total casualties. Extremity-injured patients had the longest average inpatient stay at 10.7 days, accounting for 65% of the $65.3-million total inpatient resource utilization, 64% of the 464 patients found "unfit for duty," and 64% of the $170-million total projected disability benefit costs. Extrapolation of data yields total disability costs for this conflict, approaching $2 billion. Combat-related extremity injuries require the greatest utilization of resources for inpatient treatment in the initial postinjury period, cause the greatest number of disabled soldiers, and have the greatest projected disability benefit costs. This study highlights the need for continued or increased funding and support for military orthopaedic surgeons and extremity trauma research efforts.

  19. Orthographic and Phonological Neighborhood Databases across Multiple Languages.

    PubMed

    Marian, Viorica

    2017-01-01

    The increased globalization of science and technology and the growing number of bilinguals and multilinguals in the world have made research with multiple languages a mainstay for scholars who study human function and especially those who focus on language, cognition, and the brain. Such research can benefit from large-scale databases and online resources that describe and measure lexical, phonological, orthographic, and semantic information. The present paper discusses currently-available resources and underscores the need for tools that enable measurements both within and across multiple languages. A general review of language databases is followed by a targeted introduction to databases of orthographic and phonological neighborhoods. A specific focus on CLEARPOND illustrates how databases can be used to assess and compare neighborhood information across languages, to develop research materials, and to provide insight into broad questions about language. As an example of how using large-scale databases can answer questions about language, a closer look at neighborhood effects on lexical access reveals that not only orthographic, but also phonological neighborhoods can influence visual lexical access both within and across languages. We conclude that capitalizing upon large-scale linguistic databases can advance, refine, and accelerate scientific discoveries about the human linguistic capacity.

  20. The MAJORANA Parts Tracking Database

    NASA Astrophysics Data System (ADS)

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O`Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.

    2015-04-01

    The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
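
    A sketch of what writing one schema-free part record through CouchDB's standard HTTP document API could look like, assuming a local server with a parts database already created; the document fields are invented for illustration and are not the DEMONSTRATOR's actual schema.

      import json
      import urllib.request

      BASE = "http://localhost:5984/parts"   # local CouchDB database (assumed)

      # Schema-free part record: processes and locations are appended over
      # time, which makes a document store a natural fit for part histories.
      part = {
          "_id": "Cu-plate-0042",
          "material": "electroformed copper",
          "history": [
              {"date": "2013-05-01", "process": "machining", "site": "surface shop"},
              {"date": "2013-05-20", "process": "etching", "site": "underground lab"},
          ],
      }

      req = urllib.request.Request(
          BASE + "/" + part["_id"],
          data=json.dumps(part).encode(),
          headers={"Content-Type": "application/json"},
          method="PUT",
      )
      print(urllib.request.urlopen(req).read())

      # Surface exposure for a cosmogenic-activation estimate could then be
      # derived from the dates the document spent at above-ground sites.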

  1. The FP4026 Research Database on the fundamental period of RC infilled frame structures.

    PubMed

    Asteris, Panagiotis G

    2016-12-01

    The fundamental period of vibration appears to be one of the most critical parameters for the seismic design of buildings because it strongly affects the destructive impact of the seismic forces. In this article, important research data (the FP4026 Research Database: Fundamental Period-4026 cases of infilled frames), based on detailed and in-depth analytical research on the fundamental period of reinforced concrete structures, are presented. In particular, the values of the fundamental period which have been analytically determined are presented, taking into account the majority of the involved parameters. This database can be extremely valuable for the development of new code proposals for the estimation of the fundamental period of reinforced concrete structures fully or partially infilled with masonry walls.

  2. Impact of a Single Unusually Large Rainfall Event on the Level of Risk Used for Infrastructure Design

    NASA Astrophysics Data System (ADS)

    Dhakal, N.; Jain, S.

    2013-12-01

    Rare and unusually large events (such as hurricanes and floods) can create unusual and interesting trends in statistics. The Generalized Extreme Value (GEV) distribution is usually used to statistically describe extreme rainfall events. A number of recent studies have shown that the frequency of extreme rainfall events has increased over the last century and, as a result, the parameters of the GEV distribution have changed with time (non-stationarity). But what impact does a single unusually large rainfall event (e.g., Hurricane Irene) have on the GEV parameters and consequently on the level of risk or the return periods used in designing civil infrastructure? In other words, if such a large event occurs today, how will it influence the level of risk (estimated from past rainfall records) for civil infrastructure? To answer these questions, we performed a sensitivity analysis of the GEV distribution parameters and the return periods to unusually large outlier events. Long-term precipitation records over the period 1981-2010 from 12 USHCN stations across the state of Maine were used for the analysis. For most of the stations, the addition of each outlier event caused an increase in the shape parameter with a large decrease in the corresponding return period. This is a key consideration for time-varying engineering design. These isolated extreme weather events should be considered alongside traditional statistical methodology for extreme events when designing civil infrastructure (such as dams, bridges, and culverts). Such analysis is also useful in understanding the statistical uncertainty of projecting extreme events into the future.
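
    The sensitivity experiment is straightforward to reproduce in outline with scipy's GEV implementation, as in the sketch below; the synthetic annual-maximum series and the size of the added outlier are illustrative stand-ins for the Maine USHCN records.

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(42)

      # Synthetic 30-year series of annual maximum daily rainfall (mm);
      # values are illustrative, not the USHCN records used in the study.
      annual_max = rng.gumbel(loc=60, scale=15, size=30)

      def return_level(sample, T=100):
          # scipy's shape parameter c has the opposite sign of the usual
          # hydrological xi; fitting and inversion are self-consistent here.
          c, loc, scale = genextreme.fit(sample)
          return genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

      before = return_level(annual_max)
      after = return_level(np.append(annual_max, 250.0))  # one Irene-like outlier
      print(f"100-yr level: {before:.0f} mm -> {after:.0f} mm after one outlier")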

  3. A New Breed of Database System: Volcano Global Risk Identification and Analysis Project (VOGRIPA)

    NASA Astrophysics Data System (ADS)

    Crosweller, H. S.; Sparks, R. S.; Siebert, L.

    2009-12-01

    VOGRIPA originated as part of the Global Risk Identification Programme (GRIP) that is being co-ordinated from the Earth Institute of Columbia University under the auspices of the United Nations and World Bank. GRIP is a five-year programme aiming at improving global knowledge about risk from natural hazards and is part of the international response to the catastrophic 2004 Asian tsunami. VOGRIPA is also a formal IAVCEI project. The objectives of VOGRIPA are to create a global database of volcanic activity, hazards and vulnerability information that can be analysed to identify locations at high risk from volcanism, gaps in knowledge about hazards and risk, and will allow scientists and disaster managers at specific locations to analyse risk within a global context of systematic information. It is this added scope of risk and vulnerability as well as hazard which sets VOGRIPA apart from most previous databases. The University of Bristol is the central coordinating centre for the project, which is an international partnership including the Smithsonian Institution, the Geological Survey of Japan, the Earth Observatory of Singapore (Chris Newhall), the British Geological Survey, the University of Buffalo (SUNY) and Munich Re. The partnership is intended to grow and any individuals or institutions who are able to contribute resources to VOGRIPA objectives are welcome to participate. Work has already begun (funded principally by Munich Re) on populating a database of large magnitude explosive eruptions reaching back to the Quaternary, with extreme-value statistics being used to evaluate the magnitude-frequency relationship of such events, and also an assessment of how the quality of records affect the results. The following 4 years of funding from the European Research Council for VOGRIPA will be used to establish further international collaborations in order to develop different aspects of the database, with the data being accessible online once it is sufficiently complete and analyses have been carried out. It is anticipated that such a resource would be of use to the scientific community, civil authorities with responsibility for mitigating and managing volcanic hazards, and the public.

  4. The association between preceding drought occurrence and heat waves in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Russo, Ana; Gouveia, Célia M.; Ramos, Alexandre M.; Páscoa, Patricia; Trigo, Ricardo M.

    2017-04-01

    A large number of weather-driven extreme events have occurred worldwide in the last decade, notably in Europe, which has been struck by record-breaking extreme events with unprecedented socio-economic impacts, including the mega-heatwaves of 2003 in Europe and 2010 in Russia and the large droughts in southwestern Europe in 2005 and 2012. The last IPCC report on extreme events points out that a changing climate can lead to changes in the frequency, intensity, spatial extent, duration, and timing of weather and climate extremes. These, combined with larger exposure, can result in unprecedented risk to humans and ecosystems. In this context it is becoming increasingly relevant to improve the early identification and predictability of such events, as they negatively affect several socio-economic activities. Moreover, recent diagnostic and modelling experiments have confirmed that hot extremes are often preceded by surface moisture deficits in some regions throughout the world. In this study we analyse whether the occurrence of hot extreme months is enhanced by preceding drought events throughout the Mediterranean area. For this purpose, the number of hot days in a region's hottest month is associated with a drought indicator. The evolution and characterization of drought were analysed using both the Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI), as obtained from the CRU TS3.23 database for the period 1950-2014. We used both SPI and SPEI for time scales between 3 and 9 months at a spatial resolution of 0.5°. The number of hot days and nights per month (NHD and NHN) was determined using the ECAD-EOBS daily dataset for the same period and spatial resolution (dataset v14). The NHD and NHN were computed, respectively, as the number of days with a maximum or minimum temperature exceeding the 90th percentile. Results show that the most frequent hottest months for the Mediterranean region occur in July and August. Moreover, the correlations between detrended NHD/NHN and the preceding 6- and 9-month SPEI/SPI are usually weaker than for the 3-month time scale. Most regions exhibit significantly negative correlations, i.e. high (low) NHD/NHN following negative (positive) SPEI/SPI values, and thus a potential for NHD/NHN early warning. Finally, the correlations of NHD/NHN with SPI and SPEI differ, with SPEI characterized by slightly higher values, observed mainly for the 3-month time scale. Acknowledgments: This work was partially supported by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project IMDROFLOOD (WaterJPI/0004/2014). Ana Russo thanks FCT for granted support (SFRH/BPD/99757/2014). A. M. Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
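
    The core computation (counting days above the 90th percentile per year and correlating the detrended counts with a preceding drought index) can be sketched on synthetic data as follows; the array shapes and the linear detrending are assumptions made for illustration.

      import numpy as np

      rng = np.random.default_rng(7)

      years, days = 65, 31                  # 1950-2014, one month per year
      tmax = rng.normal(30, 3, size=(years, days))  # synthetic daily Tmax (deg C)
      spei3 = rng.normal(0, 1, size=years)          # synthetic 3-month SPEI

      # Number of hot days (NHD): days above the 90th percentile of the
      # full-period daily distribution, counted per year.
      p90 = np.percentile(tmax, 90)
      nhd = (tmax > p90).sum(axis=1)

      # Detrend NHD, then correlate with the preceding drought index;
      # the study reports mostly negative correlations (dry -> more hot days).
      nhd_detr = nhd - np.polyval(np.polyfit(np.arange(years), nhd, 1),
                                  np.arange(years))
      r = np.corrcoef(nhd_detr, spei3)[0, 1]
      print(f"corr(NHD, SPEI-3) = {r:.2f}")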

  5. Assessing Uncertainty in Deep Learning Techniques that Identify Atmospheric Rivers in Climate Simulations

    NASA Astrophysics Data System (ADS)

    Mahesh, A.; Mudigonda, M.; Kim, S. K.; Kashinath, K.; Kahou, S.; Michalski, V.; Williams, D. N.; Liu, Y.; Prabhat, M.; Loring, B.; O'Brien, T. A.; Collins, W. D.

    2017-12-01

    Atmospheric rivers (ARs) can make the difference between California facing drought or hurricane-level storms. ARs are a form of extreme weather defined as long, narrow columns of moisture which transport water vapor outside the tropics. When they make landfall, they release the vapor as rain or snow. Convolutional neural networks (CNNs), a machine learning technique that uses filters to recognize features, are the leading computer vision mechanism for classifying multichannel images. CNNs have been proven effective in identifying extreme weather events in climate simulation output (Liu et al. 2016, ABDA'16, http://bit.ly/2hlrFNV). Here, we compare different CNN architectures, tuned with different hyperparameters and training schemes. We compare two-layer, three-layer, four-layer, and sixteen-layer CNNs' ability to recognize ARs in Community Atmospheric Model version 5 output, and we explore the ability of data augmentation and pre-trained models to increase the accuracy of the classifier. Because pre-training the model with everyday images (e.g., benches, stoves, and dogs) yielded the highest accuracy rate, this strategy, also known as transfer learning, may be vital in future scientific CNNs, which likely will not have access to a large labelled training dataset. By choosing the most effective CNN architecture, climate scientists can build an accurate historical database of ARs, which can be used to develop a predictive understanding of these phenomena.
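
    As an outline of the kind of shallow architecture compared in the study, the sketch below builds a two-convolutional-layer classifier in PyTorch; the input channel count, patch size, filter sizes, and class count are placeholder assumptions rather than the study's configuration.

      import torch
      import torch.nn as nn

      # A two-convolutional-layer classifier of the general kind compared in
      # the study; the 16 input channels and 128x192 patch size are
      # placeholders for CAM5 variables, not the actual configuration.
      class TwoLayerCNN(nn.Module):
          def __init__(self, in_channels=16, n_classes=2):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(in_channels, 32, kernel_size=5, padding=2),
                  nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, kernel_size=5, padding=2),
                  nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(64 * 32 * 48, n_classes)

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      model = TwoLayerCNN()
      patch = torch.randn(8, 16, 128, 192)   # batch of candidate AR patches
      print(model(patch).shape)              # -> torch.Size([8, 2])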

  6. Large-scale annotation of small-molecule libraries using public databases.

    PubMed

    Zhou, Yingyao; Zhou, Bin; Chen, Kaisheng; Yan, S Frank; King, Frederick J; Jiang, Shumei; Winzeler, Elizabeth A

    2007-01-01

    While many large publicly accessible databases provide excellent annotation for biological macromolecules, the same is not true for small chemical compounds. Commercial data sources also fail to provide an annotation interface for large numbers of compounds and tend to be too cost-prohibitive to be widely available to biomedical researchers. Therefore, using annotation information for the selection of lead compounds from a modern-day high-throughput screening (HTS) campaign presently occurs only on a very limited scale. The recent rapid expansion of the NIH PubChem database provides an opportunity to link existing biological databases with compound catalogs and provides relevant information that could potentially improve the information garnered from large-scale screening efforts. Using the 2.5 million compound collection at the Genomics Institute of the Novartis Research Foundation (GNF) as a model, we determined that approximately 4% of the library contained compounds with potential annotation in such databases as PubChem and the World Drug Index (WDI) as well as related databases such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) and ChemIDplus. Furthermore, exact structure match analysis showed that 32% of GNF compounds can be linked to third-party databases via PubChem. We also showed that annotations such as MeSH (medical subject headings) terms can be applied to in-house HTS databases to identify signature biological inhibition profiles of interest as well as to expedite the assay validation process. The automated annotation of thousands of screening hits in batch is becoming feasible and has the potential to play an essential role in the hit-to-lead decision-making process.
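
    Per compound, a batch annotation pipeline of this kind reduces to lookups like the sketch below against PubChem's public PUG REST interface; a real pipeline would match by exact structure rather than by name and would add batching, error handling, and rate limiting.

      import json
      import urllib.parse
      import urllib.request

      def pubchem_cids(name):
          """Resolve a compound name to PubChem CIDs via PUG REST."""
          url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/"
                 f"compound/name/{urllib.parse.quote(name)}/cids/JSON")
          with urllib.request.urlopen(url) as resp:
              return json.load(resp)["IdentifierList"]["CID"]

      # The paper links in-house structures to CIDs by exact structure match;
      # resolving a name is the simplest stand-in for that step.
      print(pubchem_cids("aspirin"))   # -> [2244]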

  7. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  8. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  9. Overrepresentation of extreme events in decision making reflects rational use of cognitive resources.

    PubMed

    Lieder, Falk; Griffiths, Thomas L; Hsu, Ming

    2018-01-01

    People's decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time, decision makers should overrepresent the most important potential consequences relative to less important but potentially more probable outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people's availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
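
    The core of the model lends itself to a compact illustration: outcomes are simulated with probability proportional to their probability times the extremity of their utility, and the expected utility is then estimated from the simulated samples. The importance-weighted correction below is a standard device assumed for the sketch, not necessarily the authors' exact estimator.

      # Utility-weighted sampling sketch: extreme-utility outcomes are
      # simulated more often, then corrected with importance weights.
      import numpy as np

      rng = np.random.default_rng(0)
      p = np.array([0.94, 0.05, 0.01])         # outcome probabilities
      u = np.array([1.0, -10.0, -200.0])       # outcome utilities

      q = p * np.abs(u)                        # overweight extreme outcomes
      q /= q.sum()

      idx = rng.choice(len(p), size=20, p=q)   # a handful of mental simulations
      w = p[idx] / q[idx]                      # importance weights
      eu_hat = np.sum(w * u[idx]) / np.sum(w)  # self-normalized estimate

      print(q)                # q makes the rare extreme outcome "available"
      print(eu_hat, p @ u)    # estimate vs. true expected utility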

  10. Statistical distributions of extreme dry spell in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Jemain, Abdul Aziz

    2010-11-01

    Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell events are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with recording periods extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of the two distributions are fitted by means of the L-moments method, which provides robust estimates of them. The goodness-of-fit (GOF) between empirical data and theoretical distributions is then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
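
    The L-moments fit for the GEV can be sketched compactly using Hosking's classical approximations; the formulas below are textbook results, but the sketch is illustrative rather than a reproduction of the authors' procedure.

      # GEV fit by the method of L-moments (Hosking's approximations).
      import numpy as np
      from math import gamma, log

      def gev_lmom_fit(data):
          x = np.sort(np.asarray(data))
          n = len(x)
          j = np.arange(1, n + 1)
          # Probability-weighted moments b0, b1, b2.
          b0 = x.mean()
          b1 = np.sum((j - 1) / (n - 1) * x) / n
          b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
          # Sample L-moments and L-skewness.
          l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
          t3 = l3 / l2
          # Hosking's approximation for the shape k (his sign convention).
          c = 2.0 / (3.0 + t3) - log(2) / log(3)
          k = 7.8590 * c + 2.9554 * c ** 2
          alpha = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))  # scale
          xi = l1 - alpha * (1 - gamma(1 + k)) / k             # location
          return xi, alpha, k

      sample = np.random.default_rng(1).gumbel(30, 8, 200)     # fake AE series
      print(gev_lmom_fit(sample))  # k should come out near 0 for Gumbel data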

  11. High performance semantic factoring of giga-scale semantic graph databases.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    al-Saffar, Sinan; Adolf, Bob; Haglin, David

    2010-10-01

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.

  12. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity; allow the study of extreme events and of the influence of fault network properties on seismic patterns and seismic cycles; and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for the mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).

  13. The Muon Conditions Data Management: Database Architecture and Software Infrastructure

    NASA Astrophysics Data System (ADS)

    Verducci, Monica

    2010-04-01

    The management of the Muon Conditions Database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored and their analysis. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags storage needed for debugging of the detector operations and for performing the reconstruction and the analysis. In particular for the early data, knowledge of the detector performance and of the corrections in terms of efficiency and calibration will be extremely important for the correct reconstruction of the events. In this work, an overview of the entire Muon conditions database architecture is given, covering in particular the different sources of the data and the storage model used, including the associated database technology. Particular emphasis is given to the Data Quality chain: the flow of the data, the analysis and the final results are described. In addition, the software interfaces used to access the conditions data are described, in particular within the ATLAS offline reconstruction framework, ATHENA.

  14. Daily precipitation extreme events for the Iberian Peninsula and its association with Atmospheric Rivers

    NASA Astrophysics Data System (ADS)

    Ramos, Alexandre M.; Trigo, Ricardo M.; Liberato, Margarida LR

    2014-05-01

    Extreme precipitation events in the Iberian Peninsula during the extended winter months have major socio-economic impacts such as floods, landslides, extensive property damage and loss of life. These events are usually associated with low pressure systems of Atlantic origin, although some extreme events in the summer/autumn months can be linked to Mediterranean low pressure systems. Quite often these events are evaluated on a case-by-case basis, making use of data from relatively few stations. An objective method for ranking daily precipitation events is presented here, based on the extensive use of the most comprehensive database of daily gridded precipitation available for the Iberian Peninsula (IB02), spanning from 1950 to 2008, with a resolution of 0.2° (approximately 16 x 22 km at latitude 40°N), for a total of 1673 pixels. This database is based on a dense network of rain gauges, combining two national data sets, 'Spain02' for peninsular Spain and the Balearic Islands, and 'PT02' for mainland Portugal, with a total of more than two thousand stations over Spain and four hundred stations over Portugal, all quality-controlled and homogenized. In this objective method, the magnitude of an event is obtained by considering the area affected as well as the intensity at every grid point, taking into account the daily precipitation's normalised departure from climatology. Different precipitation rankings are presented considering the entire Iberian Peninsula, Portugal, and the six largest river basins in the Iberian Peninsula. Atmospheric Rivers (ARs) are the water vapour (WV) core section of the broader warm conveyor belt occurring over the oceans along the warm sector of extra-tropical cyclones. They are usually W-E oriented, steered by pre-frontal low-level jets along the trailing cold front, and subsequently feed the precipitation in extra-tropical cyclones. They are relatively narrow regions of concentrated WV responsible for horizontal transport in the lower atmosphere. It has been shown that more than 90% of the meridional WV transport in the mid-latitudes occurs in ARs, although they cover less than 10% of the area of the globe. The large amount of WV that is transported can lead to heavy precipitation and floods. In this work we use an automated AR detection algorithm for the North Atlantic Ocean Basin to identify the major AR events that affected the Iberian Peninsula, based on the NCEP/NCAR reanalysis. The two databases (extreme precipitation events and ARs) are analysed together in order to study ARs in detail in the North Atlantic Basin and, additionally, their relationship with precipitation-related events in the Iberian Peninsula. Results confirm the significant link between these phenomena: the TOP 20 days of the ranking of precipitation anomalies for the Iberian Peninsula include 19 days that are clearly related to AR events. This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010). A. M. Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
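
    The ranking step can be illustrated schematically: each day's magnitude combines the fraction of the domain affected with the normalized departure from climatology over the affected grid points. The threshold and the combination rule below are illustrative assumptions, not the published definition.

      # Schematic daily-event magnitude: area affected times the mean
      # normalized anomaly over affected grid points (illustrative only).
      import numpy as np

      def event_magnitude(precip_day, clim_mean, clim_std, z_thresh=2.0):
          z = (precip_day - clim_mean) / clim_std   # normalized departure
          affected = z > z_thresh
          if not affected.any():
              return 0.0
          return affected.mean() * z[affected].mean()

      rng = np.random.default_rng(2)
      days = rng.gamma(2.0, 4.0, size=(365, 1673))  # fake grids, 1673 pixels
      mags = [event_magnitude(d, 5.0, 3.0) for d in days]
      top20 = np.argsort(mags)[::-1][:20]           # the ranked extreme days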

  15. [Benefits of large healthcare databases for drug risk research].

    PubMed

    Garbe, Edeltraut; Pigeot, Iris

    2015-08-01

    Large electronic healthcare databases have become an important worldwide data resource for post-approval drug safety research. Signal generation methods and drug safety studies based on these data facilitate the prospective monitoring of drug safety after approval, as has recently been required by EU law and the German Medicines Act. Despite its large size, a single healthcare database may contain too few patients for studies of rarely used drugs or for the investigation of very rare drug risks. For that reason, in the United States, efforts have been made to develop models that link data from different electronic healthcare databases for monitoring the safety of medicines after authorization, in (i) the Sentinel Initiative and (ii) the Observational Medical Outcomes Partnership (OMOP). In July 2014, the pilot project Mini-Sentinel included a total of 178 million people from 18 different US databases. The merging of the data is based on a distributed data network with a common data model. In the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP) there has been no comparable merging of data from different databases; however, first experiences have been gained in various EU drug safety projects. In Germany, the data of the statutory health insurance providers constitute the most important resource for establishing a large healthcare database. Their use for this purpose has so far been severely restricted by the Code of Social Law (Section 75, Book 10). Therefore, a reform of this section is absolutely necessary.

  16. Database of in-situ field measurements for estimates of fuel consumption and fire emissions in Siberia

    NASA Astrophysics Data System (ADS)

    Kukavskaya, Elena; Conard, Susan; Buryak, Ludmila; Ivanova, Galina; Soja, Amber; Kalenskaya, Olga; Zhila, Sergey; Zarubin, Denis; Groisman, Pavel

    2016-04-01

    Wildfires show great variability in the amount of fuel consumed and carbon emitted to the atmosphere. Various types of models are used to calculate global or large-scale regional fire emissions. However, in the databases used to estimate fuel consumption, data for Russia are typically under-represented. Meanwhile, the differences in vegetation and fire regimes between the boreal forests of North America and Eurasia argue strongly for the need for regional ecosystem-specific data. For about 15 years we have been collecting field data on fuel loads and consumption in different ecosystem types of Siberia. We conducted a series of experimental burnings of varying fireline intensity in Scots pine and larch forests of central Siberia to obtain quantitative and qualitative data on fire behavior and carbon emissions. In addition, we examined wildfire behavior and effects in different vegetation types including Scots pine, Siberian pine, fir, birch, poplar, and larch-dominated forests; evergreen coniferous shrubs; grasslands; and peats. We investigated various ecosystem zones of Siberia (central and southern taiga, forest-steppe, steppe, mountains) in different subjects of the Russian Federation (Krasnoyarsk Kray, Republic of Khakassia, Republic of Buryatia, Tuva Republic, Zabaikalsky Kray). To evaluate the impact of forest practices on fire emissions, burned and unburned logged sites and forest plantations were examined. We found large variations in fuel consumption and fire emission rates among different vegetation types, depending on growing conditions, fire behavior characteristics and anthropogenic factors. Changes in the climate system result in an increase in fire frequency, area burned, the number of extreme fires, fire season length, fire season severity, and the number of ignitions from lightning. This leads to an increase in fire-related emissions of carbon to the atmosphere. The field measurement database we compiled is needed to improve the accuracy of existing biomass-burning models and for use by air quality agencies in developing regional strategies to mitigate negative smoke impacts on human health and the environment. The research was supported by the Grant of the President of the Russian Federation MK-4646.2015.5, RFBR grant # 15-04-06567, and the NASA LCLUC Program.

  17. Visualizing the semantic content of large text databases using text maps

    NASA Technical Reports Server (NTRS)

    Combs, Nathan

    1993-01-01

    A methodology for generating text map representations of the semantic content of text databases is presented. Text maps provide a graphical metaphor for conceptualizing and visualizing the contents and data interrelationships of large text databases. Also described is a set of experiments conducted against the TIPSTER corpora of Wall Street Journal articles. These experiments provide an introduction to current work in the representation and visualization of documents by way of their semantic content.

  18. Overweight children: are they at increased risk for severe injury in motor vehicle collisions?

    PubMed

    Zaveri, Pavan P; Morris, Danielle M; Freishtat, Robert J; Brown, Kathleen

    2009-09-01

    Obesity is an epidemic in the United States. The relationship between traumatic injury and obesity in children is not well studied. We hypothesized that overweight children suffer more severe injuries, different distributions of injuries, and improper use of restraints in motor vehicle collisions. We conducted a secondary analysis of the CIREN database of motor vehicle collisions of subjects 2-17 years old. Overweight was defined as a BMI-for-age percentile >85. Significant injury was an Injury Severity Score (ISS) >15 or an Abbreviated Injury Scale (AIS) score greater than one. Further analysis looked at injuries classified as head, trunk, or extremities and the appropriateness of restraints. Odds ratios compared the overweight to lean groups. 335 subjects met inclusion criteria, with 35.5% of cases being overweight. For significant injury, overweight cases had an odds ratio of 1.2 [95% CI: 0.8-1.9]. Analysis by AIS for overall significant injury and for specific body regions also did not show any significant associations. Overweight versus lean subjects had an odds ratio of 1.3 [95% CI: 0.8-2.1] for improper use of restraints. We found no significant relationship between pediatric injury severity, distribution of injuries, or restraint use and being overweight. Limitations of this study were the small sample size in this database and the large number of unrestrained subjects.
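
    For reference, the odds ratios quoted above come from a standard 2x2 table computation; a minimal sketch with a Wald 95% confidence interval follows (the counts are invented, not taken from the study).

      # Odds ratio from a 2x2 table with a Wald 95% CI (invented counts).
      from math import exp, log, sqrt

      def odds_ratio(a, b, c, d):
          """a,b: exposed with/without outcome; c,d: unexposed with/without."""
          or_ = (a * d) / (b * c)
          se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
          ci = (exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se))
          return or_, ci

      print(odds_ratio(40, 79, 62, 154))  # ~1.26 with a CI spanning 1,
                                          # i.e. no significant association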

  19. Functional insights from the distribution and role of homopeptide repeat-containing proteins

    PubMed Central

    Faux, Noel G.; Bottomley, Stephen P.; Lesk, Arthur M.; Irving, James A.; Morrison, John R.; de la Banda, Maria Garcia; Whisstock, James C.

    2005-01-01

    Expansion of “low complexity” repeats of amino acids such as glutamine (poly-Q) is associated with protein misfolding and the development of degenerative diseases such as Huntington's disease. The mechanism by which such regions promote misfolding remains controversial, the function of many repeat-containing proteins (RCPs) remains obscure, and the role (if any) of repeat regions remains to be determined. Here, a Web-accessible database of RCPs is presented. The distribution and evolution of RCPs that contain homopeptide repeat tracts are considered, and the existence of functional patterns is investigated. Generally, it is found that while polyamino acid repeats are extremely rare in prokaryotes, several eukaryote putative homologs of prokaryote RCPs, involved in important housekeeping processes, retain the repetitive region, suggesting an ancient origin for certain repeats. Within eukarya, the most common uninterrupted amino acid repeats are glutamine, asparagine, and alanine. Interestingly, while poly-Q repeats are found in vertebrates and nonvertebrates, poly-N repeats are only common in more primitive nonvertebrate organisms, such as insects and nematodes. We have assigned function to eukaryote RCPs using Online Mendelian Inheritance in Man (OMIM), the Human Reference Protein Database (HRPD), FlyBase, and Wormpep. Prokaryote RCPs were annotated using BLASTp searches and Gene Ontology. These data reveal that the majority of RCPs are involved in processes that require the assembly of large, multiprotein complexes, such as transcription and signaling. PMID:15805494
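
    The detection step behind such a database can be as simple as scanning protein sequences for uninterrupted runs of a single amino acid; a minimal sketch follows (the run-length cutoff is an arbitrary assumption).

      # Find homopeptide repeats: uninterrupted runs of one amino acid.
      import re

      def homopeptide_repeats(seq, min_len=8):
          """Yield (residue, start, run_length) for runs of >= min_len."""
          pattern = r"([ACDEFGHIKLMNPQRSTVWY])\1{%d,}" % (min_len - 1)
          for m in re.finditer(pattern, seq):
              yield m.group(1), m.start(), len(m.group(0))

      toy = "MATLEKLMKAFESLKSF" + "Q" * 23 + "P" * 11 + "QLPQ"  # huntingtin-like
      print(list(homopeptide_repeats(toy)))  # poly-Q run of 23, poly-P run of 11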

  20. Coherent Microwave Scattering Model of Marsh Grass

    NASA Astrophysics Data System (ADS)

    Duan, Xueyang; Jones, Cathleen E.

    2017-12-01

    In this work, we developed an electromagnetic scattering model to analyze radar scattering from tall-grass-covered lands such as wetlands and marshes. The model adopts the generalized iterative extended boundary condition method (GIEBCM) algorithm, previously developed for buried cylindrical media such as vegetation roots, to simulate the scattering from the grass layer. The major challenge of applying GIEBCM to tall grass is the extremely time-consuming iteration among the large number of short subcylinders that build up the grass. To overcome this issue, we extended the GIEBCM to multilevel GIEBCM, or M-GIEBCM, in which we first use GIEBCM to calculate a T matrix (transition matrix) database of "straws" with various lengths, thicknesses, orientations, curvatures, and dielectric properties; we then construct the grass with a group of straws from the database and apply GIEBCM again to calculate the T matrix of the overall grass scene. The grass T matrix is converted to an S matrix (scattering matrix) and combined with the ground S matrix, which is computed using the stabilized extended boundary condition method, to obtain the total scattering. In this article, we demonstrate the capability of the model by simulating scattering from scenes with different grass densities, grass structures, grass water contents, and ground moisture contents. This model will help with radar experiment design and image interpretation for marshland and wetland observations.

  1. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
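
    The fingerprint-and-group idea can be sketched as the generic locality-sensitive-hashing pattern: binarize windowed spectral features into compact fingerprints, then bucket fingerprint bands in hash tables so that near-duplicate windows collide. This toy version is an assumption-laden illustration, not the authors' pipeline.

      # Toy FAST-style pipeline: binary fingerprints from sliding windows,
      # then LSH-like banding so similar fingerprints share buckets.
      import numpy as np
      from collections import defaultdict

      def fingerprints(trace, win=256, step=64, keep=32):
          fps = []
          for s in range(0, len(trace) - win, step):
              spec = np.abs(np.fft.rfft(trace[s:s + win]))
              bits = np.zeros(spec.size, dtype=bool)
              bits[np.argsort(spec)[-keep:]] = True   # strongest bins -> bits
              fps.append((s, bits))
          return fps

      def candidate_groups(fps, bands=8):
          buckets = defaultdict(list)
          for start, bits in fps:
              for b, chunk in enumerate(np.array_split(bits, bands)):
                  buckets[(b, chunk.tobytes())].append(start)  # band hash
          return [v for v in buckets.values() if len(v) > 1]

      rng = np.random.default_rng(3)
      trace = rng.normal(size=20000)
      trace[12032:12288] = trace[3008:3264]     # plant a repeating "event"
      print(candidate_groups(fingerprints(trace))[:5])  # 3008 and 12032 collide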

  2. Pulmonary nodule detection using a cascaded SVM classifier

    NASA Astrophysics Data System (ADS)

    Bergtholdt, Martin; Wiemker, Rafael; Klinder, Tobias

    2016-03-01

    Automatic detection of lung nodules from chest CT has been researched intensively over the last decades, also resulting in several commercial products. However, solutions are adopted only slowly into daily clinical routine, as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases are now becoming available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select, from an extremely large pool of potential candidates, the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria can be applied during this pre-selection. In this way, the chance that a true nodule is falsely rejected as a candidate is reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low-dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
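
    The cascade described above can be sketched with two scikit-learn SVMs: a cheap linear first stage tuned to be extremely permissive, and a more expensive kernel stage applied only to the surviving candidates. Features, class balance, and the pre-selection threshold are placeholders.

      # Two-stage SVM cascade: a permissive linear stage prunes the candidate
      # pool; a kernel stage classifies the survivors. Illustrative only.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import LinearSVC, SVC

      X, y = make_classification(n_samples=5000, n_features=20,
                                 weights=[0.98], random_state=0)  # rare positives

      stage1 = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
      scores = stage1.decision_function(X)
      keep = scores > np.quantile(scores, 0.5)  # very loose criterion: keep half

      stage2 = SVC(kernel="rbf", gamma="scale").fit(X[keep], y[keep])

      pred = np.zeros_like(y)
      pred[keep] = stage2.predict(X[keep])      # early rejects stay negative
      print((pred[y == 1] == 1).mean())         # crude sensitivity check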

  3. Event Recording Data Acquisition System and Experiment Data Management System for Neutron Experiments at MLF, J-PARC

    NASA Astrophysics Data System (ADS)

    Nakatani, T.; Inamura, Y.; Moriyama, K.; Ito, T.; Muto, S.; Otomo, T.

    Neutron scattering can be a powerful probe in the investigation of many phenomena in the materials and life sciences. The Materials and Life Science Experimental Facility (MLF) at the Japan Proton Accelerator Research Complex (J-PARC) is a leading center of experimental neutron science and boasts one of the most intense pulsed neutron sources in the world. The MLF currently has 18 experimental instruments in operation that support a wide variety of users from across a range of research fields. The instruments include optical elements, sample environment apparatus and detector systems that are controlled and monitored electronically throughout an experiment. Signals from these components and those from the neutron source are converted into a digital format by the data acquisition (DAQ) electronics and recorded as time-tagged event data in the DAQ computers using "DAQ-Middleware". Operating in event mode, the DAQ system produces extremely large data files (~GB) under various measurement conditions. Simultaneously, the measurement meta-data indicating each measurement condition is recorded in XML format by the MLF control software framework "IROHA". These measurement event data and meta-data are collected in the MLF common storage and cataloged by the MLF Experimental Database (MLF EXP-DB) based on a commercial XML database. The system provides a web interface for users to manage and remotely analyze experimental data.

  4. Development of Climate Change Adaptation Platform using Spatial Information

    NASA Astrophysics Data System (ADS)

    Lee, J.; Oh, K. Y.; Lee, M. J.; Han, W. J.

    2014-12-01

    Climate change adaptation has attracted growing attention with the recent extreme weather conditions that affect people around the world. More and more countries, including the Republic of Korea, have begun to develop adaptation plans to address these matters of great concern. They have all noted, meanwhile, that integrating climate information across all analysed areas should come first, because climate information is not produced by a single independent source; rather, the various kinds of climate information are interconnected in complicated ways. That is why an integrated climate change adaptation platform should be promoted before a climate change adaptation plan is set up. A large-scale project has therefore been launched and is actively under way. To date, we have reviewed 620 publications and interviewed 51 government organizations. Based on the results of these reviews and interviews, we obtained 2,725 impacts relating to vulnerability assessment information in areas such as Monitoring and Forecasting, Health, Disaster, Agriculture, Forest, Water Management, Ecosystem, Ocean/Fisheries, and Industry/Energy. Of these 2,725 impacts, 995 have so far been entered into a database. The database is organized into the three sub-categories presented by the IPCC: climate exposure, sensitivity, and adaptive capacity. Based on the constructed database, vulnerability assessments were carried out in order to evaluate the climate change capacity of local governments across the country. These assessments were conducted using a web-based vulnerability assessment tool newly developed in this project. The results show that metropolitan areas such as Seoul, Pusan, and Inchon face risks more than twice as high as rural areas. Acknowledgements: The authors appreciate the support that this study has received from "Development of integrated model for climate change impact and vulnerability assessment and strengthening the framework for model implementation", an initiative of the Korea Environmental & Industry Technology Institute.

  5. The new Cloud Dynamics and Radiation Database algorithms for AMSR2 and GMI: exploitation of the GPM observational database for operational applications

    NASA Astrophysics Data System (ADS)

    Cinzia Marra, Anna; Casella, Daniele; Martins Costa do Amaral, Lia; Sanò, Paolo; Dietrich, Stefano; Panegrossi, Giulia

    2017-04-01

    Two new precipitation retrieval algorithms, for the Advanced Microwave Scanning Radiometer 2 (AMSR2) and for the GPM Microwave Imager (GMI), are presented. The algorithms are based on the Cloud Dynamics and Radiation Database (CDRD) Bayesian approach and represent an evolution of the previous version applied to Special Sensor Microwave Imager/Sounder (SSMIS) observations and used operationally within the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF). The main innovation of these new products is the use of an entirely empirical extended database, derived from coincident radar and radiometer observations from the NASA/JAXA Global Precipitation Measurement Core Observatory (GPM-CO) (Dual-frequency Precipitation Radar-DPR and GMI). The other new aspects are: 1) a new rain/no-rain screening approach; 2) the use of Empirical Orthogonal Functions (EOF) and Canonical Correlation Analysis (CCA) both in the screening approach and in the Bayesian algorithm; 3) the use of new meteorological and environmental ancillary variables to categorize the database and mitigate the problem of non-uniqueness of the retrieval solution; and 4) the development and implementation of specific modules to minimize computational time. The CDRD algorithms for AMSR2 and GMI are able to handle the extremely large observational database available from the GPM-CO and provide the rainfall estimate with minimum latency, making them suitable for near-real-time hydrological and operational applications. For CDRD applied to AMSR2, a verification study over Italy using ground-based radar data and over the MSG full-disk area using coincident GPM-CO/AMSR2 observations has been carried out. Results show remarkable AMSR2 capabilities for rainfall rate (RR) retrieval over ocean (for RR > 0.25 mm/h) and good capabilities over vegetated land (for RR > 1 mm/h), while for coastal areas the results are less certain. Comparisons with NASA GPM products and with ground-based radar data show that CDRD for AMSR2 is able to depict very well the areas of high precipitation over all surface types. Similarly, preliminary results of the application of CDRD for GMI are shown and discussed, highlighting the advantage of the availability of high-frequency channels (> 90 GHz) for precipitation retrieval over land and coastal areas.
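
    At its core, a Bayesian database retrieval of this kind weights every database entry by how well its brightness temperatures match the observation and returns the weighted mean rain rate. A minimal numpy sketch under a Gaussian observation-error assumption follows; the numbers and the error model are illustrative, not the CDRD configuration.

      # Minimal Bayesian database retrieval: posterior-weighted mean rain
      # rate under a Gaussian observation-error model (illustrative numbers).
      import numpy as np

      rng = np.random.default_rng(4)
      n, k = 100000, 9                       # database entries, channels
      tb_db = rng.normal(250, 20, (n, k))    # stored brightness temperatures (K)
      rr_db = rng.gamma(0.5, 2.0, n)         # matching rain rates (mm/h)
      obs = tb_db[42] + rng.normal(0, 2, k)  # an "observation" near entry 42

      S_inv = np.eye(k) / 2.0 ** 2           # inverse obs-error covariance
      d = tb_db - obs
      logw = -0.5 * np.einsum("ij,jk,ik->i", d, S_inv, d)
      w = np.exp(logw - logw.max())          # numerically stable weights
      print(np.sum(w * rr_db) / np.sum(w), rr_db[42])  # estimate vs. truth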

  6. Properties of Extreme Precipitation and Their Uncertainties in 3-year GPM Precipitation Radar Data

    NASA Astrophysics Data System (ADS)

    Liu, N.; Liu, C.

    2017-12-01

    Extremely high precipitation rates are often related to flash floods and have devastating impacts on human society and the environment. To better understand these rare events, 3-year Precipitation Features (PFs) are defined by grouping the contiguous areas with nonzero near-surface precipitation derived using the Global Precipitation Measurement (GPM) Ku-band Precipitation Radar (KuPR). The properties of PFs with extreme precipitation rates greater than 20, 50, and 100 mm/hr, such as their geographical distribution, volumetric precipitation contribution, and seasonal and diurnal variations, are examined. In addition to the large seasonal and regional variations, the rare extreme precipitation rates often make a disproportionately large contribution to the local total precipitation. Extreme precipitation rates occur more often over land than over ocean. The challenges in the retrieval of extreme precipitation likely stem from the attenuation correction and from large uncertainties in the Z-R relationships used to convert near-surface radar reflectivity to precipitation rates. These potential uncertainties are examined by using collocated ground-based radar reflectivity and precipitation retrievals.
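
    The Z-R step mentioned above converts reflectivity to rain rate through a power law Z = aR^b; the sketch below uses the classical Marshall-Palmer coefficients (a = 200, b = 1.6), a textbook default rather than the GPM operational values, and shows why small reflectivity errors inflate rate errors at the extreme end.

      # Reflectivity-to-rain-rate via a Z-R power law, Z = a * R**b.
      # Marshall-Palmer coefficients; operational retrievals tune these.
      def rain_rate(dbz, a=200.0, b=1.6):
          z_lin = 10.0 ** (dbz / 10.0)      # dBZ -> linear mm^6/m^3
          return (z_lin / a) ** (1.0 / b)   # mm/h

      for dbz in (20, 35, 50):
          print(dbz, round(rain_rate(dbz), 1))   # ~0.6, ~5.6, ~48.6 mm/h
      # At 50 dBZ a 1-dB error moves the rate by roughly 15%, one reason
      # extreme-rate retrievals carry large uncertainties.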

  7. The EpiSLI Database: A Publicly Available Database on Speech and Language

    ERIC Educational Resources Information Center

    Tomblin, J. Bruce

    2010-01-01

    Purpose: This article describes a database that was created in the process of conducting a large-scale epidemiologic study of specific language impairment (SLI). As such, this database will be referred to as the EpiSLI database. Children with SLI have unexpected and unexplained difficulties learning and using spoken language. Although there is no…

  8. A NON-STEADY-STATE DIAGENETIC MODEL FOR CHANGES IN SEDIMENT BIOGEOCHEMISTRY IN RESPONSE TO SEASONALLY HYPOXIC/ANOXIC CONDITIONS BENEATH THE MISSISSIPPI RIVER PLUME

    EPA Science Inventory

    Although the bottom waters of many freshwater and marine environments are either permanently oxic or anoxic, there is a growing appreciation that in many bodies of water near-bottom conditions seasonally oscillate between these extremes. Although observational databases for these ...

  9. Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies

    ERIC Educational Resources Information Center

    Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.

    2004-01-01

    Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…

  10. Novel Functional Extended Solids at Extreme Conditions

    DTIC Science & Technology

    2013-02-01

    XeF8 polyhedron with a Xe-F distance 2.3 (±.1) Å - well below the metallization pressure of Xe [2,3] and F2 [4]. These findings signify...for explosives modeling and is expected to be incorporated into the explosive database such as Cheetah, etc., and (v) training graduate students and

  11. Bridging the gap between habitat-modeling research and bird conservation with dynamic landscape and population models

    Treesearch

    Frank R., III Thompson

    2009-01-01

    Habitat models are widely used in bird conservation planning to assess current habitat or populations and to evaluate management alternatives. These models include species-habitat matrix or database models, habitat suitability models, and statistical models that predict abundance. While extremely useful, these approaches have some limitations.

  12. Custom controls

    NASA Astrophysics Data System (ADS)

    Butell, Bart

    1996-02-01

    Microsoft's Visual Basic (VB) and Borland's Delphi provide extremely robust programming environments for delivering multimedia solutions for interactive kiosks, games and titles. Their object-oriented use of standard and custom controls enables a user to build extremely powerful applications. A multipurpose, database-enabled programming environment that can provide an event-driven interface functions as a multimedia kernel. This kernel can provide a variety of authoring solutions (e.g. a timeline-based model similar to Macromedia Director or a node authoring model similar to Icon Author). At the heart of the kernel is a set of low-level multimedia components providing object-oriented interfaces for graphics, audio, video and imaging. Data preparation tools (e.g., layout, palette and sprite editors) could be built to manage the media database. The flexible interface of VB allows the construction of an infinite number of user models. The proliferation of these models within a popular, easy-to-use environment will allow the vast developer segment of 'producer' types to bring their ideas to the market. This is the key to building exciting, content-rich multimedia solutions. Microsoft's VB and Borland's Delphi environments, combined with multimedia components, enable these possibilities.

  13. The spatiotemporal changes in precipitation extremes over Canada and their connections to large-scale climate patterns

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Gan, T. Y.; Tan, X.

    2017-12-01

    In the past few decades, there have been more extreme climate events around the world, and Canada has also suffered from numerous extreme precipitation events. In this paper, trend analysis, change point analysis, probability distribution functions, principal component analysis and wavelet analysis were used to investigate the spatial and temporal patterns of extreme precipitation in Canada. Ten extreme precipitation indices were calculated using long-term daily precipitation data from 164 gauging stations. Several large-scale climate patterns such as El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), Pacific-North American (PNA), and North Atlantic Oscillation (NAO) were selected to analyze the relationships between extreme precipitation and climate indices. Convective Available Potential Energy (CAPE), specific humidity, and surface temperature were employed to investigate the potential causes of the trends. The results show statistically significant positive trends for most indices, which indicate increasing extreme precipitation. The majority of indices display more increasing trends along the southern border of Canada, while decreasing trends dominate in the central Canadian Prairies (CP). In addition, strong connections are found between extreme precipitation and the climate indices, and the effects of each climate pattern differ by region. The seasonal CAPE, specific humidity, and temperature are found to be closely related to Canadian extreme precipitation.

  14. Viewpoints: Interactive Exploration of Large Multivariate Earth and Space Science Data Sets

    NASA Astrophysics Data System (ADS)

    Levit, C.; Gazis, P. R.

    2006-05-01

    Analysis and visualization of extremely large and complex data sets may be one of the most significant challenges facing earth and space science investigators in the forthcoming decades. While advances in hardware speed and storage technology have roughly kept up with (indeed, have driven) increases in database size, the same is not true of our abilities to manage the complexity of these data. Current missions, instruments, and simulations produce so much data of such high dimensionality that they outstrip the capabilities of traditional visualization and analysis software. This problem can only be expected to get worse as data volumes increase by orders of magnitude in future missions and in ever-larger supercomputer simulations. For large multivariate data (more than 10^5 samples or records with more than 5 variables per sample), the interactive graphics response of most existing statistical analysis, machine learning, exploratory data analysis, and/or visualization tools such as Torch, MLC++, Matlab, S++/R, and IDL stutters, stalls, or stops working altogether. Fortunately, the graphics processing units (GPUs) built in to all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform application which leverages much of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application is the interactive analysis of large, complex, multivariate data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.

  15. Tomato functional genomics database (TFGD): a comprehensive collection and analysis package for tomato functional genomics

    USDA-ARS?s Scientific Manuscript database

    Tomato Functional Genomics Database (TFGD; http://ted.bti.cornell.edu) provides a comprehensive systems biology resource to store, mine, analyze, visualize and integrate large-scale tomato functional genomics datasets. The database is expanded from the previously described Tomato Expression Database...

  16. Is the extraction by Whatman FTA filter matrix technology and sequencing of large ribosomal subunit D1-D2 region sufficient for identification of clinical fungi?

    PubMed

    Kiraz, Nuri; Oz, Yasemin; Aslan, Huseyin; Erturan, Zayre; Ener, Beyza; Akdagli, Sevtap Arikan; Muslumanoglu, Hamza; Cetinkaya, Zafer

    2015-10-01

    Although conventional identification of pathogenic fungi is based on a combination of tests evaluating their morphological and biochemical characteristics, these tests can fail to identify less common species or to differentiate closely related species. In addition, they are time-consuming, labour-intensive and require experienced personnel. We evaluated the feasibility and sufficiency of DNA extraction by Whatman FTA filter matrix technology and DNA sequencing of the D1-D2 region of the large ribosomal subunit gene for the identification of 21 clinical yeast isolates and 160 clinical mould isolates in our clinical mycology laboratory. The yeast isolates were identified at species level with 100% homology; of the moulds, 102 (63.75%) clinically important isolates were identified at species level and 56 (35%) at genus level against fungal sequences existing in DNA databases, while two (1.25%) isolates could not be identified. Consequently, Whatman FTA filter matrix technology was a useful method for extraction of fungal DNA: extremely rapid, practical and successful. The sequence analysis strategy for the D1-D2 region of the large ribosomal subunit gene was found largely sufficient for genus-level identification of most clinical fungi. However, identification to species level, and especially the discrimination of closely related species, may require additional analysis. © 2015 Blackwell Verlag GmbH.

  17. A Bit-Encoding Based New Data Structure for Time and Memory Efficient Handling of Spike Times in an Electrophysiological Setup.

    PubMed

    Ljungquist, Bengt; Petersson, Per; Johansson, Anders J; Schouenborg, Jens; Garwicz, Martin

    2018-04-01

    Recent neuroscientific and technical developments in brain-machine interfaces have put increasing demands on neuroinformatic databases and data handling software, especially when managing data in real time from large numbers of neurons. Extrapolating these developments, we here set out to construct a scalable software architecture that would enable near-future massive parallel recording, organization and analysis of neurophysiological data on a standard computer. To this end we combined, for the first time in the present context, bit-encoding of spike data with a specific communication format for real-time transfer and storage of neuronal data, synchronized by a common time base across all unit sources. We demonstrate that our architecture can simultaneously handle data from more than one million neurons and provide, in real time (< 25 ms), feedback based on analysis of previously recorded data. In addition to managing recordings from very large numbers of neurons in real time, it also has the capacity to handle the extensive periods of recording time necessary in certain scientific and clinical applications. Furthermore, the proposed bit-encoding has the additional advantage of allowing extremely fast analysis of spatiotemporal spike patterns in a large number of neurons. Thus, we conclude that this architecture is well suited to support current and near-future Brain Machine Interface requirements.
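
    The speed of the bit-encoding comes from packing each neuron's spike train into machine words, after which pattern queries reduce to bitwise operations; a numpy sketch follows (the 1-ms bin width and layout are assumptions, not the published format).

      # Bit-encoding of spike trains: one bit per 1-ms bin, packed into
      # bytes, so coincidence counting reduces to AND plus popcount.
      import numpy as np

      rng = np.random.default_rng(5)
      n_neurons, n_bins = 1000, 25                     # e.g., a 25-ms window
      raster = rng.random((n_neurons, n_bins)) < 0.1   # boolean spike raster

      packed = np.packbits(raster, axis=1)      # 25 bits -> 4 bytes per neuron

      def coincidences(a, b):
          """Number of shared spike bins between two packed trains."""
          return int(np.unpackbits(np.bitwise_and(a, b)).sum())

      print(coincidences(packed[0], packed[1]))
      print(packed.nbytes, "bytes for", n_neurons, "neurons")  # compact storage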

  18. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    PubMed

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the commonly used relational database model has become a compelling task. Other data models may be more effective when dealing with very large amounts of nonconventional data, especially for write and retrieval operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
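
    A minimal sketch of the kind of write path evaluated in such a study, using the DataStax Python driver, is shown below; the keyspace, table, and column names are invented for illustration, and the replication settings are a single-node demo.

      # Sketch: persisting sequence reads in Cassandra (DataStax Python driver).
      # Keyspace, table, and columns are invented; single-node demo settings.
      from cassandra.cluster import Cluster

      session = Cluster(["127.0.0.1"]).connect()

      session.execute("""
          CREATE KEYSPACE IF NOT EXISTS genomics WITH replication =
          {'class': 'SimpleStrategy', 'replication_factor': 1}""")
      session.execute("""
          CREATE TABLE IF NOT EXISTS genomics.reads (
              run_id text,        -- partition key: one sequencing run
              read_no bigint,     -- clustering key: read order
              sequence text,
              quality text,
              PRIMARY KEY (run_id, read_no))""")

      insert = session.prepare(
          "INSERT INTO genomics.reads (run_id, read_no, sequence, quality) "
          "VALUES (?, ?, ?, ?)")
      for i in range(1000):                      # write-heavy benchmark loop
          session.execute(insert, ("run-001", i, "ACGT" * 25, "I" * 100))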

  19. Reporting to Improve Reproducibility and Facilitate Validity Assessment for Healthcare Database Studies V1.0.

    PubMed

    Wang, Shirley V; Schneeweiss, Sebastian; Berger, Marc L; Brown, Jeffrey; de Vries, Frank; Douglas, Ian; Gagne, Joshua J; Gini, Rosa; Klungel, Olaf; Mullins, C Daniel; Nguyen, Michael D; Rassen, Jeremy A; Smeeth, Liam; Sturkenboom, Miriam

    2017-09-01

    Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases. We reviewed key investigator decisions required to operate a sample of macros and software tools designed to create and analyze analytic cohorts from longitudinal streams of healthcare data. A panel of academic, regulatory, and industry experts in healthcare database analytics discussed and added to this list. Evidence generated from large healthcare encounter and reimbursement databases is increasingly being sought by decision-makers. Varied terminology is used around the world for the same concepts. Agreeing on terminology and which parameters from a large catalogue are the most essential to report for replicable research would improve transparency and facilitate assessment of validity. At a minimum, reporting for a database study should provide clarity regarding operational definitions for key temporal anchors and their relation to each other when creating the analytic dataset, accompanied by an attrition table and a design diagram. A substantial improvement in reproducibility, rigor and confidence in real world evidence generated from healthcare databases could be achieved with greater transparency about operational study parameters used to create analytic datasets from longitudinal healthcare databases. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.

  20. The persistence of the large volumes in black holes

    NASA Astrophysics Data System (ADS)

    Ong, Yen Chin

    2015-08-01

    Classically, black holes admit maximal interior volumes that grow asymptotically linearly in time. We show that such volumes remain large when Hawking evaporation is taken into account. Even if a charged black hole approaches the extremal limit during this evolution, its volume continues to grow, although an exactly extremal black hole does not have a "large interior". We clarify this point and discuss the implications of our results for the information loss and firewall paradoxes.

  1. Economic Evaluations of the Health Impacts of Weather-Related Extreme Events: A Scoping Review

    PubMed Central

    Schmitt, Laetitia H. M.; Graham, Hilary M.; White, Piran C. L.

    2016-01-01

    The frequency and severity of extreme events is expected to increase under climate change. There is a need to understand the economic consequences of human exposure to these extreme events, to underpin decisions on risk reduction. We undertook a scoping review of economic evaluations of the adverse health effects from exposure to weather-related extreme events. We searched the PubMed, Embase and Web of Science databases with no restrictions on the type of evaluation. Twenty studies were included, most of which were recently published. Most studies have been undertaken in the U.S. (nine studies) or Asia (seven studies), whereas we found no studies in Africa, Central and Latin America, or the Middle East. Extreme temperatures accounted for more than a third of the pool of studies (seven studies), closely followed by flooding (six studies). No economic study was found on drought. Whilst studies were heterogeneous in terms of objectives and methodology, they clearly indicate that extreme events will become a pressing public health issue with strong welfare and distributional implications. The current body of evidence, however, provides little information to support decisions on the allocation of scarce resources between risk reduction options. In particular, the review highlights a significant lack of research attention to the potential cost-effectiveness of interventions that exploit the capacity of natural ecosystems to reduce our exposure to, or ameliorate the consequences of, extreme events. PMID:27834843

  2. The Era of the Large Databases: Outcomes After Gastroesophageal Surgery According to NSQIP, NIS, and NCDB Databases. Systematic Literature Review.

    PubMed

    Batista Rodríguez, Gabriela; Balla, Andrea; Fernández-Ananín, Sonia; Balagué, Carmen; Targarona, Eduard M

    2018-05-01

    The term big data refers to databases that include large amounts of information used in various areas of knowledge. Currently, there are large databases that allow the evaluation of postoperative evolution, such as the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP), the Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS), and the National Cancer Database (NCDB). The aim of this review was to evaluate the clinical impact of information obtained from these registries regarding gastroesophageal surgery. A systematic review following the Meta-analysis of Observational Studies in Epidemiology guidelines was performed. The search was carried out using the PubMed database, identifying 251 articles. All outcomes related to gastroesophageal surgery were analyzed. A total of 34 articles published between January 2007 and July 2017 were included, for a total of 345 697 patients. Studies were analyzed and divided according to the type of surgery and main theme into (1) esophageal surgery and (2) gastric surgery. The information provided by these databases is an effective way to obtain levels of evidence not obtainable by conventional methods. Furthermore, this information is useful for the external validation of previous studies, to establish benchmarks that allow comparisons between centers, and has a positive impact on the quality of care.

  3. Development and Operation of a Database Machine for Online Access and Update of a Large Database.

    ERIC Educational Resources Information Center

    Rush, James E.

    1980-01-01

    Reviews the development of a fault tolerant database processor system which replaced OCLC's conventional file system. A general introduction to database management systems and the operating environment is followed by a description of the hardware selection, software processes, and system characteristics. (SW)

  4. Hypersonic and Supersonic Flow Roadmaps Using Bibliometrics and Database Tomography.

    ERIC Educational Resources Information Center

    Kostoff, R. N.; Eberhart, Henry J.; Toothman, Darrell Ray

    1999-01-01

    Database Tomography (DT) is a textual database-analysis system consisting of algorithms for extracting multiword phrase frequencies and proximities from a large textual database, to augment interpretative capabilities of the expert human analyst. Describes use of the DT process, supplemented by literature bibliometric analyses, to derive technical…
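
    The phrase-frequency half of DT is easy to illustrate: count multiword phrases (n-grams) across a corpus and surface the most frequent ones. The sketch below omits stopword filtering and the proximity analysis.

      # Multiword phrase (n-gram) frequencies over a tiny corpus -- the
      # phrase-frequency component of Database Tomography, schematically.
      import re
      from collections import Counter

      docs = [
          "hypersonic flow over blunt bodies",
          "supersonic flow and hypersonic flow experiments",
          "blunt bodies in supersonic flow",
      ]

      def ngrams(text, n):
          words = re.findall(r"[a-z]+", text.lower())
          return zip(*(words[i:] for i in range(n)))

      counts = Counter()
      for doc in docs:
          for n in (2, 3):                       # bigrams and trigrams
              counts.update(" ".join(g) for g in ngrams(doc, n))

      print(counts.most_common(5))               # "hypersonic flow" tops the list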

  5. Comparison of the NCI open database with seven large chemical structural databases.

    PubMed

    Voigt, J H; Bienfait, B; Wang, S; Nicklaus, M C

    2001-01-01

    Eight large chemical databases have been analyzed and compared to each other. Central to this comparison is the open National Cancer Institute (NCI) database, consisting of approximately 250 000 structures. The other databases analyzed are the Available Chemicals Directory ("ACD," from MDL, release 1.99, 3D-version); the ChemACX ("ACX," from CamSoft, Version 4.5); the Maybridge Catalog and the Asinex database (both as distributed by CamSoft as part of ChemInfo 4.5); the Sigma-Aldrich Catalog (CD-ROM, 1999 Version); the World Drug Index ("WDI," Derwent, version 1999.03); and the organic part of the Cambridge Crystallographic Database ("CSD," from Cambridge Crystallographic Data Center, 1999 Version 5.18). The database properties analyzed are internal duplication rates; compounds unique to each database; cumulative occurrence of compounds in an increasing number of databases; overlap of identical compounds between two databases; similarity overlap; diversity; and others. The crystallographic database CSD and the WDI show somewhat less overlap with the other databases than those with each other. In particular the collections of commercial compounds and compilations of vendor catalogs have a substantial degree of overlap among each other. Still, no database is completely a subset of any other, and each appears to have its own niche and thus "raison d'être". The NCI database has by far the highest number of compounds that are unique to it. Approximately 200 000 of the NCI structures were not found in any of the other analyzed databases.

  6. A Database of Supercooled Large Droplet Ice Accretions [Supplement]

    NASA Technical Reports Server (NTRS)

    VanZante, Judith Foss

    2007-01-01

    A unique, publicly available database regarding supercooled large droplet (SLD) ice accretions has been developed in NASA Glenn's Icing Research Tunnel. Identical cloud and flight conditions were generated for five different airfoil models. The models chosen represent a variety of aircraft types from the horizontal stabilizer of a large transport aircraft to the wings of regional, business, and general aviation aircraft. In addition to the standard documentation methods of 2D ice shape tracing and imagery, ice mass measurements were also taken. This database will also be used to validate and verify the extension of the ice accretion code, LEWICE, into the SLD realm.

  7. A Database of Supercooled Large Droplet Ice Accretions

    NASA Technical Reports Server (NTRS)

    VanZante, Judith Foss

    2007-01-01

    A unique, publicly available database regarding supercooled large droplet ice accretions has been developed in NASA Glenn's Icing Research Tunnel. Identical cloud and flight conditions were generated for five different airfoil models. The models chosen represent a variety of aircraft types from the horizontal stabilizer of a large transport aircraft to the wings of regional, business, and general aviation aircraft. In addition to the standard documentation methods of 2D ice shape tracing and imagery, ice mass measurements were also taken. This database will also be used to validate and verify the extension of the ice accretion code, LEWICE, into the SLD realm.

  8. A Review of Stellar Abundance Databases and the Hypatia Catalog Database

    NASA Astrophysics Data System (ADS)

    Hinkel, Natalie Rose

    2018-01-01

    The astronomical community is interested in elements from lithium to thorium, from solar twins to peculiarities of stellar evolution, because they give insight into different regimes of star formation and evolution. However, while some trends between elements and other stellar or planetary properties are well known, many other trends are not as obvious and are a point of conflict. For example, stars that host giant planets are found to be consistently enriched in iron, but the same cannot be definitively said for any other element. Therefore, it is time to take advantage of large stellar abundance databases in order to better understand not only the large-scale patterns, but also the more subtle, small-scale trends within the data. In this overview to the special session, I will present a review of large stellar abundance databases that are both currently available (e.g. RAVE, APOGEE) and those that will soon be online (e.g. Gaia-ESO, GALAH). Additionally, I will discuss the Hypatia Catalog Database (www.hypatiacatalog.com), which includes abundances from individual literature sources that observed stars within 150 pc. The Hypatia Catalog currently contains 72 elements as measured within ~6000 stars, with a total of ~240,000 unique abundance determinations. The online database offers a variety of solar normalizations, stellar properties, and planetary properties (where applicable) that can all be viewed through multiple interactive plotting interfaces as well as in a tabular format. By analyzing stellar abundances for large populations of stars and from a variety of different perspectives, a wealth of information can be revealed on both large and small scales.

  9. The Majorana Parts Tracking Database

    DOE PAGES

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; ...

    2015-01-16

    The Majorana Demonstrator is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The Majorana Parts Tracking Database is used to record the history of components used in the construction of the Demonstrator. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. In summary, a web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
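
    A hypothetical sketch of a schema-free part record of the kind described, stored through CouchDB's standard HTTP API. The server URL, database name, and field names are invented for illustration and are not the experiment's actual schema.

      import requests

      # Invented part record; the "history" list mirrors the idea of linking
      # transportation, storage, and processing steps to the part.
      part = {
          "part_id": "CU-PLATE-0042",
          "material": "electroformed copper",
          "history": [
              {"step": "machining", "site": "surface shop", "date": "2013-05-01"},
              {"step": "cleaning", "site": "underground lab", "date": "2013-05-09"},
          ],
          "surface_exposure_days": 8,   # input to a cosmic-ray exposure estimate
      }

      base = "http://localhost:5984"          # assumed local CouchDB instance
      requests.put(f"{base}/parts")           # create the database (no-op if present)
      resp = requests.put(f"{base}/parts/CU-PLATE-0042", json=part)
      print(resp.status_code, resp.json())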

  10. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  11. An effective method to screen sodium-based layered materials for sodium ion batteries

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Zhang, Zihe; Yao, Sai; Chen, An; Zhao, Xudong; Zhou, Zhen

    2018-03-01

    Due to the high cost and limited resources of lithium, sodium-ion batteries are widely investigated for large-scale applications. Typically, insertion-type materials possess better cyclic stability than alloy-type and conversion-type ones. Therefore, in this work, we proposed a facile and effective method to screen sodium-based layered materials in the Materials Project database as potential candidate insertion-type materials for sodium-ion batteries. The obtained Na-based layered materials span 38 space groups, which indicates that the credibility of our screening approach is not affected by the space group. Then, some important indexes of the representative materials, including the average voltage, volume change and sodium-ion mobility, were further studied by means of density functional theory computations. Some materials with extremely low volume changes and Na diffusion barriers are promising candidates for sodium-ion batteries. We believe that our classification algorithm could also be used to search for other alkali- and multivalent-ion-based layered materials, to accelerate the development of battery materials.
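
    A minimal sketch of the final screening step described above: filtering candidates on volume change and Na diffusion barrier. The records and cutoff values are invented for illustration; the study's actual DFT-derived values and criteria are not reproduced here.

      # Hypothetical candidate records with assumed screening thresholds.
      candidates = [
          {"formula": "NaMO2-a", "volume_change_pct": 2.1, "barrier_eV": 0.25},
          {"formula": "NaMO2-b", "volume_change_pct": 9.8, "barrier_eV": 0.55},
          {"formula": "NaXS2-c", "volume_change_pct": 1.4, "barrier_eV": 0.30},
      ]

      MAX_VOLUME_CHANGE = 5.0   # percent, illustrative cutoff
      MAX_BARRIER = 0.40        # eV, illustrative cutoff

      promising = [c for c in candidates
                   if c["volume_change_pct"] <= MAX_VOLUME_CHANGE
                   and c["barrier_eV"] <= MAX_BARRIER]
      for c in promising:
          print(c["formula"])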

  12. Advances in Data Management in Remote Sensing and Climate Modeling

    NASA Astrophysics Data System (ADS)

    Brown, P. G.

    2014-12-01

    Recent commercial interest in "Big Data" information systems has yielded little more than a sense of deja vu among scientists whose work has always required getting their arms around extremely large databases and writing programs to explore and analyze them. On the flip side, there are some commercial DBMS startups building "Big Data" platforms using techniques taken from earth science, astronomy, high energy physics and high performance computing. In this talk, we will introduce one such platform: Paradigm4's SciDB, the first DBMS designed from the ground up to combine the kinds of quality-of-service guarantees made by SQL DBMS platforms (a high-level data model, query languages, extensibility, transactions) with the kinds of functionality familiar to scientific users (arrays as structural building blocks, integrated linear algebra, and client-language interfaces that minimize the learning curve). We will review how SciDB is used to manage and analyze earth science data by several teams of scientific users.

  13. The relevance of flood hazards and impacts in Turkey: What can be learned from different disaster loss databases?

    NASA Astrophysics Data System (ADS)

    Koc, Gamze; Thieken, Annegret H.

    2016-04-01

    Despite technological development, better data and considerable efforts to reduce the impacts of natural hazards over the last two decades, losses inflicted by natural disasters have caused enormous human and economic damage in Turkey. In particular, earthquakes and flooding have caused losses that occasionally amounted to 3 to 4% of the gross national product of Turkey (Genç, 2007). While there is a large body of literature on earthquake hazards and risks in Turkey, comparatively little is known about flood hazards and risks. Therefore, this study is aimed at investigating flood patterns, intensities and impacts, and at providing an overview of the temporal and spatial distribution of flood losses by analysing different databases on disaster losses throughout Turkey. As input for more detailed event analyses, an additional aim is to retrieve the most severe flood events in the period between 1960 and 2014 from the databases. In general, data on disaster impacts are scarce in comparison to other scientific fields in natural hazard research, although the lack of reliable, consistent and comparable data is seen as a major obstacle for effective and long-term loss prevention. Currently, only a few data sets, especially the emergency events database EM-DAT (www.emdat.be), hosted and maintained by the Centre for Research on the Epidemiology of Disasters (CRED) since 1988, are publicly accessible and have become widely used to describe trends in disaster losses. However, loss data are subject to various biases (Gall et al. 2009). Since Turkey has had a distinct national disaster database since 2009, i.e. the Turkey Disaster Data Base (TABB), there is a unique opportunity to investigate flood impacts in Turkey in more detail as well as to identify biases and the underlying reasons for mismatches with EM-DAT. To compare these two databases, the events in both were reclassified using the IRDR peril classification system (IRDR, 2014). Furthermore, literature, news archives and the Global Active Archive of Large Flood Events - Dartmouth Flood Observatory (floodobservatory.colorado.edu) were used to fill loss data gaps in the databases. From 1960 to 2014, EM-DAT reported 35 flood events in Turkey (26.3% of all natural hazard events), which caused 773 fatalities (the second most destructive type of natural hazard after earthquakes) and a total economic damage of US$ 2.2 billion. In contrast, TABB contained 1076 flood events (8.3% of all natural hazard events), in which 795 people died. On this basis, floods are the third most destructive type of natural hazard, after earthquakes and extreme temperatures, for human losses in Turkey. A comparison of the two databases reveals large mismatches in the flood data: the reported number of events, the number of affected people and the economic losses differ dramatically. It is concluded that the main reason for the large differences and contradicting numbers in different natural disaster databases is the lack of standardization in data collection, peril classification and database thresholds (entry criteria). Since loss data collection is gaining more and more attention, e.g. in the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), this study could offer substantial insights for flood risk mitigation and adaptation studies in Turkey.
References: Gall, M., Borden, K., Cutter, S.L. (2009): When do losses count? Six fallacies of loss data from natural hazards. Bulletin of the American Meteorological Society, 90(6), 799-809. Genç, F.S. (2007): Türkiye'de Kentleşme ve Doğal Afet Riskleri ile İlişkisi, TMMOB Afet Sempozyumu. IRDR (2014): IRDR Peril Classification and Hazard Glossary. Report of the Data Group in the Integrated Research on Disaster Risk. Available at: http://www.irdrinternational.org/2014/03/28/irdr-peril-classification-and-hazard-glossary.

  14. Return period curves for extreme 5-min rainfall amounts at the Barcelona urban network

    NASA Astrophysics Data System (ADS)

    Lana, X.; Casas-Castillo, M. C.; Serra, C.; Rodríguez-Solà, R.; Redaño, A.; Burgueño, A.; Martínez, M. D.

    2018-03-01

    Heavy rainfall episodes are relatively common in the conurbation of Barcelona and neighbouring cities (NE Spain), usually due to storms generated by convective phenomena in summer and by eastern and south-eastern advections in autumn. Prevention of local flood episodes and the proper design of urban drainage have to take into account the spread of rainfall intensity instead of a simple evaluation of daily rainfall amounts. The database comes from 5-min rain amounts recorded by tipping buckets in the Barcelona urban network over the years 1994-2009. From these data, extreme 5-min rain amounts are selected by applying the peaks-over-threshold method, with thresholds derived from both the 95th percentile and the mean excess plot. The return period curves are derived from their statistical distribution for every gauge, describing in detail the expected extreme 5-min rain amounts across the urban network. These curves are compared with those derived from annual extreme time series. In this way, areas in Barcelona subject to different levels of flood risk in terms of rainfall intensity are detected. Additionally, global time trends in extreme 5-min rain amounts are quantified for the whole network and found to be statistically insignificant.
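
    A minimal sketch, on synthetic data, of the peaks-over-threshold workflow described above: threshold the series at the 95th percentile (one of the two options mentioned), fit a generalized Pareto distribution to the excesses, and read return levels off the fitted tail. scipy is assumed; the numbers are not the Barcelona records.

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(1)
      rain_5min = rng.gamma(shape=0.4, scale=1.2, size=20000)   # mm, synthetic

      threshold = np.quantile(rain_5min, 0.95)
      excesses = rain_5min[rain_5min > threshold] - threshold
      shape, _, scale = genpareto.fit(excesses, floc=0)          # GPD fit to excesses

      rate = len(excesses) / len(rain_5min)      # exceedance rate per observation
      for T_obs in (10000, 100000):              # return periods, in 5-min observations
          p = 1 - 1 / (T_obs * rate)
          level = threshold + genpareto.ppf(p, shape, loc=0, scale=scale)
          print(f"return level for 1-in-{T_obs} observations: {level:.1f} mm")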

  15. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency

    PubMed Central

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them concerns the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model has become a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
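
    A minimal write-path sketch with the DataStax Python driver, in the spirit of the persistency tests described. The keyspace, table, and columns are hypothetical, not the schema used in the paper, and a running Cassandra node is assumed.

      from cassandra.cluster import Cluster

      cluster = Cluster(["127.0.0.1"])           # assumed local node
      session = cluster.connect()

      session.execute("""
          CREATE KEYSPACE IF NOT EXISTS genomics
          WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
      """)
      session.execute("""
          CREATE TABLE IF NOT EXISTS genomics.reads (
              read_id text PRIMARY KEY, chrom text, pos bigint, sequence text)
      """)

      # Prepared statements amortize parsing cost over many inserts.
      insert = session.prepare(
          "INSERT INTO genomics.reads (read_id, chrom, pos, sequence) VALUES (?, ?, ?, ?)")
      session.execute(insert, ("r0001", "chr1", 123456, "ACGTACGT"))
      cluster.shutdown()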

  16. Tandem anchoring: informational and politeness effects of range offers in social exchange.

    PubMed

    Ames, Daniel R; Mason, Malia F

    2015-02-01

    We examined whether and why range offers (e.g., "I want $7,200 to $7,600 for my car") matter in negotiations. A selective-attention account predicts that motivated and skeptical offer-recipients focus overwhelmingly on the attractive endpoint (i.e., a buyer would hear, in effect, "I want $7,200"). In contrast, we propose a tandem anchoring account, arguing that offer-recipients are often influenced by both endpoints as they judge the offer-maker's reservation price (i.e., bottom line) as well as how polite they believe an extreme (nonaccommodating) counteroffer would be. In 5 studies, featuring scripted negotiation scenarios and live dyadic negotiations, we find that certain range offers yield improved settlement terms for offer-makers without relational costs, whereas others may yield relationship benefits without deal costs. We clarify the types of range offers that evoke these benefits and identify boundaries to their impact, including range width and extremity. In addition, our studies reveal evidence consistent with 2 proposed mechanisms, one involving an informational effect (both endpoints of range offers can be taken as signals of an offer-maker's reservation price) and another involving a politeness effect (range offers can make extreme counteroffers seem less polite). Our results have implications for models of negotiation behavior and outcomes and, more broadly, for the nature of social exchange. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  17. Modelling probabilities of heavy precipitation by regional approaches

    NASA Astrophysics Data System (ADS)

    Gaal, L.; Kysely, J.

    2009-09-01

    Extreme precipitation events are associated with large negative consequences for human society, mainly as they may trigger floods and landslides. The recent series of flash floods in central Europe (affecting several isolated areas) on June 24-28, 2009, is an example; it was the worst event in several decades in the Czech Republic in terms of the number of persons killed and the extent of damage to buildings and infrastructure. Estimates of growth curves and design values (corresponding e.g. to 50-yr and 100-yr return periods) of precipitation amounts, together with their uncertainty, are important in hydrological modelling and other applications. The interest in high quantiles of precipitation distributions is also related to possible climate change effects, as climate model simulations tend to project increased severity of precipitation extremes in a warmer climate. The present study compares, by means of Monte Carlo simulation experiments, several methods for modelling probabilities of precipitation extremes that make use of ‘regional approaches': the estimation of distributions of extremes takes into account data in a ‘region' (‘pooling group'), in which one may assume that the distributions at individual sites are identical apart from a site-specific scaling factor (a condition referred to as ‘regional homogeneity'). In other words, all data in a region, often weighted in some way, are taken into account when estimating the probability distribution of extremes at a given site. The advantage is that sampling variations in the estimates of model parameters and high quantiles are to a large extent reduced compared to a single-site analysis. We focus on the ‘region-of-influence' (ROI) method, which is based on the identification of unique pooling groups (forming the database for the estimation) for each site under study. The similarity of sites is evaluated in terms of a set of site attributes related to the distributions of extremes. The issue of the size of the region is linked with a built-in test on regional homogeneity of data. Once a pooling group is delineated, weights based on a dissimilarity measure are assigned to the individual sites involved in the pooling group, and all (weighted) data are employed in the estimation of model parameters and high quantiles at a given location. The ROI method is compared with the Hosking-Wallis (HW) regional frequency analysis, which is based on delineating fixed regions (instead of flexible pooling groups) and assigning unit weights to all sites in a region. The comparison of the performance of the individual regional models makes use of data on annual maxima of 1-day precipitation amounts at 209 stations covering the Czech Republic, with altitudes ranging from 150 to 1490 m a.s.l. We conclude that the ROI methodology is superior to the HW analysis, particularly for very high quantiles (100-yr return values). Another advantage of the ROI approach is that subjective decisions, unavoidable when fixed regions in the HW analysis are formed, may efficiently be suppressed, and almost all settings of the ROI method may be justified by the results of the simulation experiments. The differences between (any) regional method and single-site analysis are very pronounced and suggest that the at-site estimation is highly unreliable. The ROI method is then applied to estimate high quantiles of precipitation amounts at individual sites. The estimates and their uncertainty are compared with those from a single-site analysis.
We focus on the eastern part of the Czech Republic, i.e. an area with complex orography and a particularly pronounced role of Mediterranean cyclones in producing precipitation extremes. The design values are compared with precipitation amounts recorded during the recent heavy precipitation events, including the one associated with the flash flood on June 24, 2009. We also show that the ROI methodology may easily be transferred to the analysis of precipitation extremes in climate model outputs. It efficiently reduces (random) variations in the estimates of parameters of the extreme value distributions in individual gridboxes that result from large spatial variability of heavy precipitation, and represents a straightforward tool for ‘weighting’ data from neighbouring gridboxes within the estimation procedure. The study is supported by the Grant Agency of AS CR under project B300420801.
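
    For readers unfamiliar with the index-flood idea shared by the ROI and HW approaches, here is a toy sketch: annual maxima at each site are rescaled by a site-specific factor (the site mean), the rescaled data are pooled, a common GEV growth curve is fitted, and quantiles at a target site are recovered by rescaling back. Synthetic data; uniform pooling stands in for the ROI dissimilarity weighting.

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(0)
      # Three sites whose distributions differ only by a scaling factor f.
      sites = [genextreme.rvs(-0.1, loc=50 * f, scale=12 * f, size=40,
                              random_state=rng)
               for f in (0.8, 1.0, 1.3)]

      pooled = np.concatenate([x / x.mean() for x in sites])  # regional homogeneity
      c, loc, scale = genextreme.fit(pooled)                  # common growth curve

      target_mean = sites[0].mean()
      for T in (50, 100):
          growth = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
          print(f"{T}-yr estimate at target site: {growth * target_mean:.1f}")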

  18. Brief Report: The Negev Hospital-University-Based (HUB) Autism Database

    ERIC Educational Resources Information Center

    Meiri, Gal; Dinstein, Ilan; Michaelowski, Analya; Flusser, Hagit; Ilan, Michal; Faroy, Michal; Bar-Sinai, Asif; Manelis, Liora; Stolowicz, Dana; Yosef, Lili Lea; Davidovitch, Nadav; Golan, Hava; Arbelle, Shosh; Menashe, Idan

    2017-01-01

    Elucidating the heterogeneous etiologies of autism will require investment in comprehensive longitudinal data acquisition from large community based cohorts. With this in mind, we have established a hospital-university-based (HUB) database of autism which incorporates prospective and retrospective data from a large and ethnically diverse…

  19. Improving Decisions with Data

    ERIC Educational Resources Information Center

    Johnson, Doug

    2004-01-01

    Schools gather, store and use an increasingly large amount of data. Keeping track of everything from bus routes to building access codes to test scores to sports equipment is done with the help of electronic database programs. Large databases designed for budgeting and student record keeping have long been an integral part of the educational…

  20. Springtime extreme moisture transport into the Arctic and its impact on sea ice concentration

    NASA Astrophysics Data System (ADS)

    Yang, Wenchang; Magnusdottir, Gudrun

    2017-05-01

    Recent studies suggest that springtime moisture transport into the Arctic can initiate sea ice melt that extends to a large area in the following summer and fall, which can help explain Arctic sea ice interannual variability. Yet the impact from an individual moisture transport event, especially the extreme ones, is unclear on synoptic to intraseasonal time scales and this is the focus of the current study. Springtime extreme moisture transport into the Arctic from a daily data set is found to be dominant over Atlantic longitudes. Lag composite analysis shows that these extreme events are accompanied by a substantial sea ice concentration reduction over the Greenland-Barents-Kara Seas that lasts around a week. Surface air temperature also becomes anomalously high over these seas and cold to the west of Greenland as well as over the interior Eurasian continent. The blocking weather regime over the North Atlantic is mainly responsible for the extreme moisture transport, occupying more than 60% of the total extreme days, while the negative North Atlantic Oscillation regime is hardly observed at all during the extreme transport days. These extreme moisture transport events appear to be preceded by eastward propagating large-scale tropical convective forcing by as long as 2 weeks but with great uncertainty due to lack of statistical significance.

  1. Teaching Advanced SQL Skills: Text Bulk Loading

    ERIC Educational Resources Information Center

    Olsen, David; Hauser, Karina

    2007-01-01

    Studies show that advanced database skills are important for students to be prepared for today's highly competitive job market. A common task for database administrators is to insert a large amount of data into a database. This paper illustrates how an up-to-date, advanced database topic, namely bulk insert, can be incorporated into a database…
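
    An illustrative bulk load in the spirit of the exercise described; the paper targets a commercial DBMS's server-side bulk-insert command, so SQLite's executemany is used here only as a portable stand-in, with invented data.

      import csv, io, sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE scores (student_id TEXT, score REAL)")

      text_data = io.StringIO("s1,91.5\ns2,78.0\ns3,85.25\n")   # stand-in for a text file
      rows = [(sid, float(s)) for sid, s in csv.reader(text_data)]
      conn.executemany("INSERT INTO scores VALUES (?, ?)", rows)  # one batched round trip
      conn.commit()
      print(conn.execute("SELECT COUNT(*) FROM scores").fetchone()[0], "rows loaded")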

  2. Sagace: A web-based search engine for biomedical databases in Japan

    PubMed Central

    2012-01-01

    Background: In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the number of life-science databases is now too large for researchers to grasp the features and contents of each one. Findings: We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions: Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816

  3. A high performance, ad-hoc, fuzzy query processing system for relational databases

    NASA Technical Reports Server (NTRS)

    Mansfield, William H., Jr.; Fleischman, Robert M.

    1992-01-01

    Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
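
    A minimal sketch of ad-hoc fuzzy predicate evaluation over an exhaustive scan, the style of query the Datacycle filtering approach supports. The membership function and records are invented for illustration; the prototype's actual filtering hardware and query language are not modeled here.

      def near(value, target, tolerance):
          """Triangular membership: 1 at target, falling to 0 at +/- tolerance."""
          return max(0.0, 1.0 - abs(value - target) / tolerance)

      cars = [{"model": "A", "price": 7100}, {"model": "B", "price": 7900},
              {"model": "C", "price": 9500}]

      # Fuzzy query: price "near 7500"; rank all records by membership degree.
      results = sorted(((near(c["price"], 7500, 1500), c) for c in cars),
                       key=lambda t: t[0], reverse=True)
      for mu, car in results:
          if mu > 0.2:                         # ad-hoc cutoff on membership
              print(f"{car['model']}: membership {mu:.2f}")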

  4. Optical phased array configuration for an extremely large telescope.

    PubMed

    Meinel, Aden Baker; Meinel, Marjorie Pettit

    2004-01-20

    Extremely large telescopes are currently under consideration by several groups in several countries. Extrapolation of current technology up to 30 m indicates a cost of over $1 billion. Innovative concepts are being explored to find significant cost reductions. We explore the concept of an Optical Phased Array (OPA) telescope. Each element of the OPA is a separate Cassegrain telescope. Collimated beams from the array are sent via an associated set of delay lines to a central beam combiner. This array of small telescope elements offers the possibility of starting with a low-cost array of a few rings of elements, adding structure and additional Cassegrain elements until the desired telescope diameter is attained. We address the salient features of such an extremely large telescope and its cost elements relative to more conventional options.

  5. Integration of NASA/GSFC and USGS Rock Magnetic Databases.

    NASA Astrophysics Data System (ADS)

    Nazarova, K. A.; Glen, J. M.

    2004-05-01

    A global Magnetic Petrology Database (MPDB) was developed and continues to be updated at NASA/Goddard Space Flight Center. The purpose of this database is to provide the geomagnetic community with a comprehensive and user-friendly method of accessing magnetic petrology data via the Internet for a more realistic interpretation of satellite (as well as aeromagnetic and ground) lithospheric magnetic anomalies. The MPDB contains data on rocks from localities around the world (about 19,000 samples), including the Ukrainian and Baltic Shields, Kamchatka, Iceland, the Ural Mountains, etc. The MPDB is designed, managed and presented on the web as a research-oriented database. Several database applications have been specifically developed for data manipulation and analysis of the MPDB. The geophysics unit at the USGS in Menlo Park has over 17,000 rock-property records, largely from sites within the western U.S. This database contains rock-density and rock-magnetic parameters collected for use in gravity and magnetic field modeling, and in paleomagnetic studies. Most of these data were taken from surface outcrops, and together they span a broad range of rock types. Measurements were made either in-situ at the outcrop or in the laboratory on hand samples and paleomagnetic cores acquired in the field. The USGS and NASA/GSFC data will be integrated as part of an effort to provide public access to a single, uniformly maintained database. Due to the large amount of data and the very large area sampled, the database can yield rock-property statistics on a broad range of rock types; it is thus applicable to study areas beyond the geographic scope of the database. The intent of this effort is to provide an incentive for others to further contribute to the database, and a tool with which the geophysical community can entertain studies formerly precluded.

  6. The effect of ankle bracing on lower extremity biomechanics during landing: A systematic review.

    PubMed

    Mason-Mackay, A R; Whatman, C; Reid, D

    2016-07-01

    To examine the evidence for the effect of ankle bracing on lower-extremity landing biomechanics. Literature review. Systematic search of the literature on EBSCO health databases; articles were critiqued by two reviewers. Ten studies were identified which investigated the effect of ankle bracing on landing biomechanics. Overall, results suggest that landing biomechanics are altered with some brace types, but studies disagree as to the particular variables affected. There is evidence that ankle bracing may alter lower-extremity landing biomechanics in a manner which predisposes athletes to injury. The focus of studies on specific biomechanical variables rather than biomechanical patterns, the analysis of pooled data means in the presence of differing landing styles between participants, the variation in landing tasks investigated in different studies, and the lack of studies investigating goal-directed sport-specific landing tasks create difficulty in interpreting results. These areas require further research. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  7. Flood protection diversification to reduce probabilities of extreme losses.

    PubMed

    Zhou, Qian; Lambert, James H; Karvetski, Christopher W; Keisler, Jeffrey M; Linkov, Igor

    2012-11-01

    Recent catastrophic losses from floods require the development of resilient approaches to flood risk protection. This article assesses how diversification of a system of coastal protections might decrease the probabilities of extreme flood losses. The study compares the performance of portfolios, each consisting of four types of flood protection assets, in a large region of dike rings. A parametric analysis suggests conditions in which diversification of the types of included flood protection assets decreases extreme flood losses. Increased return periods of extreme losses are associated with portfolios where the asset types have low correlations of economic risk. The effort highlights the importance of understanding correlations across asset types in planning for large-scale flood protection. It allows explicit integration of climate change scenarios in developing a flood mitigation strategy. © 2012 Society for Risk Analysis.
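
    A Monte Carlo sketch of the diversification effect described: when failures of the protection assets are weakly correlated, the probability that most of them fail in the same year (an extreme-loss year) drops sharply. All parameters are illustrative and are not calibrated to the study.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(42)

      def extreme_loss_prob(correlation, n_assets=4, p_fail=0.05,
                            k_extreme=3, n_years=200000):
          # Gaussian-copula failure indicators with a common pairwise correlation.
          cov = np.full((n_assets, n_assets), correlation)
          np.fill_diagonal(cov, 1.0)
          z = rng.multivariate_normal(np.zeros(n_assets), cov, size=n_years)
          fails = z < norm.ppf(p_fail)          # marginal failure prob = p_fail
          return (fails.sum(axis=1) >= k_extreme).mean()

      for rho in (0.9, 0.1):
          print(f"failure correlation {rho}: "
                f"P(>=3 of 4 assets fail) ~ {extreme_loss_prob(rho):.5f}")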

  8. BioMart: a data federation framework for large collaborative projects.

    PubMed

    Zhang, Junjun; Haider, Syed; Baran, Joachim; Cros, Anthony; Guberman, Jonathan M; Hsu, Jack; Liang, Yong; Yao, Long; Kasprzyk, Arek

    2011-01-01

    BioMart is a freely available, open source, federated database system that provides a unified access to disparate, geographically distributed data sources. It is designed to be data agnostic and platform independent, such that existing databases can easily be incorporated into the BioMart framework. BioMart allows databases hosted on different servers to be presented seamlessly to users, facilitating collaborative projects between different research groups. BioMart contains several levels of query optimization to efficiently manage large data sets and offers a diverse selection of graphical user interfaces and application programming interfaces to ensure that queries can be performed in whatever manner is most convenient for the user. The software has now been adopted by a large number of different biological databases spanning a wide range of data types and providing a rich source of annotation available to bioinformaticians and biologists alike.

  9. A Test-Length Correction to the Estimation of Extreme Proficiency Levels

    ERIC Educational Resources Information Center

    Magis, David; Beland, Sebastien; Raiche, Gilles

    2011-01-01

    In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…

  10. Heterogeneous Sensitivity of Tropical Precipitation Extremes during Growth and Mature Phases of Atmospheric Warming

    NASA Astrophysics Data System (ADS)

    Parhi, P.; Giannini, A.; Lall, U.; Gentine, P.

    2016-12-01

    Assessing and managing risks posed by climate variability and change is challenging in the tropics, from both a socio-economic and a scientific perspective. Most of the vulnerable countries with a limited climate adaptation capability are in the tropics. However, climate projections, particularly of extreme precipitation, are highly uncertain there. The CMIP5 (Coupled Model Intercomparison Project - Phase 5) inter-model range of extreme precipitation sensitivity to global temperature under climate change is much larger in the tropics than in the extra-tropics, ranging from nearly 0% to greater than 30% across models (O'Gorman 2012). The uncertainty is also large in historical gauge- or satellite-based observational records. These large uncertainties in the sensitivity of tropical precipitation extremes highlight the need to better understand how tropical precipitation extremes respond to warming. We hypothesize that one of the factors explaining the large uncertainty is differing sensitivities during different phases of warming. We consider the 'growth' and 'mature' phases of warming under the climate variability case, typically associated with an El Niño event. In the remote tropics (away from the tropical Pacific Ocean), the response of precipitation extremes during the two phases can occur through different pathways: i) a direct and fast-changing radiative forcing in an atmospheric column, acting top-down due to tropospheric warming, and/or ii) an indirect effect via changes in surface temperatures, acting bottom-up through surface water and energy fluxes. We also speculate that the insights gained here might be useful in interpreting the large sensitivity under climate change scenarios, since the physical mechanisms during the two warming phases in the climate variability case have some correspondence with increasing and stabilized greenhouse gas emission scenarios.

  11. dEMBF: A Comprehensive Database of Enzymes of Microalgal Biofuel Feedstock.

    PubMed

    Misra, Namrata; Panda, Prasanna Kumar; Parida, Bikram Kumar; Mishra, Barada Kanta

    2016-01-01

    Microalgae have attracted wide attention as one of the most versatile renewable feedstocks for production of biofuel. To develop genetically engineered high lipid yielding algal strains, a thorough understanding of the lipid biosynthetic pathway and the underpinning enzymes is essential. In this work, we have systematically mined the genomes of fifteen diverse algal species belonging to Chlorophyta, Heterokontophyta, Rhodophyta, and Haptophyta, to identify and annotate the putative enzymes of lipid metabolic pathway. Consequently, we have also developed a database, dEMBF (Database of Enzymes of Microalgal Biofuel Feedstock), which catalogues the complete list of identified enzymes along with their computed annotation details including length, hydrophobicity, amino acid composition, subcellular location, gene ontology, KEGG pathway, orthologous group, Pfam domain, intron-exon organization, transmembrane topology, and secondary/tertiary structural data. Furthermore, to facilitate functional and evolutionary study of these enzymes, a collection of built-in applications for BLAST search, motif identification, sequence and phylogenetic analysis have been seamlessly integrated into the database. dEMBF is the first database that brings together all enzymes responsible for lipid synthesis from available algal genomes, and provides an integrative platform for enzyme inquiry and analysis. This database will be extremely useful for algal biofuel research. It can be accessed at http://bbprof.immt.res.in/embf.

  12. dEMBF: A Comprehensive Database of Enzymes of Microalgal Biofuel Feedstock

    PubMed Central

    Misra, Namrata; Panda, Prasanna Kumar; Parida, Bikram Kumar; Mishra, Barada Kanta

    2016-01-01

    Microalgae have attracted wide attention as one of the most versatile renewable feedstocks for production of biofuel. To develop genetically engineered high lipid yielding algal strains, a thorough understanding of the lipid biosynthetic pathway and the underpinning enzymes is essential. In this work, we have systematically mined the genomes of fifteen diverse algal species belonging to Chlorophyta, Heterokontophyta, Rhodophyta, and Haptophyta, to identify and annotate the putative enzymes of lipid metabolic pathway. Consequently, we have also developed a database, dEMBF (Database of Enzymes of Microalgal Biofuel Feedstock), which catalogues the complete list of identified enzymes along with their computed annotation details including length, hydrophobicity, amino acid composition, subcellular location, gene ontology, KEGG pathway, orthologous group, Pfam domain, intron-exon organization, transmembrane topology, and secondary/tertiary structural data. Furthermore, to facilitate functional and evolutionary study of these enzymes, a collection of built-in applications for BLAST search, motif identification, sequence and phylogenetic analysis have been seamlessly integrated into the database. dEMBF is the first database that brings together all enzymes responsible for lipid synthesis from available algal genomes, and provides an integrative platform for enzyme inquiry and analysis. This database will be extremely useful for algal biofuel research. It can be accessed at http://bbprof.immt.res.in/embf. PMID:26727469

  13. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    NASA Astrophysics Data System (ADS)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called "Just-In-Time (JIT) modeling". To apply JIT modeling online to a large database, "Large-scale database-based Online Modeling (LOM)" has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both "stepwise selection" and quantization. In order to predict the long-term state of a plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, LOM and ESP-LOM are introduced.
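
    A minimal sketch of the just-in-time idea underlying LOM: on each query, retrieve the nearest stored samples and fit a small local model, instead of maintaining one global model. LOM's stepwise selection and quantized retrieval are replaced here by a plain brute-force neighbour search for clarity; the data are synthetic.

      import numpy as np

      def jit_predict(X, y, x_query, k=20):
          # 1. retrieve the k neighbours of the query point
          idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
          # 2. fit a local linear model on the neighbours (least squares)
          A = np.hstack([X[idx], np.ones((k, 1))])
          coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
          # 3. predict at the query point
          return np.append(x_query, 1.0) @ coef

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(5000, 2))                 # stored plant data
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 5000)
      print(jit_predict(X, y, np.array([1.0, -0.5])))        # ~ sin(1) - 0.25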

  14. HadISD: a quality-controlled global synoptic report database for selected variables at long-term stations from 1973-2011

    NASA Astrophysics Data System (ADS)

    Dunn, R. J. H.; Willett, K. M.; Thorne, P. W.; Woolley, E. V.; Durre, I.; Dai, A.; Parker, D. E.; Vose, R. S.

    2012-10-01

    This paper describes the creation of HadISD: an automatically quality-controlled synoptic resolution dataset of temperature, dewpoint temperature, sea-level pressure, wind speed, wind direction and cloud cover from global weather stations for 1973-2011. The full dataset consists of over 6000 stations, with 3427 long-term stations deemed to have sufficient sampling and quality for climate applications requiring sub-daily resolution. As with other surface datasets, coverage is heavily skewed towards Northern Hemisphere mid-latitudes. The dataset is constructed from a large pre-existing ASCII flatfile data bank that represents over a decade of substantial effort at data retrieval, reformatting and provision. These raw data have had varying levels of quality control applied to them by individual data providers. The work proceeded in several steps: merging stations with multiple reporting identifiers; reformatting to netCDF; quality control; and then filtering to form a final dataset. Particular attention has been paid to maintaining true extreme values where possible within an automated, objective process. Detailed validation has been performed on a subset of global stations and also on UK data using known extreme events to help finalise the QC tests. Further validation was performed on a selection of extreme events world-wide (Hurricane Katrina in 2005, the cold snap in Alaska in 1989 and heat waves in SE Australia in 2009). Some very initial analyses are performed to illustrate some of the types of problems to which the final data could be applied. Although the filtering has removed the poorest station records, no attempt has been made to homogenise the data thus far, due to the complexity of retaining the true distribution of high-resolution data when applying adjustments. Hence non-climatic, time-varying errors may still exist in many of the individual station records and care is needed in inferring long-term trends from these data. This dataset will allow the study of high frequency variations of temperature, pressure and humidity on a global basis over the last four decades. Both individual extremes and the overall population of extreme events could be investigated in detail to allow for comparison with past and projected climate. A version-control system has been constructed for this dataset to allow for the clear documentation of any updates and corrections in the future.

  15. Mining the Galaxy Zoo Database: Machine Learning Applications

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.; Wallin, J.; Vedachalam, A.; Baehr, S.; Lintott, C.; Darg, D.; Smith, A.; Fortson, L.

    2010-01-01

    The new Zooniverse initiative is addressing the data flood in the sciences through a transformative partnership between professional scientists, volunteer citizen scientists, and machines. As part of this project, we are exploring the application of machine learning techniques to data mining problems associated with the large and growing database of volunteer science results gathered by the Galaxy Zoo citizen science project. We will describe the basic challenge, some machine learning approaches, and early results. One of the motivators for this study is the acquisition (through the Galaxy Zoo results database) of approximately 100 million classification labels for roughly one million galaxies, yielding a tremendously large and rich set of training examples for improving automated galaxy morphological classification algorithms. In our first case study, the goal is to learn which morphological and photometric features in the Sloan Digital Sky Survey (SDSS) database correlate most strongly with user-selected galaxy morphological class. As a corollary to this study, we are also aiming to identify which galaxy parameters in the SDSS database correspond to galaxies that have been the most difficult to classify (based upon large dispersion in their volunteer-provided classifications). Our second case study will focus on similar data mining analyses and machine learning algorithms applied to the Galaxy Zoo catalog of merging and interacting galaxies. The outcomes of this project will have applications in future large sky surveys, such as the LSST (Large Synoptic Survey Telescope) project, which will generate a catalog of 20 billion galaxies and will produce an additional astronomical alert database of approximately 100 thousand events each night for 10 years -- the capabilities and algorithms that we are exploring will assist in the rapid characterization and classification of such massive data streams. This research has been supported in part through NSF award #0941610.
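
    A sketch of the first case study's setup: learn which catalog features predict the volunteer-assigned morphological class. The feature names and synthetic data are placeholders, not the SDSS columns or Galaxy Zoo labels actually used.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      n = 5000
      features = np.column_stack([
          rng.normal(2.5, 1.0, n),      # e.g. a concentration index (assumed)
          rng.normal(0.5, 0.2, n),      # e.g. a g-r colour (assumed)
          rng.normal(0.0, 1.0, n),      # e.g. an axis ratio (assumed)
      ])
      # Toy rule: concentrated, red galaxies tend to be "elliptical" (label 1).
      labels = ((features[:, 0] > 2.5) & (features[:, 1] > 0.5)).astype(int)

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      clf.fit(features, labels)
      print("feature importances:", clf.feature_importances_)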

  16. Effect of Footwear on Joint Pain and Function in Older Adults With Lower Extremity Osteoarthritis.

    PubMed

    Wagner, Amy; Luna, Sarah

    Lower extremity osteoarthritis (OA) is a common condition among older adults; given the risks of surgical and pharmaceutical interventions, conservative, lower-cost management options such as footwear warrant further investigation. This systematic review investigated the effects of footwear, including shoe inserts, in reducing lower extremity joint pain and improving gait, mobility, and quality of life in older adults with OA. The CINAHL, SPORTDiscus, PubMed, RECAL, and Web of Knowledge databases were searched for publications from January 1990 to September 2014, using the terms "footwear," "shoes," "gait," "pain," and "older adult." The results were narrowed to participants who were 50 years or older and who had OA in at least one lower extremity joint. Outcomes of interest included measures of pain, comfort, function, gait, or quality of life. Exclusion criteria applied to participants with rheumatoid arthritis, amputation, diabetes, multiple sclerosis, use of modified footwear or custom orthotics, purely biomechanical studies, and outcomes of balance or falls only. Single-case studies, qualitative narrative descriptions, and expert opinions were also excluded. The initial search resulted in a total of 417 citations. Eleven articles met the inclusion criteria. Two randomized controlled trials and 3 quasiexperimental studies reported that lateral wedge insoles may have at least some pain-relieving effects and improved functional mobility in older adults at 4 weeks to 2 years' follow-up, particularly when used with subtalar and ankle strapping. Three randomized controlled trials with large sample sizes reported that lateral wedges provided no knee pain relief compared with flat insoles. Hardness of shoe soles did not significantly affect joint comfort in the foot in a quasiexperimental study. A quasiexperimental study investigating shock-absorbing insoles showed a reduction in knee joint pain after 1 month of wear. Finally, a cross-sectional prognostic study indicated that poor footwear at early ages is associated with hindfoot pain later in life. Because of the limited number of randomized controlled trials, it is not possible to draw a definitive conclusion about the long-term effects of footwear on lower extremity joint pain caused by OA. There is mounting evidence that shock-absorbing insoles, subtalar strapping, and avoidance of high heels and sandals early in life may prevent lower extremity joint pain in older adults, but no conclusive evidence exists to show that lateral wedge insoles will provide long-term relief from knee joint pain and improved mobility in older adults with OA. More high-quality randomized controlled trials are needed to study the effectiveness of footwear and shoe inserts on joint pain and function in older adults with OA.

  17. Using Large-Scale Databases in Evaluation: Advances, Opportunities, and Challenges

    ERIC Educational Resources Information Center

    Penuel, William R.; Means, Barbara

    2011-01-01

    Major advances in the number, capabilities, and quality of state, national, and transnational databases have opened up new opportunities for evaluators. Both large-scale data sets collected for administrative purposes and those collected by other researchers can provide data for a variety of evaluation-related activities. These include (a)…

  18. Improving the Scalability of an Exact Approach for Frequent Item Set Hiding

    ERIC Educational Resources Information Center

    LaMacchia, Carolyn

    2013-01-01

    Technological advances have led to the generation of large databases of organizational data recognized as an information-rich, strategic asset for internal analysis and sharing with trading partners. Data mining techniques can discover patterns in large databases including relationships considered strategically relevant to the owner of the data.…

  19. Reflections on CD-ROM: Bridging the Gap between Technology and Purpose.

    ERIC Educational Resources Information Center

    Saviers, Shannon Smith

    1987-01-01

    Provides a technological overview of CD-ROM (Compact Disc-Read Only Memory), an optically-based medium for data storage offering large storage capacity, computer-based delivery system, read-only medium, and economic mass production. CD-ROM database attributes appropriate for information delivery are also reviewed, including large database size,…

  20. Relational Databases: A Transparent Framework for Encouraging Biology Students to Think Informatically

    ERIC Educational Resources Information Center

    Rice, Michael; Gladstone, William; Weir, Michael

    2004-01-01

    We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a…

  1. Cost and cost-effectiveness studies in urologic oncology using large administrative databases.

    PubMed

    Wang, Ye; Mossanen, Matthew; Chang, Steven L

    2018-04-01

    Urologic cancers are not only among the most common types of cancers, but also among the most expensive cancers to treat in the United States. This study aimed to review the use of cost-effectiveness analyses (CEAs) and other cost analyses in urologic oncology using large databases, to better understand the value of management strategies for these cancers. A literature review on CEAs and other cost analyses in urologic oncology using large databases was conducted. The options for and costs of diagnosing, treating, and following patients with urologic cancers can be expected to rise in the coming years. There are numerous opportunities in each urologic cancer to use CEAs to both lower costs and provide high-quality services. Improved cancer care must balance the integration of novelty with ensuring reasonable costs to patients and the health care system. With the increasing focus on cost containment, appreciating the value of competing strategies in caring for our patients is pivotal. Leveraging methods such as CEAs and harnessing large databases may help evaluate the merit of established or emerging strategies. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Characterizing differences in precipitation regimes of extreme wet and dry years: implications for climate change experiments.

    PubMed

    Knapp, Alan K; Hoover, David L; Wilcox, Kevin R; Avolio, Meghan L; Koerner, Sally E; La Pierre, Kimberly J; Loik, Michael E; Luo, Yiqi; Sala, Osvaldo E; Smith, Melinda D

    2015-02-03

    Climate change is intensifying the hydrologic cycle and is expected to increase the frequency of extreme wet and dry years. Beyond precipitation amount, extreme wet and dry years may differ in other ways, such as the number of precipitation events, event size, and the time between events. We assessed 1614 long-term (100 year) precipitation records from around the world to identify key attributes of precipitation regimes, besides amount, that distinguish statistically extreme wet from extreme dry years. In general, in regions where mean annual precipitation (MAP) exceeded 1000 mm, precipitation amounts in extreme wet and dry years differed from those in average years by ~40% and 30%, respectively. The magnitude of these deviations increased to >60% for dry years and to >150% for wet years in arid regions (MAP < 500 mm). Extreme wet years were primarily distinguished from average and extreme dry years by the presence of multiple extreme (large) daily precipitation events (events >99th percentile of all events); these occurred twice as often in extreme wet years compared to average years. In contrast, these large precipitation events were rare in extreme dry years. Less important for distinguishing extreme wet from dry years were mean event size and frequency, or the number of dry days between events. However, extreme dry years were distinguished from average years by an increase in the number of dry days between events. These precipitation regime attributes consistently differed between extreme wet and dry years across 12 major terrestrial ecoregions from around the world, from deserts to the tropics. Thus, we recommend that climate change experiments and model simulations incorporate these differences in key precipitation regime attributes, as well as in amount, into their treatments. This will allow experiments to more realistically simulate extreme precipitation years and more accurately assess the ecological consequences. © 2015 John Wiley & Sons Ltd.
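
    A sketch, on synthetic data, of the event-attribute bookkeeping described: mark extreme wet years by annual total, then count the number of extreme (>99th percentile) daily events in each year. The thresholds follow the abstract; the series itself is invented.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(3)
      days = pd.date_range("1915-01-01", "2014-12-31", freq="D")
      # Synthetic daily precipitation: ~30% wet days, gamma-distributed amounts.
      precip = pd.Series(rng.gamma(0.3, 8.0, len(days)) *
                         (rng.random(len(days)) < 0.3), index=days)

      year = precip.index.year
      annual = precip.groupby(year).sum()
      wet_years = annual[annual >= annual.quantile(0.9)].index    # extreme wet years

      extreme_event = precip > precip[precip > 0].quantile(0.99)  # >99th pct events
      events_per_year = extreme_event.groupby(year).sum()

      print("extreme events/yr, wet years:", events_per_year.loc[wet_years].mean())
      print("extreme events/yr, all years:", events_per_year.mean())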

  3. A comprehensive view of the web-resources related to sericulture

    PubMed Central

    Singh, Deepika; Chetia, Hasnahana; Kabiraj, Debajyoti; Sharma, Swagata; Kumar, Anil; Sharma, Pragya; Deka, Manab; Bora, Utpal

    2016-01-01

    Recent progress in the field of sequencing and analysis has led to a tremendous spike in data and the development of data science tools. One of the outcomes of this scientific progress is the development of numerous databases, which are gaining popularity in all disciplines of biology, including sericulture. As economically important organisms, silkworms are studied extensively for their numerous applications in the field of textiles, biomaterials, biomimetics, etc. Similarly, host plants, pests, pathogens, etc. are also being probed to understand the seri-resources more efficiently. These studies have led to the generation of numerous seri-related databases, which are extremely helpful for the scientific community. In this article, we have reviewed all the available online resources on the silkworm and its related organisms, including databases as well as informative websites. We have studied their basic features and impact on research through citation count analysis, finally discussing the role of emerging sequencing and analysis technologies in the field of seri-data science. As an outcome of this review, a web portal named SeriPort has been created, which will act as an index for the various sericulture-related databases and web resources available in cyberspace. Database URL: http://www.seriport.in/ PMID:27307138

  4. Quantitative methods for stochastic high frequency spatio-temporal and non-linear analysis: Assessing health effects of exposure to extreme ambient temperature

    NASA Astrophysics Data System (ADS)

    Liss, Alexander

    Extreme weather events, such as heat waves and cold spells, cause substantial excess mortality and morbidity in the vulnerable elderly population, and cost billions of dollars. The accurate and reliable assessment of the adverse effects of extreme weather events on human health is crucial for environmental scientists, economists, and public health officials to ensure proper protection of vulnerable populations and efficient allocation of scarce resources. However, the methodology for the analysis of large national databases is yet to be developed. The overarching objective of this dissertation is to examine the effect of extreme weather on the elderly population of the Conterminous US (ConUS) with respect to seasonality in temperature in different climatic regions, utilizing heterogeneous data of high frequency and spatio-temporal resolution. To achieve these goals the author: 1) incorporated dissimilar stochastic high frequency big data streams and distinct data types into an integrated database for use in analytical and decision support frameworks; 2) created an automated climate regionalization system based on remote sensing and machine learning to define climate regions for the Conterminous US; 3) systematically surveyed the current state of the art and identified existing gaps in the scientific knowledge; 4) assessed the dose-response relationship of exposure to temperature extremes on human health in relatively homogeneous climate regions using different statistical models, such as parametric and non-parametric, contemporaneous and asynchronous, applied to the same data; 5) assessed seasonal peak timing and synchronization delay of the exposure and the disease within the framework of contemporaneous high frequency harmonic time series analysis, and the modification of the effect by the regional climate; 6) modeled, using a hyperbolic functional form, the non-linear properties of the effect of exposure to extreme temperature on human health. The proposed climate regionalization method algorithmically forms eight climatically homogeneous regions for the Conterminous US from satellite remote sensing inputs. The relative risk of hospitalizations due to extreme ambient temperature varied across climatic regions. Differences in regional hospitalization rates suggest the presence of an adaptation effect to the prevailing climate. In various climatic regions the hospitalizations peaked earlier than the peak of exposure. This suggests a disproportionately high impact of extreme weather events, such as cold spells or heat waves, when they occur early in the season. These findings provide an insight into the use of high frequency disjoint data sets for the assessment of the magnitude, timing, synchronization and non-linear properties of adverse health consequences of exposure to extreme weather events for the elderly in defined climatic regions. These findings assist in the creation of decision support frameworks targeting prevention and adaptation strategies such as improving infrastructure, providing energy assistance, education and early warning notifications for the vulnerable population. This dissertation offers a number of methodological innovations for the assessment of the high frequency spatio-temporal and non-linear impacts of extreme weather events on human health. These innovations help to ensure improved protection of the elderly population, aid policy makers in the development of efficient disaster prevention strategies, and facilitate more efficient allocation of scarce resources.
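
    A toy sketch of the harmonic (seasonal) analysis step mentioned above: fit one annual sine and cosine term to a weekly series and recover the seasonal peak timing, the quantity compared between exposure and hospitalization series. The data are synthetic, not the dissertation's records.

      import numpy as np

      rng = np.random.default_rng(7)
      weeks = np.arange(52 * 10)                 # ten years of weekly counts
      true_peak = 4.0                            # weeks after New Year
      y = (100 + 30 * np.cos(2 * np.pi * (weeks - true_peak) / 52)
           + rng.normal(0, 5, weeks.size))

      # Least-squares fit of y ~ a + b*cos(wt) + c*sin(wt), with w = 2*pi/52.
      w = 2 * np.pi / 52
      A = np.column_stack([np.ones_like(weeks, dtype=float),
                           np.cos(w * weeks), np.sin(w * weeks)])
      a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
      peak_week = (np.arctan2(c, b) / w) % 52    # phase converted to weeks
      print(f"estimated seasonal peak: week {peak_week:.1f}")   # ~ 4.0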

  5. LSD: Large Survey Database framework

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2012-09-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures.
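
    The core operation LSD distributes is positional cross-matching. As a toy in-memory illustration of that operation (using astropy rather than the LSD API, with made-up coordinates; LSD performs the same task out-of-core and in parallel):

    ```python
    # Nearest-neighbor cross-match of two small catalogs on the sky.
    from astropy.coordinates import SkyCoord
    import astropy.units as u

    cat1 = SkyCoord(ra=[10.1, 187.5] * u.deg, dec=[-1.2, 12.3] * u.deg)
    cat2 = SkyCoord(ra=[10.1002, 45.0, 187.4999] * u.deg,
                    dec=[-1.2001, 30.0, 12.3001] * u.deg)

    # For each source in cat1, find its nearest neighbor in cat2.
    idx, sep2d, _ = cat1.match_to_catalog_sky(cat2)
    matched = sep2d < 1.0 * u.arcsec          # keep pairs closer than 1 arcsec
    print(idx[matched], sep2d[matched].to(u.arcsec))
    ```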

  6. A global analysis of the asymmetric effect of ENSO on extreme precipitation

    NASA Astrophysics Data System (ADS)

    Sun, Xun; Renard, Benjamin; Thyer, Mark; Westra, Seth; Lang, Michel

    2015-11-01

    The global and regional influence of the El Niño-Southern Oscillation (ENSO) phenomenon on extreme precipitation was analyzed using a global database comprising over 7000 high quality observation sites. To better quantify possible changes in relatively rare, design-relevant precipitation quantiles (e.g. the 1 in 10 year event), a Bayesian regional extreme value model was used, which employed the Southern Oscillation Index (SOI) - a measure of ENSO - as a covariate. Regions found to be influenced by ENSO include parts of North and South America, southern and eastern Asia, South Africa, Australia and Europe. The season experiencing the greatest ENSO effect varies regionally, but in most of the ENSO-affected regions the strongest effect occurs in boreal winter, during which the 10-year precipitation for |SOI| = 20 (corresponding to either a strong El Niño or La Niña episode) can be up to 50% higher or lower than for SOI = 0 (a neutral phase). Importantly, the effect of ENSO on extreme precipitation is asymmetric, with most parts of the world experiencing a significant effect only for a single ENSO phase. This finding has important implications for the current understanding of how ENSO influences extreme precipitation, and will enable a more rigorous theoretical foundation for providing quantitative extreme precipitation intensity predictions at seasonal timescales. We anticipate that incorporating the asymmetric impacts of ENSO on extreme precipitation will help lead to better-informed climate-adaptive design of flood-sensitive infrastructure.
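
    A single-site sketch of the kind of covariate-dependent extreme value model described above, assuming the GEV location parameter varies linearly with SOI (the paper's actual model is Bayesian and regional; all data here are synthetic):

    ```python
    # Nonstationary GEV fit by maximum likelihood, with mu = mu0 + mu1*SOI.
    import numpy as np
    from scipy.stats import genextreme
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    soi = rng.normal(0, 10, 60)                     # one SOI value per year
    ann_max = genextreme.rvs(c=-0.1, loc=50 + 0.4 * soi, scale=10,
                             random_state=rng)      # synthetic annual maxima

    def nll(params):
        mu0, mu1, log_sigma, xi = params
        # scipy's shape convention is c = -xi; -inf logpdf handles support.
        return -genextreme.logpdf(ann_max, c=-xi, loc=mu0 + mu1 * soi,
                                  scale=np.exp(log_sigma)).sum()

    fit = minimize(nll, x0=[ann_max.mean(), 0.0, np.log(ann_max.std()), 0.1],
                   method="Nelder-Mead")
    mu0, mu1, log_sigma, xi = fit.x

    # 10-year precipitation under strong El Nino / La Nina vs a neutral phase:
    for s in (0.0, 20.0, -20.0):
        rl = genextreme.isf(0.1, c=-xi, loc=mu0 + mu1 * s,
                            scale=np.exp(log_sigma))
        print(f"SOI={s:+.0f}: 10-yr level {rl:.1f}")
    ```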

  7. The Seasonal Predictability of Extreme Wind Events in the Southwest United States

    NASA Astrophysics Data System (ADS)

    Seastrand, Simona Renee

    Extreme wind events are a common phenomenon in the Southwest United States. Entities such as the United States Air Force (USAF) find the Southwest appealing for many reasons, primarily for the expansive, unpopulated, and electronically unpolluted space for large-scale training and testing. However, wind events can create hazards for the USAF: surface wind gusts can impact the take-off and landing of all aircraft, can tip the airframes of large wing-surface aircraft during maneuvers close to the ground, and can even impact weapons systems. This dissertation comprises three sections intended to further our knowledge and understanding of wind events in the Southwest. The first section builds a climatology of wind events for seven locations in the Southwest during the twelve 3-month seasons of the year, and further examines the wind events in relation to terrain and the large-scale flow of the atmosphere. The second section builds upon the first by taking the wind events and generating mid-level composites for each of the twelve 3-month seasons. In the third section, teleconnections identified as consistent with the large-scale circulation in the second section are used as predictor variables to build a Poisson regression model for each of the twelve 3-month seasons. The purpose of this research is to increase our understanding of the climatology of extreme wind events, increase our understanding of how the large-scale circulation influences them, and create a model to enhance their predictability in the Southwest. Knowledge from this research will help protect personnel and property associated not only with the USAF, but with all those in the Southwest.
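
    A minimal sketch of the third section's modeling approach: seasonal counts of extreme wind events regressed on teleconnection indices via a Poisson GLM (statsmodels; predictor names and data are placeholders, not the dissertation's variables):

    ```python
    # Poisson regression of event counts on climate indices.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 40                                   # e.g. 40 seasons of record
    enso = rng.normal(size=n)                # placeholder teleconnection indices
    pna = rng.normal(size=n)
    lam = np.exp(0.8 + 0.3 * enso - 0.2 * pna)
    counts = rng.poisson(lam)                # seasonal counts of wind events

    X = sm.add_constant(np.column_stack([enso, pna]))
    model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(model.params)                      # log-rate per unit index
    print("predicted counts:", model.predict(X)[:5])
    ```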

  8. Accelerating Pathology Image Data Cross-Comparison on CPU-GPU Hybrid Systems

    PubMed Central

    Wang, Kaibo; Huai, Yin; Lee, Rubao; Wang, Fusheng; Zhang, Xiaodong; Saltz, Joel H.

    2012-01-01

    As an important application of spatial databases in pathology imaging analysis, cross-comparing the spatial boundaries of a huge amount of segmented micro-anatomic objects demands extremely data- and compute-intensive operations, requiring high throughput at an affordable cost. However, the performance of spatial database systems has not been satisfactory since their implementations of spatial operations cannot fully utilize the power of modern parallel hardware. In this paper, we provide a customized software solution that exploits GPUs and multi-core CPUs to accelerate spatial cross-comparison in a cost-effective way. Our solution consists of an efficient GPU algorithm and a pipelined system framework with task migration support. Extensive experiments with real-world data sets demonstrate the effectiveness of our solution, which improves the performance of spatial cross-comparison by over 18 times compared with a parallelized spatial database approach. PMID:23355955
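
    The paper's contribution is the GPU/multi-core pipeline; the per-pair computation at its heart can be pictured serially. A sketch using shapely, with Jaccard similarity standing in for the cross-comparison metric (illustrative only, not the authors' code):

    ```python
    # Overlap metric between two segmented object boundaries.
    from shapely.geometry import Polygon

    def jaccard(p, q):
        """Area of overlap divided by area of union; 1.0 means identical regions."""
        union = p.union(q).area
        return p.intersection(q).area / union if union > 0 else 0.0

    p = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
    q = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])
    print(jaccard(p, q))   # 4 / 28 ~= 0.143
    ```

    Doing this for millions of polygon pairs is exactly the throughput problem the hybrid CPU-GPU pipeline addresses.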

  9. Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis

    NASA Astrophysics Data System (ADS)

    Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles

    2018-03-01

    We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem of maximum likelihood and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database covering a large region of 100,000 km² in southern France with contrasting rainfall regimes, in order to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, for accumulation durations in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and bootstrap densities are better able to adjust uncertainty estimation to the data than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Our recommendation therefore goes towards the use of the Bayesian framework to compute uncertainty.
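
    A minimal single-station sketch of the bootstrap branch of the frequentist framework (synthetic annual maxima; the paper additionally imposes simple scaling across durations): fit a GEV, then re-fit parametric resamples to bracket a return level.

    ```python
    # Parametric bootstrap confidence interval for a GEV return level.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(2)
    maxima = genextreme.rvs(-0.1, 30, 8, size=35, random_state=rng)

    c, loc, scale = genextreme.fit(maxima)
    T = 100                                          # return period in years
    rl = genextreme.isf(1.0 / T, c, loc, scale)      # 100-year return level

    boot = []
    for _ in range(1000):
        resample = genextreme.rvs(c, loc, scale, size=maxima.size,
                                  random_state=rng)
        cb, lb, sb = genextreme.fit(resample)
        boot.append(genextreme.isf(1.0 / T, cb, lb, sb))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"100-yr level {rl:.1f}, 95% bootstrap CI ({lo:.1f}, {hi:.1f})")
    ```

    The paper's point iii) corresponds to intervals like this one becoming implausibly wide or skewed for large T and short records.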

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiogama, Hideo; Imada, Yukiko; Mori, Masato

    Here, we describe two unprecedentedly large (100-member), long-term (61-year) ensembles based on MRI-AGCM3.2, which were driven by historical and non-warming climate forcing. These ensembles comprise the "Database for Policy Decision making for Future climate change (d4PDF)". We compare these ensembles to large ensembles based on another climate model, as well as to observed data, to investigate the influence of anthropogenic activities on historical changes in the numbers of record-breaking events, including: the annual coldest daily minimum temperature (TNn), the annual warmest daily maximum temperature (TXx) and the annual most intense daily precipitation event (Rx1day). These two climate model ensembles indicate that human activity has already had statistically significant impacts on the number of record-breaking extreme events worldwide, mainly over Northern Hemisphere land. Specifically, human activities have altered the likelihood that a wider area globally would suffer record-breaking TNn, TXx and Rx1day events than that observed over the 2001-2010 period by a factor of at least 0.6, 5.4 and 1.3, respectively. However, we also find that the estimated spatial patterns and amplitudes of anthropogenic impacts on the probabilities of record-breaking events are sensitive to the climate model and/or natural-world boundary conditions used in the attribution studies.

  11. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    NASA Astrophysics Data System (ADS)

    Yu, Lejiang; Zhong, Shiyuan; Pei, Lisi; Bian, Xindi; Heilman, Warren E.

    2016-04-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for severe flooding over a large region, little is known about how extreme precipitation events that cause flash flooding and occur at sub-daily time scales have changed over time. Here we use the observed hourly precipitation from the North American Land Data Assimilation System Phase 2 forcing datasets to determine trends in the frequency of extreme precipitation events of short (1 h, 3 h, 6 h, 12 h and 24 h) duration for the period 1979-2013. The results indicate an increasing trend in the central and eastern US. Over most of the western US, especially the Southwest and the Intermountain West, the trends are generally negative. These trends can be largely explained by the interdecadal variability of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation (AMO), with the AMO making a greater contribution to the trends in both warm and cold seasons.
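
    The counting step behind such trend estimates can be sketched as follows, assuming an hourly precipitation series (synthetic here) and a Theil-Sen slope as one reasonable trend estimator; the study's exact thresholds and methods may differ:

    ```python
    # Annual counts of extreme hourly precipitation, plus a robust trend.
    import numpy as np
    import pandas as pd
    from scipy.stats import theilslopes

    rng = np.random.default_rng(3)
    idx = pd.date_range("1979-01-01", "2013-12-31 23:00", freq="h")
    precip = pd.Series(rng.gamma(0.1, 2.0, len(idx)), index=idx)  # mm/hour

    threshold = precip[precip > 0].quantile(0.99)     # define "extreme" hours
    annual_counts = (precip > threshold).groupby(precip.index.year).sum()

    slope, intercept, lo, hi = theilslopes(annual_counts.values,
                                           annual_counts.index.values)
    print(f"trend: {slope:.2f} extreme hours/yr (95% CI {lo:.2f}, {hi:.2f})")
    ```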

  12. Increasing climate whiplash in 21st century California

    NASA Astrophysics Data System (ADS)

    Swain, D. L.; Langenbrunner, B.; Neelin, J. D.; Hall, A. D.

    2017-12-01

    Temperate "Mediterranean" climate regimes across the globe are particularly susceptible to wide swings between drought and flood—of which California's rapid transition from record multi-year dryness between 2012-2016 to extreme wetness during 2016-2017 provides a dramatic example. The wide-ranging human and environmental impacts of this recent "climate whiplash" event in a highly-populated, economically critical, and biodiverse region highlight the importance of understanding weather and climate extremes at both ends of the hydroclimatic spectrum. Previous studies have examined the potential contribution of anthropogenic warming to recent California extremes, but findings to date have been mixed and primarily drought-focused. Here, we use specific historical California flood and drought events as thresholds for quantifying long-term changes in precipitation extremes using a large ensemble of multi-decadal climate model simulations (CESM-LENS). We find that greenhouse gas emissions are already responsible for a detectable increase in both wet and dry extremes across portions of California, and that increasing 21st century "climate whiplash" will likely yield large increases in the frequency of both rapid "dry-to-wet" transitions and severe flood events over a wide range of timescales. This projected intensification of California's hydrological cycle would seriously challenge the region's existing water storage, conveyance, and flood control infrastructure—even absent large changes in mean precipitation.

  13. Analyzing extreme sea levels for broad-scale impact and adaptation studies

    NASA Astrophysics Data System (ADS)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.

    2017-12-01

    Coastal impact and adaptation assessments require detailed knowledge of extreme sea levels (ESL), because increasing damage due to extreme events is one of the major consequences of sea-level rise (SLR) and climate change. Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future SLR; different scenarios were developed with process-based or semi-empirical models and used for coastal impact studies at various temporal and spatial scales to guide coastal management and adaptation efforts. Uncertainties in future SLR are typically accounted for by analyzing the impacts associated with a range of scenarios and model ensembles. ESL distributions are then displaced vertically according to the SLR scenarios under the inherent assumption that we have perfect knowledge of the statistics of extremes. However, there is still a limited understanding of present-day ESL, which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) the numerical models that are used to generate long time series of storm surge water levels, and (2) the statistical models used for determining present-day ESL exceedance probabilities. There is no universally accepted approach to obtain such values for broad-scale flood risk assessments, and while substantial research has explored SLR uncertainties, we quantify, for the first time globally, the key uncertainties in ESL estimates. We find that contemporary ESL uncertainties exceed those from SLR projections and, assuming that we meet the Paris agreement, the projected SLR itself by the end of the century. Our results highlight the necessity of further improving our understanding of uncertainties in ESL estimates through (1) continued improvement of the numerical and statistical models used to simulate and analyze coastal water levels and (2) exploitation of the rich observational database and continued data archeology to obtain longer time series and remove model bias. Finally, ESL uncertainties need to be integrated with SLR uncertainties; otherwise, important improvements in providing more robust SLR projections are of less benefit for broad-scale impact and adaptation studies and decision processes.

  14. Trauma transitional care coordination: A mature system at work.

    PubMed

    Hall, Erin C; Tyrrell, Rebecca L; Doyle, Karen E; Scalea, Thomas M; Stein, Deborah M

    2018-05-01

    We have previously demonstrated the effectiveness of a Trauma Transitional Care Coordination (TTCC) Program in reducing 30-day readmission rates for the trauma patients most at risk. With program maturation, we achieved improved readmission rates for specific patient populations. TTCC is a nursing-driven program that supports patients at high risk for 30-day readmission. The TTCC interventions include calls to patients within 72 hours of discharge, complete medication reconciliation, coordination of medical appointments, and individualized problem solving. Account IDs were used to link TTCC patients with the Health Services Cost Review Commission database to collect data on statewide unplanned 30-day readmissions. Four hundred seventy-five patients were enrolled in the TTCC program from January 2014 to September 2016. Only 10.5% (n = 50) of TTCC enrollees were privately insured, 54.5% had Medicaid (n = 259), and 13.5% had Medicare (n = 64). Seventy-three percent had Health Services Cost Review Commission severity of injury ratings of 3 or 4 (maximum severity of injury = 4). The most common All Patient Refined Diagnosis Related Groups for participants were: lower-extremity procedures (n = 67, 14%); extensive abdominal/thoracic procedures (n = 40, 8.4%); musculoskeletal procedures (n = 37, 7.8%); complicated tracheostomy and upper-extremity procedures (n = 29 each, 6.1%); infectious disease complications (n = 14, 2.9%); and major chest/respiratory trauma, major small and large bowel procedures, and vascular procedures (n = 13 each, 2.7%). TTCC participants with lower-extremity injury, complicated tracheostomy, and bowel procedures had a 6-point reduction (10% vs. 16%, p = 0.05), an 11-point reduction (13% vs. 24%, p = 0.05), and a 16-point reduction (11% vs. 27%, p = 0.05) in 30-day readmission rates, respectively, compared to those without TTCC. Targeted outpatient support for high-risk patients can decrease 30-day readmission rates. As our TTCC program matured, we reduced 30-day readmission in patients with lower-extremity injury, complicated tracheostomy and bowel procedures. This represents over one million dollars in savings for the hospital per year through quality-based reimbursement. Therapeutic/care management, level III.

  15. Scenario based tsunami wave height estimation towards hazard evaluation for the Hellenic coastline and examples of extreme inundation zones in South Aegean

    NASA Astrophysics Data System (ADS)

    Melis, Nikolaos S.; Barberopoulou, Aggeliki; Frentzos, Elias; Krassanakis, Vassilios

    2016-04-01

    A scenario-based methodology for tsunami hazard assessment is used, incorporating earthquake sources with the potential to produce extreme tsunamis (measured through their capacity to cause maximum wave height and inundation extent). In the present study we follow a two-phase approach. In the first phase, existing earthquake hazard zoning in the greater Aegean region is used to derive representative maximum expected earthquake magnitude events, with realistic seismotectonic source characteristics, and of greatest tsunamigenic potential within each zone. By stacking the scenario-produced maximum wave heights, a global maximum map is constructed for the entire Hellenic coastline, corresponding to all expected extreme offshore earthquake sources. Further evaluation of the produced coastline categories based on the maximum expected wave heights emphasizes the tsunami hazard in selected coastal zones with important functions (e.g. crowded tourist zones, industrial zones, airports, power plants, etc.). Owing to their proximity to the Hellenic Arc, their many urban centres, and their popularity as tourist destinations, Crete and the South Aegean region are given top priority for defining extreme inundation zoning. In the second phase, a set of four large coastal cities (Kalamata, Chania, Heraklion and Rethymno), important for tsunami hazard due, for example, to crowded beaches during the summer season or industrial facilities, are explored with respect to preparedness and resilience for tsunami hazard in Greece. To simulate tsunamis in the Aegean region (generation, propagation and runup) the MOST - ComMIT NOAA code was used. High resolution DEMs for bathymetry and topography were joined via an interface specifically developed for the inundation maps in this study and with similar products in mind. For the examples explored in the present study, we used 5 m resolution for the topography and 30 m resolution for the bathymetry, respectively. Although this study can be considered preliminary, it can also form the basis for further developing a scenario-based inundation model database that can be used as an operational tool for rapid assessment of tsunami-prone zones during a real tsunami crisis.

  16. Convective weather hazards in the Twin Cities Metropolitan Area, MN

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Kenneth A.

    This dissertation investigates the frequency and intensity of severe convective storms, and their associated hazards, in the Twin Cities Metropolitan Area (TCMA), Minnesota. Using public severe weather reports databases and high spatial density rain gauge data, annual frequencies and return periods are calculated for tornadoes, damaging winds, large hail, and flood-inducing rainfall. The hypothesis that severe thunderstorms and tornadoes are less likely in the central TCMA than in surrounding areas is also examined, and techniques for estimating 100-year rainfall amounts are developed and discussed. This research finds that: (i) storms capable of significant damage somewhere within the TCMA recur annually (sometimes multiple times per year), while storms virtually certain to cause such damage recur every 2-3 years; (ii) though severe weather reports data are not amenable to classical comparative statistical testing, careful treatment of them suggests all types and intensity categories of severe convective weather have been and should continue to be approximately as common in the central TCMA as in surrounding areas; and (iii) applications of Generalized Extreme Value (GEV) statistics and areal analyses of rainfall data lead to significantly larger (25-50%) estimates of 100-year rainfall amounts in the TCMA and parts of Minnesota than those currently published and used for precipitation design. The growth of the TCMA, the popular sentiment that downtown areas somehow deter severe storms and tornadoes, and the prior underestimation of extreme rainfall thresholds for precipitation design all act to enhance local susceptibility to hazards from severe convective storms.

  17. Position-related injury is uncommon in robotic gynecologic surgery.

    PubMed

    Ulm, Michael A; Fleming, Nicole D; Rallapali, Vijayashri; Munsell, Mark F; Ramirez, Pedro T; Westin, Shannon N; Nick, Alpa M; Schmeler, Kathleen M; Soliman, Pamela T

    2014-12-01

    To assess the rate and risk factors for position-related injury in robotic gynecologic surgery. A prospective database from 12/2006 to 1/2014 of all planned robotic gynecologic procedures was retrospectively reviewed for patients who experienced neurologic injury, musculoskeletal injury, or vascular compromise related to patient positioning in the operating room. Analysis was performed to determine the risk factors for and incidence of position-related injury. Of the 831 patients who underwent robotic surgery during the study period, only 7 (0.8%) experienced positioning-related injury. The injuries included minor head contusions (n=3), lower-extremity neuropathies (n=2), brachial plexus injury (n=1), and a large subcutaneous ecchymosis on the left flank and thigh (n=1). There were no long-term sequelae from the positioning-related injuries. The only statistically significant risk factor for positioning-related injury was prior abdominal surgery (P=0.05). There were no significant associations between position-related injuries and operative time (P=0.232), body mass index (P=0.847), age (P=0.152), smoking history (P=0.161), or medical comorbidities (P=0.229-0.999). The incidence of position-related injury among women undergoing robotic surgery was extremely low (0.8%). Due to the low incidence, we were unable to identify modifiable risk factors for position-related injury following robotic surgery. A standardized, team-oriented approach may significantly decrease position-related injuries following robotic gynecologic surgery. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Climate Change and Hydrological Extreme Events - Risks and Perspectives for Water Management in Bavaria and Québec

    NASA Astrophysics Data System (ADS)

    Ludwig, R.

    2017-12-01

    It is not yet confirmed whether and how climate change contributes to the magnitude and frequency of hydrological extreme events, nor how regional water management could adapt to the corresponding risks. The ClimEx project (2015-2019) investigates the effects of climate change on meteorological and hydrological extreme events and their implications for water management in Bavaria and Québec. High Performance Computing is employed to enable the complex simulations in a hydro-climatological model processing chain, resulting in a unique high-resolution and transient (1950-2100) dataset of climatological and meteorological forcing and hydrological response: (1) The climate module has developed a large ensemble of high resolution (12 km) data from the CRCM5 RCM for Central Europe and North-Eastern North America, downscaled from 50 members of the CanESM2 GCM. The dataset is complemented by all available data from the Euro-CORDEX project to account for the assessment of both natural climate variability and climate change. The large ensemble, with several thousand model years, provides the potential to catch rare extreme events and thus improves the process understanding of extreme events with return periods of 1000+ years. (2) The hydrology module comprises process-based and spatially explicit model setups (e.g. WaSiM) for all major catchments in Bavaria and Southern Québec at high temporal (3 h) and spatial (500 m) resolution. The simulations form the basis for in-depth analysis of hydrological extreme events based on inputs from the large climate model dataset. This specific data situation makes it possible to establish a new method of "virtual perfect prediction", which assesses climate change impacts on flood risk and water resources management by identifying patterns in the data that reveal preferential triggers of hydrological extreme events. The presentation will highlight first results from the analysis of the large-scale ClimEx model ensemble, showing the current and future ratio of natural variability and climate change impacts on meteorological extreme events. Selected data from the ensemble are used to drive a hydrological model experiment to illustrate the capacity to better determine the recurrence periods of hydrological extreme events under conditions of climate change.

  19. Assignment to database industry

    NASA Astrophysics Data System (ADS)

    Abe, Kohichiroh

    Various kinds of databases are considered to be an essential part of future large-scale systems. Information provision through databases alone is also expected to grow as the market matures. This paper discusses how these circumstances have arisen and how they will develop from now on.

  20. European Extremely Large Telescope: progress report

    NASA Astrophysics Data System (ADS)

    Tamai, R.; Spyromilio, J.

    2014-07-01

    The European Extremely Large Telescope is a project of the European Southern Observatory to build and operate a 40-m class optical near-infrared telescope. The telescope design effort is largely concluded and construction contracts are being placed with industry and academic/research institutes for the various components. The siting of the telescope in Northern Chile close to the Paranal site allows for an integrated operation of the facility providing significant economies. The progress of the project in various areas is presented in this paper and references to other papers at this SPIE meeting are made.

  1. Construction of a ratiometric fluorescent probe with an extremely large emission shift for imaging hypochlorite in living cells

    NASA Astrophysics Data System (ADS)

    Song, Xuezhen; Dong, Baoli; Kong, Xiuqi; Wang, Chao; Zhang, Nan; Lin, Weiying

    2018-01-01

    Hypochlorite is one of the important reactive oxygen species (ROS) and plays critical roles in many biologically vital processes. Herein, we present a unique ratiometric fluorescent probe (CBP) with an extremely large emission shift for detecting hypochlorite in living cells. Utilizing a positively charged α,β-unsaturated carbonyl group as the reaction site, the probe CBP itself exhibits near-infrared (NIR) fluorescence at 662 nm, and displays strong blue fluorescence at 456 nm upon responding to hypochlorite. Notably, the extremely large emission shift of 206 nm enables precise measurement of the fluorescence peak intensities and ratios. CBP showed high sensitivity, excellent selectivity, desirable performance at physiological pH, and low cytotoxicity. The bioimaging experiments demonstrate the biological application of CBP for the ratiometric imaging of hypochlorite in living cells.

  2. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement

    PubMed Central

    Garcia-Cantero, Juan J.; Brito, Juan P.; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes. PMID:28690511

  3. Comparative Probabilistic Assessment of Occupational Pesticide Exposures Based on Regulatory Assessments

    PubMed Central

    Pouzou, Jane G.; Cullen, Alison C.; Yost, Michael G.; Kissel, John C.; Fenske, Richard A.

    2018-01-01

    Implementation of probabilistic analyses in exposure assessment can provide valuable insight into the risks of those at the extremes of population distributions, including more vulnerable or sensitive subgroups. Incorporation of these analyses into current regulatory methods for occupational pesticide exposure is enabled by the exposure data sets and associated data currently used in the risk assessment approach of the Environmental Protection Agency (EPA). Monte Carlo simulations were performed on exposure measurements from the Agricultural Handler Exposure Database and the Pesticide Handler Exposure Database along with data from the Exposure Factors Handbook and other sources to calculate exposure rates for three different neurotoxic compounds (azinphos methyl, acetamiprid, emamectin benzoate) across four pesticide-handling scenarios. Probabilistic estimates of doses were compared with the no observable effect levels used in the EPA occupational risk assessments. Some percentage of workers were predicted to exceed the level of concern for all three compounds: 54% for azinphos methyl, 5% for acetamiprid, and 20% for emamectin benzoate. This finding has implications for pesticide risk assessment and offers an alternative procedure that may be more protective of those at the extremes of exposure than the current approach. PMID:29105804
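
    The Monte Carlo step can be pictured with a deliberately simplified dose model; all distributions, parameters, and the no-observable-effect level (NOEL) below are placeholders, not the paper's values:

    ```python
    # Probabilistic occupational dose: sample exposure inputs, form the dose
    # distribution, and report the fraction exceeding a level of concern.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000
    # Unit exposure (mg of active ingredient per lb handled); lognormal shapes
    # are common for exposure data, but these parameters are illustrative.
    unit_exposure = rng.lognormal(mean=-4.0, sigma=1.0, size=n)
    amount_handled = rng.uniform(50, 400, size=n)      # lb handled per day
    body_weight = rng.normal(80, 12, size=n)           # kg

    dose = unit_exposure * amount_handled / body_weight  # mg/kg/day
    noel = 1.0                                           # placeholder NOEL
    print("fraction above level of concern:", (dose > noel).mean())
    ```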

  4. Comparative Probabilistic Assessment of Occupational Pesticide Exposures Based on Regulatory Assessments.

    PubMed

    Pouzou, Jane G; Cullen, Alison C; Yost, Michael G; Kissel, John C; Fenske, Richard A

    2017-11-06

    Implementation of probabilistic analyses in exposure assessment can provide valuable insight into the risks of those at the extremes of population distributions, including more vulnerable or sensitive subgroups. Incorporation of these analyses into current regulatory methods for occupational pesticide exposure is enabled by the exposure data sets and associated data currently used in the risk assessment approach of the Environmental Protection Agency (EPA). Monte Carlo simulations were performed on exposure measurements from the Agricultural Handler Exposure Database and the Pesticide Handler Exposure Database along with data from the Exposure Factors Handbook and other sources to calculate exposure rates for three different neurotoxic compounds (azinphos methyl, acetamiprid, emamectin benzoate) across four pesticide-handling scenarios. Probabilistic estimates of doses were compared with the no observable effect levels used in the EPA occupational risk assessments. Some percentage of workers were predicted to exceed the level of concern for all three compounds: 54% for azinphos methyl, 5% for acetamiprid, and 20% for emamectin benzoate. This finding has implications for pesticide risk assessment and offers an alternative procedure that may be more protective of those at the extremes of exposure than the current approach. © 2017 Society for Risk Analysis.

  5. High Resolution Spatiotemporal Climate Reconstruction and Variability in East Asia during Little Ice Age

    NASA Astrophysics Data System (ADS)

    Lin, K. H. E.; Wang, P. K.; Lee, S. Y.; Liao, Y. C.; Fan, I. C.; Liao, H. M.

    2017-12-01

    The Little Ice Age (LIA) is one of the most prominent epochs in paleoclimate reconstruction of the Common Era. While the signals of the LIA were generally discovered across hemispheres, wide arrays of regional variability were found, and the reconstructed anomalies were sometimes inconsistent across studies using various proxy data or historical records. This inconsistency is mainly attributed to limited data coverage at the fine resolution needed for high-resolution climate reconstruction of continuous spatiotemporal trends. The Qing dynasty (1644-1911 CE) of China existed in the coldest period of the LIA. Owing to a long-standing tradition that required local officials to record unusual occurrences and social or meteorological events, thousands of local chronicles were left. Zhang (ed., 2004) took two decades to compile all these meteorological records into a compendium, which we then digitized and coded into our REACHS database system for reconstructing climate. The database contains in total 1,435 points (sites) covering over 80,000 events for the period. After implementing a two-round coding check for data quality control (accuracy rate 87.2%), multiple indexes were retrieved for reconstructing annually and seasonally resolved temperature and precipitation series for North, Central, and South China. The reconstruction methods include frequency counting and grading, with multiple regression models used to test sensitivity and to calculate correlations among several reconstructed series. Validation was also conducted through comparison with instrumental data and with other reconstructed series from previous studies. Major research results reveal interannual (3-5 years), decadal (8-12 years), and interdecadal (≈30 years) variabilities with strong regional expressions across East China. The cooling effect was not homogeneously distributed in space and time. Flood and drought conditions repeated frequently, but the spatiotemporal pattern was variant, indicating likely different climate regimes that can be linked to the dynamics of the large-scale atmospheric circulation and the East Asian monsoon. Spatiotemporal analysis of extreme events such as typhoons and extreme droughts also indicated similar patterns. More detailed analyses are under way to explain the physical mechanisms that can drive these changes.

  6. A large, benign prostatic cyst presented with an extremely high serum prostate-specific antigen level.

    PubMed

    Chen, Han-Kuang; Pemberton, Richard

    2016-01-08

    We report a case of a patient who presented with an extremely high serum prostate specific antigen (PSA) level and underwent radical prostatectomy for presumed prostate cancer. Surprisingly, the whole mount prostatectomy specimen showed only small volume, organ-confined prostate adenocarcinoma and a large, benign intraprostatic cyst, which was thought to be responsible for the PSA elevation. 2016 BMJ Publishing Group Ltd.

  7. Inconsistencies in the red blood cell membrane proteome analysis: generation of a database for research and diagnostic applications

    PubMed Central

    Hegedűs, Tamás; Chaubey, Pururawa Mayank; Várady, György; Szabó, Edit; Sarankó, Hajnalka; Hofstetter, Lia; Roschitzki, Bernd; Sarkadi, Balázs

    2015-01-01

    Based on recent results, the determination of the easily accessible red blood cell (RBC) membrane proteins may provide new diagnostic possibilities for assessing mutations, polymorphisms or regulatory alterations in diseases. However, the analysis of the current mass spectrometry-based proteomics datasets and other major databases indicates inconsistencies—the results show large scattering and only a limited overlap for the identified RBC membrane proteins. Here, we applied membrane-specific proteomics studies in human RBC, compared these results with the data in the literature, and generated a comprehensive and expandable database using all available data sources. The integrated web database now refers to proteomic, genetic and medical databases as well, and contains an unexpected large number of validated membrane proteins previously thought to be specific for other tissues and/or related to major human diseases. Since the determination of protein expression in RBC provides a method to indicate pathological alterations, our database should facilitate the development of RBC membrane biomarker platforms and provide a unique resource to aid related further research and diagnostics. Database URL: http://rbcc.hegelab.org PMID:26078478

  8. A blue carbon soil database: Tidal wetland stocks for the US National Greenhouse Gas Inventory

    NASA Astrophysics Data System (ADS)

    Feagin, R. A.; Eriksson, M.; Hinson, A.; Najjar, R. G.; Kroeger, K. D.; Herrmann, M.; Holmquist, J. R.; Windham-Myers, L.; MacDonald, G. M.; Brown, L. N.; Bianchi, T. S.

    2015-12-01

    Coastal wetlands contain large reservoirs of carbon, and in 2015 the US National Greenhouse Gas Inventory began the work of placing blue carbon within the national regulatory context. The potential value of a wetland carbon stock, in relation to its location, could soon be influential in determining governmental policy and management activities, or in stimulating market-based CO2 sequestration projects. To meet the national need for high-resolution maps, a blue carbon stock database was developed linking National Wetlands Inventory datasets with the USDA Soil Survey Geographic Database. Users of the database can identify the economic potential for carbon conservation or restoration projects within specific estuarine basins, states, wetland types, physical parameters, and land management activities. The database is geared towards both national-level assessments and local-level inquiries. Spatial analysis of the stocks shows high variance within individual estuarine basins, largely dependent on geomorphic position on the landscape, though there are continental-scale trends in the carbon distribution as well. Future plans include linking this database with a sedimentary accretion database to predict carbon flux in US tidal wetlands.

  9. New Resources for Computer-Aided Legal Research: An Assessment of the Usefulness of the DIALOG System in Securities Regulation Studies.

    ERIC Educational Resources Information Center

    Gruner, Richard; Heron, Carol E.

    1984-01-01

    Examines usefulness of DIALOG as legal research tool through use of DIALOG's DIALINDEX database to identify those databases among almost 200 available that contain large numbers of records related to federal securities regulation. Eight databases selected for further study are detailed. Twenty-six footnotes, database statistics, and samples are…

  10. Recent advances on terrain database correlation testing

    NASA Astrophysics Data System (ADS)

    Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art

    1998-08-01

    Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to a lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.
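
    One common roughness formulation, offered only as a sketch (the paper's improved measure may well differ), is the windowed standard deviation of elevation over a gridded terrain database:

    ```python
    # Windowed standard deviation of a digital elevation model as a
    # simple terrain-roughness metric.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def roughness(dem, size=3):
        """Moving-window std dev: Var = E[x^2] - E[x]^2 within each window."""
        mean = uniform_filter(dem, size)
        mean_sq = uniform_filter(dem * dem, size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    dem = np.random.default_rng(5).normal(100, 10, (50, 50))  # synthetic DEM
    print(roughness(dem).mean())
    ```

    Comparing such a metric between two run-time databases derived from the same source data is one way to quantify how well their terrains correlate.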

  11. Multiresource inventories incorporating GIS, GPS, and database management systems

    Treesearch

    Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du

    2000-01-01

    Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...

  12. Data-driven indexing mechanism for the recognition of polyhedral objects

    NASA Astrophysics Data System (ADS)

    McLean, Stewart; Horan, Peter; Caelli, Terry M.

    1992-02-01

    This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.
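
    A toy version of feature-indexed hypothesis generation: hash quantized feature descriptors to the models that contain them, so a scene feature retrieves candidate models directly instead of scanning the whole database. Names and feature tuples are illustrative:

    ```python
    # Inverted index from quantized features to candidate object models.
    from collections import defaultdict

    index = defaultdict(set)

    def add_model(model_id, features):
        """Insert a model's quantized features into the inverted index."""
        for f in features:
            index[f].add(model_id)

    def hypotheses(scene_features):
        """Return candidate model IDs ranked by how many features they share."""
        votes = defaultdict(int)
        for f in scene_features:
            for m in index[f]:
                votes[m] += 1
        return sorted(votes, key=votes.get, reverse=True)

    # Features here stand in for quantized junction-angle triples.
    add_model("cube", [(90, 90, 90)])
    add_model("wedge", [(90, 45, 45), (90, 90, 90)])
    print(hypotheses([(90, 45, 45)]))   # -> ['wedge']
    ```

    Lookup cost depends on the number of scene features, not on the number of stored models, which is the point of data-driven indexing for large model databases.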

  13. Asymmetrical Responses of Ecosystem Processes to Positive Versus Negative Precipitation Extremes: a Replicated Regression Experimental Approach

    NASA Astrophysics Data System (ADS)

    Felton, A. J.; Smith, M. D.

    2016-12-01

    Heightened climatic variability due to atmospheric warming is forecast to increase the frequency and severity of climate extremes. In particular, changes to interannual variability in precipitation, characterized by increases in extreme wet and dry years, are likely to impact virtually all terrestrial ecosystem processes. However, to date experimental approaches have yet to explicitly test how ecosystem processes respond to multiple levels of climatic extremity, limiting our understanding of how ecosystems will respond to forecast increases in the magnitude of climate extremes. Here we report the results of a replicated regression experimental approach, in which we imposed 9 and 11 levels of growing season precipitation amount and extremity in mesic grassland during 2015 and 2016, respectively. Each level corresponded to a specific percentile of the long-term record, which produced a large gradient of soil moisture conditions that ranged from extreme wet to extreme dry. In both 2015 and 2016, asymptotic responses to water availability were observed for soil respiration. This asymmetry was driven in part by transitions between soil moisture versus temperature constraints on respiration as conditions became increasingly dry versus increasingly wet. In 2015, aboveground net primary production (ANPP) exhibited asymmetric responses to precipitation that largely mirrored those of soil respiration. In total, our results suggest that in this mesic ecosystem, these two carbon cycle processes were more sensitive to extreme drought than to extreme wet years. Future work will assess ANPP responses for 2016, soil nutrient supply and physiological responses of the dominant plant species. Future efforts are needed to compare our findings across a diverse array of ecosystem types, and in particular how the timing and magnitude of precipitation events may modify the response of ecosystem processes to increasing magnitudes of precipitation extremes.
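
    One way to capture the asymptotic response described above is a saturating curve fit across the imposed precipitation levels; a sketch with synthetic data (the authors' statistical treatment may differ):

    ```python
    # Fit a saturating (Michaelis-Menten-type) response of ANPP or soil
    # respiration to growing-season precipitation.
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(x, a, b):
        """Response that rises with x but levels off toward asymptote a."""
        return a * x / (b + x)

    precip = np.array([150, 250, 350, 450, 550, 650, 750, 850, 950], float)
    anpp = np.array([180, 300, 390, 450, 490, 515, 530, 540, 545], float)

    (a, b), _ = curve_fit(saturating, precip, anpp, p0=[600, 300])
    print(f"asymptote ~{a:.0f}, half-saturation ~{b:.0f} mm")
    ```

    Asymmetry then shows up as the fitted response losing far more on the dry side of the gradient than it gains on the wet side.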

  14. The Majorana Parts Tracking Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abgrall, N.; Aguayo, E.; Avignone, F. T.

    2015-04-01

    The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radiopurity required for this rare decay search.

  15. Binary Lenses in OGLE-III EWS Database. Seasons 2002-2003

    NASA Astrophysics Data System (ADS)

    Jaroszynski, M.; Udalski, A.; Kubiak, M.; Szymanski, M.; Pietrzynski, G.; Soszynski, I.; Zebrun, K.; Szewczyk, O.; Wyrzykowski, L.

    2004-06-01

    We present 15 binary lens candidates from the OGLE-III Early Warning System database for seasons 2002-2003. We also found 15 events interpreted as single mass lensing of double sources. The candidates were selected by visual inspection of the light curves. Examining the models of binary lenses from this and our previous study (10 caustic crossing events of OGLE-II seasons 1997-1999), we find one case of an extreme mass ratio binary (q approx 0.005) and the rest in the range 0.1

  16. Clinical and Epidemiological Aspects of Scorpionism in the World: A Systematic Review.

    PubMed

    Santos, Maria S V; Silva, Cláudio G L; Neto, Basílio Silva; Grangeiro Júnior, Cícero R P; Lopes, Victor H G; Teixeira Júnior, Antônio G; Bezerra, Deryk A; Luna, João V C P; Cordeiro, Josué B; Júnior, Jucier Gonçalves; Lima, Marcos A P

    2016-12-01

    Scorpion stings are registered worldwide, but the incidence and the features of the envenomations vary depending on the region. The aim of this review was to summarize the epidemiological, clinical, diagnostic, and therapeutic data worldwide regarding humans stung by scorpions. A systematic review of the literature was conducted through the online databases of the Virtual Health Library (VHL), which hosts Medline and the Latin American and Caribbean Center on Health Sciences Informational (LILACS) database. We selected articles published between January 1, 2002 and July 31, 2014. Scorpion envenomation reports were found throughout the world, mainly in subtropical and tropical regions. The clinical manifestations were sympathetically and parasympathetically mediated, depending on the species of scorpion. Some of the most common severe complications of scorpionism included respiratory distress syndrome, pulmonary edema, cardiac dysfunction, impaired hemostasis, pancreatitis, and multiple organ failure. Scorpion envenomation could be classified as mild, moderate, and severe, and the therapeutic approach was based on the case severity. The treatment comprised 3 components: symptomatic measures, vital functions support, and injection of antivenom. Moreover, the time that elapsed between the sting and administration of the appropriate medical care was extremely important to the patient's prognosis. The large number of scorpion stings worldwide is concerning and reaffirms the need for new prevention measures and policies to reduce the incidence, prevalence, morbidity, and mortality rates from these poisonous arachnids. Copyright © 2016 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.

  17. A Search to Uncover the Infrared Excess (IRXS) Sources in the Spitzer Enhanced Imaging Products (SEIP) Catalog

    NASA Astrophysics Data System (ADS)

    Rowe, Jamie Lynn; Duranko, Gary; Gorjian, Varoujan; Lineberger, Howard; Orr, Laura; Adewole, Ayomikun; Bradford, Eric; Douglas, Alea; Kohl, Steven; Larson, Lillia; Lascola, Gus; Orr, Quinton; Scott, Mekai; Walston, Joseph; Wang, Xian

    2018-01-01

    The Spitzer Enhanced Imaging Products catalog (SEIP) is a collection of nearly 42 million point sources obtained by the Spitzer Space Telescope during its 5+ year cryogenic mission. Strasburger et al. (2014) isolated sources with a signal-to-noise ratio (SNR) >10 in five infrared (IR) wavelength channels (3.6, 4.5, 5.8, 8 and 24 microns) to begin a search for sources with infrared excess (IRXS). They found 76 objects that had never been catalogued before. Based on this success, we intend to dig deeper into the catalog in an attempt to find more IRXS sources, specifically by lowering the SNR threshold on the 3.6, 4.5, and 24 micron channels. The ultimate goal is to use this large sample to seek rare astrophysical sources that are transitional in nature and evolutionarily very important. Our filtering of the database at SNR > 5 yielded 461,000 sources. These were further evaluated and reduced to only the most interesting based on source location in a [3.6]-[4.5] vs [4.5]-[24] color-color diagram. We chose a sample of 985 extreme IRXS sources for further inspection. All of these candidate sources were visually inspected and cross-referenced against known sources in existing databases, resulting in a list of highly reliable IRXS sources. These sources will prove important in the study of galaxy and stellar evolution, and will serve as a starting point for further investigation.
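
    The color-color selection can be sketched in a few lines; column names and cut values below are illustrative, not the study's actual criteria:

    ```python
    # Compute [3.6]-[4.5] and [4.5]-[24] colors from catalog magnitudes and
    # keep sources red in both, i.e. infrared-excess candidates.
    import pandas as pd

    cat = pd.DataFrame({
        "mag_3_6": [12.1, 14.3, 11.8],
        "mag_4_5": [12.0, 13.6, 11.7],
        "mag_24":  [11.9,  9.8, 11.6],
    })
    cat["c1"] = cat["mag_3_6"] - cat["mag_4_5"]   # [3.6]-[4.5]
    cat["c2"] = cat["mag_4_5"] - cat["mag_24"]    # [4.5]-[24]

    candidates = cat[(cat["c1"] > 0.5) & (cat["c2"] > 2.0)]  # placeholder cuts
    print(candidates)
    ```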

  18. Heavy Tail Behavior of Rainfall Extremes across Germany

    NASA Astrophysics Data System (ADS)

    Castellarin, A.; Kreibich, H.; Vorogushyn, S.; Merz, B.

    2017-12-01

    Distributions are termed heavy-tailed if extreme values are more likely than would be predicted by probability distributions with exponential asymptotic behavior. Heavy-tail behavior often leads to surprise, because historical observations can be a poor guide for the future. Heavy-tail behavior seems to be widespread for hydro-meteorological extremes, such as extreme rainfall and flood events. To date there have been only vague hints as to the conditions under which these extremes show heavy-tail behavior. We use an observational data set consisting of 11 climate variables at 1440 stations across Germany. This homogenized, gap-free data set covers 110 years (1901-2010) at daily resolution. We estimate the upper tail behavior, including its uncertainty interval, of daily precipitation extremes for the 1440 stations at the annual and seasonal time scales. Different tail indicators are tested, including the shape parameter of the Generalized Extreme Value distribution, the upper tail ratio and the obesity index. In a further step, we explore to what extent the tail behavior can be explained by geographical and climate factors. A large number of characteristics is derived, such as station elevation, degree of continentality, aridity, measures quantifying the variability of humidity and wind velocity, or the event-triggering large-scale atmospheric situation. The link between the upper tail behavior and these characteristics is investigated via data mining methods capable of detecting non-linear relationships in large data sets. This exceptionally rich observational data set, in terms of number of stations, length of time series and number of explanatory variables, allows insights into the upper tail behavior that are rarely possible given the typical observational data sets available.
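
    Of the tail indicators named above, the obesity index is the easiest to illustrate: it is the probability that, among four values drawn at random and sorted in increasing order, the largest plus the smallest exceed the sum of the two middle ones (about 0.5 for a uniform distribution and 0.75 for an exponential, with heavier tails pushing it toward 1). A Monte Carlo sketch on synthetic data (the study's estimators may differ):

    ```python
    # Monte Carlo estimate of the obesity index of a sample.
    import numpy as np

    def obesity_index(x, n_draws=20_000, rng=None):
        rng = rng or np.random.default_rng()
        x = np.asarray(x)
        hits = 0
        for _ in range(n_draws):
            s = np.sort(rng.choice(x, size=4, replace=False))
            hits += (s[0] + s[3]) > (s[1] + s[2])
        return hits / n_draws

    rng = np.random.default_rng(6)
    print(obesity_index(rng.exponential(size=2000), rng=rng))  # ~0.75
    print(obesity_index(rng.pareto(1.5, size=2000), rng=rng))  # closer to 1
    ```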

  19. Text mining for metabolic pathways, signaling cascades, and protein networks.

    PubMed

    Hoffmann, Robert; Krallinger, Martin; Andres, Eduardo; Tamames, Javier; Blaschke, Christian; Valencia, Alfonso

    2005-05-10

    The complexity of the information stored in databases and publications on metabolic and signaling pathways, the high throughput of experimental data, and the growing number of publications make it imperative to provide systems to help the researcher navigate through these interrelated information resources. Text-mining methods have started to play a key role in the creation and maintenance of links between the information stored in biological databases and its original sources in the literature. These links will be extremely useful for database updating and curation, especially if a number of technical problems can be solved satisfactorily, including the identification of protein and gene names (entities in general) and the characterization of their types of interactions. The first generation of openly accessible text-mining systems, such as iHOP (Information Hyperlinked over Proteins), provides additional functions to facilitate the reconstruction of protein interaction networks, combine database and text information, and support the scientist in the formulation of novel hypotheses. The next challenge is the generation of comprehensive information regarding the general function of signaling pathways and protein interaction networks.

  20. Forest tree responses to extreme drought and some biotic events: Towards a selection according to hazard tolerance?

    NASA Astrophysics Data System (ADS)

    Bréda, Nathalie; Badeau, Vincent

    2008-09-01

    The aim of this paper is to illustrate how some extreme events can affect forest ecosystems. Forest tree response can be analysed using dendroecological methods, as tree-ring widths are strongly controlled by climatic or biotic events. Years with such events induce similar tree responses and are called pointer years. They can result from extreme climatic events like frost, a heat wave, spring water logging, drought or insect damage… Forest tree species showed contrasting responses to climatic hazards, depending on their sensitivity to water shortage or temperature hardening, as illustrated from our dendrochronological database. For foresters, a drought or a pest disease is an extreme event if visible and durable symptoms are induced (leaf discolouration, leaf loss, mortality of perennial organs, tree dieback and mortality). These symptoms are shown here, lagging one or several years behind a climatic or biotic event, in forest decline cases in progress since the 2003 drought or attributed to previous severe droughts or defoliations in France. Tree growth or vitality recovery is illustrated, and the functional interpretation of the long-lasting memory of trees is discussed. A coupled approach linking dendrochronology and ecophysiology helps in discussing the vulnerability of forest stands, and suggests management advice to mitigate extreme drought and cope with selective mortality.

  1. Building community resilience to violent extremism through genuine partnerships.

    PubMed

    Ellis, B Heidi; Abdi, Saida

    2017-04-01

    What is community resilience in relation to violent extremism, and how can we build it? This article explores strategies to harness community assets that may contribute to preventing youth from embracing violent extremism, drawing from models of community resilience as defined in relation to disaster preparedness. Research suggests that social connection is at the heart of resilient communities and any strategy to increase community resilience must both harness and enhance existing social connections, and endeavor to not damage or diminish them. First, the role of social connection within and between communities is explored. Specifically, the ways in which social bonding and social bridging can diminish risk for violence, including violent extremism, is examined. Second, research on the role of social connection between communities and institutions or governing bodies (termed social linking) is described. This research is discussed in terms of how the process of government partnering with community members can both provide systems for early intervention for violent extremism, as well as strengthen bonding and bridging social networks and in this way contribute broadly to building community resilience. Finally, community-based participatory research, a model of community engagement and partnership in research, is presented as a road map for building true partnerships and community engagement. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Effects of BMI on the risk and frequency of AIS 3+ injuries in motor-vehicle crashes.

    PubMed

    Rupp, Jonathan D; Flannagan, Carol A C; Leslie, Andrew J; Hoff, Carrie N; Reed, Matthew P; Cunningham, Rebecca M

    2013-01-01

    To determine the effects of BMI on the risk of serious-to-fatal injury (Abbreviated Injury Scale ≥ 3 or AIS 3+) to different body regions for adults in frontal, nearside, farside, and rollover crashes. Multivariate logistic regression analysis was applied to a probability sample of adult occupants involved in crashes, generated by combining the National Automotive Sampling System (NASS-CDS) with a pseudoweighted version of the Crash Injury Research and Engineering Network database. Logistic regression models were applied to weighted data to estimate the change in the number of occupants with AIS 3+ injuries if no occupants were obese. Increasing BMI increased risk of lower-extremity injury in frontal crashes, decreased risk of lower-extremity injury in nearside impacts, increased risk of upper-extremity injury in frontal and nearside crashes, and increased risk of spine injury in frontal crashes. Several of these findings were affected by interactions with gender and vehicle type. If no occupants in frontal crashes were obese, 7% fewer occupants would sustain AIS 3+ upper-extremity injuries, 8% fewer occupants would sustain AIS 3+ lower-extremity injuries, and 28% fewer occupants would sustain AIS 3+ spine injuries. Results of this study have implications for the design and evaluation of vehicle safety systems. Copyright © 2013 The Obesity Society.

  3. Medical data mining: knowledge discovery in a clinical data warehouse.

    PubMed Central

    Prather, J. C.; Lobach, D. F.; Goodwin, L. K.; Hales, J. W.; Hage, M. L.; Hammond, W. E.

    1997-01-01

    Clinical databases have accumulated large quantities of information about patients and their medical conditions. Relationships and patterns within this data could provide new medical knowledge. Unfortunately, few methodologies have been developed and applied to discover this hidden knowledge. In this study, the techniques of data mining (also known as Knowledge Discovery in Databases) were used to search for relationships in a large clinical database. Specifically, data accumulated on 3,902 obstetrical patients were evaluated for factors potentially contributing to preterm birth using exploratory factor analysis. Three factors were identified by the investigators for further exploration. This paper describes the processes involved in mining a clinical database including data warehousing, data query and cleaning, and data analysis. PMID:9357597
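
    As a rough illustration of the exploratory-factor-analysis step described above, the sketch below runs scikit-learn's FactorAnalysis over a standardized predictor matrix. The column names and data are synthetic stand-ins; the abstract does not publish the warehouse's actual schema.

    ```python
    # Sketch of the exploratory-factor-analysis step on a synthetic stand-in
    # for the clinical warehouse extract (column set and data are hypothetical).
    import numpy as np
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    cols = ["maternal_age", "prior_preterm", "bmi", "hypertension", "smoking"]
    data = pd.DataFrame(rng.normal(size=(3902, len(cols))), columns=cols)

    # Standardize, then look for a small number of latent factors.
    X = StandardScaler().fit_transform(data)
    fa = FactorAnalysis(n_components=3, random_state=0).fit(X)

    # Loadings show which observed variables move together under each factor;
    # investigators would inspect these to pick factors worth exploring.
    loadings = pd.DataFrame(fa.components_.T, index=cols,
                            columns=[f"factor{i+1}" for i in range(3)])
    print(loadings.round(2))
    ```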

  4. The Latest Mars Climate Database (MCD v5.1)

    NASA Astrophysics Data System (ADS)

    Millour, Ehouarn; Forget, Francois; Spiga, Aymeric; Navarro, Thomas; Madeleine, Jean-Baptiste; Pottier, Alizée; Montabone, Luca; Kerber, Laura; Lefèvre, Franck; Montmessin, Franck; Chaufray, Jean-Yves; López-Valverde, Miguel; González-Galindo, Francisco; Lewis, Stephen; Read, Peter; Huot, Jean-Paul; Desjean, Marie-Christine; the MCD/GCM development Team

    2014-05-01

    For many years, several teams around the world have developed GCMs (General Circulation Models or Global Climate Models) to simulate the environment on Mars. The GCM developed at the Laboratoire de Météorologie Dynamique in collaboration with several teams in Europe (LATMOS, France; University of Oxford; The Open University; the Instituto de Astrofisica de Andalucia), and with the support of ESA and CNES, is currently used for many applications. Its outputs have also regularly been compiled to build a Mars Climate Database, a freely available tool useful for the scientific and engineering communities. The Mars Climate Database (MCD) has over the years been distributed to more than 150 teams around the world. Following the recent improvements in the GCM, a new series of reference simulations have been run and compiled into a new version (version 5.1) of the Mars Climate Database, released in the first half of 2014. To summarize, MCD v5.1 provides: - Climatologies over a series of dust scenarios: standard year, cold (i.e., low dust), warm (i.e., dusty atmosphere) and dust storm, all topped by various cases of Extreme UV solar inputs (low, mean or maximum). These scenarios differ from those of previous versions of the MCD (version 4.x) as they have been derived from a home-made, instrument-derived (TES, THEMIS, MCS, MERs) dust climatology of the last 8 Martian years. - Mean values and statistics of the main meteorological variables (atmospheric temperature, density, pressure and winds), as well as surface pressure and temperature, CO2 ice cover, thermal and solar radiative fluxes, dust column opacity and mixing ratio, [H2O] vapor and ice columns, and concentrations of many species: [CO], [O2], [O], [N2], [H2], [O3], ... - A high-resolution mode which combines high-resolution (32 pixels/degree) MOLA topography records and Viking Lander 1 pressure records with raw lower-resolution GCM results to yield, within the restrictions of the procedure, high-resolution values of atmospheric variables. - The possibility to reconstruct realistic conditions by combining the provided climatology with additional large-scale and small-scale perturbation schemes. At EGU, we will report on the latest improvements in the Mars Climate Database, with comparisons against available measurements from orbit (e.g., TES, MCS) or landers (Viking, Phoenix, MSL).

  5. Nosql for Storage and Retrieval of Large LIDAR Data Collections

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.

    2015-08-01

    Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea of a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline. It therefore makes sense to preserve this split. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) in a separate document. The document stores the metadata of the tile. The actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows for queries on the attributes of the document. Notably, MongoDB also supports spatial queries. Hence we can perform spatial queries on the bounding boxes of the LiDAR tiles. The speed of inserting and retrieving files in a cloud-based database is compared to that of a native file system and cloud storage.
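
    A minimal sketch of the tile-per-document pattern described above, using pymongo and GridFS; the host, database, collection, and field names are illustrative assumptions, not the authors' schema.

    ```python
    # One MongoDB document per LiDAR tile: raw file in GridFS, metadata plus
    # a GeoJSON bounding box in a collection with a 2dsphere spatial index.
    import gridfs
    from pymongo import MongoClient, GEOSPHERE

    client = MongoClient("mongodb://localhost:27017")
    db = client["lidar"]
    fs = gridfs.GridFS(db)

    def insert_tile(path, bbox_ring):
        """Store one tile file plus its metadata document."""
        with open(path, "rb") as f:
            file_id = fs.put(f, filename=path)
        db.tiles.insert_one({
            "filename": path,
            "gridfs_id": file_id,
            "bbox": {"type": "Polygon", "coordinates": [bbox_ring]},
        })

    db.tiles.create_index([("bbox", GEOSPHERE)])
    # e.g. insert_tile("tile_001.laz", [[0.00, 51.40], [0.01, 51.40],
    #                                   [0.01, 51.41], [0.00, 51.41],
    #                                   [0.00, 51.40]])

    # Spatial query: all tiles whose bounding box intersects a search area.
    area = {"type": "Polygon",
            "coordinates": [[[0.0, 51.4], [0.1, 51.4], [0.1, 51.5],
                             [0.0, 51.5], [0.0, 51.4]]]}
    for tile in db.tiles.find({"bbox": {"$geoIntersects": {"$geometry": area}}}):
        print(tile["filename"])
    ```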

  6. What have we learned in minimally invasive colorectal surgery from NSQIP and NIS large databases? A systematic review.

    PubMed

    Batista Rodríguez, Gabriela; Balla, Andrea; Corradetti, Santiago; Martinez, Carmen; Hernández, Pilar; Bollo, Jesús; Targarona, Eduard M

    2018-06-01

    "Big data" refers to large amount of dataset. Those large databases are useful in many areas, including healthcare. The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) and the National Inpatient Sample (NIS) are big databases that were developed in the USA in order to record surgical outcomes. The aim of the present systematic review is to evaluate the type and clinical impact of the information retrieved through NISQP and NIS big database articles focused on laparoscopic colorectal surgery. A systematic review was conducted using The Meta-Analysis Of Observational Studies in Epidemiology (MOOSE) guidelines. The research was carried out on PubMed database and revealed 350 published papers. Outcomes of articles in which laparoscopic colorectal surgery was the primary aim were analyzed. Fifty-five studies, published between 2007 and February 2017, were included. Articles included were categorized in groups according to the main topic as: outcomes related to surgical technique comparisons, morbidity and perioperatory results, specific disease-related outcomes, sociodemographic disparities, and academic training impact. NSQIP and NIS databases are just the tip of the iceberg for the potential application of Big Data technology and analysis in MIS. Information obtained through big data is useful and could be considered as external validation in those situations where a significant evidence-based medicine exists; also, those databases establish benchmarks to measure the quality of patient care. Data retrieved helps to inform decision-making and improve healthcare delivery.

  7. Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.

    ERIC Educational Resources Information Center

    Gutmann, Myron P.; And Others

    1989-01-01

    Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)

  8. Incorporating Aquatic Interspecies Toxicity Estimates into Large Databases: Model Evaluations and Data Gains

    EPA Science Inventory

    The Chemical Aquatic Fate and Effects (CAFE) database, developed by NOAA’s Emergency Response Division (ERD), is a centralized data repository that allows for unrestricted access to fate and effects data. While this database was originally designed to help support decisions...

  9. Evolving trends in aortic valve replacement: A statewide experience.

    PubMed

    Kim, Karen M; Shannon, Francis; Paone, Gaetano; Lall, Shelly; Batra, Sanjay; Boeve, Theodore; DeLucia, Alphonse; Patel, Himanshu J; Theurer, Patricia F; He, Chang; Clark, Melissa J; Sultan, Ibrahim; Deeb, George Michael; Prager, Richard L

    2018-06-17

    Transcatheter aortic valve replacement (TAVR) is an alternative to surgical aortic valve replacement (SAVR) for the treatment of aortic stenosis in patients at intermediate, high, and extreme risk for mortality from SAVR. We examined recent trends in aortic valve replacement (AVR) in Michigan. The Michigan Society of Thoracic and Cardiovascular Surgeons Quality Collaborative (MSTCVS-QC) database was used to determine the number of SAVR and TAVR cases performed from January 2012 through June 2017. Patients were divided into low, intermediate, high, and extreme risk groups based on STS predicted risk of mortality (PROM). TAVR patients in the MSTCVS-QC database were also matched with those in the Transcatheter Valve Therapy Registry to determine their Heart Team-designated risk category. During the study period 9517 SAVR and 4470 TAVR cases were performed. Total annual AVR volume increased by 40.0% (from 2086 to 2920), with a 13.3% decrease in number of SAVR cases (from 1892 to 1640) and a 560% increase in number of TAVR cases (from 194 to 1280). Greater than 90% of SAVR patients had PROM ≤8%. While >70% of TAVR patients had PROM ≤ 8%, they were mostly designated as high or extreme risk by a Heart Team. During the study period, SAVR volume gradually declined and TAVR volume dramatically increased. This was mostly due to a new group of patients with lower STS PROM who were designated as higher risk by a Heart Team due to characteristics not completely captured by the STS PROM score. © 2018 Wiley Periodicals, Inc.

  10. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high-performance computational resources to bear on this task. Our research group built a novel high-performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
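
    The Cray XMT pipeline itself is not public, but one of the analyses named above, connected components, can be illustrated at toy scale: treat the RDF graph's subjects and objects as nodes and each triple as an edge. The sketch below does this with networkx on a hypothetical in-memory triple list.

    ```python
    # Connected components over an RDF triple list, as a small-scale stand-in
    # for the paper's high-performance triplestore analysis.
    import networkx as nx

    # Hypothetical (subject, predicate, object) triples.
    triples = [
        ("ex:alice", "foaf:knows", "ex:bob"),
        ("ex:bob",   "foaf:knows", "ex:carol"),
        ("ex:dave",  "rdf:type",   "foaf:Person"),
    ]

    g = nx.Graph()
    for s, p, o in triples:
        g.add_edge(s, o, predicate=p)  # treat the RDF graph as undirected

    components = list(nx.connected_components(g))
    print(f"{len(components)} connected components")
    for comp in components:
        print(sorted(comp))
    ```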

  11. A rational decision rule with extreme events.

    PubMed

    Basili, Marcello

    2006-12-01

    Risks induced by extreme events are characterized by small or ambiguous probabilities, catastrophic losses, or windfall gains. Through a new functional that mimics the restricted Bayes-Hurwicz criterion within the Choquet expected utility approach, it is possible to represent the decision-maker's behavior when facing both risky (large and reliable probability) and extreme (small or ambiguous probability) events. A new formalization of the precautionary principle (PP) is shown, and a new functional, which encompasses both extreme outcomes and the expectation of all possible results for every act, is proposed.
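
    The abstract does not reproduce the functional itself. Purely as an illustrative reconstruction (not Basili's exact formula), a restricted Bayes-Hurwicz-style evaluation of an act f might mix expected utility on reliable events with a pessimism-weighted min/max over the set of extreme, ambiguously probable states:

    ```latex
    % Illustrative reconstruction, not the paper's exact functional.
    % E: set of extreme states; \alpha: pessimism weight; \lambda: weight on extremes.
    V(f) = \lambda \left[ \alpha \min_{s \in E} u\big(f(s)\big)
         + (1-\alpha) \max_{s \in E} u\big(f(s)\big) \right]
         + (1-\lambda)\, \mathbb{E}_{p}\!\left[ u(f) \right]
    ```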

  12. Effects of climate extremes on the terrestrial carbon cycle: concepts, processes and potential future impacts

    PubMed Central

    Frank, Dorothea; Reichstein, Markus; Bahn, Michael; Thonicke, Kirsten; Frank, David; Mahecha, Miguel D; Smith, Pete; van der Velde, Marijn; Vicca, Sara; Babst, Flurin; Beer, Christian; Buchmann, Nina; Canadell, Josep G; Ciais, Philippe; Cramer, Wolfgang; Ibrom, Andreas; Miglietta, Franco; Poulter, Ben; Rammig, Anja; Seneviratne, Sonia I; Walz, Ariane; Wattenbach, Martin; Zavala, Miguel A; Zscheischler, Jakob

    2015-01-01

    Extreme droughts, heat waves, frosts, precipitation, wind storms and other climate extremes may impact the structure, composition and functioning of terrestrial ecosystems, and thus carbon cycling and its feedbacks to the climate system. Yet, the interconnected avenues through which climate extremes drive ecological and physiological processes and alter the carbon balance are poorly understood. Here, we review the literature on carbon cycle relevant responses of ecosystems to extreme climatic events. Given that impacts of climate extremes are considered disturbances, we assume the respective general disturbance-induced mechanisms and processes to also operate in an extreme context. The paucity of well-defined studies currently renders a quantitative meta-analysis impossible, but permits us to develop a deductive framework for identifying the main mechanisms (and coupling thereof) through which climate extremes may act on the carbon cycle. We find that ecosystem responses can exceed the duration of the climate impacts via lagged effects on the carbon cycle. The expected regional impacts of future climate extremes will depend on changes in the probability and severity of their occurrence, on the compound effects and timing of different climate extremes, and on the vulnerability of each land-cover type modulated by management. Although processes and sensitivities differ among biomes, based on expert opinion, we expect forests to exhibit the largest net effect of extremes due to their large carbon pools and fluxes, potentially large indirect and lagged impacts, and long recovery time to regain previous stocks. At the global scale, we presume that droughts have the strongest and most widespread effects on terrestrial carbon cycling. Comparing impacts of climate extremes identified via remote sensing vs. ground-based observational case studies reveals that many regions in the (sub-)tropics are understudied. Hence, regional investigations are needed to allow a global upscaling of the impacts of climate extremes on global carbon–climate feedbacks. PMID:25752680

  13. Effects of climate extremes on the terrestrial carbon cycle: concepts, processes and potential future impacts.

    PubMed

    Frank, Dorothea; Reichstein, Markus; Bahn, Michael; Thonicke, Kirsten; Frank, David; Mahecha, Miguel D; Smith, Pete; van der Velde, Marijn; Vicca, Sara; Babst, Flurin; Beer, Christian; Buchmann, Nina; Canadell, Josep G; Ciais, Philippe; Cramer, Wolfgang; Ibrom, Andreas; Miglietta, Franco; Poulter, Ben; Rammig, Anja; Seneviratne, Sonia I; Walz, Ariane; Wattenbach, Martin; Zavala, Miguel A; Zscheischler, Jakob

    2015-08-01

    Extreme droughts, heat waves, frosts, precipitation, wind storms and other climate extremes may impact the structure, composition and functioning of terrestrial ecosystems, and thus carbon cycling and its feedbacks to the climate system. Yet, the interconnected avenues through which climate extremes drive ecological and physiological processes and alter the carbon balance are poorly understood. Here, we review the literature on carbon cycle relevant responses of ecosystems to extreme climatic events. Given that impacts of climate extremes are considered disturbances, we assume the respective general disturbance-induced mechanisms and processes to also operate in an extreme context. The paucity of well-defined studies currently renders a quantitative meta-analysis impossible, but permits us to develop a deductive framework for identifying the main mechanisms (and coupling thereof) through which climate extremes may act on the carbon cycle. We find that ecosystem responses can exceed the duration of the climate impacts via lagged effects on the carbon cycle. The expected regional impacts of future climate extremes will depend on changes in the probability and severity of their occurrence, on the compound effects and timing of different climate extremes, and on the vulnerability of each land-cover type modulated by management. Although processes and sensitivities differ among biomes, based on expert opinion, we expect forests to exhibit the largest net effect of extremes due to their large carbon pools and fluxes, potentially large indirect and lagged impacts, and long recovery time to regain previous stocks. At the global scale, we presume that droughts have the strongest and most widespread effects on terrestrial carbon cycling. Comparing impacts of climate extremes identified via remote sensing vs. ground-based observational case studies reveals that many regions in the (sub-)tropics are understudied. Hence, regional investigations are needed to allow a global upscaling of the impacts of climate extremes on global carbon-climate feedbacks. © 2015 The Authors. Global Change Biology published by John Wiley & Sons Ltd.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gang

    Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed both to the lack of spatial resolution in the models and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach and with recently developed measures, the finite-amplitude Wave Activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.

  15. Routine health insurance data for scientific research: potential and limitations of the Agis Health Database.

    PubMed

    Smeets, Hugo M; de Wit, Niek J; Hoes, Arno W

    2011-04-01

    Observational studies performed within routine health care databases have the advantage of their large size and, when the aim is to assess the effect of interventions, can complement randomized controlled trials, which usually draw small samples from experimental situations. Institutional Health Insurance Databases (HIDs) are attractive for research because of their large size, their longitudinal perspective, and their practice-based information. As they are based on financial reimbursement, the information is generally reliable. The database of one of the major insurance companies in the Netherlands, the Agis Health Database (AHD), is described in detail. Whether the AHD data sets meet the specific requirements to conduct several types of clinical studies is discussed according to the classification of the four different types of clinical research, that is, diagnostic, etiologic, prognostic, and intervention research. The potential of the AHD for these various types of research is illustrated using examples of studies recently conducted in the AHD. HIDs such as the AHD offer large potential for several types of clinical research, in particular etiologic and intervention studies, but at present the lack of detailed clinical information is an important limitation. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Image-based query-by-example for big databases of galaxy images

    NASA Astrophysics Data System (ADS)

    Shamir, Lior; Kuminski, Evan

    2017-01-01

    Very large astronomical databases containing millions or even billions of galaxy images have become increasingly important tools in astronomy research. However, in many cases their very large size makes it difficult to analyze these data manually, reinforcing the need for computer algorithms that can automate the data analysis process. An example of such a task is the identification of galaxies of a certain morphology of interest. For instance, if a rare galaxy is identified, it is reasonable to expect that more galaxies of similar morphology exist in the database, but it is virtually impossible to search these databases manually to identify such galaxies. Here we describe computer vision and pattern recognition methodology that receives a galaxy image as an input and automatically searches a large dataset of galaxies to return a list of galaxies that are visually similar to the query galaxy. The returned list is not necessarily complete or clean, but it provides a substantial reduction of the original database into a smaller dataset in which the frequency of objects visually similar to the query galaxy is much higher. Experimental results show that the algorithm can identify rare galaxies such as ring galaxies among datasets of 10,000 astronomical objects.
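
    The query-by-example loop above reduces to "describe every image numerically, then rank the database by distance to the query". A toy sketch follows; the histogram descriptor is a placeholder assumption standing in for the paper's richer feature set, and in practice the descriptors would be precomputed offline.

    ```python
    # Rank a database of galaxy images by visual similarity to a query image.
    import numpy as np

    def features(image):
        """Toy descriptor: a normalized 32-bin intensity histogram."""
        hist, _ = np.histogram(image, bins=32, range=(0.0, 1.0))
        return hist / max(hist.sum(), 1)

    def most_similar(query_image, database_images, k=10):
        q = features(query_image)
        dists = [np.linalg.norm(q - features(img)) for img in database_images]
        return np.argsort(dists)[:k]          # indices of the k best matches

    # Usage with random stand-in images:
    rng = np.random.default_rng(0)
    db = [rng.random((64, 64)) for _ in range(1000)]
    print(most_similar(db[42], db, k=5))      # index 42 should rank first
    ```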

  17. Where can cone penetrometer technology be applied? Development of a map of Europe regarding the soil penetrability.

    PubMed

    Fleischer, Matthias; van Ree, Derk; Leven, Carsten

    2014-01-01

    Over the past decades, significant efforts have been invested in the development of push-in technology for site characterization and monitoring for geotechnical and environmental purposes, especially in the Netherlands and Germany. These technologies enable faster, cheaper collection of more reliable subsurface data. However, to make the most of the technology, from both a development and an implementation point of view, it is necessary to have an overview of the areas suitable for its application. Such an overview is missing and cannot simply be read from existing maps and material. This paper describes the development of a map showing the feasibility or applicability of Direct Push/Cone Penetrometer Technology (DPT/CPT) in Europe, which depends on the subsurface and its widely varying properties throughout Europe. Subsurface penetrability depends on a range of factors that have not been mapped directly and cannot easily be inferred from existing databases; the maximum reachable depth is of particular interest. Among others, it mainly depends on the geology, the soil mechanical properties, the type of equipment used, as well as soil-forming processes. This study starts by looking at different geological databases available at the European scale. Next, a scheme is developed linking mapped geological properties to geotechnical properties to determine basic penetrability categories. From this, a map of soil penetrability is developed and presented. Validating the output by performing field tests was beyond the scope of this study, but for the Netherlands this map has been compared against a database containing actual cone penetrometer depth data to look for contradictory results that would negate the approach. For the largest part of Europe, the map clearly shows that there is a much wider potential for the application of Direct Push Technology than is currently exploited. The study also shows that there is a lack of large-scale databases that contain depth-resolved data as well as soil mechanical and physical properties that can be used for engineering purposes in relation to the subsurface.

  18. The Mars Climate Database (MCD version 5.3)

    NASA Astrophysics Data System (ADS)

    Millour, Ehouarn; Forget, Francois; Spiga, Aymeric; Vals, Margaux; Zakharov, Vladimir; Navarro, Thomas; Montabone, Luca; Lefevre, Franck; Montmessin, Franck; Chaufray, Jean-Yves; Lopez-Valverde, Miguel; Gonzalez-Galindo, Francisco; Lewis, Stephen; Read, Peter; Desjean, Marie-Christine; MCD/GCM Development Team

    2017-04-01

    Our Global Circulation Model (GCM) simulates the atmospheric environment of Mars. It is developed at LMD (Laboratoire de Meteorologie Dynamique, Paris, France) in close collaboration with several teams in Europe (LATMOS, France; University of Oxford; The Open University; the Instituto de Astrofisica de Andalucia), and with the support of ESA (European Space Agency) and CNES (French Space Agency). GCM outputs are compiled to build a Mars Climate Database, a freely available tool useful for the scientific and engineering communities. The Mars Climate Database (MCD) has over the years been distributed to more than 300 teams around the world. The latest series of reference simulations have been compiled in a new version (v5.3) of the MCD, released in the first half of 2017. To summarize, MCD v5.3 provides: - Climatologies over a series of synthetic dust scenarios: standard (climatology) year, cold (i.e., low dust), warm (i.e., dusty atmosphere) and dust storm, all topped by various cases of Extreme UV solar inputs (low, mean or maximum). These scenarios have been derived from a home-made, instrument-derived (TES, THEMIS, MCS, MERs) dust climatology of the last 8 Martian years. The MCD also provides simulation outputs (MY24-31) representative of these actual years. - Mean values and statistics of the main meteorological variables (atmospheric temperature, density, pressure and winds), as well as surface pressure and temperature, CO2 ice cover, thermal and solar radiative fluxes, dust column opacity and mixing ratio, [H2O] vapor and ice columns, and concentrations of many species: [CO], [O2], [O], [N2], [H2], [O3], ... - A high-resolution mode which combines high-resolution (32 pixels/degree) MOLA topography records and Viking Lander 1 pressure records with raw lower-resolution GCM results to yield, within the restrictions of the procedure, high-resolution values of atmospheric variables. - The possibility to reconstruct realistic conditions by combining the provided climatology with additional large-scale and small-scale perturbation schemes. At EGU, we will report on the latest improvements in the Mars Climate Database, with comparisons against available measurements from orbit (e.g., TES, MCS) and landers (Viking, Phoenix, MSL).

  19. Reduced Hospital Mortality With Surgical Ligation of Patent Ductus Arteriosus in Premature, Extremely Low Birth Weight Infants: A Propensity Score-matched Outcome Study.

    PubMed

    Tashiro, Jun; Perez, Eduardo A; Sola, Juan E

    2016-03-01

    To evaluate outcomes after surgical ligation (SL) of patent ductus arteriosus (PDA) in premature, extremely low birth weight (ELBW) infants. Optimal management of PDA in this specialized population remains undefined. Currently, surgical therapy is largely reserved for infants failing medical management. To date, a large-scale, risk-matched population-based study has not been performed to evaluate differences in mortality and resource utilization. Data on identified premature (<37 weeks) and ELBW (<1000  g) infants with PDA (International Classification of Diseases, 9th revision, Clinical Modification, 747.0) and respiratory distress (769) were obtained from Kids' Inpatient Database (2003-2009). Overall, 12,470 cases were identified, with 3008 undergoing SL. Propensity score-matched analysis of 1620 SL versus 1584 non-SL found reduced mortality (15% vs 26%) and more routine disposition (48% vs 41%) for SL (P < 0.001). SL had longer length of stay and higher total cost (P < 0.001). On multivariate analysis, SL mortality predictors were necrotizing enterocolitis (NEC; surgical odds ratio, 5.95; medical odds ratio, 4.42) and sepsis (3.43) (P < 0.006). Length of stay increased with bronchopulmonary dysplasia (BPD; 1.77), whereas total cost increased with surgical NEC (1.82) and sepsis (1.26) (P < 0.04). Non-SL mortality predictors were NEC (surgical, 76.3; medical, 6.17), sepsis (2.66), and intraventricular hemorrhage (1.97) (P < 0.005). Length of stay increased with BPD (2.92) and NEC (surgical, 2.04; medical, 1.28) (P < 0.03). Total cost increased with surgical NEC (2.06), medical NEC (1.57), sepsis (1.43), and BPD (1.30) (P < 0.001). Propensity score-matched analysis demonstrates reduced mortality in premature/ELBW infants with SL for PDA. NEC and sepsis are predictors of mortality and resource utilization.

  20. Food Price Volatility and Decadal Climate Variability

    NASA Astrophysics Data System (ADS)

    Brown, M. E.

    2013-12-01

    The agriculture system is under pressure to increase production every year as global population expands and more people move from a diet mostly made up of grains, to one with more meat, dairy and processed foods. Weather shocks and large changes in international commodity prices in the last decade have increased pressure on local food prices. This paper will review several studies that link climate variability as measured with satellite remote sensing to food price dynamics in 36 developing countries where local monthly food price data is available. The focus of the research is to understand how weather and climate, as measured by variations in the growing season using satellite remote sensing, has affected agricultural production, food prices and access to food in agricultural societies. Economies are vulnerable to extreme weather at multiple levels. Subsistence small holders who hold livestock and consume much of the food they produce are vulnerable to food production variability. The broader society, however, is also vulnerable to extreme weather because of the secondary effects on market functioning, resource availability, and large-scale impacts on employment in trading, trucking and wage labor that are caused by weather-related shocks. Food price variability captures many of these broad impacts and can be used to diagnose weather-related vulnerability across multiple sectors. The paper will trace these connections using market-level data and analysis. The context of the analysis is the humanitarian aid community, using the guidance of the USAID Famine Early Warning Systems Network and the United Nation's World Food Program in their response to food security crises. These organizations have worked over the past three decades to provide baseline information on food production through satellite remote sensing data and agricultural yield models, as well as assessments of food access through a food price database. Econometric models and spatial analysis are used to describe the connection between shocks and food prices, and to demonstrate the importance of these metrics in overall outcomes in food-insecure communities.

  1. Incidence and epidemiology of combat injuries sustained during "the surge" portion of operation Iraqi Freedom by a U.S. Army brigade combat team.

    PubMed

    Belmont, Philip J; Goodman, Gens P; Zacchilli, Michael; Posner, Matthew; Evans, Clifford; Owens, Brett D

    2010-01-01

    A prospective, longitudinal analysis of injuries sustained by a large combat-deployed maneuver unit has not been previously performed. A detailed description of the combat casualty care statistics, distribution of wounds, and mechanisms of injury incurred by a U.S. Army Brigade Combat Team during "The Surge" phase of Operation Iraqi Freedom was performed using a centralized casualty database and an electronic medical record system. Among the 4,122 soldiers deployed, there were 500 combat wounds in 390 combat casualties. The combat casualty rate for the Brigade Combat Team was 75.7 per 1,000 soldier combat-years. The percentage killed in action (%KIA) was 22.1%, and the percentage who died of wounds was 3.2%. The distribution of these wounds was as follows: head/neck 36.2%, thorax 7.5%, abdomen 6.9%, and extremities 49.4%. The percentage of combat wounds showed a significant increase in the head/neck region (p < 0.0001) and a decrease in the extremities (p < 0.03) compared with data from World War II, Korea, and Vietnam. The percentage of thoracic wounds (p < 0.03) was significantly less than historical data from World War II and Vietnam. The %KIA was significantly greater in those soldiers injured by an explosion (26.3%) compared with those injured by a gunshot wound (4.6%; p = 0.003). Improvised explosive devices accounted for 77.7% of all combat wounds. There was a significantly higher proportion of head/neck wounds compared with previous U.S. conflicts. The 22.1% KIA was comparable with previous U.S. conflicts despite improvements in individual/vehicular body armor and is largely attributable to the lethality of improvised explosive devices. The lethality of a gunshot wound in Operation Iraqi Freedom has decreased to 4.6% with the use of individual body armor.

  2. Constraining relationships between rainfall and landsliding with satellite derived rainfall measurements and landslide inventories.

    NASA Astrophysics Data System (ADS)

    Marc, Odin; Malet, Jean-Philippe; Stumpf, Andre; Gosset, Marielle

    2017-04-01

    In mountainous and hilly regions, landslides are an important source of damage and fatalities. Landsliding correlates with extreme rainfall events and may increase with climate change. Still, how precipitation drives landsliding at regional scales is poorly understood quantitatively, in part because simultaneously constraining landsliding and rainfall across large areas is challenging. By combining optical images acquired from satellite observation platforms and rainfall measurements from satellite constellations, we are building a database of landslide events caused by single storm events. We present results on storm-induced landslides from Brazil, Taiwan, Micronesia, Central America, Europe and the USA. We present scaling laws between rainfall metrics derived by satellites (total rainfall, mean intensity, antecedent rainfall, ...) and statistical descriptors of landslide events (total area and volume, size distribution, mean runout, ...). Total rainfall seems to be the most important parameter, driving a non-linear increase in total landslide number, area and volume. The maximum size of bedrock landslides correlates with the total number of landslides, and thus with total rainfall, within the limits of available topographic relief. In contrast, the power-law scaling exponent of the size distribution, controlling the relative abundance of small and large landslides, appears rather independent of the rainfall metrics (intensity, duration and total rainfall). These scaling laws seem to explain both the intra-storm pattern of landsliding, at the scale of satellite rainfall measurements (about 25 km x 25 km), and the different impacts observed for various storms. Where possible, we evaluate the limits of standard rainfall products (TRMM, GPM, GSMaP) by comparing them to in-situ data. Then we discuss how slope distribution and other geomorphic factors (lithology, soil presence, ...) modulate these scaling laws. Such scaling laws at the basin scale, based only on a-priori information (topography, lithology, ...) and rainfall metrics available from meteorological forecasts, may allow better anticipation and mitigation of landsliding associated with extreme rainfall events.
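
    One of the statistical descriptors above, the power-law exponent of the landslide size distribution, can be estimated with the standard continuous maximum-likelihood estimator. The sketch below assumes a known lower cutoff x_min, which in practice must also be estimated from the data.

    ```python
    # MLE for a continuous power-law exponent: alpha = 1 + n / sum(ln(x/xmin)).
    import numpy as np

    def powerlaw_exponent(areas, xmin):
        x = np.asarray(areas, dtype=float)
        x = x[x >= xmin]
        n = x.size
        alpha = 1.0 + n / np.sum(np.log(x / xmin))
        stderr = (alpha - 1.0) / np.sqrt(n)   # asymptotic standard error
        return alpha, stderr

    # Synthetic check: sample from a known power law (alpha = 2.4) by
    # inverse-CDF sampling, then recover the exponent.
    rng = np.random.default_rng(1)
    u = rng.random(5000)
    samples = 100.0 * (1.0 - u) ** (-1.0 / (2.4 - 1.0))
    print(powerlaw_exponent(samples, xmin=100.0))   # ~ (2.4, 0.02)
    ```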

  3. Large uncertainties in observed daily precipitation extremes over land

    NASA Astrophysics Data System (ADS)

    Herold, Nicholas; Behrangi, Ali; Alexander, Lisa V.

    2017-01-01

    We explore uncertainties in observed daily precipitation extremes over the terrestrial tropics and subtropics (50°S-50°N) based on five commonly used products: the Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) dataset, the Global Precipitation Climatology Centre-Full Data Daily (GPCC-FDD) dataset, the Tropical Rainfall Measuring Mission (TRMM) multi-satellite research product (T3B42 v7), the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), and the Global Precipitation Climatology Project's One-Degree Daily (GPCP-1DD) dataset. We use the precipitation indices R10mm and Rx1day, developed by the Expert Team on Climate Change Detection and Indices, to explore the behavior of "moderate" and "extreme" extremes, respectively. In order to assess the sensitivity of extreme precipitation to different grid sizes we perform our calculations on four common spatial resolutions (0.25° × 0.25°, 1° × 1°, 2.5° × 2.5°, and 3.75° × 2.5°). The impact of the chosen "order of operation" in calculating these indices is also determined. Our results show that moderate extremes are relatively insensitive to product and resolution choice, while extreme extremes can be very sensitive. For example, at 0.25° × 0.25° quasi-global mean Rx1day values vary from 37 mm in PERSIANN-CDR to 62 mm in T3B42. We find that the interproduct spread becomes prominent at resolutions of 1° × 1° and finer, thus establishing a minimum effective resolution at which observational products agree. Without improvements in interproduct spread, these exceedingly large observational uncertainties at high spatial resolution may limit the usefulness of model evaluations. As has been found previously, resolution sensitivity can be largely eliminated by applying an order of operation where indices are calculated prior to regridding. However, this approach is not appropriate when true area averages are desired (e.g., for model evaluations).
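
    For concreteness, the two ETCCDI indices used above reduce to simple operations on a daily series at each grid cell, as in the sketch below (with synthetic data); the "order of operation" question is then whether such indices are computed before or after regridding the daily fields.

    ```python
    # R10mm and Rx1day for one grid cell's daily precipitation series (mm/day).
    import numpy as np

    def r10mm(daily_precip_mm):
        """Number of 'heavy precipitation' days (>= 10 mm)."""
        return int(np.sum(np.asarray(daily_precip_mm) >= 10.0))

    def rx1day(daily_precip_mm):
        """Maximum one-day precipitation total."""
        return float(np.max(daily_precip_mm))

    # One synthetic year of daily data.
    rng = np.random.default_rng(0)
    year = rng.gamma(shape=0.4, scale=6.0, size=365)
    print(r10mm(year), rx1day(year))
    ```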

  4. DBMap: a TreeMap-based framework for data navigation and visualization of brain research registry

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Zhang, Hong; Tjandra, Donny; Wong, Stephen T. C.

    2003-05-01

    The purpose of this study is to investigate and apply a new, intuitive and space-conscious visualization framework to facilitate efficient data presentation and exploration of large-scale data warehouses. We have implemented the DBMap framework for the UCSF Brain Research Registry. Such a novel utility would facilitate medical specialists and clinical researchers in better exploring and evaluating a number of attributes organized in the brain research registry. The current UCSF Brain Research Registry consists of a federation of disease-oriented database modules, including Epilepsy, Brain Tumor, Intracerebral Hemorrphage, and CJD (Creuzfeld-Jacob disease). These database modules organize large volumes of imaging and non-imaging data to support Web-based clinical research. While the data warehouse supports general information retrieval and analysis, there lacks an effective way to visualize and present the voluminous and complex data stored. This study investigates whether the TreeMap algorithm can be adapted to display and navigate categorical biomedical data warehouse or registry. TreeMap is a space constrained graphical representation of large hierarchical data sets, mapped to a matrix of rectangles, whose size and color represent interested database fields. It allows the display of a large amount of numerical and categorical information in limited real estate of computer screen with an intuitive user interface. The paper will describe, DBMap, the proposed new data visualization framework for large biomedical databases. Built upon XML, Java and JDBC technologies, the prototype system includes a set of software modules that reside in the application server tier and provide interface to backend database tier and front-end Web tier of the brain registry.

  5. Increasing precipitation volatility in twenty-first-century California

    NASA Astrophysics Data System (ADS)

    Swain, Daniel L.; Langenbrunner, Baird; Neelin, J. David; Hall, Alex

    2018-05-01

    Mediterranean climate regimes are particularly susceptible to rapid shifts between drought and flood—of which, California's rapid transition from record multi-year dryness between 2012 and 2016 to extreme wetness during the 2016-2017 winter provides a dramatic example. Projected future changes in such dry-to-wet events, however, remain inadequately quantified, which we investigate here using the Community Earth System Model Large Ensemble of climate model simulations. Anthropogenic forcing is found to yield large twenty-first-century increases in the frequency of wet extremes, including a more than threefold increase in sub-seasonal events comparable to California's `Great Flood of 1862'. Smaller but statistically robust increases in dry extremes are also apparent. As a consequence, a 25% to 100% increase in extreme dry-to-wet precipitation events is projected, despite only modest changes in mean precipitation. Such hydrological cycle intensification would seriously challenge California's existing water storage, conveyance and flood control infrastructure.

  6. Relational databases for rare disease study: application to vascular anomalies.

    PubMed

    Perkins, Jonathan A; Coltrera, Marc D

    2008-01-01

    To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and improvement of treatment are limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with its treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent-site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
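
    A minimal sketch of the lesion-centered design described above, using SQLite; the table and column names are illustrative, not the task force's actual data set. Because treatments hang off lesions rather than subjects, treated and untreated lesions fall out of a single join.

    ```python
    # Subject -> lesion -> treatment schema: one subject, many lesions, each
    # lesion with its own treatment records.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE subject (
        subject_id   INTEGER PRIMARY KEY,
        birth_year   INTEGER,
        sex          TEXT
    );
    CREATE TABLE lesion (
        lesion_id    INTEGER PRIMARY KEY,
        subject_id   INTEGER NOT NULL REFERENCES subject(subject_id),
        diagnosis    TEXT,
        site         TEXT
    );
    CREATE TABLE treatment (
        treatment_id INTEGER PRIMARY KEY,
        lesion_id    INTEGER NOT NULL REFERENCES lesion(lesion_id),
        modality     TEXT,
        outcome      TEXT,
        treated_on   DATE
    );
    """)

    # Untreated lesions (natural course) are those with no treatment rows.
    untreated = conn.execute("""
        SELECT l.lesion_id FROM lesion l
        LEFT JOIN treatment t ON t.lesion_id = l.lesion_id
        WHERE t.treatment_id IS NULL
    """).fetchall()
    print(len(untreated))
    ```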

  7. Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.

    ERIC Educational Resources Information Center

    Conlon, Sumali Pin-Ngern; And Others

    1993-01-01

    Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)

  8. NREL Opens Large Database of Inorganic Thin-Film Materials | News | NREL

    Science.gov Websites

    April 3, 2018. An extensive experimental database of inorganic thin-film materials developed by the National Renewable Energy Laboratory (NREL), the High Throughput Experimental Materials (HTEM) database, is now publicly available.

  9. Active Exploration of Large 3D Model Repositories.

    PubMed

    Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min

    2015-12-01

    With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.
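
    A toy sketch of the like/dislike feedback loop described above: with model-to-model distances precomputed offline (here a random symmetric matrix stands in for the paper's sparse metric), each round scores unlabeled models by their kernelized proximity to liked minus disliked examples.

    ```python
    # Re-rank a 3D model repository from one round of like/dislike feedback.
    import numpy as np

    rng = np.random.default_rng(0)
    n_models = 500
    D = rng.random((n_models, n_models))
    D = (D + D.T) / 2.0                    # symmetric stand-in distance matrix
    np.fill_diagonal(D, 0.0)

    def recommend(liked, disliked, k=10, sigma=0.2):
        """Score every model by kernelized proximity to liked minus disliked."""
        sim = np.exp(-(D / sigma) ** 2)    # turn distances into similarities
        score = np.zeros(n_models)
        for i in liked:
            score += sim[i]
        for i in disliked:
            score -= sim[i]
        score[list(liked) + list(disliked)] = -np.inf  # hide labeled models
        return np.argsort(score)[::-1][:k]

    # One feedback round:
    print(recommend(liked=[3, 17], disliked=[42]))
    ```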

  10. Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

    2013-03-01

    Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. A rapid search for a specific vehicle within a large database of compressed videos is often required and can be time-critical, even a matter of life or death. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic conditions.
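
    The core of the scheme, detection-triggered reference frames plus a search restricted to those frames, can be sketched in a few lines; the detector below is a stub assumption standing in for a real vehicle classifier, and the frame dictionaries stand in for decoded video frames.

    ```python
    # Detection-triggered I-frame selection and I-frame-only search.
    def vehicle_at_trigger(frame) -> bool:
        """Stub detector; a deployed system would run a real classifier here."""
        return frame.get("vehicle_at_trigger", False)

    def choose_iframes(frames):
        """Indices of frames to encode as references (I-frames)."""
        return [i for i, f in enumerate(frames) if vehicle_at_trigger(f)]

    def search(frames, iframe_indices, matches_query):
        """Search only the reference frames; skip full-stream decode."""
        return [i for i in iframe_indices if matches_query(frames[i])]

    frames = [{"vehicle_at_trigger": i % 50 == 0, "plate": f"P{i}"}
              for i in range(300)]
    iframes = choose_iframes(frames)
    print(search(frames, iframes, lambda f: f["plate"] == "P100"))  # [100]
    ```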

  11. Crystallography Open Databases and Preservation: a World-wide Initiative

    NASA Astrophysics Data System (ADS)

    Chateigner, Daniel

    In 2003, an international team of crystallographers proposed the Crystallography Open Database (COD), a fully free collection of crystal structure data, with the aim of ensuring their preservation. With nearly 250,000 entries, this database represents a large open set of data for crystallographers, academics and industry, is hosted at five different places world-wide, and is included in Thomson Reuters' ISI. As a large step towards data preservation, raw data can now be uploaded along with "digested" structure files, and the COD can be queried from most crystallography-linked industrial software. The COD initiative has inspired several other open developments.

  12. Iris indexing based on local intensity order pattern

    NASA Astrophysics Data System (ADS)

    Emerich, Simina; Malutan, Raul; Crisan, Septimiu; Lefkovits, Laszlo

    2017-03-01

    In recent years, iris biometric systems have increased in popularity and have proven capable of handling large-scale databases. The main advantages of these systems are accuracy and reliability. Proper classification of iris patterns is expected to reduce matching time in huge databases. This paper presents an iris indexing technique based on the Local Intensity Order Pattern. The performance of the present approach is evaluated on the UPOL database and compared with other recent systems designed for iris indexing. The results illustrate the potential of the proposed method for large-scale iris identification.
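
    The intensity-order idea behind the descriptor can be illustrated in simplified form: rank each pixel's neighbors by intensity and histogram the resulting permutation codes. Real LIOP adds rotation-invariant sampling and local weighting; the sketch below keeps only that core, so it is an assumption-laden stand-in rather than the paper's method.

    ```python
    # Simplified intensity-order histogram: for every interior pixel, rank its
    # four axial neighbors by intensity and count the permutation (4! = 24 bins).
    from itertools import permutations
    import numpy as np

    PERM_INDEX = {p: i for i, p in enumerate(permutations(range(4)))}

    def order_pattern_histogram(img):
        img = np.asarray(img, dtype=float)
        hist = np.zeros(24)
        for r in range(1, img.shape[0] - 1):
            for c in range(1, img.shape[1] - 1):
                neighbors = (img[r - 1, c], img[r + 1, c],
                             img[r, c - 1], img[r, c + 1])
                order = tuple(np.argsort(neighbors))  # intensity order pattern
                hist[PERM_INDEX[order]] += 1.0
        return hist / max(hist.sum(), 1.0)

    # Indexing would then store one histogram per enrolled iris and shortlist
    # candidates by nearest histogram before any fine-grained matching.
    rng = np.random.default_rng(0)
    print(order_pattern_histogram(rng.random((32, 32))).round(3))
    ```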

  13. The Current State of Head and Neck Injuries in Extreme Sports

    PubMed Central

    Sharma, Vinay K.; Rango, Juan; Connaughton, Alexander J.; Lombardo, Daniel J.; Sabesan, Vani J.

    2015-01-01

    Background: Since their conception during the mid-1970s, international participation in extreme sports has grown rapidly. The recent death of extreme snowmobiler Caleb Moore at the 2013 Winter X Games has demonstrated the serious risks associated with these sports. Purpose: To examine the incidence and prevalence of head and neck injuries (HNIs) in extreme sports. Study Design: Descriptive epidemiological study. Methods: The National Electronic Injury Surveillance System (NEISS) was used to acquire data from 7 sports (2000-2011) that were included in the Winter and Summer X Games. Data from the NEISS database were collected for each individual sport per year and type of HNI. Cumulative data for overall incidence and injuries over the entire 11-year period were calculated. National estimates were determined using NEISS-weighted calculations. Incidence rates were calculated for extreme sports using data from Outdoor Foundation Participation Reports. Results: Over 4 million injuries were reported between 2000 and 2011, of which 11.3% were HNIs. Of all HNIs, 83% were head injuries and 17% neck injuries. The 4 sports with the highest total incidence of HNI were skateboarding (129,600), snowboarding (97,527), skiing (83,313), and motocross (78,236). Severe HNI (cervical or skull fracture) accounted for 2.5% of extreme sports HNIs. Of these, skateboarding had the highest percentage of severe HNIs. Conclusion: The number of serious injuries suffered in extreme sports has increased as participation in the sports continues to grow. A greater awareness of the dangers associated with these sports offers an opportunity for sports medicine and orthopaedic physicians to advocate for safer equipment, improved on-site medical care, and further research regarding extreme sports injuries. PMID:26535369

  14. Association between SCO2 mutation and extreme myopia in Japanese patients.

    PubMed

    Wakazono, Tomotaka; Miyake, Masahiro; Yamashiro, Kenji; Yoshikawa, Munemitsu; Yoshimura, Nagahisa

    2016-07-01

    To investigate the role of SCO2 in extreme myopia in Japanese patients. In total, 101 Japanese patients with extreme myopia (axial length of ≥30 mm) OU at the Kyoto University Hospital were included in this study. Exon 2 of SCO2 was sequenced by conventional Sanger sequencing. The detected variants were assessed using in silico prediction programs: SIFT, PolyPhen-2 and MutationTaster. To determine the frequency of the mutations in normal subjects, we referred to the 1000 Genomes Project data and the Human Genetic Variation Database (HGVD) in the Human Genetic Variation Browser. The average age of the participants was 62.9 ± 12.7 years. There were 31 males (30.7%) and 70 females. Axial lengths were 31.76 ± 1.17 mm OD and 31.40 ± 1.07 mm OS, and 176 (87.6%) of 201 eyes had myopic maculopathy of grade 2 or more. Among the 101 extremely myopic patients, one mutation (c.290C>T; p.Ala97Val) in SCO2 was detected. This mutation was not found in the 1000 Genomes Project data or the HGVD data. The mutation was nonsynonymous. Although the SIFT prediction score was 0.350, the PolyPhen-2 probability was 0.846, predicting its pathogenicity to be possibly damaging. The MutationTaster PhyloP score was 1.268, suggesting that the mutated site is conserved. We identified one novel candidate extreme myopia-causing mutation in SCO2. No other disease-causing mutation was found in the 101 extremely myopic Japanese patients, suggesting that SCO2 plays a limited role in Japanese extreme myopia. Further investigation is required for a better understanding of extreme myopia.

  15. Factors affecting the 7Be surface concentration and its extremely high occurrences over the Scandinavian Peninsula during autumn and winter.

    PubMed

    Ajtić, J; Brattich, E; Sarvan, D; Djurdjevic, V; Hernández-Ceballos, M A

    2018-05-01

    Relationships between the beryllium-7 activity concentrations in surface air and meteorological parameters (temperature, atmospheric pressure, and precipitation), teleconnection indices (Arctic Oscillation, North Atlantic Oscillation, and the Scandinavian pattern) and the number of sunspots are investigated using two multivariate statistical techniques: hierarchical cluster analysis and factor analysis. The beryllium-7 surface measurements over 1995-2011, at four sampling sites located in the Scandinavian Peninsula, are obtained from the Radioactivity Environmental Monitoring Database. At all sites, the statistical analyses show that the beryllium-7 concentrations are strongly linked to temperature. Although the beryllium-7 surface concentration exhibits the well-characterised spring/summer maximum, our study shows that extremely high beryllium-7 concentrations, defined as values exceeding the 90th percentile in the data records for each site, also occur over the October-March period. Two types of autumn/winter extremes are distinguished: type-1, when the number of extremes in a given month is less than three, and type-2, when at least three extremes occur in a month. Factor analysis performed for these autumn/winter events shows a weaker effect of temperature and a stronger impact of the transport and production signal on the beryllium-7 concentrations. Further, the majority of the type-2 extremes are associated with a very high monthly Scandinavian teleconnection index. The type-2 extremes that occurred in January, February and March are also linked to sudden stratospheric warmings of the Arctic vortex. Our results indicate that the Scandinavian teleconnection index might be a good indicator of the meteorological conditions facilitating extremely high beryllium-7 surface concentrations over Scandinavia during autumn and winter. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Reaeration equations derived from U.S. geological survey database

    USGS Publications Warehouse

    Melching, C.S.; Flores, H.E.

    1999-01-01

    Accurate estimation of the reaeration-rate coefficient (K2) is extremely important for waste-load allocation. Currently, available K2 estimation equations generally yield poor estimates when applied to stream conditions different from those for which the equations were derived because they were derived from small databases composed of potentially highly inaccurate measurements. A large data set of K2 measurements made with tracer-gas methods was compiled from U.S. Geological Survey studies. This compilation included 493 reaches on 166 streams in 23 states. Careful screening to detect and eliminate erroneous measurements reduced the data set to 371 measurements. These measurements were divided into four subgroups on the basis of flow regime (channel control or pool and riffle) and stream scale (discharge greater than or less than 0.556 m3/s). Multiple linear regression in logarithms was applied to relate K2 to 12 stream hydraulic and water-quality characteristics. The resulting best-estimation equations had the form of semiempirical equations that included the rate of energy dissipation and discharge or depth and width as variables. For equation verification, a data set of K2 measurements made with tracer-gas procedures by other agencies was compiled from the literature. This compilation included 127 reaches on at least 24 streams in at least seven states. The standard error of estimate obtained when applying the developed equations to the U.S. Geological Survey data set ranged from 44 to 61%, whereas the standard error of estimate was 78% when applied to the verification data set.
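
    The "regression in logarithms" step can be illustrated compactly: fitting log K2 against log-transformed predictors turns the semiempirical power-law form into ordinary least squares. The sketch below uses synthetic data; the exponents, noise level, and predictor names are invented for illustration, though the predictor choice (energy-dissipation rate and discharge) mirrors the form named in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    energy_dissipation = rng.lognormal(0.0, 0.5, n)   # e.g., slope * velocity * g
    discharge = rng.lognormal(1.0, 0.8, n)            # m^3/s
    k2_true = 15.0 * energy_dissipation**0.7 * discharge**-0.3
    k2_obs = k2_true * rng.lognormal(0.0, 0.2, n)     # multiplicative noise

    # Ordinary least squares on the log-transformed power law.
    X = np.column_stack([np.ones(n), np.log(energy_dissipation), np.log(discharge)])
    coef, *_ = np.linalg.lstsq(X, np.log(k2_obs), rcond=None)
    intercept, a, b = coef
    print(f"K2 ~ {np.exp(intercept):.2f} * E^{a:.2f} * Q^{b:.2f}")
    ```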

  17. Proteomic analysis of tardigrades: towards a better understanding of molecular mechanisms by anhydrobiotic organisms.

    PubMed

    Schokraie, Elham; Hotz-Wagenblatt, Agnes; Warnken, Uwe; Mali, Brahim; Frohme, Marcus; Förster, Frank; Dandekar, Thomas; Hengherr, Steffen; Schill, Ralph O; Schnölzer, Martina

    2010-03-03

    Tardigrades are small, multicellular invertebrates which are able to survive times of unfavourable environmental conditions using their well-known capability to undergo cryptobiosis at any stage of their life cycle. Milnesium tardigradum has become a powerful model system for the analysis of cryptobiosis. While some genetic information is already available for Milnesium tardigradum, the proteome is still to be discovered. Here we present, to the best of our knowledge, the first comprehensive study of Milnesium tardigradum on the protein level. To establish a proteome reference map we developed optimized protocols for protein extraction from tardigrades in the active state and for separation of proteins by high resolution two-dimensional gel electrophoresis. Since only limited sequence information of M. tardigradum on the genome and gene expression level is available to date in public databases, we initiated in parallel a tardigrade EST sequencing project to allow for protein identification by electrospray ionization tandem mass spectrometry. 271 out of 606 analyzed protein spots could be identified by searching against the publicly available NCBInr database as well as our newly established tardigrade protein database, corresponding to 144 unique proteins. Another 150 spots could be identified in the tardigrade clustered EST database, corresponding to 36 unique contigs and ESTs. Proteins with annotated function were further categorized in more detail by their molecular function, biological process and cellular component. For the proteins of unknown function, more information could be obtained by performing a protein domain annotation analysis. Our results include proteins such as members of different heat shock protein families and of LEA group 3, which might play important roles in surviving extreme conditions. The proteome reference map of Milnesium tardigradum provides the basis for further studies in order to identify and characterize the biochemical mechanisms of tolerance to extreme desiccation. The optimized proteomics workflow will enable application of sensitive quantification techniques to detect differences in protein expression, which are characteristic of the active and anhydrobiotic states of tardigrades.

  18. Proteomic Analysis of Tardigrades: Towards a Better Understanding of Molecular Mechanisms by Anhydrobiotic Organisms

    PubMed Central

    Schokraie, Elham; Hotz-Wagenblatt, Agnes; Warnken, Uwe; Mali, Brahim; Frohme, Marcus; Förster, Frank; Dandekar, Thomas; Hengherr, Steffen; Schill, Ralph O.; Schnölzer, Martina

    2010-01-01

    Background Tardigrades are small, multicellular invertebrates which are able to survive times of unfavourable environmental conditions using their well-known capability to undergo cryptobiosis at any stage of their life cycle. Milnesium tardigradum has become a powerful model system for the analysis of cryptobiosis. While some genetic information is already available for Milnesium tardigradum, the proteome is still to be discovered. Principal Findings Here we present, to the best of our knowledge, the first comprehensive study of Milnesium tardigradum on the protein level. To establish a proteome reference map we developed optimized protocols for protein extraction from tardigrades in the active state and for separation of proteins by high resolution two-dimensional gel electrophoresis. Since only limited sequence information of M. tardigradum on the genome and gene expression level is available to date in public databases, we initiated in parallel a tardigrade EST sequencing project to allow for protein identification by electrospray ionization tandem mass spectrometry. 271 out of 606 analyzed protein spots could be identified by searching against the publicly available NCBInr database as well as our newly established tardigrade protein database, corresponding to 144 unique proteins. Another 150 spots could be identified in the tardigrade clustered EST database, corresponding to 36 unique contigs and ESTs. Proteins with annotated function were further categorized in more detail by their molecular function, biological process and cellular component. For the proteins of unknown function, more information could be obtained by performing a protein domain annotation analysis. Our results include proteins such as members of different heat shock protein families and of LEA group 3, which might play important roles in surviving extreme conditions. Conclusions The proteome reference map of Milnesium tardigradum provides the basis for further studies in order to identify and characterize the biochemical mechanisms of tolerance to extreme desiccation. The optimized proteomics workflow will enable application of sensitive quantification techniques to detect differences in protein expression, which are characteristic of the active and anhydrobiotic states of tardigrades. PMID:20224743

  19. Predictability and possible earlier awareness of extreme precipitation across Europe

    NASA Astrophysics Data System (ADS)

    Lavers, David; Pappenberger, Florian; Richardson, David; Zsoter, Ervin

    2017-04-01

    Extreme hydrological events can cause large socioeconomic damages in Europe. In winter, a large proportion of these flood episodes are associated with atmospheric rivers, regions of intense water vapour transport within the warm sector of extratropical cyclones. When preparing for such extreme events, forecasts of precipitation from numerical weather prediction models or river discharge forecasts from hydrological models are generally used. Given the strong link between water vapour transport (integrated vapour transport, IVT) and heavy precipitation, it is possible that IVT could be used to warn of extreme events. Furthermore, as IVT is located in extratropical cyclones, it is hypothesized to be a more predictable variable due to its link with synoptic-scale atmospheric dynamics. In this research, we first provide an overview of the predictability of IVT and precipitation forecasts, and second introduce and evaluate the ECMWF Extreme Forecast Index (EFI) for IVT. The EFI is a tool that has been developed to evaluate how ensemble forecasts differ from the model climate, thus revealing the extremeness of the forecast. The ability of the IVT EFI to capture extreme precipitation across Europe during winter 2013/14, 2014/15, and 2015/16 is presented. The results show that the IVT EFI is more capable than the precipitation EFI of identifying extreme precipitation in forecast week 2 during forecasts initialized in a positive North Atlantic Oscillation (NAO) phase. However, the precipitation EFI is superior during the negative NAO phase and at shorter lead times. An IVT EFI example is shown for storm Desmond in December 2015, highlighting its potential to identify upcoming hydrometeorological extremes.
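
    For readers unfamiliar with the EFI, one published formulation of the index (Lalaurette, 2003) integrates the difference between the model-climate quantiles and the ensemble's cumulative distribution. The sketch below implements that formulation numerically; whether ECMWF's operational code matches it in every detail is an assumption, so treat this as illustrative only.

    ```python
    import numpy as np

    # EFI = (2/pi) * integral over p of (p - F_f(p)) / sqrt(p * (1 - p)) dp,
    # where F_f(p) is the fraction of ensemble members below the model-climate
    # quantile q(p). EFI is near +1 when the whole ensemble is extremely high.
    def efi(climate_sample: np.ndarray, ensemble: np.ndarray, npts: int = 199) -> float:
        p = np.linspace(0.005, 0.995, npts)        # avoid endpoint singularities
        q = np.quantile(climate_sample, p)         # model-climate quantiles
        f = np.searchsorted(np.sort(ensemble), q, side="right") / ensemble.size
        integrand = (p - f) / np.sqrt(p * (1.0 - p))
        return float(2.0 / np.pi * np.trapz(integrand, p))

    rng = np.random.default_rng(1)
    climate = rng.gamma(2.0, 5.0, 100_000)         # stand-in for an IVT climatology
    forecast = rng.gamma(2.0, 9.0, 51)             # anomalously moist ensemble
    print(efi(climate, forecast))                  # approaches +1 for extreme forecasts
    ```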

  20. Changes in extreme events and the potential impacts on human health.

    PubMed

    Bell, Jesse E; Brown, Claudia Langford; Conlon, Kathryn; Herring, Stephanie; Kunkel, Kenneth E; Lawrimore, Jay; Luber, George; Schreck, Carl; Smith, Adam; Uejio, Christopher

    2018-04-01

    Extreme weather and climate-related events affect human health by causing death, injury, and illness, as well as having large socioeconomic impacts. Climate change has caused changes in extreme event frequency, intensity, and geographic distribution, and will continue to be a driver for change in the future. Some of these events include heat waves, droughts, wildfires, dust storms, flooding rains, coastal flooding, storm surges, and hurricanes. The pathways connecting extreme events to health outcomes and economic losses can be diverse and complex. The difficulty in predicting these relationships comes from the local societal and environmental factors that affect disease burden. More information is needed about the impacts of climate change on public health and economies to effectively plan for and adapt to climate change. This paper describes some of the ways extreme events are changing and provides examples of the potential impacts on human health and infrastructure. It also identifies key research gaps to be addressed to improve the resilience of public health to extreme events in the future.

  1. Differences in the Reporting of Racial and Socioeconomic Disparities among Three Large National Databases for Breast Reconstruction.

    PubMed

    Kamali, Parisa; Zettervall, Sara L; Wu, Winona; Ibrahim, Ahmed M S; Medin, Caroline; Rakhorst, Hinne A; Schermerhorn, Marc L; Lee, Bernard T; Lin, Samuel J

    2017-04-01

    Research derived from large-volume databases plays an increasing role in the development of clinical guidelines and health policy. In breast cancer research, the Surveillance, Epidemiology and End Results, National Surgical Quality Improvement Program, and Nationwide Inpatient Sample databases are widely used. This study aims to compare the trends in immediate breast reconstruction and identify the drawbacks and benefits of each database. Patients with invasive breast cancer and ductal carcinoma in situ were identified from each database (2005-2012). Trends of immediate breast reconstruction over time were evaluated. Patient demographics and comorbidities were compared. Subgroup analysis of immediate breast reconstruction use per race was conducted. Within the three databases, 1.2 million patients were studied. Immediate breast reconstruction in invasive breast cancer patients increased significantly over time in all databases. A similar significant upward trend was seen in ductal carcinoma in situ patients. Significant differences in immediate breast reconstruction rates were seen among races; and the disparity differed among the three databases. Rates of comorbidities were similar among the three databases. There has been a significant increase in immediate breast reconstruction; however, the extent of the reporting of overall immediate breast reconstruction rates and of racial disparities differs significantly among databases. The Nationwide Inpatient Sample and the National Surgical Quality Improvement Program report similar findings, with the Surveillance, Epidemiology and End Results database reporting results significantly lower in several categories. These findings suggest that use of the Surveillance, Epidemiology and End Results database may not be universally generalizable to the entire U.S.

  2. Future changes in summer mean and extreme precipitation frequency in Japan by d4PDF regional climate simulations

    NASA Astrophysics Data System (ADS)

    Okada, Y.; Ishii, M.; Endo, H.; Kawase, H.; Sasaki, H.; Takayabu, I.; Watanabe, S.; Fujita, M.; Sugimoto, S.; Kawazoe, S.

    2017-12-01

    Precipitation in summer plays a vital role in sustaining life across East Asia, but the heavy rain that is often generated during this period can also cause serious damage. Developing a better understanding of the features and occurrence frequency of this heavy rain is an important element of disaster prevention. We investigated future changes in summer mean and extreme precipitation frequency in Japan using a large ensemble dataset simulated by the Non-Hydrostatic Regional Climate Model with a horizontal resolution of 20 km (NHRCM20). This dataset, called the database for Policy Decision making for Future climate changes (d4PDF), is intended to be used for impact assessment studies and adaptation planning for global warming. The future climate experiments assume global mean surface air temperature rises of 2 K and 4 K from the pre-industrial period. Using this dataset, we investigated future changes of summer precipitation over the Japanese archipelago at observational locations. For mean precipitation in the present-day climate, the bias of the rainfall for each month is within 25%, even when all 30 members are considered. The bias at individual locations increases to over 50% on the Pacific Ocean side of eastern Japan and at interior locations of western Japan; the result in western Japan depends on how elevation is represented in the model. The future changes in mean precipitation show a contrast between northern and southern Japan, with the north showing a slight increase but the south a decrease. The frequency of extreme precipitation, averaged nationally over Japan, increases in both the 2 K and 4 K simulations compared with the present-day climate. The authors were supported by the Social Implementation Program on Climate Change Adaptation Technology (SI-CAT), the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.
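
    The "bias within 25%" statements refer to relative errors of ensemble statistics against observations. A tiny sketch of that bookkeeping, with array shapes assumed for illustration:

    ```python
    import numpy as np

    def relative_bias(model_monthly: np.ndarray, obs_monthly: np.ndarray) -> np.ndarray:
        """model_monthly: (members, stations); obs_monthly: (stations,)."""
        ens_mean = model_monthly.mean(axis=0)          # average over the 30 members
        return (ens_mean - obs_monthly) / obs_monthly  # e.g. 0.25 means +25%
    ```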

  3. Mean velocity and turbulence measurements in a 90 deg curved duct with thin inlet boundary layer

    NASA Technical Reports Server (NTRS)

    Crawford, R. A.; Peters, C. E.; Steinhoff, J.; Hornkohl, J. O.; Nourinejad, J.; Ramachandran, K.

    1985-01-01

    The experimental database established by this investigation of the flow in a large rectangular turning duct is of benchmark quality. The experimental Reynolds numbers, Dean numbers and boundary layer characteristics are significantly different from previous benchmark curved-duct experimental parameters. This investigation extends the experimental database to higher Reynolds numbers and thinner entrance boundary layers. The 5% to 10% thick boundary layers, based on duct half-width, result in a large region of near-potential flow in the duct core surrounded by developing boundary layers with large crossflows. The turbulent entrance boundary layer case at R sub ed = 328,000 provides an incompressible flowfield which approaches real turbine blade cascade characteristics. The results of this investigation provide a challenging benchmark database for computational fluid dynamics code development.

  4. The Odense University Pharmacoepidemiological Database (OPED)

    Cancer.gov

    The Odense University Pharmacoepidemiological Database is one of two large prescription registries in Denmark and covers a stable population that is representative of the Danish population as a whole.

  5. Extreme temperatures and out-of-hospital coronary deaths in six large Chinese cities.

    PubMed

    Chen, Renjie; Li, Tiantian; Cai, Jing; Yan, Meilin; Zhao, Zhuohui; Kan, Haidong

    2014-12-01

    The seasonal trend of out-of-hospital coronary death (OHCD) and sudden cardiac death has been observed, but whether extreme temperature serves as a risk factor is rarely investigated. We therefore aimed to evaluate the impact of extreme temperatures on OHCDs in China. We obtained death records of 126,925 OHCDs from six large Chinese cities (Harbin, Beijing, Tianjin, Nanjing, Shanghai and Guangzhou) during the period 2009-2011. The short-term associations between extreme temperature and OHCDs were analysed with time-series methods in each city, using generalised additive Poisson regression models. We specified distributed lag non-linear models in studying the delayed effects of extreme temperature. We then applied Bayesian hierarchical models to combine the city-specific effect estimates. The associations between extreme temperature and OHCDs were almost U-shaped or J-shaped. The pooled relative risks (RRs) of extreme cold temperatures over lags 0-14 days, comparing the 1st and 25th centile temperatures, were 1.49 (95% posterior interval (PI) 1.26-1.76); the pooled RRs of extreme hot temperatures, comparing the 99th and 75th centile temperatures, were 1.53 (95% PI 1.27-1.84) for OHCDs. The RRs of extreme temperature on OHCD were higher if the patients with coronary heart disease were older, male, and less educated. This multicity epidemiological study suggested that both extreme cold and hot temperatures pose significant risks for OHCDs, and might have important public health implications for the prevention of OHCD or sudden cardiac death.
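
    A heavily simplified stand-in for the time-series modelling named above (generalised additive Poisson regression with distributed lags) is a Poisson GLM on current and lagged temperature terms. The synthetic data and the exact specification below are illustrative assumptions, not the study's model.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 1000
    temp = 15 + 10 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 3, n)
    lagged = np.roll(temp, 7)                       # crude stand-in for a lag term
    risk = np.exp(0.5 + 0.002 * (temp - 20) ** 2)   # U-shaped temperature risk
    deaths = rng.poisson(risk)

    # Poisson GLM of daily counts on temperature, its square, and a lagged term.
    X = sm.add_constant(np.column_stack([temp, temp**2, lagged]))
    fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
    print(fit.params)   # the quadratic term recovers the U-shaped association
    ```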

  6. Intra-seasonal Characteristics of Wintertime Extreme Cold Events over South Korea

    NASA Astrophysics Data System (ADS)

    Park, Taewon; Jeong, Jeehoon; Choi, Jahyun

    2017-04-01

    The present study reveals changes in the characteristics of extreme cold events over South Korea during boreal winter (November to March) in terms of the intra-seasonal variability of frequency, duration, and atmospheric circulation pattern. The influences of large-scale variabilities such as Siberian High activity, the Arctic Oscillation (AO), and the Madden-Julian Oscillation (MJO) on extreme cold events are also investigated. In the early and late winter (November and March), the upper-tropospheric wave-train associated with the life-cycle of extreme cold events tends to pass quickly over East Asia. In addition, compared with the other months, the intensity of the Siberian High is weaker and occurrences of a strongly negative AO are less frequent. This leads to events with weak amplitude and short duration. On the other hand, an amplified Siberian High and a strongly negative AO occur more frequently in midwinter (December to February). The extreme cold events are then mainly characterized by well-organized anticyclonic blocking around the Ural Mountains and the Subarctic. These large-scale circulation patterns make midwinter extreme cold events last longer with stronger amplitude. MJO phases 2-3, which provide a suitable condition for the amplification of extreme cold events, occur frequently from November to January, when their frequencies are more than twice those for February and March. While the extreme cold events during March have the lowest frequency, the weakest amplitude, and the shortest duration due to the weak impacts of the abovementioned factors, the strong activity of these factors in January forces the extreme cold events to be the most frequent, the strongest, and the longest of the boreal winter. Keywords: extreme cold event, wave-train, blocking, Siberian High, AO, MJO

  7. The Joint Statistics of California Temperature and Precipitation as a Function of the Large-scale State of the Climate

    NASA Astrophysics Data System (ADS)

    OBrien, J. P.; O'Brien, T. A.

    2015-12-01

    Single climatic extremes have a strong and disproportionate effect on society and the natural environment. However, the joint occurrence of two or more concurrent extremes has the potential to negatively impact these areas of life in ways far greater than any single event could. California, USA, home to nearly 40 million people and the largest agricultural producer in the United States, is currently experiencing an extreme drought, which has persisted for several years. While drought is commonly thought of in terms of only precipitation deficits, above average temperatures co-occurring with precipitation deficits greatly exacerbate drought conditions. The 2014 calendar year in California was characterized both by extremely low precipitation and extremely high temperatures, which has significantly deepened the already extreme drought conditions leading to severe water shortages and wildfires. While many studies have shown the statistics of 2014 temperature and precipitation anomalies as outliers, none have demonstrated a connection with large-scale, long-term climate trends, which would provide useful relationships for predicting the future trajectory of California climate and water resources. We focus on understanding non-stationarity in the joint distribution of California temperature and precipitation anomalies in terms of large-scale, low-frequency trends in climate such as global mean temperature rise and oscillatory indices such as ENSO and the Pacific Decadal Oscillation among others. We consider temperature and precipitation data from the seven distinct climate divisions in California and employ a novel, high-fidelity kernel density estimation method to directly infer the multivariate distribution of temperature and precipitation anomalies conditioned on the large-scale state of the climate. We show that the joint distributions and associated statistics of temperature and precipitation are non-stationary and vary regionally in California. Further, we show that recurrence intervals of extreme concurrent events vary as a function of time and of teleconnections. This research has implications for predicting and forecasting future temperature and precipitation anomalies, which is critically important for city, water, and agricultural planning in California.
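
    The conditioning idea, estimating the joint temperature/precipitation density separately for each large-scale state, can be sketched with a generic Gaussian KDE standing in for the paper's unspecified "high-fidelity" estimator. The two-state index and all data below are synthetic illustrations.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    years = 120
    enso = rng.choice([-1, 1], years)                  # crude two-state climate index
    temp_anom = 0.4 * enso + rng.normal(0, 1, years)
    precip_anom = -0.5 * enso + rng.normal(0, 1, years)

    # One joint (T, P) density per large-scale state: the conditional distribution
    # shifts, so recurrence of hot-and-dry extremes depends on the state.
    for state in (-1, 1):
        sel = enso == state
        kde = gaussian_kde(np.vstack([temp_anom[sel], precip_anom[sel]]))
        # Density of a concurrent hot (T=+1) and dry (P=-1) anomaly in this state:
        print(state, kde([[1.0], [-1.0]])[0])
    ```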

  8. Automated hierarchical classification of protein domain subfamilies based on functionally-divergent residue signatures

    PubMed Central

    2012-01-01

    Background The NCBI Conserved Domain Database (CDD) consists of a collection of multiple sequence alignments of protein domains that are at various stages of being manually curated into evolutionary hierarchies based on conserved and divergent sequence and structural features. These domain models are annotated to provide insights into the relationships between sequence, structure and function via web-based BLAST searches. Results Here we automate the generation of conserved domain (CD) hierarchies using a combination of heuristic and Markov chain Monte Carlo (MCMC) sampling procedures and starting from a (typically very large) multiple sequence alignment. This procedure relies on statistical criteria to define each hierarchy based on the conserved and divergent sequence patterns associated with protein functional-specialization. At the same time this facilitates the sequence and structural annotation of residues that are functionally important. These statistical criteria also provide a means to objectively assess the quality of CD hierarchies, a non-trivial task considering that the protein subgroups are often very distantly related—a situation in which standard phylogenetic methods can be unreliable. Our aim here is to automatically generate (typically sub-optimal) hierarchies that, based on statistical criteria and visual comparisons, are comparable to manually curated hierarchies; this serves as the first step toward the ultimate goal of obtaining optimal hierarchical classifications. A plot of runtimes for the most time-intensive (non-parallelizable) part of the algorithm indicates a nearly linear time complexity so that, even for the extremely large Rossmann fold protein class, results were obtained in about a day. Conclusions This approach automates the rapid creation of protein domain hierarchies and thus will eliminate one of the most time consuming aspects of conserved domain database curation. At the same time, it also facilitates protein domain annotation by identifying those pattern residues that most distinguish each protein domain subgroup from other related subgroups. PMID:22726767

  9. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    Treesearch

    Lejiang Yu; Shiyuan Zhong; Lisi Pei; Xindi (Randy) Bian; Warren E. Heilman

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for...

  10. A Projection of Changes in Landfalling Atmospheric River Frequency and Extreme Precipitation over Western North America from the Large Ensemble CESM Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Leung, Lai-Yung R.; Yoon, Jin-Ho

    Simulations from the Community Earth System Model Large Ensemble project are analyzed to investigate the impact of global warming on atmospheric rivers (ARs). The model has notable biases in simulating the subtropical jet position and the relationship between extreme precipitation and moisture transport. After accounting for these biases, the model projects an ensemble mean increase of 35% in the number of landfalling AR days between the last twenty years of the 20th and 21st centuries. However, the number of AR-associated extreme precipitation days increases only by 28% because the moisture transport required to produce extreme precipitation also increases with warming. Internal variability introduces an uncertainty of ±8% and ±7% in the projected changes in AR days and associated extreme precipitation days. In contrast, accounting for model biases only changes the projected changes by about 1%. The significantly larger mean changes compared to internal variability and to the effects of model biases highlight the robustness of AR responses to global warming.

  11. Large optical glass blanks for the ELT generation

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Petzold, Uwe; Dietrich, Volker; Wittmer, Volker; Rexius, Olga

    2016-07-01

    The upcoming extremely large telescope projects like the E-ELT, TMT or GMT telescopes require not only large amounts of mirror blank substrates but also have sophisticated instrument setups. Common instrument components are atmospheric dispersion correctors that compensate for the varying atmospheric path length depending on the telescope inclination angle. These elements usually consist of optical glass blanks that have to be large due to the increased size of the focal beam of the extremely large telescopes. SCHOTT has long experience in producing and delivering large optical glass blanks for astronomical applications up to 1 m and in homogeneity grades up to H3 quality. The most common optical glass available in large formats is SCHOTT N-BK7, but other glass types like F2 or LLF1 can also be produced in formats up to 1 m. The extremely large telescope projects partly demand atmospheric dispersion components in sizes beyond 1 m, up to a range of 1.5 m diameter. The production of such large homogeneous optical glass blanks requires tight control of all process steps. To cover this demand in the future, SCHOTT initiated a research project to improve the large optical blank production process steps from melting to annealing and measurement. Large optical glass blanks are measured in several sub-apertures that cover the total clear aperture of the application. With SCHOTT's new stitching software it is now possible to combine individual sub-aperture measurements into a total homogeneity map of the blank. In this presentation, first results will be presented.

  12. Content Is King: Databases Preserve the Collective Information of Science.

    PubMed

    Yates, John R

    2018-04-01

    Databases store sequence information experimentally gathered to create resources that further science. In the last 20 years databases have become critical components of fields like proteomics where they provide the basis for large-scale and high-throughput proteomic informatics. Amos Bairoch, winner of the Association of Biomolecular Resource Facilities Frederick Sanger Award, has created some of the important databases proteomic research depends upon for accurate interpretation of data.

  13. Use of a German longitudinal prescription database (LRx) in pharmacoepidemiology.

    PubMed

    Richter, Hartmut; Dombrowski, Silvia; Hamer, Hajo; Hadji, Peyman; Kostev, Karel

    2015-01-01

    Large epidemiological databases are often used to examine matters pertaining to drug utilization, health services, and drug safety. The major strength of such databases is that they include large sample sizes, which allow precise estimates to be made. The IMS® LRx database has in recent years been used as a data source for epidemiological research. The aim of this paper is to review a number of recent studies published with the aid of this database and compare these with the results of similar studies using independent data published in the literature. Although the review was somewhat limited to studies for which comparative independent results were available, it was possible to include a wide range of possible uses of the LRx database in a variety of therapeutic fields: prevalence/incidence rate determination (diabetes, epilepsy), persistence analyses (diabetes, osteoporosis), use of comedication (diabetes), drug utilization (G-CSF market) and treatment costs (diabetes, G-CSF market). In general, the results of the LRx studies were found to be clearly in line with previously published reports. In some cases, noticeable discrepancies between the LRx results and the literature data were found (e.g. prevalence in epilepsy, persistence in osteoporosis); these are discussed and possible reasons presented. Overall, it was concluded that the IMS® LRx database forms a suitable data source for pharmacoepidemiological studies.

  14. Osteoporosis therapies: evidence from health-care databases and observational population studies.

    PubMed

    Silverman, Stuart L

    2010-11-01

    Osteoporosis is a well-recognized disease with severe consequences if left untreated. Randomized controlled trials are the most rigorous method for determining the efficacy and safety of therapies. Nevertheless, randomized controlled trials underrepresent the real-world patient population and are costly in both time and money. Modern technology has enabled researchers to use information gathered from large health-care or medical-claims databases to assess the practical utilization of available therapies in appropriate patients. Observational database studies lack randomization but, if carefully designed and successfully completed, can provide valuable information that complements results obtained from randomized controlled trials and extends our knowledge to real-world clinical patients. Randomized controlled trials comparing fracture outcomes among osteoporosis therapies are difficult to perform. In this regard, large observational database studies could be useful in identifying clinically important differences among therapeutic options. Database studies can also provide important information with regard to osteoporosis prevalence, health economics, and compliance and persistence with treatment. This article describes the strengths and limitations of both randomized controlled trials and observational database studies, discusses considerations for observational study design, and reviews a wealth of information generated by database studies in the field of osteoporosis.

  15. Extremely large magnetoresistance in a high-quality WTe2 grown by flux method

    NASA Astrophysics Data System (ADS)

    Tsumura, K.; Yano, R.; Kashiwaya, H.; Koyanagi, M.; Masubuchi, S.; Machida, T.; Namiki, H.; Sasagawa, T.; Kashiwaya, S.

    2018-03-01

    We have grown single crystals of WTe2 by a self-flux method and evaluated the quality of the crystals. A Hall bar-type device was fabricated from an as-exfoliated film on a Si substrate and longitudinal resistance Rxx was measured. Rxx increased with an applied perpendicular magnetic field without saturation and an extremely large magnetoresistance as high as 376,059 % was observed at 8.27 T and 1.7 K.

  16. New DMSP database of precipitating auroral electrons and ions

    NASA Astrophysics Data System (ADS)

    Redmon, Robert J.; Denig, William F.; Kilcommons, Liam M.; Knipp, Delores J.

    2017-08-01

    Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so have the measurement capabilities such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) to F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information and the Coordinated Data Analysis Web. We describe how the new database is being applied to high-latitude studies of the colocation of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors from single observatory studies to coordinated system science investigations.

  17. Integrated case management for work-related upper-extremity disorders: impact of patient satisfaction on health and work status.

    PubMed

    Feuerstein, Michael; Huang, Grant D; Ortiz, Jose M; Shaw, William S; Miller, Virginia I; Wood, Patricia M

    2003-08-01

    An integrated case management (ICM) approach (ergonomic and problem-solving intervention) to work-related upper-extremity disorders was examined in relation to patient satisfaction, future symptom severity, function, and return to work (RTW). Federal workers with work-related upper-extremity disorder workers' compensation claims (n = 205) were randomly assigned to usual care or ICM intervention. Patient satisfaction was assessed after the 4-month intervention period. Questionnaires on clinical outcomes and ergonomic exposure were administered at baseline and at 6- and 12-months postintervention. Time from intervention to RTW was obtained from an administrative database. ICM group assignment was significantly associated with greater patient satisfaction. Regression analyses found higher patient satisfaction levels predicted decreased symptom severity and functional limitations at 6 months and a shorter RTW. At 12 months, predictors of positive outcomes included male gender, lower distress, lower levels of reported ergonomic exposure, and receipt of ICM. Findings highlight the utility of targeting workplace ergonomic and problem solving skills.

  18. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis vectors) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
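
    The "extreme" half of the method refers to the extreme learning machine: a random, untrained hidden layer followed by a closed-form least-squares output layer. Below is a minimal sketch of that ingredient alone; the sparse-coding half of the authors' method is omitted, so this is not their algorithm.

    ```python
    import numpy as np

    class ELM:
        """Minimal extreme learning machine for multi-class classification."""

        def __init__(self, n_hidden: int, seed: int = 0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X: np.ndarray) -> np.ndarray:
            # Random projection plus nonlinearity; W and b are never trained.
            return np.tanh(X @ self.W + self.b)

        def fit(self, X: np.ndarray, y_onehot: np.ndarray) -> "ELM":
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = self._hidden(X)
            self.beta = np.linalg.pinv(H) @ y_onehot   # closed-form output weights
            return self

        def predict(self, X: np.ndarray) -> np.ndarray:
            return (self._hidden(X) @ self.beta).argmax(axis=1)
    ```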

  19. A Descriptive Study of Pediatric Injury Patterns from the National Automotive Sampling System

    PubMed Central

    Newgard, C; Jolly, BT

    1998-01-01

    This study describes information from the National Automotive Sampling System for injury mechanisms in the pediatric age group (age 0–16). The total number of pediatric cases in the NASS database for this three-year sampling period is 2,141 (weighted 591,084). No restraint use was identified in 23–43% of the children. For age <1 yr, 60% of patients suffer a facial injury. Head injuries make up only 10% of the total injuries, but are severe. For those age 1–4 yrs, abdominal injuries and lower extremity injuries begin to appear. For those age 5–10 yrs, the predominant change over younger occupants is the proportion of spinal injuries. By age 11–16, injuries to the spine, upper extremities, and lower extremities outnumber injuries to the face and head. However, in this population, the greatest proportions of AIS 3–5 injuries still occur to the head and abdomen.

  20. Extremity movements help occupational therapists identify stress responses in preterm infants in the neonatal intensive care unit: a systematic review.

    PubMed

    Holsti, Liisa; Grunau, Ruth E

    2007-06-01

    Accurate assessment and treatment of pain and stress in preterm infants in neonatal intensive care units (NICU) is vital because pain and stress responses have been linked to long-term alterations in development in this population. To review the evidence of specific extremity movements in preterm infants as observed during stressful procedures. Five on-line databases were searched for relevant studies. For each study, levels of evidence were determined and effect size estimates were calculated. Each study was also evaluated for specific factors that presented potential threats to its validity. Eighteen studies were identified and seven comprised the review. The combined sample included 359 preterm infants. Six specific movements were associated with painful and intrusive procedures. A set of specific extremity movements, when combined with other reliable biobehavioural measures of pain and stress, can form the basis for future research and development of a clinical stress scale for preterm infants.

  1. Driving evaluation methods for able-bodied persons and individuals with lower extremity disabilities: a review of assessment modalities

    PubMed Central

    Greve, Julia Maria D'Andréa; Santos, Luciana; Alonso, Angelica Castilho; Tate, Denise G

    2015-01-01

    Assessing the driving abilities of individuals with disabilities is often a very challenging task because each medical condition is accompanied by physical impairments and because relative individual functional performance may vary depending on personal characteristics. We identified existing driving evaluation modalities for able-bodied and lower extremity-impaired subjects (spinal cord injury patients and amputees) and evaluated the potential relationships between driving performance and the motor component of driving. An extensive scoping review of the literature was conducted to identify driving assessment tools that are currently used for able-bodied individuals and for those with spinal cord injury or lower extremity amputation. The literature search focused on the assessment of the motor component of driving. References were electronically obtained via Medline from the PubMed, Ovid, Web of Science and Google Scholar databases. This article compares the current assessments of driving performance for those with lower extremity impairments with the assessments used for able-bodied persons. Very few articles were found concerning “Lower Extremity Disabilities,” thus confirming the need for further studies that can provide evidence and guidance for such assessments in the future. Little is known about the motor component of driving and its association with the other driving domains, such as vision and cognition. The available research demonstrates the need for a more evidenced-based understanding of how to best evaluate persons with lower extremity impairment. PMID:26375567

  2. Aging assessment of large electric motors in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villaran, M.; Subudhi, M.

    1996-03-01

    Large electric motors serve as the prime movers to drive high capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied by review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance of plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they are operating, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs.

  3. Spatiotemporal Detection of Unusual Human Population Behavior Using Mobile Phone Data

    PubMed Central

    Dobra, Adrian; Williams, Nathalie E.; Eagle, Nathan

    2015-01-01

    With the aim to contribute to humanitarian response to disasters and violent events, scientists have proposed the development of analytical tools that could identify emergency events in real-time, using mobile phone data. The assumption is that dramatic and discrete changes in behavior, measured with mobile phone data, will indicate extreme events. In this study, we propose an efficient system for spatiotemporal detection of behavioral anomalies from mobile phone data and compare sites with behavioral anomalies to an extensive database of emergency and non-emergency events in Rwanda. Our methodology successfully captures anomalous behavioral patterns associated with a broad range of events, from religious and official holidays to earthquakes, floods, violence against civilians and protests. Our results suggest that human behavioral responses to extreme events are complex and multi-dimensional, including extreme increases and decreases in both calling and movement behaviors. We also find significant temporal and spatial variance in responses to extreme events. Our behavioral anomaly detection system and extensive discussion of results are a significant contribution to the long-term project of creating an effective real-time event detection system with mobile phone data and we discuss the implications of our findings for future research to this end. PMID:25806954
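
    A minimal sketch of the kind of spatiotemporal anomaly flagging described above uses a per-site, per-hour z-score; the rule and threshold are assumptions for illustration, not the authors' statistic. Note that both surges and drop-offs are flagged, matching the finding that responses include extreme increases and decreases.

    ```python
    import numpy as np

    def anomalous(calls: np.ndarray, threshold: float = 4.0) -> np.ndarray:
        """calls: array of shape (days, 24) for one cell-tower site."""
        mu = calls.mean(axis=0)            # typical volume per hour of day
        sd = calls.std(axis=0) + 1e-9      # guard against zero variance
        z = (calls - mu) / sd
        return np.abs(z) > threshold       # both surges and drop-offs count
    ```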

  4. Techniques for Efficiently Managing Large Geosciences Data Sets

    NASA Astrophysics Data System (ADS)

    Kruger, A.; Krajewski, W. F.; Bradley, A. A.; Smith, J. A.; Baeck, M. L.; Steiner, M.; Lawrence, R. E.; Ramamurthy, M. K.; Weber, J.; Delgreco, S. A.; Domaszczynski, P.; Seo, B.; Gunyon, C. A.

    2007-12-01

    We have developed techniques and software tools for efficiently managing large geosciences data sets. While the techniques were developed as part of an NSF-funded ITR project that focuses on making NEXRAD weather data and rainfall products available to hydrologists and other scientists, they are relevant to other geosciences disciplines that deal with large data sets. Metadata, relational databases, data compression, and networking are central to our methodology. Data and derived products are stored on file servers in a compressed format. URLs to, and metadata about, the data and derived products are managed in a PostgreSQL database. Virtually all access to the data and products is through this database. Geosciences data normally require a number of processing steps to transform the raw data into useful products: data quality assurance, coordinate transformations and georeferencing, applying calibration information, and many more. We have developed the concept of crawlers that manage this scientific workflow. Crawlers are unattended processes that run indefinitely, and at set intervals query the database for their next assignment. A database table functions as a roster for the crawlers. Crawlers perform well-defined tasks that are, except for perhaps sequencing, largely independent from other crawlers. Once a crawler is done with its current assignment, it updates the database roster table, and gets its next assignment by querying the database. We have developed a library that enables one to quickly add crawlers. The library provides hooks to external (i.e., C-language) compiled codes, so that developers can work and contribute independently. Processes called ingesters inject data into the system. The bulk of the data are from a real-time feed using UCAR/Unidata's IDD/LDM software. An exciting recent development is the establishment of a Unidata HYDRO feed that feeds value-added metadata over the IDD/LDM. Ingesters grab the metadata and populate the PostgreSQL tables. These and other concepts we have developed have enabled us to efficiently manage a 70 TB (and growing) weather radar data set.
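
    The crawler/roster pattern is simple to sketch. The version below uses SQLite so the example is self-contained (the project itself used PostgreSQL), and the table and column names are invented for illustration.

    ```python
    import sqlite3
    import time

    def crawler(db_path: str, crawler_name: str, handle) -> None:
        """Poll a roster table for pending tasks and mark them done."""
        con = sqlite3.connect(db_path)
        while True:
            row = con.execute(
                "SELECT id, payload FROM roster "
                "WHERE assignee = ? AND status = 'pending' LIMIT 1",
                (crawler_name,),
            ).fetchone()
            if row is None:
                time.sleep(30)             # idle until the roster has work again
                continue
            task_id, payload = row
            handle(payload)                # the crawler's well-defined, independent task
            con.execute("UPDATE roster SET status = 'done' WHERE id = ?", (task_id,))
            con.commit()
    ```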

  5. BoreholeAR: A mobile tablet application for effective borehole database visualization using an augmented reality technology

    NASA Astrophysics Data System (ADS)

    Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong

    2015-03-01

    Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables its user to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for the other modules. A field survey was also carried out using more than 100,000 borehole records.

  6. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  7. Active browsing using similarity pyramids

    NASA Astrophysics Data System (ADS)

    Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.

    1998-12-01

    In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.
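
    The pruning operation can be sketched generically: retain only images whose features lie close to the user's relevance set. The nearest-member rule and quantile cutoff below are illustrative assumptions; the paper uses purpose-built statistics for pruning and its own "worm hole" metric for reorganization, neither of which is reproduced here.

    ```python
    import numpy as np

    def prune(features: np.ndarray, relevance: np.ndarray,
              keep_fraction: float = 0.1) -> np.ndarray:
        """features: (n_images, d); relevance: (n_relevant, d)."""
        # Distance from each database image to its nearest relevant image.
        d = np.linalg.norm(features[:, None, :] - relevance[None, :, :], axis=2)
        nearest = d.min(axis=1)
        cutoff = np.quantile(nearest, keep_fraction)
        return np.flatnonzero(nearest <= cutoff)   # indices of retained images
    ```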

  8. A practical approach for inexpensive searches of radiology report databases.

    PubMed

    Desjardins, Benoit; Hamilton, R Curtis

    2007-06-01

    We present a method to perform full text searches of radiology reports for the large number of departments that do not have this ability as part of their radiology or hospital information system. A tool written in Microsoft Access (front-end) has been designed to search a server (back-end) containing the indexed weekly backup copy of the full relational database extracted from a radiology information system (RIS). This front-end/back-end approach has been implemented in a large academic radiology department, and is used for teaching, research and administrative purposes. The weekly second backup of the 80 GB, 4 million record RIS database takes 2 hours. Further indexing of the exported radiology reports takes 6 hours. Individual searches typically take less than 1 minute on the indexed database and 30-60 minutes on the nonindexed database. Guidelines to properly address privacy and institutional review board issues are closely followed by all users. This method has potential to improve teaching, research, and administrative programs within radiology departments that cannot afford more expensive technology.
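
    The indexed full-text layer that makes sub-minute searches possible can be sketched with SQLite's FTS5 module, assuming your SQLite build includes it; the department's actual stack was Microsoft Access over a RIS extract, so this is a transplanted illustration, not their implementation.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    # FTS5 builds an inverted index, so MATCH queries avoid full-table scans.
    con.execute("CREATE VIRTUAL TABLE reports USING fts5(accession, body)")
    con.executemany(
        "INSERT INTO reports VALUES (?, ?)",
        [("RAD001", "small right apical pneumothorax, no effusion"),
         ("RAD002", "no acute cardiopulmonary abnormality")],
    )
    for row in con.execute(
        "SELECT accession FROM reports WHERE reports MATCH 'pneumothorax'"
    ):
        print(row[0])   # -> RAD001
    ```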

  9. ClimEx - Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec

    NASA Astrophysics Data System (ADS)

    Ludwig, Ralf; Baese, Frank; Braun, Marco; Brietzke, Gilbert; Brissette, Francois; Frigon, Anne; Giguère, Michel; Komischke, Holger; Kranzlmueller, Dieter; Leduc, Martin; Martel, Jean-Luc; Ricard, Simon; Schmid, Josef; von Trentini, Fabian; Turcotte, Richard; Weismueller, Jens; Willkofer, Florian; Wood, Raul

    2017-04-01

    The recent accumulation of extreme hydrological events in Bavaria and Québec has stimulated scientific and also societal interest. In addition to the challenges of an improved prediction of such situations and the implications for the associated risk management, there is, as yet, no confirmed knowledge of whether and how climate change contributes to the magnitude and frequency of hydrological extreme events and how regional water management could adapt to the corresponding risks. The ClimEx project (2015-2019) investigates the effects of climate change on meteorological and hydrological extreme events and their implications for water management in Bavaria and Québec. High Performance Computing is employed to enable the complex simulations in a hydro-climatological model processing chain, resulting in a unique high-resolution and transient (1950-2100) dataset of climatological and meteorological forcing and hydrological response: (1) The climate module has produced a large ensemble of high-resolution (12 km) data from the CRCM5 RCM for Central Europe and North-Eastern North America, downscaled from 50 members of the CanESM2 GCM. The dataset is complemented by all available data from the Euro-CORDEX project to account for the assessment of both natural climate variability and climate change. The large ensemble, with several thousand model years, provides the potential to catch rare extreme events and thus improves the process understanding of extreme events with return periods of 1000+ years. (2) The hydrology module comprises process-based and spatially explicit model setups (e.g. WaSiM) for all major catchments in Bavaria and Southern Québec in high temporal (3 h) and spatial (500 m) resolution. The simulations form the basis for in-depth analysis of hydrological extreme events based on the inputs from the large climate model dataset. The specific data situation makes it possible to establish a new method of 'virtual perfect prediction', which assesses climate change impacts on flood risk and water resources management by identifying patterns in the data which reveal preferential triggers of hydrological extreme events. The presentation will highlight first results from the analysis of the large-scale ClimEx model ensemble, showing the current and future ratio of natural variability and climate change impacts on meteorological extreme events. Selected data from the ensemble is used to drive a hydrological model experiment to illustrate the capacity to better determine the recurrence periods of hydrological extreme events under conditions of climate change. [The authors acknowledge funding for the project from the Bavarian State Ministry for the Environment and Consumer Protection].
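
    The value of several thousand simulated model years for 1000+-year events comes down to empirical return periods. A sketch using the Weibull plotting position, T = (N + 1) / rank, on synthetic annual maxima (the ensemble dimensions and Gumbel parameters are invented):

    ```python
    import numpy as np

    def return_periods(annual_maxima: np.ndarray) -> np.ndarray:
        """Empirical return period per event; rank 1 = largest value."""
        n = annual_maxima.size
        ranks = n - np.argsort(np.argsort(annual_maxima))
        return (n + 1) / ranks

    rng = np.random.default_rng(4)
    years = 50 * 100                       # e.g. 50 members x 100 years each
    peaks = rng.gumbel(100.0, 20.0, years)
    T = return_periods(peaks)
    print(peaks[T > 1000])                 # events rarer than 1-in-1000 years
    ```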

  10. [Principles of management of All-Russia Disaster Medicine Services].

    PubMed

    Sakhno, I I

    2000-11-01

    Experience from the liquidation of the consequences of the 1988 earthquake in Armenia showed that it is essential to establish a management system in the region of a natural disaster, large accident, or catastrophe before the arrival of the main forces, in order to carry out reconnaissance and to receive the arriving units. This helps to make well-grounded decisions, to set tasks in time, and to organize and conduct emergency and rescue work. The article presents general material concerning the structure of the All-Russia Service of Disaster Medicine (ARSDM), the organization of management at all levels, and the interaction between the components of ARSDM and other subsystems of the Russian Service of Extreme Situations. Recommendations are given on how to organize the management of ARSDM during the mitigation of the medical and sanitary consequences of large-scale emergencies.

  11. Impact of nongray multiphase radiation in pulverized coal combustion

    NASA Astrophysics Data System (ADS)

    Roy, Somesh; Wu, Bifen; Modest, Michael; Zhao, Xinyu

    2016-11-01

    Detailed modeling of radiation is important for accurate modeling of pulverized coal combustion. Because of high temperatures and the optical properties involved, radiative heat transfer from coal particles often dominates convective heat transfer. In this work a multiphase photon Monte Carlo radiation solver is used to investigate and quantify the effect of nongray radiation in a laboratory-scale pulverized coal flame. The nongray radiative properties of the carrier phase (gas) are modeled using the HITEMP database. Three major species - CO, CO2, and H2O - are treated as participating gases. Two optical models are used to evaluate the radiative properties of coal particles: a formulation based on the large-particle limit and a size-dependent correlation. The effect of scattering by coal particles is also investigated, using both isotropic scattering and anisotropic scattering with a Henyey-Greenstein phase function. Lastly, since the optical properties of ash are very different from those of coal, the effect of ash content on the radiative properties of coal particles is examined. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant Number ACI-1053575.

  12. Sequence-based heuristics for faster annotation of non-coding RNA families.

    PubMed

    Weinberg, Zasha; Ruzzo, Walter L

    2006-01-01

    Non-coding RNAs (ncRNAs) are functional RNA molecules that do not code for proteins. Covariance models (CMs) are a useful statistical tool for finding new members of an ncRNA gene family in a large genome database, using both sequence and, importantly, RNA secondary structure information. Unfortunately, CM searches are extremely slow. Previously, we created rigorous filters, which provably sacrifice none of a CM's accuracy while making searches significantly faster for virtually all ncRNA families. However, these rigorous filters make searches slower than heuristics could be. In this paper we introduce profile HMM-based heuristic filters. We show that their accuracy is usually superior to heuristics based on BLAST. Moreover, we compared our heuristics with those used in tRNAscan-SE, whose heuristics incorporate a significant amount of work specific to tRNAs, whereas our heuristics are generic to any ncRNA. Performance was roughly comparable, so we expect that our heuristics provide a high-quality solution that--unlike family-specific solutions--can scale to hundreds of ncRNA families. The source code is available under the GNU Public License at the supplementary web site.
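
    The sketch below illustrates only the general two-stage filtering architecture the paper builds on: a cheap score prunes the database, and only survivors reach the expensive model. Both scoring functions are placeholders, not the authors' profile-HMM or CM implementations.

```python
# Sketch of a two-stage filtered search: a fast heuristic score prunes the
# database, and only sequences passing the threshold reach the slow, accurate
# model (a covariance model in the paper; placeholder functions here).
def cheap_score(seq: str) -> float:
    # Placeholder for a fast sequence-only (e.g. profile-HMM) score.
    return seq.count("G") + seq.count("C")

def expensive_score(seq: str) -> float:
    # Placeholder for a slow score using sequence + secondary structure.
    return cheap_score(seq) + seq.count("GC")

def filtered_search(database, threshold):
    survivors = [s for s in database if cheap_score(s) >= threshold]
    # Only the survivors (ideally a tiny fraction) pay the expensive cost.
    return [(s, expensive_score(s)) for s in survivors]

print(filtered_search(["AUGGCC", "AAAAAA", "GCGCGC"], threshold=3))
```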

  13. How much do disasters cost? A comparison of disaster cost estimates in Australia

    NASA Astrophysics Data System (ADS)

    Ladds, Monique; Keating, Adriana; Handmer, John; Magee, Liam

    2017-04-01

    Extreme weather events in Australia are common, and a large proportion of the population is exposed to such events. There is therefore great interest in how these events will impact Australia's society and economy, which requires understanding the current and historical impact of disasters. Despite global efforts to record and cost disaster impacts, no standardised method of collecting and recording data retrospectively yet exists. The lack of comparability in turn produces quite different analyses of economic impacts. This paper examines five examples of aggregate cost and relative impacts of natural disasters in Australia, and comparisons between them reveal significant data shortcomings. The reliability of data sources and the methodology employed to analyse them can have significant impacts on conclusions regarding the overall cost of disasters, the relative costs of different disaster types, and the distribution of costs across Australian states. We highlight difficulties with time series comparisons, further complicated by the interdependencies of the databases. We reiterate the need for consistent and comparable data collection and analysis, to respond to the increasing frequency and severity of disasters in Australia.

  14. Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.

    PubMed

    Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin

    2015-01-01

    Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications, including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever-increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation, tested on three NVIDIA GPUs, achieves a speedup of up to 11.28 on a Tesla K20m GPU compared with the sequential MAFFT 7.015.
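
    The paper's MRLE scheme is not specified in this abstract; the sketch below shows plain run-length encoding of a gap-heavy alignment row, the baseline idea that a modified RLE would refine. The example row is invented.

```python
# Sketch: run-length encoding of gap-heavy aligned sequence rows. Alignment
# columns dominated by long runs (e.g. gaps '-') compress well, which is the
# baseline idea behind memory-reducing schemes such as the paper's MRLE.
from itertools import groupby

def rle_encode(row: str):
    return [(ch, sum(1 for _ in run)) for ch, run in groupby(row)]

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

row = "AC----------GGT---A"
encoded = rle_encode(row)
print(encoded)                      # [('A', 1), ('C', 1), ('-', 10), ...]
assert rle_decode(encoded) == row   # lossless round trip
```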

  15. Water management in the Roman world

    NASA Astrophysics Data System (ADS)

    Dermody, Brian J.; van Beek, Rens L. P. H.; Meeks, Elijah; Klein Goldewijk, Kees; Bierkens, Marc F. P.; Scheidel, Walter; Wassen, Martin J.; van der Velde, Ype; Dekker, Stefan C.

    2014-05-01

    Climate variability can have extreme impacts on societies in regions that are water-limited for agriculture. A society's ability to manage its water resources in such environments is critical to its long-term viability. Water management can involve improving agricultural yields through in-situ irrigation or redistributing water resources through trade in food. Here, we explore how such water management strategies affected the resilience of the Roman Empire to climate variability in the water-limited region of the Mediterranean. Using the large-scale hydrological model PCR-GLOBWB and estimates of landcover based on the Historical Database of the Global Environment (HYDE) we generate potential agricultural yield maps under variable climate. HYDE maps of population density in conjunction with potential yield estimates are used to develop maps of agricultural surplus and deficit. The surplus and deficit regions are abstracted to nodes on a water redistribution network based on the Stanford Geospatial Network Model of the Roman World (ORBIS). This demand-driven, water redistribution network allows us to quantitatively explore how water management strategies such as irrigation and food trade improved the resilience of the Roman Empire to climate variability.

  16. The use of DRG for identifying clinical trials centers with high recruitment potential: a feasibility study.

    PubMed

    Aegerter, Philippe; Bendersky, Noelle; Tran, Thi-Chien; Ropers, Jacques; Taright, Namik; Chatellier, Gilles

    2014-01-01

    Recruitment of large samples of patients is crucial for the evidence level and efficacy of clinical trials (CT). Clinical Trial Recruitment Support Systems (CTRSS) used to estimate patient recruitment are generally specific to Hospital Information Systems, and few have been evaluated on a large number of trials. Our aim was to assess, on a large number of CT, the usefulness of commonly available data such as Diagnosis Related Groups (DRG) databases in order to estimate potential recruitment. We used the DRG database of a large French multicenter medical institution (1.2 million inpatient stays and 400 new trials each year). Eligibility criteria of protocols were broken down into atomic entities (diagnoses, procedures, treatments...), then translated into codes and operators recorded in a standardized form. A program parsed the forms and generated requests on the DRG database. A large majority of selection criteria could be coded, and the final estimates of the number of eligible patients were close to the observed ones (median difference = 25). Such a system could be part of the feasibility evaluation and center selection process before the start of a clinical trial.
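
    A hedged sketch of the described pipeline: eligibility criteria coded as atomic (column, operator, value) triples are compiled into a count query over a DRG-style stay table. The schema, codes, and data are invented.

```python
# Sketch: compiling coded eligibility criteria into a recruitment-count query
# over a DRG-style database of inpatient stays (schema and codes invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stays (patient_id, age, diagnosis_code, center)")
conn.executemany("INSERT INTO stays VALUES (?, ?, ?, ?)", [
    (1, 67, "I50", "HOSP_A"),   # heart failure
    (2, 45, "I50", "HOSP_A"),
    (3, 72, "E11", "HOSP_B"),   # type 2 diabetes
])

# Standardized form: each criterion is an atomic (column, operator, value).
# Columns/operators come from a trusted form, so only values are parameters.
criteria = [("age", ">=", 60), ("diagnosis_code", "=", "I50")]

where = " AND ".join(f"{col} {op} ?" for col, op, _ in criteria)
params = [val for _, _, val in criteria]
sql = (f"SELECT center, COUNT(DISTINCT patient_id) FROM stays "
       f"WHERE {where} GROUP BY center")
for center, n in conn.execute(sql, params):
    print(center, n)   # estimated eligible patients per candidate center
```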

  17. Making the MagIC (Magnetics Information Consortium) Web Application Accessible to New Users and Useful to Experts

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.

    2017-12-01

    Challenges are faced by both new and experienced users interested in contributing their data to community repositories, in data discovery, or engaged in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full-text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost in collecting or creating physical samples and the often destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. Visitors can explore the MagIC Database through maps and image or plot galleries, or search and filter the raw measurements and their derived hierarchy of analytical interpretations.

  18. Physically-based extreme flood frequency with stochastic storm transposition and paleoflood data on large watersheds

    NASA Astrophysics Data System (ADS)

    England, John F.; Julien, Pierre Y.; Velleux, Mark L.

    2014-03-01

    Traditionally, deterministic flood procedures such as the Probable Maximum Flood have been used for critical infrastructure design. Some Federal agencies now use hydrologic risk analysis to assess potential impacts of extreme events on existing structures such as large dams. Extreme flood hazard estimates and distributions are needed for these efforts, with very low annual exceedance probabilities (⩽10^-4; return periods >10,000 years). An integrated data-modeling hydrologic hazard framework for physically-based extreme flood hazard estimation is presented. Key elements include: (1) a physically-based runoff model (TREX) coupled with a stochastic storm transposition technique; (2) hydrometeorological information from radar and an extreme storm catalog; and (3) streamflow and paleoflood data for independently testing and refining runoff model predictions at internal locations. This new approach requires full integration of collaborative work in hydrometeorology, flood hydrology and paleoflood hydrology. An application on the 12,000 km2 Arkansas River watershed in Colorado demonstrates that the size and location of extreme storms are critical factors in the analysis of basin-average rainfall frequency and flood peak distributions. Runoff model results are substantially improved by the availability and use of paleoflood nonexceedance data spanning the past 1000 years at critical watershed locations.

  19. The scaling of population persistence with carrying capacity does not asymptote in populations of a fish experiencing extreme climate variability.

    PubMed

    White, Richard S A; Wintle, Brendan A; McHugh, Peter A; Booker, Douglas J; McIntosh, Angus R

    2017-06-14

    Despite growing concerns regarding the increasing frequency of extreme climate events and declining population sizes, the influence of environmental stochasticity on the relationship between population carrying capacity and time-to-extinction has received little empirical attention. While time-to-extinction increases exponentially with carrying capacity in constant environments, theoretical models suggest increasing environmental stochasticity causes asymptotic scaling, thus making minimum viable carrying capacity vastly uncertain in variable environments. Using empirical estimates of environmental stochasticity in fish metapopulations, we showed that increasing environmental stochasticity resulting from extreme droughts was insufficient to create asymptotic scaling of time-to-extinction with carrying capacity in local populations as predicted by theory. Local time-to-extinction increased with carrying capacity due to declining sensitivity to demographic stochasticity, and the slope of this relationship declined significantly as environmental stochasticity increased. However, recent 1-in-25-year extreme droughts were insufficient to extirpate populations with large carrying capacity. Consequently, large populations may be more resilient to environmental stochasticity than previously thought. The lack of carrying capacity-related asymptotes in persistence under extreme climate variability reveals how small populations, affected by habitat loss or overharvesting, may be disproportionately threatened by increases in extreme climate events with global warming. © 2017 The Author(s).
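
    A hedged toy illustration of the scaling question, not the authors' fish metapopulation model: simulate a population subject to both demographic and environmental stochasticity and record the mean time-to-extinction across carrying capacities. All parameters are invented.

```python
# Toy sketch: mean time-to-extinction vs. carrying capacity K under combined
# demographic and environmental stochasticity (logistic growth; parameters
# invented, not the authors' fish metapopulation model).
import numpy as np

rng = np.random.default_rng(1)

def time_to_extinction(K, r=0.5, env_sd=0.6, max_t=1000):
    n = K
    for t in range(1, max_t + 1):
        growth = np.exp(rng.normal(r * (1 - n / K), env_sd))  # env. noise
        n = rng.poisson(n * growth)                           # demog. noise
        if n == 0:
            return t
    return max_t  # censored: population survived the whole simulation

for K in (10, 50, 250, 1250):
    mean_t = np.mean([time_to_extinction(K) for _ in range(100)])
    print(f"K={K:5d}  mean time-to-extinction ~ {mean_t:.0f} steps")
```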

  20. Large-scale feature searches of collections of medical imagery

    NASA Astrophysics Data System (ADS)

    Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.

    1993-09-01

    Large-scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can search either the text descriptors of the imagery in the collection (usually the interpretation) or, if the imagery is in digital format, the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user-friendly database search tool that makes searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer-assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.

  1. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification, and comparison of various image and/or video processing techniques such as compression, reconstruction, and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both administrative and client interfaces. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; this means that participants do not need to install any application, and the assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as templates. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  2. Uses for lunar crawler transporters

    NASA Astrophysics Data System (ADS)

    Kaden, Richard A.

    This article discusses state-of-the-art crawler transporters and expresses the need for additional research and development on lunar crawlers. The thrust of the paper illustrates how basic crawler technology has progressed to a point where extremely large modules can be shop-fabricated and moved to a distant location at considerable savings. Also, extremely heavy loads may be lifted by large crawler cranes and placed in designated locations. The Transi-Lift Crawler crane with its traveling counterweight is an attractive concept for lunar construction.

  3. A Brief Review of RNA–Protein Interaction Database Resources

    PubMed Central

    Yi, Ying; Zhao, Yue; Huang, Yan; Wang, Dong

    2017-01-01

    RNA–Protein interactions play critical roles in various biological processes. By collecting and analyzing the RNA–Protein interactions and binding sites from experiments and predictions, RNA–Protein interaction databases have become an essential resource for the exploration of the transcriptional and post-transcriptional regulatory network. Here, we briefly review several widely used RNA–Protein interaction database resources developed in recent years to provide a guide to these databases. The content and major functions of each database are presented. These brief descriptions help users quickly choose the database containing the information they are interested in. In short, these RNA–Protein interaction database resources are continually updated, and their current state reflects the ongoing effort to identify and analyze the large number of RNA–Protein interactions. PMID:29657278

  4. Remote visual analysis of large turbulence databases at multiple scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.

  5. Remote visual analysis of large turbulence databases at multiple scales

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin; ...

    2018-06-15

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
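
    The framework's compression is described only as 'wavelet-based' in this abstract; below is a numpy-only sketch of one Haar decomposition level, the simplest instance of the multi-resolution idea (a coarse approximation plus detail coefficients that can be streamed later to refine it).

```python
# Sketch: one level of a Haar wavelet transform on a 1-D signal. The coarse
# averages give a low-resolution view suitable for fast remote visualization;
# detail coefficients can be fetched later to refine it (multi-resolution).
import numpy as np

def haar_level(x):
    pairs = x.reshape(-1, 2)
    approx = pairs.mean(axis=1)                 # coarse approximation
    detail = (pairs[:, 0] - pairs[:, 1]) / 2.0  # detail coefficients
    return approx, detail

def haar_inverse(approx, detail):
    out = np.empty(approx.size * 2)
    out[0::2] = approx + detail
    out[1::2] = approx - detail
    return out

signal = np.sin(np.linspace(0, 4 * np.pi, 64))
approx, detail = haar_level(signal)
assert np.allclose(haar_inverse(approx, detail), signal)  # lossless
print("coarse size:", approx.size, "of", signal.size)
```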

  6. How does the size and shape of local populations in China compare to general anthropometric surveys currently used for product design?

    PubMed

    Daniell, Nathan; Fraysse, François; Paul, Gunther

    2012-01-01

    Anthropometry has long been used for a range of ergonomic applications and product design. Although products are often designed for specific cohorts, anthropometric data are typically sourced from large-scale surveys representative of the general population. Additionally, few data are available for emerging markets like China and India. This study measured 80 Chinese males who were representative of a specific cohort targeted for the design of a new product. Thirteen anthropometric measurements were recorded and compared to two large databases that represented a general population: a Chinese database and a Western database. Substantial differences were identified between the Chinese males measured in this study and both databases. The subjects were substantially taller, heavier, and broader than subjects in the older Chinese database. However, they were still substantially smaller, lighter, and thinner than Western males. Data from current Western anthropometric surveys are unlikely to accurately represent the target population for product designers and manufacturers in emerging markets like China.

  7. Mars Global Digital Dune Database: MC2-MC29

    USGS Publications Warehouse

    Hayward, Rosalyn K.; Mullins, Kevin F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, Anthony; Christensen, P.R.

    2007-01-01

    Introduction: The Mars Global Digital Dune Database presents data and describes the methodology used in creating the database. The database provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields from 65° N to 65° S latitude and encompasses ~550 dune fields. The database will be expanded to cover the entire planet in later versions. Although we have attempted to include all dune fields between 65° N and 65° S, some have likely been excluded for two reasons: 1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields, or 2) the resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where the availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow angle (MOC NA) images allowed, we classified dunes and included dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid was calculated. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes over 1800 selected Thermal Emission Imaging System (THEMIS) infrared (IR), THEMIS visible (VIS), and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as a series of ArcReader projects which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in ArcMap projects. The ArcMap projects allow fuller use of the data, but require ESRI ArcMap software. Multiple projects were required to accommodate the large number of images needed. A fuller description of the projects can be found in the Dunes_ReadMe file and the ReadMe_GIS file in the Documentation folder. For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geographic Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file. ReadMe files are available in PDF and ASCII (.txt) formats. Tables are available in both Excel (.xls) and ASCII formats.
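
    As a hedged sketch of one derived field in the database, the azimuth from a crater centroid to its dune-field centroid can be computed with the standard initial-bearing formula on a sphere; the coordinates below are invented, and this may differ from the exact formulation used by the authors.

```python
# Sketch: initial bearing (azimuth, degrees clockwise from north) from a
# crater centroid to a dune-field centroid on a spherical Mars. Example
# coordinates are invented, not taken from the database.
import math

def azimuth_deg(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

crater = (-42.7, 38.0)   # lat, lon of crater centroid (deg)
dunes = (-43.1, 38.4)    # lat, lon of dune-field centroid (deg)
print(f"azimuth: {azimuth_deg(*crater, *dunes):.1f} deg")
```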

  8. Large Prominence Eruption [video]

    NASA Image and Video Library

    2014-10-07

    The STEREO (Behind) spacecraft captured this large prominence and coronal mass ejection as they erupted into space (Sept. 26, 2014). By combining images from three instruments, scientists can see the eruption itself (in extreme UV light) as well as follow its progression over the period of about 13 hours with its two coronagraphs. Credit: NASA/Goddard/STEREO

  9. Large Prominence Eruption (October 3, 2014)

    NASA Image and Video Library

    2017-12-08

    The STEREO (Behind) spacecraft captured this large prominence and coronal mass ejection as they erupted into space (Sept. 26, 2014). By combining images from three instruments, scientists can see the eruption itself (in extreme UV light) as well as follow its progression over the period of about 13 hours with its two coronagraphs. Credit: NASA/Goddard/STEREO

  10. FDT 2.0: Improving scalability of the fuzzy decision tree induction tool - integrating database storage.

    PubMed

    Durham, Erin-Elizabeth A; Yu, Xiaxia; Harrison, Robert W

    2014-12-01

    Effective machine-learning handles large datasets efficiently. One key feature of handling large data is the use of databases such as MySQL. The freeware fuzzy decision tree induction tool, FDT, is a scalable supervised-classification software tool implementing fuzzy decision trees. It is based on an optimized fuzzy ID3 (FID3) algorithm. FDT 2.0 improves upon FDT 1.0 by bridging the gap between data science and data engineering: it combines a robust decisioning tool with data retention for future decisions, so that the tool does not need to be recalibrated from scratch every time a new decision is required. In this paper we briefly review the analytical capabilities of the freeware FDT tool and its major features and functionalities; examples of large biological datasets from HIV, microRNAs and sRNAs are included. This work shows how to integrate fuzzy decision algorithms with modern database technology. In addition, we show that integrating the fuzzy decision tree induction tool with database storage allows for optimal user satisfaction in today's Data Analytics world.
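
    The FID3 algorithm is not detailed in this abstract; as a hedged sketch of its core idea, a fuzzy decision tree weights the usual entropy computation by membership degrees rather than crisp counts. Memberships and labels below are invented.

```python
# Sketch: fuzzy entropy of a candidate split, the quantity a fuzzy ID3-style
# algorithm minimizes. Examples carry membership degrees in [0, 1] rather
# than belonging crisply to one branch (data invented).
import math

def fuzzy_entropy(memberships, labels):
    total = sum(memberships)
    if total == 0:
        return 0.0
    ent = 0.0
    for cls in set(labels):
        p = sum(m for m, l in zip(memberships, labels) if l == cls) / total
        if p > 0:
            ent -= p * math.log2(p)
    return ent

# Membership of each example in the "low" branch of a fuzzified attribute;
# the "high" branch gets the complement, so branch weights sum to 1.
low_membership = [0.9, 0.7, 0.2, 0.1]
labels = ["yes", "yes", "no", "no"]
high_membership = [1 - m for m in low_membership]

split_ent = sum(
    (sum(ms) / len(ms)) * fuzzy_entropy(ms, labels)
    for ms in (low_membership, high_membership)
)
print(f"weighted fuzzy entropy of split: {split_ent:.3f}")
```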

  11. Digital geomorphological landslide hazard mapping of the Alpago area, Italy

    NASA Astrophysics Data System (ADS)

    van Westen, Cees J.; Soeters, Rob; Sijmons, Koert

    Large-scale geomorphological maps of mountainous areas are traditionally made using complex symbol-based legends. They can serve as excellent "geomorphological databases", from which an experienced geomorphologist can extract a large amount of information for hazard mapping. However, these maps are not designed to be used in combination with a GIS, due to their complex cartographic structure. In this paper, two methods are presented for digital geomorphological mapping at large scales using GIS and digital cartographic software. The methods are applied to an area with a complex geomorphological setting in the Borsoia catchment, located in the Alpago region, near Belluno in the Italian Alps. The GIS database set-up is presented with an overview of the data layers that have been generated and how they are interrelated. The GIS database was also converted into a paper map, using a digital cartographic package. The resulting large-scale geomorphological hazard map is attached. The resulting GIS database and cartographic product can be used to analyse the hazard type and hazard degree for each polygon, and to find the reasons for the hazard classification.

  12. Effects of climate change on landslide hazard in Europe (Invited)

    NASA Astrophysics Data System (ADS)

    Nadim, F.; Solheim, A.

    2009-12-01

    Landslides represent a major threat to human life, property, constructed facilities, infrastructure, and the natural environment in most mountainous and hilly regions of the world. As a consequence of climatic changes and potential global warming, an increase of landslide activity is expected in some parts of the world in the future. This will be due to increased extreme rainfall events, changes of hydrological cycles, meteorological events followed by sea storms causing coastal erosion, and melting of snow and of frozen soils in the high mountains. During the past century, Europe experienced many fatalities and significant economic losses due to landslides. Since in many parts of Europe landslides are the most serious natural hazard, several recent European research projects are looking into the effects of climate change on the risk associated with landslides. Examples are the recently initiated SafeLand project, which looks into this problem across the continent, and GeoExtreme, which focused on Norway. The ongoing project SafeLand (www.safeland-fp7.eu) is a large, integrating project financed by the European Commission. It involves close to 30 organizations from 13 countries in Europe, and it looks into the effects of global change (mainly changes in demography and climate change) on the pattern of landslide risk in Europe. The SafeLand objectives are to (1) provide policy-makers, public administrators, researchers, scientists, educators and other stakeholders with an improved harmonized framework and methodology for the assessment and quantification of landslide risk in Europe's regions; (2) evaluate the changes in risk pattern caused by climate change, human activity and policy changes; and (3) provide guidelines for choosing the most appropriate risk management strategies, including risk mitigation and prevention measures. To assess the changes in the landslide risk pattern in Norway over the next 50 years, the four-year integrated research project GeoExtreme (www.geoextreme.no) was carried out. Different modules of the project established a database of landslide and avalanche events in Norway, investigated the coupling between climatic parameters and the occurrence of avalanches and landslides, developed regional, downscaled climate scenarios for the next 50 years, and simulated a picture of possible future geohazard risk in Norway. The socioeconomic implications of geohazards in Norway, both in the past and under the predicted future climate scenarios, were also studied in the project. The latter study considered the costs related to damage by natural disasters and mitigation measures, the ability to learn from experience, changes in preparedness, and the impact of policy decisions. The main conclusion of the GeoExtreme project was that in a country with large climatic variation like Norway, the effects of climate change on the geohazard situation will vary significantly from location to location. Over a short time interval of 50 years, the largest increase in the direct socio-economic costs will most likely be in the transport sector. However, better adaptation to the present climate and geohazard problems would also require large investments, and this would in fact be the most important step in preparing for the expected changes during the next 50 years.

  13. The study on servo-control system in the large aperture telescope

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Zhenchao, Zhang; Daxing, Wang

    2008-08-01

    The servo tracking technique for large and extremely large astronomical telescopes is one of the crucial technologies that must be solved in their research and manufacture. To address the control requirements of such telescopes, this paper designs a servo tracking control system for a large astronomical telescope. The system is organized as a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's functional mode, while the slave computer executes the control algorithm and performs real-time control. The servo control uses a direct-drive machine and adopts DSP technology to implement the direct torque control algorithm. Such a design not only increases control system performance but also greatly reduces the volume and cost of the control system, which is significant. The design scheme is shown to be reasonable by calculation and simulation, and the system can be applied to large astronomical telescopes.

  14. Unified Planetary Coordinates System: A Searchable Database of Geodetic Information

    NASA Technical Reports Server (NTRS)

    Becker, K. J.a; Gaddis, L. R.; Soderblom, L. A.; Kirk, R. L.; Archinal, B. A.; Johnson, J. R.; Anderson, J. A.; Bowman-Cisneros, E.; LaVoie, S.; McAuley, M.

    2005-01-01

    Over the past 40 years, an enormous quantity of orbital remote sensing data has been collected for Mars from many missions and instruments. Unfortunately, these datasets currently exist in a wide range of disparate coordinate systems, making it extremely difficult for the scientific community to easily correlate, combine, and compare data from different Mars missions and instruments. As part of our work for the PDS Imaging Node and on behalf of the USGS Astrogeology Team, we are working to solve this problem and to provide the NASA scientific research community with easy access to Mars orbital data in a unified, consistent coordinate system along with a wide variety of other key geometric variables. The Unified Planetary Coordinates (UPC) system is comprised of two main elements: (1) a database containing Mars orbital remote sensing data computed using a uniform coordinate system, and (2) a process by which continual maintenance and updates to the contents of the database are performed.

  15. Astronomical data analysis software and systems I; Proceedings of the 1st Annual Conference, Tucson, AZ, Nov. 6-8, 1991

    NASA Technical Reports Server (NTRS)

    Worrall, Diana M. (Editor); Biemesderfer, Chris (Editor); Barnes, Jeannette (Editor)

    1992-01-01

    Consideration is given to a definition of a distribution format for X-ray data, the Einstein on-line system, the NASA/IPAC extragalactic database, COBE astronomical databases, Cosmic Background Explorer astronomical databases, the ADAM software environment, the Groningen Image Processing System, the search for a common data model for astronomical data analysis systems, deconvolution for real and synthetic apertures, pitfalls in image reconstruction, a direct method for spectral and image restoration, and a description of a Poisson imagery super-resolution algorithm. Also discussed are multivariate statistics on HI and IRAS images, faint object classification using neural networks, a matched filter for improving the SNR of radio maps, automated aperture photometry of CCD images, an interactive graphics interpreter, the ROSAT extreme ultraviolet sky survey, a quantitative study of optimal extraction, an automated analysis of spectra, applications of synthetic photometry, an algorithm for extra-solar planet system detection, and data reduction facilities for the William Herschel Telescope.

  16. Variability of hydrological extreme events in East Asia and their dynamical control: a comparison between observations and two high-resolution global climate models

    NASA Astrophysics Data System (ADS)

    Freychet, N.; Duchez, A.; Wu, C.-H.; Chen, C.-A.; Hsu, H.-H.; Hirschi, J.; Forryan, A.; Sinha, B.; New, A. L.; Graham, T.; Andrews, M. B.; Tu, C.-Y.; Lin, S.-J.

    2017-02-01

    This work investigates the variability of extreme weather events (drought spells, DS15, and daily heavy rainfall, PR99) over East Asia. It particularly focuses on the large-scale atmospheric circulation associated with high occurrence levels of these extreme events. Two observational datasets (APHRODITE and PERSIANN) are compared with two high-resolution global climate models (HiRAM and HadGEM3-GC2) and an ensemble of other lower-resolution climate models from CMIP5. We first evaluate the performance of the high-resolution models. They both exhibit good skill in reproducing extreme events, especially when compared with CMIP5 results. Significant differences exist between the two observational datasets, highlighting the difficulty of obtaining a clear estimate of extreme events. The link between the variability of the extremes and the large-scale circulation is investigated, on monthly and interannual timescales, using composite and correlation analyses. Both extreme indices, DS15 and PR99, are significantly linked to the low-level wind intensity over East Asia, i.e. the monsoon circulation. It is also found that DS15 events are strongly linked to the surface temperature over the Siberian region and to the land-sea pressure contrast, while PR99 events are linked to sea surface temperature anomalies over the West North Pacific. These results illustrate the importance of the monsoon circulation for extremes over East Asia. The dependence on surface temperature over the continent and on sea surface temperature raises the question of the extent to which they could affect the occurrence of extremes over tropical regions in future projections.

  17. A Generalized Framework for Non-Stationary Extreme Value Analysis

    NASA Astrophysics Data System (ADS)

    Ragno, E.; Cheng, L.; Sadegh, M.; AghaKouchak, A.

    2017-12-01

    Empirical trends in climate variables, including precipitation, temperature, and snow-water equivalent, at regional to continental scales are evidence of changes in climate over time. The evolving climate conditions and human activity-related factors such as urbanization and population growth can exert further changes in weather and climate extremes. As a result, the scientific community faces an increasing demand for updated appraisal of time-varying climate extremes. The purpose of this study is to offer a robust and flexible statistical tool for non-stationary extreme value analysis which can better characterize the severity and likelihood of extreme climatic variables. This is critical to ensure a more resilient environment in a changing climate. Following the positive feedback on the first version of the Non-Stationary Extreme Value Analysis (NEVA) toolbox by Cheng et al. (2014), we present an improved version, NEVA2.0. The upgraded version builds upon a newly developed hybrid-evolution Markov Chain Monte Carlo (MCMC) approach for numerical parameter estimation and uncertainty assessment. This addition leads to more robust uncertainty estimates of return levels, return periods, and risks of climatic extremes under both stationary and non-stationary assumptions. Moreover, NEVA2.0 is flexible in incorporating any user-specified covariate other than the default time covariate (e.g., CO2 emissions, large-scale climatic oscillation patterns). The new feature will allow users to examine non-stationarity of extremes induced by physical conditions that underlie the extreme events (e.g. antecedent soil moisture deficit, large-scale climatic teleconnections, urbanization). In addition, the new version offers an option to generate stationary and/or non-stationary rainfall Intensity-Duration-Frequency (IDF) curves that are widely used for risk assessment and infrastructure design. Finally, a Graphical User Interface (GUI) of the package is provided, making NEVA accessible to a broader audience.
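
    As a hedged sketch of the statistical core only (NEVA itself is Bayesian and uses MCMC), a non-stationary GEV with a linear time trend in the location parameter can be fit by maximum likelihood with SciPy; the data are synthetic.

```python
# Sketch: maximum-likelihood fit of a non-stationary GEV whose location
# parameter drifts linearly with time, mu(t) = mu0 + mu1 * t. NEVA itself is
# Bayesian (MCMC); this MLE version only illustrates the model structure.
# Note: scipy's shape parameter c is the negative of the usual GEV xi.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
t = np.arange(60)                                  # 60 years of annual maxima
x = stats.genextreme.rvs(c=-0.1, loc=30 + 0.15 * t, scale=5, random_state=rng)

def neg_log_lik(theta):
    mu0, mu1, log_sigma, c = theta
    return -stats.genextreme.logpdf(
        x, c=c, loc=mu0 + mu1 * t, scale=np.exp(log_sigma)).sum()

fit = optimize.minimize(neg_log_lik, x0=[x.mean(), 0.0, np.log(x.std()), -0.1],
                        method="Nelder-Mead")
mu0, mu1, log_sigma, c = fit.x
print(f"trend in location: {mu1:.3f} per year (true value 0.15)")
```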

  18. Building Vietnamese Herbal Database Towards Big Data Science in Nature-Based Medicine

    DTIC Science & Technology

    2018-01-04

    ...plants, metabolites, diseases, and geography in order to convey a composite description of each individual species. VHO consists of 2881 species, 10887 metabolites... The feature descriptions are extremely diverse and highly redundant. Besides the original words or the key words for description, there are millions of...

  19. Body integrity identity disorder crosses culture: case reports in the Japanese and Chinese literature.

    PubMed

    Blom, Rianne M; Vulink, Nienke C; van der Wal, Sija J; Nakamae, Takashi; Tan, Zhonglin; Derks, Eske M; Denys, Damiaan

    2016-01-01

    Body integrity identity disorder (BIID) is a condition in which people do not perceive a part of their body as their own, which results in a strong desire for amputation or paralyzation. The disorder is likely to be congenital due to its very early onset. The English literature describes only Western patients with BIID, suggesting that the disorder might be prevalent merely in the West. To scrutinize this assumption, and to extend our knowledge of the etiology of BIID, it is important to trace cases of BIID in non-Western populations. Our objective was to review the Chinese and Japanese literature on BIID to learn about its presence in populations with a different genetic background. A systematic literature search was performed in databases containing Japanese and Chinese research, published in the respective languages. Five Japanese articles were identified that described two cases of BIID, whereas in the Chinese databases only BIID-related conditions were found. This article reports some preliminary evidence that BIID is also present in non-Western countries. However, making general statements about the biological background of the disorder is hampered by the extremely low number of cases found. This low number possibly results from the extreme secrecy associated with the disorder, perhaps even more so in Asian countries.

  20. Thermal burn and electrical injuries among electric utility workers, 1995-2004.

    PubMed

    Fordyce, Tiffani A; Kelsh, Michael; Lu, Elizabeth T; Sahl, Jack D; Yager, Janice W

    2007-03-01

    This study describes the occurrence of work-related injuries from thermal, electrical, and chemical burns among electric utility workers. We describe injury trends by occupation, body part injured, age, sex, and circumstances surrounding the injury. This analysis includes all thermal, electric, and chemical injuries included in the Electric Power Research Institute (EPRI) Occupational Health and Safety Database (OHSD). There were a total of 872 thermal burn and electric shock injuries, representing 3.7% of all injuries but accounting for nearly 13% of all medical claim costs, second only to the medical costs associated with sprain- and strain-related injuries (38% of all injuries). The majority of burns involved less than 1 day off of work. The head, hands, and other upper extremities were the body parts most frequently injured by burns or electric shocks. For this industry, electric-related burns accounted for the largest percentage of burn injuries, 399 injuries (45.8%), followed by thermal/heat burns, 345 injuries (39.6%), and chemical burns, 51 injuries (5.8%). These injuries also represented a disproportionate number of fatalities; of the 24 deaths recorded in the database, contact with electric current or with temperature extremes was the source of seven of the fatalities. High-risk occupations included welders, line workers, electricians, meter readers, mechanics, maintenance workers, and plant and equipment operators.

  1. Minimum important differences for the patient-specific functional scale, 4 region-specific outcome measures, and the numeric pain rating scale.

    PubMed

    Abbott, J Haxby; Schmitt, John

    2014-08-01

    Multicenter, prospective, longitudinal cohort study. To investigate the minimum important difference (MID) of the Patient-Specific Functional Scale (PSFS), 4 region-specific outcome measures, and the numeric pain rating scale (NPRS) across 3 levels of patient-perceived global rating of change in a clinical setting. The MID varies depending on the external anchor defining patient-perceived "importance." The MID for the PSFS has not been established across all body regions. One thousand seven hundred eight consecutive patients with musculoskeletal disorders were recruited from 5 physical therapy clinics. The PSFS, NPRS, and 4 region-specific outcome measures-the Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale-were assessed at the initial and final physical therapy visits. Global rating of change was assessed at the final visit. MID was calculated for the PSFS and NPRS (overall and for each body region), and for each region-specific outcome measure, across 3 levels of change defined by the global rating of change (small, medium, large change) using receiver operating characteristic curve methodology. The MID for the PSFS (on a scale from 0 to 10) ranged from 1.3 (small change) to 2.3 (medium change) to 2.7 (large change), and was relatively stable across body regions. MIDs for the NPRS (-1.5 to -3.5), Oswestry Disability Index (-12), Neck Disability Index (-14), Upper Extremity Functional Index (6 to 11), and Lower Extremity Functional Scale (9 to 16) are also reported. We reported the MID for small, medium, and large patient-perceived change on the PSFS, NPRS, Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale for use in clinical practice and research.
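
    A hedged sketch of the ROC-curve methodology named above: the MID is taken as the change-score cutoff that best separates improved from non-improved patients according to the global-rating anchor, here via the Youden index. All data are synthetic.

```python
# Sketch: deriving a minimum important difference (MID) as the ROC-optimal
# cutoff of change scores against a dichotomized global rating of change
# anchor (Youden's J = sensitivity + specificity - 1). Data are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
improved = rng.binomial(1, 0.5, size=300)          # anchor: 1 = improved
change = np.where(improved == 1,
                  rng.normal(2.5, 1.5, 300),       # PSFS change if improved
                  rng.normal(0.5, 1.5, 300))       # change if not improved

fpr, tpr, thresholds = roc_curve(improved, change)
mid = thresholds[np.argmax(tpr - fpr)]             # maximize Youden's J
print(f"MID estimate: {mid:.2f} points on the 0-10 PSFS scale")
```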

  2. Evaluating the ClimEx Single Model Large Ensemble in Comparison with EURO-CORDEX Results of Seasonal Means and Extreme Precipitation Indicators

    NASA Astrophysics Data System (ADS)

    von Trentini, F.; Schmid, F. J.; Braun, M.; Brisette, F.; Frigon, A.; Leduc, M.; Martel, J. L.; Willkofer, F.; Wood, R. R.; Ludwig, R.

    2017-12-01

    Meteorological extreme events appear to be becoming more frequent in the present and future climate, and the separation of natural climate variability from a clear climate change effect on these extreme events is attracting more and more interest. Since there is only one realisation of historical events, a robust statistical analysis of natural variability based on very long observed time series is not possible. A new single model large ensemble (SMLE), developed for the ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec), is designed to overcome this lack of data by downscaling 50 members of the CanESM2 (RCP 8.5) with the Canadian CRCM5 regional model (using the EURO-CORDEX grid specifications) for time series of 1950-2099 each, resulting in 7500 years of simulated climate. This allows for a better probabilistic analysis of rare and extreme events than any preceding dataset. Besides seasonal sums, several extreme indicators like R95pTOT, RX5day, and others are calculated for the ClimEx ensemble and several EURO-CORDEX runs. This enables us to investigate the interaction between natural variability (as it appears in the CanESM2-CRCM5 members) and the climate change signal of those members for past, present, and future conditions. Adding the EURO-CORDEX results to this, we can also assess the role of internal model variability (or natural variability) in climate change simulations. A first comparison shows similar magnitudes of variability of climate change signals between the ClimEx large ensemble and the CORDEX runs for some indicators, while for most indicators the spread of the SMLE is smaller than the spread of the different CORDEX models.
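
    As a hedged sketch of one indicator named above, RX5day (the maximum consecutive 5-day precipitation total) can be computed from a daily series with a rolling sum; the series below is synthetic.

```python
# Sketch: the RX5day indicator (maximum consecutive 5-day precipitation
# total) for each year of a daily series. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
daily_pr = rng.gamma(shape=0.4, scale=6.0, size=10 * 365)  # mm/day, 10 years

def rx5day(year_of_daily_pr):
    rolling5 = np.convolve(year_of_daily_pr, np.ones(5), mode="valid")
    return rolling5.max()

for year, series in enumerate(daily_pr.reshape(10, 365)):
    print(f"year {year}: RX5day = {rx5day(series):.1f} mm")
```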

  3. The broadcast of shared attention and its impact on political persuasion.

    PubMed

    Shteynberg, Garriy; Bramlett, James M; Fles, Elizabeth H; Cameron, Jaclyn

    2016-11-01

    In democracies where multitudes yield political influence, so does broadcast media that reaches those multitudes. However, broadcast media may not be powerful simply because it reaches a certain audience, but because each of the recipients is aware of that fact. That is, watching broadcast media can evoke a state of shared attention, or the perception of simultaneous coattention with others. Whereas past research has investigated the effects of shared attention with a few socially close others (i.e., friends, acquaintances, minimal ingroup members), we examine the impact of shared attention with a multitude of unfamiliar others in the context of televised broadcasting. In this paper, we explore whether shared attention increases the psychological impact of televised political speeches, and whether fewer numbers of coattending others diminishes this effect. Five studies investigate whether the perception of simultaneous coattention, or shared attention, on a mass broadcasted political speech leads to more extreme judgments. The results indicate that the perception of synchronous coattention (as compared with coattending asynchronously and attending alone) renders persuasive speeches more persuasive, and unpersuasive speeches more unpersuasive. We also find that recall memory for the content of the speech mediates the effect of shared attention on political persuasion. The results are consistent with the notion that shared attention on mass broadcasted information results in deeper processing of the content, rendering judgments more extreme. In all, our findings imply that shared attention is a cognitive capacity that supports large-scale social coordination, where multitudes of people can cognitively prioritize simultaneously coattended information. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Seasonal forecasting of groundwater levels in natural aquifers in the United Kingdom

    NASA Astrophysics Data System (ADS)

    Mackay, Jonathan; Jackson, Christopher; Pachocka, Magdalena; Brookshaw, Anca; Scaife, Adam

    2014-05-01

    Groundwater aquifers comprise the world's largest freshwater resource and provide resilience to climate extremes, which could become more frequent under future climate changes. Prolonged dry conditions can induce groundwater drought, often characterised by significantly low groundwater levels which may persist for months to years. In contrast, lasting wet conditions can result in anomalously high groundwater levels which cause flooding, potentially at large economic cost. Using computational models to produce groundwater level forecasts allows appropriate management strategies to be considered in advance of extreme events. The majority of groundwater level forecasting studies to date use data-based models, which exploit the long response time of groundwater levels to meteorological drivers and make forecasts based only on the current state of the system. Instead, seasonal meteorological forecasts can be used to drive hydrological models and simulate groundwater levels months into the future. Such approaches have not been used in the past due to a lack of skill in these long-range forecast products. However, systems such as the latest version of the Met Office Global Seasonal Forecast System (GloSea5) are now showing increased skill up to a 3-month lead time. We demonstrate the first groundwater level ensemble forecasting system, using a multi-member ensemble of hindcasts from GloSea5 between 1996 and 2009 to force 21 simple lumped conceptual groundwater models covering most of the UK's major aquifers. We present the results from this hindcasting study and demonstrate that the system can be used to forecast groundwater levels with some skill up to three months into the future.

  5. Early ICU Standardized Rehabilitation Therapy for the Critically Injured Burn Patient

    DTIC Science & Technology

    2017-10-01

    The original study was deemed phase I and closed. The second phase proposed to examine medical records within a large national hospital database to identify optimal care delivery patterns. Minimizing the... ...engineering, and the academic world on areas such as: improving public knowledge, attitudes, skills, and abilities; changing behavior, practices...

  6. Knowledge Quality Functions for Rule Discovery

    DTIC Science & Technology

    1994-09-01

    Managers in many organizations who find themselves in possession of large and rapidly growing databases are beginning to suspect the information in their... ...missing values (Smyth and Goodman, 1992, p. 303). Decision trees "tend to grow very large for realistic applications and are thus difficult to interpret by humans" (Holsheimer, 1994, p. 42). Decision trees also grow excessively complicated in the presence of noisy databases (Dhar and Tuzhilin, 1993, p...

  7. Leveraging Cognitive Context for Object Recognition

    DTIC Science & Technology

    2014-06-01

    Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich dynamic context provided by...context that people rely upon as they perceive the world. Context in ACT-R/E takes the form of associations between related concepts that are learned...and accuracy of object recognition.

  8. Reference Material Kydex®-100 Test Data Message for Flammability Testing

    NASA Technical Reports Server (NTRS)

    Engel, Carl D.; Richardson, Erin; Davis, Eddie

    2003-01-01

    The Marshall Space Flight Center (MSFC) Materials and Processes Technical Information System (MAPTIS) database contains, as an engineering resource, a large amount of material test data carefully obtained and recorded over a number of years. Flammability test data obtained using Test 1 of NASA-STD-6001 is a significant component of this database. NASA-STD-6001 recommends that Kydex 100 be used as a reference material for testing certification and for comparison between test facilities in the round-robin certification testing that occurs every 2 years. As a result of these regular activities, a large volume of test data is recorded within the MAPTIS database. The activity described in this technical report was undertaken to mine the database, recover flammability (Test 1) Kydex 100 data, and review the lessons learned from analysis of these data.

  9. An ab initio electronic transport database for inorganic materials.

    PubMed

    Ricci, Francesco; Chen, Wei; Aydemir, Umut; Snyder, G Jeffrey; Rignanese, Gian-Marco; Jain, Anubhav; Hautier, Geoffroy

    2017-07-04

    Electronic transport in materials is governed by a series of tensorial properties such as conductivity, Seebeck coefficient, and effective mass. These quantities are paramount to the understanding of materials in many fields from thermoelectrics to electronics and photovoltaics. Transport properties can be calculated from a material's band structure using the Boltzmann transport theory framework. We present here the largest computational database of electronic transport properties based on a large set of 48,000 materials originating from the Materials Project database. Our results were obtained through the interpolation approach developed in the BoltzTraP software, assuming a constant relaxation time. We present the workflow to generate the data, the data validation procedure, and the database structure. Our aim is to target the large community of scientists developing materials selection strategies and performing studies involving transport properties.
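
    For context, under the constant relaxation time approximation the transport tensors reduce to integrals over the interpolated band structure; schematically, in the standard Boltzmann transport form (LaTeX notation, with τ the constant relaxation time, f the Fermi-Dirac distribution, v the band velocities, and Ω the cell volume):

        \sigma_{\alpha\beta}(T,\mu) = e^{2}\tau \int \left(-\frac{\partial f_{\mu}(T,\varepsilon)}{\partial \varepsilon}\right) \Xi_{\alpha\beta}(\varepsilon)\, d\varepsilon,
        \qquad
        \Xi_{\alpha\beta}(\varepsilon) = \frac{1}{\Omega} \sum_{i,\mathbf{k}} v_{\alpha}(i,\mathbf{k})\, v_{\beta}(i,\mathbf{k})\, \delta\!\left(\varepsilon - \varepsilon_{i,\mathbf{k}}\right)

    The Seebeck tensor follows from the analogous first energy moment of the integrand divided by σ, which is why conductivity, Seebeck coefficient, and effective mass can all be tabulated together once the band structure has been interpolated.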

  10. Architectural Implications for Spatial Object Association Algorithms*

    PubMed Central

    Kumar, Vijay S.; Kurc, Tahsin; Saltz, Joel; Abdulla, Ghaleb; Kohn, Scott R.; Matarazzo, Celeste

    2013-01-01

    Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST). PMID:25692244
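
    To make the crossmatch problem concrete, here is a minimal grid-binning sketch (illustrative only: it uses flat 2-D geometry, whereas real sky crossmatch works in spherical coordinates with structures such as HEALPix or HTM, and the paper's database-side algorithms are far more sophisticated):

        import math
        from collections import defaultdict

        def crossmatch(cat_a, cat_b, radius=1.0 / 3600):
            # Match points of cat_a to cat_b within `radius` degrees using a
            # uniform grid index (toy flat-sky version of a spatial index).
            cell = radius
            grid = defaultdict(list)
            for j, (ra, dec) in enumerate(cat_b):
                grid[(int(ra // cell), int(dec // cell))].append(j)
            matches = []
            for i, (ra, dec) in enumerate(cat_a):
                cx, cy = int(ra // cell), int(dec // cell)
                for gx in (cx - 1, cx, cx + 1):      # probe neighbouring cells
                    for gy in (cy - 1, cy, cy + 1):
                        for j in grid[(gx, gy)]:
                            rb, db = cat_b[j]
                            if math.hypot(ra - rb, dec - db) <= radius:
                                matches.append((i, j))
            return matches

        print(crossmatch([(10.0, 20.0)], [(10.00001, 20.00005), (11.0, 21.0)]))  # -> [(0, 0)]

    The database architectures compared in the paper effectively decide where this binning and probing happens: near the disks, across cluster nodes, or in replicated independent instances.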

  11. ApoptoProteomics, an integrated database for analysis of proteomics data obtained from apoptotic cells.

    PubMed

    Arntzen, Magnus Ø; Thiede, Bernd

    2012-02-01

    Apoptosis is the most commonly described form of programmed cell death, and its dysfunction is implicated in a large number of human diseases. Many quantitative proteome analyses of apoptosis have been performed to gain insight into the proteins involved in the process. This has resulted in large and complex data sets that are difficult to evaluate. Therefore, we developed the ApoptoProteomics database for storage, browsing, and analysis of the outcome of large-scale proteome analyses of apoptosis derived from human, mouse, and rat. The proteomics data of 52 publications were integrated and unified with protein annotations from UniProt-KB, the caspase substrate database homepage (CASBAH), and gene ontology. Currently, more than 2300 records of more than 1500 unique proteins are included, covering a large proportion of the core signaling pathways of apoptosis. Analysis of the data set revealed a high level of agreement between the changes in directionality reported in the proteomics studies and the expected apoptosis-related functions, and it may disclose proteins without a currently recognized involvement in apoptosis based on gene ontology. Comparison between induction of apoptosis by the intrinsic and the extrinsic apoptotic signaling pathway revealed slight differences. Furthermore, proteomics has significantly contributed to the field of apoptosis in identifying hundreds of caspase substrates. The database is available at http://apoptoproteomics.uio.no.

  13. SING: Subgraph search In Non-homogeneous Graphs

    PubMed Central

    2010-01-01

    Background Finding the subgraphs of a graph database that are isomorphic to a given query graph has practical applications in several fields, from cheminformatics to image understanding. Since subgraph isomorphism is a computationally hard problem, indexing techniques have been intensively exploited to speed up the process. Such systems filter out those graphs which cannot contain the query, and apply a subgraph isomorphism algorithm to each residual candidate graph. The applicability of such systems is limited to databases of small graphs, because their filtering power degrades on large graphs. Results In this paper, SING (Subgraph search In Non-homogeneous Graphs), a novel indexing system able to cope with large graphs, is presented. The method uses the notion of feature, which can be a small subgraph, subtree or path. Each graph in the database is annotated with the set of all its features. The key point is to make use of feature locality information. This idea is used to both improve the filtering performance and speed up the subgraph isomorphism task. Conclusions Extensive tests on chemical compounds, biological networks and synthetic graphs show that the proposed system outperforms the most popular systems in query time over databases of medium and large graphs. Other specific tests show that the proposed system is effective for single large graphs. PMID:20170516
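
    A minimal filter-and-verify sketch in the spirit of such indexes, using networkx for the verification step; the feature set below is deliberately crude, and SING's locality-aware path/subtree features are not reproduced:

        import networkx as nx
        from networkx.algorithms import isomorphism

        def features(g):
            # Crude feature set: node labels plus unordered edge label pairs.
            # (SING uses richer features together with locality information.)
            f = {(g.nodes[n]["label"],) for n in g}
            for u, v in g.edges:
                f.add(tuple(sorted((g.nodes[u]["label"], g.nodes[v]["label"]))))
            return f

        def search(database, query):
            qf = features(query)
            # Filter: a graph lacking any query feature cannot contain the query.
            candidates = [g for g in database if qf <= features(g)]
            # Verify: label-aware subgraph isomorphism test on the survivors.
            nm = isomorphism.categorical_node_match("label", None)
            return [g for g in candidates
                    if isomorphism.GraphMatcher(g, query, node_match=nm)
                       .subgraph_is_isomorphic()]

        g = nx.Graph([(0, 1), (1, 2)])
        nx.set_node_attributes(g, {0: "C", 1: "O", 2: "C"}, "label")
        q = nx.Graph([(0, 1)])
        nx.set_node_attributes(q, {0: "C", 1: "O"}, "label")
        print(len(search([g], q)))   # -> 1

    The filtering step is what the paper's indexing improves: the cheaper and more selective it is, the fewer expensive isomorphism tests remain.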

  14. Validation of a common data model for active safety surveillance research

    PubMed Central

    Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E

    2011-01-01

    Objective Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
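
    As a toy illustration of the "translate once, analyze everywhere" idea (table and column names are simplified stand-ins; the real OMOP CDM defines many more tables and fields, and the concept id below is purely illustrative):

        import sqlite3

        con = sqlite3.connect(":memory:")
        # Tiny stand-in for a CDM-style schema (real OMOP tables are far richer).
        con.execute("CREATE TABLE person (person_id INTEGER, year_of_birth INTEGER)")
        con.execute("""CREATE TABLE drug_exposure
                       (person_id INTEGER, drug_concept_id INTEGER, start_date TEXT)""")

        # ETL step: map an idiosyncratic source record into the common model,
        # translating the source vocabulary to a standardized terminology.
        source_row = {"pid": 1, "born": 1950, "med_code": "A01", "filled": "2009-03-01"}
        code_map = {"A01": 1124300}   # hypothetical source code -> standard concept id
        con.execute("INSERT INTO person VALUES (?, ?)",
                    (source_row["pid"], source_row["born"]))
        con.execute("INSERT INTO drug_exposure VALUES (?, ?, ?)",
                    (source_row["pid"], code_map[source_row["med_code"]],
                     source_row["filled"]))

        # An analytic method written once against the CDM now runs on any source.
        n = con.execute("""SELECT COUNT(*) FROM drug_exposure
                           WHERE drug_concept_id = 1124300""").fetchone()[0]
        print(n)   # -> 1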

  15. [Effects of soil data and map scale on assessment of total phosphorus storage in upland soils].

    PubMed

    Li, Heng Rong; Zhang, Li Ming; Li, Xiao di; Yu, Dong Sheng; Shi, Xue Zheng; Xing, Shi He; Chen, Han Yue

    2016-06-01

    Accurate assessment of total phosphorus storage in farmland soils is of great significance to sustainable agriculture and non-point source pollution control. However, previous studies have not considered the estimation errors arising from mapping scales and from databases built on different sources of soil profile data. In this study, a total of 393×10⁴ hm² of upland in the 29 counties (or cities) of North Jiangsu was taken as a case study. We analyzed how the four sources of soil profile data, namely "Soils of County", "Soils of Prefecture", "Soils of Province" and "Soils of China", and the six map scales, i.e. 1:50000, 1:250000, 1:500000, 1:1000000, 1:4000000 and 1:10000000, used in the 24 resulting soil databases affected the assessment of soil total phosphorus. Compared with the most detailed 1:50000 soil database, established with 983 upland soil profiles, the relative deviation of the estimates of soil total phosphorus density (STPD) and soil total phosphorus storage (STPS) from the other soil databases varied from 4.8% to 48.9% and from 1.6% to 48.4%, respectively. The estimates of STPD and STPS based on the 1:50000 database of "Soils of County" differed from most of the estimates based on the databases of each scale in "Soils of County" and "Soils of Prefecture", with significance levels of P<0.001 or P<0.05. Extremely significant differences (P<0.001) existed between the estimates based on the 1:50000 database of "Soils of County" and those based on the databases of each scale in "Soils of Province" and "Soils of China". This study demonstrates the importance of appropriate soil data sources and appropriate mapping scales in estimating STPS.
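
    The relative deviations quoted above are, presumably, computed against the 1:50000 "Soils of County" baseline in the usual way (LaTeX notation, with D_s the STPD or STPS estimate from database s and D_ref the baseline estimate):

        \mathrm{RD}_{s} = \frac{\left| D_{s} - D_{\mathrm{ref}} \right|}{D_{\mathrm{ref}}} \times 100\%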

  16. Patterns of Undergraduates' Use of Scholarly Databases in a Large Research University

    ERIC Educational Resources Information Center

    Mbabu, Loyd Gitari; Bertram, Albert; Varnum, Ken

    2013-01-01

    Authentication data was utilized to explore undergraduate usage of subscription electronic databases. These usage patterns were linked to the information literacy curriculum of the library. The data showed that out of the 26,208 enrolled undergraduate students, 42% of them accessed a scholarly database at least once in the course of the entire…

  17. S2RSLDB: a comprehensive manually curated, internet-accessible database of the sigma-2 receptor selective ligands.

    PubMed

    Nastasi, Giovanni; Miceli, Carla; Pittalà, Valeria; Modica, Maria N; Prezzavento, Orazio; Romeo, Giuseppe; Rescifina, Antonio; Marrazzo, Agostino; Amata, Emanuele

    2017-01-01

    Sigma (σ) receptors are accepted as a particular receptor class consisting of two subtypes: sigma-1 (σ1) and sigma-2 (σ2). The two receptor subtypes have specific drug actions, pharmacological profiles and molecular characteristics. The σ2 receptor is overexpressed in several tumor cell lines, and its ligands are currently under investigation for their role in tumor diagnosis and treatment. The σ2 receptor structure has not been disclosed, and researchers rely on σ2 receptor radioligand binding assays to understand the receptor's pharmacological behavior and design new lead compounds. Here we present the sigma-2 Receptor Selective Ligands Database (S2RSLDB), a manually curated database of σ2 receptor selective ligands containing more than 650 compounds. The database is built with chemical structure information, radioligand binding affinity data, computed physicochemical properties, and experimental radioligand binding procedures. The S2RSLDB is freely available online without login; through a powerful search engine, users may build complex queries, sort tabulated results, generate color-coded 2D and 3D graphs, and download the data for additional screening. The collection reported here is extremely useful for the development of new ligands endowed with σ2 receptor affinity, selectivity, and appropriate physicochemical properties. The database will be updated yearly, and in the near future an online submission form will be made available to help keep the database widely used in the research community and continually updated. The database is available at http://www.researchdsf.unict.it/S2RSLDB.

  18. Synoptic moisture pathways associated with mean and extreme precipitation over Canada for winter and spring

    NASA Astrophysics Data System (ADS)

    Tan, X.; Gan, T. Y. Y.; Chen, Y. D.

    2017-12-01

    Dominant synoptic moisture pathway patterns of vertically integrated water vapor transport (IVT) in winter and spring over Canada West and East were identified using the self-organizing map method. Large-scale meteorological patterns (LSMPs) were related to the variability in seasonal precipitation totals and in occurrences of precipitation extremes. Changes in both the occurrences of LSMPs and the seasonal precipitation occurring under those LSMPs were evaluated to attribute observed changes in seasonal precipitation totals and occurrences of precipitation extremes. Effects of large-scale climate anomalies on occurrences of LSMPs were also examined. Results show that the synoptic moisture pathways and LSMPs capture the propagation of jet streams; the location and direction of ridges and troughs, and the strength and centers of pressure lows and highs, varied considerably between LSMPs. Significant decreases in the occurrences of synoptic moisture pathway patterns associated with positive precipitation anomalies and more precipitation extremes in winter over Canada West resulted in decreases in seasonal precipitation and in occurrences of precipitation extremes. LSMPs resulting in a hot and dry climate and less (more) frequent precipitation extremes over the Canadian Prairies in winter and northwestern Canada in spring are more likely to occur in years with a negative phase of the PNA. Occurrences of LSMPs associated with a wet climate and frequent extreme precipitation events over southeastern Canada are associated with a positive phase of the NAO. In El Niño years or negative PDO years, LSMPs associated with a dry climate and less frequent precipitation extremes over western Canada tend to occur.
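
    A minimal sketch of the pattern-identification step, assuming the third-party minisom package and a random stand-in for the daily IVT fields (map size, grid, and preprocessing here are invented, not the study's):

        import numpy as np
        from minisom import MiniSom   # assumed third-party package (pip install minisom)

        rng = np.random.default_rng(0)
        # Toy stand-in for daily IVT fields: 500 days, each flattened to 100 grid cells.
        ivt = rng.normal(size=(500, 100))

        som = MiniSom(3, 4, input_len=100, sigma=1.0, learning_rate=0.5, random_seed=0)
        som.train_random(ivt, 5000)          # fit a 3x4 map of synoptic patterns

        # Each day is assigned to its best-matching node: a synoptic pathway pattern.
        assignments = [som.winner(day) for day in ivt]
        nodes, counts = np.unique(np.array(assignments), axis=0, return_counts=True)
        print(dict(zip(map(tuple, nodes), counts)))   # occurrence frequency per pattern

    Counting, per node, how often each pattern occurs in different seasons or climate-index phases is then what links the LSMPs to precipitation variability.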

  19. The Influence of the Madden-Julian Oscillation (mjo) on Extreme Rainfall Over the Central and Southern Peruvian Andes

    NASA Astrophysics Data System (ADS)

    Heidinger, H.; Jones, C.; Carvalho, L. V.

    2015-12-01

    Extreme rainfall is important for the Andean region because of the large contribution of these events to the seasonal totals and the consequent impacts on water resources for agriculture, water consumption, industry and hydropower generation, as well as the occurrence of floods and landslides. Over the Central and Southern Peruvian Andes (CSPA), rainfall exceeding the 90th percentile contributed between 44 and 100% of the total Nov-Mar 1979-2010 rainfall. Additionally, precipitation from a large majority of stations in the CSPA exhibits statistically significant spectral peaks on intraseasonal time scales (20 to 70 days). The Madden-Julian Oscillation (MJO) is the most important intraseasonal mode of atmospheric circulation and moist convection in the tropics and influences the occurrence of extreme weather events worldwide. Mechanisms explaining the relationships between the MJO and precipitation in the Peruvian Andes have not yet been properly described. The present study examines the relationships between the activity and phases of the MJO and the occurrence of extreme rainfall over the CSPA. We found that the frequency of extreme rainfall events increases in the CSPA when the MJO is active. MJO phases 5, 6 and 7 contribute to the overall occurrence of extreme rainfall events over the CSPA. However, how the MJO phases modulate extreme rainfall depends on the location of the stations. For instance, extreme precipitation (above the 90th percentile) at stations in the Amazon basin is slightly more sensitive to phases 2, 3 and 4; the frequency of extremes at stations in the Pacific basin increases in phases 5, 6 and 7; whereas phases 2, 3 and 7 modulate extreme precipitation at stations in the Titicaca basin. Greater variability among stations is observed when the 95th and 99th percentiles are used to identify extremes. Among the main mechanisms that explain the increase in extreme rainfall events in the Peruvian Andes is the intensification of easterly moisture flux anomalies, which are favored during certain phases of the MJO. Here, dynamical mechanisms linking the MJO to the occurrence of extreme rainfall at stations in the Peruvian Andes are investigated using composites of integrated moisture flux and geopotential height anomalies.
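
    The percentile-threshold compositing described here is straightforward to sketch; the series below are synthetic stand-ins for a station's daily rainfall and an RMM-style MJO phase index, not the study's data:

        import numpy as np

        rng = np.random.default_rng(1)
        rain = rng.gamma(0.8, 5.0, size=3000)   # toy daily station rainfall (mm)
        phase = rng.integers(1, 9, size=3000)   # toy MJO phase index, 1..8

        wet = rain[rain > 0.1]                  # compute the threshold on wet days
        p90 = np.percentile(wet, 90)            # extreme-event threshold
        extreme = rain > p90

        # Frequency of extreme days falling in each MJO phase.
        for p in range(1, 9):
            frac = extreme[phase == p].mean()
            print(f"phase {p}: {frac:.1%} of days exceed the 90th percentile")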

  20. Upper extremity fractures among hospitalized road traffic accident adults.

    PubMed

    Rubin, Guy; Peleg, Kobi; Givon, Adi; Rozen, Nimrod

    2015-02-01

    Upper extremity fractures (UEFs) associated with road traffic accidents (RTAs) may result in long-term disability. Previous studies have examined UEF profiles with small patient populations. The objective of this study was to examine the injury profiles of UEFs across all mechanisms of injury related to RTAs. Data on 71,231 RTA adult patients between 1997 and 2012 whose records were entered into a centralized national trauma database were reviewed. Data on UEFs related to mechanism of injury (car, motorcycle, bicycle, and pedestrian), including associated injuries, multiple UEFs, and frequency of UEFs, were analyzed. Of the 71,231 adult RTA cases recorded in 1997-2012, 12,754 (17.9%) included UEFs. Motorcycle (27%) and bicycle riders (25%) had the greatest risk for UEFs (P<.0001). Of the 12,754 patients with UEFs, 9701 (76%) had other injuries. Pedestrians (86%) and car occupants (81%) had the greatest risk for associated injuries (P<.0001). Most of the associated injuries were head/face/neck (52%), lower extremity (49%), and chest (46%) injuries (P<.0001). Twenty-two percent of all cases had multiple UEFs. Motorcycle riders (27%) had the greatest risk for multiple UEFs (P<.0001). Among the 12,754 patients with UEFs, we found 16,371 UEFs in total. Most of the fractures were of the radius (22%), humerus (19%), and clavicle (17%) (P<.0001). This study contributes the largest database of reported adult UEFs related to all mechanisms of injury in RTAs and describes the comparative epidemiology of associated injuries, multiple UEFs, and frequency of UEFs. It is important that the treating surgeon be aware of the complexity of the UEF patient, the strong possibility of associated injury, the possibility of multiple fractures in the upper limbs, and the most common fractures associated with each mechanism of accident. Copyright © 2014 Elsevier Inc. All rights reserved.
