Sample records for analysis information distribution

  1. 78 FR 2992 - Agency Information Collection Activities; Proposed Collection; Comment Request; Distribution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-15

    ... Activities; Proposed Collection; Comment Request; Distribution of Offsite Consequence Analysis Information... Collection Request (ICR), Distribution of Offsite Consequence Analysis Information under Section 112(r)(7)(H... Air Act Section 112(r)(7); Distribution of Off-Site Consequence Analysis Information. CAA section 112...

  2. 40 CFR 1400.12 - Qualified researchers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.12 Qualified researchers. The Administrator is authorized to provide OCA information, including facility identification, to qualified researchers pursuant to a system developed and...

  3. 40 CFR 1400.12 - Qualified researchers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.12 Qualified researchers. The Administrator is authorized to provide OCA information, including facility identification, to qualified researchers pursuant to a system developed and...

  4. 40 CFR 1400.12 - Qualified researchers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.12 Qualified researchers. The Administrator is authorized to provide OCA information, including facility identification, to qualified researchers pursuant to a system developed and...

  5. 40 CFR 1400.12 - Qualified researchers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.12 Qualified researchers. The Administrator is authorized to provide OCA information, including facility identification, to qualified researchers pursuant to a system developed and...

  6. 40 CFR 1400.10 - Limitation on public dissemination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.10 Limitation on public dissemination. Except as... officials, and qualified researchers are prohibited from disseminating OCA information and OCA rankings to...

  7. 40 CFR 1400.10 - Limitation on public dissemination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.10 Limitation on public dissemination. Except as... officials, and qualified researchers are prohibited from disseminating OCA information and OCA rankings to...

  8. 40 CFR 1400.10 - Limitation on public dissemination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.10 Limitation on public dissemination. Except as... officials, and qualified researchers are prohibited from disseminating OCA information and OCA rankings to...

  9. 40 CFR 1400.10 - Limitation on public dissemination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.10 Limitation on public dissemination. Except as... officials, and qualified researchers are prohibited from disseminating OCA information and OCA rankings to...

  10. Power Distribution Analysis For Electrical Usage In Province Area Using OLAP (Online Analytical Processing)

    NASA Astrophysics Data System (ADS)

    Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi

    2018-02-01

    The distribution network is the part of the power grid closest to the customer and is operated by electric service providers such as PT. PLN. The dispatching center of a power grid company is also its data center, where a great amount of operating information is gathered. The valuable information contained in these data matters greatly for power grid operating management. Data warehousing with online analytical processing (OLAP) has been used to manage and analyze this large volume of data. The specific outputs of the OLAP-based information system are chart and query reporting. The chart reporting consists of load distribution charts over repeated time intervals, distribution charts by area, substation region charts, and electric load usage charts. The results of the OLAP process show the development of electric load distribution, support analysis of electric power consumption, and offer an alternative way of presenting information related to peak load.
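
    To make the roll-up idea concrete, here is a minimal OLAP-style aggregation sketch in Python with pandas. The column names (timestamp, region, substation, load_mw) are hypothetical; the paper's actual warehouse schema is not given in the abstract.

    ```python
    # Minimal OLAP-style roll-up of electric load data with pandas.
    # Column and table names are illustrative assumptions, not the
    # schema used in the paper.
    import pandas as pd

    records = pd.DataFrame({
        "timestamp": pd.to_datetime(["2018-01-01 01:00", "2018-01-01 02:00",
                                     "2018-01-01 01:00", "2018-01-01 02:00"]),
        "region": ["North", "North", "South", "South"],
        "substation": ["N1", "N1", "S1", "S1"],
        "load_mw": [120.5, 98.3, 210.0, 180.7],
    })
    records["hour"] = records["timestamp"].dt.hour

    # "Slice and dice": load distribution by region across hours of the day.
    cube = pd.pivot_table(records, values="load_mw",
                          index="region", columns="hour", aggfunc="sum")
    print(cube)

    # Peak-load query: maximum observed load per region.
    print(records.groupby("region")["load_mw"].max())
    ```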

  11. 40 CFR 1400.6 - Enhanced local access.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.6 Enhanced local access. (a) OCA data elements. Consistent with 42 U.S.C...) OCA information. (1) LEPCs and related local government agencies are authorized and encouraged to...

  12. 40 CFR 1400.6 - Enhanced local access.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.6 Enhanced local access. (a) OCA data elements. Consistent with 42 U.S.C...) OCA information. (1) LEPCs and related local government agencies are authorized and encouraged to...

  13. 40 CFR 1400.6 - Enhanced local access.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.6 Enhanced local access. (a) OCA data elements. Consistent with 42 U.S.C...) OCA information. (1) LEPCs and related local government agencies are authorized and encouraged to...

  14. 40 CFR 1400.6 - Enhanced local access.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.6 Enhanced local access. (a) OCA data elements. Consistent with 42 U.S.C...) OCA information. (1) LEPCs and related local government agencies are authorized and encouraged to...

  15. 40 CFR 1400.5 - Internet access to certain off-site consequence analysis data elements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.5 Internet access to certain off... consequence analysis data elements. 1400.5 Section 1400.5 Protection of Environment ENVIRONMENTAL PROTECTION...

  16. 40 CFR 1400.5 - Internet access to certain off-site consequence analysis data elements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.5 Internet access to certain off... consequence analysis data elements. 1400.5 Section 1400.5 Protection of Environment ENVIRONMENTAL PROTECTION...

  17. 40 CFR 1400.5 - Internet access to certain off-site consequence analysis data elements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.5 Internet access to certain off... consequence analysis data elements. 1400.5 Section 1400.5 Protection of Environment ENVIRONMENTAL PROTECTION...

  18. 40 CFR 1400.11 - Limitation on dissemination to State and local government officials.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.11 Limitation on... prohibited from disseminating OCA information to State and local government officials. Violation of this...

  19. 40 CFR 1400.11 - Limitation on dissemination to State and local government officials.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.11 Limitation on... prohibited from disseminating OCA information to State and local government officials. Violation of this...

  20. 40 CFR 1400.11 - Limitation on dissemination to State and local government officials.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.11 Limitation on... prohibited from disseminating OCA information to State and local government officials. Violation of this...

  21. 40 CFR 1400.11 - Limitation on dissemination to State and local government officials.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Other Provisions § 1400.11 Limitation on... prohibited from disseminating OCA information to State and local government officials. Violation of this...

  22. 40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of Environment... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal...

  23. 40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of Environment... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal...

  24. 40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of Environment... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal...

  25. 40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of Environment... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal...

  26. 40 CFR 1400.9 - Access to off-site consequence analysis information by State and local government officials.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... analysis information by State and local government officials. 1400.9 Section 1400.9 Protection of... CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.9 Access to off-site consequence analysis...

  27. 40 CFR 1400.9 - Access to off-site consequence analysis information by State and local government officials.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... analysis information by State and local government officials. 1400.9 Section 1400.9 Protection of... CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.9 Access to off-site consequence analysis...

  28. 40 CFR 1400.9 - Access to off-site consequence analysis information by State and local government officials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... analysis information by State and local government officials. 1400.9 Section 1400.9 Protection of... CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.9 Access to off-site consequence analysis...

  29. 40 CFR 1400.9 - Access to off-site consequence analysis information by State and local government officials.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... analysis information by State and local government officials. 1400.9 Section 1400.9 Protection of... CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.9 Access to off-site consequence analysis...

  30. 40 CFR 1400.9 - Access to off-site consequence analysis information by State and local government officials.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... analysis information by State and local government officials. 1400.9 Section 1400.9 Protection of... CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.9 Access to off-site consequence analysis...

  31. 40 CFR 1400.3 - Public access to paper copies of off-site consequence analysis information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-site consequence analysis information. 1400.3 Section 1400.3 Protection of Environment ENVIRONMENTAL... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.3 Public access to paper copies of off-site consequence analysis information. (a) General. The Administrator and the...

  32. 40 CFR 1400.3 - Public access to paper copies of off-site consequence analysis information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-site consequence analysis information. 1400.3 Section 1400.3 Protection of Environment ENVIRONMENTAL... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.3 Public access to paper copies of off-site consequence analysis information. (a) General. The Administrator and the...

  33. 40 CFR 1400.3 - Public access to paper copies of off-site consequence analysis information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-site consequence analysis information. 1400.3 Section 1400.3 Protection of Environment ENVIRONMENTAL... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.3 Public access to paper copies of off-site consequence analysis information. (a) General. The Administrator and the...

  34. 40 CFR 1400.3 - Public access to paper copies of off-site consequence analysis information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-site consequence analysis information. 1400.3 Section 1400.3 Protection of Environment ENVIRONMENTAL... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.3 Public access to paper copies of off-site consequence analysis information. (a) General. The Administrator and the...

  35. 40 CFR 1400.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... consequence analysis (OCA) data elements means the results of the off-site consequence analysis conducted by a...

  36. 40 CFR 1400.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... consequence analysis (OCA) data elements means the results of the off-site consequence analysis conducted by a...

  37. 40 CFR 1400.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... consequence analysis (OCA) data elements means the results of the off-site consequence analysis conducted by a...

  38. 40 CFR 1400.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... consequence analysis (OCA) data elements means the results of the off-site consequence analysis conducted by a...

  39. 40 CFR 1400.7 - In general.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.7 In general. The Administrator shall provide OCA information to government officials as provided in this subpart. Any OCA...

  40. 40 CFR 1400.7 - In general.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.7 In general. The Administrator shall provide OCA information to government officials as provided in this subpart. Any OCA...

  41. 40 CFR 1400.7 - In general.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.7 In general. The Administrator shall provide OCA information to government officials as provided in this subpart. Any OCA...

  42. 40 CFR 1400.7 - In general.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.7 In general. The Administrator shall provide OCA information to government officials as provided in this subpart. Any OCA...

  43. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
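
    The exponential and Laplace results quoted here follow from a textbook application of Jaynes's principle; the derivation sketch below is standard material, not taken from the paper itself.

    ```latex
    % Maximizing the Shannon entropy H[f] = -\int f \ln f subject to
    % normalization and a mean constraint on x >= 0 yields the exponential
    % form quoted for velocities and travel times; constraining the mean
    % absolute value on the real line yields the Laplace form quoted for
    % accelerations.
    \begin{align}
      \int_0^\infty f = 1,\quad \mathbb{E}[x] = \mu
      \;&\Rightarrow\; f(x) = \frac{1}{\mu}\, e^{-x/\mu}, \\
      \int_{-\infty}^\infty f = 1,\quad \mathbb{E}[\lvert x \rvert] = b
      \;&\Rightarrow\; f(x) = \frac{1}{2b}\, e^{-\lvert x \rvert / b}.
    \end{align}
    ```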

  44. Propagation of population pharmacokinetic information using a Bayesian approach: comparison with meta-analysis.

    PubMed

    Dokoumetzidis, Aristides; Aarons, Leon

    2005-08-01

    We investigated the propagation of population pharmacokinetic information across clinical studies by applying Bayesian techniques. The aim was to summarize the population pharmacokinetic estimates of a study in appropriate statistical distributions in order to use them as Bayesian priors in subsequent population pharmacokinetic analyses. Various data sets of simulated and real clinical data were fitted with WinBUGS, with and without informative priors. The posterior estimates of fittings with non-informative priors were used to build parametric informative priors, and the whole procedure was carried out in a consecutive manner. The posterior distributions of the fittings with informative priors were compared to those of the meta-analysis fittings of the respective combinations of data sets. Good agreement was found for the simulated and experimental data sets when the populations were exchangeable, with the posterior distribution from the fittings with the prior nearly identical to the one estimated by meta-analysis. However, when populations were not exchangeable, an alternative parametric form for the prior, the natural conjugate prior, had to be used in order to obtain consistent results. In conclusion, the results of a population pharmacokinetic analysis may be summarized in Bayesian prior distributions that can be used consecutively with other analyses. The procedure is an alternative to meta-analysis and gives comparable results. It has the advantage of being faster than meta-analysis, which must refit the large combined data sets, and it can be performed when the data included in the prior are not actually available.
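
    A toy conjugate-normal illustration of the propagation idea follows; it is a sketch of the principle only, not the full population PK models the paper fits in WinBUGS, and all numbers are invented. For an exchangeable case, updating sequentially with the previous posterior as prior matches the pooled ("meta-analysis") fit exactly.

    ```python
    # Propagating posterior information as a prior, using a conjugate
    # normal model with known observation SD (an assumption of this toy).
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 1.0                      # known observation SD (assumed)
    study1 = rng.normal(2.0, sigma, size=50)
    study2 = rng.normal(2.0, sigma, size=80)

    def update(mu0, tau0, data, sigma):
        """Posterior mean/SD of a normal mean under a N(mu0, tau0^2) prior."""
        prec = 1.0 / tau0**2 + len(data) / sigma**2
        mu = (mu0 / tau0**2 + data.sum() / sigma**2) / prec
        return mu, prec**-0.5

    # Sequential: vague prior -> study 1 posterior -> prior for study 2.
    m1, s1 = update(0.0, 100.0, study1, sigma)
    m_seq, s_seq = update(m1, s1, study2, sigma)

    # Meta-analysis analogue: fit all data at once with the vague prior.
    m_pool, s_pool = update(0.0, 100.0, np.concatenate([study1, study2]), sigma)

    print(m_seq, s_seq)    # matches the pooled result in this exchangeable case
    print(m_pool, s_pool)
    ```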

  45. Entropy-based derivation of generalized distributions for hydrometeorological frequency analysis

    NASA Astrophysics Data System (ADS)

    Chen, Lu; Singh, Vijay P.

    2018-02-01

    Frequency analysis of hydrometeorological and hydrological extremes is needed for the design of hydraulic and civil infrastructure facilities as well as water resources management. A multitude of distributions have been employed for frequency analysis of these extremes. However, no single distribution has been accepted as a global standard. Employing entropy theory, this study derived five generalized distributions for frequency analysis that used different kinds of information encoded as constraints. These distributions were the generalized gamma (GG), the generalized beta distribution of the second kind (GB2), and the Halphen type A (Hal-A), Halphen type B (Hal-B) and Halphen type inverse B (Hal-IB) distributions, among which the GG and GB2 distributions were previously derived by Papalexiou and Koutsoyiannis (2012), while the Halphen family is first derived using entropy theory in this paper. Entropy theory allowed the parameters of the distributions to be estimated in terms of the constraints used for their derivation. The distributions were tested using extreme daily and hourly rainfall data. Results show that the root mean square error (RMSE) values were very small, indicating that the five generalized distributions fitted the extreme rainfall data well. Among them, according to the Akaike information criterion (AIC) values, the GB2 and the Halphen family generally gave a better fit. Therefore, these generalized distributions are among the best choices for frequency analysis. The entropy-based derivation opens a new way for frequency analysis of hydrometeorological extremes.
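
    As a small illustration of comparing candidate families by an information criterion, the sketch below fits the generalized gamma (available in scipy as gengamma) and the ordinary gamma to a synthetic sample and compares AIC values. The GB2 and Halphen families are not in scipy, so they are omitted, and the data are stand-ins rather than the paper's rainfall records.

    ```python
    # AIC comparison of two candidate families on synthetic "rainfall".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rain = rng.gamma(shape=2.0, scale=15.0, size=300)

    def aic(dist, data):
        # Fix the location at zero, as is usual for strictly positive rainfall.
        params = dist.fit(data, floc=0)
        ll = dist.logpdf(data, *params).sum()
        # len(params) counts the fixed loc for both families, so the relative
        # comparison still penalizes the extra shape parameter correctly.
        return 2 * len(params) - 2 * ll

    for name, dist in [("gamma", stats.gamma), ("gengamma", stats.gengamma)]:
        print(name, round(aic(dist, rain), 1))   # smaller AIC = better fit
    ```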

  46. An XML-Based Protocol for Distributed Event Services

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A recent trend in distributed computing is the construction of high-performance distributed systems called computational grids. One difficulty we have encountered is that there is no standard format for the representation of performance information and no standard protocol for transmitting this information. This limits the types of performance analysis that can be undertaken in complex distributed systems. To address this problem, we present an XML-based protocol for transmitting performance events in distributed systems and evaluate the performance of this protocol.
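
    An illustrative XML performance event in the spirit of the paper's protocol is sketched below; the element and attribute names are invented, since the abstract does not give the actual schema.

    ```python
    # Producing and consuming a hypothetical XML performance event.
    import xml.etree.ElementTree as ET
    import time

    event = ET.Element("perfEvent", {
        "source": "grid-node-17",          # hypothetical producer id
        "type": "cpu.load",
        "timestamp": f"{time.time():.6f}",
    })
    ET.SubElement(event, "value").text = "0.83"
    wire = ET.tostring(event, encoding="unicode")
    print(wire)                             # what would be sent to consumers

    parsed = ET.fromstring(wire)            # consumer side
    print(parsed.get("type"), parsed.findtext("value"))
    ```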

  47. The Healthcare Administrator's Associate: an experiment in distributed healthcare information systems.

    PubMed Central

    Fowler, J.; Martin, G.

    1997-01-01

    The Healthcare Administrator's Associate is a collection of portable tools designed to support analysis of data retrieved via the Internet from diverse distributed healthcare information systems by means of the InfoSleuth system of distributed software agents. Development of these tools is part of an effort to enhance access to diverse and geographically distributed healthcare data in order to improve the basis upon which administrative and clinical decisions are made. PMID:9357686

  48. Fundamental finite key limits for one-way information reconciliation in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Martinez-Mateo, Jesus; Pacher, Christoph; Elkouss, David

    2017-11-01

    The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that one-way information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during information reconciliation is not generally valid. We propose an improved approximation that takes finite key effects into account and numerically test it against codes for two probability distributions, which we call binary-binary and binary-Gaussian, that typically appear in quantum key distribution protocols.
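
    The "often used approximation" the abstract refers to is commonly written in the QKD literature as leak_EC ≈ f·n·h(Q), with h the binary entropy, Q the quantum bit error rate, and f a reconciliation efficiency around 1.05-1.2. The sketch below only evaluates that asymptotic formula; the paper's point is that it breaks down at finite n.

    ```python
    # Asymptotic leakage estimate for one-way information reconciliation.
    import math

    def binary_entropy(q: float) -> float:
        if q in (0.0, 1.0):
            return 0.0
        return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

    def leak_asymptotic(n: int, qber: float, f: float = 1.1) -> float:
        """Bits leaked during reconciliation, standard asymptotic estimate."""
        return f * n * binary_entropy(qber)

    for n in (10**3, 10**4, 10**5):
        print(n, round(leak_asymptotic(n, qber=0.03), 1))
    ```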

  49. Requirements Analysis for the Army Safety Management Information System (ASMIS)

    DTIC Science & Technology

    1989-03-01

    PNL-6819, Limited Distribution. Requirements Analysis for the Army Safety Management Information System (ASMIS), Final Report. J. S. Littlefield, A. L. Corrigan, March... accidents. This accident data is available under the Army Safety Management Information System (ASMIS), which is an umbrella for many databases

  50. Condition Assessment of Drinking Water Transmission and Distribution Systems

    EPA Science Inventory

    Condition assessment of water transmission and distribution mains is the collection of data and information through direct and/or indirect methods, followed by analysis of the data and information, to make a determination of the current and/or future structural, water quality, an...

  51. NATIONAL WATER INFORMATION SYSTEM OF THE U. S. GEOLOGICAL SURVEY.

    USGS Publications Warehouse

    Edwards, Melvin D.

    1985-01-01

    National Water Information System (NWIS) has been designed as an interactive, distributed data system. It will integrate the existing, diverse data-processing systems into a common system. It will also provide easier, more flexible use as well as more convenient access and expanded computing, dissemination, and data-analysis capabilities. The NWIS is being implemented as part of a Distributed Information System (DIS) being developed by the Survey's Water Resources Division. The NWIS will be implemented on each node of the distributed network for the local processing, storage, and dissemination of hydrologic data collected within the node's area of responsibility. The processor at each node will also be used to perform hydrologic modeling, statistical data analysis, text editing, and some administrative work.

  52. Information Acquisition, Analysis and Integration

    DTIC Science & Technology

    2016-08-03

    ... of sensing and processing, theory, applications, signal processing, image and video processing, machine learning, technology transfer... Solved elegantly old problems such as image and video deblurring, introducing new revolutionary approaches... Polatkan, G. Sapiro, D. Blei, D. B. Dunson, and L. Carin, “Deep learning with hierarchical convolutional factor analysis,” IEEE...

  53. WATER DISTRIBUTION SYSTEM ANALYSIS: FIELD STUDIES, MODELING AND MANAGEMENT

    EPA Science Inventory

    The user‘s guide entitled “Water Distribution System Analysis: Field Studies, Modeling and Management” is a reference guide for water utilities and an extensive summarization of information designed to provide drinking water utility personnel (and related consultants and research...

  54. Adaptability and phenotypic stability of common bean genotypes through Bayesian inference.

    PubMed

    Corrêa, A M; Teodoro, P E; Gonçalves, M C; Barroso, L M A; Nascimento, M; Santos, A; Torres, F E

    2016-04-27

    This study used Bayesian inference to investigate the genotype x environment interaction in common bean grown in Mato Grosso do Sul State, and it also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 13 common bean genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian inference was effective for the selection of upright common bean genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. According to Bayesian inference, the EMGOPA-201, BAMBUÍ, CNF 4999, CNF 4129 A 54, and CNFv 8025 genotypes had specific adaptability to favorable environments, while the IAPAR 14 and IAC CARIOCA ETE genotypes had specific adaptability to unfavorable environments.

  55. Information Interaction Study for DER and DMS Interoperability

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Lu, Yiming; Lv, Guangxian; Liu, Peng; Chen, Yu; Zhang, Xinhui

    The Common Information Model (CIM) is an abstract data model that can be used to represent the major objects in Distribution Management System (DMS) applications. Because the CIM does not model Distributed Energy Resources (DERs), it cannot meet the requirements that DER operation and management place on advanced DMS applications. Taking a system point of view, the article studies the modeling of DERs and initially proposes a CIM-extended information model. By analyzing the basic structure of the message interaction between DMS and DER, a bidirectional message-mapping method based on data exchange is proposed.

  56. 75 FR 2539 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... Analysis Information Under Section 112(r)(7)(H) of the Clean Air Act (CAA) (Renewal); EPA ICR No. 1981.04... . Title: Distribution of Offsite Consequence Analysis Information under Section 112(r)(7)(H) of the Clean... ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-OAR-2003-0073; FRL-9103-6] Agency Information Collection...

  57. Information, Understanding, and Influence: An Agency Theory Strategy for Air Base Communications and Cyberspace Support

    DTIC Science & Technology

    2014-05-01

    DISTRIBUTION A. Approved for public release: distribution unlimited. Information, Understanding, and Influence: An Agency Theory Strategy for Air Base Communications and Cyberspace Support... present communications and cyberspace support organizations. Next, it introduces a strategy based on this analysis to bring information, understanding

  58. 40 CFR 1400.1 - Purpose.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... chemical accidents and submit the results of their analyses to the U.S. Environmental Protection Agency as... the portions of risk management plans containing the results of those analyses and certain related...

  59. 40 CFR 1400.1 - Purpose.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... chemical accidents and submit the results of their analyses to the U.S. Environmental Protection Agency as... the portions of risk management plans containing the results of those analyses and certain related...

  60. 40 CFR 1400.1 - Purpose.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... chemical accidents and submit the results of their analyses to the U.S. Environmental Protection Agency as... the portions of risk management plans containing the results of those analyses and certain related...

  61. 40 CFR 1400.1 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... chemical accidents and submit the results of their analyses to the U.S. Environmental Protection Agency as... the portions of risk management plans containing the results of those analyses and certain related...

  62. Can Distributed Volunteers Accomplish Massive Data Analysis Tasks?

    NASA Technical Reports Server (NTRS)

    Kanefsky, B.; Barlow, N. G.; Gulick, V. C.

    2001-01-01

    We argue that many image analysis tasks can be performed by distributed amateurs. Our pilot study, with crater surveying and classification, has produced encouraging results in terms of both quantity (100,000 crater entries in 2 months) and quality. Additional information is contained in the original extended abstract.

  63. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOEpatents

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
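
    A minimal sketch of the local-to-global term-statistics pattern the claim describes follows: each worker computes term counts for its own distinct set of documents in parallel, and the local counts are then contributed to a global set. Function and variable names are illustrative only, not taken from the patent.

    ```python
    # Local term statistics per process, merged into global statistics.
    from collections import Counter
    from multiprocessing import Pool

    def local_term_stats(documents):
        """Term counts for one process's distinct set of documents."""
        counts = Counter()
        for doc in documents:
            counts.update(doc.lower().split())
        return counts

    if __name__ == "__main__":
        doc_sets = [
            ["term statistics for shared systems", "parallel analysis"],
            ["distributed documents", "global term statistics"],
        ]
        with Pool(processes=2) as pool:
            local_sets = pool.map(local_term_stats, doc_sets)

        global_stats = Counter()            # contribute local sets to global
        for local in local_sets:
            global_stats.update(local)

        # A crude "major term set": the most frequent terms overall.
        print(global_stats.most_common(3))
    ```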

  64. Distributed morality in an information society.

    PubMed

    Floridi, Luciano

    2013-09-01

    The phenomenon of distributed knowledge is well-known in epistemic logic. In this paper, a similar phenomenon in ethics, somewhat neglected so far, is investigated, namely distributed morality. The article explains the nature of distributed morality, as a feature of moral agency, and explores the implications of its occurrence in advanced information societies. In the course of the analysis, the concept of infraethics is introduced, in order to refer to the ensemble of moral enablers, which, although morally neutral per se, can significantly facilitate or hinder both positive and negative moral behaviours.

  65. Evaluation of the best fit distribution for partial duration series of daily rainfall in Madinah, western Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.

    2014-09-01

    Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to predict future flood magnitudes from the magnitude and frequency of extreme rainfall events. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Several statistical distributions were applied (i.e., Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, and Log Pearson Type III), and their parameters were estimated using the method of L-moments. Several model selection criteria were also applied, e.g., the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC) and the Anderson-Darling Criterion (ADC). The analysis indicated the Generalized Extreme Value distribution as the best-fit statistical distribution for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
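
    As a sketch of the method-of-L-moments step, the snippet below computes sample L-moments via probability-weighted moments and fits the Extreme Value type I (Gumbel) distribution, one of the study's candidate families; the rainfall values are synthetic stand-ins for a partial duration series.

    ```python
    # Sample L-moments and a Gumbel (EV1) fit by the method of L-moments:
    # lambda_1 = xi + gamma_E * alpha and lambda_2 = alpha * ln 2.
    import numpy as np

    def sample_l_moments(x):
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = x.mean()                                # PWM beta_0
        b1 = np.sum((i - 1) / (n - 1) * x) / n       # PWM beta_1
        return b0, 2 * b1 - b0                       # lambda_1, lambda_2

    rng = np.random.default_rng(42)
    rain = rng.gumbel(loc=20.0, scale=8.0, size=200)

    l1, l2 = sample_l_moments(rain)
    euler_gamma = 0.5772156649
    alpha = l2 / np.log(2)            # Gumbel scale
    xi = l1 - euler_gamma * alpha     # Gumbel location
    print(round(xi, 2), round(alpha, 2))   # should land near (20, 8)
    ```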

  66. Automatic analysis of attack data from distributed honeypot network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, Miroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel

    2013-05-01

    There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them bring valuable information about actual attacks and the techniques used by hackers. The article describes a distributed topology of honeypots, which was developed with a strong orientation towards monitoring of IP telephony traffic. IP telephony servers can easily be exposed to various types of attacks, and without protection this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system for gathering information from all honeypots, it is possible to work with all the information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyses the data from each honeypot. The results of this analysis, together with other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm that classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side to analyse the gathered data.

  67. Determine Neuronal Tuning Curves by Exploring Optimum Firing Rate Distribution for Information Efficiency

    PubMed Central

    Han, Fang; Wang, Zhijie; Fan, Hong

    2017-01-01

    This paper proposed a new method to determine neuronal tuning curves for maximum information efficiency by computing the optimum firing rate distribution. Firstly, we proposed a general definition of information efficiency in terms of mutual information and neuronal energy consumption. The energy consumption is composed of two parts: basic neuronal energy consumption and spike emission energy consumption. A parameter modeling the relative importance of energy consumption is introduced in the definition of information efficiency. Then, we designed a combination of exponential functions to describe the optimum firing rate distribution, based on an analysis of how the mutual information and the energy consumption depend on the shape of the firing rate distribution. Furthermore, we developed a rapid algorithm to search the parameter values of the optimum firing rate distribution function. Finally, we found with the rapid algorithm that a combination of two different exponential functions with two free parameters can describe the optimum firing rate distribution accurately. We also found that if the energy consumption is relatively unimportant (important) compared to the mutual information, or the basic neuronal energy consumption is relatively large (small), the curve of the optimum firing rate distribution will be relatively flat (steep), and the corresponding optimum tuning curve exhibits a sigmoid form if the stimulus distribution is normal. PMID:28270760
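
    The abstract does not give the functional form of its information efficiency; one plausible formalization, written here purely as an assumption, trades mutual information against the two energy terms it names:

    ```latex
    % Hypothetical formalizations of "information efficiency": I(S;R) is
    % the mutual information between stimulus and response, E_0 the basic
    % energy consumption, epsilon the cost per spike, E[r] the mean firing
    % rate, and beta the relative-importance parameter from the abstract.
    \begin{equation}
      \eta \;=\; \frac{I(S;R)}{E_0 + \epsilon\,\mathbb{E}[r]}
      \qquad\text{or}\qquad
      \eta_\beta \;=\; I(S;R) \;-\; \beta\,\bigl(E_0 + \epsilon\,\mathbb{E}[r]\bigr).
    \end{equation}
    ```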

  68. Local coexistence of VO2 phases revealed by deep data analysis

    DOE PAGES

    Strelcov, Evgheni; Ievlev, Anton; Tselev, Alexander; ...

    2016-07-07

    We report a synergistic approach of micro-Raman spectroscopic mapping and deep data analysis to study the distribution of crystallographic phases and ferroelastic domains in a defected Al-doped VO2 microcrystal. Bayesian linear unmixing revealed an uneven distribution of the T phase, which is stabilized by the surface defects and uneven local doping that went undetectable by other classical analysis techniques such as PCA and SIMPLISMA. This work demonstrates the impact of information recovery via statistical analysis and full mapping in spectroscopic studies of vanadium dioxide systems, which is commonly substituted by averaging or single point-probing approaches, both of which suffer from information misinterpretation due to low resolving power.

  69. Research on the novel FBG detection system for temperature and strain field distribution

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-chao; Yang, Jin-hua

    2017-10-01

    In order to collect temperature and strain field distribution information, a novel FBG detection system was designed. The system applies a linearly chirped FBG structure to obtain a large bandwidth. The cover of the novel FBG was designed with a linearly varying thickness so that it responds differently at different locations, allowing the temperature and the strain field distribution to be obtained simultaneously from the reflection spectrum. The structure of the novel FBG cover was designed and its theoretical function calculated; its solution is derived for the strain field distribution. Simulation analysis examined the trends of the temperature and strain field distribution under different strain strengths and action positions, showing that the strain field distribution can be resolved. FOB100 series equipment was used to test temperature in the experiment, and JSM-A10 series equipment was used to test the strain field distribution. The average error of the experimental results was better than 1.1% for temperature and better than 1.3% for strain; individual errors occurred in the test data when the strain was small. The approach is shown to be feasible by theoretical analysis, simulation calculation and experiment, and it is well suited to practical application.

  70. Inverse statistics and information content

    NASA Astrophysics Data System (ADS)

    Ebadi, H.; Bolgorian, Meysam; Jafari, G. R.

    2010-12-01

    Inverse statistics analysis studies the distribution of investment horizons needed to achieve a predefined level of return. This distribution has a maximum, which determines the most likely horizon for gaining a specific return. There is a significant difference between the inverse statistics of financial market data and those of a fractional Brownian motion (fBm), an uncorrelated time series, which makes inverse statistics a suitable criterion for measuring the information content of financial data. In this paper we perform this analysis for the DJIA and S&P500 as two developed markets and the Tehran price index (TEPIX) as an emerging market. We also compare these probability distributions with the fBm distribution to detect when the behavior of the stocks is the same as that of fBm.
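
    An inverse-statistics computation can be sketched in a few lines: for each starting day, find how long the log-price takes to first gain a target return rho, then look at the distribution of those waiting times. The prices below are a synthetic random walk, not the DJIA/S&P500/TEPIX data used in the paper.

    ```python
    # Distribution of investment horizons for a fixed return level rho.
    import numpy as np

    rng = np.random.default_rng(7)
    log_price = np.cumsum(rng.normal(0.0005, 0.01, size=2000))
    rho = 0.02                      # target log-return level (assumed)

    horizons = []
    for t in range(len(log_price) - 1):
        gain = log_price[t + 1:] - log_price[t]
        hits = np.nonzero(gain >= rho)[0]
        if hits.size:
            horizons.append(hits[0] + 1)   # waiting time in days

    counts = np.bincount(np.array(horizons))
    print("most likely horizon (mode):", counts.argmax(), "days")
    ```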

  71. A Methodology to Separate and Analyze a Seismic Wide Angle Profile

    NASA Astrophysics Data System (ADS)

    Weinzierl, Wolfgang; Kopp, Heidrun

    2010-05-01

    General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach of defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to setup and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential in the scope of any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating a global as well as singular misfits of velocities we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem and subsequent resolution of structural heterogeneities through a divergence analysis is a new and simple way of defining a priori space and estimating the a posteriori mean and covariance in singular and general form. The major advantage of a Monte Carlo based approach in our case study is the obtained knowledge of velocity depth distributions. Certainly the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble is limiting the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity depth distributions. Using the wide angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract some key features and properties from the inversion results by including information concerning the confidence level of results.

  72. An investigation on the intra-sample distribution of cotton color by using image analysis

    USDA-ARS?s Scientific Manuscript database

    The colorimeter principle is widely used to measure cotton color. This method provides the sample's color grade, but the result does not include information about the color distribution or any variation within the sample. We conducted an investigation that used an image analysis method to study the ...

  73. [Spatial distribution of occupational disease prevalence in Guangzhou and Foshan city by geographic information system].

    PubMed

    Tan, Q; Tu, H W; Gu, C H; Li, X D; Li, R Z; Wang, M; Chen, S G; Cheng, Y J; Liu, Y M

    2017-11-20

    Objective: To explore the spatial distribution characteristics of occupational disease in Guangzhou and Foshan city in 2006-2013 with a Geographic Information System, and to provide evidence for making control strategies. Methods: Data on occupational disease diagnoses in Guangzhou and Foshan city from 2006 through 2013 were collected and linked to a digital map at the administrative county level with ArcGIS 12.0 software for spatial analysis. Results: The maps of occupational disease and Moran's spatial autocorrelation analysis showed that spatial aggregation existed in the Shunde and Nanhai regions, with Moran's indices of 1.727 and -0.003. Local Moran's I spatial autocorrelation analysis identified the "positive high incidence regions" and the "negative high incidence regions" during 2006-2013. Trend analysis showed that diagnosed cases increased slightly and then declined from west to east, increased markedly from north to south, declined from southwest to northeast, and were high in the middle and low on both sides in the northwest-southeast direction. Conclusions: Occupational disease shows a clear geographical distribution in Guangzhou and Foshan city. Corresponding prevention measures should be made according to the geographical distribution.
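
    Global Moran's I, the autocorrelation statistic used in this study, is simple to compute from scratch. The sketch below uses an invented four-area chain adjacency and invented rates, not the actual county-level weights for Guangzhou/Foshan.

    ```python
    # Global Moran's I: I = (n/W) * sum_ij w_ij z_i z_j / sum_i z_i^2.
    import numpy as np

    def morans_i(x, w):
        z = np.asarray(x, dtype=float) - np.mean(x)
        num = np.sum(w * np.outer(z, z))
        return len(z) / w.sum() * num / np.sum(z**2)

    rates = [4.0, 3.5, 1.0, 0.8]          # disease rates in 4 areas (invented)
    w = np.array([[0, 1, 0, 0],           # symmetric chain adjacency weights
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
    print(round(morans_i(rates, w), 3))   # > 0 suggests spatial clustering
    ```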

  74. Library and Information Science Research Areas: A Content Analysis of Articles from the Top 10 Journals 2007-8

    ERIC Educational Resources Information Center

    Aharony, Noa

    2012-01-01

    The current study seeks to describe and analyze journal research publications in the top 10 Library and Information Science journals from 2007-8. The paper presents a statistical descriptive analysis of authorship patterns (geographical distribution and affiliation) and keywords. Furthermore, it displays a thorough content analysis of keywords and…

  75. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution. In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian analysis approach, and to present the analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The paper describes the likelihood function, followed by the posterior function and the estimations of the point, interval, hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
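
    A minimal Bayesian sketch of the exponential competing-risks setup with independent causes follows. Under one common non-informative choice (prior proportional to 1/lambda_j), the posterior of each cause-specific rate is Gamma(d_j, T), with d_j the failure count for cause j and T the total time on test. All data values are invented; the abstract does not specify its prior or data.

    ```python
    # Posterior sampling for two independent exponential failure causes.
    import numpy as np

    rng = np.random.default_rng(3)
    times = np.array([12.1, 30.5, 7.9, 44.0, 19.3, 25.6])   # failure times
    causes = np.array([0, 1, 0, 1, 1, 0])                   # cause id per unit
    T = times.sum()                                          # total time on test

    post = {}
    for j in (0, 1):
        d_j = int((causes == j).sum())
        post[j] = rng.gamma(shape=d_j, scale=1.0 / T, size=10_000)

    lam_total = post[0] + post[1]
    t = 20.0
    reliability = np.exp(-lam_total * t)     # system survives all risks to t
    print("posterior mean R(20):", reliability.mean().round(3))
    # Crude probability that a failure is due to cause 0:
    print("P(cause 0):", (post[0] / lam_total).mean().round(3))
    ```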

  76. A Development of Nonstationary Regional Frequency Analysis Model with Large-scale Climate Information: Its Application to Korean Watershed

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Young; Kwon, Hyun-Han; Kim, Hung-Soo

    2015-04-01

    The existing regional frequency analysis has disadvantages in that it is difficult to consider geographical characteristics in estimating areal rainfall. In this regard, this study aims to develop a hierarchical-Bayesian-based nonstationary regional frequency analysis in which spatial patterns of the design rainfall with geographical information (e.g., latitude, longitude and altitude) are explicitly incorporated. This study assumes that the parameters of the Gumbel (or GEV) distribution are a function of geographical characteristics within a general linear regression framework. Posterior distributions of the regression parameters are estimated by the Bayesian Markov chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the distributions using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han River watershed. It was found that the proposed Bayesian regional frequency analysis model showed similar results compared to L-moment-based regional frequency analysis. In addition, the model showed an advantage in quantifying the uncertainty of the design rainfall and in estimating the areal rainfall considering geographical information. Finally, a comprehensive discussion of design rainfall in the nonstationary context is presented. KEYWORDS: Regional frequency analysis, Nonstationary, Spatial information, Bayesian. Acknowledgement: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
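
    One plausible form of the regression layer described in the abstract is written below; the exact covariate structure and priors are not given there, so this is an assumed specification, not the authors' model.

    ```latex
    % Gumbel parameters at site s as linear functions of geographic
    % covariates (lat, lon, alt), with regression coefficients sampled
    % by MCMC and the fitted relationships evaluated on DEM grids to map
    % the parameters in space.
    \begin{align}
      y(s) &\sim \mathrm{Gumbel}\bigl(\mu(s),\, \sigma(s)\bigr), \\
      \mu(s) &= \beta_0 + \beta_1\,\mathrm{lat}(s) + \beta_2\,\mathrm{lon}(s)
                + \beta_3\,\mathrm{alt}(s), \\
      \log \sigma(s) &= \gamma_0 + \gamma_1\,\mathrm{lat}(s)
                + \gamma_2\,\mathrm{lon}(s) + \gamma_3\,\mathrm{alt}(s).
    \end{align}
    ```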

  77. Chase: Control of Heterogeneous Autonomous Sensors for Situational Awareness

    DTIC Science & Technology

    2016-08-03

    ... remained the discovery and analysis of new foundational methodology for information collection and fusion that exercises rigorous feedback control over... simultaneously achieve quantified information and physical objectives... In the general area of novel stochastic systems analysis it seems appropriate to mention the pioneering work on non-Bayesian distributed learning

  78. Test results management and distributed cognition in electronic health record-enabled primary care.

    PubMed

    Smith, Michael W; Hughes, Ashley M; Brown, Charnetta; Russo And, Elise; Giardina, Traber D; Mehta, Praveen; Singh, Hardeep

    2018-06-01

    Managing abnormal test results in primary care involves coordination across various settings. This study identifies how primary care teams manage test results in a large, computerized healthcare system in order to inform health information technology requirements for test results management and other distributed healthcare services. At five US Veterans Health Administration facilities, we interviewed 37 primary care team members, including 16 primary care providers, 12 registered nurses, and 9 licensed practical nurses. We performed content analysis using a distributed cognition approach, identifying patterns of information transmission across people and artifacts (e.g. electronic health records). Results illustrate challenges (e.g. information overload) as well as strategies used to overcome challenges. Various communication paths were used. Some team members served as intermediaries, processing information before relaying it. Artifacts were used as memory aids. Health information technology should address the risks of distributed work by supporting awareness of team and task status for reliable management of results.

  19. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision-makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation for the case where, in addition to its fractiles, the distribution is known to be continuous, and work through full examples to illustrate the approach.
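    As the abstract notes, the maximum entropy density under fractile constraints is flat over each fractile interval. A minimal sketch with hypothetical elicited values makes that concrete: the mass between consecutive fractiles is spread uniformly over the interval.

    ```python
    import numpy as np

    # Elicited fractiles (hypothetical): P(X <= x_k) = p_k, plus assumed bounds.
    x = np.array([0.0, 10.0, 25.0, 40.0, 100.0])   # bounds and fractile points
    p = np.array([0.0, 0.25, 0.50, 0.75, 1.00])    # cumulative probabilities

    def density(q):
        # Mass (p[k+1] - p[k]) spread uniformly over the interval (x[k], x[k+1])
        k = np.clip(np.searchsorted(x, q, side="right") - 1, 0, len(x) - 2)
        return (p[k + 1] - p[k]) / (x[k + 1] - x[k])

    print(density(np.array([5.0, 20.0, 60.0])))    # flat within each interval

    # Differential entropy of the piecewise-uniform fit
    w = np.diff(p)
    print("entropy:", -np.sum(w * np.log(w / np.diff(x))))
    ```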

  20. Abstracts versus Full Texts and Patents: A Quantitative Analysis of Biomedical Entities

    NASA Astrophysics Data System (ADS)

    Müller, Bernd; Klinger, Roman; Gurulingappa, Harsha; Mevissen, Heinz-Theodor; Hofmann-Apitius, Martin; Fluck, Juliane; Friedrich, Christoph M.

    In information retrieval, named entity recognition gives the opportunity to apply semantic search in domain specific corpora. Recently, more full text patents and journal articles became freely available. As the information distribution amongst the different sections is unknown, an analysis of the diversity is of interest.

  1. Age of onset of recurrent respiratory papillomatosis: a distribution analysis.

    PubMed

    San Giorgi, M R M; van den Heuvel, E R; Tjon Pian Gi, R E A; Brunings, J W; Chirila, M; Friedrich, G; Golusinski, W; Graupp, M; Horcasitas Pous, R A; Ilmarinen, T; Jackowska, J; Koelmel, J C; Ferran Vilà, F; Weichbold, V; Wierzbicka, M; Dikkers, F G

    2016-10-01

    Distribution of age of onset of recurrent respiratory papillomatosis (RRP) is generally described as bimodal, with peaks at approximately 5 years and 30 years. This assumption has never been scientifically confirmed, and authors tend to refer to an article that does not describe the distribution. Knowledge of the distribution of age of onset is important for virological and epidemiological comprehension. The objective of this study was to determine the distribution of age of onset of RRP in a large international sample. Cross-sectional distribution analysis. Laryngologists from 12 European hospitals provided information on date of birth and date of onset of all their RRP patients treated between 1998 and 2012. Centers that exclusively treated either patients with juvenile-onset RRP or patients with adult-onset RRP, or were less accessible for one of these groups, were excluded to prevent skewness. A mixture model was implemented to describe the distribution of age of onset, and the best-fitting model was selected using the Bayesian information criterion. Six hundred and thirty-nine patients were included in the analysis. Age of onset was described by a three-component mixture distribution with lognormally distributed components, with median ages of 7, 35 and 64 years. The distribution of age of onset of RRP thus shows three peaks: in addition to the already adopted idea of age peaks at paediatric and adult age, there is an additional peak around the age of 64. © 2015 John Wiley & Sons Ltd.
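    A lognormal mixture is simply a Gaussian mixture on the log scale, so the model selection described here can be sketched with standard tools. Below is a minimal sketch with synthetic ages shaped to resemble the reported medians; the component parameters are invented.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Synthetic ages of onset from three lognormal components (medians ~7, 35, 64)
    ages = np.concatenate([rng.lognormal(np.log(7), 0.5, 250),
                           rng.lognormal(np.log(35), 0.3, 250),
                           rng.lognormal(np.log(64), 0.15, 139)])
    log_ages = np.log(ages).reshape(-1, 1)

    # Fit 1..5 components on the log scale and keep the lowest BIC
    fits = {k: GaussianMixture(n_components=k, random_state=0).fit(log_ages)
            for k in range(1, 6)}
    best_k = min(fits, key=lambda k: fits[k].bic(log_ages))
    print("components selected by BIC:", best_k)   # usually 3 with these settings
    medians = np.sort(np.exp(fits[best_k].means_.ravel()))
    print("component medians (years):", medians.round(1))
    ```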

  2. A comparison of two methods for expert elicitation in health technology assessments.

    PubMed

    Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken

    2016-07-26

    When data needed to inform parameters in decision models are lacking, formal elicitation of expert judgement can be used to characterise parameter uncertainty. Although numerous methods for eliciting expert opinion as probability distributions exist, there is little research to suggest whether one method is more useful than any other. This study had three objectives: (i) to obtain subjective probability distributions characterising parameter uncertainty in the context of a health technology assessment; (ii) to compare two elicitation methods by eliciting the same parameters in different ways; (iii) to collect the experts' subjective preferences for the different elicitation methods used. Twenty-seven clinical experts were invited to participate in an elicitation exercise to inform a published model-based cost-effectiveness analysis of alternative treatments for prostate cancer. Participants were individually asked to express their judgements as probability distributions using two different methods - the histogram and hybrid elicitation methods - presented in a random order. Individual distributions were mathematically aggregated across experts, with and without weighting. The resulting combined distributions were used in the probabilistic analysis of the decision model, and mean incremental cost-effectiveness ratios (ICERs) and expected values of perfect information (EVPI) were calculated for each method and compared with the original cost-effectiveness analysis. Scores on the ease of use of the two methods, and on the extent to which the probability distributions obtained from each method accurately reflected the expert's opinion, were also recorded. Six experts completed the task. Mean ICERs from the probabilistic analysis ranged from £162,600 to £175,500 per quality-adjusted life-year (QALY) depending on the elicitation and weighting methods used. Compared to having no information, use of expert opinion decreased decision uncertainty: the EVPI value at the £30,000 per QALY threshold decreased by 74-86% relative to the original cost-effectiveness analysis. Experts indicated that the histogram method was easier to use, but attributed a perception of more accuracy to the hybrid method. Inclusion of expert elicitation can decrease decision uncertainty. Here, the choice of method did not affect the overall cost-effectiveness conclusions, but researchers intending to use expert elicitation need to be aware of the impact different methods could have.
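    The mathematical aggregation step, with and without weights, can be illustrated with a linear opinion pool. A minimal sketch follows; the Beta fits and weights are hypothetical stand-ins for the elicited distributions, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import beta

    grid = np.linspace(0.0, 1.0, 101)           # support of the uncertain parameter
    dx = grid[1] - grid[0]

    # Each expert's elicited judgement fitted as a Beta density (hypothetical fits)
    experts = np.array([beta.pdf(grid, 4, 8),
                        beta.pdf(grid, 6, 6),
                        beta.pdf(grid, 3, 10)])
    weights = np.array([0.5, 0.3, 0.2])         # e.g. calibration-based weights

    pooled_equal = experts.mean(axis=0)         # unweighted linear opinion pool
    pooled_weighted = weights @ experts         # weighted pool
    for pool in (pooled_equal, pooled_weighted):
        print("integral:", (pool.sum() * dx).round(3))  # both remain proper densities
    ```

    Either pooled density can then be sampled directly in the probabilistic analysis of the decision model.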

  3. Local structure studies of materials using pair distribution function analysis

    NASA Astrophysics Data System (ADS)

    Peterson, Joseph W.

    A collection of pair distribution function studies on various materials is presented in this dissertation. In each case, local structure information of interest pushes the current limits of what these studies can accomplish. The goal is to provide insight into the individual material behaviors as well as to investigate ways to expand the current limits of PDF analysis. Where possible, I provide a framework for how PDF analysis might be applied to a wider set of material phenomena. Throughout the dissertation, I discuss i) the capabilities of the PDF method to provide information pertaining to a material's structure and properties, ii) current limitations in the conventional approach to PDF analysis, iii) possible solutions to overcome certain limitations in PDF analysis, and iv) suggestions for future work to expand and improve the capabilities of PDF analysis.
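    For readers unfamiliar with the quantity at the heart of PDF analysis, a minimal sketch of a histogram estimate of g(r) from atomic coordinates in a periodic box is shown below; the structure is synthetic and uncorrelated, so g(r) is approximately 1 everywhere.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    box, n_atoms = 20.0, 500                     # box edge (arbitrary units), atom count
    pos = rng.uniform(0, box, size=(n_atoms, 3)) # uncorrelated "structure"

    # Minimum-image pair distances under periodic boundaries
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n_atoms, k=1)]

    counts, edges = np.histogram(dist, bins=100, range=(0, box / 2))
    r = 0.5 * (edges[:-1] + edges[1:])
    shell = 4 * np.pi * r ** 2 * np.diff(edges)  # ideal-gas shell volume
    rho = n_atoms / box ** 3
    g_r = counts / (shell * rho * n_atoms / 2)   # normalise by expected pair count
    print(g_r[5:10].round(2))                    # ~1 for this random structure
    ```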

  4. A new space-time information expression and analysis approach based on 3S technology: a case study of China's coastland

    NASA Astrophysics Data System (ADS)

    Cao, Bao; Luo, Hong; Gao, Zhenji

    2009-10-01

    Space-time Information Expression and Analysis (SIEA) uses graphic images to organize information units according to serial distribution rules with a variety of arrangements, combined with the powerful data-processing capabilities of information technology to analyze and integrate those units. In this paper, a new SIEA approach is proposed and its model constructed, and the basic units, methodology and steps of SIEA are discussed. Taking China's coastland as an example, the new SIEA approach was applied to air humidity, rainfall and surface temperature parameters from 1981 to 2000. The case study shows that the parameters change from month to month but vary little from year to year. In terms of spatial distribution, the parameters differ significantly between the north and the south of China's coastland. The new SIEA approach proposed in this paper is not only intuitive and visual, but also solves the difficulty of expressing the space-time distribution of biophysical parameters with traditional charts and tables. It can reveal the complexity and the natural laws behind the movement of phenomena, and it can analyze those phenomena quantitatively, inheriting the advantages of traditional graphic ways of thinking. SIEA provides a new space-time analysis and expression approach, based on integrated 3S technologies, for research in Earth system science.

  5. Large-scale implementation of disease control programmes: a cost-effectiveness analysis of long-lasting insecticide-treated bed net distribution channels in a malaria-endemic area of western Kenya-a study protocol.

    PubMed

    Gama, Elvis; Were, Vincent; Ouma, Peter; Desai, Meghna; Niessen, Louis; Buff, Ann M; Kariuki, Simon

    2016-11-21

    Historically, Kenya has used various distribution models for long-lasting insecticide-treated bed nets (LLINs) with variable results in population coverage. The models presently vary widely in scale, target population and strategy. There is limited information to determine the best combination of distribution models that would lead to sustained high coverage while being operationally efficient and cost-effective. Standardised cost information is needed in combination with programme effectiveness estimates to judge the efficiency of LLIN distribution models and options for improvement in implementing malaria control programmes. The study aims to address this information gap, estimating the distribution cost and effectiveness of different LLIN distribution models and comparing them in an economic evaluation. Cost and coverage will be evaluated for 5 different distribution models in Busia County, an area of perennial malaria transmission in western Kenya. Cost data will be collected retrospectively from health facilities, the Ministry of Health, donors and distributors. Programme-effectiveness data, defined as the number of people with access to an LLIN per 1000 population, will be collected through triangulation of data from a nationally representative, cross-sectional malaria survey, a cross-sectional survey administered to a subsample of beneficiaries in Busia County and LLIN distributors' records. Descriptive statistics and regression analysis will be used for the evaluation. A cost-effectiveness analysis will be performed from a health-systems perspective, and cost-effectiveness ratios will be calculated using bootstrapping techniques. The study has been evaluated and approved by Kenya Medical Research Institute, Scientific and Ethical Review Unit (SERU number 2997). All participants will provide written informed consent. The findings of this economic evaluation will be disseminated through peer-reviewed publications. Published by the BMJ Publishing Group Limited.
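    The bootstrapped cost-effectiveness ratios mentioned in the protocol can be sketched in a few lines. The data below are synthetic per-cluster costs and effects invented for illustration; only the resampling logic carries over.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic per-cluster cost (USD) and effect (people with LLIN access per 1000)
    cost_a, eff_a = rng.normal(5000, 800, 40), rng.normal(620, 60, 40)
    cost_b, eff_b = rng.normal(6500, 900, 40), rng.normal(700, 55, 40)

    def icer(ca, ea, cb, eb):
        # Incremental cost per unit of incremental effect
        return (cb.mean() - ca.mean()) / (eb.mean() - ea.mean())

    boot = []
    for _ in range(5000):
        ia, ib = rng.integers(0, 40, 40), rng.integers(0, 40, 40)
        boot.append(icer(cost_a[ia], eff_a[ia], cost_b[ib], eff_b[ib]))

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"ICER point estimate: {icer(cost_a, eff_a, cost_b, eff_b):.1f}")
    print(f"95% bootstrap interval: ({lo:.1f}, {hi:.1f})")
    ```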

  6. Spatially variable stage-driven groundwater-surface water interaction inferred from time-frequency analysis of distributed temperature sensing data

    USGS Publications Warehouse

    Mwakanyamale, Kisa; Slater, Lee; Day-Lewis, Frederick D.; Elwaseif, Mehrez; Johnson, Carole D.

    2012-01-01

    Characterization of groundwater-surface water exchange is essential for improving understanding of contaminant transport between aquifers and rivers. Fiber-optic distributed temperature sensing (FODTS) provides rich spatiotemporal datasets for quantitative and qualitative analysis of groundwater-surface water exchange. We demonstrate how time-frequency analysis of FODTS and synchronous river stage time series from the Columbia River adjacent to the Hanford 300-Area, Richland, Washington, provides spatial information on the strength of stage-driven exchange of uranium-contaminated groundwater in response to subsurface heterogeneity. Although used in previous studies, the stage-temperature correlation coefficient proved an unreliable indicator of the stage-driven forcing on groundwater discharge in the presence of other factors influencing river water temperature. In contrast, S-transform analysis of the stage and FODTS data definitively identifies the spatial distribution of discharge zones and provides information on the dominant forcing periods (≥2 d) of the complex dam operations driving stage fluctuations, and hence groundwater-surface water exchange, at the 300-Area.
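    The S-transform used here is a time-frequency transform with a frequency-scaled Gaussian window. A minimal sketch of a discrete Stockwell transform on a synthetic stage record, under my own implementation choices, recovers the dominant forcing period.

    ```python
    import numpy as np

    def s_transform(x):
        """Return S[n, t] for positive frequency indices n = 1..N//2."""
        N = len(x)
        X = np.fft.fft(x)
        S = np.zeros((N // 2, N), dtype=complex)
        m = np.arange(N)
        for n in range(1, N // 2 + 1):
            # Gaussian voice window, symmetric about m = 0 with wrap-around
            off = (m + N // 2) % N - N // 2
            gauss = np.exp(-2 * np.pi ** 2 * off ** 2 / n ** 2)
            S[n - 1] = np.fft.ifft(np.roll(X, -n) * gauss)
        return S

    # Hourly 'stage' with a 2-day periodicity plus noise (synthetic)
    t = np.arange(24 * 32)                    # 32 days of hourly samples
    rng = np.random.default_rng(4)
    stage = np.sin(2 * np.pi * t / 48) + 0.3 * rng.normal(size=t.size)

    S = s_transform(stage)
    amp = np.abs(S).mean(axis=1)              # time-averaged amplitude per voice
    n_peak = np.argmax(amp) + 1
    print("dominant period (hours):", t.size / n_peak)   # ~48 h (2 days)
    ```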

  7. Inverse analysis of aerodynamic loads from strain information using structural models and neural networks

    NASA Astrophysics Data System (ADS)

    Wada, Daichi; Sugimoto, Yohei

    2017-04-01

    Aerodynamic loads on aircraft wings are one of the key parameters to be monitored for reliable and effective aircraft operations and management. Flight data on the aerodynamic loads would be used onboard to control the aircraft, and accumulated data would be used for condition-based maintenance and as feedback for fatigue and critical load modeling. Effective sensing techniques such as fiber-optic distributed sensing have been developed and have demonstrated promising capability for monitoring structural responses, i.e., strains on the surface of aircraft wings. Building on these techniques, load identification methods for structural health monitoring are expected to be established. The typical inverse analysis for load identification using strains calculates the loads in the discrete form of concentrated forces; however, the distributed form of the loads is essential for accurate and reliable estimation of the critical stress at structural parts. In this study, we demonstrate an inverse analysis to identify distributed loads from measured strain information. The introduced technique calculates aerodynamic loads not in a discrete but in a distributed manner based on a finite element model. To verify the technique through numerical simulations, we apply static aerodynamic loads to a flat panel model and conduct the inverse identification of the load distributions. We take two approaches to build the inverse system between loads and strains: the first uses structural models and the second uses neural networks. We compare the performance of the two approaches and discuss the effect of the amount of strain sensing information.
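    One generic stand-in for the structural-model route is a regularised least-squares inversion of a strain-to-load influence matrix. The sketch below uses a synthetic smooth kernel in place of a real finite element model; the dimensions, kernel and penalty are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_loads, n_strain = 30, 12

    # Influence matrix A: strain at each gauge per unit load at each station
    # (a smooth synthetic kernel standing in for a finite element model)
    xg = np.linspace(0, 1, n_strain)[:, None]
    xl = np.linspace(0, 1, n_loads)[None, :]
    A = np.exp(-((xg - xl) / 0.15) ** 2)

    q_true = np.sin(np.pi * xl.ravel()) ** 2       # distributed load (synthetic)
    strain = A @ q_true + 1e-3 * rng.normal(size=n_strain)

    # Tikhonov inverse: q = argmin ||A q - strain||^2 + lam ||L q||^2
    lam = 1e-4
    L2 = np.diff(np.eye(n_loads), n=2, axis=0)     # curvature penalty keeps q smooth
    q_hat = np.linalg.solve(A.T @ A + lam * L2.T @ L2, A.T @ strain)
    print("max abs error:", np.abs(q_hat - q_true).max().round(3))
    ```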

  8. An object-oriented software approach for a distributed human tracking motion system

    NASA Astrophysics Data System (ADS)

    Micucci, Daniela L.

    2003-06-01

    Tracking is a composite job involving the co-operation of autonomous activities which exploit a complex information model and rely on a distributed architecture. Both information and activities must be classified and related in several dimensions: abstraction levels (what is modelled and how information is processed); topology (where the modelled entities are); time (when entities exist); strategy (why something happens); responsibilities (who is in charge of processing the information). A proper Object-Oriented analysis and design approach leads to a modular architecture where information about conceptual entities is modelled at each abstraction level via classes and intra-level associations, whereas inter-level associations between classes model the abstraction process. Both information and computation are partitioned according to level-specific topological models. They are also placed in a temporal framework modelled by suitable abstractions. Domain-specific strategies control the execution of the computations. Computational components perform both intra-level processing and inter-level information conversion. The paper overviews the phases of the analysis and design process, presents the major concepts at each abstraction level, and shows how the resulting design turns into a modular, flexible and adaptive architecture. Finally, the paper sketches how the conceptual architecture can be deployed onto a concrete distributed architecture by relying on an experimental framework.

  9. Distributed collaborative environments for predictive battlespace awareness

    NASA Astrophysics Data System (ADS)

    McQuay, William K.

    2003-09-01

    The past decade has produced significant changes in the conduct of military operations: asymmetric warfare, the reliance on dynamic coalitions, stringent rules of engagement, increased concern about collateral damage, and the need for sustained air operations. Mission commanders need to assimilate a tremendous amount of information, make quick-response decisions, and quantify the effects of those decisions in the face of uncertainty. Situational assessment is crucial in understanding the battlespace. Decision support tools in a distributed collaborative environment offer the capability of decomposing complex multitask processes and distributing them over a dynamic set of execution assets that include modeling, simulation, and analysis tools. Decision support technologies can semi-automate activities, such as analysis and planning, that have a reasonably well-defined process, and provide machine-level interfaces to refine the myriad of information that the commander must fuse. Collaborative environments provide the framework, and integrate models, simulations, and domain-specific decision support tools, for the sharing and exchanging of data, information, knowledge, and actions. This paper describes ongoing AFRL research efforts in applying distributed collaborative environments to predictive battlespace awareness.

  10. Autonomous Information Fading and Provision to Achieve High Response Time in Distributed Information Systems

    NASA Astrophysics Data System (ADS)

    Lu, Xiaodong; Arfaoui, Helene; Mori, Kinji

    In the highly dynamic electronic commerce environment, the need for adaptability and rapid response time in information service systems has become increasingly important. In order to cope with the continuously changing conditions of service provision and utilization, the Faded Information Field (FIF) has been proposed. FIF is a distributed information service system architecture, sustained by push/pull mobile agents, that brings high assurance of services through recursive, demand-oriented provision of the most popular information closer to the users, trading off the cost of information service allocation against the cost of access. In this paper, based on an analysis of the relationship among user distribution, information provision and access time, we propose an FIF design technology that resolves the competing requirements of users and providers and improves users' access time. In addition, to achieve dynamic load balancing under changing user preferences, an autonomous information reallocation technology is proposed. We demonstrate the effectiveness of the proposed technology through simulation and comparison with a conventional system.

  11. Analytic Considerations and Design Basis for the IEEE Distribution Test Feeders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, K. P.; Mather, B. A.; Pal, B. C.

    For nearly 20 years the Test Feeder Working Group of the Distribution System Analysis Subcommittee has been developing openly available distribution test feeders for use by researchers. The purpose of these test feeders is to provide models of distribution systems that reflect the wide diversity in design and their various analytic challenges. Because of their utility and accessibility, the test feeders have been used for a wide range of research, some of which has been outside the original scope of intended uses. This paper provides an overview of the existing distribution feeder models and clarifies the specific analytic challenges that they were originally designed to examine. Additionally, the paper will provide guidance on which feeders are best suited for various types of analysis. The purpose of this paper is to provide the original intent of the Working Group and to provide the information necessary so that researchers may make an informed decision on which of the test feeders are most appropriate for their work.

  12. Analytic Considerations and Design Basis for the IEEE Distribution Test Feeders

    DOE PAGES

    Schneider, K. P.; Mather, B. A.; Pal, B. C.; ...

    2017-10-10

    For nearly 20 years the Test Feeder Working Group of the Distribution System Analysis Subcommittee has been developing openly available distribution test feeders for use by researchers. The purpose of these test feeders is to provide models of distribution systems that reflect the wide diversity in design and their various analytic challenges. Because of their utility and accessibility, the test feeders have been used for a wide range of research, some of which has been outside the original scope of intended uses. This paper provides an overview of the existing distribution feeder models and clarifies the specific analytic challenges that they were originally designed to examine. Additionally, the paper will provide guidance on which feeders are best suited for various types of analysis. The purpose of this paper is to provide the original intent of the Working Group and to provide the information necessary so that researchers may make an informed decision on which of the test feeders are most appropriate for their work.

  13. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because there are variations of the conductivity distributions inside the body, EIT presents multi-channel data. In order to obtain all the information contained in different locations of tissue, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (the individual conductivity distributions). Using ICA, the signal subspace is then decomposed into statistically independent components. The individual conductivity distributions can be reconstructed via the sensitivity theorem. Computer simulations show that the full information contained in the multiple conductivity distributions can be obtained by this method.
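    The generic decomposition step underlying this approach can be sketched with FastICA on synthetic multi-channel mixtures; the sources, mixing matrix and noise level below are invented for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(6)
    t = np.linspace(0, 8, 2000)
    # Two independent "conductivity variation" sources (synthetic)
    sources = np.column_stack([np.sin(2 * np.pi * 1.0 * t),
                               np.sign(np.sin(2 * np.pi * 0.3 * t))])
    mixing = np.array([[1.0, 0.5],
                       [0.4, 1.2],
                       [0.8, 0.7]])              # 3 measurement channels
    observed = sources @ mixing.T + 0.02 * rng.normal(size=(t.size, 3))

    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(observed)      # estimated independent components
    # Correlation with the true sources (up to sign/permutation ambiguity)
    corr = np.corrcoef(recovered.T, sources.T)[:2, 2:]
    print(np.abs(corr).round(2))
    ```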

  14. Review and Analysis of Curricula for Occupations in Food Processing and Distribution. Information Series No. 32.

    ERIC Educational Resources Information Center

    Lewis, Wiley B.

    A review and analysis of Educational Resources Information Center (ERIC) publications and non-ERIC publications was made to assess availability and identify major findings, promising developments, strategies, and methodological strengths and weaknesses which exist in curricula designed for preparing food industry workers. Project national figures…

  15. Influenza Virus Database (IVDB): an integrated information resource and analysis platform for influenza virus research.

    PubMed

    Chang, Suhua; Zhang, Jiajie; Liao, Xiaoyun; Zhu, Xinxing; Wang, Dahai; Zhu, Jiang; Feng, Tao; Zhu, Baoli; Gao, George F; Wang, Jian; Yang, Huanming; Yu, Jun; Wang, Jing

    2007-01-01

    Frequent outbreaks of highly pathogenic avian influenza and the increasing data available for comparative analysis require a central database specialized in influenza viruses (IVs). We have established the Influenza Virus Database (IVDB) to integrate information and create an analysis platform for genetic, genomic, and phylogenetic studies of the virus. IVDB hosts complete genome sequences of influenza A virus generated by Beijing Institute of Genomics (BIG) and curates all other published IV sequences after expert annotation. Our Q-Filter system classifies and ranks all nucleotide sequences into seven categories according to sequence content and integrity. IVDB provides a series of tools and viewers for comparative analysis of the viral genomes, genes, genetic polymorphisms and phylogenetic relationships. A search system has been developed for users to retrieve a combination of different data types by setting search options. To facilitate analysis of global viral transmission and evolution, the IV Sequence Distribution Tool (IVDT) has been developed to display the worldwide geographic distribution of chosen viral genotypes and to couple genomic data with epidemiological data. The BLAST, multiple sequence alignment and phylogenetic analysis tools were integrated for online data analysis. Furthermore, IVDB offers instant access to pre-computed alignments and polymorphisms of IV genes and proteins, and presents the results as SNP distribution plots and minor allele distributions. IVDB is publicly available at http://influenza.genomics.org.cn.

  16. Two dimensional Fourier transform methods for fringe pattern analysis

    NASA Astrophysics Data System (ADS)

    Sciammarella, C. A.; Bhat, G.

    An overview of the use of FFTs for fringe pattern analysis is presented, with emphasis on fringe patterns containing displacement information. The techniques are illustrated via analysis of the displacement and strain distributions in the direction perpendicular to the loading, in a disk under diametral compression. The experimental strain distribution is compared to the theoretical, and the agreement is found to be excellent in regions where the elasticity solution models well the actual problem.
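    The classic FFT route to fringe analysis isolates one carrier sideband in the 2-D spectrum, shifts it to the origin, and reads the phase (proportional to displacement) off the inverse transform. A minimal sketch on a synthetic fringe pattern follows; the carrier frequency, window size and phase field are invented for the example.

    ```python
    import numpy as np

    # Synthetic fringe pattern: carrier plus a smooth phase (displacement) field
    N = 256
    y, x = np.mgrid[0:N, 0:N]
    f0 = 16                                      # carrier fringes across the field
    phase = 3.0 * np.exp(-((x - N/2)**2 + (y - N/2)**2) / (2 * 40.0**2))
    I = 1 + np.cos(2 * np.pi * f0 * x / N + phase)

    F = np.fft.fftshift(np.fft.fft2(I))
    # Window out the +f0 sideband in the centred spectrum
    u = np.arange(N) - N // 2
    U, V = np.meshgrid(u, u)
    side = np.where((np.abs(U - f0) < 8) & (np.abs(V) < 8), F, 0)
    side = np.roll(side, -f0, axis=1)            # shift the carrier to zero frequency
    wrapped = np.angle(np.fft.ifft2(np.fft.ifftshift(side)))
    print("peak recovered phase:", wrapped[N//2, N//2].round(2))   # ~3.0 rad
    ```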

  17. Using classification tree analysis to predict oak wilt distribution in Minnesota and Texas

    Treesearch

    Marla c. Downing; Vernon L. Thomas; Jennifer Juzwik; David N. Appel; Robin M. Reich; Kim Camilli

    2008-01-01

    We developed a methodology and compared results for predicting the potential distribution of Ceratocystis fagacearum (causal agent of oak wilt), in both Anoka County, MN, and Fort Hood, TX. The Potential Distribution of Oak Wilt (PDOW) utilizes a binary classification tree statistical technique that incorporates: geographical information systems (GIS...

  18. Development of Distributed System for Electronic Business Based on Java-Technologies

    ERIC Educational Resources Information Center

    Kintonova, Aliya Zh.; Andassova, Bakkaisha Z.; Ermaganbetova, Madina A.; Maikibaeva, Elmira K.

    2016-01-01

    This paper describes the results of studies on the peculiarities of distributed information systems, and the research of distributed systems technology development. The paper also provides the results of the analysis of E-business market research in the Republic of Kazakhstan. The article briefly describes the implementation of a possible solution…

  19. Distributed representations in memory: Insights from functional brain imaging

    PubMed Central

    Rissman, Jesse; Wagner, Anthony D.

    2015-01-01

    Forging new memories for facts and events, holding critical details in mind on a moment-to-moment basis, and retrieving knowledge in the service of current goals all depend on a complex interplay between neural ensembles throughout the brain. Over the past decade, researchers have increasingly leveraged powerful analytical tools (e.g., multi-voxel pattern analysis) to decode the information represented within distributed fMRI activity patterns. In this review, we discuss how these methods can sensitively index neural representations of perceptual and semantic content, and how leverage on the engagement of distributed representations provides unique insights into distinct aspects of memory-guided behavior. We emphasize that, in addition to characterizing the contents of memories, analyses of distributed patterns shed light on the processes that influence how information is encoded, maintained, or retrieved, and thus inform memory theory. We conclude by highlighting open questions about memory that can be addressed through distributed pattern analyses. PMID:21943171

  20. Industry sector analysis: The profile of the market for water and wastewater pollution control systems (the Philippines). Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda, A.L.

    1990-11-01

    The market survey covers the water and wastewater pollution control systems market in the Philippines. The analysis contains statistical and narrative information on projected market demand, end-users; receptivity of Philippine consumers to U.S. products; the competitive situation, and market access (tariffs, non-tariff barriers, standards, taxes, distribution channels). It also contains key contact information.

  1. Development of tiger habitat suitability model using geospatial tools-a case study in Achankmar Wildlife Sanctuary (AMWLS), Chhattisgarh India.

    PubMed

    Singh, R; Joshi, P K; Kumar, M; Dash, P P; Joshi, B D

    2009-08-01

    Geospatial tools supported by an ancillary geo-database and extensive fieldwork regarding the distribution of the tiger and its prey in Achankmar Wildlife Sanctuary (AMWLS) were used to build a tiger habitat suitability model. This consists of a quantitative geographical information system (GIS) based approach using field parameters and spatial thematic information. Estimates of tiger sightings, prey sightings and predicted distribution, together with contextual environmental data including terrain, road network, settlement and drainage surfaces, were used to develop the model. Eight variables in the dataset, viz. forest cover type, forest cover density, slope, aspect, altitude, and distance from road, settlement and drainage, were taken as suitable proxies and used as independent variables in the analysis. Principal component analysis and binomial multiple logistic regression were used for the statistical treatment of the collected habitat parameters and the independent variables, respectively. The assessment showed strong expert agreement between the predicted and observed suitable areas. A combination of the generated information and published literature was also used in building the habitat suitability map for the tiger. The modeling approach takes the habitat preference parameters of the tiger and the potential distribution of prey species into account. For assessing the potential distribution of prey species, independent suitability models were developed and validated with ground truth. It is envisaged that inclusion of the prey distribution probability strengthens the model when a key species is in question. The results of the analysis indicate that tigers occur throughout the sanctuary and provide important baseline information for population modeling and natural resource management in the wildlife sanctuary. The development and application of similar models can help in better management of protected areas of national interest.
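    The regression step of such a model can be sketched generically: a binomial logistic regression turns gridded environmental layers into a per-pixel suitability score. Everything below, including the predictor names, coefficients and presence rule, is synthetic and hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 2000
    # Synthetic per-pixel predictors (stand-ins for the eight thematic layers)
    X = np.column_stack([
        rng.uniform(0.2, 1.1, n),     # altitude (km)
        rng.uniform(0.0, 3.5, n),     # slope (tens of degrees)
        rng.uniform(0.0, 5.0, n),     # distance to road (km)
        rng.uniform(0.0, 5.0, n),     # distance to settlement (km)
    ])
    # Synthetic presence: favours remoteness and moderate altitude
    logit = -4 + 2.0 * X[:, 2] + 1.0 * X[:, 3] - 4.0 * np.abs(X[:, 0] - 0.6)
    presence = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression(max_iter=1000).fit(X, presence)
    suitability = model.predict_proba(X)[:, 1]   # per-pixel suitability score
    print("mean suitability in remote pixels:",
          suitability[X[:, 2] > 4].mean().round(2))
    ```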

  2. Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-08-01

    The paper considers gossip distributed estimation of a (static) distributed random field (i.e., a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow with the distributed estimator remaining consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator remains consistent.
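    The consensus-plus-innovations structure the paper analyses can be sketched on a small synthetic network. The graph, observation model and gain schedules below are my own illustrative choices, not the paper's; the point is only the two-term update, in which each sensor fuses neighbours' estimates (consensus) with its own local measurement (innovation).

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    theta = np.array([1.0, -2.0, 0.5])           # unknown global parameter
    n_agents = 6
    # Ring communication graph (adjacency matrix)
    A = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        A[i, (i - 1) % n_agents] = A[i, (i + 1) % n_agents] = 1.0

    # Each agent sees one component only: locally unobservable, globally observable
    H = [np.eye(3)[i % 3][None, :] for i in range(n_agents)]

    x = np.zeros((n_agents, 3))                  # local estimates
    for t in range(1, 5001):
        beta, alpha = 0.3, 2.0 / t               # consensus vs innovation gains
        y = [H[i] @ theta + 0.5 * rng.normal(size=1) for i in range(n_agents)]
        x_new = x.copy()
        for i in range(n_agents):
            consensus = sum(A[i, j] * (x[j] - x[i]) for j in range(n_agents))
            innovation = H[i].T @ (y[i] - H[i] @ x[i])
            x_new[i] = x[i] + beta * consensus + alpha * innovation.ravel()
        x = x_new

    print("true theta:", theta)
    print(x.round(2))                            # all agents end up near theta
    ```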

  3. Variation Principles and Applications in the Study of Cell Structure and Aging

    NASA Technical Reports Server (NTRS)

    Economos, Angelos C.; Miquel, Jaime; Ballard, Ralph C.; Johnson, John E., Jr.

    1981-01-01

    In this report we have attempted to show that "some reality lies concealed in biological variation". This "reality" has its principles, laws, mechanisms, and rules, only a few of which we have sketched. A related idea we pursued was that important information may be lost in the process of ignoring frequency distributions of physiological variables (as is customary in experimental physiology and gerontology). We suggested that it may be advantageous to expand one's "statistical field of vision" beyond simple averages +/- standard deviations. Indeed, frequency distribution analysis may make visible some hidden information not evident from a simple qualitative analysis, particularly when the effect of some external factor or condition (e.g., aging, dietary chemicals) is being investigated. This was clearly illustrated by the application of distribution analysis in the study of variation in mouse liver cellular and fine structure, and may be true of fine structural studies in general. In living systems, structure and function interact in a dynamic way; they are "inseparable," unlike in technological systems or machines. Changes in fine structure therefore reflect changes in function. If such changes do not exceed a certain physiologic range, a quantitative analysis of structure will provide valuable information on quantitative changes in function that may not be possible or easy to measure directly. Because there is a large inherent variation in fine structure of cells in a given organ of an individual and among individuals, changes in fine structure can be analyzed only by studying frequency distribution curves of various structural characteristics (dimensions). Simple averages +/- S.D. do not in general reveal all information on the effect of a certain factor, because often this effect is not uniform; on the contrary, this will be apparent from distribution analysis because the form of the curves will be affected. We have also attempted to show in this chapter that similar general statistical principles and mechanisms may be operative in biological and technological systems. Despite the common belief that most biological and technological characteristics of interest have a symmetric bell-shaped (normal or Gaussian) distribution, we have shown that more often than not, distributions tend to be asymmetric and often resemble a so-called log-normal distribution. We saw that at least three general mechanisms may be operative, i.e., nonadditivity of influencing factors, competition among individuals for a common resource, and existence of an "optimum" value for a studied characteristic; more such mechanisms could exist.
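    The "nonadditivity of influencing factors" mechanism suggested above admits a one-screen demonstration: when many factors act multiplicatively rather than additively, the result is skewed and approximately log-normal rather than normal. The factor ranges below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n, k = 100000, 12
    factors = rng.uniform(0.8, 1.25, size=(n, k))

    additive = factors.sum(axis=1)           # sums -> approximately normal (CLT)
    multiplicative = factors.prod(axis=1)    # products -> approximately log-normal

    for name, sample in [("additive", additive), ("multiplicative", multiplicative)]:
        skew = ((sample - sample.mean()) ** 3).mean() / sample.std() ** 3
        print(f"{name:15s} skewness = {skew:+.2f}")
    # The product sample shows the positive skew typical of log-normal data,
    # while its logarithm is nearly symmetric.
    ```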

  4. Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks

    NASA Technical Reports Server (NTRS)

    Rahmani, Amirreza; Mesbahi, Mehran; Fathpour, Nanaz; Hadaegh, Fred Y.

    2008-01-01

    In this work, we develop an approach to formation estimation by explicitly characterizing the formation's system-theoretic attributes in terms of the underlying inter-spacecraft information-exchange network. In particular, we approach the formation observer/estimator design by relaxing the accessibility of the global state information to a centralized observer/estimator and, in turn, providing an analysis and synthesis framework for formation observers/estimators that rely on local measurements. The novelty of our approach hinges upon the explicit examination of the underlying distributed spacecraft network in the realm of guidance, navigation, and control algorithmic analysis and design. The overarching goal of our general research program, some of whose results are reported in this paper, is the development of distributed spacecraft estimation algorithms that are scalable, modular, and robust to variations in the topology and link characteristics of the formation information exchange network. In this work, we consider the observability of a spacecraft formation from a single observation node and utilize the agreement protocol as a mechanism for observing formation states from local measurements. Specifically, we show how the symmetry structure of the network, characterized in terms of its automorphism group, directly relates to the observability of the corresponding multi-agent system. The ramification of this notion of observability over networks is then explored in the context of distributed formation estimation.
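    The link between graph symmetry and observability can be illustrated numerically: for the agreement protocol x' = -Lx observed from one node, an automorphism that fixes the observation node but swaps other nodes destroys observability. The sketch below checks the rank of the observability matrix on two three-node graphs; the construction is a standard textbook test, not the paper's code.

    ```python
    import numpy as np

    def laplacian(edges, n):
        Lap = np.zeros((n, n))
        for i, j in edges:
            Lap[i, j] = Lap[j, i] = -1.0
        np.fill_diagonal(Lap, -Lap.sum(axis=1))  # degree on the diagonal
        return Lap

    def observable_from_node(Lap, node):
        n = Lap.shape[0]
        C = np.zeros((1, n)); C[0, node] = 1.0
        Amat = -Lap
        O = np.vstack([C @ np.linalg.matrix_power(Amat, k) for k in range(n)])
        return np.linalg.matrix_rank(O) == n

    # Path 0-1-2 observed from an end node: no symmetry fixing node 0 -> observable
    path = laplacian([(0, 1), (1, 2)], 3)
    # Star with centre 0: leaves 1 and 2 swap under an automorphism fixing
    # node 0 -> unobservable from the centre
    star = laplacian([(0, 1), (0, 2)], 3)
    print("path from node 0:", observable_from_node(path, 0))   # True
    print("star from node 0:", observable_from_node(star, 0))   # False
    ```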

  5. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses

    PubMed Central

    Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.

    2015-01-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447

  6. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    PubMed

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
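    Two of the random effects summaries discussed above, the random effects mean and the wider predictive distribution for a new setting, can be sketched with a DerSimonian-Laird analysis. The study estimates below are hypothetical.

    ```python
    import numpy as np

    # Log odds ratios and within-study variances from k studies (hypothetical)
    y = np.array([-0.40, -0.15, -0.60, -0.05, -0.35])
    v = np.array([0.04, 0.06, 0.09, 0.05, 0.07])
    k = len(y)

    # DerSimonian-Laird between-study variance
    w = 1 / v
    Q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
    tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))

    w_star = 1 / (v + tau2)
    mu = np.sum(w_star * y) / w_star.sum()       # random effects mean
    se_mu = np.sqrt(1 / w_star.sum())
    se_pred = np.sqrt(1 / w_star.sum() + tau2)   # predictive SD for a new setting

    print(f"tau^2 = {tau2:.3f}")
    print(f"mean effect: {mu:.3f} (SE {se_mu:.3f})")
    print(f"predictive SD: {se_pred:.3f}")       # always >= SE of the mean
    ```

    Feeding the predictive distribution, rather than the distribution of the mean, into a CEA model is what propagates between-study heterogeneity into the decision uncertainty.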

  7. 78 FR 26775 - Information Collection Request Submitted to OMB for Review and Approval; Comment Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-08

    ...), Distribution of Offsite Consequence Analysis Information under Section 112(r)(7)(H) of the Clean Air Act (CAA... a risk management plan (RMP) to EPA. The RMP includes information on offsite consequence analyses... ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-OAR-2003-0073; FRL-9530-6] Information Collection Request...

  8. MULTIMEDIA ENVIRONMENTAL DISTRIBUTION OF TOXICS (MEND-TOX): PART II, SOFTWARE IMPLEMENTATION AND CASE STUDIES

    EPA Science Inventory

    An integrated hybrid spatial-compartmental simulator is presented for analyzing the dynamic distribution of chemicals in the multimedia environment. Information obtained from such analysis, which includes temporal chemical concentration profiles in various media, mass distribu...

  9. MULTIMEDIA ENVIRONMENTAL DISTRIBUTION OF TOXICS (MEND-TOX): PART I, HYBRID COMPARTMENTAL-SPATIAL MODELING FRAMEWORK

    EPA Science Inventory

    An integrated hybrid spatial-compartmental modeling approach is presented for analyzing the dynamic distribution of chemicals in the multimedia environment. Information obtained from such analysis, which includes temporal chemical concentration profiles in various media, mass ...

  10. Determining the Publication Impact of a Digital Library

    NASA Technical Reports Server (NTRS)

    Kaplan, Nancy R.; Nelson, Michael L.

    2000-01-01

    We attempt to assess the publication impact of a digital library (DL) of aerospace scientific and technical information (STI). The Langley Technical Report Server (LTRS) is a digital library of over 1,400 electronic publications authored by NASA Langley Research Center personnel or contractors and has been available in its current World Wide Web (WWW) form since 1994. In this study, we examine calendar year 1997 usage statistics of LTRS and the Center for AeroSpace Information (CASI), a facility that archives and distributes hard copies of NASA and aerospace information. We also perform a citation analysis on some of the top publications distributed by LTRS. We find that although LTRS distributes over 71,000 copies of publications (compared with an estimated 24,000 copies from CASI), citation analysis indicates that LTRS has almost no measurable publication impact. We discuss the caveats of our investigation, speculate on possible different models of usage facilitated by DLs, and suggest retrieval analysis as a complementary metric to citation analysis. While our investigation failed to establish a relationship between LTRS and increased citations, and raises at least as many questions as it answers, we hope it will serve as an invitation to, and guide for, further research in the use of DLs.

  11. A model to systematically employ professional judgment in the Bayesian Decision Analysis for a semiconductor industry exposure assessment.

    PubMed

    Torres, Craig; Jones, Rachael; Boelter, Fred; Poole, James; Dell, Linda; Harper, Paul

    2014-01-01

    Bayesian Decision Analysis (BDA) uses Bayesian statistics to integrate multiple types of exposure information and classify exposures within the exposure rating categorization scheme promoted in American Industrial Hygiene Association (AIHA) publications. Prior distributions for BDA may be developed from existing monitoring data, mathematical models, or professional judgment. Professional judgments may misclassify exposures. We suggest that a structured qualitative risk assessment (QLRA) method can provide consistency and transparency in professional judgments. In this analysis, we use a structured QLRA method to define prior distributions (priors) for BDA. We applied this approach at three semiconductor facilities in South Korea, and present an evaluation of the performance of structured QLRA for the determination of priors, and an evaluation of occupational exposures using BDA. Specifically, the structured QLRA was applied to chemical agents in similar exposure groups (SEGs) to identify provisional risk ratings. Standard priors were developed for each risk rating before review of historical monitoring data. Newly collected monitoring data were used to update priors informed by QLRA or historical monitoring data and to determine the posterior distribution. Exposure ratings were defined by the rating category with the highest probability, i.e., the most likely. We found the most likely exposure rating in the QLRA-informed priors to be consistent with historical and newly collected monitoring data, and the posterior exposure ratings developed with QLRA-informed priors to be equal to or greater than those developed with data-informed priors in 94% of comparisons. Overall, exposures at these facilities are consistent with well-controlled work environments: the 95th percentile of the exposure distribution is ≤50% of the occupational exposure limit (OEL) for all chemical-SEG combinations evaluated, and ≤10% of the limit for 94% of them.
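    A discretised update in the spirit of BDA can be sketched as follows: a QLRA-informed prior over candidate lognormal exposure distributions is updated with new monitoring data, and posterior mass is summed within AIHA-style rating categories defined by the 95th percentile relative to the OEL. All numbers (OEL, GSD, prior weights, samples) are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    oel = 1.0                                    # occupational exposure limit
    gm_grid = np.geomspace(0.0005, 2.0, 400)     # candidate geometric means
    gsd = 2.5                                    # assumed geometric SD
    x95 = gm_grid * gsd ** 1.645                 # 95th percentile per candidate

    # Categories: 0: <1% OEL, 1: 1-10%, 2: 10-50%, 3: 50-100%, 4: >OEL
    cat = np.digitize(x95 / oel, [0.01, 0.10, 0.50, 1.00])

    # QLRA-informed prior over categories (hypothetical judgement), spread
    # evenly across the grid points within each category
    cat_prior = np.array([0.15, 0.50, 0.25, 0.07, 0.03])
    prior = cat_prior[cat] / np.bincount(cat, minlength=5)[cat]
    prior /= prior.sum()

    samples = np.array([0.02, 0.05, 0.03, 0.08])  # new monitoring data
    loglik = np.array([stats.lognorm.logpdf(samples, s=np.log(gsd),
                                            scale=gm).sum() for gm in gm_grid])
    post = prior * np.exp(loglik - loglik.max())
    post /= post.sum()

    for c in range(5):
        print(f"category {c}: prior {cat_prior[c]:.2f} -> "
              f"posterior {post[cat == c].sum():.2f}")
    ```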

  12. Industry sector analysis: The market for renewable energy resources (the Philippines). Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, E.; Miranda, A.L.

    1990-08-01

    The market survey covers the renewable energy resources market in the Philippines. Sub-sectors covered include biomass, solar energy, photovoltaic cells, windmills, and mini-hydro systems. The analysis contains statistical and narrative information on projected market demand, end-users; receptivity of Philippine consumers to U.S. products; the competitive situation, and market access (tariffs, non-tariff barriers, standards, taxes, distribution channels). It also contains key contact information.

  13. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  14. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  15. Confounding adjustment in comparative effectiveness research conducted within distributed research networks.

    PubMed

    Toh, Sengwee; Gagne, Joshua J; Rassen, Jeremy A; Fireman, Bruce H; Kulldorff, Martin; Brown, Jeffrey S

    2013-08-01

    A distributed research network (DRN) of electronic health care databases, in which data reside behind the firewall of each data partner, can support a wide range of comparative effectiveness research (CER) activities. An essential component of a fully functional DRN is the capability to perform robust statistical analyses to produce valid, actionable evidence without compromising patient privacy, data security, or proprietary interests. We describe the strengths and limitations of different confounding adjustment approaches that can be considered in observational CER studies conducted within DRNs, and the theoretical and practical issues to consider when selecting among them in various study settings. Several methods can be used to adjust for multiple confounders simultaneously, either as individual covariates or as confounder summary scores (eg, propensity scores and disease risk scores), including: (1) centralized analysis of patient-level data, (2) case-centered logistic regression of risk set data, (3) stratified or matched analysis of aggregated data, (4) distributed regression analysis, and (5) meta-analysis of site-specific effect estimates. These methods require different granularities of information be shared across sites and afford investigators different levels of analytic flexibility. DRNs are growing in use and sharing of highly detailed patient-level information is not always feasible in DRNs. Methods that incorporate confounder summary scores allow investigators to adjust for a large number of confounding factors without the need to transfer potentially identifiable information in DRNs. They have the potential to let investigators perform many analyses traditionally conducted through a centralized dataset with detailed patient-level information.
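    The least information-intensive option listed, meta-analysis of site-specific effect estimates, is the one in which only an estimate and its standard error leave each data partner. A minimal fixed effect inverse-variance pooling sketch follows; the site estimates are hypothetical.

    ```python
    import numpy as np

    # Site-specific log hazard ratios and standard errors (hypothetical)
    sites = {"A": (-0.22, 0.10), "B": (-0.31, 0.14), "C": (-0.18, 0.09)}

    est = np.array([b for b, _ in sites.values()])
    se = np.array([s for _, s in sites.values()])
    w = 1 / se ** 2

    pooled = np.sum(w * est) / w.sum()
    pooled_se = np.sqrt(1 / w.sum())
    print(f"pooled log HR: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
    print(f"pooled HR: {np.exp(pooled):.2f}")
    ```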

  16. Laser fluorescence fluctuation excesses in molecular immunology experiments

    NASA Astrophysics Data System (ADS)

    Galich, N. E.; Filatov, M. V.

    2007-04-01

    A novel approach to the statistical analysis of flow cytometry fluorescence data has been developed and applied to population analysis of blood neutrophils stained with hydroethidine during the respiratory burst reaction. The staining is based on intracellular oxidation of hydroethidine to ethidium bromide, which intercalates into cell DNA. Fluorescence of the resultant product serves as a measure of the neutrophils' ability to generate superoxide radicals after induction of the respiratory burst reaction by phorbol myristate acetate (PMA). It was demonstrated that polymorphonuclear leukocytes of persons with inflammatory diseases show a considerably changed response. The cytofluorometric histograms obtained carry unique information about the condition of the neutrophil population, which may allow determination of the type of pathological process underlying the inflammation. The novel approach to histogram analysis is based on the higher-moment dynamics of the distribution: the fluctuation excesses of the distribution carry unique information about the disease under consideration.
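    The higher-moment summaries that such an analysis exploits, skewness and excess kurtosis of the fluorescence histogram, can be sketched on synthetic data; the "healthy" and "inflamed" intensity shapes below are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    # Synthetic single-cell fluorescence intensities: a unimodal baseline
    # response vs a heavy-tailed, shifted one (hypothetical shapes)
    healthy = rng.normal(1000, 150, 20000)
    inflamed = np.concatenate([rng.normal(900, 150, 16000),
                               rng.normal(1800, 400, 4000)])

    for name, sample in [("healthy", healthy), ("inflamed", inflamed)]:
        print(f"{name:9s} skew={stats.skew(sample):+.2f} "
              f"excess kurtosis={stats.kurtosis(sample):+.2f}")
    # Departures of the higher moments from the unimodal baseline flag the
    # changed population response.
    ```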

  17. Chemical Safety Information, Site Security and Fuels Regulatory Relief Act: Public Distribution of Off-Site Consequence Analysis Information Fact Sheet

    EPA Pesticide Factsheets

    Based on assessments of increased risk of terrorist/criminal activity, EPA and DOJ have issued a rule that allows public access to OCA information in ways that are designed to minimize likelihood of chemical accidents and public harm.

  18. The distribution of common construction materials at risk to acid deposition in the United States

    NASA Astrophysics Data System (ADS)

    Lipfert, Frederick W.; Daum, Mary L.

    Information on the geographic distribution of various types of exposed materials is required to estimate the economic costs of damage to construction materials from acid deposition. This paper focuses on the identification, evaluation and interpretation of data describing the distributions of exterior construction materials, primarily in the United States. This information could provide guidance on how data needed for future economic assessments might be acquired in the most cost-effective ways. Materials distribution surveys from 16 cities in the U.S. and Canada and five related databases from government agencies and trade organizations were examined. Data on residential buildings are more commonly available than on nonresidential buildings; little geographically resolved information on distributions of materials in infrastructure was found. Survey results generally agree with the appropriate ancillary databases, but the usefulness of the databases is often limited by their coarse spatial resolution. Information on those materials which are most sensitive to acid deposition is especially scarce. Since a comprehensive error analysis has never been performed on the data required for an economic assessment, it is not possible to specify the corresponding detailed requirements for data on the distributions of materials.

  19. Multifractal characteristics of multiparticle production in heavy-ion collisions at SPS energies

    NASA Astrophysics Data System (ADS)

    Khan, Shaista; Ahmad, Shakeel

    Entropy, dimensions, and other multifractal characteristics of the multiplicity distributions of relativistic charged hadrons produced in ion-ion collisions at SPS energies are investigated. The analysis of the experimental data is carried out in terms of the phase-space bin-size dependence of multiplicity distributions, following Takagi's approach. Another method is also followed to study multifractality; it is not related to the bin width or the detector resolution, but instead involves the multiplicity distribution of charged particles in full phase space, in terms of information entropy and its generalization, Rényi's order-q information entropy. The findings reveal the presence of multifractal structure, a remarkable property of the fluctuations. The nearly constant values of the multifractal specific heat "c" estimated by the two different methods of analysis indicate that the parameter "c" may be used as a universal characteristic of particle production in high-energy collisions. The results obtained from the analysis of the experimental data agree well with the predictions of the Monte Carlo model AMPT.
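
    The bin-free method rests on Rényi's order-q information entropy, H_q = ln(Σ p_i^q)/(1 − q), which reduces to the Shannon entropy as q → 1. A minimal sketch with an invented multiplicity distribution:

    ```python
    import numpy as np

    def renyi_entropy(p, q):
        """Order-q Renyi entropy of a discrete multiplicity distribution.
        Reduces to the Shannon entropy in the limit q -> 1."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0] / p.sum()
        if np.isclose(q, 1.0):
            return -np.sum(p * np.log(p))      # Shannon limit
        return np.log(np.sum(p ** q)) / (1.0 - q)

    # Hypothetical charged-particle multiplicity distribution in full phase space.
    p_n = [0.05, 0.15, 0.30, 0.25, 0.15, 0.10]
    for q in (1, 2, 3, 4):
        print(q, round(renyi_entropy(p_n, q), 3))
    ```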

  20. Examining the Reasoning of Conflicting Science Information from the Information Processing Perspective--An Eye Movement Analysis

    ERIC Educational Resources Information Center

    Yang, Fang-Ying

    2017-01-01

    The main goal of this study was to investigate how readers' visual attention distribution during reading of conflicting science information is related to their scientific reasoning behavior. A total of 25 university students voluntarily participated in the study. They were given conflicting science information about earthquake predictions to read…

  1. Geographic distribution of suicide and railway suicide in Belgium, 2008-2013: a principal component analysis.

    PubMed

    Strale, Mathieu; Krysinska, Karolina; Overmeiren, Gaëtan Van; Andriessen, Karl

    2017-06-01

    This study investigated the geographic distribution of suicide and railway suicide in Belgium from 2008 to 2013 at the local (i.e., district or arrondissement) level. There were differences in the regional distribution of suicide and railway suicides in Belgium over the study period. Principal component analysis identified three groups of correlations among population variables and socio-economic indicators, such as population density, unemployment, and age group distribution, on two components that helped explain the variance of railway suicide at the local (arrondissement) level. This information is of particular importance for preventing suicides in high-risk areas on the Belgian railway network.

  2. Planetarium instructional efficacy: A research synthesis

    NASA Astrophysics Data System (ADS)

    Brazell, Bruce D.

    The purpose of the current study was to explore the instructional effectiveness of the planetarium in astronomy education using meta-analysis. A review of the literature revealed 46 studies related to planetarium efficacy. However, only 19 of the studies satisfied selection criteria for inclusion in the meta-analysis. Selected studies were then subjected to coding procedures, which extracted information such as subject characteristics, experimental design, and outcome measures. From these data, 24 effect sizes were calculated in the area of student achievement and five effect sizes were determined in the area of student attitudes using reported statistical information. Mean effect sizes were calculated for both the achievement and the attitude distributions. Additionally, each effect size distribution was subjected to homogeneity analysis. The attitude distribution was found to be homogeneous with a mean effect size of -0.09, which was not significant, p = .2535. The achievement distribution was found to be heterogeneous with a statistically significant mean effect size of +0.28, p < .05. Since the achievement distribution was heterogeneous, the analog to the ANOVA procedure was employed to explore variability in this distribution in terms of the coded variables. The analog to the ANOVA procedure revealed that the variability introduced by the coded variables did not fully explain the variability in the achievement distribution beyond subject-level sampling error under a fixed effects model. Therefore, a random effects model analysis was performed, which resulted in a mean effect size of +0.18, which was not significant, p = .2363. However, a large random effect variance component was found, indicating that the differences between studies were systematic and yet to be revealed. The findings of this meta-analysis showed that the planetarium has been an effective instructional tool in astronomy education in terms of student achievement. However, the meta-analysis revealed that the planetarium has not been a very effective tool for improving student attitudes towards astronomy.
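
    The fixed-effects mean effect size and the homogeneity test used in such a synthesis can be sketched in a few lines via Cochran's Q; the effect sizes and variances below are invented, not the study's data:

    ```python
    import numpy as np
    from scipy.stats import chi2

    # Hypothetical study effect sizes (standardized mean differences) and variances.
    d = np.array([0.45, 0.10, 0.32, 0.55, 0.05])
    v = np.array([0.04, 0.02, 0.05, 0.06, 0.03])

    w = 1.0 / v
    mean_es = np.sum(w * d) / np.sum(w)      # fixed-effects mean effect size
    Q = np.sum(w * (d - mean_es) ** 2)       # homogeneity statistic
    df = len(d) - 1
    print(f"mean ES = {mean_es:.2f}, Q = {Q:.2f}, p = {chi2.sf(Q, df):.3f}")
    # A significant Q rejects homogeneity and motivates a random-effects model.
    ```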

  3. Bioactive phytochemicals in wheat: Extraction, analysis, processing, and functional properties

    USDA-ARS?s Scientific Manuscript database

    Whole wheat provides a rich source of bioactive phytochemicals namely, phenolic acids, carotenoids, tocopherols, alkylresorcinols, arabinoxylans, benzoxazinoids, phytosterols, and lignans. This review provides information on the distribution, extractability, analysis, and nutraceutical properties of...

  4. Risk analysis for biological hazards: What we need to know about invasive species

    USGS Publications Warehouse

    Stohlgren, T.J.; Schnase, J.L.

    2006-01-01

    Risk analysis for biological invasions is similar to risk analysis for other types of natural and human hazards. For example, risk analysis for chemical spills requires the evaluation of basic information on where a spill occurs; exposure level and toxicity of the chemical agent; knowledge of the physical processes involved in its rate and direction of spread; and potential impacts to the environment, economy, and human health relative to containment costs. Unlike typical chemical spills, biological invasions can have long lag times from introduction and establishment to successful invasion, they reproduce, and they can spread rapidly by physical and biological processes. We use a risk analysis framework to suggest a general strategy for risk analysis for invasive species and invaded habitats. It requires: (1) problem formulation (scoping the problem, defining assessment endpoints); (2) analysis (information on species traits, matching species traits to suitable habitats, estimating exposure, surveys of current distribution and abundance); (3) risk characterization (understanding of data completeness, estimates of the “potential” distribution and abundance; estimates of the potential rate of spread; and probable risks, impacts, and costs); and (4) risk management (containment potential, costs, and opportunity costs; legal mandates and social considerations and information science and technology needs).

  5. Characterization of chlorinated and chloraminated drinking water microbial communities in a distribution system simulator using pyrosequencing data analysis

    EPA Science Inventory

    The molecular analysis of drinking water microbial communities has focused primarily on 16S rRNA gene sequence analysis. Since this approach provides limited information on function potential of microbial communities, analysis of whole-metagenome pyrosequencing data was used to...

  6. Income-rich and wealth-poor? The impact of measures of socio-economic status in the analysis of the distribution of long-term care use among older people.

    PubMed

    Rodrigues, Ricardo; Ilinca, Stefania; Schmidt, Andrea E

    2018-03-01

    This article aims to investigate the impact of using 2 measures of socio-economic status on the analysis of how informal care and home care use are distributed among older people living in the community. Using data from the Survey of Health, Ageing and Retirement in Europe for 14 European countries, we estimate differences in corrected concentration indices for use of informal care and home care, using equivalised household net income and equivalised net worth (as a proxy for wealth). We also calculate horizontal inequity indices using both measures of socio-economic status and accounting for differences in need. The findings show that using wealth as a ranking variable results, as a rule, in a less pro-poor inequality of use for both informal and home care. Once differences in need are controlled for (horizontal inequity), wealth still results in a less pro-poor distribution for informal care, in comparison with income, whereas the opposite is observed for home care. Possible explanations for these differences and research and policy implications are discussed. Copyright © 2017 John Wiley & Sons, Ltd.
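
    The corrected concentration indices reported here build on the standard concentration index, which ranks individuals by a socio-economic variable such as income or wealth. A minimal sketch with invented care-use and income values (the correction terms and need-standardization are omitted):

    ```python
    import numpy as np

    def concentration_index(use, ses):
        """Concentration index of care use with individuals ranked by a
        socio-economic variable; negative values indicate pro-poor use."""
        order = np.argsort(ses)
        h = np.asarray(use, dtype=float)[order]
        n = len(h)
        rank = (np.arange(1, n + 1) - 0.5) / n    # fractional rank
        return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

    # Hypothetical weekly hours of informal care and household income.
    care = [10, 8, 6, 7, 3, 2, 1, 0]
    income = [12, 15, 18, 22, 30, 41, 55, 70]
    print(round(concentration_index(care, income), 3))   # negative: pro-poor
    ```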

  7. Forward and backward uncertainty propagation: an oxidation ditch modelling example.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G

    2003-01-01

    In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, however, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes, and for tighter bounding of parameter uncertainty intervals. The procedure for carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results obtained demonstrate that essential information can be obtained by carrying out backward uncertainty propagation analysis.
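
    Forward propagation, the first direction described above, is straightforward to sketch: draw parameters from their assumed subspace and push them through the model to obtain the output distribution. The toy process model and parameter ranges below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def effluent_quality(k_decay, inflow):
        """Placeholder process model, not a real oxidation-ditch model."""
        return inflow * np.exp(-k_decay)

    # Assumed parameter subspace: decay rate and influent load.
    k = rng.uniform(0.5, 1.5, size=20_000)
    q_in = rng.normal(100.0, 10.0, size=20_000)

    out = effluent_quality(k, q_in)           # forward-propagated output
    lo, hi = np.percentile(out, [2.5, 97.5])
    print(f"output mean = {out.mean():.1f}, 95% interval = ({lo:.1f}, {hi:.1f})")
    ```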

  8. Cross-sectional analysis of BioBank Japan clinical data: A large cohort of 200,000 patients with 47 common diseases.

    PubMed

    Hirata, Makoto; Kamatani, Yoichiro; Nagai, Akiko; Kiyohara, Yutaka; Ninomiya, Toshiharu; Tamakoshi, Akiko; Yamagata, Zentaro; Kubo, Michiaki; Muto, Kaori; Mushiroda, Taisei; Murakami, Yoshinori; Yuji, Koichiro; Furukawa, Yoichi; Zembutsu, Hitoshi; Tanaka, Toshihiro; Ohnishi, Yozo; Nakamura, Yusuke; Matsuda, Koichi

    2017-03-01

    To implement personalized medicine, we established a large-scale patient cohort, BioBank Japan, in 2003. BioBank Japan contains DNA, serum, and clinical information derived from approximately 200,000 patients with 47 diseases. Serum and clinical information were collected annually until 2012. We analyzed clinical information of participants at enrollment, including age, sex, body mass index, hypertension, and smoking and drinking status, across 47 diseases, and compared the results with the Japanese database on Patient Survey and National Health and Nutrition Survey. We conducted multivariate logistic regression analysis, adjusting for sex and age, to assess the association between family history and disease development. Distribution of age at enrollment reflected the typical age of disease onset. Analysis of the clinical information revealed strong associations between smoking and chronic obstructive pulmonary disease, drinking and esophageal cancer, high body mass index and metabolic disease, and hypertension and cardiovascular disease. Logistic regression analysis showed that individuals with a family history of keloid exhibited a higher odds ratio than those without a family history, highlighting the strong impact of host genetic factor(s) on disease onset. Cross-sectional analysis of the clinical information of participants at enrollment revealed characteristics of the present cohort. Analysis of family history revealed the impact of host genetic factors on each disease. BioBank Japan, by publicly distributing DNA, serum, and clinical information, could be a fundamental infrastructure for the implementation of personalized medicine. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  9. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
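
    The transformation at the heart of the reformulated model replaces each spot's two channel log-intensities with an M-value and an A-value. A minimal sketch with hypothetical two-channel intensities:

    ```python
    import numpy as np

    # Red (R) and green (G) channel intensities for four spots (hypothetical).
    R = np.array([1200.0, 830.0, 4100.0, 560.0])
    G = np.array([1000.0, 910.0, 2000.0, 640.0])

    M = np.log2(R) - np.log2(G)             # within-spot log-ratio
    A = 0.5 * (np.log2(R) + np.log2(G))     # within-spot average log-expression

    # The traditional analysis models M alone; separate-channel methods also
    # model A, recovering the inter-spot information the log-ratio discards.
    print(np.column_stack([M, A]).round(2))
    ```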

  10. A Simple Method for Estimating Informative Node Age Priors for the Fossil Calibration of Molecular Divergence Time Analyses

    PubMed Central

    Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.

    2013-01-01

    Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303

  11. Measurement and Analysis of P2P IPTV Program Resource

    PubMed Central

    Chen, Xingshu; Wang, Haizhou; Zhang, Qi

    2014-01-01

    With the rapid development of P2P technology, P2P IPTV applications have received more and more attention, and program resource distribution is very important to such applications. In order to collect IPTV program resources, a distributed multi-protocol crawler is proposed; the crawler collected more than 13 million pieces of information on IPTV programs from 2009 to 2012. In addition, the distribution of IPTV programs is independent and loosely organized, resulting in chaotic program names, which obstructs searching for and organizing programs. Thus, we focus on the characteristic analysis of program resources, including the distributions of program name lengths, the entropy of the character types, and the hierarchy depth of programs. These analyses reveal the disorderly naming conventions of P2P IPTV programs. The analysis results can help to purify and extract useful information from chaotic names for better retrieval, and can accelerate the automatic sorting of programs and the establishment of an IPTV repository. In order to represent the popularity of programs and to predict user behavior and the popularity of hot programs over a period, we also put forward an analytical model of hot programs. PMID:24772008

  12. Markov model plus k-word distributions: a synergy that produces novel statistical measures for sequence comparison.

    PubMed

    Dai, Qi; Yang, Yanchun; Wang, Tianming

    2008-10-15

    Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all the ideas for sequence comparison try to use the information on k-word distributions, a Markov model, or both. Motivated by adding k-word distributions to a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences, and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our results with those based on alignment or alignment-free methods. We grouped our experiments into two sets. The first one, performed via ROC (receiver operating characteristic) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and discriminate functionally related regulatory sequences from unrelated sequences. The second one aims at assessing how well our statistical measures can be used for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into a Markov model, are more efficient.
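
    Both proposed measures start from the empirical k-word distribution of a sequence; extracting that distribution is the shared first step, sketched below (the measures themselves, wre.k.r and S2.k.r, further combine such counts with Markov-model expectations; the fragment is invented):

    ```python
    from collections import Counter

    def kword_distribution(seq, k):
        """Empirical k-word (k-mer) frequency distribution of a sequence."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(kword_distribution("ACGTACGTGGA", k=2))
    ```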

  13. Leadership for Community Engagement--A Distributed Leadership Perspective

    ERIC Educational Resources Information Center

    Liang, Jia G.; Sandmann, Lorilee R.

    2015-01-01

    This article presents distributed leadership as a framework for analysis, showing how the phenomenon complements formal higher education structures by mobilizing leadership from various sources, formal and informal. This perspective more accurately portrays the reality of leading engaged institutions. Using the application data from 224…

  14. Focus Structure in Persian Interrogative Sentences: An RRG Analysis

    ERIC Educational Resources Information Center

    Rezai, Vali; Hooshmand, Mozhgan

    2012-01-01

    The studies regarding information structure and its distribution in sentences trace back to the work of Prague School linguists such as Mathesius in the 1920s. Recently, the issue of information structure has been dealt with by functionalists. In Role and Reference Grammar (RRG), information structure constitutes one of the main components of…

  15. 40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY AND DEPARTMENT OF JUSTICE ACCIDENTAL RELEASE PREVENTION REQUIREMENTS; RISK MANAGEMENT PROGRAMS UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS...

  16. Bayesian hierarchical functional data analysis via contaminated informative priors.

    PubMed

    Scarpa, Bruno; Dunson, David B

    2009-09-01

    A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.

  17. Predictive distributions were developed for the extent of heterogeneity in meta-analyses of continuous outcome data.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; Higgins, Julian P T

    2015-01-01

    Estimation of between-study heterogeneity is problematic in small meta-analyses. Bayesian meta-analysis is beneficial because it allows incorporation of external evidence on heterogeneity. To facilitate this, we provide empirical evidence on the likely heterogeneity between studies in meta-analyses relating to specific research settings. Our analyses included 6,492 continuous-outcome meta-analyses within the Cochrane Database of Systematic Reviews. We investigated the influence of meta-analysis settings on heterogeneity by modeling study data from all meta-analyses on the standardized mean difference scale. Meta-analysis setting was described according to outcome type, intervention comparison type, and medical area. Predictive distributions for between-study variance expected in future meta-analyses were obtained, which can be used directly as informative priors. Among outcome types, heterogeneity was found to be lowest in meta-analyses of obstetric outcomes. Among intervention comparison types, heterogeneity was lowest in meta-analyses comparing two pharmacologic interventions. Predictive distributions are reported for different settings. In two example meta-analyses, incorporating external evidence led to a more precise heterogeneity estimate. Heterogeneity was influenced by meta-analysis characteristics. Informative priors for between-study variance were derived for each specific setting. Our analyses thus assist the incorporation of realistic prior information into meta-analyses including few studies. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Forecasting weed distributions using climate data: a GIS early warning tool

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Holcombe, Tracy R.; Barnett, David T.; Stohlgren, Thomas J.; Kartesz, John T.

    2010-01-01

    The number of invasive exotic plant species establishing in the United States is continuing to rise. When prevention of exotic species from entering into a country fails at the national level and the species establishes, reproduces, spreads, and becomes invasive, the most successful action at a local level is early detection followed by eradication. We have developed a simple geographic information system (GIS) analysis for developing watch lists for early detection of invasive exotic plants that relies upon currently available species distribution data coupled with environmental data to aid in describing coarse-scale potential distributions. This GIS analysis tool develops environmental envelopes for species based upon the known distribution of a species thought to be invasive and represents the first approximation of its potential habitat while the necessary data are collected to perform more in-depth analyses. To validate this method we looked at a time series of species distributions for 66 species in Pacific Northwest and northern Rocky Mountain counties. The time series analysis presented here did select counties that the invasive exotic weeds invaded in subsequent years, showing that this technique could be useful in developing watch lists for the spread of particular exotic species. We applied this same habitat-matching model based upon bioclimatic envelopes to 100 invasive exotics with various levels of known distributions within continental U.S. counties. For species with climatically limited distributions, county watch lists describe county-specific vulnerability to invasion. Species with matching habitats in a county would be added to that county's list. These watch lists can influence management decisions for early warning, control prioritization, and targeted research to determine specific locations within vulnerable counties. This tool provides useful information for rapid assessment of the potential distribution, based upon climate envelopes of current distributions, for new invasive exotic species.
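
    The envelope idea itself is compact: a species' climatic envelope is the minimum-to-maximum range of each environmental variable over its occupied counties, and candidate counties whose climate falls inside the envelope join the watch list. A sketch with invented values:

    ```python
    import numpy as np

    # Hypothetical climate records (annual mean temperature degC, precipitation mm)
    # for counties where the species is already documented.
    occupied = np.array([[8.2, 400.0], [9.1, 520.0], [7.5, 460.0], [8.8, 610.0]])

    # Envelope = per-variable min/max across occupied counties.
    lo, hi = occupied.min(axis=0), occupied.max(axis=0)

    # Candidate counties; those inside the envelope join the watch list.
    candidates = {"A": [8.0, 450.0], "B": [12.3, 300.0], "C": [7.9, 580.0]}
    watch = [name for name, clim in candidates.items()
             if np.all((np.asarray(clim) >= lo) & (np.asarray(clim) <= hi))]
    print(watch)   # climatically matched counties, e.g. ['A', 'C']
    ```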

  19. [A citation analysis of National Journal of Andrology].

    PubMed

    Yang, Hua

    2005-01-01

    To evaluate the academic level and the popularity of National Journal of Andrology. According to the information of Chinese Medical Citation Index (CMCI), a statistical analysis was made of the amount and distribution of the originals published in National Journal of Andrology and cited by the journals included by CMCI. The originals published in National Journal of Andrology were highly qualified and influential, with their authors widely distributed all over China and even in some parts of the world. With its unique academic style and character, National Journal of Andrology is a main core periodical of medicine, as well as one of the most important information resources in the field of andrology in China.

  20. Observer-based distributed adaptive iterative learning control for linear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Liu, Sanyang; Li, Junmin

    2017-10-01

    This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed in this paper. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve consensus perfectly for any undirected, connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.

  1. Chained Kullback-Leibler Divergences

    PubMed Central

    Pavlichin, Dmitri S.; Weissman, Tsachy

    2017-01-01

    We define and characterize the “chained” Kullback-Leibler divergence min_w D(p‖w) + D(w‖q), minimized over all intermediate distributions w, and the analogous k-fold chained K-L divergence min D(p‖w_{k−1}) + … + D(w_2‖w_1) + D(w_1‖q), minimized over the entire path (w_1, …, w_{k−1}). This quantity arises in a large deviations analysis of a Markov chain on the set of types – the Wright-Fisher model of neutral genetic drift: a population with allele distribution q produces offspring with allele distribution w, which then produce offspring with allele distribution p, and so on. The chained divergences enjoy some of the same properties as the K-L divergence (like joint convexity in the arguments) and appear in k-step versions of some of the same settings as the K-L divergence (like information projections and a conditional limit theorem). We further characterize the optimal k-step “path” of distributions appearing in the definition and apply our findings in a large deviations analysis of the Wright-Fisher process. We make a connection to information geometry via the previously studied continuum limit, where the number of steps tends to infinity, and the limiting path is a geodesic in the Fisher information metric. Finally, we offer a thermodynamic interpretation of the chained divergence (as the rate of operation of an appropriately defined Maxwell’s demon) and we state some natural extensions and applications (a k-step mutual information and k-step maximum likelihood inference). We release code for computing the objects we study. PMID:29130024
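
    For small discrete distributions the two-step chained divergence can be checked numerically by optimizing over the intermediate distribution with a softmax parameterization; this is an illustrative sketch, not the authors' released code:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    def chained_kl(p, q):
        """min_w D(p||w) + D(w||q) over distributions w on the simplex."""
        def objective(z):
            w = np.exp(z) / np.exp(z).sum()   # softmax keeps w on the simplex
            return kl(p, w) + kl(w, q)
        res = minimize(objective, np.zeros_like(p), method="Nelder-Mead")
        return res.fun

    p = np.array([0.7, 0.2, 0.1])
    q = np.array([0.2, 0.3, 0.5])
    # Choosing w = p or w = q shows the chained value is at most D(p||q).
    print(round(kl(p, q), 4), round(chained_kl(p, q), 4))
    ```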

  2. Comparing Distributions of Environmental Outcomes for Regulatory Environmental Justice Analysis

    PubMed Central

    Maguire, Kelly; Sheriff, Glenn

    2011-01-01

    Economists have long been interested in measuring distributional impacts of policy interventions. As environmental justice (EJ) emerged as an ethical issue in the 1970s, the academic literature has provided statistical analyses of the incidence and causes of various environmental outcomes as they relate to race, income, and other demographic variables. In the context of regulatory impacts, however, there is a lack of consensus regarding what information is relevant for EJ analysis, and how best to present it. This paper helps frame the discussion by suggesting a set of questions fundamental to regulatory EJ analysis, reviewing past approaches to quantifying distributional equity, and discussing the potential for adapting existing tools to the regulatory context. PMID:21655146

  3. The Analysis of the Strength, Distribution and Direction for the EEG Phase Synchronization by Musical Stimulus

    NASA Astrophysics Data System (ADS)

    Ogawa, Yutaro; Ikeda, Akira; Kotani, Kiyoshi; Jimbo, Yasuhiko

    In this study, we propose an EEG phase synchronization analysis that includes not only the average strength of synchronization but also its distribution and directions, under conditions in which emotion is evoked by musical stimuli. The experiment is performed with two different musical stimuli that evoke happiness or sadness for 150 seconds. It is found that while the average strength of synchronization indicates no difference between the right side and the left side of the frontal lobe during the happy stimulus, the distribution and directions indicate significant differences. Therefore, the proposed analysis is useful for detecting emotional condition because it provides information that cannot be obtained from the average strength of synchronization alone.
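
    One common measure of the average strength of phase synchronization between two EEG channels is the phase-locking value (PLV) computed from instantaneous Hilbert phases; it may differ from the authors' exact statistic, and the signals below are synthetic:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Synthetic two-channel EEG segment (e.g., left/right frontal electrodes).
    fs = 250
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(2)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.4) + 0.5 * rng.standard_normal(t.size)

    # Instantaneous phases via the analytic signal.
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))

    # Phase-locking value: 1 = perfect synchronization, 0 = none.
    plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
    print(f"PLV = {plv:.2f}")
    ```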

  4. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student's t-distribution

    PubMed Central

    Leão, William L.; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210

  5. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student's t-distribution.

    PubMed

    Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.

  6. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    PubMed

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. The existing literature offers limited insight into the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery, in order to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of the data focused primarily on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) the reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. A Demographic Analysis of Poverty in Mississippi.

    ERIC Educational Resources Information Center

    Rogers, Tommy W.

    One of the functions of the Governor's Office of Human Resources is that of gathering, analyzing, and distributing information on the extent, distribution and characteristics of the poverty population in Mississippi and the social, economic and demographic conditions which affect the poor. This lengthy document disburses that kind of information…

  8. Assessment of forest fire impacts and emissions in the European Union based on the European forest fire information system

    Treesearch

    Paulo Barbosa; Andrea Camia; Jan Kucera; Giorgio Libertá; Ilaria Palumbo; Jesus San-Miguel-Ayanz; Guido Schmuck

    2009-01-01

    An analysis on the number of forest fires and burned area distribution as retrieved by the European Forest Fire Information System (EFFIS) database is presented. On average, from 2000 to 2005 about...

  9. Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.

    PubMed

    Wang, Xinghu; Hong, Yiguang; Ji, Haibo

    2016-07-01

    The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve optimal multiagent consensus based on local cost function information and neighboring information, while rejecting local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design solves the exact optimization problem while rejecting disturbances.

  10. Regional analysis of annual maximum rainfall using TL-moments method

    NASA Astrophysics Data System (ADS)

    Shabri, Ani Bin; Daud, Zalina Mohd; Ariff, Noratiqah Mohd

    2011-06-01

    Information related to the distribution of rainfall amounts is of great importance for the design of water-related structures. One of the concerns of hydrologists and engineers is the choice of probability distribution for modeling regional data. In this study, the regional frequency analysis approach using L-moments is revisited, and an alternative regional frequency analysis using the TL-moments method is employed. The results from both methods were then compared. The analysis was based on daily annual maximum rainfall data from 40 stations in Selangor, Malaysia. TL-moments for the generalized extreme value (GEV) and generalized logistic (GLO) distributions were derived and used to develop the regional frequency analysis procedure. The TL-moment ratio diagram and Z-test were employed in determining the best-fit distribution. Comparison between the two approaches showed that the L-moments and TL-moments produced equivalent results. The GLO and GEV distributions were identified as the most suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation was used for performance evaluation, and it showed that the method of TL-moments was more efficient for lower quantile estimation compared with the L-moments.
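
    TL-moments trim extreme order statistics before forming the moments; the untrimmed sample L-moments they generalize can be computed from probability-weighted moments, as sketched here with invented annual-maximum rainfall values:

    ```python
    import numpy as np

    def sample_l_moments(x):
        """First two sample L-moments and L-skewness via
        probability-weighted moments (no trimming)."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((i - 1) / (n - 1) * x) / n
        b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
        l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
        return l1, l2, l3 / l2          # location, scale, L-skewness

    # Hypothetical annual maximum daily rainfall (mm) at one station.
    amax = [78, 95, 110, 66, 132, 88, 104, 120, 99, 85]
    print([round(m, 3) for m in sample_l_moments(amax)])
    ```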

  11. Bayesian imperfect information analysis for clinical recurrent data

    PubMed Central

    Chang, Chih-Kuang; Chang, Chi-Chang

    2015-01-01

    In medical research, clinical practice must often be undertaken with imperfect information from limited resources. This study applied Bayesian imperfect-information value analysis to a clinical decision-making problem for recurrent events, producing likelihood functions and posterior distributions for realistic situations. In this study, three kinds of failure models are considered, and our methods are illustrated with an analysis of imperfect information from a trial of immunotherapy in the treatment of chronic granulomatous disease. In addition, we present evidence toward a better understanding of the differing behaviors associated with concomitant variables. Based on the results of simulations, the imperfect-information value of the concomitant variables was evaluated, and different realistic situations were compared to see which could yield more accurate results for medical decision-making. PMID:25565853

  12. Data and information system requirements for Global Change Research

    NASA Technical Reports Server (NTRS)

    Skole, David L.; Chomentowski, Walter H.; Ding, Binbin; Moore, Berrien, III

    1992-01-01

    Efforts to develop local information systems for supporting interdisciplinary Global Change Research are described. A prototype system, the Interdisciplinary Science Data and Information System (IDS-DIS), designed to interface the larger archives centers of EOS-DIS is presented. Particular attention is given to a data query information management system (IMS), which has been used to tabulate information of Landsat data worldwide. The use of these data in a modeling analysis of deforestation and carbon dioxide emissions is demonstrated. The development of distributed local information systems is considered to be complementary to the development of central data archives. Global Change Research under the EOS program is likely to result in proliferation of data centers. It is concluded that a distributed system is a feasible and natural way to manage data and information for global change research.

  13. A Gaussian Approximation Approach for Value of Information Analysis.

    PubMed

    Jalal, Hawre; Alarid-Escudero, Fernando

    2018-02-01

    Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed a dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach based on the original work by Raiffa and Schlaifer to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves 2 steps: 1) a linear metamodel step to compute the EVSI on the preposterior distributions and 2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied for a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
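
    The full two-step metamodel-plus-GA computation is beyond a short sketch, but the PSA sample it starts from already yields the simplest VOI quantity, the expected value of perfect information (EVPI). The net-benefit values below are invented:

    ```python
    import numpy as np

    # Net-benefit samples from a probabilistic sensitivity analysis:
    # rows = PSA draws, columns = interventions (hypothetical values).
    rng = np.random.default_rng(3)
    nb = np.column_stack([rng.normal(10_000, 2_000, 5_000),
                          rng.normal(10_500, 3_000, 5_000)])

    # EVPI = E[max over options] - max over options of E[net benefit]:
    # the expected gain from resolving all parameter uncertainty.
    evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
    print(f"EVPI per patient = {evpi:.0f}")
    ```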

  14. Collection and analysis of traditional ecological knowledge about a population of arctic tundra caribou

    USGS Publications Warehouse

    Ferguson, Michael A.D.; Messier, François

    1997-01-01

    Aboriginal peoples want their ecological knowledge used in the management of wildlife populations. To accomplish this, management agencies will need regional summaries of aboriginal knowledge about long-term changes in the distribution and abundance of wildlife populations and ecological factors that influence those changes. Between 1983 and 1994, we developed a method for collecting Inuit knowledge about historical changes in a caribou (Rangifer tarandus) population on southern Baffin Island from c. 1900 to 1994. Advice from Inuit allowed us to collect and interpret their oral knowledge in culturally appropriate ways. Local Hunters and Trappers Associations (HTAs) and other Inuit identified potential informants to maximize the spatial and temporal scope of the study. In the final interview protocol, each informant (i) established his biographical map and time line, (ii) described changes in caribou distribution and density during his life, and (iii) discussed ecological factors that may have caused changes in caribou populations. Personal and parental observations of caribou distribution and abundance were reliable and precise. Inuit who had hunted caribou during periods of scarcity provided more extensive information than those hunters who had hunted mainly ringed seals (Phoca hispida); nevertheless, seal hunters provided information about coastal areas where caribou densities were insufficient for the needs of caribou hunters. The wording of our questions influenced the reliability of informants' answers; leading questions were especially problematic. We used only information that we considered reliable after analyzing the wording of both questions and answers from translated transcripts. This analysis may have excluded some reliable information because informants tended to understate certainty in their recollections. We tried to retain the accuracy and precision inherent in Inuit oral traditions; comparisons of information from several informants and comparisons with published and archival historical reports indicate that we retained these qualities of Inuit knowledge.

  15. The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.

    ERIC Educational Resources Information Center

    Rich, Joseph R.; Boudreau, John W.

    1987-01-01

    Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…

  16. Using occlusal wear information and finite element analysis to investigate stress distributions in human molars

    PubMed Central

    Benazzi, Stefano; Kullmer, Ottmar; Grosse, Ian R; Weber, Gerhard W

    2011-01-01

    Simulations based on finite element analysis (FEA) have attracted increasing interest in dentistry and dental anthropology for evaluating the stress and strain distribution in teeth under occlusal loading conditions. Nonetheless, FEA is usually applied without considering changes in contacts between antagonistic teeth during the occlusal power stroke. In this contribution we show how occlusal information can be used to investigate the stress distribution with 3D FEA in lower first molars (M1). The antagonistic crowns M1 and P2–M1 of two dried modern human skulls were scanned by μCT in maximum intercuspation (centric occlusion) contact. A virtual analysis of the occlusal power stroke between M1 and P2–M1 was carried out in the Occlusal Fingerprint Analyser (OFA) software, and the occlusal trajectory path was recorded, while contact areas per time-step were visualized and quantified. Stress distributions of the M1 in selected occlusal stages were analyzed in Strand7, considering occlusal information taken from the OFA results for individual loading direction and loading area. Our FEA results show that the stress pattern changes considerably during the power stroke, suggesting that wear facets have a crucial influence on the distribution of stress on the whole tooth. Grooves and fissures on the occlusal surface are seen as critical locations, as tensile stresses are concentrated at these features. Properly accounting for the power stroke kinematics of occluding teeth yields quite different results (less tensile stress in the crown) than the usual loading scenarios based on forces parallel to the long axis of the tooth. This leads to the conclusion that functional studies considering the kinematics of teeth are important to understand biomechanics and interpret the morphological adaptation of teeth. PMID:21615398

  17. Decision support tools for proton therapy ePR: intelligent treatment planning navigator and radiation toxicity tool for evaluating prostate cancer treatment

    NASA Astrophysics Data System (ADS)

    Le, Anh H.; Deshpande, Ruchi; Liu, Brent J.

    2010-03-01

    The electronic patient record (ePR) has been developed for prostate cancer patients treated with proton therapy. The ePR has functionality to accept digital input from patient data, perform outcome analysis and patient and physician profiling, provide clinical decision support and suggest courses of treatment, and distribute information across different platforms and health information systems. In previous years, we have presented the infrastructure of a medical imaging informatics based ePR for PT with functionality to accept digital patient information and distribute this information across geographical location using Internet protocol. In this paper, we present the ePR decision support tools which utilize the imaging processing tools and data collected in the ePR. The two decision support tools including the treatment plan navigator and radiation toxicity tool are presented to evaluate prostate cancer treatment to improve proton therapy operation and improve treatment outcomes analysis.

  18. Coupling Analysis of Heat Island Effects, Vegetation Coverage and Urban Flood in Wuhan

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Liu, Q.; Fan, W.; Wang, G.

    2018-04-01

    In this paper, satellite imagery, remote sensing techniques, and geographic information system (GIS) techniques are the main technical bases, and comprehensive analysis of spectral and other factors together with visual interpretation are the main methods. We use GF-1 and Landsat 8 remote sensing satellite images of Wuhan as the data source, from which we extract the vegetation distribution, the urban heat island relative intensity distribution map, and the urban flood submergence range. Based on the extracted information, through spatial analysis and regression analysis, we find correlations among the heat island effect, vegetation coverage, and urban flood. The results show that there is a high degree of overlap between the urban heat island and urban flood areas. The urban heat island areas contain buildings with little vegetation cover, which may be one of the reasons for the local heavy rainstorms. Furthermore, the urban heat island has a negative correlation with vegetation coverage, and the heat island effect can be alleviated by vegetation to a certain extent. The new industrial zones and commercial areas under construction distributed across the city, whose land surfaces are bare or have low vegetation coverage, can therefore easily form new heat islands.

  19. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    NASA Astrophysics Data System (ADS)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
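
    Setting aside the paper's improved Levenberg-Marquardt algorithm, the structure of the inverse problem can be sketched as a non-negative least-squares fit of temperature area fractions against blackbody radiances; the channels, temperature grid, and fractions below are invented:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def planck(lam, T):
        """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
        h, c, k = 6.626e-34, 2.998e8, 1.381e-23
        return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

    lams = np.linspace(8e-6, 13e-6, 12)        # pyrometer spectral channels
    T_grid = np.arange(500.0, 801.0, 50.0)     # candidate temperatures (K)

    # Forward model: each channel's intensity is the sum of blackbody radiances
    # weighted by the (unknown, non-negative) area fraction at each temperature.
    A = np.array([[planck(lam, T) for T in T_grid] for lam in lams])
    true_frac = np.array([0.0, 0.5, 0.0, 0.3, 0.0, 0.2, 0.0])
    signal = A @ true_frac

    frac, _ = nnls(A, signal)                  # inverse step
    print(np.round(frac / frac.sum(), 2))      # recovered area fractions
    ```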

  20. a Discussion about Effective Ways of Basic Resident Register on GIS

    NASA Astrophysics Data System (ADS)

    Oku, Naoya; Nonaka, Yasuaki; Ito, Yutaka

    2016-06-01

    In Japan, each municipality keeps a database of every resident's name, address, gender and date of birth, called the Basic Resident Register. If the address information in the register is converted into coordinates by geocoding, it can be plotted as point data on a map. This would enable prompt evacuation in disasters, analysis of the distribution of residents, integration with statistics, and so on. Further, it can be used not only for analysis of the current situation but also for future planning. However, geographic information systems (GIS) incorporating the Basic Resident Register are not widely used in Japan because of the following problems: - Geocoding: In order to plot address point data, it is necessary to match the Basic Resident Register against an address dictionary using the address as a key. The information in the Basic Resident Register does not always match actual addresses; as the register is based on applications made by residents, the information is prone to errors, such as incorrect Kanji characters. - Security policy on personal information: In the register, the address of a resident is linked with his/her name and date of birth. If the information in the Basic Resident Register were leaked, it could be used for malicious purposes. This paper proposes solutions to the above problems. The suitable solution depends on the purpose of use; thus the purpose should be defined and a suitable application method chosen for each purpose. In this paper, we mainly focus on one specific purpose: analysing the distribution of residents. We provide two solutions to improve the matching rate in geocoding. First, regarding errors in Kanji characters, a correction list of possible errors should be compiled in advance. Second, some analyses, such as the distribution of residents, may not require an exactly correct position for the address point. We therefore order the matching levels as prefecture, city, town, city-block, house-code, and house, and accept matches down to the city-block level. Moreover, in terms of the security policy on personal information, some of the information may not be needed for distribution analysis. For example, personal information such as the resident's name should be excluded from the attributes of the address point in order to ensure the safe operation of the system.

  1. Empirical evidence about inconsistency among studies in a pair‐wise meta‐analysis

    PubMed Central

    Turner, Rebecca M.; Higgins, Julian P. T.

    2015-01-01

    This paper investigates how inconsistency (as measured by the I² statistic) among studies in a meta-analysis may differ according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta-analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta-analyses were obtained, which can inform priors for between-study variance. Inconsistency estimates were highest on average for binary outcome meta-analyses of risk differences and continuous outcome meta-analyses. For a planned binary outcome meta-analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta-analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta-analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta-analysis with an informative prior for heterogeneity. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd. PMID:26679486
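
    The I² statistic behind these inconsistency estimates is a simple transform of Cochran's Q; a minimal sketch with invented values:

    ```python
    from scipy.stats import chi2

    # Cochran's Q and its degrees of freedom from a hypothetical meta-analysis.
    Q, df = 18.3, 9

    I2 = max(0.0, (Q - df) / Q) * 100          # percent inconsistency
    print(f"I^2 = {I2:.0f}%, p = {chi2.sf(Q, df):.3f}")
    ```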

  2. Monitoring Method of Cow Anthrax Based on Gis and Spatial Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Li, Lin; Yang, Yong; Wang, Hongbin; Dong, Jing; Zhao, Yujun; He, Jianbin; Fan, Honggang

    Geographic information system (GIS) is a computer application system that can manipulate spatial information and has been used in many fields related to spatial information management. Many methods and models have been established for analyzing animal disease distributions and temporal-spatial transmission. Great benefits have been gained from the application of GIS in animal disease epidemiology, and GIS is now a very important tool in animal disease epidemiological research. The spatial analysis functions of GIS can be widened and strengthened by spatial statistical analysis, allowing deeper exploration, analysis, manipulation and interpretation of the spatial pattern and spatial correlation of animal disease. In this paper, we analyzed the spatial distribution characteristics of cow anthrax in a target district (called district A owing to the confidentiality of the epidemic data) by combining spatial statistical analysis with a GIS of cow anthrax established for this district. Cow anthrax is a biogeochemical disease whose geographical distribution is closely related to the environmental factors of its habitats and shows spatial characteristics; correct analysis of its spatial distribution therefore plays a very important role in monitoring, prevention and control. However, the application of classic statistical methods is very difficult in some areas because of the pastoral nomadic context: the high mobility of livestock and the lack of suitable sampling currently make it nearly impossible to apply rigorous random sampling methods. It is thus necessary to develop an alternative sampling method that overcomes the lack of sampling while meeting the requirements for randomness. The GIS software ArcGIS 9.1 was used to overcome the lack of data on sampling sites. Using ArcGIS 9.1 and GeoDa to analyze the spatial distribution of cow anthrax in district A, we reached two conclusions about cow anthrax density: (1) it follows a spatially clustered pattern, and (2) it shows strong spatial autocorrelation. We established a prediction model to estimate the anthrax distribution based on the spatial characteristics of cow anthrax density. Compared with the true distribution, the prediction model agrees well and is feasible in application. The GIS-based method can contribute significantly to the monitoring and investigation of cow anthrax, and the spatial-statistics-based prediction model provides a foundation for other studies on spatially related animal diseases.
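
    The spatial autocorrelation reported above is typically quantified with global Moran's I; a minimal sketch on toy data (the case densities and weight matrix are invented, not the paper's data):

    ```python
    import numpy as np

    # Global Moran's I for a vector of region values x and a spatial weight
    # matrix W (w_ij > 0 for neighbouring regions, 0 otherwise). Values near
    # +1 indicate clustering, near 0 spatial randomness. Data are illustrative.
    def morans_i(x, W):
        x = np.asarray(x, dtype=float)
        z = x - x.mean()
        n = len(x)
        return (n / W.sum()) * (z @ W @ z) / (z @ z)

    # Toy example: four regions on a line, neighbouring pairs weighted 1.
    x = [10.0, 9.0, 2.0, 1.0]                     # e.g. anthrax case densities
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(round(morans_i(x, W), 3))               # positive -> clustered
    ```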

  3. Herbicide Metabolites in Surface Water and Groundwater: Introduction and Overview

    USGS Publications Warehouse

    Thurman, E.M.; Meyer, M.T.

    1996-01-01

    Several future research topics for herbicide metabolites in surface and ground water are outlined in this chapter: herbicide usage, chemical analysis of metabolites, and the fate and transport of metabolites in surface and ground water. These three ideas follow the themes of this book, which summarizes a symposium of the American Chemical Society on herbicide metabolites in surface and ground water. First, geographic information systems allow the spatial distribution of herbicide-use data to be combined with geochemical information on the fate and transport of herbicides. Next, these two types of information are useful in predicting the kinds of metabolites present and their probable distribution in surface and ground water. Finally, methods development efforts may be focused on these specific target analytes. This chapter discusses these three concepts and provides an introduction to this book on the analysis, chemistry, and fate and transport of herbicide metabolites in surface and ground water.

  4. Investigating effects of communications modulation technique on targeting performance

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Eusebio, Gerald; Huling, Edward

    2006-05-01

    One of the key challenges facing the global war on terrorism (GWOT) and urban operations is the increased need for rapid and diverse information from distributed sources. For users to get adequate information on target types and movements, they need reliable data. To facilitate reliable computational intelligence, we explore the communication modulation tradeoffs affecting information distribution and accumulation. In this analysis, we examine the modulation techniques of Orthogonal Frequency Division Multiplexing (OFDM), Direct Sequence Spread Spectrum (DSSS), and statistical time-division multiple access (TDMA) as a function of the bit error rate and jitter that affect targeting performance. In the analysis, we simulate a Link 16 with a simple bandpass frequency-shift keying (FSK) technique at different signal-to-noise ratios. The communications transfer delay and accuracy tradeoffs are assessed with respect to the effects incurred in targeting performance.
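
    For background on the BER side of this tradeoff, the textbook AWGN bit-error rates for coherent BPSK and binary FSK can be computed as follows (standard theory, not the paper's Link 16 simulation):

    ```python
    import math

    # Textbook AWGN bit-error rates: BER_BPSK = Q(sqrt(2 Eb/N0)) and
    # BER_coherent-BFSK = Q(sqrt(Eb/N0)), where Q is the Gaussian tail
    # function. Illustrates how BER falls as SNR rises.
    def q_func(x):
        return 0.5 * math.erfc(x / math.sqrt(2))

    for ebn0_db in (0, 4, 8, 12):
        ebn0 = 10 ** (ebn0_db / 10)
        ber_bpsk = q_func(math.sqrt(2 * ebn0))
        ber_bfsk = q_func(math.sqrt(ebn0))
        print(f"Eb/N0 = {ebn0_db:2d} dB: BPSK {ber_bpsk:.2e}, BFSK {ber_bfsk:.2e}")
    ```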

  5. Filtering observations without the initial guess

    NASA Astrophysics Data System (ADS)

    Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.

    2017-12-01

    Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific use. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom fully known in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by somewhat arbitrarily set covariance values are often used in lieu. It is therefore desirable to be able to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter without being forced to provide a prior distribution (i.e., an initial mean and covariance). An example is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We present the information filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or not desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and the square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. These approximation approaches are also outlined in the presentation.
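
    A minimal sketch of the information-filter measurement update contrasted with the Kalman filter above: the filter carries the information matrix Y = P⁻¹ and information vector y = P⁻¹x, so Y = 0 is a legitimate, fully non-informative initial state (a generic textbook form, not the authors' TRF implementation):

    ```python
    import numpy as np

    # Information-filter measurement update: carry Y = P^-1 and y = P^-1 x.
    # Starting from Y = 0 (no prior information) is legal here, unlike the
    # covariance-form Kalman filter, which needs a finite initial covariance.
    def info_update(Y, y, H, R, z):
        """Fuse measurement z = H x + v, v ~ N(0, R)."""
        Rinv = np.linalg.inv(R)
        Y_new = Y + H.T @ Rinv @ H
        y_new = y + H.T @ Rinv @ z
        return Y_new, y_new

    # Two scalar measurements of a 1-D state, starting from zero information.
    Y = np.zeros((1, 1)); y = np.zeros(1)
    H = np.array([[1.0]]); R = np.array([[0.5]])
    for z in (np.array([2.1]), np.array([1.9])):
        Y, y = info_update(Y, y, H, R, z)
    x_hat = np.linalg.solve(Y, y)   # recover the mean once Y is invertible
    print(x_hat)                    # -> [2.0]
    ```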

  6. Characterization of autoregressive processes using entropic quantifiers

    NASA Astrophysics Data System (ADS)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-01-01

    The aim of this contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous work seems to indicate, the Bandt and Pompe methodology for estimating entropy does not distinguish between probability distributions, which could be fundamental for simulation or probability-analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
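
    The Bandt–Pompe estimator referred to above can be sketched as follows (a standard formulation; the window length and test series are illustrative):

    ```python
    from math import log, factorial
    import numpy as np

    # Bandt-Pompe permutation entropy: slide a window of length d over the
    # series, record the ordinal pattern (ranking) of each window, and take
    # the normalized Shannon entropy of the pattern frequencies.
    def permutation_entropy(x, d=3):
        x = np.asarray(x)
        counts = {}
        for i in range(len(x) - d + 1):
            pattern = tuple(np.argsort(x[i:i + d]))  # ordinal pattern of window
            counts[pattern] = counts.get(pattern, 0) + 1
        total = sum(counts.values())
        h = -sum((c / total) * log(c / total) for c in counts.values())
        return h / log(factorial(d))                 # normalize to [0, 1]

    rng = np.random.default_rng(0)
    print(permutation_entropy(rng.normal(size=5000)))            # ~1: white noise
    print(permutation_entropy(np.sin(np.linspace(0, 40, 5000)))) # << 1: regular
    ```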

  7. DGIC Interconnection Insights | Distributed Generation Interconnection

    Science.gov Websites

    Collaborative | NREL. The Distributed Generation Interconnection Collaborative disseminates analysis findings to inform decision making and planning. One topic is cost certainty (SEPA): as the distributed solar photovoltaic (PV) industry has grown, interconnection must proceed equitably and at a reasonable cost, a dynamic now playing out in cost-certainty proposals.

  8. Category Induction via Distributional Analysis: Evidence from a Serial Reaction Time Task

    ERIC Educational Resources Information Center

    Hunt, Ruskin H.; Aslin, Richard N.

    2010-01-01

    Category formation lies at the heart of a number of higher-order behaviors, including language. We assessed the ability of human adults to learn, from distributional information alone, categories embedded in a sequence of input stimuli using a serial reaction time task. Artificial grammars generated corpora of input strings containing a…

  9. Sampling theorem for geometric moment determination and its application to a laser beam position detector.

    PubMed

    Loce, R P; Jodoin, R E

    1990-09-10

    Using the tools of Fourier analysis, a sampling requirement is derived that assures that sufficient information is contained within the samples of a distribution to calculate accurately geometric moments of that distribution. The derivation follows the standard textbook derivation of the Whittaker-Shannon sampling theorem, which is used for reconstruction, but further insight leads to a coarser minimum sampling interval for moment determination. The need for fewer samples to determine moments agrees with intuition since less information should be required to determine a characteristic of a distribution compared with that required to construct the distribution. A formula for calculation of the moments from these samples is also derived. A numerical analysis is performed to quantify the accuracy of the calculated first moment for practical nonideal sampling conditions. The theory is applied to a high speed laser beam position detector, which uses the normalized first moment to measure raster line positional accuracy in a laser printer. The effects of the laser irradiance profile, sampling aperture, number of samples acquired, quantization, and noise are taken into account.
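
    To make the computation concrete, the normalized first moment (centroid) of a sampled irradiance profile can be sketched as follows (the Gaussian profile, beam position, and sampling interval are invented):

    ```python
    import numpy as np

    # Normalized first moment (centroid) of a sampled irradiance profile:
    # m1 = sum(x_k * I_k) / sum(I_k). With a sampling interval satisfying the
    # moment-determination criterion, this recovers the beam position.
    dx = 0.5                            # assumed sampling interval
    x = np.arange(-10, 10 + dx, dx)     # sample positions
    beam_center = 1.37                  # true position to recover
    irradiance = np.exp(-((x - beam_center) ** 2) / 2.0)  # Gaussian profile

    centroid = np.sum(x * irradiance) / np.sum(irradiance)
    print(round(centroid, 3))           # close to 1.37
    ```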

  10. [Ecology suitability study of Chinese materia medica Gentianae Macrophyllae Radix].

    PubMed

    Lu, You-Yuan; Yang, Yan-Mei; Ma, Xiao-Hui; Zhang, Xiao-Bo; Zhu, Shou-Dong; Jin, Ling

    2016-09-01

    This paper aims to predict the ecological suitability distribution of Gentianae Macrophyllae Radix and to identify the main ecological factors affecting that distribution. In total, 313 distribution records of G. macrophylla, 186 of G. straminea, 343 of G. daurica and 131 of G. crassicaulis were collected through field investigation and a network data-sharing platform. The ecologically suitable distribution for producing Gentianae Macrophyllae Radix was analyzed with the software ArcGIS and MaxEnt using 55 environmental factors. The MaxEnt predictions performed very well (AUC above 0.9). Analysis of the predominant factors showed that precipitation and altitude were the major factors affecting the ecological suitability of Gentianae Macrophyllae Radix production. The ecologically suitable region for G. macrophylla was mainly concentrated in the south of Gansu, Shanxi, central Shaanxi and the east of Qinghai. That for G. straminea was mainly concentrated in the southwest of Gansu, the east of Qinghai, the north and northwest of Sichuan, and the east of Xizang. That for G. daurica was mainly concentrated in the south and southwest of Gansu, the east of Qinghai, Shanxi and the north of Shaanxi. That for G. crassicaulis was mainly concentrated in Sichuan and the north of Yunnan, the east of Xizang, the south of Gansu and the east of Qinghai. The predicted ecological suitability distribution of Gentianae Macrophyllae Radix was consistent with each species' actual distribution. The study provides a reference for the collection and protection of wild resources and a basis for selecting cultivation areas for Gentianae Macrophyllae Radix. Copyright © by the Chinese Pharmaceutical Association.

  11. On the way to identify microorganisms in drinking water distribution networks via DNA analysis of the gut content of freshwater isopods.

    PubMed

    Mayer, Michael; Keller, Adrian; Szewzyk, Ulrich; Warnecke, Hans-Joachim

    2015-05-10

    Pure drinking water is the basis for a healthy society. In Germany, the drinking water regulations require water to be analysed by detecting certain microbiological parameters through cultivation only. However, not all prokaryotes can be detected by these standard methods. How can more and better information be gained about the bacteria present in drinking water and its distribution systems? The biofilms in drinking water distribution systems are formed by bacteria and therefore represent a valuable source of information about the species present. Unfortunately, these biofilms are poorly accessible. We therefore exploited the fact that many metazoans graze on these biofilms, so that their gut contents partly reflect the respective biofilm biocenosis. We collected omnivorous isopods, prepared their guts, and examined and characterized the contents by 16S and 18S rDNA analysis. These molecular-biological investigations provide a profound basis for characterizing the biocenosis and thereby biologically assessing drinking-water ecosystems. Combined with a thorough identification of the species and knowledge of their habitats, this approach can provide useful indications for assessing drinking-water quality and for the early detection of problems in the distribution system. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Distribution System Upgrade Unit Cost Database

    DOE Data Explorer

    Horowitz, Kelsey

    2017-11-30

    This database contains unit cost information for different components that may be used to integrate distributed photovoltaic (D-PV) systems onto distribution systems. Some of these upgrades and costs may also apply to the integration of other distributed energy resources (DER). Which components are required, and how many of each, is system-specific and should be determined by analyzing the effects of distributed PV at a given penetration level on the circuit of interest, in combination with engineering assessments of the efficacy of different solutions for increasing the circuit's ability to host additional PV. The current state of the distribution system should always be considered in these types of analysis. The data in this database were collected from a variety of utilities, PV developers, technology vendors, and published research reports. Where possible, we have included information on the source of each data point and relevant notes. In some cases where the data provided are sensitive or proprietary, we were not able to specify the source, but provide other information that may be useful to the user (e.g., year, location where equipment was installed). NREL has carefully reviewed these sources prior to inclusion in this database. Additional information about the database, data sources, and assumptions is included in the "Unit_cost_database_guide.doc" file included in this submission. This guide provides important information on what costs are included in each entry. Please refer to it before using the unit cost database for any purpose.

  13. Research on target information optics communications transmission characteristic and performance in multi-screens testing system

    NASA Astrophysics Data System (ADS)

    Li, Hanshan

    2016-04-01

    To enhance the stability and reliability of multi-screen testing systems, this paper studies the long-distance transmission properties and performance of the target optical information link. It sets up a discrete multi-tone modulation transmission model based on the geometric model of the laser multi-screen testing system and the principles of visible-light communication; analyzes the electro-optic and photoelectric conversion functions of the sender and receiver in the target optical information communication system; investigates the transmission performance and transfer function of the generalized visible-light communication channel; establishes a spatial light-intensity distribution model and distribution function for the optical communication transmission link; and derives an SNR model for the communication system. Calculation and experimental analysis show that, for a given channel modulation depth, the transmission error rate increases with the transmission rate; when an appropriate transmission rate is selected, the bit error rate reaches 0.01.

  14. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
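
    Since information here is defined in terms of contextual probabilities, the per-word information content is the surprisal −log₂ p(word | context); a toy illustration with invented probabilities:

    ```python
    from math import log2

    # Surprisal (information content) of each word given its contextual
    # probability: I(w) = -log2 p(w | context). Uniform Information Density
    # predicts speakers prefer encodings that avoid large spikes in I(w).
    context_probs = {"that": 0.4, "the": 0.3, "storm": 0.02, "is": 0.6}  # invented
    for word, p in context_probs.items():
        print(f"{word:>6}: {-log2(p):5.2f} bits")
    ```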

  15. Computational Model for Ethnographically Informed Systems Design

    NASA Astrophysics Data System (ADS)

    Iqbal, Rahat; James, Anne; Shah, Nazaraf; Terken, Jacques

    This paper presents a computational model for ethnographically informed systems design that can support complex and distributed cooperative activities. The model is based on an ethnographic framework consisting of three important dimensions (distributed coordination, awareness of work, and plans and procedures) and the BDI (Belief, Desire and Intention) model of intelligent agents. The ethnographic framework is used to conduct ethnographic analysis and to organise ethnographically driven information into the three dimensions, whereas the BDI model allows such information to be mapped onto the underlying concepts of multi-agent systems. The advantage of this model is that it is built upon an adaptation of existing mature and well-understood techniques. Through the use of this model, we also address the cognitive aspects of systems design.

  16. Multi-Connection Pattern Analysis: Decoding the representational content of neural communication.

    PubMed

    Li, Yuanning; Richardson, Robert Mark; Ghuman, Avniel Singh

    2017-11-15

    The lack of multivariate methods for decoding the representational content of interregional neural communication has made it difficult to know what information is represented in distributed brain circuit interactions. Here we present Multi-Connection Pattern Analysis (MCPA), which works by learning mappings between the activity patterns of the populations as a function of the information being processed. These maps are used to predict the activity of one neural population based on the activity of the other. Successful MCPA-based decoding indicates the involvement of distributed computational processing and provides a framework for probing the representational structure of the interaction. Simulations demonstrate the efficacy of MCPA in realistic circumstances. In addition, we demonstrate that MCPA can be applied to different signal modalities to evaluate a variety of hypotheses associated with information coding in neural communication. We apply MCPA to fMRI and human intracranial electrophysiological data to provide a proof of concept of the utility of this method for decoding individual natural images and faces in functional connectivity data. We further use an MCPA-based representational similarity analysis to illustrate how MCPA may be used to test computational models of information transfer among regions of the visual processing stream. Thus, MCPA can be used to assess the information represented in the coupled activity of interacting neural circuits and to probe the underlying principles of information transformation between regions. Copyright © 2017 Elsevier Inc. All rights reserved.
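
    The core idea of learning a mapping between two populations' activity patterns can be illustrated with ridge regression as the map (a simplified stand-in for MCPA's actual estimator; all data are synthetic):

    ```python
    import numpy as np

    # Learn a linear map from population A's activity to population B's, then
    # score how well held-out activity in B is predicted. Good prediction
    # suggests shared, distributed processing of the stimulus information.
    rng = np.random.default_rng(1)
    n_trials, dim_a, dim_b = 200, 20, 15
    A = rng.normal(size=(n_trials, dim_a))            # activity patterns, region A
    true_map = rng.normal(size=(dim_a, dim_b))
    B = A @ true_map + 0.1 * rng.normal(size=(n_trials, dim_b))  # coupled region B

    train, test = slice(0, 150), slice(150, None)
    lam = 1.0                                          # ridge penalty
    W = np.linalg.solve(A[train].T @ A[train] + lam * np.eye(dim_a),
                        A[train].T @ B[train])
    pred = A[test] @ W
    r = np.corrcoef(pred.ravel(), B[test].ravel())[0, 1]
    print(round(r, 3))                                 # near 1 for coupled populations
    ```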

  17. Flood Frequency Curves - Use of information on the likelihood of extreme floods

    NASA Astrophysics Data System (ADS)

    Faber, B.

    2011-12-01

    Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate the distribution parameters. The procedure assumes that annual peak streamflow events (1) are independent, (2) are identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoiding these assumptions range from estimating trend and shift and removing them from early data (thus forming a homogeneous data set) to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically based analysis produces a "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
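
    A minimal sketch of the method-of-moments Log-Pearson Type III fit described above, applied to the base-10 logs of annual peaks (the flow values are invented):

    ```python
    import numpy as np
    from scipy import stats

    # Method-of-moments Log-Pearson Type III: fit a Pearson Type III
    # distribution to the base-10 logs of annual peak flows, then read off
    # quantiles such as the "100-year" flood. Flow data are invented.
    peaks = np.array([320, 410, 150, 980, 620, 275, 1340, 505, 730, 460,
                      390, 850, 220, 1100, 560, 300, 670, 240, 910, 480.0])
    logq = np.log10(peaks)

    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)

    # scipy's pearson3 is parameterized by skew, with loc (mean) and scale (std).
    q100 = stats.pearson3.ppf(0.99, skew, loc=mean, scale=std)
    print(f"estimated 100-year flood: {10 ** q100:.0f} (same units as input)")
    ```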

  18. On the theoretical velocity distribution and flow resistance in natural channels

    NASA Astrophysics Data System (ADS)

    Moramarco, Tommaso; Dingman, S. Lawrence

    2017-12-01

    The velocity distribution in natural channels is of considerable interest for streamflow measurements to obtain information on discharge and flow resistance. This study focuses on the comparison of theoretical velocity distributions based on 1) entropy theory, and 2) the two-parameter power law. The analysis identifies the correlation between the parameters of the distributions and defines their dependence on the geometric and hydraulic characteristics of the channel. Specifically, we investigate how the parameters are related to the flow resistance in terms of Manning roughness, shear velocity and water surface slope, and several formulae showing their relationships are proposed. Velocity measurements carried out in the past 20 years at Ponte Nuovo gauged section along the Tiber River, central Italy, are the basis for the analysis.

  19. Reading Information about a Scientific Phenomenon on Webpages Varying for Reliability: An Eye-Movement Analysis

    ERIC Educational Resources Information Center

    Mason, Lucia; Pluchino, Patrik; Ariasi, Nicola

    2014-01-01

    Students search the Web frequently for many purposes, one of which is to search information for academic assignments. Given the huge amount of easily accessible online information, they are required to develop new reading skills and become more able to effectively evaluate the reliability of web sources. This study investigates the distribution of…

  20. Head Start Program and Cost Data Analysis: Final Report - Volume II.

    ERIC Educational Resources Information Center

    Cordes, Joseph; And Others

    This second volume of the Head Start Program and Cost Data Analysis Final Report analyzes data from sources other than the Head Start Program Information Report (PIR). The report is divided into three sections: Distributional Impact of Head Start Financing, Pilot Study of Program Compliance, and Recommendations for Secondary Data Analysis. The…

  1. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system structural components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1987-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
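
    The cumulative probability of exceedance that such packages report can be illustrated with a plain Monte Carlo sketch (the response function and input distributions are invented; PSAM's actual PFEM/PAAM/PBEM solvers are far more elaborate):

    ```python
    import numpy as np

    # Monte Carlo probabilistic structural analysis: sample uncertain load
    # and geometry, propagate through a response function, and report the
    # probability of exceedance for the maximum stress.
    rng = np.random.default_rng(42)
    n = 100_000
    load = rng.normal(50e3, 5e3, n)          # N, invented distribution
    area = rng.normal(2.0e-3, 1.0e-4, n)     # m^2, tolerance on cross-section
    stress = load / area                     # simple axial-stress response

    threshold = 30e6                         # Pa
    p_exceed = np.mean(stress > threshold)
    print(f"P(max stress > {threshold/1e6:.0f} MPa) = {p_exceed:.4f}")
    ```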

  2. Probabilistic Structural Analysis Methods for select space propulsion system structural components (PSAM)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.

    1988-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  3. Statistical Methods of Latent Structure Discovery in Child-Directed Speech

    ERIC Educational Resources Information Center

    Panteleyeva, Natalya B.

    2010-01-01

    This dissertation investigates how distributional information in the speech stream can assist infants in the initial stages of acquisition of their native language phonology. An exploratory statistical analysis derives this information from the adult speech data in the corpus of conversations between adults and young children in Russian. Because…

  4. Modeling Statistical Insensitivity: Sources of Suboptimal Behavior

    ERIC Educational Resources Information Center

    Gagliardi, Annie; Feldman, Naomi H.; Lidz, Jeffrey

    2017-01-01

    Children acquiring languages with noun classes (grammatical gender) have ample statistical information available that characterizes the distribution of nouns into these classes, but their use of this information to classify novel nouns differs from the predictions made by an optimal Bayesian classifier. We use rational analysis to investigate the…

  5. Security Notice To Federal, State and Local Officials Receiving Access to the Risk Management Program’s Off-site Consequence Analysis Information

    EPA Pesticide Factsheets

    Based on the Chemical Safety Information, Site Security and Fuels Regulatory Relief Act (CSISSFRRA), this notice states that while you may share with the public data from OCA sections, it is illegal to disclose/distribute the sections themselves.

  6. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of the proposed method.

  7. Temporal Dynamics and Spatial Patterns of Aedes aegypti Breeding Sites, in the Context of a Dengue Control Program in Tartagal (Salta Province, Argentina).

    PubMed

    Espinosa, Manuel; Weinberg, Diego; Rotela, Camilo H; Polop, Francisco; Abril, Marcelo; Scavuzzo, Carlos Marcelo

    2016-05-01

    Since 2009, Fundación Mundo Sano has implemented an Aedes aegypti Surveillance and Control Program in Tartagal city (Salta Province, Argentina). The purpose of this study was to analyze the temporal dynamics of the spatial distribution of Ae. aegypti breeding sites over five years of samplings, and the effect of control actions on vector population dynamics. Seasonal entomological (larval) samplings were conducted in 17,815 fixed sites in the Tartagal urban area between 2009 and 2014. Based on information on breeding-site abundance, satellite remote sensing (RS) data, and Geographic Information Systems (GIS), spatial analyses (hotspot and cluster analysis) and a predictive model (MaxEnt) were performed. The spatial analysis showed a distribution pattern with the highest breeding densities registered in the city outskirts. The model indicated that 75% of the Ae. aegypti distribution is explained by 3 variables: bare soil coverage percentage (44.9%), urbanization coverage percentage (13.5%) and water distribution (11.6%). These results call attention to the way entomological field data and geospatial information (RS/GIS) are used to infer scenarios that could then be applied in epidemiological surveillance programs and in determining dengue control strategies. Predictive maps constructed from systematic spatiotemporal Ae. aegypti data in Tartagal city would allow public health workers to identify and target high-risk areas with appropriate and timely control measures. These tools could help decision-makers improve health system responses and preventive measures related to vector control.

  8. Temporal Dynamics and Spatial Patterns of Aedes aegypti Breeding Sites, in the Context of a Dengue Control Program in Tartagal (Salta Province, Argentina)

    PubMed Central

    Espinosa, Manuel; Weinberg, Diego; Rotela, Camilo H.; Polop, Francisco; Abril, Marcelo; Scavuzzo, Carlos Marcelo

    2016-01-01

    Background Since 2009, Fundación Mundo Sano has implemented an Aedes aegypti Surveillance and Control Program in Tartagal city (Salta Province, Argentina). The purpose of this study was to analyze the temporal dynamics of the spatial distribution of Ae. aegypti breeding sites over five years of samplings, and the effect of control actions on vector population dynamics. Methodology/Principal Findings Seasonal entomological (larval) samplings were conducted in 17,815 fixed sites in the Tartagal urban area between 2009 and 2014. Based on information on breeding-site abundance, satellite remote sensing (RS) data, and Geographic Information Systems (GIS), spatial analyses (hotspot and cluster analysis) and a predictive model (MaxEnt) were performed. The spatial analysis showed a distribution pattern with the highest breeding densities registered in the city outskirts. The model indicated that 75% of the Ae. aegypti distribution is explained by 3 variables: bare soil coverage percentage (44.9%), urbanization coverage percentage (13.5%) and water distribution (11.6%). Conclusions/Significance These results call attention to the way entomological field data and geospatial information (RS/GIS) are used to infer scenarios that could then be applied in epidemiological surveillance programs and in determining dengue control strategies. Predictive maps constructed from systematic spatiotemporal Ae. aegypti data in Tartagal city would allow public health workers to identify and target high-risk areas with appropriate and timely control measures. These tools could help decision-makers improve health system responses and preventive measures related to vector control. PMID:27223693

  9. Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.

    PubMed

    Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar

    2010-09-01

    A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDFs of the component states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of the vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by the patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
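
    Estimating the higher moments used above from a current trace is straightforward; a minimal sketch in which a synthetic two-state (closed/open) series stands in for patch-clamp data:

    ```python
    import numpy as np
    from scipy import stats

    # Estimate the second, third and fourth moments (variance, skewness,
    # kurtosis) of an ion-current PDF from a time series. A synthetic
    # two-state (closed/open) trace stands in for patch-clamp data.
    rng = np.random.default_rng(7)
    state = rng.random(20_000) < 0.3                     # 30% open probability
    current = np.where(state,
                       rng.normal(5.0, 0.8, 20_000),     # open-state current, pA
                       rng.normal(0.0, 0.3, 20_000))     # closed-state current

    print("variance:", round(np.var(current), 3))
    print("skewness:", round(stats.skew(current), 3))
    print("kurtosis:", round(stats.kurtosis(current), 3))
    ```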

  10. Qualitative fusion technique based on information poor system and its application to factor analysis for vibration of rolling bearings

    NASA Astrophysics Data System (ADS)

    Xia, Xintao; Wang, Zhongyu

    2008-10-01

    For some methods of stability analysis of a system using statistics, it is difficult to resolve the problems of unknown probability distribution and small sample. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of probability distribution, and is useful for small sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the supper bound of the system using fuzzy-set theory. Then empirical distribution function is investigated to ensure confidence level above 95%, and the degree of similarity is presented to evaluate stability of the system. Cases of computer simulation investigate stable systems with various probability distribution, unstable systems with linear systematic errors and periodic systematic errors and some mixed systems. The method of analysis for systematic stability is approved.

  11. Spatial and temporal statistical analysis of bycatch data: Patterns of sea turtle bycatch in the North Atlantic

    USGS Publications Warehouse

    Gardner, B.; Sullivan, P.J.; Morreale, S.J.; Epperly, S.P.

    2008-01-01

    Loggerhead (Caretta caretta) and leatherback (Dermochelys coriacea) sea turtle distributions and movements in offshore waters of the western North Atlantic are not well understood despite continued efforts to monitor, survey, and observe them. Loggerhead and leatherback sea turtles are listed as endangered by the World Conservation Union, and thus anthropogenic mortality of these species, including fishing, is of elevated interest. This study quantifies spatial and temporal patterns of sea turtle bycatch distributions to identify potential processes influencing their locations. A Ripley's K function analysis was employed on the NOAA Fisheries Atlantic Pelagic Longline Observer Program data to determine spatial, temporal, and spatio-temporal patterns of sea turtle bycatch distributions within the pattern of the pelagic fishery distribution. Results indicate that loggerhead and leatherback sea turtle catch distributions change seasonally, with patterns of spatial clustering appearing from July through October. The results from the space-time analysis indicate that sea turtle catch distributions are related on a relatively fine scale (30-200 km and 1-5 days). The use of spatial and temporal point pattern analysis, particularly K function analysis, is a novel way to examine bycatch data and can be used to inform fishing practices such that fishing could still occur while minimizing sea turtle bycatch. © 2008 NRC.
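
    A minimal sketch of the Ripley's K estimator employed above, without edge correction and on synthetic points (the observer-program data are not reproduced here):

    ```python
    import numpy as np

    # Ripley's K for a 2-D point pattern: K(r) = (area / n^2) * number of
    # ordered pairs within distance r (edge effects ignored here). Values
    # above pi*r^2 indicate clustering at scale r. Points are synthetic.
    def ripley_k(points, r, area):
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        pairs = (d < r).sum() - n          # drop self-pairs on the diagonal
        return area / (n * n) * pairs

    rng = np.random.default_rng(3)
    pts = rng.random((500, 2))             # uniform points on the unit square
    for r in (0.05, 0.1):
        k = ripley_k(pts, r, area=1.0)
        print(f"r={r}: K={k:.4f}, CSR benchmark pi*r^2={np.pi * r * r:.4f}")
    ```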

  12. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy.

    PubMed

    Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L

    2017-05-07

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade  ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC  =  0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.

  13. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy

    NASA Astrophysics Data System (ADS)

    Dankers, Frank; Wijsman, Robin; Troost, Esther G. C.; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L.

    2017-05-01

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade  ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC  =  0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.

  14. Compositional analysis and depth profiling of thin film CrO{sub 2} by heavy ion ERDA and standard RBS: a comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khamlich, S.

    2012-08-15

    Chromium dioxide (CrO2) thin films have generated considerable interest in applied research due to the wide variety of their technological applications. They have been extensively investigated in recent years, attracting the attention of researchers working on spintronic heterostructures and in the magnetic recording industry. However, their synthesis is usually a difficult task due to the metastable nature of CrO2, and various synthesis techniques are being investigated. In this work a polycrystalline thin film of CrO2 was prepared by electron-beam vaporization of Cr2O3 onto a Si substrate. The polycrystalline structure was confirmed by XRD analysis. The stoichiometry and elemental depth distribution of the deposited film were measured by the ion-beam nuclear analytical techniques of heavy-ion elastic recoil detection analysis (ERDA) and Rutherford backscattering spectrometry (RBS), both of which have a relative advantage over non-nuclear spectrometries in that they can readily provide quantitative information about the concentration and distribution of different atomic species in a layer. Moreover, the analysis highlights the importance of the complementary use of the two techniques to obtain a more complete description of elemental content and depth distribution in thin films. Highlights: thin films of CrO2 were grown by e-beam evaporation of a Cr2O3 target in vacuum; the composition was determined by heavy-ion ERDA and RBS; HI-ERDA and RBS provided information on the light and heavy elements, respectively.

  15. Exploring image data assimilation in the prospect of high-resolution satellite oceanic observations

    NASA Astrophysics Data System (ADS)

    Durán Moro, Marina; Brankart, Jean-Michel; Brasseur, Pierre; Verron, Jacques

    2017-07-01

    Satellite sensors increasingly provide high-resolution (HR) observations of the ocean. They supply observations of sea surface height (SSH) and of tracers of the dynamics such as sea surface salinity (SSS) and sea surface temperature (SST). In particular, the Surface Water Ocean Topography (SWOT) mission will provide measurements of the surface ocean topography at very high resolution, delivering unprecedented information on meso-scale and submeso-scale dynamics. This study investigates the feasibility of using these measurements to reconstruct meso-scale features simulated by numerical models, in particular in the vertical dimension. A methodology to reconstruct three-dimensional (3D) multivariate meso-scale scenes is developed using an HR numerical model of the Solomon Sea region. An inverse problem is defined in the framework of a twin experiment where synthetic observations are used. A true state, which serves as the reference, is chosen among the 3D multivariate states. To correct a first guess of this true state, a two-step analysis is carried out. A probability distribution of the first guess is defined and updated at each step of the analysis: (i) the first step applies the analysis scheme of a reduced-order Kalman filter to update the first-guess probability distribution using the SSH observation; (ii) the second step minimizes a cost function using observations of HR image structure, and a new probability distribution is estimated. The analysis is extended to the vertical dimension using 3D multivariate empirical orthogonal functions (EOFs), and the probabilistic approach allows the probability distribution to be updated through the two-step analysis. Experiments show that the proposed technique succeeds in correcting a multivariate state using meso-scale and submeso-scale information contained in HR SSH and image structure observations. It also demonstrates how surface information can be used to reconstruct the ocean state below the surface.

  16. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storing such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the experiment's requirements for processing huge data sets and providing a high degree of accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all parameters, control, configuration, and other auxiliary information required for the effective operation of ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  17. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
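
    A toy version of single change-point detection for a Gaussian signal, scored with BIC as a stand-in for the paper's frequentist information criterion (data, noise level, and penalty are illustrative):

    ```python
    import numpy as np

    # Single change-point detection for a Gaussian signal with known noise:
    # maximize the likelihood over the split point and accept the split only
    # if it beats the no-change model under an information criterion (BIC
    # here, standing in for the frequentist information criterion).
    def detect_change(x, sigma=1.0):
        n = len(x)
        null_nll = 0.5 * np.sum((x - x.mean()) ** 2) / sigma**2
        best_k, best_nll = None, null_nll
        for k in range(2, n - 2):
            nll = (0.5 * np.sum((x[:k] - x[:k].mean()) ** 2)
                   + 0.5 * np.sum((x[k:] - x[k:].mean()) ** 2)) / sigma**2
            if nll < best_nll:
                best_k, best_nll = k, nll
        # Penalty ~ log(n) for one extra mean plus the change-point location.
        return best_k if null_nll - best_nll > np.log(n) else None

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)])
    print(detect_change(x))   # close to the true change point at index 120
    ```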

  18. Spatial accessibility of the population to urban health centres in Kermanshah, Islamic Republic of Iran: a geographic information systems analysis.

    PubMed

    Reshadat, S; Saedi, S; Zangeneh, A; Ghasemi, S R; Gilan, N R; Karbasi, A; Bavandpoor, E

    2015-09-08

    Geographic information systems (GIS) analysis has not been widely used in underdeveloped countries to ensure that vulnerable populations have access to primary health-care services. This study applied GIS methods to analyse the spatial accessibility of the population of Kermanshah city, Islamic Republic of Iran, to urban primary-care centres, by age and sex group. In a descriptive-analytical study over 3 time periods, network analysis, mean centre and standard distance methods were applied using ArcGIS 9.3. The analysis was based on a standard radius of 750 m from health centres, a walking speed of 1 m/s and a desired access time to health centres of 12.5 min. The proportion of the population with inadequate geographical access to health centres rose from 47.3% in 1997 to 58.4% in 2012. The mean centre and standard distance mapping showed that the spatial distribution of health centres in Kermanshah needed to be adjusted to changes in the population distribution.

  19. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao; Li, Ying; Zheng, Xibin

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotations and technical characteristics of big data and traffic information services, a distributed traffic atomic-information computing platform architecture is proposed. In the big data environment, this type of architecture helps guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be offered to traffic information users.

  20. A Tool for Comparative Collection Analysis: Conducting a Shelflist Sample to Construct a Collection Profile.

    ERIC Educational Resources Information Center

    Paskoff, Beth M.; Perrault, Anna H.

    1990-01-01

    Describes a project that examined a random sample of 5 percent of a shelflist to provide detailed information about the distribution of imprints according to age and language, percentage of duplication, and distribution of serial and monographic formats. It is concluded that the resulting collection profile provides a multidimensional, quantified…

  1. An Autonomous Mobile Agent-Based Distributed Learning Architecture: A Proposal and Analytical Analysis

    ERIC Educational Resources Information Center

    Ahmed, Iftikhar; Sadeq, Muhammad Jafar

    2006-01-01

    Current distance learning systems are increasingly packing highly data-intensive contents on servers, resulting in the congestion of network and server resources at peak service times. A distributed learning system based on faded information field (FIF) architecture that employs mobile agents (MAs) has been proposed and simulated in this work. The…

  2. A new technique for testing distribution of knowledge and to estimate sampling sufficiency in ethnobiology studies

    PubMed Central

    2012-01-01

    Background We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Methods Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. Results We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as exhibiting the importance of these individuals' communal information of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data. PMID:22420565

  3. Probability Distributions for Random Quantum Operations

    NASA Astrophysics Data System (ADS)

    Schultz, Kevin

    Motivated by uncertainty quantification and inference of quantum information systems, in this work we draw connections between the notions of random quantum states and operations in quantum information with probability distributions commonly encountered in the field of orientation statistics. This approach identifies natural sample spaces and probability distributions upon these spaces that can be used in the analysis, simulation, and inference of quantum information systems. The theory of exponential families on Stiefel manifolds provides the appropriate generalization to the classical case. Furthermore, this viewpoint motivates a number of additional questions into the convex geometry of quantum operations relative to both the differential geometry of Stiefel manifolds as well as the information geometry of exponential families defined upon them. In particular, we draw on results from convex geometry to characterize which quantum operations can be represented as the average of a random quantum operation. This project was supported by the Intelligence Advanced Research Projects Activity via Department of Interior National Business Center Contract Number 2012-12050800010.
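
    The record does not reproduce the paper's derivations, but a standard building block in this area is sampling unitaries from the Haar measure, the uniform distribution on the unitary group (a special case of a Stiefel manifold). A minimal sketch using the QR factorization of a complex Gaussian matrix with the usual phase correction, not necessarily the paper's own construction:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def haar_unitary(n: int) -> np.ndarray:
        """Sample an n x n unitary from the Haar measure via the QR trick."""
        z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        q, r = np.linalg.qr(z)
        d = np.diag(r)
        return q * (d / np.abs(d))   # phase fix: column j scaled by d_j/|d_j|

    u = haar_unitary(4)
    print(np.allclose(u.conj().T @ u, np.eye(4)))   # True: u is unitary
    ```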

  4. A new technique for testing distribution of knowledge and to estimate sampling sufficiency in ethnobiology studies.

    PubMed

    Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino

    2012-03-15

    We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as exhibiting the importance of these individuals' communal information of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.

  5. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.

    PubMed

    Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John

    2018-03-07

    DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
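
    The entropy and distance measures named in the abstract are standard information-theoretic quantities that SciPy provides directly. A minimal illustration with invented methylation-level distributions (not the paper's Ising-model marginals):

    ```python
    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from scipy.stats import entropy

    # Hypothetical distributions of methylation level in a region with 4 CpGs
    # (levels 0..4) for a reference and a test WGBS sample.
    p_ref = np.array([0.70, 0.15, 0.08, 0.05, 0.02])    # mostly unmethylated, ordered
    p_test = np.array([0.25, 0.20, 0.20, 0.20, 0.15])   # more stochastic

    print("Shannon entropy (ref): ", entropy(p_ref, base=2))    # low: ordered
    print("Shannon entropy (test):", entropy(p_test, base=2))   # high: stochastic
    print("Jensen-Shannon distance:", jensenshannon(p_ref, p_test, base=2))
    ```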

  6. On the Analysis of Output Information of S-tree Method

    NASA Astrophysics Data System (ADS)

    Bekaryan, Karen M.; Melkonyan, Anahit A.

    2007-08-01

    One of the most popular and effective methods for analyzing the hierarchical structure of N-body gravitating systems is the method of S-tree diagrams. Apart from many interesting peculiarities, the method, unfortunately, is not free from some disadvantages, the most important of which is the extreme complexity of analyzing the output information. To solve this problem a number of methods have been suggested. From our point of view, the most effective approach is to apply all these methods simultaneously. This allows one to obtain a more complete and objective picture of the final distribution.

  7. Joint modality fusion and temporal context exploitation for semantic video analysis

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.

    2011-12-01

    In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.

  8. Bayesian data analysis for newcomers.

    PubMed

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.

  9. [The application of Doppler broadening and Doppler shift to spectral analysis].

    PubMed

    Xu, Wei; Fang, Zi-shen

    2002-08-01

    The distinction between Doppler broadening and Doppler shift is analyzed. Doppler broadening results locally from the distribution of velocities of the emitting particles; the line width gives information on the temperature of the emitting particles. Doppler shift results when the emitting particles have a bulk, non-random flow velocity in a particular direction; the drift of the central wavelength gives information on the flow velocity of the emitting particles, and the Doppler shift only displaces the line profile without changing its width. The difference between Gaussian fitting and the distribution of the chord-integral line shape is also discussed. The distribution of the H alpha spectral line shape was obtained from the surface of the limiter in the HT-6M Tokamak with an optical spectroscopic multichannel analyzer (OSMA); double-Gaussian fitting shows that the line shape is made up of two parts: emission from reflected particles with higher energy, and particles released from the limiter surface. Ion temperature and recycling-particle flow velocity were obtained from the Doppler broadening and Doppler shift.
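
    A double-Gaussian fit of the kind described can be sketched with SciPy; the wavelengths, widths and noise below are invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
        """Sum of two Gaussians: a cold component plus a broadened, shifted hot one."""
        return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
                + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

    # Hypothetical H-alpha profile: wavelength (nm) vs intensity with noise.
    x = np.linspace(656.0, 656.6, 200)
    rng = np.random.default_rng(2)
    y = two_gaussians(x, 1.0, 656.28, 0.02, 0.4, 656.33, 0.06) + rng.normal(0, 0.01, x.size)

    p0 = [1.0, 656.28, 0.02, 0.4, 656.33, 0.06]          # initial guess
    popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
    a1, mu1, s1, a2, mu2, s2 = popt
    print(f"cold: centre {mu1:.3f} nm, width {s1:.3f}; hot: centre {mu2:.3f} nm, width {s2:.3f}")
    # Width -> ion temperature (Doppler broadening); centre offset -> flow velocity (shift).
    ```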

  10. Characterizing rare-event property distributions via replicate molecular dynamics simulations of proteins.

    PubMed

    Krishnan, Ranjani; Walton, Emily B; Van Vliet, Krystyn J

    2009-11-01

    As computational resources increase, molecular dynamics simulations of biomolecules are becoming an increasingly informative complement to experimental studies. In particular, it has now become feasible to use multiple initial molecular configurations to generate an ensemble of replicate production-run simulations that allows for more complete characterization of rare events such as ligand-receptor unbinding. However, there are currently no explicit guidelines for selecting an ensemble of initial configurations for replicate simulations. Here, we use clustering analysis and steered molecular dynamics simulations to demonstrate that the configurational changes accessible in molecular dynamics simulations of biomolecules do not necessarily correlate with observed rare-event properties. This informs selection of a representative set of initial configurations. We also employ statistical analysis to identify the minimum number of replicate simulations required to sufficiently sample a given biomolecular property distribution. Together, these results suggest a general procedure for generating an ensemble of replicate simulations that will maximize accurate characterization of rare-event property distributions in biomolecules.
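
    The abstract does not specify the statistical test used; as one rough heuristic (not the authors' method), a subsample-versus-ensemble Kolmogorov-Smirnov comparison can indicate how many replicates are needed before a property distribution stabilizes. Synthetic Gumbel-distributed "unbinding forces" stand in for real simulation output:

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    # Hypothetical rare-event property from a large ensemble of replicate
    # simulations (e.g. unbinding force per steered-MD run).
    rng = np.random.default_rng(3)
    full = rng.gumbel(loc=150.0, scale=20.0, size=500)

    # Smallest n whose subsamples look indistinguishable from the full ensemble
    # (two-sample KS test, median p-value over repeated draws).
    for n in (5, 10, 20, 40, 80, 160):
        p = np.median([ks_2samp(rng.choice(full, n, replace=False), full).pvalue
                       for _ in range(200)])
        print(f"n={n:4d}  median KS p-value={p:.2f}")
    ```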

  11. The Cost of Distribution System Upgrades to Accommodate Increasing Penetrations of Distributed Photovoltaic Systems on Real Feeders in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horowitz, Kelsey A; Ding, Fei; Mather, Barry A

    The increasing deployment of distributed photovoltaic systems (DPV) can impact operations at the distribution level and the transmission level of the electric grid. It is important to develop and implement forward-looking approaches for calculating distribution upgrade costs that can be used to inform system planning, market and tariff design, cost allocation, and other policymaking as penetration levels of DPV increase. Using a bottom-up approach that involves iterative hosting capacity analysis, this report calculates distribution upgrade costs as a function of DPV penetration on three real feeders - two in California and one in the Northeastern United States.

  12. Empirical evidence about inconsistency among studies in a pair-wise meta-analysis.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; Higgins, Julian P T

    2016-12-01

    This paper investigates how inconsistency (as measured by the I² statistic) among studies in a meta-analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta-analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta-analyses were obtained, which can inform priors for between-study variance. Inconsistency estimates were highest on average for binary outcome meta-analyses of risk differences and continuous outcome meta-analyses. For a planned binary outcome meta-analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta-analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta-analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta-analysis with an informative prior for heterogeneity. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
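
    The I² statistic underlying these results has a simple closed form, I² = max(0, (Q − df)/Q), where Q is Cochran's heterogeneity statistic. A minimal sketch with invented study effects and variances:

    ```python
    import numpy as np

    def i_squared(effects, variances):
        """Higgins' I^2 from study effect estimates and within-study variances."""
        w = 1.0 / np.asarray(variances)
        y = np.asarray(effects)
        y_bar = np.sum(w * y) / np.sum(w)         # fixed-effect pooled estimate
        q = np.sum(w * (y - y_bar) ** 2)          # Cochran's Q
        df = len(y) - 1
        return max(0.0, (q - df) / q) * 100.0     # as a percentage

    # Hypothetical log odds ratios and variances from 6 studies.
    print(i_squared([0.2, 0.5, 0.1, 0.8, 0.3, 0.6],
                    [0.04, 0.05, 0.03, 0.06, 0.04, 0.05]))
    ```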

  13. Nuclear microscopy in trace-element biology — from cellular studies to the clinic

    NASA Astrophysics Data System (ADS)

    Lindh, Ulf

    1993-05-01

    The concentration and distribution of trace and major elements in cells are of great interest in cell biology. PIXE can provide elemental concentrations in the bulk of cells or organelles, as do other bulk techniques such as atomic absorption spectrophotometry and nuclear activation analysis. Supplementary information, perhaps more exciting, on the intracellular distributions of trace elements can be provided using nuclear microscopy. Intracellular distributions of trace elements in normal and malignant cells are presented. The toxicity of mercury and cadmium can be prevented by supplementation with the essential trace element selenium. Some results from an experimental animal model are discussed. The intercellular distribution of major and trace elements in isolated blood cells, as revealed by nuclear microscopy, provides useful clinical information. Examples are given concerning inflammatory connective-tissue diseases and the chronic fatigue syndrome.

  14. Distributed software framework and continuous integration in hydroinformatics systems

    NASA Astrophysics Data System (ADS)

    Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao

    2017-08-01

    When encountering multiple and complicated models; multisource, structured and unstructured data; and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To address these problems, we describe a distributed software framework and its continuous integration process for hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China, is established.

  15. Hyperspectral imaging spectroradiometer improves radiometric accuracy

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Bouchard, Robert; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc

    2013-06-01

    Reliable and accurate infrared characterization is necessary to measure the specific spectral signatures of aircraft and associated infrared counter-measure protections (i.e. flares). Infrared characterization is essential to improve countermeasure efficiency, improve friend-foe identification and reduce the risk of friendly fire. Typical infrared characterization measurement setups include a variety of panchromatic cameras and spectroradiometers. Each instrument brings essential information: cameras measure the spatial distribution of targets and spectroradiometers provide the spectral distribution of the emitted energy. However, the combination of separate instruments introduces possible radiometric errors and uncertainties that can be reduced with hyperspectral imagers. These instruments combine spectral and spatial information into the same data, measuring the spectral and spatial distribution of the energy at the same time and thereby ensuring the temporal and spatial cohesion of the collected information. This paper presents a quantitative analysis of the main contributors to radiometric uncertainty and shows how a hyperspectral imager can reduce these uncertainties.

  16. Market Analysis for Expansion of Surgical Services at USAF Medical Center Wright-Patterson.

    DTIC Science & Technology

    1991-04-22

    …Rockville, Maryland: Aspen. Malhotra, Naresh K. (1989). Decision Support Systems for Health Care Marketing Managers, Journal of Health Care Marketing, Vol. 9… information processing, analysis, storage, and distribution. Authors including Nickels (1978), Kropf and Greenberg (1984), Keckley (1988) and Malhotra (1989…

  17. Fluctuations in Wikipedia access-rate and edit-event data

    NASA Astrophysics Data System (ADS)

    Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev

    2012-12-01

    Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.
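
    DFA itself is compact to implement: integrate the mean-removed series, detrend it piecewise in windows of size s, and read the exponent α off the log-log slope of the fluctuation function F(s). A minimal sketch (window sizes and test data invented; white noise should give α ≈ 0.5, against the article's α ≈ 0.9 for access-rate series):

    ```python
    import numpy as np

    def dfa_exponent(x, scales):
        """Detrended fluctuation analysis: slope of log F(s) versus log s."""
        profile = np.cumsum(x - np.mean(x))           # integrated, mean-removed series
        flucts = []
        for s in scales:
            n_win = len(profile) // s
            segs = profile[:n_win * s].reshape(n_win, s)
            t = np.arange(s)
            rms = []
            for seg in segs:                          # linear detrend per window
                coef = np.polyfit(t, seg, 1)
                rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            flucts.append(np.sqrt(np.mean(rms)))
        alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
        return alpha

    rng = np.random.default_rng(4)
    print(dfa_exponent(rng.standard_normal(10000), [16, 32, 64, 128, 256, 512]))
    ```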

  18. Development of a Dmt Monitor for Statistical Tracking of Gravitational-Wave Burst Triggers Generated from the Omega Pipeline

    NASA Astrophysics Data System (ADS)

    Li, Jun-Wei; Cao, Jun-Wei

    2010-04-01

    One challenge in large-scale scientific data analysis is to monitor data in real time in a distributed environment. For the LIGO (Laser Interferometer Gravitational-wave Observatory) project, a dedicated suite of data monitoring tools (DMT) has been developed, offering good extensibility to new data types and high flexibility in a distributed environment. Several services are provided, including visualization of data information in various forms and file output of monitoring results. In this work, a DMT monitor, OmegaMon, is developed for tracking statistics of gravitational-wave (GW) burst triggers that are generated from a specific GW burst data analysis pipeline, the Omega Pipeline. Such results can provide diagnostic information as a reference for trigger post-processing and interferometer maintenance.

  19. Security Analysis of Measurement-Device-Independent Quantum Key Distribution in Collective-Rotation Noisy Environment

    NASA Astrophysics Data System (ADS)

    Li, Na; Zhang, Yu; Wen, Shuang; Li, Lei-lei; Li, Jian

    2018-01-01

    Noise is a problem that communication channels cannot avoid. It is, thus, beneficial to analyze the security of MDI-QKD in a noisy environment. An analysis model for collective-rotation noise is introduced, and information-theoretic methods are used to analyze the security of the protocol. The maximum amount of information that Eve can eavesdrop is 50%, and the eavesdropping can always be detected if the noise level ɛ ≤ 0.68. Therefore, the MDI-QKD protocol is secure as a quantum key distribution protocol. The maximum probability that the relay outputs successful results is 16% when eavesdropping is present. Moreover, the probability that the relay outputs successful results is higher with eavesdropping than without it. The paper validates that the MDI-QKD protocol has good robustness.

  20. Historical floods in flood frequency analysis: Is this game worth the candle?

    NASA Astrophysics Data System (ADS)

    Strupczewski, Witold G.; Kochanek, Krzysztof; Bogdanowicz, Ewa

    2017-11-01

    In flood frequency analysis (FFA) the profit from inclusion of historical information on the largest historical pre-instrumental floods depends primarily on the reliability of the information, i.e. the accuracy of the magnitude and return period of the floods. This study is focused on the possible theoretical maximum gain in accuracy of estimates of upper quantiles that can be obtained by incorporating the largest historical floods of known return periods into the FFA. We assumed a simple case: N years of systematic records of annual maximum flows and either one largest (XM1) or two largest (XM1 and XM2) flood peak flows in a historical M-year long period. The problem is explored by Monte Carlo simulations with the maximum likelihood (ML) method. Both correct and false distributional assumptions are considered. In the first case the two-parameter extreme value models (Gumbel, log-Gumbel, Weibull) with various coefficients of variation serve as parent distributions. In the case of unknown parent distribution, the Weibull distribution was assumed as the estimating model and the truncated Gumbel as the parent distribution. The return periods of XM1 and XM2 are determined from the parent distribution. The results are then compared with the case when the return periods of XM1 and XM2 are defined by their plotting positions. The results are presented in terms of bias, root mean square error and the probability of overestimation of the quantile with 100-year return period. The results of the research indicate that the maximum profit of including pre-instrumental floods in the FFA may prove smaller than the cost of reconstructing the historical hydrological information.

  1. Application of computer-aided diagnosis (CAD) in MR-mammography (MRM): do we really need whole lesion time curve distribution analysis?

    PubMed

    Baltzer, Pascal Andreas Thomas; Renz, Diane M; Kullnig, Petra E; Gajda, Mieczyslaw; Camara, Oumar; Kaiser, Werner A

    2009-04-01

    The identification of the most suspect enhancing part of a lesion is regarded as a major diagnostic criterion in dynamic magnetic resonance mammography. Computer-aided diagnosis (CAD) software allows the semi-automatic analysis of the kinetic characteristics of complete enhancing lesions, providing additional information about lesion vasculature. The diagnostic value of this information has not yet been quantified. Consecutive patients from routine diagnostic studies (1.5 T, 0.1 mmol gadopentetate dimeglumine, dynamic gradient-echo sequences at 1-minute intervals) were analyzed prospectively using CAD. Dynamic sequences were processed and reduced to a parametric map. Curve types were classified by initial signal increase (not significant, intermediate, and strong) and the delayed time course of signal intensity (continuous, plateau, and washout). Lesion enhancement was measured using CAD. The most suspect curve, the curve-type distribution percentage, and combined dynamic data were compared. Statistical analysis included logistic regression analysis and receiver-operating characteristic analysis. Fifty-one patients with 46 malignant and 44 benign lesions were enrolled. On receiver-operating characteristic analysis, the most suspect curve showed diagnostic accuracy of 76.7 +/- 5%. In comparison, the curve-type distribution percentage demonstrated accuracy of 80.2 +/- 4.9%. Combined dynamic data had the highest diagnostic accuracy (84.3 +/- 4.2%). These differences did not achieve statistical significance. With appropriate cutoff values, sensitivity and specificity, respectively, were found to be 80.4% and 72.7% for the most suspect curve, 76.1% and 83.6% for the curve-type distribution percentage, and 78.3% and 84.5% for both parameters. The integration of whole-lesion dynamic data tends to improve specificity. However, no statistical significance backs up this finding.

  2. Protection of Location Privacy Based on Distributed Collaborative Recommendations

    PubMed Central

    Wang, Peng; Yang, Jing; Zhang, Jian-Pei

    2016-01-01

    In the existing centralized location services system structure, the server is easily attacked and becomes the communication bottleneck, which can cause the disclosure of users’ locations. To address this, we present a new distributed collaborative recommendation strategy built on a distributed system. In this strategy, each node establishes a profile of its own location information. When a request for location services appears, the user can obtain the corresponding location services according to the recommendations of the neighboring users’ location information profiles. If no suitable recommended location service results are obtained, the user can send a service request to the server using a k-anonymous data set constructed around the centroid position of the neighbors. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users’ location information profiles and used generalization and encryption to ensure the safety of the users’ location information privacy. Finally, we used a real location data set for theoretical and experimental analysis, and the results show that the proposed strategy is capable of reducing the frequency of access to the location server, providing better location services and better protecting the user’s location privacy. PMID:27649308

  3. Protection of Location Privacy Based on Distributed Collaborative Recommendations.

    PubMed

    Wang, Peng; Yang, Jing; Zhang, Jian-Pei

    2016-01-01

    In the existing centralized location services system structure, the server is easily attacked and becomes the communication bottleneck, which can cause the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy built on a distributed system. In this strategy, each node establishes a profile of its own location information. When a request for location services appears, the user can obtain the corresponding location services according to the recommendations of the neighboring users' location information profiles. If no suitable recommended location service results are obtained, the user can send a service request to the server using a k-anonymous data set constructed around the centroid position of the neighbors. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users' location information profiles and used generalization and encryption to ensure the safety of the users' location information privacy. Finally, we used a real location data set for theoretical and experimental analysis, and the results show that the proposed strategy is capable of reducing the frequency of access to the location server, providing better location services and better protecting the user's location privacy.
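
    The k-anonymous fallback step can be sketched compactly: when no neighbour recommendation suffices, the user queries the server with the centroid of a k-member cloak built from profiled neighbour locations. The sketch below is a simplified reading of that idea with invented coordinates; the paper additionally applies generalization and encryption, omitted here.

    ```python
    import numpy as np

    def k_anonymous_request(user_xy, neighbor_profiles, k=5):
        """Build a k-anonymous query point: the centroid of the user plus the
        k-1 nearest neighbours' profiled locations (sketch of the paper's idea)."""
        neighbors = np.asarray(neighbor_profiles)
        d = np.linalg.norm(neighbors - user_xy, axis=1)
        cloak = np.vstack([user_xy, neighbors[np.argsort(d)[:k - 1]]])
        return cloak.mean(axis=0)    # centroid sent to the server instead of user_xy

    rng = np.random.default_rng(5)
    profiles = rng.uniform(0, 1000, size=(50, 2))    # hypothetical neighbour profiles
    print(k_anonymous_request(np.array([500.0, 500.0]), profiles, k=5))
    ```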

  4. Information resources at the National Center for Biotechnology Information.

    PubMed Central

    Woodsmall, R M; Benson, D A

    1993-01-01

    The National Center for Biotechnology Information (NCBI), part of the National Library of Medicine, was established in 1988 to perform basic research in the field of computational molecular biology as well as build and distribute molecular biology databases. The basic research has led to new algorithms and analysis tools for interpreting genomic data and has been instrumental in the discovery of human disease genes for neurofibromatosis and Kallmann syndrome. The principal database responsibility is the National Institutes of Health (NIH) genetic sequence database, GenBank. NCBI, in collaboration with international partners, builds, distributes, and provides online and CD-ROM access to over 112,000 DNA sequences. Another major program is the integration of multiple sequences databases and related bibliographic information and the development of network-based retrieval systems for Internet access. PMID:8374583

  5. Flood Frequency Analysis With Historical and Paleoflood Information

    NASA Astrophysics Data System (ADS)

    Stedinger, Jery R.; Cohn, Timothy A.

    1986-05-01

    An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
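
    The "binomial" case can be sketched directly: the likelihood multiplies the usual density terms for the gauged record by F(x₀)^(M−k)·(1−F(x₀))^k for an M-year historical period in which a threshold x₀ is known to have been exceeded k times. A minimal lognormal sketch with synthetic data (all numbers invented, not the paper's experiments):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def nll(params, sys_flows, m_years, threshold, n_exceed):
        """Negative log-likelihood: lognormal systematic record plus 'binomial'
        historical information (only threshold exceedances known)."""
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        z = (np.log(sys_flows) - mu) / sigma
        ll = np.sum(norm.logpdf(z) - np.log(sigma * sys_flows))   # gauged years
        log_f = norm.logcdf((np.log(threshold) - mu) / sigma)
        ll += (m_years - n_exceed) * log_f                        # non-exceedance years
        ll += n_exceed * np.log1p(-np.exp(log_f))                 # exceedance years
        return -ll

    rng = np.random.default_rng(6)
    flows = rng.lognormal(mean=5.0, sigma=0.5, size=50)           # 50-yr gauged record
    res = minimize(nll, x0=[np.log(flows).mean(), np.log(flows).std()],
                   args=(flows, 150, 600.0, 2), method="Nelder-Mead")
    mu, sigma = res.x                                             # 150 hist. yrs, 2 exceedances
    print(np.exp(mu + sigma * norm.ppf(0.99)))                    # 100-year flood estimate
    ```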

  6. Informative Bayesian Type A uncertainty evaluation, especially applicable to a small number of observations

    NASA Astrophysics Data System (ADS)

    Cox, M.; Shirono, K.

    2017-10-01

    A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM’s Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.

  7. What Are the Relationships between Teachers' Engagement with Management Information Systems and Their Sense of Accountability?

    ERIC Educational Resources Information Center

    Perelman, Uri

    2014-01-01

    Many public and private sector organizations are supported by Management Information Systems (MIS) for collection, management, analysis, and distribution of the data needed for effective decision-making and enhanced organizational management. The existing body of research on MIS in education focuses on the systems' contribution to achieving…

  8. A Bayesian Surrogate for Regional Skew in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Kuczera, George

    1983-06-01

    The problem of how to best utilize site and regional flood data to infer the shape parameter of a flood distribution is considered. One approach to this problem is given in Bulletin 17B of the U.S. Water Resources Council (1981) for the log-Pearson distribution. Here a lesser known distribution is considered, namely, the power normal which fits flood data as well as the log-Pearson and has a shape parameter denoted by λ derived from a Box-Cox power transformation. The problem of regionalizing λ is considered from an empirical Bayes perspective where site and regional flood data are used to infer λ. The distortive effects of spatial correlation and heterogeneity of site sampling variance of λ are explicitly studied with spatial correlation being found to be of secondary importance. The end product of this analysis is the posterior distribution of the power normal parameters expressing, in probabilistic terms, what is known about the parameters given site flood data and regional information on λ. This distribution can be used to provide the designer with several types of information. The posterior distribution of the T-year flood is derived. The effect of nonlinearity in λ on inference is illustrated. Because uncertainty in λ is explicitly allowed for, the understatement in confidence limits due to fixing λ (analogous to fixing log skew) is avoided. Finally, it is shown how to obtain the marginal flood distribution which can be used to select a design flood with specified exceedance probability.
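
    The power normal amounts to fitting a normal distribution to Box-Cox-transformed flows, with λ estimated by maximum likelihood; a T-year flood is then the back-transformed normal quantile. A minimal sketch with synthetic data (it omits the empirical Bayes regionalization of λ, which is the paper's actual contribution):

    ```python
    import numpy as np
    from scipy.special import inv_boxcox
    from scipy.stats import boxcox, norm

    rng = np.random.default_rng(7)
    flows = rng.lognormal(mean=6.0, sigma=0.4, size=60)   # hypothetical annual maxima

    y, lam = boxcox(flows)               # power-normal fit: lambda by max. likelihood
    mu, sd = y.mean(), y.std(ddof=1)

    T = 100                              # design return period
    q_t = inv_boxcox(mu + sd * norm.ppf(1 - 1 / T), lam)
    print(f"lambda={lam:.2f}  estimated {T}-year flood: {q_t:.0f}")
    ```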

  9. Methods of Information Geometry to model complex shapes

    NASA Astrophysics Data System (ADS)

    De Sanctis, A.; Gattone, S. A.

    2016-09-01

    In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From Information Geometry we know that the Fisher-Rao metric endows the statistical manifold of parameters of a family of probability distributions with a Riemannian metric. This approach therefore allows one to reconstruct the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape predictions. As an application, we study the evolution of the rat skull shape. A future application in ophthalmology is introduced.
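
    For univariate Gaussians the Fisher-Rao geometry is an explicit hyperbolic half-plane, giving a closed-form distance; the bivariate landmark case used in the paper is more involved. The sketch below illustrates only the one-dimensional analogue, with the formula assumed from the standard hyperbolic-geometry identification:

    ```python
    import numpy as np

    def fisher_rao_normal(mu1, s1, mu2, s2):
        """Fisher-Rao distance between N(mu1, s1^2) and N(mu2, s2^2). The normal
        family with the Fisher metric is a hyperbolic half-plane in (mu/sqrt(2), s)."""
        num = (mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2
        return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * s1 * s2))

    # Two landmark coordinate distributions (hypothetical 1-D marginals);
    # points along the geodesic between them would serve as intermediate shapes.
    print(fisher_rao_normal(0.0, 1.0, 1.0, 1.5))
    ```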

  10. The Value of Transparency in Distributed Solar PV Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OShaughnessy, Eric J.; Margolis, Robert M.

    Distributed solar photovoltaic (PV) markets are relatively non-transparent: PV price and product information is not readily available, searching for this information is costly (in terms of time and effort), and customers are mostly unfamiliar with the new technology. Quote aggregation, where third-party companies collect PV quotes on behalf of customers, may be one way to increase PV market transparency. In this paper, quote aggregation data are analyzed to study the value of transparency for distributed solar PV markets. The results suggest that easier access to more quotes results in lower prices. We find that installers tend to offer lower prices in more competitive market environments. We supplement the empirical analysis with key findings from interviews of residential PV installers.

  11. Authenticated multi-user quantum key distribution with single particles

    NASA Astrophysics Data System (ADS)

    Lin, Song; Wang, Hui; Guo, Gong-De; Ye, Guo-Hua; Du, Hong-Zhen; Liu, Xiao-Fen

    2016-03-01

    Quantum key distribution (QKD) has been growing rapidly in recent years and becomes one of the hottest issues in quantum information science. During the implementation of QKD on a network, identity authentication has been one main problem. In this paper, an efficient authenticated multi-user quantum key distribution (MQKD) protocol with single particles is proposed. In this protocol, any two users on a quantum network can perform mutual authentication and share a secure session key with the assistance of a semi-honest center. Meanwhile, the particles, which are used as quantum information carriers, are not required to be stored, therefore the proposed protocol is feasible with current technology. Finally, security analysis shows that this protocol is secure in theory.

  12. DataHub: Science data management in support of interactive exploratory analysis

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Rubin, Mark R.

    1993-01-01

    The DataHub addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within the DataHub is the integration of three technologies, viz. knowledge-based expert systems, science visualization, and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, science investigators are able to apply a more complete solution to all nodes of a distributed system. Both computational nodes and interactive nodes are able to effectively and efficiently use the data services (access, retrieval, update, etc.) in a distributed, interdisciplinary information system in a uniform and standard way. This allows the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis is on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to information. The DataHub includes all the required end-to-end components and interfaces to demonstrate the complete concept.

  13. DataHub - Science data management in support of interactive exploratory analysis

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Rubin, Mark R.

    1993-01-01

    DataHub addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within the DataHub is the integration of three technologies, viz. knowledge-based expert systems, science visualization, and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, science investigators are able to apply a more complete solution to all nodes of a distributed system. Both computational nodes and interactive nodes are able to effectively and efficiently use the data services (access, retrieval, update, etc.) in a distributed, interdisciplinary information system in a uniform and standard way. This allows the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis is on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to information. The DataHub includes all the required end-to-end components and interfaces to demonstrate the complete concept.

  14. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    PubMed Central

    van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.

    2015-01-01

    Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information in Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534

  15. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors.

    PubMed

    van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E

    2015-01-01

    Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information in Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
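
    The shrinkage effect the authors describe is easiest to see in the conjugate normal case, a simplification of their longitudinal models: with few observations, the posterior mean is pulled strongly toward an informative prior. All numbers below are invented.

    ```python
    import numpy as np

    def posterior_normal_mean(data, prior_mu, prior_sd, sigma):
        """Posterior of a normal mean with known sigma and a N(prior_mu, prior_sd^2)
        prior -- the conjugate-analysis analogue of the paper's point."""
        n = len(data)
        w_prior, w_data = 1 / prior_sd ** 2, n / sigma ** 2
        post_var = 1 / (w_prior + w_data)
        post_mu = post_var * (w_prior * prior_mu + w_data * np.mean(data))
        return post_mu, np.sqrt(post_var)

    rng = np.random.default_rng(8)
    small_sample = rng.normal(3.0, 2.0, size=8)       # a small sample, n = 8
    print("MLE:", small_sample.mean())
    print("informative prior:", posterior_normal_mean(small_sample, 2.5, 0.5, 2.0))
    print("diffuse prior:   ", posterior_normal_mean(small_sample, 0.0, 100.0, 2.0))
    # With n this small the informative prior dominates -- exactly why the
    # authors insist on reporting a sensitivity analysis of the prior.
    ```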

  16. Hyperspectral imaging to investigate the distribution of organic matter and iron down the soil profile

    NASA Astrophysics Data System (ADS)

    Hobley, Eleanor; Kriegs, Stefanie; Steffens, Markus

    2017-04-01

    Obtaining reliable and accurate data on the spatial distribution of different soil components is difficult due to issues related to sampling scale and resolution on the one hand and laboratory analysis on the other. When investigating the chemical composition of soil, studies frequently limit themselves to two-dimensional characterisations, e.g. spatial variability near the surface or depth distribution down the profile, but rarely combine both approaches due to limitations of sampling and analytical capacity. Furthermore, when assessing depth distributions, samples are taken according to horizon or depth increments, resulting in a mixed sample across the sampling depth. Whilst this facilitates mean content estimation per depth increment and therefore reduces analytical costs, the information content of the sample with regard to heterogeneity within the profile is lost. Hyperspectral imaging can overcome these sampling limitations, yielding high-resolution spectral data down the soil profile and greatly enhancing the information content of the samples. This can then be used to augment the horizontal spatial characterisation of a site, yielding three-dimensional information on the distribution of spectral characteristics across a site and down the profile. Soil spectral characteristics are associated with specific chemical components of soil, such as soil organic matter or iron content. By correlating the content of these soil components with their spectral behaviour, high-resolution multi-dimensional analysis of soil chemical composition can be obtained. Here we present a hyperspectral approach to the characterisation of soil organic matter and iron down different soil profiles, outlining the advantages and issues associated with the methodology.

  17. Cost Benefit Analysis and Other Fun and Games.

    ERIC Educational Resources Information Center

    White, Herbert S.

    1985-01-01

    Discussion of application of cost benefit analysis (CBA) accounting techniques to libraries highlights user willingness to be charged for services provided, reasons why CBA will not work in library settings, libraries and budgets, cost distribution on basis of presumed or expected use, implementation of information-seeking behavior control, and…

  18. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of distributed heterogeneous data sets can overcome the limited scalability of centralized analysis. To reduce the generation of intermediate data and the error components of distributed heterogeneous data sets, a PCA algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm performs eigenvalue processing using Householder tridiagonalization and QR factorization, calculating the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed heterogeneous DBM datasets show that the method is feasible and reliable in terms of execution time and accuracy.
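
    The record leaves the algorithm's details vague; one standard way to run exact PCA over distributed nodes without moving raw data is to combine per-node sufficient statistics into a pooled covariance and then eigendecompose it (NumPy's symmetric solver itself reduces the matrix to tridiagonal form via Householder reflections, in the spirit of the abstract). A sketch with invented data, not the paper's algorithm:

    ```python
    import numpy as np

    def node_summaries(x):
        """Per-node sufficient statistics: row count, feature sums, X^T X."""
        return len(x), x.sum(axis=0), x.T @ x

    def combine_and_pca(summaries, n_components=2):
        """Exact pooled PCA from node summaries -- no raw data leaves a node."""
        n = sum(s[0] for s in summaries)
        total = sum(s[1] for s in summaries)
        gram = sum(s[2] for s in summaries)
        mean = total / n
        cov = (gram - n * np.outer(mean, mean)) / (n - 1)   # pooled covariance
        vals, vecs = np.linalg.eigh(cov)                    # symmetric eigensolver
        return vals[::-1][:n_components], vecs[:, ::-1][:, :n_components]

    rng = np.random.default_rng(9)
    nodes = [rng.normal(size=(200, 5)) for _ in range(3)]   # 3 heterogeneous nodes
    vals, vecs = combine_and_pca([node_summaries(x) for x in nodes])
    print(vals)
    ```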

  19. Developing a Methodology for Risk-Informed Trade-Space Analysis in Acquisition

    DTIC Science & Technology

    2015-01-01

    6.10. Research, Development, Test, and Evaluation Cost Distribution, Technology 1 Mitigation of… 6.11. Research, Development, Test, and Evaluation Cost Distribution, Technology 3 Mitigation of the Upgrade Alternative… courses of action, or risk-mitigation behaviors, which take place in the event that the technology is not developed by the milestone date (e.g…

  20. TREX13 Data Analysis/Modeling

    DTIC Science & Technology

    2018-03-29

    …quantitatively impact sound behavior. To gain quantitative knowledge, TREX13 was designed to contemporaneously measure acoustic quantities and environmental…

  1. Methodology development for quantitative optimization of security enhancement in medical information systems -Case study in a PACS and a multi-institutional radiotherapy database-.

    PubMed

    Haneda, Kiyofumi; Umeda, Tokuo; Koyama, Tadashi; Harauchi, Hajime; Inamura, Kiyonari

    2002-01-01

    The target of our study is to establish a methodology for analyzing the level of security requirements, for finding suitable security measures, and for optimizing the distribution of security across every portion of medical practice. Quantitative expression is introduced wherever possible, for the purpose of easy follow-up of security procedures and easy evaluation of security outcomes. System analysis by fault tree analysis (FTA) clarified that subdividing system elements in detail contributes to much more accurate analysis. Such subdivided composition factors depended largely on the behavior of staff, interactive terminal devices, kinds of service, and routes of the network. In conclusion, we found methods to analyze the levels of security requirements for each medical information system employing FTA, basic events for each composition factor, and combinations of basic events. Methods for finding suitable security measures were developed: risk factors for each basic event, the number of elements for each composition factor, and candidate security measure elements were identified. A method to optimize the security measures for each medical information system was proposed: the optimum distribution of risk factors in terms of basic events was figured out, and comparison between medical information systems became possible.
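
    FTA combines basic-event probabilities through AND/OR gates, which is enough to rank candidate security measures quantitatively. A minimal sketch with invented basic events for a hypothetical PACS breach (independence of events assumed):

    ```python
    import numpy as np

    def and_gate(probs):
        """Top event requires all basic events (independence assumed)."""
        return float(np.prod(probs))

    def or_gate(probs):
        """Top event requires at least one basic event."""
        return 1.0 - float(np.prod(1.0 - np.asarray(probs)))

    # Hypothetical basic-event probabilities for a PACS breach: an OR of
    # (staff error) and (terminal compromised AND network route exposed).
    p_staff, p_terminal, p_route = 0.02, 0.05, 0.10
    p_top = or_gate([p_staff, and_gate([p_terminal, p_route])])
    print(f"top-event probability: {p_top:.4f}")
    # Recomputing p_top under candidate security measures (lowered basic-event
    # probabilities) gives the quantitative comparison the paper aims for.
    ```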

  2. Distributed memory parallel Markov random fields using graph partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinemann, C.; Perciano, T.; Ushizima, D.

    Markov random fields (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
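
    The MPI partitioning itself is not reproduced here, but the per-partition MRF computation can be illustrated with a minimal serial Potts-model segmentation using iterated conditional modes, a stand-in for whatever optimizer MPI-PMRF actually uses; all parameters and data are invented.

    ```python
    import numpy as np

    def icm_potts(image, n_labels=2, beta=1.5, n_iter=5):
        """Iterated conditional modes for a Potts MRF: each pixel takes the label
        minimising squared data cost plus beta per disagreeing neighbour."""
        centers = np.linspace(image.min(), image.max(), n_labels)
        labels = np.abs(image[..., None] - centers).argmin(-1)    # initial labelling
        for _ in range(n_iter):
            p = np.pad(labels, 1, mode="edge")
            nb = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
            disagree = (nb[..., None] != np.arange(n_labels)).sum(axis=0)
            cost = (image[..., None] - centers) ** 2 + beta * disagree
            labels = cost.argmin(-1)
            # Refit class centers (assumes every label stays populated).
            centers = np.array([image[labels == k].mean() for k in range(n_labels)])
        return labels

    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[:, 32:] = 1.0               # two-region scene
    seg = icm_potts(truth + rng.normal(0, 0.4, truth.shape))
    print(seg[:, :32].mean(), seg[:, 32:].mean())                 # ~0 and ~1
    ```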

  3. Development of a site analysis tool for distributed wind projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Shawn

    The Cadmus Group, Inc., in collaboration with the National Renewable Energy Laboratory (NREL) and Encraft, was awarded a grant from the Department of Energy (DOE) to develop a site analysis tool for distributed wind technologies. As the principal investigator for this project, Mr. Shawn Shaw was responsible for overall project management, direction, and technical approach. The product resulting from this project is the Distributed Wind Site Analysis Tool (DSAT), a software tool for analyzing proposed sites for distributed wind technology (DWT) systems. This user-friendly tool supports the long-term growth and stability of the DWT market by providing reliable, realistic estimates of site and system energy output and feasibility. DSAT, which is accessible online and requires no purchase or download of software, is available in two account types. Standard: this free account allows the user to analyze a limited number of sites and to produce a system performance report for each. Professional: for a small annual fee users can analyze an unlimited number of sites, produce system performance reports, and generate other customizable reports containing key information such as visual influence and wind resources. The tool’s interactive maps allow users to create site models that incorporate the obstructions and terrain types present. Users can generate site reports immediately after entering the requisite site information. Ideally, this tool also educates users regarding good site selection and effective evaluation practices.

  4. Intelligent infrastructure for sustainable potable water: a roundtable for emerging transnational research and technology development needs.

    PubMed

    Adriaens, Peter; Goovaerts, Pierre; Skerlos, Steven; Edwards, Elizabeth; Egli, Thomas

    2003-12-01

    Recent commercial and residential development have substantially impacted the fluxes and quality of water that recharge the aquifers and discharges to streams, lakes and wetlands and, ultimately, is recycled for potable use. Whereas the contaminant sources may be varied in scope and composition, these issues of urban water sustainability are of public health concern at all levels of economic development worldwide, and require cheap and innovative environmental sensing capabilities and interactive monitoring networks, as well as tailored distributed water treatment technologies. To address this need, a roundtable was organized to explore the potential role of advances in biotechnology and bioengineering to aid in developing causative relationships between spatial and temporal changes in urbanization patterns and groundwater and surface water quality parameters, and to address aspects of socioeconomic constraints in implementing sustainable exploitation of water resources. An interactive framework for quantitative analysis of the coupling between human and natural systems requires integrating information derived from online and offline point measurements with Geographic Information Systems (GIS)-based remote sensing imagery analysis, groundwater-surface water hydrologic fluxes and water quality data to assess the vulnerability of potable water supplies. Spatially referenced data to inform uncertainty-based dynamic models can be used to rank watershed-specific stressors and receptors to guide researchers and policymakers in the development of targeted sensing and monitoring technologies, as well as tailored control measures for risk mitigation of potable water from microbial and chemical environmental contamination. The enabling technologies encompass: (i) distributed sensing approaches for microbial and chemical contamination (e.g. pathogens, endocrine disruptors); (ii) distributed application-specific, and infrastructure-adaptive water treatment systems; (iii) geostatistical integration of monitoring data and GIS layers; and (iv) systems analysis of microbial and chemical proliferation in distribution systems. This operational framework is aimed at technology implementation while maximizing economic and public health benefits. The outcomes of the roundtable will further research agendas in information technology-based monitoring infrastructure development, integration of processes and spatial analysis, as well as in new educational and training platforms for students, practitioners and regulators. The potential for technology diffusion to emerging economies with limited financial resources is substantial.

  5. Analysis of counterfactual quantum key distribution using error-correcting theory

    NASA Astrophysics Data System (ADS)

    Li, Yan-Bing

    2014-10-01

    Counterfactual quantum key distribution is an interesting direction in quantum cryptography and has been realized by some researchers. However, it has been pointed out that it is insecure in an information-theoretic sense when used over a highly lossy channel. In this paper, we revisit its security from an error-correcting theory point of view. The analysis indicates that the security flaw arises because the error rate in the users' raw key pair is as high as that under Eve's attack when the loss rate exceeds 50%.

  6. Development of pair distribution function analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondreele, R.; Billinge, S.; Kwei, G.

    1996-09-01

    This is the final report of a 3-year LDRD project at LANL. It has become more and more evident that structural coherence in the CuO{sub 2} planes of high-T{sub c} superconducting materials over some intermediate length scale (nm range) is important to superconductivity. In recent years, the pair distribution function (PDF) analysis of powder diffraction data has been developed for extracting structural information on these length scales. This project sought to expand and develop this technique, use it to analyze neutron powder diffraction data, and apply it to problems. In particular, interest is in the area of high-T{sub c} superconductors, although we planned to extend the study to the closely related perovskite ferroelectric materials and other materials where the local structure affects the properties and where detailed knowledge of the local and intermediate-range structure is important. In addition, we planned to carry out single crystal experiments to look for diffuse scattering. This information augments the information from the PDF.

  7. Distributed information system (water fact sheet)

    USGS Publications Warehouse

    Harbaugh, A.W.

    1986-01-01

    During 1982-85, the Water Resources Division (WRD) of the U.S. Geological Survey (USGS) installed over 70 large minicomputers in offices across the country to support its mission in the science of hydrology. These computers are connected by a communications network that allows information to be shared among computers in each office. The computers and network together are known as the Distributed Information System (DIS). The computers are accessed through the use of more than 1500 terminals and minicomputers. The WRD has three fundamentally different needs for computing: data management, hydrologic analysis, and administration. Data management accounts for 50% of the computational workload of WRD because hydrologic data are collected in all 50 states, Puerto Rico, and the Pacific trust territories. Hydrologic analysis accounts for 40% of the computational workload of WRD. Cost accounting, payroll, personnel records, and planning for WRD programs occupy an estimated 10% of the computer workload. The DIS communications network is shown on a map. (Lantz-PTT)

  8. Phase 1 of the near term hybrid passenger vehicle development program, appendix A. Mission analysis and performance specification studies. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Traversi, M.; Barbarek, L. A. C.

    1979-01-01

    A handy reference for JPL minimum requirements and guidelines is presented, as well as information on the use of the fundamental information source represented by the Nationwide Personal Transportation Survey. Data on U.S. demographic statistics and highway speeds are included, along with methodology for normal parameter evaluation, synthesis of daily distance distributions, and projection of car ownership distributions. The synthesis of tentative mission quantification results, of intermediate mission quantification results, and of mission quantification parameters is considered, and 1985 in-place fleet fuel economy data are included.

  9. Popularity Prediction Tool for ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Beermann, T.; Maettig, P.; Stewart, G.; Lassnig, M.; Garonne, V.; Barisits, M.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration

    2014-06-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access to data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.
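
    The abstract does not spell out the network architecture or input features, so the following is only a toy regression sketch under assumed inputs (weekly access counts; the window sizes and layer width are invented), illustrating the kind of trend-from-history prediction described:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Hypothetical history: weekly access counts for 500 datasets over 12 weeks.
        history = rng.poisson(lam=20, size=(500, 12)).astype(float)

        # Use the first 8 weeks as input features and week 9 as the target.
        X, y = history[:, :8], history[:, 8]

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X, y)

        # Predict near-term popularity from a dataset's recent usage history.
        print(model.predict(history[:3, :8]))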

  10. A fractal approach to dynamic inference and distribution analysis

    PubMed Central

    van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552
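
    As a concrete illustration of such a cocktail, the sketch below draws samples from a two-component mixture of a lognormal body and an inverse power-law tail; the mixture weight and parameters are invented for the example, not taken from the article:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000
        w = 0.7  # assumed mixture weight of the lognormal component

        # Lognormal body: multiplicative interactions among many components.
        lognormal = rng.lognormal(mean=-0.5, sigma=0.4, size=n)

        # Inverse power-law tail, pdf ~ x**-(alpha+1), by inverse transform.
        alpha, x_min = 2.0, 0.5
        tail = x_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)

        # The "cocktail": each observation comes from one of the two regimes.
        samples = np.where(rng.random(n) < w, lognormal, tail)
        print(samples.mean(), np.median(samples))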

  11. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
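
    The reweighting described here falls out of Bayes' rule directly; a minimal beta-binomial sketch (prior and data values are illustrative, not from the study) shows the posterior mean shifting from the prior toward the likelihood as sample size grows:

        import numpy as np

        # Prior knowledge about a reward probability: Beta(a, b).
        a, b = 8.0, 4.0  # assumed prior favoring p ~ 0.67

        for n in (5, 20, 100):                 # increasing likelihood sample size
            k = int(0.3 * n)                   # observed successes suggest p ~ 0.3
            post_mean = (a + k) / (a + b + n)  # posterior mean of Beta(a+k, b+n-k)
            print(f"n={n:3d}  posterior mean={post_mean:.3f}")
        # As n grows, the posterior mean moves from the prior (0.67) toward
        # the likelihood estimate (0.3): likelihood information gains weight.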

  12. Tree-ring cellulose exhibits several distinct intramolecular 13C signals

    NASA Astrophysics Data System (ADS)

    Wieloch, Thomas; Ehlers, Ina; Frank, David; Gessler, Arthur; Grabner, Michael; Yu, Jun; Schleucher, Jürgen

    2017-04-01

    Stable carbon isotopes are a key tool in biogeosciences. Present applications, including compound-specific isotope analysis, measure 13C/12C ratios (δ13C) of bulk material or of whole molecules. However, it is well known that primary metabolites also show large intramolecular 13C variation, also called isotopomer variation. This variation reflects 13C fractionation by enzyme reactions and therefore encodes metabolic information. Furthermore, δ13C must be considered an average of the intramolecular 13C distribution. Here we will present (1) methodology to analyse intramolecular 13C distributions of tree-ring cellulose by quantitative 13C NMR (Chaintreau et al., 2013, Anal Chim Acta, 788, 108-113); (2) intramolecular 13C distributions of an annually resolved tree-ring chronology (Pinus nigra, 1961-1995); (3) isotope parameters and terminology for analysis of intramolecular isotope time series; (4) a method for correcting for heterotrophic C redistribution. We will show that the intramolecular 13C distribution of tree-ring cellulose shows large variation, with differences between isotopomers exceeding 10‰. Thus, individual 13C isotopomers of cellulose constitute distinct 13C inputs into major global C pools such as wood and soil organic matter. When glucose units with the observed intramolecular 13C pattern are broken down along alternative catabolic pathways, it must be expected that respired CO2 with strongly differing δ13C will be released, indicating that intramolecular 13C variation affects isotope signals of atmosphere-biosphere C exchange fluxes. Taking this variation into account will improve modelling of the global C cycle. Furthermore, cluster analysis shows that tree-ring glucose exhibits several independent intramolecular 13C signals, which constitute distinct ecophysiological information channels. Thus, whole-molecule 13C analysis likely misses a large part of the isotope information stored in tree rings. As we have shown for deuterium (Ehlers et al., 2015, PNAS, 112, 15585), intramolecular isotope signals allow tracing plant acclimation over centuries, and intramolecular 13C distributions will also improve our understanding of 13C signatures of global C fluxes.

  13. Quantitative Analysis of High-Quality Officer Selection by Commandants Career-Level Education Board

    DTIC Science & Technology

    2017-03-01

    …due to Marines being evaluated before the end of their initial service commitment. Our research utilizes quantitative variables to analyze the… …not provide detailed information why. B. LIMITATIONS: The photograph analysis in this research is strictly limited to a quantitative analysis… [Naval Postgraduate School, Monterey, California; thesis approved for public release, distribution unlimited.]

  14. Regression analysis using dependent Polya trees.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J

    2013-11-30

    Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Dynamic Singularity Spectrum Distribution of Sea Clutter

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yu, Wenxian; Zhang, Shuning

    2015-12-01

    Fractal and multifractal theory have provided new approaches for radar signal processing and target detection against an ocean background. However, related research has mainly focused on the fractal dimension or multifractal spectrum (MFS) of sea clutter. In this paper, a new dynamic singularity analysis method for sea clutter using the MFS distribution is developed, based on moving detrending analysis (DMA-MFSD). Theoretically, we introduce time information by using the cyclic auto-correlation of sea clutter. For the transient correlation series, the instantaneous singularity spectrum based on the multifractal detrending moving analysis (MF-DMA) algorithm is calculated, and the dynamic singularity spectrum distribution of sea clutter is acquired. In addition, we analyze the time-varying singularity exponent ranges and the maximum position function in DMA-MFSD of sea clutter. For real sea clutter data in a level III sea state, we analyze the dynamic singularity spectrum distribution and conclude that radar sea clutter has non-stationary and time-varying scale characteristics and exhibits a time-varying singularity spectrum distribution under the proposed DMA-MFSD method. DMA-MFSD will also provide a reference for nonlinear dynamics and multifractal signal processing.

  16. Informing Mexico's Distributed Generation Policy with System Advisor Model (SAM) Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aznar, Alexandra Y; Zinaman, Owen R; McCall, James D

    The Government of Mexico recognizes the potential for clean distributed generation (DG) to meaningfully contribute to Mexico's clean energy and emissions reduction goals. However, important questions remain about how to fairly value DG and foster inclusive and equitable market growth that is beneficial to investors, electricity ratepayers, electricity distributors, and society. The U.S. National Renewable Energy Laboratory (NREL) has partnered with power sector institutions and stakeholders in Mexico to provide timely analytical support and expertise to help inform policymaking processes on clean DG. This document describes two technical assistance interventions that used the System Advisor Model (SAM) to inform Mexico's DG policymaking processes with a focus on rooftop solar regulation and policy.

  17. Into the Future: The Foundations of Library and Information Services in the Post-Industrial Era.

    ERIC Educational Resources Information Center

    Harris, Michael H.; Hannah, Stan A.

    In 1962, Harvard sociologist Daniel Bell began distributing and discussing a draft paper in which he introduced the concept of the "post-industrial" or "information" society. This idea has been the subject of intense and often insightful analysis by proponents and critics from a vast range of disciplinary sites. Until now, the…

  18. Power & Decisions: Institutions in an Information Era. Trend Analysis Program. TAP 18.

    ERIC Educational Resources Information Center

    American Council of Life Insurance, New York, NY.

    New and cheaper means of acquisition and distribution are giving individuals easier access to information. This has contributed to dissatisfaction with corporate governance and business management and is encouraging the growth of new types of interest groups that are seeking a voice in the private sector. This may lead to a reversal of the trend…

  19. [Imaging Mass Spectrometry in Histopathologic Analysis].

    PubMed

    Yamazaki, Fumiyoshi; Seto, Mitsutoshi

    2015-04-01

    Matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS can identify a target molecule. In addition, IMS enables global analysis of biomolecules, including unknown molecules, by detecting mass-to-charge ratios without any predefined target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which is difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we first introduce the principle of imaging mass spectrometry and recent advances in sample preparation methods. Second, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and its clinical application, such as in drug development.

  20. In vivo evaluation of the effect of stimulus distribution on FIR statistical efficiency in event-related fMRI

    PubMed Central

    Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L

    2013-01-01

    Technical developments in MRI have improved signal-to-noise ratios, allowing use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related fMRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level a function of multicollinearity. Experiment protocols varied by up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can vary significantly and substantially with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. PMID:23473798
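
    To see how the distribution of stimuli drives FIR efficiency, one can build the FIR design matrix from stimulus onsets and use the condition number of X'X as a rough multicollinearity index; the scan count, onsets, and lag window below are invented for illustration:

        import numpy as np

        def fir_design(onsets, n_scans, n_lags):
            """FIR design matrix: one column per post-stimulus lag."""
            X = np.zeros((n_scans, n_lags))
            for onset in onsets:
                for lag in range(n_lags):
                    if onset + lag < n_scans:
                        X[onset + lag, lag] = 1.0
            return X

        n_scans, n_lags = 200, 10
        regular = fir_design(range(0, 200, 20), n_scans, n_lags)  # evenly spaced
        gen = np.random.default_rng(1)
        random_onsets = sorted(gen.choice(190, size=10, replace=False))
        random_ = fir_design(random_onsets, n_scans, n_lags)

        # Higher condition number of X'X -> more multicollinearity
        # -> less efficient FIR estimation.
        for name, X in (("regular", regular), ("random", random_)):
            print(name, np.linalg.cond(X.T @ X))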

  1. Statistical methods of fracture characterization using acoustic borehole televiewer log interpretation

    NASA Astrophysics Data System (ADS)

    Massiot, Cécile; Townend, John; Nicol, Andrew; McNamara, David D.

    2017-08-01

    Acoustic borehole televiewer (BHTV) logs provide measurements of fracture attributes (orientations, thickness, and spacing) at depth. Orientation, censoring, and truncation sampling biases similar to those described for one-dimensional outcrop scanlines, and other logging or drilling artifacts specific to BHTV logs, can affect the interpretation of fracture attributes from BHTV logs. K-means, fuzzy K-means, and agglomerative clustering methods provide transparent means of separating fracture groups on the basis of their orientation. Fracture spacing is calculated for each of these fracture sets. Maximum likelihood estimation using truncated distributions permits the fitting of several probability distributions to the fracture attribute data sets within truncation limits, which can then be extrapolated over the entire range where they naturally occur. Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC) statistical information criteria rank the distributions by how well they fit the data. We demonstrate these attribute analysis methods with a data set derived from three BHTV logs acquired from the high-temperature Rotokawa geothermal field, New Zealand. Varying BHTV log quality reduces the number of input data points, but careful selection of the quality levels where fractures are deemed fully sampled increases the reliability of the analysis. Spacing data analysis comprising up to 300 data points and spanning three orders of magnitude can be approximated similarly well (similar AIC rankings) with several distributions. Several clustering configurations and probability distributions can often characterize the data at similar levels of statistical criteria. Thus, several scenarios should be considered when using BHTV log data to constrain numerical fracture models.
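
    A sketch of the truncated-fit-and-rank workflow described above (the candidate models, truncation limit, and data are placeholders, not the Rotokawa measurements): maximize a likelihood renormalized over the observable range, then compare candidate models by AIC.

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        gen = np.random.default_rng(7)
        t_lo = 0.2                                # assumed lower truncation limit
        raw = gen.exponential(scale=1.5, size=400)
        spacing = raw[raw > t_lo]                 # only spacings above t_lo observed

        def nll(params, make):
            """Negative log-likelihood of a model truncated below t_lo."""
            if np.any(np.asarray(params) <= 0):
                return np.inf
            frozen = make(params)
            tail = 1.0 - frozen.cdf(t_lo)         # renormalize over [t_lo, inf)
            if tail <= 0:
                return np.inf
            return -np.sum(frozen.logpdf(spacing) - np.log(tail))

        candidates = {
            "exponential": (lambda p: stats.expon(scale=p[0]), [1.0]),
            "lognormal":   (lambda p: stats.lognorm(s=p[0], scale=p[1]), [1.0, 1.0]),
        }

        for name, (make, x0) in candidates.items():
            res = minimize(nll, x0, args=(make,), method="Nelder-Mead")
            aic = 2 * len(x0) + 2 * res.fun       # lower AIC = better fit
            print(f"{name:11s} AIC = {aic:.1f}")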

  2. Resolution analysis of archive films for the purpose of their optimal digitization and distribution

    NASA Astrophysics Data System (ADS)

    Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek

    2017-09-01

    With the recent high demand for ultra-high-definition (UHD) content to be screened in high-end digital movie theaters but also in the home environment, film archives full of movies in high definition and above are in the scope of UHD content providers. Movies captured with traditional film technology represent a virtually unlimited source of UHD content. The goal of maintaining complete image information is also related to the choice of scanning resolution and of the spatial resolution for further distribution. It might seem that scanning the film material at the highest possible resolution using state-of-the-art film scanners, and also distributing it at this resolution, is the right choice. The information content of the digitized images is, however, limited, and various degradations moreover lead to its further reduction. Digital distribution of the content at the highest image resolution might therefore be unnecessary or uneconomical. In other cases, the highest possible resolution is inevitable if we want to preserve fine scene details or film grain structure for archiving purposes. This paper deals with the image detail content analysis of archive film records. The resolution limit in the captured scene image and the factors which lower the final resolution are discussed. Methods are proposed to determine the spatial detail of the film picture based on the analysis of its digitized image data. These procedures allow determining recommendations for optimal distribution of digitized video content intended for various display devices with lower resolutions. The obtained results are illustrated on a spatial downsampling use-case scenario, and a performance evaluation of the proposed techniques is presented.

  3. Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

    PubMed

    Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

    2017-01-05

    The magnitude and complexity of the structural and functional data available on nanomaterials require data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.
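
    A minimal sketch of the clustering half of this workflow, with synthetic size/shape features standing in for the SiQD descriptors (the feature choices and cluster count are assumptions); the cluster centroids play the role of prototypes:

        import numpy as np
        from sklearn.cluster import KMeans

        gen = np.random.default_rng(3)

        # Synthetic ensemble: each row is a nanostructure, columns are
        # [diameter_nm, asphericity]; three loose families of shapes.
        X = np.vstack([
            gen.normal([2.0, 0.1], 0.15, (200, 2)),
            gen.normal([4.0, 0.3], 0.25, (200, 2)),
            gen.normal([6.5, 0.6], 0.30, (100, 2)),
        ])

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

        # Prototypes = cluster centroids: "typical" structures of each family.
        print(np.round(km.cluster_centers_, 2))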

  4. Fractal-like Distributions over the Rational Numbers in High-throughput Biological and Clinical Data

    NASA Astrophysics Data System (ADS)

    Trifonov, Vladimir; Pasqualucci, Laura; Dalla-Favera, Riccardo; Rabadan, Raul

    2011-12-01

    Recent developments in extracting and processing biological and clinical data are allowing quantitative approaches to studying living systems. High-throughput sequencing (HTS), expression profiles, proteomics, and electronic health records (EHR) are some examples of such technologies. Extracting meaningful information from those technologies requires careful analysis of the large volumes of data they produce. In this note, we present a set of fractal-like distributions that commonly appear in the analysis of such data. The first set of examples is drawn from an HTS experiment. Here, the distributions appear as part of the evaluation of the error rate of the sequencing and the identification of tumorigenic genomic alterations. The other examples are obtained from risk factor evaluation and analysis of relative disease prevalence and co-morbidity as these appear in EHR. The distributions are also relevant to the identification of subclonal populations in tumors and the study of quasi-species and intrahost diversity of viral populations.

  5. Shuttle Electrical Power Analysis Program (SEPAP); single string circuit analysis report

    NASA Technical Reports Server (NTRS)

    Murdock, C. R.

    1974-01-01

    An evaluation is reported of the data obtained from an analysis of the distribution network characteristics of the shuttle during a spacelab mission. A description of the approach utilized in the development of the computer program and data base is provided and conclusions are drawn from the analysis of the data. Data sheets are provided for information to support the detailed discussion on each computer run.

  6. Characterizing the spatial structure of endangered species habitat using geostatistical analysis of IKONOS imagery

    USGS Publications Warehouse

    Wallace, C.S.A.; Marsh, S.E.

    2005-01-01

    Our study used geostatistics to extract measures that characterize the spatial structure of vegetated landscapes from satellite imagery for mapping endangered Sonoran pronghorn habitat. Fine spatial resolution IKONOS data provided information at the scale of individual trees or shrubs that permitted analysis of vegetation structure and pattern. We derived images of landscape structure by calculating local estimates of the nugget, sill, and range variogram parameters within 25 × 25-m image windows. These variogram parameters, which describe the spatial autocorrelation of the 1-m image pixels, are shown in previous studies to discriminate between different species-specific vegetation associations. We constructed two independent models of pronghorn landscape preference by coupling the derived measures with Sonoran pronghorn sighting data: a distribution-based model and a cluster-based model. The distribution-based model used the descriptive statistics for variogram measures at pronghorn sightings, whereas the cluster-based model used the distribution of pronghorn sightings within clusters of an unsupervised classification of derived images. Both models define similar landscapes, and validation results confirm they effectively predict the locations of an independent set of pronghorn sightings. Such information, although not a substitute for field-based knowledge of the landscape and associated ecological processes, can provide valuable reconnaissance information to guide natural resource management efforts. © 2005 Taylor & Francis Group Ltd.
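
    The windowed variogram estimation described here can be sketched on a one-dimensional transect (the data and the exponential model fit are illustrative; the study fit nugget, sill, and range within 25 × 25-m image windows):

        import numpy as np
        from scipy.optimize import curve_fit

        gen = np.random.default_rng(5)
        z = np.zeros(400)
        for i in range(1, 400):            # AR(1) transect: exponential variogram
            z[i] = 0.9 * z[i - 1] + gen.normal(0.0, 0.3)

        def empirical_variogram(z, max_lag):
            lags = np.arange(1, max_lag + 1)
            gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
            return lags, gamma

        def exp_model(h, nugget, sill, corr_range):
            return nugget + sill * (1.0 - np.exp(-h / corr_range))

        lags, gamma = empirical_variogram(z, 25)
        params, _ = curve_fit(exp_model, lags, gamma, p0=[0.01, gamma.max(), 5.0])
        print("nugget=%.3f sill=%.3f range=%.1f" % tuple(params))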

  7. Comparative analysis through probability distributions of a data set

    NASA Astrophysics Data System (ADS)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography, etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us to select the best-fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution functions. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness-of-fit statistics also enables us to rank the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
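
    A compact sketch of the three tests named above, applied to a synthetic sample against a fitted normal candidate (data and parameters are illustrative; note that estimating parameters from the data biases the nominal KS p-value):

        import numpy as np
        from scipy import stats

        gen = np.random.default_rng(11)
        data = gen.normal(10.0, 2.0, 500)
        mu, sigma = data.mean(), data.std(ddof=1)

        # Kolmogorov-Smirnov: distance between empirical and fitted CDFs.
        ks = stats.kstest(data, "norm", args=(mu, sigma))

        # Anderson-Darling: CDF distance with extra weight in the tails.
        ad = stats.anderson(data, dist="norm")

        # Chi-squared: compare binned observed counts with expected counts.
        edges = np.quantile(data, np.linspace(0, 1, 11))  # 10 equiprobable bins
        observed, _ = np.histogram(data, bins=edges)
        probs = np.diff(stats.norm.cdf(edges, mu, sigma))
        expected = probs / probs.sum() * observed.sum()
        chi2 = stats.chisquare(observed, expected, ddof=2)  # 2 fitted parameters

        print(ks.statistic, ad.statistic, chi2.statistic)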

  8. Contribution of climate, soil, and MODIS predictors when modeling forest inventory invasive species distribution using forest inventory data

    Treesearch

    Dumitru Salajanu; Dennis Jacobs

    2010-01-01

    Forest inventory and analysis data are used to monitor the presence and extent of certain non-native invasive species. Effective control of their spread requires quality spatial distribution information. There is no clear consensus why some ecosystems are more favorable to non-native species. The objective of this study is to evaluate the relative contribution of geo-...

  9. Implementation of a cost-accounting model in a biobank: practical implications.

    PubMed

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use them more efficiently are making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experience on the implementation of cost accounting in biobanks has been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real/active biobank, including a comprehensive explanation of how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. The global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the level of governmental research and development policies.

  10. Interaction Support for Information Finding and Comparative Analysis in Online Video

    ERIC Educational Resources Information Center

    Xia, Jinyue

    2017-01-01

    Current online video interaction is typically designed with a focus on straightforward distribution and passive consumption of individual videos. This "click play, sit back and watch" context is typical of videos for entertainment. However, there are many task scenarios that require active engagement and analysis of video content as a…

  11. A Bayesian approach to meta-analysis of plant pathology studies.

    PubMed

    Mila, A L; Ngugi, H K

    2011-01-01

    Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard) which was evaluated only in seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework. Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allow for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.
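
    A minimal sketch of the random-effects setup discussed above, modeling y_i ~ Normal(mu, se_i^2 + tau^2) with a grid posterior and flat priors in place of a full MCMC treatment; the effect sizes and standard errors are invented, not the fire blight data:

        import numpy as np
        from scipy import stats

        # Hypothetical log response ratios and standard errors from 7 studies.
        y = np.array([-0.41, -0.18, -0.55, 0.05, -0.30, -0.22, -0.48])
        se = np.array([0.20, 0.25, 0.18, 0.30, 0.22, 0.28, 0.21])

        mu_grid = np.linspace(-1.0, 0.5, 301)
        tau_grid = np.linspace(0.0, 1.0, 201)
        M, T = np.meshgrid(mu_grid, tau_grid)   # rows = tau, cols = mu

        # Flat (noninformative) priors on mu and tau: posterior ~ likelihood.
        var = se[None, None, :] ** 2 + T[..., None] ** 2
        loglik = np.sum(stats.norm.logpdf(y, M[..., None], np.sqrt(var)), axis=-1)
        post = np.exp(loglik - loglik.max())
        post /= post.sum()

        # Marginal posterior of the overall effect mu and a 95% credibility interval.
        post_mu = post.sum(axis=0)
        cdf = np.cumsum(post_mu)
        lo, hi = mu_grid[np.searchsorted(cdf, [0.025, 0.975])]
        print(f"95% CRI for mu: ({lo:.2f}, {hi:.2f})")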

  12. Bilinear Time-frequency Analysis for Lamb Wave Signal Detected by Electromagnetic Acoustic Transducer

    NASA Astrophysics Data System (ADS)

    Sun, Wenxiu; Liu, Guoqiang; Xia, Hui; Xia, Zhengwu

    2018-03-01

    Accurate acquisition of the detection signal's travel time plays a very important role in cross-hole tomography. An experimental platform with an aluminum plate under a perpendicular magnetic field is established, and the bilinear time-frequency analysis methods, the Wigner-Ville distribution (WVD) and the pseudo-Wigner-Ville distribution (PWVD), are applied to analyse Lamb wave signals detected by an electromagnetic acoustic transducer (EMAT). By extracting the component of the time-frequency spectrum at the excitation frequency, travel time information can be obtained. In comparison with traditional linear time-frequency analysis methods such as the short-time Fourier transform (STFT), the bilinear method PWVD is more appropriate for extracting travel times and recognizing patterns of Lamb waves.
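
    For readers unfamiliar with the WVD, a minimal discrete sketch (the chirp test signal is invented): form the instantaneous autocorrelation x(n+tau)x*(n-tau) and Fourier-transform over the lag tau.

        import numpy as np

        def wigner_ville(x):
            """Discrete Wigner-Ville distribution: rows = frequency, cols = time.

            Because the effective lag step is two samples, the frequency axis
            spans 0..fs/2 across the FFT bins.
            """
            x = np.asarray(x, dtype=complex)
            n_samp = len(x)
            W = np.zeros((n_samp, n_samp))
            for n in range(n_samp):
                tau_max = min(n, n_samp - 1 - n)
                tau = np.arange(-tau_max, tau_max + 1)
                acf = np.zeros(n_samp, dtype=complex)
                acf[tau % n_samp] = x[n + tau] * np.conj(x[n - tau])
                W[:, n] = np.fft.fft(acf).real   # real because acf is conj-symmetric
            return W

        # Linear chirp: the WVD concentrates energy along its instantaneous frequency.
        t = np.linspace(0, 1, 256, endpoint=False)
        chirp = np.exp(2j * np.pi * (20 * t + 40 * t ** 2))
        W = wigner_ville(chirp)
        print(W.shape, W.argmax(axis=0)[100:105])  # ridge location vs. time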

  13. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.

  14. Analysis of haptic information in the cerebral cortex

    PubMed Central

    2016-01-01

    Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level. PMID:27440247

  15. Enabling analytical and Modeling Tools for Enhanced Disease Surveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawn K. Manley

    2003-04-01

    Early detection, identification, and warning are essential to minimize casualties from a biological attack. For covert attacks, sick people are likely to provide the first indication of an attack. An enhanced medical surveillance system that synthesizes distributed health indicator information and rapidly analyzes the information can dramatically increase the number of lives saved. Current surveillance methods to detect both biological attacks and natural outbreaks are hindered by factors such as distributed ownership of information, incompatible data storage and analysis programs, and patient privacy concerns. Moreover, because data are not widely shared, few data mining algorithms have been tested on and applied to diverse health indicator data. This project addressed both integration of multiple data sources and development and integration of analytical tools for rapid detection of disease outbreaks. As a first prototype, we developed an application to query and display distributed patient records. This application incorporated need-to-know access control and incorporated data from standard commercial databases. We developed and tested two different algorithms for outbreak recognition. The first is a pattern recognition technique that searches for space-time data clusters that may signal a disease outbreak. The second is a genetic algorithm to design and train neural networks (GANN) that we applied toward disease forecasting. We tested these algorithms against influenza, respiratory illness, and Dengue Fever data. Through this LDRD in combination with other internal funding, we delivered a distributed simulation capability to synthesize disparate information and models for earlier recognition and improved decision-making in the event of a biological attack. The architecture incorporates user feedback and control so that a user's decision inputs can impact the scenario outcome as well as integrated security and role-based access-control for communicating between distributed data and analytical tools. This work included construction of interfaces to various commercial database products and to one of the data analysis algorithms developed through this LDRD.

  16. Addressing data privacy in matched studies via virtual pooling.

    PubMed

    Saha-Chaudhuri, P; Weinberg, C R

    2017-09-07

    Data confidentiality and shared use of research data are two desirable but sometimes conflicting goals in research with multi-center studies and distributed data. While ideal for straightforward analysis, confidentiality restrictions forbid creation of a single dataset that includes covariate information of all participants. Current approaches such as aggregate data sharing, distributed regression, meta-analysis and score-based methods can have important limitations. We propose a novel application of an existing epidemiologic tool, specimen pooling, to enable confidentiality-preserving analysis of data arising from a matched case-control, multi-center design. Instead of pooling specimens prior to assay, we apply the methodology to virtually pool (aggregate) covariates within nodes. Such virtual pooling retains most of the information used in an analysis with individual data, and since individual participant data are not shared externally, within-node virtual pooling preserves data confidentiality. We show that aggregated covariate levels can be used in a conditional logistic regression model to estimate individual-level odds ratios of interest. The parameter estimates from the standard conditional logistic regression are compared to the estimates based on a conditional logistic regression model with aggregated data. The parameter estimates are shown to be similar to those without pooling and to have comparable standard errors and confidence interval coverage. Virtual data pooling can be used to maintain confidentiality of data from a multi-center study and can be particularly useful in research with large-scale distributed data.

  17. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    NASA Astrophysics Data System (ADS)

    Holá, Markéta; Kalvoda, Jiří; Nováková, Hana; Škoda, Radek; Kanický, Viktor

    2011-01-01

    LA-ICP-MS and solution-based ICP-MS, in combination with electron microprobe analysis, are presented as a method for determining the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales. The first was a line scan through the whole fish scale perpendicular to the growth rings: an ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer, while suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. The second, depth profiling using spot analysis, was tested in fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at the selected position on the sample. The combination of all the mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution-based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement both with EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests very good potential for this method.

  18. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

    Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the strengths and difficulties questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.

  19. A Petri Net model for distributed energy system

    NASA Astrophysics Data System (ADS)

    Konopko, Joanna

    2015-12-01

    Electrical networks need to evolve to become more intelligent, more flexible, and less costly. The smart grid, the next generation of the power grid, uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization, and energy efficiency. Proper modeling and analysis are needed to build an extensive distributed energy system and intelligent electricity infrastructure. In this paper, a complete model of the smart grid is proposed using Generalized Stochastic Petri Nets (GSPN), and simulation of the created model is explored. The simulation allows analysis of how closely the behavior of the model matches that of a real smart grid.
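
    To illustrate the formalism, here is a minimal stochastic Petri net simulator with exponentially timed transitions racing Gillespie-style; this toy two-place net and its rates are invented and stand in for, rather than reproduce, the paper's grid model:

        import numpy as np

        gen = np.random.default_rng(2)

        # Toy net: place 0 = energy queued at a distributed source,
        #          place 1 = energy delivered into the network.
        marking = np.array([5, 0])

        # Each transition: (tokens consumed, tokens produced, exponential rate).
        transitions = [
            (np.array([1, 0]), np.array([0, 1]), 2.0),  # deliver energy
            (np.array([0, 1]), np.array([1, 0]), 0.5),  # demand returns it to queue
        ]

        t = 0.0
        while t < 10.0:
            enabled = [tr for tr in transitions if np.all(marking >= tr[0])]
            if not enabled:
                break
            # Race of exponential clocks: the soonest enabled transition fires.
            waits = [gen.exponential(1.0 / tr[2]) for tr in enabled]
            i = int(np.argmin(waits))
            t += waits[i]
            marking = marking - enabled[i][0] + enabled[i][1]

        print("final marking:", marking, "at t =", round(t, 2))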

  20. Phylogenetic analysis reveals a scattered distribution of autumn colours

    PubMed Central

    Archetti, Marco

    2009-01-01

    Background and Aims Leaf colour in autumn is rarely considered informative for taxonomy, but there is now growing interest in the evolution of autumn colours and different hypotheses are debated. Research efforts are hindered by the lack of basic information: the phylogenetic distribution of autumn colours. It is not known when and how autumn colours evolved. Methods Data are reported on the autumn colours of 2368 tree species belonging to 400 genera of the temperate regions of the world, and an analysis is made of their phylogenetic relationships in order to reconstruct the evolutionary origin of red and yellow in autumn leaves. Key Results Red autumn colours are present in at least 290 species (70 genera), and evolved independently at least 25 times. Yellow is present independently from red in at least 378 species (97 genera) and evolved at least 28 times. Conclusions The phylogenetic reconstruction suggests that autumn colours have been acquired and lost many times during evolution. This scattered distribution could be explained by hypotheses involving some kind of coevolutionary interaction or by hypotheses that rely on the need for photoprotection. PMID:19126636

  1. Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos

    2015-05-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
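
    One concrete tool from this family is the pair of Fréchet-Hoeffding bounds, which bracket a joint distribution function whatever the dependence between the variables; a small sketch with two illustrative marginals:

        import numpy as np
        from scipy import stats

        # Marginal CDFs evaluated at a query point (x, y); marginals are assumptions.
        x, y = 1.2, 0.8
        u = stats.norm.cdf(x, loc=1.0, scale=0.5)   # F_X(x)
        v = stats.expon.cdf(y, scale=1.0)           # F_Y(y)

        # Fréchet-Hoeffding: max(u+v-1, 0) <= F_XY(x, y) <= min(u, v)
        # for every possible dependence structure between X and Y.
        lower = max(u + v - 1.0, 0.0)
        upper = min(u, v)
        independent = u * v                         # one particular dependence model
        print(f"bounds on F_XY({x},{y}): [{lower:.3f}, {upper:.3f}]; "
              f"independence gives {independent:.3f}")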

  2. How Did the Information Flow in the #AlphaGo Hashtag Network? A Social Network Analysis of the Large-Scale Information Network on Twitter.

    PubMed

    Kim, Jinyoung

    2017-12-01

    As it becomes common for Internet users to use hashtags when posting and searching information on social media, it is important to understand who builds a hashtag network and how information is circulated within the network. This article focused on unlocking the potential of the #AlphaGo hashtag network by addressing the following questions. First, the current study examined whether traditional opinion leadership (i.e., the influentials hypothesis) or grassroots participation by the public (i.e., the interpersonal hypothesis) drove dissemination of information in the hashtag network. Second, several unique patterns of information distribution by key users were identified. Finally, the association between attributes of key users who exerted great influence on information distribution (i.e., the number of followers and follows) and their central status in the network was tested. To answer the proffered research questions, a social network analysis was conducted using a large-scale hashtag network data set from Twitter (n = 21,870). The results showed that the leading actors in the network were actively receiving information from their followers rather than serving as intermediaries between the original information sources and the public. Moreover, the leading actors played several roles (i.e., conversation starters, influencers, and active engagers) in the network. Furthermore, the number of their follows and followers was significantly associated with their central status in the hashtag network. Based on the results, the current research explained how information was exchanged in the hashtag network by proposing the reciprocal model of information flow.
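
    A toy version of the centrality analysis described above (the graph below is invented; the real network had 21,870 users): in-degree captures how much information an actor receives, and betweenness captures how much it brokers between otherwise distant users.

        import networkx as nx

        # Directed edges: (source, target) = "source passed information to target".
        G = nx.DiGraph([
            ("a", "hub"), ("b", "hub"), ("c", "hub"), ("hub", "d"),
            ("d", "e"), ("e", "a"), ("c", "d"), ("b", "e"),
        ])

        # In-degree: how much information a user receives from others.
        print(dict(G.in_degree()))

        # Betweenness: how often a user lies on shortest information paths.
        print(nx.betweenness_centrality(G))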

  3. Load flow and state estimation algorithms for three-phase unbalanced power distribution systems

    NASA Astrophysics Data System (ADS)

    Madvesh, Chiranjeevi

    Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool which helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real-time. An efficient and computationally effective state estimation algorithm adapting the weighted-least-squares (WLS) method has been developed in this research. Both the developed algorithms are tested on different IEEE test-feeders and the results obtained are justified.
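
    The WLS estimator at the core of such state estimation minimizes (z - Hx)'W(z - Hx), giving x_hat = (H'WH)^-1 H'Wz in the linearized case; the small measurement model below is invented for illustration:

        import numpy as np

        # Linear(ized) measurement model z = H x + noise for a small test system.
        H = np.array([[1.0,  0.0,  0.0],   # direct measurement of state 1
                      [1.0, -1.0,  0.0],   # flow-like difference measurement
                      [0.0,  1.0, -1.0],
                      [0.0,  0.0,  1.0]])
        x_true = np.array([1.02, 0.98, 1.00])

        gen = np.random.default_rng(4)
        sigma = np.array([0.01, 0.02, 0.02, 0.01])   # per-meter noise std
        z = H @ x_true + gen.normal(0.0, sigma)

        # Weighted least squares: weight each meter by its inverse variance.
        W = np.diag(1.0 / sigma ** 2)
        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        print(np.round(x_hat, 4))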

  4. Designing smart analytical data services for a personal health framework.

    PubMed

    Koumakis, Lefteris; Kondylakis, Haridimos; Chatzimina, Maria; Iatraki, Galatia; Argyropaidas, Panagiotis; Kazantzaki, Eleni; Tsiknakis, Manolis; Kiefer, Stephan; Marias, Kostas

    2016-01-01

    Information in the healthcare domain, and in particular personal health record information, is heterogeneous by nature. Clinical, lifestyle, environmental data and personal preferences are stored and managed within such platforms. As a result, significant information from such diverse data is difficult to deliver, especially to non-IT users like patients, physicians or managers. Another issue related to management and analysis is the data volume, which keeps increasing and makes efficient data visualization and analysis methods mandatory. The objective of this work is to present the architectural design for seamless integration and intelligent analysis of distributed and heterogeneous clinical information in the PHR context, as a result of a requirements elicitation process in the iManageCancer project. This systemic approach aims to assist health-care professionals to orient themselves in the dispersed information space and enhance their decision-making capabilities, and to encourage patients to take an active role by managing their health information and interacting with health-care professionals.

  5. Mentoring needs of distributed medical education faculty at a Canadian medical school: a mixed-methods descriptive study.

    PubMed

    Krishnan, Rohin J; Uruthiramoorthy, Lavanya; Jawaid, Noor; Steele, Margaret; Jones, Douglas L

    2018-01-01

    The Schulich School of Medicine & Dentistry in London, Ontario, has a mentorship program for all full-time faculty. The school would like to expand its outreach to physician faculty located in distributed medical education sites. The purpose of this study was to determine what, if any, mentorship distributed physician faculty currently have, to gauge their interest in expanding the mentorship program to distributed physician faculty and to determine their vision of the most appropriate design of a mentorship program that would address their needs. We conducted a mixed-methods study. The quantitative phase consisted of surveys sent to all distributed faculty members that elicited information on basic demographic characteristics and mentorship experiences/needs. The qualitative phase consisted of 4 focus groups of distributed faculty administered in 2 large and 2 small centres in both regions of the school's distributed education network: Sarnia, Leamington, Stratford and Hanover. Interviews were 90 minutes long and involved standardized semistructured questions. Of the 678 surveys sent, 210 (31.0%) were returned. Most respondents (136 [64.8%]) were men, and almost half (96 [45.7%]) were family physicians. Most respondents (197 [93.8%]) were not formal mentors to Schulich faculty, and 178 (84.8%) were not currently being formally mentored. Qualitative analysis suggested that many respondents were involved in informal mentoring. In addition, about half of the respondents (96 [45.7%]) wished to be formally mentored in the future, but they may be inhibited owing to time constraints and geographical isolation. Consistently, respondents wished to have mentoring by a colleague in a similar practice, with the most practical being one-on-one mentoring. Our analysis suggests that the school's current formal mentoring program may not be applicable and will require modification to address the needs of distributed faculty.

  6. The use of combined single photon emission computed tomography and X-ray computed tomography to assess the fate of inhaled aerosol.

    PubMed

    Fleming, John; Conway, Joy; Majoral, Caroline; Tossici-Bolt, Livia; Katz, Ira; Caillibotte, Georges; Perchet, Diane; Pichelin, Marine; Muellinger, Bernhard; Martonen, Ted; Kroneberg, Philipp; Apiou-Sbirlea, Gabriela

    2011-02-01

    Gamma camera imaging is widely used to assess pulmonary aerosol deposition. Conventional planar imaging provides limited information on its regional distribution. In this study, single photon emission computed tomography (SPECT) was used to describe deposition in three dimensions (3D) and combined with X-ray computed tomography (CT) to relate this to lung anatomy. Its performance was compared to planar imaging. Ten SPECT/CT studies were performed on five healthy subjects following carefully controlled inhalation of radioaerosol from a nebulizer, using a variety of inhalation regimes. The 3D spatial distribution was assessed using a central-to-peripheral ratio (C/P) normalized to lung volume and, for the right lung, was compared to planar C/P analysis. The deposition by airway generation was calculated for each lung, and the conducting airways deposition fraction was compared to 24-h clearance. The 3D normalized C/P ratio correlated more closely with 24-h clearance than the 2D ratio for the right lung [coefficient of variation (COV) 9% compared to 15%; p < 0.05]. Analysis of regional distribution was possible for both lungs in 3D but not in 2D, due to overlap of the stomach on the left lung. The mean conducting airways deposition fraction from SPECT for both lungs was not significantly different from 24-h clearance (COV 18%). Both spatial and generational measures of central deposition were significantly higher for the left than for the right lung. Combined SPECT/CT enabled improved analysis of aerosol deposition from gamma camera imaging compared to planar imaging. 3D radionuclide imaging combined with anatomical information from CT and computer analysis is a useful approach for applications requiring regional information on deposition.

  7. Using ultrasound tomography to identify the distributions of density throughout the breast

    NASA Astrophysics Data System (ADS)

    Sak, Mark; Duric, Neb; Littrup, Peter; Sherman, Mark E.; Gierach, Gretchen L.

    2016-04-01

    Women with high breast density are at increased risk of developing breast cancer. Breast density has usually been defined using mammography as the ratio of fibroglandular tissue to total breast area. Ultrasound tomography (UST) is an emerging modality that can also be used to measure breast density. UST creates tomographic sound speed images of the patient's breast which is useful as sound speed is directly proportional to tissue density. Furthermore, the volumetric and quantitative information contained in the sound speed images can be used to describe the distribution of breast density. The work presented here measures the UST sound speed density distributions of 165 women with negative screening mammography. Frequency distributions of the sound speed voxel information were examined for each patient. In a preliminary analysis, the UST sound speed distributions were averaged across patients and grouped by various patient and density-related factors (e.g., age, body mass index, menopausal status, average mammographic breast density). It was found that differences in the distribution of density could be easily visualized for different patient groupings. Furthermore, findings suggest that the shape of the distributions may be used to identify participants with varying amounts of dense and non-dense tissue.
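
    A minimal sketch of the kind of distribution analysis described: build a sound-speed histogram per patient, then average the histograms within patient groupings. The bin range, the group labels and the data are synthetic assumptions for illustration only.

```python
# Per-patient sound-speed histograms averaged within (invented) patient groups.
import numpy as np

rng = np.random.default_rng(1)
bins = np.linspace(1.35, 1.60, 51)       # sound speed, mm/us (assumed range)

def speed_histogram(voxels):
    h, _ = np.histogram(voxels, bins=bins, density=True)
    return h

# 10 fake patients; the "denser" group is shifted toward higher sound speed.
patients = {f"p{i}": rng.normal(1.44 + 0.02 * (i >= 5), 0.02, 20000)
            for i in range(10)}
groups = {"low_density": [f"p{i}" for i in range(5)],
          "high_density": [f"p{i}" for i in range(5, 10)]}

for name, ids in groups.items():
    mean_hist = np.mean([speed_histogram(patients[p]) for p in ids], axis=0)
    mode = bins[np.argmax(mean_hist)]
    print(f"{name}: modal sound speed = {mode:.3f} mm/us")
```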

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakafuji, Dora; Gouveia, Lauren

    This project supports development of the next-generation, integrated energy management system (EMS) infrastructure, able to incorporate advanced visualization of behind-the-meter distributed resource information and probabilistic renewable energy generation forecasts to inform real-time operational decisions. The project involves end-users and active feedback from a Utility Advisory Team (UAT) to help inform how information can be used to enhance operational functions (e.g. unit commitment, load forecasting, Automatic Generation Control (AGC) reserve monitoring, ramp alerts) within two major EMS platforms. Objectives include: engaging utility operations personnel to develop user input on displays, set expectations, test and review; developing ease-of-use and timeliness metrics for measuring enhancements; developing prototype integrated capabilities within two operational EMS environments; demonstrating an integrated decision analysis platform with real-time wind and solar forecasting information and timely distributed resource information; seamlessly integrating new 4-dimensional information into operations without increasing workload and complexity; developing sufficient analytics to inform and confidently transform and adopt new operating practices and procedures; disseminating project lessons learned through industry-sponsored workshops and conferences; and building on collaborative utility-vendor partnerships and industry capabilities.

  9. Spatial analysis and characteristics of pig farming in Thailand.

    PubMed

    Thanapongtharm, Weerapong; Linard, Catherine; Chinson, Pornpiroon; Kasemsuwan, Suwicha; Visser, Marjolein; Gaughan, Andrea E; Epprech, Michael; Robinson, Timothy P; Gilbert, Marius

    2016-10-06

    In Thailand, pig production intensified significantly during the last decade, with many economic, epidemiological and environmental implications. Strategies toward more sustainable future developments are currently being investigated, and these could be informed by a detailed assessment of the main trends in the pig sector and of how different production systems are geographically distributed. This study had two main objectives. First, we aimed to describe the main trends and geographic patterns of pig production systems in Thailand in terms of pig type (native, breeding, and fattening pigs), farm scale (smallholder and large-scale farming systems) and type of farming system (farrow-to-finish, nursery, and finishing systems), based on a very detailed 2010 census. Second, we aimed to study the statistical spatial association between the distributions of these different types of pig farming and a set of spatial variables describing access to feed and markets. Over the last decades, the pig population gradually increased, with a continuously increasing number of pigs per holder, suggesting a continuing intensification of the sector. The different pig-production systems showed strongly contrasting geographical distributions. The spatial distribution of large-scale pig farms corresponds with that of commercial pig breeds, and spatial analysis conducted using Random Forest distribution models indicated that these were concentrated in lowland urban or peri-urban areas, close to means of transportation, facilitating supply to major markets such as provincial capitals and the Bangkok Metropolitan region. Conversely, smallholders were distributed throughout the country, with higher densities in highland, remote, and rural areas, where they supply local rural markets. A limitation of the study was that pig farming systems were defined from the number of animals per farm, resulting in possible misclassification, but this should have a limited impact on the main patterns revealed by the analysis. The contrasting distributions of the different pig production systems present opportunities for future regionalization of pig production. More specifically, the detailed geographical analysis of the different production systems will be used to spatially inform planning decisions for pig farming, accounting for the specific health, environmental and economic implications of the different pig production systems.
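
    The spatial analysis above relies on Random Forest distribution models; the sketch below shows the general shape of such a model on invented covariates (travel time, elevation, population density). It is not the study's actual model, covariate set or census data.

```python
# Hedged sketch of a Random Forest "distribution model": predict presence of
# large-scale farms from spatial covariates. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 2000
# Illustrative covariates: travel time to city (h), elevation (m), pop. density.
X = np.column_stack([rng.exponential(2.0, n),
                     rng.uniform(0, 1500, n),
                     rng.lognormal(4, 1, n)])
# Fake rule: large farms favour accessible, low-elevation, peri-urban areas.
score = -1.0 * X[:, 0] - 0.002 * X[:, 1] + 0.3 * np.log(X[:, 2])
y = (score + rng.normal(0, 1, n) > np.median(score)).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV AUC:", cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())
rf.fit(X, y)
print("covariate importances:", rf.feature_importances_.round(2))
```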

  10. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid and Building-to-Grid Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Emma M.; Hendrix, Val; Chertkov, Michael

    This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure or the lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data, make predictions, and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors, such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals, such as total carbon reduction or other economic benefits to customers. While some basic analysis of these data streams can provide a wealth of information, computational and human boundaries on performing the analysis are becoming significant as data volumes and multi-objective concerns grow. Efficient application of analysis and machine learning techniques to this loop is therefore an active consideration.

  11. A study on building data warehouse of hospital information system.

    PubMed

    Li, Ping; Wu, Tao; Chen, Mu; Zhou, Bin; Xu, Wei-guo

    2011-08-01

    Existing hospital information systems with simple statistical functions cannot meet current management needs. Hospital resources are typically held as the private property of individual hospitals, which complicates efforts such as the regional coordination of medical services. In this study, to integrate and make full use of medical data effectively, we propose a data warehouse modeling method for the hospital information system. The method can also be employed for a distributed-hospital medical service system. To ensure that hospital information supports the diverse needs of health care, the framework of the hospital information system has three layers: datacenter layer, system-function layer, and user-interface layer. This paper discusses the role of a data warehouse management system in handling hospital information, from the establishment of the data theme to the design of a data model to the establishment of the data warehouse. Online analytical processing tools support user-friendly multidimensional analysis from a number of different angles to extract the required data and information. Use of the data warehouse improves online analytical processing and mitigates deficiencies in the decision support system. The hospital information system based on a data warehouse effectively employs statistical analysis and data mining technology to handle massive quantities of historical data, and summarizes clinical and hospital information for decision making. This paper proposes the use of a data warehouse for a hospital information system, specifically a data warehouse organized around hospital information themes, covering granularity determination, modeling, and so on. The processing of patient information is given as an example that demonstrates the usefulness of this method for hospital information management. Data warehousing is an evolving technology, and further research is required on the decision support information that can be extracted through data mining and decision-making technologies.

  12. Estimating and circumventing the effects of perturbing and swapping inventory plot locations

    Treesearch

    Ronald E. McRoberts; Geoffrey R. Holden; Mark D. Nelson; Greg C. Liknes; Warren K. Moser; Andrew J. Lister; Susan L. King; Elizabeth B. LaPoint; John W. Coulston; W. Brad Smith; Gregory A. Reams

    2005-01-01

    The Forest Inventory and Analysis (FIA) program of the USDA Forest Service reports data and information about the Nation's forest resources. Increasingly, users request that FIA data and information be reported and distributed in a geospatial context, and they request access to exact plot locations for their own analyses. However, the FIA program is constrained by...

  13. Informal Statistics Help Desk

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, R. J.; Feiveson, A. H.

    2015-01-01

    Back by popular demand, the JSC Biostatistics Lab is offering an opportunity for informal conversation about challenges you may have encountered with issues of experimental design, analysis, data visualization or related topics. Get answers to common questions about sample size, repeated measures, violation of distributional assumptions, missing data, multiple testing, time-to-event data, when to trust the results of your analyses (reproducibility issues) and more.

  14. Comparison of minute distribution frequency for anesthesia start and end times from an anesthesia information management system and paper records.

    PubMed

    Phelps, Michael; Latif, Asad; Thomsen, Robert; Slodzinski, Martin; Raghavan, Rahul; Paul, Sharon Leigh; Stonemetz, Jerry

    2017-08-01

    Use of an anesthesia information management system (AIMS) has been reported to improve accuracy of recorded information. We tested the hypothesis that analyzing the distribution of times charted on paper and computerized records could reveal possible rounding errors, and that this effect could be modulated by differences in the user interface for documenting certain event times with an AIMS. We compared the frequency distribution of start and end times for anesthesia cases completed with paper records and an AIMS. Paper anesthesia records had significantly more times ending with "0" and "5" compared to those from the AIMS (p < 0.001). For case start times, AIMS still exhibited end-digit preference, with times whose last digits had significantly higher frequencies of "0" and "5" than other integers. This effect, however, was attenuated compared to that for paper anesthesia records. For case end times, the distribution of minutes recorded with AIMS was almost evenly distributed, unlike those from paper records that still showed significant end-digit preference. The accuracy of anesthesia case start times and case end times, as inferred by statistical analysis of the distribution of the times, is enhanced with the use of an AIMS. Furthermore, the differences in AIMS user interface for documenting case start and case end times likely affects the degree of end-digit preference, and likely accuracy, of those times.
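
    The core of the analysis above, tallying the final digit of recorded minutes and testing the tally against a uniform distribution, is easy to sketch. The times below are simulated, and the chi-square goodness-of-fit test stands in for whatever end-digit test the study actually used.

```python
# End-digit preference check on synthetic case times: paper-like records are
# simulated by snapping a fraction of minutes to the nearest multiple of 5.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(3)
true_minutes = rng.integers(0, 60, 1000)
rounded = np.where(rng.random(1000) < 0.6,
                   (true_minutes / 5).round().astype(int) * 5 % 60,
                   true_minutes)

for label, minutes in [("AIMS-like", true_minutes), ("paper-like", rounded)]:
    observed = np.bincount(minutes % 10, minlength=10)
    stat, p = chisquare(observed)   # H0: all last digits equally likely
    print(f"{label}: chi2 = {stat:.1f}, p = {p:.3g}")
```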

  15. Discriminative components of data.

    PubMed

    Peltonen, Jaakko; Kaski, Samuel

    2005-01-01

    A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.
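
    For orientation, here is the classical LDA baseline that the paper generalizes: projecting data onto components that separate the classes. The probabilistic, mutual-information-maximizing variant itself is not implemented here; this only shows the classical starting point on a standard dataset.

```python
# Classical linear discriminant analysis: class-informative components.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)        # projection onto discriminative components
print("explained variance ratio:", lda.explained_variance_ratio_.round(3))
```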

  16. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaite, José, E-mail: jose.gaite@upm.es

    2010-03-01

    We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large-scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as corresponds to a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased over a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.
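
    The customary counts-in-cells approach that the authors improve upon can be sketched as follows: estimate the partition function Z(q, r) = sum_i p_i^q over cells of size r, read off tau(q) from its log-log slope, and form generalized dimensions D_q = tau(q)/(q-1). The uniform synthetic point set is only there to verify the estimator (expect D_q close to 3); it is not a cosmological simulation.

```python
# Counts-in-cells estimate of generalized (multifractal) dimensions D_q.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.random((200000, 3))             # uniform points: expect D_q ~ 3

def tau(q, pts, sizes=(4, 8, 16, 32)):
    logs_r, logs_Z = [], []
    for m in sizes:                       # m cells per axis, cell size r = 1/m
        counts, _ = np.histogramdd(pts, bins=(m, m, m))
        p = counts[counts > 0] / len(pts)
        logs_r.append(np.log(1.0 / m))
        logs_Z.append(np.log(np.sum(p ** q)))
    return np.polyfit(logs_r, logs_Z, 1)[0]   # slope of log Z vs. log r

for q in (0.5, 2.0, 3.0):                 # q = 1 is singular and skipped
    print(f"q = {q}: D_q = {tau(q, pts) / (q - 1):.2f}")
```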

  17. Bayesian analysis of multimodal data and brain imaging

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; Backonja, Miroslav; Wakai, Ronald T.; Rutecki, Paul; Haughton, Victor

    2000-06-01

    It is often the case that information about a process can be obtained using a variety of methods. Each method is employed because of specific advantages over the competing alternatives. An example in medical neuro-imaging is the choice between fMRI and MEG modes, where fMRI can provide high spatial resolution in comparison to the superior temporal resolution of MEG. The combination of data from varying modes provides the opportunity to infer results that may not be possible by means of any one mode alone. We discuss a Bayesian and learning theoretic framework for enhanced feature extraction that is particularly suited to multi-modal investigations of massive data sets from multiple experiments. In the following Bayesian approach, acquired knowledge (information) regarding various aspects of the process is directly incorporated into the formulation. This information can come from a variety of sources; in our case, it represents statistical information obtained from other modes of data collection. The information is used to train a learning machine to estimate a probability distribution, which is used in turn by a second machine as a prior, in order to produce a more refined estimation of the distribution of events. The computational demand of the algorithm is handled by a distributed parallel implementation on a cluster of workstations that can be scaled to address real-time needs if required. We provide a simulation of these methods on a set of synthetically generated MEG and EEG data, and show how spatial and temporal resolutions improve by using prior distributions. Applied to fMRI signals, the method permits one to construct the probability distribution of the non-linear hemodynamics of the human brain (real data). These computational results are in agreement with biologically based measurements of other labs, as reported to us by researchers from the UK. We also provide preliminary analysis involving multi-electrode cortical recordings that accompany behavioral data in pain experiments on freely moving mice subjected to moderate heat delivered by an electric bulb. Summary of new or breakthrough ideas: (1) a new method to estimate the probability distribution for measurement of the nonlinear hemodynamics of the brain from multi-modal neuronal data, which, to our knowledge, is the first time such an idea has been tried; (2) a breakthrough in improving the time resolution of fMRI signals using (1) above.

  18. Amino acid distribution in meteorites: diagenesis, extraction methods, and standard metrics in the search for extraterrestrial biosignatures.

    PubMed

    McDonald, Gene D; Storrie-Lombardi, Michael C

    2006-02-01

    The relative abundance of the protein amino acids has been previously investigated as a potential marker for biogenicity in meteoritic samples. However, these investigations were executed without a quantitative metric to evaluate distribution variations, and they did not account for the possibility of interdisciplinary systematic error arising from inter-laboratory differences in extraction and detection techniques. Principal component analysis (PCA), hierarchical cluster analysis (HCA), and stochastic probabilistic artificial neural networks (ANNs) were used to compare the distributions for nine protein amino acids previously reported for the Murchison carbonaceous chondrite, Mars meteorites (ALH84001, Nakhla, and EETA79001), prebiotic synthesis experiments, and terrestrial biota and sediments. These techniques allowed us (1) to identify a shift in terrestrial amino acid distributions secondary to diagenesis; (2) to detect differences in terrestrial distributions that may be systematic differences between extraction and analysis techniques in biological and geological laboratories; and (3) to determine that distributions in meteoritic samples appear more similar to prebiotic chemistry samples than they do to the terrestrial unaltered or diagenetic samples. Both diagenesis and putative interdisciplinary differences in analysis complicate interpretation of meteoritic amino acid distributions. We propose that the analysis of future samples from such diverse sources as meteoritic influx, sample return missions, and in situ exploration of Mars would be less ambiguous with adoption of standardized assay techniques, systematic inclusion of assay standards, and the use of a quantitative, probabilistic metric. We present here one such metric determined by sequential feature extraction and normalization (PCA), information-driven automated exploration of classification possibilities (HCA), and prediction of classification accuracy (ANNs).
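
    A compressed sketch of the first two stages of the proposed metric, PCA feature extraction followed by hierarchical clustering, on invented nine-amino-acid relative-abundance profiles; the ANN classification stage and the real meteoritic data are omitted.

```python
# PCA + hierarchical cluster analysis on synthetic amino acid profiles.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
biotic = rng.dirichlet(np.arange(1, 10), 20)    # skewed abundance profiles
prebiotic = rng.dirichlet(np.ones(9), 20)       # flatter profiles
X = np.vstack([biotic, prebiotic])

# Normalize, extract two principal components, then cluster (Ward linkage).
scores = PCA(n_components=2).fit_transform(np.log(X + 1e-6))
clusters = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print("cluster assignments:", clusters)         # first 20 vs. last 20 samples
```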

  19. Power in Bayesian Mediation Analysis for Small Sample Research

    PubMed Central

    Miočević, Milica; MacKinnon, David P.; Levy, Roy

    2018-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors have power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, when N < 60 and the effects were large, and when N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results. PMID:29662296

  20. Power in Bayesian Mediation Analysis for Small Sample Research.

    PubMed

    Miočević, Milica; MacKinnon, David P; Levy, Roy

    2017-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors have power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, when N < 60 and the effects were large, and when N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results.
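
    One of the comparison methods named above, the percentile bootstrap confidence interval for the indirect effect a*b in a single-mediator model, can be sketched in a few lines. The data are simulated under assumed path coefficients; the Bayesian credibility intervals studied in the paper are not reproduced here.

```python
# Percentile bootstrap CI for the indirect (mediated) effect a*b.
import numpy as np

rng = np.random.default_rng(6)
n = 100
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)          # a path = 0.4 (assumed)
y = 0.4 * m + rng.normal(size=n)          # b path = 0.4 (assumed)

def indirect(idx):
    a = np.polyfit(x[idx], m[idx], 1)[0]
    # b: partial effect of m on y, controlling for x (y ~ m + x + 1).
    A = np.column_stack([m[idx], x[idx], np.ones(len(idx))])
    b = np.linalg.lstsq(A, y[idx], rcond=None)[0][0]
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # typically excludes 0 here
```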

  1. Geographic analysis of shigellosis in Vietnam.

    PubMed

    Kim, Deok Ryun; Ali, Mohammad; Thiem, Vu Dinh; Park, Jin-Kyung; von Seidlein, Lorenz; Clemens, John

    2008-12-01

    Geographic and ecological analysis may provide investigators with useful ecological information for the control of shigellosis. This paper describes the spatial distribution of individual Shigella species and the ecological covariates for shigellosis in Nha Trang, Vietnam. Data on shigellosis in neighborhoods were used to identify ecological covariates. A Bayesian hierarchical model was used to obtain the joint posterior distribution of model parameters and to construct smoothed risk maps for shigellosis. Neighborhoods with a high proportion of worshippers of traditional religion, close proximity to the hospital, or close proximity to the river had increased risk for shigellosis. The ecological covariates associated with Shigella flexneri differed from the covariates for Shigella sonnei. In contrast, the spatial distributions of the two species were similar. The disease maps can help identify high-risk areas of shigellosis that can be targeted for interventions. This approach may be useful for the selection of populations and the analysis of vaccine trials.

  2. GIS-based spatial statistical analysis of risk areas for liver flukes in Surin Province of Thailand.

    PubMed

    Rujirakul, Ratana; Ueng-arporn, Naporn; Kaewpitoon, Soraya; Loyd, Ryan J; Kaewthani, Sarochinee; Kaewpitoon, Natthawut

    2015-01-01

    It is urgently necessary to be aware of the distribution and risk areas of the liver fluke, Opisthorchis viverrini, for proper allocation of prevention and control measures. This study aimed to investigate the human behavior and environmental factors influencing its distribution in Surin Province of Thailand, and to build a model using stepwise multiple regression analysis with a geographic information system (GIS) on environment and climate data. Human attitudes (<50%; X111), population density (148-169 pop/km2; X73), and land use as wetland (X64) were correlated with the distribution of liver fluke disease at the 0.000, 0.034, and 0.006 levels, respectively. The fitted multiple regression equation, OV = -0.599 + 0.005 X73 (population density, 148-169 pop/km2) + 0.040 X111 (human attitude, <50%) + 0.022 X64 (wetland land use), was used to predict the distribution of liver fluke, where OV is the number of liver fluke infection patients (R-squared = 0.878; adjusted R-squared = 0.849). By GIS analysis, we found Si Narong, Sangkha, Phanom Dong Rak, Mueang Surin, Non Narai, Samrong Thap, Chumphon Buri, and Rattanaburi to have the highest distributions in Surin Province. In conclusion, the combination of GIS and statistical analysis can help simulate the spatial distribution and risk areas of liver fluke, and thus may be an important tool for future planning of prevention and control measures.
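
    A sketch of fitting an equation of the reported form OV = b0 + b1*X73 + b2*X111 + b3*X64 by ordinary least squares; the covariate values below are synthetic stand-ins for the study's GIS-derived district data.

```python
# OLS fit of a three-covariate linear model of the reported form.
import numpy as np

rng = np.random.default_rng(7)
n = 50
X73 = rng.uniform(100, 200, n)     # population density (pop/km^2), assumed range
X111 = rng.uniform(0, 100, n)      # % with low awareness/attitude, assumed
X64 = rng.uniform(0, 40, n)        # % wetland land use, assumed
OV = -0.6 + 0.005 * X73 + 0.04 * X111 + 0.02 * X64 + rng.normal(0, 0.3, n)

A = np.column_stack([np.ones(n), X73, X111, X64])
coef, res, *_ = np.linalg.lstsq(A, OV, rcond=None)
r2 = 1 - res[0] / ((OV - OV.mean()) ** 2).sum()
print("b0..b3 =", coef.round(3), " R^2 =", round(r2, 3))
```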

  3. Bayesian road safety analysis: incorporation of past evidence and effect of hyper-prior choice.

    PubMed

    Miranda-Moreno, Luis F; Heydari, Shahram; Lord, Dominique; Fu, Liping

    2013-09-01

    This paper aims to address two related issues when applying hierarchical Bayesian models for road safety analysis, namely: (a) how to incorporate available information from previous studies or past experiences in the (hyper) prior distributions for model parameters and (b) what are the potential benefits of incorporating past evidence on the results of a road safety analysis when working with scarce accident data (i.e., when calibrating models with crash datasets characterized by a very low average number of accidents and a small number of sites). A simulation framework was developed to evaluate the performance of alternative hyper-priors, including informative and non-informative Gamma, Pareto, and Uniform distributions. Based on this simulation framework, different data scenarios (i.e., number of observations and years of data) were defined and tested using crash data collected at 3-legged rural intersections in California and crash data collected for rural 4-lane highway segments in Texas. This study shows how the accuracy of model parameter estimates (inverse dispersion parameter) is considerably improved when incorporating past evidence, in particular when working with a small number of observations and crash data with a low mean. The results also illustrate that when the sample size (more than 100 sites) and the number of years of crash data are relatively large, neither the incorporation of past experience nor the choice of the hyper-prior distribution may affect the final results of a traffic safety analysis. As a potential solution to the problem of low sample mean and small sample size, this paper suggests some practical guidance on how to incorporate past evidence into informative hyper-priors. By combining evidence from past studies and the data available, the model parameter estimates can be significantly improved. The effect of prior choice seems to be less important for hotspot identification. The results show the benefits of incorporating prior information when working with limited crash data in road safety studies. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.

  4. Comparative cost analysis of insecticide-treated net delivery strategies: sales supported by social marketing and free distribution through antenatal care.

    PubMed

    De Allegri, Manuela; Marschall, Paul; Flessa, Steffen; Tiendrebéogo, Justin; Kouyaté, Bocar; Jahn, Albrecht; Müller, Olaf

    2010-01-01

    Insecticide-treated nets (ITNs) are effective in substantially reducing malaria transmission. Still, ITN coverage in sub-Saharan Africa (SSA) remains extremely low. Policy makers are concerned with identifying the most suitable delivery mechanism to achieve rapid yet sustainable increases in ITN coverage. Little is known, however, about the comparative costs of alternative ITN distribution strategies. This paper aimed to fill this gap in knowledge by developing such a comparative cost analysis, looking at the cost per ITN distributed for two alternative interventions: subsidized sales supported by social marketing and free distribution to pregnant women through antenatal care (ANC). The study was conducted in rural Burkina Faso, where the two interventions were carried out alongside one another in 2006/07. Cost information was collected prospectively to derive both a financial analysis adopting a provider's perspective and an economic analysis adopting a societal perspective. The average financial cost per ITN distributed was US$8.08 and US$7.21 for sales supported by social marketing and free distribution through ANC, respectively. The average economic cost per ITN distributed was US$4.81 for both interventions. Contrary to common belief, costs did not differ substantially between the two interventions. Owing to the district's ability to rely fully on existing resources, the financial costs associated with free ITN distribution through ANC were in fact even lower than those associated with the social marketing campaign. This represents an encouraging finding for SSA governments and points to the possibility of investing in programmes that favour free ITN distribution through existing health facilities. Given restricted budgets, however, free distribution programmes are unlikely to be feasible.

  5. Model-Driven Test Generation of Distributed Systems

    NASA Technical Reports Server (NTRS)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin

    2012-01-01

    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.

  6. Determination of optical parameters of atmospheric particulates from ground-based polarimeter measurements

    NASA Technical Reports Server (NTRS)

    Kuriyan, J. G.; Phillips, D. H.; Willson, R. C.

    1974-01-01

    This paper describes the theoretical analysis that is required to infer, from polarimeter measurements of skylight, the size distribution, refractive index and abundance of particulates in the atmosphere. To illustrate the viability of the method, some data obtained at UCLA are analyzed and the atmospheric parameters derived. The explicit demonstration of redundancy in the description of aerosol distributions suggests that radiation field measurements will not uniquely determine the modal radius of the size distribution. In spite of this nonuniqueness, information useful to heat budget calculations can be derived.

  7. Geostatistics and Geographic Information Systems to Study the Spatial Distribution of Grapholita molesta (Busck) (Lepidoptera: Tortricidae) in Peach Fields.

    PubMed

    Duarte, F; Calvo, M V; Borges, A; Scatoni, I B

    2015-08-01

    The oriental fruit moth, Grapholita molesta (Busck), is the most serious pest of peach, and several insecticide applications are required to reduce crop damage to acceptable levels. Geostatistics and Geographic Information Systems (GIS) are employed to measure the range of spatial correlation of G. molesta in order to define the optimum sampling distance for performing spatial analysis and to determine the current distribution of the pest in peach orchards of southern Uruguay. From 2007 to 2010, 135 pheromone traps per season were installed and georeferenced in peach orchards distributed over 50,000 ha. Male adult captures were recorded weekly from September to April. Structural analysis of the captures was performed, yielding 14 semivariograms for the accumulated captures analyzed by generation and growing season. Two sets of maps were constructed to describe the pest distribution. Nine significant models were obtained for the 14 evaluated periods. The estimated range of spatial correlation was from 908 to 6884 m. Three hot spots of high population level and some areas with comparatively low populations were constant over the 3-year period, while population size varied more across generations and years in other areas.
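
    The range of spatial correlation reported above comes from semivariogram fitting; a minimal empirical semivariogram is sketched below on synthetic trap coordinates and capture values, not the study's data.

```python
# Empirical semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs
# whose separation falls in lag bin h. It should rise and then plateau at
# the range of spatial correlation.
import numpy as np

rng = np.random.default_rng(8)
xy = rng.uniform(0, 10000, (150, 2))                    # trap locations, metres
z = np.sin(xy[:, 0] / 2000) + rng.normal(0, 0.3, 150)   # spatially structured values

def semivariogram(xy, z, lags):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)                   # unique pairs only
    d, sq = d[iu], sq[iu]
    return [sq[(d >= lo) & (d < hi)].mean() for lo, hi in zip(lags[:-1], lags[1:])]

lags = np.arange(0, 6000, 1000)
for (lo, hi), g in zip(zip(lags[:-1], lags[1:]), semivariogram(xy, z, lags)):
    print(f"{lo:>4}-{hi:<4} m: gamma = {g:.3f}")
```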

  8. Distribution of selected healthcare resources for influenza pandemic response in Cambodia

    PubMed Central

    2013-01-01

    Introduction: Human influenza infection poses a serious public health threat in Cambodia, a country at risk for the emergence and spread of novel influenza viruses with pandemic potential. Prior pandemics demonstrated the adverse impact of influenza on poor communities in developing countries. Investigation of healthcare resource distribution can inform decisions regarding resource mobilization and investment for pandemic mitigation. Methods: A health facility survey performed across Cambodia obtained data on availability of healthcare resources important for pandemic influenza response. Focusing on five key resources considered most necessary for treating severe influenza (inpatient beds, doctors, nurses, oseltamivir, and ventilators), resource distributions were analyzed at the Operational District (OD) and Province levels, refining data analysis from earlier studies. Resources were stratified by respondent type (hospital vs. District Health Office [DHO]). A summary index of distribution inequality was calculated using the Gini coefficient. Indices for local spatial autocorrelation were measured at the OD level using geographical information system (GIS) analysis. Finally, a potential link between socioeconomic status and resource distribution was explored by mapping resource densities against poverty rates. Results: Gini coefficient calculation revealed variable inequality in distribution of the five key resources at the Province and OD levels. A greater percentage of the population resides in areas of relative under-supply (28.5%) than over-supply (21.3%). Areas with more resources per capita showed significant clustering in central Cambodia while areas with fewer resources clustered in the northern and western provinces. Hospital-based inpatient beds, doctors, and nurses were most heavily concentrated in areas of the country with the lowest poverty rates; however, beds and nurses in Non-Hospital Medical Facilities (NHMF) showed increasing concentrations at higher levels of poverty. Conclusions: There is considerable heterogeneity in healthcare resource distribution across Cambodia. Distribution mapping at the local level can inform policy decisions on where to stockpile resources in advance of and for reallocation in the event of a pandemic. These findings will be useful in determining future health resource investment, both for pandemic preparedness and for general health system strengthening, and provide a foundation for future analyses of equity in health services provision for pandemic mitigation planning in Cambodia. PMID:24090286

  9. Distribution of selected healthcare resources for influenza pandemic response in Cambodia.

    PubMed

    Schwanke Khilji, Sara U; Rudge, James W; Drake, Tom; Chavez, Irwin; Borin, Khieu; Touch, Sok; Coker, Richard

    2013-10-04

    Human influenza infection poses a serious public health threat in Cambodia, a country at risk for the emergence and spread of novel influenza viruses with pandemic potential. Prior pandemics demonstrated the adverse impact of influenza on poor communities in developing countries. Investigation of healthcare resource distribution can inform decisions regarding resource mobilization and investment for pandemic mitigation. A health facility survey performed across Cambodia obtained data on availability of healthcare resources important for pandemic influenza response. Focusing on five key resources considered most necessary for treating severe influenza (inpatient beds, doctors, nurses, oseltamivir, and ventilators), resource distributions were analyzed at the Operational District (OD) and Province levels, refining data analysis from earlier studies. Resources were stratified by respondent type (hospital vs. District Health Office [DHO]). A summary index of distribution inequality was calculated using the Gini coefficient. Indices for local spatial autocorrelation were measured at the OD level using geographical information system (GIS) analysis. Finally, a potential link between socioeconomic status and resource distribution was explored by mapping resource densities against poverty rates. Gini coefficient calculation revealed variable inequality in distribution of the five key resources at the Province and OD levels. A greater percentage of the population resides in areas of relative under-supply (28.5%) than over-supply (21.3%). Areas with more resources per capita showed significant clustering in central Cambodia while areas with fewer resources clustered in the northern and western provinces. Hospital-based inpatient beds, doctors, and nurses were most heavily concentrated in areas of the country with the lowest poverty rates; however, beds and nurses in Non-Hospital Medical Facilities (NHMF) showed increasing concentrations at higher levels of poverty. There is considerable heterogeneity in healthcare resource distribution across Cambodia. Distribution mapping at the local level can inform policy decisions on where to stockpile resources in advance of and for reallocation in the event of a pandemic. These findings will be useful in determining future health resource investment, both for pandemic preparedness and for general health system strengthening, and provide a foundation for future analyses of equity in health services provision for pandemic mitigation planning in Cambodia.
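
    The summary inequality index used in both versions of this study, the Gini coefficient, is straightforward to compute; the district-level bed counts below are invented for illustration (0 means perfectly equal, values near 1 mean highly unequal).

```python
# Gini coefficient of a per-district resource distribution.
import numpy as np

def gini(x) -> float:
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    # Standard formula based on the cumulative share of the sorted values.
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

beds_per_district = np.array([120, 15, 40, 8, 300, 22, 60, 5])  # hypothetical
print(f"Gini = {gini(beds_per_district):.2f}")
```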

  10. Detailed Requirements Analysis for a Management Information System for the Department of Family Practice and Community Medicine at Silas B. Hays Army Community Hospital, Fort Ord, California

    DTIC Science & Technology

    1989-03-01

    Chapter II. Chapter III discusses the theory of information systems and the analysis and design of such systems. The last section of Chapter II introduces... improved personnel morale and job satisfaction. Doctors and hospital administrators are trying to recover from the medical computing lag which has... (discussed below). The primary source of equipment authorizations is the Table of Distribution and Allowances (TDA), which shows the equipment authorized to be...

  11. Space and Earth Science Data Compression Workshop

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Editor)

    1991-01-01

    The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. The focus was on scientists' data requirements, as well as constraints imposed by the data collection, transmission, distribution, and archival systems. The workshop consisted of several invited papers; two described information systems for space and Earth science data, four depicted analysis scenarios for extracting information of scientific interest from data collected by Earth orbiting and deep space platforms, and a final one was a general tutorial on image data compression.

  12. Analysis of Distribution of Vector-Borne Diseases Using Geographic Information Systems.

    PubMed

    Nihei, Naoko

    2017-01-01

    The distribution of vector-borne diseases is changing on a global scale owing to issues involving natural environments, socioeconomic conditions and border disputes, among others. Geographic information systems (GIS) provide an important means of establishing a prompt and precise understanding of local data on disease outbreaks, from which disease eradication programs can be established. Having first defined GIS broadly as a combination of GPS, remote sensing (RS) and GIS technologies, we show the processes through which these technologies were introduced into our research. GIS-derived geographical information attributes were interpreted in terms of points, areas, lines, spatial epidemiology, risk and development, in order to generate the vector dynamic models associated with the spread of disease. Interdisciplinary scientific and administrative collaboration in the use of GIS to control infectious diseases is highly warranted.

  13. Brain Radiation Information Data Exchange (BRIDE): integration of experimental data from low-dose ionising radiation research for pathway discovery.

    PubMed

    Karapiperis, Christos; Kempf, Stefan J; Quintens, Roel; Azimzadeh, Omid; Vidal, Victoria Linares; Pazzaglia, Simonetta; Bazyka, Dimitry; Mastroberardino, Pier G; Scouras, Zacharias G; Tapio, Soile; Benotmane, Mohammed Abderrafi; Ouzounis, Christos A

    2016-05-11

    The underlying molecular processes representing stress responses to low-dose ionising radiation (LDIR) in mammals are just beginning to be understood. In particular, LDIR effects on the brain and their possible association with neurodegenerative disease are currently being explored using omics technologies. We describe a light-weight approach for the storage, analysis and distribution of relevant LDIR omics datasets. The data integration platform, called BRIDE, contains information from the literature as well as experimental information from transcriptomics and proteomics studies. It deploys a hybrid, distributed solution using both local storage and cloud technology. BRIDE can act as a knowledge broker for LDIR researchers, to facilitate molecular research on the systems biology of LDIR response in mammals. Its flexible design can capture a range of experimental information for genomics, epigenomics, transcriptomics, and proteomics. The data collection is available at: .

  14. Representing distributed cognition in complex systems: how a submarine returns to periscope depth.

    PubMed

    Stanton, Neville A

    2014-01-01

    This paper presents the Event Analysis of Systemic Teamwork (EAST) method as a means of modelling distributed cognition in systems. The method comprises three network models (i.e. task, social and information) and their combination. This method was applied to the interactions between the sound room and control room in a submarine, following the activities of returning the submarine to periscope depth. This paper demonstrates three main developments in EAST. First, building the network models directly, without reference to the intervening methods. Second, the application of analysis metrics to all three networks. Third, the combination of the aforementioned networks in different ways to gain a broader understanding of the distributed cognition. Analyses have shown that EAST can be used to gain both qualitative and quantitative insights into distributed cognition. Future research should focus on the analyses of network resilience and modelling alternative versions of a system.
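
    The quantitative side of EAST, applying metrics to the task, social and information networks, can be illustrated with a toy directed graph; the nodes and links below are invented stand-ins, not the paper's actual submarine control-room data.

```python
# Toy social-network metrics of the kind EAST applies: density plus
# per-agent emission (out-degree) and reception (in-degree) counts.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("sonar_op", "sound_room_sup"), ("sound_room_sup", "ops_officer"),
    ("ops_officer", "officer_of_watch"), ("officer_of_watch", "ops_officer"),
    ("periscope_op", "officer_of_watch"), ("ops_officer", "sound_room_sup"),
])
print("network density:", round(nx.density(g), 2))
for node in g:
    print(f"{node}: emits {g.out_degree(node)}, receives {g.in_degree(node)}")
```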

  15. Implementing an SIG based platform of application and service for city spatial information in Shanghai

    NASA Astrophysics Data System (ADS)

    Yu, Bailang; Wu, Jianping

    2006-10-01

    Spatial Information Grid (SIG) is an infrastructure that provides spatial information services according to users' needs by collecting, sharing, organizing and processing massive distributed spatial information resources. This paper presents the architecture, technologies and implementation of the Shanghai City Spatial Information Application and Service System, a SIG-based platform that serves the administration, planning, construction and development of the city. The System contains ten categories of spatial information resources, including city planning, land use, real estate, river system, transportation, municipal facility construction, environmental protection, sanitation, urban afforestation and basic geographic information data. In addition, spatial information processing services are offered in the form of GIS Web Services. The resources and services are distributed across different web-based nodes. A single database stores the metadata of all the spatial information. A portal site is published as the main user interface of the System, with three main functions. First, users can search the metadata and then acquire the distributed data using the search results. Second, spatial processing web applications developed with GIS Web Services, such as file format conversion, spatial coordinate transformation, cartographic generalization and spatial analysis, are available. Third, the GIS Web Services currently available in the System can be searched and new ones can be registered. The System has been operating efficiently on the Shanghai Government Network since 2005.

  16. A Geographic-Information-Systems-Based Approach to Analysis of Characteristics Predicting Student Persistence and Graduation

    ERIC Educational Resources Information Center

    Ousley, Chris

    2010-01-01

    This study sought to provide empirical evidence regarding the use of spatial analysis in enrollment management to predict persistence and graduation. The research utilized data from the 2000 U.S. Census and applicant records from The University of Arizona to study the spatial distributions of enrollments. Based on the initial results, stepwise…

  17. Spatially Resolved Analysis of Amines Using a Fluorescence Molecular Probe: Molecular Analysis of IDPs

    NASA Technical Reports Server (NTRS)

    Clemett, S. J.; Messenger, S.; Thomas-Keprta, K. L.; Wentworth, S. J.; Robinson, G. A.; McKay, D. S.

    2002-01-01

    Some Interplanetary Dust Particles (IDPs) have large isotope anomalies in H and N. To address the nature of the carrier phase, we are developing a procedure to spatially resolve the distribution of organic species on IDP thin sections utilizing fluorescent molecular probes. Additional information is contained in the original extended abstract.

  18. 40 CFR 1400.5 - Internet access to certain off-site consequence analysis data elements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Internet access to certain off-site... DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.5 Internet access to certain off... elements in the risk management plan database available on the Internet: (a) The concentration of the...

  19. Privacy-Preserving Distributed Information Sharing

    DTIC Science & Technology

    2006-07-01

    ...be chosen by slightly adjusting the analysis given in the proof of Theorem 26. Using Bloom filters: Bloom filters provide a compact probabilistic representation of set membership [6]. Instead of using T filters, we can use a combined Bloom filter. This achieves the same asymptotic communication...
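
    To make the snippet concrete, here is a minimal combined Bloom filter, a compact probabilistic set-membership structure with false positives but no false negatives; the bit-array size and hash scheme below are illustrative choices, not the report's parameters.

```python
# Minimal Bloom filter: k hash positions per item over an m-bit array.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 7):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: str):
        # Derive k positions by salting a cryptographic hash with an index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice@example.org")
print("alice@example.org" in bf)   # True
print("bob@example.org" in bf)     # False (with high probability)
```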

  20. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

    The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, for the support of integrated research in the Earth sciences an urgent and important task (Gordov et al, 2012; van der Wel, 2005). The inherent heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, decreasing the reliability of analysis results. Modern geophysical data processing techniques, however, allow different technological solutions to be combined when organizing such information resources. It is now generally accepted that an information-computational infrastructure should rely on the combined use of web and GIS technologies for creating applied information-computational web systems (Titov et al, 2009; Gordov et al, 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches for the development of internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a promising way to create a distributed information-computational environment supporting multidisciplinary regional and global research in the Earth sciences, including analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. We present an experimental software and hardware platform providing the operation of a web-oriented production and research center for regional climate change investigations, which combines a modern Web 2.0 approach, GIS functionality, and capabilities for running climate and meteorological models, processing large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis, and the education of students and post-graduate students. The platform software developed (Shulgina et al, 2012; Okladnikov et al, 2012) includes dedicated modules for numerical processing of regional and global modelling results for subsequent analysis and visualization. Data preprocessing, execution, and visualization of results for the WRF and "Planet Simulator" models integrated into the platform are also provided. All functions of the center are accessible to users through a web portal from a common graphical web browser, in the form of an interactive graphical user interface which provides, in particular, visualization of processing results, selection of a geographical region of interest (pan and zoom), and manipulation of data layers (order, enable/disable, feature extraction). The platform provides users with capabilities for the analysis of heterogeneous geophysical data, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes within different multidisciplinary studies (Shulgina et al, 2011). Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through the unified graphical web interface.

  1. Investigating the influence of standard staining procedures on the copper distribution and concentration in Wilson's disease liver samples by laser ablation-inductively coupled plasma-mass spectrometry.

    PubMed

    Hachmöller, Oliver; Aichler, Michaela; Schwamborn, Kristina; Lutz, Lisa; Werner, Martin; Sperling, Michael; Walch, Axel; Karst, Uwe

    2017-12-01

    The influence of rhodanine and haematoxylin and eosin (HE) staining on the copper distribution and concentration in liver needle biopsy samples originating from patients with Wilson's disease (WD), a rare autosomal recessive inherited disorder of copper metabolism, is investigated. In contemporary diagnosis of WD, rhodanine staining is used for histopathology, since rhodanine and copper form a red to orange-red complex that can be recognized in liver tissue under a microscope. In this paper, a laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) method is applied to the analysis of eight different WD liver samples. Apart from spatially resolved elemental detection as qualitative information, this LA-ICP-MS method also offers quantitative information by external calibration with matrix-matched gelatine standards. The sample set of this work included an unstained and a rhodanine-stained section of each WD liver sample. While unstained sections of WD liver samples showed very distinct structures of the copper distribution with high copper concentrations, rhodanine-stained sections revealed a blurred copper distribution with concentrations decreased significantly, in a range from 20 to more than 90%. This implies removal of copper from the liver tissue by complexation during rhodanine staining. In contrast, an HE-stained section of one WD liver sample showed no significant decrease in copper concentration or influence on the copper distribution in comparison to the unstained section. Therefore, HE staining can be combined with analysis by LA-ICP-MS in two successive steps on one thin section of a biopsy specimen. This allows further information on the elemental distribution to be gained by LA-ICP-MS in addition to the results obtained by histological staining. Copyright © 2017 Elsevier GmbH. All rights reserved.

  2. Information Transfer in the Brain: Insights from a Unified Approach

    NASA Astrophysics Data System (ADS)

    Marinazzo, Daniele; Wu, Guorong; Pellicoro, Mario; Stramaglia, Sebastiano

    Measuring directed interactions in the brain in terms of information flow is a promising approach, mathematically treatable and amenable to encompassing several methods. In this chapter we propose some approaches rooted in this framework for the analysis of neuroimaging data. First we explore how the transfer of information depends on the network structure, showing that for hierarchical networks the information flow pattern is characterized by an exponential distribution of the incoming information and a fat-tailed distribution of the outgoing information, a signature of the law of diminishing marginal returns. This has been reported to be true also for effective connectivity networks from human EEG data. We then address the problem of partial conditioning on a limited subset of variables, chosen as the most informative ones for the driver node. We then propose a formal expansion of the transfer entropy to highlight irreducible sets of variables which provide information about the future state of each assigned target. Multiplets characterized by a large contribution to the expansion are associated with informational circuits present in the system, whose informational character (synergetic or redundant) is indicated by the sign of the contribution. Applications are reported for EEG and fMRI data.
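
    The basic quantity behind these approaches, transfer entropy TE(X->Y) = I(Y_t; X_{t-1} | Y_{t-1}), can be estimated for binary series by plug-in counts, as sketched below on synthetic coupled series; real EEG/fMRI analyses use finer estimators and larger conditioning sets.

```python
# Plug-in transfer entropy between two binary time series.
import numpy as np

rng = np.random.default_rng(9)
T = 50000
x = rng.integers(0, 2, T)
y = np.empty(T, dtype=int)
y[0] = 0
for t in range(1, T):                 # y copies x's past 70% of the time
    y[t] = x[t - 1] if rng.random() < 0.7 else rng.integers(0, 2)

def transfer_entropy(src, dst):
    # Joint counts over (y_t, y_{t-1}, x_{t-1}).
    trip = np.stack([dst[1:], dst[:-1], src[:-1]])
    joint = np.zeros((2, 2, 2))
    for a, b, c in trip.T:
        joint[a, b, c] += 1
    joint /= joint.sum()
    te = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                p = joint[a, b, c]
                if p > 0:
                    # p(a|b,c) / p(a|b) written with joint marginals.
                    te += p * np.log2(p * joint[:, b, :].sum()
                                      / (joint[:, b, c].sum() * joint[a, b, :].sum()))
    return te

print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")   # clearly positive
print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")   # close to 0
```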

  3. Ecology and geography of human monkeypox case occurrences across Africa.

    PubMed

    Ellis, Christine K; Carroll, Darin S; Lash, Ryan R; Peterson, A Townsend; Damon, Inger K; Malekani, Jean; Formenty, Pierre

    2012-04-01

    As ecologic niche modeling (ENM) evolves as a tool in spatial epidemiology and public health, selection of the most appropriate and informative environmental data sets becomes increasingly important. Here, we build on a previous ENM analysis of the potential distribution of human monkeypox in Africa by refining georeferencing criteria and using more-diverse environmental data to identify environmental parameters contributing to monkeypox distributional ecology. Significant environmental variables include annual precipitation, several temperature-related variables, primary productivity, evapotranspiration, soil moisture, and pH. The potential distribution identified with this set of variables was broader than that identified in previous analyses but does not include areas recently found to hold monkeypox in southern Sudan. Our results emphasize the importance of selecting the most appropriate and informative environmental data sets for ENM analyses in pathogen transmission mapping.

  4. A European Database of Fusarium graminearum and F. culmorum Trichothecene Genotypes

    PubMed Central

    Pasquali, Matias; Beyer, Marco; Logrieco, Antonio; Audenaert, Kris; Balmas, Virgilio; Basler, Ryan; Boutigny, Anne-Laure; Chrpová, Jana; Czembor, Elżbieta; Gagkaeva, Tatiana; González-Jaén, María T.; Hofgaard, Ingerd S.; Köycü, Nagehan D.; Hoffmann, Lucien; Lević, Jelena; Marin, Patricia; Miedaner, Thomas; Migheli, Quirico; Moretti, Antonio; Müller, Marina E. H.; Munaut, Françoise; Parikka, Päivi; Pallez-Barthel, Marine; Piec, Jonathan; Scauflaire, Jonathan; Scherm, Barbara; Stanković, Slavica; Thrane, Ulf; Uhlig, Silvio; Vanheule, Adriaan; Yli-Mattila, Tapani; Vogelgsang, Susanne

    2016-01-01

    Fusarium species, particularly Fusarium graminearum and F. culmorum, are the main cause of trichothecene type B contamination in cereals. Data on the distribution of Fusarium trichothecene genotypes in cereals in Europe are scattered in time and space. Furthermore, a common core set of related variables (sampling method, host cultivar, previous crop, etc.) that would allow more effective analysis of factors influencing the spatial and temporal population distribution is lacking. Consequently, based on the available data, it is difficult to identify factors influencing chemotype distribution and spread at the European level. Here we describe the results of a collaborative integrated work which aims (1) to characterize the trichothecene genotypes of strains from three Fusarium species, collected over the period 2000–2013, and (2) to enhance the standardization of epidemiological data collection. Information on host plant, country of origin, sampling location, year of sampling and previous crop of 1147 F. graminearum, 479 F. culmorum, and 3 F. cortaderiae strains obtained from 17 European countries was compiled and a map of trichothecene type B genotype distribution was plotted for each species. All information on the strains was collected in a freely accessible and updatable database (www.catalogueeu.luxmcc.lu), which will serve as a starting point for epidemiological analysis of potential spatial and temporal trichothecene genotype shifts in Europe. The analysis of the currently available European dataset showed that in F. graminearum, the predominant genotype was 15-acetyldeoxynivalenol (15-ADON) (82.9%), followed by 3-acetyldeoxynivalenol (3-ADON) (13.6%), and nivalenol (NIV) (3.5%). In F. culmorum, the prevalent genotype was 3-ADON (59.9%), while the NIV genotype accounted for the remaining 40.1%. Both geographical and temporal patterns of trichothecene genotype distribution were identified. PMID:27092107

  5. An Interoperable, Agricultural Information System Based on Satellite Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Teng, William; Chiu, Long; Doraiswamy, Paul; Kempler, Steven; Liu, Zhong; Pham, Long; Rui, Hualan

    2005-01-01

    Monitoring global agricultural crop conditions during the growing season and estimating potential seasonal production are critically important for market development of U.S. agricultural products and for global food security. The Goddard Space Flight Center Earth Sciences Data and Information Services Center Distributed Active Archive Center (GES DISC DAAC) is developing an Agricultural Information System (AIS), evolved from the existing TRMM Online Visualization and Analysis System (TOVAS), which will operationally provide satellite remote sensing data products (e.g., rainfall) and services. The data products will include crop condition and yield prediction maps, generated from a crop growth model with satellite data inputs, in collaboration with the USDA Agricultural Research Service. The AIS will enable remote, interoperable access to distributed data by using the GrADS-DODS Server (GDS) and by being compliant with Open GIS Consortium standards. Users will be able to download individual files, perform interactive online analysis, and receive operational data flows. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as those of the USDA Foreign Agricultural Service and the U.N. World Food Program.

  6. Environmental factor analysis of cholera in China using remote sensing and geographical information systems.

    PubMed

    Xu, M; Cao, C X; Wang, D C; Kan, B; Xu, Y F; Ni, X L; Zhu, Z C

    2016-04-01

    Cholera is one of a number of infectious diseases that appear to be influenced by climate, geography, and other features of the natural environment. This study analysed the environmental factors behind the spatial distribution of cholera in China. It shows that temperature, precipitation, elevation, and distance to the coastline have a significant impact on the distribution of cholera. It also reveals the oceanic environmental factors associated with cholera in Zhejiang, a coastal province of China, using both remote sensing (RS) and geographical information systems (GIS). The analysis validated the correlation between indirect satellite measurements of sea surface temperature (SST), sea surface height (SSH), and ocean chlorophyll concentration (OCC) and the local number of cholera cases, based on eight years of monthly data from 2001 to 2008. The results show that the number of cholera cases has been strongly affected by the variables SST, SSH, and OCC. Utilizing this information, a cholera prediction model has been established based on the oceanic and climatic environmental factors. The model indicates that RS and GIS have great potential for designing an early warning system for cholera.
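
    One way the reported association can be formalized is as a Poisson regression of monthly case counts on the oceanic variables. The sketch below shows that model structure on simulated SST, SSH, and OCC anomalies and case counts; none of it is the study's data, and the abstract does not specify the authors' exact model form.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      months = 96  # eight years of monthly data, matching the 2001-2008 window

      sst = rng.normal(size=months)  # standardized sea surface temperature anomaly
      ssh = rng.normal(size=months)  # standardized sea surface height anomaly
      occ = rng.normal(size=months)  # standardized ocean chlorophyll anomaly

      # Simulate counts with a known dependence so the fit has something to find
      cases = rng.poisson(np.exp(0.5 + 0.6 * sst + 0.3 * ssh + 0.4 * occ))

      X = sm.add_constant(np.column_stack([sst, ssh, occ]))
      fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
      print(fit.params)  # intercept and the three environmental coefficients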

  7. [Public scientific knowledge distribution in health information, communication and information technology indexed in MEDLINE and LILACS databases].

    PubMed

    Packer, Abel Laerte; Tardelli, Adalberto Otranto; Castro, Regina Célia Figueiredo

    2007-01-01

    This study explores the distribution of international, regional and national scientific output in health information and communication, indexed in the MEDLINE and LILACS databases, between 1996 and 2005. A selection of articles was based on the hierarchical structure of Information Science in MeSH vocabulary. Four specific domains were determined: health information, medical informatics, scientific communications on healthcare and healthcare communications. The variables analyzed were: most-covered subjects and journals, author affiliation and publication countries and languages, in both databases. The Information Science category is represented in nearly 5% of MEDLINE and LILACS articles. The four domains under analysis showed a relative annual increase in MEDLINE. The Medical Informatics domain showed the highest number of records in MEDLINE, representing about half of all indexed articles. The importance of Information Science as a whole is more visible in publications from developed countries and the findings indicate the predominance of the United States, with significant growth in scientific output from China and South Korea and, to a lesser extent, Brazil.

  8. In vivo evaluation of the effect of stimulus distribution on FIR statistical efficiency in event-related fMRI.

    PubMed

    Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L

    2013-05-15

    Technical developments in MRI have improved signal-to-noise ratios, allowing the use of analysis methods such as finite impulse response (FIR) modeling of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while greater design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level being a function of multicollinearity. Experiment protocols varied by up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can significantly and substantially vary with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. Published by Elsevier B.V.
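
    The efficiency scale at issue can be previewed numerically: build an FIR design matrix from stimulus onsets and compare the classical efficiency and the multicollinearity (condition number) of different protocols. This schematic sketch uses made-up scan and event counts, not the study's five protocols.

      import numpy as np

      def fir_design(onsets, n_scans, n_lags):
          """FIR design matrix: one regressor per post-stimulus lag (in TR units)."""
          X = np.zeros((n_scans, n_lags))
          for t in onsets:
              for lag in range(n_lags):
                  if t + lag < n_scans:
                      X[t + lag, lag] = 1.0
          return X

      def efficiency(X):
          """Classical design efficiency: inverse of the summed estimator variance."""
          return 1.0 / np.trace(np.linalg.inv(X.T @ X))

      rng = np.random.default_rng(3)
      n_scans, n_lags, n_events = 400, 12, 60

      random_onsets = np.sort(rng.choice(np.arange(n_scans - n_lags), n_events, replace=False))
      regular_onsets = np.arange(n_events) * ((n_scans - n_lags) // n_events)

      for name, onsets in [("random", random_onsets), ("regular", regular_onsets)]:
          X = fir_design(onsets, n_scans, n_lags)
          print(name, "efficiency:", round(efficiency(X), 3),
                "condition number:", round(np.linalg.cond(X.T @ X), 1))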

  9. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. ?? 2008 Springer-Verlag Berlin Heidelberg.
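
    The regularized inversion can be sketched compactly: minimize a data misfit plus a weighted penalty toward a prior (background) model, then scan the weight to trace the tradeoff between data fit and model norm. The forward operator, noise level, and prior below are hypothetical placeholders, not the steady-shape flow model used in the chapter.

      import numpy as np

      rng = np.random.default_rng(4)
      n_data, n_param = 80, 120

      # Hypothetical linearized forward model: data = G @ k + noise, with a smooth true K field
      G = rng.normal(size=(n_data, n_param))
      k_true = np.convolve(rng.normal(size=n_param), np.ones(15) / 15, mode="same")
      d = G @ k_true + 0.05 * rng.normal(size=n_data)
      k_prior = np.zeros(n_param)  # background model

      def regularized_inverse(alpha):
          """Minimize ||G k - d||^2 + alpha ||k - k_prior||^2 (Tikhonov form)."""
          A = G.T @ G + alpha * np.eye(n_param)
          return np.linalg.solve(A, G.T @ d + alpha * k_prior)

      # Trade-off curve: heavier regularization buys a smaller model norm at the cost of misfit
      for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
          k_hat = regularized_inverse(alpha)
          print(f"alpha={alpha:7.2f}  misfit={np.linalg.norm(G @ k_hat - d):8.3f}"
                f"  model norm={np.linalg.norm(k_hat - k_prior):8.3f}")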

  10. Spatially explicit spectral analysis of point clouds and geospatial data

    USGS Publications Warehouse

    Buscombe, Daniel D.

    2015-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purpose of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness); however, the horizontal scale (wavelength) and spacing of roughness elements is rarely considered, despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about the relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, and therefore readily incorporated into, and combined with, other data analysis tools and frameworks, with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is described, and its functionality illustrated with an example of a high-resolution bathymetric point cloud collected with a multibeam echosounder.
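
    The heart of the spectral approach can be shown in one dimension: estimate the power spectrum of a (synthetic) elevation transect and read off the dominant roughness wavelength. This toy example only gestures at what PySESA does on windowed point clouds and does not call the PySESA API.

      import numpy as np
      from scipy import signal

      dx = 0.1                                   # sample spacing (m)
      x = np.arange(0, 200, dx)
      z = 0.05 * np.sin(2 * np.pi * x / 2.0)     # bedforms with a 2 m wavelength
      z += 0.01 * np.random.default_rng(5).normal(size=x.size)

      # Welch periodogram: spatial frequency (cycles/m) vs. amplitude variance density
      f, pxx = signal.welch(z, fs=1.0 / dx, nperseg=1024)
      peak = f[np.argmax(pxx[1:]) + 1]           # skip the zero-frequency bin
      print("dominant wavelength: %.2f m" % (1.0 / peak))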

  11. Rényi’s information transfer between financial time series

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Kleinert, Hagen; Shefaat, Mohammad

    2012-05-01

    In this paper, we quantify the statistical coherence between financial time series by means of the Rényi entropy. With the help of Campbell’s coding theorem, we show that the Rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others. This accentuation is controlled with Rényi’s parameter q. To tackle the issue of the information flow between time series, we formulate the concept of Rényi’s transfer entropy as a measure of information that is transferred only between certain parts of underlying distributions. This is particularly pertinent in financial time series, where the knowledge of marginal events such as spikes or sudden jumps is of a crucial importance. We apply the Rényian information flow to stock market time series from 11 world stock indices as sampled at a daily rate in the time period 02.01.1990-31.12.2009. Corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices based on minute tick data gathered in the period 02.04.2008-11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric with a distinct information surplus flowing from the Asia-Pacific region to both European and US markets. An important yet less dramatic excess of information also flows from Europe to the US. This is particularly clearly seen from a careful analysis of Rényi information flow between the DAX and S&P500 indices.
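
    The selective emphasis controlled by the parameter q is easy to demonstrate: a plug-in Rényi entropy computed from histograms of a Gaussian sample and a fat-tailed sample diverges as q moves away from 1. The samples below are simulated stand-ins for return series; the paper's transfer entropy builds conditional versions of this quantity.

      import numpy as np

      def renyi_entropy(samples, q, bins=50):
          """Plug-in Rényi entropy H_q = log2(sum p^q) / (1 - q) from a histogram."""
          counts, _ = np.histogram(samples, bins=bins)
          p = counts[counts > 0] / counts.sum()
          if q == 1.0:
              return -np.sum(p * np.log2(p))  # Shannon limit
          return np.log2(np.sum(p ** q)) / (1.0 - q)

      rng = np.random.default_rng(6)
      gaussian_returns = rng.normal(size=20000)
      heavy_tailed = rng.standard_t(df=3, size=20000)  # spiky, fat-tailed returns

      # q < 1 accentuates the tails; q > 1 accentuates the centre of the distribution
      for q in [0.5, 1.0, 2.0]:
          print(f"q={q}: normal {renyi_entropy(gaussian_returns, q):.3f} bits, "
                f"t(3) {renyi_entropy(heavy_tailed, q):.3f} bits")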

  12. Navigating the Decision Space: Shared Medical Decision Making as Distributed Cognition.

    PubMed

    Lippa, Katherine D; Feufel, Markus A; Robinson, F Eric; Shalin, Valerie L

    2017-06-01

    Despite increasing prominence, little is known about the cognitive processes underlying shared decision making. To investigate these processes, we conceptualize shared decision making as a form of distributed cognition. We introduce a Decision Space Model to identify physical and social influences on decision making. Using field observations and interviews, we demonstrate that patients and physicians in both acute and chronic care consider these influences when identifying the need for a decision, searching for decision parameters, and making actionable decisions. Based on the distribution of access to information and actions, we then identify four related patterns: physician-dominated; physician-defined, patient-made; patient-defined, physician-made; and patient-dominated decisions. Results suggest that (a) decision making is necessarily distributed between physicians and patients, (b) differential access to information and action over time requires participants to transform a distributed task into a shared decision, and (c) adverse outcomes may result from failures to integrate physician and patient reasoning. Our analysis unifies disparate findings in the medical decision-making literature and has implications for improving care and medical training.

  13. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    One of the largest continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available. Approaches used in Earth science research, such as case study analysis and climatology studies, involve discovering and gathering diverse data sets and information to support the research goals. Research based on case studies involves a detailed description of specific weather events, using data from different sources to characterize the physical processes in play for a specific event. Climatology-based research tends to focus on the representativeness of a given event by studying the characteristics and distribution of a large number of events. This allows researchers to generalize characteristics such as spatio-temporal distribution, intensity, annual cycle, duration, etc. Gathering relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the datasets of interest can obtain the specific files they need using these systems. However, in cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. In these cases, a search process needs to be organized around the event rather than the observing instruments. In addition, the existing data systems assume users have sufficient knowledge of the domain vocabulary to be able to effectively utilize their catalogs. These systems do not support new or interdisciplinary researchers who may be unfamiliar with the domain terminology. This paper presents a specialized search, aggregation and curation tool for Earth science to address these existing challenges. The search tool automatically creates curated "Data Albums", aggregated collections of information related to a specific science topic or event, containing links to relevant data files (granules) from different instruments; tools and services for visualization and analysis; and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven via an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.

  14. NASA's SPICE System Models the Solar System

    NASA Technical Reports Server (NTRS)

    Acton, Charles

    1996-01-01

    SPICE is NASA's multimission, multidiscipline information system for assembling, distributing, archiving, and accessing space science geometry and related data used by scientists and engineers for mission design and mission evaluation, detailed observation planning, mission operations, and science data analysis.

  15. PROCEEDINGS OF THE NHEXAS DATA ANALYSIS WORKSHOP

    EPA Science Inventory

    The National Human Exposure Assessment Survey (NHEXAS) was developed by the Office of Research and Development (ORD) of the U.S. Environmental Protection Agency (EPA) early in the 1990s to provide critical information about multipathway, multimedia population exposure distribut...

  16. Development of geotechnical analysis and design modules for the Virginia Department of Transportation's geotechnical database.

    DOT National Transportation Integrated Search

    2005-01-01

    In 2003, an Internet-based Geotechnical Database Management System (GDBMS) was developed for the Virginia Department of Transportation (VDOT) using distributed Geographic Information System (GIS) methodology for data management, archival, retrieval, ...

  17. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Gathering relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files (granules) from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven via an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.

  18. A Petri Net model for distributed energy system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konopko, Joanna

    2015-12-31

    Electrical networks need to evolve to become more intelligent, more flexible, and less costly. The smart grid, the next generation of the power system, uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization, and energy efficiency, and proper modeling and analysis are needed to build an extensive distributed energy system and intelligent electricity infrastructure. In this paper, a whole-system model of the smart grid is proposed using Generalized Stochastic Petri Nets (GSPN), and simulation of the created model is explored. The simulation has allowed analysis of how closely the behavior of the model matches the usage of the real smart grid.
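
    The flavor of the formalism can be conveyed by a minimal stochastic Petri net simulator with exponentially timed transitions (a full GSPN also has immediate transitions and priorities, omitted here). The tiny demand-service net, its places, and its rates are invented for illustration and are not the paper's smart grid model.

      import random

      # Places hold tokens; timed transitions fire with exponential delays when enabled
      places = {"demand_queued": 3, "grid_available": 1, "demand_served": 0}
      transitions = {
          # name: (rate, places consumed, places produced)
          "serve":  (2.0, ["demand_queued", "grid_available"], ["demand_served", "grid_available"]),
          "arrive": (1.0, [], ["demand_queued"]),
      }

      def enabled(name):
          return all(places[p] > 0 for p in transitions[name][1])

      random.seed(7)
      t, horizon = 0.0, 20.0
      while t < horizon:
          candidates = [n for n in transitions if enabled(n)]
          if not candidates:
              break
          # Race of exponential clocks: the earliest sampled firing wins
          dt, name = min((random.expovariate(transitions[n][0]), n) for n in candidates)
          t += dt
          for p in transitions[name][1]:
              places[p] -= 1
          for p in transitions[name][2]:
              places[p] += 1
      print("marking at t=%.1f:" % t, places)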

  19. Evaluation of Online Information Sources on Alien Species in Europe: The Need of Harmonization and Integration

    NASA Astrophysics Data System (ADS)

    Gatto, Francesca; Katsanevakis, Stelios; Vandekerkhove, Jochen; Zenetos, Argyro; Cardoso, Ana Cristina

    2013-06-01

    Europe is severely affected by alien invasions, which impact biodiversity, ecosystem services, economy, and human health. A large number of national, regional, and global online databases provide information on the distribution, pathways of introduction, and impacts of alien species. The sufficiency and efficiency of the current online information systems to assist the European policy on alien species was investigated by a comparative analysis of occurrence data across 43 online databases. Large differences among databases were found which are partially explained by variations in their taxonomical, environmental, and geographical scopes but also by the variable efforts for continuous updates and by inconsistencies on the definition of "alien" or "invasive" species. No single database covered all European environments, countries, and taxonomic groups. In many European countries national databases do not exist, which greatly affects the quality of reported information. To be operational and useful to scientists, managers, and policy makers, online information systems need to be regularly updated through continuous monitoring on a country or regional level. We propose the creation of a network of online interoperable web services through which information in distributed resources can be accessed, aggregated and then used for reporting and further analysis at different geographical and political scales, as an efficient approach to increase the accessibility of information. Harmonization, standardization, conformity on international standards for nomenclature, and agreement on common definitions of alien and invasive species are among the necessary prerequisites.

  20. Informing Species Conservation at Multiple Scales Using Data Collected for Marine Mammal Stock Assessments

    PubMed Central

    Grech, Alana; Sheppard, James; Marsh, Helene

    2011-01-01

    Background Conservation planning and the design of marine protected areas (MPAs) require spatially explicit information on the distribution of ecological features. Most species of marine mammals range over large areas and across multiple planning regions. The spatial distributions of marine mammals are difficult to predict using habitat modelling at ecological scales because of insufficient understanding of their habitat needs; however, relevant information may be available from surveys conducted to inform mandatory stock assessments. Methodology and Results We use a 20-year time series of systematic aerial surveys of dugong (Dugong dugon) abundance to create spatially explicit models of dugong distribution and relative density at the scale of the coastal waters of northeast Australia (∼136,000 km2). We interpolated the corrected data at the scale of 2 km × 2 km planning units using geostatistics. Planning units were classified as low, medium, high and very high dugong density on the basis of the relative density of dugongs estimated from the models and a frequency analysis. Torres Strait was identified as the most significant dugong habitat in northeast Australia and the most globally significant habitat known for any member of the Order Sirenia. The models are used by local, State and Federal agencies to inform management decisions related to the Indigenous harvest of dugongs, gill-net fisheries and Australia's National Representative System of Marine Protected Areas. Conclusion/Significance In this paper we demonstrate that spatially explicit population models add value to data collected for stock assessments, provide a robust alternative to predictive habitat distribution models, and inform species conservation at multiple scales. PMID:21464933

  1. Contribution of multiple inert gas elimination technique to pulmonary medicine. 1. Principles and information content of the multiple inert gas elimination technique.

    PubMed Central

    Roca, J.; Wagner, P. D.

    1994-01-01

    This introductory review summarises four different aspects of the multiple inert gas elimination technique (MIGET). The first is the historical background that facilitated, in the mid-1970s, the development of the MIGET as a tool to obtain more information about the entire spectrum of the VA/Q distribution in the lung by measuring the exchange of six gases of different solubilities in trace concentrations. Its principle is based on the observation that the retention (or excretion) of any gas depends on the solubility (lambda) of that gas and on the VA/Q distribution. A second major aspect is the analysis of the information content and limitations of the technique. During the last 15 years a substantial amount of clinical research using the MIGET has been generated by several groups around the world. The technique has been shown to be adequate for understanding the mechanisms of hypoxaemia in different forms of pulmonary disease and the effects of therapeutic interventions, and also for separately determining the quantitative role of each extrapulmonary factor on systemic arterial PO2 when they change between two conditions of MIGET measurement. This information will be extensively reviewed in the forthcoming articles of this series. Next, the different modalities of the MIGET, practical considerations involved in the measurements, and guidelines for quality control are indicated. Finally, a section is devoted to the analysis of available data in healthy subjects under different conditions. The lack of systematic information on the VA/Q distributions of older healthy subjects is emphasised, since it will be required to fully understand the changes brought about by diseases that affect the older population. PMID:8091330
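
    The principle lends itself to a compact numerical illustration: for a lung modeled as perfused compartments, the steady-state retention of an inert gas with blood-gas partition coefficient lambda is the flow-weighted sum of lambda / (lambda + VA/Q) over compartments. The three-compartment lung and the solubilities below are rough, hypothetical values spanning the range of the six MIGET gases.

      import numpy as np

      def retention(lam, va_q, q_frac):
          """Flow-weighted retention: R(lambda) = sum_i q_i * lam / (lam + VA_i/Q_i)."""
          return float(np.sum(q_frac * lam / (lam + va_q)))

      va_q = np.array([0.1, 1.0, 10.0])    # low, normal, and high VA/Q compartments
      q_frac = np.array([0.2, 0.7, 0.1])   # fraction of blood flow to each compartment

      # Approximate, illustrative solubilities (dimensionless lambda) for six gases
      gases = {"SF6": 0.005, "ethane": 0.1, "cyclopropane": 0.5,
               "halothane": 2.5, "ether": 12.0, "acetone": 300.0}

      for gas, lam in gases.items():
          print(f"{gas:12s} lambda={lam:7.3f}  retention={retention(lam, va_q, q_frac):.3f}")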

  2. Two Instruments for Measuring Distributions of Low-Energy Charged Particles in Space

    NASA Technical Reports Server (NTRS)

    Bader, Michel; Fryer, Thomas B.; Witteborn, Fred C.

    1961-01-01

    Current estimates indicate that the bulk of interplanetary gas consists of protons with energies between 0 and 20 keV and concentrations of 1 to 10^5 particles/cm^3. Methods and instrumentation for measuring the energy and density distribution of such a gas are considered from the standpoint of suitability for space vehicle payloads. It is concluded that electrostatic analysis of the energy distribution can provide sufficient information in initial experiments, though both magnetic and electrostatic analyzers should eventually be used. Several instruments designed and constructed at the Ames Research Center for space plasma measurements are described, along with the methods of calibration and data reduction. In particular, the instrument designed for operation on solar cell power has the following characteristics: weight, 1.1 pounds; size, 2 by 3 by 4 inches; and power consumption, 145 mW. The instrument is designed to yield information on the concentration, energy distribution, and anisotropy of ion trajectories in the 0.2 to 20 keV range.

  3. Developing Tools for Mission Engineering Analysis During Hurricane Preparation and Operations

    DTIC Science & Technology

    2017-06-01

    Reserve Headquarters to help their MFRHTCs prepare, the Naval Postgraduate School and the Center for Educational Design, Development, and Distribution...types of information and resources necessary for hurricane preparation operations and form a conceptual design for a database support system (DBSS...preparation for a hurricane. The results of this thesis detail a conceptual design, functional baseline for the DBSS, specify the information and

  4. [Research on suitable distribution of Paris yunnanensis based on remote sensing and GIS].

    PubMed

    Luo, Yao; Dong, Yong-Bo; Zhu, Cong; Peng, Wen-Fu; Fang, Qing-Mao; Xu, Xin-Liang

    2017-11-01

    Paris yunnanensis is a rare medicinal herb with very high medicinal value. Studying its suitable ecological conditions can provide a basis for its rational exploitation, artificial cultivation, and sustainable utilization. A practicable method is proposed in this paper to determine the suitable regional distribution of P. yunnanensis in Sichuan province. In this case study, and according to the related literature, suitable ecological conditions for P. yunnanensis such as altitude, mean annual temperature (MAT), annual precipitation, regional slope, slope ranges, vegetative cover, and soil types were analyzed using remote sensing (RS) and GIS. The appropriate distribution region of P. yunnanensis and its area were extracted based on RS and GIS technology, combined with information from field validation data. The results showed that the concentrated distribution regions in Sichuan province were Liangshan prefecture, Aba prefecture, Sertar county of Ganzi prefecture, Panzhihua city, Ya'an city, Chengdu city, Meishan city, Leshan city, Yibin city, Neijiang city, Luzhou city, Bazhong city, Nanchong city, Guangyuan city, and other cities and counties. The suitable distribution area in Sichuan is about 7 338 km², accounting for 3.02% of the total study area. The analysis results are highly consistent with the field validation data, and the research method for the P. yunnanensis distribution region, based on spatial overlay analysis and the extraction of land-use and ecological-factor information using RS and GIS, is reliable. Copyright© by the Chinese Pharmaceutical Association.

  5. Pattern detection in stream networks: Quantifying spatialvariability in fish distribution

    USGS Publications Warehouse

    Torgersen, Christian E.; Gresswell, Robert E.; Bateman, Douglas S.

    2004-01-01

    Biological and physical properties of rivers and streams are inherently difficult to sample and visualize at the resolution and extent necessary to detect fine-scale distributional patterns over large areas. Satellite imagery and broad-scale fish survey methods are effective for quantifying spatial variability in biological and physical variables over a range of scales in marine environments but are often too coarse in resolution to address conservation needs in inland fisheries management. We present methods for sampling and analyzing multiscale, spatially continuous patterns of stream fishes and physical habitat in small- to medium-size watersheds (500–1000 hectares). Geospatial tools, including geographic information system (GIS) software such as ArcInfo dynamic segmentation and ArcScene 3D analyst modules, were used to display complex biological and physical datasets. These tools also provided spatial referencing information (e.g. Cartesian and route-measure coordinates) necessary for conducting geostatistical analyses of spatial patterns (empirical semivariograms and wavelet analysis) in linear stream networks. Graphical depiction of fish distribution along a one-dimensional longitudinal profile and throughout the stream network (superimposed on a 10-metre digital elevation model) provided the spatial context necessary for describing and interpreting the relationship between landscape pattern and the distribution of coastal cutthroat trout (Oncorhynchus clarki clarki) in western Oregon, U.S.A. The distribution of coastal cutthroat trout was highly autocorrelated and exhibited a spherical semivariogram with a defined nugget, sill, and range. Wavelet analysis of the main-stem longitudinal profile revealed periodicity in trout distribution at three nested spatial scales corresponding ostensibly to landscape disturbances and the spacing of tributary junctions.
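
    Among the geostatistical tools mentioned, the empirical semivariogram is simple to sketch for a longitudinal profile: pool pairs of samples by separation distance and average half their squared differences. The fish-count profile below is synthetic, with an imposed ~3 km periodicity standing in for the landscape-driven structure reported in the study.

      import numpy as np

      def empirical_semivariogram(coords, values, lags, tol):
          """gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs with |d_ij - h| < tol."""
          d = np.abs(coords[:, None] - coords[None, :])       # pairwise route distances
          sq = 0.5 * (values[:, None] - values[None, :]) ** 2
          return np.array([sq[(np.abs(d - h) < tol) & (d > 0)].mean() for h in lags])

      rng = np.random.default_rng(8)
      coords = np.arange(0, 10000.0, 50.0)                    # samples every 50 m along 10 km
      habitat = np.sin(2 * np.pi * coords / 3000.0)           # ~3 km periodicity in habitat
      counts = 5 + 2 * habitat + rng.normal(0, 0.5, coords.size)

      lags = np.arange(50.0, 3000.0, 100.0)
      gamma = empirical_semivariogram(coords, counts, lags, tol=25.0)
      # The nugget is gamma near h=0, the sill is the plateau, the range is where it levels out
      print(np.round(gamma[:10], 2))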

  6. Temporal patterns of mental model convergence: implications for distributed teams interacting in electronic collaboration spaces.

    PubMed

    McComb, Sara; Kennedy, Deanna; Perryman, Rebecca; Warner, Norman; Letsky, Michael

    2010-04-01

    Our objective is to capture temporal patterns in mental model convergence processes and differences in these patterns between distributed teams using an electronic collaboration space and face-to-face teams with no interface. Distributed teams, as sociotechnical systems, collaborate via technology to work on their task. The way in which they process information to inform their mental models may be examined via team communication and may unfold differently than it does in face-to-face teams. We conducted our analysis on 32 three-member teams working on a planning task. Half of the teams worked as distributed teams in an electronic collaboration space, and the other half worked face-to-face without an interface. Using event history analysis, we found temporal interdependencies among the initial convergence points of the multiple mental models we examined. Furthermore, the timing of mental model convergence and the onset of task work discussions were related to team performance. Differences existed in the temporal patterns of convergence and task work discussions across conditions. Distributed teams interacting via an electronic interface and face-to-face teams with no interface converged on multiple mental models, but their communication patterns differed. In particular, distributed teams with an electronic interface required less overall communication, converged on all mental models later in their life cycles, and exhibited more linear cognitive processes than did face-to-face teams interacting verbally. Managers need unique strategies for facilitating communication and mental model convergence depending on teams' degrees of collocation and access to an interface, which in turn will enhance team performance.

  7. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable placement based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm achieves a total parallel access probability approximately 10-15% higher than that of other algorithms, and that performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
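
    A minimal sketch of the two steps just described: build an access correlation (co-occurrence) matrix from session logs, then place blocks greedily so that frequently co-accessed blocks land on different storage nodes and can be read in parallel. The session log, node count, and the greedy rule are toy stand-ins for the paper's algorithms.

      import numpy as np
      from itertools import combinations

      # Hypothetical access log: each session lists the image blocks it touched
      sessions = [[0, 1, 2], [0, 1], [2, 3], [1, 2, 3], [4, 5], [0, 5], [4, 5]]
      n_blocks, n_nodes = 6, 3

      # Step 1: access correlation matrix from within-session co-occurrences
      C = np.zeros((n_blocks, n_blocks), dtype=int)
      for s in sessions:
          for i, j in combinations(sorted(set(s)), 2):
              C[i, j] += 1
              C[j, i] += 1

      # Step 2: greedy placement, most-correlated blocks first, each onto the node
      # where it is least correlated with the blocks already stored there
      nodes = [[] for _ in range(n_nodes)]
      for b in np.argsort(-C.sum(axis=1)):
          costs = [sum(C[b, other] for other in node) for node in nodes]
          target = min(range(n_nodes), key=lambda k: (costs[k], len(nodes[k])))
          nodes[target].append(int(b))
      print("node contents:", nodes)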

  8. Report of the Defense Science Board Task Force on DoD Energy Strategy, More Fight - Less Fuel

    DTIC Science & Technology

    2008-02-01

  9. A Predictive Analysis of the Department of Defense Distribution System Utilizing Random Forests

    DTIC Science & Technology

    2016-06-01

    resources capable of meeting both customer and individual resource constraints and goals while also maximizing the global benefit to the supply...and probability rules to determine the optimal red wine distribution network for an Italian-based wine producer. The decision support model for...combinations of factors that will result in delivery of the highest quality wines. The model's first stage inputs basic logistics information to look

  10. Climate Informed Low Flow Frequency Analysis Using Nonstationary Modeling

    NASA Astrophysics Data System (ADS)

    Liu, D.; Guo, S.; Lian, Y.

    2014-12-01

    Stationarity is often assumed for frequency analysis of low flows in water resources management and planning. However, many studies have shown that flow characteristics, particularly the frequency spectrum of extreme hydrologic events, are modified by climate change and human activities, and conventional frequency analysis that ignores these non-stationary characteristics may lead to costly designs. The analysis presented in this paper was based on more than 100 years of daily flow data from the Yichang gaging station, 44 kilometers downstream of the Three Gorges Dam. The Mann-Kendall trend test under the scaling hypothesis showed that the annual low flows had a significant monotonic trend, whereas an abrupt change point was identified in 1936 by the Pettitt test. Climate-informed low flow frequency analysis and the divided and combined method are employed to account for the impacts of related climate variables and the nonstationarities in annual low flows. Without prior knowledge of the probability density function for the gaging station, six distribution functions, including the Generalized Extreme Value (GEV), Pearson Type III, Gumbel, Gamma, Lognormal, and Weibull distributions, were tested to find the best fit, with the local likelihood method used to estimate the parameters. Analyses show that the GEV had the best fit for the observed low flows. This study has also shown that climate-informed low flow frequency analysis is able to exploit the link between climate indices and low flows, which accounts for dynamic features in reservoir management and provides more accurate and reliable designs for infrastructure and water supply.
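
    The distribution-selection step can be reproduced in outline: fit each candidate family by maximum likelihood and compare an information criterion. The sketch below runs on simulated annual low flows (the Yichang record is not reproduced here) with stationary scipy fits; the paper's local likelihood estimation for nonstationary parameters is beyond this sketch.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      # Simulated stand-in for ~110 years of annual low flows (m^3/s)
      low_flows = stats.genextreme.rvs(c=-0.1, loc=3500, scale=600, size=110, random_state=rng)

      candidates = {"GEV": stats.genextreme, "Pearson III": stats.pearson3,
                    "Gumbel": stats.gumbel_r, "Gamma": stats.gamma,
                    "Lognormal": stats.lognorm, "Weibull": stats.weibull_min}

      for name, dist in candidates.items():
          params = dist.fit(low_flows)
          loglik = np.sum(dist.logpdf(low_flows, *params))
          bic = len(params) * np.log(len(low_flows)) - 2 * loglik  # lower is better
          print(f"{name:12s} BIC = {bic:9.1f}")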

  11. Mixture distributions of wind speed in the UAE

    NASA Astrophysics Data System (ADS)

    Shin, J.; Ouarda, T.; Lee, T. S.

    2013-12-01

    Wind speed probability distribution is commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when wind speed distribution presents bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without investigation of wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula. Mixture distributional characteristics of wind speed were detected from some of these studies. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used in the current study. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. 10 mixture distributions are used and constructed by mixing four probability distributions such as Normal, Gamma, Weibull and Extreme value type-one (EV-1) distributions. Three parameter estimation methods such as Expectation Maximization algorithm, Least Squares method and Meta-Heuristic Maximum Likelihood (MHML) method were employed to estimate the parameters of the mixture distributions. In order to compare the goodness-of-fit of tested distributions and parameter estimation methods for sample wind data, the adjusted coefficient of determination, Bayesian Information Criterion (BIC) and Chi-squared statistics were computed. Results indicate that MHML presents the best performance of parameter estimation for the used mixture distributions. In most of the employed 7 stations, mixture distributions give the best fit. When the wind speed regime shows mixture distributional characteristics, most of these regimes present the kurtotic statistical characteristic. Particularly, applications of mixture distributions for these stations show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
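
    A two-component Weibull-Weibull mixture, the family found to fit best here, can be fit by direct maximum likelihood as sketched below; the paper's estimators (EM, least squares, MHML) are alternatives to this generic optimizer. The wind speeds are simulated from a bimodal mixture, not UAE station data.

      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize

      rng = np.random.default_rng(10)
      v = np.concatenate([stats.weibull_min.rvs(2.0, scale=4.0, size=3000, random_state=rng),
                          stats.weibull_min.rvs(3.5, scale=9.0, size=2000, random_state=rng)])

      def neg_loglik(theta):
          w = 1.0 / (1.0 + np.exp(-theta[0]))     # mixing weight constrained to (0, 1)
          k1, s1, k2, s2 = np.exp(theta[1:])      # shapes and scales kept positive
          pdf = (w * stats.weibull_min.pdf(v, k1, scale=s1)
                 + (1 - w) * stats.weibull_min.pdf(v, k2, scale=s2))
          return -np.sum(np.log(pdf + 1e-300))

      x0 = np.array([0.0, np.log(2.0), np.log(3.0), np.log(3.0), np.log(8.0)])
      res = minimize(neg_loglik, x0, method="Nelder-Mead", options={"maxiter": 5000})
      w = 1.0 / (1.0 + np.exp(-res.x[0]))
      print("weight:", round(w, 3), " (k1, s1, k2, s2):", np.round(np.exp(res.x[1:]), 3))

      # BIC comparison against a single Weibull (2 free parameters, location fixed at 0)
      bic_mix = 5 * np.log(v.size) + 2 * res.fun
      single = stats.weibull_min.fit(v, floc=0)
      bic_one = 2 * np.log(v.size) - 2 * np.sum(stats.weibull_min.logpdf(v, *single))
      print("BIC mixture:", round(bic_mix, 1), " BIC single Weibull:", round(bic_one, 1))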

  12. Fundamental study for scattering suppression in biological tissue using digital phase-conjugate light with intensity modulation

    NASA Astrophysics Data System (ADS)

    Toda, Sogo; Kato, Yuji; Kudo, Nobuki; Shimizu, Koichi

    2017-04-01

    For transillumination imaging of an animal body, we have attempted to suppress the scattering effect in a turbid medium. It is possible to restore the optical image before scattering using phase-conjugate light. We examined the effect of intensity information as well as phase information on the restoration of the original light distribution. In an experimental analysis using animal tissue, the contributions of the phase and intensity information to image restoration through a turbid medium were demonstrated.

  13. Representing situation awareness in collaborative systems: a case study in the energy distribution domain.

    PubMed

    Salmon, P M; Stanton, N A; Walker, G H; Jenkins, D; Baber, C; McMaster, R

    2008-03-01

    The concept of distributed situation awareness (DSA) is currently receiving increasing attention from the human factors community. This article investigates DSA in a collaborative real-world industrial setting by discussing the results derived from a recent naturalistic study undertaken within the UK energy distribution domain. The results describe the DSA-related information used by the networks of agents involved in the scenarios analysed, the sharing of this information between the agents and the salience of different information elements used. Thus, the structure, quality and content of each network's DSA is discussed, along with the implications for DSA theory. The findings reinforce the notion that when viewing situation awareness (SA) in collaborative systems, it is useful to focus on the coordinated behaviour of the system itself, rather than on the individual as the unit of analysis and suggest that the findings from such assessments can potentially be used to inform system, procedure and training design. SA is a critical commodity for teams working in industrial systems and systems, procedures and training programmes should be designed to facilitate efficient system SA acquisition and maintenance. This article presents approaches for describing and understanding SA during real-world collaborative tasks, the outputs from which can potentially be used to inform system, training programmes and procedure design.

  14. Data for Renewable Energy Planning, Policy, and Investment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, Sarah L

    Reliable, robust, and validated data are critical for informed planning, policy development, and investment in the clean energy sector. The Renewable Energy (RE) Explorer was developed to support data-driven renewable energy analysis that can inform key renewable energy decisions globally. This document presents the types of geospatial and other data at the core of renewable energy analysis and decision making. Individual data sets used to inform decisions vary in spatial and temporal resolution, quality, and overall usefulness. From Data to Decisions, a complementary geospatial data and analysis decision guide, provides an in-depth view of these and other considerations to enable data-driven planning, policymaking, and investment. Data support a wide variety of renewable energy analyses and decisions, including technical and economic potential assessment, renewable energy zone analysis, grid integration, risk and resiliency identification, electrification, and distributed solar photovoltaic potential. This fact sheet provides information on the types of data that are important for renewable energy decision making using the RE Data Explorer or similar types of geospatial analysis tools.

  15. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    NASA Astrophysics Data System (ADS)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is the key component of many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. It is known that conventional bivariate distributions are often unable to model rainfall-runoff variables, because they either place constraints on the range of the dependence or impose a fixed form on the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The distribution derived can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of marginal distributions, which includes two steps, (a) using a nonparametric statistical approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from the small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of log-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that (1) the joint distribution derived successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions reveal the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution is an appropriate approach for capturing dependence structure that cannot be captured by conventional bivariate joint distributions. (Figures in the original: the joint rainfall-runoff entropy-based PDF with corresponding marginal PDFs and histogram for the W12 watershed, and the K-S test results and RMSE for the univariate distributions derived from the maximum-entropy joint distribution.)
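
    The first two stages, fitting marginals and assembling the moment constraints that the maximum-entropy joint density must honor, are sketched below on simulated, positively dependent rainfall-runoff pairs; step (iii), solving for the Lagrange multipliers of the resulting exponential-family joint density, is omitted. All numbers are illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      # Simulated paired event rainfall and runoff (mm) with positive dependence
      rain = stats.gamma.rvs(a=2.0, scale=15.0, size=300, random_state=rng)
      runoff = 0.3 * rain + stats.gamma.rvs(a=1.5, scale=3.0, size=300, random_state=rng)

      # Step (i): fit a candidate parametric marginal and check it with a K-S test
      for name, sample in [("rainfall", rain), ("runoff", runoff)]:
          a, loc, scale = stats.gamma.fit(sample, floc=0)
          ks = stats.kstest(sample, "gamma", args=(a, loc, scale))
          print(f"{name}: gamma(a={a:.2f}, scale={scale:.2f}), K-S p={ks.pvalue:.3f}")

      # Step (ii): the sample constraints defining the entropy-based joint density
      constraints = {"E[R]": rain.mean(), "E[Q]": runoff.mean(),
                     "E[ln R]": np.log(rain).mean(), "E[ln Q]": np.log(runoff).mean(),
                     "cov(R,Q)": np.cov(rain, runoff)[0, 1]}
      print({k: round(val, 3) for k, val in constraints.items()})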

  16. Calculated spanwise lift distributions, influence functions, and influence coefficients for unswept wings in subsonic flow

    NASA Technical Reports Server (NTRS)

    Diederich, Franklin W; Zlotnick, Martin

    1955-01-01

    Spanwise lift distributions have been calculated for nineteen unswept wings with various aspect ratios and taper ratios and with a variety of angle-of-attack or twist distributions, including flap and aileron deflections, by means of the Weissinger method with eight control points on the semispan. Also calculated were aerodynamic influence coefficients which pertain to a certain definite set of stations along the span, and several methods are presented for calculating aerodynamic influence functions and coefficients for stations other than those stipulated. The information presented in this report can be used in the analysis of untwisted wings or wings with known twist distributions, as well as in aeroelastic calculations involving initially unknown twist distributions.

  17. Tests for informative cluster size using a novel balanced bootstrap scheme.

    PubMed

    Nevalainen, Jaakko; Oja, Hannu; Datta, Somnath

    2017-07-20

    Clustered data are often encountered in biomedical studies, and to date, a number of approaches have been proposed to analyze such data. However, the phenomenon of informative cluster size (ICS) is a challenging problem, and its presence has an impact on the choice of a correct analysis methodology. For example, Dutta and Datta (2015, Biometrics) presented a number of marginal distributions that could be tested. Depending on the nature and degree of informativeness of the cluster size, these marginal distributions may differ, as do the choices of the appropriate test. In particular, they applied their new test to a periodontal data set where the plausibility of informativeness was mentioned, but no formal test for it was conducted. We propose bootstrap tests for testing the presence of ICS. A balanced bootstrap method is developed to successfully estimate the null distribution by merging the re-sampled observations with closely matching counterparts. Relying on the assumption of exchangeability within clusters, the proposed procedure performs well in simulations even with a small number of clusters, at different distributions and against different alternative hypotheses, thus making it an omnibus test. We also explain how to extend the ICS test to a regression setting, thereby enhancing its practical utility. The methodologies are illustrated using the periodontal data set mentioned earlier. Copyright © 2017 John Wiley & Sons, Ltd.
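
    The target of the test can be made concrete with a naive permutation analogue: break the pairing between cluster sizes and within-cluster means and see whether the observed association is extreme under the null. This is only a hedged illustration of what informative cluster size means, not the authors' balanced bootstrap scheme. Data are simulated so that larger clusters have lower means.

      import numpy as np

      rng = np.random.default_rng(12)
      sizes = rng.integers(2, 12, size=60)
      clusters = [rng.normal(loc=5.0 - 0.2 * n, scale=1.0, size=n) for n in sizes]
      means = np.array([c.mean() for c in clusters])

      # Observed association between cluster size and within-cluster mean
      t_obs = np.corrcoef(sizes, means)[0, 1]

      # Null distribution: permute the size/mean pairing and recompute the statistic
      t_null = np.array([np.corrcoef(rng.permutation(sizes), means)[0, 1]
                         for _ in range(5000)])
      p_value = np.mean(np.abs(t_null) >= abs(t_obs))
      print(f"corr(size, mean) = {t_obs:.3f}, p = {p_value:.4f}")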

  18. In situ gene conservation of six conifers in western Washington and Oregon.

    Treesearch

    S.R. Lipow; K. Vance-Borland; J.B. St. Clair; J.A. Henderson; C. McCain

    2007-01-01

    A gap analysis was conducted to evaluate the extent to which genetic resources are conserved in situ in protected areas for six species of conifers in the Pacific Northwest. The gap analysis involved producing a geographic information system (GIS) detailing the location of protected areas and the distribution and abundance of tree species, as inferred from data on...

  19. Forest Fire History... A Computer Method of Data Analysis

    Treesearch

    Romain M. Meese

    1973-01-01

    A series of computer programs is available to extract information from the individual Fire Reports (U.S. Forest Service Form 5100-29). The programs use a statistical technique to fit a continuous distribution to a set of sampled data. The goodness-of-fit program is applicable to data other than the fire history. Data summaries illustrate analysis of fire occurrence,...

  20. Characterization of winemaking yeast by cell number-size distribution analysis through flow field-flow fractionation with multi-wavelength turbidimetric detection.

    PubMed

    Zattoni, Andrea; Melucci, Dora; Reschiglian, Pierluigi; Sanz, Ramsés; Puignou, Lluís; Galceran, Maria Teresa

    2004-10-29

    Yeasts are widely used in several areas of the food industry, e.g., baking, beer brewing, and wine production. Interest in new analytical methods for quality control and characterization of yeast cells is thus increasing. The biophysical properties of yeast cells, among them cell size, are related to the capability of yeast cells to produce primary and secondary metabolites during the fermentation process. Biophysical properties of winemaking yeast strains can be screened by field-flow fractionation (FFF). In this work we present the use of flow FFF (FlFFF) with turbidimetric multi-wavelength detection for the number-size distribution analysis of different commercial winemaking yeast varieties. The use of a diode-array detector allows the recently developed method for number-size (or mass-size) analysis in flow-assisted separation techniques to be applied to dispersed samples like yeast cells. Results for six commercial winemaking yeast strains are compared with data obtained by a standard method for cell sizing (Coulter counter). The method proposed here gives, within a short analysis time, accurate information on the number of cells of a given size and on the total number of cells.

  1. EXIMS: an improved data analysis pipeline based on a new peak picking method for EXploring Imaging Mass Spectrometry data.

    PubMed

    Wijetunge, Chalini D; Saeed, Isaam; Boughton, Berin A; Spraggins, Jeffrey M; Caprioli, Richard M; Bacic, Antony; Roessner, Ute; Halgamuge, Saman K

    2015-10-01

    Matrix Assisted Laser Desorption Ionization-Imaging Mass Spectrometry (MALDI-IMS) in 'omics' data acquisition generates detailed information about the spatial distribution of molecules in a given biological sample. Various data processing methods have been developed for exploring the resultant high-volume data. However, most of these methods process data in the spectral domain and do not make the most of the important spatial information available through this technology. Therefore, we propose a novel streamlined data analysis pipeline specifically developed for MALDI-IMS data that utilizes significant spatial information to identify hidden significant molecular distribution patterns in these complex datasets. The proposed unsupervised algorithm uses Sliding Window Normalization (SWN) and a new spatial-distribution-based peak picking method developed on the basis of Gray level Co-Occurrence (GCO) matrices, followed by clustering of biomolecules. We also use gist descriptors and an improved version of GCO matrices to extract features from molecular images, and minimum medoid distance to automatically estimate the number of possible groups. We evaluated our algorithm using a new MALDI-IMS metabolomics dataset of a plant (Eucalypt) leaf. The algorithm revealed hidden significant molecular distribution patterns in the dataset, which the current Component Analysis and Segmentation Map based approaches failed to extract. We further demonstrate the performance of our peak picking method over other traditional approaches using a publicly available MALDI-IMS proteomics dataset of a rat brain. Although SWN did not show any significant improvement compared with using no normalization, visual assessment showed an improvement compared to median normalization. The source code and sample data are freely available at http://exims.sourceforge.net/. Contact: awgcdw@student.unimelb.edu.au or chalini_w@live.com. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
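
    The spatial ingredient of the peak picking can be illustrated with gray-level co-occurrence features: an ion image with coherent spatial structure yields different GLCM statistics than diffuse noise, which is the kind of cue used to rank peaks. The sketch uses scikit-image's graycomatrix on two fabricated 64x64 "ion images"; it is not the EXIMS pipeline itself.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      rng = np.random.default_rng(13)
      # Two fabricated single-ion images: a localized distribution vs. diffuse noise
      structured = np.zeros((64, 64), dtype=np.uint8)
      structured[16:48, 16:48] = 200
      structured += rng.integers(0, 30, (64, 64), dtype=np.uint8)
      diffuse = rng.integers(0, 230, (64, 64), dtype=np.uint8)

      def glcm_features(img):
          """Texture statistics from a gray-level co-occurrence matrix."""
          glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                              symmetric=True, normed=True)
          return {p: round(float(graycoprops(glcm, p)[0, 0]), 4)
                  for p in ("contrast", "homogeneity", "energy")}

      # Structured images separate cleanly from noise on these features
      print("structured:", glcm_features(structured))
      print("diffuse:   ", glcm_features(diffuse))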

  2. Digital Libraries and the Problem of Purpose [and] On DigiPaper and the Dissemination of Electronic Documents [and] DFAS: The Distributed Finding Aid Search System [and] Best Practices for Digital Archiving: An Information Life Cycle Approach [and] Mapping and Converting Essential Federal Geographic Data Committee (FGDC) Metadata into MARC21 and Dublin Core: Towards an Alternative to the FGDC Clearinghouse [and] Evaluating Website Modifications at the National Library of Medicine through Search Log analysis.

    ERIC Educational Resources Information Center

    Levy, David M.; Huttenlocher, Dan; Moll, Angela; Smith, MacKenzie; Hodge, Gail M.; Chandler, Adam; Foley, Dan; Hafez, Alaaeldin M.; Redalen, Aaron; Miller, Naomi

    2000-01-01

    Includes six articles focusing on the purpose of digital public libraries; encoding electronic documents through compression techniques; a distributed finding-aid server; digital archiving practices in the framework of information life cycle management; converting FGDC metadata into MARC21 and Dublin Core formats; and evaluating Web sites through…

  3. 88 hours: the U.S. Geological Survey National Earthquake Information Center response to the March 11, 2011 Mw 9.0 Tohoku earthquake

    USGS Publications Warehouse

    Wald, David J.; Hayes, Gavin P.; Benz, Harley M.; Earle, Paul S.; Briggs, Richard W.

    2011-01-01

    The M 9.0 11 March 2011 Tohoku, Japan, earthquake and associated tsunami near the east coast of the island of Honshu caused tens of thousands of deaths and potentially over one trillion dollars in damage, resulting in one of the worst natural disasters ever recorded. The U.S. Geological Survey National Earthquake Information Center (USGS NEIC), through its responsibility to respond to all significant global earthquakes as part of the National Earthquake Hazards Reduction Program, quickly produced and distributed a suite of earthquake information products to inform emergency responders, the public, the media, and the academic community of the earthquake's potential impact and to provide scientific background for the interpretation of the event's tectonic context and potential for future hazard. Here we present a timeline of the NEIC response to this devastating earthquake in the context of rapidly evolving information emanating from the global earthquake-response community. The timeline includes both internal and publicly distributed products, the relative timing of which highlights the inherent tradeoffs between the requirement to provide timely alerts and the necessity for accurate, authoritative information. The timeline also documents the iterative and evolutionary nature of the standard products produced by the NEIC and includes a behind-the-scenes look at the decisions, data, and analysis tools that drive our rapid product distribution.

  4. The economics of biobanking and pharmacogenetics databasing: the case of an adaptive platform on breast cancer.

    PubMed

    Huttin, Christine C; Liebman, Michael N

    2013-01-01

    This paper aims to discuss the economics of biobanking. Critical issues in evaluating the potential ROI for creating a biobank include scale (e.g., local, national, international), centralized versus virtual/distributed structure, the degree of sample annotation and QC procedures, targeted end-users and uses, types of samples, and the potential characterization of both samples and annotations. The paper reviews cost models for an economic analysis of the different steps of biobanking: data collection (biospecimens in different types of sites), storage, transport and distribution, and information management for the different types of information (e.g., biological information such as cell, gene, and protein data). It also provides additional concepts for moving biospecimens from the laboratory to clinical practice and will help identify how changing paradigms in translational medicine affect economic modeling.

  5. Topic Time Series Analysis of Microblogs

    DTIC Science & Technology

    2014-10-01

    network, may be closer to a media distribution site, where the media is user produced [14]. Analysis of the text content includes both general models as... is generated by Instagram. Topic 80, Distance: 143.2101. Top words: 1. rawr 2. ˆ0ˆ 3. kill 4. jurassic 5. dinosaur. Analysis: This topic is quite... data, lack of reliable event information, hidden temporal trends, and the vastly diverse nature of content. In the present work, we examine spatio...

  6. Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, X.; Suganuma, Y.; Fujii, M.

    2017-12-01

    Geological samples often contain several magnetic components with distinct origins. Because these components are often indicative of underlying geological and environmental processes, it is desirable to identify individual components and extract the associated information. This component analysis can be achieved with the so-called unmixing method, which fits a mixture model of a chosen end-member model distribution to the measured remanent magnetization curve. Earlier studies used the lognormal, skew generalized Gaussian, and skewed Gaussian distributions as end-member model distributions, applied to the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, since differentiating the measured curve amplifies noise, which can deteriorate the component analysis. Although smoothing or filtering can be applied to reduce the noise before differentiation, their biasing effect on the component analysis has been only vaguely addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curve, thereby avoiding the differentiation. The new model function provides a more flexible shape than the lognormal distribution, a merit for modeling the coercivity distribution of complex magnetic components. We applied the unmixing method to both model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability, and limitations. The analyses of model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components exceeds two. It is therefore recommended to verify the reliability of a component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. For the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, freeing users from the labor of providing initial parameter guesses, which also helps reduce the subjectivity of component analysis.
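
    For readers unfamiliar with unmixing, the classical baseline the abstract refers to, fitting a sum of lognormal end-members to the gradient of a remanent magnetization curve, can be sketched in a few lines. This is a minimal illustration with synthetic data, not the paper's new model function (which is not specified in the abstract).

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_component(logB, amp, mu, sigma):
    """One end-member coercivity component on a log10(field) axis."""
    return amp * np.exp(-0.5 * ((logB - mu) / sigma) ** 2)

def two_component_model(logB, a1, m1, s1, a2, m2, s2):
    return (lognormal_component(logB, a1, m1, s1)
            + lognormal_component(logB, a2, m2, s2))

# Synthetic gradient curve (dM/dlogB) of a remanence acquisition with
# two magnetic components plus measurement noise.
rng = np.random.default_rng(1)
logB = np.linspace(0.5, 3.0, 120)          # log10 of field in mT
truth = two_component_model(logB, 1.0, 1.4, 0.25, 0.4, 2.3, 0.20)
data = truth + rng.normal(0, 0.02, logB.size)

p0 = [0.8, 1.2, 0.3, 0.3, 2.0, 0.3]        # initial guesses
popt, pcov = curve_fit(two_component_model, logB, data, p0=p0)
print("fitted (amp, mu, sigma) per component:")
print(np.round(popt, 2).reshape(2, 3))
```

    Re-running such a fit with fresh synthetic noise, as the abstract recommends, is a quick way to see how unstable the recovered parameters become once a third component is added.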

  7. A Transparent Translation from Legacy System Model into Common Information Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Simpson, Jeffrey; Zhang, Yingchen

    Advances in the smart grid are pushing utilities toward better monitoring, control, and analysis of distribution systems, and require extensive cyber-based intelligent systems and applications to realize various functionalities. The ability of systems, or components within systems, to interact and exchange services or information with each other is the key to the success of smart grid technologies, and it requires an efficient information exchange and data sharing infrastructure. The Common Information Model (CIM) is a standard that allows different applications to exchange information about an electrical system, and it has become a widely accepted solution for information exchange among different platforms and applications. However, most existing legacy systems were not developed using CIM, but using their own languages. Integrating such legacy systems is a challenge for utilities, and the appropriate utilization of the integrated legacy systems is even more intricate. This paper therefore develops an approach and an open-source tool to translate legacy system models into CIM format. The developed tool is tested on a commercial distribution management system, and simulation results prove its effectiveness.

  8. Using Dedal to share and reuse distributed engineering design information

    NASA Technical Reports Server (NTRS)

    Baya, Vinod; Baudin, Catherine; Mabogunje, Ade; Das, Aseem; Cannon, David M.; Leifer, Larry J.

    1994-01-01

    The overall goal of the project is to facilitate the reuse of previous design experience for the maintenance, repair, and redesign of artifacts in the electromechanical engineering domain. An engineering team creates information in the form of meeting summaries, project memos, progress reports, engineering notes, spreadsheet calculations, and CAD drawings. Design information captured in these media is difficult to reuse because the way design concepts are referred to evolves over the life of a project, and because decisions, requirements, and structure are interrelated but rarely explicitly linked. Based on protocol analysis of the information-seeking behavior of designers, we defined a language to describe the content and form of design records and implemented this language in Dedal, a tool for indexing, modeling, and retrieving design information. We first describe the approach to indexing and retrieval in Dedal. Next we describe ongoing work in extending Dedal's capabilities to a distributed environment by integrating it with the World Wide Web. This will enable members of a design team who are not co-located to share and reuse information.

  9. Conducting Privacy-Preserving Multivariable Propensity Score Analysis When Patient Covariate Information Is Stored in Separate Locations.

    PubMed

    Bohn, Justin; Eddings, Wesley; Schneeweiss, Sebastian

    2017-03-15

    Distributed networks of health-care data sources are increasingly being utilized to conduct pharmacoepidemiologic database studies. Such networks may contain data that are not physically pooled but instead are distributed horizontally (separate patients within each data source) or vertically (separate measures within each data source) in order to preserve patient privacy. While multivariable methods for the analysis of horizontally distributed data are frequently employed, few practical approaches have been put forth to deal with vertically distributed health-care databases. In this paper, we propose 2 propensity score-based approaches to vertically distributed data analysis and test their performance using 5 example studies. We found that these approaches produced point estimates close to what could be achieved without partitioning. We further found a performance benefit (i.e., lower mean squared error) for sequentially passing a propensity score through each data domain (called the "sequential approach") as compared with fitting separate domain-specific propensity scores (called the "parallel approach"). These results were validated in a small simulation study. This proof-of-concept study suggests a new multivariable analysis approach to vertically distributed health-care databases that is practical, preserves patient privacy, and warrants further investigation for use in clinical research applications that rely on health-care databases.
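
    The "sequential approach" described above can be sketched as follows: each data domain sees only its own covariates plus the propensity score passed along from the previous domain. This is a hedged toy illustration with synthetic data and a logistic propensity model; the paper's exact estimation details are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
# Vertically partitioned covariates: domain A and domain B each hold
# different columns for the same patients.
X_a = rng.normal(size=(n, 3))
X_b = rng.normal(size=(n, 2))
logit = 0.8 * X_a[:, 0] - 0.5 * X_b[:, 1]
treat = rng.random(n) < 1 / (1 + np.exp(-logit))

# Sequential approach: fit a propensity score in domain A, then send
# only that score (not the covariates) to domain B, which refines it.
ps_a = LogisticRegression().fit(X_a, treat).predict_proba(X_a)[:, 1]
Z_b = np.column_stack([ps_a, X_b])
ps_ab = LogisticRegression().fit(Z_b, treat).predict_proba(Z_b)[:, 1]

# Parallel approach for comparison: a separate domain-specific score.
ps_b = LogisticRegression().fit(X_b, treat).predict_proba(X_b)[:, 1]

print("sequential PS range:", ps_ab.min().round(2), "-", ps_ab.max().round(2))
print("corr(sequential, parallel-B):", np.corrcoef(ps_ab, ps_b)[0, 1].round(2))
```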

  10. Benchmark experiments at ASTRA facility on definition of space distribution of ²³⁵U fission reaction rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.

    2012-07-01

    Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of experiments on the definition of the space distribution of the ²³⁵U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all criticality experiments at these five configurations are acceptable for use as critical benchmark experiments. All experiments on the definition of the space distribution of the ²³⁵U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)

  11. MNE software for processing MEG and EEG data

    PubMed Central

    Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
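
    MNE is distributed as the open-source MNE-Python package. A minimal sketch of a typical raw-to-evoked workflow is shown below; the file name and event code are placeholders for an actual recording, and the details (filter band, epoch window) are illustrative rather than recommended settings.

```python
import mne

# Load raw MEG/EEG data (path is a placeholder for your own recording).
raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)

# Band-pass filter as a basic preprocessing step.
raw.filter(l_freq=1.0, h_freq=40.0)

# Find stimulus events and epoch around them (event id 1 is a placeholder).
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"stim": 1},
                    tmin=-0.2, tmax=0.5, baseline=(None, 0))

# Average over epochs to get the evoked response.
evoked = epochs["stim"].average()
print(evoked)
```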

  12. Extracting chemical information from high-resolution Kβ X-ray emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Limandri, S.; Robledo, J.; Tirao, G.

    2018-06-01

    High-resolution X-ray emission spectroscopy allows studying the chemical environment of a wide variety of materials. Chemical information can be obtained by fitting the X-ray spectra and observing the behavior of certain spectral features. Spectral changes can also be quantified by means of statistical parameters calculated by treating the spectrum as a probability distribution. Another possibility is to perform multivariate statistical analysis, such as principal component analysis. In this work, the performance of these procedures for extracting chemical information from X-ray emission spectra of mixtures of Mn2+ and Mn4+ oxides is studied. A detailed analysis of the obtained parameters, as well as the associated uncertainties, is shown. The methodologies are also applied to the Mn oxidation state characterization of the double perovskite oxides Ba1+xLa1-xMnSbO6 (with 0 ≤ x ≤ 0.7). The results show that statistical parameters and multivariate analysis are the most suitable for the analysis of this kind of spectra.

  13. Using task analysis to understand the Data System Operations Team

    NASA Technical Reports Server (NTRS)

    Holder, Barbara E.

    1994-01-01

    The Data Systems Operations Team (DSOT) currently monitors the Multimission Ground Data System (MGDS) at JPL. The MGDS currently supports five spacecraft and within the next five years, it will support ten spacecraft simultaneously. The ground processing element of the MGDS consists of a distributed UNIX-based system of over 40 nodes and 100 processes. The MGDS system provides operators with little or no information about the system's end-to-end processing status or end-to-end configuration. The lack of system visibility has become a critical issue in the daily operation of the MGDS. A task analysis was conducted to determine what kinds of tools were needed to provide DSOT with useful status information and to prioritize the tool development. The analysis provided the formality and structure needed to get the right information exchange between development and operations. How even a small task analysis can improve developer-operator communications is described, and the challenges associated with conducting a task analysis in a real-time mission operations environment are examined.

  14. The global impact distribution of Near-Earth objects

    NASA Astrophysics Data System (ADS)

    Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.

    2016-02-01

    Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced, and the tool is validated against two asteroid-Earth collision cases, objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted with the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of a uniform impact distribution, and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking the impact probabilities into account introduces significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and thus represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because they represent the best estimate of where an impact might occur.

  15. In Situ Three-Dimensional Reciprocal-Space Mapping of Diffuse Scattering Intensity Distribution and Data Analysis for Precursor Phenomenon in Shape-Memory Alloy

    NASA Astrophysics Data System (ADS)

    Cheng, Tian-Le; Ma, Fengde D.; Zhou, Jie E.; Jennings, Guy; Ren, Yang; Jin, Yongmei M.; Wang, Yu U.

    2012-01-01

    Diffuse scattering contains rich information on various structural disorders, thus providing a useful means to study the nanoscale structural deviations from the average crystal structures determined by Bragg peak analysis. Extraction of maximal information from diffuse scattering requires concerted efforts in high-quality three-dimensional (3D) data measurement, quantitative data analysis and visualization, theoretical interpretation, and computer simulations. Such an endeavor is undertaken to study the correlated dynamic atomic position fluctuations caused by thermal vibrations (phonons) in precursor state of shape-memory alloys. High-quality 3D diffuse scattering intensity data around representative Bragg peaks are collected by using in situ high-energy synchrotron x-ray diffraction and two-dimensional digital x-ray detector (image plate). Computational algorithms and codes are developed to construct the 3D reciprocal-space map of diffuse scattering intensity distribution from the measured data, which are further visualized and quantitatively analyzed to reveal in situ physical behaviors. Diffuse scattering intensity distribution is explicitly formulated in terms of atomic position fluctuations to interpret the experimental observations and identify the most relevant physical mechanisms, which help set up reduced structural models with minimal parameters to be efficiently determined by computer simulations. Such combined procedures are demonstrated by a study of phonon softening phenomenon in precursor state and premartensitic transformation of Ni-Mn-Ga shape-memory alloy.

  16. Automation for deep space vehicle monitoring

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.

    1991-01-01

    Information on automation for deep space vehicle monitoring is given in viewgraph form. Information is given on automation goals and strategy; the Monitor Analyzer of Real-time Voyager Engineering Link (MARVEL); intelligent input data management; decision theory for making tradeoffs; dynamic tradeoff evaluation; evaluation of anomaly detection results; evaluation of data management methods; system level analysis with cooperating expert systems; the distributed architecture of multiple expert systems; and event driven response.

  17. Spatial and Temporal Patterns of the Movement of Seasonal Agricultural Migrant Children into Wisconsin, Educational Programs for Children of Migratory Agricultural Workers in Wisconsin, Report 2.

    ERIC Educational Resources Information Center

    Lindsey, Herbert H.; And Others

    Useful means of anticipating the movements of migrant children include analysis of the crops whose harvesting requires out-of-state workers, distributional maps of crop acreage, normal time schedules for crops, and information on agricultural developments. Such information assists in the planning of school programs. In Wisconsin, most migrant…

  18. The pharmacokinetics of dexmedetomidine during long-term infusion in critically ill pediatric patients. A Bayesian approach with informative priors.

    PubMed

    Wiczling, Paweł; Bartkowska-Śniatkowska, Alicja; Szerkus, Oliwia; Siluk, Danuta; Rosada-Kurasińska, Jowita; Warzybok, Justyna; Borsuk, Agnieszka; Kaliszan, Roman; Grześkowiak, Edmund; Bienert, Agnieszka

    2016-06-01

    The purpose of this study was to assess the pharmacokinetics of dexmedetomidine in the ICU setting during prolonged infusion and to compare it with existing literature data using Bayesian population modeling with literature-based informative priors. Thirty-eight patients were included in the analysis, with concentration measurements obtained on two occasions: first from 0 to 24 h after infusion initiation, and second from 0 to 8 h after infusion end. Data analysis was conducted using the WinBUGS software. The prior information on dexmedetomidine pharmacokinetics was elicited from a literature study pooling results from a relatively large group of 95 children. A two-compartment PK model with allometrically scaled parameters, maturation of clearance, and a Student's t residual distribution on the log scale was used to describe the data. The incorporation of time-dependent (different between the two occasions) PK parameters improved the model. The volume of distribution was observed to be 1.5-fold higher on the second occasion. There was also evidence of increased (1.3-fold) clearance on the second occasion, with a posterior probability of 62%. This work demonstrates the usefulness of Bayesian modeling with informative priors for analyzing pharmacokinetic data and comparing it with existing literature knowledge.
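
    The full two-compartment WinBUGS model is beyond a snippet, but the core idea, a literature-based informative prior updated by new ICU data, can be illustrated with a conjugate normal-normal update on a log-scale clearance parameter. All numbers below are invented for illustration and are not from the paper.

```python
import numpy as np

# Informative prior on log-clearance elicited from a literature study
# (values are illustrative, not from the paper).
prior_mu, prior_sd = np.log(40.0), 0.20      # CL ~ lognormal

# New ICU data: individual log-clearance estimates (synthetic).
rng = np.random.default_rng(3)
obs = rng.normal(np.log(50.0), 0.30, size=38)
obs_sd = 0.30

# Conjugate normal-normal update on the log scale: precisions add,
# and the posterior mean is a precision-weighted average.
prec_prior = 1 / prior_sd**2
prec_data = obs.size / obs_sd**2
post_mu = (prec_prior * prior_mu + prec_data * obs.mean()) / (prec_prior + prec_data)
post_sd = (prec_prior + prec_data) ** -0.5

print("prior     CL:", round(float(np.exp(prior_mu)), 1))
print("posterior CL:", round(float(np.exp(post_mu)), 1),
      "(sd on log scale:", round(post_sd, 3), ")")
```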

  19. Generative models for discovering sparse distributed representations.

    PubMed Central

    Hinton, G E; Ghahramani, Z

    1997-01-01

    We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. PMID:9304685

  20. Fragmentation Data Analysis. I. Computer Program for Mass and Number Distributions and Effects of Errors on Mass Distributions

    DTIC Science & Technology

    1974-11-01

    [OCR residue of the report's FORTRAN I/O listing and distribution list; no abstract text is recoverable.]

  1. Distributed collaborative environments for virtual capability-based planning

    NASA Astrophysics Data System (ADS)

    McQuay, William K.

    2003-09-01

    Distributed collaboration is an emerging technology that will significantly change how decisions are made in the 21st century. Collaboration involves two or more geographically dispersed individuals working together to share and exchange data, information, knowledge, and actions. The marriage of information, collaboration, and simulation technologies provides the decision maker with a collaborative virtual environment for planning and decision support. This paper reviews research on applying an open-standards, agent-based framework with integrated modeling and simulation to a new Air Force initiative in capability-based planning, and on the ability to implement it in a distributed virtual environment. The Virtual Capability Planning effort will provide decision-quality knowledge for Air Force resource allocation and investment planning, including examination of proposed capabilities and the cost of alternative approaches, the impact of technologies, identification of primary risk drivers, and creation of executable acquisition strategies. The transformed Air Force business processes are enabled by iterative use of constructive and virtual modeling, simulation, and analysis together with information technology. These tools are applied collaboratively, via a technical framework, by all the affected stakeholders: warfighter, laboratory, product center, logistics center, test center, and prime contractor.

  2. Unconditional security of time-energy entanglement quantum key distribution using dual-basis interferometry.

    PubMed

    Zhang, Zheshen; Mower, Jacob; Englund, Dirk; Wong, Franco N C; Shapiro, Jeffrey H

    2014-03-28

    High-dimensional quantum key distribution (HDQKD) offers the possibility of a high secure-key rate with high photon-information efficiency. We consider HDQKD based on the time-energy entanglement produced by spontaneous parametric down-conversion and show that it is secure against collective attacks. Its security rests upon visibility data, obtained from Franson and conjugate-Franson interferometers, that probe photon-pair frequency correlations and arrival-time correlations. From these measurements, an upper bound can be established on the eavesdropper's Holevo information by translating the Gaussian-state security analysis for continuous-variable quantum key distribution so that it applies to our protocol. We show that visibility data from just the Franson interferometer provides a weaker, but nonetheless useful, secure-key rate lower bound. To handle multiple-pair emissions, we incorporate the decoy-state approach into our protocol. Our results show that over a 200-km transmission distance in optical fiber, time-energy entanglement HDQKD could permit a 700-bit/s secure-key rate and a photon information efficiency of 2 secure-key bits per photon coincidence in the key-generation phase, using receivers with a 15% system efficiency.

  3. PORE SPACE ANALYSIS OF NAPL DISTRIBUTION IN SAND-CLAY MEDIA. (R827120)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  4. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has vast potential for applications in economics and management. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approaches in the literature, which use only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  5. Item response theory analysis applied to the Spanish version of the Personal Outcomes Scale.

    PubMed

    Guàrdia-Olmos, J; Carbó-Carreté, M; Peró-Cebollero, M; Giné, C

    2017-11-01

    The study of measurements of quality of life (QoL) is one of the great challenges of modern psychology and psychometrics. This issue takes on greater importance when examining QoL in populations that were historically treated on the basis of their deficiency and for which the focus has recently shifted to what each person values and desires in life, as in the case of people with intellectual disability (ID). Many studies of QoL scales in this area have attempted to improve the validity and reliability of their components by incorporating various sources of information to achieve consistency in the data obtained. The Spanish adaptation of the Personal Outcomes Scale (POS) has shown excellent psychometric attributes, and its administration draws on three sources of information: self-assessment, practitioner, and family. Studying the congruence or incongruence of the observed distributions of each item between sources is therefore essential to ensure a correct interpretation of the measure. The aim of this paper was to analyse the observed distribution of items and dimensions from the three Spanish POS information sources cited earlier, using item response theory. We studied a sample of 529 people with ID and their respective practitioners and family members, and in each case we analysed items and factors using Samejima's model for polytomous ordinal scales. The results indicated a substantial number of items with differential effects across sources and, in some cases, significant differences in the distribution of items, factors, and sources of information. On the basis of this analysis, we conclude that the administration of the POS with three sources of information was adequate overall, but a correct interpretation of the results requires considerably more information to be taken into account, including some specific items in specific dimensions. Unless these considerations are taken into account, the overall ratings could be biased.
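
    Samejima's graded response model, used in the analysis above, defines category probabilities as differences of cumulative logistic curves. A minimal numpy sketch with an invented item follows; the discrimination and threshold values are illustrative only.

```python
import numpy as np

def graded_response_probs(theta, a, b):
    """Samejima's graded response model for one polytomous item.

    theta: latent trait values; a: discrimination; b: ordered category
    thresholds. Returns P(response = k) for each category.
    """
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    # Cumulative curves P*(k) = P(response >= k); P*(0) = 1, P*(K+1) = 0.
    pstar = 1 / (1 + np.exp(-a * (theta[:, None] - np.asarray(b)[None, :])))
    ones = np.ones((theta.size, 1))
    zeros = np.zeros((theta.size, 1))
    cum = np.hstack([ones, pstar, zeros])
    return cum[:, :-1] - cum[:, 1:]   # category probabilities, rows sum to 1

# Illustrative item with 4 ordered categories (3 thresholds).
probs = graded_response_probs(theta=[-1.0, 0.0, 1.5],
                              a=1.7, b=[-0.8, 0.3, 1.2])
print(np.round(probs, 3))
```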

  6. The Impacts of Information-Sharing Mechanisms on Spatial Market Formation Based on Agent-Based Modeling

    PubMed Central

    Li, Qianqian; Yang, Tao; Zhao, Erbo; Xia, Xing’ang; Han, Zhangang

    2013-01-01

    There has been an increasing interest in the geographic aspects of economic development, exemplified by P. Krugman’s logical analysis. We show in this paper that the geographic aspects of economic development can be modeled using multi-agent systems that incorporate multiple underlying factors. The extent of information sharing is assumed to be a driving force that leads to economic geographic heterogeneity across locations without geographic advantages or disadvantages. We propose an agent-based market model that considers a spectrum of different information-sharing mechanisms: no information sharing, information sharing among friends and pheromone-like information sharing. Finally, we build a unified model that accommodates all three of these information-sharing mechanisms based on the number of friends who can share information. We find that the no information-sharing model does not yield large economic zones, and more information sharing can give rise to a power-law distribution of market size that corresponds to the stylized fact of city size and firm size distributions. The simulations show that this model is robust. This paper provides an alternative approach to studying economic geographic development, and this model could be used as a test bed to validate the detailed assumptions that regulate real economic agglomeration. PMID:23484007

  7. Local active information storage as a tool to understand distributed neural information processing

    PubMed Central

    Wibral, Michael; Lizier, Joseph T.; Vögler, Sebastian; Priesemann, Viola; Galuske, Ralf

    2013-01-01

    Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this is easily done for today's digital computers, the application of these concepts to neural information processing was hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these information processing operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage (LAIS) was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt to measure the space-time dynamics of local active information storage in neural data has been made to date. Here we measure local active information storage on a local scale in time and space in voltage-sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and in area 18 reflects the abstract concept of an ongoing stimulus despite the locally random nature of this stimulus. We suggest that LAIS will be a useful quantity for testing theories of cortical function, such as predictive coding. PMID:24501593
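
    As a rough illustration of the quantity being measured, local active information storage at time t compares the joint probability of a past state and the next symbol against the product of their marginals. The plug-in estimator below, for a discrete toy series with history length k, is a simplified sketch (no bias correction, no embedding optimization), not the estimator used in the study.

```python
import numpy as np
from collections import Counter

def local_active_info_storage(x, k=2):
    """Local AIS a(t) = log2 [ p(past, x_t) / (p(past) p(x_t)) ] per step.

    Plug-in estimator for a discrete series; no bias correction.
    """
    x = list(x)
    pasts = [tuple(x[t - k:t]) for t in range(k, len(x))]
    nexts = x[k:]
    n = len(nexts)
    p_joint = Counter(zip(pasts, nexts))
    p_past = Counter(pasts)
    p_next = Counter(nexts)
    return np.array([
        np.log2(p_joint[(p, s)] * n / (p_past[p] * p_next[s]))
        for p, s in zip(pasts, nexts)
    ])

# A predictable (periodic) series stores more information than noise.
rng = np.random.default_rng(4)
periodic = [0, 1, 0, 1] * 250
random = rng.integers(0, 2, 1000).tolist()
print("mean LAIS, periodic:", local_active_info_storage(periodic).mean().round(3))
print("mean LAIS, random  :", local_active_info_storage(random).mean().round(3))
```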

  8. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then calibrated the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. Four other parameters, nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to the plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
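
    A variance-based global sensitivity analysis of this kind can be reproduced in outline with the SALib package. The sketch below uses a three-parameter toy stand-in for the carbon model; the parameter names follow the abstract, but the response function and bounds are invented for illustration.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for the carbon model: "NEE" as a function of 3 of the
# 18 parameters (names from the abstract; the response is invented).
problem = {
    "num_vars": 3,
    "names": ["LEAFALL", "NUE", "BR_MR"],
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}

def toy_nee(x):
    leafall, nue, br_mr = x.T
    return leafall**2 + 0.4 * nue + 0.2 * br_mr * nue

X = saltelli.sample(problem, 1024)      # Saltelli's sampling scheme
Y = toy_nee(X)
Si = sobol.analyze(problem, Y)          # first-order and total indices
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"first-order Sobol index {name}: {s1:.2f}")
```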

  9. Applications of statistical physics and information theory to the analysis of DNA sequences

    NASA Astrophysics Data System (ADS)

    Grosse, Ivo

    2000-10-01

    DNA carries the genetic information of most living organisms, and the goal of genome projects is to uncover that genetic information. One basic task in the analysis of DNA sequences is the recognition of protein-coding genes. Powerful computer programs for gene recognition have been developed, but most of them are based on statistical patterns that vary from species to species. In this thesis I address the question of whether there exist universal statistical patterns that differ between coding and noncoding DNA of all living species, regardless of their phylogenetic origin. In search of such species-independent patterns, I study the mutual information function of genomic DNA sequences and find that it shows persistent period-three oscillations. To understand the biological origin of the observed period-three oscillations, I compare the mutual information function of genomic DNA sequences to the mutual information function of stochastic model sequences. I find that the pseudo-exon model is able to reproduce the mutual information function of genomic DNA sequences. Moreover, I find that a generalization of the pseudo-exon model can connect the existence and the functional form of long-range correlations to the presence and the length distributions of coding and noncoding regions. Based on these theoretical studies, I identify an information-theoretical quantity, the average mutual information (AMI), whose probability distributions are significantly different in coding and noncoding DNA, while being almost identical in all studied species. These findings show that there exist universal statistical patterns that are different in coding and noncoding DNA of all studied species, and they suggest that the AMI may be used to identify genes in different living species, irrespective of their taxonomic origin.
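
    The mutual information function studied here, I(k) between nucleotides separated by k positions, is straightforward to estimate with plug-in frequencies. The sketch below applies it to a crude synthetic sequence with codon-position bias, which is enough to make a period-three signal appear; it is an illustration, not the thesis's estimator.

```python
import numpy as np
from collections import Counter

def mutual_information_function(seq, max_k=12):
    """I(k): mutual information between symbols k positions apart."""
    out = []
    for k in range(1, max_k + 1):
        pairs = Counter(zip(seq, seq[k:]))
        n = sum(pairs.values())
        left = Counter(seq[:-k])
        right = Counter(seq[k:])
        ik = sum(c / n * np.log2(c * n / (left[a] * right[b]))
                 for (a, b), c in pairs.items())
        out.append(ik)
    return np.array(out)

# A "coding-like" sequence with codon-position bias shows peaks of I(k)
# at k = 3, 6, 9, ...; a fully random sequence would not.
rng = np.random.default_rng(5)
codon_bias = [("G", "C", "A"), ("A", "T", "G"), ("C", "G", "T")]
seq = "".join(rng.choice(codon_bias[i % 3]) for i in range(30000))
print(np.round(mutual_information_function(seq, 9), 4))
```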

  10. Bivariate drought frequency analysis using the copula method

    NASA Astrophysics Data System (ADS)

    Mirabbasi, Rasoul; Fakheri-Fard, Ahmad; Dinpashoh, Yagob

    2012-04-01

    Droughts are major natural hazards with significant environmental and economic impacts. In this study, two-dimensional copulas were applied to the analysis of the meteorological drought characteristics of the Sharafkhaneh gauge station, located in the northwest of Iran. Two major drought characteristics, duration and severity, as defined by the standardized precipitation index, were abstracted from the observed drought events. Since drought duration and severity exhibited a significant correlation, and since they were modeled using different distributions, copulas were used to construct the joint distribution function of the drought characteristics. The copula parameter was estimated using the inference functions for margins (IFM) method. Several copulas were tested in order to determine the best fit to the data. According to the error analysis and the tail dependence coefficient, the Galambos copula provided the best fit for the observed drought data. Some bivariate probabilistic properties of droughts, based on the derived copula-based joint distribution, were also investigated. These probabilistic properties can provide useful information for water resource planning and management.
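
    Once a copula C(u, v) is fitted to duration and severity, joint return periods follow directly from it. The sketch below uses the Gumbel-Hougaard copula (a closed-form Archimedean relative of the Galambos copula selected in the study) with invented marginal probabilities and parameter values.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls dependence."""
    return np.exp(-(((-np.log(u)) ** theta
                     + (-np.log(v)) ** theta) ** (1 / theta)))

# Marginal non-exceedance probabilities of a drought with duration D and
# severity S (illustrative values, e.g. from fitted marginal distributions).
u = 0.90     # P(duration <= D)
v = 0.85     # P(severity <= S)
theta = 2.5  # illustrative dependence parameter
mu = 1.2     # mean inter-arrival time of drought events, years

# "OR" joint return period: duration or severity is exceeded.
t_or = mu / (1 - gumbel_copula(u, v, theta))
# "AND" joint return period: both are exceeded simultaneously.
t_and = mu / (1 - u - v + gumbel_copula(u, v, theta))
print(f"T_or = {t_or:.1f} years, T_and = {t_and:.1f} years")
```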

  11. Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize

    PubMed Central

    Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto

    2014-01-01

    The variance and performance of two sampling plans for aflatoxin quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using a sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and its sampling, preparation, and analysis components were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare the distributions of aflatoxin quantifications in the eight maize lots. The acceptance and rejection probabilities for a lot at a given aflatoxin concentration were determined using the variance and the selected distribution model to build operating characteristic (OC) curves. Sampling and total variance were lower for the automatic plan. The OC curve of the automatic plan reduced both consumer and producer risks in comparison with the manual plan. The automatic plan is more efficient than the manual one because it more accurately reflects the real aflatoxin contamination in maize. PMID:24948911

  12. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework.

    PubMed

    Briggs, Andrew H; Ades, A E; Price, Martin J

    2003-01-01

    In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes probabilistic sensitivity analysis, and a method is therefore required to generate probability distributions over multiple branches that appropriately represent uncertainty while satisfying the requirement that mutually exclusive event probabilities sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that adopting a Bayesian approach overcomes the problem of observing zero counts for transitions of interest.
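
    In practice the recommendation amounts to sampling branch probabilities from a Dirichlet posterior, which guarantees that each draw sums to 1 and, with a prior count added, keeps zero-observed transitions possible. A minimal numpy sketch with invented counts:

```python
import numpy as np

# Observed transition counts out of one Markov state; the third
# transition was never observed in the data.
counts = np.array([25, 10, 0])

# Dirichlet(prior + counts): a uniform prior of 1 per branch keeps the
# zero-count branch possible in the probabilistic sensitivity analysis.
alpha = counts + 1
rng = np.random.default_rng(6)
draws = rng.dirichlet(alpha, size=10000)

print("each draw sums to 1:", np.allclose(draws.sum(axis=1), 1))
print("mean branch probabilities:", draws.mean(axis=0).round(3))
```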

  13. Multivariate hydrological frequency analysis for extreme events using Archimedean copula. Case study: Lower Tunjuelo River basin (Colombia)

    NASA Astrophysics Data System (ADS)

    Gómez, Wilmar

    2017-04-01

    By analyzing the spatial and temporal variability of extreme precipitation events, we can prevent or reduce the associated threat and risk. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which cannot be assumed independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of the marginals via copulas. This document presents a general framework for bivariate and multivariate frequency analysis using Archimedean copulas for extreme hydroclimatological events such as severe storms. The analysis was conducted for precipitation events in the lower Tunjuelo River basin in Colombia. The results show that for a joint study of intensity-duration-frequency, IDF curves can be obtained through copulas, providing more accurate and reliable information on design storms and the associated risks. The analysis also shows how the use of copulas greatly simplifies the study of multivariate distributions and introduces the concept of the joint return period, which properly represents the needs of hydrological design in frequency analysis.

  14. A preliminary analysis of quantifying computer security vulnerability data in "the wild"

    NASA Astrophysics Data System (ADS)

    Farris, Katheryn A.; McNamara, Sean R.; Goldstein, Adam; Cybenko, George

    2016-05-01

    A system of computers, networks, and software has some level of vulnerability exposure that puts it at risk from criminal hackers. Presently, most vulnerability research uses data from software vendors and the National Vulnerability Database (NVD). We propose an alternative path forward by grounding our analysis in data from the operational information security community, i.e., vulnerability data from "the wild". In this paper, we propose a vulnerability data parsing algorithm and an in-depth univariate and multivariate analysis of the vulnerability arrival and deletion process (also referred to as the vulnerability birth-death process). We find that vulnerability arrivals are best characterized by the log-normal distribution and vulnerability deletions are best characterized by the exponential distribution. These distributions can serve as prior probabilities for future Bayesian analysis. We also find that over 22% of the deleted vulnerability data have a rate of zero, and that the vulnerability arrival data are always greater than zero. Finally, we quantify and visualize the dependencies between vulnerability arrivals and deletions through a bivariate scatterplot and statistical observations.
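
    The distribution-fitting step described above can be sketched with scipy: fit each candidate distribution by maximum likelihood and compare goodness of fit, here with a Kolmogorov-Smirnov statistic on synthetic stand-in data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic daily vulnerability arrival rates (a stand-in for real data).
arrivals = rng.lognormal(mean=2.0, sigma=0.6, size=500)

for name, dist in [("lognormal", stats.lognorm), ("exponential", stats.expon)]:
    params = dist.fit(arrivals)                     # maximum-likelihood fit
    ks = stats.kstest(arrivals, dist.cdf, args=params)
    print(f"{name:11s} KS statistic = {ks.statistic:.3f}")
# The smaller KS statistic indicates the better-fitting candidate, which
# could then serve as a prior distribution in later Bayesian analysis.
```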

  15. Comprehensive Child Welfare Information System. Final rule.

    PubMed

    2016-06-02

    This final rule replaces the Statewide and Tribal Automated Child Welfare Information Systems (S/TACWIS) rule with the Comprehensive Child Welfare Information System (CCWIS) rule. The rule also makes conforming amendments in rules in related requirements. This rule will assist title IV-E agencies in developing information management systems that leverage new innovations and technology in order to better serve children and families. More specifically, this final rule supports the use of cost-effective, innovative technologies to automate the collection of high-quality case management data and to promote its analysis, distribution, and use by workers, supervisors, administrators, researchers, and policy makers.

  16. Classification of cognitive systems dedicated to data sharing

    NASA Astrophysics Data System (ADS)

    Ogiela, Lidia; Ogiela, Marek R.

    2017-08-01

    This paper presents a classification of new cognitive information systems dedicated to cryptographic data splitting and sharing processes. Cognitive processes of semantic data analysis and interpretation are used to describe new classes of intelligent information and vision systems. In addition, cryptographic data splitting algorithms and cryptographic threshold schemes are used to improve the security and efficiency of information management with such cognitive systems. The utility of the proposed cognitive sharing procedures and distributed data sharing algorithms is also presented, and a few possible applications of cognitive approaches to visual information management and encryption are described.
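
    The cryptographic threshold schemes mentioned above are typified by Shamir's (k, n) secret sharing, in which any k of n shares reconstruct the secret and fewer reveal nothing. A minimal sketch over a prime field follows; it is illustrative, not the paper's scheme.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret, n, k):
    """Shamir (k, n): shares are points on a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, n=5, k=3)
print(reconstruct(shares[:3]))   # any 3 shares -> 123456789
print(reconstruct(shares[2:]))   # a different 3 -> same secret
```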

  17. Quantitative study of FORC diagrams in thermally corrected Stoner- Wohlfarth nanoparticles systems

    NASA Astrophysics Data System (ADS)

    De Biasi, E.; Curiale, J.; Zysler, R. D.

    2016-12-01

    The use of FORC diagrams is becoming increasingly popular among researchers devoted to magnetism and magnetic materials. However, a thorough interpretation of this kind of diagram, in order to obtain quantitative information, requires an appropriate model of the studied system. For that reason, most FORC studies are limited to qualitative analysis. In magnetic systems, thermal fluctuations "blur" the signatures of the anisotropy, volume, and particle interaction distributions, so thermal effects in nanoparticle systems conspire against a proper interpretation and analysis of these diagrams. Motivated by this fact, we have quantitatively studied the accuracy of the information extracted from FORC diagrams for the special case of single-domain, thermally corrected Stoner-Wohlfarth (easy axes along the external field orientation) nanoparticle systems. The starting point of this work is an analytical model that describes the behavior of a magnetic nanoparticle system as a function of field, anisotropy, temperature, and measurement time. To study the quantitative accuracy of our model, we built FORC diagrams for different archetypal cases of magnetic nanoparticles. Our results show that, from the quantitative information obtained from the diagrams and under the hypotheses of the proposed model, it is possible to recover the features of the original system with an accuracy above 95%. This accuracy improves at low temperatures, and it is also possible to access the anisotropy distribution directly from the FORC coercive-field profile. Indeed, our simulations predict that the volume distribution plays a secondary role, with its mean value and deviation being the only important parameters. It is therefore possible to obtain accurate results for the inversion and interaction fields despite the features of the volume distribution.
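
    A FORC diagram itself is the mixed second derivative rho(Ha, Hb) = -(1/2) d2M/(dHa dHb) of the magnetization measured on first-order reversal curves. A hedged numpy sketch on a smooth synthetic surface (a toy, not a physical Stoner-Wohlfarth simulation) shows the mechanics.

```python
import numpy as np

# Synthetic magnetization M(Ha, Hb) on a grid of reversal fields Ha and
# measurement fields Hb (a smooth toy surface, not real FORC data).
ha = np.linspace(-1.0, 0.0, 60)            # reversal fields
hb = np.linspace(-1.0, 1.0, 120)           # measurement fields
HA, HB = np.meshgrid(ha, hb, indexing="ij")
M = np.tanh(4 * (HB - 0.3)) * np.tanh(-4 * (HA + 0.3))

# FORC distribution: rho = -0.5 * d2M / (dHa dHb) via finite differences.
dM_dHb = np.gradient(M, hb, axis=1)
rho = -0.5 * np.gradient(dM_dHb, ha, axis=0)

imax = np.unravel_index(np.abs(rho).argmax(), rho.shape)
print("peak of |rho| at Ha = %.2f, Hb = %.2f" % (ha[imax[0]], hb[imax[1]]))
```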

  18. Mathematical model of information process of protection of the social sector

    NASA Astrophysics Data System (ADS)

    Novikov, D. A.; Tsarkova, E. G.; Dubrovin, A. S.; Soloviev, A. S.

    2018-03-01

    This work investigates a mathematical model of protecting society against the spread of extremist sentiment through the influence of media content on mass consciousness. The internal and external channels through which information is disseminated are identified. An optimization problem is solved to find the strategy that uses the media most effectively to disseminate antiterrorist information at minimum financial cost. A numerical algorithm for solving the optimization problem is constructed, and the results of a computational experiment are analyzed.

  19. Coupled CFD and Particle Vortex Transport Method: Wing Performance and Wake Validations

    DTIC Science & Technology

    2008-06-26

    the PVTM analysis. The results obtained using the coupled RANS/PVTM analysis compare well with experimental data, in particular the pressure... is validated against wind tunnel test data. Comparisons with measured pressure distribution, loadings, and vortex parameters, and the corresponding...

  20. Using Public Libraries To Provide Technology Access for Individuals in Poverty: A Nationwide Analysis of Library Market Areas Using a Geographic Information System.

    ERIC Educational Resources Information Center

    Jue, Dean K.; Koontz, Christie M.; Magpantay, J. Andrew; Lance, Keith Curry; Seidl, Ann M.

    1999-01-01

    Assesses the distribution of poverty areas in the United States relative to public library outlet locations to begin discussion on the best possible public library funding and development policies that would serve individuals in poverty areas. Provides a comparative analysis of poverty relative to public library outlets using two common methods of…

  1. A spatial analysis of the Burrowing Owl (Speotyto cunicularia) population in Santa Clara County, California, using a geographic information system

    Treesearch

    Janice Taylor Buchanan

    1997-01-01

    A small population of Burrowing Owls (Speotyto cunicularia) is found in the San Francisco Bay Area, particularly in Santa Clara County. These owls utilize habitat that is dispersed throughout this heavily urbanized region. In an effort to establish a conservation plan for Burrowing Owls in Santa Clara County, a spatial analysis of owl distribution...

  2. Real Estate Site Selection: An Application of Artificial Intelligence for Military Retail Facilities

    DTIC Science & Technology

    2006-09-01

    Information and Spatial Analysis (SCGISA), University of Sheffield. Kotler, P. (1984). Marketing Management: Analysis, Planning, and Control... Spatial Distribution of Retail Sales. Journal of Real Estate Finance and Economics, Vol. 31, Iss. 1, 53. Lilien, G., & Kotler, P. (1983). Marketing... commissaries). The current business model for military retail facilities may not be optimized based upon current market trend data. Optimizing...

  3. Neutron Physics Division progress report for period ending February 28, 1977

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maienschein, F.C.

    1977-05-01

    Summaries are given of research progress in the following areas: (1) measurements of cross sections and related quantities, (2) cross section evaluations and theory, (3) cross section processing, testing, and sensitivity analysis, (4) integral experiments and their analyses, (5) development of methods for shield and reactor analyses, (6) analyses for specific systems or applications, and (7) information analysis and distribution. (SDF)

  4. Dynamic area telethermometry and its clinical applications

    NASA Astrophysics Data System (ADS)

    Anbar, Michael

    1995-03-01

    Dynamic area telethermometry (DAT) is a recent development in thermology, the science of biological heat generation and dissipation. DAT is based on monitoring changes in infrared emission and deriving from them information on the kinetics and mechanisms of biological thermoregulation. Remotely monitoring infrared emission is the most reliable technique for studying bioenergetics, because it minimally perturbs the investigated system. Area monitoring of heat-dissipating surfaces is needed because temporal changes in the spatial distribution of temperature convey information on mechanisms of thermoregulation. DAT can be applied to biological systems ranging from single cells (microtelecalorimetry) to large areas of human skin (clinical thermology). DAT requires the accumulation of many (hundreds to thousands of) thermal images followed by analysis of the thermokinetics of each pixel or group of pixels. In clinical thermology this analysis uses FFT to extract systemic, regional, and local thermoregulatory frequencies (TRFs). DAT also extracts information on local thermoregulation from the temporal behavior of the homogeneity of skin temperature (HST). Analysis of the relative contributions (FFT amplitudes) of the different frequencies allows distinction between vascular, neurological, and immunological thermoregulatory dysfunctions. This analysis, which can reveal the mechanism of the dysfunction, can be very useful in the diagnosis and staging of various disorders, ranging from diabetes mellitus and liver cirrhosis to breast cancer and malignant melanoma. From the engineering standpoint, DAT requires highly stable imaging systems and effective display of the spatial distribution of TRFs to allow identification of thermoregulatory pathways and their dysfunction.

  5. Measuring content overlap during handoff communication using distributional semantics: An exploratory study.

    PubMed

    Abraham, Joanna; Kannampallil, Thomas G; Srinivasan, Vignesh; Galanter, William L; Tagney, Gail; Cohen, Trevor

    2017-01-01

    We develop and evaluate a methodological approach to measure the degree and nature of overlap in handoff communication content within and across clinical professions. This extensible, exploratory approach relies on combining techniques from conversational analysis and distributional semantics. We audio-recorded handoff communication of residents and nurses on the General Medicine floor of a large academic hospital (n=120 resident and n=120 nurse handoffs). We measured semantic similarity, a proxy for content overlap, between resident-resident and nurse-nurse communication using multiple steps: a qualitative conversational content analysis; an automated semantic similarity analysis using Reflective Random Indexing (RRI); and comparing semantic similarity generated by RRI analysis with human ratings of semantic similarity. There was significant association between the semantic similarity as computed by the RRI method and human rating (ρ=0.88). Based on the semantic similarity scores, content overlap was relatively higher for content related to patient active problems, assessment of active problems, patient-identifying information, past medical history, and medications/treatments. In contrast, content overlap was limited on content related to allergies, family-related information, code status, and anticipatory guidance. Our approach using RRI analysis provides new opportunities for characterizing the nature and degree of overlap in handoff communication. Although exploratory, this method provides a basis for identifying content that can be used for determining shared understanding across clinical professions. Additionally, this approach can inform the development of flexibly standardized handoff tools that reflect clinical content that are most appropriate for fostering shared understanding during transitions of care.
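
    Reflective Random Indexing belongs to the random-indexing family of distributional models: each term gets a sparse random index vector, document vectors are accumulated from co-occurrence, and cosine similarity serves as the overlap proxy. The one-pass sketch below is illustrative only; the reflective variant iterates between document and term vectors, and the study's parameters are not reproduced.

```python
import numpy as np

def index_vector(rng, dim=512, nnz=10):
    """Sparse ternary random index vector (+1/-1 in a few positions)."""
    v = np.zeros(dim)
    pos = rng.choice(dim, nnz, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], nnz)
    return v

docs = [
    "patient active problems assessment medications",
    "assessment of active problems and medications",
    "family information and code status guidance",
]
rng = np.random.default_rng(8)
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: index_vector(rng) for w in vocab}

# Document vector = sum of the index vectors of its terms (one pass;
# reflective RI would feed these back to re-derive term vectors).
doc_vecs = [sum(idx[w] for w in d.split()) for d in docs]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("overlap docs 0-1:", round(cos(doc_vecs[0], doc_vecs[1]), 2))
print("overlap docs 0-2:", round(cos(doc_vecs[0], doc_vecs[2]), 2))
```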

  6. Beyond mind-reading: multi-voxel pattern analysis of fMRI data.

    PubMed

    Norman, Kenneth A; Polyn, Sean M; Detre, Greg J; Haxby, James V

    2006-09-01

    A key challenge for cognitive neuroscience is determining how mental representations map onto patterns of neural activity. Recently, researchers have started to address this question by applying sophisticated pattern-classification algorithms to distributed (multi-voxel) patterns of functional MRI data, with the goal of decoding the information that is represented in the subject's brain at a particular point in time. This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading. More importantly, MVPA methods constitute a useful new tool for advancing our understanding of neural information processing. We review how researchers are using MVPA methods to characterize neural coding and information processing in domains ranging from visual perception to memory search.
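
    As a toy illustration of the MVPA decoding idea (not any particular study's pipeline), the sketch below trains a linear classifier to decode a binary condition from simulated multi-voxel patterns and scores it with cross-validation; all sizes and signal strengths are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n_trials, n_voxels = 200, 500
        y = rng.integers(0, 2, size=n_trials)          # condition labels
        X = rng.standard_normal((n_trials, n_voxels))  # "voxel" patterns
        X[:, :20] += 0.4 * y[:, None]                  # weak distributed signal

        clf = LogisticRegression(max_iter=1000)
        acc = cross_val_score(clf, X, y, cv=5)
        print(f"decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")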

  7. Distribution and magnitude of type I error of model-based multipoint lod scores: implications for multipoint mod scores.

    PubMed

    Xing, Chao; Elston, Robert C

    2006-07-01

    The multipoint lod score and mod score methods have been advocated for their superior power in detecting linkage. However, little has been done to determine the distribution of multipoint lod scores or to examine the properties of mod scores. In this paper we study the distribution of multipoint lod scores both analytically and by simulation. We also study by simulation the distribution of maximum multipoint lod scores when maximized over different penetrance models. The multipoint lod score is approximately normally distributed, with mean and variance that depend on marker informativity, marker density, the specified genetic model, the number of pedigrees, pedigree structure, and the pattern of affection status. When multipoint lod scores are maximized over a set of assumed penetrance models, an excess of false positive indications of linkage appears under dominant analysis models with low penetrances and under recessive analysis models with high penetrances. Therefore, caution should be taken in interpreting results when employing multipoint lod score and mod score approaches, in particular when inferring the level of linkage significance and the mode of inheritance of a trait.
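
    For readers unfamiliar with the statistic itself: in the simplest two-point, phase-known setting, n meioses with r recombinants give lod(θ) = log10[θ^r (1−θ)^(n−r) / 0.5^n]. The sketch below computes this lod curve and its maximum — a deliberately simplified setting compared with the multipoint, pedigree-based scores studied in the paper.

        import numpy as np

        def lod(theta, r, n):
            """Two-point lod score for r recombinants in n phase-known meioses."""
            return (r * np.log10(theta) + (n - r) * np.log10(1 - theta)
                    - n * np.log10(0.5))

        r, n = 4, 20
        thetas = np.linspace(0.01, 0.49, 200)
        scores = lod(thetas, r, n)
        print(f"max lod {scores.max():.2f} at theta = {thetas[scores.argmax()]:.2f}")
        # The MLE is r/n = 0.20; lod > 3 is a traditional evidence threshold.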

  8. Further Progress Applying the Generalized Wigner Distribution to Analysis of Vicinal Surfaces

    NASA Astrophysics Data System (ADS)

    Einstein, T. L.; Richards, Howard L.; Cohen, S. D.

    2001-03-01

    Terrace width distributions (TWDs) can be well fit by the generalized Wigner distribution (GWD), generally better than by conventional Gaussians; the GWD thus offers a convenient way to estimate the dimensionless elastic repulsion strength Ã from σ², the TWD variance [T.L. Einstein and O. Pierre-Louis, Surface Sci. 424, L299 (1999)]. The GWD σ² accurately reproduces values for the two exactly soluble cases, at small Ã and in the asymptotic limit. Taxing numerical simulations show that the GWD σ² interpolates well between these limits. Extensive applications have been made to experimental data, especially on Cu [M. Giesen and T.L. Einstein, Surface Sci. 449, 191 (2000)]. Recommended analysis procedures are catalogued [H.L. Richards, S.D. Cohen, T.L. Einstein, and M. Giesen, Surf. Sci. 453, 59 (2000)]. Extensions of the GWD to multistep distributions are tested, with good agreement for second-neighbor distributions, less good for third [T.L. Einstein, H.L. Richards, S.D. Cohen, and O. Pierre-Louis, Proc. ISSI-PDSC2000, cond-mat/0012xxxxx]. Alternatively, step-step correlation functions, about which there is more theoretical information, should be measured.
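
    For reference, the GWD has the form P(s) = a s^ϱ exp(−b s²) for the normalized terrace width s, with b fixed by the unit-mean constraint and a by normalization, so σ² = (ϱ+1)/(2b) − 1; in this framework the repulsion strength is related to the exponent by Ã = ϱ(ϱ−2)/4. The sketch below evaluates these relations; it is reconstructed from the standard GWD definitions, not code from the cited papers.

        import numpy as np
        from scipy.special import gamma

        def gwd_params(rho):
            """Constants of P(s) = a * s**rho * exp(-b * s**2) with unit mean."""
            b = (gamma((rho + 2) / 2) / gamma((rho + 1) / 2)) ** 2
            a = 2 * b ** ((rho + 1) / 2) / gamma((rho + 1) / 2)
            return a, b

        def gwd_variance(rho):
            _, b = gwd_params(rho)
            return (rho + 1) / (2 * b) - 1.0

        for rho in (1.0, 2.0, 4.0):           # "free-fermion" case is rho = 2
            A_tilde = rho * (rho - 2) / 4.0   # dimensionless repulsion strength
            print(f"rho={rho}: A~={A_tilde:+.2f}, sigma^2={gwd_variance(rho):.4f}")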

  9. Equity in the allocation of public sector financial resources in low- and middle-income countries: a systematic literature review.

    PubMed

    Anselmi, Laura; Lagarde, Mylene; Hanson, Kara

    2015-05-01

    This review aims to identify, assess and analyse the evidence on equity in the distribution of public health sector expenditure in low- and middle-income countries. Four bibliographic databases and five websites were searched to identify quantitative studies examining equity in the distribution of public health funding in individual countries or groups of countries. Two different types of studies were identified: benefit incidence analysis (BIA) and resource allocation comparison (RAC) studies. Quality appraisal and data synthesis were tailored to each study type to reflect differences in the methods used and in the information provided. We identified 39 studies focusing on African, Asian and Latin American countries. Of these, 31 were BIA studies that described the distribution, typically across socio-economic status, of individual monetary benefit derived from service utilization. The remaining eight were RAC studies that compared the actual expenditure across geographic areas to an ideal need-based distribution. Overall, the quality of the evidence from both types of study was relatively weak. Looking across studies, the evidence confirms that resource allocation formulae can enhance equity in resource allocation across geographic areas and that the poor benefit proportionally more from primary health care than from hospital expenditure. The lack of information on the distribution of benefit from utilization in RAC studies, and on the countries' approaches to resource allocation in BIA studies, prevents further policy analysis. Additional research that relates the type of resource allocation mechanism to service provision and to the benefit distribution is required for a better understanding of equity-enhancing resource allocation policies.

  10. Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan

    2016-09-01

    Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection data can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies, including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches, are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity models and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for the prior model's 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic data, electrical logs, and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion, creating an iterative, practical, and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we are confident that significant and practical advances in this direction have been made.
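
    As a deliberately simplified, hypothetical illustration of one step mentioned above — using the statistical distribution of a seismic attribute to carve out subvolumes that seed a prior conductivity model — the sketch below thresholds an attribute volume at empirical percentiles and assigns a class conductivity to each subvolume; the class boundaries and conductivity values are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        attr = rng.standard_normal((50, 50, 50))   # hypothetical attribute volume

        # Split the attribute distribution at its 33rd/66th percentiles.
        edges = np.percentile(attr, [33, 66])
        labels = np.digitize(attr, edges)          # class 0, 1, or 2 per cell

        # Hypothetical class conductivities (S/m) seeding the prior model.
        class_sigma = np.array([0.01, 0.05, 0.2])
        prior_sigma = class_sigma[labels]
        print({k: int(v) for k, v in zip(*np.unique(labels, return_counts=True))})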

  11. The Arbo‑zoonet Information System.

    PubMed

    Di Lorenzo, Alessio; Di Sabatino, Daria; Blanda, Valeria; Cioci, Daniela; Conte, Annamaria; Bruno, Rossana; Sauro, Francesca; Calistri, Paolo; Savini, Lara

    2016-06-30

    The Arbo‑zoonet Information System has been developed as part of the 'International Network for Capacity Building for the Control of Emerging Viral Vector Borne Zoonotic Diseases (Arbo‑zoonet)' project. The project aims to create common knowledge by sharing data, expertise, experiences, and scientific information on West Nile Disease (WND), Crimean‑Congo haemorrhagic fever (CCHF), and Rift Valley fever (RVF). These arthropod‑borne diseases of domestic and wild animals can affect humans, posing a great threat to public health. Since November 2011, when Schmallenberg virus (SBV) was first discovered in Northern Europe, the Arbo‑zoonet Information System has been used to collect information on the newly discovered disease and to manage the epidemic emergency. The system monitors the geographical distribution and epidemiological evolution of CCHF, RVF, and WND from 1946 onwards. More recently, it has also been deployed to monitor SBV data. The Arbo‑zoonet Information System includes a web application for the management of the database in which data are stored and a WebGIS application to explore spatial disease distributions, facilitating epidemiological analysis. The WebGIS application is an effective tool to show and share the information and to facilitate the exchange and dissemination of relevant data among the project's participants.

  12. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code, as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated using calculations of hydrogen distribution from models informed by data from hydrogen experiments and PIE data.

  13. Spatio-temporal assessment of food safety risks in Canadian food distribution systems using GIS.

    PubMed

    Hashemi Beni, Leila; Villeneuve, Sébastien; LeBlanc, Denyse I; Côté, Kevin; Fazil, Aamir; Otten, Ainsley; McKellar, Robin; Delaquis, Pascal

    2012-09-01

    While geographic information systems (GIS) are widely applied in public health, there have been comparatively few examples of applications that extend to the assessment of risks in food distribution systems. GIS can provide decision makers with strong computing platforms for spatial data management, integration, analysis, querying, and visualization. The present report addresses several spatial analyses of a complex food distribution system and defines influence areas as travel-time zones generated through road network analysis on a national scale rather than on a community scale. In addition, a dynamic risk index is defined to translate a contamination event into a public health risk as time progresses. More specifically, in this research, GIS is used to map the Canadian produce distribution system, analyze consumer accessibility to contaminated product, and estimate the level of risk associated with a contamination event over time, as illustrated in a scenario.
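
    The influence-area idea — travel-time zones computed by road-network analysis — can be illustrated on a toy graph. The sketch below runs Dijkstra's algorithm from a hypothetical distribution-center node, with edge weights as travel times in minutes, and bins destinations into zones; the network and zone thresholds are invented.

        import networkx as nx

        # Toy road network: edges weighted by travel time (minutes).
        G = nx.Graph()
        G.add_weighted_edges_from([
            ("DC", "A", 15), ("DC", "B", 40), ("A", "C", 30),
            ("B", "C", 20), ("C", "D", 50), ("B", "D", 45),
        ], weight="time")

        # Travel time from the distribution center to every node.
        times = nx.single_source_dijkstra_path_length(G, "DC", weight="time")

        # Bin destinations into travel-time zones (0-30, 30-60, 60+ minutes).
        zones = {}
        for node, t in times.items():
            zone = "0-30" if t <= 30 else "30-60" if t <= 60 else "60+"
            zones.setdefault(zone, []).append(node)
        print(zones)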

  14. Data governance requirements for distributed clinical research networks: triangulating perspectives of diverse stakeholders

    PubMed Central

    Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila

    2014-01-01

    There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research. PMID:24302285

  15. Censored data treatment using additional information in intelligent medical systems

    NASA Astrophysics Data System (ADS)

    Zenkova, Z. N.

    2015-11-01

    Statistical procedures are an important part of modern intelligent medical systems. They are used for processing, mining, and analyzing various types of data about patients and their diseases, and they support decisions regarding diagnosis, treatment, medication, surgery, and so on. In many cases the data can be censored or incomplete. It is a well-known fact that censorship considerably reduces the efficiency of statistical procedures. In this paper the author briefly reviews approaches that improve such procedures by using additional information, and describes a modified estimator of an unknown cumulative distribution function that incorporates additional information in the form of a quantile that is known exactly. The additional information is used by projecting a classical estimator onto a set of estimators with certain properties. The Kaplan-Meier estimator is considered as the estimator of the unknown cumulative distribution function, and the properties of the modified estimator are investigated for the case of single right censorship by means of simulations.
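
    A minimal sketch of the general idea — estimate a CDF from right-censored data, then force it through an exactly known quantile — follows. The Kaplan-Meier step is standard; the adjustment shown (rescaling the estimate on each side of the known quantile) is one simple way to impose the constraint and is not necessarily the projection used by the author.

        import numpy as np

        def kaplan_meier_cdf(times, events):
            """F(t) = 1 - KM survival; events: 1 = observed, 0 = right-censored."""
            order = np.argsort(times)
            t, e = times[order], events[order]
            n = len(t)
            surv, s = [], 1.0
            for i in range(n):
                if e[i]:
                    s *= 1.0 - 1.0 / (n - i)   # one event at each observed time
                surv.append(s)
            return t, 1.0 - np.array(surv)

        def constrain_quantile(grid, F, x_q, q):
            """Rescale F so that F(x_q) = q exactly (q known a priori)."""
            Fq = np.interp(x_q, grid, F)
            return np.where(grid <= x_q,
                            F * q / Fq,
                            q + (1 - q) * (F - Fq) / (1 - Fq))

        rng = np.random.default_rng(4)
        x = rng.exponential(1.0, 100)                 # true lifetimes
        c = rng.exponential(2.0, 100)                 # censoring times
        times, events = np.minimum(x, c), (x <= c).astype(int)
        grid, F = kaplan_meier_cdf(times, events)
        F_mod = constrain_quantile(grid, F, x_q=np.log(2.0), q=0.5)  # known median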

  16. Frequent Statement and Dereference Elimination for Imperative and Object-Oriented Distributed Programs

    PubMed Central

    El-Zawawy, Mohamed A.

    2014-01-01

    This paper introduces new approaches for the analysis of frequent statement and dereference elimination for imperative and object-oriented distributed programs running on parallel machines equipped with hierarchical memories. The paper uses languages whose address spaces are globally partitioned. Distributed programs allow the definition of data layout and allow threads to write to and read from other threads' memories. Three type systems (for imperative distributed programs) are the tools of the proposed techniques. The first type system defines for every program point a set of calculated (ready) statements and memory accesses. The second type system uses an enriched version of the types of the first type system and determines which of the ready statements and memory accesses are used later in the program. The third type system uses the information gathered so far to eliminate unnecessary statement computations and memory accesses (the analysis of frequent statement and dereference elimination). Extensions to these type systems are also presented to cover object-oriented distributed programs. Two advantages of our work over related work are the following. The hierarchical style of concurrent parallel computers is similar to the memory model used in this paper. In our approach, each analysis result is assigned a type derivation (which serves as a correctness proof). PMID:24892098

  17. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to those of the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
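
    A sketch of the conventional three-value approach follows, using the common PERT assumption to obtain closed-form beta shape parameters from the minimum, most likely, and maximum values; it illustrates the conventional method the paragraph contrasts with, not the Glenn Research Center variant, and the temperature values are invented.

        from scipy.stats import beta

        def pert_beta(lo, mode, hi):
            """Beta(a, b) on [lo, hi] from min/most-likely/max (PERT assumption)."""
            a = 1.0 + 4.0 * (mode - lo) / (hi - lo)
            b = 1.0 + 4.0 * (hi - mode) / (hi - lo)
            return a, b

        lo, mode, hi = 1400.0, 1500.0, 1550.0   # hypothetical inlet temps (K)
        a, b = pert_beta(lo, mode, hi)
        dist = beta(a, b, loc=lo, scale=hi - lo)
        print(f"a={a:.2f}, b={b:.2f}, mean={dist.mean():.1f} K, sd={dist.std():.1f} K")
        # The PERT mean matches (lo + 4*mode + hi) / 6 = 1491.7 K for these values.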

  18. A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates

    NASA Astrophysics Data System (ADS)

    Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh

    2016-10-01

    We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood prone basin in Southeast Brazil. The results show a significant reduction of uncertainty estimates of flood quantile estimates over the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.
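
    A minimal single-site sketch of the Bayesian ingredients — a GEV likelihood, log-normal priors on location and scale, a shrinkage prior on shape, and MCMC sampling of posterior flood quantiles — follows. It uses a plain random-walk Metropolis sampler and invented prior settings, so it is only a schematic of the hierarchical model described above.

        import numpy as np
        from scipy.stats import genextreme, lognorm, norm

        rng = np.random.default_rng(5)
        # Simulated annual maximum series (scipy's shape c equals -xi).
        amax = genextreme.rvs(c=-0.1, loc=100, scale=30, size=40, random_state=rng)

        def log_post(mu, sig, xi):
            if sig <= 0:
                return -np.inf
            ll = genextreme.logpdf(amax, c=-xi, loc=mu, scale=sig).sum()
            lp = (lognorm.logpdf(mu, s=1.0, scale=100.0)   # hypothetical priors
                  + lognorm.logpdf(sig, s=1.0, scale=30.0)
                  + norm.logpdf(xi, 0.0, 0.2))             # shrinks shape toward 0
            return ll + lp

        theta, lp = np.array([100.0, 30.0, 0.0]), -np.inf
        draws = []
        for i in range(30000):
            prop = theta + rng.normal(0, [3.0, 1.5, 0.03])
            lpp = log_post(*prop)
            if np.log(rng.uniform()) < lpp - lp:
                theta, lp = prop, lpp
            if i >= 10000 and i % 10 == 0:   # burn-in, then thinned draws
                draws.append(theta.copy())
        draws = np.array(draws)

        # Posterior of the 100-year flood quantile with a credible interval.
        q100 = genextreme.ppf(0.99, c=-draws[:, 2], loc=draws[:, 0], scale=draws[:, 1])
        print(np.percentile(q100, [2.5, 50, 97.5]))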

  19. Needs Analysis of Employee in Riau Main Island: A Reflection from Universitas Lancang Kuning Context

    NASA Astrophysics Data System (ADS)

    Hasnati; Chintia Utami, Bunga; Saputra, Trio

    2018-05-01

    Employee performance is a major concern for every organization, and it depends on workload. The staffing of Lancang Kuning University, located on Riau's main island, has not been adjusted to the changing needs of the organizational structure and the workload of its employees. One prerequisite for strategic HR planning is determining the ideal number of staff needed to achieve UNILAK UNGGUL 2030. This will not happen if the distribution of employees does not reflect the real needs of the organization; to date, the distribution has not been based on organizational workload. Employees concentrated in one unit without clear duties, alongside staff shortages in other units, is the underlying problem at Lancang Kuning University. Therefore, to improve employee performance, organizational restructuring must be carried out. In its implementation, Lancang Kuning University conducted a job analysis producing job information. The purpose of this study is to project staffing needs using a qualitative method. The research informants were selected from three institutions/units within the University of Lancang Kuning. The workload analysis of staff at the University of Lancang Kuning can be carried out where there is clarity of guidelines or work instructions, a good understanding of the work guidelines by everyone, and suitability between the work guidelines and the knowledge of the staff. Based on the job information and workloads, only two units, LPPM and BPM, are staffed according to need; elsewhere there are both surpluses and shortages, with the library having a surplus of one employee.

  20. 40 CFR 1400.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PREVENTION REQUIREMENTS; RISK MANAGEMENT PROGRAMS UNDER THE CLEAN AIR ACT SECTION 112(r)(7); DISTRIBUTION OF... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... through 5 of a risk management plan or any Administrator-created electronic database. (k) Off-site...

  1. Informatic analysis for hidden pulse attack exploiting spectral characteristics of optics in plug-and-play quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Ko, Heasin; Lim, Kyongchun; Oh, Junsang; Rhee, June-Koo Kevin

    2016-10-01

    Quantum channel loopholes due to imperfect implementations of practical devices expose quantum key distribution (QKD) systems to potential eavesdropping attacks. Even though QKD systems are implemented with optical devices that are highly selective in their spectral characteristics, an information-theoretic analysis of a pertinent attack strategy exploiting those characteristics within a reasonable framework has never been clarified. This paper proposes a new type of trojan horse attack, called the hidden pulse attack, that can be applied to a plug-and-play QKD system, using general and optimal attack strategies that extract quantum information from the phase-disturbed quantum states of the eavesdropper's hidden pulses. The attack exploits the spectral characteristics of a photodiode used in a plug-and-play QKD system in order to probe the modulation states of photon qubits. We analyze the security performance of the decoy-state BB84 QKD system under the optimal hidden pulse attack model, which shows enormous performance degradation in terms of both secret key rate and transmission distance.
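
    For context, the standard decoy-state BB84 lower bound on the secret key rate is R ≥ q{−Q_μ f H₂(E_μ) + Q₁[1 − H₂(e₁)]}. The sketch below evaluates this bound for a simple lossy-fiber channel model with typical textbook device parameters; the attack-induced degradation analyzed in the paper is not modeled.

        import numpy as np

        def h2(x):
            """Binary entropy function."""
            x = np.clip(x, 1e-12, 1 - 1e-12)
            return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

        def key_rate(L_km, mu=0.5, Y0=1.7e-6, ed=0.033, eta_bob=0.045,
                     alpha=0.21, f=1.22, q=0.5):
            """Decoy-BB84 key-rate lower bound for a lossy fiber channel."""
            eta = eta_bob * 10 ** (-alpha * L_km / 10)   # overall transmittance
            Qmu = Y0 + 1 - np.exp(-eta * mu)             # signal gain
            Emu = (0.5 * Y0 + ed * (1 - np.exp(-eta * mu))) / Qmu  # QBER (e0=1/2)
            Y1 = Y0 + eta                                # single-photon yield
            Q1 = Y1 * mu * np.exp(-mu)                   # single-photon gain
            e1 = (0.5 * Y0 + ed * eta) / Y1              # single-photon error
            return q * (-Qmu * f * h2(Emu) + Q1 * (1 - h2(e1)))

        for L in (0, 50, 100, 150):
            print(f"{L:3d} km: R = {key_rate(L):.2e} bits/pulse")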

  2. Determining the location and nearest neighbours of aluminium in zeolites with atom probe tomography

    DOE PAGES

    Perea, Daniel E.; Arslan, Ilke; Liu, Jia; ...

    2015-07-02

    Zeolite catalysis is determined by a combination of pore architecture and Brønsted acidity. As Brønsted acid sites are formed by the substitution of AlO4 for SiO4 tetrahedra, it is of utmost importance to have information on the number as well as the location and neighbouring sites of framework aluminium. Unfortunately, such detailed information has not yet been obtained, mainly due to the lack of suitable characterization methods. Here we report, using the powerful atomic-scale analysis technique known as atom probe tomography, the quantitative spatial distribution of individual aluminium atoms, including their three-dimensional extent of segregation. Ultimately, using a nearest-neighbour statistical analysis, we precisely determine the short-range distribution of aluminium over the different T-sites and determine the most probable Al–Al neighbouring distance within parent and steamed ZSM-5 crystals, as well as assess the long-range redistribution of aluminium upon zeolite steaming.

  3. Determining the location and nearest neighbours of aluminium in zeolites with atom probe tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perea, Daniel E.; Arslan, Ilke; Liu, Jia

    Zeolite catalysis is determined by a combination of pore architecture and Brønsted acidity. As Brønsted acid sites are formed by the substitution of AlO4 for SiO4 tetrahedra, it is of utmost importance to have information on the number as well as the location and neighbouring sites of framework aluminium. Unfortunately, such detailed information has not yet been obtained, mainly due to the lack of suitable characterization methods. Here we report, using the powerful atomic-scale analysis technique known as atom probe tomography, the quantitative spatial distribution of individual aluminium atoms, including their three-dimensional extent of segregation. Ultimately, using a nearest-neighbour statistical analysis, we precisely determine the short-range distribution of aluminium over the different T-sites and determine the most probable Al–Al neighbouring distance within parent and steamed ZSM-5 crystals, as well as assess the long-range redistribution of aluminium upon zeolite steaming.

  4. Operando analysis of lithium profiles in Li-ion batteries using nuclear microanalysis

    NASA Astrophysics Data System (ADS)

    Surblé, S.; Paireau, C.; Martin, J.-F.; Tarnopolskiy, V.; Gauthier, M.; Khodja, H.; Daniel, L.; Patoux, S.

    2018-07-01

    A wide variety of analytical methods are used for studying the behavior of lithium-ion batteries, and particularly the lithium-ion distribution in the electrodes. The development of in situ/operando techniques has proved powerful for understanding the mechanisms responsible for lithium trapping and hence the aging phenomenon. Herein, we report the design of an electrochemical cell to profile operando the lithium concentration in LiFePO4 electrodes using Ion Beam Analysis techniques. The specificity of the cell resides in its ability not only to provide qualitative information about the elements present but, above all, to measure their content in the electrode quantitatively at different states of charge of the battery. The nuclear methods give direct information about the degradation of the electrolyte and in particular reveal inhomogeneous distributions of lithium and fluorine along the entire thickness of the electrode. Higher concentrations of fluorine are detected near the electrode/electrolyte interface, while a depletion of lithium is observed near the current collector at high states of charge.

  5. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing

    PubMed Central

    da Rocha, Armando Freitas; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening or reading small texts and had to select pictures that translate meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (s_i) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(e_i) provided by each electrode of the 10/20 system about the identified s_i. H(e_i) Principal Component Analysis (PCA) was used to study the temporal and spatial activation of these sources s_i. This analysis evidenced 4 different patterns of H(e_i) covariation that are generated by neurons located at different cortical locations. These results clearly show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089

  6. Simultaneous Comparison of Two Roller Compaction Techniques and Two Particle Size Analysis Methods.

    PubMed

    Saarinen, Tuomas; Antikainen, Osmo; Yliruusi, Jouko

    2017-11-01

    A new dry granulation technique, gas-assisted roller compaction (GARC), was compared with conventional roller compaction (CRC) by manufacturing 34 granulation batches. The process variables studied were roll pressure, roll speed, and the sieve size of the conical mill. The main quality attributes measured were granule size and flow characteristics. Within these granulations, the practical applicability of two particle size analysis techniques, sieve analysis (SA) and a fast imaging technique (Flashsizer, FS), was also tested. All granules obtained were acceptable. In general, the particle size of GARC granules was slightly larger than that of CRC granules. In addition, the GARC granules had better flowability. For example, the tablet weight variation of GARC granules was close to 2%, indicating good flowing and packing characteristics. The comparison of the two particle size analysis techniques showed that SA was more accurate in determining wide and bimodal size distributions, while FS showed narrower and mono-modal distributions. However, both techniques gave good estimates of mean granule size. Overall, SA was a time-consuming but accurate technique that provided reliable information on the entire granule size distribution. By contrast, FS oversimplified the shape of the size distribution but nevertheless yielded acceptable estimates of mean particle size. In general, FS was two to three orders of magnitude faster than SA.

  7. The direct analysis of drug distribution of rotigotine-loaded microspheres from tissue sections by LESA coupled with tandem mass spectrometry.

    PubMed

    Xu, Li-Xiao; Wang, Tian-Tian; Geng, Yin-Yin; Wang, Wen-Yan; Li, Yin; Duan, Xiao-Kun; Xu, Bin; Liu, Charles C; Liu, Wan-Hui

    2017-09-01

    The direct analysis of the drug distribution of rotigotine-loaded microspheres (RoMS) from tissue sections by liquid extraction surface analysis (LESA) coupled with tandem mass spectrometry (MS/MS) was demonstrated. The RoMS distribution in rat tissues assessed by the ambient LESA-MS/MS approach, without extensive or tedious sample pretreatment, was compared with that obtained by a conventional liquid chromatography tandem mass spectrometry (LC-MS/MS) method, in which organ excision and subsequent solvent extraction were commonly employed before analysis. Results obtained from the two methods were well correlated for a majority of the organs, such as muscle, liver, stomach, and hippocampus. The distribution of RoMS in the brain, however, was found to be mainly focused in the hippocampus and striatum regions, as shown by the LESA-imaged profiles. The LESA approach we developed is sensitive, with an estimated LLOQ of 0.05 ng/mL of rotigotine in brain tissue, and information-rich with minimal sample preparation, making it suitable and promising for assisting the development of new drug delivery systems for controlled drug release and protection. Graphical abstract: Workflow for the LESA-MS/MS imaging of brain tissue sections after intramuscular RoMS administration.

  8. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing.

    PubMed

    Rocha, Armando Freitas da; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening or reading small texts and had to select pictures that translate meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (s_i) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(e_i) provided by each electrode of the 10/20 system about the identified s_i. H(e_i) Principal Component Analysis (PCA) was used to study the temporal and spatial activation of these sources s_i. This analysis evidenced 4 different patterns of H(e_i) covariation that are generated by neurons located at different cortical locations. These results clearly show that the distributed character of language processing is clearly evidenced by combining available EEG technologies.

  9. AccessMod 3.0: computing geographic coverage and accessibility to health care services using anisotropic movement of patients

    PubMed Central

    Ray, Nicolas; Ebener, Steeve

    2008-01-01

    Background Access to health care can be described along four dimensions: geographic accessibility, availability, financial accessibility and acceptability. Geographic accessibility measures how physically accessible resources are for the population, while availability reflects what resources are available and in what amount. Combining these two types of measure into a single index provides a measure of geographic (or spatial) coverage, which is an important measure for assessing the degree of accessibility of a health care network. Results This paper describes the latest version of AccessMod, an extension to the Geographical Information System ArcView 3.x, and provides an example of application of this tool. AccessMod 3 allows one to compute geographic coverage to health care using terrain information and population distribution. Four major types of analysis are available in AccessMod: (1) modeling the coverage of catchment areas linked to an existing health facility network based on travel time, to provide a measure of physical accessibility to health care; (2) modeling geographic coverage according to the availability of services; (3) projecting the coverage of a scaling-up of an existing network; (4) providing information for cost effectiveness analysis when little information about the existing network is available. In addition to integrating travelling time, population distribution and the population coverage capacity specific to each health facility in the network, AccessMod can incorporate the influence of landscape components (e.g. topography, river and road networks, vegetation) that impact travelling time to and from facilities. Topographical constraints can be taken into account through an anisotropic analysis that considers the direction of movement. We provide an example of the application of AccessMod in the southern part of Malawi that shows the influences of the landscape constraints and of the modes of transportation on geographic coverage. Conclusion By incorporating the demand (population) and the supply (capacities of health care centers), AccessMod provides a unifying tool to efficiently assess the geographic coverage of a network of health care facilities. This tool should be of particular interest to developing countries that have relatively good geographic information on population distribution, terrain, and health facility locations. PMID:19087277

  10. Architectural Optimization of Digital Libraries

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights to performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues. Specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.

  11. A review of second law techniques applicable to basic thermal science research

    NASA Astrophysics Data System (ADS)

    Drost, M. Kevin; Zamorski, Joseph R.

    1988-11-01

    This paper reports the results of a review of second law analysis techniques that can contribute to basic research in the thermal sciences. The review demonstrated that second law analysis has a role in basic thermal science research. Unlike traditional techniques, second law analysis accurately identifies the sources and locations of thermodynamic losses. This allows the development of innovative solutions to thermal science problems by directing research to the key technical issues. Two classes of second law techniques were identified as being particularly useful. First, system and component investigations can provide information on the source and nature of irreversibilities on a macroscopic scale. This information helps to identify new research topics and supports the evaluation of current research efforts. Second, the differential approach can provide information on the causes and the spatial and temporal distribution of local irreversibilities. This information enhances the understanding of fluid mechanics, thermodynamics, and heat and mass transfer, and may suggest innovative methods for reducing irreversibilities.
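
    As an example of the differential approach mentioned above, the local volumetric entropy generation rate in a convecting fluid is commonly written (following Bejan) as the sum of a heat-conduction term and a viscous-dissipation term:

        \[
          \dot{S}'''_{\mathrm{gen}} \;=\; \frac{k}{T^{2}}\,\lvert\nabla T\rvert^{2}
          \;+\; \frac{\mu}{T}\,\Phi \;\ge\; 0,
        \]

    where k is the thermal conductivity, T the absolute temperature, μ the dynamic viscosity, and Φ the viscous dissipation function. Mapping each term over a flow field gives the spatial and temporal distribution of local irreversibilities referred to above.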

  12. Technology transfer and commercialization initiatives at TRI/Austin: Resources and examples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzkanin, G.A.; Dingus, M.L.

    1995-12-31

    Located at TRI/Austin, and operated under a Department of Defense contract, is the Nondestructive Testing Information Analysis Center (NTIAC). This is a full service Information Analysis Center sponsored by the Defense Technical Information Center (DTIC), although services of NTIAC are available to other government agencies, government contractors, industry and academia. The principal objective of NTIAC is to help increase the productivity of the nation's scientists, engineers, and technical managers involved in, or requiring, nondestructive testing by providing broad information analysis services of technical excellence. TRI/Austin is actively pursuing commercialization of several products based on results from outside-funded R and D programs. As a small business, TRI/Austin has limited capabilities for large scale fabrication, production, marketing or distribution. Thus, part of a successful commercialization process involves making appropriate collaboration arrangements with other organizations to augment TRI/Austin's capabilities. Brief descriptions are given here of two recent commercialization efforts at TRI/Austin.

  13. Application of Asymmetric Flow Field-Flow Fractionation hyphenations for liposome-antimicrobial peptide interaction.

    PubMed

    Iavicoli, Patrizia; Urbán, Patricia; Bella, Angelo; Ryadnov, Maxim G; Rossi, François; Calzolai, Luigi

    2015-11-27

    Asymmetric Flow Field-Flow Fractionation (AF4) combined with multidetector analysis forms a promising technique in the field of nanoparticle characterization. This system is able to measure the dimensions and physicochemical properties of nanoparticles with unprecedented accuracy and precision. Here, for the first time, this technique is optimized to characterize the interaction between an archetypal antimicrobial peptide and synthetic membranes. By using charged and neutral liposomes, it is possible to mimic some of the charge characteristics of biological membranes. The AF4 system determines, in a single analysis, the selectivity of the peptides, the quantity of peptide bound to each liposome, and the induced changes in the size distribution and morphology of the liposomes. The results obtained provide relevant information for the study of structure-activity relationships in the context of membrane-induced antimicrobial action. This information will contribute to the rational design of potent antimicrobial agents in the future. Moreover, the application of this method to other liposome systems is straightforward and would be extremely useful for comprehensive characterization with regard to size distribution and protein interaction in the nanomedicine field.

  14. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information-based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users. The first is the scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  15. A secure distributed logistic regression protocol for the detection of rare adverse drug events

    PubMed Central

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-01-01

    Background There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. Objective To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. Methods We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. Results The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. Conclusion The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models. PMID:22871397
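
    To make the single-analysis-center architecture concrete, the sketch below shows the plain (non-secure) version of the computation: each site evaluates the local gradient and Hessian of the logistic log-likelihood on its own data, and only those aggregates travel to the center, which performs Newton updates of the shared coefficients. The paper's protocol wraps this exchange in secure multi-party computation, which is omitted here; all data are simulated.

        import numpy as np

        def local_stats(X, y, w):
            """One site's gradient and Hessian contribution for logistic regression."""
            p = 1.0 / (1.0 + np.exp(-X @ w))
            grad = X.T @ (y - p)
            hess = -(X.T * (p * (1 - p))) @ X
            return grad, hess

        rng = np.random.default_rng(6)
        w_true = np.array([0.5, -1.0, 0.25])
        sites = []
        for _ in range(3):                  # three data-holding sites
            X = rng.standard_normal((500, 3))
            y = (rng.uniform(size=500) < 1 / (1 + np.exp(-X @ w_true))).astype(float)
            sites.append((X, y))

        w = np.zeros(3)
        for _ in range(25):                 # Newton updates at the analysis center
            g = sum(local_stats(X, y, w)[0] for X, y in sites)
            H = sum(local_stats(X, y, w)[1] for X, y in sites)
            w = w - np.linalg.solve(H, g)
        print(np.round(w, 3))               # close to w_true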

  16. A secure distributed logistic regression protocol for the detection of rare adverse drug events.

    PubMed

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-05-01

    There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models.

  17. Identifying airborne metal particles sources near an optoelectronic and semiconductor industrial park

    NASA Astrophysics Data System (ADS)

    Chen, Ho-Wen; Chen, Wei-Yea; Chang, Cheng-Nan; Chuang, Yen-Hsun; Lin, Yu-Hao

    2016-06-01

    The recently developed Central Taiwan Science Park (CTSP) in central Taiwan is home to an optoelectronic and semiconductor industrial cluster. Therefore, exploring the elemental compositions and size distributions of airborne particles emitted from the CTSP would help to prevent pollution. This study analyzed size-fractionated metal-rich particle samples collected in upwind and downwind areas of the CTSP during January and October 2013 using a micro-orifice uniform deposit impactor (MOUDI). Correlation analysis, hierarchical cluster analysis, and particle mass-size distribution analysis were performed to identify the sources of metal-rich particles near the CTSP. Analyses of the elemental compositions and particle size distributions emitted from the CTSP revealed that the CTSP emits some metals (V, As, In, Ga, Cd, and Cu) in the ultrafine particles (< 1 μm). The statistical analyses combined with the particle mass-size distribution analysis can provide useful source identification information. In airborne particles with a size of 0.32 μm, Ga could be a useful pollution index for optoelectronic and semiconductor emissions in the CTSP. Meanwhile, the As/Ga concentration ratio at a particle size of 0.32 μm indicates that humans near the CTSP are potentially exposed to GaAs ultrafine particles. That is, metals such as Ga and As, and other metals that are not regulated in Taiwan, are potentially harmful to human health.

  18. Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians

    NASA Astrophysics Data System (ADS)

    Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von

    2008-03-01

    Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments and thereby addresses typical issues arising in fusion data analysis. In IDA, all information is consistently formulated in terms of probability density functions quantifying the uncertainties in the analysis within Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions, allowing the comparison and integration of the results of different diagnostics. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA's capabilities for nonlinear error propagation, the inclusion of systematic effects, and the comparison of different physics models. Applications range from outlier detection, background discrimination, and model assessment to the design of diagnostics. In order to cope with the requirements of next-step fusion devices, appropriate techniques are being explored for fast analysis applications.
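
    The unifying idea can be stated in one line: for plasma parameters φ and measurements d₁, d₂ from two diagnostics with independent errors, Bayes' theorem combines them multiplicatively,

        \[
          p(\varphi \mid d_1, d_2) \;\propto\; p(d_1 \mid \varphi)\, p(d_2 \mid \varphi)\, p(\varphi),
        \]

    so each diagnostic enters through its own likelihood and all uncertainties propagate into a single joint posterior.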

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
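
    A central tool for the bounding approaches mentioned above is the Fréchet–Hoeffding bounds, which constrain the joint distribution H of two variables whatever their dependence:

        \[
          \max\{F_X(x) + F_Y(y) - 1,\, 0\} \;\le\; H(x, y) \;\le\; \min\{F_X(x),\, F_Y(y)\},
        \]

    so bounds on risk measures can be computed even when the dependence between X and Y is unknown; this is the dependence-free limit of the bounding approaches reviewed.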

  20. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers, as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that is paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
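
    A toy version of the modality-classification step — fit a density to the ensemble's values at one location and count its modes — can be sketched as follows; the kernel density estimate and peak-counting heuristic are illustrative stand-ins, not the classifier used in the paper.

        import numpy as np
        from scipy.signal import find_peaks
        from scipy.stats import gaussian_kde

        def count_modes(values, grid_size=256):
            """Classify an ensemble of values at one location by modality."""
            kde = gaussian_kde(values)
            grid = np.linspace(values.min(), values.max(), grid_size)
            density = kde(grid)
            peaks, _ = find_peaks(density, prominence=0.05 * density.max())
            return max(len(peaks), 1)

        rng = np.random.default_rng(7)
        unimodal = rng.normal(0.0, 1.0, 200)
        bimodal = np.concatenate([rng.normal(-3, 1, 100), rng.normal(3, 1, 100)])
        print(count_modes(unimodal), count_modes(bimodal))   # typically 1 and 2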

  1. Capacity for patterns and sequences in Kanerva's SDM as compared to other associative memory models. [Sparse, Distributed Memory

    NASA Technical Reports Server (NTRS)

    Keeler, James D.

    1988-01-01

    The information capacity of Kanerva's Sparse Distributed Memory (SDM) and Hopfield-type neural networks is investigated. Under the approximations used here, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or its order. The approximations are checked numerically. The same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns and that the addition of time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns.

  2. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  3. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a kind of non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. The estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, random samples are generated according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
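
    A minimal sketch of such a Gibbs sampler is given below (Python), assuming the standard conditional posteriors under the Jeffreys prior: a matrix-normal draw for B given Σ and an inverse-Wishart draw for Σ given B. Data and iteration counts are invented.

        import numpy as np
        from scipy.stats import invwishart

        def gibbs_mv_regression(Y, X, n_iter=2000, seed=0):
            """Gibbs sampler for Y = X B + E, E ~ N(0, Sigma), Jeffreys prior."""
            rng = np.random.default_rng(seed)
            n, p = X.shape
            q = Y.shape[1]
            XtX_inv = np.linalg.inv(X.T @ X)
            B_hat = XtX_inv @ X.T @ Y                    # OLS estimate
            L_x = np.linalg.cholesky(XtX_inv)
            Sigma = np.cov(Y - X @ B_hat, rowvar=False)  # initial value
            draws_B, draws_S = [], []
            for _ in range(n_iter):
                # B | Sigma, Y ~ matrix normal(B_hat, (X'X)^-1, Sigma)
                Z = rng.standard_normal((p, q))
                B = B_hat + L_x @ Z @ np.linalg.cholesky(Sigma).T
                # Sigma | B, Y ~ inverse-Wishart(n, residual scatter)
                R = Y - X @ B
                Sigma = invwishart.rvs(df=n, scale=R.T @ R, random_state=rng)
                draws_B.append(B)
                draws_S.append(Sigma)
            return np.array(draws_B), np.array(draws_S)

        rng = np.random.default_rng(1)
        X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
        B_true = np.array([[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]])
        Y = X @ B_true + rng.normal(scale=0.3, size=(50, 2))
        Bs, Ss = gibbs_mv_regression(Y, X, n_iter=1000)
        print(Bs[200:].mean(axis=0))  # posterior mean, close to B_true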

  4. Value of Information Analysis for Time-lapse Seismic Data by Simulation-Regression

    NASA Astrophysics Data System (ADS)

    Dutta, G.; Mukerji, T.; Eidsvik, J.

    2016-12-01

    A novel method to estimate the Value of Information (VOI) of time-lapse seismic data in the context of reservoir development is proposed. VOI is a decision analytic metric quantifying the incremental value that would be created by collecting information prior to making a decision under uncertainty. The VOI has to be computed before collecting the information and can be used to justify its collection. Previous work on estimating the VOI of geophysical data has involved explicitly approximating the posterior distribution of reservoir properties given the data and then evaluating the prospect values for that posterior distribution. Here, we propose to directly estimate the prospect values given the data by building a statistical relationship between them using regression. Various regression techniques, such as Partial Least Squares Regression (PLSR), Multivariate Adaptive Regression Splines (MARS) and k-Nearest Neighbors (k-NN), are used to estimate the VOI, and the results compared. For a univariate Gaussian case, the VOI obtained from simulation-regression has been shown to be close to the analytical solution. Estimating VOI by simulation-regression is much less computationally expensive, since the posterior distribution of reservoir properties given each possible dataset need not be modeled and the prospect values need not be evaluated for each such posterior distribution. The method is also flexible, since it does not require rigid model specification of the posterior but rather fits conditional expectations non-parametrically from samples of values and data.
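
    The sketch below (Python; all numbers invented) illustrates simulation-regression VOI for a univariate Gaussian prospect with a binary develop/walk-away decision, using a k-nearest-neighbor regressor as the regression step and checking against the analytic answer for this Gaussian case.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(1)
        n, cost = 20000, 0.0
        x = rng.normal(0.0, 1.0, n)        # prior samples of prospect value
        y = x + rng.normal(0.0, 0.5, n)    # simulated time-lapse seismic "data"

        # Prior value: the best decision made without any data.
        v_prior = max(0.0, x.mean() - cost)

        # Regress prospect value on data to estimate E[x - cost | y].
        reg = KNeighborsRegressor(n_neighbors=200).fit(y.reshape(-1, 1), x - cost)
        v_with_data = np.maximum(0.0, reg.predict(y.reshape(-1, 1))).mean()

        voi = v_with_data - v_prior
        # Analytic check: E[x|y] ~ N(0, s2) with s2 = 1/(1 + 0.25), and
        # E[max(0, Z)] = sqrt(s2 / (2*pi)) for zero-mean Gaussian Z.
        print(voi, np.sqrt((1 / 1.25) / (2 * np.pi)))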

  5. Heterogeneous mixture distributions for multi-source extreme rainfall

    NASA Astrophysics Data System (ADS)

    Ouarda, T.; Shin, J.; Lee, T. S.

    2013-12-01

    Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict a mixture to components of one identical type; it might be beneficial to characterize the statistical behavior of hydro-meteorological variables with heterogeneous mixture (HTM) distributions such as Normal-Gamma. The present work assesses the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. To estimate the parameters of the HTM distributions, a meta-heuristic algorithm (a genetic algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) provides the best fit and shows significant improvement in the estimation of quantiles corresponding to the 20-year return period. Extreme rainfall in the coastal region of South Korea presents strong heterogeneous mixture distributional characteristics, indicating that HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
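
    As a hedged illustration of fitting a heterogeneous mixture by maximum likelihood, the sketch below (Python, synthetic data) fits a Gamma + Gumbel (EV1) mixture, substituting SciPy's differential evolution for the paper's genetic algorithm and computing the AIC from the maximized likelihood.

        import numpy as np
        from scipy import stats
        from scipy.optimize import differential_evolution

        def neg_log_lik(theta, x):
            w, a, scale_g, loc_e, scale_e = theta
            pdf = (w * stats.gamma.pdf(x, a, scale=scale_g)
                   + (1 - w) * stats.gumbel_r.pdf(x, loc=loc_e, scale=scale_e))
            return -np.sum(np.log(np.maximum(pdf, 1e-300)))

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.gamma(3.0, 20.0, 300),   # synthetic "annual maxima"
                            stats.gumbel_r.rvs(150, 30, size=200, random_state=rng)])

        bounds = [(0.01, 0.99), (0.5, 20), (1, 100), (50, 300), (1, 100)]
        res = differential_evolution(neg_log_lik, bounds, args=(x,), seed=0)
        aic = 2 * len(bounds) + 2 * res.fun              # AIC = 2k - 2 ln L
        print(res.x, aic)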

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, S.D.; Smith, S.; Swank, P.R.

    Visual cell profiles were used to analyze the distribution of atypical bronchial cells in sputum specimens from cigarette-smoking volunteers, cigarette-smoking asbestos workers and cigarette-smoking uranium miners. The preliminary results of these sputum visual cell profile studies have demonstrated distinctive distributions of bronchial cell atypias in progressive patterns of squamous metaplasia, mild, moderate and severe atypias and carcinoma, similar to those the authors have previously reported using cell image analysis techniques to determine an atypia status index (ASI). The information gained from this study will be helpful in further validating this ASI and subsequently achieving the ultimate goal of employing cell image analysis for the rapid and precise identification of premalignant atypias in sputum.

  7. Uncertainty of future projections of species distributions in mountainous regions.

    PubMed

    Tang, Ying; Winkler, Julie A; Viña, Andrés; Liu, Jianguo; Zhang, Yuanbin; Zhang, Xiaofeng; Li, Xiaohong; Wang, Fang; Zhang, Jindong; Zhao, Zhiqiang

    2018-01-01

    Multiple factors introduce uncertainty into projections of species distributions under climate change. The uncertainty introduced by the choice of baseline climate information used to calibrate a species distribution model and to downscale global climate model (GCM) simulations to a finer spatial resolution is a particular concern for mountainous regions, as the spatial resolution of climate observing networks is often insufficient to detect the steep climatic gradients in these areas. Using the maximum entropy (MaxEnt) modeling framework together with occurrence data on 21 understory bamboo species distributed across the mountainous geographic range of the Giant Panda, we examined the differences in projected species distributions obtained from two contrasting sources of baseline climate information, one derived from spatial interpolation of coarse-scale station observations and the other derived from fine-spatial resolution satellite measurements. For each bamboo species, the MaxEnt model was calibrated separately for the two datasets and applied to 17 GCM simulations downscaled using the delta method. Greater differences in the projected spatial distributions of the bamboo species were observed for the models calibrated using the different baseline datasets than between the different downscaled GCM simulations for the same calibration. In terms of the projected future climatically-suitable area by species, quantification using a multi-factor analysis of variance suggested that the sum of the variance explained by the baseline climate dataset used for model calibration and the interaction between the baseline climate data and the GCM simulation via downscaling accounted for, on average, 40% of the total variation among the future projections. Our analyses illustrate that the combined use of gridded datasets developed from station observations and satellite measurements can help estimate the uncertainty introduced by the choice of baseline climate information to the projected changes in species distribution.
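
    The delta-method downscaling itself is straightforward; a sketch is given below (Python; array shapes invented, with nearest-neighbour expansion standing in for proper interpolation of the coarse change field).

        import numpy as np

        def delta_downscale(baseline_fine, gcm_hist_coarse, gcm_future_coarse, factor):
            """Add the coarse GCM change signal to a fine baseline climatology."""
            delta = gcm_future_coarse - gcm_hist_coarse      # coarse change field
            # Nearest-neighbour expansion to the fine grid; a real application
            # would use bilinear or spline interpolation instead.
            delta_fine = np.kron(delta, np.ones((factor, factor)))
            return baseline_fine + delta_fine

        baseline = np.random.default_rng(3).normal(10.0, 2.0, (8, 8))  # fine grid
        hist = np.full((2, 2), 9.5)      # coarse GCM, historical period
        future = np.full((2, 2), 11.0)   # coarse GCM, future period
        print(delta_downscale(baseline, hist, future, factor=4).shape)  # (8, 8)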

  8. Uncertainty of future projections of species distributions in mountainous regions

    PubMed Central

    Tang, Ying; Viña, Andrés; Liu, Jianguo; Zhang, Yuanbin; Zhang, Xiaofeng; Li, Xiaohong; Wang, Fang; Zhang, Jindong; Zhao, Zhiqiang

    2018-01-01

    Multiple factors introduce uncertainty into projections of species distributions under climate change. The uncertainty introduced by the choice of baseline climate information used to calibrate a species distribution model and to downscale global climate model (GCM) simulations to a finer spatial resolution is a particular concern for mountainous regions, as the spatial resolution of climate observing networks is often insufficient to detect the steep climatic gradients in these areas. Using the maximum entropy (MaxEnt) modeling framework together with occurrence data on 21 understory bamboo species distributed across the mountainous geographic range of the Giant Panda, we examined the differences in projected species distributions obtained from two contrasting sources of baseline climate information, one derived from spatial interpolation of coarse-scale station observations and the other derived from fine-spatial resolution satellite measurements. For each bamboo species, the MaxEnt model was calibrated separately for the two datasets and applied to 17 GCM simulations downscaled using the delta method. Greater differences in the projected spatial distributions of the bamboo species were observed for the models calibrated using the different baseline datasets than between the different downscaled GCM simulations for the same calibration. In terms of the projected future climatically-suitable area by species, quantification using a multi-factor analysis of variance suggested that the sum of the variance explained by the baseline climate dataset used for model calibration and the interaction between the baseline climate data and the GCM simulation via downscaling accounted for, on average, 40% of the total variation among the future projections. Our analyses illustrate that the combined use of gridded datasets developed from station observations and satellite measurements can help estimate the uncertainty introduced by the choice of baseline climate information to the projected changes in species distribution. PMID:29320501

  9. An efficiency improvement in warehouse operation using simulation analysis

    NASA Astrophysics Data System (ADS)

    Samattapapong, N.

    2017-11-01

    In general, industry requires an efficient system for warehouse operation. Many important factors must be considered when designing an efficient warehouse system, the most important being an effective warehouse operation system that can help transfer raw material, reduce costs and support transportation. With these factors in mind, this study examines work systems and warehouse distribution. We start by collecting the important data for storage, such as information on products, size and location, data collection and production, and use all of this information to build a simulation model in the FlexSim® simulation software. The simulation analysis found that the conveyor belt was a bottleneck in the warehouse operation. Therefore, several scenarios to relieve that bottleneck were generated and tested through the simulation analysis process. The results showed that the average queuing time was reduced from 89.8% to 48.7% and the ability to transport products increased from 10.2% to 50.9%. Thus, this approach is effective for increasing efficiency in warehouse operation.

  10. Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2009-01-01

    Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
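
    Under the Poisson assumption, rates from independent sources aggregate by simple addition; the short sketch below (Python, invented rates) shows the computation of a combined exceedance probability.

        import numpy as np

        # If source i produces damaging tsunamis at rate lam_i (per year), the
        # probability of at least one event in T years under a Poisson process
        # is 1 - exp(-T * sum(lam_i)). Rates here are purely illustrative.
        rates = np.array([1 / 500, 1 / 1000, 1 / 3000])
        T = 50.0
        p = 1.0 - np.exp(-T * rates.sum())
        print(f"P(at least one event in {T:.0f} yr) = {p:.3f}")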

  11. Public and private dental services in NSW: a geographic information system analysis of access to care for 7 million Australians.

    PubMed

    Willie-Stephens, Jenny; Kruger, Estie; Tennant, Marc

    2014-06-01

    To investigate the distribution of public and private dental practices in NSW in relation to population distribution and socioeconomic status, dental practices (public and private) were mapped and overlaid with Census data on Collection District population and Socio-Economic Indexes for Areas (SEIFA). Overall, there was an uneven geographic distribution of public and private dental practices across NSW. When the geographic distribution was compared to population socioeconomics, it was found that in rural NSW, 12% of the most disadvantaged residents lived further than 50 km from a public dental practice, compared to 0% of the least disadvantaged. In Sydney, 9% of the three most disadvantaged groups lived more than 7.5 km from a public dental practice, compared to 21% of the three least disadvantaged groups. These findings can inform decisions on future areas of focus for dental resource development (infrastructure and workforce) and identify population subgroups (those geographically isolated from accessing care) where public health initiatives focused on ameliorating disease consequences should be a priority.

  12. WaveJava: Wavelet-based network computing

    NASA Astrophysics Data System (ADS)

    Ma, Kun; Jiao, Licheng; Shi, Zhuoer

    1997-04-01

    Wavelet theory is powerful, but its successful application requires suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multi-threaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed with object-oriented programming, is developed to take advantage of wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for the net-wide distributed client-server environment. Data are transmitted as multi-resolution packets, and at distributed sites around the net these packets undergo matching or recognition processing in parallel. The results are fed back to determine the next operation, so more robust results can be reached quickly. WaveJava is easy to use and to extend for special applications. The paper gives a solution for a distributed fingerprint information processing system; the approach also suits other net-based multimedia information processing, such as network libraries, remote teaching and filmless picture archiving and communication systems.

  13. Observations of Dust Using the NASA Geoscience Laser Altimeter System (GLAS): New Measurements of Aerosol Vertical Distribution From Space

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth; Spinhirne, James D.; Palm, Steven P.; Hlavka, Dennis; Hart, William

    2003-01-01

    On January 12, 2003, NASA launched the first satellite-based lidar, the Geoscience Laser Altimeter System (GLAS), onboard the ICESat spacecraft. The GLAS atmospheric measurements introduce a fundamentally new and important tool for understanding the atmosphere and climate. In the past, aerosols have only been studied from space using images gathered by passive sensors. Analysis of these passive data has led to an improved understanding of aerosol properties, spatial distribution, and their effect on the earth's climate. However, such images do not show the aerosols' vertical distribution, so a key piece of information has been missing. The measurements now obtained by GLAS will provide information on the vertical distribution of aerosols and clouds, and improve our ability to study their transport processes and aerosol-cloud interactions. Here we give an overview of GLAS, provide an update of its current status, and present initial observations of dust profiles. In particular, a strategy for characterizing the height profile of dust plumes over source regions is presented.

  14. Daily bread: a novel vehicle for dissemination and evaluation of psychological first aid for families exposed to armed conflict in Syria.

    PubMed

    El-Khani, A; Cartwright, K; Redmond, A; Calam, R

    2016-01-01

    Risks to the mental health of children and families exposed to conflict in Syria are of such magnitude that research identifying how best to deliver psychological first aid is urgently required. This study tested the feasibility of a novel approach to large-scale distribution of information and data collection. Routine humanitarian deliveries of bread by a bakery run by a non-governmental organisation (NGO) were used to distribute parenting information leaflets and questionnaires to adults looking after children in conflict zones inside Syria. Study materials were emailed to a project worker in Turkey. Leaflets and questionnaires requesting feedback were transported alongside supplies to a bakery in Syria, and then packed with flatbreads. Three thousand bread-packs were distributed from three distribution points; questionnaires were returned to these points, then taken to Turkey and dispatched to the UK. Notwithstanding delays, 3000 leaflets and questionnaires were successfully distributed over 2 days. Questionnaire return yielded 1783 responses, a 59.5% return rate. Overall ratings of the usefulness of the leaflet were 1060 (59.5%) 'quite a lot' and 339 (19.0%) 'a great deal'. Content analysis was used to code 400 respondent comments. Four themes emerged: positive comments about the leaflet, suggestions for modifications, descriptions of children's needs and the value respondents placed on faith. Findings indicate the willingness of NGO staff and volunteers to assist in research, the remarkable willingness of caregivers to respond and the value of brief advice. The study demonstrates the scope for using existing humanitarian routes to distribute information and receive feedback even in high-risk settings.

  15. The conservation status of the world’s reptiles

    USGS Publications Warehouse

    Böhm, Monika; Reynolds, Robert P.; ,

    2013-01-01

    Effective and targeted conservation action requires detailed information about species, their distribution, systematics and ecology as well as the distribution of threat processes which affect them. Knowledge of reptilian diversity remains surprisingly disparate, and innovative means of gaining rapid insight into the status of reptiles are needed in order to highlight urgent conservation cases and inform environmental policy with appropriate biodiversity information in a timely manner. We present the first ever global analysis of extinction risk in reptiles, based on a random representative sample of 1500 species (16% of all currently known species). To our knowledge, our results provide the first analysis of the global conservation status and distribution patterns of reptiles and the threats affecting them, highlighting conservation priorities and knowledge gaps which need to be addressed urgently to ensure the continued survival of the world’s reptiles. Nearly one in five reptilian species are threatened with extinction, with another one in five species classed as Data Deficient. The proportion of threatened reptile species is highest in freshwater environments, tropical regions and on oceanic islands, while data deficiency was highest in tropical areas, such as Central Africa and Southeast Asia, and among fossorial reptiles. Our results emphasise the need for research attention to be focussed on tropical areas which are experiencing the most dramatic rates of habitat loss, on fossorial reptiles for which there is a chronic lack of data, and on certain taxa such as snakes for which extinction risk may currently be underestimated due to lack of population information. Conservation actions specifically need to mitigate the effects of human-induced habitat loss and harvesting, which are the predominant threats to reptiles.

  16. Tool for Constructing Data Albums for Significant Weather Events

    NASA Astrophysics Data System (ADS)

    Kulkarni, A.; Ramachandran, R.; Conover, H.; McEniry, M.; Goodman, H.; Zavodsky, B. T.; Braun, S. A.; Wilson, B. D.

    2012-12-01

    Case study analysis and climatology studies are common approaches used in Atmospheric Science research. Research based on case studies involves a detailed description of specific weather events using data from different sources, to characterize physical processes in play for a given event. Climatology-based research tends to focus on the representativeness of a given event, by studying the characteristics and distribution of a large number of events. To gather relevant data and information for case studies and climatology analysis is tedious and time consuming; current Earth Science data systems are not suited to assemble multi-instrument, multi mission datasets around specific events. For example, in hurricane science, finding airborne or satellite data relevant to a given storm requires searching through web pages and data archives. Background information related to damages, deaths, and injuries requires extensive online searches for news reports and official storm summaries. We will present a knowledge synthesis engine to create curated "Data Albums" to support case study analysis and climatology studies. The technological challenges in building such a reusable and scalable knowledge synthesis engine are several. First, how to encode domain knowledge in a machine usable form? This knowledge must capture what information and data resources are relevant and the semantic relationships between the various fragments of information and data. Second, how to extract semantic information from various heterogeneous sources including unstructured texts using the encoded knowledge? Finally, how to design a structured database from the encoded knowledge to store all information and to support querying? The structured database must allow both knowledge overviews of an event as well as drill down capability needed for detailed analysis. An application ontology driven framework is being used to design the knowledge synthesis engine. The knowledge synthesis engine is being applied to build a portal for hurricane case studies at the Global Hydrology and Resource Center (GHRC), a NASA Data Center. This portal will auto-generate Data Albums for specific hurricane events, compiling information from distributed resources such as NASA field campaign collections, relevant data sets, storm reports, pictures, videos and other useful sources.

  17. Supporting the Loewenstein occupational therapy cognitive assessment using distributed user interfaces.

    PubMed

    Tesoriero, Ricardo; Gallud Lazaro, Jose A; Altalhi, Abdulrahman H

    2017-02-01

    Improve the quantity and quality of information obtained from traditional Loewenstein Occupational Therapy Cognitive Assessment Battery systems to monitor the evolution of patients' rehabilitation process as well as to compare different rehabilitation therapies. The system replaces traditional artefacts with virtual versions of them to take advantage of cutting-edge interaction technology. The system is defined as a Distributed User Interface (DUI) supported by a display ecosystem, including mobile devices as well as multi-touch surfaces. Due to the heterogeneity of the devices involved in the system, the software technology is based on a client-server architecture using the Web as the software platform. The system provides therapists with information that is not available (or is very difficult to gather) using traditional technologies (i.e. response time measurements, object tracking, information storage and retrieval facilities, etc.). The use of DUIs allows therapists to gather information that is unavailable using traditional assessment methods, and to adapt the system to patients' profiles to increase the range of patients that are able to take this assessment. Implications for rehabilitation: using a Distributed User Interface environment to carry out LOTCAs improves the quality of the information gathered during the rehabilitation assessment; the system captures physical data regarding the patient's interaction during the assessment to improve analysis of the rehabilitation process; it allows professionals to adapt the assessment procedure to create different versions according to patients' profiles; and it improves the availability of patients' profile information to therapists.

  18. Effect of extreme data loss on heart rate signals quantified by entropy analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Wang, Jun; Li, Jin; Liu, Dazhao

    2015-02-01

    The phenomenon of data loss always occurs in the analysis of large databases, so maintaining the stability of analysis results in the event of data loss is very important. In this paper, we use a segmentation approach to generate synthetic signals in which segments are randomly wiped from the data according to the Gaussian and exponential distributions of the original signal; the logistic map is then used for verification. Finally, two methods of measuring entropy, base-scale entropy and approximate entropy, are comparatively analyzed. Our results show the following: (1) two key parameters, the percentage and the average length of removed data segments, can change the sequence complexity according to the logistic-map test; (2) base-scale entropy analysis gives preferably stable results and is not sensitive to data loss; (3) the loss percentage of HRV signals should be kept below p = 30%, which can provide useful information in clinical applications.
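
    A sketch of the data-loss experiment is given below (Python): a logistic-map series is generated, random segments with exponentially distributed lengths are wiped until roughly 30% of the data are gone, and approximate entropy is compared before and after. All parameters are illustrative, not the paper's.

        import numpy as np

        def approx_entropy(x, m=2, r_factor=0.2):
            """Approximate entropy ApEn(m, r) with r = r_factor * std(x)."""
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()
            def phi(mm):
                n = len(x) - mm + 1
                emb = np.array([x[i:i + mm] for i in range(n)])
                # Chebyshev distances between all template pairs
                d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                return np.log((d <= r).mean(axis=1)).mean()
            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(4)
        x = np.empty(1000)
        x[0] = 0.4
        for i in range(1, len(x)):
            x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])   # chaotic logistic map

        keep = np.ones(len(x), bool)
        while keep.mean() > 0.7:                        # wipe ~30% in segments
            start = rng.integers(0, len(x))
            keep[start:start + int(rng.exponential(20)) + 1] = False
        print(approx_entropy(x), approx_entropy(x[keep]))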

  19. A Monte Carlo–Based Bayesian Approach for Measuring Agreement in a Qualitative Scale

    PubMed Central

    Pérez Sánchez, Carlos Javier

    2014-01-01

    Agreement analysis has been an active research area whose techniques have been widely applied in psychology and other fields. However, statistical agreement among raters has been mainly considered from a classical statistics point of view. Bayesian methodology is a viable alternative that allows the inclusion of subjective initial information coming from expert opinions, personal judgments, or historical data. A Bayesian approach is proposed by providing a unified Monte Carlo–based framework to estimate all types of measures of agreement in a qualitative scale of response. The approach is conceptually simple and it has a low computational cost. Both informative and non-informative scenarios are considered. In case no initial information is available, the results are in line with the classical methodology, but provide more information on the measures of agreement. For the informative case, some guidelines are presented to elicit the prior distribution. The approach has been applied to two applications related to schizophrenia diagnosis and sensory analysis. PMID:29881002
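
    As a minimal sketch of the Monte Carlo idea for one agreement measure, the code below (Python, invented counts) samples the posterior of Cohen's kappa for two raters on a two-category scale under a Dirichlet model with a uniform (non-informative) prior.

        import numpy as np

        rng = np.random.default_rng(5)
        counts = np.array([[40, 5],
                           [8, 47]])        # raters agree on the diagonal
        prior = np.ones_like(counts)        # non-informative scenario

        draws = rng.dirichlet((counts + prior).ravel(), size=20000).reshape(-1, 2, 2)
        po = draws[:, 0, 0] + draws[:, 1, 1]                       # observed agreement
        pe = (draws.sum(axis=2) * draws.sum(axis=1)).sum(axis=1)   # chance agreement
        kappa = (po - pe) / (1.0 - pe)
        print(kappa.mean(), np.percentile(kappa, [2.5, 97.5]))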

  20. Predictive distributions for between-study heterogeneity and simple methods for their application in Bayesian meta-analysis

    PubMed Central

    Turner, Rebecca M; Jackson, Dan; Wei, Yinghui; Thompson, Simon G; Higgins, Julian P T

    2015-01-01

    Numerous meta-analyses in healthcare research combine results from only a small number of studies, for which the variance representing between-study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta-analysis. We present two methods for implementing Bayesian meta-analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta-analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta-analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log-normal distributions for the between-study variance, applicable to meta-analyses of binary outcomes on the log odds-ratio scale. The methods are applied to two example meta-analyses, incorporating the relevant predictive distributions as prior distributions for between-study heterogeneity. We have provided resources to facilitate Bayesian meta-analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:25475839
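
    The numerical-integration method can be sketched in a few lines: place a log-normal prior on the between-study variance, integrate the pooled effect analytically under a flat prior, and sum over a grid. Data and prior parameters below are invented placeholders, not the paper's derived values.

        import numpy as np

        y = np.array([-0.65, -0.15, -0.40, -0.70])  # study log odds ratios (invented)
        v = np.array([0.10, 0.20, 0.15, 0.08])      # within-study variances (invented)

        tau2 = np.linspace(1e-4, 2.0, 400)          # grid over between-study variance
        # Log-normal prior density on tau2 (placeholder parameters).
        log_prior = -np.log(tau2) - (np.log(tau2) + 2.0)**2 / (2 * 1.5**2)

        # Log marginal likelihood of tau2 with a flat prior on the pooled effect mu.
        w = 1.0 / (v[None, :] + tau2[:, None])      # study precisions given tau2
        mu_hat = (w * y).sum(1) / w.sum(1)
        log_lik = (0.5 * np.log(w).sum(1) - 0.5 * np.log(w.sum(1))
                   - 0.5 * (w * (y - mu_hat[:, None])**2).sum(1))

        post = np.exp(log_prior + log_lik)
        post /= post.sum()
        print("posterior mean tau^2:", (post * tau2).sum())
        print("pooled log odds ratio:", (post * mu_hat).sum())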

  1. Depth data research of GIS based on clustering analysis algorithm

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Xu, Wenli

    2018-03-01

    GIS data have a spatial distribution. Geographic data have both spatial and attribute characteristics, and they also change with time, so the amount of data is very large. Nowadays, many industries and departments in society use GIS; however, without a proper data analysis and mining scheme, GIS cannot exert its maximum effectiveness, and a lot of data is wasted. In this paper, we take the geographic information needs of a national security department as the experimental object and, combining the characteristics of GIS data and taking into account time, space, attributes and so on, apply a cluster analysis algorithm. We further study a mining scheme for depth data and obtain an algorithm model. The algorithm can automatically classify sample data and then carry out exploratory analysis. The research shows that the algorithm model and the information mining scheme can quickly find hidden depth information beneath the surface data of GIS, thus improving the efficiency of the security department. The algorithm can also be extended to other fields.

  2. An analysis of available data on effects of wing-fuselage-tail and wing-nacelle interference on the distribution of the air load among components of airplanes

    NASA Technical Reports Server (NTRS)

    Wollner, Bertram C

    1949-01-01

    Available information on the effects of wing-fuselage-tail and wing-nacelle interference on the distribution of the air load among components of airplanes is analyzed. The effects of wing and nacelle incidence, horizontal and vertical position of wing and nacelle, fuselage shape, wing section and filleting are considered. Where sufficient data were unavailable to determine the distribution of the air load, the change in lift caused by interference between wing and fuselage was found. This increment is affected to the greatest extent by vertical wing position.

  3. An improved scheme on decoy-state method for measurement-device-independent quantum key distribution.

    PubMed

    Wang, Dong; Li, Mo; Guo, Guang-Can; Wang, Qin

    2015-10-14

    Quantum key distribution involving decoy-states is a significant application of quantum information. By using three-intensity decoy-states of single-photon-added coherent sources, we propose a practically realizable scheme on quantum key distribution which approaches very closely the ideal asymptotic case of an infinite number of decoy-states. We make a comparative study between this scheme and two other existing ones, i.e., two-intensity decoy-states with single-photon-added coherent sources, and three-intensity decoy-states with weak coherent sources. Through numerical analysis, we demonstrate the advantages of our scheme in secure transmission distance and the final key generation rate.

  4. On-Orbit Collision Hazard Analysis in Low Earth Orbit Using the Poisson Probability Distribution (Version 1.0)

    DOT National Transportation Integrated Search

    1992-08-26

    This document provides the basic information needed to estimate a general probability of collision in Low Earth Orbit (LEO). Although the method described in this primer is a first order approximation, its results are reasonable. Furthermore, t...
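
    The first-order Poisson estimate described in the primer reduces to a one-line formula; a sketch with invented flux, area and duration values:

        import math

        # With debris flux F (impacts per m^2 per year), exposed cross-sectional
        # area A, and mission duration T, the expected impact count is N = F*A*T
        # and P(at least one collision) = 1 - exp(-N). Values are illustrative.
        F, A, T = 1e-6, 20.0, 5.0
        N = F * A * T
        print(f"P(collision) over {T:.0f} yr = {1 - math.exp(-N):.2e}")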

  5. The Electronic Classroom.

    ERIC Educational Resources Information Center

    Mueller, Richard J.

    Current computerized electronic technology is making possible, not only the broad and rapid distribution of information, but also its manipulation, analysis, synthesis, and recombination. The shift from print to a combination of visual and oral expression is being propelled by the mass media, and visual literacy is both a concept and an…

  6. CSFII Analysis of Fat Intake Distributions 94-96 (1998 Survey Data)

    EPA Science Inventory

    Certain chemicals, such as dioxins, tend to accumulate in fat tissue. Information about total animal fat intake is necessary to adequately assess exposures and risks from these chemicals. Under this project, NCEA will disaggregate the components of the various food items in the...

  7. A population structure and genome-wide association analysis on the USDA soybean germplasm collection

    USDA-ARS?s Scientific Manuscript database

    Genotype-phenotype associations within the soybean (Glycine max) germplasm collection could provide valuable information on the frequency and distribution of alleles affecting economically important traits. Here we performed a genome-wide association study (GWAS) for seed protein and oil content in ...

  8. Sub-10-Minute Characterization of an Ultrahigh Molar Mass Polymer by Multi-detector Hydrodynamic Chromatography

    USDA-ARS?s Scientific Manuscript database

    Molar mass averages, distributions, and architectural information of polymers are routinely obtained using size-exclusion chromatography (SEC). It has previously been shown that ultrahigh molar mass polymers may experience degradation during SEC analysis, leading to inaccurate molar mass averages a...

  9. Mathematics and Science Instruction in Southern California.

    ERIC Educational Resources Information Center

    Myers, Edwin C.; Mineo, R. James

    To provide information to support school district considerations of changes in mathematics and science instruction, three issues were considered: (1) the adequacy of the California Basic Education Data System (CBEDS) for supporting an analysis of subject matter instruction; (2) the distribution of teaching effort and student enrollments among…

  10. Signal analysis by means of time-frequency (Wigner-type) distributions -- Applications to sonar and radar echoes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaunaurd, G.; Strifors, H.C.

    1996-09-01

    Time series data have been traditionally analyzed in either the time or the frequency domains. For signals with a time-varying frequency content, the combined time-frequency (TF) representations, based on the Cohen class of (generalized) Wigner distributions (WDs), offer a powerful analysis tool. Using them, it is possible to: (1) trace the time-evolution of the resonance features usually present in a standard sonar cross section (SCS), or in a radar cross section (RCS) and (2) extract target information that may be difficult to even notice in an ordinary SCS or RCS. After a brief review of the fundamental properties of the WD, the authors discuss ways to reduce or suppress the cross term interference that appears in the WD of multicomponent systems. These points are illustrated with a variety of three-dimensional (3-D) plots of Wigner and pseudo-Wigner distributions (PWD), in which the strength of the distribution is depicted as the height of a Wigner surface with height scales measured by various color shades or pseudocolors. The authors also review studies they have made of the echoes returned by conducting or dielectric targets in the atmosphere, when they are illuminated by broadband radar pings. A TF domain analysis of these impulse radar returns demonstrates their superior informative content. These plots allow the identification of targets in an easier and clearer fashion than by the conventional RCS of narrowband systems. The authors show computed and measured plots of WD and PWD of various types of aircraft to illustrate the classification advantages of the approach at any aspect angle. They also show analogous results for metallic objects buried underground, in dielectric media, at various depths.
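
    A minimal discrete pseudo-Wigner distribution can be computed by Fourier-transforming a lag-windowed instantaneous autocorrelation kernel; the sketch below (Python; window length and test chirp are illustrative) follows that recipe.

        import numpy as np

        def pseudo_wigner(x, win_len=64):
            """Pseudo-Wigner distribution of an analytic signal x: at each time n,
            the windowed kernel r_n[tau] = x[n+tau] * conj(x[n-tau]) is Fourier
            transformed over the lag tau. The result is approximately real; the
            real part is kept."""
            n_samp = len(x)
            half = win_len // 2
            window = np.hanning(win_len)       # lag window suppresses cross terms
            taus = np.arange(-half, half)
            W = np.zeros((win_len, n_samp))
            for n in range(n_samp):
                ok = (n + taus >= 0) & (n + taus < n_samp) & \
                     (n - taus >= 0) & (n - taus < n_samp)
                r = np.zeros(win_len, complex)
                r[ok] = x[n + taus[ok]] * np.conj(x[n - taus[ok]]) * window[ok]
                W[:, n] = np.real(np.fft.fft(np.fft.ifftshift(r)))
            return W

        t = np.arange(512) / 512.0
        chirp = np.exp(2j * np.pi * (20 * t + 60 * t**2))  # analytic test chirp
        print(pseudo_wigner(chirp).shape)                  # (64, 512)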

  11. The relationship between extreme precipitation events and landslides distributions in 2009 in Lower Austria

    NASA Astrophysics Data System (ADS)

    Katzensteiner, H.; Bell, R.; Petschko, H.; Glade, T.

    2012-04-01

    The prediction and forecast of widespread landsliding for a given triggering event is an open research question. Numerous studies have tried to link spatial rainfall and landslide distributions. This study focuses on analysing the relationship between intensive precipitation and rainfall-triggered shallow landslides in the year 2009 in Lower Austria. Landslide distributions were obtained from the building ground register, which is maintained by the Geological Survey of Lower Austria and contains detailed information on landslides registered through damage reports. Spatially distributed rainfall estimates were extracted from the INCA (Integrated Nowcasting through Comprehensive Analysis) precipitation analysis, a combination of station data interpolation and radar data at a spatial resolution of 1 km developed by the Central Institute for Meteorology and Geodynamics (ZAMG), Vienna, Austria. The importance of the data source is shown by comparing rainfall data based on reference gauges, spatial interpolation and the INCA analysis for a certain storm period. INCA precipitation data can detect precipitating cells that do not hit a station but might trigger a landslide, which is an advantage over the use of reference stations for the definition of rainfall thresholds. Empirical thresholds at the regional scale were determined based on rainfall intensity and duration in the year 2009 and landslide information. These thresholds depend on the criteria that separate landslide-triggering from non-triggering precipitation events, and different approaches for defining thresholds alter the shape of the threshold as well. A preliminary threshold I = 8.8263*D^(-0.672) for extreme summer rainfall events in Lower Austria was defined. Verification of the threshold with similar events of other years, as well as subsequent analyses based on a larger landslide database, are in progress.
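
    Applying such an intensity-duration threshold is a one-line test; the sketch below (Python) uses the reported coefficients with assumed units (intensity in mm/h, duration in h) and invented events.

        def exceeds_threshold(intensity_mm_h, duration_h):
            """True if an event lies above I = 8.8263 * D**-0.672 (units assumed)."""
            return intensity_mm_h > 8.8263 * duration_h ** -0.672

        events = [(12.0, 1.0), (3.0, 6.0), (0.5, 24.0)]  # (intensity, duration), invented
        for i, d in events:
            print(f"I={i} mm/h, D={d} h -> trigger candidate: {exceeds_threshold(i, d)}")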

  12. Fast-NPS-A Markov Chain Monte Carlo-based analysis tool to obtain structural information from single-molecule FRET measurements

    NASA Astrophysics Data System (ADS)

    Eilert, Tobias; Beckers, Maximilian; Drechsler, Florian; Michaelis, Jens

    2017-10-01

    The analysis tool and software package Fast-NPS can be used to analyse smFRET data to obtain quantitative structural information about macromolecules in their natural environment. In the algorithm a Bayesian model gives rise to a multivariate probability distribution describing the uncertainty of the structure determination. Since Fast-NPS aims to be an easy-to-use general-purpose analysis tool for a large variety of smFRET networks, we established an MCMC based sampling engine that approximates the target distribution and requires no parameter specification by the user at all. For an efficient local exploration we automatically adapt the multivariate proposal kernel according to the shape of the target distribution. In order to handle multimodality, the sampler is equipped with a parallel tempering scheme that is fully adaptive with respect to temperature spacing and number of chains. Since the molecular surrounding of a dye molecule affects its spatial mobility and thus the smFRET efficiency, we introduce dye models which can be selected for every dye molecule individually. These models allow the user to represent the smFRET network in great detail leading to an increased localisation precision. Finally, a tool to validate the chosen model combination is provided.
    Programme Files doi: http://dx.doi.org/10.17632/7ztzj63r68.1
    Licencing provisions: Apache-2.0
    Programming language: GUI in MATLAB (The MathWorks); core sampling engine in C++
    Nature of problem: Sampling of highly diverse multivariate probability distributions in order to solve for macromolecular structures from smFRET data.
    Solution method: MCMC algorithm with fully adaptive proposal kernel and parallel tempering scheme.
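
    A stripped-down parallel-tempering Metropolis sampler (Python; omitting Fast-NPS's adaptive proposal kernels and automatic temperature placement) illustrates why tempering helps with multimodal targets:

        import numpy as np

        def log_target(x):
            # Two well-separated Gaussian modes.
            return np.logaddexp(-0.5 * ((x + 3) / 0.5)**2, -0.5 * ((x - 3) / 0.5)**2)

        rng = np.random.default_rng(6)
        betas = np.array([1.0, 0.3, 0.1])   # inverse temperatures, coldest first
        x = np.zeros(len(betas))
        samples = []
        for step in range(20000):
            for k, b in enumerate(betas):   # Metropolis update for each chain
                prop = x[k] + rng.normal(0, 1.0 / b)
                if np.log(rng.random()) < b * (log_target(prop) - log_target(x[k])):
                    x[k] = prop
            k = rng.integers(len(betas) - 1)    # propose a neighbour swap
            d = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
            if np.log(rng.random()) < d:
                x[k], x[k + 1] = x[k + 1], x[k]
            samples.append(x[0])                # keep the cold chain only
        print(np.mean(np.array(samples) > 0))   # ~0.5 when both modes are visited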

  13. Interoperability Outlook in the Big Data Future

    NASA Astrophysics Data System (ADS)

    Kuo, K. S.; Ramachandran, R.

    2015-12-01

    The establishment of distributed active archive centers (DAACs) as data warehouses and the standardization of file formats by NASA's Earth Observing System Data and Information System (EOSDIS) doubtlessly propelled the interoperability of NASA Earth science data to unprecedented heights in the 1990s. Yet interoperability still feels wanting two decades later. We believe the inadequate interoperability we experience results from the current practice in which data are first packaged into files before distribution, and only the metadata of these files are cataloged into databases and become searchable. Data therefore cannot be efficiently filtered, and any extensive study requires downloading large volumes of data files to a local system for processing and analysis. The need to download data not only creates duplication and inefficiency but also further impedes interoperability, because the analysis has to be performed locally by individual researchers in individual institutions. Each institution or researcher often has its own preference in the choice of data management practice as well as programming languages. Analysis results (derived data) so produced are thus subject to the differences of these practices, which later form formidable barriers to interoperability. A number of Big Data technologies are currently being examined and tested to address Big Earth Data issues. These technologies share one common characteristic: exploiting compute and storage affinity to more efficiently analyze large volumes and great varieties of data. Distributed active "archive" centers are likely to evolve into distributed active "analysis" centers, which not only archive data but also provide analysis services right where the data reside; "analysis" will become the more visible function of these centers. It is thus reasonable to expect interoperability to improve, because analysis, in addition to data, becomes more centralized. Within a "distributed active analysis center" interoperability is almost guaranteed because data, analysis, and results can all be readily shared and reused. Effectively, with the establishment of "distributed active analysis centers", interoperation turns from a many-to-many problem into a less complicated few-to-few problem and becomes easier to solve.

  14. A Survey and Analysis of Socio-Economic, Reading, and Library Use Factors Influencing the Rose Park Branch of the Salt Lake Public Library.

    ERIC Educational Resources Information Center

    Heath, Janeth L.; Johnson, Kent B.

    The purpose of this study was to survey and analyze Rose Park, a residential area in the north-west part of Salt Lake City. A questionnaire was used to solicit information. The questions were formed through analysis of other surveys by means of an extensive literature search. Questionnaires were distributed randomly to 215 families and were…

  15. Development of quantitative security optimization approach for the picture archives and carrying system between a clinic and a rehabilitation center

    NASA Astrophysics Data System (ADS)

    Haneda, Kiyofumi; Kajima, Toshio; Koyama, Tadashi; Muranaka, Hiroyuki; Dojo, Hirofumi; Aratani, Yasuhiko

    2002-05-01

    The target of our study is to analyze the level of necessary security requirements, to search for suitable security measures, and to optimize the distribution of security across every portion of the medical practice. Quantitative expression is introduced wherever possible, to enable simplified follow-up security procedures and easy evaluation of security outcomes. Using fault tree analysis (FTA), the system analysis showed that subdividing system elements into detailed groups yields a much more accurate analysis. Such subdivided composition factors depend greatly on the behavior of staff, interactive terminal devices, the kinds of services provided, and network routes. Security measures were then implemented based on the analysis results. In conclusion, we identified the methods needed to determine the required level of security, proposed security measures for each medical information system, and identified the basic events and combinations of events that comprise the threat composition factors. Risk factors for each basic event, the number of elements for each composition factor, and potential security measures were found, and methods to optimize the security measures for each medical information system were proposed, developing the most efficient distribution of security measures against the risk factors of the basic events.
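
    Quantitative FTA combines basic-event probabilities through AND/OR gates; the sketch below (Python) uses a hypothetical decomposition with invented probabilities.

        def p_or(*ps):   # OR gate: at least one independent basic event occurs
            out = 1.0
            for p in ps:
                out *= (1.0 - p)
            return 1.0 - out

        def p_and(*ps):  # AND gate: all independent basic events occur
            out = 1.0
            for p in ps:
                out *= p
            return out

        # Hypothetical top event "patient data disclosed" = (weak password AND
        # no lockout) OR unencrypted transfer OR staff mishandling.
        p_top = p_or(p_and(0.2, 0.5), 0.01, 0.05)
        print(f"P(top event) = {p_top:.3f}")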

  16. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System.

    PubMed

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-06-27

    Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
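
    The high-resolution frequency step can be illustrated with parabolic interpolation of the log-magnitude spectrum over three DFT samples around the peak (the paper's exact three-sample estimator may differ):

        import numpy as np

        fs, n = 5000.0, 1024
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 151.7 * t)        # off-bin test tone

        mag = np.abs(np.fft.rfft(x * np.hanning(n)))
        k = int(np.argmax(mag))
        a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
        delta = 0.5 * (a - c) / (a - 2 * b + c)  # fractional-bin offset
        f_est = (k + delta) * fs / n
        print(f"coarse bin: {k * fs / n:.2f} Hz, refined: {f_est:.2f} Hz")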

  17. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System

    PubMed Central

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-01-01

    Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system. PMID:27355946

  18. Use of artificial neural network for spatial rainfall analysis

    NASA Astrophysics Data System (ADS)

    Paraskevas, Tsangaratos; Dimitrios, Rozos; Andreas, Benardos

    2014-04-01

    In the present study, precipitation data measured at 23 rain gauge stations over Achaia County, Greece, were used to estimate the spatial distribution of mean annual precipitation values over a specific catchment area. This was achieved by programming an Artificial Neural Network (ANN) that uses the feed-forward back-propagation algorithm as an alternative interpolation technique. A Geographic Information System (GIS) was utilized to process the data derived by the ANN and to create a continuous surface representing the spatial distribution of mean annual precipitation. The ANN introduced an optimization procedure, implemented during training, that adjusts the number of hidden neurons and the convergence of the ANN in order to select the best network architecture. The performance of the ANN was evaluated using three standard statistical evaluation criteria applied to the study area and was found to be good. The outcomes were also compared with the results of a previous study in the research area that used linear regression analysis to estimate mean annual precipitation values, with the ANN giving more accurate results. The information and knowledge gained from the present study could improve the accuracy of analyses concerning hydrological and hydrogeological models, groundwater studies, flood-related applications and climate analysis studies.
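
    A sketch of the interpolation idea is given below (Python), with invented gauge data and scikit-learn's MLPRegressor standing in for the hand-programmed feed-forward back-propagation network:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)
        n_gauges = 23
        X = rng.uniform(0, 1, (n_gauges, 3))   # easting, northing, elevation
        y = 800 + 900 * X[:, 2] + 120 * X[:, 1] + rng.normal(0, 40, n_gauges)  # mm/yr

        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                           random_state=0).fit(X, y)

        # Evaluate on a grid at a fixed elevation to mimic the continuous
        # precipitation surface built in the GIS.
        gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
        grid = np.column_stack([gx.ravel(), gy.ravel(), np.full(gx.size, 0.5)])
        surface = net.predict(grid).reshape(gx.shape)
        print(surface.shape, float(surface.min()), float(surface.max()))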

  19. Open Data, Open Specifications and Free and Open Source Software: A powerful mix to create distributed Web-based water information systems

    NASA Astrophysics Data System (ADS)

    Arias, Carolina; Brovelli, Maria Antonia; Moreno, Rafael

    2015-04-01

    We are in an age when water resources are increasingly scarce and the impacts of human activities on them are ubiquitous. These problems don't respect administrative or political boundaries and they must be addressed integrating information from multiple sources at multiple spatial and temporal scales. Communication, coordination and data sharing are critical for addressing the water conservation and management issues of the 21st century. However, different countries, provinces, local authorities and agencies dealing with water resources have diverse organizational, socio-cultural, economic, environmental and information technology (IT) contexts that raise challenges to the creation of information systems capable of integrating and distributing information across their areas of responsibility in an efficient and timely manner. Tight and disparate financial resources, and dissimilar IT infrastructures (data, hardware, software and personnel expertise) further complicate the creation of these systems. There is a pressing need for distributed interoperable water information systems that are user friendly, easily accessible and capable of managing and sharing large volumes of spatial and non-spatial data. In a distributed system, data and processes are created and maintained in different locations each with competitive advantages to carry out specific activities. Open Data (data that can be freely distributed) is available in the water domain, and it should be further promoted across countries and organizations. Compliance with Open Specifications for data collection, storage and distribution is the first step toward the creation of systems that are capable of interacting and exchanging data in a seamlessly (interoperable) way. The features of Free and Open Source Software (FOSS) offer low access cost that facilitate scalability and long-term viability of information systems. The World Wide Web (the Web) will be the platform of choice to deploy and access these systems. Geospatial capabilities for mapping, visualization, and spatial analysis will be important components of these new generation of Web-based interoperable information systems in the water domain. The purpose of this presentation is to increase the awareness of scientists, IT personnel and agency managers about the advantages offered by the combined use of Open Data, Open Specifications for geospatial and water-related data collection, storage and sharing, as well as mature FOSS projects for the creation of interoperable Web-based information systems in the water domain. A case study is used to illustrate how these principles and technologies can be integrated to create a system with the previously mentioned characteristics for managing and responding to flood events.

  20. Agricultural Aircraft Aid

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Farmers are increasingly turning to aerial application of pesticides, fertilizers and other materials. Sometimes uneven distribution of the chemicals is caused by worn nozzles, improper alignment of spray nozzles or system leaks. If this happens, the job must be redone, at added expense to both the pilot and the customer. Traditional pattern analysis techniques take days or weeks. Utilizing NASA's wind tunnel and computer validation technology, Dr. Roth of Oklahoma State University (OSU) developed a system that provides answers within minutes. Called the Rapid Distribution Pattern Evaluation System, it consists of a 100-foot measurement frame tied into computerized analysis and readout equipment. The system is mobile, delivered by trailer to airfields in agricultural areas where OSU conducts educational "fly-ins." A fly-in typically draws 50 to 100 aerial applicators, researchers, chemical suppliers and regulatory officials. An applicator can have his spray pattern checked, and a computerized readout, available in five to 12 minutes, provides information for correcting shortcomings in the distribution pattern.

  1. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1997-01-01

    The use of computational-model-trained artificial neural networks to acquire damage-specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. Unlike the characteristic fringe pattern or the displacement-amplitude distribution, the bending distribution is very sensitive to component damage. The neural network processor is fast enough for real-time visualization of damage, and the two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements for using finite-element-model-trained neural networks for field inspections of engine components are discussed. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performance of two limiting cases of the neural-net architecture.

  2. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1998-01-01

    The use of computational-model-trained artificial neural networks to acquire damage-specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. Unlike the characteristic fringe pattern or the displacement-amplitude distribution, the bending distribution is very sensitive to component damage. The neural network processor is fast enough for real-time visualization of damage, and the two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements for using finite-element-model-trained neural networks for field inspections of engine components are discussed. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performance of two limiting cases of the neural-net architecture.

  3. Time-Domain Nuclear Magnetic Resonance Investigation of Water Dynamics in Different Ginger Cultivars.

    PubMed

    Huang, Chongyang; Zhou, Qi; Gao, Shan; Bao, Qingjia; Chen, Fang; Liu, Chaoyang

    2016-01-20

    Different ginger cultivars may differ in nutritional and medicinal value. In this study, a time-domain nuclear magnetic resonance method was employed to study water dynamics in different ginger cultivars. Significant differences in the transverse relaxation time (T2) values assigned to the distribution of water in different parts of the plant were observed between Henan ginger and four other ginger cultivars. Ion-concentration and metabolic analyses showed corresponding differences in Mn ion concentrations and organic solutes, respectively, among the cultivars. On the basis of Pearson's correlation analysis, many organic solutes and 6-gingerol, the main active substance of ginger, exhibited significant correlations with water distribution as determined by NMR T2 relaxation, suggesting that the differences in organic solutes may affect water distribution. Our work demonstrates that low-field NMR relaxometry provides useful information about water dynamics in different ginger cultivars as affected by the presence of different organic solutes.
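
    A hedged sketch of one common way to extract T2 components from a relaxation decay, a bi-exponential fit, is shown below on synthetic data; the study's actual relaxometry processing may differ.

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(t, a1, t2_1, a2, t2_2):
          """Two water pools with distinct transverse relaxation times."""
          return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

      t = np.linspace(0.001, 1.0, 200)  # s
      rng = np.random.default_rng(1)
      signal = biexp(t, 0.7, 0.045, 0.3, 0.40) + rng.normal(0, 0.005, t.size)

      popt, _ = curve_fit(biexp, t, signal, p0=(0.5, 0.05, 0.5, 0.5))
      a1, t2_1, a2, t2_2 = popt
      print(f"fast pool: T2 = {t2_1 * 1e3:.1f} ms (weight {a1:.2f}), "
            f"slow pool: T2 = {t2_2 * 1e3:.1f} ms (weight {a2:.2f})")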

  4. Cluster analysis based on dimensional information with applications to feature selection and classification

    NASA Technical Reports Server (NTRS)

    Eigen, D. J.; Fromm, F. R.; Northouse, R. A.

    1974-01-01

    A new clustering algorithm based on dimensional information is presented. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method is presented for choosing the proper number of intervals for the frequency distribution histogram that the algorithm requires. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and the configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown applied to a real-time adaptive classification scheme for the analysis of remotely sensed multispectral scanner data.
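
    The paper's specific interval heuristic is not reproduced here; as a stand-in, the sketch below applies two standard bin-count rules (Sturges and Freedman-Diaconis) to a synthetic feature.

      import numpy as np

      rng = np.random.default_rng(2)
      feature = rng.normal(0.0, 1.0, 500)  # one feature dimension

      # Sturges' rule: ceil(log2(n) + 1) intervals
      sturges = int(np.ceil(np.log2(feature.size) + 1))
      # Freedman-Diaconis rule via numpy's built-in bin-edge chooser
      fd_edges = np.histogram_bin_edges(feature, bins="fd")
      print(f"Sturges: {sturges} bins, "
            f"Freedman-Diaconis: {len(fd_edges) - 1} bins")

      counts, edges = np.histogram(feature, bins=fd_edges)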

  5. Essays on inference in economics, competition, and the rate of profit

    NASA Astrophysics Data System (ADS)

    Scharfenaker, Ellis S.

    This dissertation comprises three papers that demonstrate the role of Bayesian methods of inference and Shannon's information theory in classical political economy. The first chapter explores the empirical distribution of profit rate data from North American firms from 1962 to 2012. It addresses the fact that existing methods for sample selection from noisy profit rate data in the industrial organization field of economics tend to condition on a covariate's value, which risks discarding information. Conditioning sample selection instead on the structure of the profit rate data, by means of a two-component (signal and noise) Bayesian mixture model, we find the profit rate sample to be time-stationary and Laplace distributed, corroborating earlier estimates of cross-section distributions. The second chapter compares alternative probabilistic approaches to discrete (quantal) choice analysis and examines the various ways in which they overlap. In particular, the work on individual choice behavior by Duncan Luce and the extension of this work to quantal response problems by game theoreticians is shown to be related both to the rational inattention work of Christopher Sims, through Shannon's information theory, and to the maximum entropy principle of inference proposed by the physicist Edwin T. Jaynes. In the third chapter I propose a model of "classically" competitive firms facing informational entropy constraints in their decisions to enter or exit markets based on profit rate differentials. The result is a three-parameter logit quantal response distribution for firm entry and exit decisions. Bayesian methods are used for inference into the distribution of entry and exit decisions conditional on profit rate deviations, and firm-level data from Compustat are used to test these predictions.
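
    A minimal sketch of checking the Laplace shape of a profit-rate sample is shown below, with synthetic draws in place of the Compustat data; the dissertation's Bayesian mixture model is richer than this.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      profit_rates = rng.laplace(loc=0.08, scale=0.06, size=5000)  # stand-in

      loc, scale = stats.laplace.fit(profit_rates)
      print(f"location (mode) = {loc:.3f}, scale = {scale:.3f}")

      # Quick tail check: compare empirical and fitted densities; on a log
      # scale a Laplace sample forms the characteristic "tent" shape.
      hist, edges = np.histogram(profit_rates, bins=80, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      fitted = stats.laplace.pdf(centers, loc, scale)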

  6. Using Dual Regression to Investigate Network Shape and Amplitude in Functional Connectivity Analyses

    PubMed Central

    Nickerson, Lisa D.; Smith, Stephen M.; Öngür, Döst; Beckmann, Christian F.

    2017-01-01

    Independent Component Analysis (ICA) is one of the most popular techniques for the analysis of resting state FMRI data because it has several advantageous properties compared with other techniques. Most notably, in contrast to a conventional seed-based correlation analysis, it is model-free and multivariate, thus switching the focus from evaluating the functional connectivity of single brain regions identified a priori to evaluating brain connectivity in terms of all brain resting state networks (RSNs) that simultaneously engage in oscillatory activity. Furthermore, a typical seed-based analysis characterizes RSNs in terms of spatially distributed patterns of correlation (typically by means of simple Pearson's coefficients) and thereby conflates amplitude information of oscillatory activity with noise. ICA and other regression techniques, on the other hand, retain magnitude information and can therefore be sensitive both to changes in the spatially distributed nature of correlations (differences in the spatial pattern or "shape") and to changes in the amplitude of network activity. Moreover, motion can mimic amplitude effects, so it is crucial to use a technique that retains such information to ensure that connectivity differences are accurately localized. In this work, we investigate the dual regression approach that is frequently applied with group ICA to assess group differences in resting state functional connectivity of brain networks. We show how ignoring amplitude effects and excessive motion corrupt connectivity maps and result in spurious connectivity differences. We also show how to implement dual regression to retain amplitude information and how to use dual regression outputs to identify potential motion effects. Two key findings are that using a technique that retains magnitude information, e.g., dual regression, and using strict motion criteria are crucial for controlling network amplitude and motion-related amplitude effects, respectively, in resting state connectivity analyses. We illustrate these concepts using realistic simulated resting state FMRI data and in vivo data acquired in healthy subjects and in patients with bipolar disorder and schizophrenia. PMID:28348512
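
    The two regression stages can be sketched in a few lines of linear algebra. The sketch below uses synthetic matrices and ordinary least squares; it illustrates the dual regression idea, not the FSL implementation.

      import numpy as np

      rng = np.random.default_rng(4)
      n_vox, n_time, n_comp = 2000, 150, 10
      group_maps = rng.normal(size=(n_vox, n_comp))    # spatial maps from group ICA
      subject_data = rng.normal(size=(n_vox, n_time))  # one subject's 4D data, unrolled

      # Stage 1: regress the group maps onto the data to get one time
      # course per component for this subject (shape: comps x time).
      tcs, *_ = np.linalg.lstsq(group_maps, subject_data, rcond=None)

      # Stage 2: regress those time courses onto the data to get
      # subject-specific spatial maps (shape: comps x voxels). Skipping
      # variance normalization of the time courses keeps network amplitude
      # information in the maps, as the paper discusses.
      maps, *_ = np.linalg.lstsq(tcs.T, subject_data.T, rcond=None)
      subject_maps = maps.T                            # voxels x comps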

  7. Overview spectra and axial distribution of spectral line intensities in a high-current vacuum arc with CuCr electrodes

    NASA Astrophysics Data System (ADS)

    Lisnyak, M.; Pipa, A. V.; Gorchakov, S.; Iseni, S.; Franke, St.; Khapour, A.; Methling, R.; Weltmann, K.-D.

    2015-09-01

    Spectroscopic investigations of free-burning vacuum arcs in diffuse mode with CuCr electrodes are presented. The experimental conditions of the investigated arc correspond to a typical system for vacuum circuit breakers. Spectra of six species, Cu I, Cu II, Cu III, Cr I, Cr II and Cr III, have been analyzed in the wavelength range 350-810 nm. The axial intensity distributions were found to depend strongly on the ionization stage of the radiating species: the emission distributions of Cr II and Cu II can be distinguished, as can those of Cr III and Cu III. Information on the axial distribution was used to identify the spectra and to resolve overlapping spectral lines. The overview spectra and some spectral windows recorded with high resolution are presented. An analysis of the axial distributions of emitted light originating from different ionization states is presented and discussed.

  8. Spatial Distribution of Bed Particles in Natural Boulder-Bed Streams

    NASA Astrophysics Data System (ADS)

    Clancy, K. F.; Prestegaard, K. L.

    2001-12-01

    The Wolman pebble count is used to obtain the size distribution of bed particles in natural streams. Statistics such as the median particle size (D50) are used in resistance calculations. Additional information, such as bed particle heterogeneity, may also be obtained from the particle distribution and used to predict sediment transport rates (Hey, 1979; Ferguson, Prestegaard and Ashworth, 1989). Boulder-bed streams have an extremely wide particle size distribution, ranging from sand-sized grains to particles larger than 0.5 m. A study of a natural boulder-bed reach demonstrated that the spatial distribution of the particles is a significant factor in predicting sediment transport and the stability of the stream bed and banks. Further experiments were performed to test the limits of the spatial distribution's effect on sediment transport. Three stream reaches, each 40 m long, were selected with similar hydrologic characteristics and spatial distributions but different average particle sizes. We used a 0.5 × 0.5-m grid and measured four particles within each grid cell. Digital photographs of the streambed were taken in each grid cell and examined using image analysis software to obtain the size and position of the largest particles (D84) within the reach's particle distribution. Cross sections, topography and stream depth were surveyed, and velocities and velocity profiles were measured and recorded. With these data and additional surveys of bankfull floods, we tested the significance of the spatial distributions as average particle size decreases. The spatial distribution of streambed particles may provide information about stream valley formation, bank stability, sediment transport and the growth rate of riparian vegetation.
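
    A minimal sketch of deriving characteristic grain sizes such as D50 and D84 from pebble-count data is shown below; the sizes are synthetic stand-ins for field measurements.

      import numpy as np

      rng = np.random.default_rng(5)
      # Log-normal stand-in spanning sand (~1 mm) to boulders (>500 mm)
      b_axis_mm = rng.lognormal(mean=4.0, sigma=1.2, size=400)

      d50, d84 = np.percentile(b_axis_mm, [50, 84])
      print(f"D50 = {d50:.0f} mm, D84 = {d84:.0f} mm")

      # D84/D50 is one simple index of bed heterogeneity
      print(f"heterogeneity index D84/D50 = {d84 / d50:.2f}")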

  9. Methods to estimate distribution and range extent of grizzly bears in the Greater Yellowstone Ecosystem

    USGS Publications Warehouse

    Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.

    2014-01-01

    The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators. This method was complex and computationally time-consuming, and it excluded observations of unmarked bears. Our objective was to develop a technique for estimating grizzly bear distribution that would allow the use of all verified grizzly bear location data and would be simple enough to update frequently. We placed all verified grizzly bear locations from all sources for 1990-2004 and 1990-2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
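
    As an illustration (assuming the pykrige package as a stand-in for the GIS kriging workflow described here), the sketch below kriges a synthetic occupancy index onto a 3-km grid; all values are invented.

      import numpy as np
      from pykrige.ok import OrdinaryKriging

      rng = np.random.default_rng(6)
      # Cell centres (km) and an occupancy index per cell (synthetic)
      x = rng.uniform(0, 300, 200)
      y = rng.uniform(0, 300, 200)
      z = (np.hypot(x - 150, y - 150) < 100).astype(float) \
          + rng.normal(0, 0.1, 200)

      ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
      gridx = np.arange(0.0, 300.0, 3.0)
      gridy = np.arange(0.0, 300.0, 3.0)
      z_pred, z_var = ok.execute("grid", gridx, gridy)  # surface + variance

      # A distribution boundary could then be contoured at a chosen threshold
      inside = z_pred > 0.5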

  10. Managing security risks for inter-organisational information systems: a multiagent collaborative model

    NASA Astrophysics Data System (ADS)

    Feng, Nan; Wu, Harris; Li, Minqiang; Wu, Desheng; Chen, Fuzan; Tian, Jin

    2016-09-01

    Information sharing across organisations is critical to effectively managing the security risks of inter-organisational information systems. Nevertheless, few previous studies on information systems security have focused on inter-organisational information sharing, and none have studied the sharing of inferred beliefs versus factual observations. In this article, a multiagent collaborative model (MACM) is proposed as a practical solution to assess the risk level of each allied organisation's information system and to support proactive security treatment by sharing beliefs on event probabilities as well as factual observations. In MACM, for each allied organisation's information system, we design four types of agents: an inspection agent, an analysis agent, a control agent, and a communication agent. By sharing soft findings (beliefs) in addition to hard findings (factual observations) among the organisations, each organisation's analysis agent is capable of dynamically predicting its security risk level using a Bayesian network. A real-world implementation illustrates how our model can be used to manage security risks in distributed information systems and shows that sharing soft findings leads to lower expected losses from security risks.
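
    A hedged sketch of the kind of Bayesian network an analysis agent might use is shown below, assuming the pgmpy package; the variables, states and probabilities are invented for illustration and are not from the MACM implementation.

      from pgmpy.models import BayesianNetwork
      from pgmpy.factors.discrete import TabularCPD
      from pgmpy.inference import VariableElimination

      # Hypothetical two-cause risk model: Threat, Vulnerability -> Risk
      model = BayesianNetwork([("Threat", "Risk"), ("Vulnerability", "Risk")])

      cpd_threat = TabularCPD("Threat", 2, [[0.7], [0.3]])
      cpd_vuln = TabularCPD("Vulnerability", 2, [[0.8], [0.2]])
      cpd_risk = TabularCPD(
          "Risk", 2,
          # columns: (Threat, Vulnerability) = (0,0), (0,1), (1,0), (1,1)
          [[0.95, 0.60, 0.70, 0.10],   # P(Risk = low  | parents)
           [0.05, 0.40, 0.30, 0.90]],  # P(Risk = high | parents)
          evidence=["Threat", "Vulnerability"], evidence_card=[2, 2],
      )
      model.add_cpds(cpd_threat, cpd_vuln, cpd_risk)

      # A shared soft finding (raised threat belief) updates the risk level
      infer = VariableElimination(model)
      print(infer.query(["Risk"], evidence={"Threat": 1}))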

  11. Deconvolution analysis of 24-h serum cortisol profiles informs the amount and distribution of hydrocortisone replacement therapy.

    PubMed

    Peters, Catherine J; Hill, Nathan; Dattani, Mehul T; Charmandari, Evangelia; Matthews, David R; Hindmarsh, Peter C

    2013-03-01

    Hydrocortisone therapy is based on a dosing regimen derived from estimates of cortisol secretion, but little is known about how the dose should be distributed throughout the 24 h. We used deconvolution analysis of 24-h serum cortisol profiles to determine 24-h cortisol secretion and distribution, in order to inform hydrocortisone dosing schedules in young children and older adults. Twenty-four-hour serum cortisol profiles from 80 adults (41 men, aged 60-74 years) and 29 children (24 boys, aged 5-9 years) were subjected to deconvolution analysis using an 80-min half-life to ascertain total cortisol secretion and its distribution throughout the 24-h period. Mean daily cortisol secretion was similar between adults (6.3 mg/m² body surface area per day, range 5.1-9.3) and children (8.0 mg/m² body surface area per day, range 5.3-12.0). Peak serum cortisol concentration was higher in children than in adults, whereas nadir serum cortisol concentrations were similar. The timing of the peak serum cortisol concentration was similar (07.05-07.25), whereas the nadir concentration occurred later in adults (midnight) than in children (22.48) (P = 0.003). Children had the highest percentage of cortisol secretion between 06.00 and 12.00 (38.4%), whereas in adults this took place between midnight and 06.00 (45.2%). These observations suggest that the daily hydrocortisone replacement dose should be equivalent on average to 6.3 mg/m² body surface area per day in adults and 8.0 mg/m² body surface area per day in children. Differences in the distribution of the total daily dose between older adults and young children need to be taken into account when using a three or four times per day dosing regimen. © 2012 Blackwell Publishing Ltd.
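
    One simple way to pose such a deconvolution, assuming an exponential elimination kernel built from the 80-min half-life and a non-negativity constraint on secretion, is sketched below on synthetic data; the study's method may differ in detail.

      import numpy as np
      from scipy.optimize import nnls

      dt = 10.0                      # sampling interval, minutes
      half_life = 80.0
      k = np.log(2) / half_life      # elimination rate constant

      t = np.arange(0, 24 * 60, dt)
      kernel = np.exp(-k * t)        # impulse response of one secretion pulse

      # Convolution matrix so that: concentration = A @ secretion
      n = t.size
      A = np.zeros((n, n))
      for j in range(n):
          A[j:, j] = kernel[: n - j]

      # Synthetic "observed" profile from two secretion bursts plus noise
      true_secretion = np.zeros(n)
      true_secretion[int(7 * 60 / dt)] = 5.0    # morning burst
      true_secretion[int(13 * 60 / dt)] = 2.0   # afternoon burst
      rng = np.random.default_rng(7)
      observed = A @ true_secretion + rng.normal(0, 0.05, n)

      # Non-negative least squares recovers a plausible secretion profile
      secretion_est, _ = nnls(A, observed)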

  12. Model-independent analysis of the orientation of fluorescent probes with restricted mobility in muscle fibers.

    PubMed Central

    Dale, R E; Hopkins, S C; an der Heide, U A; Marszałek, T; Irving, M; Goldman, Y E

    1999-01-01

    The orientation of proteins in ordered biological samples can be investigated using steady-state polarized fluorescence from probes conjugated to the protein. A general limitation of this approach is that the probes typically exhibit rapid orientational motion ("wobble") with respect to the protein backbone. Here we present a method for characterizing the extent of this wobble and for removing its effects from the available information about the static orientational distribution of the probes. The analysis depends on four assumptions: 1) the probe wobble is fast compared with the nanosecond time scale of its excited-state decay; 2) the orientational distributions of the absorption and emission transition dipole moments are cylindrically symmetrical about a common axis c fixed in the protein; 3) protein motions are negligible during the excited-state decay; 4) the distribution of c is cylindrically symmetrical about the director of the experimental sample. In a muscle fiber, the director is the fiber axis, F. All of the information on the orientational order of the probe that is available from measurements of linearly polarized fluorescence is contained in five independent polarized fluorescence intensities measured with excitation and emission polarizers parallel or perpendicular to F and with the propagation axis of the detected fluorescence parallel or perpendicular to that of the excitation. The analysis then yields the average second-rank and fourth-rank order parameters (⟨P2⟩ and ⟨P4⟩) of the angular distribution of c relative to F, and ⟨P2a⟩ and ⟨P2e⟩, the average second-rank order parameters of the angular distribution for wobble of the absorption and emission transition dipole moments relative to c. The method can also be applied to other cylindrically ordered systems such as oriented lipid bilayer membranes and to processes slower than fluorescence that may be observed using longer-lived optically excited states. PMID:10049341
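
    For orientation, the sketch below computes second- and fourth-rank order parameters for a synthetic axis distribution directly from angles; the actual analysis recovers these quantities from the five polarized intensities rather than from known angles.

      import numpy as np

      rng = np.random.default_rng(8)
      # Stand-in angles of the axis c relative to the fiber axis F (radians);
      # a true axial distribution would carry a sin(theta) weighting, omitted
      # here because the angles are synthetic anyway.
      theta = np.abs(rng.normal(0.35, 0.15, 10000))

      c = np.cos(theta)
      p2 = np.mean((3 * c**2 - 1) / 2)                  # Legendre P2
      p4 = np.mean((35 * c**4 - 30 * c**2 + 3) / 8)     # Legendre P4
      print(f"<P2> = {p2:.3f}, <P4> = {p4:.3f}")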

  13. Identification of potential rockfall source areas at a regional scale using a DEM-based geomorphometric analysis

    NASA Astrophysics Data System (ADS)

    Loye, A.; Jaboyedoff, M.; Pedrazzini, A.

    2009-10-01

    The availability of high resolution Digital Elevation Models (DEMs) at a regional scale enables the analysis of topography with a high level of detail, so a DEM-based geomorphometric approach becomes more accurate for detecting potential rockfall sources. Potential rockfall source areas are identified according to the slope angle distribution deduced from the high resolution DEM, crossed with other information extracted from geological and topographic maps in GIS format. The slope angle distribution can be decomposed into several Gaussian distributions that can be considered characteristic of morphological units: rock cliffs, steep slopes, footslopes and plains. Terrain is considered a potential rockfall source when its slope angle lies above a threshold, defined where the Gaussian distribution of the morphological unit "rock cliffs" becomes dominant over that of "steep slopes". In addition to this analysis, the cliff outcrops indicated on the topographic maps were added. However, because these outcrops also contain flat areas, only slope angle values above the mode of the Gaussian distribution of the morphological unit "steep slopes" were considered. An application of this method over the entire Canton of Vaud (3200 km2), Switzerland, is presented. The results were compared with rockfall sources observed in the field and in orthophoto analysis in order to validate the method. Finally, the influence of the DEM cell size is inspected by applying the methodology at six different DEM resolutions.
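
    A hedged sketch of the decomposition step, assuming a scikit-learn Gaussian mixture as a stand-in for the paper's fitting procedure, is shown below on synthetic slope angles.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(9)
      # Stand-in slope angles (deg) for plains, footslopes, steep slopes, cliffs
      slopes = np.concatenate([
          rng.normal(3, 2, 4000), rng.normal(15, 5, 3000),
          rng.normal(32, 6, 2000), rng.normal(55, 7, 1000),
      ]).clip(0, 90).reshape(-1, 1)

      gmm = GaussianMixture(n_components=4, random_state=0).fit(slopes)
      order = np.argsort(gmm.means_.ravel())   # units sorted by mean slope
      steep, cliff = order[-2], order[-1]

      # Threshold: smallest angle at which the cliff component's posterior
      # probability exceeds that of the steep-slope component
      grid = np.linspace(0, 90, 901).reshape(-1, 1)
      resp = gmm.predict_proba(grid)
      threshold = grid[np.argmax(resp[:, cliff] > resp[:, steep])][0]
      print(f"potential rockfall sources: slope > {threshold:.1f} deg")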

  14. A practical and systematic review of Weibull statistics for reporting strengths of dental materials

    PubMed Central

    Quinn, George D.; Quinn, Janet B.

    2011-01-01

    Objectives: To review the history, theory and current applications of Weibull analyses sufficient to make informed decisions regarding practical use of the analysis in dental material strength testing. Data: References are made to examples in the engineering and dental literature, but this paper also includes illustrative analyses of Weibull plots, fractographic interpretations, and Weibull distribution parameters obtained for a dense alumina, two feldspathic porcelains, and a zirconia. Sources: Informational sources include Weibull's original articles, later articles specific to applications and theoretical foundations of Weibull analysis, texts on statistics and fracture mechanics, and the international standards literature. Study selection: The chosen Weibull analyses are used to illustrate technique, the importance of flaw size distributions, the physical meaning of Weibull parameters, and the concept of "equivalent volumes" used to compare measured strengths obtained from different test configurations. Conclusions: Weibull analysis has a strong theoretical basis and can be of particular value in dental applications, primarily because of test specimen size limitations and the use of different test configurations. Also endemic to dental materials, however, is increased difficulty in satisfying application requirements, such as confirming fracture origin type and diligence in obtaining quality strength data. PMID:19945745
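
    A minimal sketch of a Weibull strength analysis by median-rank regression is shown below on synthetic strengths; standards such as those cited in the paper prescribe more careful estimators.

      import numpy as np

      rng = np.random.default_rng(10)
      m_true, s0_true = 10.0, 800.0                 # shape, scale (MPa)
      strengths = np.sort(s0_true * rng.weibull(m_true, 30))

      n = strengths.size
      ranks = np.arange(1, n + 1)
      F = (ranks - 0.3) / (n + 0.4)                 # median-rank estimator

      # Linearized Weibull CDF: ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma_0)
      x = np.log(strengths)
      y = np.log(-np.log(1 - F))
      m_hat, intercept = np.polyfit(x, y, 1)
      s0_hat = np.exp(-intercept / m_hat)
      print(f"Weibull modulus m = {m_hat:.1f}, "
            f"characteristic strength sigma_0 = {s0_hat:.0f} MPa")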

  15. Analysis of scientific collaboration in Chinese psychiatry research.

    PubMed

    Wu, Ying; Jin, Xing

    2016-05-26

    In recent decades, China has changed profoundly, becoming the country with the world's second-largest economy. The proportion of the Chinese population suffering from mental disorder has grown in parallel with the rapid economic development, as social stresses have increased. The aim of this study is to shed light on the status of collaboration in the Chinese psychiatry field, on which there is currently little research. We sampled 16,224 publications (2003-2012) from 10 core psychiatry journals indexed in the Chinese National Knowledge Infrastructure (CNKI) and the WanFang Database. We used various social network analysis (SNA) methods, such as centrality analysis and core-periphery analysis, to study collaboration; we also used hierarchical clustering analysis. From 2003 to 2012, collaboration increased at the level of authors, institutions and regions in the Chinese psychiatry field. Geographically, these collaborations were distributed unevenly. The 100 most prolific authors and institutions and the 32 regions were used to construct the collaboration map, from which we detected the core authors, institutions and regions. Collaborative behavior was affected by economic development. We should encourage collaborative behavior in the Chinese psychiatry field, as it facilitates knowledge distribution, resource sharing and information acquisition. Collaboration has also helped the field narrow its current research focus, providing further evidence to inform policymakers' decisions to fund research that tackles the increase in mental disorder facing modern China.
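
    As an illustration of the kind of co-authorship analysis described, the sketch below builds a toy collaboration graph with NetworkX and extracts centralities and a densely connected core; the author names are invented.

      import itertools
      import networkx as nx

      papers = [  # hypothetical author lists, one per publication
          ["Li", "Wang", "Zhang"],
          ["Li", "Chen"],
          ["Wang", "Zhang"],
          ["Chen", "Liu", "Li"],
      ]

      G = nx.Graph()
      for authors in papers:
          for a, b in itertools.combinations(authors, 2):
              w = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
              G.add_edge(a, b, weight=w)

      centrality = nx.degree_centrality(G)
      core = nx.k_core(G, k=2)   # densely collaborating core of the network
      print(sorted(centrality, key=centrality.get, reverse=True)[:3])
      print("core authors:", sorted(core.nodes))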

  16. Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012

    PubMed Central

    Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade

    2017-01-01

    ABSTRACT OBJECTIVE Analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. METHODS Exploratory study considering cases of classical dengue and severe dengue with laboratory confirmation of the infection in the city of Rio de Janeiro during 2011 and 2012. Cases notified in the Notifiable Diseases Information System (SINAN) in 2011 and 2012 were georeferenced using the "street" and "number" fields, with the automatic process of the geocoding tool in ArcGIS 10. The spatial analysis was done with the kernel density estimator. RESULTS Kernel density revealed hotspots for classical dengue that did not coincide geographically with those for severe dengue and that were located in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process lost 41% of the classical dengue records and 17% of the severe dengue records because of the address quality in the SINAN form. CONCLUSIONS The hotspots near the favelas suggest that the social vulnerability of these localities may be an influencing factor in the occurrence of the disease, since the supply of and access to essential goods and services for the population are deficient there. To reduce this vulnerability, interventions must be related to macroeconomic policies. PMID:28832752
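
    A hedged sketch of a kernel density surface over case coordinates, using SciPy in place of the GIS estimator and synthetic coordinates in place of the notified cases, is shown below.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(11)
      # Stand-in projected case coordinates (km), clustered at two hotspots
      cases = np.concatenate([
          rng.normal([5, 5], 0.8, size=(300, 2)),
          rng.normal([12, 9], 1.2, size=(150, 2)),
      ])

      kde = gaussian_kde(cases.T)   # expects (dimensions, n_points) layout

      gx, gy = np.mgrid[0:20:200j, 0:15:150j]
      density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
      hotspot_mask = density > np.quantile(density, 0.95)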

  17. Management of Globally Distributed Software Development Projects in Multiple-Vendor Constellations

    NASA Astrophysics Data System (ADS)

    Schott, Katharina; Beck, Roman; Gregory, Robert Wayne

    Global information systems development outsourcing is an evident trend that is expected to continue in the foreseeable future. IS-related services are increasingly provided not only from different geographical sites simultaneously but also by multiple service providers based in different countries. The purpose of this paper is to understand how the involvement of multiple service providers affects the management of globally distributed information systems development projects. As research on this topic is scarce, we applied an exploratory in-depth single-case study design as our research approach. The case we analyzed comprises a global software development outsourcing project initiated by a German bank together with several globally distributed vendors. For data collection and analysis we adopted techniques suggested by the grounded theory method. Whereas the extant literature points out the increased management overhead associated with multi-sourcing, the analysis of our case suggests that the effort required for managing global outsourcing projects with multiple vendors depends, among other things, on the maturity of the cooperation within the vendor portfolio. Furthermore, our data indicate that this interplay maturity is positively influenced by knowledge about the client derived from pre-existing client-vendor relationships. The paper concludes by offering theoretical and practical implications.

  18. Assessing the features of extreme smog in China and the differentiated treatment strategy

    NASA Astrophysics Data System (ADS)

    Deng, Lu; Zhang, Zhengjun

    2018-01-01

    Extreme smog can have potentially harmful effects on human health, the economy and daily life. However, average (mean) values do not provide strategically useful information for the hazard analysis and control of extreme smog. This article investigates China's smog extremes by applying extreme value analysis to hourly PM2.5 data from 2014 to 2016 obtained from monitoring stations across China. By fitting a generalized extreme value (GEV) distribution to exceedances over a station-specific extreme smog level at each monitoring location, all study stations are grouped into eight categories based on the estimated mean and shape parameter values of the fitted GEV distributions. The extreme features characterized by the mean of the fitted extreme value distribution, the maximum frequency and the tail index of extreme smog at each location are analysed. These features can provide useful information for central and local governments to apply differentiated treatments to cities in different categories and to pursue similar prevention goals and control strategies among cities belonging to the same category across a range of areas. Furthermore, hazardous hours, breaking probability and the 1-year return level of each station are presented by category, based on which future control and reduction targets for extreme smog are proposed for the Beijing-Tianjin-Hebei cities as an example.
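
    As a rough illustration (not the paper's estimator), the sketch below fits a GEV to synthetic daily PM2.5 maxima with SciPy and reads off a 1-year return level.

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(12)
      # Stand-in daily maxima of hourly PM2.5 (ug/m^3) over three years
      daily_max = 150 * rng.weibull(1.5, size=3 * 365) + 35

      shape, loc, scale = genextreme.fit(daily_max)

      # 1-year return level: exceeded on average once per 365 daily maxima
      T = 365
      return_level = genextreme.ppf(1 - 1 / T, shape, loc, scale)
      print(f"shape = {shape:.2f}, "
            f"1-year return level ~ {return_level:.0f} ug/m^3")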

  19. Using meta-information of a posteriori Bayesian solutions of the hypocentre location task for improving accuracy of location error estimation

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2015-06-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, quantitative estimation of the location accuracy is a challenging task even when the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
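
    A minimal sketch of the entropy meta-characteristic, computed here for a synthetic gridded a posteriori location pdf, is shown below.

      import numpy as np

      # Stand-in 2D posterior over a 50 x 50 horizontal location grid
      x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
      posterior = np.exp(-((x - 0.2) ** 2 + (y + 0.1) ** 2) / 0.02)
      posterior /= posterior.sum()        # normalize to a probability mass

      # Shannon entropy in nats; a sharper posterior (smaller entropy)
      # suggests a better-constrained hypocentre
      p = posterior[posterior > 0]
      entropy = -np.sum(p * np.log(p))
      print(f"posterior entropy = {entropy:.2f} nats")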

  20. Glyph-based analysis of multimodal directional distributions in vector field ensembles

    NASA Astrophysics Data System (ADS)

    Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger

    2015-04-01

    Ensemble simulations are increasingly performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in the case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing the direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
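
    A hedged sketch of fitting a two-component von Mises mixture to directional data by EM is shown below; it uses a standard approximation for the concentration update and synthetic angles, and is simpler than the paper's modeling.

      import numpy as np
      from scipy.stats import vonmises

      # Synthetic bimodal wind directions (radians)
      angles = np.concatenate([
          vonmises.rvs(4.0, loc=0.5, size=300, random_state=1),
          vonmises.rvs(6.0, loc=2.6, size=200, random_state=2),
      ])

      K = 2
      w = np.full(K, 1.0 / K)          # mixture weights
      mu = np.array([0.0, np.pi])      # initial mode directions
      kappa = np.ones(K)               # initial concentrations

      for _ in range(100):
          # E-step: responsibility of each component for each angle
          dens = np.stack([w[k] * vonmises.pdf(angles, kappa[k], loc=mu[k])
                           for k in range(K)])
          resp = dens / dens.sum(axis=0)

          # M-step: weighted circular means and concentrations
          for k in range(K):
              r = resp[k]
              C, S = np.sum(r * np.cos(angles)), np.sum(r * np.sin(angles))
              mu[k] = np.arctan2(S, C)
              Rbar = np.hypot(C, S) / r.sum()
              # standard approximation for inverting the Bessel ratio I1/I0
              kappa[k] = Rbar * (2 - Rbar**2) / (1 - Rbar**2)
              w[k] = r.sum() / angles.size

      print("modes:", mu, "concentrations:", kappa, "weights:", w)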
