Sample records for class conditional probability

  1. Prevalence and co-occurrence of addictive behaviors among former alternative high school youth: A longitudinal follow-up study.

    PubMed

    Sussman, Steve; Pokhrel, Pallav; Sun, Ping; Rohrbach, Louise A; Spruijt-Metz, Donna

    2015-09-01

    Recent work has studied addictions using a matrix measure, which taps multiple addictions through single responses for each type. This is the first longitudinal study using a matrix measure. We investigated the use of this approach among former alternative high school youth (average age = 19.8 years at baseline; longitudinal n = 538) at risk for addictions. Lifetime and last 30-day prevalence of one or more of 11 addictions reviewed in other work was the primary focus (i.e., cigarettes, alcohol, hard drugs, shopping, gambling, Internet, love, sex, eating, work, and exercise). These were examined at two time-points one year apart. Latent class and latent transition analyses (LCA and LTA) were conducted in Mplus. Prevalence rates were stable across the two time-points. As in the cross-sectional baseline analysis, the 2-class model (addiction class, non-addiction class) fit the data better at follow-up than models with more classes. Item-response or conditional probabilities for each addiction type did not differ between time-points. As a result, the LTA model constrained the conditional probabilities to be equal across the two time-points. In the addiction class, larger conditional probabilities (i.e., 0.40-0.49) were found for love, sex, exercise, and work addictions; medium conditional probabilities (i.e., 0.17-0.27) were found for cigarette, alcohol, other drugs, eating, Internet and shopping addiction; and a small conditional probability (0.06) was found for gambling. Persons in an addiction class tend to remain in this addiction class over a one-year period.
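
    The two-class model described above assigns each addiction item a conditional (item-response) probability given class membership. As a rough illustration of how such probabilities translate into posterior class membership for a response pattern, the sketch below applies Bayes' rule with purely hypothetical numbers, not the study's estimates:

```python
# Hypothetical two-class latent class model: given class prevalences and
# item-response (conditional) probabilities, compute the posterior
# probability that a respondent endorsing a given pattern belongs to the
# "addiction" class. All numbers are illustrative, not the study's estimates.

def posterior_class_prob(pattern, prevalence, cond_probs):
    """pattern: 0/1 endorsements; cond_probs[c][j] = P(item j = 1 | class c)."""
    joint = []
    for c, pi in enumerate(prevalence):
        p = pi
        for j, y in enumerate(pattern):
            q = cond_probs[c][j]
            p *= q if y else (1.0 - q)
        joint.append(p)
    total = sum(joint)
    return [p / total for p in joint]

# Two classes, three items (e.g., love, work, gambling "addictions").
prevalence = [0.3, 0.7]                        # [addiction, non-addiction]
cond_probs = [[0.45, 0.40, 0.06],              # addiction class
              [0.05, 0.04, 0.01]]              # non-addiction class

post = posterior_class_prob([1, 1, 0], prevalence, cond_probs)
print(post)  # [P(addiction | pattern), P(non-addiction | pattern)]
```

    Endorsing two high-conditional-probability items makes membership in the addiction class far more likely a posteriori, even with a lower prior prevalence.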

  2. Prevalence and co-occurrence of addictive behaviors among former alternative high school youth: A longitudinal follow-up study

    PubMed Central

    Sussman, Steve; Pokhrel, Pallav; Sun, Ping; Rohrbach, Louise A.; Spruijt-Metz, Donna

    2015-01-01

    Background and Aims Recent work has studied addictions using a matrix measure, which taps multiple addictions through single responses for each type. This is the first longitudinal study using a matrix measure. Methods We investigated the use of this approach among former alternative high school youth (average age = 19.8 years at baseline; longitudinal n = 538) at risk for addictions. Lifetime and last 30-day prevalence of one or more of 11 addictions reviewed in other work was the primary focus (i.e., cigarettes, alcohol, hard drugs, shopping, gambling, Internet, love, sex, eating, work, and exercise). These were examined at two time-points one year apart. Latent class and latent transition analyses (LCA and LTA) were conducted in Mplus. Results Prevalence rates were stable across the two time-points. As in the cross-sectional baseline analysis, the 2-class model (addiction class, non-addiction class) fit the data better at follow-up than models with more classes. Item-response or conditional probabilities for each addiction type did not differ between time-points. As a result, the LTA model constrained the conditional probabilities to be equal across the two time-points. In the addiction class, larger conditional probabilities (i.e., 0.40−0.49) were found for love, sex, exercise, and work addictions; medium conditional probabilities (i.e., 0.17−0.27) were found for cigarette, alcohol, other drugs, eating, Internet and shopping addiction; and a small conditional probability (0.06) was found for gambling. Discussion and Conclusions Persons in an addiction class tend to remain in this addiction class over a one-year period. PMID:26551909

  3. Probabilistic cluster labeling of imagery data

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1980-01-01

    The problem of obtaining the probabilities of class labels for the clusters using spectral and spatial information from a given set of labeled patterns and their neighbors is considered. A relationship is developed between class- and cluster-conditional densities in terms of the probabilities of class labels for the clusters. Expressions are presented for updating the a posteriori probabilities of the classes of a pixel using information from its local neighborhood. Fixed-point iteration schemes are developed for obtaining the optimal probabilities of class labels for the clusters. These schemes utilize spatial information and also the probabilities of label imperfections. Experimental results from the processing of remotely sensed multispectral scanner imagery data are presented.
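
    The neighborhood-based updating described above can be illustrated with a minimal Bayes-rule sketch; the cluster label probabilities below are hypothetical, and the paper's fixed-point iteration schemes are not reproduced:

```python
# Toy Bayes update: refine a pixel's class posterior using the class-label
# probabilities attached to neighboring pixels' clusters. Each neighbor's
# cluster contributes P(class k | its cluster); we treat these as independent
# evidence terms and renormalize. All probabilities are hypothetical.

def update_posterior(prior, neighbor_label_probs):
    """prior[k] = P(class k); each neighbor contributes P(class k | its cluster)."""
    post = list(prior)
    for label_probs in neighbor_label_probs:
        post = [p * l for p, l in zip(post, label_probs)]
        s = sum(post)
        post = [p / s for p in post]  # renormalize after each evidence term
    return post

prior = [0.5, 0.5]                          # two classes, uninformative prior
neighbors = [[0.8, 0.2], [0.7, 0.3]]        # cluster-label probabilities
post = update_posterior(prior, neighbors)
print(post)
```

    Two neighbors whose clusters favor class 1 push the pixel's posterior strongly toward class 1.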

  4. Class dependency of fuzzy relational database using relational calculus and conditional probability

    NASA Astrophysics Data System (ADS)

    Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya

    2018-03-01

    In this paper, we propose a design of a fuzzy relational database that handles a conditional probability relation using fuzzy relational calculus. Previous research has examined equivalence classes in fuzzy databases using similarity or approximate relations, and the fuzzy dependency arising from such equivalence classes is an interesting topic to investigate. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for database operations such as 'projection', 'selection', 'injection' and 'natural join'. Using the fuzzy relational calculus and conditional probabilities, we introduce notions of equivalence class, redundancy, and dependency in the theory of fuzzy relational databases.

  5. 49 CFR 173.50 - Class 1-Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... insensitive that there is very little probability of initiation or of transition from burning to detonation under normal conditions of transport. 1 The probability of transition from burning to detonation is... contain only extremely insensitive detonating substances and which demonstrate a negligible probability of...

  6. REGULATION OF GEOGRAPHIC VARIABILITY IN HAPLOID:DIPLOID RATIOS OF BIPHASIC SEAWEED LIFE CYCLES(1).

    PubMed

    da Silva Vieira, Vasco Manuel Nobre de Carvalho; Santos, Rui Orlando Pimenta

    2012-08-01

    The relative abundance of haploid and diploid individuals (H:D) in isomorphic marine algal biphasic cycles varies spatially, but only if vital rates of haploid and diploid phases vary differently with environmental conditions (i.e. conditional differentiation between phases). Vital rates of isomorphic phases in particular environments may be determined by subtle morphological or physiological differences. Herein, we test numerically how geographic variability in H:D is regulated by conditional differentiation between isomorphic life phases and the type of life strategy of populations (i.e. life cycles dominated by reproduction, survival or growth). Simulation conditions were selected using available data on H:D spatial variability in seaweeds. Conditional differentiation between ploidy phases had a small effect on the H:D variability for species with life strategies that invest either in fertility or in growth. Conversely, species with life strategies that invest mainly in survival exhibited high variability in H:D through a conditional differentiation in stasis (the probability of staying in the same size class), breakage (the probability of changing to a smaller size class) or growth (the probability of changing to a bigger size class). These results were consistent with observed geographic variability in H:D of natural marine algae populations. © 2012 Phycological Society of America.

  7. Multiple Chronic Conditions and Hospitalizations Among Recipients of Long-Term Services and Supports

    PubMed Central

    Van Cleave, Janet H.; Egleston, Brian L.; Abbott, Katherine M.; Hirschman, Karen B.; Rao, Aditi; Naylor, Mary D.

    2016-01-01

    Background Among older adults receiving long-term services and supports (LTSS), debilitating hospitalizations are a pervasive clinical and research problem. Multiple chronic conditions (MCC) are prevalent in LTSS recipients. However, the combination of MCC and diseases associated with hospitalizations of LTSS recipients is unclear. Objective The purpose of this analysis was to determine the association between classes of MCC in newly enrolled LTSS recipients and the number of hospitalizations over a one-year period following enrollment. Methods This report is based on secondary analysis of extant data from a longitudinal cohort study of 470 new recipients of LTSS, ages 60 years and older, receiving services in assisted living facilities, nursing homes, or through home- and community-based services. Using baseline chronic conditions reported in medical records, latent class analysis (LCA) was used to identify classes of MCC and posterior probabilities of membership in each class. Poisson regressions were used to estimate the relative ratio between posterior probabilities of class membership and number of hospitalizations during the 3-month period prior to the start of LTSS (baseline) and then every three months forward through 12 months. Results Three latent MCC-based classes, named Cardiopulmonary, Cerebrovascular/Paralysis, and All Other Conditions, were identified. The Cardiopulmonary class was associated with elevated numbers of hospitalization compared to the All Other Conditions class (relative ratio [RR] = 1.88, 95% CI [1.33, 2.65], p < .001). Conclusion Older LTSS recipients with a combination of MCCs that includes cardiopulmonary conditions have increased risk for hospitalization. PMID:27801713

  8. Dynamical Correspondence in a Generalized Quantum Theory

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2015-05-01

    In order to figure out why quantum physics needs the complex Hilbert space, many attempts have been made to distinguish the C*-algebras and von Neumann algebras in more general classes of abstractly defined Jordan algebras (JB- and JBW-algebras). One particularly important distinguishing property was identified by Alfsen and Shultz and is the existence of a dynamical correspondence. It reproduces the dual role of the selfadjoint operators as observables and generators of dynamical groups in quantum mechanics. In the paper, this concept is extended to another class of nonassociative algebras, arising from recent studies of the quantum logics with a conditional probability calculus and particularly of those that rule out third-order interference. The conditional probability calculus is a mathematical model of the Lüders-von Neumann quantum measurement process, and third-order interference is a property of the conditional probabilities which was discovered by Sorkin (Mod Phys Lett A 9:3119-3127, 1994) and which is ruled out by quantum mechanics. It is shown then that the postulates that a dynamical correspondence exists and that the square of any algebra element is positive still characterize, in the class considered, those algebras that emerge from the selfadjoint parts of C*-algebras equipped with the Jordan product. Within this class, the two postulates thus result in ordinary quantum mechanics using the complex Hilbert space or, vice versa, a genuine generalization of quantum theory must omit at least one of them.

  9. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations

    PubMed Central

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360

  10. A Repeated Trajectory Class Model for Intensive Longitudinal Categorical Outcome

    PubMed Central

    Lin, Haiqun; Han, Ling; Peduzzi, Peter N.; Murphy, Terrence E.; Gill, Thomas M.; Allore, Heather G.

    2014-01-01

    This paper presents a novel repeated latent class model for a longitudinal response that is frequently measured as in our prospective study of older adults with monthly data on activities of daily living (ADL) for more than ten years. The proposed method is especially useful when the longitudinal response is measured much more frequently than other relevant covariates. The repeated trajectory classes represent distinct temporal patterns of the longitudinal response wherein an individual’s membership in the trajectory classes may renew or change over time. Within a trajectory class, the longitudinal response is modeled by a class-specific generalized linear mixed model. Effectively, an individual may remain in a trajectory class or switch to another as the class membership predictors are updated periodically over time. The identification of a common set of trajectory classes allows changes among the temporal patterns to be distinguished from local fluctuations in the response. An informative event such as death is jointly modeled by class-specific probability of the event through shared random effects. We do not impose the conditional independence assumption given the classes. The method is illustrated by analyzing the change over time in ADL trajectory class among 754 older adults with 70500 person-months of follow-up in the Precipitating Events Project. We also investigate the impact of jointly modeling the class-specific probability of the event on the parameter estimates in a simulation study. The primary contribution of our paper is the periodic updating of trajectory classes for a longitudinal categorical response without assuming conditional independence. PMID:24519416

  11. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
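
    A minimal sketch of the discrete Karhunen-Loeve (principal component) reduction described above, with random data standing in for six-band TM pixels; the report's weighted and hybrid variants are not reproduced:

```python
import numpy as np

# Karhunen-Loeve (principal component) reduction of six "bands" to three.
# Random data stands in for Thematic Mapper pixels.

rng = np.random.default_rng(0)
pixels = rng.normal(size=(1000, 6))               # 1000 pixels, 6 reflective bands
pixels[:, 3] = pixels[:, 0] + 0.1 * pixels[:, 3]  # induce inter-band correlation

cov = np.cov(pixels, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
basis = eigvecs[:, order[:3]]                     # top-3 eigenvectors

reduced = (pixels - pixels.mean(axis=0)) @ basis
explained = float(eigvals[order[:3]].sum() / eigvals.sum())
print(reduced.shape, round(explained, 3))
```

    The `explained` ratio corresponds to the "percentage of the cumulative eigenvalues" criterion mentioned in the abstract; with strongly correlated bands, a three-dimensional feature space retains most of the variance.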

  12. Implementing Inquiry-Based Learning and Examining the Effects in Junior College Probability Lessons

    ERIC Educational Resources Information Center

    Chong, Jessie Siew Yin; Chong, Maureen Siew Fang; Shahrill, Masitah; Abdullah, Nor Azura

    2017-01-01

    This study examined how Year 12 students use their inquiry skills in solving conditional probability questions by means of Inquiry-Based Learning application. The participants consisted of 66 students of similar academic abilities in Mathematics, selected from three classes, along with their respective teachers. Observational rubric and lesson…

  13. Assessing Disease Class-Specific Diagnostic Ability: A Practical Adaptive Test Approach.

    ERIC Educational Resources Information Center

    Papa, Frank J.; Schumacker, Randall E.

    Measures of the robustness of disease class-specific diagnostic concepts could play a central role in training programs designed to assure the development of diagnostic competence. In the pilot study, the authors used disease/sign-symptom conditional probability estimates, Monte Carlo procedures, and artificial intelligence (AI) tools to create…

  14. Exploring the full natural variability of eruption sizes within probabilistic hazard assessment of tephra dispersal

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni

    2014-05-01

    The intrinsic uncertainty and variability associated with the size of the next eruption strongly affect short- to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as a specific combination of eruptive size and vent position), selected with subjective criteria. On the other hand, probabilistic hazard assessments (PHA) consistently explore the natural variability of such scenarios. PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions in classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results from combining simulations considering different volcanological and meteorological conditions through a weight given by their specific probability of occurrence. However, volcanological parameters, such as erupted mass, eruption column height and duration, bulk granulometry, and fraction of aggregates, typically encompass a wide range of values. Because of such variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological inputs are chosen by using a stratified sampling method. This procedure avoids the bias introduced by selecting single representative scenarios and thus neglecting most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weight events falling within the same size class.
While a uniform weight for all the events belonging to a size class is the most straightforward idea, it implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class has a much larger weight than the smallest event of the subsequent size class. In order to overcome this problem, in this study, we propose an innovative solution able to smoothly link the weight variability within each size class to the variability among the size classes through a common power law and, simultaneously, respect the probability of different size classes conditional on the occurrence of an eruption. Embedding this procedure into the Bayesian Event Tree scheme enables tephra fall PHA, quantified through hazard curves and maps that provide readable results applicable in planning risk mitigation actions, together with the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained clearly show that PHA accounting for the whole natural variability significantly differs from that based on representative scenarios, as is common practice in volcanic hazard assessment.

  15. 36 CFR 294.21 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Has a geographic feature that aids in creating an effective fire break, such as a road or a ridge top; or (3) Is in condition class 3 as defined by HFRA. Fire hazard and risk: The fuel conditions on the landscape. Fire occurrence: The probability of wildfire ignition based on historic fire occurrence records...

  16. 36 CFR 294.21 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Has a geographic feature that aids in creating an effective fire break, such as a road or a ridge top; or (3) Is in condition class 3 as defined by HFRA. Fire hazard and risk: The fuel conditions on the landscape. Fire occurrence: The probability of wildfire ignition based on historic fire occurrence records...

  17. 36 CFR 294.21 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Has a geographic feature that aids in creating an effective fire break, such as a road or a ridge top; or (3) Is in condition class 3 as defined by HFRA. Fire hazard and risk: The fuel conditions on the landscape. Fire occurrence: The probability of wildfire ignition based on historic fire occurrence records...

  18. Persistence and extinction for a class of stochastic SIS epidemic models with nonlinear incidence rate

    NASA Astrophysics Data System (ADS)

    Teng, Zhidong; Wang, Lei

    2016-06-01

    In this paper, a class of stochastic SIS epidemic models with nonlinear incidence rate is investigated. It is shown that the extinction and persistence of the disease in probability are determined by a threshold value R̃0: if R̃0 < 1 and an additional condition holds, the disease dies out, and if R̃0 > 1, the disease is weakly permanent with probability one. To obtain the permanence in the mean of the disease, a new quantity R̂0 is introduced, and it is proved that if R̂0 > 1 the disease is permanent in the mean with probability one. Furthermore, numerical simulations are presented to illustrate some open problems given in Remarks 1-3 and 5 of this paper.

  19. Temporal patterns of apparent leg band retention in North American geese

    USGS Publications Warehouse

    Zimmerman, Guthrie S.; Kendall, William L.; Moser, Timothy J.; White, Gary C.; Doherty, Paul F.

    2009-01-01

    An important assumption of mark-recapture studies is that individuals retain their marks, which has not been assessed for goose reward bands. We estimated aluminum leg band retention probabilities and modeled how band retention varied with band type (standard vs. reward band), band age (1-40 months), and goose characteristics (species and size class) for Canada (Branta canadensis), cackling (Branta hutchinsii), snow (Chen caerulescens), and Ross's (Chen rossii) geese that field coordinators double-leg banded during a North American goose reward band study (N = 40,999 individuals from 15 populations). We conditioned all models in this analysis on geese that were encountered with >1 leg band still attached (n = 5,747 dead recoveries and live recaptures). Retention probabilities for standard aluminum leg bands were high (estimate of 0.9995, SE = 0.001) and constant over 1-40 months. In contrast, apparent retention probabilities for reward bands demonstrated an interactive relationship between 5 size and species classes (small cackling, medium Canada, large Canada, snow, and Ross's geese). In addition, apparent retention probabilities for each of the 5 classes varied quadratically with time, being lower immediately after banding and at older age classes. The differential retention probabilities among band types (reward vs. standard) that we observed suggest that 1) models estimating reporting probability should incorporate differential band loss if it is nontrivial, 2) goose managers should consider the costs and benefits of double-banding geese on an operational basis, and 3) the United States Geological Survey Bird Banding Lab should modify protocols for receiving recovery data.

  20. Pattern recognition for passive polarimetric data using nonparametric classifiers

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.

    2005-08-01

    Passive polarization based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of a given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can compute Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
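
    A minimal sketch of the Bayesian decision rule described above, with a crude k-nearest-neighbor class-conditional density estimate standing in for the PNN; the two-dimensional synthetic data are an assumption, not the polarimetric measurements:

```python
import numpy as np

# Maximum a posteriori classification with k-NN class-conditional density
# estimates. Two synthetic, well-separated 2-D classes stand in for features
# extracted from polarimetric images.

rng = np.random.default_rng(1)
class0 = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
class1 = rng.normal(loc=3.0, scale=1.0, size=(200, 2))
priors = np.array([0.5, 0.5])

def knn_density(x, samples, k=10):
    """Crude k-NN density estimate: k / (n * volume of k-th-neighbor ball)."""
    dists = np.sort(np.linalg.norm(samples - x, axis=1))
    r = dists[k - 1]
    volume = np.pi * r ** 2                 # area of a 2-D ball of radius r
    return k / (len(samples) * volume)

def classify(x):
    likelihoods = np.array([knn_density(x, class0), knn_density(x, class1)])
    posterior = likelihoods * priors        # unnormalized a posteriori probability
    return int(np.argmax(posterior))        # MAP decision

a = classify(np.array([0.2, -0.1]))
b = classify(np.array([3.1, 2.8]))
print(a, b)
```

    Points near each class mean are assigned to that class; with equal priors the MAP rule reduces to comparing the estimated class-conditional densities.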

  1. Class-conditional feature modeling for ignitable liquid classification with substantial substrate contribution in fire debris analysis.

    PubMed

    Lopatka, Martin; Sigman, Michael E; Sjerps, Marjan J; Williams, Mary R; Vivó-Truyols, Gabriel

    2015-07-01

    Forensic chemical analysis of fire debris addresses the question of whether ignitable liquid residue is present in a sample and, if so, what type. Evidence evaluation regarding this question is complicated by interference from pyrolysis products of the substrate materials present in a fire. A method is developed to derive a set of class-conditional features for the evaluation of such complex samples. The use of a forensic reference collection allows characterization of the variation in complex mixtures of substrate materials and ignitable liquids even when the dominant feature is not specific to an ignitable liquid. Making use of a novel method for data imputation under complex mixing conditions, a distribution is modeled for the variation between pairs of samples containing similar ignitable liquid residues. Examining the covariance of variables within the different classes allows different weights to be placed on features more important in discerning the presence of a particular ignitable liquid residue. Performance of the method is evaluated using a database of total ion spectrum (TIS) measurements of ignitable liquid and fire debris samples. These measurements include 119 nominal masses measured by GC-MS and averaged across a chromatographic profile. Ignitable liquids are labeled using the American Society for Testing and Materials (ASTM) E1618 standard class definitions. Statistical analysis is performed in the class-conditional feature space wherein new forensic traces are represented based on their likeness to known samples contained in a forensic reference collection. The demonstrated method uses forensic reference data as the basis of probabilistic statements concerning the likelihood of the obtained analytical results given the presence of ignitable liquid residue of each of the ASTM classes (including a substrate only class). When prior probabilities of these classes can be assumed, these likelihoods can be connected to class probabilities. 
In order to compare the performance of this method to previous work, a uniform prior was assumed, resulting in an 81% accuracy for an independent test of 129 real burn samples. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
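
    The minimum-upper-expected-cost criterion described above can be sketched directly: for each action, take the worst-case expected cost over a set of candidate distributions, then choose the action minimizing that bound. The costs and credal set below are illustrative, not the paper's climate-economics model:

```python
# Minimum upper expected cost over a set of probability measures.
# Actions map to costs under three hypothetical states of the world;
# the "credal set" holds several plausible distributions over the states.

actions = {"low_emissions": [10, 12, 15],    # cost under states s1, s2, s3
           "high_emissions": [2, 8, 40]}

credal_set = [[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]]

def upper_expected_cost(costs, dists):
    """Worst-case expectation over all distributions in the set."""
    return max(sum(p * c for p, c in zip(d, costs)) for d in dists)

bounds = {a: upper_expected_cost(c, credal_set) for a, c in actions.items()}
best = min(bounds, key=bounds.get)
print(bounds, best)
```

    Here the precautionary choice is the action whose worst-case expected cost is lowest, even though the other action is cheaper under some individual distributions.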

  3. Physiological condition of autumn-banded mallards and its relationship to hunting vulnerability

    USGS Publications Warehouse

    Hepp, G.R.; Blohm, R.J.; Reynolds, R.E.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    An important topic of waterfowl ecology concerns the relationship between the physiological condition of ducks during the nonbreeding season and fitness, i.e., survival and future reproductive success. We investigated this subject using direct band recovery records of mallards (Anas platyrhynchos) banded in autumn (1 Oct-15 Dec) 1981-83 in the Mississippi Alluvial Valley (MAV) [USA]. A condition index, weight (g)/wing length (mm), was calculated for each duck, and we tested whether condition of mallards at time of banding was related to their probability of recovery during the hunting season. In 3 years, 5,610 mallards were banded and there were 234 direct recoveries. Binary regression models were used to test the relationship between recovery probability and condition. Likelihood-ratio tests were conducted to determine the most suitable model. For mallards banded in autumn there was a negative relationship between physical condition and the probability of recovery: mallards in poor condition at the time of banding had a greater probability of being recovered during the hunting season. In general, this was true for all age and sex classes; however, the strongest relationship occurred for adult males.

  4. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1995-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 <= i <= m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
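    The temporal-context step can be sketched with a standard HMM forward recursion: instantaneous posteriors p(w_i | x) from a pattern-recognition front end are fused with a Markov transition model so that an isolated noisy estimate does not flip the decision. All numbers below are invented toy values, not the patented implementation:

```python
import numpy as np

# Hypothetical fault classes: 0 = normal, 1 = fault. The transition matrix
# encodes temporal context: the system rarely jumps between classes.
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])

# Instantaneous posterior class probabilities p(w_i | x_t) from an assumed
# pattern-recognition component -- with one noisy spike at t = 2.
inst = np.array([[0.90, 0.10],
                 [0.80, 0.20],
                 [0.30, 0.70],   # isolated evidence for a fault
                 [0.85, 0.15]])

# Forward recursion: condition class probabilities on recent past history.
belief = inst[0]
for obs in inst[1:]:
    pred = belief @ A              # propagate through the Markov dynamics
    belief = pred * obs            # fuse with the instantaneous estimate
    belief /= belief.sum()         # renormalize

print(belief.round(3))             # smoothed probabilities after the spike
```

    The single spike of fault evidence is damped by the accumulated history, so the smoothed belief still favors the normal class, which is the robustness the abstract describes.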

  5. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1993-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 <= i <= m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.

  6. Large margin nearest neighbor classifiers.

    PubMed

    Domeniconi, Carlotta; Gunopulos, Dimitrios; Peng, Jing

    2005-07-01

    The nearest neighbor technique is a simple and appealing approach to addressing classification problems. It relies on the assumption of locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with a finite number of examples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. The employment of a locally adaptive metric becomes crucial in order to keep class conditional probabilities close to uniform, thereby minimizing the bias of estimates. We propose a technique that computes a locally flexible metric by means of support vector machines (SVMs). The decision function constructed by SVMs is used to determine the most discriminant direction in a neighborhood around the query. Such a direction provides a local feature weighting scheme. We formally show that our method increases the margin in the weighted space where classification takes place. Moreover, our method has the important advantage of online computational efficiency over competing locally adaptive techniques for nearest neighbor classification. We demonstrate the efficacy of our method using both real and simulated data.
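    The local feature-weighting idea can be sketched as follows. A fixed linear decision function stands in for the SVM boundary (an assumption for brevity); its gradient gives the most discriminant direction, and weighting the metric by that direction suppresses noise dimensions in the nearest neighbor search. Data and weights are invented:

```python
import numpy as np

def weighted_knn_predict(X, y, query, w, k=3):
    """k-nearest-neighbor vote in a feature-weighted (locally adapted) metric."""
    d = np.sqrt(((X - query) ** 2 * w).sum(axis=1))
    nearest = y[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

rng = np.random.default_rng(0)
# Two classes separated only along feature 0; feature 1 is high-variance noise.
X0 = rng.normal([0.0, 0.0], [0.3, 3.0], size=(50, 2))
X1 = rng.normal([1.5, 0.0], [0.3, 3.0], size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Stand-in for the SVM decision function: a linear boundary f(x) = g . x + b,
# whose gradient g is the most discriminant direction near any query.
g = np.array([1.0, 0.0])
w = np.abs(g) / np.abs(g).sum()   # local feature-weighting scheme

query = np.array([1.4, -2.5])
print(weighted_knn_predict(X, y, query, w))            # adapted metric
print(weighted_knn_predict(X, y, query, np.ones(2)))   # plain Euclidean
```

    In the weighted space the noise dimension is ignored, so the query near the class-1 cluster along the discriminant direction is classified correctly even though its noise coordinate is far from most class-1 points.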

  7. Generalized Quantum Theory of Bianchi IX Cosmologies

    NASA Astrophysics Data System (ADS)

    Craig, David; Hartle, James

    2003-04-01

    We apply sum-over-histories generalized quantum theory to the closed homogeneous minisuperspace Bianchi IX cosmological model. We sketch how the probabilities in decoherent sets of alternative, coarse-grained histories of this model universe are calculated. We consider in particular, the probabilities for classical evolution in a suitable coarse-graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not, illustrating the prediction that these universes will evolve in an approximately classical manner with a probability near unity.

  8. HMM for hyperspectral spectrum representation and classification with endmember entropy vectors

    NASA Astrophysics Data System (ADS)

    Arabi, Samir Y. W.; Fernandes, David; Pizarro, Marco A.

    2015-10-01

    Hyperspectral images, owing to their good spectral resolution, are extensively used for classification, but their high number of bands requires greater transmission bandwidth, data storage capability, and computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectra classification is based on a Hidden Markov Model (HMM) associated with each endmember (EM) of a scene and on the conditional probabilities that each EM belongs to each other EM. The EM conditional probabilities are transformed into an EM entropy vector, and those vectors are used as reference vectors for the classes in the scene. The conditional probabilities of a spectrum to be classified are likewise transformed into a spectrum entropy vector, which is assigned to a given class by the minimum Euclidean distance (ED) between it and the EM entropy vectors. The methodology was tested with good results using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64 and 32 bands. For the test area, it is shown that only 32 spectral bands can be used instead of the original 209 without significant loss in the classification process.
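    The entropy-vector classification step can be sketched schematically. The conditional-probability vectors below are invented placeholders (in the paper they come from per-EM HMMs); the sketch only shows the transformation to entropy terms and the minimum-Euclidean-distance assignment:

```python
import numpy as np

def entropy_vector(p, eps=1e-12):
    """Elementwise entropy terms -p_j * log(p_j) of a probability vector."""
    p = np.clip(p, eps, 1.0)
    return -p * np.log(p)

# Hypothetical conditional probabilities of each endmember (EM) with respect
# to the others (rows sum to 1); in the paper these come from per-EM HMMs.
em_probs = np.array([[0.70, 0.20, 0.10],
                     [0.15, 0.75, 0.10],
                     [0.10, 0.10, 0.80]])
refs = np.array([entropy_vector(p) for p in em_probs])  # class reference vectors

# A spectrum to classify, represented by its conditional-probability vector.
spec = np.array([0.60, 0.25, 0.15])
ev = entropy_vector(spec)
label = int(np.argmin(np.linalg.norm(refs - ev, axis=1)))  # min Euclidean distance
print(label)
```

    The spectrum's entropy vector is closest to the first EM's reference vector, so it is assigned to class 0.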

  9. A method of real-time fault diagnosis for power transformers based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie

    2015-11-01

    In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.

  10. A simple probabilistic model of initiation of motion of poorly-sorted granular mixtures subjected to a turbulent flow

    NASA Astrophysics Data System (ADS)

    Ferreira, Rui M. L.; Ferrer-Boix, Carles; Hassan, Marwan

    2015-04-01

    Initiation of sediment motion is a classic problem of sediment and fluid mechanics that has been studied at a wide range of scales. By analysis at channel scale one means the investigation of a reach of a stream, sufficiently large to encompass a large number of sediment grains but sufficiently small not to experience important variations in key hydrodynamic variables. At this scale, and for poorly-sorted hydraulically rough granular beds, existing studies show a wide variation of the value of the critical Shields parameter. Such uncertainty constitutes a problem for engineering studies. To go beyond the Shields paradigm for the study of incipient motion at channel scale, this problem can be cast in probabilistic terms. An empirical probability of entrainment, which will naturally account for size-selective transport, can be calculated at the scale of the bed reach, using a) the probability density functions (PDFs) of the flow velocities $f_u(u \mid x_n)$ over the bed reach, where $u$ is the flow velocity and $x_n$ is the location, b) the PDF of the variability of competent velocities for the entrainment of individual particles, $f_{u_p}(u_p)$, where $u_p$ is the competent velocity, and c) the concept of joint probability of entrainment and grain size. One must first divide the mixture into several classes $M$ and assign a corresponding frequency $p_M$.
For each class, a conditional PDF of the competent velocity $f_{u_p}(u_p \mid M)$ is obtained from the PDFs of the parameters that intervene in the model for the entrainment of a single particle: \[ \frac{u_p}{\sqrt{g(s-1)d_i}} = \Phi_u\!\left(\{C_k\}, \{\varphi_k\}, \psi, \frac{u_p d_i}{\nu^{(w)}}\right) \] where $\{C_k\}$ is a set of shape parameters that characterize the non-sphericity of the grain, $\{\varphi_k\}$ is a set of angles that describe the orientation of the particle axes and its positioning relative to its neighbours, $\psi$ is the skin friction angle of the particles, $u_p d_i/\nu^{(w)}$ is a particle Reynolds number, $d_i$ is the sieving diameter of the particle, $g$ is the acceleration of gravity and $\Phi_u$ is a general function. For the same class, the probability density function of the instantaneous turbulent velocities $f_u(u \mid M)$ can be obtained from judicious laboratory or field work. From these probability densities, the empirical conditional probability of entrainment of class $M$ is \[ P(E \mid M) = \int_{-\infty}^{+\infty} P(u > u_p \mid M)\, f_{u_p}(u_p \mid M)\, du_p \] where $P(u > u_p \mid M) = \int_{u_p}^{+\infty} f_u(u \mid M)\, du$. Employing a frequentist interpretation of probability, in an actual bed reach subjected to a succession of $N$ (turbulent) flows, the above equation states that $N\,P(E \mid M)$ is the number of flows in which the grains of class $M$ are entrained. The joint probability of entrainment and class $M$ is given by the product $P(E \mid M)\,p_M$. Hence, the channel-scale empirical probability of entrainment is the marginal probability \[ P(E) = \sum_{M} P(E \mid M)\, p_M \] since the classes $M$ are mutually exclusive.
Fractional bedload transport rates can be obtained from the probability of entrainment through \[ q_{s,M} = E_M\,\ell_{s,M} \] where $q_{s,M}$ is the bedload discharge in volume per unit width of size fraction $M$, $E_M$ is the entrainment rate per unit bed area of that size fraction, calculated from the probability of entrainment as $E_M = P(E \mid M)\,p_M(1-\lambda)\,d/(2T)$, where $d$ is a characteristic diameter of grains on the bed surface, $\lambda$ is the bed porosity, $T$ is the integral scale of the longitudinal velocity at the elevation of the crests of the roughness elements and $\ell_{s,M}$ is the mean displacement length of class $M$. Fractional transport rates were computed and compared with experimental data, determined from bedload samples collected in a 12 m long, 40 cm wide channel under uniform flow conditions and sediment recirculation. The median diameter of the bulk bed mixture was 3.2 mm and the geometric standard deviation was 1.7. Shields parameters ranged from 0.027 to 0.067, while the boundary Reynolds number ranged between 220 and 376. Instantaneous velocities were measured with 2-component Laser Doppler Anemometry. The results of the probabilistic model exhibit generally good agreement with the laboratory data. However, the probability of entrainment of the smallest size fractions is systematically underestimated. This may be caused by phenomena that are absent from the model, for instance the increased magnitude of hydrodynamic actions following the displacement of a larger sheltering grain, and the fact that the collective entrainment of smaller grains following one large turbulent event is not accounted for. This work was partially funded by FEDER, program COMPETE, and by national funds through the Portuguese Foundation for Science and Technology (FCT), project RECI/ECM-HID/0371/2012.
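    The conditional-probability integral and the marginalization over grain-size classes can be sketched numerically. Gaussian forms for both the flow-velocity PDF and the competent-velocity PDF are assumptions made here for illustration, as are all parameter values:

```python
import numpy as np
from math import erf, sqrt, pi, exp

def norm_sf(x, mu, sigma):
    """P(U > x) for a Gaussian flow-velocity distribution f_u."""
    return 0.5 * (1.0 - erf((x - mu) / (sigma * sqrt(2.0))))

def norm_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

def p_entrain(mu_u, sig_u, mu_p, sig_p, n=4000):
    """P(E|M) = integral of P(u > u_p | M) f_up(u_p | M) du_p (Riemann sum)."""
    up = np.linspace(mu_p - 6 * sig_p, mu_p + 6 * sig_p, n)
    du = up[1] - up[0]
    vals = [norm_sf(v, mu_u, sig_u) * norm_pdf(v, mu_p, sig_p) for v in up]
    return float(np.sum(vals) * du)

# Two hypothetical grain-size classes under the same near-bed flow statistics.
classes = {"fine":   dict(mu_p=0.30, sig_p=0.05, p_M=0.6),
           "coarse": dict(mu_p=0.55, sig_p=0.08, p_M=0.4)}
mu_u, sig_u = 0.40, 0.10    # assumed instantaneous velocity mean and std (m/s)

P_E = 0.0
for name, c in classes.items():
    PEM = p_entrain(mu_u, sig_u, c["mu_p"], c["sig_p"])
    P_E += PEM * c["p_M"]   # marginal over mutually exclusive classes
    print(name, round(PEM, 3))
print("P(E) =", round(P_E, 3))
```

    As expected from the size-selective formulation, the fine class has a much higher conditional entrainment probability than the coarse class, and the channel-scale probability is their frequency-weighted marginal.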

  11. Multicategory Composite Least Squares Classifiers

    PubMed Central

    Park, Seo Young; Liu, Yufeng; Liu, Dacheng; Scholl, Paul

    2010-01-01

    Classification is a very useful statistical tool for information extraction. In particular, multicategory classification is commonly seen in various applications. Although binary classification problems are heavily studied, extensions to the multicategory case are much less so. In view of the increased complexity and volume of modern statistical problems, it is desirable to have multicategory classifiers that can handle problems with high dimensions and a large number of classes. Moreover, it is necessary for the multicategory classifiers to have sound theoretical properties. In the literature, there exist several different versions of simultaneous multicategory Support Vector Machines (SVMs). However, the computation of the SVM can be difficult for large-scale problems, especially for problems with a large number of classes. Furthermore, the SVM cannot produce class probability estimates directly. In this article, we propose a novel, efficient multicategory composite least squares classifier (CLS classifier), which utilizes a new composite squared loss function. The proposed CLS classifier has several important merits: efficient computation for problems with a large number of classes, asymptotic consistency, ability to handle high-dimensional data, and simple conditional class probability estimation. Our simulated and real examples demonstrate competitive performance of the proposed approach. PMID:21218128
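    The probability-estimation idea can be sketched with a plain least-squares analogue: regressing one-hot class indicators on features gives fitted values that, after clipping and renormalizing, act as simple conditional class probability estimates. This is a schematic stand-in under invented data, not the paper's composite squared loss:

```python
import numpy as np

rng = np.random.default_rng(3)

# Three Gaussian classes in the plane (hypothetical data).
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
y = rng.integers(0, 3, 300)
X = means[y] + rng.normal(size=(300, 2))

# One-hot class indicators; ridge-regularized least squares fit.
Y = np.eye(3)[y]
Xb = np.hstack([X, np.ones((300, 1))])              # add an intercept column
W = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(3), Xb.T @ Y)

# Fitted values, clipped and renormalized, serve as simple conditional class
# probability estimates; the argmax gives the predicted class.
P = np.clip(Xb @ W, 1e-9, None)
P /= P.sum(axis=1, keepdims=True)
acc = float((P.argmax(axis=1) == y).mean())
print(round(acc, 3))
```

    One linear solve handles all classes at once, which hints at why squared-loss formulations scale well in the number of classes compared with simultaneous SVMs.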

  12. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the mixture components. The mixture model makes it possible to consider heterogeneous thematic classes that cannot be well fitted by a unimodal Wishart distribution. In order to make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.

  13. Simulation of precipitation by weather pattern and frontal analysis

    NASA Astrophysics Data System (ADS)

    Wilby, Robert

    1995-12-01

    Daily rainfall from two sites in central and southern England was stratified according to the presence or absence of weather fronts and then cross-tabulated with the prevailing Lamb Weather Type (LWT). A semi-Markov chain model was developed for simulating daily sequences of LWTs from matrices of transition probabilities between weather types for the British Isles 1970-1990. Daily and annual rainfall distributions were then simulated from the prevailing LWTs using historic conditional probabilities for precipitation occurrence and frontal frequencies. When compared with a conventional rainfall generator the frontal model produced improved estimates of the overall size distribution of daily rainfall amounts and in particular the incidence of low-frequency high-magnitude totals. Further research is required to establish the contribution of individual frontal sub-classes to daily rainfall totals and of long-term fluctuations in frontal frequencies to conditional probabilities.
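    The simulation scheme described above can be sketched as a two-stage sampler: a Markov chain over weather types drives conditional wet-day probabilities. For brevity this reduces the paper's semi-Markov chain to a first-order Markov chain, and the transition matrix and conditional probabilities are illustrative values, not the fitted British Isles statistics:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Lamb Weather Types and daily transition probabilities (a
# first-order Markov simplification of the paper's semi-Markov scheme).
types = ["anticyclonic", "cyclonic", "westerly"]
P = np.array([[0.70, 0.15, 0.15],
              [0.20, 0.60, 0.20],
              [0.25, 0.25, 0.50]])

# Conditional probabilities of a wet day given the prevailing LWT (illustrative).
p_wet = np.array([0.10, 0.70, 0.45])

state, wet_days, n_days = 0, 0, 20000
for _ in range(n_days):
    state = rng.choice(3, p=P[state])          # next day's weather type
    wet_days += rng.random() < p_wet[state]    # rain occurrence given the LWT

print(round(wet_days / n_days, 3))             # simulated wet-day frequency
```

    The long-run wet-day frequency converges to the stationary LWT distribution weighted by the conditional rain probabilities; stratifying the second stage further by frontal presence is exactly the refinement the abstract describes.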

  14. Public Education: Special Problems in Collective Negotiations--An Overview.

    ERIC Educational Resources Information Center

    Oberer, Walter E.

    Because of the advent of collective negotiations, public education will never again be completely in control of local school boards. Collective negotiations will probably improve the quality of education to the extent that quality (higher salaries, smaller classes, better working conditions, etc.) coincides with the self-interest of teachers. The…

  15. Economic Observations on the Decision to Attend Law School

    ERIC Educational Resources Information Center

    Ahart, Alan M.

    1975-01-01

    On the premise that the expected benefits of a legal education can be measured in dollar terms, the author develops a formula for determining whether or not to matriculate based on expected earnings, educational costs, and probability of employment (graduation, class rank, passing bar exam, and supply/demand conditions). (JT)

  16. Obsessive–compulsive disorder: subclassification based on co-morbidity

    PubMed Central

    Nestadt, G.; Di, C. Z.; Riddle, M. A.; Grados, M. A.; Greenberg, B. D.; Fyer, A. J.; McCracken, J. T.; Rauch, S. L.; Murphy, D. L.; Rasmussen, S. A.; Cullen, B.; Pinto, A.; Knowles, J. A.; Piacentini, J.; Pauls, D. L.; Bienvenu, O. J.; Wang, Y.; Liang, K. Y.; Samuels, J. F.; Roche, K. Bandeen

    2011-01-01

    Background Obsessive–compulsive disorder (OCD) is probably an etiologically heterogeneous condition. Many patients manifest other psychiatric syndromes. This study investigated the relationship between OCD and co-morbid conditions to identify subtypes. Method Seven hundred and six individuals with OCD were assessed in the OCD Collaborative Genetics Study (OCGS). Multi-level latent class analysis was conducted based on the presence of eight co-morbid psychiatric conditions [generalized anxiety disorder (GAD), major depression, panic disorder (PD), separation anxiety disorder (SAD), tics, mania, somatization disorders (Som) and grooming disorders (GrD)]. The relationship of the derived classes to specific clinical characteristics was investigated. Results Two and three classes of OCD syndromes emerge from the analyses. The two-class solution describes lesser and greater co-morbidity classes and the more descriptive three-class solution is characterized by: (1) an OCD simplex class, in which major depressive disorder (MDD) is the most frequent additional disorder; (2) an OCD co-morbid tic-related class, in which tics are prominent and affective syndromes are considerably rarer; and (3) an OCD co-morbid affective-related class in which PD and affective syndromes are highly represented. The OCD co-morbid tic-related class is predominantly male and characterized by high conscientiousness. The OCD co-morbid affective-related class is predominantly female, has a young age at onset, obsessive–compulsive personality disorder (OCPD) features, high scores on the ‘taboo’ factor of OCD symptoms, and low conscientiousness. Conclusions OCD can be classified into three classes based on co-morbidity. Membership within a class is differentially associated with other clinical characteristics. These classes, if replicated, should have important implications for research and clinical endeavors. PMID:19046474

  17. On the Asymmetric Zero-Range in the Rarefaction Fan

    NASA Astrophysics Data System (ADS)

    Gonçalves, Patrícia

    2014-02-01

    We consider one-dimensional asymmetric zero-range processes starting from a step decreasing profile leading, in the hydrodynamic limit, to the rarefaction fan of the associated hydrodynamic equation. Under that initial condition, and for totally asymmetric jumps, we show that the weighted sum of joint probabilities for second class particles sharing the same site is convergent and we compute its limit. For partially asymmetric jumps, we derive the Law of Large Numbers for a second class particle, under the initial configuration in which all positive sites are empty, all negative sites are occupied with infinitely many first class particles and there is a single second class particle at the origin. Moreover, we prove that among the infinite characteristics emanating from the position of the second class particle it picks randomly one of them. The randomness is given in terms of the weak solution of the hydrodynamic equation, through some sort of renormalization function. By coupling the constant-rate totally asymmetric zero-range with the totally asymmetric simple exclusion, we derive limiting laws for more general initial conditions.

  18. The use of sensory perception indicators for improving the characterization and modelling of total petroleum hydrocarbon (TPH) grade in soils.

    PubMed

    Roxo, Sónia; de Almeida, José António; Matias, Filipa Vieira; Mata-Lima, Herlander; Barbosa, Sofia

    2016-03-01

    This paper proposes a multistep approach for creating a 3D stochastic model of total petroleum hydrocarbon (TPH) grade in potentially polluted soils of a deactivated oil storage site by using chemical analysis results as primary or hard data and classes of sensory perception variables as secondary or soft data. First, the statistical relationship between the sensory perception variables (e.g. colour, odour and oil-water reaction) and TPH grade is analysed, after which the sensory perception variable exhibiting the highest correlation is selected (oil-water reaction in this case study). The probabilities of cells belonging to classes of oil-water reaction are then estimated for the entire soil volume using indicator kriging. Next, local histograms of TPH grade for each grid cell are computed, combining the probabilities of belonging to a specific sensory perception indicator class and conditional to the simulated values of TPH grade. Finally, simulated images of TPH grade are generated by using the P-field simulation algorithm, utilising the local histograms of TPH grade for each grid cell. The set of simulated TPH values allows several calculations to be performed, such as average values, local uncertainties and the probability of the TPH grade of the soil exceeding a specific threshold value.

  19. Open Quantum Random Walks on the Half-Line: The Karlin-McGregor Formula, Path Counting and Foster's Theorem

    NASA Astrophysics Data System (ADS)

    Jacq, Thomas S.; Lardizabal, Carlos F.

    2017-11-01

    In this work we consider open quantum random walks on the non-negative integers. By considering orthogonal matrix polynomials we are able to describe transition probability expressions for classes of walks via a matrix version of the Karlin-McGregor formula. We focus on absorbing boundary conditions and, for simpler classes of examples, we consider path counting and the corresponding combinatorial tools. A non-commutative version of the gambler's ruin is studied by obtaining the probability of reaching a certain fortune and the mean time to reach a fortune or ruin in terms of generating functions. In the case of the Hadamard coin, a counting technique for boundary restricted paths in a lattice is also presented. We discuss an open quantum version of Foster's Theorem for the expected return time together with applications.

  20. Multiple murder and criminal careers: a latent class analysis of multiple homicide offenders.

    PubMed

    Vaughn, Michael G; DeLisi, Matt; Beaver, Kevin M; Howard, Matthew O

    2009-01-10

    To construct an empirically rigorous typology of multiple homicide offenders (MHOs). The current study conducted latent class analysis of the official records of 160 MHOs sampled from eight states to evaluate their criminal careers. A 3-class solution best fit the data (-2LL=-1123.61, Bayesian Information Criterion (BIC)=2648.15, df=81, L(2)=1179.77). Class 1 (n=64, class assignment probability=.999) was the low-offending group marked by little criminal record and delayed arrest onset. Class 2 (n=51, class assignment probability=.957) was the severe group that represents the most violent and habitual criminals. Class 3 (n=45, class assignment probability=.959) was the moderate group whose offending careers were similar to Class 2. A sustained criminal career with involvement in versatile forms of crime was observed for two of three classes of MHOs. Linkages to extant typologies and recommendations for additional research that incorporates clinical constructs are proffered.

  1. Neyman-Pearson classification algorithms and NP receiver operating characteristics

    PubMed Central

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-01-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies. PMID:29423442

  2. Neyman-Pearson classification algorithms and NP receiver operating characteristics.

    PubMed

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-02-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies.
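    The umbrella algorithm's key step, choosing the classification threshold as an order statistic of held-out class-0 scores so that the type I error exceeds α with probability at most δ, can be sketched as follows. The violation-probability formula follows the paper; the scores are simulated here and the underlying scoring classifier is omitted:

```python
import numpy as np
from math import comb

def np_threshold(scores0, alpha=0.05, delta=0.05):
    """Order-statistic threshold from the NP umbrella algorithm: choose the
    smallest rank k whose violation probability -- the chance that the
    resulting classifier's type I error exceeds alpha -- is at most delta."""
    s = np.sort(scores0)
    n = len(s)
    for k in range(1, n + 1):
        violation = sum(comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
                        for j in range(k, n + 1))
        if violation <= delta:
            return s[k - 1]
    raise ValueError("too few class-0 observations for this (alpha, delta)")

rng = np.random.default_rng(1)
scores0 = rng.normal(0.0, 1.0, 500)     # held-out class-0 classification scores
t = np_threshold(scores0)
emp_type1 = float((scores0 > t).mean())
print(round(float(t), 3), emp_type1)
```

    Note that the chosen threshold sits above the naive empirical (1-α) quantile: that conservatism is exactly why directly thresholding the empirical type I error at α fails to deliver the high-probability control the abstract criticizes.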

  3. Robust Bayesian decision theory applied to optimal dosage.

    PubMed

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(theta, d) in a dose prescription problem, where theta and d denote, respectively, the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables, which are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment of lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.
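    The sensitivity measure can be sketched with a deliberately toy setup: a nominal utility, a small class of perturbed utilities around it, the Bayes action of each member, and the spread of their expected utilities as the robustness measure. The utility form, priors, and perturbations are all invented for illustration:

```python
import numpy as np

doses = np.linspace(0.0, 1.0, 11)     # candidate decisions d
thetas = [0, 1]                       # patient states (hypothetical)
prior = [0.7, 0.3]

def u_nominal(theta, d):
    """Toy utility: treatment benefit for sick patients minus dose toxicity."""
    return 2.0 * theta * d - d ** 2

# A small class of utilities around u_nominal; each member has a Bayes action.
perturbations = [0.0, -0.3, 0.3]
bayes_utils = []
for eps in perturbations:
    eu = [sum(prior[i] * (u_nominal(t, d) + eps * d)
              for i, t in enumerate(thetas)) for d in doses]
    bayes_utils.append(max(eu))       # expected utility of that member's Bayes action

# Robustness measured through utilities of decisions, not the decisions themselves.
sensitivity = max(bayes_utils) - min(bayes_utils)
print(round(sensitivity, 3))
```

    Even when the Bayes actions themselves barely move, their expected utilities can spread substantially, which is why the abstract argues for measuring robustness in utility terms.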

  4. A class of stochastic delayed SIR epidemic models with generalized nonlinear incidence rate and temporary immunity

    NASA Astrophysics Data System (ADS)

    Fan, Kuangang; Zhang, Yan; Gao, Shujing; Wei, Xiang

    2017-09-01

    A class of SIR epidemic model with generalized nonlinear incidence rate is presented in this paper. Temporary immunity and stochastic perturbation are also considered. The existence and uniqueness of the global positive solution is achieved. Sufficient conditions guaranteeing the extinction and persistence of the epidemic disease are established. Moreover, the threshold behavior is discussed, and the threshold value R0 is obtained. We show that if R0 < 1, the disease eventually becomes extinct with probability one, whereas if R0 > 1, then the system remains permanent in the mean.
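    The sub-threshold extinction behavior can be illustrated with an Euler-Maruyama simulation. Bilinear incidence beta*S*I is used as a special case of the generalized nonlinear incidence, and the parameters are invented; note the paper's threshold R0 for the stochastic system involves a noise correction, while this sketch only shows the extinction regime:

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler-Maruyama simulation of a stochastic SIR model with bilinear incidence
# beta*S*I (a special case of the generalized nonlinear incidence rate).
beta, gamma, sigma = 0.25, 0.40, 0.10    # infection, recovery, noise intensity
dt, steps = 0.01, 20000                  # simulate up to t = 200
S, I = 0.99, 0.01

for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))    # Brownian increment
    inc = beta * S * I                   # incidence term
    S += -inc * dt - sigma * S * I * dW
    I += (inc - gamma * I) * dt + sigma * S * I * dW
    I = max(I, 0.0)                      # numerical guard

# With beta/gamma < 1, the infected fraction is expected to die out.
print(I)
```

    Raising beta above gamma in the same script produces persistence in the mean instead, mirroring the threshold dichotomy stated in the abstract.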

  5. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have developed a framework based on a simple, generalized rate change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, in which the notion of failure rate corresponds to successive failures of different members of a population of faults, and (2) changes in estimates of the conditional probability of failure of a single fault. The latter application requires specification of some probability distribution (probability density function, or PDF) that describes a population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
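    The conditional-probability calculation underlying such approaches can be sketched directly from a recurrence-time PDF: the probability of failure in the next interval, given survival to the present, is (F(t+dt) - F(t)) / (1 - F(t)). The lognormal form and the parameter values below are assumptions for illustration:

```python
from math import erf, log, sqrt

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-time distribution (an assumed PDF choice)."""
    return 0.5 * (1.0 + erf((log(t) - mu) / (sigma * sqrt(2.0))))

def cond_prob(elapsed, dt, mu, sigma):
    """P(failure in (t, t+dt] | no failure by t) = (F(t+dt) - F(t)) / (1 - F(t))."""
    F_t = lognorm_cdf(elapsed, mu, sigma)
    return (lognorm_cdf(elapsed + dt, mu, sigma) - F_t) / (1.0 - F_t)

# Illustrative fault: median recurrence time 150 yr, aperiodicity 0.5.
mu, sigma = log(150.0), 0.5
for elapsed in (50.0, 150.0, 250.0):
    print(elapsed, round(cond_prob(elapsed, 30.0, mu, sigma), 3))
```

    The conditional probability grows with elapsed time since the last event, which is the "time-dependent" character of these estimates; a static stress step can be represented as a shift in the effective elapsed time or in the PDF's parameters.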

  6. Use of Systematic Methods to Improve Disease Identification in Administrative Data: The Case of Severe Sepsis.

    PubMed

    Shahraz, Saeid; Lagu, Tara; Ritter, Grant A; Liu, Xiadong; Tompkins, Christopher

    2017-03-01

    Selection of International Classification of Diseases (ICD)-based coded information for complex conditions such as severe sepsis is a subjective process and the results are sensitive to the codes selected. We use an innovative data exploration method to guide ICD-based case selection for severe sepsis. Using the Nationwide Inpatient Sample, we applied Latent Class Analysis (LCA) to determine if medical coders follow any uniform and sensible coding for observations with severe sepsis. We examined whether ICD-9 codes specific to sepsis (038.xx for septicemia, a subset of 995.9 codes representing Systemic Inflammatory Response Syndrome, and 785.52 for septic shock) could all be members of the same latent class. Hospitalizations coded with sepsis-specific codes could be assigned to a latent class of their own. This class constituted 22.8% of all potential sepsis observations. The probability of an observation with any sepsis-specific codes being assigned to the residual class was near 0. The chance of an observation in the residual class having a sepsis-specific code as the principal diagnosis was close to 0. Validity of the sepsis class assignment is supported by empirical results, which indicated that in-hospital deaths in the sepsis-specific class were around 4 times as likely as those in the residual class. The conventional methods of defining severe sepsis cases in observational data substantially misclassify sepsis cases. We suggest a methodology that supports reliable selection of ICD codes for conditions that require complex coding.

  7. Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.

    PubMed

    Zagorecki, Adam; Łupińska-Dubicka, Anna; Voortman, Mark; Druzdzel, Marek J

    2016-03-01

    A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.
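    This record does not spell out the PICI parameterization, but the classic noisy-OR gate illustrates the general idea of a local distribution whose parameter count grows linearly in the number of parents while the full conditional probability table grows exponentially. A sketch, assuming binary variables and hypothetical link probabilities:

    ```python
    from itertools import product

    def noisy_or(parent_states, link_probs, leak=0.0):
        """P(Y=1 | parents) under a noisy-OR gate: each active parent i
        independently fails to cause Y with probability 1 - link_probs[i]."""
        q = 1.0 - leak
        for on, p in zip(parent_states, link_probs):
            if on:
                q *= (1.0 - p)
        return 1.0 - q

    link = [0.9, 0.8, 0.7, 0.6]   # one parameter per parent (linear growth)
    cpt = {states: noisy_or(states, link)
           for states in product([0, 1], repeat=len(link))}
    # The equivalent full CPT over 4 binary parents needs 2**4 = 16 entries;
    # the noisy-OR gate needs only 4 (plus an optional leak parameter).
    ```

    Decomposable parametric gates of this kind are also what makes the faster inference mentioned in the abstract possible, since the local distribution factorizes over parents.
    
    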

  8. Generalization of cross-modal stimulus equivalence classes: operant processes as components in human category formation.

    PubMed Central

    Lane, S D; Clow, J K; Innis, A; Critchfield, T S

    1998-01-01

    This study employed a stimulus-class rating procedure to explore whether stimulus equivalence and stimulus generalization can combine to promote the formation of open-ended categories incorporating cross-modal stimuli. A pretest of simple auditory discrimination indicated that subjects (college students) could discriminate among a range of tones used in the main study. Before beginning the main study, 10 subjects learned to use a rating procedure for categorizing sets of stimuli as class consistent or class inconsistent. After completing conditional discrimination training with new stimuli (shapes and tones), the subjects demonstrated the formation of cross-modal equivalence classes. Subsequently, the class-inclusion rating procedure was reinstituted, this time with cross-modal sets of stimuli drawn from the equivalence classes. On some occasions, the tones of the equivalence classes were replaced by novel tones. The probability that these novel sets would be rated as class consistent was generally a function of the auditory distance between the novel tone and the tone that was explicitly included in the equivalence class. These data extend prior work on generalization of equivalence classes, and support the role of operant processes in human category formation. PMID:9821680

  9. Selective inspection planning with ageing forecast for sewer types.

    PubMed

    Baur, R; Herz, R

    2002-01-01

    Investments in sewer rehabilitation must be based on inspection and evaluation of sewer conditions with respect to the severity of sewer damage and to environmental risks. This paper deals with the problems of forecasting the condition of sewers in a network from a small sample of inspected sewers. Transition functions from one into the next poorer condition class, which were empirically derived from this sample, are used to forecast the condition of sewers. By the same procedure, transition functions were subsequently calibrated for sub-samples of different types of sewers. With these transition functions, the most probable date of entering a critical condition class can be forecast from sewer characteristics, such as material, period of construction, location, use for waste and/or storm water, profile, diameter and gradient. Results are shown for the estimates about the actual condition of the Dresden sewer network and its deterioration in case of doing nothing about it. A procedure is proposed for scheduling the inspection dates for sewers which have not yet been inspected and for those which have been inspected before.

  10. A closer look at the probabilities of the notorious three prisoners.

    PubMed

    Falk, R

    1992-06-01

    The "problem of three prisoners", a counterintuitive teaser, is analyzed. It is representative of a class of probability puzzles where the correct solution depends on explication of underlying assumptions. Spontaneous beliefs concerning the problem and intuitive heuristics are reviewed. The psychological background of these beliefs is explored. Several attempts to find a simple criterion to predict whether and how the probability of the target event will change as a result of obtaining evidence are examined. However, despite the psychological appeal of these attempts, none proves to be valid in general. A necessary and sufficient condition for change in the probability of the target event, following observation of new data, is proposed. That criterion is an extension of the likelihood-ratio principle (which holds in the case of only two complementary alternatives) to any number of alternatives. Some didactic implications concerning the significance of the chance set-up and reliance on analogies are discussed.
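    Under the standard assumptions of the puzzle (the pardon is uniform over the three prisoners; the guard, questioned by A, names one of B or C who will be executed and picks at random when A is the one pardoned), the likelihood-ratio reasoning described above can be checked by direct enumeration. A sketch with exact arithmetic:

    ```python
    from fractions import Fraction

    # Prior: the pardoned prisoner is uniform over A, B, C.
    prior = {p: Fraction(1, 3) for p in "ABC"}

    def p_guard_says_b(pardoned):
        """Likelihood of the observed evidence: the guard names B."""
        if pardoned == "A":
            return Fraction(1, 2)   # guard chooses between B and C at random
        if pardoned == "C":
            return Fraction(1)      # B must die, so the guard says B
        return Fraction(0)          # if B is pardoned, the guard cannot name B

    evidence = sum(prior[p] * p_guard_says_b(p) for p in "ABC")
    posterior = {p: prior[p] * p_guard_says_b(p) / evidence for p in "ABC"}
    # A's probability is unchanged (likelihood ratio 1), while C's rises to 2/3.
    ```

    The computation makes the criterion in the abstract concrete: A's posterior stays at 1/3 exactly because the likelihood of the evidence given "A pardoned" equals its average over all alternatives.
    
    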

  11. Estimating trends in alligator populations from nightlight survey data

    USGS Publications Warehouse

    Fujisaki, Ikuko; Mazzotti, Frank J.; Dorazio, Robert M.; Rice, Kenneth G.; Cherkiss, Michael; Jeffery, Brian

    2011-01-01

    Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.
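    The core identifiability problem the two-stage model addresses can be shown with a small simulation: identical true abundance yields systematically different raw counts at different water depths when detection follows a depth-dependent logistic curve. This is a generic sketch with hypothetical coefficients, not the authors' fitted model:

    ```python
    import math
    import random

    random.seed(42)

    def detect_prob(depth, b0=0.5, b1=-1.2):
        """Logistic detection probability declining with water depth
        (b0, b1 are hypothetical coefficients, not estimates from the study)."""
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * depth)))

    def simulate_survey(true_abundance, depth):
        """Observed nightlight count: each animal is detected independently."""
        p = detect_prob(depth)
        return sum(1 for _ in range(true_abundance) if random.random() < p)

    # Same true abundance at two depths: raw counts confound abundance with
    # detection, which is exactly what the two-stage hierarchical model separates.
    shallow = [simulate_survey(200, 0.2) for _ in range(500)]
    deep = [simulate_survey(200, 1.5) for _ in range(500)]
    ```

    In the simulation the deep-water counts are far lower despite equal abundance, so an uncorrected trend in counts could reflect hydrology rather than population change.
    
    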

  12. Tiger in the fault tree jungle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, P.

    1976-01-01

    There is yet little evidence of serious efforts to apply formal reliability analysis methods to evaluate, or even to identify, potential common-mode failures (CMF) of reactor safeguard systems. The prospects for event logic modeling in this regard are examined by the primitive device of reviewing actual CMF experience in terms of what the analyst might have perceived a priori. Further insights into the probability and risk aspects of CMFs are sought through consideration of three key likelihood factors: (1) prior probability of the cause ever existing, (2) opportunities for removing the cause, and (3) probability that a CMF cause will be activated by conditions associated with a real system challenge. It was concluded that the principal needs for formal logical discipline in the endeavor to decrease CMF-related risks are to discover and to account for strong "energetic" dependency couplings that could arise in the major accidents usually classed as "hypothetical." This application would help focus research, design, and quality assurance efforts to cope with major CMF causes. But without extraordinary challenges to the reactor safeguard systems, there must continue to be virtually no statistical evidence pertinent to that class of failure dependencies.

  13. Estimating trends in alligator populations from nightlight survey data

    USGS Publications Warehouse

    Fujisaki, Ikuko; Mazzotti, F.J.; Dorazio, R.M.; Rice, K.G.; Cherkiss, M.; Jeffery, B.

    2011-01-01

    Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001-2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations. © 2011 US Government.

  14. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
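    The PE objective described here, matching given class posteriors with a Gaussian-mixture posterior in the embedding space by minimizing a sum of Kullback-Leibler divergences, can be sketched with plain gradient descent. The toy posteriors, learning rate, and step count below are illustrative assumptions, not the authors' settings or code:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def kl(P, Q):
        """Sum over points of KL(P_n || Q_n)."""
        return float((P * (np.log(P + 1e-12) - np.log(Q + 1e-12))).sum())

    def pe_embed(P, dim=2, lr=0.1, steps=1500):
        """Place points Z and class centers Phi so that the model posteriors
        Q_nc = softmax_c(-0.5 * ||z_n - phi_c||^2) match the given class
        posteriors P (n x C), by gradient descent on sum_n KL(P_n || Q_n)."""
        n, C = P.shape
        Z = rng.normal(scale=0.1, size=(n, dim))
        Phi = rng.normal(scale=0.1, size=(C, dim))
        losses = []
        for _ in range(steps):
            diff = Z[:, None, :] - Phi[None, :, :]          # n x C x dim
            logQ = -0.5 * (diff ** 2).sum(-1)
            logQ -= logQ.max(axis=1, keepdims=True)         # stabilize softmax
            Q = np.exp(logQ)
            Q /= Q.sum(axis=1, keepdims=True)
            losses.append(kl(P, Q))
            R = Q - P                                       # n x C residuals
            Z += lr * (R[..., None] * diff).sum(axis=1)     # dL/dZ_n = -sum_c R_nc (z_n - phi_c)
            Phi -= lr * (R[..., None] * diff).sum(axis=0)   # dL/dPhi_c = sum_n R_nc (z_n - phi_c)
        return Z, Phi, losses

    # Toy input: six points with crisp two-class posteriors
    P = np.array([[0.95, 0.05]] * 3 + [[0.05, 0.95]] * 3)
    Z, Phi, losses = pe_embed(P)
    ```

    Note the complexity claim from the abstract is visible here: each step costs O(n * C * dim), the product of the number of objects and classes, rather than O(n^2) as in pairwise embedding methods.
    
    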

  15. Measurement error in earnings data: Using a mixture model approach to combine survey and register data.

    PubMed

    Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.

  16. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
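    For the two-class case with a shared covariance matrix, the linear combination minimizing the one-dimensional probability of misclassification has a closed form (the Fisher direction), and the resulting equal-prior error is Phi(-Delta/2) with Delta the Mahalanobis distance between the class means. A sketch of that special case only (the program itself handles m classes; the numbers below are hypothetical):

    ```python
    from math import erf, sqrt

    import numpy as np

    def fisher_direction(mu0, mu1, cov):
        """For two Gaussian classes with shared covariance, the projection
        b = cov^{-1} (mu1 - mu0) minimizes the 1-D misclassification probability."""
        return np.linalg.solve(cov, mu1 - mu0)

    def bayes_error(mu0, mu1, cov):
        """Equal-prior error after the optimal projection: Phi(-Delta/2), where
        Delta^2 = (mu1 - mu0)^T cov^{-1} (mu1 - mu0) (squared Mahalanobis distance)."""
        d = mu1 - mu0
        delta = sqrt(d @ np.linalg.solve(cov, d))
        return 0.5 * (1.0 + erf(-delta / (2.0 * sqrt(2.0))))

    # Hypothetical class parameters: the third measurement carries no information.
    mu0 = np.array([0.0, 0.0, 0.0])
    mu1 = np.array([2.0, 1.0, 0.0])
    cov = np.diag([1.0, 2.0, 3.0])
    b = fisher_direction(mu0, mu1, cov)
    err = bayes_error(mu0, mu1, cov)
    ```

    The uninformative third feature gets zero weight in the optimal linear combination, which is the feature-selection effect the program exploits.
    
    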

  17. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
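    The (inverted) S-shaped weighting functions mentioned here have well-known one-parameter forms; the Tversky-Kahneman (1992) form is a common choice, and per-individual values of its parameter are the kind of quantity a hierarchical model samples from a group-level prior. A sketch with a hypothetical individual-level parameter:

    ```python
    def tk_weight(p, gamma):
        """Tversky-Kahneman (1992) one-parameter probability weighting function;
        gamma < 1 gives the inverted-S shape (small probabilities overweighted,
        large probabilities underweighted), gamma = 1 is the identity."""
        num = p ** gamma
        return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

    gamma = 0.6   # hypothetical individual-level parameter, not a fitted value
    low, high = tk_weight(0.05, gamma), tk_weight(0.95, gamma)
    # low > 0.05 (overweighting) and high < 0.95 (underweighting),
    # with fixed points at p = 0 and p = 1.
    ```

    Task dependency and individual differences, in this framing, amount to gamma varying across experimental conditions and across subjects drawn from a common prior.
    
    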

  18. A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area

    NASA Astrophysics Data System (ADS)

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto

    2015-04-01

    In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. Usual PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions into a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA then results from combining simulations covering different volcanological and meteorological conditions, weighted by their specific probabilities of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of such natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with a specific Probability Density Function, and meteorological and volcanological input values are chosen using a stratified sampling method. This procedure allows hazard to be quantified without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of tephra fall PHA and its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used to explore different meteorological conditions.
The results obtained show that PHA accounting for the whole natural variability is broadly consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium-magnitude reference eruption, but of smaller extent. This is due to the relatively higher weight of the small-magnitude eruptions considered in this study but neglected in the reference-scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium-magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference-scenario maps.

  19. The job content questionnaire in various occupational contexts: applying a latent class model

    PubMed Central

    Santos, Kionna Oliveira Bernardes; de Araújo, Tânia Maria; Karasek, Robert

    2017-01-01

    Objective To evaluate Job Content Questionnaire (JCQ) performance using the latent class model. Methods We analysed cross-sectional studies conducted in Brazil covering four occupational groups: petroleum industry workers (n=489), teachers (n=4392), primary healthcare workers (n=3078) and 1552 urban workers from a representative sample of the city of Feira de Santana in Bahia, Brazil. An appropriate number of latent classes was extracted for each occupational group using latent class analysis, a multivariate method that evaluates constructs while taking into account the latent characteristics underlying the structure of measurement scales. The conditional probabilities of workers belonging to each class were then analysed graphically. Results Initially, the latent class analysis extracted four classes corresponding to the four job types (active, passive, low strain and high strain) proposed by the Job-Strain model (JSM) and operationalised by the JCQ. However, after taking into consideration the adequacy criteria to evaluate the number of extracted classes, three classes (active, low strain and high strain) were extracted from the studies of urban workers and teachers and four classes (active, passive, low strain and high strain) from the studies of primary healthcare and petroleum industry workers. Conclusion The four job types proposed by the JSM were identified among primary healthcare and petroleum industry workers, groups with relatively high levels of skill discretion and decision authority. Three job types were identified for teachers and urban workers; however, passive job situations were not found within these groups. The latent class analysis enabled us to describe the conditional standard responses of the job types proposed by the model, particularly in relation to active jobs and high and low strain situations. PMID:28515185

  20. Perceived risk associated with ecstasy use: a latent class analysis approach

    PubMed Central

    Martins, SS; Carlson, RG; Alexandre, PK; Falck, RS

    2011-01-01

    This study aims to define categories of perceived health problems among ecstasy users based on observed clustering of their perceptions of ecstasy-related health problems. Data from a community sample of ecstasy users (n=402) aged 18 to 30, in Ohio, were used in this study. Data were analyzed via Latent Class Analysis (LCA) and regression. This study identified five different subgroups of ecstasy users based on their perceptions of health problems they associated with their ecstasy use. Almost one third of the sample (28.9%) belonged to a class with low levels of perceived problems (Class 4). About one fourth of the sample (25.6%, Class 2) had high probabilities of perceiving problems on sexual-related items, but generally low or moderate probabilities of perceiving problems in other areas. Roughly one fifth of the sample (21.1%, Class 1) had moderate probabilities of perceiving ecstasy-related health problems in all areas. A small proportion of respondents (11.9%, Class 5) had high probabilities of reporting perceived memory and cognitive problems, and another small proportion (12.4%, Class 3) of perceiving ecstasy-related problems in all areas. A large proportion of ecstasy users perceive either low or moderate risk associated with their ecstasy use. It is important to further investigate whether lower levels of risk perception are associated with persistence of ecstasy use. PMID:21296504

  1. On the Estimation of Disease Prevalence by Latent Class Models for Screening Studies Using Two Screening Tests with Categorical Disease Status Verified in Test Positives Only

    PubMed Central

    Chu, Haitao; Zhou, Yijie; Cole, Stephen R.; Ibrahim, Joseph G.

    2010-01-01

    Summary To evaluate the probabilities of a disease state, ideally all subjects in a study should be diagnosed by a definitive diagnostic or gold standard test. However, since definitive diagnostic tests are often invasive and expensive, it is generally unethical to apply them to subjects whose screening tests are negative. In this article, we consider latent class models for screening studies with two imperfect binary diagnostic tests and a definitive categorical disease status measured only for those with at least one positive screening test. Specifically, we discuss one conditionally independent and three homogeneous conditionally dependent latent class models and assess the impact of misspecification of the dependence structure on the estimation of disease category probabilities using frequentist and Bayesian approaches. Interestingly, the three homogeneous dependent models can provide identical goodness-of-fit but substantively different estimates for a given study. However, the parametric form of the assumed dependence structure itself is not "testable" from the data, and thus the dependence structure modeling considered here can only be viewed as a sensitivity analysis concerning a more complicated non-identifiable model potentially involving a heterogeneous dependence structure. Furthermore, we discuss Bayesian model averaging together with its limitations as an alternative way to partially address this particularly challenging problem. The methods are applied to two cancer screening studies, and simulations are conducted to evaluate the performance of these methods. In summary, further research is needed to reduce the impact of model misspecification on the estimation of disease prevalence in such settings. PMID:20191614
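    The conditional-independence structure this record starts from can be made concrete: given a disease prevalence and each test's sensitivity and specificity, the probability of any pair of screening results is a two-component mixture over the latent disease status. A sketch with hypothetical parameter values (not estimates from the cancer screening studies):

    ```python
    from itertools import product

    # Hypothetical parameters, assuming the two screening tests are
    # independent conditional on the true (latent) disease status.
    pi = 0.1                  # disease prevalence
    se = [0.85, 0.80]         # sensitivities of tests 1 and 2
    sp = [0.95, 0.90]         # specificities of tests 1 and 2

    def cell_prob(t1, t2):
        """P(T1=t1, T2=t2), marginalized over the latent disease status."""
        p_pos = pi            # branch: diseased
        p_neg = 1.0 - pi      # branch: disease-free
        for t, s, c in ((t1, se[0], sp[0]), (t2, se[1], sp[1])):
            p_pos *= s if t else (1.0 - s)
            p_neg *= (1.0 - c) if t else c
        return p_pos + p_neg

    table = {cells: cell_prob(*cells) for cells in product([0, 1], repeat=2)}
    # Cells sum to 1; under the verification design in the article, only
    # subjects with t1 = 1 or t2 = 1 would receive the gold-standard test.
    ```

    The dependent models discussed in the article modify exactly these cell probabilities by adding covariance terms within each latent branch, which is why different dependence structures can fit the observed 2x2 table equally well.
    
    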

  2. Systematic review: Efficacy and safety of medical marijuana in selected neurologic disorders

    PubMed Central

    Koppel, Barbara S.; Brust, John C.M.; Fife, Terry; Bronstein, Jeff; Youssof, Sarah; Gronseth, Gary; Gloss, David

    2014-01-01

    Objective: To determine the efficacy of medical marijuana in several neurologic conditions. Methods: We performed a systematic review of medical marijuana (1948–November 2013) to address treatment of symptoms of multiple sclerosis (MS), epilepsy, and movement disorders. We graded the studies according to the American Academy of Neurology classification scheme for therapeutic articles. Results: Thirty-four studies met inclusion criteria; 8 were rated as Class I. Conclusions: The following were studied in patients with MS: (1) Spasticity: oral cannabis extract (OCE) is effective, and nabiximols and tetrahydrocannabinol (THC) are probably effective, for reducing patient-centered measures; it is possible both OCE and THC are effective for reducing both patient-centered and objective measures at 1 year. (2) Central pain or painful spasms (including spasticity-related pain, excluding neuropathic pain): OCE is effective; THC and nabiximols are probably effective. (3) Urinary dysfunction: nabiximols is probably effective for reducing bladder voids/day; THC and OCE are probably ineffective for reducing bladder complaints. (4) Tremor: THC and OCE are probably ineffective; nabiximols is possibly ineffective. (5) Other neurologic conditions: OCE is probably ineffective for treating levodopa-induced dyskinesias in patients with Parkinson disease. Oral cannabinoids are of unknown efficacy in non–chorea-related symptoms of Huntington disease, Tourette syndrome, cervical dystonia, and epilepsy. The risks and benefits of medical marijuana should be weighed carefully. Risk of serious adverse psychopathologic effects was nearly 1%. Comparative effectiveness of medical marijuana vs other therapies is unknown for these indications. PMID:24778283

  3. Systematic review: efficacy and safety of medical marijuana in selected neurologic disorders: report of the Guideline Development Subcommittee of the American Academy of Neurology.

    PubMed

    Koppel, Barbara S; Brust, John C M; Fife, Terry; Bronstein, Jeff; Youssof, Sarah; Gronseth, Gary; Gloss, David

    2014-04-29

    To determine the efficacy of medical marijuana in several neurologic conditions. We performed a systematic review of medical marijuana (1948-November 2013) to address treatment of symptoms of multiple sclerosis (MS), epilepsy, and movement disorders. We graded the studies according to the American Academy of Neurology classification scheme for therapeutic articles. Thirty-four studies met inclusion criteria; 8 were rated as Class I. The following were studied in patients with MS: (1) Spasticity: oral cannabis extract (OCE) is effective, and nabiximols and tetrahydrocannabinol (THC) are probably effective, for reducing patient-centered measures; it is possible both OCE and THC are effective for reducing both patient-centered and objective measures at 1 year. (2) Central pain or painful spasms (including spasticity-related pain, excluding neuropathic pain): OCE is effective; THC and nabiximols are probably effective. (3) Urinary dysfunction: nabiximols is probably effective for reducing bladder voids/day; THC and OCE are probably ineffective for reducing bladder complaints. (4) Tremor: THC and OCE are probably ineffective; nabiximols is possibly ineffective. (5) Other neurologic conditions: OCE is probably ineffective for treating levodopa-induced dyskinesias in patients with Parkinson disease. Oral cannabinoids are of unknown efficacy in non-chorea-related symptoms of Huntington disease, Tourette syndrome, cervical dystonia, and epilepsy. The risks and benefits of medical marijuana should be weighed carefully. Risk of serious adverse psychopathologic effects was nearly 1%. Comparative effectiveness of medical marijuana vs other therapies is unknown for these indications.

  4. An Experiment in Voice Data Entry for Imagery Interpretation Reporting.

    DTIC Science & Technology

    1981-03-01

    [OCR-garbled excerpt from the report's vocabulary tables and a sample imagery-interpretation report. Recoverable ship-class terms: INTERCEPTORS, KOTLIN CLASS, KOTLIN SAM CLASS, SKORY CLASS, RIGA CLASS, GRISHA CLASS. Sample report fragment: INSTALLATION 0362-V34273; 2 PROBABLE SKORY CLASS DESTROYERS; 3 CONFIRMED KOTLIN CLASS TORPEDO BOATS; 4 CONFIRMED KOTLIN SAM-CLASS DESTROYERS.]

  5. Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin

    USGS Publications Warehouse

    Massada, Avi Bar; Radeloff, Volker C.; Stewart, Susan I.; Hawbaker, Todd J.

    2009-01-01

    The rapid growth of housing in and near the wildland–urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT), in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high-risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown.
Our approach to modeling wildfire risk to structures can aid fire-risk-reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions.

  6. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes' theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
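    The Law of Large Numbers result that the abstract says propensity theory retains can be checked numerically. A minimal simulation sketch (numbers hypothetical, not from the paper): the relative frequency of outcomes in repeated trials converges toward the underlying propensity.

```python
import random

def relative_frequency(p, n, seed=0):
    """Simulate n Bernoulli trials with success propensity p and
    return the observed relative frequency of successes."""
    rng = random.Random(seed)
    successes = sum(1 for _ in range(n) if rng.random() < p)
    return successes / n

# As n grows, the relative frequency approaches the propensity p,
# which is the content of the (weak) Law of Large Numbers.
freq = relative_frequency(p=0.3, n=100_000)
```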

  7. Generalized quantum theory of recollapsing homogeneous cosmologies

    NASA Astrophysics Data System (ADS)

    Craig, David; Hartle, James B.

    2004-06-01

    A sum-over-histories generalized quantum theory is developed for homogeneous minisuperspace type A Bianchi cosmological models, focusing on the particular example of the classically recollapsing Bianchi type-IX universe. The decoherence functional for such universes is exhibited. We show how the probabilities of decoherent sets of alternative, coarse-grained histories of these model universes can be calculated. We consider in particular the probabilities for classical evolution defined by a suitable coarse graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not. For these situations we show that the probability is near unity for the universe to recontract classically if it expands classically. We also determine the relative probabilities of quasiclassical trajectories for initial states of WKB form, recovering for such states a precise form of the familiar heuristic “J·dΣ” rule of quantum cosmology, as well as a generalization of this rule to generic initial states.

  8. Self-focusing quantum states

    NASA Astrophysics Data System (ADS)

    Villanueva, Anthony Allan D.

    2018-02-01

    We discuss a class of solutions of the time-dependent Schrödinger equation such that the position uncertainty temporarily decreases. This self-focusing or contractive behavior is a consequence of the anti-correlation of the position and momentum observables. Since the associated position density satisfies a continuity equation, upon contraction the probability current at a given fixed point may flow in the opposite direction of the group velocity of the wave packet. For definiteness, we consider a free particle incident from the left of the origin, and establish a condition for the initial position-momentum correlation such that a negative probability current at the origin is possible. This implies a decrease in the particle's detection probability in the region x > 0, and we calculate how long this occurs. Analogous results are obtained for a particle subject to a uniform gravitational force if we consider the particle approaching the turning point. We show that position-momentum anti-correlation may cause a negative probability current at the turning point, leading to a temporary decrease in the particle's detection probability in the classically forbidden region.
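    The sign of the probability current at the origin is central to the argument above. For reference, the standard definitions entering the continuity equation (textbook quantum mechanics, not specific to this paper) are:

```latex
% Continuity equation for the position density \rho = |\psi|^2
\frac{\partial \rho}{\partial t} + \frac{\partial j}{\partial x} = 0,
\qquad
j(x,t) = \frac{\hbar}{m}\,\operatorname{Im}\!\left[\psi^{*}(x,t)\,
\frac{\partial \psi(x,t)}{\partial x}\right].
```

A negative j(0, t) while the packet's group velocity is positive is exactly the contractive, backflow-type behavior the abstract describes.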

  9. Short-term droughts forecast using Markov chain model in Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Rahmat, Siti Nazahiyah; Jayasuriya, Niranjali; Bhuiyan, Muhammed A.

    2017-07-01

    A comprehensive risk management strategy for dealing with drought should include both short-term and long-term planning. The objective of this paper is to present an early warning method to forecast drought using the Standardised Precipitation Index (SPI) and a non-homogeneous Markov chain model. A model such as this is useful for short-term planning. The developed method has been used to forecast droughts at a number of meteorological monitoring stations that have been regionalised into six (6) homogeneous clusters with similar drought characteristics based on SPI. The non-homogeneous Markov chain model was used to estimate drought probabilities and drought predictions up to 3 months ahead. The drought severity classes defined using the SPI were computed at a 12-month time scale. The drought probabilities and the predictions were computed for six clusters that depict similar drought characteristics in Victoria, Australia. Overall, the drought severity class predicted was quite similar for all the clusters, with the non-drought class probabilities ranging from 49 to 57 %. For all clusters, the near normal class had a probability of occurrence varying from 27 to 38 %. For the moderate and severe classes, the probabilities ranged from 2 to 13 % and 1 to 3 %, respectively. The developed model predicted drought situations 1 month ahead reasonably well. However, 2 and 3 months ahead predictions should be used with caution until the models are developed further.
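    The multi-step forecasting step of a Markov chain model can be sketched in a few lines. The transition matrix below is hypothetical; the paper's chain is non-homogeneous, so in practice a different matrix would be supplied for each month.

```python
def forecast(dist, transition_matrices):
    """Propagate a distribution over drought classes through a sequence
    of monthly transition matrices (each row sums to 1)."""
    for P in transition_matrices:
        dist = [sum(dist[i] * P[i][j] for i in range(len(dist)))
                for j in range(len(P[0]))]
    return dist

# Hypothetical 3-class chain: non-drought, near normal, drought.
P_month = [[0.7, 0.2, 0.1],
           [0.3, 0.5, 0.2],
           [0.1, 0.4, 0.5]]

# Start in "non-drought" and forecast 3 months ahead (same P reused
# here; a non-homogeneous chain would use one matrix per month).
three_ahead = forecast([1.0, 0.0, 0.0], [P_month] * 3)
```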

  10. Generation of multivariate near shore extreme wave conditions based on an extreme value copula for offshore boundary conditions.

    NASA Astrophysics Data System (ADS)

    Leyssen, Gert; Mercelis, Peter; De Schoesitter, Philippe; Blanckaert, Joris

    2013-04-01

    Near shore extreme wave conditions, used as input for numerical wave agitation simulations and for the dimensioning of coastal defense structures, need to be determined at a harbour entrance situated at the French North Sea coast. To obtain significant wave heights, the numerical wave model SWAN has been used. A multivariate approach was used to account for the joint probabilities. Considered variables are: wind velocity and direction, water level and significant offshore wave height and wave period. In a first step, a univariate extreme value distribution has been determined for the main variables. By means of a technique based on the mean excess function, an appropriate member of the generalized Pareto distribution (GPD) family is selected. An optimal threshold for peak-over-threshold selection is determined by maximum likelihood optimization. Next, the joint dependency structure for the primary random variables is modeled by an extreme value copula. Eventually the multivariate domain of variables was stratified into different classes, each representing a combination of variable quantiles with a joint probability, which are used for model simulation. The main variable is the wind velocity, as in the area of concern extreme wave conditions are wind driven. The analysis is repeated for 9 different wind directions. The secondary variable is water level. In shallow waters extreme waves will be directly affected by water depth. Hence the joint probability of occurrence for water level and wave height is of major importance for the design of coastal defense structures. Wind velocity and water levels are only dependent for some wind directions (wind-induced setup). Dependent directions are detected using Kendall and Spearman tests and appeared to be those with the longest fetch. For these directions, wind velocity and water level extreme value distributions are multivariately linked through a Gumbel copula.
These distributions are stratified into classes for which the frequency of occurrence can be calculated. For the remaining directions the univariate extreme wind velocity distribution is stratified, each class combined with 5 high water levels. The wave height at the model boundaries was taken into account by a regression with the extreme wind velocity at the offshore location. The regression line and the 95% confidence limits were combined with each class. Eventually the wave period is computed by a further regression with the significant wave height. In this way 1103 synthetic events were selected and simulated with the SWAN wave model, and a frequency of occurrence was calculated for each. Hence near shore significant wave heights are obtained with corresponding frequencies. The statistical distribution of the near shore wave heights is determined by sorting the model results in descending order and accumulating the corresponding frequencies. This approach allows determination of conditional return periods. For example, for the imposed univariate design return periods of 100 years for significant wave height and 30 years for water level, the joint return period for a simultaneous exceedance of both conditions can be computed as 4000 years. Hence, this methodology allows for a probabilistic design of coastal defense structures.
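    The joint-return-period logic can be sketched with a bivariate Gumbel copula. This is an illustrative closed-form calculation under an assumed dependence parameter theta (theta = 1 recovers independence); the paper's 4000-year figure comes from its stratified event simulation, not from this formula.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel (logistic) extreme value copula C(u, v); theta >= 1."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_exceedance_return_period(T_u, T_v, theta):
    """Return period (years) of both annual-maximum variables exceeding
    their marginal T_u- and T_v-year levels in the same year:
    P(U > u, V > v) = 1 - u - v + C(u, v)."""
    u, v = 1.0 - 1.0 / T_u, 1.0 - 1.0 / T_v
    p_joint = 1.0 - u - v + gumbel_copula(u, v, theta)
    return 1.0 / p_joint

# Independence (theta = 1): P(both exceed) = (1/100)(1/30), i.e. a
# 3000-year joint return period; dependence (theta > 1) shortens it.
T_indep = joint_exceedance_return_period(100, 30, theta=1.0)
```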

  11. Latent typologies of posttraumatic stress disorder in World Trade Center responders.

    PubMed

    Horn, Sarah R; Pietrzak, Robert H; Schechter, Clyde; Bromet, Evelyn J; Katz, Craig L; Reissman, Dori B; Kotov, Roman; Crane, Michael; Harrison, Denise J; Herbert, Robin; Luft, Benjamin J; Moline, Jacqueline M; Stellman, Jeanne M; Udasin, Iris G; Landrigan, Philip J; Zvolensky, Michael J; Southwick, Steven M; Feder, Adriana

    2016-12-01

    Posttraumatic stress disorder (PTSD) is a debilitating and often chronic psychiatric disorder. Following the 9/11/2001 World Trade Center (WTC) attacks, thousands of individuals were involved in rescue, recovery and clean-up efforts. While a growing body of literature has documented the prevalence and correlates of PTSD in WTC responders, no study has evaluated predominant typologies of PTSD in this population. Participants were 4352 WTC responders with probable WTC-related DSM-IV PTSD. Latent class analyses were conducted to identify predominant typologies of PTSD symptoms and associated correlates. A 3-class solution provided the optimal representation of latent PTSD symptom typologies. The first class, labeled "High-Symptom (n = 1,973, 45.3%)," was characterized by high probabilities of all PTSD symptoms. The second class, "Dysphoric (n = 1,371, 31.5%)," exhibited relatively high probabilities of emotional numbing and dysphoric arousal (e.g., sleep disturbance). The third class, "Threat (n = 1,008, 23.2%)," was characterized by high probabilities of re-experiencing, avoidance and anxious arousal (e.g., hypervigilance). Compared to the Threat class, the Dysphoric class reported a greater number of life stressors after 9/11/2001 (OR = 1.06). The High-Symptom class was more likely than the Threat class to have a positive psychiatric history before 9/11/2001 (OR = 1.7) and reported a greater number of life stressors after 9/11/2001 (OR = 1.1). The High-Symptom class was more likely than the Dysphoric class, which was more likely than the Threat class, to screen positive for depression (83% > 74% > 53%, respectively), and to report greater functional impairment (High-Symptom > Dysphoric [Cohen d = 0.19], Dysphoric > Threat [Cohen d = 0.24]). These results may help inform assessment, risk stratification, and treatment approaches for PTSD in WTC and disaster responders. Copyright © 2016 Elsevier Ltd. All rights reserved.
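    The class-assignment machinery behind such latent class analyses is a posterior computation from class priors and item-response probabilities. A minimal sketch with hypothetical numbers (not the paper's estimates), assuming binary symptom items that are conditionally independent given class:

```python
def lca_posterior(priors, item_probs, responses):
    """Posterior class-membership probabilities for a binary response
    pattern under a latent class model."""
    joint = []
    for prior, probs in zip(priors, item_probs):
        like = prior
        for p, x in zip(probs, responses):
            like *= p if x == 1 else (1.0 - p)
        joint.append(like)
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical 2-class, 3-item model: a "high-symptom" class endorses
# items with high probability, a "low-symptom" class with low probability.
priors = [0.45, 0.55]
item_probs = [[0.9, 0.8, 0.85],
              [0.2, 0.3, 0.25]]
post = lca_posterior(priors, item_probs, [1, 1, 1])
```

A respondent endorsing all three items is assigned to the high-symptom class with high posterior probability.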

  12. Impulse Control and Callous-Unemotional Traits Distinguish Patterns of Delinquency and Substance Use in Justice Involved Adolescents: Examining the Moderating Role of Neighborhood Context.

    PubMed

    Ray, James V; Thornton, Laura C; Frick, Paul J; Steinberg, Laurence; Cauffman, Elizabeth

    2016-04-01

    Both callous-unemotional (CU) traits and impulse control are known risk factors associated with delinquency and substance use. However, research is limited in how contextual factors such as neighborhood conditions influence the associations between these two dispositional factors and these two externalizing behaviors. The current study utilized latent class analysis (LCA) to identify unique classes of delinquency and substance use within an ethnically diverse sample (n = 1216) of justice-involved adolescents (ages 13 to 17) from three different sites. Neighborhood disorder, CU traits, and impulse control were all independently associated with membership in classes with more extensive histories of delinquency and substance use. The effects of CU traits and impulse control in distinguishing delinquent classes were invariant across levels of neighborhood disorder, whereas neighborhood disorder moderated the association between impulse control and substance use. Specifically, the probability of being in more severe substance-using classes for those low in impulse control was higher in neighborhoods with fewer indicators of social and physical disorder.

  13. Probability interpretations of intraclass reliabilities.

    PubMed

    Ellis, Jules L

    2013-11-20

    Research where many organizations are rated by different samples of individuals such as clients, patients, or employees frequently uses reliabilities computed from intraclass correlations. Consumers of statistical information, such as patients and policy makers, may not have sufficient background for deciding which levels of reliability are acceptable. It is shown that the reliability is related to various probabilities that may be easier to understand, for example, the proportion of organizations that will be classed significantly above (or below) the mean and the probability that an organization is classed correctly given that it is classed significantly above (or below) the mean. One can view these probabilities as the amount of information of the classification and the correctness of the classification. These probabilities have an inverse relationship: given a reliability, one can 'buy' correctness at the cost of informativeness and conversely. This article discusses how this can be used to make judgments about the required level of reliabilities. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Common Mental Disorders among Occupational Groups: Contributions of the Latent Class Model

    PubMed Central

    Martins Carvalho, Fernando; de Araújo, Tânia Maria

    2016-01-01

    Background. The Self-Reporting Questionnaire (SRQ-20) is widely used for evaluating common mental disorders. However, few studies have evaluated the SRQ-20 measurements performance in occupational groups. This study aimed to describe manifestation patterns of common mental disorders symptoms among workers populations, by using latent class analysis. Methods. Data derived from 9,959 Brazilian workers, obtained from four cross-sectional studies that used similar methodology, among groups of informal workers, teachers, healthcare workers, and urban workers. Common mental disorders were measured by using SRQ-20. Latent class analysis was performed on each database separately. Results. Three classes of symptoms were confirmed in the occupational categories investigated. In all studies, class I best met the criteria for suspicion of common mental disorders. Class II discriminated workers with intermediate probability of answers to the items belonging to anxiety, sadness, and energy decrease that configure common mental disorders. Class III was composed of subgroups of workers with low probability to respond positively to questions for screening common mental disorders. Conclusions. Three patterns of symptoms of common mental disorders were identified in the occupational groups investigated, ranging from distinctive features to low probabilities of occurrence. The SRQ-20 measurements showed stability in capturing nonpsychotic symptoms. PMID:27630999

  15. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
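    A common concrete choice for a density of proportions on (0, 1) is the Beta distribution. As a hedged sketch of the parameter-estimation idea, here is a moment-based Beta fit (a simpler stand-in, not the paper's maximum likelihood estimator):

```python
def beta_from_moments(mean, var):
    """Method-of-moments estimates of Beta(a, b) parameters from the
    sample mean and variance of observed class proportions.
    Requires 0 < mean < 1 and 0 < var < mean * (1 - mean)."""
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

# Beta(2, 2) has mean 0.5 and variance 0.05; the moments recover it.
a, b = beta_from_moments(0.5, 0.05)
```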

  16. The job content questionnaire in various occupational contexts: applying a latent class model.

    PubMed

    Santos, Kionna Oliveira Bernardes; Araújo, Tânia Maria de; Carvalho, Fernando Martins; Karasek, Robert

    2017-05-17

    To evaluate Job Content Questionnaire (JCQ) performance using the latent class model. We analysed cross-sectional studies conducted in Brazil and examined four occupational categories: petroleum industry workers (n=489), teachers (n=4392), primary healthcare workers (n=3078) and 1552 urban workers from a representative sample of the city of Feira de Santana in Bahia, Brazil. An appropriate number of latent classes was extracted for each occupational category using latent class analysis, a multivariate method that evaluates constructs and takes into account the latent characteristics underlying the structure of measurement scales. The conditional probabilities of workers belonging to each class were then analysed graphically. Initially, the latent class analysis extracted four classes corresponding to the four job types (active, passive, low strain and high strain) proposed by the Job-Strain model (JSM) and operationalised by the JCQ. However, after taking into consideration the adequacy criteria to evaluate the number of extracted classes, three classes (active, low strain and high strain) were extracted from the studies of urban workers and teachers and four classes (active, passive, low strain and high strain) from the study of primary healthcare and petroleum industry workers. The four job types proposed by the JSM were identified among primary healthcare and petroleum industry workers-groups with relatively high levels of skill discretion and decision authority. Three job types were identified for teachers and urban workers; however, passive job situations were not found within these groups. The latent class analysis enabled us to describe the conditional standard responses of the job types proposed by the model, particularly in relation to active jobs and high and low strain situations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.

  17. Nuclear binding of progesterone in hen oviduct. Binding to multiple sites in vitro.

    PubMed Central

    Pikler, G M; Webster, R A; Spelsberg, T C

    1976-01-01

    Steroid hormones, including progesterone, are known to bind with high affinity (Kd approximately 1x10(-10)M) to receptor proteins once they enter target cells. This complex (the progesterone-receptor) then undergoes a temperature- and/or salt-dependent activation which allows it to migrate to the cell nucleus and to bind to the deoxyribonucleoproteins. The present studies demonstrate that binding the hormone-receptor complex in vitro to isolated nuclei from the oviducts of laying hens required the same conditions as do other studies of binding in vitro reported previously, e.g. the hormone must be complexed to intact and activated receptor. The assay of the nuclear binding by using multiple concentrations of progesterone receptor reveals the presence of more than one class of binding site in the oviduct nuclei. The affinity of each of these classes of binding sites ranges from Kd approximately 1x10(-9)-1x10(-8)M. Assays using free steroid (not complexed with receptor) show no binding to these sites. The binding to each of the classes of sites displays a differential stability to increasing ionic concentrations, suggesting primarily an ionic-type interaction for all classes. Only the highest-affinity class of binding site is capable of binding progesterone receptor under physiological-saline conditions. This class represents 6000-10000 sites per cell nucleus and resembles the sites detected in vivo (Spelsberg, 1976, Biochem. J. 156, 391-398) which cause maximal transcriptional response when saturated with the progesterone receptor. The multiple binding sites for the progesterone receptor either are not present or are found in limited numbers in the nuclei of non-target organs. Differences in extent of binding to the nuclear material between a target tissue (oviduct) and other tissues (spleen or erythrocyte) are markedly dependent on the ionic conditions, and are probably due to binding to different classes of sites in the nuclei. PMID:182147
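    The quoted dissociation constants translate into fractional site occupancy via standard single-site equilibrium-binding arithmetic (a textbook relation, not derived in this paper):

```python
def fractional_occupancy(ligand_conc, kd):
    """Fraction of sites occupied at equilibrium for single-site
    binding: theta = [L] / (Kd + [L]), concentrations in molar."""
    return ligand_conc / (kd + ligand_conc)

# At [L] = Kd the sites are half occupied; the ~10x weaker nuclear
# sites (Kd ~ 1e-9 M) need ~10x more hormone for the same occupancy.
half = fractional_occupancy(1e-10, 1e-10)
```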

  18. Quantitative assessment of mineral resources with an application to petroleum geology

    USGS Publications Warehouse

    Harff, Jan; Davis, J.C.; Olea, R.A.

    1992-01-01

    The probability of occurrence of natural resources, such as petroleum deposits, can be assessed by a combination of multivariate statistical and geostatistical techniques. The area of study is partitioned into regions that are as homogeneous as possible internally while simultaneously as distinct as possible. Fisher's discriminant criterion is used to select geological variables that best distinguish productive from nonproductive localities, based on a sample of previously drilled exploratory wells. On the basis of these geological variables, each wildcat well is assigned to the production class (dry or producer in the two-class case) for which the Mahalanobis' distance from the observation to the class centroid is a minimum. Universal kriging is used to interpolate values of the Mahalanobis' distances to all locations not yet drilled. The probability that an undrilled locality belongs to the productive class can be found, using the kriging estimation variances to assess the probability of misclassification. Finally, Bayes' relationship can be used to determine the probability that an undrilled location will be a discovery, regardless of the production class in which it is placed. The method is illustrated with a study of oil prospects in the Lansing/Kansas City interval of western Kansas, using geological variables derived from well logs. © 1992 Oxford University Press.
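    The core discriminant step can be sketched in a few lines. The two classes, centroids, and a shared diagonal covariance below are hypothetical simplifications; the paper uses full covariance matrices and layers universal kriging on top.

```python
def mahalanobis_sq(x, centroid, inv_var):
    """Squared Mahalanobis distance for a diagonal covariance matrix
    (inv_var holds the inverse variances of each variable)."""
    return sum(iv * (xi - ci) ** 2
               for xi, ci, iv in zip(x, centroid, inv_var))

def classify(x, centroids, inv_var):
    """Assign x to the class with the nearest centroid in Mahalanobis
    distance ('producer' vs 'dry' in the two-class case)."""
    dists = [mahalanobis_sq(x, c, inv_var) for c in centroids]
    return min(range(len(centroids)), key=lambda k: dists[k])

centroids = [(1.0, 2.0),   # class 0: producer (hypothetical)
             (4.0, 0.5)]   # class 1: dry (hypothetical)
inv_var = (1.0, 4.0)       # inverse variances of two log-derived variables
label = classify((1.2, 1.8), centroids, inv_var)
```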

  19. Characterizing the performance of XOR games and the Shannon capacity of graphs.

    PubMed

    Ramanathan, Ravishankar; Kay, Alastair; Murta, Gláucia; Horodecki, Paweł

    2014-12-12

    In this Letter we give a set of necessary and sufficient conditions such that quantum players of a two-party XOR game cannot perform any better than classical players. With any such game, we associate a graph and examine its zero-error communication capacity. This allows us to specify a broad new class of graphs for which the Shannon capacity can be calculated. The conditions also enable the parametrization of new families of games that have no quantum advantage for arbitrary input probability distributions, up to certain symmetries. In the future, these might be used in information-theoretic studies on reproducing the set of quantum nonlocal correlations.

  20. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability of difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we firstly develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Secondly, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
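    The fusion step both methods rely on is Dempster's rule of combination. Over the two-hypothesis frame {change, no-change} it reduces to a few lines (the mass values below are hypothetical, not from the paper):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'c', 'n'}, with the
    ignorance mass assigned to 'cn' (the whole frame), using
    Dempster's rule: intersect focal elements and renormalize by the
    non-conflicting mass."""
    conflict = m1['c'] * m2['n'] + m1['n'] * m2['c']
    norm = 1.0 - conflict
    return {
        'c': (m1['c'] * m2['c'] + m1['c'] * m2['cn']
              + m1['cn'] * m2['c']) / norm,
        'n': (m1['n'] * m2['n'] + m1['n'] * m2['cn']
              + m1['cn'] * m2['n']) / norm,
        'cn': (m1['cn'] * m2['cn']) / norm,
    }

# Two neighboring pixels weakly agreeing on 'change' reinforce it.
m = dempster_combine({'c': 0.6, 'n': 0.1, 'cn': 0.3},
                     {'c': 0.5, 'n': 0.2, 'cn': 0.3})
```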

  1. Self-imposed timeouts under increasing response requirements.

    NASA Technical Reports Server (NTRS)

    Dardano, J. F.

    1973-01-01

    Three male White Carneaux pigeons were used in the investigation. None of the results obtained contradicts the interpretation of self-imposed timeouts as an escape response reinforced by the removal of unfavorable reinforcement conditions, although some details of the performances reflect either a weak control and/or operation of other controlling variables. Timeout key responding can be considered as one of several classes of behavior having a low probability of occurrence, all of which compete with the behavior maintained by positive reinforcement schedule.

  2. Ensemble learning and model averaging for material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.

    2017-05-01

    In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that the probabilities can be summed over spectra for any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know if the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
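    The key point, that per-spectrum posteriors can be summed into class posteriors, can be sketched with one-dimensional Gaussian likelihoods standing in for full spectra (library entries, class labels, and all numbers are hypothetical):

```python
import math

def posteriors(obs, library, sigma):
    """Posterior probability of each library entry under a uniform
    prior and Gaussian likelihood with common sigma, plus per-class
    sums of those posteriors."""
    likes = {name: math.exp(-0.5 * ((obs - mu) / sigma) ** 2)
             for name, (mu, _cls) in library.items()}
    total = sum(likes.values())
    post = {name: lk / total for name, lk in likes.items()}
    by_class = {}
    for name, (_mu, cls) in library.items():
        by_class[cls] = by_class.get(cls, 0.0) + post[name]
    return post, by_class

# Two fabric entries and one paint entry (1-D stand-ins for spectra).
library = {'cotton': (0.40, 'fabric'),
           'nylon':  (0.45, 'fabric'),
           'paint':  (0.90, 'paint')}
post, by_class = posteriors(0.42, library, sigma=0.1)
```

The class probability for 'fabric' is, by construction, the sum of the posteriors of its member spectra.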

  3. Epidemiology of school accidents during a six school-year period in one region in Poland.

    PubMed

    Sosnowska, Stefania; Kostka, Tomasz

    2003-01-01

    The aim of the study was to analyse the incidence of school accidents in relation to school size, urban/rural environment and conditions of physical education classes. 202 primary schools with nearly 50,000 students aged 7-15 years were studied during a 6-year period in the Włocławek region in Poland. There were in total 3274 school accidents per 293,000 student-years. Accidents during breaks (36.6%) and physical education (33.2%) were most common. Accidents most frequently took place in the schoolyard (29.7%), the gymnasium (20.2%), and in corridors and on stairs (25.2%). After adjustment for students' age and sex, student-staff ratio and duration of school hours, urban environment increased the probability of accident (OR: 1.25; 95% CI: 1.14-1.38). Middle-size schools (8-23 classes) had a similar accident rate to small schools (OR: 0.93; 95% CI: 0.83-1.04), while schools with 24-32 classes (OR: 1.26; 95% CI: 1.10-1.43) and with > or = 33 classes (OR: 1.36; 95% CI: 1.17-1.58) had increased accident rates. Presence of a gymnasium was also associated with increased probability of accident (OR: 1.49; 95% CI: 1.38-1.61). Urban environment, larger school size and equipment with a full-size gymnasium are important and independent risk factors for school accidents. These findings provide some new insights into the epidemiology of school-related accidents and may be useful information for the planning of strategies to reduce accident incidence in schools.
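    The adjusted odds ratios above convert to probabilities by standard odds arithmetic. A sketch with a hypothetical baseline rate (not reported in the abstract):

```python
def apply_odds_ratio(p_baseline, odds_ratio):
    """Probability after multiplying the baseline odds by an odds
    ratio: odds = p / (1 - p), then p' = odds' / (1 + odds')."""
    odds = p_baseline / (1.0 - p_baseline) * odds_ratio
    return odds / (1.0 + odds)

# With a hypothetical 1% per-student-year accident rate, the urban OR
# of 1.25 corresponds to roughly a 1.25% rate: for rare outcomes the
# odds ratio approximates the risk ratio.
p_urban = apply_odds_ratio(0.01, 1.25)
```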

  4. Complete Defluorination of Perfluorinated Compounds by Hydrated Electrons Generated from 3-Indole-acetic-acid in Organomodified Montmorillonite

    PubMed Central

    Tian, Haoting; Gao, Juan; Li, Hui; Boyd, Stephen A.; Gu, Cheng

    2016-01-01

    Here we describe a unique process that achieves complete defluorination and decomposition of perfluorinated compounds (PFCs) which comprise one of the most recalcitrant and widely distributed classes of toxic pollutant chemicals found in natural environments. Photogenerated hydrated electrons derived from 3-indole-acetic-acid within an organomodified clay induce the reductive defluorination of co-sorbed PFCs. The process proceeds to completion within a few hours under mild reaction conditions. The organomontmorillonite clay promotes the formation of highly reactive hydrated electrons by stabilizing indole radical cations formed upon photolysis, and prevents their deactivation by reaction with protons or oxygen. In the constrained interlayer regions of the clay, hydrated electrons and co-sorbed PFCs are brought in close proximity thereby increasing the probability of reaction. This novel green chemistry provides the basis for in situ and ex situ technologies to treat one of the most troublesome, recalcitrant and ubiquitous classes of environmental contaminants, i.e., PFCs, utilizing innocuous reagents, naturally occurring materials and mild reaction conditions. PMID:27608658

  5. Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin

    USGS Publications Warehouse

    Bar-Massada, A.; Radeloff, V.C.; Stewart, S.I.; Hawbaker, T.J.

    2009-01-01

    The rapid growth of housing in and near the wildland-urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT) in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown. 
Our approach to modeling wildfire risk to structures can aid fire risk reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions. © 2009 Elsevier B.V.
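    The burn-probability mapping idea (run many stochastic fire simulations and count how often each cell burns) can be sketched on a toy grid. This is not FARSITE or MTT; the random ignition, the 4-neighbour spread rule and all parameter values are invented for illustration.

```python
import random

random.seed(0)
N, SIMS, SPREAD_P = 20, 200, 0.4  # toy grid size, simulation count, spread prob.

def simulate_fire(spread_p):
    """One toy fire: random ignition cell, stochastic spread to 4 neighbours."""
    burned = {(random.randrange(N), random.randrange(N))}
    frontier = list(burned)
    while frontier:
        x, y = frontier.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < N and 0 <= ny < N and (nx, ny) not in burned
                    and random.random() < spread_p):
                burned.add((nx, ny))
                frontier.append((nx, ny))
    return burned

# Burn probability per cell = fraction of simulations in which it burned.
counts = [[0] * N for _ in range(N)]
for _ in range(SIMS):
    for x, y in simulate_fire(SPREAD_P):
        counts[x][y] += 1
burn_prob = [[c / SIMS for c in row] for row in counts]
```

    Raising SPREAD_P plays the role of extreme weather in this caricature: burn probabilities increase across the whole grid, as the study observed at landscape scale.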

  6. Computer models of social processes: the case of migration.

    PubMed

    Beshers, J M

    1967-06-01

    The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors. Transition probabilities can also be calculated to represent effects of the whole age-by-state distribution at any given time period. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon. Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parameters within the class), and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.
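    The core mechanism described, stored transition probabilities that are re-adjusted from the current distribution via a market-style saturation rule, might be sketched as follows. The two-state setup, the capacities and the adjustment rule are assumptions made for illustration, not Beshers's actual program.

```python
# Two states (say, metro A and metro B) with migration damped as a destination
# nears an assumed "job supply" capacity: a toy saturation/market mechanism.

def project(dist, trans):
    """One Markov step: new_j = sum_i dist_i * trans[i][j]."""
    n = len(dist)
    return [sum(dist[i] * trans[i][j] for i in range(n)) for j in range(n)]

def adjust(trans, dist, capacity):
    """Scale down in-migration to destinations over capacity, then renormalise
    each row so it remains a probability distribution."""
    new = []
    for i, row in enumerate(trans):
        scaled = [p * min(1.0, capacity[j] / max(dist[j], 1e-9)) if i != j else p
                  for j, p in enumerate(row)]
        total = sum(scaled)
        new.append([p / total for p in scaled])
    return new

dist = [800.0, 200.0]                      # initial population by state
trans = [[0.9, 0.1], [0.2, 0.8]]           # initial transition probabilities
capacity = [600.0, 600.0]                  # assumed saturation levels
for _ in range(5):                         # iterate: adjust, then project
    trans = adjust(trans, dist, capacity)
    dist = project(dist, trans)
```

    The new transition probabilities are thus a function of the old ones and of the successive distribution vectors, which is the feedback loop the abstract describes.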

  7. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
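    For context, the classical union bound that Poltyrev-type bounds tighten can be written from the weight enumerator {A_w} of an (n, k) code with rate R = k/n, for complete ML decoding of BPSK over the AWGN channel (this is the textbook bound, not the paper's new bound):

```latex
P_E \;\le\; \sum_{w=1}^{n} A_w \, Q\!\left(\sqrt{\frac{2\,w\,R\,E_b}{N_0}}\right),
\qquad
Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt .
```

    The paper's bound plays the same role for incomplete (bounded-angle) decoders, where the total error probability splits into detected and undetected components.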

  8. Conditional net survival: Relevant prognostic information for colorectal cancer survivors. A French population-based study.

    PubMed

    Drouillard, Antoine; Bouvier, Anne-Marie; Rollot, Fabien; Faivre, Jean; Jooste, Valérie; Lepage, Côme

    2015-07-01

    Traditionally, survival estimates have been reported as survival from the time of diagnosis. A patient's probability of survival changes according to time elapsed since the diagnosis and this is known as conditional survival. The aim was to estimate 5-year net conditional survival in patients with colorectal cancer in a well-defined French population at yearly intervals up to 5 years. Our study included 18,300 colorectal cancers diagnosed between 1976 and 2008 and registered in the population-based digestive cancer registry of Burgundy (France). We calculated conditional 5-year net survival, using the Pohar Perme estimator, for every additional year survived after diagnosis from 1 to 5 years. The initial 5-year net survival estimates varied between 89% for stage I and 9% for advanced stage cancer. The corresponding 5-year net survival for patients alive after 5 years was 95% and 75%. Stage II and III patients who survived 5 years had a similar probability of surviving 5 more years, respectively 87% and 84%. For survivors after the first year following diagnosis, five-year conditional net survival was similar regardless of age class and period of diagnosis. For colorectal cancer survivors, conditional net survival provides relevant and complementary prognostic information for patients and clinicians. Copyright © 2015 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
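    The notion of conditional survival reduces to dividing survival-curve values: the probability of surviving five more years given survival to year t is S(t+5)/S(t). A minimal sketch follows; the survival values are invented for illustration, not Burgundy registry estimates.

```python
# Illustrative net survival values S(t) by years since diagnosis (made up).
surv = {0: 1.00, 1: 0.75, 2: 0.62, 3: 0.55, 4: 0.51, 5: 0.48, 10: 0.40}

def conditional_survival(s, t, horizon=5):
    """P(survive to t + horizon | alive at t) = S(t + horizon) / S(t)."""
    return s[t + horizon] / s[t]

p_from_diagnosis = conditional_survival(surv, 0)  # plain 5-year survival
p_after_5_years = conditional_survival(surv, 5)   # 5-year survival given 5 survived
```

    As in the abstract, the conditional estimate exceeds the from-diagnosis estimate, because early deaths have already been excluded from the denominator.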

  9. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even though they may not be. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.
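    A minimal sketch of the kind of comparison described, a Bayes rule that assumes Gaussian data against a nonparametric sign-test style rule, evaluated on Laplacian samples, follows. The class means, the Laplace scale and the 4-sample setup are illustrative assumptions, not the report's exact experiment.

```python
import math
import random

random.seed(1)
MU0, MU1 = -1.0, 1.0  # true class means (illustrative)

def laplace(mu, b=0.7):
    """Draw from a Laplace(mu, b) distribution via the inverse CDF."""
    u = random.random() - 0.5
    return mu - b * math.copysign(math.log(1 - 2 * abs(u)), u)

def bayes_rule(x):
    """Bayes rule for equal-variance Gaussians: threshold at the midpoint."""
    return 0 if x < (MU0 + MU1) / 2 else 1

def sign_rule(sample):
    """Sign-test style nonparametric rule: majority sign of the sample decides."""
    positives = sum(1 for x in sample if x > 0)
    return 1 if positives > len(sample) / 2 else 0

# Percent correct on 4-sample Laplacian draws (Gaussian assumption is wrong).
trials = 2000
bayes_ok = sign_ok = 0
for _ in range(trials):
    label = random.randrange(2)
    sample = [laplace(MU1 if label else MU0) for _ in range(4)]
    bayes_ok += sum(bayes_rule(x) == label for x in sample)
    sign_ok += int(sign_rule(sample) == label)
bayes_pct = bayes_ok / (4 * trials)  # per-observation accuracy
sign_pct = sign_ok / trials          # per-sample (4 observations) accuracy
```

    The two percentages are not directly comparable (one decision per point vs. one per sample); the sketch only shows that a distribution-free rule needs no density estimate to work on non-Gaussian data.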

  10. Identifying sources of heterogeneity in capture probabilities: An example using the Great Tit Parus major

    USGS Publications Warehouse

    Senar, J.C.; Conroy, M.J.; Carrascal, L.M.; Domenech, J.; Mozetich, I.; Uribe, F.

    1999-01-01

    Heterogeneous capture probabilities are a common problem in many capture-recapture studies. Several methods of detecting the presence of such heterogeneity are currently available, and stratification of data has been suggested as the standard method to avoid its effects. However, few studies have tried to identify sources of heterogeneity, or whether there are interactions among sources. The aim of this paper is to suggest an analytical procedure to identify sources of capture heterogeneity. We use data on the sex and age of Great Tits captured in baited funnel traps, at two localities differing in average temperature. We additionally use 'recapture' data obtained by videotaping at a feeder (with no associated trap), where tits ringed with different colours were recorded. This allowed us to test whether individuals in different classes (age, sex and condition) are not trapped because of trap shyness or because of a reduced use of the bait. We used logistic regression analysis of the capture probabilities to test for the effects of age, sex, condition, location and 'recapture' method. The results showed a higher recapture probability in the colder locality. Yearling birds (either males or females) had the highest recapture probabilities, followed by adult males, while adult females had the lowest recapture probabilities. There was no effect of the method of 'recapture' (trap or videotape), which suggests that adult females are less often captured in traps not because of trap-shyness but because of less dependence on supplementary food. The potential use of this methodological approach in other studies is discussed.
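    The logistic-regression step can be sketched as follows. The simulated recapture records and the plain gradient-ascent fitter are illustrative stand-ins, not the paper's data or software; the invented effect sizes merely echo the reported pattern (adult females lowest).

```python
import math
import random

random.seed(2)

def simulate(n=800):
    """Hypothetical recapture records (adult, female, recaptured)."""
    data = []
    for _ in range(n):
        adult, female = random.randrange(2), random.randrange(2)
        p = 0.55 - 0.15 * adult - 0.10 * adult * female  # invented probabilities
        data.append((adult, female, int(random.random() < p)))
    return data

def fit_logistic(data, steps=1500, lr=0.05):
    """P(recapture) = sigmoid(b0 + b1*adult + b2*female), fitted by plain
    gradient ascent on the log-likelihood (no library dependencies)."""
    b = [0.0, 0.0, 0.0]
    for _ in range(steps):
        grad = [0.0, 0.0, 0.0]
        for adult, female, y in data:
            x = (1, adult, female)
            p = 1.0 / (1.0 + math.exp(-sum(bi * xi for bi, xi in zip(b, x))))
            for k in range(3):
                grad[k] += (y - p) * x[k]
        b = [bi + lr * gi / len(data) for bi, gi in zip(b, grad)]
    return b

coefs = fit_logistic(simulate())  # coefs[1] should come out negative: adults lower
```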

  11. [Influence of individual characteristics and working conditions on the severity of occupational injuries registered in Andalusia, Spain, in 2003].

    PubMed

    Muñoz, Julia Bolívar; Codina, Antonio Daponte; Cruz, Laura López; Rodríguez, Inmaculada Mateo

    2009-01-01

    The study of the severity of occupational injuries is very important for the establishment of prevention plans. The aim of this paper is to analyze the distribution of occupational injuries by a) individual factors, b) workplace characteristics and c) working conditions, and to analyze the severity of occupational injuries by these characteristics in men and women in Andalusia. Injury data came from the accident registry of the Ministry of Labour and Social Affairs for 2003. The dependent variable was the severity of the injury: slight, serious, very serious or fatal; the independent variables were the characteristics of the worker, company data, and the accident itself. Bivariate and multivariate analyses were done to estimate the probability of serious, very serious and fatal injury in relation to the other variables, through odds ratios (OR) with 95% confidence intervals (CI 95%). Overall, 82.4% of the records were men and 17.6% women; 78.1% of the women were unskilled manual workers, compared to 44.9% of the men. Men belonging to class I had a higher probability of more severe injuries (OR = 1.67, 95% CI = 1.17-2.38). The severity of the injury was associated with sex, age and type of injury. In men it was also related to the professional situation, the place where the accident happened, an unusual job, the size and characteristics of the company, and social class; in women, with the sector of activity.

  12. AUTOCLASS III - AUTOMATIC CLASS DISCOVERY FROM DATA

    NASA Technical Reports Server (NTRS)

    Cheeseman, P. C.

    1994-01-01

    The program AUTOCLASS III, Automatic Class Discovery from Data, uses Bayesian probability theory to provide a simple and extensible approach to problems such as classification and general mixture separation. Its theoretical basis is free from ad hoc quantities, and in particular free of any measures which alter the data to suit the needs of the program. As a result, the elementary classification model used lends itself easily to extensions. The standard approach to classification in much of artificial intelligence and statistical pattern recognition research involves partitioning of the data into separate subsets, known as classes. AUTOCLASS III uses the Bayesian approach in which classes are described by probability distributions over the attributes of the objects, specified by a model function and its parameters. The calculation of the probability of each object's membership in each class provides a more intuitive classification than absolute partitioning techniques. AUTOCLASS III is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or omitted. The user specifies a class probability distribution function by associating attribute sets with supplied likelihood function terms. AUTOCLASS then searches in the space of class numbers and parameters for the maximally probable combination. It returns the set of class probability function parameters, and the class membership probabilities for each data instance. AUTOCLASS III is written in Common Lisp, and is designed to be platform independent. This program has been successfully run on Symbolics and Explorer Lisp machines. 
It has been successfully used with the following implementations of Common LISP on the Sun: Franz Allegro CL, Lucid Common Lisp, and Austin Kyoto Common Lisp and similar UNIX platforms; under the Lucid Common Lisp implementations on VAX/VMS v5.4, VAX/Ultrix v4.1, and MIPS/Ultrix v4, rev. 179; and on the Macintosh personal computer. The minimum Macintosh required is the IIci. This program will not run under CMU Common Lisp or VAX/VMS DEC Common Lisp. A minimum of 8Mb of RAM is required for Macintosh platforms and 16Mb for workstations. The standard distribution medium for this program is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 3.5 inch diskette in Macintosh format. An electronic copy of the documentation is included on the distribution medium. AUTOCLASS was developed between March 1988 and March 1992. It was initially released in May 1991. Sun is a trademark of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation. Macintosh is a trademark of Apple Computer, Inc. Allegro CL is a registered trademark of Franz, Inc.
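    The Bayesian soft-classification idea at the heart of AUTOCLASS (classes described by probability distributions over attributes, with membership probabilities rather than hard partitions) can be sketched with a tiny EM fit of a two-component 1-D Gaussian mixture. This illustrates the idea only; it is not AUTOCLASS, which also searches over the number of classes and handles mixed attribute types.

```python
import math
import random

random.seed(3)

# Data from two overlapping 1-D Gaussians (an illustrative attribute).
data = ([random.gauss(0, 1) for _ in range(150)]
        + [random.gauss(4, 1) for _ in range(150)])

def em_two_gaussians(xs, iters=50):
    """Tiny EM for a 2-component Gaussian mixture; returns parameters plus a
    soft membership probability per data point (the AutoClass-style output)."""
    mu = [min(xs), max(xs)]        # crude initialisation
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    resp = []
    for _ in range(iters):
        # E-step: posterior membership probability of each point in each class.
        resp = []
        for x in xs:
            like = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                    * math.exp(-((x - mu[k]) ** 2) / (2 * sd[k] ** 2))
                    for k in range(2)]
            s = sum(like)
            resp.append([l / s for l in like])
        # M-step: re-estimate weights, means and standard deviations.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                  for r, x in zip(resp, xs)) / nk) or 1e-6
    return mu, sd, w, resp

mu, sd, w, resp = em_two_gaussians(data)
```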

  13. Can the dissociative PTSD subtype be identified across two distinct trauma samples meeting caseness for PTSD?

    PubMed

    Hansen, Maj; Műllerová, Jana; Elklit, Ask; Armour, Cherie

    2016-08-01

    For over a century, the occurrence of dissociative symptoms in connection to traumatic exposure has been acknowledged in the scientific literature. Recently, the importance of dissociation has also been recognized in the long-term traumatic response within the DSM-5 nomenclature. Several studies have confirmed the existence of the dissociative posttraumatic stress disorder (PTSD) subtype. However, there is a lack of studies investigating latent profiles of PTSD solely in victims with PTSD. This study investigates the possible presence of PTSD subtypes using latent class analysis (LCA) across two distinct trauma samples meeting caseness for DSM-5 PTSD based on self-reports (N = 787). Moreover, we assessed if a number of risk factors resulted in an increased probability of membership in a dissociative compared with a non-dissociative PTSD class. The results of LCA revealed a two-class solution with two highly symptomatic classes: a dissociative class and a non-dissociative class across both samples. Increased emotion-focused coping increased the probability of individuals being grouped into the dissociative class across both samples. Social support reduced the probability of individuals being grouped into the dissociative class but only in the victims of motor vehicle accidents (MVAs) suffering from whiplash. The results are discussed in light of their clinical implications and suggest that the dissociative subtype can be identified in victims of incest and victims of MVA suffering from whiplash meeting caseness for DSM-5 PTSD.
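    The way an LCA-style model turns item-response (conditional) probabilities into posterior class membership can be sketched directly. The class weights and symptom-endorsement probabilities below are invented for illustration, not estimates from this study.

```python
# Two latent classes with prior weights and per-symptom endorsement
# (conditional) probabilities P(symptom | class). All numbers are made up.
weights = {"dissociative": 0.3, "non_dissociative": 0.7}
endorse = {
    "dissociative":     {"flashbacks": 0.9, "derealization": 0.8, "avoidance": 0.85},
    "non_dissociative": {"flashbacks": 0.9, "derealization": 0.1, "avoidance": 0.85},
}

def posterior(responses):
    """P(class | responses) for binary symptom responses, assuming local
    independence of items given class (the standard LCA assumption)."""
    joint = {}
    for cls, w in weights.items():
        p = w
        for item, endorsed in responses.items():
            p *= endorse[cls][item] if endorsed else 1 - endorse[cls][item]
        joint[cls] = p
    total = sum(joint.values())
    return {cls: p / total for cls, p in joint.items()}

post = posterior({"flashbacks": True, "derealization": True, "avoidance": True})
```

    Note how the derealization item does nearly all the work: the two classes endorse the other symptoms at identical rates, which mirrors how a dissociative subtype separates from an equally symptomatic non-dissociative class.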

  14. Climate drives inter-annual variability in probability of high severity fire occurrence in the western United States

    NASA Astrophysics Data System (ADS)

    Keyser, Alisa; Westerling, Anthony LeRoy

    2017-05-01

    A long history of fire suppression in the western United States has significantly changed forest structure and ecological function, leading to increasingly uncharacteristic fires in terms of size and severity. Prior analyses of fire severity in California forests showed that time since last fire and fire weather conditions predicted fire severity very well, while a larger regional analysis showed that topography and climate were important predictors of high severity fire. There has not yet been a large-scale study that incorporates topography, vegetation and fire-year climate to determine regional scale high severity fire occurrence. We developed models to predict the probability of high severity fire occurrence for the western US. We predict high severity fire occurrence with some accuracy, and identify the relative importance of predictor classes in determining the probability of high severity fire. The inclusion of both vegetation and fire-year climate predictors was critical for model skill in identifying fires with high fractional fire severity. The inclusion of fire-year climate variables allows this model to forecast inter-annual variability in areas at future risk of high severity fire, beyond what slower-changing fuel conditions alone can accomplish. This allows for more targeted land management, including resource allocation for fuels reduction treatments to decrease the risk of high severity fire.

  15. A general stochastic model for sporophytic self-incompatibility.

    PubMed

    Billiard, Sylvain; Tran, Viet Chi

    2012-01-01

    Disentangling the processes leading populations to extinction is a major topic in ecology and conservation biology. The difficulty to find a mate in many species is one of these processes. Here, we investigate the impact of self-incompatibility in flowering plants, where several inter-compatible classes of individuals exist but individuals of the same class cannot mate. We model pollen limitation through different relationships between mate availability and fertilization success. After deriving a general stochastic model, we focus on the simple case of distylous plant species where only two classes of individuals exist. We first study the dynamics of such a species in a large population limit and then, we look for an approximation of the extinction probability in small populations. This leads us to consider inhomogeneous random walks on the positive quadrant. We compare the dynamics of distylous species to self-fertile species with and without inbreeding depression, to obtain the conditions under which self-incompatible species can be less sensitive to extinction while they can suffer more pollen limitation. © Springer-Verlag 2011
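    The qualitative effect studied, mate limitation raising extinction risk in small self-incompatible populations, can be illustrated with a crude Monte Carlo caricature. The discrete-time dynamics and all parameter values below are assumptions for illustration, not the paper's stochastic model.

```python
import random

random.seed(4)
BIRTH, DEATH, HORIZON = 0.55, 0.5, 50  # assumed per-step rates, not fitted values

def goes_extinct(n1, n2):
    """One trajectory of a toy distylous population: two mating types that can
    only reproduce while BOTH types are present (no compatible mate otherwise)."""
    for _ in range(HORIZON):
        if n1 == 0 or n2 == 0:
            return True  # one type lost: no compatible matings remain
        births1 = sum(random.random() < BIRTH for _ in range(n1))
        births2 = sum(random.random() < BIRTH for _ in range(n2))
        deaths1 = sum(random.random() < DEATH for _ in range(n1))
        deaths2 = sum(random.random() < DEATH for _ in range(n2))
        n1, n2 = n1 + births1 - deaths1, n2 + births2 - deaths2
    return n1 == 0 or n2 == 0

def extinction_prob(n_per_type, trials=150):
    """Monte Carlo extinction probability for a given founding size per type."""
    return sum(goes_extinct(n_per_type, n_per_type) for _ in range(trials)) / trials

p_small = extinction_prob(2)   # small founding population: extinction likely
p_large = extinction_prob(20)  # larger founding population: extinction rare
```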

  16. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaption, to adjust the error correction strength depending on the optical channel conditions.
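    As a concrete example of a local component code, the (7,4) Hamming code admits one-line syndrome decoding. This sketch shows hard-decision decoding only; the MAP decoding via the Ashikhmin-Lytsin algorithm referenced above is soft-decision and considerably more involved.

```python
# Parity-check matrix of the (7,4) Hamming code: column j (1-indexed) holds the
# binary expansion of j, so the syndrome directly encodes the error position.
H = [[(j >> b) & 1 for j in range(1, 8)] for b in range(3)]

def syndrome(word):
    """Syndrome s = H * word (mod 2) for a 7-bit received word."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

def correct(word):
    """Single-error correction: flip the bit indexed by the syndrome."""
    s = syndrome(word)
    pos = s[0] * 1 + s[1] * 2 + s[2] * 4  # 1-indexed error position, 0 = clean
    out = list(word)
    if pos:
        out[pos - 1] ^= 1
    return out

codeword = [0, 0, 0, 0, 0, 0, 0]  # the all-zero word is always a codeword
received = list(codeword)
received[4] ^= 1                  # inject a single bit error at position 5
decoded = correct(received)
```

    A GLDPC code applies many such local decoders to overlapping subsets of the global codeword and iterates between them.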

  17. Clustering of Multiple Risk Behaviors Among a Sample of 18-Year-Old Australians and Associations With Mental Health Outcomes: A Latent Class Analysis.

    PubMed

    Champion, Katrina E; Mather, Marius; Spring, Bonnie; Kay-Lambkin, Frances; Teesson, Maree; Newton, Nicola C

    2018-01-01

    Risk behaviors commonly co-occur, typically emerge in adolescence, and become entrenched by adulthood. This study investigated the clustering of established (physical inactivity, diet, smoking, and alcohol use) and emerging (sedentary behavior and sleep) chronic disease risk factors among young Australian adults, and examined how clusters relate to mental health. The sample was derived from the long-term follow-up of a cohort of Australians. Participants were initially recruited at school as part of a cluster randomized controlled trial. A total of 853 participants (M age = 18.88 years, SD = 0.42) completed an online self-report survey as part of the 5-year follow-up for the RCT. The survey assessed six behaviors (binge drinking and smoking in the past 6 months, moderate-to-vigorous physical activity/week, sitting time/day, fruit and vegetable intake/day, and sleep duration/night). Each behavior was represented by a dichotomous variable reflecting adherence to national guidelines. Exploratory analyses were conducted. Clusters were identified using latent class analysis. Three classes emerged: "moderate risk" (moderately likely to binge drink and not eat enough fruit, high probability of insufficient vegetable intake; Class 1, 52%); "inactive, non-smokers" (high probabilities of not meeting guidelines for physical activity, sitting time and fruit/vegetable consumption, very low probability of smoking; Class 2, 24%), and "smokers and binge drinkers" (high rates of smoking and binge drinking, poor fruit/vegetable intake; Class 3, 24%). There were significant differences between the classes in terms of psychological distress (p = 0.003), depression (p < 0.001), and anxiety (p = 0.003). Specifically, Class 3 ("smokers and binge drinkers") showed higher levels of distress, depression, and anxiety than Class 1 ("moderate risk"), while Class 2 ("inactive, non-smokers") had greater depression than the "moderate risk" group. 
Results indicate that risk behaviors are prevalent and clustered in 18-year-old Australians. Mental health symptoms were significantly greater among the two classes that were characterized by high probabilities of engaging in multiple risk behaviors (Classes 2 and 3). An examination of the clustering of lifestyle risk behaviors is important to guide the development of preventive interventions. Our findings reinforce the importance of delivering multiple health interventions to reduce disease risk and improve mental well-being.

  18. Use of portable antennas to estimate abundance of PIT-tagged fish in small streams: Factors affecting detection probability

    USGS Publications Warehouse

    O'Donnell, Matthew J.; Horton, Gregg E.; Letcher, Benjamin H.

    2010-01-01

    Portable passive integrated transponder (PIT) tag antenna systems can be valuable in providing reliable estimates of the abundance of tagged Atlantic salmon Salmo salar in small streams under a wide range of conditions. We developed and employed PIT tag antenna wand techniques in two controlled experiments and an additional case study to examine the factors that influenced our ability to estimate population size. We used Pollock's robust-design capture–mark–recapture model to obtain estimates of the probability of first detection (p), the probability of redetection (c), and abundance (N) in the two controlled experiments. First, we conducted an experiment in which tags were hidden in fixed locations. Although p and c varied among the three observers and among the three passes that each observer conducted, the estimates of N were identical to the true values and did not vary among observers. In the second experiment using free-swimming tagged fish, p and c varied among passes and time of day. Additionally, estimates of N varied between day and night and among age-classes but were within 10% of the true population size. In the case study, we used the Cormack–Jolly–Seber model to examine the variation in p, and we compared counts of tagged fish found with the antenna wand with counts collected via electrofishing. In that study, we found that although p varied for age-classes, sample dates, and time of day, antenna and electrofishing estimates of N were similar, indicating that population size can be reliably estimated via PIT tag antenna wands. However, factors such as the observer, time of day, age of fish, and stream discharge can influence the initial and subsequent detection probabilities.
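    A minimal sketch of abundance estimation from two imperfect detection passes follows, using the bias-corrected Chapman form of the Lincoln-Petersen estimator, a simpler relative of the robust-design models used in the paper. The tag count and per-pass detectability are invented for illustration.

```python
import random

random.seed(5)
N_TRUE, P_DETECT = 120, 0.6  # invented tag count and per-pass detectability

def antenna_pass(p):
    """One wand pass: each tagged fish is detected independently with prob p."""
    return {tag for tag in range(N_TRUE) if random.random() < p}

first, second = antenna_pass(P_DETECT), antenna_pass(P_DETECT)
n1, n2, m = len(first), len(second), len(first & second)

# Chapman's bias-corrected form of the Lincoln-Petersen abundance estimator.
n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
p_hat = m / n1  # crude estimate of per-pass detection probability
```

    The paper's point is precisely that p varies (by observer, time of day, age class, discharge), which is why it uses models that let detection probability differ across passes rather than this constant-p sketch.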

  19. Efficient and faithful remote preparation of arbitrary three- and four-particle W-class entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu

    2015-06-01

    We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels, respectively. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's place with a certain probability. Compared with previously existing schemes, the success probability of the current schemes is greatly increased. Moreover, the required classical communication cost is calculated as well. Further, several properties of the presented schemes, including the success probability and reducibility, are discussed. Remarkably, the proposed schemes can be faithfully achieved with unit total success probability when the employed channels are reduced into maximally entangled ones.

  20. Patient factors and quality of life outcomes differ among four subgroups of oncology patients based on symptom occurrence.

    PubMed

    Astrup, Guro Lindviksmoen; Hofsø, Kristin; Bjordal, Kristin; Guren, Marianne Grønlie; Vistad, Ingvild; Cooper, Bruce; Miaskowski, Christine; Rustøen, Tone

    2017-03-01

    Reviews of the literature on symptoms in oncology patients undergoing curative treatment, as well as patients receiving palliative care, suggest that they experience multiple, co-occurring symptoms and side effects. The purposes of this study were to determine if subgroups of oncology patients could be identified based on symptom occurrence rates and if these subgroups differed on a number of demographic and clinical characteristics, as well as on quality of life (QoL) outcomes. Latent class analysis (LCA) was used to identify subgroups (i.e. latent classes) of patients with distinct symptom experiences based on the occurrence rates for the 13 most common symptoms from the Memorial Symptom Assessment Scale. In total, 534 patients with breast, head and neck, colorectal, or ovarian cancer participated. Four latent classes of patients were identified based on probability of symptom occurrence: all low class [i.e. low probability for all symptoms (n = 152)], all high class (n = 149), high psychological class (n = 121), and low psychological class (n = 112). Patients in the all high class were significantly younger compared with patients in the all low class. Furthermore, compared to the other three classes, patients in the all high class had lower functional status and higher comorbidity scores, and reported poorer QoL scores. Patients in the high and low psychological classes had a moderate probability of reporting physical symptoms. Patients in the low psychological class reported a higher number of symptoms, a lower functional status, and poorer physical and total QoL scores. Distinct subgroups of oncology patients can be identified based on symptom occurrence rates. Patient characteristics that are associated with these subgroups can be used to identify patients who are at greater risk for multiple co-occurring symptoms and diminished QoL, so that these patients can be offered appropriate symptom management interventions.

  1. Transfer of conflict and cooperation from experienced games to new games: a connectionist model of learning

    PubMed Central

    Spiliopoulos, Leonidas

    2015-01-01

    The question of whether, and if so how, learning can be transferred from previously experienced games to novel games has recently attracted the attention of the experimental game theory literature. Existing research presumes that learning operates over actions, beliefs or decision rules. This study instead uses a connectionist approach that learns a direct mapping from game payoffs to a probability distribution over own actions. Learning is operationalized as a backpropagation rule that adjusts the weights of feedforward neural networks in the direction of increasing the probability of an agent playing a myopic best response to the last game played. One advantage of this approach is that it expands the scope of the model to any possible n × n normal-form game, allowing for a comprehensive model of transfer of learning. Agents are exposed to games drawn from one of seven classes of games with significantly different strategic characteristics and then forced to play games from previously unseen classes. I find significant transfer of learning, i.e., behavior that is path-dependent, or conditional on the previously seen games. Cooperation is more pronounced in new games when agents are previously exposed to games where the incentive to cooperate is stronger than the incentive to compete, i.e., when individual incentives are aligned. Prior exposure to Prisoner's dilemma, zero-sum and discoordination games led to a significant decrease in realized payoffs for all the game classes under investigation. A distinction is made between superficial and deep transfer of learning: the former is driven by superficial payoff similarities between games, the latter by differences in the incentive structures or strategic implications of the games. I examine whether agents learn to play the Nash equilibria of games, how they select amongst multiple equilibria, and whether they transfer Nash equilibrium behavior to unseen games. 
Sufficient exposure to a strategically heterogeneous set of games is found to be a necessary condition for deep learning (and transfer) across game classes. Paradoxically, superficial transfer of learning is shown to lead to better outcomes than deep transfer for a wide range of game classes. The simulation results corroborate important experimental findings with human subjects, and make several novel predictions that can be tested experimentally. PMID:25873855
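    The learning rule described, backpropagation pushing a payoff-to-action-probability mapping toward the myopic best response to the last game, can be sketched with a single linear softmax layer. The architecture and the toy game below are assumptions for illustration, far smaller than the paper's networks.

```python
import math
import random

random.seed(6)

def softmax(zs):
    """Numerically stable softmax over a list of logits."""
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

class PayoffToActionNet:
    """Minimal stand-in: a linear map from a 2x2 game's own-payoff matrix
    (flattened) to logits over the two actions."""
    def __init__(self, n_in=4, n_out=2):
        self.w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)]
                  for _ in range(n_out)]

    def probs(self, payoffs):
        logits = [sum(wi * x for wi, x in zip(row, payoffs)) for row in self.w]
        return softmax(logits)

    def train_step(self, payoffs, best_response, lr=0.5):
        """Cross-entropy gradient step raising P(best response to last game)."""
        p = self.probs(payoffs)
        for a in range(len(self.w)):
            target = 1.0 if a == best_response else 0.0
            for i, x in enumerate(payoffs):
                self.w[a][i] += lr * (target - p[a]) * x

net = PayoffToActionNet()
# Toy game, rows flattened: action 0 own payoffs (0, 0), action 1 (1, 1),
# so action 1 strictly dominates and is always the myopic best response.
game = [0.0, 0.0, 1.0, 1.0]
for _ in range(200):
    net.train_step(game, best_response=1)
```

    Because the network maps payoffs (not game identities) to action probabilities, whatever it learns transfers mechanically to any unseen game fed through the same weights, which is the source of the path dependence studied.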

  2. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. 
We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
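    The proportional-with-clamping selection rule described above (training pixels proportional to class occurrence, clamped to a minimum of 600 and a maximum of 8000 per class, with a 20,000-pixel budget) can be sketched as follows. The function name, rounding, and input format are illustrative assumptions, not the authors' implementation; note that clamping means the realized total can drift from the 20,000 target.

```python
def allocate_training_pixels(class_counts, total=20000, floor=600, cap=8000):
    """Allocate training pixels proportionally to class occurrence,
    then clamp each class's allocation to [floor, cap]."""
    total_pixels = sum(class_counts.values())
    alloc = {}
    for cls, n in class_counts.items():
        proportional = round(total * n / total_pixels)
        alloc[cls] = min(max(proportional, floor), cap)
    return alloc
```

For example, a rare class covering 0.2% of the scene would receive the 600-pixel floor rather than its ~33-pixel proportional share, while a dominant class is capped at 8000.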

  3. Operationalizing Max Weber's probability concept of class situation: the concept of social class.

    PubMed

    Smith, Ken

    2007-03-01

    In this essay I take seriously Max Weber's astonishingly neglected claim that class situation may be defined, not in categorial terms, but probabilistically. I then apply this idea to another equally neglected claim made by Weber that the boundaries of social classes may be determined by the degree of social mobility within such classes. Taking these two ideas together I develop the idea of a non-categorial boundary 'surface' between classes and of a social class 'corridor' made up of all those people who are still to be found within the boundaries of the social class into which they were born. I call social mobility within a social class 'intra-class social mobility' and social mobility between classes 'inter-class social mobility'. I also claim that this distinction resolves the dispute between those sociologists who claim that late industrial societies are still highly class bound and those who think that this is no longer the case. Both schools are right I think, but one is referring to a high degree of intra-class social mobility and the other to an equally high degree of inter-class mobility. Finally I claim that this essay provides sociology with only one example among many other possible applications of how probability theory might usefully be used to overcome boundary problems generally in sociology.

  4. A latent transition model of the effects of a teen dating violence prevention initiative.

    PubMed

    Williams, Jason; Miller, Shari; Cutbush, Stacey; Gibbs, Deborah; Clinton-Sherrod, Monique; Jones, Sarah

    2015-02-01

    Patterns of physical and psychological teen dating violence (TDV) perpetration, victimization, and related behaviors were examined with data from the evaluation of the Start Strong: Building Healthy Teen Relationships initiative, a dating violence primary prevention program targeting middle school students. Latent class and latent transition models were used to estimate distinct patterns of TDV and related behaviors of bullying and sexual harassment in seventh grade students at baseline and to estimate transition probabilities from one pattern of behavior to another at the 1-year follow-up. Intervention effects were estimated by conditioning transitions on exposure to Start Strong. Latent class analyses suggested four classes best captured patterns of these interrelated behaviors. Classes were characterized by elevated perpetration and victimization on most behaviors (the multiproblem class), bullying perpetration/victimization and sexual harassment victimization (the bully-harassment victimization class), bullying perpetration/victimization and psychological TDV victimization (bully-psychological victimization), and experience of bully victimization (bully victimization). Latent transition models indicated greater stability of class membership in the comparison group. Intervention students were less likely to transition to the most problematic pattern and more likely to transition to the least problematic class. Although Start Strong has not been found to significantly change TDV, alternative evaluation models may find important differences. Latent transition analysis models suggest positive intervention impact, especially for the transitions at the most and the least positive end of the spectrum. Copyright © 2015. Published by Elsevier Inc.

  5. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. The nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with a time-varying probability that is measurable in real time. The main purpose of this paper is to design a gain-scheduled controller, by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach, such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  6. A hazard and risk classification system for catastrophic rock slope failures in Norway

    NASA Astrophysics Data System (ADS)

    Hermanns, R.; Oppikofer, T.; Anda, E.; Blikra, L. H.; Böhme, M.; Bunkholt, H.; Dahle, H.; Devoli, G.; Eikenæs, O.; Fischer, L.; Harbitz, C. B.; Jaboyedoff, M.; Loew, S.; Yugsi Molina, F. X.

    2012-04-01

    The Geological Survey of Norway carries out systematic geologic mapping of potentially unstable rock slopes in Norway that can cause a catastrophic failure. By catastrophic failure we mean failures that involve substantial fragmentation of the rock mass during run-out and that impact an area larger than that of a rock fall (shadow angle of ca. 28-32° for rock falls). This therefore includes rock slope failures that lead to secondary effects, such as a displacement wave when impacting a water body or damming of a narrow valley. Our systematic mapping revealed more than 280 rock slopes with significant postglacial deformation, which might represent localities of large future rock slope failures. This large number necessitates prioritization of follow-up activities, such as more detailed investigations, periodic monitoring, and permanent monitoring and early warning. In the past, hazard and risk were assessed qualitatively for some sites; however, in order to compare sites so that political and financial decisions can be taken, it was necessary to develop a quantitative hazard and risk classification system. A preliminary classification system was presented to and discussed with a group of Norwegian and international experts and afterwards adapted following their recommendations. This contribution presents the concept of this final hazard and risk classification that should be used in Norway in the upcoming years. Historical experience and possible future rockslide scenarios in Norway indicate that hazard assessment of large rock slope failures must be scenario-based, because the intensity of deformation and present displacement rates, as well as the geological structures activated by the sliding rock mass, can vary significantly on a given slope. In addition, for each scenario the run-out of the rock mass has to be evaluated.
    This includes secondary effects such as the generation of displacement waves or the landslide damming of valleys with the potential of later outburst floods. It became obvious that large rock slope failures cannot be evaluated on a slope scale with frequency analyses of historical and prehistorical events only, as multiple rockslides have occurred within one century on a single slope that, prior to the recent failures, had been inactive for several thousand years. In addition, a systematic analysis of temporal distribution indicates that rockslide activity following deglaciation after the Last Glacial Maximum was much higher than throughout the Holocene. Therefore the classification system has to be based primarily on the geological conditions on the deforming slope and on the deformation rates, and only to a lesser extent on frequency analyses. Our hazard classification is therefore based primarily on nine criteria: 1) development of the back-scarp, 2) development of the lateral release surfaces, 3) development of the potential basal sliding surface, 4) morphologic expression of the basal sliding surface, 5) kinematic feasibility tests for different displacement mechanisms, 6) landslide displacement rates, 7) change of displacement rates (acceleration), 8) increase of rockfall activity on the unstable rock slope, and 9) presence of post-glacial events of similar size along the affected slope and in its vicinity. For each of these criteria there are several conditions to choose from (e.g. different velocity classes for the displacement rate criterion). A score is assigned to each condition and the sum of all scores gives the total susceptibility score. Since many of these observations are somewhat uncertain, the classification system is organized in a decision tree where probabilities can be assigned to each condition. All possibilities in the decision tree are computed and the individual probabilities giving the same total score are summed.
    Basic statistics show the minimum and maximum total scores of a scenario, as well as the mean and modal values. The final output is a cumulative frequency distribution of the susceptibility scores that can be divided into several classes, which are interpreted as susceptibility classes (very high, high, medium, low, and very low). Today the Norwegian Planning and Building Act uses hazard classes with annual probabilities of impact on buildings producing damages (<1/100, <1/1000, <1/5000, and zero for critical buildings). However, up to now there is not enough scientific knowledge to predict large rock slope failures within these strict classes. Therefore, the susceptibility classes will be matched with the hazard classes from the Norwegian Building Act (e.g. very high susceptibility represents the hazard class with annual probability >1/100). The risk analysis focuses only on the potential fatalities of a worst-case rockslide scenario and its secondary effects, and is done in consequence classes on a decimal logarithmic scale. However, we recommend that municipalities carry out detailed risk analyses for all high-risk objects. Finally, the hazard and risk classification system will give recommendations on where surveillance in the form of continuous 24/7 monitoring coupled with early-warning systems (high risk class) or periodic monitoring (medium risk class) should be carried out. These measures are intended to reduce the risk of loss of life due to a rock slope failure to close to zero, as the population can be evacuated in time if the stability situation changes. The final hazard and risk classification for all potentially unstable rock slopes in Norway, including all data used for the classification, will be published within the national landslide database (available on www.skrednett.no).
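    The decision-tree aggregation described above (every path through the tree is enumerated and the probabilities of paths yielding the same total score are summed) can be sketched directly. The function name and input format here are hypothetical; each criterion is represented as a list of (score, probability) alternatives.

```python
from itertools import product

def score_distribution(criteria):
    """Each criterion is a list of (score, probability) pairs whose
    probabilities sum to 1. Enumerate every path through the decision
    tree and sum the path probabilities giving the same total score."""
    dist = {}
    for combo in product(*criteria):
        total = sum(score for score, _ in combo)
        path_prob = 1.0
        for _, prob in combo:
            path_prob *= prob
        dist[total] = dist.get(total, 0.0) + path_prob
    return dist
```

Sorting the resulting dictionary by score and accumulating the probabilities yields the cumulative frequency distribution of susceptibility scores mentioned in the abstract.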

  7. Evaluation of drought using SPEI drought class transitions and log-linear models for different agro-ecological regions of India

    NASA Astrophysics Data System (ADS)

    Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.

    2017-08-01

    Markov chain and 3-dimensional log-linear models were applied to model drought class transitions derived from the newly developed Standardized Precipitation Evapotranspiration Index (SPEI) at a 12-month time scale for six major drought-prone areas of India. A log-linear modelling approach was used to investigate differences relative to drought class transitions using SPEI-12 time series derived from 48 years of monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1-, 2- or 3-month-ahead prediction of average transition time between drought classes, and the drought severity class have been derived. Seasonality of precipitation has been derived for non-homogeneous Markov chains, which could be used to explain the effect of the potential retreat of drought. Quasi-association and quasi-symmetry log-linear models have been fitted to the drought class transitions derived from the SPEI-12 time series. The estimates of odds, along with their confidence intervals, were obtained to explain the progression of drought and the estimation of drought class transition probabilities. For the initial months, the calculated odds decrease as drought severity increases, and the odds decrease further for the succeeding months. This indicates that the ratio of expected frequencies of transition from a drought class to the non-drought class, compared with transition to any drought class, decreases as the drought severity of the present class increases. The 3-dimensional log-linear model makes clear that during the last 24 years the drought probability has increased for almost all six regions. The findings from the present study will greatly help to assess the impact of drought on gross primary production and to develop future contingency planning in similar regions worldwide.
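    A first-order Markov chain of drought class transitions, as used above, is estimated by counting transitions in the monthly SPEI-12 class series and normalizing each row of the count matrix. A minimal sketch, assuming classes are coded as integers (e.g. 0 = non-drought, higher = more severe):

```python
import numpy as np

def transition_matrix(classes, n_states):
    """Estimate first-order Markov transition probabilities from a
    sequence of drought classes coded 0..n_states-1."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(classes[:-1], classes[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # leave unvisited states as zero rows
    return counts / row_sums
```

The diagonal entries give the persistence of each drought class, and the mean residence time of class i follows as 1 / (1 - P[i, i]) when P[i, i] < 1.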

  8. Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.

    PubMed

    She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng

    2015-01-01

    Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. To solve this problem, a multiclass posterior probability solution for the twin SVM is proposed in this paper, based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using ranking continuous output techniques and Platt's estimation method. Secondly, multiclass probabilistic outputs for the twin SVM are obtained by combining every pair of class probabilities using pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method, in terms of classification accuracy and time complexity, is demonstrated on both the UCI benchmark datasets and real-world EEG data from BCI Competition IV Dataset 2a.
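    One standard way to combine pairwise posteriors r[i][j] = P(class i | class i or j) into multiclass probabilities is the closed-form coupling rule of Price et al.; the paper compares several coupling approaches, so this is just one illustrative choice, not necessarily the authors' method.

```python
import numpy as np

def pairwise_coupling(r):
    """Combine pairwise posteriors r[i][j] = P(class i | i or j) into
    multiclass probabilities via Price et al.'s closed-form rule:
    p_i = 1 / (sum_{j != i} 1/r[i][j] - (K - 2)), then normalize."""
    K = len(r)
    p = np.empty(K)
    for i in range(K):
        s = sum(1.0 / r[i][j] for j in range(K) if j != i)
        p[i] = 1.0 / (s - (K - 2))
    return p / p.sum()
```

When the pairwise estimates are mutually consistent (r[i][j] = p_i / (p_i + p_j) for some true p), this rule recovers p exactly; with noisy Platt-scaled estimates it gives an approximate reconciliation.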

  9. Comparative Study of Teachers in Regular Schools and Teachers in Specialized Schools in France, Working with Students with an Autism Spectrum Disorder: Stress, Social Support, Coping Strategies and Burnout.

    PubMed

    Boujut, Emilie; Dean, Annika; Grouselle, Amélie; Cappe, Emilie

    2016-09-01

    The inclusion of students with Autism Spectrum Disorder (ASD) in schools is a source of stress for teachers. Specialized teachers have, in theory, received special training. The aim of this study was to compare the experiences of teachers dealing with students with ASD in different classroom environments. A total of 245 teachers filled out four self-report questionnaires measuring perceived stress, social support, coping strategies, and burnout. Specialized teachers perceive their teaching as a challenge, can count on receiving help from colleagues, use more problem-focused coping strategies and social-support-seeking behavior, and are less emotionally exhausted than teachers in regular classes. This study highlights that teachers in specialized schools and classes have better adjustment, probably due to their training, experience, and tailored classroom conditions.

  10. Integrated Bayesian models of learning and decision making for saccadic eye movements.

    PubMed

    Brodersen, Kay H; Penny, Will D; Harrison, Lee M; Daunizeau, Jean; Ruff, Christian C; Duzel, Emrah; Friston, Karl J; Stephan, Klaas E

    2008-11-01

    The neurophysiology of eye movements has been studied extensively, and several computational models have been proposed for decision-making processes that underlie the generation of eye movements towards a visual stimulus in a situation of uncertainty. One class of models, known as linear rise-to-threshold models, provides an economical, yet broadly applicable, explanation for the observed variability in the latency between the onset of a peripheral visual target and the saccade towards it. So far, however, these models do not account for the dynamics of learning across a sequence of stimuli, and they do not apply to situations in which subjects are exposed to events with conditional probabilities. In this methodological paper, we extend the class of linear rise-to-threshold models to address these limitations. Specifically, we reformulate previous models in terms of a generative, hierarchical model, by combining two separate sub-models that account for the interplay between learning of target locations across trials and the decision-making process within trials. We derive a maximum-likelihood scheme for parameter estimation as well as model comparison on the basis of log likelihood ratios. The utility of the integrated model is demonstrated by applying it to empirical saccade data acquired from three healthy subjects. Model comparison is used (i) to show that eye movements do not only reflect marginal but also conditional probabilities of target locations, and (ii) to reveal subject-specific learning profiles over trials. These individual learning profiles are sufficiently distinct that test samples can be successfully mapped onto the correct subject by a naïve Bayes classifier. Altogether, our approach extends the class of linear rise-to-threshold models of saccadic decision making, overcomes some of their previous limitations, and enables statistical inference both about learning of target locations across trials and the decision-making process within trials.
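    The linear rise-to-threshold idea can be illustrated with the canonical LATER-style model: a decision signal rises from baseline at a normally distributed rate until it hits a fixed threshold, so latency is threshold divided by rate. This is a generic sketch of the base model class, not the hierarchical generative model developed in the paper; parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def later_latencies(mu_r, sigma_r, theta, n):
    """Simulate saccade latencies from a linear rise-to-threshold model:
    the decision signal rises at rate r ~ N(mu_r, sigma_r) until it
    reaches threshold theta; latency = theta / r (non-positive rates,
    i.e. trials where the signal never rises, are discarded)."""
    r = rng.normal(mu_r, sigma_r, n)
    r = r[r > 0]
    return theta / r
```

Because the median rate equals mu_r, the median latency is approximately theta / mu_r, and the reciprocal latencies are normally distributed, which is the signature usually checked on a reciprobit plot.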

  11. An information measure for class discrimination [in remote sensing of crop observation]

    NASA Technical Reports Server (NTRS)

    Shen, S. S.; Badhwar, G. D.

    1986-01-01

    This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.
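    The Fisher information for a mixing proportion has a standard closed form that does not assume any particular shape for the component densities, which is the distribution-free property the abstract emphasizes. For a two-component mixture (a textbook identity, not specific to the Landsat features used here):

```latex
f(x;\pi) = \pi f_1(x) + (1-\pi) f_2(x),
\qquad
I(\pi) = \int \frac{\bigl(f_1(x) - f_2(x)\bigr)^2}{f(x;\pi)}\,dx,
```

since the score is $\partial_\pi \log f = (f_1 - f_2)/f$. The Cramér-Rao bound then gives $\operatorname{Var}(\hat{\pi}) \ge 1/\bigl(n\,I(\pi)\bigr)$ for any unbiased estimator $\hat{\pi}$ of the mixing proportion from $n$ observations, which is the lower bound referred to in the abstract.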

  12. Consumer-directed health care and the disadvantaged.

    PubMed

    Bloche, M Gregg

    2007-01-01

    Broad adoption of "consumer-directed health care" would probably widen socioeconomic disparities in care and redistribute wealth in "reverse Robin Hood" fashion, from the working poor and middle classes to the well-off. Racial and ethnic disparities in care would also probably worsen. These effects could be alleviated by adjustments to the consumer-directed paradigm. Possible fixes include more progressive tax subsidies, tiering of cost-sharing schemes to promote high-value care, and reduced cost sharing for the less well-off. These fixes, though, are unlikely to gain traction. If consumer-directed plans achieve market dominance, disparities in care by class and race will probably grow.

  13. Site occupancy models with heterogeneous detection probabilities

    USGS Publications Warehouse

    Royle, J. Andrew

    2006-01-01

    Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
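    The zero-inflated binomial mixture likelihood takes an especially simple form when the mixing distribution for p is finite; a sketch under that assumption (the article treats general mixing distributions via integrated likelihood, and the function name and signature here are illustrative):

```python
import math
from math import comb

def zib_loglik(psi, p_vals, w_vals, y, J):
    """Integrated log-likelihood of a zero-inflated binomial mixture:
    a site is occupied with probability psi; given occupancy, the
    per-visit detection probability is drawn from a finite mixture
    (values p_vals with weights w_vals); y[i] is the number of
    detections at site i over J visits."""
    ll = 0.0
    for yi in y:
        # mixture-averaged binomial likelihood of the detection history
        mix = sum(w * comb(J, yi) * p ** yi * (1 - p) ** (J - yi)
                  for p, w in zip(p_vals, w_vals))
        # zero inflation: an unoccupied site yields y = 0 with certainty
        lik = psi * mix + (1 - psi) * (1.0 if yi == 0 else 0.0)
        ll += math.log(lik)
    return ll
```

Maximizing this over psi and the mixture parameters gives the integrated-likelihood inference described in the abstract; Link's identifiability problem shows up when different (p_vals, w_vals) choices yield nearly identical likelihoods but different occupancy estimates.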

  14. Does highly symptomatic class membership in the acute phase predict highly symptomatic classification in victims 6 months after traumatic exposure?

    PubMed

    Hansen, Maj; Hyland, Philip; Armour, Cherie

    2016-05-01

    Recent studies have indicated the existence of both posttraumatic stress disorder (PTSD) and acute stress disorder (ASD) subtypes, but no studies have investigated their mutual association. Although ASD may not be a precursor of PTSD per se, there are potential benefits associated with early identification of victims at risk of developing PTSD subtypes. The present study investigates ASD and PTSD subtypes using latent class analysis (LCA) following a bank robbery (N=371). Moreover, we assessed whether highly symptomatic ASD and selected risk factors increased the probability of highly symptomatic PTSD. The results of LCA revealed a three-class solution for ASD and a two-class solution for PTSD. Negative cognitions about self (OR=1.08), neuroticism (OR=1.09), and membership of the 'High symptomatic ASD' class (OR=20.41) significantly increased the probability of 'symptomatic PTSD' class membership. Future studies are needed to investigate the existence of ASD and PTSD subtypes and their mutual relationship. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Constructor theory of probability

    PubMed Central

    2016-01-01

    Unitary quantum theory, having no Born Rule, is non-probabilistic. Hence the notorious problem of reconciling it with the unpredictability and appearance of stochasticity in quantum measurements. Generalizing and improving upon the so-called ‘decision-theoretic approach’, I shall recast that problem in the recently proposed constructor theory of information—where quantum theory is represented as one of a class of superinformation theories, which are local, non-probabilistic theories conforming to certain constructor-theoretic conditions. I prove that the unpredictability of measurement outcomes (to which constructor theory gives an exact meaning) necessarily arises in superinformation theories. Then I explain how the appearance of stochasticity in (finitely many) repeated measurements can arise under superinformation theories. And I establish sufficient conditions for a superinformation theory to inform decisions (made under it) as if it were probabilistic, via a Deutsch–Wallace-type argument—thus defining a class of decision-supporting superinformation theories. This broadens the domain of applicability of that argument to cover constructor-theory compliant theories. In addition, in this version some of the argument's assumptions, previously construed as merely decision-theoretic, follow from physical properties expressed by constructor-theoretic principles. PMID:27616914

  16. A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.

    PubMed

    Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous

    2017-08-30

    While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes; 2) high discriminative capability - achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness; and 3) multimodal hierarchical fusion - achieved through the regularization of semantic segmentation to a multimodal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels) and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect), show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.

  17. Occurrence of organic wastewater compounds in drinking water, wastewater effluent, and the Big Sioux River in or near Sioux Falls, South Dakota, 2001-2004

    USGS Publications Warehouse

    Sando, Steven K.; Furlong, Edward T.; Gray, James L.; Meyer, Michael T.

    2006-01-01

    The U.S. Geological Survey (USGS) in cooperation with the city of Sioux Falls conducted several rounds of sampling to determine the occurrence of organic wastewater compounds (OWCs) in the city of Sioux Falls drinking water and waste-water effluent, and the Big Sioux River in or near Sioux Falls during August 2001 through May 2004. Water samples were collected during both base-flow and storm-runoff conditions. Water samples were collected at 8 sites, which included 4 sites upstream from the wastewater treatment plant (WWTP) discharge, 2 sites downstream from the WWTP discharge, 1 finished drinking-water site, and 1 WWTP effluent (WWE) site. A total of 125 different OWCs were analyzed for in this study using five different analytical methods. Analyses for OWCs were performed at USGS laboratories that are developing and/or refining small-concentration (less than 1 microgram per liter (ug/L)) analytical methods. The OWCs were classified into six compound classes: human pharmaceutical compounds (HPCs); human and veterinary antibiotic compounds (HVACs); major agricultural herbicides (MAHs); household, industrial,and minor agricultural compounds (HIACs); polyaromatic hydrocarbons (PAHs); and sterol compounds (SCs). Some of the compounds in the HPC, MAH, HIAC, and PAH classes are suspected of being endocrine-disrupting compounds (EDCs). Of the 125 different OWCs analyzed for in this study, 81 OWCs had one or more detections in environmental samples reported by the laboratories, and of those 81 OWCs, 63 had acceptable analytical method performance, were detected at concentrations greater than the study reporting levels, and were included in analyses and discussion related to occurrence of OWCs in drinking water, wastewater effluent, and the Big Sioux River. OWCs in all compound classes were detected in water samples from sampling sites in the Sioux Falls area. 
    For the five sampling periods when samples were collected from the Sioux Falls finished drinking water, only one OWC was detected at a concentration greater than the study reporting level (metolachlor; 0.0040 ug/L). During base-flow conditions, Big Sioux River sites upstream from the WWTP discharge had OWC contributions that primarily were from nonpoint animal or crop agriculture sources or had OWC concentrations that were minimal. The influence of the WWTP discharge on OWCs at downstream river sites during base-flow conditions ranged from minimal influence to substantial influence depending on the sampling period. During runoff conditions, OWCs at sites upstream from the WWTP discharge probably were primarily contributed by nonpoint animal and/or crop agriculture sources and possibly by stormwater runoff from nearby roads. OWCs at sites downstream from the WWTP discharge probably were contributed by sources other than the WWTP effluent discharge, such as stormwater runoff from urban and/or agriculture areas and/or resuspension of OWCs adsorbed to sediment deposited in the Big Sioux River. OWC loads generally were substantially smaller for upstream sites than downstream sites during both base-flow and runoff conditions.

  18. Delirium superimposed on dementia: defining disease states and course from longitudinal measurements of a multivariate index using latent class analysis and hidden Markov chains.

    PubMed

    Ciampi, Antonio; Dyachenko, Alina; Cole, Martin; McCusker, Jane

    2011-12-01

    The study of mental disorders in the elderly presents substantial challenges due to population heterogeneity, coexistence of different mental disorders, and diagnostic uncertainty. While reliable tools have been developed to collect relevant data, new approaches to study design and analysis are needed. We focus on a new analytic approach. Our framework is based on latent class analysis and hidden Markov chains. From repeated measurements of a multivariate disease index, we extract the notion of underlying state of a patient at a time point. The course of the disorder is then a sequence of transitions among states. States and transitions are not observable; however, the probability of being in a state at a time point, and the transition probabilities from one state to another over time can be estimated. Data from 444 patients with and without diagnosis of delirium and dementia were available from a previous study. The Delirium Index was measured at diagnosis, and at 2 and 6 months from diagnosis. Four latent classes were identified: fairly healthy, moderately ill, clearly sick, and very sick. Dementia and delirium could not be separated on the basis of these data alone. Indeed, as the probability of delirium increased, so did the probability of decline of mental functions. Eight most probable courses were identified, including good and poor stable courses, and courses exhibiting various patterns of improvement. Latent class analysis and hidden Markov chains offer a promising tool for studying mental disorders in the elderly. Its use may show its full potential as new data become available.
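    State-membership probabilities over time in a hidden Markov model of this kind are computed with the forward (filtering) recursion; a minimal sketch, where the per-state observation likelihoods are assumed to come from the fitted latent class model (the four classes above):

```python
import numpy as np

def forward_filter(pi0, A, obs_lik):
    """Forward algorithm: filtered probability of each latent state at
    each time point, given the initial distribution pi0, transition
    matrix A, and obs_lik[t, s] = P(observation at time t | state s)."""
    T, S = obs_lik.shape
    alpha = np.empty((T, S))
    alpha[0] = pi0 * obs_lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        # propagate one step through the chain, then weight by the evidence
        alpha[t] = (alpha[t - 1] @ A) * obs_lik[t]
        alpha[t] /= alpha[t].sum()
    return alpha
```

With three measurement occasions (diagnosis, 2 months, 6 months), the most probable courses reported in the abstract correspond to the highest-probability state sequences, which would be obtained with the closely related Viterbi recursion.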

  19. Skin Texture Recognition using Medical Diagnosis

    NASA Astrophysics Data System (ADS)

    Munshi, Anindita; Parekh, Ranjan

    2010-10-01

This paper proposes an automated system for recognizing disease conditions of human skin in the context of medical diagnosis. The disease conditions are recognized by analyzing skin texture images using a set of normalized symmetrical Grey Level Co-occurrence Matrices (GLCM). A GLCM defines the probability of grey level i occurring in the neighborhood of another grey level j at a distance d in direction θ. Directional GLCMs are computed along four directions: horizontal (θ = 0°), vertical (θ = 90°), right diagonal (θ = 45°) and left diagonal (θ = 135°), and a set of features, viz. Contrast, Homogeneity and Energy, computed from each are averaged to provide an estimation of the texture class. The system is tested using 225 images pertaining to three dermatological skin conditions, viz. dermatitis, eczema and urticaria. An accuracy of 94.81% is obtained using a multilayer perceptron (MLP) as the classifier.
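As a rough illustration of the GLCM features described, the following sketch computes a normalized symmetric co-occurrence matrix and the Contrast, Homogeneity and Energy features in plain NumPy. The 4×4 test image and the (dx, dy) offsets standing in for the four directions are illustrative assumptions, not the paper's data:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized symmetric grey-level co-occurrence matrix for one
    pixel offset (dx, dy)."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                M[img[y, x], img[y2, x2]] += 1
    M = M + M.T              # make symmetric
    return M / M.sum()       # normalize to joint probabilities

def features(P):
    """Contrast, Homogeneity and Energy of a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = float(np.sum(P * (i - j) ** 2))
    homogeneity = float(np.sum(P / (1.0 + (i - j) ** 2)))
    energy = float(np.sum(P ** 2))
    return contrast, homogeneity, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
# Illustrative (dx, dy) offsets for 0 deg, 45 deg, 90 deg and 135 deg:
offsets = [(1, 0), (1, -1), (0, 1), (1, 1)]
feats = np.mean([features(glcm(img, dx, dy, 4)) for dx, dy in offsets], axis=0)
```

Averaging the three features over the four directional matrices, as above, yields the per-image texture descriptor the paper feeds to the MLP.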

  20. Distribution of chirality in the quantum walk: Markov process and entanglement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanelli, Alejandro

The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a long-time limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk as it is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
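A minimal numerical check of the setup (not the paper's analytic derivation) is easy to run: simulate the discrete-time Hadamard walk, sum over position, and inspect the chirality (coin) distribution. Everything below, including the choice of initial coin state, is an illustrative assumption:

```python
import numpy as np

def hadamard_walk_chirality(steps, a, b):
    """Run a discrete-time Hadamard walk on the line starting at the
    origin with coin state (a, b); return the chirality distribution,
    i.e. the coin probabilities summed over all positions."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    psi[steps] = [a, b]
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                  # coin toss at every site
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]     # left component moves left
        shifted[1:, 1] = psi[:-1, 1]     # right component moves right
        psi = shifted
    prob = np.abs(psi) ** 2
    return prob.sum(axis=0)              # (P_left, P_right)

# Chirality distribution after many steps for one initial coin state:
pL, pR = hadamard_walk_chirality(200, 1 / np.sqrt(2), 1j / np.sqrt(2))
```

Rerunning with different initial coin states illustrates the paper's point that the long-time chirality distribution retains a dependence on the initial conditions, unlike a mixing Markov chain.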

  1. A translational velocity command system for VTOL low speed flight

    NASA Technical Reports Server (NTRS)

    Merrick, V. K.

    1982-01-01

    A translational velocity flight controller, suitable for very low speed maneuvering, is described and its application to a large class of VTOL aircraft from jet lift to propeller driven types is analyzed. Estimates for the more critical lateral axis lead to the conclusion that the controller would provide a jet lift (high disk loading) VTOL aircraft with satisfactory "hands off" station keeping in operational conditions more stringent than any specified in current or projected requirements. It also seems likely that ducted fan or propeller driven (low disk loading) VTOL aircraft would have acceptable hovering handling qualities even in high turbulence, although in these conditions pilot intervention to maintain satisfactory station keeping would probably be required for landing in restricted areas.

  2. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
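For the unrestricted (joint-measurement) benchmark against which the sequential strategies are compared, the minimum-error success probability is the Helstrom bound, P_opt = (1 + ||p0·ρ0 − p1·ρ1||_1)/2. A small sketch with hypothetical example states:

```python
import numpy as np

def helstrom(rho0, rho1, p0=0.5):
    """Helstrom bound: optimal success probability for minimum-error
    discrimination of rho0 (prior p0) vs rho1 (prior 1 - p0):
    P_opt = (1 + || p0*rho0 - (1-p0)*rho1 ||_1) / 2."""
    gamma = p0 * rho0 - (1.0 - p0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 + trace_norm)

# Hypothetical example: two equiprobable pure qubit states, overlap cos(theta)
theta = np.pi / 6
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
P_opt = helstrom(np.outer(psi0, psi0), np.outer(psi1, psi1))
# For pure states this reduces to (1 + sqrt(1 - |<psi0|psi1>|^2)) / 2
```

A sequential strategy succeeds at most with this probability; the paper's question is when it can match or approach it.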

  3. Active Learning? Not with My Syllabus!

    ERIC Educational Resources Information Center

    Ernst, Michael D.

    2012-01-01

    We describe an approach to teaching probability that minimizes the amount of class time spent on the topic while also providing a meaningful (dice-rolling) activity to get students engaged. The activity, which has a surprising outcome, illustrates the basic ideas of informal probability and how probability is used in statistical inference.…

  4. Organic priority substances and microbial processes in river sediments subject to contrasting hydrological conditions.

    PubMed

    Zoppini, Annamaria; Ademollo, Nicoletta; Amalfitano, Stefano; Casella, Patrizia; Patrolecco, Luisa; Polesello, Stefano

    2014-06-15

Flood and drought events of higher intensity and frequency are expected to increase in arid and semi-arid regions, in which temporary rivers represent both a water resource and an aquatic ecosystem to be preserved. In this study, we explored the variation of two classes of hazardous substances (Polycyclic Aromatic Hydrocarbons and Nonylphenols) and the functioning of the microbial community in river sediments subject to hydrological fluctuations (Candelaro river basin, Italy). Overall, the concentration of pollutants (∑PAHs range 8-275 ng g(-1); ∑NPs range 299-4858 ng g(-1)) suggests a moderate degree of contamination. The conditions in which the sediments were tested, flow (high/low) and no flow (wet/dry/arid), were associated with significant differences in the chemical and microbial properties. The total organic carbon contribution decreased together with the stream flow reduction, while the contribution of C-PAHs and C-NPs tended to increase. NPs were relatively more concentrated in sediments under high flow, while the more hydrophobic PAHs accumulated under low and no flow conditions. Passing from high to no flow conditions, a gradual reduction of microbial processes was observed, reaching the lowest specific bacterial carbon production rates (0.06 fmol C h(-1) cell(-1)), the lowest extracellular enzyme activities, and the highest doubling time (40 h) in arid sediments. In conclusion, different scenarios for the mobilization of pollutants and microbial processes can be identified under contrasting hydrological conditions: (i) mobilization of pollutants under high flow, with a relatively higher probability of biodegradation; (ii) accumulation of pollutants during low flow, with a lower probability of biodegradation; (iii) drastic reduction of pollutant concentrations under dry and arid conditions, probably independent of microbial activity (abiotic processes).
Our findings suggest that a multi-pronged approach should be considered for appropriate water resource exploitation and a more realistic prediction of the impact of pollutants in temporary waters. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Simple Estimators for the Simple Latent Class Mastery Testing Model. Twente Educational Memorandum No. 19.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    Latent class models for mastery testing differ from continuum models in that they do not postulate a latent mastery continuum but conceive mastery and non-mastery as two latent classes, each characterized by different probabilities of success. Several researchers use a simple latent class model that is basically a simultaneous application of the…

  6. Sexual Behavior Latent Classes Among Men Who Have Sex With Men: Associations With Sexually Transmitted Infections.

    PubMed

    Rice, Cara E; Norris Turner, Abigail; Lanza, Stephanie T

    2017-01-01

    Men who have sex with men (MSM) are at disproportionate risk of acquisition of sexually transmitted infections (STIs). We used latent class analysis (LCA) to examine patterns of sexual behavior among MSM and how those patterns are related to STIs. We examined patterns of sexual behavior using behavioral and clinical data from a cross-sectional study of 235 MSM who presented to an urban sexual health clinic for STI testing. Analyzed data were collected using a combination of interviewer- and self-administered surveys and electronic health records. We used LCA to identify underlying subgroups of men based on their sexual behavior, described the demographics of the latent classes, and examined the association between the latent classes and STI status. We identified three latent classes of sexual behavior: Unprotected Anal Intercourse (UAI) Only (67%), Partner Seekers (14%), and Multiple Behaviors (19%). Men in the Multiple Behaviors class had a 67% probability of being STI positive, followed by men in the UAI Only class (27%) and men in the Partner Seekers class (22%). Examining the intersection of a variety of sexual practices indicates particular subgroups of MSM have the highest probability of being STI positive.

  7. Probabilistic Open Set Recognition

    NASA Astrophysics Data System (ADS)

    Jain, Lalit Prithviraj

Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds that existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize that the cause is weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms.
Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
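The abating-probability idea can be conveyed with a deliberately simplified sketch: the membership score decays with distance from the nearest known positive sample, so inputs deep in open space can be rejected as unknown. The Gaussian decay used here is an illustrative stand-in for the dissertation's calibrated EVT/Weibull models:

```python
import math

def cap_score(x, exemplars, scale=1.0):
    """Compact-Abating-Probability-style membership score: decays with
    distance to the nearest known positive sample. The Gaussian decay is
    an illustrative stand-in for the EVT/Weibull calibration."""
    d = min(abs(x - e) for e in exemplars)
    return math.exp(-(d / scale) ** 2)

known = [0.0, 1.0, 2.0]        # hypothetical known-class training points
near = cap_score(1.1, known)   # close to known data -> score near 1
far = cap_score(8.0, known)    # deep in open space -> score near 0, reject
```

Thresholding such a score gives exactly the behavior CAP requires: probability of class membership abates toward zero as a point leaves the support of the known data.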

  8. Viral peptides-MHC interaction: Binding probability and distance from human peptides.

    PubMed

    Santoni, Daniele

    2018-05-23

Identification of peptides binding to the MHC class I complex can play a crucial role in retrieving potential targets able to trigger an immune response. Affinity binding of viral peptides can be estimated through effective computational methods that in most cases are based on a machine learning approach. Achieving a better insight into the peptide features that impact the affinity binding rate is a challenging issue. In the present work we focused on 9-mer peptides of Human immunodeficiency virus type 1 and Human herpes simplex virus 1, studying their binding to MHC class I. Viral 9-mers were partitioned into different classes, where each class is characterized by how far (in terms of mutation steps) the peptides belonging to that class are from human 9-mers. We showed that the overall binding probability significantly differs among classes, and it typically increases as the distance, computed in terms of number of mutation steps from the human set of 9-mers, increases. The binding probability is particularly high for viral 9-mers that are more than three mutation steps away from all human 9-mers. Further evidence, providing significance to those special viral peptides and suggesting a potential role they can play, comes from the analysis of their distribution along viral genomes: they are not randomly located, but preferentially occur in specific genes. Copyright © 2018 Elsevier B.V. All rights reserved.
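The distance-class construction can be sketched directly: the class of a viral 9-mer is its minimum Hamming (mutation-step) distance to any human 9-mer, capped at some maximum. The tiny peptide sets below are hypothetical, chosen only to exercise the computation:

```python
def hamming(a, b):
    """Number of mutation steps between two equal-length peptides."""
    return sum(x != y for x, y in zip(a, b))

def distance_class(peptide, human_set, max_steps=4):
    """Class of a viral 9-mer: its minimum Hamming distance to any
    human 9-mer, capped at max_steps."""
    return min(min(hamming(peptide, h) for h in human_set), max_steps)

human = {"SIINFEKLM", "AAAWWWKKK"}   # hypothetical human 9-mer set
viral = ["SIINFEKLV", "QQQQQQQQQ"]   # hypothetical viral 9-mers
classes = [distance_class(p, human) for p in viral]
# First peptide is one step from a human 9-mer; second is maximally distant.
```

The paper's observation is then that binding probability, estimated separately per class, tends to grow with this distance.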

  9. Patterns of Dating Violence Victimization and Perpetration among Latino Youth.

    PubMed

    Reyes, H Luz McNaughton; Foshee, Vangie A; Chen, May S; Ennett, Susan T

    2017-08-01

Theory and research suggest that there may be significant heterogeneity in the development, manifestation, and consequences of adolescent dating violence that is not yet well understood. The current study contributed to our understanding of this heterogeneity by identifying distinct patterns of involvement in psychological, physical, and sexual dating violence victimization and perpetration in a sample of Latino youth (n = 201; M = 13.87 years; 42% male), a group that is understudied, growing, and at high risk for involvement in dating violence. Among both boys and girls, latent class analyses identified a three-class solution wherein the largest class demonstrated a low probability of involvement in dating violence across all indices ("uninvolved"; 56% of boys, 64% of girls) and the smallest class demonstrated high probability of involvement in all forms of dating violence except for sexual perpetration among girls and physical perpetration among boys ("multiform aggressive victims"; 10% of boys, 11% of girls). A third class of "psychologically aggressive victims" was identified for which there was a high probability of both perpetrating and experiencing psychological dating violence, but low likelihood of involvement in physical or sexual dating violence (34% of boys, 24% of girls). Cultural (parent acculturation, acculturation conflict), family (conflict and cohesion) and individual (normative beliefs, conflict resolution skills, self-control) risk and protective factors were associated with class membership. Membership in the multiform vs. the uninvolved class was concurrently associated with emotional distress among girls and predicted emotional distress longitudinally among boys. The results contribute to understanding heterogeneity in patterns of involvement in dating violence among Latino youth that may reflect distinct etiological processes.

  10. Month-wise variation and prediction of bulk tank somatic cell count in Brazilian dairy herds and its impact on payment based on milk quality.

    PubMed

    Busanello, Marcos; de Freitas, Larissa Nazareth; Winckler, João Pedro Pereira; Farias, Hiron Pereira; Dos Santos Dias, Carlos Tadeu; Cassoli, Laerte Dagher; Machado, Paulo Fernando

    2017-01-01

    Payment programs based on milk quality (PPBMQ) are used in several countries around the world as an incentive to improve milk quality. One of the principal milk parameters used in such programs is the bulk tank somatic cell count (BTSCC). In this study, using data from an average of 37,000 farms per month in Brazil where milk was analyzed, BTSCC data were divided into different payment classes based on milk quality. Then, descriptive and graphical analyses were performed. The probability of a change to a worse payment class was calculated, future BTSCC values were predicted using time series models, and financial losses due to the failure to reach the maximum bonus for the payment based on milk quality were simulated. In Brazil, the mean BTSCC has remained high in recent years, without a tendency to improve. The probability of changing to a worse payment class was strongly affected by both the BTSCC average and BTSCC standard deviation for classes 1 and 2 (1000-200,000 and 201,000-400,000 cells/mL, respectively) and only by the BTSCC average for classes 3 and 4 (401,000-500,000 and 501,000-800,000 cells/mL, respectively). The time series models indicated that at some point in the year, farms would not remain in their current class and would accrue financial losses due to payments based on milk quality. The BTSCC for Brazilian dairy farms has not recently improved. The probability of a class change to a worse class is a metric that can aid in decision-making and stimulate farmers to improve milk quality. A time series model can be used to predict the future value of the BTSCC, making it possible to estimate financial losses and to show, moreover, that financial losses occur in all classes of the PPBMQ because the farmers do not remain in the best payment class in all months.
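As a toy version of the class-change calculation, one can approximate next month's BTSCC as normally distributed around a farm's mean and ask for the probability of exceeding the current class's upper limit; the abstract's observation that both the average and the standard deviation matter falls out immediately. The distributional assumption and the numbers are illustrative, not the study's models:

```python
import math

def prob_worse_class(mean, sd, upper_limit):
    """P(next month's BTSCC exceeds the current payment class's upper
    limit), under a toy normal approximation of month-to-month BTSCC."""
    z = (upper_limit - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # = 1 - Phi(z)

# Two hypothetical class-1 farms (upper limit 200,000 cells/mL), same mean:
p_stable = prob_worse_class(180_000, 10_000, 200_000)
p_variable = prob_worse_class(180_000, 40_000, 200_000)
# The more variable farm is far more likely to drop to a worse class.
```

With the mean well below the limit, risk is driven by the standard deviation, matching the finding for classes 1 and 2; as the mean approaches the limit, it dominates instead.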

  11. Learning Problem-Solving Rules as Search Through a Hypothesis Space.

    PubMed

    Lee, Hee Seung; Betts, Shawn; Anderson, John R

    2016-07-01

    Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem property such as computational difficulty of the rules biased the search process and so affected learning. Experiment 2 examined the impact of examples as instructional tools and found that their effectiveness was determined by whether they uniquely pointed to the correct rule. Experiment 3 compared verbal directions with examples and found that both could guide search. The final experiment tried to improve learning by using more explicit verbal directions or by adding scaffolding to the example. While both manipulations improved learning, learning still took the form of a search through a hypothesis space of possible rules. We describe a model that embodies two assumptions: (1) the instruction can bias the rules participants hypothesize rather than directly be encoded into a rule; (2) participants do not have memory for past wrong hypotheses and are likely to retry them. These assumptions are realized in a Markov model that fits all the data by estimating two sets of probabilities. First, the learning condition induced one set of Start probabilities of trying various rules. Second, should this first hypothesis prove wrong, the learning condition induced a second set of Choice probabilities of considering various rules. These findings broaden our understanding of effective instruction and provide implications for instructional design. Copyright © 2015 Cognitive Science Society, Inc.
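The two-parameter-set Markov model described can be sketched as a simulation: a first hypothesis drawn from the Start probabilities, then memoryless redraws from the Choice probabilities until the correct rule is found. The rule names and probabilities below are hypothetical, not the fitted values:

```python
import random

def search_episode(start_probs, choice_probs, correct, rng):
    """One learner: first hypothesis drawn from Start probabilities;
    each wrong hypothesis is followed by a memoryless draw from Choice
    probabilities (so past wrong rules can be retried)."""
    rules = list(start_probs)
    rule = rng.choices(rules, weights=[start_probs[r] for r in rules])[0]
    trials = 1
    while rule != correct:
        rule = rng.choices(rules, weights=[choice_probs[r] for r in rules])[0]
        trials += 1
    return trials

rng = random.Random(0)
start = {"rule_A": 0.6, "rule_B": 0.3, "rule_C": 0.1}   # biased by instruction
choice = {"rule_A": 0.4, "rule_B": 0.4, "rule_C": 0.2}
mean_trials = sum(search_episode(start, choice, "rule_C", rng)
                  for _ in range(2000)) / 2000
# Expected trials: 0.1*1 + 0.9*(1 + 1/0.2) = 5.5
```

Changing the Start weights models instruction that biases the initial hypothesis; the Choice weights govern search after the first failure, as in the paper's fitted model.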

  12. Teaching Basic Probability in Undergraduate Statistics or Management Science Courses

    ERIC Educational Resources Information Center

    Naidu, Jaideep T.; Sanford, John F.

    2017-01-01

    Standard textbooks in core Statistics and Management Science classes present various examples to introduce basic probability concepts to undergraduate business students. These include tossing of a coin, throwing a die, and examples of that nature. While these are good examples to introduce basic probability, we use improvised versions of Russian…

  13. Some Classes of Imperfect Information Finite State-Space Stochastic Games with Finite-Dimensional Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEneaney, William M.

    2004-08-15

Stochastic games under imperfect information are typically computationally intractable even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the problem form here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional, and obtained via dynamic programming, but has a nonstandard form due to the necessity of an expanded state variable. Under a saddle point assumption, Certainty Equivalence is obtained and the proposed function is indeed an information state.

  14. A discriminative test among the different theories proposed to explain the origin of the genetic code: the coevolution theory finds additional support.

    PubMed

    Giulio, Massimo Di

    2018-05-19

A discriminative statistical test among the different theories proposed to explain the origin of the genetic code is presented. The amino acids are gathered into polarity classes, the primary expression of the physicochemical theory of the origin of the genetic code, and into biosynthetic classes, the primary expression of the coevolution theory, and these classes are evaluated with Fisher's exact test to establish their significance within the genetic code table. By linking to the rows and columns of the genetic code the probabilities that express the statistical significance of these classes, it was finally possible to calculate a χ value for both the physicochemical theory and the coevolution theory, expressing the level of corroboration of each. The comparison between these two χ values showed that, in this strictly empirical analysis, the coevolution theory explains the origin of the genetic code better than the physicochemical theory does. Copyright © 2018 Elsevier B.V. All rights reserved.
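Fisher's exact test on a 2×2 contingency table (class membership vs. position in the code table) reduces to a hypergeometric tail sum, computable with nothing beyond `math.comb`. The table below is a hypothetical example, not the paper's data:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    hypergeometric probability of a top-left count >= a under the null
    of no association."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Hypothetical table: codons of one biosynthetic class inside vs. outside
# a given column of the genetic code table.
p = fisher_one_sided(8, 2, 1, 9)   # strong apparent association
```

A small p-value indicates the class clusters in that column more than chance allows; aggregating such probabilities over rows and columns is the step the abstract describes before computing the χ values.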

  15. The oilspill risk analysis model of the U. S. Geological Survey

    USGS Publications Warehouse

    Smith, R.A.; Slack, J.R.; Wyant, Timothy; Lanfear, K.J.

    1982-01-01

The U.S. Geological Survey has developed an oilspill risk analysis model to aid in estimating the environmental hazards of developing oil resources in Outer Continental Shelf (OCS) lease areas. The large, computerized model analyzes the probability of spill occurrence, as well as the likely paths or trajectories of spills in relation to the locations of recreational and biological resources which may be vulnerable. The analytical methodology can easily incorporate estimates of weathering rates, slick dispersion, and possible mitigating effects of cleanup. The probability of spill occurrence is estimated from information on the anticipated level of oil production and method and route of transport. Spill movement is modeled in Monte Carlo fashion with a sample of 500 spills per season, each transported by monthly surface current vectors and wind velocities sampled from 3-hour wind transition matrices. Transition matrices are based on historic wind records grouped in 41 wind velocity classes, and are constructed seasonally for up to six wind stations. Locations and monthly vulnerabilities of up to 31 categories of environmental resources are digitized within an 800,000 square kilometer study area. Model output includes tables of conditional impact probabilities (that is, the probability of hitting a target, given that a spill has occurred), as well as probability distributions for oilspills occurring and contacting environmental resources within preselected vulnerability time horizons. (USGS)
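The Monte Carlo structure of the model (a sample of spills per season, each transported step by step, with impact probabilities reported conditional on a spill occurring) can be caricatured in one dimension. This is a toy analogue with made-up parameters, not the USGS trajectory model:

```python
import random

def conditional_impact_probability(n_spills, step_sd, target, rng):
    """Monte Carlo estimate of P(hit target | spill occurred): each spill
    drifts through twelve monthly displacement steps; entering the target
    interval at any step counts as a hit. Toy 1-D analogue of the model."""
    lo, hi = target
    hits = 0
    for _ in range(n_spills):
        x = 0.0
        for _ in range(12):              # twelve monthly transport steps
            x += rng.gauss(0.0, step_sd)
            if lo <= x <= hi:
                hits += 1
                break
    return hits / n_spills

rng = random.Random(42)
p_hit = conditional_impact_probability(500, 5.0, (10.0, 12.0), rng)
```

Multiplying such a conditional probability by the estimated probability of spill occurrence gives the unconditional risk the model reports for each resource category.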

  16. The oilspill risk analysis model of the U. S. Geological Survey

    USGS Publications Warehouse

    Smith, R.A.; Slack, J.R.; Wyant, T.; Lanfear, K.J.

    1980-01-01

The U.S. Geological Survey has developed an oilspill risk analysis model to aid in estimating the environmental hazards of developing oil resources in Outer Continental Shelf (OCS) lease areas. The large, computerized model analyzes the probability of spill occurrence, as well as the likely paths or trajectories of spills in relation to the locations of recreational and biological resources which may be vulnerable. The analytical methodology can easily incorporate estimates of weathering rates, slick dispersion, and possible mitigating effects of cleanup. The probability of spill occurrence is estimated from information on the anticipated level of oil production and method and route of transport. Spill movement is modeled in Monte Carlo fashion with a sample of 500 spills per season, each transported by monthly surface current vectors and wind velocities sampled from 3-hour wind transition matrices. Transition matrices are based on historic wind records grouped in 41 wind velocity classes, and are constructed seasonally for up to six wind stations. Locations and monthly vulnerabilities of up to 31 categories of environmental resources are digitized within an 800,000 square kilometer study area. Model output includes tables of conditional impact probabilities (that is, the probability of hitting a target, given that a spill has occurred), as well as probability distributions for oilspills occurring and contacting environmental resources within preselected vulnerability time horizons. (USGS)

  17. Using a remote sensing/GIS model to predict southwestern Willow Flycatcher breeding habitat along the Rio Grande, New Mexico

    USGS Publications Warehouse

    Hatten, James R.; Sogge, Mark K.

    2007-01-01

    Introduction The Southwestern Willow Flycatcher (Empidonax traillii extimus; hereafter SWFL) is a federally endangered bird (USFWS 1995) that breeds in riparian areas in portions of New Mexico, Arizona, southwestern Colorado, extreme southern Utah and Nevada, and southern California (USFWS 2002). Across this range, it uses a variety of plant species as nesting/breeding habitat, but in all cases prefers sites with dense vegetation, high canopy, and proximity to surface water or saturated soils (Sogge and Marshall 2000). As of 2005, the known rangewide breeding population of SWFLs was roughly 1,214 territories, with approximately 393 territories distributed among 36 sites in New Mexico (Durst et al. 2006), primarily along the Rio Grande. One of the key challenges facing the management and conservation of the Southwestern Willow Flycatcher is that riparian areas are dynamic, with individual habitat patches subject to cycles of creation, growth, and loss due to drought, flooding, fire, and other disturbances. Former breeding patches can lose suitability, and new habitat can develop within a matter of only a few years, especially in reservoir drawdown zones. Therefore, measuring and predicting flycatcher habitat - either to discover areas that might support SWFLs, or to identify areas that may develop into appropriate habitat - requires knowledge of recent/current habitat conditions and an understanding of the factors that determine flycatcher use of riparian breeding sites. In the past, much of the determination of whether a riparian site is likely to support breeding flycatchers has been based on qualitative criteria (for example, 'dense vegetation' or 'large patches'). These determinations often require on-the-ground field evaluations by local or regional SWFL experts. 
While this has proven valuable in locating many of the currently known breeding sites, it is difficult or impossible to apply this approach effectively over large geographic areas (for example, the middle Rio Grande). The SWFL Recovery Plan (USFWS 2002) recognizes the importance of developing new approaches to habitat identification, and recommends the development of drainage-scale, quantitative habitat models. In particular, the plan suggests using models based on remote sensing and Geographic Information System (GIS) technology that can capture the relatively dynamic habitat changes that occur in southwestern riparian systems. In 1999, Arizona Game and Fish Department (AGFD) developed a GIS-based model (Hatten and Paradzick 2003) to identify SWFL breeding habitat from Landsat Thematic Mapper imagery and 30-m resolution digital elevation models (DEMs). The model was developed with presence/absence survey data acquired along the San Pedro and Gila rivers, and from the Salt River and Tonto Creek inlets to Roosevelt Lake in southern Arizona (collectively called the project area). The GIS-based model used a logistic regression equation to divide riparian vegetation into 5 probability classes based upon characteristics of riparian vegetation and floodplain size. This model was tested by predicting SWFL breeding habitat at Alamo Lake, Arizona, located 200 km from the project area (Hatten and Paradzick 2003). The GIS-based model performed as expected by identifying riparian areas with the highest SWFL nest densities, located in the higher probability classes. In 2002, AGFD applied the GIS-based model throughout Arizona, for riparian areas below 1,524 m (5,000 ft) elevation and within 1.6 km of perennial or intermittent waters (Dockens et al. 2004). 
Overall model accuracy (using probability classes 1-5, with class 5 having the greatest probability of nesting activity) for predicting the location of 2001 nest sites was 96.5 percent; accuracy decreased when fewer probability classes were defined as suitable. Map accuracy, determined from errors of commission, increased in higher probability classes in a fashion similar to errors of omission. Map accuracy, li
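The logistic-regression-to-probability-class pipeline described can be sketched generically: a logistic model maps riparian predictors to a habitat probability, which is then binned into 5 classes. The coefficients, predictors, and binning rule below are hypothetical placeholders, not the AGFD model:

```python
import math

def habitat_probability(intercept, coeffs, predictors):
    """Logistic model mapping riparian predictors to a breeding-habitat
    probability (all coefficients here are hypothetical)."""
    z = intercept + sum(b * x for b, x in zip(coeffs, predictors))
    return 1.0 / (1.0 + math.exp(-z))

def probability_class(p):
    """Bin a probability into classes 1-5 (class 5 = most likely habitat)."""
    return min(int(p * 5) + 1, 5)

# Hypothetical predictors: vegetation density (%) and floodplain width (m)
p = habitat_probability(-4.0, [0.05, 0.02], [60.0, 40.0])
cls = probability_class(p)
```

Validation then checks whether observed nests concentrate in the higher classes, which is how the Alamo Lake and statewide tests above were scored.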

  18. Automatic mapping of event landslides at basin scale in Taiwan using a Montecarlo approach and synthetic land cover fingerprints

    NASA Astrophysics Data System (ADS)

    Mondini, Alessandro C.; Chang, Kang-Tsung; Chiang, Shou-Hao; Schlögel, Romy; Notarnicola, Claudia; Saito, Hitoshi

    2017-12-01

We propose a framework to systematically generate event landslide inventory maps from satellite images in southern Taiwan, where landslides are frequent and abundant. The spectral information is used to assess the pixel land cover class membership probability through a Maximum Likelihood classifier trained with randomly generated synthetic land cover spectral fingerprints, which are obtained from an independent training images dataset. Pixels are classified as landslides when the calculated landslide class membership probability, weighted by a susceptibility model, is higher than the membership probabilities of other classes. We generated synthetic fingerprints from two FORMOSAT-2 images acquired in 2009 and tested the procedure on two other images, one from 2005 and the other from 2009. We also obtained two landslide maps through manual interpretation. The agreement between the two sets of inventories is given by Cohen's kappa coefficients of 0.62 and 0.64, respectively. This procedure can now classify a new FORMOSAT-2 image automatically, facilitating the production of landslide inventory maps.
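The weighted maximum-likelihood rule can be sketched with a single spectral band and Gaussian synthetic fingerprints: each class's likelihood is evaluated at the pixel value, the landslide likelihood is multiplied by a susceptibility prior, and the pixel takes the most probable label. All parameter values below are invented for illustration:

```python
import math

def class_posterior(pixel, fingerprints, susceptibility):
    """Maximum-likelihood land-cover membership for one spectral value,
    with the landslide likelihood weighted by a susceptibility prior.
    The Gaussian 'synthetic fingerprints' (mean, sd) are hypothetical."""
    scores = {}
    for cls, (mu, sd) in fingerprints.items():
        like = (math.exp(-0.5 * ((pixel - mu) / sd) ** 2)
                / (sd * math.sqrt(2 * math.pi)))
        if cls == "landslide":
            like *= susceptibility     # susceptibility-weighted membership
        scores[cls] = like
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}

fingerprints = {"landslide": (0.30, 0.05),
                "forest": (0.10, 0.05),
                "water": (0.02, 0.02)}
post = class_posterior(0.27, fingerprints, susceptibility=0.8)
label = max(post, key=post.get)
```

The susceptibility weight acts as a spatial prior: in low-susceptibility terrain it suppresses spurious landslide labels even for landslide-like spectra.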

  19. Tobacco class I cytosolic small heat shock proteins are under transcriptional and translational regulations in expression and heterocomplex prevails under the high-temperature stress condition in vitro.

    PubMed

    Park, Soo Min; Kim, Keun Pill; Joe, Myung Kuk; Lee, Mi Ok; Koo, Hyun Jo; Hong, Choo Bong

    2015-04-01

    Seven genomic clones of tobacco (Nicotiana tabacum W38) cytosolic class I small heat shock proteins (sHSPs), probably representing all members in the class, were isolated and found to have 66 to 92% homology between their nucleotide sequences. Even though all seven sHSP genes showed heat shock-responsive accumulation of their transcripts and proteins, each member showed discrepancies in abundance and timing of expression upon high-temperature stress. This was mainly the result of transcriptional regulation during mild stress conditions and transcriptional and translational regulation during strong stress conditions. Open reading frames (ORFs) of these genomic clones were expressed in Escherichia coli and the sHSPs were purified from E. coli. The purified tobacco sHSPs rendered citrate synthase and luciferase soluble under high temperatures. At room temperature, non-denaturing pore exclusion polyacrylamide gel electrophoresis on three sHSPs demonstrated that the sHSPs spontaneously formed homo-oligomeric complexes of 200 ∼ 240 kDa. However, under elevated temperatures, hetero-oligomeric complexes between the sHSPs gradually prevailed. Atomic force microscopy showed that the hetero-oligomer of NtHSP18.2/NtHSP18.3 formed a stable oligomeric particle similar to that of the NtHSP18.2 homo-oligomer. These hetero-oligomers positively influenced the revival of thermally inactivated luciferase. Amino acid residues mainly in the N-terminus are suggested for the exchange of the component sHSPs and the formation of dominant hetero-oligomers under high temperatures. © 2014 John Wiley & Sons Ltd.

  20. Impact of socioeconomic status and subjective social class on overall and health-related quality of life.

    PubMed

    Kim, Jae-Hyun; Park, Eun-Cheol

    2015-08-15

    Our objective was to investigate the impact of socioeconomic status and subjective social class on health-related quality of life (HRQOL) vs. overall quality of life (QOL). We performed a longitudinal analysis using data regarding 8250 individuals drawn from the Korean Longitudinal Study of Aging (KLoSA). We analyzed differences between HRQOL and QOL in individuals of various socioeconomic strata (high, middle, or low household income and education levels) and subjective social classes (high, middle, or low) at baseline (2009). Individuals with low household incomes and of low subjective social class had the highest probability of reporting discrepant HRQOL and QOL scores (B: 4.796; P < 0.0001), whereas individuals with high household incomes and high subjective social class had the lowest probability of discrepant HRQOL and QOL scores (B: -3.625; P = 0.000). Similar trends were seen when education was used as a proxy for socioeconomic status. In conclusion, both household income/subjective social class and education/subjective social class were found to have an impact on the degree of divergence between QOL and HRQOL. Therefore, in designing interventions, socioeconomic inequalities should be taken into account through the use of multi-dimensional measurement tools.

  1. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
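The U-shaped occupation-time statistics can be reproduced with a minimal two-site CTRW simulation. Assumptions not in the abstract: Pareto-distributed waiting times with index alpha < 1 (so the mean diverges) and an unbiased hop between two lattice sites:

```python
import random

def occupation_fraction(alpha, total_time, rng):
    """Fraction of time a two-site CTRW spends on site 0.

    Waiting times are Pareto-distributed with tail index alpha; for
    alpha < 1 the mean waiting time diverges and ergodicity breaks.
    """
    t, site, time_on_0 = 0.0, 0, 0.0
    while t < total_time:
        u = 1.0 - rng.random()              # uniform on (0, 1]
        wait = u ** (-1.0 / alpha)          # Pareto(alpha) sample, support [1, inf)
        stay = min(wait, total_time - t)
        if site == 0:
            time_on_0 += stay
        t += stay
        site = 1 - site                     # unbiased hop to the other site
    return time_on_0 / total_time

rng = random.Random(1)
fracs = [occupation_fraction(alpha=0.5, total_time=10_000, rng=rng) for _ in range(2000)]
# In the non-ergodic phase most trajectories spend nearly all their time on one site,
# giving the bimodal U shape of the arcsine law:
extreme = sum(1 for f in fracs if f < 0.1 or f > 0.9) / len(fracs)
middle = sum(1 for f in fracs if 0.4 < f < 0.6) / len(fracs)
print(extreme > middle)  # -> True
```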

  2. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
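A minimal two-class sketch of the procedure, with hand-rolled 2-D linear algebra and hypothetical data. Note this recomputes the Fisher direction for every held-out sample, whereas the paper's contribution is precisely the efficient expressions that avoid such recomputation:

```python
def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def fisher_direction(class_a, class_b):
    """w = Sw^{-1} (m_a - m_b) for 2-D samples, with a hand-rolled 2x2 inverse."""
    ma, mb = mean(class_a), mean(class_b)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for data, m in ((class_a, ma), (class_b, mb)):
        for v in data:
            d = [v[0] - m[0], v[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [(s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
            (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det]

def loo_error(class_a, class_b):
    """Leave-one-out error of a midpoint-threshold rule on the Fisher projection."""
    labelled = [(v, 0) for v in class_a] + [(v, 1) for v in class_b]
    errors = 0
    for k, (v, label) in enumerate(labelled):
        rest = labelled[:k] + labelled[k + 1:]
        rest_a = [u for (u, l) in rest if l == 0]
        rest_b = [u for (u, l) in rest if l == 1]
        w = fisher_direction(rest_a, rest_b)
        proj = lambda u: w[0] * u[0] + w[1] * u[1]
        thr = 0.5 * (proj(mean(rest_a)) + proj(mean(rest_b)))
        predicted = 0 if proj(v) > thr else 1
        errors += predicted != label
    return errors / len(labelled)

# Hypothetical, well-separated classes
a = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0), (0.1, 0.2)]
b = [(2.0, 2.1), (1.9, 1.8), (2.2, 2.0), (1.8, 2.2)]
print(loo_error(a, b))  # -> 0.0
```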

  3. Identifying desertification risk areas using fuzzy membership and geospatial technique - A case study, Kota District, Rajasthan

    NASA Astrophysics Data System (ADS)

    Dasgupta, Arunima; Sastry, K. L. N.; Dhinwa, P. S.; Rathore, V. S.; Nathawat, M. S.

    2013-08-01

Desertification risk assessment is important in order to take proper measures for its prevention. The present research intends to identify the areas under risk of desertification along with their severity in terms of degradation in natural parameters. An integrated model with fuzzy membership analysis, a fuzzy rule-based inference system and geospatial techniques was adopted, including five specific natural parameters, namely slope, soil pH, soil depth, soil texture and NDVI. Individual parameters were classified according to their deviation from the mean. The membership of each individual value in a certain class was derived using the normal probability density function of that class. Thus, if a single class of a single parameter has mean μ and standard deviation σ, the values falling beyond μ + 2σ and μ - 2σ do not represent that class, but a transitional zone between two subsequent classes. These are the most important areas in terms of degradation, as they have the lowest probability of belonging to a certain class, and hence the highest probability of being extended into the next class or narrowed into the previous one. These are also the values that can be most easily altered under exogenic influences, and hence are identified as risk areas. The overall desertification risk is derived by incorporating the different risk severity of each parameter using a fuzzy rule-based inference system in a GIS environment. Multicriteria-based geo-statistics are applied to locate the areas under different severity of desertification risk. The study revealed that in Kota, various anthropogenic pressures are accelerating land deterioration, coupled with natural erosive forces. The four major sources of desertification in Kota are gully and ravine erosion, inappropriate mining practices, growing urbanization and random deforestation.
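The membership and μ ± 2σ risk-zone rule described above reduces to a few lines; the NDVI class mean and standard deviation below are hypothetical:

```python
import math

def membership(x, mu, sigma):
    """Membership of value x in a class modelled as N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def risk_zone(x, mu, sigma):
    """True when x lies beyond mu +/- 2*sigma: a transition zone between classes."""
    return abs(x - mu) > 2 * sigma

# Hypothetical NDVI class with mean 0.45 and standard deviation 0.05
print(round(membership(0.45, 0.45, 0.05), 3))  # peak membership at the class mean
print(risk_zone(0.57, 0.45, 0.05))             # -> True: beyond mu + 2*sigma, a risk area
print(risk_zone(0.50, 0.45, 0.05))             # -> False: well inside the class
```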

  4. Study of recreational land and open space using Skylab imagery

    NASA Technical Reports Server (NTRS)

    Sattinger, I. J. (Principal Investigator)

    1975-01-01

The author has identified the following significant results. An analysis of the statistical uniqueness of each of the signatures of the Gratiot-Saginaw State Game Area was made by computing a matrix of probabilities of misclassification for all possible signature pairs. Within each data set, the 35 signatures were then aggregated into a smaller set of composite signatures by combining groups of signatures having high probabilities of misclassification. Computer separation of forest density classes was poor with multispectral scanner data collected on 5 August 1973. Signatures from the scanner data were further analyzed to determine the ranking of spectral channels for computer separation of the scene classes. Probabilities of misclassification were computed for composite signatures using four separate combinations of data source and channel selection.

  5. Bayesian learning

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    In 1983 and 1984, the Infrared Astronomical Satellite (IRAS) detected 5,425 stellar objects and measured their infrared spectra. In 1987 a program called AUTOCLASS used Bayesian inference methods to discover the classes present in these data and determine the most probable class of each object, revealing unknown phenomena in astronomy. AUTOCLASS has rekindled the old debate on the suitability of Bayesian methods, which are computationally intensive, interpret probabilities as plausibility measures rather than frequencies, and appear to depend on a subjective assessment of the probability of a hypothesis before the data were collected. Modern statistical methods have, however, recently been shown to also depend on subjective elements. These debates bring into question the whole tradition of scientific objectivity and offer scientists a new way to take responsibility for their findings and conclusions.

  6. Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps

    NASA Astrophysics Data System (ADS)

    Zlinszky, A.; Kania, A.

    2016-06-01

    Instead of assigning every map pixel to a single class, fuzzy classification includes information on the class assigned to each pixel but also the certainty of this class and the alternative possible classes based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles" visualizing the number of pixels in bins according to the probability of the dominant class, also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard boundary evaluation procedures, support active learning-based iterative classification and can be applied for operational use.
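A "dominance profile" as described above can be sketched by binning pixels on their dominant-class probability while tracking the mean probability of every class per bin. The pixel data and class names below are hypothetical:

```python
def dominance_profile(pixels, n_bins=10):
    """Bin pixels by their dominant-class probability.

    pixels: list of dicts {class_name: probability} from an ensemble classifier.
    Returns, per non-empty bin, the pixel count and the mean probability of every class.
    """
    bins = [{"count": 0, "sums": {}} for _ in range(n_bins)]
    for probs in pixels:
        p_max = max(probs.values())
        idx = min(int(p_max * n_bins), n_bins - 1)
        bins[idx]["count"] += 1
        for cls, p in probs.items():
            bins[idx]["sums"][cls] = bins[idx]["sums"].get(cls, 0.0) + p
    return [
        {"count": b["count"],
         "means": {c: s / b["count"] for c, s in b["sums"].items()}}
        for b in bins if b["count"]
    ]

# Three hypothetical pixels with soft class memberships
pixels = [
    {"grass": 0.9, "shrub": 0.1},
    {"grass": 0.55, "shrub": 0.45},
    {"shrub": 0.95, "grass": 0.05},
]
for row in dominance_profile(pixels):
    print(row)
```

The low-dominance bins expose exactly the pixels whose class assignment is uncertain, which is the spatially explicit certainty information a single hard-boundary accuracy figure hides.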

  7. Event probabilities and impact zones for hazardous materials accidents on railroads

    DOT National Transportation Integrated Search

    1983-11-01

    Procedures are presented for evaluating the probability and impacts of hazardous material accidents in rail transportation. The significance of track class for accident frequencies and of train speed for accident severity is quantified. Special atten...

  8. Combined risk assessment of nonstationary monthly water quality based on Markov chain and time-varying copula.

    PubMed

    Shi, Wei; Xia, Jun

    2017-02-01

Water quality risk management is a global research focus closely linked with sustainable water resource development. Ammonium nitrogen (NH3-N) and the permanganate index (CODMn), the focus indicators in the Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. A time-varying moments model with either time or a land cover index as the explanatory variable is applied to build the time-varying marginal distributions of the water quality time series. A time-varying copula model, which takes into consideration the non-stationarity in the marginal distributions and/or the time variation in the dependence structure between the water quality series, is constructed to describe a bivariate frequency analysis for the NH3-N and CODMn series at the same monitoring gauge. The larger first-order Markov joint transition probability indicates that water quality states Class Vw, Class IV and Class III will occur easily in the water body at Bengbu Sluice. Both the marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than the land cover index in describing the non-stationarities in the marginal distributions. In modelling the dependence structure changes, the time-varying copula has a better fitting performance than the copula with a constant or time-trend dependence parameter. The largest synchronous encounter risk probability of NH3-N and CODMn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH3-N and CODMn are inferior to Class V and Class IV water quality standards, respectively.
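Estimating first-order Markov transition probabilities from an observed sequence of water-quality classes is a simple counting exercise; the monthly class sequence below is hypothetical:

```python
from collections import Counter

def transition_probabilities(states):
    """First-order Markov transition probabilities from an observed class sequence."""
    pair_counts = Counter(zip(states, states[1:]))
    from_counts = Counter(states[:-1])
    return {pair: n / from_counts[pair[0]] for pair, n in pair_counts.items()}

# Hypothetical monthly water-quality classes at one gauge
monthly = ["III", "IV", "IV", "V", "V", "V", "IV", "III", "IV", "V"]
probs = transition_probabilities(monthly)
print(probs[("V", "V")])   # probability of staying in Class V
print(probs[("IV", "V")])  # probability of degrading from Class IV to Class V
```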

  9. Evaluation of performance of bacterial culture of feces and serum ELISA across stages of Johne's disease in cattle using a Bayesian latent class model.

    PubMed

    Espejo, L A; Zagmutt, F J; Groenendaal, H; Muñoz-Zanzi, C; Wells, S J

    2015-11-01

    The objective of this study was to evaluate the performance of bacterial culture of feces and serum ELISA to correctly identify cows with Mycobacterium avium ssp. paratuberculosis (MAP) at heavy, light, and non-fecal-shedding levels. A total of 29,785 parallel test results from bacterial culture of feces and serum ELISA were collected from 17 dairy herds in Minnesota, Pennsylvania, and Colorado. Samples were obtained from adult cows from dairy herds enrolled for up to 10 yr in the National Johne's Disease Demonstration Herd Project. A Bayesian latent class model was fitted to estimate the probabilities that bacterial culture of feces (using 72-h sedimentation or 30-min centrifugation methods) and serum ELISA results correctly identified cows as high positive, low positive, or negative given that cows were heavy, light, and non-shedders, respectively. The model assumed that no gold standard test was available and conditional independency existed between diagnostic tests. The estimated conditional probabilities that bacterial culture of feces correctly identified heavy shedders, light shedders, and non-shedders were 70.9, 32.0, and 98.5%, respectively. The same values for the serum ELISA were 60.6, 18.7, and 99.5%, respectively. Differences in diagnostic test performance were observed among states. These results improve the interpretation of results from bacterial culture of feces and serum ELISA for detection of MAP and MAP antibody (respectively), which can support on-farm infection control decisions and can be used to evaluate disease-testing strategies, taking into account the accuracy of these tests. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Obesity and glycemic control in patients with diabetes mellitus: Analysis of physician electronic health records in the US from 2009-2011.

    PubMed

    Bae, J P; Lage, M J; Mo, D; Nelson, D R; Hoogwerf, B J

    2016-03-01

Examine the association between obesity and glycemic control among patients with type 1 (T1DM) or type 2 diabetes mellitus (T2DM). Data from US physician electronic health records (Humedica®) from 2009-2011 were utilized. Patients were defined as having above-target glycemic control if they had an HbA1c ≥7% at any time during the study period. Multinomial logistic regressions were conducted separately for T1DM and T2DM patients, and examined associations between BMI categories and probability of having above-target glycemic control (≥7% and <8%, ≥8% and <9%, or ≥9%) while controlling for patient demographics, general health, comorbid conditions, and antihyperglycemic medication use. There were 14,028 T1DM and 248,567 T2DM patients; 47.8% of T1DM and 63.4% of T2DM were obese (BMI ≥30 kg/m²). For T1DM, being overweight (BMI 25-<30), obese class I (30-<35), II (35-<40), or III (≥40) was associated with a significantly higher probability of having HbA1c ≥8% and <9% or ≥9%, while being overweight was associated with a significantly higher probability of having HbA1c ≥7% and <8% compared to normal BMI (≥18.5 and <25). For T2DM patients, being overweight, obese class I, II, or III was associated with a significantly higher probability of having HbA1c ≥7% and <8%, ≥8% and <9%, or ≥9%. For both T1DM and T2DM patients, there were positive and statistically significant associations between being overweight or obese and having suboptimal glycemic control. These findings quantify the associations between obesity and glycemic control, and highlight the potential importance of individual characteristics on glycemic control. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Drought forecasting in Luanhe River basin involving climatic indices

    NASA Astrophysics Data System (ADS)

    Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.

    2017-11-01

Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model that forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of the multivariate normal distribution so that two large-scale climatic indices can be incorporated simultaneously, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI are based on the assumption that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and the large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation was, in general, more likely to be normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for each gauge are then selected by Pearson correlation test, and the multivariate normality of the SPI, the corresponding climatic indices for the current month, and the SPI 1, 2, and 3 months later is demonstrated using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models that forecast transition probabilities from a current to a future SPI class.
The results show that the three proposed models outperform the two traditional models, and that incorporating large-scale climatic indices improves forecasting accuracy.
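The underlying conditional-normal calculation can be sketched for the simplest bivariate case (current SPI only, no climatic index): if current and future SPI are jointly standard normal with lag correlation ρ, then SPI_future | SPI_now ~ N(ρ·SPI_now, 1 − ρ²). The correlation and class boundaries below are hypothetical:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def spi_transition_prob(spi_now, rho, lo, hi):
    """P(lo < SPI_future <= hi | SPI_now) for jointly standard-normal SPI values
    with lag correlation rho: SPI_future | SPI_now ~ N(rho * spi_now, 1 - rho**2)."""
    mu = rho * spi_now
    sigma = math.sqrt(1.0 - rho ** 2)
    return norm_cdf((hi - mu) / sigma) - norm_cdf((lo - mu) / sigma)

# Probability of the "moderate drought" class (-1.5, -1.0] next month,
# given SPI = -1.2 now and a hypothetical lag-1 correlation of 0.7:
print(round(spi_transition_prob(-1.2, 0.7, -1.5, -1.0), 3))
# Ignoring the current state (rho = 0), the same class is much less likely:
print(round(spi_transition_prob(-1.2, 0.0, -1.5, -1.0), 3))  # -> 0.092
```

Extending this to condition on climatic indices as well means partitioning a higher-dimensional covariance matrix, but the conditional mean and variance follow the same multivariate-normal formulas.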

  12. Optimum space shuttle launch times relative to natural environment

    NASA Technical Reports Server (NTRS)

    King, R. L.

    1977-01-01

    Three sets of meteorological criteria were analyzed to determine the probabilities of favorable launch and landing conditions. Probabilities were computed for every 3 hours on a yearly basis using 14 years of weather data. These temporal probability distributions, applicable to the three sets of weather criteria encompassing benign, moderate and severe weather conditions, were computed for both Kennedy Space Center (KSC) and Edwards Air Force Base. In addition, conditional probabilities were computed for unfavorable weather conditions occurring after a delay which may or may not be due to weather conditions. Also, for KSC, the probabilities of favorable landing conditions at various times after favorable launch conditions have prevailed have been computed so that mission probabilities may be more accurately computed for those time periods when persistence strongly correlates weather conditions. Moreover, the probabilities and conditional probabilities of the occurrence of both favorable and unfavorable events for each individual criterion were computed to indicate the significance of each weather element to the overall result.

  13. Bayesian classification theory

    NASA Technical Reports Server (NTRS)

    Hanson, Robin; Stutz, John; Cheeseman, Peter

    1991-01-01

The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.

  14. Analysis and design of continuous class-E power amplifier at sub-nominal condition

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Yang, Kai; Zhang, Tianliang

    2017-12-01

A continuous class-E power amplifier at the sub-nominal condition is proposed in this paper. Continuous-mode operation means the class-E power amplifier can maintain high efficiency with a series matching network, while the sub-nominal condition means that only the zero-voltage-switching condition is required. Compared with the classical class-E power amplifier, the proposed design method releases two additional design freedoms, which increase the class-E power amplifier's design flexibility. The proposed continuous class-E power amplifier at the sub-nominal condition can also maintain high efficiency over a broad bandwidth. The performance of the continuous class-E power amplifier at the sub-nominal condition is derived and the design procedure is summarised. The normalised switch voltage and current waveforms are investigated. Furthermore, the influences of different sub-nominal conditions on the power losses of the switch-on resistance and on the output power capability are also discussed. A broadband continuous class-E power amplifier based on a Gallium Nitride (GaN) transistor was designed and tested to verify the proposed design methodology. The measurement results show that it delivers 10-15 W of output power with 64-73% power-added efficiency over 1.4-2.8 GHz.

  15. Patterns of Chronic Conditions and Their Associations With Behaviors and Quality of Life, 2010

    PubMed Central

    Mitchell, Sandra A.; Thompson, William W.; Zack, Matthew M.; Reeve, Bryce B.; Cella, David; Smith, Ashley Wilder

    2015-01-01

    Introduction Co-occurring chronic health conditions elevate the risk of poor health outcomes such as death and disability, are associated with poor quality of life, and magnify the complexities of self-management, care coordination, and treatment planning. This study assessed patterns of both singular and multiple chronic conditions, behavioral risk factors, and quality of life in a population-based sample. Methods In a national survey, adults (n = 4,184) answered questions about the presence of 27 chronic conditions. We used latent class analysis to identify patterns of chronic conditions and to explore associations of latent class membership with sociodemographic characteristics, behavioral risk factors, and health. Results Latent class analyses indicated 4 morbidity profiles: a healthy class (class 1), a class with predominantly physical health conditions (class 2), a class with predominantly mental health conditions (class 3), and a class with both physical and mental health conditions (class 4). Class 4 respondents reported significantly worse physical health and well-being and more days of activity limitation than those in the other latent classes. Class 4 respondents were also more likely to be obese and sedentary, and those with predominantly mental health conditions were most likely to be current smokers. Conclusions Subgroups with distinct patterns of chronic conditions can provide direction for screening and surveillance, guideline development, and the delivery of complex care services. PMID:26679491

  16. Social class and survival on the S.S. Titanic.

    PubMed

    Hall, W

    1986-01-01

    Passengers' chances of surviving the sinking of the S.S. Titanic were related to their sex and their social class: females were more likely to survive than males, and the chances of survival declined with social class as measured by the class in which the passenger travelled. The probable reasons for these differences in rates of survival are discussed as are the reasons accepted by the Mersey Committee of Inquiry into the sinking.

  17. Bayes estimation on parameters of the single-class classifier. [for remotely sensed crop data

    NASA Technical Reports Server (NTRS)

    Lin, G. C.; Minter, T. C.

    1976-01-01

    Normal procedures used for designing a Bayes classifier to classify wheat as the major crop of interest require not only training samples of wheat but also those of nonwheat. Therefore, ground truth must be available for the class of interest plus all confusion classes. The single-class Bayes classifier classifies data into the class of interest or the class 'other' but requires training samples only from the class of interest. This paper will present a procedure for Bayes estimation on the mean vector, covariance matrix, and a priori probability of the single-class classifier using labeled samples from the class of interest and unlabeled samples drawn from the mixture density function.
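The single-class idea can be illustrated with a deliberately simplified 1-D sketch: model p(x | wheat) from the labeled samples, estimate the mixture density f(x) from the unlabeled samples (here with a plain kernel density estimate rather than the paper's Bayesian parameter estimation), and compute the posterior P(wheat | x) = P(wheat)·p(x | wheat)/f(x). All data, the prior, and the bandwidth are hypothetical:

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit_gaussian(samples):
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / (len(samples) - 1)
    return mu, math.sqrt(var)

def kde(x, samples, bandwidth):
    """Kernel estimate of the mixture density f(x) from unlabeled samples."""
    return sum(gauss(x, s, bandwidth) for s in samples) / len(samples)

def single_class_posterior(x, wheat_mu, wheat_sigma, prior, unlabeled, bandwidth=0.5):
    """P(wheat | x) = prior * p(x | wheat) / f(x); capped at 1 since the
    plug-in density estimates need not be perfectly consistent."""
    f = kde(x, unlabeled, bandwidth)
    return min(1.0, prior * gauss(x, wheat_mu, wheat_sigma) / f)

# Labeled wheat pixels (1-D feature) and unlabeled pixels from the whole scene
wheat = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]
unlabeled = [5.0, 5.2, 4.9, 5.1, 1.0, 1.3, 0.8, 1.1, 1.2, 0.9]
mu, sigma = fit_gaussian(wheat)
prior = 0.4  # assumed a priori probability of wheat
print(single_class_posterior(5.0, mu, sigma, prior, unlabeled) > 0.5)  # -> True: wheat
print(single_class_posterior(1.0, mu, sigma, prior, unlabeled) > 0.5)  # -> False: 'other'
```

Note that no samples from the confusion classes are ever labeled; the class "other" is handled implicitly through the mixture density, which is the essential point of the single-class formulation.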

  18. Categorizing Sounds

    DTIC Science & Technology

    1989-12-01

psychophysical study. These have been called Class A and Class B, or sensory and perceptual, or local and global, and probably 18 other terms. Among...Class A studies, detection, two-choice discriminability and other local measures reveal differential sensitivities of receptor or sensory systems...Eds.), Percepcion del Objeto: Estructura y Procesos, 553-596. Universidad Nacional de Educacion a Distancia. Lisanby, S. H., & Lockhead, G. R. (accepted

  19. Competency criteria and the class inclusion task: modeling judgments and justifications.

    PubMed

    Thomas, H; Horton, J J

    1997-11-01

    Preschool age children's class inclusion task responses were modeled as mixtures of different probability distributions. The main idea: Different response strategies are equivalent to different probability distributions. A child displays cognitive strategy s if P (child uses strategy s, given the child's observed score X = x) = p(s) is the most probable strategy. The general approach is widely applicable to many settings. Both judgment and justification questions were asked. Judgment response strategies identified were subclass comparison, guessing, and inclusion logic. Children's justifications lagged their judgments in development. Although justification responses may be useful, C. J. Brainerd was largely correct: If a single response variable is to be selected, a judgments variable is likely the preferable one. But the process must be modeled to identify cognitive strategies, as B. Hodkin has demonstrated.
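The mixture idea, P(strategy s | score X = x) computed from strategy-specific response distributions, can be sketched with binomial response models; the per-item success probabilities and uniform priors below are hypothetical, not the paper's fitted values:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def strategy_posterior(score, n_items, strategies, priors):
    """P(strategy | X = score) via Bayes' rule over binomial response models."""
    joint = {s: priors[s] * binom_pmf(score, n_items, p) for s, p in strategies.items()}
    total = sum(joint.values())
    return {s: v / total for s, v in joint.items()}

# Hypothetical per-item success probabilities for three judgment strategies
strategies = {"subclass": 0.1, "guessing": 0.5, "inclusion": 0.95}
priors = {"subclass": 1 / 3, "guessing": 1 / 3, "inclusion": 1 / 3}

post = strategy_posterior(score=8, n_items=8, strategies=strategies, priors=priors)
print(max(post, key=post.get))  # -> inclusion
post = strategy_posterior(score=4, n_items=8, strategies=strategies, priors=priors)
print(max(post, key=post.get))  # -> guessing
```

A child is assigned the strategy with the largest posterior given the observed score, which is exactly the "most probable strategy" rule in the abstract.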

  20. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    ERIC Educational Resources Information Center

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  1. Amphibian and reptile road-kills on tertiary roads in relation to landscape structure: using a citizen science approach with open-access land cover data.

    PubMed

    Heigl, Florian; Horvath, Kathrin; Laaha, Gregor; Zaller, Johann G

    2017-06-26

Amphibians and reptiles are among the most endangered vertebrate species worldwide. However, little is known about how they are affected by road-kills on tertiary roads and whether the surrounding landscape structure can explain road-kill patterns. The aim of our study was to examine the applicability of open-access remote sensing data for a large-scale citizen science approach to describe spatial patterns of road-killed amphibians and reptiles on tertiary roads. Using a citizen science app we monitored road-kills of amphibians and reptiles along 97.5 km of tertiary roads covering agricultural, municipal and interurban roads as well as cycling paths in eastern Austria over two seasons. The surrounding landscape was assessed using open-access land cover classes for the region (Coordination of Information on the Environment, CORINE). Hotspot analysis was performed using kernel density estimation (KDE+). Relations between land cover classes and amphibian and reptile road-kills were analysed with conditional probabilities and general linear models (GLM). We also estimated the potential cost-efficiency of a large-scale citizen science monitoring project. We recorded 180 amphibian and 72 reptile road-kills comprising eight species, mainly occurring on agricultural roads. KDE+ analyses revealed a significant clustering of road-killed amphibians and reptiles, which is important information for authorities aiming to mitigate road-kills. Overall, hotspots of amphibian and reptile road-kills were next to the land cover classes arable land, suburban areas and vineyards. Conditional probabilities and GLMs identified road-kills especially next to preferred habitats of the green toad, common toad and grass snake, the most often found road-killed species. A citizen science approach appeared to be more cost-efficient than monitoring by professional researchers only when more than 400 km of road are monitored.
Our findings showed that freely available remote sensing data in combination with a citizen science approach would be a cost-efficient method aiming to identify and monitor road-kill hotspots of amphibians and reptiles on a larger scale.

  2. Toll-like receptor-associated keratitis and strategies for its management.

    PubMed

    Kaur, Amandeep; Kumar, Vijay; Singh, Simranjeet; Singh, Joginder; Upadhyay, Niraj; Datta, Shivika; Singla, Sourav; Kumar, Virender

    2015-10-01

Keratitis is an inflammatory condition characterized by involvement of the corneal tissues. The most recurrent challenge in keratitis is infection: bacteria, viruses, fungi and parasitic organisms all have the potential to cause it. Toll-like receptors (TLRs) are an important class of proteins with a major role in the innate immune response to pathogens. In recent years, extensive research efforts have provided considerable information regarding the role of TLRs in various types of keratitis. This paper reviews the recent literature on amoebic, bacterial, fungal and viral keratitis associated with Toll-like receptor molecules, and summarizes existing thoughts on pathogenesis and treatment as well as future possibilities for prevention of TLR-associated keratitis.

  3. Students' Understanding of Conditional Probability on Entering University

    ERIC Educational Resources Information Center

    Reaburn, Robyn

    2013-01-01

    An understanding of conditional probability is essential for students of inferential statistics as it is used in Null Hypothesis Tests. Conditional probability is also used in Bayes' theorem, in the interpretation of medical screening tests and in quality control procedures. This study examines the understanding of conditional probability of…

  4. Prediction and explanation in the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garriga, J.; Vilenkin, A.

    2008-02-15

    Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the 'ideal' reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses who learn different information from the outcome of that experiment. The probabilities for the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, where we trace over all information which is uncorrelated to the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed.

  5. Arbitrary conditional discriminative functions of meaningful stimuli and enhanced equivalence class formation.

    PubMed

    Nedelcu, Roxana I; Fields, Lanny; Arntzen, Erik

    2015-03-01

    Equivalence class formation by college students was influenced through the prior acquisition of conditional discriminative functions by one of the abstract stimuli (C) in the to-be-formed classes. Participants in the GR-0, GR-1, and GR-5 groups attempted to form classes under the simultaneous protocol, after mastering 0, 1, or 5 conditional relations between C and other abstract stimuli (V, W, X, Y, Z) that were not included in the to-be-formed classes (ABCDE). Participants in the GR-many group attempted to form classes that contained four abstract stimuli and one meaningful picture as the C stimulus. In the GR-0, GR-1, GR-5, and GR-many groups, classes were formed by 17, 25, 58, and 67% of participants, respectively. Thus, likelihood of class formation was enhanced by the prior formation of five C-based conditional relations (the GR-5 vs. GR-0 condition), or the inclusion of a meaningful stimulus as a class member (the GR-many vs. GR-0 condition). The GR-5 and GR-many conditions produced very similar yields, indicating that class formation was enhanced to a similar degree by including a meaningful stimulus or an abstract stimulus that had become a member of five conditional relations prior to equivalence class formation. Finally, the low and high yields produced by the GR-1 and GR-5 conditions showed that the class enhancement effect of the GR-5 condition was due to the number of conditional relations established during preliminary training and not to the sheer amount of reinforcement provided while learning these conditional relations. Class enhancement produced by meaningful stimuli, then, can be attributed to their acquired conditional discriminative functions as well as their discriminative, connotative, and denotative properties. © Society for the Experimental Analysis of Behavior.

  6. Autoclass: An automatic classification system

    NASA Technical Reports Server (NTRS)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
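The Bayesian search over the number of classes that AutoClass performs can be illustrated, under strong simplifications, with a tiny exact example: for a single binary attribute with a Beta prior, the marginal likelihood of a one-class model is compared against a two-class model obtained by averaging over all possible class assignments. This is a toy sketch of Bayesian class-count selection, not the AutoClass algorithm itself (which uses approximations and search over much richer models); the data vector is hypothetical.

```python
import math
from itertools import product

def beta_bernoulli_marginal(xs, a=1.0, b=1.0):
    """Exact marginal likelihood of binary data under a Bernoulli model
    with a Beta(a, b) prior: B(a + k, b + n - k) / B(a, b)."""
    k, n = sum(xs), len(xs)
    return math.exp(math.lgamma(a + k) + math.lgamma(b + n - k)
                    - math.lgamma(a + b + n)
                    - (math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)))

def two_class_marginal(xs):
    """Two-class marginal likelihood: average the product of per-class
    marginals over all 2^n assignments (uniform assignment prior)."""
    total = 0.0
    for z in product((0, 1), repeat=len(xs)):
        g0 = [x for x, zi in zip(xs, z) if zi == 0]
        g1 = [x for x, zi in zip(xs, z) if zi == 1]
        total += beta_bernoulli_marginal(g0) * beta_bernoulli_marginal(g1)
    return total / 2 ** len(xs)

data = [1, 1, 1, 1, 0, 0, 0, 0]  # hypothetical binary attribute values
m1, m2 = beta_bernoulli_marginal(data), two_class_marginal(data)
print(m1 < m2)  # True: the two-class model is more probable for this data
```

The same comparison, extended over model structures and numbers of classes, is what lets a Bayesian system choose the classification complexity automatically rather than by a user-set parameter.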

  7. Estimates of growth and mortality of under-yearling smallmouth bass in Spednic Lake, from 1970 through 2008

    USGS Publications Warehouse

    Dudley, Robert W.; Trial, Joan G.

    2014-01-01

    This report is the product of a 2013 cooperative agreement between the U.S. Geological Survey, the International Joint Commission, and the Maine Bureau of Sea Run Fisheries and Habitat to quantify the effects of meteorological conditions (from 1970 through 2008) on the survival of smallmouth bass (Micropterus dolomieu) in the first year of life in Spednic Lake. This report documents the data and methods used to estimate historical daily mean lake surface-water temperatures from early spring through late autumn, which were used to estimate the dates of smallmouth bass spawning, young-of-the-year growth, and probable strength of each year class. Mortality of eggs and fry in nests was modeled and estimated to exceed 10 percent in 17 of 39 years; during those years, cold temperatures in the early part of the spawning period resulted in mortality to fish that were estimated to have had the longest growing season and attain the greatest length. Modeled length-dependent overwinter survival combined with early mortality identified 1986, 1994, 1996, and 2004 as the years in which temperature was likely to have presented the greatest challenge to year-class strength in the Spednic Lake fishery. Age distribution of bass in fisheries on lakes in the St. Croix and surrounding watersheds confirmed that conditions in 1986 and 1996 resulted in weak smallmouth bass year classes (age-four or age-five bass representing less than 15 percent of a 100-fish sample).

  8. Neural network ensemble based CAD system for focal liver lesions from B-mode ultrasound.

    PubMed

    Virmani, Jitendra; Kumar, Vinod; Kalra, Naveen; Khandelwal, Niranjan

    2014-08-01

    A neural network ensemble (NNE) based computer-aided diagnostic (CAD) system is proposed in the present work to assist radiologists in the differential diagnosis of focal liver lesions (FLLs), including (1) typical and atypical cases of Cyst, hemangioma (HEM) and metastatic carcinoma (MET) lesions, (2) small and large hepatocellular carcinoma (HCC) lesions, and (3) normal (NOR) liver tissue. Expert radiologists visualize the textural characteristics of regions inside and outside the lesions to differentiate between different FLLs; accordingly, texture features computed from inside-lesion regions of interest (IROIs) and texture ratio features computed from IROIs and surrounding-lesion regions of interest (SROIs) are taken as input. Principal component analysis (PCA) is used to reduce the dimensionality of the feature space before classifier design. The first step of the classification module consists of a five-class PCA-NN based primary classifier which yields probability outputs for five liver image classes. The second step consists of ten binary PCA-NN based secondary classifiers for the NOR/Cyst, NOR/HEM, NOR/HCC, NOR/MET, Cyst/HEM, Cyst/HCC, Cyst/MET, HEM/HCC, HEM/MET and HCC/MET classes. The probability outputs of the five-class primary classifier are used to determine the two most probable classes for a test instance, which is then directed to the corresponding binary secondary classifier for crisp classification between the two classes. Including the second step of the classification module increases classification accuracy from 88.7% to 95%. The promising results obtained by the proposed system indicate its usefulness in assisting radiologists in the differential diagnosis of FLLs.
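The two-step decision logic described above (take the two most probable classes from the primary classifier's probability outputs, then route the instance to the matching binary classifier) can be sketched as follows. The class names follow the abstract; the probability values and the stand-in secondary classifiers are hypothetical, since a real system would run trained binary PCA-NNs.

```python
from itertools import combinations

def two_step_classify(probs, binary_classifiers):
    """Step 1: pick the two most probable classes from the primary output.
    Step 2: route to the corresponding binary classifier for a crisp label."""
    ranked = sorted(probs, key=probs.get, reverse=True)
    a, b = sorted(ranked[:2])              # canonical (alphabetical) pair key
    return binary_classifiers[(a, b)](probs)

def make_secondary(a, b):
    # Stand-in for a trained binary PCA-NN: here it just compares scores.
    def clf(probs):
        return a if probs[a] >= probs[b] else b
    return clf

classes = ["NOR", "Cyst", "HEM", "HCC", "MET"]
binary = {tuple(sorted(p)): make_secondary(*sorted(p))
          for p in combinations(classes, 2)}  # the ten pairwise classifiers

# Hypothetical primary five-class probability output for a test instance
primary = {"NOR": 0.05, "Cyst": 0.40, "HEM": 0.35, "HCC": 0.15, "MET": 0.05}
label = two_step_classify(primary, binary)
print(label)  # Cyst
```

Routing by the top-two primary classes means only one of the ten binary classifiers runs per instance, which is what makes the second stage cheap to add.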

  9. Balkanization and Unification of Probabilistic Inferences

    ERIC Educational Resources Information Center

    Yu, Chong-Ho

    2005-01-01

    Many research-related classes in social sciences present probability as a unified approach based upon mathematical axioms, but neglect the diversity of various probability theories and their associated philosophical assumptions. Although currently the dominant statistical and probabilistic approach is the Fisherian tradition, the use of Fisherian…

  10. Simulating the component counts of combinatorial structures.

    PubMed

    Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon

    2018-02-09

    This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
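The Feller coupling mentioned above can be sketched directly: independent Bernoulli variables B_i with P(B_i = 1) = 1/i are generated, a sentinel 1 is appended after position n, and the spacings between successive 1s have the same joint distribution as the cycle counts of a uniform random permutation of n elements. This is a minimal illustration of the simulation idea, not the authors' optimized implementation.

```python
import random

def feller_cycle_counts(n, rng=random):
    """Sample the cycle counts of a uniform random permutation of [n]
    via the Feller coupling: spacings between successive 1s in the
    Bernoulli sequence B_1..B_n (plus a sentinel 1) are cycle lengths."""
    bits = [1 if rng.random() < 1.0 / i else 0 for i in range(1, n + 1)]
    bits.append(1)                      # sentinel B_{n+1} = 1
    counts, last = {}, 0                # B_1 = 1 with probability 1/1 = 1
    for pos in range(1, n + 1):
        if bits[pos]:
            j = pos - last              # spacing = one cycle of length j
            counts[j] = counts.get(j, 0) + 1
            last = pos
    return counts

c = feller_cycle_counts(10, random.Random(42))
print(c, sum(j * m for j, m in c.items()))  # cycle lengths always sum to 10
```

Because only n Bernoulli draws are needed, this is far faster than building and decomposing an explicit random permutation when n is large.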

  11. Adaptive tracking control for a class of stochastic switched systems

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Xia, Yuanqing

    2018-02-01

    In this paper, the problem of adaptive tracking is considered for a class of stochastic switched systems. As preliminaries, a criterion for global asymptotic practical stability in probability is first presented with the aid of the common Lyapunov function method. Based on this stability criterion, adaptive backstepping controllers are designed to guarantee that the closed-loop system has a unique global solution, which is globally asymptotically practically stable in probability, and that the tracking error in the fourth moment converges to an arbitrarily small neighbourhood of zero. Simulation examples are given to demonstrate the efficiency of the proposed schemes.

  12. CPROB: A COMPUTATIONAL TOOL FOR CONDUCTING CONDITIONAL PROBABILITY ANALYSIS

    EPA Science Inventory

    Conditional probability analysis measures the probability of observing one event given that another event has occurred. In an environmental context, conditional probability analysis helps assess the association between an environmental contaminant (i.e. the stressor) and the ec...
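A minimal sketch of a conditional probability analysis in this spirit: given paired observations of a stressor level and a binary impairment outcome, estimate P(impairment | stressor >= x) for each observed level x. The site data below are hypothetical and stand in for field measurements a tool like CPROB would ingest.

```python
def conditional_probability_curve(pairs):
    """For each observed stressor level x, estimate
    P(biological impairment | stressor >= x) from the data."""
    curve = {}
    for x, _ in pairs:
        above = [impaired for s, impaired in pairs if s >= x]
        curve[x] = sum(above) / len(above)
    return curve

# Hypothetical (stressor concentration, impaired? 0/1) site observations
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 1),
        (0.5, 0), (0.6, 1), (0.7, 1), (0.8, 1)]
curve = conditional_probability_curve(data)
print(curve[0.5])  # 0.75: 3 of 4 sites at or above this level are impaired
```

A rising curve as x increases is the signature of an association between the stressor and the ecological response.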

  13. How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    PubMed

    Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang

    2018-08-01

    One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of its essential steps: after a crash risk evaluation model outputs the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point for that posterior probability, separating potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold. The few studies that discuss the predictive performance of the models have utilized subjective methods to choose the threshold, and subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications. A theoretical method for selecting the threshold value is therefore necessary to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance and other criteria were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria.
    This method automatically identifies thresholds for crash prediction by minimizing the cross entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after using the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
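A minimal sketch of minimum cross-entropy threshold selection, assuming a variant of Li's criterion: each candidate threshold binarizes the predicted crash probabilities into a low and a high group, each represented by its mean, and the chosen threshold minimizes the cross entropy between the continuous probabilities and that two-level representation. The probability values are hypothetical, and the exact criterion used in the study may differ in detail.

```python
import math

def min_cross_entropy_threshold(probs):
    """Pick the threshold minimizing cross entropy between the continuous
    probabilities and their binarized (two-group mean) representation."""
    candidates = sorted(set(probs))[1:]   # skip the minimum: low side non-empty
    best_t, best_d = None, float("inf")
    for t in candidates:
        low = [p for p in probs if p < t]
        high = [p for p in probs if p >= t]
        if not low or not high:
            continue
        m0, m1 = sum(low) / len(low), sum(high) / len(high)
        d = (sum(p * math.log(p / m0) for p in low)
             + sum(p * math.log(p / m1) for p in high))
        if d < best_d:
            best_t, best_d = t, d
    return best_t

# Hypothetical posterior crash probabilities from a risk evaluation model
p = [0.02, 0.03, 0.05, 0.04, 0.6, 0.7, 0.65, 0.08]
t = min_cross_entropy_threshold(p)
print(t)  # 0.6: cleanly separates the high-risk cluster
```

Because the criterion is computed from the data alone, the threshold adapts automatically as traffic or weather conditions shift the distribution of predicted probabilities.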

  14. Study on conditional probability of surface rupture: effect of fault dip and width of seismogenic layer

    NASA Astrophysics Data System (ADS)

    Inoue, N.

    2017-12-01

    The conditional probability of surface rupture is affected by various factors, such as shallow material properties, the earthquake process, ground motions and so on. Toda (2013) pointed out differences between the conditional probabilities of strike-slip and reverse faults when the fault dip and the width of the seismogenic layer are considered. This study evaluated the conditional probability of surface rupture using the following procedure. Fault geometry was determined from a randomly generated magnitude based on the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was assigned randomly within the seismogenic layer. Logistic analysis was performed on two data sets: surface displacement calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from surface displacement was higher for reverse faults than for strike-slip faults, which coincides with previous similar studies (e.g. Kagawa et al., 2004; Kataoka and Kusakabe, 2005). On the contrary, the probability estimated from the depth of the source fault was higher for thrust faults than for strike-slip and reverse faults, a trend similar to the conditional probabilities from PFDHA results (Youngs et al., 2003; Moss and Ross, 2011). The combined simulated results for thrust and reverse faults also show low probability. The worldwide compiled reverse fault data include earthquakes with low fault dip angles; for Japanese reverse faults, in contrast, the conditional probability computed with fewer low-dip-angle earthquakes may be low and similar to that of strike-slip faults (e.g. Takao et al., 2013).
    In the future, numerical simulations considering the failure condition of the surface above the source fault will be performed to examine the amount of displacement and the conditional probability quantitatively.

  15. Combined target factor analysis and Bayesian soft-classification of interference-contaminated samples: forensic fire debris analysis.

    PubMed

    Williams, Mary R; Sigman, Michael E; Lewis, Jennifer; Pitan, Kelly McHugh

    2012-10-10

    A Bayesian soft classification method combined with target factor analysis (TFA) is described and tested for the analysis of fire debris data. The method relies on analysis of the average mass spectrum across the chromatographic profile (i.e., the total ion spectrum, TIS) from multiple samples taken from a single fire scene. A library of TIS from reference ignitable liquids with assigned ASTM classifications is used as the target factors in TFA. The class-conditional distributions of correlations between the target and predicted factors for each ASTM class are represented by kernel functions and analyzed by Bayesian decision theory. The soft classification approach assists in assessing the probability that ignitable liquid residue from a specific ASTM E1618 class is present in a set of samples from a single fire scene, even in the presence of unspecified background contributions from pyrolysis products. The method is demonstrated with sample data sets and then tested on laboratory-scale burn data and large-scale field test burns. The overall performance achieved in laboratory and field tests of the method is approximately 80% correct classification of fire debris samples. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
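The kernel-density / Bayesian decision step can be sketched as follows: class-conditional densities of the target-to-predicted-factor correlations are estimated with Gaussian kernels, and Bayes' rule converts them into soft class probabilities. The two ASTM-style class names, correlation values, priors and bandwidth below are hypothetical illustrations, not data from the paper.

```python
import math

def gaussian_kde(samples, bandwidth=0.05):
    """Class-conditional density of a correlation value, estimated
    with a Gaussian kernel over the training correlations."""
    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / (len(samples) * bandwidth
                                        * math.sqrt(2 * math.pi))
    return density

# Hypothetical TFA correlations observed in training for two classes
train = {"gasoline":   [0.95, 0.92, 0.97, 0.90, 0.94],
         "distillate": [0.70, 0.65, 0.72, 0.68, 0.66]}
densities = {c: gaussian_kde(v) for c, v in train.items()}

def posterior(r, priors):
    """Bayesian soft classification: P(class | observed correlation r)."""
    joint = {c: priors[c] * densities[c](r) for c in densities}
    z = sum(joint.values())
    return {c: joint[c] / z for c in joint}

post = posterior(0.93, {"gasoline": 0.5, "distillate": 0.5})
print(post)  # gasoline heavily favored at r = 0.93
```

The "soft" output is the full posterior rather than a hard label, which lets an analyst report the probability that residue from a given class is present.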

  16. Higher Impulsivity As a Distinctive Trait of Severe Cocaine Addiction among Individuals Treated for Cocaine or Alcohol Use Disorders.

    PubMed

    García-Marchena, Nuria; Ladrón de Guevara-Miranda, David; Pedraz, María; Araos, Pedro Fernando; Rubio, Gabriel; Ruiz, Juan Jesús; Pavón, Francisco Javier; Serrano, Antonia; Castilla-Ortega, Estela; Santín, Luis J; Rodríguez de Fonseca, Fernando

    2018-01-01

    Although alcohol is the addictive substance most often used by addicted patients, use of other substances such as cocaine has increased over recent years, and the combination of both drugs aggravates health impairment and complicates clinical assessment. The aim of this study is to identify and characterize heterogeneous subgroups of cocaine- and alcohol-addicted patients with common characteristics based on substance use disorders, psychiatric comorbidity and impulsivity. A total of 214 subjects with cocaine and/or alcohol use disorders were recruited from outpatient treatment programs and clinically assessed. A latent class analysis was used to establish phenotypic categories according to diagnosis of cocaine and alcohol use disorders, mental disorders, and impulsivity scores. Relevant variables were examined in the latent classes (LCs) using correlation and analyses of variance and covariance. Four LCs of addicted patients were identified: Class 1 (45.3%), formed by alcohol-dependent patients exhibiting lifetime mood disorder diagnosis and mild impulsivity; Class 2 (14%), formed mainly by lifetime cocaine use disorder patients with a low probability of comorbid mental disorders and mild impulsivity; Class 3 (10.7%), formed by cocaine use disorder patients with an elevated probability of comorbid lifetime anxiety, early and personality disorders, and greater impulsivity scores; and Class 4 (29.9%), formed mainly by patients with both alcohol and cocaine use disorders, with an elevated probability of early and personality disorders and elevated impulsivity. Furthermore, there were significant differences among classes in terms of Diagnostic and Statistical Manual of Mental Disorders-4th Edition-Text Revision criteria for abuse and dependence: Class 3 showed more criteria for cocaine use disorders than the other classes, while Class 1 and Class 4 showed more criteria for alcohol use disorders.
Cocaine- and alcohol-addicted patients who were grouped according to diagnosis of substance use disorders, psychiatric comorbidity, and impulsivity show different clinical and sociodemographic variables. Whereas mood and anxiety disorders are more prevalent in alcohol-addicted patients, personality disorders are associated with cocaine use disorders and diagnosis of comorbid substance use disorders. Notably, increased impulsivity is a distinctive characteristic of patients with severe cocaine use disorder and comorbid personality disorders. Psychiatric disorders and impulsivity should be considered for improving the stratification of addicted patients with shared clinical and sociodemographic characteristics to select more appropriate treatments.

  17. The effects of host size and temperature on the emergence of Echinoparyphium recurvatum cercariae from Lymnaea peregra under natural light conditions.

    PubMed

    Morley, N J; Adam, M E; Lewis, J W

    2010-09-01

    The production of cercariae from their snail host is a fundamental component of transmission success in trematodes. The emergence of Echinoparyphium recurvatum (Trematoda: Echinostomatidae) cercariae from Lymnaea peregra was studied under natural sunlight conditions, using naturally infected snails of different sizes (10-17 mm) within a temperature range of 10-29 degrees C. There was a single photoperiodic circadian cycle of emergence with one peak, which correlated with the maximum diffuse sunlight irradiation. At 21 degrees C the daily number of emerging cercariae increased with increasing host snail size, but variations in cercarial emergence did occur between both individual snails and different days. There was only limited evidence of cyclic emergence patterns over a 3-week period, probably due to extensive snail mortality, particularly those in the larger size classes. Very few cercariae emerged in all snail size classes at the lowest temperature studied (10 degrees C), but at increasingly higher temperatures elevated numbers of cercariae emerged, reaching an optimum between 17 and 25 degrees C. Above this range emergence was reduced. At all temperatures more cercariae emerged from larger snails. Analysis of emergence using the Q10 value, a measure of physiological processes over temperature ranges, showed that between 10 and 21 degrees C (approximately 15 degrees C) Q10 values exceeded 100 for all snail size classes, indicating a substantially greater emergence than would be expected for normal physiological rates. From 14 to 25 degrees C (approximately 20 degrees C) cercarial emergence in most snail size classes showed little change in Q10, although in the smallest size class emergence was still substantially greater than the typical Q10 increase expected over this temperature range. At the highest range of 21-29 degrees C (approximately 25 degrees C), Q10 was much reduced. 
The importance of these results for cercarial emergence under global climate change is discussed.
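The Q10 analysis above rests on the standard temperature-coefficient formula Q10 = (R2/R1)^(10/(T2 − T1)), which can be computed directly. The emergence counts below are hypothetical, but chosen so that, as in the study's 10-21 degrees C range, the value far exceeds the typical physiological range of 2-3.

```python
def q10(rate1, rate2, t1, t2):
    """Q10 temperature coefficient: the factor by which a physiological
    rate would change over a 10 degree C rise, inferred from rates
    measured at temperatures t1 and t2."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

# Hypothetical daily cercarial emergence counts at 10 and 21 degrees C
print(q10(2, 400, 10, 21))  # well above 100, far beyond the usual 2-3
```

A Q10 above 100 means emergence is not just tracking metabolic rate with temperature; some additional temperature-gated process must be switching on.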

  18. An investigation of interurban variations in the chemical composition and mutagenic activity of airborne particulate organic matter using an integrated chemical class/bioassay system

    NASA Astrophysics Data System (ADS)

    Butler, J. P.; Kneip, T. J.; Daisey, J. M.

    Previous investigations in this laboratory have demonstrated that the mutagenic activities of extractable particulate organic matter (EOM) from cities which differ in their principal fuels and meteorology can vary significantly. To gain a better understanding of these interurban variations, an Integrated Chemical Class/Biological Screening System was developed and used for a more detailed examination of differences in the chemical composition and mutagenic activity of EOM. The screening system involved coupling in situ Ames mutagenicity determinations on high performance thin layer chromatography (HPTLC) plates with class specific chemical analyses on a second set of plates. The system was used to screen for mutagenic activity and selected chemical classes (including PAH, nitro-PAH, phenols, carboxylic acids, carbonyls, aza-arenes and alkylating agents) in EOM from the following sites: New York City; Elizabeth, N.J.; Mexico City; Beijing, China; Philadelphia, PA; and the Caldecott Tunnel (CA). The results of this study demonstrated mutagenic activity and chemical compositional differences in HPTLC subfractions of particulate organic matter from these cities and from the Caldecott Tunnel. The greatest interurban differences in chemical classes were observed for the phenols, carbonyl compounds and alkylating agents. Interurban variations in mutagenic activities were greatest for EOM subfractions of intermediate polarity. These differences are probably related to interurban differences in the fuels used, types of sources and atmospheric conditions. The relationships between these variables are not well understood at present.

  19. Development of Genuine Neural Network Prototype Chip

    DTIC Science & Technology

    1991-01-28

    priori distribution is equivalent, and more readily visualized with a rank curve. The sonar signal data consisted of approximately 85% class Target and...15% class Clutter. For this reason, the rank curves for the class Clutter were used for device parameter analysis. R & D STATUS REPORT 1/28/91 N00014...the signal CLASSLD#. Four 10-bit class probabilities are available on the output bus (C0-C9, C16-C25, C32-C41 and C48-C57) at each clock cycle. A

  20. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  1. Possibilities of forecasting hypercholesterinemia in pilots

    NASA Technical Reports Server (NTRS)

    Vivilov, P.

    1980-01-01

    The dependence of the frequency of hypercholesterinemia on the age, average annual flying time, functional category, qualification class, and flying specialty of 300 pilots was investigated. The risk probability coefficient of hypercholesterinemia was computed. An evaluation table was developed which gives an 84% probability of forecasting the risk of hypercholesterinemia.

  2. Four Distinct Subgroups of Self-Injurious Behavior among Chinese Adolescents: Findings from a Latent Class Analysis

    PubMed Central

    Xin, Xiuhong; Ming, Qingsen; Zhang, Jibiao; Wang, Yuping; Liu, Mingli; Yao, Shuqiao

    2016-01-01

    Self-injurious behavior (SIB) among adolescents is an important public health issue worldwide. It is still uncertain whether homogeneous subgroups of SIB can be identified and whether constellations of SIBs can co-occur, given the high heterogeneity of these behaviors. In this study, a cross-sectional study was conducted on a large school-based sample and latent class analysis was performed (n = 10,069, mean age = 15 years) to identify SIB classes based on 11 indicators falling under direct SIB (DSIB), indirect SIB (ISIB), and suicide attempts (SAs). Social and psychological characteristics of each subgroup were examined after controlling for age and gender. Results showed that a four-class model best fit the data and each class had a distinct pattern of co-occurrence of SIBs and external measures. Class 4 (the baseline/normative group, 65.3%) had a low probability of SIB. Class 3 (severe SIB group, 3.9%) had a high probability of SIB and the poorest social and psychological status. Class 1 (DSIB+SA group, 14.2%) had similar scores for external variables compared to class 3, and included a majority of girls [odds ratio (OR) = 1.94]. Class 2 (ISIB group, 16.6%) displayed moderate endorsement of ISIB items, and had a majority of boys and older adolescents (OR = 1.51). These findings suggest that SIB is a heterogeneous entity, but it may be best explained by four homogeneous subgroups that display quantitative and qualitative differences. Findings in this study will improve our understanding of SIB and may facilitate the prevention and treatment of SIB. PMID:27392132
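The class-membership step of a latent class analysis like the one above can be sketched with Bayes' rule: given estimated class weights and item-response (conditional) probabilities, the posterior probability of each class for an observed response pattern follows from the local-independence assumption. The two-class model, three indicators and all parameter values below are hypothetical.

```python
def lca_posterior(responses, class_weights, item_probs):
    """Posterior class membership for a binary response pattern under a
    latent class model (items assumed conditionally independent)."""
    joint = []
    for w, probs in zip(class_weights, item_probs):
        lik = w
        for r, p in zip(responses, probs):
            lik *= p if r else (1 - p)
        joint.append(lik)
    z = sum(joint)
    return [j / z for j in joint]

# Hypothetical 2-class model over 3 SIB indicators
weights = [0.9, 0.1]             # normative class vs severe class
items = [[0.05, 0.10, 0.02],     # endorsement (conditional) probs, class 1
         [0.70, 0.80, 0.60]]     # endorsement (conditional) probs, class 2
post = lca_posterior([1, 1, 0], weights, items)
print(post)  # endorsing two items makes the severe class far more likely
```

In a fitted model these posteriors are what assign each adolescent to the most probable subgroup, and the item-probability profiles are what give each class its substantive label.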

  3. An activity canyon characterization of the pharmacological topography.

    PubMed

    Kulkarni, Varsha S; Wild, David J

    2016-01-01

    Highly chemically similar drugs usually possess similar biological activities, but sometimes small changes in chemistry can result in a large difference in biological effects. Chemically similar drug pairs that show extreme deviations in activity represent distinctive drug interactions with important implications. These associations between chemical and biological similarity are studied as discontinuities in activity landscapes. In particular, activity cliffs are quantified by the drop in similar activity of chemically similar drugs. In this paper, we construct a landscape using a large drug-target network and consider the rises in similarity and variation in activity along the chemical space. Detailed analysis of structure and activity gives a rigorous quantification of distinctive pairs and the probability of their occurrence. We analyze pairwise similarity (s) and variation (d) in activity of drugs on proteins. Interactions between drugs are quantified by considering pairwise s and d weights jointly with the corresponding chemical similarity (c) weights. Similarity and variation in activity are measured as the number of common and uncommon targets of two drugs, respectively. Distinctive interactions occur between drugs having high c and above-average d (or below-average s). Computation of the predicted probability of distinctiveness employs the joint probability of c and s, and of c and d, assuming independence of structure and activity. Predictions conform with the observations at different levels of distinctiveness. Results are validated on the data used and on another drug ensemble. In the landscape, while s and d decrease as c increases, d maintains higher values than s. c ∈ [0.3, 0.64] is the transitional region where rises in d are significantly greater than drops in s. Notably, the distinctive interactions filtered with high d and those filtered with low s are different in nature, and high-c interactions are more likely to have above-average d than above-average s.
    Identification of distinctive interactions is better with high d than with low s. These interactions belong to diverse classes. d is greatest between drugs and analogs prepared for treatment of the same class of ailments but with different therapeutic specifications; in contrast, analogs having low s would treat ailments from distinct classes. Intermittent spikes in d along the axis of c represent canyons in the activity landscape. This new representation accounts for distinctiveness through relative rises in s and d. It provides a mathematical basis for predicting the probability of occurrence of distinctiveness, and it identifies drug pairs at varying levels of distinctiveness and non-distinctiveness. The predicted probability formula is validated even if the data only approximately satisfy the conditions of its construction. Also, the postulated independence of structure and activity is of little significance to the overall assessment. The difference in the distinctive interactions obtained by s and d highlights the importance of studying both, and reveals how the choice of measurement can affect the interpretation. The methods in this paper can be used to interpret whether or not drug interactions are distinctive and the probability of their occurrence. Practitioners and researchers can rely on this identification for quantitative modeling and assessment.

  4. Probability of detection of clinical seizures using heart rate changes.

    PubMed

    Osorio, Ivan; Manly, B F J

    2015-08-01

    Heart rate-based seizure detection is a viable complement or alternative to ECoG/EEG. This study investigates the role of various biological factors on the probability of clinical seizure detection using heart rate. Regression models were applied to 266 clinical seizures recorded from 72 subjects to investigate whether factors such as age, gender, years with epilepsy, etiology, seizure site origin, seizure class, and data collection centers, among others, shape the probability of EKG-based seizure detection. Clinical seizure detection probability based on heart rate changes is significantly (p<0.001) shaped by patients' age and gender, seizure class, and years with epilepsy. The probability of detecting clinical seizures (>0.8 in the majority of subjects) using heart rate is highest for complex partial seizures, increases with a patient's years with epilepsy, is lower for females than for males, and is unrelated to the side of hemisphere origin. Clinical seizure detection probability using heart rate is multi-factorially dependent and sufficiently high (>0.8) in most cases to be clinically useful. Knowledge of the role that these factors play in shaping said probability will enhance its applicability and usefulness. Heart rate is a reliable and practical signal for extra-cerebral detection of clinical seizures originating from or spreading to central autonomic network structures. Copyright © 2015 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  5. Linear Classifier with Reject Option for the Detection of Vocal Fold Paralysis and Vocal Fold Edema

    NASA Astrophysics Data System (ADS)

    Kotropoulos, Constantine; Arce, Gonzalo R.

    2009-12-01

    Two distinct two-class pattern recognition problems are studied, namely, the detection of male subjects who are diagnosed with vocal fold paralysis against male subjects who are diagnosed as normal and the detection of female subjects who are suffering from vocal fold edema against female subjects who do not suffer from any voice pathology. To do so, utterances of the sustained vowel "ah" are employed from the Massachusetts Eye and Ear Infirmary database of disordered speech. Linear prediction coefficients extracted from the aforementioned utterances are used as features. The receiver operating characteristic curve of the linear classifier, that stems from the Bayes classifier when Gaussian class conditional probability density functions with equal covariance matrices are assumed, is derived. The optimal operating point of the linear classifier is specified with and without reject option. First results using utterances of the "rainbow passage" are also reported for completeness. The reject option is shown to yield statistically significant improvements in the accuracy of detecting the voice pathologies under study.
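    A Bayes classifier with Gaussian class-conditional densities and equal covariance matrices reduces to a linear discriminant, and a reject option simply abstains when the class posterior is too close to 0.5. A one-dimensional sketch under those assumptions (the parameter values and reject band below are illustrative, not taken from the study):

```python
import math

def lda_posterior(x, mu0, mu1, var, prior0=0.5):
    """Posterior P(class 1 | x) for 1-D Gaussian class-conditional densities
    with equal variance: the log-likelihood ratio is linear in x."""
    score = (mu1 - mu0) / var * x + (mu0**2 - mu1**2) / (2 * var)
    score += math.log((1 - prior0) / prior0)   # prior log-odds
    return 1.0 / (1.0 + math.exp(-score))

def classify_with_reject(x, mu0, mu1, var, reject_band=0.2):
    """Return 0, 1, or None (reject) when the posterior is near 0.5."""
    p1 = lda_posterior(x, mu0, mu1, var)
    if abs(p1 - 0.5) < reject_band:
        return None
    return 1 if p1 > 0.5 else 0
```

Sweeping the decision threshold of the linear score over its range traces out the receiver operating characteristic curve mentioned above; widening `reject_band` trades coverage for accuracy on the retained cases.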

  6. Forest inventory using multistage sampling with probability proportional to size. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

    1984-01-01

    A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.
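    Sampling with probability proportional to size (PPS) can be sketched with cumulative size totals, and a PPS-with-replacement sample can be expanded to a population total with the Hansen-Hurwitz estimator. The functions below are a generic illustration of that idea, not the paper's exact expansion procedure:

```python
import random
from bisect import bisect_left
from itertools import accumulate

def pps_sample(sizes, n, rng=random):
    """Draw n unit indices with probability proportional to size,
    with replacement, by inverting the cumulative size totals."""
    cum = list(accumulate(sizes))
    total = cum[-1]
    return [bisect_left(cum, rng.uniform(0, total)) for _ in range(n)]

def hansen_hurwitz(values, sizes, picks):
    """Hansen-Hurwitz estimator of the population total for a
    PPS-with-replacement sample: average of value / selection probability."""
    total = sum(sizes)
    n = len(picks)
    return sum(values[i] / (sizes[i] / total) for i in picks) / n
```

When the measured values are exactly proportional to the size measure, the estimator reproduces the true total from any sample, which is why a good size measure (here, classified area or photo-interpreted volume) reduces the number of ground plots needed.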

  7. An open source multivariate framework for n-tissue segmentation with evaluation on public data.

    PubMed

    Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C

    2011-12-01

    We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
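    The core of such an EM segmentation is alternating between class posterior (membership) probabilities and re-estimation of the class intensity models. A toy one-dimensional, two-class Gaussian-mixture analogue (this is a sketch of the general EM idea, not the Atropos implementation, and omits spatial priors and MRF terms):

```python
import math

def em_two_gaussians(xs, iters=50):
    """Minimal 1-D two-class EM: the E-step computes class posterior
    probabilities, the M-step re-estimates means, variances and
    mixing weights from the weighted data."""
    mu = [min(xs), max(xs)]          # crude initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities resp[i][k] = P(class k | x_i)
        resp = []
        for x in xs:
            pk = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                  / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(pk) or 1e-300    # guard against total underflow
            resp.append([p / s for p in pk])
        # M-step: weighted re-estimates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return mu, var, w
```

In an image segmenter the same E-step posteriors become the per-voxel label probabilities, optionally multiplied by spatial prior probability maps before normalization.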

  8. An Open Source Multivariate Framework for n-Tissue Segmentation with Evaluation on Public Data

    PubMed Central

    Tustison, Nicholas J.; Wu, Jue; Cook, Philip A.; Gee, James C.

    2012-01-01

    We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs (http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool. PMID:21373993

  9. Estimating transmission probability in schools for the 2009 H1N1 influenza pandemic in Italy.

    PubMed

    Clamer, Valentina; Dorigatti, Ilaria; Fumanelli, Laura; Rizzo, Caterina; Pugliese, Andrea

    2016-10-12

    Epidemic models are being extensively used to understand the main pathways of spread of infectious diseases, and thus to assess control methods. Schools are well known to represent hot spots for epidemic spread; hence, understanding typical patterns of infection transmission within schools is crucial for designing adequate control strategies. The attention that was given to the 2009 A/H1N1pdm09 flu pandemic has made it possible to collect detailed data on the occurrence of influenza-like illness (ILI) symptoms in two primary schools of Trento, Italy. The data collected in the two schools were used to calibrate a discrete-time SIR model, which was designed to estimate the probabilities of influenza transmission within the classes, grades and schools using Markov Chain Monte Carlo (MCMC) methods. We found that the virus was mainly transmitted within class, with lower levels of transmission between students in the same grade and even lower, though not significantly so, among different grades within the schools. We estimated median values of R 0 from the epidemic curves in the two schools of 1.16 and 1.40; on the other hand, we estimated the average number of students infected by the first school case to be 0.85 and 1.09 in the two schools. The discrepancy between the values of R 0 estimated from the epidemic curve or from the within-school transmission probabilities suggests that household and community transmission played an important role in sustaining the school epidemics. The high probability of infection between students in the same class confirms that targeting within-class transmission is key to controlling the spread of influenza in school settings and, as a consequence, in the general population.
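    A discrete-time SIR model of within-class spread can be sketched as a chain-binomial process: each susceptible escapes infection each day with probability (1 - p) per infectious classmate. This is a deliberately simplified single-class sketch; the paper's model additionally distinguishes class, grade and school transmission levels and is calibrated by MCMC, all omitted here:

```python
import random

def chain_binomial_sir(n_students, p_class, i0=1, days=30, rng=random):
    """Discrete-time (chain-binomial) SIR for one class: daily per-susceptible
    infection risk is 1 - (1 - p_class)**I; infecteds recover after one day.
    Returns the final size (total number ever infected)."""
    s, i, total_infected = n_students - i0, i0, i0
    for _ in range(days):
        if i == 0:
            break
        p_inf = 1 - (1 - p_class) ** i          # per-susceptible daily risk
        new_i = sum(1 for _ in range(s) if rng.random() < p_inf)
        s -= new_i
        i = new_i                                # one-day infectious period
        total_infected += new_i
    return total_infected
```

Averaging the number of cases generated by the index case over many simulated runs gives a within-school analogue of the "average number of students infected by the first school case" reported above.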

  10. Construction of a Calibrated Probabilistic Classification Catalog: Application to 50k Variable Sources in the All-Sky Automated Survey

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien

    2012-12-01

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  11. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.

    2012-12-15

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  12. Incorporating Covariates into Stochastic Blockmodels

    ERIC Educational Resources Information Center

    Sweet, Tracy M.

    2015-01-01

    Social networks in education commonly involve some form of grouping, such as friendship cliques or teacher departments, and blockmodels are a type of statistical social network model that accommodate these grouping or blocks by assuming different within-group tie probabilities than between-group tie probabilities. We describe a class of models,…

  13. Stretching Probability Explorations with Geoboards

    ERIC Educational Resources Information Center

    Wheeler, Ann; Champion, Joe

    2016-01-01

    Students are faced with many transitions in their middle school mathematics classes. To build knowledge, skills, and confidence in the key areas of algebra and geometry, students often need to practice using numbers and polygons in a variety of contexts. Teachers also want students to explore ideas from probability and statistics. Teachers know…

  14. A latent class analysis of bullies, victims and aggressive victims in Chinese adolescence: relations with social and school adjustments.

    PubMed

    Shao, Aihui; Liang, Lichan; Yuan, Chunyong; Bian, Yufang

    2014-01-01

    This study used latent class analysis (LCA) to identify and classify Chinese adolescent children's aggressive behaviors. It was found that (1) Adolescent children could be divided into four categories: general children, aggressive children, victimized children and aggressive victimized children. (2) There were significant gender differences among the aggressive victimized children, the aggressive children and the general children. Specifically, aggressive victimized children and aggressive children had greater probabilities of being boys; victimized children had equal probabilities of being boys or girls. (3) Significant differences in loneliness, depression, anxiety and academic achievement existed among the aggressive victims, the aggressors, the victims and the general children, in which the aggressive victims scored the worst on all questionnaires. (4) As protective factors, peer and teacher supports had important influences on children's aggressive and victimized behaviors. Relative to general children, aggressive victims, aggressive children and victimized children had lower probabilities of receiving peer supports. On the other hand, compared to general children, aggressive victims had lower probabilities of receiving teacher supports, while significant differences in the probability of receiving teacher supports did not exist between aggressive children and victimized children.
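    In LCA, a child's posterior class membership follows from Bayes' rule applied to the class prevalences and the item-response (conditional) probabilities. A minimal sketch of that computation; the prevalences and item probabilities below are hypothetical, not the study's estimates:

```python
def lca_posterior(pattern, prevalence, item_probs):
    """Posterior probability of each latent class given a binary response
    pattern: prior prevalence times the product of item-response
    probabilities, normalized across classes."""
    joint = []
    for c, pi in enumerate(prevalence):
        lik = pi
        for item, y in enumerate(pattern):
            p = item_probs[c][item]
            lik *= p if y == 1 else (1 - p)
        joint.append(lik)
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical 2-class model with two binary indicators (e.g. "aggressive",
# "victimized"): class 0 rarely endorses either, class 1 usually endorses both.
post = lca_posterior([1, 1], [0.7, 0.3], [[0.1, 0.1], [0.8, 0.9]])
```

A child endorsing both items is assigned to class 1 with high posterior probability even though that class is the less prevalent one, which is exactly how LCA separates, say, aggressive victims from general children.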

  15. A Latent Class Analysis of Bullies, Victims and Aggressive Victims in Chinese Adolescence: Relations with Social and School Adjustments

    PubMed Central

    Shao, Aihui; Liang, Lichan; Yuan, Chunyong; Bian, Yufang

    2014-01-01

    This study used latent class analysis (LCA) to identify and classify Chinese adolescent children's aggressive behaviors. It was found that (1) Adolescent children could be divided into four categories: general children, aggressive children, victimized children and aggressive victimized children. (2) There were significant gender differences among the aggressive victimized children, the aggressive children and the general children. Specifically, aggressive victimized children and aggressive children had greater probabilities of being boys; victimized children had equal probabilities of being boys or girls. (3) Significant differences in loneliness, depression, anxiety and academic achievement existed among the aggressive victims, the aggressors, the victims and the general children, in which the aggressive victims scored the worst on all questionnaires. (4) As protective factors, peer and teacher supports had important influences on children's aggressive and victimized behaviors. Relative to general children, aggressive victims, aggressive children and victimized children had lower probabilities of receiving peer supports. On the other hand, compared to general children, aggressive victims had lower probabilities of receiving teacher supports, while significant differences in the probability of receiving teacher supports did not exist between aggressive children and victimized children. PMID:24740096

  16. A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L

    2011-01-01

    Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes are dependent on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 |dimensions|), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25 to 35% improvement in overall classification accuracy over conventional classification schemes.

  17. The Jukes-Cantor Model of Molecular Evolution

    ERIC Educational Resources Information Center

    Erickson, Keith

    2010-01-01

    The material in this module introduces students to some of the mathematical tools used to examine molecular evolution. This topic is standard fare in many mathematical biology or bioinformatics classes, but could also be suitable for classes in linear algebra or probability. While coursework in matrix algebra, Markov processes, Monte Carlo…

  18. On the shape and likelihood of oceanic rogue waves.

    PubMed

    Benetazzo, Alvise; Ardhuin, Fabrice; Bergamasco, Filippo; Cavaleri, Luigi; Guimarães, Pedro Veras; Schwendeman, Michael; Sclavo, Mauro; Thomson, Jim; Torsello, Andrea

    2017-08-15

    We consider the observation and analysis of oceanic rogue waves collected within spatio-temporal (ST) records of 3D wave fields. This class of records, allowing a sea surface region to be retrieved, is appropriate for the observation of rogue waves, which come up as a random phenomenon that can occur at any time and location of the sea surface. To verify this aspect, we used three stereo wave imaging systems to gather ST records of the sea surface elevation, which were collected in different sea conditions. The wave with the ST maximum elevation (happening to be larger than the rogue threshold 1.25Hs) was then isolated within each record, along with its temporal profile. The rogue waves show similar profiles, in agreement with the theory of extreme wave groups. We analyze the rogue wave probability of occurrence, also in the context of ST extreme value distributions, and we conclude that rogue waves are more likely than previously reported; the key point is coming across them, in space as well as in time. The dependence of the rogue wave profile and likelihood on the sea state conditions is also investigated. Results may prove useful in predicting extreme wave occurrence probability and strength during oceanic storms.

  19. Evolution of cooperation in a finite homogeneous graph.

    PubMed

    Taylor, Peter D; Day, Troy; Wild, Geoff

    2007-05-24

    Recent theoretical studies of selection in finite structured populations have worked with one of two measures of selective advantage of an allele: fixation probability and inclusive fitness. Each approach has its own analytical strengths, but given certain assumptions they provide equivalent results. In most instances the structure of the population can be specified by a network of nodes connected by edges (that is, a graph), and much of the work here has focused on a continuous-time model of evolution, first described by ref. 11. Working in this context, we provide an inclusive fitness analysis to derive a surprisingly simple analytical condition for the selective advantage of a cooperative allele in any graph for which the structure satisfies a general symmetry condition ('bi-transitivity'). Our results hold for a broad class of population structures, including most of those analysed previously, as well as some for which a direct calculation of fixation probability has appeared intractable. Notably, under some forms of population regulation, the ability of a cooperative allele to invade is seen to be independent of the nature of population structure (and in particular of how game partnerships are specified) and is identical to that for an unstructured population. For other types of population regulation our results reveal that cooperation can invade if players choose partners along relatively 'high-weight' edges.

  20. Assessment of Gamma Radiation Resistance of Spores Isolated from the Spacecraft Assembly Facility During MSL Assembly

    NASA Technical Reports Server (NTRS)

    Chopra, Arsh; Ramirez, Gustavo A.; Venkateswaran, Kasthuri J.; Vaishampayan, Parag A.

    2011-01-01

    Spore forming bacteria, a common inhabitant of spacecraft assembly facilities, are known to tolerate extreme environmental conditions such as radiation, desiccation, and high temperatures. Since the Viking era (early 1970's), spores have been utilized to assess the degree and level of microbiological contamination on spacecraft and their associated spacecraft assembly facilities. There is a growing concern that desiccation and extreme radiation resistant spore forming microorganisms associated with spacecraft surfaces can withstand space environmental conditions and subsequently proliferate on another solar body. Such forward contamination would certainly jeopardize future life detection or sample return technologies. It is important to recognize that different classes of organisms are critical while calculating the probability of contamination, and methods must be devised to estimate their abundances. Microorganisms can be categorized based on radiation sensitivity as Type A, B, C, and D. Type C represents spores resistant to radiation (10% or greater survival above 0.8 Mrad gamma radiation). To address these questions we have purified 96 spore formers, isolated during planetary protection efforts of Mars Science Laboratory assembly for gamma radiation resistance. The spores purified and stored will be used to generate data that can be used further to model and predict the probability of forward contamination.

  1. Assessment of Gamma Radiation Resistance of Spores Isolated from the Spacecraft Assembly Facility During MSL Assembly

    NASA Technical Reports Server (NTRS)

    Chopra, Arsh; Ramirez, Gustavo A.; Vaishampayan, Parag A.; Venkateswaran, Kasthuri J.

    2011-01-01

    Spore forming bacteria, a common inhabitant of spacecraft assembly facilities, are known to tolerate extreme environmental conditions such as radiation, desiccation, and high temperatures. Since the Viking era (early 1970's), spores have been utilized to assess the degree and level of microbiological contamination on spacecraft and their associated spacecraft assembly facilities. There is a growing concern that desiccation and extreme radiation resistant spore forming microorganisms associated with spacecraft surfaces can withstand space environmental conditions and subsequently proliferate on another solar body. Such forward contamination would certainly jeopardize future life detection or sample return technologies. It is important to recognize that different classes of organisms are critical while calculating the probability of contamination, and methods must be devised to estimate their abundances. Microorganisms can be categorized based on radiation sensitivity as Type A, B, C, and D. Type C represents spores resistant to radiation (10% or greater survival above 0.8 Mrad gamma radiation). To address these questions we have purified 96 spore formers, isolated during planetary protection efforts of Mars Science Laboratory assembly for gamma radiation resistance. The spores purified and stored will be used to generate data that can be used further to model and predict the probability of forward contamination.

  2. Optimum space shuttle launch times relative to natural environment

    NASA Technical Reports Server (NTRS)

    King, R. L.

    1977-01-01

    The probabilities of favorable and unfavorable weather conditions for launch and landing of the STS under different criteria were computed for every three hours on a yearly basis using 14 years of weather data. These temporal probability distributions were considered for three sets of weather criteria encompassing benign, moderate and severe weather conditions for both Kennedy Space Center and Edwards Air Force Base. In addition, the conditional probabilities of unfavorable weather conditions occurring after a delay, which may or may not be due to weather conditions, were computed. Also computed for KSC were the probabilities of favorable landing conditions at various times after favorable launch conditions have prevailed. The probabilities were computed to indicate the significance of each weather element to the overall result.

  3. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  4. An Alternative Version of Conditional Probabilities and Bayes' Rule: An Application of Probability Logic

    ERIC Educational Resources Information Center

    Satake, Eiki; Amato, Philip P.

    2008-01-01

    This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…
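    The truth-table view of conditional probability amounts to enumerating the joint distribution over elementary outcomes and summing over the rows that satisfy each compound statement. A small sketch of Bayes' rule computed that way; the disease-testing numbers below are a standard textbook example, not from the paper:

```python
from fractions import Fraction

def conditional(joint, event, given):
    """P(event | given) from an enumerated joint distribution:
    joint maps outcomes to probabilities; event/given are predicates."""
    num = sum(p for o, p in joint.items() if event(o) and given(o))
    den = sum(p for o, p in joint.items() if given(o))
    return num / den

# Hypothetical numbers: 1% prevalence, 90% sensitivity, 5% false positives.
# Outcomes are (disease status, test result).
joint = {
    ("D", "+"): Fraction(1, 100) * Fraction(9, 10),
    ("D", "-"): Fraction(1, 100) * Fraction(1, 10),
    ("H", "+"): Fraction(99, 100) * Fraction(5, 100),
    ("H", "-"): Fraction(99, 100) * Fraction(95, 100),
}
p_disease_given_pos = conditional(joint,
                                  lambda o: o[0] == "D",
                                  lambda o: o[1] == "+")
```

Here the posterior P(disease | positive) works out to 2/13, the classic illustration that a positive result from a rare-condition test is still more likely a false alarm than not.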

  5. Nonlinear GARCH model and 1/f noise

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Ruseckas, J.

    2015-06-01

    Auto-regressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models, and they continue to attract the interest of researchers. In this contribution we consider the well-known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power-law statistics, the probability density function and the power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes in consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power-law statistics. We show that the linear GARCH(1,1) process has a power-law distribution, but its power spectral density is Brownian noise-like. However, the nonlinear modifications exhibit both a power-law distribution and a power spectral density of the 1/f^β form, including 1/f noise.

  6. Rank and independence in contingency table

    NASA Astrophysics Data System (ADS)

    Tsumoto, Shusaku

    2004-04-01

    A contingency table summarizes the conditional frequencies of two attributes and shows how these two attributes are dependent on each other. Thus, this table is a fundamental tool for pattern discovery with conditional probabilities, such as rule discovery. In this paper, a contingency table is interpreted from the viewpoint of statistical independence and granular computing. The first important observation is that a contingency table compares two attributes with respect to the number of equivalence classes. For example, an n × n table compares two attributes with the same granularity, while an m × n (m ≥ n) table compares two attributes with different granularities. The second important observation is that matrix algebra is a key tool for the analysis of this table. In particular, the rank, as a degree of independence, plays a very important role in evaluating the degree of statistical independence. Relations between rank and the degree of dependence are also investigated.
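    The link between rank and independence is concrete: two attributes are statistically independent exactly when the contingency table is an outer product of its margins, i.e. has rank 1, while higher rank indicates dependence. A sketch using plain Gaussian elimination (our own helper, assuming exact counts and a small float tolerance):

```python
def matrix_rank(rows, tol=1e-9):
    """Rank of a small numeric matrix by Gaussian elimination.
    Rank 1 for a contingency table means the two attributes are independent."""
    m = [list(map(float, r)) for r in rows]
    rank = 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(rank, len(m)) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]   # bring pivot row up
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank
```

For example, [[2, 4], [3, 6]] factors as the outer product of (2, 3) and (1, 2), so its rank is 1 (independent), whereas the identity table [[1, 0], [0, 1]] has full rank 2 (maximal dependence).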

  7. Latent class analysis of accident risks in usage-based insurance: Evidence from Beijing.

    PubMed

    Jin, Wen; Deng, Yinglu; Jiang, Hai; Xie, Qianyan; Shen, Wei; Han, Weijian

    2018-06-01

    Car insurance is quickly becoming a big data industry, with usage-based insurance (UBI) poised to potentially change the business of insurance. Telematics data, which are transmitted from wireless devices in cars, are widely used in UBI to obtain individual-level travel and driving characteristics. While most existing studies have introduced telematics data into car insurance pricing, the telematics-related characteristics are obtained directly from the raw data. In this study, we propose to quantify drivers' familiarity with their driving routes and develop models to quantify drivers' accident risks using the telematics data. In addition, we build a latent class model to study the heterogeneity in travel and driving styles based on the telematics data, which has not been investigated in the literature. Our main results include: (1) adding telematics-related characteristics yields a statistically significant improvement in model fit; (2) drivers' familiarity with their driving trips is critical to identifying high-risk drivers, and the relationship between drivers' familiarity and accident risks is non-linear; (3) the drivers can be classified into two classes, where the first class is the low-risk class with 0.54% of its drivers reporting accidents, and the second class is the high-risk class with 20.66% of its drivers reporting accidents; and (4) for the low-risk class, drivers with a high probability of reporting accidents can be identified by travel-behavior-related characteristics, while for the high-risk class, they can be identified by driving-behavior-related characteristics. The driver's familiarity affects the probability of reporting accidents for both classes. Copyright © 2018 Elsevier Ltd. All rights reserved.
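    The reported class-conditional accident rates (0.54% for the low-risk class, 20.66% for the high-risk class) can be turned into a posterior class-membership probability via Bayes' rule. The class prior below is a hypothetical assumption, since the abstract does not report class sizes:

```python
# Posterior probability of high-risk class membership given an accident.
# The class-conditional accident rates are from the study's two-class result;
# the class prior p_high is an assumed, illustrative value.
p_high = 0.30              # assumed share of drivers in the high-risk class
p_acc_given_high = 0.2066  # 20.66% of high-risk drivers report accidents
p_acc_given_low = 0.0054   # 0.54% of low-risk drivers report accidents

# Bayes' rule: P(high | accident) = P(acc | high) P(high) / P(acc)
p_acc = p_high * p_acc_given_high + (1 - p_high) * p_acc_given_low
p_high_given_acc = p_high * p_acc_given_high / p_acc
print(round(p_high_given_acc, 3))  # ~0.943: accidents concentrate in the high-risk class
```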

  8. The effects of rurality on substance use disorder diagnosis: A multiple-groups latent class analysis.

    PubMed

    Brooks, Billy; McBee, Matthew; Pack, Robert; Alamian, Arsham

    2017-05-01

    Rates of accidental overdose mortality from substance use disorder (SUD) have risen dramatically in the United States since 1990. Between 1999 and 2004 alone, rates increased 62% nationwide, with rural overdose mortality increasing at a rate 3 times that seen in urban populations. Cultural differences between rural and urban populations (e.g., educational attainment, unemployment rates, social characteristics, etc.) affect the nature of SUD, leading to disparate risk of overdose across these communities. Multiple-groups latent class analysis with covariates was applied to data from the 2011 and 2012 National Survey on Drug Use and Health (n = 12,140) to examine potential differences in latent classifications of SUD between rural and urban adult (aged 18 years and older) populations. Nine drug categories were used to identify latent classes of SUD defined by probability of diagnosis within these categories. Once the class structures were established for rural and urban samples, posterior membership probabilities were entered into a multinomial regression analysis of socio-demographic predictors' association with the likelihood of SUD latent class membership. Latent class structures differed across the sub-groups, with the rural sample fitting a 3-class structure (bootstrap likelihood ratio test P value = 0.03) and the urban sample fitting a 6-class model (bootstrap likelihood ratio test P value < 0.0001). Overall, the rural sample exhibited less diversity in class structure and lower prevalence of SUD in multiple drug categories (e.g., cocaine, hallucinogens, and stimulants). This result supports the hypothesis that different underlying elements exist in the two populations that affect SUD patterns, and thus can inform the development of surveillance instruments, clinical services, and prevention programming tailored to specific communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Influence of crown class, diameter, and sprout rank on red maple (Acer rubrum L.) development during forest succession in Connecticut

    Treesearch

    Jeffery S. Ward; George R. Stephens

    1993-01-01

    Crown class, stem diameter, and sprout rank of 2067 red maples on medium-quality sites were measured at 10-yr intervals between 1927 and 1987. Nominal stand age was 25 yrs in 1927. There was a progressive increase, from suppressed through dominant crown classes, in the probability of an individual red maple ascending into the upper canopy and persisting there...

  10. Analytical performance evaluation of SAR ATR with inaccurate or estimated models

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.

    2004-09-01

    Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
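    The relative entropies mentioned in this record can be computed in closed form for univariate Gaussians. A sketch of the discrepancy between an actual signature distribution and an inaccurately trained model (the parameter errors below are invented for illustration):

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Relative entropy D(P || Q) between univariate Gaussians P and Q."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Actual signature distribution vs. an inaccurately trained model; the
# assumed parameter errors (mean shift 0.2, variance 1.3) are illustrative.
mu_actual, var_actual = 0.0, 1.0
mu_model, var_model = 0.2, 1.3

d = kl_gaussian(mu_actual, var_actual, mu_model, var_model)
print(d)  # > 0: the discrepancy that separates predicted from actual error rates
```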

  11. The roles of a process development group in biopharmaceutical process startup.

    PubMed

    Goochee, Charles F

    2002-01-01

    The transfer of processes for biotherapeutic products into finalmanufacturing facilities was frequently problematic during the 1980's and early 1990's, resulting in costly delays to licensure(Pisano 1997). While plant startups for this class of products can become chaotic affairs, this is not an inherent or intrinsic feature. Major classes of process startup problems have been identified andmechanisms have been developed to reduce their likelihood of occurrence. These classes of process startup problems and resolution mechanisms are the major topic of this article. With proper planning and sufficient staffing, the probably of a smooth process startup for a biopharmaceutical product can be very high - i.e., successful process performance will often beachieved within the first two full-scale process lots in the plant. The primary focus of this article is the role of the Process Development Group in helping to assure this high probability of success.

  12. Universality classes of fluctuation dynamics in hierarchical complex systems

    NASA Astrophysics Data System (ADS)

    Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.

    2017-03-01

    A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.

  13. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class, N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through a combination of analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques on a 2-class, 4-feature discrimination problem with previously reported results, and on 4-class, 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
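    For the two-class univariate Gaussian case, the minimum probability of error this record refers to can be checked by direct numerical integration of min(p0·f0, p1·f1). This is a one-dimensional illustration, not the paper's M-class, N-feature algorithm:

```python
import numpy as np

def bayes_error_1d(mu0, s0, mu1, s1, p0=0.5, n=20001):
    """Minimum (Bayes) probability of error for two univariate Gaussian
    classes, by numerically integrating min(p0*f0(x), p1*f1(x))."""
    lo = min(mu0 - 8 * s0, mu1 - 8 * s1)
    hi = max(mu0 + 8 * s0, mu1 + 8 * s1)
    x = np.linspace(lo, hi, n)

    def pdf(x, mu, s):
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    integrand = np.minimum(p0 * pdf(x, mu0, s0), (1 - p0) * pdf(x, mu1, s1))
    return float(np.sum(integrand) * (x[1] - x[0]))

# Equal-variance, equal-prior case has the closed form Phi(-|mu1 - mu0| / (2 s)),
# so means 0 and 2 with unit variance should give Phi(-1).
err = bayes_error_1d(0.0, 1.0, 2.0, 1.0)
print(err)  # ~0.1587, i.e. Phi(-1)
```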

  14. Spatial distribution and occurrence probability of regional new particle formation events in eastern China

    NASA Astrophysics Data System (ADS)

    Shen, Xiaojing; Sun, Junying; Kivekäs, Niku; Kristensson, Adam; Zhang, Xiaoye; Zhang, Yangmei; Zhang, Lu; Fan, Ruxia; Qi, Xuefei; Ma, Qianli; Zhou, Huaigang

    2018-01-01

    In this work, the spatial extent of new particle formation (NPF) events and the relative probability of observing particles originating from different spatial origins around three rural sites in eastern China were investigated using the NanoMap method, based on particle number size distribution (PNSD) data and air mass back trajectories. The lengths of the datasets used were 7, 1.5, and 3 years at the rural sites Shangdianzi (SDZ) in the North China Plain (NCP), Mt. Tai (TS) in central eastern China, and Lin'an (LAN) in the Yangtze River Delta region in eastern China, respectively. Regional NPF events were observed to occur with a horizontal extent larger than 500 km at SDZ and TS, favoured by the fast transport of northwesterly air masses. At LAN, however, the spatial footprint of NPF events was mostly observed around the site within 100-200 km. Differences in the horizontal spatial distribution of new particle source areas at the different sites were connected to the typical meteorological conditions at each site. Consecutive large-scale regional NPF events were observed at SDZ and TS simultaneously and were associated with a high surface pressure system dominating over this area. Simultaneous NPF events at SDZ and LAN were seldom observed. At SDZ the polluted air masses arriving over the NCP were associated with a higher particle growth rate (GR) and new particle formation rate (J) than air masses from Inner Mongolia (IM). At TS the same phenomenon was observed for J, but GR was somewhat lower in air masses arriving over the NCP compared to those arriving from IM. The capability of NanoMap to capture the NPF occurrence probability depends not only on the length of the PNSD dataset but also on the topography around the measurement site and the typical air mass advection speed during NPF events. Thus long-term measurements of PNSD in the planetary boundary layer are necessary for further study of the spatial extent and probability of NPF events.
The spatial extent, relative probability of occurrence, and typical evolution of PNSD during NPF events presented in this study provide valuable information to further understand the climate and air quality effects of new particle formation.

  15. Analysing designed experiments in distance sampling

    Treesearch

    Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block

    2009-01-01

    Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
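    The distance-sampling idea of a class-specific detection function can be sketched with the commonly used half-normal form g(x) = exp(-x²/2σ²); the σ values, truncation distance, and counts below are all hypothetical:

```python
import numpy as np

def effective_strip_width(sigma, w, n=10001):
    """Effective strip half-width mu = integral_0^w g(x) dx for a half-normal
    detection function g(x) = exp(-x^2 / (2 sigma^2))."""
    x = np.linspace(0.0, w, n)
    g = np.exp(-(x**2) / (2.0 * sigma**2))
    return float(np.sum(g) * (x[1] - x[0]))

# Detectability differs by class, e.g. adults (sigma = 40 m) vs. juveniles
# (sigma = 20 m); both sigmas and the 100 m truncation distance are made up.
w = 100.0
mu_adult = effective_strip_width(40.0, w)
mu_juv = effective_strip_width(20.0, w)

# Density per class: D = n_detections / (2 * mu * L) for line-transect length L.
L = 5000.0
d_adult = 12 / (2.0 * mu_adult * L)  # 12 hypothetical adult detections

print(mu_adult > mu_juv)  # True: the more detectable class has a wider strip
```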

  16. Some Factor Analytic Approximations to Latent Class Structure.

    ERIC Educational Resources Information Center

    Dziuban, Charles D.; Denton, William T.

    Three procedures, alpha, image, and uniqueness rescaling, were applied to a joint occurrence probability matrix. That matrix was the basis of a well-known latent class structure. The values of the recurring subscript elements were varied as follows: Case 1 - The known elements were input; Case 2 - The upper bounds to the recurring subscript…

  17. Identity Orientations: Definition, Assessment, and Personal Correlates. A Teaching Module.

    ERIC Educational Resources Information Center

    Carducci, Bernardo J.

    Probably no other concept comes closer to encompassing the core of personality psychology than the concept of the self. This teaching activity provides instructors with a self-contained teaching module--including lecture material, an in-class activity, suggestions for in-class discussion, and supporting references--on the topic of identity…

  18. An Alternative Teaching Method of Conditional Probabilities and Bayes' Rule: An Application of the Truth Table

    ERIC Educational Resources Information Center

    Satake, Eiki; Vashlishan Murray, Amy

    2015-01-01

    This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…
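    The truth-table approach described here amounts to enumerating the joint distribution over the atomic cases and computing conditional probabilities as ratios of table sums. A sketch with invented screening-test numbers:

```python
# Joint probabilities over two events (Disease, PositiveTest), laid out as
# the rows of a truth table. The numbers are invented for illustration:
# P(D) = 0.01, sensitivity P(+|D) = 0.95, false-positive rate P(+|~D) = 0.05.
joint = {
    (True, True): 0.01 * 0.95,
    (True, False): 0.01 * 0.05,
    (False, True): 0.99 * 0.05,
    (False, False): 0.99 * 0.95,
}

def prob(pred):
    """Sum the probabilities of the truth-table rows satisfying a predicate."""
    return sum(p for row, p in joint.items() if pred(*row))

# A conditional probability is a ratio of two table sums; with these numbers
# Bayes' rule P(D|+) = P(+|D) P(D) / P(+) falls out of the same computation.
p_pos = prob(lambda d, t: t)
p_d_given_pos = prob(lambda d, t: d and t) / p_pos
print(round(p_d_given_pos, 3))  # 0.0095 / 0.059 ≈ 0.161
```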

  19. Patterns and predictors of violence against children in Uganda: a latent class analysis

    PubMed Central

    Clarke, Kelly; Patalay, Praveetha; Allen, Elizabeth; Knight, Louise; Naker, Dipak; Devries, Karen

    2016-01-01

    Objective To explore patterns of physical, emotional and sexual violence against Ugandan children. Design Latent class and multinomial logistic regression analysis of cross-sectional data. Setting Luwero District, Uganda. Participants In all, 3706 primary 5, 6 and 7 students attending 42 primary schools. Main outcome measures To measure violence, we used the International Society for the Prevention of Child Abuse and Neglect Child Abuse Screening Tool—Child Institutional. We used the Strengths and Difficulties Questionnaire to assess mental health and administered reading, spelling and maths tests. Results We identified three violence classes. Class 1 (N=696, 18.8%) was characterised by emotional and physical violence by parents and relatives, and sexual and emotional abuse by boyfriends, girlfriends and unrelated adults outside school. Class 2 (N=975, 26.3%) was characterised by physical, emotional and sexual violence by peers (male and female students). Children in Classes 1 and 2 also had a high probability of exposure to emotional and physical violence by school staff. Class 3 (N=2035, 54.9%) was characterised by physical violence by school staff and a lower probability of all other forms of violence compared to Classes 1 and 2. Children in Classes 1 and 2 were more likely to have worked for money (Class 1 relative risk ratio 1.97, 95% CI 1.54 to 2.51; Class 2: 1.55, 1.29 to 1.86), been absent from school in the previous week (Class 1: 1.31, 1.02 to 1.67; Class 2: 1.34, 1.10 to 1.63) and to have more mental health difficulties (Class 1: 1.09, 1.07 to 1.11; Class 2: 1.11, 1.09 to 1.13) compared to children in Class 3. Female sex (3.44, 2.48 to 4.78) and the number of children sharing a sleeping area predicted membership of Class 1. Conclusions Childhood violence in Uganda forms distinct patterns, clustered by perpetrator and setting.
Research is needed to understand experiences of victimised children, and to develop mental health interventions for those with severe violence exposures. Trial registration number NCT01678846; Results. PMID:27221125

  20. RN jurisdiction over nursing care systems in nursing homes: application of latent class analysis

    PubMed Central

    Corazzini, Kirsten N.; Anderson, Ruth A.; Mueller, Christine; Thorpe, Joshua M.; McConnell, Eleanor S.

    2015-01-01

    Background In the context of declining registered nurse (RN) staffing levels in nursing homes, professional nursing jurisdiction over nursing care systems may erode. Objectives The purpose of this study is to develop a typology of professional nursing jurisdiction in nursing homes in relation to characteristics of RN staffing, drawing upon Abbott's (1988) tasks and jurisdictions framework. Method The study was a cross-sectional, observational study using the 2004 National Nursing Home Survey (N=1,120 nursing homes). Latent class analysis tested whether RN staffing indicators differentiated facilities in a typology of RN jurisdiction, and compared classes on key organizational environment characteristics. Multiple logistic regression analysis related the emergent classes to presence or absence of specialty care programs in 8 clinical areas. Results Three classes of capacity for jurisdiction were identified, including ‘low capacity’ (41% of homes) with low probabilities of having any indicators of RN jurisdiction, ‘mixed capacity’ (26% of homes) with moderate to high probabilities of having higher RN education and staffing levels, and ‘high capacity’ (32% of homes) with moderate to high probabilities of having almost all indicators of RN jurisdiction. ‘High capacity’ homes were more likely to have specialty care programs relative to ‘low capacity’ homes; such homes were less likely to be chain-owned, and more likely to be larger, provide higher technical levels of patient care, have unionized nursing assistants, have a lower ratio of LPNs to RNs, and a higher education level of the administrator. Discussion Findings provide preliminary support for the theoretical framework as a starting point to move beyond extensive reliance on staffing levels and mix as indicators of quality. Further, findings indicate the importance of RN specialty certification. PMID:22166907

  1. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    USGS Publications Warehouse

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. 
The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.

  2. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for the credible risk classes are interpreted.
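    The limited-fluctuation (full credibility) standard mentioned in this record has a simple closed form; the sketch below uses the common Poisson-claim-frequency version, with illustrative probability/tolerance parameters and made-up claim counts and rates:

```python
from statistics import NormalDist

def full_credibility_standard(p=0.90, k=0.05):
    """Expected claim count for full credibility of a Poisson claim
    frequency: n_full = (z / k)^2, with z the (1 + p) / 2 normal quantile."""
    z = NormalDist().inv_cdf((1.0 + p) / 2.0)
    return (z / k) ** 2

def credibility_factor(n, n_full):
    """Limited-fluctuation partial credibility Z = min(1, sqrt(n / n_full))."""
    return min(1.0, (n / n_full) ** 0.5)

n_full = full_credibility_standard()   # ~1082 claims at p = 0.90, k = 5%
Z = credibility_factor(300, n_full)    # 300 observed claims (made up)

# Credibility-weighted estimate: blend the risk class's own claim frequency
# with a GLM estimate (both rates below are hypothetical).
blended = Z * 0.12 + (1.0 - Z) * 0.10
print(round(n_full), round(Z, 3))
```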

  3. [Protection regionalization of Houshi Forest Park based on landscape sensitivity].

    PubMed

    Zhou, Rui; Li, Yue-hui; Hu, Yuan-man; Zhang, Jia-hui; Liu, Miao

    2009-03-01

    Using GIS technology, and selecting slope, relative distance to viewpoints, relative distance to tourism roads, visual probability of viewpoints, and visual probability of tourism roads as indices, the landscape sensitivity of Houshi Forest Park was assessed, and an integrated assessment model was established. The AHP method was utilized to determine the weights of the indices and, further, to identify the integrated sensitivity class of the areas in the Park. Four classes of integrated sensitivity area were delineated. Class I had an area of 297.24 hm2, occupying 22.9% of the total area of the Park; it should be strictly protected to maintain the natural landscape, with any exploitation or construction prohibited. Class II had an area of 359.72 hm2, accounting for 27.8% of the total; the hills in this area should be protected from destruction to preserve vegetation and water, though simple byways and stone paths could be built. Class III had an area of 495.80 hm2, occupying 38.3% of the total; it could be moderately exploited, with artificial landscape advocated to beautify and set off the natural landscape. Class IV had the smallest area (142.80 hm2), accounting for 11% of the total; it has the greatest potential for exploitation, making it possible to build large-scale integrated tourism facilities and travel roads.
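    The AHP weighting step mentioned above is typically done with the principal eigenvector of a reciprocal pairwise-comparison matrix. A sketch with a hypothetical 3 × 3 comparison of indices (the judgments are invented, not the paper's):

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP index weights from the principal eigenvector of a reciprocal
    pairwise-comparison matrix, normalized to sum to 1."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Hypothetical 3 x 3 comparison of three sensitivity indices (e.g. slope vs.
# distance to viewpoints vs. distance to tourism roads); a_ij = 3 means the
# row index is judged 3 times as important as the column index.
A = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_weights(A)
print(np.round(w, 3))  # descending weights that sum to 1
```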

  4. Cloning, expression and biochemical characterization of one Epsilon-class (GST-3) and ten Delta-class (GST-1) glutathione S-transferases from Drosophila melanogaster, and identification of additional nine members of the Epsilon class.

    PubMed Central

    Sawicki, Rafał; Singh, Sharda P; Mondal, Ashis K; Benes, Helen; Zimniak, Piotr

    2003-01-01

    From the fruitfly, Drosophila melanogaster, ten members of the cluster of Delta-class glutathione S-transferases (GSTs; formerly denoted as Class I GSTs) and one member of the Epsilon-class cluster (formerly GST-3) have been cloned, expressed in Escherichia coli, and their catalytic properties have been determined. In addition, nine more members of the Epsilon cluster have been identified through bioinformatic analysis but not further characterized. Of the 11 expressed enzymes, seven accepted the lipid peroxidation product 4-hydroxynonenal as substrate, and nine were active in glutathione conjugation of 1-chloro-2,4-dinitrobenzene. Since the enzymically active proteins included the gene products of DmGSTD3 and DmGSTD7 which were previously deemed to be pseudogenes, we investigated them further and determined that both genes are transcribed in Drosophila. Thus our present results indicate that DmGSTD3 and DmGSTD7 are probably functional genes. The existence and multiplicity of insect GSTs capable of conjugating 4-hydroxynonenal, in some cases with catalytic efficiencies approaching those of mammalian GSTs highly specialized for this function, indicates that metabolism of products of lipid peroxidation is a highly conserved biochemical pathway with probable detoxification as well as regulatory functions. PMID:12443531

  5. The Probability Approach to English If-Conditional Sentences

    ERIC Educational Resources Information Center

    Wu, Mei

    2012-01-01

    Users of the Probability Approach choose the right one from four basic types of conditional sentences--factual, predictive, hypothetical and counterfactual conditionals, by judging how likely (i.e. the probability) the event in the result-clause will take place when the condition in the if-clause is met. Thirty-three students from the experimental…

  6. PROBABILITY SURVEYS , CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  7. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP), can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  8. Laser radar system for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Bers, Karlheinz; Schulz, Karl R.; Armbruster, Walter

    2005-09-01

    The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser radars, which are built by the EADS company and are presently being flight-tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from objects at distances of military relevance with a high hit-and-detect probability. The development of advanced 3D-scene analysis algorithms has increased the recognition probability and reduced the false alarm rate by using more readily recognizable objects such as terrain, poles, pylons, trees, etc. to generate a parametric description of the terrain surface as well as the class, position, orientation, size and shape of all objects in the scene. The sensor system and the implemented algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition. This paper describes different 3D-imaging ladar sensors with a common system architecture but different components matched to different military applications. Emphasis is placed on an obstacle warning system with a high probability of detecting thin wires, the real-time processing of the measured range image data, and obstacle classification and visualization.

  9. Teleporting an unknown quantum state with unit fidelity and unit probability via a non-maximally entangled channel and an auxiliary system

    NASA Astrophysics Data System (ADS)

    Rashvand, Taghi

    2016-11-01

    We present a new scheme for quantum teleportation in which one can teleport an unknown state via a non-maximally entangled channel with certainty, using an auxiliary system. In this scheme, depending on the state of the auxiliary system, one can find a class of orthogonal vector sets to serve as measurement bases; by performing a von Neumann measurement in any element of this class, Alice can teleport an unknown state with unit fidelity and unit probability. A comparison of our scheme with some previous schemes is given, showing that our scheme has advantages the others do not.

  10. A new sampler design for measuring sedimentation in streams

    USGS Publications Warehouse

    Hedrick, Lara B.; Welsh, S.A.; Hedrick, J.D.

    2005-01-01

    Sedimentation alters aquatic habitats and negatively affects fish and invertebrate communities but is difficult to quantify. To monitor bed load sedimentation, we designed a sampler with a 10.16-cm polyvinyl chloride coupling and removable sediment trap. We conducted a trial study of our samplers in riffle and pool habitats upstream and downstream of highway construction on a first-order Appalachian stream. Sediment samples were collected over three 6-week intervals, dried, and separated into five size-classes by means of nested sieves (U.S. standard sieve numbers 4, 8, 14, and 20). Downstream sediment accumulated in size-classes 1 and 2, and the total amount accumulated was significantly greater during all three sampling periods. Size-classes 3 and 4 had significantly greater amounts of sediment for the first two sampling periods at the downstream site. Differences between upstream and downstream sites narrowed during the 5-month sampling period. This probably reflects changes in site conditions, including the addition of more effective sediment control measures after the first 6-week period of the study. The sediment sampler design allowed for long-term placement of traps without continual disturbance of the streambed and was successful at providing repeat measures of sediment at paired sites. © Copyright by the American Fisheries Society 2005.

  11. Power-law tail probabilities of drainage areas in river basins

    USGS Publications Warehouse

    Veitzer, S.A.; Troutman, B.M.; Gupta, V.K.

    2003-01-01

    The significance of power-law tail probabilities of drainage areas in river basins was discussed. The convergence to a power law was not observed for all underlying distributions, but for a large class of statistical distributions with specific limiting properties. The article also discussed the scaling properties of topologic and geometric network properties in river basins.
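    Tail indices of power-law distributions like these are commonly estimated with the Hill estimator over the k largest observations. This sketch uses a synthetic Pareto sample (not river-basin data) so the true index is known:

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimator of the tail index alpha for P(X > x) ~ x^(-alpha),
    based on the k largest order statistics."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]  # descending order
    return 1.0 / float(np.mean(np.log(x[:k]) - np.log(x[k])))

# Synthetic Pareto sample with known tail index alpha = 1.5, support [1, inf).
rng = np.random.default_rng(42)
alpha_true = 1.5
sample = rng.pareto(alpha_true, size=200_000) + 1.0
alpha_hat = hill_estimator(sample, k=2_000)
print(alpha_hat)  # close to 1.5
```

    The choice of k trades bias against variance; for real drainage-area data one would inspect a Hill plot over a range of k rather than fix it a priori.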

  12. Personnel reliability impact on petrochemical facilities monitoring system's failure skipping probability

    NASA Astrophysics Data System (ADS)

    Kostyukov, V. N.; Naumenko, A. P.

    2017-08-01

    The paper addresses the urgent issue of evaluating the impact of the actions of operators of complex technological systems on safe operation, considering the application of condition monitoring systems to elements and sub-systems of petrochemical production facilities. The main task of the research is to distinguish factors and criteria for describing monitoring system properties that would allow evaluating the impact of personnel errors on the operation of real-time condition monitoring and diagnostic systems for machinery at petrochemical facilities, and to find objective criteria for monitoring system class that take the human factor into account. On the basis of the real-time condition monitoring concepts of sudden failure skipping risk and static and dynamic error, one may evaluate the impact that personnel qualification has on monitoring system operation, in terms of errors in personnel or operators' actions while receiving information from monitoring systems and operating a technological system. The operator is considered part of the technological system, and personnel behavior is treated as a combination of the following stages: input signal (information perception), reaction (decision making), and response (decision implementation). Based on several studies of the behavior of nuclear power station operators in the USA, Italy and other countries, as well as on research conducted by Russian scientists, the required data on operator reliability were selected for analysis of operator behavior with diagnostic and monitoring systems at technological facilities.
    The calculations revealed that for the monitoring system selected as an example, the failure skipping risk for the set values of static (less than 0.01) and dynamic (less than 0.001) errors, considering all related factors of data on reliability of information perception, decision making, and reaction, is 0.037; in the case when all the facilities and the error probability are under control, it is not more than 0.027. In the case when only pump and compressor units are under control, the failure skipping risk is not more than 0.022, when the probability of error in the operator's actions is not more than 0.011. The results show that operator reliability can be assessed in this way for almost any kind of production, but only with respect to technological capabilities, since operators' psychological and general training vary considerably across production industries. Using the latest technologies of engineering psychology and the design of data support, situation assessment, decision-making and response systems, as well as achievements in condition monitoring in various production industries, one can evaluate the hazardous condition skipping risk probability considering static and dynamic errors and the human factor.

  13. Probability Surveys, Conditional Probability, and Ecological Risk Assessment

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  14. The impact of motivational interviewing on participation in childbirth preparation classes and having a natural delivery: a randomised trial.

    PubMed

    Rasouli, M; AtashSokhan, G; Keramat, A; Khosravi, A; Fooladi, E; Mousavi, S A

    2017-03-01

    This study aimed to determine the effectiveness of motivational interviewing on women's participation in childbirth classes and their subsequent natural vaginal delivery. Randomised controlled trial. Prenatal clinic of the Shohada Women's Hospital, Behshahr, Mazandaran, Iran. This study was conducted with 230 nulliparous women. Participants were randomised into three groups, including 76 women in the motivational interviewing group, and 77 women in both the lecture and the control groups. Participants were assessed at three time points, including at baseline (16-19 weeks of gestation) and then following the intervention (at 21 and 37 weeks of gestation). The motivational interviewing group received two focus interviews and two telephone follow-up sessions (at 3 and 6 weeks after the last session of motivational interviewing). The lecture group received a speech session. The control group received routine care service. Frequency of participation in childbirth preparation classes and mode of delivery. Over 90% of women in the motivational interviewing group participated in childbirth preparation classes, whereas the rate of participation in the lecture and the control groups was 59.7 and 27.3%, respectively. The probability of maternal participation in childbirth classes in the motivational interviewing and in the lecture groups was 3.3 (95% CI 2.1-4.5) and 2.2 (95% CI 1.4-3.0) times the probability of maternal participation in the control group, respectively. Moreover, the intervention groups had 1.4 (95% CI 1.1-1.8) and 1.1 (95% CI 0.9-1.4) times the probability of natural delivery, compared with the control group. The frequency of natural delivery in motivational interviewing, lecture, and control groups was 68.4, 54.5, and 48.1%, respectively. The results showed a statistically significant difference between the mean scores for the awareness and attitude scores between the three groups in different time periods. 
We found that motivational interviewing can be a useful tool for encouraging pregnant women to attend childbirth preparation classes. Motivational interviewing with nulliparous women is strongly associated with their attendance in childbirth preparation classes. © 2016 Royal College of Obstetricians and Gynaecologists.

  15. Searching for chemical classes among metal-poor stars using medium-resolution spectroscopy

    NASA Astrophysics Data System (ADS)

    Cruz, Monique A.; Cogo-Moreira, Hugo; Rossi, Silvia

    2018-04-01

    Astronomy is in the era of large spectroscopy surveys, with the spectra of hundreds of thousands of stars in the Galaxy being collected. Although most of these surveys have low or medium resolution, which makes precise abundance measurements impossible, there is still important information to be extracted from the available data. Our aim is to identify chemically distinct classes among metal-poor stars, observed by the Sloan Digital Sky Survey, using line indices. The present work focused on carbon-enhanced metal-poor (CEMP) stars and their subclasses. We applied the latent profile analysis technique to line indices for carbon, barium, iron and europium, in order to separate the sample into classes with similar chemical signatures. This technique provides not only the number of possible groups but also the probability of each object belonging to each class. The method was able to distinguish at least two classes among the observed sample, with one of them being probable CEMP stars enriched in s-process elements. However, it was not able to separate CEMP-no stars from the rest of the sample. Latent profile analysis is a powerful model-based tool to be used in the identification of patterns in astrophysics. Our tests show the potential of the technique for the attainment of additional chemical information from 'poor' data.
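    Latent profile analysis of the kind described above can be approximated with a Gaussian mixture model. The sketch below is a minimal illustration on synthetic "line indices" (all values invented), using scikit-learn's GaussianMixture as a stand-in for LPA software, BIC to choose the number of classes, and predict_proba for the per-object class membership probabilities the abstract mentions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-index data: a "normal" group and a carbon/s-process-enhanced group
normal = rng.normal([0.2, 0.1], 0.05, size=(300, 2))
cemp_s = rng.normal([0.8, 0.6], 0.05, size=(60, 2))
X = np.vstack([normal, cemp_s])

# Choose the number of latent classes by BIC, as is common in LPA
bics = {k: GaussianMixture(k, random_state=0).fit(X).bic(X) for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)

gm = GaussianMixture(best_k, random_state=0).fit(X)
probs = gm.predict_proba(X)  # membership probability of each star in each class
```

With well-separated synthetic groups, BIC typically selects two classes here, mirroring the paper's finding of at least two chemically distinct classes.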

  16. Comparison of hoop-net trapping and visual surveys to monitor abundance of the Rio Grande cooter (Pseudemys gorzugi).

    PubMed

    Mali, Ivana; Duarte, Adam; Forstner, Michael R J

    2018-01-01

    Abundance estimates play an important part in the regulatory and conservation decision-making process. It is important to correct monitoring data for imperfect detection when using these data to track spatial and temporal variation in abundance, especially in the case of rare and elusive species. This paper presents the first attempt to estimate abundance of the Rio Grande cooter ( Pseudemys gorzugi ) while explicitly considering the detection process. Specifically, in 2016 we monitored this rare species at two sites along the Black River, New Mexico via traditional baited hoop-net traps and less invasive visual surveys to evaluate the efficacy of these two sampling designs. We fitted the Huggins closed-capture estimator to estimate capture probabilities using the trap data and distance sampling models to estimate detection probabilities using the visual survey data. We found that only the visual survey with the highest number of observed turtles resulted in similar abundance estimates to those estimated using the trap data. However, the estimates of abundance from the remaining visual survey data were highly variable and often underestimated abundance relative to the estimates from the trap data. We suspect this pattern is related to changes in the basking behavior of the species and, thus, the availability of turtles to be detected even though all visual surveys were conducted when environmental conditions were similar. Regardless, we found that riverine habitat conditions limited our ability to properly conduct visual surveys at one site. Collectively, this suggests visual surveys may not be an effective sample design for this species in this river system. When analyzing the trap data, we found capture probabilities to be highly variable across sites and between age classes and that recapture probabilities were much lower than initial capture probabilities, highlighting the importance of accounting for detectability when monitoring this species. 
Although baited hoop-net traps seem to be an effective sampling design, it is important to note that this method required a relatively high trap effort to reliably estimate abundance. This information will be useful when developing a larger-scale, long-term monitoring program for this species of concern.
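    The core correction the study relies on, adjusting raw counts for imperfect detection, can be sketched in its simplest form: divide the observed count by an estimated detection (or capture) probability. This is a deliberate simplification of the Huggins and distance-sampling models actually fitted in the paper, with invented numbers:

```python
# Minimal sketch of detection-corrected abundance; a simplification of the
# Huggins / distance-sampling idea, not the models used in the paper.
def corrected_abundance(count, detection_prob):
    """Estimate true abundance as observed count / detection probability."""
    if not 0 < detection_prob <= 1:
        raise ValueError("detection probability must be in (0, 1]")
    return count / detection_prob

# e.g. 30 turtles observed with an assumed 40% chance of detecting any one
print(corrected_abundance(30, 0.4))  # -> 75.0
```

The division makes explicit why ignoring detectability (implicitly assuming p = 1) underestimates abundance whenever detection is imperfect.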

  17. Hemispheric Interaction, Task Complexity, and Emotional Valence: Evidence from Naturalistic Images

    ERIC Educational Resources Information Center

    Hughes, Andrew J.; Rutherford, Barbara J.

    2013-01-01

    Two experiments extend the ecological validity of tests of hemispheric interaction in three novel ways. First, we present a broad class of naturalistic stimuli that have not yet been used in tests of hemispheric interaction. Second, we test whether probable differences in complexity within the class of stimuli are supported by outcomes from…

  18. HLA-matched sibling bone marrow transplantation for β-thalassemia major

    PubMed Central

    Sabloff, Mitchell; Chandy, Mammen; Wang, Zhiwei; Logan, Brent R.; Ghavamzadeh, Ardeshir; Li, Chi-Kong; Irfan, Syed Mohammad; Bredeson, Christopher N.; Cowan, Morton J.; Gale, Robert Peter; Hale, Gregory A.; Horan, John; Hongeng, Suradej; Eapen, Mary

    2011-01-01

    We describe outcomes after human leukocyte antigen-matched sibling bone marrow transplantation (BMT) for 179 patients with β-thalassemia major. The median age at transplantation was 7 years and the median follow-up was 6 years. The distribution of Pesaro risk class I, II, and III categories was 2%, 42%, and 36%, respectively. The day 30 cumulative incidence of neutrophil recovery and day 100 platelet recovery were 90% and 86%, respectively. Seventeen patients had graft failure, which was fatal in 11. Six of 9 patients with graft failure are alive after a second transplantation. The day 100 probability of acute graft-versus-host disease and 5-year probability of chronic graft-versus-host disease was 38% and 13%, respectively. The 5-year probabilities of overall- and disease-free survival were 91% and 88%, respectively, for patients with Pesaro risk class II, and 64% and 62%, respectively, for Pesaro risk class III. In multivariate analysis, mortality risks were higher in patients 7 years of age and older and those with hepatomegaly before BMT. The leading causes of death were interstitial pneumonitis (n = 7), hemorrhage (n = 8), and veno-occlusive disease (n = 6). Proceeding to BMT in children younger than 7 years before development of end-organ damage, particularly in the liver, should improve results after BMT for β-thalassemia major. PMID:21119108

  19. The Marine Corps Needs a Targeting, Sensors, and Surveillance Systems Operational Integration and Support Team

    DTIC Science & Technology

    2010-03-02

    triggerman is probably still close; lately all IEDs in the area have been initiated via command-wire. The squad leader sets a cordon, ensures an IED 9… Operational Surveillance System (G-BOSS) with a Class IIIb laser pointer. This class of laser requires users to receive a laser safety class… (2) The Keyhole kit of surveillance equipment. Designed to provide "snipers with an increased capability to visually detect the enemy emplacing IEDs

  20. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class being present at each region. All the regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
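    The multi-hash-code-matching idea in the final step can be illustrated with a toy scheme (our own simplified matching rule, not necessarily the authors' exact one): each image carries one binary code per primitive class, and two images are compared by matching each query code to its nearest database code in Hamming distance, then averaging:

```python
def hamming(a: str, b: str) -> int:
    """Hamming distance between two equal-length binary code strings."""
    return sum(x != y for x, y in zip(a, b))

def multi_hash_distance(codes_q, codes_db):
    """For each primitive code of the query image, find its closest code in
    the candidate image, then average; a simplified multi-code matching score
    (lower = more similar)."""
    return sum(min(hamming(q, d) for d in codes_db) for q in codes_q) / len(codes_q)

query = ["1010", "0011"]      # one short code per primitive class (toy values)
candidate = ["1010", "0111"]
print(multi_hash_distance(query, candidate))  # -> 0.5
```

Ranking archive images by this score and returning the smallest distances gives a retrieval scheme that respects per-primitive similarity rather than a single global code.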

  1. Patterns of perceived barriers to medical care in older adults: a latent class analysis.

    PubMed

    Thorpe, Joshua M; Thorpe, Carolyn T; Kennelty, Korey A; Pandhi, Nancy

    2011-08-03

    This study examined multiple dimensions of healthcare access in order to develop a typology of perceived barriers to healthcare access in community-dwelling elderly. Secondary aims were to define distinct classes of older adults with similar perceived healthcare access barriers and to examine predictors of class membership to identify risk factors for poor healthcare access. A sample of 5,465 community-dwelling elderly was drawn from the 2004 wave of the Wisconsin Longitudinal Study. Perceived barriers to healthcare access were measured using items from the Group Health Association of America Consumer Satisfaction Survey. We used latent class analysis to assess the constellation of items measuring perceived barriers in access and multinomial logistic regression to estimate how risk factors affected the probability of membership in the latent barrier classes. Latent class analysis identified four classes of older adults. Class 1 (75% of sample) consisted of individuals with an overall low level of risk for perceived access problems (No Barriers). Class 2 (5%) perceived problems with the availability/accessibility of healthcare providers such as specialists or mental health providers (Availability/Accessibility Barriers). Class 3 (18%) perceived problems with how well their providers' operations are organized to accommodate their needs and preferences (Accommodation Barriers). Class 4 (2%) perceived problems with all dimensions of access (Severe Barriers). Results also revealed that healthcare affordability is a problem shared by members of all three barrier groups, suggesting that older adults with perceived barriers tend to face multiple, co-occurring problems. Compared to those classified into the No Barriers group, those in the Severe Barriers class were more likely to live in a rural county, have no health insurance, have depressive symptomatology, and have speech limitations. 
Those classified into the Availability/Accessibility Barriers group were more likely to live in rural and micropolitan counties, have depressive symptomatology, more chronic conditions, and hearing limitations. Those in the Accommodation group were more likely to have depressive symptomatology and cognitive limitations. The current study identified a typology of perceived barriers in healthcare access in older adults. The identified risk factors for membership in perceived barrier classes could potentially assist healthcare organizations and providers with targeting polices and interventions designed to improve access in their most vulnerable older adult populations, particularly those in rural areas, with functional disabilities, or in poor mental health.

  2. A global analysis of traits predicting species sensitivity to habitat fragmentation

    USGS Publications Warehouse

    Keinath, Douglas; Doak, Daniel F.; Hodges, Karen E.; Prugh, Laura R.; Fagan, William F.; Sekercioglu, Cagan H.; Buchart, Stuart H. M.; Kauffman, Matthew J.

    2017-01-01

    Aim: Elucidating patterns in species responses to habitat fragmentation is an important focus of ecology and conservation, but studies are often geographically restricted, taxonomically narrow or use indirect measures of species vulnerability. We investigated predictors of species presence after fragmentation using data from studies around the world that included all four terrestrial vertebrate classes, thus allowing direct inter-taxonomic comparison. Location: World-wide. Methods: We used generalized linear mixed-effect models in an information theoretic framework to assess the factors that explained species presence in remnant habitat patches (3342 patches; 1559 species, mostly birds; and 65,695 records of patch-specific presence-absence). We developed a novel metric of fragmentation sensitivity, defined as the maximum rate of change in probability of presence with changing patch size ('Peak Change'), to distinguish between general rarity on the landscape and sensitivity to fragmentation per se. Results: Size of remnant habitat patches was the most important driver of species presence. Across all classes, habitat specialists, carnivores and larger species had a lower probability of presence, and those effects were substantially modified by interactions. Sensitivity to fragmentation (measured by Peak Change) was influenced primarily by habitat type and specialization, but also by fecundity, life span and body mass. Reptiles were more sensitive than other classes. Grassland species had a lower probability of presence, though sample size was relatively small, but forest and shrubland species were more sensitive. Main conclusions: Habitat relationships were more important than life-history characteristics in predicting the effects of fragmentation. Habitat specialization increased sensitivity to fragmentation and interacted with class and habitat type; forest specialists and habitat-specific reptiles were particularly sensitive to fragmentation. 
Our results suggest that when conservationists are faced with disturbances that could fragment habitat they should pay particular attention to specialists, particularly reptiles. Further, our results highlight that the probability of presence in fragmented landscapes and true sensitivity to fragmentation are predicted by different factors.

  3. Decision making under uncertainty: a quasimetric approach.

    PubMed

    N'Guyen, Steve; Moulin-Frier, Clément; Droulez, Jacques

    2013-01-01

    We propose a new approach for solving a class of discrete decision making problems under uncertainty with positive cost. This issue concerns multiple and diverse fields such as engineering, economics, artificial intelligence, cognitive science and many others. Basically, an agent has to choose a single action or a series of actions from a set of options, without knowing their consequences for sure. Schematically, two main approaches have been followed: either the agent learns which option is the correct one to choose in a given situation by trial and error, or the agent already has some knowledge of the possible consequences of its decisions, this knowledge generally being expressed as a conditional probability distribution. In the latter case, several optimal or suboptimal methods have been proposed to exploit this uncertain knowledge in various contexts. In this work, we propose a different approach, based on the geometric intuition of distance. More precisely, we define a goal-independent quasimetric structure on the state space, taking into account both cost function and transition probability. We then compare precision and computation time with classical approaches.
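    The notion of a goal-independent quasimetric on the state space can be illustrated with a minimal sketch: take expected one-step costs between states (which need not be symmetric, hence "quasi") and compute all-pairs shortest costs with Floyd-Warshall. This is an illustration of the geometric idea, not the authors' exact construction:

```python
import math

def quasimetric(cost):
    """All-pairs shortest expected cost via Floyd-Warshall.
    cost[i][j] = expected one-step cost of moving i -> j (may be asymmetric);
    math.inf marks an impossible transition."""
    n = len(cost)
    d = [row[:] for row in cost]
    for i in range(n):
        d[i][i] = 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Asymmetric toy costs: reaching state 2 from 0 is dearer than the reverse
C = [[0, 1, math.inf],
     [math.inf, 0, 3],
     [1, math.inf, 0]]
D = quasimetric(C)
print(D[0][2], D[2][0])  # -> 4 1, so d(0,2) != d(2,0): a quasimetric
```

Because the structure is goal independent, the same distance table serves any choice of goal state, which is the practical appeal of the approach described in the abstract.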

  4. Modelling volatility recurrence intervals in the Chinese commodity futures market

    NASA Astrophysics Data System (ADS)

    Zhou, Weijie; Wang, Zhengxin; Guo, Haiming

    2016-09-01

    The law of extreme event occurrence attracts much research. The volatility recurrence intervals of Chinese commodity futures market prices are studied: the results show that the probability distributions of the scaled volatility recurrence intervals have a uniform scaling curve for different thresholds q, so the probability distribution of extreme events can be deduced from that of normal events. The tail of the scaling curve can be well fitted by a Weibull form, confirmed by Kolmogorov-Smirnov significance tests. Both short-term and long-term memories are present in the recurrence intervals with different thresholds q, which indicates that the recurrence intervals can be predicted. In addition, similar to volatility itself, volatility recurrence intervals also have clustering features. Through Monte Carlo simulation, we artificially synthesise ARMA and GARCH-class sequences similar to the original data and identify the reason behind the clustering: the larger the parameter d of the FIGARCH model, the stronger the clustering effect. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration (FIACD) model to analyse the recurrence interval characteristics. The results indicate that the FIACD model may provide a method to analyse volatility recurrence intervals.
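    The recurrence-interval computation at the heart of the analysis is simple to state: intervals are the gaps between successive times the volatility exceeds a threshold q, and the scaled intervals are fitted with a Weibull form. A minimal sketch on synthetic data (a plain absolute-Gaussian series stands in for real volatility; the 95th percentile is an arbitrary example threshold):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
vol = np.abs(rng.standard_normal(20000))   # stand-in for a volatility series

q = np.quantile(vol, 0.95)                 # threshold: top 5% of volatility
exceed = np.flatnonzero(vol > q)           # times of extreme events
intervals = np.diff(exceed)                # recurrence intervals between them

scaled = intervals / intervals.mean()      # scale by the mean interval
# Fit a Weibull to the scaled intervals (location fixed at 0)
shape, loc, scale = stats.weibull_min.fit(scaled, floc=0)
```

Repeating this for several thresholds q and overlaying the scaled distributions is how one would check for the uniform scaling curve the abstract reports.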

  5. The estimation of tree posterior probabilities using conditional clade probability distributions.

    PubMed

    Larget, Bret

    2013-07-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
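    The central estimator can be sketched concretely: a tree's posterior probability is approximated as the product, over its internal nodes, of the conditional frequency of each split given its parent clade, with frequencies counted from the posterior sample. The toy below uses an invented, heavily simplified tree encoding (each tree reduced to a list of (clade, split) pairs) purely to show the counting and the product:

```python
from collections import Counter

def ccd_probability(tree, clade_counts, split_counts):
    """Estimate a tree's posterior probability as the product of conditional
    split frequencies given each clade (a simplified sketch of Larget's
    conditional clade distribution estimator)."""
    p = 1.0
    for clade, split in tree:
        p *= split_counts[(clade, split)] / clade_counts[clade]
    return p

# A toy posterior sample of 4-taxon trees, each listed as (clade, split) pairs
sample = [
    [(frozenset("ABCD"), (frozenset("AB"), frozenset("CD")))],
    [(frozenset("ABCD"), (frozenset("AB"), frozenset("CD")))],
    [(frozenset("ABCD"), (frozenset("AC"), frozenset("BD")))],
]
clade_counts, split_counts = Counter(), Counter()
for t in sample:
    for clade, split in t:
        clade_counts[clade] += 1
        split_counts[(clade, split)] += 1

p = ccd_probability(sample[0], clade_counts, split_counts)
print(p)  # -> 2/3: two of the three sampled trees contain this split
```

Because the probability factorizes over clades, a tree never seen in the sample still gets a nonzero estimate as long as all of its clades appear in sampled trees, which is exactly the property the abstract highlights.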

  6. Automatic classification of spectra from the Infrared Astronomical Satellite (IRAS)

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John; Self, Matthew; Taylor, William; Goebel, John; Volk, Kevin; Walker, Helen

    1989-01-01

    A new classification of Infrared spectra collected by the Infrared Astronomical Satellite (IRAS) is presented. The spectral classes were discovered automatically by a program called Auto Class 2. This program is a method for discovering (inducing) classes from a data base, utilizing a Bayesian probability approach. These classes can be used to give insight into the patterns that occur in the particular domain, in this case, infrared astronomical spectroscopy. The classified spectra are the entire Low Resolution Spectra (LRS) Atlas of 5,425 sources. There are seventy-seven classes in this classification and these in turn were meta-classified to produce nine meta-classes. The classification is presented as spectral plots, IRAS color-color plots, galactic distribution plots and class commentaries. Cross-reference tables, listing the sources by IRAS name and by Auto Class class, are also given. These classes show some of the well known classes, such as the black-body class, and silicate emission classes, but many other classes were unsuspected, while others show important subtle differences within the well known classes.

  7. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
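    The two-stage recipe described above (color white Gaussian noise to the target spectrum, then push it through the normal CDF and the target inverse CDF) can be sketched minimally. The filter shape and the exponential target distribution here are arbitrary choices for illustration, not the paper's examples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 4096
white = rng.standard_normal(n)

# Stage 1: impose a desired spectral content on the Gaussian sample
# (an assumed low-pass filter, as an example target spectrum)
f = np.fft.rfftfreq(n, d=1.0)
H = 1.0 / np.sqrt(1.0 + (10 * f) ** 2)
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()           # renormalise to unit variance

# Stage 2: inverse-transform the colored Gaussian to the target density
# (exponential here); ranks are preserved, so the correlation structure
# is approximately preserved, as the paper notes.
u = stats.norm.cdf(colored)        # uniform marginals
x = stats.expon.ppf(u)             # samples with exponential marginals
```

The resulting x has (approximately) the chosen spectrum together with exactly exponential one-point statistics, which is the engineering approximation the abstract describes.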

  8. Adaptive output-feedback control for switched stochastic uncertain nonlinear systems with time-varying delay.

    PubMed

    Song, Zhibao; Zhai, Junyong

    2018-04-01

    This paper addresses the problem of adaptive output-feedback control for a class of switched stochastic time-delay nonlinear systems with uncertain output function, where both the control coefficients and time-varying delay are unknown. The drift and diffusion terms are subject to unknown homogeneous growth condition. By virtue of adding a power integrator technique, an adaptive output-feedback controller is designed to render that the closed-loop system is bounded in probability, and the state of switched stochastic nonlinear system can be globally regulated to the origin almost surely. A numerical example is provided to demonstrate the validity of the proposed control method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Quantum probability and conceptual combination in conjunctions.

    PubMed

    Hampton, James A

    2013-06-01

    I consider the general problem of category conjunctions in the light of Pothos & Busemeyer (P&B)'s quantum probability (QP) account of the conjunction fallacy. I argue that their account as presented cannot capture the "guppy effect" - the case in which a class is a better member of a conjunction A^B than it is of either A or B alone.

  10. Reweighting Data in the Spirit of Tukey: Using Bayesian Posterior Probabilities as Rasch Residuals for Studying Misfit

    ERIC Educational Resources Information Center

    Dardick, William R.; Mislevy, Robert J.

    2016-01-01

    A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…

  11. Classroom Research: Assessment of Student Understanding of Sampling Distributions of Means and the Central Limit Theorem in Post-Calculus Probability and Statistics Classes

    ERIC Educational Resources Information Center

    Lunsford, M. Leigh; Rowell, Ginger Holmes; Goodson-Espy, Tracy

    2006-01-01

    We applied a classroom research model to investigate student understanding of sampling distributions of sample means and the Central Limit Theorem in post-calculus introductory probability and statistics courses. Using a quantitative assessment tool developed by previous researchers and a qualitative assessment tool developed by the authors, we…
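    The object of study here, the sampling distribution of the sample mean and the Central Limit Theorem, is easy to demonstrate by simulation; a short sketch like the following (exponential population with mean 1 and sd 1, sizes chosen arbitrarily) is the kind of demonstration such courses use:

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 40, 10000

# Many samples from a skewed (exponential) population with mean 1, sd 1
samples = rng.exponential(scale=1.0, size=(reps, n))
means = samples.mean(axis=1)

# CLT: sample means cluster near the population mean, with a standard
# deviation close to sigma / sqrt(n) despite the skewed population
print(round(float(means.mean()), 1))               # -> 1.0
print(round(float(means.std() * np.sqrt(n)), 1))   # -> 1.0
```

A histogram of `means` would look approximately normal even though the population itself is strongly skewed, which is precisely the misconception-prone point the assessment targets.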

  12. Solving inverse problem for Markov chain model of customer lifetime value using flower pollination algorithm

    NASA Astrophysics Data System (ADS)

    Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji

    2015-12-01

    Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV under the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified for several different cases and classification schemes. In these models, the probabilities of customer retention and acquisition play an important role. As shown by Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main use of the obtained transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
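    The forward model that the inverse problem targets can be sketched with the Pfeifer and Carraway-style infinite-horizon formula CLV = sum_t [P/(1+d)]^t R = (I - P/(1+d))^{-1} R, which shows the nonlinear dependence on the transition matrix P mentioned above. All numbers below are invented toy values:

```python
import numpy as np

# Two customer states: 0 = active, 1 = lapsed (toy transition matrix)
P = np.array([[0.7, 0.3],     # active stays active w.p. 0.7
              [0.2, 0.8]])    # lapsed is re-acquired w.p. 0.2
R = np.array([100.0, -10.0])  # per-period margin in each state
d = 0.10                      # discount rate

# Infinite-horizon CLV per starting state:
# CLV = sum_t [P/(1+d)]^t R = (I - P/(1+d))^{-1} R
clv = np.linalg.solve(np.eye(2) - P / (1 + d), R)
print(clv)  # expected discounted value starting from each state
```

Because CLV depends on P through a matrix inverse, recovering P from observed CLV values has no closed form, which is why the paper resorts to a metaheuristic search.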

  13. Time-reversal symmetric resolution of unity without background integrals in open quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatano, Naomichi, E-mail: hatano@iis.u-tokyo.ac.jp; Ordonez, Gonzalo, E-mail: gordonez@butler.edu

    2014-12-15

    We present a new complete set of states for a class of open quantum systems, to be used in expansion of the Green’s function and the time-evolution operator. A remarkable feature of the complete set is that it observes time-reversal symmetry in the sense that it contains decaying states (resonant states) and growing states (anti-resonant states) in parallel. We can thereby pinpoint the breaking of time-reversal symmetry to the choice of whether we solve the Schrödinger equation as an initial-condition problem or a terminal-condition problem. Another feature of the complete set is that in the subspace of the central scattering area of the system, it consists of contributions of all states with point spectra but does not contain any background integrals. In computing the time evolution, we can clearly see which point spectrum produces which time dependence. In the whole infinite state space, the complete set does contain an integral, but it is over unperturbed eigenstates of the environmental area of the system and hence can be calculated analytically. We demonstrate the usefulness of the complete set by computing explicitly the survival probability and the escaping probability as well as the dynamics of wave packets. The origin of each term of matrix elements is clear in our formulation; in particular, the exponential decays are due to the resonance poles.

  14. [Does elitism of school influence the smoking-related health behaviour among grammar school students?].

    PubMed

    Józwicki, Wojciech; Gołda, Ryszard; Domaniewska, Jolanta; Skok, Zdzisław; Jarzemski, Piotr; Przybylski, Grzegorz; Domaniewski, Jan

    2009-01-01

    The aim of the study was to assess smoking-related health behaviour among public (SZP) and nonpublic (SZN) grammar school students. We analysed 156 anonymous questionnaires containing questions on parents' education, the family's material situation, physical education, social relations with family and peers, and positive or negative perception of smoking. In the total sample we observed a strong positive correlation between style of smoking or number of cigarettes smoked and positive perception of smoking (r = 0.62 and r = 0.36, respectively). The latter correlated significantly with the presence of smoking in the family (r = 0.18). The percentages of smoking students in SZP and SZN differed, at 22% and 18% respectively. Within classes I/II of SZP, smoking depended on the family's material position (r = 0.28) and on positive perception of smoking (r = 0.68). Among students of class III of SZP, the dependence on material situation was stronger (r = 0.49), while students of class III of SZN came to perceive smoking more positively (r = 0.82). Social relations of students in classes I/II of SZN were inversely proportional to the prevalence of smoking in their families. Smoking students of class III of SZN differed much more widely in comparison with pupils of SZP. The main motivation for smoking among school students was the positive perception of smoking. The differences in smoking prevalence between the two types of school, probably formed in the families and observed in pupils of classes I/II, vanished by class III. School elitism does not protect students from smoking: by class III of SZN, smoking acquires a clearly positive image and becomes established. Existing school anti-nicotine programmes should probably convey the negative health effects of smoking much more decisively.

  15. Prediction of incidence and stability of alcohol use disorders by latent internalizing psychopathology risk profiles in adolescence and young adulthood.

    PubMed

    Behrendt, Silke; Bühringer, Gerhard; Höfler, Michael; Lieb, Roselind; Beesdo-Baum, Katja

    2017-10-01

    Comorbid internalizing mental disorders in alcohol use disorders (AUD) can be understood as putative independent risk factors for AUD or as expressions of underlying shared psychopathology vulnerabilities. However, it remains unclear whether: 1) specific latent internalizing psychopathology risk-profiles predict AUD-incidence and 2) specific latent internalizing comorbidity-profiles in AUD predict AUD-stability. To investigate baseline latent internalizing psychopathology risk profiles as predictors of subsequent AUD-incidence and -stability in adolescents and young adults. Data from the prospective-longitudinal EDSP study (baseline age 14-24 years) were used. The study design included up to three follow-up assessments over up to ten years. DSM-IV mental disorders were assessed with the DIA-X/M-CIDI. To investigate risk-profiles and their associations with AUD-outcomes, latent class analysis with auxiliary outcome variables was applied. AUD-incidence: a 4-class model (N=1683) was identified (classes: normative-male [45.9%], normative-female [44.2%], internalizing [5.3%], nicotine dependence [4.5%]). Compared to the normative-female class, all other classes were associated with a higher risk of subsequent incident alcohol dependence (p<0.05). AUD-stability: a 3-class model (N=1940) was identified with only one class (11.6%) with high probabilities for baseline AUD. This class was further characterized by elevated substance use disorder (SUD) probabilities and predicted any subsequent AUD (OR 8.5, 95% CI 5.4-13.3). An internalizing vulnerability may constitute a pathway to AUD incidence in adolescence and young adulthood. In contrast, no indication of a role of internalizing comorbidity profiles in AUD-stability was found, which may indicate a limited importance of such profiles - in contrast to SUD-related profiles - in AUD stability. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Digital classification of Landsat data for vegetation and land-cover mapping in the Blackfoot River watershed, southeastern Idaho

    USGS Publications Warehouse

    Pettinger, L.R.

    1982-01-01

    This paper documents the procedures, results, and final products of a digital analysis of Landsat data used to produce a vegetation and land-cover map of the Blackfoot River watershed in southeastern Idaho. Resource classes were identified at two levels of detail: generalized Level I classes (for example, forest land and wetland) and detailed Levels II and III classes (for example, conifer forest, aspen, wet meadow, and riparian hardwoods). Training set statistics were derived using a modified clustering approach. Environmental stratification that separated uplands from lowlands improved discrimination between resource classes having similar spectral signatures. Digital classification was performed using a maximum likelihood algorithm. Classification accuracy was determined on a single-pixel basis from a random sample of 25-pixel blocks. These blocks were transferred to small-scale color-infrared aerial photographs, and the image area corresponding to each pixel was interpreted. Classification accuracy, expressed as percent agreement of digital classification and photo-interpretation results, was 83.0 ± 2.1 percent (0.95 probability level) for generalized (Level I) classes and 52.2 ± 2.8 percent (0.95 probability level) for detailed (Levels II and III) classes. After the classified images were geometrically corrected, two types of maps were produced of Level I and Levels II and III resource classes: color-coded maps at a 1:250,000 scale, and flatbed-plotter overlays at a 1:24,000 scale. The overlays are more useful because of their larger scale, familiar format to users, and compatibility with other types of topographic and thematic maps of the same scale.

  17. An evaluation of open set recognition for FLIR images

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2015-05-01

    Typical supervised classification algorithms label inputs according to what was learned in a training phase. Thus, test inputs that were not seen in training are always given incorrect labels. Open set recognition algorithms address this issue by accounting for inputs that are not present in training and providing the classifier with an option to "reject" unknown samples. A number of such techniques have been developed in the literature, many of which are based on support vector machines (SVMs). One approach, the 1-vs-set machine, constructs a "slab" in feature space using the SVM hyperplane. Inputs falling on one side of the slab or within the slab belong to a training class, while inputs falling on the far side of the slab are rejected. We note that rejection of unknown inputs can be achieved by thresholding class posterior probabilities. Another recently developed approach, the Probabilistic Open Set SVM (POS-SVM), empirically determines good probability thresholds. We apply the 1-vs-set machine, POS-SVM, and closed set SVMs to FLIR images taken from the Comanche SIG dataset. Vehicles in the dataset are divided into three general classes: wheeled, armored personnel carrier (APC), and tank. For each class, a coarse pose estimate (front, rear, left, right) is taken. In a closed set sense, we analyze these algorithms for prediction of vehicle class and pose. To test open set performance, one or more vehicle classes are held out from training. By considering closed and open set performance separately, we may closely analyze both inter-class discrimination and threshold effectiveness.
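
    The posterior-thresholding idea mentioned above can be sketched in a few lines. This toy is illustrative only, not the POS-SVM or the Comanche SIG data: the simulated clusters, the RBF-kernel SVM, and the 0.7 threshold are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two known training classes; a third class appears only at test time.
X_known = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y_known = np.array([0] * 50 + [1] * 50)
X_unknown = rng.normal([1.5, -3.0], 0.5, (20, 2))  # never seen in training

clf = SVC(probability=True).fit(X_known, y_known)  # Platt-scaled posteriors

def predict_open_set(clf, X, threshold=0.7):
    """Label inputs, or return -1 ("reject") when no posterior clears the threshold."""
    proba = clf.predict_proba(X)
    labels = clf.classes_[proba.argmax(axis=1)]
    labels[proba.max(axis=1) < threshold] = -1
    return labels

print(predict_open_set(clf, X_unknown))
```

    POS-SVM's contribution, per the abstract, is choosing that threshold empirically rather than fixing it by hand.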

  18. School in the Hospital. Bulletin, 1949, No. 3

    ERIC Educational Resources Information Center

    Mackie, Romaine P.; Fitzgerald, Margaret

    1949-01-01

    For many years, there have been schools or classes for child patients in hospitals here and there in the United States, but even today this service is far from adequate. It is probable that thousands of hospitalized children are having no schooling at all while they are in these institutions and it is certain that many existing hospital classes do…

  19. A Regional Simulation to Explore Impacts of Resource Use and Constraints

    DTIC Science & Technology

    2007-03-01

    mountaintops. (10) Deciduous Forest - This class is composed of forests which contain at least 75% deciduous trees in the canopy, deciduous ... trees, pine plantations, and evergreen woodlands. (12) Mixed Forest - This class includes forests with mixed deciduous/coniferous canopies, natural ... reflective surfaces. Classification of forested wetlands dominated by deciduous trees is probably more accurate than that in areas with ...

  20. A social network-informed latent class analysis of patterns of substance use, sexual behavior, and mental health: Social Network Study III, Winnipeg, Manitoba, Canada.

    PubMed

    Hopfer, Suellen; Tan, Xianming; Wylie, John L

    2014-05-01

    We assessed whether a meaningful set of latent risk profiles could be identified in an inner-city population through individual and network characteristics of substance use, sexual behaviors, and mental health status. Data came from 600 participants in Social Network Study III, conducted in 2009 in Winnipeg, Manitoba, Canada. We used latent class analysis (LCA) to identify risk profiles and, with covariates, to identify predictors of class. A 4-class model of risk profiles fit the data best: (1) solitary users reported polydrug use at the individual level, but low probabilities of substance use or concurrent sexual partners with network members; (2) social-all-substance users reported polydrug use at the individual and network levels; (3) social-noninjection drug users reported less likelihood of injection drug and solvent use; (4) low-risk users reported low probabilities across substances. Unstable housing, preadolescent substance use, age, and hepatitis C status predicted risk profiles. Incorporation of social network variables into LCA can distinguish important subgroups with varying patterns of risk behaviors that can lead to sexually transmitted and bloodborne infections.

  1. Translation norms for English and Spanish: The role of lexical variables, word class, and L2 proficiency in negotiating translation ambiguity

    PubMed Central

    Prior, Anat; MacWhinney, Brian; Kroll, Judith F.

    2014-01-01

    We present a set of translation norms for 670 English and 760 Spanish nouns, verbs and class ambiguous items that varied in their lexical properties in both languages, collected from 80 bilingual participants. Half of the words in each language received more than a single translation across participants. Cue word frequency and imageability were both negatively correlated with number of translations. Word class predicted number of translations: Nouns had fewer translations than did verbs, which had fewer translations than class-ambiguous items. The translation probability of specific responses was positively correlated with target word frequency and imageability, and with its form overlap with the cue word. Translation choice was modulated by L2 proficiency: Less proficient bilinguals tended to produce lower probability translations than more proficient bilinguals, but only in forward translation, from L1 to L2. These findings highlight the importance of translation ambiguity as a factor influencing bilingual representation and performance. The norms can also provide an important resource to assist researchers in the selection of experimental materials for studies of bilingual and monolingual language performance. These norms may be downloaded from www.psychonomic.org/archive. PMID:18183923

  2. Patterns and predictors of violence against children in Uganda: a latent class analysis.

    PubMed

    Clarke, Kelly; Patalay, Praveetha; Allen, Elizabeth; Knight, Louise; Naker, Dipak; Devries, Karen

    2016-05-24

    To explore patterns of physical, emotional and sexual violence against Ugandan children. Latent class and multinomial logistic regression analysis of cross-sectional data. Luwero District, Uganda. In all, 3706 primary 5, 6 and 7 students attending 42 primary schools. To measure violence, we used the International Society for the Prevention of Child Abuse and Neglect Child Abuse Screening Tool-Child Institutional. We used the Strengths and Difficulties Questionnaire to assess mental health and administered reading, spelling and maths tests. We identified three violence classes. Class 1 (N=696; 18.8%) was characterised by emotional and physical violence by parents and relatives, and sexual and emotional abuse by boyfriends, girlfriends and unrelated adults outside school. Class 2 (N=975; 26.3%) was characterised by physical, emotional and sexual violence by peers (male and female students). Children in Classes 1 and 2 also had a high probability of exposure to emotional and physical violence by school staff. Class 3 (N=2035; 54.9%) was characterised by physical violence by school staff and a lower probability of all other forms of violence compared to Classes 1 and 2. Children in Classes 1 and 2 were more likely to have worked for money (Class 1 Relative Risk Ratio 1.97, 95% CI 1.54 to 2.51; Class 2 1.55, 1.29 to 1.86), been absent from school in the previous week (Class 1 1.31, 1.02 to 1.67; Class 2 1.34, 1.10 to 1.63) and to have more mental health difficulties (Class 1 1.09, 1.07 to 1.11; Class 2 1.11, 1.09 to 1.13) compared to children in Class 3. Female sex (3.44, 2.48 to 4.78) and number of children sharing a sleeping area predicted being in Class 1. Childhood violence in Uganda forms distinct patterns, clustered by perpetrator and setting. Research is needed to understand experiences of victimised children, and to develop mental health interventions for those with severe violence exposures. NCT01678846; Results. Published by the BMJ Publishing Group Limited. 

  3. Using latent class analysis to model prescription medications in the measurement of falling among a community elderly population

    PubMed Central

    2013-01-01

    Background Falls among the elderly are a major public health concern. Therefore, a modeling technique that could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community-dwelling elderly. Methods Using a retrospective dataset, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medications and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions The LCA method was effective at finding relevant subgroups within a heterogeneous population at risk for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639
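
    The core of such an LCA can be written as a short EM loop. The sketch below is illustrative, not the study's analysis: the binary indicators are simulated, and the two classes, item-response probabilities, and sample size are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 1000, 2, 5                           # subjects, latent classes, binary items
true_theta = np.array([[0.9] * m, [0.2] * m])  # item-response probabilities per class
z = rng.integers(0, k, n)                      # true (hidden) class memberships
X = (rng.random((n, m)) < true_theta[z]).astype(float)

pi = np.full(k, 1 / k)                   # class prevalences
theta = rng.uniform(0.3, 0.7, (k, m))    # conditional item probabilities
for _ in range(200):
    # E-step: posterior P(class | response pattern) via log-likelihoods
    log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates
    pi = post.mean(axis=0)
    theta = (post.T @ X) / post.sum(axis=0)[:, None]

print(np.round(theta, 2))  # recovered item-response (conditional) probabilities
```

    Entropy-based classification accuracy, covariates, and distal outcomes, as in the study, are layered on top of exactly these posterior membership probabilities.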

  4. Gulf war syndrome: a toxic exposure? A systematic review.

    PubMed

    Gronseth, Gary S

    2005-05-01

    Using the strength-of-conclusion scheme enumerated in Box 2, based on two class II studies, there is probably a causal link between deployment to the Persian Gulf theater of operation and the development of the poorly defined multisymptom illness known as GWS (level B). Based on class IV studies, there is insufficient evidence to determine if exposure to toxins encountered during the Persian Gulf war caused GWS (level U). A major limitation of the literature regarding GWS is the reliance on self-reporting to measure exposure to putative causal toxins. Although objective measures of toxin exposure in GWV are generally unavailable, modeling techniques to estimate exposure levels to low-level nerve agents and smoke from oil well fires have been developed. It would be useful to determine if exposure levels determined by these techniques are associated with GWS. The lack of a clear case definition of GWS also hampers research. Some go even further, claiming that the absence of such a definition renders the condition illegitimate. Although an objective marker of GWS would be useful for studies, the absence of such a marker does not make the syndrome any less legitimate. In essence, GWS is merely a convenient descriptive term for a phenomenon: GWV reporting suffering from medically unexplained health-related symptoms. In this sense, it shares much with the other medically unexplained syndromes encountered in practice. The real debate surrounding medically unexplained conditions is not whether or not they exist, but defining their cause. In this regard, investigators fall into two camps. One camp insists that the conditions are caused by a yet-to-be-discovered medical problem, rejecting out of hand the possibility of a psychologic origin. The other camp insists the conditions are fundamentally psychogenic, rejecting the possibility of an undiscovered medical condition. The evidence shows, however, that the conditions exist, the suffering is real, and the causes are unknown.

  5. Maternal eating disorder and infant diet. A latent class analysis based on the Norwegian Mother and Child Cohort Study (MoBa).

    PubMed

    Torgersen, Leila; Ystrom, Eivind; Siega-Riz, Anna Maria; Berg, Cecilie Knoph; Zerwas, Stephanie C; Reichborn-Kjennerud, Ted; Bulik, Cynthia M

    2015-01-01

    Knowledge of infant diet and feeding practices among children of mothers with eating disorders is essential to promote healthy eating in these children. This study compared the dietary patterns of 6-month-old children of mothers with anorexia nervosa, bulimia nervosa, binge eating disorder, and eating disorder not otherwise specified-purging subtype, to the diet of children of mothers with no eating disorders (reference group). The study was based on 53,879 mothers in the Norwegian Mother and Child Cohort Study (MoBa). Latent class analysis (LCA) was used to identify discrete latent classes of infant diet based on the mothers' responses to questions about 16 food items. LCA identified five classes, characterized by primarily homemade vegetarian food (4% of infants), homemade traditional food (8%), commercial cereals (35%), commercial jarred baby food (39%), and a mix of all food groups (11%). The association between latent dietary classes and maternal eating disorders were estimated by multinomial logistic regression. Infants of mothers with bulimia nervosa had a lower probability of being in the homemade traditional food class compared to the commercial jarred baby food class, than the referent (O.R. 0.59; 95% CI 0.36-0.99). Infants of mothers with binge eating disorder had a lower probability of being in the homemade vegetarian class compared to the commercial jarred baby food class (O.R. 0.77; 95% CI 0.60-0.99), but only before adjusting for relevant confounders. Anorexia nervosa and eating disorder not otherwise specified-purging subtype were not statistically significantly associated with any of the dietary classes. These results suggest that maternal eating disorders may to some extent influence the child's diet at 6 months; however, the extent to which these differences influence child health and development remains an area for further inquiry. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Parental Predictions and Perceptions Regarding Long-Term Childhood Obesity-Related Health Risks

    PubMed Central

    Wright, Davene R.; Lozano, Paula; Dawson-Hahn, Elizabeth; Christakis, Dimitri A.; Haaland, Wren; Basu, Anirban

    2016-01-01

    Objectives To assess how parents perceive long-term risks for developing obesity-related chronic health conditions. Methods A web-based nationally representative survey was administered to 502 U.S. parents with a 5–12 year old child. Parents reported whether their child was most likely to be at a healthy weight or overweight, and the probability that their child would develop hypertension, heart disease, depression, or type 2 diabetes in adulthood. Responses of parents of children with overweight and obesity were compared to those of parents of healthy weight children using multivariate models. Results The survey had an overall response rate of 39.2%. The mean (SD) unadjusted parent-predicted health risks were 15.4% (17.7%), 11.2% (14.7%), 12.5% (16.2%), and 12.1% (16.1%) for hypertension, heart disease, depression, and diabetes, respectively. Despite under-perceiving their child’s current BMI class, parents of children with obesity estimate their children to be at greater risk for obesity-related health conditions than parents of healthy weight children by 5–6 percentage points. Having a family history of a chronic disease, higher quality of care, and older parent age were also significant predictors of estimating higher risk probabilities. Conclusions Despite evidence that parents of overweight children may not perceive these children as being overweight, parents unexpectedly estimate greater future risk of weight-related health conditions for these children. Focusing communication about weight on screening for and reducing the risk of weight-related diseases may prove useful in engaging parents and children in weight management. PMID:26875508

  7. Unified picture of strong-coupling stochastic thermodynamics and time reversals

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-04-01

    Strong-coupling statistical thermodynamics is formulated as the Hamiltonian dynamics of an observed system interacting with another unobserved system (a bath). It is shown that the entropy production functional of stochastic thermodynamics, defined as the log ratio of forward and backward system path probabilities, is in a one-to-one relation with the log ratios of the joint initial conditions of the system and the bath. A version of strong-coupling statistical thermodynamics where the system-bath interaction vanishes at the beginning and at the end of a process is, as is also weak-coupling stochastic thermodynamics, related to the bath initially in equilibrium by itself. The heat is then the change of bath energy over the process, and it is discussed when this heat is a functional of the system history alone. The version of strong-coupling statistical thermodynamics introduced by Seifert and Jarzynski is related to the bath initially in conditional equilibrium with respect to the system. This leads to heat as another functional of the system history which needs to be determined by thermodynamic integration. The log ratio of forward and backward system path probabilities in a stochastic process is finally related to log ratios of the initial conditions of a combined system and bath. It is shown that the entropy production formulas of stochastic processes under a general class of time reversals are given by the differences of bath energies in a larger underlying Hamiltonian system. The paper highlights the centrality of time reversal in stochastic thermodynamics, also in the case of strong coupling.

  8. Prediction of Conditional Probability of Survival After Surgery for Gastric Cancer: A Study Based on Eastern and Western Large Data Sets.

    PubMed

    Zhong, Qing; Chen, Qi-Yue; Li, Ping; Xie, Jian-Wei; Wang, Jia-Bin; Lin, Jian-Xian; Lu, Jun; Cao, Long-Long; Lin, Mi; Tu, Ru-Hong; Zheng, Chao-Hui; Huang, Chang-Ming

    2018-04-20

    The dynamic prognosis of patients who have undergone curative surgery for gastric cancer has yet to be reported. Our objective was to devise an accurate tool for predicting the conditional probability of survival for these patients. We analyzed 11,551 gastric cancer patients from the Surveillance, Epidemiology, and End Results database. Two-thirds of the patients were selected randomly for the development set and one-third for the validation set. Two nomograms were constructed to predict the conditional probability of overall survival and the conditional probability of disease-specific survival, using conditional survival methods. We then applied these nomograms to the 4,001 patients in the database from Fujian Medical University Union Hospital, Fuzhou, China, one of the most active Chinese institutes. The 5-year conditional probability of overall survival of the patients was 41.6% immediately after resection and increased to 52.8%, 68.2%, and 80.4% at 1, 2, and 3 years after gastrectomy. The 5-year conditional probability of disease-specific survival "increased" from 48.9% at the time of gastrectomy to 59.8%, 74.7%, and 85.5% for patients surviving 1, 2, and 3 years, respectively. Sex; race; age; depth of tumor invasion; lymph node metastasis; and tumor size, site, and grade were associated with overall survival and disease-specific survival (P <.05). Within the Surveillance, Epidemiology, and End Results validation set, the accuracy of the conditional probability of overall survival nomogram was 0.77, 0.81, 0.82, and 0.82 at 1, 3, 5, and 10 years after gastrectomy, respectively. Within the other validation set from the Fujian Medical University Union Hospital (n = 4,001), the accuracy of the conditional probability of overall survival nomogram was 0.76, 0.79, 0.77, and 0.77 at 1, 3, 5, and 10 years, respectively. The accuracy of the conditional probability of disease-specific survival model was also favorable. 
The calibration curve demonstrated good agreement between the predicted and observed survival rates. Based on the large Eastern and Western data sets, we developed and validated the first conditional nomogram for prediction of conditional probability of survival for patients with gastric cancer to allow consideration of the duration of survivorship. Copyright © 2018 Elsevier Inc. All rights reserved.
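
    The conditional survival quantities above all follow from one identity: the probability of surviving t further years given survival to year s is CS(t | s) = S(s + t) / S(s). A minimal sketch with a hypothetical survival curve, not the study's data:

```python
def conditional_survival(surv, s, t):
    """surv maps years since surgery -> overall survival probability S(year)."""
    return surv[s + t] / surv[s]

# Hypothetical all-cause survival curve after gastrectomy, S(0) = 1.0
S = {0: 1.00, 1: 0.80, 2: 0.65, 3: 0.55, 4: 0.48,
     5: 0.42, 6: 0.38, 7: 0.35, 8: 0.33}

print(conditional_survival(S, 0, 5))            # 5-year survival at surgery: 0.42
print(round(conditional_survival(S, 3, 5), 2))  # 5-year survival given 3 years survived: 0.6
```

    Because S(year) declines more slowly at later years, the conditional 5-year survival rises with each year survived, which is exactly the pattern the abstract reports.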

  9. Social class based on occupation is associated with hospitalization for A(H1N1)pdm09 infection. Comparison between hospitalized and ambulatory cases.

    PubMed

    Pujol, J; Godoy, P; Soldevila, N; Castilla, J; González-Candelas, F; Mayoral, J M; Astray, J; Garcia, S; Martin, V; Tamames, S; Delgado, M; Domínguez, A

    2016-03-01

    This study aimed to analyse the existence of an association between social class (categorized by type of occupation) and the occurrence of A(H1N1)pdm09 infection and hospitalization over two seasons (2009-2010 and 2010-2011). This multicentre study compared ambulatory A(H1N1)pdm09 confirmed cases with ambulatory controls to measure risk of infection, and with hospitalized A(H1N1)pdm09 confirmed cases to assess hospitalization risk. Study variables were: age, marital status, tobacco and alcohol use, pregnancy, chronic obstructive pulmonary disease, chronic respiratory failure, cardiovascular disease, diabetes, chronic liver disease, body mass index >40, systemic corticosteroid treatment and influenza vaccination status. Occupation was registered literally and coded into manual and non-manual occupational social class groups. A conditional logistic regression analysis was performed. There were 720 hospitalized cases, 996 ambulatory cases and 1062 ambulatory controls included in the study. No relationship between occupational social class and A(H1N1)pdm09 infection was found [adjusted odds ratio (aOR) 0·97, 95% confidence interval (CI) 0·74-1·27], but an association (aOR 1·53, 95% CI 1·01-2·31) between occupational class and hospitalization for A(H1N1)pdm09 was observed. Influenza vaccination was a protective factor against A(H1N1)pdm09 infection (aOR 0·41, 95% CI 0·23-0·73) but not against hospitalization. We conclude that manual workers, when infected by influenza, have a higher risk of hospitalization than workers in other occupations, but they do not have a different probability of being infected.

  10. Option volatility and the acceleration Lagrangian

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang

    2014-01-01

    This paper develops a volatility formula for an option on an asset from an acceleration Lagrangian model, and the formula is calibrated with market data. The Black-Scholes model is a simpler case with a velocity-dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is obtained by choosing proper boundary conditions. The conditional probability distribution of the final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of this conditional probability distribution. Using the conditional probability and the path integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is then obtained using the conditional probability and the path integral method.

  11. Communities and classes in symmetric fractals

    NASA Astrophysics Data System (ADS)

    Krawczyk, Małgorzata J.

    2015-07-01

    Two aspects of fractal networks are considered: the community structure and the class structure, where classes of nodes appear as a consequence of a local symmetry of nodes. The analyzed systems are the networks constructed for two selected symmetric fractals: the Sierpinski triangle and the Koch curve. Communities are searched for by means of a set of differential equations. Overlapping nodes which belong to two different communities are identified by adding some noise to the initial connectivity matrix. Then, a node can be characterized by a spectrum of probabilities of belonging to different communities. Our main goal is that the overlapping nodes with the same spectra belong to the same class.

  12. Latent Class Analysis of Incomplete Data via an Entropy-Based Criterion

    PubMed Central

    Larose, Chantal; Harel, Ofer; Kordas, Katarzyna; Dey, Dipak K.

    2016-01-01

    Latent class analysis is used to group categorical data into classes via a probability model. Model selection criteria then judge how well the model fits the data. When addressing incomplete data, the current methodology restricts the imputation to a single, pre-specified number of classes. We seek to develop an entropy-based model selection criterion that does not restrict the imputation to one number of clusters. Simulations show the new criterion performing well against the current standards of AIC and BIC, while a family studies application demonstrates how the criterion provides more detailed and useful results than AIC and BIC. PMID:27695391

  13. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities and marginal probabilities, as well as joint probabilities for rectangular regions, are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
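
    The same quantities are available in modern libraries. A minimal SciPy sketch, with made-up means, variances, and correlation; it stands in for, and is not, the original program's routines.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

mu1, mu2, s1, s2, rho = 0.0, 0.0, 1.0, 1.0, 0.5
cov = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
bvn = multivariate_normal(mean=[mu1, mu2], cov=cov)

# Rectangular probability P(a1 < X < b1, a2 < Y < b2) by inclusion-exclusion on the CDF
def rect_prob(a, b):
    return (bvn.cdf([b[0], b[1]]) - bvn.cdf([a[0], b[1]])
            - bvn.cdf([b[0], a[1]]) + bvn.cdf([a[0], a[1]]))

# Conditional distribution: Y | X = x is normal with
# mean mu2 + rho*(s2/s1)*(x - mu1) and variance s2^2*(1 - rho^2)
x = 1.0
cond = norm(loc=mu2 + rho * (s2 / s1) * (x - mu1),
            scale=s2 * np.sqrt(1 - rho**2))

print(rect_prob([-1, -1], [1, 1]))  # joint probability of the unit square
print(cond.cdf(0.0))                # conditional probability P(Y < 0 | X = 1)
```

    Marginal probabilities reduce to the univariate normal CDF, since each component of a bivariate normal is itself normal.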

  14. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  15. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  16. Land use and land cover classification for rural residential areas in China using soft-probability cascading of multifeatures

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liu, Yueyan; Zhang, Zuyu; Shen, Yonglin

    2017-10-01

    A multifeature soft-probability cascading scheme is proposed to solve the problem of land use and land cover (LULC) classification for mapping rural residential areas in China from high-spatial-resolution images. The proposed method is used to build midlevel LULC features. Local features are frequently used as low-level feature descriptors in midlevel feature learning methods, while spectral and textural features, which are very effective low-level features, are neglected. Moreover, the dictionary in sparse coding is learned in an unsupervised manner, which reduces the discriminative power of the midlevel features. Thus, we propose to learn supervised features based on sparse coding, a support vector machine (SVM) classifier, and a conditional random field (CRF) model, in order to exploit the different effective low-level features and improve the discriminability of midlevel feature descriptors. First, three kinds of typical low-level features, namely, dense scale-invariant feature transform, gray-level co-occurrence matrix, and spectral features, are extracted separately. Second, combined with sparse coding and the SVM classifier, the probabilities of the different LULC classes are inferred to build supervised feature descriptors. Finally, the CRF model, which consists of a unary potential and a pairwise potential, is employed to construct the LULC classification map. Experimental results show that the proposed classification scheme achieves impressive performance, with a total accuracy of about 87%.

  17. Identification of probabilities.

    PubMed

    Vitányi, Paul M B; Chater, Nick

    2017-02-01

    Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, to infer a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is the problem of inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterized by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyze the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.

  18. Integrating multiple fitting regression and Bayes decision for cancer diagnosis with transcriptomic data from tumor-educated blood platelets.

    PubMed

    Huang, Guangzao; Yuan, Mingshun; Chen, Moliang; Li, Lei; You, Wenjie; Li, Hanjie; Cai, James J; Ji, Guoli

    2017-10-07

    The application of machine learning in cancer diagnostics has shown great promise and is of importance in clinical settings. Here we consider applying machine learning methods to transcriptomic data derived from tumor-educated platelets (TEPs) from individuals with different types of cancer. We aim to define a reliability measure for diagnostic purposes to increase the potential for facilitating personalized treatments. To this end, we present a novel classification method called MFRB (for Multiple Fitting Regression and Bayes decision), which integrates the process of multiple fitting regression (MFR) with Bayes decision theory. MFR is first used to map multidimensional features of the transcriptomic data into a one-dimensional feature. The probability density function of each class in the mapped space is then adjusted using the Gaussian probability density function. Finally, the Bayes decision theory is used to build a probabilistic classifier with the estimated probability density functions. The output of MFRB can be used to determine which class a sample belongs to, as well as to assign a reliability measure for a given class. The classical support vector machine (SVM) and probabilistic SVM (PSVM) are used to evaluate the performance of the proposed method with simulated and real TEP datasets. Our results indicate that the proposed MFRB method achieves the best performance compared to SVM and PSVM, mainly due to its strong generalization ability for limited, imbalanced, and noisy data.
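
    A loose sketch of the MFRB idea, with simplifications not in the paper: an ordinary least-squares projection onto +/-1 labels stands in for multiple fitting regression, and only two features and two classes are handled. The per-class Gaussian fit in the projected space and the Bayes decision with a posterior-based reliability measure follow the record's description.

```python
import math, statistics

def ols_weights(X, y):
    """Least-squares weights for two features (closed-form 2x2 solve)."""
    s11 = sum(x[0] * x[0] for x in X); s12 = sum(x[0] * x[1] for x in X)
    s22 = sum(x[1] * x[1] for x in X)
    b1 = sum(x[0] * yi for x, yi in zip(X, y))
    b2 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

def gauss_pdf(z, mu, sd):
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def fit_mfrb(X, labels):
    """Project features to 1-D by regressing onto +/-1 labels, then fit a
    Gaussian (mean, sd, prior) per class in the projected space."""
    y = [1.0 if l == 1 else -1.0 for l in labels]
    w = ols_weights(X, y)
    z = [w[0] * x[0] + w[1] * x[1] for x in X]
    model = {'w': w, 'classes': {}}
    for c in (0, 1):
        zc = [zi for zi, l in zip(z, labels) if l == c]
        model['classes'][c] = (statistics.mean(zc), statistics.stdev(zc),
                               len(zc) / len(z))
    return model

def predict(model, x):
    """Return (label, posterior); the posterior doubles as a reliability
    measure for the decision."""
    z = model['w'][0] * x[0] + model['w'][1] * x[1]
    post = {c: pr * gauss_pdf(z, mu, sd)
            for c, (mu, sd, pr) in model['classes'].items()}
    tot = sum(post.values())
    c = max(post, key=post.get)
    return c, post[c] / tot
```

    On well-separated data the posterior returned alongside the label is close to 1, which is exactly the "reliability measure" role described in the abstract.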

  19. TRANSFER OF AVERSIVE RESPONDENT ELICITATION IN ACCORDANCE WITH EQUIVALENCE RELATIONS

    PubMed Central

    Valverde, Miguel RodrÍguez; Luciano, Carmen; Barnes-Holmes, Dermot

    2009-01-01

    The present study investigates the transfer of aversively conditioned respondent elicitation through equivalence classes, using skin conductance as the measure of conditioning. The first experiment is an attempt to replicate Experiment 1 in Dougher, Augustson, Markham, Greenway, and Wulfert (1994), with different temporal parameters in the aversive conditioning procedure employed. Match-to-sample procedures were used to teach 17 participants two 4-member equivalence classes. Then, one member of one class was paired with electric shock and one member of the other class was presented without shock. The remaining stimuli from each class were presented in transfer tests. Unlike the findings in the original study, transfer of conditioning was not achieved. In Experiment 2, similar procedures were used with 30 participants, although several modifications were introduced (formation of five-member classes, direct conditioning with several elements of each class, random sequences of stimulus presentation in transfer tests, reversal in aversive conditioning contingencies). More than 80% of participants who had shown differential conditioning also showed the transfer of function effect. Moreover, this effect was replicated within subjects for 3 participants. This is the first demonstration of the transfer of aversive respondent elicitation through stimulus equivalence classes with the presentation of transfer test trials in random order. The latter prevents the possibility that transfer effects are an artefact of transfer test presentation order. PMID:20119523

  20. Leukocyte Recognition Using EM-Algorithm

    NASA Astrophysics Data System (ADS)

    Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.

    This document describes a method for classifying images of blood cells. Three different classes of cells are used: Band Neutrophils, Eosinophils and Lymphocytes. The image pattern is projected down to a lower dimensional sub space using PCA; the probability density function for each class is modeled with a Gaussian mixture using the EM-Algorithm. A new cell image is classified using the maximum a posteriori decision rule.
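
    The modeling step can be illustrated with a toy version: EM for a 1-D two-component Gaussian mixture, followed by the maximum a posteriori rule. (The paper works in a PCA-reduced space with three cell classes; this sketch is not that pipeline.)

```python
import math, random

def em_gmm_1d(data, iters=200):
    """EM for a two-component 1-D Gaussian mixture, initialized at the
    data extremes. Normalizing constants cancel in the posteriors."""
    mu = [min(data), max(data)]; sd = [1.0, 1.0]; w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        resp = []
        for x in data:
            p = [w[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / sd[k]
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: responsibility-weighted means, sds, and mixing weights
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
            w[k] = nk / len(data)
    return mu, sd, w

def map_classify(x, mu, sd, w):
    """Maximum a posteriori component assignment, as in the record."""
    p = [w[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / sd[k]
         for k in (0, 1)]
    return 0 if p[0] >= p[1] else 1
```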

  1. The role of misclassification in estimating proportions and an estimator of misclassification probability

    Treesearch

    Patrick L. Zimmerman; Greg C. Liknes

    2010-01-01

    Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...
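
    The bias the record describes is easy to see in the simplest setting, where the per-dot misclassification rate e is known and symmetric: the expected observed proportion is q = p(1-e) + (1-p)e, which can be inverted to recover the true proportion. (The paper's estimator handles the harder case where the true classes, and hence e, are unknown.)

```python
def corrected_proportion(observed_prop, error_rate):
    """Invert q = p*(1-e) + (1-p)*e to recover the true class
    proportion p from an observed dot-grid proportion q under
    symmetric misclassification with known error rate e."""
    if error_rate >= 0.5:
        raise ValueError("error rate must be below 0.5")
    p = (observed_prop - error_rate) / (1.0 - 2.0 * error_rate)
    return min(max(p, 0.0), 1.0)   # clip to a valid proportion
```

    For example, a true proportion of 0.30 observed through a 10% error rate appears as 0.34; the inversion recovers 0.30.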

  2. Minimum Expected Risk Estimation for Near-neighbor Classification

    DTIC Science & Technology

    2006-04-01

    We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN) ...estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant...the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
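
    For contrast with the minimum-expected-risk estimates studied in the report, here are the two textbook class-probability estimates for a kNN classifier: the maximum-likelihood estimate k_c/k and the Laplace-smoothed estimate (k_c+1)/(k+C). Neither is the report's estimator; they are the baselines such work improves on.

```python
from collections import Counter

def knn_class_probs(neighbor_labels, classes, smoothing=1.0):
    """Class-probability estimates from the labels of the k nearest
    neighbors. smoothing=0 gives the maximum-likelihood estimate k_c/k;
    smoothing=1 gives the Laplace (add-one) estimate (k_c+1)/(k+C)."""
    k = len(neighbor_labels)
    counts = Counter(neighbor_labels)
    denom = k + smoothing * len(classes)
    return {c: (counts.get(c, 0) + smoothing) / denom for c in classes}
```

    The smoothed estimate never assigns probability zero to an unseen class, which matters for risk functions that penalize confident errors heavily.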

  3. Lacosamide

    MedlinePlus

    ... with other medications to control certain types of seizures. Lacosamide is in a class of medications called ... mood. If you suddenly stop taking lacosamide, your seizures may happen more often. Your doctor will probably ...

  4. Beginning Bayes

    ERIC Educational Resources Information Center

    Erickson, Tim

    2017-01-01

    Understanding a Bayesian perspective demands comfort with conditional probability and with probabilities that appear to change as we acquire additional information. This paper suggests a simple context in conditional probability that helps develop the understanding students would need for a successful introduction to Bayesian reasoning.
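
    The kind of conditional-probability update the paper builds toward reduces to one line of Bayes' theorem. A minimal sketch; the rare-condition screening numbers in the test are an illustrative choice, not from the paper:

```python
def bayes_posterior(prior, like_given_h, like_given_not_h):
    """P(H | E) from P(H), P(E | H), and P(E | not H):
    the probability of H appears to change once evidence E arrives."""
    evidence = like_given_h * prior + like_given_not_h * (1.0 - prior)
    return like_given_h * prior / evidence
```

    With a 1% base rate, a 95%-sensitive test, and a 5% false-positive rate, a positive result raises the probability only to about 16%, the classic surprise that motivates teaching conditional probability first.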

  5. Sensitivity to spatial and temporal scale and fire regime inputs in deriving fire regime condition class

    Treesearch

    Linda Tedrow; Wendel J. Hann

    2015-01-01

    The Fire Regime Condition Class (FRCC) is a composite departure measure that compares current vegetation structure and fire regime to historical reference conditions. FRCC is computed as the average of: 1) Vegetation departure (VDEP) and 2) Regime (frequency and severity) departure (RDEP). In addition to the FRCC rating, the Vegetation Condition Class (VCC) and Regime...

  6. Probability of misclassifying biological elements in surface waters.

    PubMed

    Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna

    2017-11-24

    Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
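
    The Monte-Carlo scheme can be sketched for a single index with Gaussian measurement error and fixed class boundaries. This is a deliberately reduced version: the paper combines four biological indices, and the boundary values used in the test below are hypothetical.

```python
import random

def misclassification_prob(true_value, sd, boundaries, n_sims=20000, seed=1):
    """Monte-Carlo estimate of the probability that an error-prone
    measurement of an index falls outside the 'true' class.
    boundaries: ascending class cut-offs on the index scale."""
    def status_class(v):
        return sum(v > b for b in boundaries)   # class index 0..len(boundaries)
    rng = random.Random(seed)
    true_class = status_class(true_value)
    wrong = sum(status_class(rng.gauss(true_value, sd)) != true_class
                for _ in range(n_sims))
    return wrong / n_sims
```

    An index value far from every boundary is almost never misclassified; a value sitting on a boundary is misclassified about half the time, which is the sensitivity pattern the paper reports across indices.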

  7. Cue-based assertion classification for Swedish clinical text – developing a lexicon for pyConTextSwe

    PubMed Central

    Velupillai, Sumithra; Skeppstedt, Maria; Kvist, Maria; Mowery, Danielle; Chapman, Brian E.; Dalianis, Hercules; Chapman, Wendy W.

    2014-01-01

    Objective The ability of a cue-based system to accurately assert whether a disorder is affirmed, negated, or uncertain is dependent, in part, on its cue lexicon. In this paper, we continue our study of porting an assertion system (pyConTextNLP) from English to Swedish (pyConTextSwe) by creating an optimized assertion lexicon for clinical Swedish. Methods and material We integrated cues from four external lexicons, along with generated inflections and combinations. We used subsets of a clinical corpus in Swedish. We applied four assertion classes (definite existence, probable existence, probable negated existence and definite negated existence) and two binary classes (existence yes/no and uncertainty yes/no) to pyConTextSwe. We compared pyConTextSwe’s performance with and without the added cues on a development set, and improved the lexicon further after an error analysis. On a separate evaluation set, we calculated the system’s final performance. Results Following integration steps, we added 454 cues to pyConTextSwe. The optimized lexicon developed after an error analysis resulted in statistically significant improvements on the development set (83% F-score, overall). The system’s final F-scores on an evaluation set were 81% (overall). For the individual assertion classes, F-score results were 88% (definite existence), 81% (probable existence), 55% (probable negated existence), and 63% (definite negated existence). For the binary classifications existence yes/no and uncertainty yes/no, final system performance was 97%/87% and 78%/86% F-score, respectively. Conclusions We have successfully ported pyConTextNLP to Swedish (pyConTextSwe). We have created an extensive and useful assertion lexicon for Swedish clinical text, which could form a valuable resource for similar studies, and which is publicly available. PMID:24556644

  8. Towards a truly mobile auditory brain-computer interface: exploring the P300 to take away.

    PubMed

    De Vos, Maarten; Gandras, Katharina; Debener, Stefan

    2014-01-01

    In a previous study we presented a low-cost, small, and wireless 14-channel EEG system suitable for field recordings (Debener et al., 2012, Psychophysiology). In the present follow-up study we investigated whether a single-trial P300 response can be reliably measured with this system, while subjects freely walk outdoors. Twenty healthy participants performed a three-class auditory oddball task, which included rare target and non-target distractor stimuli presented with equal probabilities of 16%. Data were recorded in a seated (control condition) and in a walking condition, both of which were realized outdoors. A significantly larger P300 event-related potential amplitude was evident for targets compared to distractors (p<.001), but no significant interaction with recording condition emerged. P300 single-trial analysis was performed with regularized stepwise linear discriminant analysis and revealed above chance-level classification accuracies for most participants (19 out of 20 for the seated, 16 out of 20 for the walking condition), with mean classification accuracies of 71% (seated) and 64% (walking). Moreover, the resulting information transfer rates for the seated and walking conditions were comparable to a recently published laboratory auditory brain-computer interface (BCI) study. This leads us to conclude that a truly mobile auditory BCI system is feasible.

  9. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    PubMed

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.

  10. Semiclassical electron transport at the edge of a two-dimensional topological insulator: Interplay of protected and unprotected modes

    NASA Astrophysics Data System (ADS)

    Khalaf, E.; Skvortsov, M. A.; Ostrovsky, P. M.

    2016-03-01

    We study electron transport at the edge of a generic disordered two-dimensional topological insulator, where some channels are topologically protected from backscattering. Assuming the total number of channels is large, we consider the edge as a quasi-one-dimensional quantum wire and describe it in terms of a nonlinear sigma model with a topological term. Neglecting localization effects, we calculate the average distribution function of transmission probabilities as a function of the sample length. We mainly focus on the two experimentally relevant cases: a junction between two quantum Hall (QH) states with different filling factors (unitary class) and a relatively thick quantum well exhibiting quantum spin Hall (QSH) effect (symplectic class). In a QH sample, the presence of topologically protected modes leads to a strong suppression of diffusion in the other channels already at scales much shorter than the localization length. On the semiclassical level, this is accompanied by the formation of a gap in the spectrum of transmission probabilities close to unit transmission, thereby suppressing shot noise and conductance fluctuations. In the case of a QSH system, there is at most one topologically protected edge channel leading to weaker transport effects. In order to describe 'topological' suppression of nearly perfect transparencies, we develop an exact mapping of the semiclassical limit of the one-dimensional sigma model onto a zero-dimensional sigma model of a different symmetry class, allowing us to identify the distribution of transmission probabilities with the average spectral density of a certain random-matrix ensemble. We extend our results to other symmetry classes with topologically protected edges in two dimensions.

  11. Oxytocin Injection

    MedlinePlus

    ... with other medications or procedures to end a pregnancy. Oxytocin is in a class of medications called ... the cervix, or toxemia (high blood pressure during pregnancy). Your doctor will probably not give you oxytocin ...

  12. Multistage classification of multispectral Earth observational data: The design approach

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Muasher, M. J.; Landgrebe, D. A.

    1981-01-01

    An algorithm is proposed which predicts the optimal features at every node in a binary tree procedure. The algorithm estimates the probability of error by approximating the area under the likelihood ratio function for two classes, taking into account the number of training samples used in estimating each of these two classes. Some results on feature selection techniques, particularly in the presence of a very limited set of training samples, are presented. Probabilities of error predicted by the proposed algorithm as a function of dimensionality are compared with experimental observations for aircraft and LANDSAT data; results are obtained for both real and simulated data. Finally, two binary tree examples which use the algorithm are presented to illustrate the usefulness of the procedure.
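
    For two classes, the probability of error such an algorithm approximates has a closed form in the simplest case: equal priors and equal-variance 1-D Gaussian class densities. A sketch of that special case only (the algorithm itself handles general multivariate estimates):

```python
import math

def two_class_bayes_error(mu1, mu2, sigma):
    """Bayes error for two equal-prior, equal-variance 1-D Gaussian
    classes: the overlap area of the two densities,
    Pe = Phi(-|mu1 - mu2| / (2*sigma))."""
    d = abs(mu1 - mu2) / (2.0 * sigma)
    return 0.5 * (1.0 + math.erf(-d / math.sqrt(2.0)))
```

    The error is 0.5 for coincident classes and decreases monotonically with separation, which is the quantity a tree-design procedure trades off against dimensionality at each node.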

  13. Botulinum Toxin Treatment in Multiple Sclerosis-a Review.

    PubMed

    Safarpour, Yasaman; Mousavi, Tahereh; Jabbari, Bahman

    2017-08-17

    Purpose of review The purpose of this review is to provide updated information on the role of botulinum neurotoxin (BoNT) therapy in multiple sclerosis (MS). This review aims to answer which symptoms of multiple sclerosis may be amenable to BoNT therapy. Recent findings We searched the literature on the efficacy of BoNTs for treatment of MS symptoms up to April 1st 2017 via the Yale University Library's search engine, including but not limited to PubMed and Ovid SP. The level of efficacy was defined according to the assessment criteria set forth by the Subcommittee on Guideline Development of the American Academy of Neurology. Significant efficacy was found for two indications based on the available blinded studies (class I and II) and has been suggested for several others through open-label clinical trials. Summary There is level A evidence (effective: two or more class I studies) that injection of BoNT-A into the bladder's detrusor muscle improves MS-related neurogenic detrusor overactivity (NDO) and MS-related overactive (OA) bladder. There is level B evidence (probably effective: two class II studies) for the utility of intramuscular BoNT-A injections for spasticity in multiple sclerosis. Emerging data from retrospective class IV studies demonstrate that intramuscular injection of BoNTs may help other symptoms of MS, such as focal tonic spasms, focal myokymia, spastic dysphagia, and double vision in internuclear ophthalmoplegia. There are no data on MS-related trigeminal neuralgia and sialorrhea, two conditions which have been shown to respond to BoNT therapy in the non-MS population.

  14. Persistence Probabilities of Two-Sided (Integrated) Sums of Correlated Stationary Gaussian Sequences

    NASA Astrophysics Data System (ADS)

    Aurzada, Frank; Buck, Micha

    2018-02-01

    We study the persistence probability for some two-sided, discrete-time Gaussian sequences that are discrete-time analogues of fractional Brownian motion and integrated fractional Brownian motion, respectively. Our results extend the corresponding ones in continuous time in Molchan (Commun Math Phys 205(1):97-111, 1999) and Molchan (J Stat Phys 167(6):1546-1554, 2017) to a wide class of discrete-time processes.

  15. Development of probabilistic thinking-oriented learning tools for probability materials at junior high school students

    NASA Astrophysics Data System (ADS)

    Sari, Dwi Ivayana; Hermanto, Didik

    2017-08-01

    This is a developmental study of probabilistic thinking-oriented learning tools for probability materials at the ninth-grade level, aimed at producing a good set of such tools. The subjects were IX-A students of MTs Model Bangkalan. The study used the 4-D development model, modified into three stages: define, design, and develop. The teaching and learning tools consist of a lesson plan, students' worksheets, teaching media, and a students' achievement test. The research instruments were a learning-tools validation sheet, a teachers' activities sheet, a students' activities sheet, a students' response questionnaire, and the students' achievement test. The results from these instruments were analyzed descriptively to answer the research objectives. The outcome was a valid set of probabilistic thinking-oriented learning tools for teaching probability to ninth-grade students. After the tools were revised based on the validation and tried out in class, the teacher's classroom management proved effective, students' activities were good, students' responses to the learning tools were positive, and the achievement test satisfied the validity, sensitivity, and reliability criteria. In summary, these teaching and learning tools can be used by teachers to teach probability and develop students' probabilistic thinking.

  16. In Search of Teen Dating Violence Typologies.

    PubMed

    Reidy, Dennis E; Ball, Barbara; Houry, Debra; Holland, Kristin M; Valle, Linda A; Kearns, Megan C; Marshall, Khiya J; Rosenbluth, Barri

    2016-02-01

    The goal of the present research was to identify distinct latent classes of adolescents that commit teen dating violence (TDV) and assess differences on demographic, behavioral, and attitudinal correlates. Boys and girls (N = 1,149; Mage = 14.3; Grades 6-12) with a history of violence exposure completed surveys assessing six indices of TDV in the preceding 3 months. Indices of TDV included controlling behaviors, psychological TDV, physical TDV, sexual TDV, fear/intimidation, and injury. In addition, adolescents provided demographic and dating history information and completed surveys assessing attitudes condoning violence, relationship skills and knowledge, and reactive/proactive aggression. Latent class analysis indicated a three-class solution wherein the largest class of students was nonviolent on all indices ("nonaggressors") and the smallest class of students demonstrated high probability of nearly all indices of TDV ("multiform aggressors"). In addition, a third class of "emotional aggressors" existed for which there was a high probability of controlling and psychological TDV but low likelihood of any other form of TDV. Multiform aggressors were differentiated from emotional and nonaggressors on the use of self-defense in dating relationships, attitudes condoning violence, and proactive aggression. Emotional aggressors were distinguished from nonaggressors on nearly all measured covariates. Evidence indicates that different subgroups of adolescents engaging in TDV exist. In particular, a small group of youth engaging in multiple forms of TDV can be distinguished from a larger group of youth that commit acts of TDV restricted to emotional aggression (i.e., controlling and psychological) and most youth that do not engage in TDV. Published by Elsevier Inc.

  17. In Search of Teen Dating Violence Typologies

    PubMed Central

    Reidy, Dennis E.; Ball, Barbara; Houry, Debra; Holland, Kristin M.; Valle, Linda A.; Kearns, Megan C.; Marshall, Khiya J.; Rosenbluth, Barri

    2018-01-01

    Purpose The goal of the present research was to identify distinct latent classes of adolescents that commit teen dating violence (TDV) and assess differences on demographic, behavioral, and attitudinal correlates. Methods Boys and girls (N = 1,149; Mage = 14.3; Grades 6–12) with a history of violence exposure completed surveys assessing six indices of TDV in the preceding 3 months. Indices of TDV included controlling behaviors, psychological TDV, physical TDV, sexual TDV, fear/intimidation, and injury. In addition, adolescents provided demographic and dating history information and completed surveys assessing attitudes condoning violence, relationship skills and knowledge, and reactive/proactive aggression. Results Latent class analysis indicated a three-class solution wherein the largest class of students was nonviolent on all indices (“nonaggressors”) and the smallest class of students demonstrated high probability of nearly all indices of TDV (“multiform aggressors”). In addition, a third class of “emotional aggressors” existed for which there was a high probability of controlling and psychological TDV but low likelihood of any other form of TDV. Multiform aggressors were differentiated from emotional and nonaggressors on the use of self-defense in dating relationships, attitudes condoning violence, and proactive aggression. Emotional aggressors were distinguished from nonaggressors on nearly all measured covariates. Conclusions Evidence indicates that different subgroups of adolescents engaging in TDV exist. In particular, a small group of youth engaging in multiple forms of TDV can be distinguished from a larger group of youth that commit acts of TDV restricted to emotional aggression (i.e., controlling and psychological) and most youth that do not engage in TDV. PMID:26683984

  18. Summary of comprehensive systematic review: Rehabilitation in multiple sclerosis

    PubMed Central

    Haselkorn, Jodie K.; Hughes, Christina; Rae-Grant, Alex; Henson, Lily Jung; Bever, Christopher T.; Lo, Albert C.; Brown, Theodore R.; Kraft, George H.; Getchius, Thomas; Gronseth, Gary; Armstrong, Melissa J.; Narayanaswami, Pushpa

    2015-01-01

    Objective: To systematically review the evidence regarding rehabilitation treatments in multiple sclerosis (MS). Methods: We systematically searched the literature (1970–2013) and classified articles using 2004 American Academy of Neurology criteria. Results: This systematic review highlights the paucity of well-designed studies, which are needed to evaluate the available MS rehabilitative therapies. Weekly home/outpatient physical therapy (8 weeks) probably is effective for improving balance, disability, and gait (MS type unspecified, participants able to walk ≥5 meters) but probably is ineffective for improving upper extremity dexterity (1 Class I). Inpatient exercises (3 weeks) followed by home exercises (15 weeks) possibly are effective for improving disability (relapsing-remitting MS [RRMS], primary progressive MS [PPMS], secondary progressive MS [SPMS], Expanded Disability Status Scale [EDSS] 3.0–6.5) (1 Class II). Six weeks' worth of comprehensive multidisciplinary outpatient rehabilitation possibly is effective for improving disability/function (PPMS, SPMS, EDSS 4.0–8.0) (1 Class II). Motor and sensory balance training or motor balance training (3 weeks) possibly is effective for improving static and dynamic balance, and motor balance training (3 weeks) possibly is effective for improving static balance (RRMS, SPMS, PPMS) (1 Class II). Breathing-enhanced upper extremity exercises (6 weeks) possibly are effective for improving timed gait and forced expiratory volume in 1 second (RRMS, SPMS, PPMS, mean EDSS 4.5); this change is of unclear clinical significance. This technique possibly is ineffective for improving disability (1 Class II). Inspiratory muscle training (10 weeks) possibly improves maximal inspiratory pressure (RRMS, SPMS, PPMS, EDSS 2–6.5) (1 Class II). PMID:26598432

  19. Optimal quantum networks and one-shot entropies

    NASA Astrophysics Data System (ADS)

    Chiribella, Giulio; Ebler, Daniel

    2016-09-01

We develop a semidefinite programming method for the optimization of quantum networks, including both causal networks and networks with indefinite causal structure. Our method applies to a broad class of performance measures, defined operationally in terms of interactive tests set up by a verifier. We show that the optimal performance is equal to a max relative entropy, which quantifies the informativeness of the test. Building on this result, we extend the notion of conditional min-entropy from quantum states to quantum causal networks. The optimization method is illustrated in a number of applications, including the inversion, charge conjugation, and controlization of an unknown unitary dynamics. In the non-causal setting, we show a proof-of-principle application to the maximization of the winning probability in a non-causal quantum game.
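
For reference, the one-shot quantities this abstract invokes have standard textbook definitions (stated here as background, not taken from the paper itself): the max relative entropy, and the conditional min-entropy it is used to generalize,

```latex
D_{\max}(\rho \,\|\, \sigma) = \min\{\lambda \in \mathbb{R} : \rho \le 2^{\lambda}\sigma\},
\qquad
H_{\min}(A|B)_{\rho} = -\min_{\sigma_B} D_{\max}\!\bigl(\rho_{AB} \,\|\, \mathbb{1}_A \otimes \sigma_B\bigr),
```

where the minimum in the second expression runs over density operators on system B.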

  20. Semantic image segmentation with fused CNN features

    NASA Astrophysics Data System (ADS)

    Geng, Hui-qiang; Zhang, Hua; Xue, Yan-bing; Zhou, Mian; Xu, Guang-ping; Gao, Zan

    2017-09-01

Semantic image segmentation is the task of predicting a category label for every image pixel; its key challenge is designing a strong feature representation. In this paper, we fuse the hierarchical convolutional neural network (CNN) features and the region-based features as the feature representation. The hierarchical features contain more global information, while the region-based features contain more local information. The combination of these two kinds of features significantly enhances the feature representation. The fused features are then used to train a softmax classifier to produce the per-pixel label assignment probability, and a fully connected conditional random field (CRF) is used as a post-processing step to improve the labeling consistency. We conduct experiments on the SIFT Flow dataset. The pixel accuracy and class accuracy are 84.4% and 34.86%, respectively.

  1. The effect of policies regulating tobacco consumption on smoking initiation and cessation in Spain: is it equal across socioeconomic groups?

    PubMed

    Pinilla, Jaime; Abásolo, Ignacio

    2017-01-01

In Spain, Law 28/2005, which came into effect in January 2006, was a turning point in smoking regulation and prevention, and set the stage for future strategies along the lines marked out by international organizations. Such a regulatory policy is expected to benefit lower socioeconomic groups relatively more, thus contributing to a reduction in socioeconomic health inequalities. This research analyzes the effect of tobacco regulation in Spain under Law 28/2005 on the initiation and cessation of tobacco consumption, and whether this effect has been unequal across socioeconomic levels. Micro-data from the 2006 and 2011 editions of the National Health Survey are used (study numbers 4382 and 5389, respectively; inventory of statistical operations (ISO) code 54009), with a sample size of approximately 24,000 households divided into 2,000 census areas. This allows individuals' tobacco consumption histories to be reconstructed over the five years before each survey, and individuals who started or stopped smoking to be identified. The methodology is based on time-to-event analysis: Cox proportional hazards models are fitted to show the effects of a set of explanatory variables on the conditional probability of a change in tobacco consumption, namely initiation as a daily smoker by young people or cessation of daily smoking by adults. Initiation rates among young people fell from 25% (95% confidence interval (CI), 23-27) to 19% (95% CI, 17-21) following the implementation of the Law, and the change in cessation rates among smokers was even greater, with rates increasing from 12% (95% CI, 11-13) to 20% (95% CI, 19-21). However, as the relative risks show, this effect has not been equal across socioeconomic groups.
Before the regulation policy, social class was not a statistically significant factor in the initiation of daily smoking (p > 0.05); following the implementation of the Law, however, young people belonging to social classes IV-V and VI had a relative risk of starting smoking 63% (p = 0.03) and 82% (p = 0.02) higher, respectively, than young people from the higher social classes I-II. Lower social class also means a lower probability of smoking cessation: the relative risk of cessation for a smoker from a social class VI household (compared with classes I-II) went from 24% (p < 0.001) lower before the Law to 33% (p < 0.001) lower after its implementation. Law 28/2005 has therefore been effective: after its promulgation there was a decrease in the rate of smoking initiation among young people and an increase in the rate of cessation among adult smokers. However, this effect has not been equal across socioeconomic groups, favoring individuals from the higher social classes relatively more.

  2. Managerial performance and cost efficiency of Japanese local public hospitals: a latent class stochastic frontier model.

    PubMed

    Besstremyannaya, Galina

    2011-09-01

The paper explores the link between managerial performance and cost efficiency of 617 Japanese general local public hospitals in 1999-2007. Treating managerial performance as unobservable heterogeneity, the paper employs a panel data stochastic cost frontier model with latent classes. Financial parameters associated with better managerial performance are found to be positively significant in explaining the probability of belonging to the more efficient latent class. The analysis of latent class membership was consistent with the conjecture that the unobservable technological heterogeneity reflected in the existence of the latent classes is related to managerial performance. The findings may support the case for raising the efficiency of Japanese local public hospitals by enhancing the quality of management. Copyright © 2011 John Wiley & Sons, Ltd.

  3. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, P.; Beaudet, P.

    1980-01-01

The classification of large dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of 0.76, compared to the theoretically optimal 0.79 probability of correct classification associated with a full dimensional Bayes classifier. Recommendations for future research are included.

  4. Strong lensing probability in TeVeS (tensor-vector-scalar) theory

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ming

    2008-01-01

    We recalculate the strong lensing probability as a function of the image separation in TeVeS (tensor-vector-scalar) cosmology, which is a relativistic version of MOND (MOdified Newtonian Dynamics). The lens is modeled by the Hernquist profile. We assume an open cosmology with Ωb = 0.04 and ΩΛ = 0.5 and three different kinds of interpolating functions. Two different galaxy stellar mass functions (GSMF) are adopted: PHJ (Panter, Heavens and Jimenez 2004 Mon. Not. R. Astron. Soc. 355 764) determined from SDSS data release 1 and Fontana (Fontana et al 2006 Astron. Astrophys. 459 745) from GOODS-MUSIC catalog. We compare our results with both the predicted probabilities for lenses from singular isothermal sphere galaxy halos in LCDM (Lambda cold dark matter) with a Schechter-fit velocity function, and the observational results for the well defined combined sample of the Cosmic Lens All-Sky Survey (CLASS) and Jodrell Bank/Very Large Array Astrometric Survey (JVAS). It turns out that the interpolating function μ(x) = x/(1+x) combined with Fontana GSMF matches the results from CLASS/JVAS quite well.

  5. Solar Flare Occurrence Rate and Probability in Terms of the Sunspot Classification Supplemented with Sunspot Area and Its Changes

    NASA Astrophysics Data System (ADS)

    Lee, K.; Moon, Y.; Lee, J.; Na, H.; Lee, K.

    2013-12-01

We investigate the solar flare occurrence rate and daily flare probability in terms of the sunspot classification supplemented with sunspot area and its changes. For this we use the NOAA active region data and GOES solar flare data for 15 years (from January 1996 to December 2010). We consider the most flare-productive 11 sunspot classes in the McIntosh sunspot group classification. Sunspot area and its changes can serve as proxies of magnetic flux and its emergence/cancellation, respectively. We classify each sunspot group into two sub-groups by its area: 'Large' and 'Small'. In addition, for each group, we classify it into three sub-groups according to sunspot area changes: 'Decrease', 'Steady', and 'Increase'. As a result, in the case of compact groups, flare occurrence rates and daily flare probabilities noticeably increase with sunspot group area. We also find that the flare occurrence rates and daily flare probabilities for the 'Increase' sub-groups are noticeably higher than those for the other sub-groups. In the case of the (M + X)-class flares in the 'Dkc' group, the flare occurrence rate of the 'Increase' sub-group is three times higher than that of the 'Steady' sub-group. The mean flare occurrence rates and flare probabilities for all sunspot groups increase in the order 'Decrease', 'Steady', 'Increase'. Our results statistically demonstrate that magnetic flux and its emergence enhance the occurrence of major solar flares.

  6. METHODS MODULE

    EPA Science Inventory

    The ultimate goal of classification is to reduce variation within classes to enable detection of differences between reference and impacted condition within classes as cost-effectively as possible, while minimizing the number of classes for which reference conditions must be defi...

  7. Detection and classification of interstitial lung diseases and emphysema using a joint morphological-fuzzy approach

    NASA Astrophysics Data System (ADS)

    Chang Chien, Kuang-Che; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Chang, Ruey-Feng

    2009-02-01

Multi-detector computed tomography (MDCT) has high accuracy and specificity in volumetrically capturing serial images of the lung, which increases the capability of computerized classification of lung tissue in medical research. This paper proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution decomposition, six features are computed, for which fuzzy membership functions define a probability of association with a pathology class. Finally, (3) for each pathology class, the probabilities are combined according to the weight assigned to each membership function, and two threshold values are used to decide the final class of the pattern. The proposed approach was tested on 10 MDCT cases and the classification accuracy was: emphysema, 95%; fibrosis/honeycombing, 84%; and ground glass, 97%.

  8. Live long and prosper? Childhood living conditions, marital status, social class in adulthood and mortality during mid-life: a cohort study.

    PubMed

    Fors, Stefan; Lennartsson, Carin; Lundberg, Olle

    2011-03-01

The aim of the present study was to investigate the impact of childhood living conditions, marital status, and social class in adulthood on the risk of mortality during mid-life. Two questions were addressed: Is there an effect of childhood living conditions on mortality risk during mid-life and, if so, is the effect mediated or modified by social class and/or marital status in adulthood? A nationally representative Swedish level-of-living survey from 1968 was used as baseline. The study included those aged 25-69 at baseline (n = 4082). Social conditions in childhood and adulthood were assessed using self-reports. These individuals were then followed for 39 years using registry data on mortality. The results showed associations between childhood living conditions, marital status, social class in adulthood, and mortality during mid-life. Social class and familial conditions during childhood, as well as marital status and social class in adulthood, all contributed to the risk of mortality during mid-life. Individuals whose fathers were manual workers, who grew up in broken homes, who were unmarried, and/or who were manual workers in adulthood had an increased risk of mortality during mid-life. The effects of childhood conditions were, in part, both mediated and modified by social class in adulthood. The findings of this study suggest that there are structural, social conditions experienced at different stages of the life course that affect the risk of mortality during mid-life.

  9. From Parasite Encounter to Infection: Multiple-Scale Drivers of Parasite Richness in a Wild Social Primate Population

    NASA Technical Reports Server (NTRS)

    Benavides J. A.; Huchard, E.; Pettorelli, N.; King, A. J.; Brown, M. E.; Archer, C. E.; Appleton, C. C.; Raymond, M.; Cowlishaw, G.

    2011-01-01

    Host parasite diversity plays a fundamental role in ecological and evolutionary processes, yet the factors that drive it are still poorly understood. A variety of processes, operating across a range of spatial scales, are likely to influence both the probability of parasite encounter and subsequent infection. Here, we explored eight possible determinants of parasite richness, comprising rainfall and temperature at the population level, ranging behavior and home range productivity at the group level, and age, sex, body condition, and social rank at the individual level. We used a unique dataset describing gastrointestinal parasites in a terrestrial subtropical vertebrate (chacma baboons, Papio ursinus), comprising 662 faecal samples from 86 individuals representing all age-sex classes across two groups over two dry seasons in a desert population. Three mixed models were used to identify the most important factor at each of the three spatial scales (population, group, individual); these were then standardised and combined in a single, global, mixed model. Individual age had the strongest influence on parasite richness, in a convex relationship. Parasite richness was also higher in females and animals in poor condition, albeit at a lower order of magnitude than age. Finally, with a further halving of effect size, parasite richness was positively correlated to day range and temperature. These findings indicate that a range of factors influence host parasite richness through both encounter and infection probabilities, but that individual-level processes may be more important than those at the group or population level.

  10. The Impact of Three Commonly Used Fungicides on Typhlodromus pyri (Acari: Phytoseiidae) in European Vineyards.

    PubMed

    Kemmitt, G; Valverde-Garcia, P; Hufnagl, A; Bacci, L; Zotz, A

    2015-04-01

    The impact of the fungicides mancozeb, myclobutanil, and meptyldinocap on populations of Typhlodromus pyri Scheuten was evaluated under field conditions, when applied following the good agricultural practices recommended for their use. Two complementary statistical models were used to analyze the population reduction compared to the control: a linear mixed model to estimate the mean effect of the fungicide, and a generalized linear mixed model (proportional odds mixed model) to estimate the cumulative probability for those effects being equal or less than a specific IOBC class (International Organization for Biological and Integrated Control of Noxious Animal and Plants). Findings from 27 field experiments in a range of different vine-growing regions in Europe indicated that the use of mancozeb, myclobutanil, and meptyldinocap caused minimal impact on naturally occurring populations of T. pyri. Both statistical models confirmed that although adverse effects on T. pyri can occur under certain conditions after several applications of any of the three fungicides studied, the probability of the effects occurring is low and they will not persist. These methods demonstrated how data from a series of trials could be used to evaluate the variability of the effects caused by the chemical rather than relying on the worst-case findings from a single trial. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Updating: Learning versus Supposing

    ERIC Educational Resources Information Center

    Zhao, Jiaying; Crupi, Vincenzo; Tentori, Katya; Fitelson, Branden; Osherson, Daniel

    2012-01-01

    Bayesian orthodoxy posits a tight relationship between conditional probability and updating. Namely, the probability of an event "A" after learning "B" should equal the conditional probability of "A" given "B" prior to learning "B". We examine whether ordinary judgment conforms to the orthodox view. In three experiments we found substantial…
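
The orthodox identity is easy to state concretely. A minimal sketch with toy numbers (not from the study): given a joint distribution over two events, the probability of "A" after learning "B" should equal the prior conditional probability P(A|B) = P(A and B) / P(B).

```python
from fractions import Fraction

# Toy joint distribution over (A, B). Under Bayesian orthodoxy, after
# learning B the probability of A should equal P(A | B) computed below.
joint = {
    (True, True): Fraction(3, 10),
    (True, False): Fraction(1, 10),
    (False, True): Fraction(2, 10),
    (False, False): Fraction(4, 10),
}

def prob(pred):
    """Total probability of all outcomes satisfying a predicate."""
    return sum(p for outcome, p in joint.items() if pred(outcome))

p_b = prob(lambda o: o[1])                 # P(B)
p_a_and_b = prob(lambda o: o[0] and o[1])  # P(A and B)
p_a_given_b = p_a_and_b / p_b              # P(A | B) = P(A and B) / P(B)

print(p_a_given_b)  # 3/5
```

Any systematic divergence between people's judged probability of A after learning B and this ratio is exactly what the experiments probe.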

  12. Music-evoked incidental happiness modulates probability weighting during risky lottery choices

    PubMed Central

    Schulreich, Stefan; Heussen, Yana G.; Gerhardt, Holger; Mohr, Peter N. C.; Binkofski, Ferdinand C.; Koelsch, Stefan; Heekeren, Hauke R.

    2014-01-01

We often make decisions with uncertain consequences. The outcomes of the choices we make are usually not perfectly predictable but probabilistic, and the probabilities can be known or unknown. Probability judgments, i.e., the assessment of unknown probabilities, can be influenced by evoked emotional states. This suggests that the weighting of known probabilities in decision making under risk might also be influenced by incidental emotions, i.e., emotions unrelated to the judgments and decisions at issue. Probability weighting describes the transformation of probabilities into subjective decision weights for outcomes and is one of the central components of cumulative prospect theory (CPT) that determine risk attitudes. We hypothesized that music-evoked emotions would modulate risk attitudes in the gain domain and in particular probability weighting. Our experiment featured a within-subject design consisting of four conditions in separate sessions. In each condition, the 41 participants listened to a different kind of music (happy, sad, or no music, or sequences of random tones) and performed a repeated pairwise lottery choice task. We found that participants chose the riskier lotteries significantly more often in the "happy" than in the "sad" and "random tones" conditions. Via structural regressions based on CPT, we found that the observed changes in participants' choices can be attributed to changes in the elevation parameter of the probability weighting function: in the "happy" condition, participants showed significantly higher decision weights associated with the larger payoffs than in the "sad" and "random tones" conditions. Moreover, elevation correlated positively with self-reported music-evoked happiness. Thus, our experimental results provide evidence in favor of a causal effect of incidental happiness on risk attitudes that can be explained by changes in probability weighting. PMID:24432007
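
The elevation effect can be illustrated with a standard two-parameter weighting function. This is a generic sketch using the Goldstein-Einhorn form with hypothetical parameter values; the study's fitted CPT specification may differ.

```python
def weight(p, gamma=0.6, delta=0.8):
    """Goldstein-Einhorn probability weighting function:
    gamma controls curvature (the inverse-S shape),
    delta controls elevation (the parameter the study found to shift)."""
    num = delta * p**gamma
    return num / (num + (1.0 - p)**gamma)

# Raising the elevation parameter lifts the whole weighting curve,
# i.e. larger decision weights for the same stated probability --
# the pattern reported for the "happy" condition.
w_sad = weight(0.5, delta=0.8)
w_happy = weight(0.5, delta=1.2)
print(w_sad < w_happy)  # True
```

Here `delta` plays the role of the elevation parameter: a higher value mimics the uniformly higher decision weights observed under music-evoked happiness.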

  13. Music-evoked incidental happiness modulates probability weighting during risky lottery choices.

    PubMed

    Schulreich, Stefan; Heussen, Yana G; Gerhardt, Holger; Mohr, Peter N C; Binkofski, Ferdinand C; Koelsch, Stefan; Heekeren, Hauke R

    2014-01-07

We often make decisions with uncertain consequences. The outcomes of the choices we make are usually not perfectly predictable but probabilistic, and the probabilities can be known or unknown. Probability judgments, i.e., the assessment of unknown probabilities, can be influenced by evoked emotional states. This suggests that the weighting of known probabilities in decision making under risk might also be influenced by incidental emotions, i.e., emotions unrelated to the judgments and decisions at issue. Probability weighting describes the transformation of probabilities into subjective decision weights for outcomes and is one of the central components of cumulative prospect theory (CPT) that determine risk attitudes. We hypothesized that music-evoked emotions would modulate risk attitudes in the gain domain and in particular probability weighting. Our experiment featured a within-subject design consisting of four conditions in separate sessions. In each condition, the 41 participants listened to a different kind of music (happy, sad, or no music, or sequences of random tones) and performed a repeated pairwise lottery choice task. We found that participants chose the riskier lotteries significantly more often in the "happy" than in the "sad" and "random tones" conditions. Via structural regressions based on CPT, we found that the observed changes in participants' choices can be attributed to changes in the elevation parameter of the probability weighting function: in the "happy" condition, participants showed significantly higher decision weights associated with the larger payoffs than in the "sad" and "random tones" conditions. Moreover, elevation correlated positively with self-reported music-evoked happiness. Thus, our experimental results provide evidence in favor of a causal effect of incidental happiness on risk attitudes that can be explained by changes in probability weighting.

  14. CProb: a computational tool for conducting conditional probability analysis.

    PubMed

    Hollister, Jeffrey W; Walker, Henry A; Paul, John F

    2008-01-01

    Conditional probability is the probability of observing one event given that another event has occurred. In an environmental context, conditional probability helps to assess the association between an environmental contaminant (i.e., the stressor) and the ecological condition of a resource (i.e., the response). These analyses, when combined with controlled experiments and other methodologies, show great promise in evaluating ecological conditions from observational data and in defining water quality and other environmental criteria. Current applications of conditional probability analysis (CPA) are largely done via scripts or cumbersome spreadsheet routines, which may prove daunting to end-users and do not provide access to the underlying scripts. Combining spreadsheets with scripts eases computation through a familiar interface (i.e., Microsoft Excel) and creates a transparent process through full accessibility to the scripts. With this in mind, we developed a software application, CProb, as an Add-in for Microsoft Excel with R, R(D)com Server, and Visual Basic for Applications. CProb calculates and plots scatterplots, empirical cumulative distribution functions, and conditional probability. In this short communication, we describe CPA, our motivation for developing a CPA tool, and our implementation of CPA as a Microsoft Excel Add-in. Further, we illustrate the use of our software with two examples: a water quality example and a landscape example. CProb is freely available for download at http://www.epa.gov/emap/nca/html/regions/cprob.
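
The core computation behind CPA is simple to sketch. The following is an illustrative numpy version with synthetic data, not the CProb add-in itself: it estimates P(degraded condition | stressor exceeds a threshold) across a sweep of candidate thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired observations: a stressor level and a binary
# indicator of degraded ecological condition (1 = degraded), with the
# chance of degradation rising with the stressor.
stressor = rng.uniform(0, 10, size=500)
degraded = (rng.uniform(0, 10, size=500) < stressor).astype(int)

def conditional_probability(threshold):
    """P(degraded | stressor > threshold): the quantity CPA plots
    against a sweep of candidate thresholds."""
    exceed = stressor > threshold
    if not exceed.any():
        return float("nan")
    return degraded[exceed].mean()

# CPA sweeps thresholds and looks for where the conditional
# probability of degradation rises sharply.
thresholds = np.linspace(0, 9, 10)
curve = [conditional_probability(t) for t in thresholds]
```

Plotting `curve` against `thresholds` gives the conditional probability curve that CPA uses, alongside scatterplots and empirical CDFs, to suggest candidate criteria values.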

  15. Mapping landscape fire frequency for fire regime condition class

    Treesearch

    Dale A. Hamilton; Wendel J. Hann

    2015-01-01

    Fire Regime Condition Class (FRCC) is a departure index that compares the current amounts of the different vegetation succession classes, fire frequency, and fire severity to historic reference conditions. FRCC assessments have been widely used for evaluating ecosystem status in many areas of the U.S. in reports such as land use plans, fire management plans, project...

  16. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification.

    PubMed

    Soares, João V B; Leandro, Jorge J G; Cesar Júnior, Roberto M; Jelinek, Herbert F; Cree, Michael J

    2006-09-01

We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification while being able to model complex decision surfaces. The probability distributions are estimated from a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that of state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.
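
The classification rule can be sketched compactly. The code below is a simplified stand-in, using a single Gaussian per class rather than the paper's Gaussian mixtures, with made-up 2-D features and priors: each pixel's feature vector is assigned to the class maximizing the posterior P(class | x) ∝ p(x | class) P(class).

```python
import numpy as np

def log_gaussian(x, mean, cov):
    """Log density of a multivariate Gaussian at x."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def classify(x, params, priors):
    """Bayes rule with Gaussian class-conditional likelihoods:
    params is a list of (mean, cov) per class, priors gives P(class)."""
    scores = [log_gaussian(x, m, c) + np.log(p)
              for (m, c), p in zip(params, priors)]
    return int(np.argmax(scores))

# Two toy classes in a 2-D feature space (e.g. intensity plus one
# Gabor wavelet response); parameters are illustrative only.
params = [(np.array([0.0, 0.0]), np.eye(2)),   # nonvessel
          (np.array([3.0, 3.0]), np.eye(2))]   # vessel
priors = [0.9, 0.1]  # vessel pixels are rare, hence the unbalanced prior

print(classify(np.array([0.1, -0.2]), params, priors))  # 0
print(classify(np.array([2.9, 3.1]), params, priors))   # 1
```

In the paper's setting, each class-conditional density would instead be a Gaussian mixture fitted to manually labeled pixels, which allows more complex decision surfaces at the same per-pixel cost.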

  17. Eigenpairs of Toeplitz and Disordered Toeplitz Matrices with a Fisher-Hartwig Symbol

    NASA Astrophysics Data System (ADS)

    Movassagh, Ramis; Kadanoff, Leo P.

    2017-05-01

Toeplitz matrices have entries that are constant along diagonals. They model directed transport, are at the heart of correlation function calculations of the two-dimensional Ising model, and have applications in quantum information science. We derive their eigenvalues and eigenvectors when the symbol is singular Fisher-Hartwig. We then add diagonal disorder and study the resulting eigenpairs. We find that there is a "bulk" behavior that is well captured by second order perturbation theory of non-Hermitian matrices. The non-perturbative behavior falls into two classes: Runaways type I leave the complex-valued spectrum and become completely real because of eigenvalue attraction. Runaways type II leave the bulk and move very rapidly in response to perturbations; these have high condition numbers and can be predicted. Localization of the eigenvectors is then quantified using entropies and inverse participation ratios. Eigenvectors corresponding to Runaways type II are the most localized (i.e., super-exponential), whereas Runaways type I are less localized than their unperturbed counterparts and have most of their probability mass in the interior, with algebraic decays. The results are corroborated by applying free probability theory and various other supporting numerical studies.

  18. Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.

    PubMed

    Lin, Lanny; Goodrich, Michael A

    2014-12-01

During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environment elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions, and search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms significantly outperform the existing ones, yielding efficient paths with payoffs near the optimum.

  19. Age-0 Lost River sucker and shortnose sucker nearshore habitat use in Upper Klamath Lake, Oregon: A patch occupancy approach

    USGS Publications Warehouse

    Burdick, S.M.; Hendrixson, H.A.; VanderKooi, S.P.

    2008-01-01

We examined habitat use by age-0 Lost River suckers Deltistes luxatus and shortnose suckers Chasmistes brevirostris over six substrate classes and in vegetated and nonvegetated areas of Upper Klamath Lake, Oregon. We used a patch occupancy approach to model the effect of physical habitat and water quality conditions on habitat use. Our models accounted for potential inconsistencies in detection probability among sites and sampling occasions as a result of differences in fishing gear types and techniques, habitat characteristics, and age-0 fish size and abundance. Detection probability was greatest during mid- to late summer, when water temperatures were highest and age-0 suckers were the largest. The proportion of sites used by age-0 suckers was inversely related to depth (range = 0.4-3.0 m), particularly during late summer. Age-0 suckers were more likely to use habitats containing small substrate (<64 mm) and habitats with vegetation than those without vegetation. Relatively narrow ranges in dissolved oxygen, temperature, and pH prevented us from detecting effects of these water quality features on age-0 sucker nearshore habitat use.

  20. Egg production of turbot, Scophthalmus maximus, in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Nissling, Anders; Florin, Ann-Britt; Thorsen, Anders; Bergström, Ulf

    2013-11-01

In the brackish-water Baltic Sea, turbot spawn at ~6-9 psu along the coast and on offshore banks in ICES SD 24-29, with salinity influencing the reproductive success. The potential fecundity (the stock of vitellogenic oocytes in the pre-spawning ovary), egg size (diameter and dry weight of artificially fertilized 1-day-old eggs) and gonad dry weight were assessed for fish sampled in SD 25 and SD 28. Multiple regression analysis identified somatic weight, or total length in combination with Fulton's condition factor, as main predictors of fecundity and gonad dry weight, with stage of maturity (oocyte packing density or leading cohort) as an additional predictor. For egg size, somatic weight was identified as the main predictor, while otolith weight (a proxy for age) was an additional predictor. Univariate analysis using GLM revealed significantly higher fecundity and gonad dry weight for turbot from SD 28 (3378-3474 oocytes/g somatic weight) compared to those from SD 25 (2343 oocytes/g somatic weight), with no difference in egg size (1.05 ± 0.03 mm diameter and 46.8 ± 6.5 μg dry weight; mean ± sd). The difference in egg production matched egg survival probabilities in relation to salinity conditions, suggesting selection for higher fecundity as a consequence of poorer reproductive success at lower salinities. This supports the hypothesis of higher size-specific fecundity towards the limit of the distribution of a species as an adaptation to harsher environmental conditions and lower offspring survival probabilities. Within SD 28, comparisons were made between two major fishing areas targeting spawning aggregations and a marine protected area without fishing. The outcome was inconclusive and is discussed with respect to potential fishery-induced effects, effects of the salinity gradient, effects of specific year-classes, and effects of the maturation status of sampled fish.

  1. Probability in reasoning: a developmental test on conditionals.

    PubMed

    Barrouillet, Pierre; Gauffroy, Caroline

    2015-04-01

    Probabilistic theories have been claimed to constitute a new paradigm for the psychology of reasoning. A key assumption of these theories is captured by what they call the Equation: the hypothesis that the meaning of the conditional is probabilistic in nature and that the probability of If p then q is the conditional probability, such that P(if p then q) = P(q|p). Using the probabilistic truth-table task, in which participants are required to evaluate the probability of If p then q sentences, the present study explored the pervasiveness of the Equation across ages (from early adolescence to adulthood), types of conditionals (basic, causal, and inducements) and contents. The results reveal that the Equation is a late developmental achievement, endorsed only by a narrow majority of educated adults for certain types of conditionals depending on the content they involve. Age-related changes in evaluating the probability of all the conditionals studied closely mirror the development of truth-value judgements observed in previous studies with traditional truth-table tasks. We argue that our modified mental model theory can account for this development, and hence for the findings from the probability task, which consequently do not support the probabilistic approach to human reasoning over alternative theories. Copyright © 2014 Elsevier B.V. All rights reserved.
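
    The Equation can be illustrated with a toy frequency distribution (hypothetical counts, not from the study): under the probabilistic reading, P(if p then q) equals the conditional probability P(q|p), whereas a material-conditional reading counts every case except "p and not-q" as making the sentence true.

```python
from fractions import Fraction

# Hypothetical counts over the four truth-table cases for "if p then q"
# (numbers are illustrative, not taken from the study).
counts = {("p", "q"): 30, ("p", "not-q"): 10,
          ("not-p", "q"): 40, ("not-p", "not-q"): 20}
total = sum(counts.values())

# The Equation: P(if p then q) = P(q | p) = P(p and q) / P(p)
p_and_q = Fraction(counts[("p", "q")], total)
p = Fraction(counts[("p", "q")] + counts[("p", "not-q")], total)
conditional = p_and_q / p                                   # 3/4

# Material-conditional reading: true in every case except "p and not-q".
material = Fraction(total - counts[("p", "not-q")], total)  # 9/10

print(conditional, material)
```

    The two readings diverge whenever not-p cases are frequent, which is exactly what the probabilistic truth-table task exploits.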

  2. What Can Quantum Optics Say about Computational Complexity Theory?

    NASA Astrophysics Data System (ADS)

    Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.

    2015-02-01

    Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from the points of view of both quantum theory and computational complexity theory. We derive a general formula for calculating the output probabilities and, by considering input thermal states, show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. Approximating permanents of complex matrices in general is believed to be a #P-hard problem. However, we show that these permanents can be approximated by an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
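
    For small matrices the permanent can be computed exactly with Ryser's inclusion-exclusion formula; its O(2^n · n²) cost is a concrete reminder of why no efficient exact algorithm is expected for the #P-hard general case. A minimal pure-Python sketch (this is the textbook exact formula, not the paper's BPP^NP approximation algorithm):

```python
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^{|S|} * prod_i sum_{j in S} A[i][j].  Exponential time."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10 (permanent has no minus sign)
```

    Unlike the determinant, all terms enter with a positive sign, which is the structural reason the permanent resists the elimination tricks that make determinants easy.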

  3. Modeling and Optimization of Class-E Amplifier at Subnominal Condition in a Wireless Power Transfer System for Biomedical Implants.

    PubMed

    Liu, Hao; Shao, Qi; Fang, Xuelin

    2017-02-01

    For the class-E amplifier in a wireless power transfer (WPT) system, the design parameters are usually determined by the nominal model. However, this model neglects the conduction loss and voltage stress of the MOSFET and cannot guarantee the highest efficiency in a WPT system for biomedical implants. To solve this problem, this paper proposes a novel circuit model of the subnominal class-E amplifier. On a WPT platform for a capsule endoscope, the proposed model was validated and the relationship between the amplifier's design parameters and its characteristics was analyzed. At a given duty ratio, the design parameters giving the highest efficiency and a safe voltage stress are derived; this condition is called the 'optimal subnominal condition.' The amplifier's efficiency reaches a maximum of 99.3% at a duty ratio of 0.097. Furthermore, at a duty ratio of 0.5, the measured efficiency under the optimal subnominal condition reaches 90.8%, which is 15.2% higher than under the nominal condition. A WPT experiment with a receiving unit was then carried out to validate the feasibility of the optimized amplifier. In general, the design parameters of a class-E amplifier in a WPT system for biomedical implants can be determined with the optimization method proposed in this paper.

  4. Support Vector Machine Based Monitoring of Cardio-Cerebrovascular Reserve during Simulated Hemorrhage.

    PubMed

    van der Ster, Björn J P; Bennis, Frank C; Delhaas, Tammo; Westerhof, Berend E; Stok, Wim J; van Lieshout, Johannes J

    2017-01-01

    Introduction: In the initial phase of hypovolemic shock, mean blood pressure (BP) is maintained by sympathetically mediated vasoconstriction, rendering BP monitoring insensitive for the early detection of blood loss. Late detection can result in reduced tissue oxygenation and eventually cellular death. We hypothesized that a machine learning algorithm interpreting currently used and new hemodynamic parameters could facilitate the detection of impending hypovolemic shock. Method: In 42 young, healthy subjects (27 female; mean (sd) age: 24 (4) years), central blood volume (CBV) was progressively reduced by the application of -50 mmHg lower body negative pressure until the onset of pre-syncope. A support vector machine was trained to classify samples into normovolemia (class 0), initial CBV reduction (class 1) or advanced CBV reduction (class 2). Nine models making use of different features were computed to compare the sensitivity and specificity of different non-invasively derived hemodynamic signals. Model features included: volumetric hemodynamic parameters (stroke volume and cardiac output), BP curve dynamics, near-infrared spectroscopy determined cortical brain oxygenation, end-tidal carbon dioxide pressure, thoracic bio-impedance, and middle cerebral artery transcranial Doppler (TCD) blood flow velocity. Model performance was tested by quantifying the predictions with three methods: sensitivity and specificity, absolute error, and the log odds ratio of class 2 vs. class 0 probability estimates. Results: The combination of maximal sensitivity and specificity for classes 1 and 2 was found for the model comprising volumetric features (class 1: 0.73-0.98 and class 2: 0.56-0.96). The overall lowest model error was found for the models comprising TCD curve hemodynamics.
Using probability estimates, the best combination of sensitivity for class 1 (0.67) and specificity (0.87) was found for the model containing the pulse height derived from TCD cerebral blood flow velocity. The best combination for class 2 was found for the model with the volumetric features (0.72 and 0.91). Conclusion: The most sensitive models for the detection of advanced CBV reduction comprised features from volumetric parameters and from cerebral blood flow velocity hemodynamics. In a validated model of hemorrhage in humans, these parameters provide the best indication of the progression of central hypovolemia.
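
    The log-odds scoring of class 2 vs. class 0 probability estimates can be sketched as follows; the probability estimates, the decision threshold, and the toy labels below are all illustrative assumptions, standing in for a probability-calibrated classifier's output:

```python
import math

# Hypothetical per-sample class-probability estimates (class 0 = normovolemia,
# class 1 = initial CBV reduction, class 2 = advanced CBV reduction).
samples = [
    {"true": 0, "p": [0.80, 0.15, 0.05]},
    {"true": 2, "p": [0.10, 0.20, 0.70]},
    {"true": 2, "p": [0.30, 0.30, 0.40]},
    {"true": 0, "p": [0.55, 0.35, 0.10]},
]

def log_odds_2_vs_0(p, eps=1e-9):
    """Log odds of advanced CBV reduction (class 2) vs. normovolemia (class 0)."""
    return math.log((p[2] + eps) / (p[0] + eps))

# Classify as "advanced" when the log odds exceed 0, then score the rule.
tp = fn = tn = fp = 0
for s in samples:
    predicted_advanced = log_odds_2_vs_0(s["p"]) > 0.0
    if s["true"] == 2:
        tp += predicted_advanced
        fn += not predicted_advanced
    else:
        fp += predicted_advanced
        tn += not predicted_advanced

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)  # 1.0 1.0 on this well-separated toy set
```

    Sweeping the threshold over the log-odds values, rather than fixing it at 0, is what trades sensitivity against specificity.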

  5. Maternal eating disorder and infant diet. A latent class analysis based on the Norwegian Mother and Child Cohort Study (MoBa)

    PubMed Central

    Torgersen, Leila; Ystrom, Eivind; Siega-Riz, Anna Maria; Berg, Cecilie Knoph; Zerwas, Stephanie; Reichborn-Kjennerud, Ted; Bulik, Cynthia M.

    2015-01-01

    Knowledge of infant diet and feeding practices among children of mothers with eating disorders is essential to promote healthy eating in these children. This study compared the dietary patterns of 6-month-old children of mothers with anorexia nervosa, bulimia nervosa, binge eating disorder, and eating disorder not otherwise specified - purging subtype, with the diet of children of mothers with no eating disorder. The study was based on 53,879 mothers in the Norwegian Mother and Child Cohort Study (MoBa). Latent class analysis (LCA) was used to identify discrete latent classes of infant diet based on the mothers’ responses to questions about 16 food items. LCA identified five classes, characterized by primarily homemade vegetarian food (4% of the infants in the sample), homemade traditional food (8%), commercial infant cereals (35%), commercial jarred baby food (39%), and a mix of all food groups (11%). We then estimated the association between the latent dietary classes and maternal eating disorders using a multinomial logistic regression model. Compared with the referent group of mothers without an eating disorder, infants of mothers with bulimia nervosa had a lower probability of being in the homemade traditional food class than in the commercial jarred baby food class (O.R. 0.59; 95% CI 0.36–0.99). Infants of mothers with binge eating disorder had a lower probability of being in the homemade vegetarian class than in the commercial jarred baby food class (O.R. 0.77; 95% CI 0.60–0.99), but only before controlling for relevant confounders. Anorexia nervosa and eating disorder not otherwise specified - purging subtype were not significantly associated with any of the dietary classes.
These results suggest that in the general population, maternal eating disorders may to some extent influence the child’s diet as early as 6 months after birth; however, the extent to which these differences influence child health and development remains an area for further inquiry. PMID:25453594
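
    The odds ratios above come from a multinomial logistic model; the underlying quantity can be illustrated with a plain 2x2 computation. The counts below are hypothetical, chosen only to yield an OR near 0.59, and are not taken from the MoBa data:

```python
import math

# Hypothetical 2x2 counts: exposure = maternal bulimia nervosa; outcome =
# infant in the homemade-traditional-food class rather than the reference
# (commercial jarred baby food) class.
exposed_outcome, exposed_reference = 12, 41
unexposed_outcome, unexposed_reference = 4100, 8200

odds_exposed = exposed_outcome / exposed_reference
odds_unexposed = unexposed_outcome / unexposed_reference
odds_ratio = odds_exposed / odds_unexposed

# Approximate 95% CI on the log-odds scale (Woolf's method).
se = math.sqrt(sum(1 / n for n in (exposed_outcome, exposed_reference,
                                   unexposed_outcome, unexposed_reference)))
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))
```

    The multinomial model does the same comparison simultaneously for every dietary class against the reference class, while adjusting for confounders.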

  6. Anesthetic cardioprotection in clinical practice from proof-of-concept to clinical applications.

    PubMed

    Zaugg, Michael; Lucchinetti, Eliana; Behmanesh, Saeid; Clanachan, Alexander S

    2014-01-01

    In 2007, the American Heart Association (AHA), in their guidelines on Perioperative Cardiovascular Evaluation and Care for Noncardiac Surgery, recommended volatile anesthetics as the first choice for general anesthesia in hemodynamically stable patients at risk for myocardial ischemia (class IIa, level of evidence B). This recommendation was based on results from patients undergoing coronary artery bypass grafting (CABG) surgery and was thus subject to criticism. However, since a "good anesthetic" often resembles a piece of art in the complex perioperative environment, and is difficult to standardize tightly, it seems unlikely that large-scale randomized controlled trials in noncardiac surgical patients will be performed in the near future to tackle this question. There is growing evidence that ether-derived volatile anesthetics and opioids provide cardioprotection in patients undergoing CABG surgery. Since 2011, the American College of Cardiology Foundation/AHA have recommended a "volatile-based anesthesia" for these procedures (class IIa, level of evidence A). It is very likely that volatile anesthetics and opioids also protect the hearts of noncardiac surgical patients. However, age, diabetes and myocardial remodeling diminish the cardioprotective benefits of anesthetics. In patients at risk for perioperative cardiovascular complications, it is essential to abandon the use of "anti-conditioning" drugs (sulfonylureas and COX-2 inhibitors) and probably glitazones. There is significant interference in cardioprotection between sevoflurane and propofol, which should not be used concomitantly during anesthesia if possible. Any type of ischemic "conditioning" appears to exhibit markedly reduced protection, or loses protection completely, in the presence of volatile anesthetics and/or opioids.

  7. Size-sex variation in survival rates and abundance of pig frogs, Rana grylio, in northern Florida wetlands

    USGS Publications Warehouse

    Wood, K.V.; Nichols, J.D.; Percival, H.F.; Hines, J.E.

    1998-01-01

    During 1991-1993, we conducted capture-recapture studies on pig frogs, Rana grylio, at seven study locations in northcentral Florida. The resulting data were used to test hypotheses about variation in survival probability over different size-sex classes of pig frogs. We developed multistate capture-recapture models for the resulting data and used them to estimate survival rates and frog abundance. Tests provided strong evidence of survival differences among size-sex classes, with adult females showing the highest survival probabilities. Adult males and juvenile frogs had lower survival rates that were similar to each other. Adult females were more abundant than adult males at most locations on most sampling occasions. We recommend probabilistic capture-recapture models in general, and multistate models in particular, for robust estimation of demographic parameters in amphibian populations.

  8. Internal Medicine residents use heuristics to estimate disease probability.

    PubMed

    Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin

    2015-01-01

    Training in Bayesian reasoning may have limited impact on the accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics, then post-test probability estimates would be increased by non-discriminating clinical features or by a high anchor for a target condition. We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate the probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) use of the representativeness heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) use of the anchoring-and-adjustment heuristic, by providing a high or low anchor for the target condition. When presented with additional non-discriminating data, the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Our findings suggest that despite previous exposure to Bayesian reasoning, residents use heuristics, such as representativeness and anchoring-and-adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning, or perhaps residents in clinical practice use gist traces rather than precise probability estimates when diagnosing.
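
    The normative Bayesian benchmark the residents were compared against is a one-line odds update: post-test odds = pre-test odds × likelihood ratio. A non-discriminating feature has a likelihood ratio of 1 and should leave the estimate unchanged; the numbers below are illustrative, not from the study:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem in odds form: convert probability to odds,
    multiply by the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A non-discriminating finding (LR = 1) leaves the 20% estimate unchanged;
# a discriminating finding with LR = 8 raises it to about 67%.
print(round(post_test_probability(0.20, 1.0), 3))  # 0.2
print(round(post_test_probability(0.20, 8.0), 3))  # 0.667
```

    The heuristic biases in the study correspond to treating an LR-1 feature as if it carried evidential weight, or letting a stated anchor substitute for the pre-test probability.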

  9. Whitebark pine mortality related to white pine blister rust, mountain pine beetle outbreak, and water availability

    USGS Publications Warehouse

    Shanahan, Erin; Irvine, Kathryn M.; Thoma, David P.; Wilmoth, Siri K.; Ray, Andrew; Legg, Kristin; Shovic, Henry

    2016-01-01

    Whitebark pine (Pinus albicaulis) forests in the western United States have been adversely affected by an exotic pathogen (Cronartium ribicola, causal agent of white pine blister rust), insect outbreaks (Dendroctonus ponderosae, mountain pine beetle), and drought. We monitored individual trees from 2004 to 2013 and characterized stand-level biophysical conditions through a mountain pine beetle epidemic in the Greater Yellowstone Ecosystem. Specifically, we investigated associations between tree-level variables (duration and location of white pine blister rust infection, presence of mountain pine beetle, tree size, and potential interactions) and observations of individual whitebark pine tree mortality. Climate summaries indicated that cumulative growing degree days in 2006–2008 likely contributed to a regionwide outbreak of mountain pine beetle prior to the observed peak in whitebark mortality in 2009. We show that larger whitebark pine trees were preferentially attacked and killed by mountain pine beetle, resulting in a regionwide shift to smaller size-class trees. In addition, we found evidence that smaller size-class trees with white pine blister rust infection experienced higher mortality than larger trees. This latter finding suggests that in the coming decades white pine blister rust may become the most probable cause of whitebark pine mortality. Our findings offered no evidence of an interactive effect of mountain pine beetle and white pine blister rust infection on whitebark pine mortality in the Greater Yellowstone Ecosystem. Interestingly, the probability of mortality was lower for larger trees attacked by mountain pine beetle in stands with higher evapotranspiration.
Because evapotranspiration varies with climate and topoedaphic conditions across the region, we discuss the potential to use this improved understanding of biophysical influences on mortality to identify microrefugia that might contribute to successful whitebark pine conservation efforts. Using tree-level observations, the National Park Service-led Greater Yellowstone Interagency Whitebark Pine Long-term Monitoring Program provided important ecological insight on the size-dependent effects of white pine blister rust, mountain pine beetle, and water availability on whitebark pine mortality. This ongoing monitoring campaign will continue to offer observations that advance conservation in the Greater Yellowstone Ecosystem.

  10. Unsupervised learning of discriminative edge measures for vehicle matching between nonoverlapping cameras.

    PubMed

    Shan, Ying; Sawhney, Harpreet S; Kumar, Rakesh

    2008-04-01

    This paper proposes a novel unsupervised algorithm for learning discriminative features in the context of matching road vehicles between two non-overlapping cameras. The matching problem is formulated as a same-different classification problem: computing the probability that vehicle images from two distinct cameras show the same vehicle or different vehicles. We employ a novel measurement vector that consists of three independent edge-based measures and their associated robust measures, computed from a pair of aligned vehicle edge maps. The weight of each measure is determined by an unsupervised learning algorithm that optimally separates the same-different classes in the combined measurement space. This is achieved with a weak classification algorithm that automatically collects representative samples from the same-different classes, followed by a more discriminative classifier based on Fisher's Linear Discriminants and Gibbs sampling. The robustness of the match measures and the use of unsupervised discriminant analysis in the classification ensure that the proposed method performs consistently in the presence of missing/false features, temporally and spatially changing illumination conditions, and systematic misalignment caused by different camera configurations. Extensive experiments on real data from over 200 vehicles at different times of day demonstrate promising results.

  11. Bayesian Latent Class Analysis Tutorial.

    PubMed

    Li, Yuelin; Lord-Bessen, Jennifer; Shiyko, Mariya; Loeb, Rebecca

    2018-01-01

    This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes' Theorem and conditional probability and have experience writing computer programs in the statistical language R. The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed with a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled to complete a computationally more complex model. The details are described much more explicitly than is typical in elementary introductions to Bayesian modeling, so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied to a large, real-world data set and explained line-by-line. We outline the general steps for extending these considerations to other methodological applications. We conclude with suggestions for further reading.
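
    The tutorial's core technique, Gibbs sampling for a two-class LCA with binary items, can be sketched in a few dozen lines. The sketch below uses Python rather than the article's R, simulates its own data, and uses Beta(1, 1) priors throughout; every constant is an illustrative assumption, not taken from the tutorial:

```python
import random

random.seed(1)

# --- Simulate data from a 2-class latent class model (illustrative) -------
N, J = 400, 6                        # subjects, binary items
true_pi = 0.4                        # prevalence of class 1
true_p = [[0.2] * J, [0.8] * J]      # item-response probabilities by class
z_true = [int(random.random() < true_pi) for _ in range(N)]
y = [[int(random.random() < true_p[z][j]) for j in range(J)] for z in z_true]

# --- Gibbs sampler with Beta(1, 1) priors ---------------------------------
pi = 0.5
p = [[0.3] * J, [0.7] * J]           # asymmetric start to fix class labels
pi_draws = []
for it in range(500):
    # 1) Sample each latent class indicator from its full conditional.
    z = []
    for i in range(N):
        w = [1.0 - pi, pi]
        for c in (0, 1):
            for j in range(J):
                w[c] *= p[c][j] if y[i][j] else 1.0 - p[c][j]
        z.append(int(random.random() < w[1] / (w[0] + w[1])))
    # 2) Sample the class prevalence from its Beta full conditional.
    n1 = sum(z)
    pi = random.betavariate(1 + n1, 1 + N - n1)
    # 3) Sample each item-response probability from its Beta full conditional.
    for c in (0, 1):
        members = [i for i in range(N) if z[i] == c]
        for j in range(J):
            s = sum(y[i][j] for i in members)
            p[c][j] = random.betavariate(1 + s, 1 + len(members) - s)
    if it >= 100:                    # discard burn-in
        pi_draws.append(pi)

est_pi = sum(pi_draws) / len(pi_draws)
print(round(est_pi, 2))              # close to the simulating value 0.4
```

    Each step draws from a simple full conditional (a weighted coin for class membership, a Beta for each probability), which is exactly the "series of simpler calculations" the tutorial assembles into the full model.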

  12. Exploiting teleconnection indices for probabilistic forecasting of drought class transitions in Sicily region (Italy)

    NASA Astrophysics Data System (ADS)

    Bonaccorso, Brunella; Cancelliere, Antonino

    2015-04-01

    In the present study, two probabilistic models for short- to medium-term drought forecasting, able to include information provided by teleconnection indices, are proposed and applied to the Sicily region (Italy). Drought conditions are expressed in terms of the Standardized Precipitation-Evapotranspiration Index (SPEI) at different aggregation time scales. More specifically, a multivariate approach based on the normal distribution is developed in order to estimate 1) transition probabilities to future SPEI drought classes and 2) SPEI forecasts at a generic time horizon M, as functions of past values of SPEI and the selected teleconnection index. To this end, SPEI series at the 3, 4 and 6 aggregation time scales for the Sicily region are extracted from the Global SPEI database, SPEIbase, available at the Web repository of the Spanish National Research Council (http://sac.csic.es/spei/database.html), and averaged over the study area. In particular, SPEIbase v2.3, with a spatial resolution of 0.5° lat/lon and temporal coverage from January 1901 to December 2013, is used. A preliminary correlation analysis is carried out to investigate the link between the drought index and different teleconnection patterns, namely the North Atlantic Oscillation (NAO), the Scandinavian (SCA) and the East Atlantic-West Russia (EA-WR) patterns. The results of this analysis indicate a stronger influence of NAO on drought conditions in Sicily than of the other teleconnection indices. The proposed forecasting methodology is then applied, and the forecasting skill of the proposed models is quantitatively assessed through a simple score approach and performance indices. Results indicate that inclusion of the NAO index generally enhances model performance, confirming the suitability of the models for short- to medium-term forecasting of drought conditions.
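
    The core of such a model is the conditional distribution of a future SPEI value given the current one. A minimal sketch for the standardized bivariate-normal case, ignoring the teleconnection covariate; the lag correlation `rho`, the drought-class bounds, and the starting values are all illustrative assumptions:

```python
from statistics import NormalDist

def transition_probability(current, rho, lower, upper):
    """P(lower < SPEI_{t+M} <= upper | SPEI_t = current) under a standard
    bivariate normal model with lag-M correlation rho: the conditional
    distribution is N(rho * current, 1 - rho**2)."""
    cond = NormalDist(mu=rho * current, sigma=(1 - rho ** 2) ** 0.5)
    return cond.cdf(upper) - cond.cdf(lower)

# Probability of landing in a "moderate drought" class (-1.5 < SPEI <= -1.0)
# starting from near-normal (SPEI = -0.5) vs. already-dry (SPEI = -1.2) states.
print(round(transition_probability(-0.5, 0.7, -1.5, -1.0), 3))
print(round(transition_probability(-1.2, 0.7, -1.5, -1.0), 3))
```

    The paper's full formulation extends this to a multivariate normal that also conditions on the teleconnection index, shifting the conditional mean when, for example, NAO is strongly positive.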

  13. Adjustment capacity of maritime pine cambial activity in drought-prone environments.

    PubMed

    Vieira, Joana; Campelo, Filipe; Rossi, Sergio; Carvalho, Ana; Freitas, Helena; Nabais, Cristina

    2015-01-01

    Intra-annual density fluctuations (IADFs) are anatomical features formed in response to changes in environmental conditions within the growing season. These features are commonly observed in Mediterranean pines, being more frequent in younger and wider tree rings. However, the process behind IADF formation is still unknown, and weekly monitoring of cambial activity and wood formation would help fill this gap. Although studies describing cambial activity and wood formation have become frequent, this knowledge is still fragmentary in the Mediterranean region. Here we present data from the monitoring of cambial activity and wood formation in two diameter classes of maritime pine (Pinus pinaster Ait.), over two years, in order to test: (i) whether the differences in stem diameter in an even-aged stand were due to timings and/or rates of xylogenesis; (ii) whether IADFs were more common in large trees; and (iii) whether their formation is triggered by cambial resumption after the summer drought. Larger trees showed higher rates of cell production and longer growing seasons, due to an earlier start and later end of xylogenesis. In the drier winter, larger trees were more affected, which probably limited xylogenesis in the summer months. In both diameter classes a latewood IADF was formed in 2012 in response to late-September precipitation, confirming that the timing of the precipitation event after the summer drought is crucial in determining the resumption of cambial activity and whether or not an IADF is formed. This was the first time that the formation of a latewood IADF was monitored at a weekly time scale in maritime pine. The capacity of maritime pine to adjust cambial activity to current environmental conditions represents a valuable strategy under future climate change conditions.

  14. Evaluating Merger and Intersection of Equivalence Classes with One Member in Common

    ERIC Educational Resources Information Center

    MacKay, Harry A.; Wilkinson, Krista M.; Farrell, Colleen; Serna, Richard W.

    2011-01-01

    Sidman (1994) noted that the existence of a member that is common to more than one class may produce either class merger (union) or class intersection. A multiple-selection, matching-to-sample test was developed to examine the conditions under which these outcomes occur. Test trials each required three conditional discriminations involving…

  15. 49 CFR 176.116 - General stowage conditions for Class 1 (explosive) materials.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... stowage conditions for Class 1 (explosive) materials. (a) Heat and sources of ignition: (1) Class 1... on board. Stowage must be well away from all sources of heat, including steam pipes, heating coils... addition to this separation, there must be insulation to Class A60 standard as defined in 46 CFR 72.05-10(a...

  16. 49 CFR 176.116 - General stowage conditions for Class 1 (explosive) materials.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... stowage conditions for Class 1 (explosive) materials. (a) Heat and sources of ignition: (1) Class 1... on board. Stowage must be well away from all sources of heat, including steam pipes, heating coils... addition to this separation, there must be insulation to Class A60 standard as defined in 46 CFR 72.05-10(a...

  17. 49 CFR 176.116 - General stowage conditions for Class 1 (explosive) materials.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... stowage conditions for Class 1 (explosive) materials. (a) Heat and sources of ignition: (1) Class 1... on board. Stowage must be well away from all sources of heat, including steam pipes, heating coils... addition to this separation, there must be insulation to Class A60 standard as defined in 46 CFR 72.05-10(a...

  18. An Intergroup Perspective on Group Dynamics.

    DTIC Science & Technology

    1983-10-01

    students and faculty. Abstract generalizations and concrete applications about the material become part of the cognitive formations that student and faculty...grade mathematics class in a Japanese junior high school. A 13-year-old girl is called upon and is unable to answer a question. The Times reporter...then fell silent. Finally the teacher allowed her to sit down. In an American school, the student would probably have been placed in a slower class

  19. Recurrent novae

    NASA Technical Reports Server (NTRS)

    Hack, Margherita; Selvelli, Pierluigi

    1993-01-01

    Recurrent novae seem to be a rather inhomogeneous group: T CrB is a binary with an M III companion; U Sco probably has a late dwarf as companion. Three are fast novae; two are slow novae. Some of them appear to have normal chemical composition; others may present He and CNO excess. Some present a mass-loss rate that is lower by two orders of magnitude than that of classical novae. However, our sample is too small to say whether there are several classes of recurrent novae, which may be related to the various classes of classical novae, or whether the low mass-loss is a general property of the class or just a peculiarity of one member of the larger class of classical and recurrent novae.

  20. A satellite-asteroid mystery and a possible early flux of scattered C-class asteroids

    NASA Technical Reports Server (NTRS)

    Hartmann, William K.

    1987-01-01

    The C spectral class implied by the neutral spectra and low albedo of probably capture-originated satellites orbiting Saturn, Jupiter, and Mars is noted to contradict evidence that class-C objects are native only to the outer half of the asteroid belt. It is presently suggested that Jupiter resonances may have scattered a high flux of C-type objects out of the belt as well as throughout the primordial solar system, at the close of planet accretion, when extended atmospheres could figure in their capture. The largest scattered object fluxes come from the resonance regions primarily populated by C-class objects, lending support to the Pollack et al. (1979) capture scenario invoking extended protoatmospheres.

  1. 49 CFR 572.127 - Test conditions and instrumentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Forces—Class 1000; (ii) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation—Class 60 (if used). (3) Thorax: (i) Rib acceleration—Class 1000; (ii) Spine and pendulum accelerations—Class...

  2. 49 CFR 572.127 - Test conditions and instrumentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Forces—Class 1000; (ii) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation—Class 60 (if used). (3) Thorax: (i) Rib acceleration—Class 1000; (ii) Spine and pendulum accelerations—Class...

  3. 49 CFR 572.127 - Test conditions and instrumentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Forces—Class 1000; (ii) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation—Class 60 (if used). (3) Thorax: (i) Rib acceleration—Class 1000; (ii) Spine and pendulum accelerations—Class...

  4. 49 CFR 572.127 - Test conditions and instrumentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Forces—Class 1000; (ii) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation—Class 60 (if used). (3) Thorax: (i) Rib acceleration—Class 1000; (ii) Spine and pendulum accelerations—Class...

  5. 49 CFR 572.127 - Test conditions and instrumentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Forces—Class 1000; (ii) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation—Class 60 (if used). (3) Thorax: (i) Rib acceleration—Class 1000; (ii) Spine and pendulum accelerations—Class...

  6. Two weeks of additional standing balance circuit classes during inpatient rehabilitation are cost saving and effective: an economic evaluation.

    PubMed

    Treacy, Daniel; Howard, Kirsten; Hayes, Alison; Hassett, Leanne; Schurr, Karl; Sherrington, Catherine

    2018-01-01

    Among people admitted for inpatient rehabilitation, is usual care plus standing balance circuit classes more cost-effective than usual care alone? Cost-effectiveness study embedded within a randomised controlled trial with concealed allocation, assessor blinding and intention-to-treat analysis. 162 rehabilitation inpatients from a metropolitan hospital in Sydney, Australia. The experimental group received a 1-hour standing balance circuit class, delivered three times a week for 2 weeks, in addition to usual therapy. The circuit classes were supervised by one physiotherapist and one physiotherapy assistant for up to eight patients. The control group received usual therapy alone. Costs were estimated from routinely collected hospital use data in the 3 months after randomisation. The functional outcome measure was mobility, measured at 3 months using the Short Physical Performance Battery administered by a blinded assessor. An incremental analysis was conducted and the joint probability distribution of costs and outcomes was examined using bootstrapping. The median cost saving for the intervention group was AUD4,741 (95% CI 137 to 9,372) per participant; 94% of bootstraps showed that the intervention was both effective and cost saving. Standing balance circuit classes delivered for 2 weeks in addition to usual therapy resulted in decreased healthcare costs at 3 months in hospital inpatients admitted for rehabilitation. There is a high probability that this intervention is both cost saving and effective. ACTRN12611000412932. [Treacy D, Howard K, Hayes A, Hassett L, Schurr K, Sherrington C (2018) Two weeks of additional standing balance circuit classes during inpatient rehabilitation are cost saving and effective: an economic evaluation. Journal of Physiotherapy 64: 41-47]. Copyright © 2017 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.
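
    The bootstrap step, resampling per-participant (cost, outcome) pairs and counting the replicates that land in the "cost saving and effective" quadrant, can be sketched as follows. The data are synthetic, not the trial's, and the distribution parameters are illustrative assumptions:

```python
import random

random.seed(7)

# Hypothetical per-participant (cost difference, mobility difference) pairs,
# built so the intervention tends to save money and improve mobility.
pairs = [(random.gauss(-4700, 9000), random.gauss(0.5, 1.2)) for _ in range(162)]

def dominant_fraction(pairs, n_boot=2000):
    """Fraction of bootstrap replicates in which the intervention is both
    cost saving (mean cost difference < 0) and effective (mean effect > 0)."""
    hits = 0
    for _ in range(n_boot):
        sample = random.choices(pairs, k=len(pairs))  # resample with replacement
        mean_cost = sum(c for c, _ in sample) / len(sample)
        mean_effect = sum(e for _, e in sample) / len(sample)
        hits += (mean_cost < 0) and (mean_effect > 0)
    return hits / n_boot

frac = dominant_fraction(pairs)
print(frac)  # close to 1.0 for this well-separated synthetic data
```

    Plotting the bootstrap means on the cost-effectiveness plane, with the dominant quadrant shaded, is the standard way such a fraction (the trial's 94%) is reported.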

  7. Roots of angiosperm formins: The evolutionary history of plant FH2 domain-containing proteins

    PubMed Central

    2008-01-01

    Background Shuffling of modular protein domains is an important source of evolutionary innovation. Formins are a family of actin-organizing proteins that share a conserved FH2 domain but their overall domain architecture differs dramatically between opisthokonts (metazoans and fungi) and plants. We performed a phylogenomic analysis of formins in most eukaryotic kingdoms, aiming to reconstruct an evolutionary scenario that may have produced the current diversity of domain combinations with focus on the origin of the angiosperm formin architectures. Results The Rho GTPase-binding domain (GBD/FH3) reported from opisthokont and Dictyostelium formins was found in all lineages except plants, suggesting its ancestral character. Instead, mosses and vascular plants possess the two formin classes known from angiosperms: membrane-anchored Class I formins and Class II formins carrying a PTEN-like domain. PTEN-related domains were found also in stramenopile formins, where they have been probably acquired independently rather than by horizontal transfer, following a burst of domain rearrangements in the chromalveolate lineage. A novel RhoGAP-related domain was identified in some algal, moss and lycophyte (but not angiosperm) formins that define a specific branch (Class III) of the formin family. Conclusion We propose a scenario where formins underwent multiple domain rearrangements in several eukaryotic lineages, especially plants and chromalveolates. In plants this replaced GBD/FH3 by a probably inactive RhoGAP-like domain, preserving a formin-mediated association between (membrane-anchored) Rho GTPases and the actin cytoskeleton. Subsequent amplification of formin genes, possibly coincident with the expansion of plants to dry land, was followed by acquisition of alternative membrane attachment mechanisms present in extant Class I and Class II formins, allowing later loss of the RhoGAP-like domain-containing formins in angiosperms. PMID:18430232

  8. Summary of comprehensive systematic review: Rehabilitation in multiple sclerosis: Report of the Guideline Development, Dissemination, and Implementation Subcommittee of the American Academy of Neurology.

    PubMed

    Haselkorn, Jodie K; Hughes, Christina; Rae-Grant, Alex; Henson, Lily Jung; Bever, Christopher T; Lo, Albert C; Brown, Theodore R; Kraft, George H; Getchius, Thomas; Gronseth, Gary; Armstrong, Melissa J; Narayanaswami, Pushpa

    2015-11-24

    To systematically review the evidence regarding rehabilitation treatments in multiple sclerosis (MS). We systematically searched the literature (1970-2013) and classified articles using 2004 American Academy of Neurology criteria. This systematic review highlights the paucity of well-designed studies, which are needed to evaluate the available MS rehabilitative therapies. Weekly home/outpatient physical therapy (8 weeks) probably is effective for improving balance, disability, and gait (MS type unspecified, participants able to walk ≥5 meters) but probably is ineffective for improving upper extremity dexterity (1 Class I). Inpatient exercises (3 weeks) followed by home exercises (15 weeks) possibly are effective for improving disability (relapsing-remitting MS [RRMS], primary progressive MS [PPMS], secondary progressive MS [SPMS], Expanded Disability Status Scale [EDSS] 3.0-6.5) (1 Class II). Six weeks' worth of comprehensive multidisciplinary outpatient rehabilitation possibly is effective for improving disability/function (PPMS, SPMS, EDSS 4.0-8.0) (1 Class II). Motor and sensory balance training or motor balance training (3 weeks) possibly is effective for improving static and dynamic balance, and motor balance training (3 weeks) possibly is effective for improving static balance (RRMS, SPMS, PPMS) (1 Class II). Breathing-enhanced upper extremity exercises (6 weeks) possibly are effective for improving timed gait and forced expiratory volume in 1 second (RRMS, SPMS, PPMS, mean EDSS 4.5); this change is of unclear clinical significance. This technique possibly is ineffective for improving disability (1 Class II). Inspiratory muscle training (10 weeks) possibly improves maximal inspiratory pressure (RRMS, SPMS, PPMS, EDSS 2-6.5) (1 Class II). © 2015 American Academy of Neurology.

  9. Conditional, Time-Dependent Probabilities for Segmented Type-A Faults in the WGCEP UCERF 2

    USGS Publications Warehouse

    Field, Edward H.; Gupta, Vipin

    2008-01-01

    This appendix presents elastic-rebound-theory (ERT) motivated time-dependent probabilities, conditioned on the date of last earthquake, for the segmented type-A fault models of the 2007 Working Group on California Earthquake Probabilities (WGCEP). These probabilities are included as one option in the WGCEP's Uniform California Earthquake Rupture Forecast 2 (UCERF 2), with the other options being time-independent Poisson probabilities and an 'Empirical' model based on observed seismicity rate changes. A more general discussion of the pros and cons of all methods for computing time-dependent probabilities, as well as the justification of those chosen for UCERF 2, is given in the main body of this report (and the 'Empirical' model is also discussed in Appendix M). What this appendix addresses is the computation of conditional, time-dependent probabilities when both single- and multi-segment ruptures are included in the model. Computing conditional probabilities is relatively straightforward when a fault is assumed to obey strict segmentation in the sense that no multi-segment ruptures occur (e.g., WGCEP (1988, 1990); see Field (2007) for a review of all previous WGCEPs; from here we assume basic familiarity with conditional probability calculations). However, as we'll see below, the calculation is not straightforward when multi-segment ruptures are included, in essence because we are attempting to apply a point-process model to a non-point process. The next section gives a review and evaluation of the single- and multi-segment rupture probability-calculation methods used in the most recent statewide forecast for California (WGCEP UCERF 1; Petersen et al., 2007). We then present results for the methodology adopted here for UCERF 2. We finish with a discussion of issues and possible alternative approaches that could be explored and perhaps applied in the future. A fault-by-fault comparison of UCERF 2 probabilities with those of previous studies is given in the main part of this report.
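    For a single strictly segmented fault, the conditional probability described above is the renewal-model calculation P(event in the next ΔT years | no event in the T years since the last rupture). The sketch below uses a lognormal recurrence distribution as an illustrative stand-in (the WGCEP's actual calculations use other renewal distributions and the multi-segment machinery this appendix develops); the parameter values are hypothetical.

    ```python
    import math

    def lognormal_cdf(t, mean, aperiodicity):
        # CDF of a lognormal recurrence model, parameterized (illustratively)
        # by the mean recurrence interval and its coefficient of variation.
        sigma = math.sqrt(math.log(1 + aperiodicity ** 2))
        mu = math.log(mean) - 0.5 * sigma ** 2
        return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

    def conditional_probability(t_since, horizon, mean, aperiodicity):
        """P(event within `horizon` years | no event in the `t_since` years
        since the last earthquake), for a point-process renewal model."""
        F = lambda t: lognormal_cdf(t, mean, aperiodicity)
        return (F(t_since + horizon) - F(t_since)) / (1 - F(t_since))

    # Hypothetical segment: 200-year mean recurrence, 150 years elapsed.
    p30 = conditional_probability(150, 30, mean=200, aperiodicity=0.5)
    ```

    Under ERT the probability grows as elapsed time approaches the mean recurrence interval, which is what distinguishes this option from the time-independent Poisson model.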

  10. "A violation of the conditional independence assumption in the two-high-threshold Model of recognition memory": Correction to Chen, Starns, and Rotello (2015).

    PubMed

    2016-01-01

    Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. 
In contrast, a Gaussian signal detection model, which posits that the level of confidence that an item is "old" or "new" is a function of its continuous strength value, provided a good account of the data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
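    The conditional-independence assumption tested above can be made concrete with a minimal sketch of the standard 2HT response rule (parameter values are hypothetical; the full model also carries confidence-level mapping parameters not shown here):

    ```python
    def two_ht_old_rate(d_detect, g_old, is_target=True):
        """Hit or false-alarm rate under a standard two-high-threshold model.
        Detected targets always yield "old"; detected lures always yield
        "new"; undetected items are guessed "old" with probability g_old.
        Conditional independence: g_old is shared across item strengths --
        strength may change d_detect but not the state-response mapping."""
        if is_target:
            return d_detect + (1 - d_detect) * g_old
        return (1 - d_detect) * g_old

    # Strength affects only the detection probability, not the mapping:
    weak_hit = two_ht_old_rate(0.3, 0.5)                      # 0.65
    strong_hit = two_ht_old_rate(0.7, 0.5)                    # 0.85
    false_alarm = two_ht_old_rate(0.2, 0.5, is_target=False)  # 0.40
    ```

    The violation reported in the article amounts to showing that no single shared mapping parameter can fit the response rates of all item classes, including the "superstrong" items, simultaneously.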

  11. The Formalism of Generalized Contexts and Decay Processes

    NASA Astrophysics Data System (ADS)

    Losada, Marcelo; Laura, Roberto

    2013-04-01

    The formalism of generalized contexts for quantum histories is used to investigate the possibility of considering the survival probability as the probability of the no-decay property at a given time, conditional on the no-decay property at an earlier time. A negative result is found for an isolated system. The inclusion of two quantum measurement instruments at two different times makes it possible to interpret the survival probability as a conditional probability of the whole system.
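    The classical analogue of the conditional reading discussed above is elementary: for an exponential decay law, P(no decay at t₂ | no decay at t₁) is the ratio of survival probabilities. This sketch illustrates only that classical identity, not the quantum-histories formalism itself:

    ```python
    import math

    def survival(t, gamma):
        # Survival (no-decay) probability under an exponential decay law.
        return math.exp(-gamma * t)

    def conditional_no_decay(t1, t2, gamma):
        """P(no decay at t2 | no decay at t1) for t2 > t1, i.e. the ratio
        S(t2)/S(t1) -- the reading the paper shows requires measurements
        at both times to be well defined quantum mechanically."""
        return survival(t2, gamma) / survival(t1, gamma)

    p = conditional_no_decay(1.0, 3.0, gamma=0.5)  # equals exp(-1.0)
    ```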

  12. Factors influencing particulate lipid production in the East Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Gašparović, B.; Frka, S.; Koch, B. P.; Zhu, Z. Y.; Bracher, A.; Lechtenfeld, O. J.; Neogi, S. B.; Lara, R. J.; Kattner, G.

    2014-07-01

    Extensive analyses of particulate lipids and lipid classes were conducted to gain insight into lipid production and related factors along the biogeochemical provinces of the Eastern Atlantic Ocean. Data are supported by particulate organic carbon (POC), chlorophyll a (Chl a), phaeopigments, Chl a concentrations and carbon content of eukaryotic micro-, nano- and picophytoplankton, including cell abundances for the latter two and for cyanobacteria and prokaryotic heterotrophs. We focused on the productive ocean surface (2 m depth) and the deep Chl a maximum (DCM). Samples from the deep ocean provided information about the relative reactivity and preservation potential of particular lipid classes. Surface and DCM particulate lipid concentrations (3.5-29.4 μg L-1) were higher than in samples from deep waters (3.2-9.3 μg L-1), where an increased contribution to the POC pool was observed. The highest lipid concentrations were measured in high-latitude temperate waters and in the North Atlantic Tropical Gyral Province (13-25°N). Factors responsible for the enhanced lipid synthesis in the eastern Atlantic appeared to be phytoplankton size (micro, nano, pico) and the low nutrient status, with microphytoplankton having the most pronounced influence in the surface and eukaryotic nano- and picophytoplankton in the DCM layer. Higher lipid to Chl a ratios suggest enhanced lipid biosynthesis in the nutrient-poorer regions. The various lipid classes pointed to possible mechanisms of phytoplankton adaptation to the nutritional conditions. Thus, it is likely that adaptation comprises the replacement of membrane phospholipids by non-phosphorus-containing glycolipids under low-phosphorus conditions. The qualitative and quantitative lipid compositions revealed that phospholipids were the most degradable lipids, and their occurrence decreased with increasing depth.
In contrast, wax esters, possibly originating from zooplankton, survived downward transport probably due to the fast sinking rate of particles (fecal pellets). The important contribution of glycolipids in deep waters reflected their relatively stable nature and degradation resistance. A lipid-based proxy for the lipid degradative state (Lipolysis Index) suggests that many lipid classes were quite resistant to degradation even in the deep ocean.

  13. Clustering determines the dynamics of complex contagions in multiplex networks

    NASA Astrophysics Data System (ADS)

    Zhuang, Yong; Arenas, Alex; Yaǧan, Osman

    2017-01-01

    We present the mathematical analysis of generalized complex contagions in a class of clustered multiplex networks. The model is intended to understand the spread of influence, or any other spreading process implying a threshold dynamics, in setups of interconnected networks with significant clustering. The contagion is assumed to be general enough to account for a content-dependent linear threshold model, where each link type has a different weight (for spreading influence) that may depend on the content (e.g., product, rumor, political view) that is being spread. Using the generating functions formalism, we determine the conditions, probability, and expected size of the emergent global cascades. This analysis provides a generalization of previous approaches and is especially useful in problems related to spreading and percolation. The results present nontrivial dependencies between the clustering coefficient of the networks and their average degree. In particular, several phase transitions are shown to occur depending on these descriptors. Generally speaking, our findings reveal that increasing clustering decreases the probability and size of global cascades; however, this tendency changes with the average degree. There exists a certain average degree beyond which clustering favors the probability and size of the contagion. By comparing the dynamics of complex contagions over multiplex networks and their monoplex projections, we demonstrate that ignoring link types and aggregating network layers may lead to inaccurate conclusions about contagion dynamics, particularly when the correlation of degrees between layers is high.
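    The node-level activation rule of a content-dependent linear threshold model, as described above, can be sketched as follows. The link-type names and weights are hypothetical; the paper's full analysis (generating functions, cascade conditions) is not reproduced here.

    ```python
    def activates(neighbor_states, link_weights, threshold):
        """Content-dependent linear threshold rule: a node activates when
        the weighted fraction of its active neighbors reaches its threshold,
        with a separate influence weight per link type (network layer).
        neighbor_states is a list of (is_active, link_type) pairs."""
        total = sum(link_weights[t] for _, t in neighbor_states)
        active = sum(link_weights[t] for a, t in neighbor_states if a)
        return total > 0 and active / total >= threshold

    # Two hypothetical link types with content-dependent weights:
    w = {"layer_a": 0.5, "layer_b": 1.0}
    nbrs = [(True, "layer_b"), (False, "layer_a"), (False, "layer_a")]
    fires = activates(nbrs, w, threshold=0.5)  # 1.0 / 2.0 = 0.5 -> activates
    ```

    Aggregating the two layers into a monoplex network would force a single weight on all links, which is exactly the simplification the paper shows can distort cascade predictions.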

  14. 49 CFR 572.146 - Test conditions and instrumentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Head acceleration—Class 1000 (2) Neck (i) Force—Class 1000 (ii) Moments—Class 600 (iii) Pendulum... acceleration—Class 1000 (ii) Spine and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv...

  15. 49 CFR 572.146 - Test conditions and instrumentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Head acceleration—Class 1000 (2) Neck (i) Force—Class 1000 (ii) Moments—Class 600 (iii) Pendulum... acceleration—Class 1000 (ii) Spine and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv...

  16. 49 CFR 572.146 - Test conditions and instrumentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Head acceleration—Class 1000 (2) Neck (i) Force—Class 1000 (ii) Moments—Class 600 (iii) Pendulum... acceleration—Class 1000 (ii) Spine and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv...

  17. 49 CFR 572.146 - Test conditions and instrumentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Head acceleration—Class 1000 (2) Neck (i) Force—Class 1000 (ii) Moments—Class 600 (iii) Pendulum... acceleration—Class 1000 (ii) Spine and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv...

  18. 49 CFR 572.146 - Test conditions and instrumentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Head acceleration—Class 1000 (2) Neck (i) Force—Class 1000 (ii) Moments—Class 600 (iii) Pendulum... acceleration—Class 1000 (ii) Spine and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv...

  19. Nematode Damage Functions: The Problems of Experimental and Sampling Error

    PubMed Central

    Ferris, H.

    1984-01-01

    The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
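    The risk-analysis idea above — multiplying the probability from the damage function by the probability attached to the population estimate — can be sketched as a simple mixture over population classes. All numbers below are hypothetical placeholders, not values from the study:

    ```python
    def loss_risk(p_loss_given_class, p_class):
        """Probability of exceeding a yield-loss threshold: the probability
        that the true population falls in each density class (from the
        sampling-error model) times the probability of exceeding the loss
        threshold at that density (from the damage function), summed."""
        return sum(pc * pl for pc, pl in zip(p_class, p_loss_given_class))

    # Three geometric population classes (e.g., low / medium / high):
    p_class = [0.5, 0.3, 0.2]    # hypothetical sampling-based class probabilities
    p_loss = [0.05, 0.40, 0.90]  # hypothetical damage-function probabilities
    risk = loss_risk(p_loss, p_class)  # 0.025 + 0.12 + 0.18 = 0.325
    ```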

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
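    The scale invariance of the reciprocal distribution singled out above is easy to verify directly: under p(x) = 1/(x ln(b/a)) on [a, b], rescaling an interval by any factor (while staying inside the support) leaves its probability unchanged. A minimal check:

    ```python
    import math

    def reciprocal_prob(x0, x1, a, b):
        """P(x0 <= X <= x1) under the reciprocal distribution
        p(x) = 1 / (x * ln(b/a)) supported on [a, b]."""
        return math.log(x1 / x0) / math.log(b / a)

    # Rescaling the interval [2, 4] by c = 10 gives the same probability --
    # the scale-invariance property the paper derives:
    p1 = reciprocal_prob(2, 4, 1, 1000)
    p2 = reciprocal_prob(20, 40, 1, 1000)
    ```

    Intervals of equal logarithmic width get equal probability, which is why the reciprocal distribution is the fixed point of the scale-change construction.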

  1. EAGLES NEST WILDERNESS, COLORADO.

    USGS Publications Warehouse

    Tweto, Ogden; Williams, Frank E.

    1984-01-01

    On the basis of a geologic and mineral survey, a primitive area that constitutes the nucleus of the Eagles Nest Wilderness, Colorado was appraised to offer little promise for the occurrence of mineral or energy resources. Among the additional areas later incorporated in the wilderness, only a strip near a major fault west and northwest of Frisco and Dillon is classed as having probable mineral-resource potential. If mineral deposits exist, they probably are of the silver-lead-zinc or fluorspar types.

  2. HIGH UINTAS PRIMITIVE AREA, UTAH.

    USGS Publications Warehouse

    Crittenden, Max D.; Sheridan, Michael J.

    1984-01-01

    Mineral surveys in the High Uintas Primitive Area, Utah and the additions subsequently proposed concluded that the area has little promise for mineral resources. Of the areas around the fringes, a strip along the north flank fault can be classed as having probable energy-resource potential for oil and gas. The oil and gas potential could be tested by additional seismic studies followed by drilling. Much of the necessary information probably could be obtained without drilling within the primitive area itself.

  3. Relative tectonics and debris flow hazards in the Beijing mountain area from DEM-derived geomorphic indices and drainage analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Weiming; Wang, Nan; Zhao, Min; Zhao, Shangmin

    2016-03-01

    The geomorphic setting of the tectonically active area around Beijing is a result of complex interactions involving Yanshan neotectonic movements and processes of erosion and deposition. The Beijing Mountain study area contains the junction of two mountain ranges (the Yanshan Mountains and the Taihang Mountains). Tectonic activity has significantly influenced the drainage system and the geomorphic situation in the area, leading to a high probability of the development of debris flows, one of the major abrupt geological hazards in the region. Based on 30-m-resolution ASTER GDEM data, a total of 752 drainage basins were extracted using ArcGIS software. A total of 705 debris flow valleys were visually interpreted from ALOS satellite images and published documents. Seven geomorphic indices were calculated for each basin: the relief amplitude, the hypsometric integral, the stream length gradient, the basin shape index, the fractal dimension, the asymmetry factor, and the ratio of valley floor width to valley height. These geomorphic indices were divided into five classes, and the ratio of the number of debris flow valleys to the number of drainage basins was computed and analyzed for every class of each index. Average class values of the seven indices were used to derive an index of relative active tectonics (IRAT). The ratio of the number of debris flow valleys to the number of drainage basins was computed for every class of IRAT. The degree of probable risk was then defined from the IRAT classes. Finally, the debris flow hazard was evaluated for each drainage basin based on the combined effect of probable risk level and occurrence frequency of debris flows. The result showed a good correspondence between IRAT classes and the ratio of the number of debris flow valleys to the number of drainage basins.
    Approximately 65% of the drainage basins in which debris flow valleys occurred are at a high risk level, whereas 43% of the basins without debris flow valleys are at a high risk level. A comparison with results from past studies demonstrated that the accuracy of these findings is greater than 85%, indicating that the basin topography created by rapid tectonic deformation is more favorable for debris flows.
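    As described above, the IRAT for a basin is derived by averaging the class values assigned to the seven geomorphic indices. A minimal sketch (class values below are hypothetical, and the paper's class ordering and risk cutoffs are not reproduced):

    ```python
    def irat(class_values):
        """Index of relative active tectonics: the mean of the class
        values (1-5) assigned to each of the seven geomorphic indices
        for one drainage basin."""
        return sum(class_values) / len(class_values)

    # Hypothetical basin, each of the seven indices classed 1-5:
    basin_classes = [1, 2, 1, 3, 2, 2, 1]
    score = irat(basin_classes)  # 12 / 7
    ```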

  4. THE SEMIGROUP OF METRIC MEASURE SPACES AND ITS INFINITELY DIVISIBLE PROBABILITY MEASURES

    PubMed Central

    EVANS, STEVEN N.; MOLCHANOV, ILYA

    2015-01-01

    A metric measure space is a complete, separable metric space equipped with a probability measure that has full support. Two such spaces are equivalent if they are isometric as metric spaces via an isometry that maps the probability measure on the first space to the probability measure on the second. The resulting set of equivalence classes can be metrized with the Gromov–Prohorov metric of Greven, Pfaffelhuber and Winter. We consider the natural binary operation ⊞ on this space that takes two metric measure spaces and forms their Cartesian product equipped with the sum of the two metrics and the product of the two probability measures. We show that the metric measure spaces equipped with this operation form a cancellative, commutative, Polish semigroup with a translation invariant metric. There is an explicit family of continuous semicharacters that is extremely useful for, inter alia, establishing that there are no infinitely divisible elements and that each element has a unique factorization into prime elements. We investigate the interaction between the semigroup structure and the natural action of the positive real numbers on this space that arises from scaling the metric. For example, we show that for any given positive real numbers a, b, c the trivial space is the only space X that satisfies aX ⊞ bX = cX. We establish that there is no analogue of the law of large numbers: if X1, X2, … is an independent, identically distributed sequence of random spaces, then no subsequence of (1/n) ⊞_{k=1}^{n} X_k converges in distribution unless each Xk is almost surely equal to the trivial space. We characterize the infinitely divisible probability measures and the Lévy processes on this semigroup, characterize the stable probability measures, and establish a counterpart of the LePage representation for the latter class. PMID:28065980

  5. Using dynamic geometry software for teaching conditional probability with area-proportional Venn diagrams

    NASA Astrophysics Data System (ADS)

    Radakovic, Nenad; McDougall, Douglas

    2012-10-01

    This classroom note illustrates how dynamic visualization can be used to teach conditional probability and Bayes' theorem. There are two features of the visualization that make it an ideal pedagogical tool in probability instruction. The first feature is the use of area-proportional Venn diagrams that, along with showing qualitative relationships, describe the quantitative relationship between two sets. The second feature is the slider and animation component of dynamic geometry software enabling students to observe how the change in the base rate of an event influences conditional probability. A hypothetical instructional sequence using a well-known breast cancer example is described.
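    The quantitative relationship that the area-proportional Venn diagram depicts is Bayes' theorem over two joint events: P(disease and positive) and P(no disease and positive). A minimal calculation, using illustrative numbers in the spirit of the classic mammography example rather than the paper's own figures:

    ```python
    def posterior(prior, sensitivity, false_positive_rate):
        """Bayes' theorem: P(disease | positive test). The two terms in
        the denominator correspond to the two overlapping regions whose
        areas the Venn diagram draws to scale."""
        true_pos = prior * sensitivity
        false_pos = (1 - prior) * false_positive_rate
        return true_pos / (true_pos + false_pos)

    # Illustrative values: 1% base rate, 80% sensitivity, 9.6% false positives.
    p = posterior(0.01, 0.80, 0.096)  # roughly 0.078
    ```

    Dragging the base-rate slider in the dynamic-geometry version amounts to varying `prior` and watching the two areas, and hence the posterior, change.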

  6. 49 CFR 572.137 - Test conditions and instrumentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...—Class 1000 (2) Neck: (i) Forces—Class 1000 (ii) Moments—Class 600 (iii) Pendulum acceleration—Class 180... and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv) Forces—Class 1000 (v...—Class 180 (6) Femur forces and knee pendulum—Class 600 (n) Coordinate signs for instrumentation polarity...

  7. 49 CFR 572.137 - Test conditions and instrumentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...—Class 1000 (2) Neck: (i) Forces—Class 1000 (ii) Moments—Class 600 (iii) Pendulum acceleration—Class 180... and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv) Forces—Class 1000 (v...—Class 180 (6) Femur forces and knee pendulum—Class 600 (n) Coordinate signs for instrumentation polarity...

  8. 49 CFR 572.137 - Test conditions and instrumentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...—Class 1000 (2) Neck: (i) Forces—Class 1000 (ii) Moments—Class 600 (iii) Pendulum acceleration—Class 180... and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv) Forces—Class 1000 (v...—Class 180 (6) Femur forces and knee pendulum—Class 600 (n) Coordinate signs for instrumentation polarity...

  9. 49 CFR 572.137 - Test conditions and instrumentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...—Class 1000 (2) Neck: (i) Forces—Class 1000 (ii) Moments—Class 600 (iii) Pendulum acceleration—Class 180... and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv) Forces—Class 1000 (v...—Class 180 (6) Femur forces and knee pendulum—Class 600 (n) Coordinate signs for instrumentation polarity...

  10. 49 CFR 572.137 - Test conditions and instrumentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...—Class 1000 (2) Neck: (i) Forces—Class 1000 (ii) Moments—Class 600 (iii) Pendulum acceleration—Class 180... and pendulum accelerations—Class 180 (iii) Sternum deflection—Class 600 (iv) Forces—Class 1000 (v...—Class 180 (6) Femur forces and knee pendulum—Class 600 (n) Coordinate signs for instrumentation polarity...

  11. The World According to de Finetti: On de Finetti's Theory of Probability and Its Application to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Berkovitz, Joseph

    Bruno de Finetti is one of the founding fathers of the subjectivist school of probability, where probabilities are interpreted as rational degrees of belief. His work on the relation between the theorems of probability and rationality is among the cornerstones of modern subjective probability theory. De Finetti maintained that rationality requires that degrees of belief be coherent, and he argued that the whole of probability theory could be derived from these coherence conditions. De Finetti's interpretation of probability has been highly influential in science. This paper focuses on the application of this interpretation to quantum mechanics. We argue that de Finetti held that the coherence conditions of degrees of belief in events depend on their verifiability. Accordingly, the standard coherence conditions of degrees of belief that are familiar from the literature on subjective probability only apply to degrees of belief in events which could (in principle) be jointly verified; the coherence conditions of degrees of belief in events that cannot be jointly verified are weaker. While the most obvious explanation of de Finetti's verificationism is the influence of positivism, we argue that it could be motivated by the radical subjectivist and instrumental nature of probability in his interpretation; for, as it turns out, in this interpretation it is difficult to make sense of the idea of coherent degrees of belief in, and accordingly probabilities of, unverifiable events. We then consider the application of this interpretation to quantum mechanics, concentrating on the Einstein-Podolsky-Rosen experiment and Bell's theorem.

  12. Birth/birth-death processes and their computable transition probabilities with biological applications.

    PubMed

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2018-03-01

    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
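    For intuition about the finite-time transition probabilities discussed above, here is a brute-force baseline of the kind the paper's continued-fraction method is designed to avoid: uniformization on a truncated state space for a simple linear birth-death process. The function name, truncation, and parameter values are illustrative assumptions, not the paper's algorithm or its bivariate birth/birth-death model.

    ```python
    import math

    def bd_transition_probs(lam, mu, start, t, max_state=200, terms=400):
        """Transition probabilities P(N(t) = k | N(0) = start) for a linear
        birth-death process (birth rate lam*n, death rate mu*n) computed by
        uniformization: exp(Qt) expanded as a Poisson mixture of powers of
        the uniformized one-step matrix, on states 0..max_state."""
        rate = (lam + mu) * max_state          # uniformization constant

        def step(p):
            # One step of the uniformized discrete-time chain.
            q = [0.0] * (max_state + 1)
            for n, pn in enumerate(p):
                if pn == 0.0:
                    continue
                up, down = lam * n, mu * n
                q[n] += pn * (1.0 - (up + down) / rate)
                if n + 1 <= max_state:
                    q[n + 1] += pn * up / rate
                if n - 1 >= 0:
                    q[n - 1] += pn * down / rate
            return q

        probs = [0.0] * (max_state + 1)
        p_vec = [0.0] * (max_state + 1)
        p_vec[start] = 1.0
        poisson = math.exp(-rate * t)          # Poisson(rate*t) weight at k=0
        for k in range(terms):
            probs = [a + poisson * b for a, b in zip(probs, p_vec)]
            p_vec = step(p_vec)
            poisson *= rate * t / (k + 1)
        return probs

    p = bd_transition_probs(lam=0.5, mu=0.3, start=5, t=1.0)
    ```

    The cost grows with the uniformization rate and the state-space truncation, which is exactly why closed-form Laplace-transform methods become attractive for larger or bivariate systems.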

  13. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error in general yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that looks only at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
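    The first method above — averaging the posterior estimates of several classifiers — can be sketched with a plug-in error estimate. This is an illustration of the averaging idea under hypothetical classifier outputs, not the paper's exact estimator:

    ```python
    def bayes_error_estimate(posteriors_per_classifier):
        """Plug-in Bayes-error estimate from an ensemble: average the
        posterior vectors of all classifiers for each sample, then take
        the mean probability that the most likely class is wrong.
        posteriors_per_classifier[m][i] is classifier m's posterior
        vector for sample i."""
        n_clf = len(posteriors_per_classifier)
        n_samples = len(posteriors_per_classifier[0])
        err = 0.0
        for i in range(n_samples):
            n_classes = len(posteriors_per_classifier[0][i])
            avg = [sum(clf[i][c] for clf in posteriors_per_classifier) / n_clf
                   for c in range(n_classes)]
            err += 1.0 - max(avg)
        return err / n_samples

    # Two hypothetical classifiers, three samples, two classes:
    clf_a = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
    clf_b = [[0.8, 0.2], [0.4, 0.6], [0.3, 0.7]]
    est = bayes_error_estimate([clf_a, clf_b])  # (0.15 + 0.5 + 0.25) / 3
    ```

    Averaging reduces the variance of each posterior estimate, which is why the combined estimate can track the true Bayes rate more closely than any single classifier's.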

  14. Internal Medicine residents use heuristics to estimate disease probability

    PubMed Central

    Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin

    2015-01-01

    Background Training in Bayesian reasoning may have limited impact on accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. Method We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) using a representative heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) using anchoring with adjustment heuristic, by providing a high or low anchor for the target condition. Results When presented with additional non-discriminating data the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Conclusions Our findings suggest that despite previous exposure to the use of Bayesian reasoning, residents use heuristics, such as the representative heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning or perhaps residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing. PMID:27004080

  15. Nowcasting of Low-Visibility Procedure States with Ordered Logistic Regression at Vienna International Airport

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Low-visibility conditions have a large impact on aviation safety and the economic efficiency of airports and airlines. To support decision makers, we develop a statistical probabilistic nowcasting tool for the occurrence of capacity-reducing operations related to low visibility. The probabilities of four different low-visibility classes are predicted with an ordered logistic regression model based on time series of meteorological point measurements. Potential predictor variables for the statistical models are visibility, humidity, temperature and wind measurements at several measurement sites. A stepwise variable selection method indicates that visibility and humidity measurements are the most important model inputs. The forecasts are tested at 30-minute intervals up to two hours ahead, which is a sufficient time span for tactical planning at Vienna Airport. The ordered logistic regression models outperform persistence and are competitive with human forecasters.
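An ordered (proportional-odds) logistic regression of the kind described turns a linear predictor and ordered cutpoints into class probabilities by differencing cumulative logits. A minimal sketch; the cutpoints and predictor value are hypothetical, not the fitted Vienna model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(x_beta, cutpoints):
    """Proportional-odds model: P(Y <= j | x) = sigmoid(theta_j - x'beta);
    individual class probabilities follow by differencing the cumulative ones."""
    cum = [sigmoid(theta - x_beta) for theta in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for c in cum:
        probs.append(c - prev)
        prev = c
    return probs

# hypothetical cutpoints separating four ordered low-visibility classes
p = ordered_logit_probs(x_beta=0.5, cutpoints=[-2.0, 0.0, 2.0])
print([round(v, 3) for v in p])  # four probabilities that sum to 1
```

Three cutpoints yield four classes; a single coefficient vector shifts all cumulative probabilities together, which is the proportional-odds assumption.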

  16. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with simulation study and real data analysis.

  17. Stability of uncertain systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Blankenship, G. L.

    1971-01-01

    The asymptotic properties of feedback systems containing uncertain parameters and subjected to stochastic perturbations are discussed. The approach is functional analytic in flavor and thereby avoids the use of Markov techniques and the auxiliary Lyapunov functionals characteristic of existing work in this area. The results are given for the probability distributions of the accessible signals in the system and are proved using the Prohorov theory of the convergence of measures. For general nonlinear systems, a result similar to the small loop-gain theorem of deterministic stability theory is given. Boundedness is a property of the induced distributions of the signals and not the usual notion of boundedness in norm. For the special class of feedback systems formed by the cascade of a white noise process, a sector nonlinearity, and a convolution operator, conditions are given to ensure the total boundedness of the overall feedback system.

  18. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of F distribution and are finite in number up to orthogonality. We generalize these polynomials for fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations, recurrence relations are derived.

  19. LipidFrag: Improving reliability of in silico fragmentation of lipids and application to the Caenorhabditis elegans lipidome

    PubMed Central

    Neumann, Steffen; Schmitt-Kopplin, Philippe

    2017-01-01

    Lipid identification is a major bottleneck in high-throughput lipidomics studies. However, tools for the analysis of lipid tandem MS spectra are rather limited. While the comparison against spectra in reference libraries is one of the preferred methods, these libraries are far from being complete. In order to improve identification rates, the in silico fragmentation tool MetFrag was combined with Lipid Maps and lipid-class specific classifiers which calculate probabilities for lipid class assignments. The resulting LipidFrag workflow was trained and evaluated on different commercially available lipid standard materials, measured with data dependent UPLC-Q-ToF-MS/MS acquisition. The automatic analysis was compared against manual MS/MS spectra interpretation. With the lipid class specific models, identification of the true positives was improved especially for cases where candidate lipids from different lipid classes had similar MetFrag scores by removing up to 56% of false positive results. This LipidFrag approach was then applied to MS/MS spectra of lipid extracts of the nematode Caenorhabditis elegans. Fragments explained by LipidFrag match known fragmentation pathways, e.g., neutral losses of lipid headgroups and fatty acid side chain fragments. Based on prediction models trained on standard lipid materials, high probabilities for correct annotations were achieved, which makes LipidFrag a good choice for automated lipid data analysis and reliability testing of lipid identifications. PMID:28278196

  20. Conservative Analytical Collision Probabilities for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.

  1. Conservative Analytical Collision Probability for Design of Orbital Formations

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.

  2. Automatic threshold selection for multi-class open set recognition

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2017-05-01

    Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates if the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion. That is, a candidate label is first estimated by the base classifier, and the true membership of the example to the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
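The feed-forward open-set rule described above (propose a candidate class, then test its membership against a per-class threshold on the posterior) can be sketched as follows; the threshold values here are hypothetical placeholders, not ones produced by the iterative selection algorithm:

```python
def open_set_predict(posteriors, thresholds):
    """Feed-forward open-set rule: the base classifier proposes the most
    likely class; reject as unknown (-1) if that class's posterior
    probability falls below its threshold."""
    k = max(range(len(posteriors)), key=posteriors.__getitem__)
    return k if posteriors[k] >= thresholds[k] else -1

# hypothetical per-class rejection thresholds
thresholds = [0.6, 0.6, 0.6]
print(open_set_predict([0.70, 0.20, 0.10], thresholds))  # 0: accepted
print(open_set_predict([0.45, 0.35, 0.20], thresholds))  # -1: unknown class
```

Any base classifier that emits calibrated posteriors can be slotted into this rule, which is what allows the paper to swap out the SVM.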

  3. Local short-term variability in solar irradiance

    NASA Astrophysics Data System (ADS)

    Lohmann, Gerald M.; Monahan, Adam H.; Heinemann, Detlev

    2016-05-01

    Characterizing spatiotemporal irradiance variability is important for the successful grid integration of increasing numbers of photovoltaic (PV) power systems. Using 1 Hz data recorded by as many as 99 pyranometers during the HD(CP)2 Observational Prototype Experiment (HOPE), we analyze field variability of clear-sky index k* (i.e., irradiance normalized to clear-sky conditions) and sub-minute k* increments (i.e., changes over specified intervals of time) for distances between tens of meters and about 10 km. By means of a simple classification scheme based on k* statistics, we identify overcast, clear, and mixed sky conditions, and demonstrate that the last of these is the most potentially problematic in terms of short-term PV power fluctuations. Under mixed conditions, the probability of relatively strong k* increments of ±0.5 is approximately twice as high compared to increment statistics computed without conditioning by sky type. Additionally, spatial autocorrelation structures of k* increment fields differ considerably between sky types. While the profiles for overcast and clear skies mostly resemble the predictions of a previously published simple model, this is not the case for mixed conditions. As a proxy for the smoothing effects of distributed PV, we finally show that spatial averaging mitigates variability in k* less effectively than variability in k* increments, for a spatial sensor density of 2 km⁻².

  4. Discontinuous Patterns of Cigarette Smoking From Ages 18 to 50 in the United States: A Repeated-Measures Latent Class Analysis.

    PubMed

    Terry-McElrath, Yvonne M; O'Malley, Patrick M; Johnston, Lloyd D

    2017-12-13

    Effective cigarette smoking prevention and intervention programming is enhanced by accurate understanding of developmental smoking pathways across the life span. This study investigated within-person patterns of cigarette smoking from ages 18 to 50 among a US national sample of high school graduates, focusing on identifying ages of particular importance for smoking involvement change. Using data from approximately 15,000 individuals participating in the longitudinal Monitoring the Future study, trichotomous measures of past 30-day smoking obtained at 11 time points were modeled using repeated-measures latent class analyses. Sex differences in latent class structure and membership were examined. Twelve latent classes were identified: three characterized by consistent smoking patterns across age (no smoking; smoking less than a pack per day; smoking a pack or more per day); three showing uptake to a higher category of smoking across age; four reflecting successful quit behavior by age 50; and two defined by discontinuous shifts between smoking categories. The same latent class structure was found for both males and females, but membership probabilities differed between sexes. Although evidence of increases or decreases in smoking behavior was observed at virtually all ages through 35, ages 21/22 and 29/30 appeared to be particularly key for smoking category change within class. This examination of latent classes of cigarette smoking among a national US longitudinal sample of high school graduates from ages 18 to 50 identified unique patterns and critical ages of susceptibility to change in smoking category within class. Such information may be of particular use in developing effective smoking prevention and intervention programming.
This study examined cigarette smoking among a national longitudinal US sample of high school graduates from ages 18 to 50 and identified distinct latent classes characterized by patterns of movement between no cigarette use, light-to-moderate smoking, and the conventional definition of heavy smoking at 11 time points via repeated-measures latent class analysis. Membership probabilities for each smoking class were estimated, and critical ages of susceptibility to change in smoking behaviors were identified. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Quantum gravity in timeless configuration space

    NASA Astrophysics Data System (ADS)

    Gomes, Henrique

    2017-12-01

    On the path towards quantum gravity we find friction between temporal relations in quantum mechanics (QM) (where they are fixed and field-independent), and in general relativity (where they are field-dependent and dynamic). This paper aims to attenuate that friction, by encoding gravity in the timeless configuration space of spatial fields with dynamics given by a path integral. The framework demands that boundary conditions for this path integral be uniquely given, but unlike other approaches where they are prescribed—such as the no-boundary and the tunneling proposals—here I postulate basic principles to identify boundary conditions in a large class of theories. Uniqueness arises only if a reduced configuration space can be defined and if it has a profoundly asymmetric fundamental structure. These requirements place strong restrictions on the field and symmetry content of theories encompassed here; shape dynamics is one such theory. When these constraints are met, any emerging theory will have a Born rule given merely by a particular volume element built from the path integral in (reduced) configuration space. Also as in other boundary proposals, Time, including space-time, emerges as an effective concept; valid for certain curves in configuration space but not assumed from the start. When some such notion of time becomes available, conservation of (positive) probability currents ensues. I show that, in the appropriate limits, a Schrödinger equation dictates the evolution of weakly coupled source fields on a classical gravitational background. Due to the asymmetry of reduced configuration space, these probabilities and currents avoid a known difficulty of standard WKB approximations for Wheeler-DeWitt in minisuperspace: the selection of a unique Hamilton–Jacobi solution to serve as background. I illustrate these constructions with a simple example of a full quantum gravitational theory (i.e. not in minisuperspace) for which the formalism is applicable, and give a formula for calculating gravitational semi-classical relative probabilities in it.

  6. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate of the multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
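The core IPF iteration, alternately rescaling a table so its margins match imposed constraints, can be illustrated on a small bivariate probability table. This is a generic sketch, not the article's sparse-matrix implementation; the seed table and target margins are made up:

```python
def ipf(seed, row_targets, col_targets, iters=50):
    """Iterative proportional fitting: alternately rescale the rows and
    columns of an initial joint table until its margins match the targets."""
    t = [row[:] for row in seed]
    for _ in range(iters):
        for i, r in enumerate(row_targets):          # fit row margins
            s = sum(t[i])
            t[i] = [v * r / s for v in t[i]]
        for j, c in enumerate(col_targets):          # fit column margins
            s = sum(t[i][j] for i in range(len(t)))
            for i in range(len(t)):
                t[i][j] *= c / s
    return t

# made-up 2x2 facies table constrained to given univariate proportions
joint = ipf([[0.25, 0.25], [0.25, 0.25]],
            row_targets=[0.7, 0.3], col_targets=[0.6, 0.4])
print([[round(v, 3) for v in row] for row in joint])
# [[0.42, 0.28], [0.18, 0.12]]
```

Starting from a uniform seed, IPF converges to the independent table with the requested margins; a non-uniform seed would preserve the seed's interaction structure while matching the same margins.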

  7. A Study of the Efficiency of the Class of W-States as a Quantum Channel

    NASA Astrophysics Data System (ADS)

    Adhikari, Satyabrata; Gangopadhyay, Sunandan

    2009-02-01

    Recently, a new class of W-states has been defined by Agarwal and Pati (Phys. Rev. A 74:062320, 2006) and it has been shown that they can be used as a quantum channel for teleportation and superdense coding. In this work, we identify those three-qubit states from the set of the new class of W-states which are most efficient or suitable for quantum teleportation. We show that with some probability |W₁⟩ = (1/2)(|100⟩ + |010⟩ + √2|001⟩) is best suited as a teleportation channel in the sense that it does not depend on the input state.

  8. Unit conversion as a source of misclassification in US birthweight data.

    PubMed

    Umbach, D M

    2000-01-01

    This study explains why frequency polygons for US birthweights in 100-g weight classes appear spiky compared with their European counterparts. A probability model is used to describe how unit conversion can induce misclassification. Birthweights from the United States and Norway are used to illustrate that misclassification operates in grouped US data. Spikiness represents misclassification that arises when measured birthweights are rounded to the nearest ounce, converted to grams, and then grouped into weight classes. Misclassification is ameliorated, not eliminated, with 200-g weight classes. Possible biases from misclassification should be carefully evaluated when fitting statistical models to grouped US birthweights.
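The mechanism can be demonstrated numerically: a weight recorded to the nearest ounce and converted back can only land on multiples of 28.349523125 g, and a 100-g weight class contains either 3 or 4 such attainable values, which produces the alternating spiky class frequencies. A minimal illustration:

```python
from collections import Counter

G_PER_OZ = 28.349523125  # exact grams per avoirdupois ounce

# count the attainable ounce-multiple values that fall in each 100-g class,
# keeping only classes fully covered by the range of ounce counts used
counts = Counter(int(k * G_PER_OZ) // 100 for k in range(1, 300))
interior = {b: n for b, n in counts.items() if 0 < b < max(counts)}
print(sorted(set(interior.values())))  # classes hold 3 or 4 attainable values
```

Since 100 / 28.3495 is about 3.53, classes alternate between 3 and 4 attainable values, so even perfectly uniform underlying weights would show uneven class counts after ounce rounding.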

  9. Economic performance and public concerns about social class in twentieth-century books.

    PubMed

    Chen, Yunsong; Yan, Fei

    2016-09-01

    What is the association between macroeconomic conditions and public perceptions of social class? Applying a novel approach based on the Google Books N-gram corpus, this study addresses the relationship between public concerns about social class and economic conditions throughout the twentieth century. The usage of class-related words/phrases, or "literary references to class," in American English-language books is related to US economic performance and income inequality. The findings of this study demonstrate that economic conditions play a significant role in literary references to class throughout the century, whereas income inequality does not. Similar results are obtained from further analyses using alternative measures of class concerns as well as different corpora of English Fiction and the New York Times. We add to the social class literature by showing that the long-term temporal dynamics of an economy can be exhibited by aggregate class concerns. The application of massive culture-wide content analysis using data of unprecedented size also represents a contribution to the literature. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Using Dynamic Geometry Software for Teaching Conditional Probability with Area-Proportional Venn Diagrams

    ERIC Educational Resources Information Center

    Radakovic, Nenad; McDougall, Douglas

    2012-01-01

    This classroom note illustrates how dynamic visualization can be used to teach conditional probability and Bayes' theorem. There are two features of the visualization that make it an ideal pedagogical tool in probability instruction. The first feature is the use of area-proportional Venn diagrams that, along with showing qualitative relationships,…
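The computation such a visualization conveys is just Bayes' theorem, with the conditional probability read off as a ratio of areas in the area-proportional diagram. A worked numeric example with hypothetical screening-test numbers:

```python
# hypothetical numbers: 1% prevalence, 95% sensitivity, 5% false-positive rate
p_disease = 0.01
p_pos_given_d = 0.95
p_pos_given_not_d = 0.05

# total probability of a positive result (the whole "positive" region)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

# Bayes' theorem: the overlap region divided by the "positive" region
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(round(p_d_given_pos, 3))  # 0.161
```

In an area-proportional Venn diagram the small overlap of the disease and positive-test regions relative to the whole positive region makes the counterintuitively low posterior visually immediate.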

  11. Latent classes of resilience and psychological response among only-child loss parents in China.

    PubMed

    Wang, An-Ni; Zhang, Wen; Zhang, Jing-Ping; Huang, Fei-Fei; Ye, Man; Yao, Shu-Yu; Luo, Yuan-Hui; Li, Zhi-Hua; Zhang, Jie; Su, Pan

    2017-10-01

    Only-child loss parents in China have recently gained extensive attention as a newly defined social group. Resilience could offer a way out of their psychological dilemma. Using a sample of 185 only-child loss parents, this study employed latent class analysis (a) to explore whether different classes of resilience could be identified, (b) to determine socio-demographic characteristics of each class, and (c) to compare the depression and subjective well-being of each class. The results supported a three-class solution, defined as a 'high tenacity-strength but moderate optimism' class, a 'moderate resilience but low self-efficacy' class and a 'low tenacity but moderate adaption-dependence' class. Parents with low income and medical insurance of a low reimbursement type, and those without endowment insurance, were over-represented in the latter two classes. The latter two classes also had significantly higher depression scores and lower subjective well-being scores than the 'high tenacity-strength but moderate optimism' class. Future work should support those socio-economically vulnerable bereaved parents, and an elastic economic assistance policy is needed. To develop targeted resilience interventions, the emphasis for the 'high tenacity-strength but moderate optimism' class should be optimism, for the 'moderate resilience but low self-efficacy' class self-efficacy, and for the 'low tenacity but moderate adaption-dependence' class tenacity. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Motivating Inquiry in Statistics and Probability in the Primary Classroom

    ERIC Educational Resources Information Center

    Leavy, Aisling; Hourigan, Mairéad

    2015-01-01

    We describe how the use of a games environment combined with technology supports upper primary children in engaging with a concept traditionally considered too advanced for the primary classes: "The Law of Large Numbers."

  13. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  14. Modelling `Life' against `heat death'

    NASA Astrophysics Data System (ADS)

    Zak, Michail

    2018-01-01

    This work is inspired by the discovery of a new class of dynamical systems described by ordinary differential equations coupled with their Liouville equation. These systems are called self-controlled since the role of actuators is played by the probability produced by the Liouville equation. Following the Madelung equation, which belongs to this class, non-Newtonian properties such as randomness, entanglement and probability interference, typical of quantum systems, have been described. Special attention was paid to the capability to violate the second law of thermodynamics, which makes these systems neither Newtonian nor quantum. It has been shown that self-controlled dynamical systems can be linked to mathematical models of living systems. The discovery of isolated dynamical systems that can decrease entropy in violation of the second law of thermodynamics, and the resemblance of these systems to living systems, suggests that `Life' can slow down the `heat death' of the Universe and can be associated with the Purpose of Life.

  15. Deep learning of support vector machines with class probability output networks.

    PubMed

    Kim, Sangwook; Yu, Zhibin; Kil, Rhee Man; Lee, Minho

    2015-04-01

    Deep learning methods endeavor to learn features automatically at multiple levels and allow systems to learn complex functions mapping from the input space to the output space for the given data. The ability to learn powerful features automatically is increasingly important as the volume of data and range of applications of machine learning methods continues to grow. This paper proposes a new deep architecture that uses support vector machines (SVMs) with class probability output networks (CPONs) to provide better generalization power for pattern classification problems. As a result, deep features are extracted without additional feature engineering steps, using multiple layers of the SVM classifiers with CPONs. The proposed structure closely approaches the ideal Bayes classifier as the number of layers increases. Using a simulation of classification problems, the effectiveness of the proposed method is demonstrated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. X-Ray Spectra of Quasars from the ROSAT Public Archive

    NASA Technical Reports Server (NTRS)

    Elvis, Martin S.; West, Donald (Technical Monitor)

    2000-01-01

    This has been a most productive proposal. We have: (1) Found many new X-ray absorbed quasars at z>2; (2) Determined that all of these are radio-loud, favoring an intrinsic origin for the absorption; (3) Found that the one radio-quiet exception lay close to a nearby galaxy, so initiating the X-ray study of the ISM of normal galaxies via X-ray spectroscopy; (4) Discovered a class of 'red quasars', probably the tip of a large obscured population; and (5) Discovered a class of 'blank field X-ray sources'. These are a heterogeneous collection but probably include several peculiar types of active galactic nuclei (AGN). Follow-up of the 'blanks' is being undertaken under a separate ADP program. Chandra and XMM-Newton observing time for these objects has been approved. This program has produced six refereed papers and six published conference proceedings.

  17. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.

  18. A parallel process growth mixture model of conduct problems and substance use with risky sexual behavior.

    PubMed

    Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A

    2010-10-01

    Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  19. Potential toxicity of pesticides measured in midwestern streams to aquatic organisms

    USGS Publications Warehouse

    Battaglin, W.; Fairchild, J.

    2002-01-01

    Society is becoming increasingly aware of the value of healthy aquatic ecosystems as well as the effects that man’s activities have on those ecosystems. In recent years, many urban and industrial sources of contamination have been reduced or eliminated. The agricultural community also has worked towards reducing off-site movement of agricultural chemicals, but their use in farming is still growing. A small fraction of the pesticides applied to crops, estimated at <1 to 2%, is lost from fields and enters nearby streams during rainfall events. In many cases aquatic organisms are exposed to mixtures of chemicals, which may lead to greater non-target risk than that predicted by traditional risk assessments for single chemicals. We evaluated the potential toxicity of environmental mixtures of 5 classes of pesticides using concentrations from water samples collected from ∼50 sites on midwestern streams during late spring or early summer runoff events in 1989 and 1998. Toxicity index values are calculated as the concentration of a compound in the sample divided by the EC50 or LC50 of an aquatic organism. These index values are summed within a pesticide class, and across all classes, to determine additive pesticide-class and total pesticide toxicity indices. Toxicity index values greater than 1.0 indicate probable toxicity of a class of pesticides measured in a water sample to aquatic organisms. Results indicate that some samples had probable toxicity to duckweed and green algae, but few are suspected of having significant toxicity to bluegill sunfish or chorus frogs.
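    The additive toxicity-index calculation described above is simple enough to sketch in code. The following is a minimal illustration, not the authors' implementation; the compounds, concentrations, and EC50 values are hypothetical.

```python
def toxicity_indices(sample, ec50):
    """Per-compound index = concentration / EC50, summed within each
    pesticide class; the grand total is the total toxicity index."""
    class_totals = {}
    for compound, conc in sample.items():
        cls, ec = ec50[compound]
        class_totals[cls] = class_totals.get(cls, 0.0) + conc / ec
    total = sum(class_totals.values())
    return class_totals, total

# Hypothetical measured concentrations (ug/L) for one water sample
sample = {"atrazine": 12.0, "metolachlor": 4.0, "chlorpyrifos": 0.05}
# Hypothetical (pesticide class, EC50 for green algae in ug/L)
ec50 = {"atrazine": ("triazine", 50.0),
        "metolachlor": ("acetanilide", 60.0),
        "chlorpyrifos": ("organophosphate", 500.0)}

per_class, total = toxicity_indices(sample, ec50)
probable_toxicity = total > 1.0   # > 1.0 indicates probable toxicity
```

    A total index below 1.0, as here, would not flag the sample; a single compound at a concentration above its EC50 is enough to push the total over the threshold.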

  20. Breeding chronology and social interactions affect ungulate foraging behavior at a concentrated food resource

    PubMed Central

    Cohen, Bradley S.; Miller, Karl V.

    2017-01-01

    Prey species must balance predator avoidance behavior with other essential activities including foraging, breeding, and social interactions. Anti-predator behaviors such as vigilance can impede resource acquisition rates by altering foraging behavior. However, in addition to predation risk, foraging behavior may also be affected by socio-sexual factors including breeding chronology and social interactions. Therefore, we investigated how time-of-day, distance-to-forest, group size, social interactions (presence of a different sex-age class), and breeding chronology (pre-breeding, breeding, post-breeding seasons) affected probability of feeding (hereafter: feeding) for different sex and age-classes (mature males, immature males, adult females, and juveniles) of white-tailed deer at feed sites. We developed a set of candidate models consisting of social, habitat, reproductive, and abiotic factors and combinations of these factors. We then used generalized linear mixed models (GLMMs) to estimate the probability of feeding and used model averaging of competing models for multimodel inference. Each adult sex-age class's feeding was influenced by breeding chronology. Juveniles were more likely to be feeding than adults in all seasons. Feeding increased with group size for all sex-age classes. The presence of a mature male negatively influenced the feeding of immature males, and juveniles were more likely to be feeding when an adult female was present. Feeding decreased with increasing distance-to-forest for mature males but not for other sex-age classes. Our results indicate that each sex-age class modulates vigilance levels in response to socio-sexual factors according to the unique pressures placed upon them by their reproductive status and social rank. PMID:28591136

  1. Shallow lithological structure across the Dead Sea Transform derived from geophysical experiments

    USGS Publications Warehouse

    Stankiewicz, J.; Munoz, G.; Ritter, O.; Bedrosian, P.A.; Ryberg, T.; Weckmann, U.; Weber, M.

    2011-01-01

    In the framework of the DEad SEa Rift Transect (DESERT) project, a 150 km magnetotelluric profile consisting of 154 sites was carried out across the Dead Sea Transform. The resistivity model presented shows conductive structures in the western section of the study area terminating abruptly at the Arava Fault. For a more detailed analysis we performed a joint interpretation of the resistivity model with a P wave velocity model from a partially coincident seismic experiment. The technique used is a statistical correlation of resistivity and velocity values in parameter space. Regions of high probability of a coexisting pair of values for the two parameters are mapped back into the spatial domain, illustrating the geographical location of lithological classes. In this study, four regions of enhanced probability have been identified and remapped as four lithological classes. This technique confirms that the Arava Fault marks the boundary of a highly conductive lithological class down to a depth of ~3 km. That the fault acts as an impermeable barrier to fluid flow is unusual for large fault zones, which often exhibit zones of high conductivity and low seismic velocity. At greater depths it is possible to resolve the Precambrian basement into two classes characterized by vastly different resistivity values but similar seismic velocities. The boundary between these classes is approximately coincident with the Al Quweira Fault, with higher resistivities observed east of the fault. This is interpreted as evidence that deformation along the DST originally took place at the Al Quweira Fault before shifting to the Arava Fault.
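    The parameter-space correlation technique described above can be sketched as a joint two-dimensional histogram: cells of high joint probability define lithological classes, and each model cell is then mapped back to the class of the histogram cell it falls in. The sketch below uses entirely synthetic resistivity and velocity values; the bin count and probability threshold are illustrative assumptions, not the study's settings.

```python
import numpy as np

def parameter_space_classes(resistivity, velocity, bins=10, threshold=0.02):
    """Joint 2-D histogram of log-resistivity and velocity; histogram cells
    whose probability exceeds `threshold` define candidate lithological
    classes. Returns a class id per model cell (-1 = unclassified)."""
    logr = np.log10(resistivity).ravel()
    vel = velocity.ravel()
    hist, r_edges, v_edges = np.histogram2d(logr, vel, bins=bins)
    prob = hist / hist.sum()
    class_cells = np.argwhere(prob > threshold)      # high-probability cells
    r_idx = np.clip(np.digitize(logr, r_edges) - 1, 0, bins - 1)
    v_idx = np.clip(np.digitize(vel, v_edges) - 1, 0, bins - 1)
    labels = np.full(logr.shape, -1, dtype=int)
    for cls, (ri, vi) in enumerate(class_cells):
        labels[(r_idx == ri) & (v_idx == vi)] = cls  # map back to space
    return labels.reshape(resistivity.shape)

# Synthetic 40x40 model: two "lithologies" with distinct
# resistivity/velocity pairs plus noise
rng = np.random.default_rng(0)
res = np.where(rng.random((40, 40)) < 0.5, 10.0, 1000.0) \
      * rng.lognormal(0.0, 0.05, (40, 40))
vel = np.where(res < 100.0, 3.0, 5.5) + rng.normal(0.0, 0.05, (40, 40))
labels = parameter_space_classes(res, vel)
```

    On data like this, each synthetic lithology concentrates in a few histogram cells, so most model cells receive a class label.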

  2. A Test of the Discrimination Account in Equivalence Class Formation

    ERIC Educational Resources Information Center

    Wang, Ting; McHugh, Louise A.; Whelan, Robert

    2012-01-01

    An equivalence class is typically established when a subject is taught a set of interrelated conditional discriminations with physically unrelated stimuli and additional, untaught, conditional discriminations are then demonstrated. Interestingly, and perhaps counter-intuitively, the relations among the stimuli within such a class are not…

  3. Statistical learning of action: the role of conditional probability.

    PubMed

    Meyer, Meredith; Baldwin, Dare

    2011-12-01

    Identification of distinct units within a continuous flow of human action is fundamental to action processing. Such segmentation may rest in part on statistical learning. In a series of four experiments, we examined what types of statistics people can use to segment a continuous stream involving many brief, goal-directed action elements. The results of Experiment 1 showed no evidence for sensitivity to conditional probability, whereas Experiment 2 displayed learning based on joint probability. In Experiment 3, we demonstrated that additional exposure to the input failed to engender sensitivity to conditional probability. However, the results of Experiment 4 showed that a subset of adults (namely, those more successful at identifying actions that had been seen more frequently than comparison sequences) were also successful at learning conditional-probability statistics. These experiments help to clarify the mechanisms subserving processing of intentional action, and they highlight important differences from, as well as similarities to, prior studies of statistical learning in other domains, including language.
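    The contrast between the two statistics is easy to make concrete. In the toy sequence below (an illustration, not the stimuli used in the experiments), the pair (A, B) has a low joint probability because A is rare, yet the conditional probability P(B|A) is 1.0 because B always follows A.

```python
from collections import Counter

def pair_statistics(stream):
    """Joint and conditional probabilities for adjacent pairs:
    joint(x, y) = count(x, y) / #pairs; cond(x, y) = count(x, y) / count(x)."""
    pairs = list(zip(stream, stream[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(stream[:-1])
    n_pairs = len(pairs)
    joint = {p: c / n_pairs for p, c in pair_counts.items()}
    conditional = {p: c / first_counts[p[0]] for p, c in pair_counts.items()}
    return joint, conditional

# Toy "action stream": B always follows A (so P(B|A) = 1),
# but A is rare, so the pair (A, B) has low joint probability.
stream = list("ABCCDCCDCCABCDCC")
joint, cond = pair_statistics(stream)
```

    A learner tracking only joint probability would treat (A, B) as an unremarkable, infrequent pair; a learner tracking conditional probability would treat it as a perfectly predictable unit.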

  4. Role of conviction in nonequilibrium models of opinion formation

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno; Anteneodo, Celia

    2012-12-01

    We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).

  5. Finding the probability of infection in an SIR network is NP-Hard

    PubMed Central

    Shapiro, Michael; Delgado-Eckert, Edgar

    2012-01-01

    It is the purpose of this article to review results that have long been known to communications network engineers and have direct application to epidemiology on networks. A common approach in epidemiology is to study the transmission of a disease in a population where each individual is initially susceptible (S), may become infective (I) and then removed or recovered (R) and plays no further epidemiological role. Much of the recent work gives explicit consideration to the network of social interactions or disease-transmitting contacts and attendant probability of transmission for each interacting pair. The state of such a network is an assignment of the values {S, I, R} to its members. Given such a network, an initial state and a particular susceptible individual, we would like to compute that individual's probability of becoming infected in the course of an epidemic. It turns out that this and related problems are NP-hard. In particular, they belong to a class of problems for which no efficient algorithms are known. Moreover, finding an efficient algorithm for the solution of any problem in this class would entail a major breakthrough in theoretical computer science. PMID:22824138
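    Because exact computation is NP-hard, practical work typically falls back on Monte Carlo estimation: simulate many random epidemics and count how often the target individual becomes infected. A minimal discrete-time sketch (the network, transmission probability, and one-step infectious period are illustrative assumptions); on a four-node path the exact answer is p³, which the estimate should approach.

```python
import random

def simulate_sir(edges, p_transmit, initial_infected, n_nodes, rng):
    """One discrete-time SIR epidemic; each infective gets one chance to
    transmit along each edge, then is removed. Returns ever-infected set."""
    neighbors = {i: [] for i in range(n_nodes)}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    infected = set(initial_infected)
    ever_infected = set(infected)
    while infected:
        newly = set()
        for u in infected:
            for v in neighbors[u]:
                if v not in ever_infected and rng.random() < p_transmit:
                    newly.add(v)
        infected = newly            # previous infectives are removed (R)
        ever_infected |= newly
    return ever_infected

def infection_probability(edges, p, source, target, n_nodes,
                          trials=20000, seed=0):
    """Monte Carlo estimate of P(target ever infected | source infected)."""
    rng = random.Random(seed)
    hits = sum(target in simulate_sir(edges, p, {source}, n_nodes, rng)
               for _ in range(trials))
    return hits / trials

# Toy 4-node path 0-1-2-3: exact infection probability of node 3 is p**3
edges = [(0, 1), (1, 2), (2, 3)]
est = infection_probability(edges, 0.5, source=0, target=3, n_nodes=4)
```

    For p = 0.5 the exact value is 0.125; the Monte Carlo estimate converges at the usual 1/√trials rate, sidestepping the hardness of exact computation at the cost of sampling error.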

  6. Bayesian structural inference for hidden processes.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  7. Bayesian structural inference for hidden processes

    NASA Astrophysics Data System (ADS)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  8. Threshold-selecting strategy for best possible ground state detection with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

    2009-04-01

    Genetic algorithms are a standard heuristic to find states of low energy in complex state spaces as given by physical systems such as spin glasses but also in combinatorial optimization. The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover. Many schemes have been considered in the literature as possible crossover selection strategies. We show for a large class of quality measures that the best possible probability distribution for selecting individuals in each generation of the algorithm execution is a rectangular distribution over the individuals sorted by their energy values. That is, uniform probabilities are assigned to a group of the lowest-energy individuals in the population, while zero probability is assigned to individuals whose energy exceeds a fixed cutoff, corresponding to a certain rank in the energy-sorted population vector. The considered strategy is dubbed threshold selecting. The proof applies basic arguments of Markov chains and linear optimization, makes only a few assumptions on the underlying principles, and hence applies to a large class of algorithms.
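    The rectangular ("threshold selecting") distribution is straightforward to sketch. In this illustration the population, the bit-count energy function, and the cutoff rank are all hypothetical; the key point is that only the `cutoff_rank` lowest-energy individuals receive (uniform) nonzero selection probability.

```python
import random

def threshold_select(population, energies, cutoff_rank, rng):
    """Pick a crossover parent uniformly at random from the cutoff_rank
    lowest-energy individuals; all others get selection probability zero."""
    ranked = sorted(range(len(population)), key=lambda i: energies[i])
    return population[rng.choice(ranked[:cutoff_rank])]

# Toy population of bit strings with energy = number of 1-bits (illustrative)
pop = ["0000", "0011", "0111", "1111", "0001"]
en = [s.count("1") for s in pop]
rng = random.Random(0)
parents = {threshold_select(pop, en, cutoff_rank=2, rng=rng)
           for _ in range(100)}
```

    Over repeated draws only the two lowest-energy states ("0000" and "0001") are ever selected, each with probability 1/2; states above the cutoff never reproduce.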

  9. Strong lensing probability in TeVeS (tensor-vector-scalar) theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Daming, E-mail: cdm@bao.ac.cn

    2008-01-15

    We recalculate the strong lensing probability as a function of the image separation in TeVeS (tensor-vector-scalar) cosmology, which is a relativistic version of MOND (MOdified Newtonian Dynamics). The lens is modeled by the Hernquist profile. We assume an open cosmology with Ω_b = 0.04 and Ω_Λ = 0.5 and three different kinds of interpolating functions. Two different galaxy stellar mass functions (GSMF) are adopted: PHJ (Panter, Heavens and Jimenez 2004 Mon. Not. R. Astron. Soc. 355 764), determined from SDSS data release 1, and Fontana (Fontana et al 2006 Astron. Astrophys. 459 745), from the GOODS-MUSIC catalog. We compare our results with both the predicted probabilities for lenses from singular isothermal sphere galaxy halos in LCDM (Lambda cold dark matter) with a Schechter-fit velocity function, and the observational results for the well defined combined sample of the Cosmic Lens All-Sky Survey (CLASS) and Jodrell Bank/Very Large Array Astrometric Survey (JVAS). It turns out that the interpolating function μ(x) = x/(1+x) combined with the Fontana GSMF matches the results from CLASS/JVAS quite well.

  10. Drying of Floodplain Forests Associated with Water-Level Decline in the Apalachicola River, Florida - Interim Results, 2006

    USGS Publications Warehouse

    Darst, Melanie R.; Light, Helen M.

    2007-01-01

    Floodplain forests of the Apalachicola River, Florida, are drier in composition today (2006) than they were before 1954, and drying is expected to continue for at least the next 50 years. Drier forest composition is probably caused by water-level declines that occurred as a result of physical changes in the main channel after 1954 and decreased flows in spring and summer months since the 1970s. Forest plots sampled from 2004 to 2006 were compared to forests sampled in the late 1970s (1976-79) using a Floodplain Index (FI) based on species dominance weighted by the Floodplain Species Category, a value that represents the tolerance of tree species to inundation and saturation in the floodplain and consequently, the typical historic floodplain habitat for that species. Two types of analyses were used to determine forest changes over time: replicate plot analysis comparing present (2004-06) canopy composition to late 1970s canopy composition at the same locations, and analyses comparing the composition of size classes of trees on plots in late 1970s and in present forests. An example of a size class analysis would be a comparison of the composition of the entire canopy (all trees greater than 7.5 cm (centimeter) diameter at breast height (dbh)) to the composition of the large canopy tree size class (greater than or equal to 25 cm dbh) at one location. The entire canopy, which has a mixture of both young and old trees, is probably indicative of more recent hydrologic conditions than the large canopy, which is assumed to have fewer young trees. Change in forest composition from the pre-1954 period to approximately 2050 was estimated by combining results from three analyses. The composition of pre-1954 forests was represented by the large canopy size class sampled in the late 1970s. 
The average FI for canopy trees was 3.0 percent drier than the average FI for the large canopy tree size class, indicating that the late 1970s forests were 3.0 percent drier than pre-1954 forests. The change from the late 1970s to the present was based on replicate plot analysis. The composition of 71 replicate plots sampled from 2004 to 2006 averaged 4.4 percent drier than forests sampled in the late 1970s. The potential composition of future forests (2050 or later) was estimated from the composition of the present subcanopy tree size class (less than 7.5 cm and greater than or equal to 2.5 cm dbh), which contains the greatest percentage of young trees and is indicative of recent hydrologic conditions. Subcanopy trees are the driest size class in present forests, with FIs averaging 31.0 percent drier than FIs for all canopy trees. Based on results from all three sets of data, present floodplain forests average 7.4 percent drier in composition than pre-1954 forests and have the potential to become at least 31.0 percent drier in the future. An overall total change in floodplain forests to an average composition 38.4 percent drier than pre-1954 forests is expected within approximately 50 years. The greatest effects of water-level decline have occurred in tupelo-cypress swamps where forest composition has become at least 8.8 percent drier in 2004-06 than in pre-1954 years. This change indicates that a net loss of swamps has already occurred in the Apalachicola River floodplain, and further losses are expected to continue over the next 50 years. Drying of floodplain forests will result in some low bottomland hardwood forests changing in composition to high bottomland hardwood forests. The composition of high bottomland hardwoods will also change, although periodic flooding is still occurring and will continue to limit most of the floodplain to bottomland hardwood species that are adapted to at least short periods of inundation and saturation.

  11. Epidemiological modeling in a branching population. Particular case of a general SIS model with two age classes.

    PubMed

    Jacob, C; Viet, A F

    2003-03-01

    This paper elaborates a general class of multitype branching processes for modeling, in a branching population, the evolution of a disease with horizontal and vertical transmission. When the size of the population may tend to infinity, normalization must be carried out. As the initial size tends to infinity, the normalized model converges a.s. to a dynamical system whose solution is the probability law of the state of health of an individual's line of ancestors. The focal point of this study concerns the transient and asymptotic behavior of an SIS model with two age classes in a branching population. We compare the asymptotic probability of extinction on the scale of a finite population and on the scale of an individual in an infinite population: when the rates of transmission are small compared to the rate of renewal of the susceptible population, the two models lead to a.s. extinction, giving consistent results; this no longer holds in the opposite situation of large transmission rates. In that case the size of the population plays a crucial role in the spreading of the disease.

  12. Botulinum toxin treatment of pain syndromes -an evidence based review.

    PubMed

    Safarpour, Yasaman; Jabbari, Bahman

    2018-06-01

    This review evaluates the existing level of evidence for the efficacy of BoNTs in different pain syndromes, using the recommended efficacy criteria of the Assessment and Therapeutic Subcommittee of the American Academy of Neurology. There is level A evidence (effective) for BoNT therapy in post-herpetic neuralgia, trigeminal neuralgia, and posttraumatic neuralgia. There is level B evidence (probably effective) for diabetic neuropathy, plantar fasciitis, piriformis syndrome, pain associated with total knee arthroplasty, male pelvic pain syndrome, chronic low back pain, and neuropathic pain secondary to traumatic spinal cord injury. BoNTs are possibly effective (level C; one class II study) for female pelvic pain, painful knee osteoarthritis, post-operative pain in children with cerebral palsy after adductor release surgery, and anterior knee pain with vastus lateralis imbalance. There is level B evidence (one class I study) that BoNT treatment is probably ineffective in carpal tunnel syndrome. For myofascial pain syndrome, the level of evidence is U (undetermined) due to contradictory results. More high-quality (class I) studies, and studies with different types of BoNTs, are needed for a better understanding of the role of BoNTs in pain syndromes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging

    PubMed Central

    Gholami, Behnood; Tannenbaum, Allen R.

    2011-01-01

    Pain assessment in patients who are unable to verbally communicate is a challenging problem. The fundamental limitations in pain assessment in neonates stem from subjective assessment criteria, rather than quantifiable and measurable data. This often results in poor-quality and inconsistent pain management. Recent advancements in pattern recognition using relevance vector machine (RVM) learning techniques can assist medical staff in assessing pain by constantly monitoring the patient and providing the clinician with quantifiable data for pain management. The RVM classification technique is a Bayesian extension of the support vector machine (SVM) algorithm, which achieves comparable performance to SVM while providing posterior probabilities for class memberships and a sparser model. If classes represent “pure” facial expressions (i.e., extreme expressions that an observer can identify with a high degree of confidence), then the posterior probability of the membership of some intermediate facial expression to a class can provide an estimate of the intensity of such an expression. In this paper, we use the RVM classification technique to distinguish pain from nonpain in neonates as well as assess their pain intensity levels. We also correlate our results with the pain intensity assessed by expert and nonexpert human examiners. PMID:20172803

  14. MAG4 Versus Alternative Techniques for Forecasting Active-Region Flare Productivity

    NASA Technical Reports Server (NTRS)

    Falconer, David A.; Moore, Ronald L.; Barghouty, Abdulnasser F.; Khazanov, Igor

    2014-01-01

    MAG4 (Magnetogram Forecast), developed originally for NASA/SRAG (Space Radiation Analysis Group), is an automated program that analyzes magnetograms from the HMI (Helioseismic and Magnetic Imager) instrument on NASA's SDO (Solar Dynamics Observatory) and automatically forecasts the rate (or probability) of major flares (M- and X-class), coronal mass ejections (CMEs), and solar energetic particle events. MAG4 does not forecast that a flare will occur at a particular time in the next 24 or 48 hours; rather, it forecasts the probability of one occurring in that window.
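    The distinction between a rate and a probability forecast is worth making concrete. Under the common assumption that events arrive as a Poisson process (an assumption of this sketch, not necessarily MAG4's internal model, and with an illustrative rate), a mean rate converts to the probability of at least one event in a window as 1 − e^(−rate·window):

```python
import math

def event_probability(rate_per_day, window_days=1.0):
    """Probability of at least one event in the window, assuming events
    arrive as a Poisson process with the given mean rate."""
    return 1.0 - math.exp(-rate_per_day * window_days)

# Hypothetical forecast rate of 0.5 M-class flares per day
p24 = event_probability(0.5, 1.0)   # next 24 hours
p48 = event_probability(0.5, 2.0)   # next 48 hours
```

    Note that the probability saturates toward 1 as the window grows, whereas the rate is unbounded, which is why the two quantities are reported separately.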

  15. Forestry inventory based on multistage sampling with probability proportional to size

    NASA Technical Reports Server (NTRS)

    Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.

    1983-01-01

    A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. LANDSAT data, panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
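    Selection with probability proportional to size (PPS), the core of the technique above, can be sketched as cumulative-sum sampling: a unit's chance of selection equals its share of the total size measure. The sampling units and area figures below are hypothetical.

```python
import random
from bisect import bisect_left
from itertools import accumulate

def pps_sample(units, sizes, n, rng):
    """Draw n units with replacement, probability proportional to size:
    a uniform draw on [0, total) is located in the cumulative-size array."""
    cum = list(accumulate(sizes))
    total = cum[-1]
    return [units[bisect_left(cum, rng.uniform(0, total))] for _ in range(n)]

# Hypothetical primary sampling units (forest stands) with area in hectares
stands = ["A", "B", "C", "D"]
areas = [500, 250, 200, 50]    # stand A is 10x more likely than stand D
rng = random.Random(42)
draws = pps_sample(stands, areas, 10000, rng)
freq_A = draws.count("A") / len(draws)   # should approach 500/1000 = 0.5
```

    In a multistage design the same rule is applied at each stage (e.g., stands, then plots within stands), with each stage's size measure taken from the coarser data source.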

  16. Rank-k Maximal Statistics for Divergence and Probability of Misclassification

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.

    1972-01-01

    A technique is developed for selecting, from n-channel multispectral data, some combination of k of the n channels upon which to base a given classification technique, so that some measure of the loss of the ability to distinguish between classes using the compressed k-dimensional data is minimized. Information loss in compressing the n-channel data to k channels is taken to be the difference between the average interclass divergences (or probabilities of misclassification) in n-space and in k-space.
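    The channel-selection idea can be sketched as a brute-force search over k-channel subsets, scoring each subset by its summed interclass divergence. This illustration uses the symmetric KL divergence between one-dimensional Gaussian class models and assumes channel independence (so divergences add); the class statistics are hypothetical, and the exhaustive search is a stand-in for the paper's rank-k maximal statistics.

```python
from itertools import combinations

def gaussian_divergence(m1, v1, m2, v2):
    """Symmetric (two-way) KL divergence between two 1-D Gaussians."""
    return 0.5 * ((v1 / v2 + v2 / v1 - 2.0)
                  + (m1 - m2) ** 2 * (1.0 / v1 + 1.0 / v2))

def best_k_channels(class_stats, k):
    """class_stats[c] = list of (mean, variance) per channel for class c.
    Returns the k-channel subset maximizing total interclass divergence."""
    classes = list(class_stats)
    n_channels = len(class_stats[classes[0]])

    def score(channels):
        total = 0.0
        for a, b in combinations(classes, 2):
            for ch in channels:
                m1, v1 = class_stats[a][ch]
                m2, v2 = class_stats[b][ch]
                total += gaussian_divergence(m1, v1, m2, v2)
        return total

    return max(combinations(range(n_channels), k), key=score)

# Hypothetical per-channel (mean, variance) for two spectral classes
stats = {"pine":       [(0.20, 0.01), (0.50, 0.02), (0.90, 0.01)],
         "eucalyptus": [(0.25, 0.01), (0.50, 0.02), (0.40, 0.01)]}
best = best_k_channels(stats, k=2)
```

    Here channel 1 is identical for the two classes and contributes nothing, so the search keeps channels 0 and 2. Exhaustive search is exponential in n, which is why the paper develops maximal statistics rather than enumerating subsets.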

  17. Robust Statistics and Regularization for Feature Extraction and UXO Discrimination

    DTIC Science & Technology

    2011-07-01

    July 11, 2011 … real data we find that this technique has an improved probability of finding all ordnance in a test data set, relative to previously … many sites. Tests on larger data sets should still be carried out. In previous work we considered a bootstrapping approach to selecting the operating … Marginalizing over x we obtain the probability that the ith order statistic in the test data belongs to the T class (Eq. 55): P(T | x_(i)) = ∫_{−∞}^{∞} P(T | x) p(x …

  18. The essence of fire regime-condition class assessment

    Treesearch

    McKinley-Ben Miller

    2008-01-01

    The interagency Fire Regime Condition Class (FRCC) assessment process represents a contemporary and effective means of estimating the relative degree of difference, or "departure", between a subject landscape's current condition and its historic or reference ecological conditions. This process, generally applied to fire-adapted systems, is science-...

  19. Assessing potential health risks to fish and humans using mercury concentrations in inland fish from across western Canada and the United States

    USGS Publications Warehouse

    Lepak, Jesse M.; Hooten, Mevin B.; Eagles-Smith, Collin A.; Tate, Michael T.; Lutz, Michelle A.; Ackerman, Joshua T.; Willacker, James J.; Jackson, Allyson K.; Evers, David C.; Wiener, James G.; Pritz, Colleen Flanagan; Davis, Jay

    2016-01-01

    Fish represent high quality protein and nutrient sources, but Hg contamination is ubiquitous in aquatic ecosystems and can pose health risks to fish and their consumers. Potential health risks posed to fish and humans by Hg contamination in fish were assessed in western Canada and the United States. A large compilation of inland fish Hg concentrations was evaluated in terms of potential health risk to the fish themselves, health risk to predatory fish that consume Hg contaminated fish, and to humans that consume Hg contaminated fish. The probability that a fish collected from a given location would exceed a Hg concentration benchmark relevant to a health risk was calculated. These exceedance probabilities and their associated uncertainties were characterized for fish of multiple size classes at multiple health-relevant benchmarks. The approach was novel and allowed for the assessment of the potential for deleterious health effects in fish and humans associated with Hg contamination in fish across this broad study area. Exceedance probabilities were relatively common at low Hg concentration benchmarks, particularly for fish in larger size classes. Specifically, median exceedances for the largest size classes of fish evaluated at the lowest Hg concentration benchmarks were 0.73 (potential health risks to fish themselves), 0.90 (potential health risk to predatory fish that consume Hg contaminated fish), and 0.97 (potential for restricted fish consumption by humans), but diminished to essentially zero at the highest benchmarks and smallest fish size classes. Exceedances of benchmarks are likely to have deleterious health effects on fish and limit recommended amounts of fish humans consume in western Canada and the United States. Results presented here are not intended to subvert or replace local fish Hg data or consumption advice, but provide a basis for identifying areas of potential health risk and developing more focused future research and monitoring efforts.

  20. [Effects of prefrontal ablations on the reaction of the active choice of feeder under different probability and value of the reinforcement on dog].

    PubMed

    Preobrazhenskaia, L A; Ioffe, M E; Mats, V N

    2004-01-01

    The role of the prefrontal cortex in the active choice between two feeders was investigated under changing values and probabilities of reinforcement. The experiments were performed on 2 dogs with prefrontal ablations (g. proreus). Before the lesions, the dogs were trained to receive food from two different feeders in response to conditioned stimuli with equally probable alimentary reinforcement. After ablation, the dogs ran from one feeder to the other during the inter-trial intervals, and in response to the conditioned stimuli they repeatedly chose the same feeder. This disturbance of behavior was later completely restored. In the experiments pitting probability of events against value of reinforcement, the dogs chose the feeder with low-probability but better-quality reinforcement. In the experiments with equal value but different probability, the intact dogs chose the feeder with the higher probability, whereas the dogs with prefrontal lesions chose each feeder equiprobably. Thus, in conditions of free behavior, one of the functions of the prefrontal cortex is the choice of reactions with higher probability of reinforcement.

  1. Colloquium: Statistical mechanics of money, wealth, and income

    NASA Astrophysics Data System (ADS)

    Yakovenko, Victor M.; Rosser, J. Barkley, Jr.

    2009-10-01

    This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential (“thermal”) distribution, whereas a small fraction of the population in the upper class is characterized by the power-law (“superthermal”) distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.
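    The emergence of the exponential ("thermal") distribution can be reproduced with a minimal agent-based sketch of the kind of conservative exchange model the Colloquium reviews. The specific re-splitting rule and all parameters below are illustrative assumptions.

```python
import random

def simulate_exchange(n_agents=1000, n_steps=200000, m0=100.0, seed=1):
    """Random pairwise money exchange with conservation: at each step a
    random pair (i, j) re-splits their combined money uniformly."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        share = rng.uniform(0, pot)
        money[i], money[j] = share, pot - share
    return money

money = simulate_exchange()
```

    Because total money is conserved, the effective "temperature" of the Boltzmann-Gibbs distribution P(m) ~ exp(-m/T) equals the average money per agent; at equilibrium the median falls near T·ln 2 and roughly 63% of agents hold less than the average, the hallmark of the exponential distribution.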

  2. The Context Matters: Outcome Probability and Expectation Mismatch Modulate the Feedback Negativity When Self-Evaluation of Response Correctness Is Possible

    PubMed Central

    Leue, Anja; Cano Rodilla, Carmen; Beauducel, André

    2015-01-01

    Individuals typically evaluate whether their performance and the obtained feedback match. Previous research has shown that feedback negativity (FN) depends on outcome probability and feedback valence. It is, however, less clear to what extent previous effects of outcome probability on FN depend on self-evaluations of response correctness. Therefore, we investigated the effects of outcome probability on FN amplitude in a simple go/no-go task that allowed for the self-evaluation of response correctness. We also investigated effects of performance incompatibility and feedback valence. In a sample of N = 22 participants, outcome probability was manipulated by means of precues, feedback valence by means of monetary feedback, and performance incompatibility by means of feedback that induced a match versus mismatch with individuals' performance. We found that the 100% outcome probability condition induced a more negative FN following no-loss than the 50% outcome probability condition. The FN following loss was more negative in the 50% compared to the 100% outcome probability condition. Performance-incompatible loss resulted in a more negative FN than performance-compatible loss. Our results indicate that the self-evaluation of the correctness of responses should be taken into account when the effects of outcome probability and expectation mismatch on FN are investigated. PMID:26783525

  3. The Context Matters: Outcome Probability and Expectation Mismatch Modulate the Feedback Negativity When Self-Evaluation of Response Correctness Is Possible.

    PubMed

    Leue, Anja; Cano Rodilla, Carmen; Beauducel, André

    2015-01-01

    Individuals typically evaluate whether their performance and the obtained feedback match. Previous research has shown that feedback negativity (FN) depends on outcome probability and feedback valence. It is, however, less clear to what extent previous effects of outcome probability on FN depend on self-evaluations of response correctness. Therefore, we investigated the effects of outcome probability on FN amplitude in a simple go/no-go task that allowed for the self-evaluation of response correctness. We also investigated effects of performance incompatibility and feedback valence. In a sample of N = 22 participants, outcome probability was manipulated by means of precues, feedback valence by means of monetary feedback, and performance incompatibility by means of feedback that induced a match versus mismatch with individuals' performance. We found that the 100% outcome probability condition induced a more negative FN following no-loss than the 50% outcome probability condition. The FN following loss was more negative in the 50% compared to the 100% outcome probability condition. Performance-incompatible loss resulted in a more negative FN than performance-compatible loss. Our results indicate that the self-evaluation of the correctness of responses should be taken into account when the effects of outcome probability and expectation mismatch on FN are investigated.

  4. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

    The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% in average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% in average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% in average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% in average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with a RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights in the origin and evolution of the genetic code. 

  5. Feature discrimination/identification based upon SAR return variations

    NASA Technical Reports Server (NTRS)

    Rasco, W. A., Sr.; Pietsch, R.

    1978-01-01

    The look-to-look variation statistics in the returns recorded in-flight by a digital, real-time SAR system are analyzed. That the variations in look-to-look returns from different classes carry information content unique to those classes was illustrated by a model based on four variants derived from the four-look in-flight SAR data under study. The model was limited to four classes of returns: mowed grass on an athletic field, rough unmowed grass and weeds on a large vacant field, young fruit trees in a large orchard, and metal mobile homes and storage buildings in a large mobile home park. The data population, in excess of 1000 returns, represented over 250 individual pixels from the four classes. The multivariant discriminant model operated on the set of returns for each pixel and assigned that pixel to one of the four classes, based on the target variants and the probability distribution function of the four variants for each class.

  6. Detection of 224 candidate structured RNAs by comparative analysis of specific subsets of intergenic regions

    PubMed Central

    Lünse, Christina E.; Corbino, Keith A.; Ames, Tyler D.; Nelson, James W.; Roth, Adam; Perkins, Kevin R.; Sherlock, Madeline E.

    2017-01-01

    The discovery of structured non-coding RNAs (ncRNAs) in bacteria can reveal new facets of biology and biochemistry. Comparative genomics analyses executed by powerful computer algorithms have successfully been used to uncover many novel bacterial ncRNA classes in recent years. However, this general search strategy favors the discovery of more common ncRNA classes, whereas progressively rarer classes are correspondingly more difficult to identify. In the current study, we confront this problem by devising several methods to select subsets of intergenic regions that can concentrate these rare RNA classes, thereby increasing the probability that comparative sequence analysis approaches will reveal their existence. By implementing these methods, we discovered 224 novel ncRNA classes, which include ROOL RNA, an RNA class averaging 581 nt and present in multiple phyla, several highly conserved and widespread ncRNA classes with properties that suggest sophisticated biochemical functions and a multitude of putative cis-regulatory RNA classes involved in a variety of biological processes. We expect that further research on these newly found RNA classes will reveal additional aspects of novel biology, and allow for greater insights into the biochemistry performed by ncRNAs. PMID:28977401

  7. 32 CFR 269.4 - Cost of living adjustments of civil monetary penalties.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Statement 5,000 5,500 33 U.S.C. 1319(g)(2)(A) § 404 Permit Condition Violation, Class I (per violation amount) 10,000 11,000 33 U.S.C. 1319(g)(2)(A) § 404 Permit Condition Violation, Class I (maximum amount) 25,000 27,500 33 U.S.C. 1319(g)(2)(B) § 404 Permit Condition Violation, Class II (per day amount) 10,000...

  8. Multi-Platform Metabolomic Analyses of Rat Urine Following Exposure to Perfluorinated Chemicals (PFCs)

    EPA Science Inventory

    Perfluorinated chemicals (PFCs), namely perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS), represent an emerging class of persistent and bioaccumulative compounds. Global occurrence of these fluorochemicals, coupled with probable human exposure, has prompted inv...

  9. Thermosetting Phthalocyanine Polymers

    NASA Technical Reports Server (NTRS)

    Fohlen, G.; Parker, J.; Achar, B.

    1985-01-01

    A group of phthalocyanine polymers resists thermal degradation. The polymers are expected to be semiconducting. Principal applications are probably in molded or laminated parts that must withstand high temperatures. The polymers are made from either of two classes of monomer: bisphthalonitriles with imide linkages or bisphthalonitriles with ester-imide linkages.

  10. 2060 Chiron - Visual and thermal infrared observations

    NASA Technical Reports Server (NTRS)

    Lebofsky, L. A.; Tholen, D. J.; Rieke, G. H.; Lebofsky, M. J.

    1984-01-01

    Five-color (wavelength = 0.36-0.85 microns) and thermal infrared (wavelength = 22.5 microns) photometric observations of the unusual asteroid 2060 Chiron were made. Between 0.36 and 0.85 microns, Chiron's reflectance spectrum is similar to those of C-class asteroids as well as Saturn's satellite Phoebe. However, the thermal IR measurements imply an albedo greater than 0.05 (i.e., a diameter of less than 250 km at the 2-sigma level) that is probably higher than those of C-class asteroids or Phoebe.

  11. Probabilistic tsunami hazard assessment based on the long-term evaluation of subduction-zone earthquakes along the Sagami Trough, Japan

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Ohsumi, T.; Morikawa, N.; Kawai, S.; Maeda, T.; Matsuyama, H.; Toyama, N.; Kito, T.; Murata, Y.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.; Hakamata, T.

    2017-12-01

    For the forthcoming large earthquakes along the Sagami Trough, where the Philippine Sea Plate is subducting beneath the northeast Japan arc, the Earthquake Research Committee (ERC)/Headquarters for Earthquake Research Promotion, Japanese government (2014a) assessed that M7 and M8 class earthquakes will occur there and defined the possible extent of the earthquake source areas. They assessed occurrence probabilities within the next 30 years (from Jan. 1, 2014) of 70% for M7 class earthquakes and 0%-5% for M8 class earthquakes. First, we set 10 possible earthquake source areas (ESAs) and 920 ESAs, respectively, for M8 and M7 class earthquakes. Next, we constructed 125 characterized earthquake fault models (CEFMs) and 938 CEFMs, respectively, for M8 and M7 class earthquakes, based on the "tsunami recipe" of ERC (2017) (Kitoh et al., 2016, JpGU). All the CEFMs are allowed to have a large slip area to express fault slip heterogeneity. For all the CEFMs, we calculated tsunamis by solving a nonlinear long wave equation with a finite-difference method, including runup calculation, over a nesting grid system with a minimum grid size of 50 meters. Finally, we re-distributed the occurrence probability over all CEFMs (Abe et al., 2014, JpGU) and accumulated the exceedance probabilities for variable tsunami heights, calculated from all the CEFMs, at every observation point along the Pacific coast to obtain the PTHA. We incorporated aleatory uncertainties inherent in the tsunami calculation and in earthquake fault slip heterogeneity. We considered two kinds of probabilistic hazard models: a "present-time hazard model", which assumes that earthquake occurrence follows a renewal process based on a BPT distribution when the latest faulting time is known, and a "long-time averaged hazard model", which assumes that earthquake occurrence follows a stationary Poisson process. 
As an example, we examined the probability that the tsunami height will exceed 3 meters at coastal points in the next 30 years (from Jan. 1, 2014). The present-time hazard model showed relatively high probabilities, over 0.1%, along the Boso Peninsula. The long-time averaged hazard model showed the highest probabilities, over 3%, along the Boso Peninsula and relatively high probabilities, over 0.1%, along wide coastal areas on the Pacific side from the Kii Peninsula to Fukushima Prefecture.

  12. Identification of cloud fields by the nonparametric algorithm of pattern recognition from normalized video data recorded with the AVHRR instrument

    NASA Astrophysics Data System (ADS)

    Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.

    2002-02-01

    The problem of cloud field recognition from NOAA satellite data is urgent not only for solving meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface, associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used AVHRR/NOAA video data that regularly imaged the situation in the territory. The complexity and extremely nonstationary character of the problems to be solved call for the use of information from all spectral channels, the mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For this class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and information content of the synthesized decision rules. In this case, to solve the problem of identifying cloud field types efficiently, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem arises of nonparametric reconstruction of the distributions from the learning samples. To this end, we used nonparametric estimates of the distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was replaced by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, the cloudiness type was identified using the Bayes decision rule.
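
    The pipeline sketched in this abstract (estimate the class-conditional densities nonparametrically, then classify with the Bayes decision rule) can be illustrated in a few lines. The sketch below is a simplified stand-in: it uses the classical Epanechnikov kernel rather than the authors' modified variant, one-dimensional synthetic data in place of multispectral AVHRR channels, and an illustrative bandwidth:

    ```python
    import numpy as np

    def epanechnikov(u):
        # Classical Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on |u| <= 1.
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

    def kde(x, sample, h):
        """Nonparametric estimate of a class-conditional density p(x | class)."""
        u = (np.asarray(x) - sample[:, None]) / h
        return epanechnikov(u).mean(axis=0) / h

    def bayes_classify(x, samples, priors, h=0.5):
        """Bayes decision rule: pick the class maximizing prior * p(x | class)."""
        scores = np.stack([p * kde(x, s, h) for s, p in zip(samples, priors)])
        return scores.argmax(axis=0)

    rng = np.random.default_rng(0)
    class0 = rng.normal(-2.0, 1.0, 500)   # learning sample for class 0
    class1 = rng.normal(+2.0, 1.0, 500)   # learning sample for class 1
    x = np.array([-2.0, 2.0])
    labels = bayes_classify(x, [class0, class1], priors=[0.5, 0.5])
    # Points near each class center are assigned to that class.
    ```

    With equal priors the rule reduces to choosing the class with the larger estimated density at x; unequal priors shift the decision boundary accordingly.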

  13. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE PAGES

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    2016-04-07

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
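
    The reciprocal distribution referred to here is the continuous counterpart of Benford's first-digit law: integrating p(x) = 1/(x ln 10) over [d, d+1) on the decade [1, 10) gives the leading-digit probability P(d) = log10(1 + 1/d). A quick numeric check:

    ```python
    import math

    # Benford first-digit probabilities implied by the reciprocal
    # distribution p(x) = 1 / (x ln 10) on [1, 10):
    #   P(d) = integral_d^{d+1} p(x) dx = log10(1 + 1/d)
    def benford(d):
        return math.log10(1 + 1 / d)

    probs = [benford(d) for d in range(1, 10)]
    # The nine probabilities sum to 1, and digit 1 is the most likely
    # leading digit, at about 30.1%.
    ```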

  14. Encounter risk analysis of rainfall and reference crop evapotranspiration in the irrigation district

    NASA Astrophysics Data System (ADS)

    Zhang, Jinping; Lin, Xiaomin; Zhao, Yong; Hong, Yang

    2017-09-01

    Rainfall and reference crop evapotranspiration are random but mutually dependent variables in an irrigation district, and their encounter situation determines water shortage risks in the context of natural water supply and demand. In reality, however, rainfall and reference crop evapotranspiration may have different marginal distributions, and their relations are nonlinear. In this study, based on the annual rainfall and reference crop evapotranspiration data series from 1970 to 2013 in the Luhun irrigation district of China, the joint probability distribution of rainfall and reference crop evapotranspiration is developed with the Frank copula function. Using the joint probability distribution, the synchronous-asynchronous encounter risk, conditional joint probability, and conditional return period of different combinations of rainfall and reference crop evapotranspiration are analyzed. The results show that the copula-based joint probability distribution of rainfall and reference crop evapotranspiration is reasonable. The asynchronous encounter probability of rainfall and reference crop evapotranspiration is greater than their synchronous encounter probability, and the water shortage risk associated with meteorological drought (i.e. rainfall variability) is the more likely to appear. Compared with other states, the conditional joint probability is higher and the conditional return period lower under either low rainfall or high reference crop evapotranspiration. For a specific high reference crop evapotranspiration with a certain frequency, the encounter risk of low rainfall and high reference crop evapotranspiration increases as the frequency decreases. For a specific low rainfall with a certain frequency, the encounter risk of low rainfall and high reference crop evapotranspiration decreases as the frequency decreases. 
When either the high reference crop evapotranspiration exceeds a certain frequency or low rainfall does not exceed a certain frequency, the higher conditional joint probability and lower conditional return period of various combinations likely cause a water shortage, but the water shortage is not severe.
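
    A minimal sketch of the copula construction behind this analysis. The Frank copula below is the standard textbook form; the dependence parameter theta and the 0.375 frequency threshold are illustrative values, not estimates from the paper:

    ```python
    import math

    def frank_copula(u, v, theta):
        """Frank copula C(u, v) = P(U <= u, V <= v) for uniform marginals.

        theta controls dependence; theta -> 0 recovers independence u * v."""
        if abs(theta) < 1e-12:
            return u * v
        num = math.expm1(-theta * u) * math.expm1(-theta * v)
        return -math.log1p(num / math.expm1(-theta)) / theta

    # Synchronous "low rainfall and low evapotranspiration" encounter:
    # joint probability that both variables fall below their 37.5% quantile,
    # under moderate positive dependence versus independence.
    p_sync = frank_copula(0.375, 0.375, theta=5.0)
    p_indep = 0.375 * 0.375
    # Positive dependence raises the joint low-low probability above the
    # independent case, as expected for synchronous encounters.
    ```

    Marginal distributions fitted to the rainfall and evapotranspiration series would supply the u and v arguments via their cumulative distribution functions; conditional probabilities and return periods then follow from ratios and reciprocals of such joint probabilities.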

  15. Causal explanations for class inequality in health--an empirical analysis.

    PubMed

    Lundberg, O

    1991-01-01

    One of the most important issues for research on social class inequalities in health is the causes behind such differences. So far, the debate on class inequalities in health has mainly centred on hypotheses about artefactual and selectional processes. Although most contributors to this branch of research have argued in favour of causal explanations, these have received very little systematic scrutiny. In this article, several possible causal factors are singled out for empirical testing. The effect of these factors on class differences in physical and mental illness is studied by means of logit regressions. On the basis of these analyses, it is shown that physical working conditions are the prime source of class inequality in physical illness, although economic hardship during upbringing and health-related behaviours also contribute. For class inequality in mental illness, these three factors plus a weak social network are important. In sum, a large part of the class differences in physical as well as mental illness can be understood as a result of systematic differences between classes in living conditions, primarily differences in working conditions.

  16. Classification with asymmetric label noise: Consistency and maximal denoising

    DOE PAGES

    Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...

    2016-09-20

    In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.

  17. Classification with asymmetric label noise: Consistency and maximal denoising

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, Gilles; Flaska, Marek; Handy, Gregory

    In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.

  18. Tsirelson's bound and supersymmetric entangled states

    PubMed Central

    Borsten, L.; Brádler, K.; Duff, M. J.

    2014-01-01

    A superqubit, belonging to a (2|1)-dimensional super-Hilbert space, constitutes the minimal supersymmetric extension of the conventional qubit. In order to see whether superqubits are more non-local than ordinary qubits, we construct a class of two-superqubit entangled states as a non-local resource in the CHSH game. Since super-Hilbert-space amplitudes are Grassmann numbers, the result depends on how we extract real probabilities, and we examine three choices of map: (1) DeWitt, (2) Trigonometric and (3) Modified Rogers. In cases (1) and (2), the winning probability reaches the Tsirelson bound p_win = cos²(π/8) ≈ 0.8536 of standard quantum mechanics. Case (3) crosses Tsirelson's bound with p_win ≈ 0.9265. Although all states used in the game involve probabilities lying between 0 and 1, case (3) permits other changes of basis inducing negative transition probabilities. PMID:25294964
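
    The Tsirelson bound quoted in this abstract can be checked numerically; cos²(π/8) has the closed form (2 + √2)/4, which exceeds the classical CHSH winning probability of 3/4:

    ```python
    import math

    # Tsirelson bound on the CHSH-game winning probability in standard
    # quantum mechanics: p_win = cos^2(pi/8) = (2 + sqrt(2)) / 4.
    p_win = math.cos(math.pi / 8) ** 2
    closed_form = (2 + math.sqrt(2)) / 4
    classical_bound = 3 / 4   # best classical (local hidden variable) strategy
    ```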

  19. Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD

    PubMed Central

    Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne

    2014-01-01

    We recorded visual event-related brain potentials (ERPs) from 32 adult male participants (16 high-functioning participants diagnosed with Autism Spectrum Disorder (ASD) and 16 control participants, ranging in age from 18–53 yrs) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability conditions, whereas the probability of a third non-target stimulus was held constant in all conditions. P3 amplitude to target stimuli was more sensitive to probability in ASD than in TD participants, whereas P3 amplitude to non-target stimuli was less responsive to probability in ASD participants. This suggests that neural responses to changes in event probability are attention-dependent in high-functioning ASD. The implications of these findings for higher-level behaviors such as prediction and planning are discussed. PMID:24488156

  20. Factors influencing reporting and harvest probabilities in North American geese

    USGS Publications Warehouse

    Zimmerman, G.S.; Moser, T.J.; Kendall, W.L.; Doherty, P.F.; White, Gary C.; Caswell, D.F.

    2009-01-01

    We assessed variation in reporting probabilities of standard bands among species, populations, harvest locations, and size classes of North American geese to enable estimation of unbiased harvest probabilities. We included reward (US$10, $20, $30, $50, or $100) and control ($0) banded geese from 16 recognized goose populations of 4 species: Canada (Branta canadensis), cackling (B. hutchinsii), Ross's (Chen rossii), and snow geese (C. caerulescens). We incorporated spatially explicit direct recoveries and live recaptures into a multinomial model to estimate reporting, harvest, and band-retention probabilities. We compared various models for estimating harvest probabilities at country (United States vs. Canada), flyway (5 administrative regions), and harvest area (i.e., flyways divided into northern and southern sections) scales. Mean reporting probability of standard bands was 0.73 (95% CI = 0.69–0.77). Point estimates of reporting probabilities for goose populations or spatial units varied from 0.52 to 0.93, but confidence intervals for individual estimates overlapped and model selection indicated that models with species, population, or spatial effects were less parsimonious than those without these effects. Our estimates were similar to recently reported estimates for mallards (Anas platyrhynchos). We provide current harvest probability estimates for these populations using our direct measures of reporting probability, improving the accuracy of previous estimates obtained from recovery probabilities alone. Goose managers and researchers throughout North America can use our reporting probabilities to correct recovery probabilities estimated from standard banding operations for deriving spatially explicit harvest probabilities.

  1. Longitudinal patterns of gambling activities and associated risk factors in college students

    PubMed Central

    Goudriaan, Anna E.; Slutske, Wendy S.; Krull, Jennifer L.; Sher, Kenneth J.

    2009-01-01

    Aims To investigate which clusters of gambling activities exist within a longitudinal study of college health, how membership in gambling clusters changes over time and whether particular clusters of gambling are associated with unhealthy risk behaviour. Design Four-year longitudinal study (2002–2006). Setting Large, public university. Participants Undergraduate college students. Measurements Ten common gambling activities were measured during 4 consecutive college years (years 1–4). Clusters of gambling activities were examined using latent class analyses. Relations between gambling clusters and gender, Greek membership, alcohol use, drug use, personality indicators of behavioural undercontrol and psychological distress were examined. Findings Four latent gambling classes were identified: (1) a low-gambling class, (2) a card gambling class, (3) a casino/slots gambling class and (4) an extensive gambling class. Over the first college years a high probability of transitioning from the low-gambling class and the card gambling class into the casino/slots gambling class was present. Membership in the card, casino/slots and extensive gambling classes was associated with higher scores on alcohol/drug use, novelty seeking and self-identified gambling problems compared to the low-gambling class. The extensive gambling class scored higher than the other gambling classes on risk factors. Conclusions Extensive gamblers and card gamblers are at higher risk for problem gambling and other risky health behaviours. Prospective examinations of class membership suggested that being in the extensive and the low gambling classes was highly stable across the 4 years of college. PMID:19438422

  2. Comparison of vision disorders between children in mainstream and special education classes in government primary schools in Malaysia.

    PubMed

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong; Md Noor, Abdul Rahim; Goh, Pik-Pin

    2012-08-01

    The visual status of children with learning disabilities has not been extensively studied. This study aimed to compare vision disorders between children in mainstream classes and those with learning disabilities attending special education classes in government primary schools in Malaysia. In this cross-sectional comparative study, 60 school children (30 from mainstream classes and 30 from special education classes) who were matched in age (6-12 years old) and ethnicity (Malay, Chinese and Indian) were examined. The subjects were recruited using non-probability convenience sampling. A complete eye examination was performed to detect three major vision disorders, namely refractive error, lag of accommodation and convergence insufficiency. The overall prevalence of refractive error, lag of accommodation and convergence insufficiency was found to be 65.0%, 43.3% and 35.2%, respectively. Convergence insufficiency (χ² = 24.073, p < 0.001) was found to be associated with children in special education classes. No association was found between refractive error and lag of accommodation (p > 0.05) with the type of classes. Children in special education classes are more likely to have convergence insufficiency compared to children in mainstream classes. Thus, vision screening programmes for children in special education classes may need to be modified.

  3. Analysis of class II (hydrolytic) and class I (beta-lyase) apurinic/apyrimidinic endonucleases with a synthetic DNA substrate.

    PubMed Central

    Levin, J D; Demple, B

    1990-01-01

    We have developed simple and sensitive assays that distinguish the main classes of apurinic/apyrimidinic (AP) endonucleases: Class I enzymes that cleave on the 3' side of AP sites by beta-elimination, and Class II enzymes that cleave by hydrolysis on the 5' side. The distinction of the two types depends on the use of a synthetic DNA polymer that contains AP sites with 5'-[32P]phosphate residues. Using this approach, we now show directly that Escherichia coli endonuclease IV and human AP endonuclease are Class II enzymes, as inferred previously on the basis of indirect assays. The assay method does not exhibit significant interference by nonspecific nucleases or primary amines, which allows the ready determination of different AP endonuclease activities in crude cell extracts. In this way, we show that virtually all of the Class II AP endonuclease activity in E. coli can be accounted for by two enzymes: exonuclease III and endonuclease IV. In the yeast Saccharomyces cerevisiae, the Class II AP endonuclease activity is totally dependent on a single enzyme, the Apn1 protein, but there are probably multiple Class I enzymes. The versatility and ease of our approach should be useful for characterizing this important class of DNA repair enzymes in diverse systems. PMID:1698278

  4. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
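For intuition, the conditional probability function that the decomposition above generalizes can, at low chain order, be estimated by simple transition counting (a minimal sketch, not the authors' multilinear memory-function method):

```python
from collections import Counter, defaultdict

def conditional_probs(seq, order=1):
    """Estimate P(next symbol | previous `order` symbols) by counting
    transitions in a symbolic sequence over a finite alphabet."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        counts[tuple(seq[i - order:i])][seq[i]] += 1
    # normalize counts within each context to conditional probabilities
    return {ctx: {s: n / sum(c.values()) for s, n in c.items()}
            for ctx, c in counts.items()}

probs = conditional_probs("abababbb", order=1)
# probs[('a',)]['b'] is 1.0; probs[('b',)] splits evenly between 'a' and 'b'
```

Raising `order` recovers higher-order chains, at the cost of exponentially many contexts; the paper's memory-function decomposition is precisely a way to tame that growth.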

  5. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  6. Phenols in hydrothermal petroleums and sediment bitumen from Guaymas Basin, Gulf of California

    NASA Technical Reports Server (NTRS)

    Simoneit, B. R.; Leif, R. N.; Ishiwatari, R.

    1996-01-01

    The aliphatic, aromatic and polar (NSO) fractions of seabed petroleums and sediment bitumen extracts from the Guaymas Basin hydrothermal system have been analyzed by gas chromatography and gas chromatography-mass spectrometry (free and silylated). The oils were collected from the interiors and exteriors of high temperature hydrothermal vents and represent hydrothermal pyrolyzates that have migrated to the seafloor by hydrothermal fluid circulation. The downcore sediments are representative of both thermally unaltered and thermally altered sediments. The survey has revealed the presence of oxygenated compounds in samples with a high degree of thermal maturity. Phenols are one class of oxygenated compounds found in these samples. A group of methyl-, dimethyl- and trimethyl-isoprenoidyl phenols (C27-C29) is present in all of the seabed NSO fractions, with the methyl- and dimethyl-isoprenoidyl phenols occurring as major components, and a trimethyl-isoprenoidyl phenol as a minor component. A homologous series of n-alkylphenols (C13-C33) has also been found in the seabed petroleums. These phenols are most likely derived from the hydrothermal alteration of sedimentary organic matter. The n-alkylphenols are probably synthesized under hydrothermal conditions, but the isoprenoidyl phenols are probably hydrothermal alteration products of natural product precursors. The suites of phenols do not appear to be useful tracers of high temperature hydrothermal processes.

  7. Food and habitat resource partitioning between three estuarine fish species on the Swedish west coast

    NASA Astrophysics Data System (ADS)

    Thorman, Staffan

    1983-12-01

    In 1978 the food and habitat resource partitioning of three small and common fish species, viz. Pomatoschistus microps (Krøyer), Gasterosteus aculeatus (L.) and Pungitius pungitius (L.) were studied in river Broälven estuary on the Swedish west coast (58°22'N, 11°29'E). The area was divided into three habitats, based on environmental features. In July, September, and October stomach contents and size distribution of each species present were analysed. In July there was high food and habitat overlap between the species. Interference interactions probably occurred between some size classes of P. microps and the other two species. P. pungitius was exposed to both intra- and interspecific interactions. In September the food and habitat overlaps between G. aculeatus and P. pungitius were high, while both had low food and habitat overlaps in relation to P. microps. Interactions between G. aculeatus and P. pungitius were probably influenced by more severe abiotic conditions in one habitat, which caused lower abundances there, and higher abundances in the other two habitats. In October no interactions were observed. These results indicate that competition for food at least temporarily determines the species distribution in a temperate estuary, and that estuarine fish populations are sometimes food limited.

  8. New normative standards of conditional reasoning and the dual-source model

    PubMed Central

    Singmann, Henrik; Klauer, Karl Christoph; Over, David

    2014-01-01

There has been a major shift in research on human reasoning toward Bayesian and probabilistic approaches, which has been called a new paradigm. The new paradigm sees most everyday and scientific reasoning as taking place in a context of uncertainty, and inference is from uncertain beliefs and not from arbitrary assumptions. In this manuscript we present an empirical test of normative standards in the new paradigm using a novel probabilized conditional reasoning task. Our results indicated that for everyday conditionals with at least a weak causal connection between antecedent and consequent, only the conditional probability of the consequent given the antecedent contributes unique variance to predicting the probability of the conditional, but not the probability of the conjunction, nor the probability of the material conditional. Regarding normative accounts of reasoning, we found significant evidence that participants' responses were confidence preserving (i.e., p-valid in the sense of Adams, 1998) for MP inferences, but not for MT inferences. Additionally, only for MP inferences and to a lesser degree for DA inferences did the rate of responses inside the coherence intervals defined by mental probability logic (Pfeifer and Kleiter, 2005, 2010) exceed chance levels. In contrast to the normative accounts, the dual-source model (Klauer et al., 2010) is a descriptive model. It posits that participants integrate their background knowledge (i.e., the type of information primary to the normative approaches) and their subjective probability that a conclusion is seen as warranted based on its logical form. Model fits showed that the dual-source model, which employed participants' responses to a deductive task with abstract contents to estimate the form-based component, provided as good an account of the data as a model that solely used data from the probabilized conditional reasoning task. PMID:24860516

  9. New normative standards of conditional reasoning and the dual-source model.

    PubMed

    Singmann, Henrik; Klauer, Karl Christoph; Over, David

    2014-01-01

There has been a major shift in research on human reasoning toward Bayesian and probabilistic approaches, which has been called a new paradigm. The new paradigm sees most everyday and scientific reasoning as taking place in a context of uncertainty, and inference is from uncertain beliefs and not from arbitrary assumptions. In this manuscript we present an empirical test of normative standards in the new paradigm using a novel probabilized conditional reasoning task. Our results indicated that for everyday conditionals with at least a weak causal connection between antecedent and consequent, only the conditional probability of the consequent given the antecedent contributes unique variance to predicting the probability of the conditional, but not the probability of the conjunction, nor the probability of the material conditional. Regarding normative accounts of reasoning, we found significant evidence that participants' responses were confidence preserving (i.e., p-valid in the sense of Adams, 1998) for MP inferences, but not for MT inferences. Additionally, only for MP inferences and to a lesser degree for DA inferences did the rate of responses inside the coherence intervals defined by mental probability logic (Pfeifer and Kleiter, 2005, 2010) exceed chance levels. In contrast to the normative accounts, the dual-source model (Klauer et al., 2010) is a descriptive model. It posits that participants integrate their background knowledge (i.e., the type of information primary to the normative approaches) and their subjective probability that a conclusion is seen as warranted based on its logical form. Model fits showed that the dual-source model, which employed participants' responses to a deductive task with abstract contents to estimate the form-based component, provided as good an account of the data as a model that solely used data from the probabilized conditional reasoning task.
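For reference, the coherence interval for modus ponens (MP) referred to above has a simple closed form; this is the standard probabilistic-validity bound, stated here from general probability theory rather than quoted from the paper. Given P(A) = a and P(B|A) = b, any coherent assignment satisfies

```latex
P(B) \;\in\; \bigl[\,ab,\; ab + 1 - a\,\bigr]
```

the lower bound being P(A \wedge B) = ab, and the upper bound adding the probability 1 - a that B holds when A does not. A response counts as coherent when it falls inside this interval.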

  10. Probable Posttraumatic Stress Disorder in the US Veteran Population According to DSM-5: Results From the National Health and Resilience in Veterans Study.

    PubMed

    Wisco, Blair E; Marx, Brian P; Miller, Mark W; Wolf, Erika J; Mota, Natalie P; Krystal, John H; Southwick, Steven M; Pietrzak, Robert H

    2016-11-01

    With the publication of DSM-5, important changes were made to the diagnostic criteria for posttraumatic stress disorder (PTSD), including the addition of 3 new symptoms. Some have argued that these changes will further increase the already high rates of comorbidity between PTSD and other psychiatric disorders. This study examined the prevalence of DSM-5 PTSD, conditional probability of PTSD given certain trauma exposures, endorsement of specific PTSD symptoms, and psychiatric comorbidities in the US veteran population. Data were analyzed from the National Health and Resilience in Veterans Study (NHRVS), a Web-based survey of a cross-sectional, nationally representative, population-based sample of 1,484 US veterans, which was fielded from September through October 2013. Probable PTSD was assessed using the PTSD Checklist-5. The weighted lifetime and past-month prevalence of probable DSM-5 PTSD was 8.1% (SE = 0.7%) and 4.7% (SE = 0.6%), respectively. Conditional probability of lifetime probable PTSD ranged from 10.1% (sudden death of close family member or friend) to 28.0% (childhood sexual abuse). The DSM-5 PTSD symptoms with the lowest prevalence among veterans with probable PTSD were trauma-related amnesia and reckless and self-destructive behavior. Probable PTSD was associated with increased odds of mood and anxiety disorders (OR = 7.6-62.8, P < .001), substance use disorders (OR = 3.9-4.5, P < .001), and suicidal behaviors (OR = 6.7-15.1, P < .001). In US veterans, the prevalence of DSM-5 probable PTSD, conditional probability of probable PTSD, and odds of psychiatric comorbidity were similar to prior findings with DSM-IV-based measures; we found no evidence that changes in DSM-5 increase psychiatric comorbidity. Results underscore the high rates of exposure to both military and nonmilitary trauma and the high public health burden of DSM-5 PTSD and comorbid conditions in veterans. © Copyright 2016 Physicians Postgraduate Press, Inc.

  11. Quantitative assessment of building fire risk to life safety.

    PubMed

    Guanquan, Chu; Jinhua, Sun

    2008-06-01

    This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: probability and corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, Markov chain is combined with a time-dependent event tree for stochastic analysis on the occurrence probability of fire scenarios. To obtain consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires are designed based on different fire growth rates, after which uncertainty of onset time to untenable conditions can be characterized by probability distribution. When calculating occupant evacuation time, occupant premovement time is considered as a probability distribution. Consequences of a fire scenario can be evaluated according to probability distribution of evacuation time and onset time of untenable conditions. Then, fire risk to life safety can be evaluated based on occurrence probability and consequences of every fire scenario. To express the risk assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
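The scenario-level calculation described above (compare a distribution of evacuation times against a distribution of onset times to untenable conditions, then weight the resulting consequence by the scenario's occurrence probability) can be sketched with Monte Carlo sampling. The distributions and parameters below are illustrative assumptions, not the paper's calibrated values:

```python
import random

def scenario_consequence(n=10000, seed=1):
    """Monte Carlo estimate of P(evacuation time exceeds time to
    untenable conditions) for one fire scenario (illustrative
    distributions, not the study's calibrated ones)."""
    random.seed(seed)
    failures = 0
    for _ in range(n):
        premovement = random.lognormvariate(4.0, 0.5)  # seconds
        movement = random.uniform(60, 180)             # seconds
        untenable = random.normalvariate(300, 60)      # onset time, seconds
        if premovement + movement > untenable:
            failures += 1
    return failures / n

def life_safety_risk(scenarios):
    """Risk = sum over scenarios of occurrence probability x consequence."""
    return sum(p * c for p, c in scenarios)

c = scenario_consequence()
risk = life_safety_risk([(0.6, c), (0.4, 0.02)])
```

In the paper the occurrence probabilities themselves evolve in time via the Markov chain combined with the event tree; the fixed weights above stand in for that step.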

  12. AIDSinfo Drug Database

    MedlinePlus

Searchable database of HIV/AIDS-related drugs; entries can be filtered by approval status (FDA-approved or investigational), by drug class, and by associated condition (e.g. encephalitis, tuberculosis, varicella-zoster virus diseases).

  13. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  14. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
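The weight-given-height question in the abstract is exactly the conditional distribution of a bivariate normal, which has a closed form: W | H = h is normal with mean mu_w + rho (sd_w / sd_h)(h - mu_h) and variance sd_w^2 (1 - rho^2). The parameters below are illustrative assumptions, not the application's dataset:

```python
import math

def conditional_weight_given_height(h, mu_h, mu_w, sd_h, sd_w, rho):
    """Mean and variance of W | H = h under a bivariate normal model."""
    mean = mu_w + rho * sd_w / sd_h * (h - mu_h)
    var = sd_w ** 2 * (1 - rho ** 2)
    return mean, var

def normal_cdf(x, mu, var):
    """Normal CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / math.sqrt(2 * var)))

# P(120 <= W <= 140 | H = 65) under assumed parameters
m, v = conditional_weight_given_height(65, mu_h=65, mu_w=130,
                                       sd_h=3, sd_w=15, rho=0.5)
p = normal_cdf(140, m, v) - normal_cdf(120, m, v)
```

Note that conditioning shrinks the variance by the factor 1 - rho^2: knowing height makes weight more predictable in proportion to the squared correlation.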

  15. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Total 30-year data of wave height, wind speed, and current velocity in the Bohai Sea are hindcast and sampled for case study. Four kinds of distributions, namely, Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide close design parameters to actual sea state for ocean platform design.
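The joint-return-period idea can be sketched with the Gumbel-Hougaard copula, one of the four Archimedean families the study compares. The quantiles, the dependence parameter theta, and the "both variables exceed" (AND) definition below are illustrative assumptions:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls dependence
    (theta = 1 gives independence)."""
    return math.exp(-((-math.log(u)) ** theta
                      + (-math.log(v)) ** theta) ** (1 / theta))

def joint_return_period(u, v, theta, interarrival=1.0):
    """'AND' return period: both variables exceed their marginal
    quantiles u, v. interarrival is the mean time between events
    in years."""
    p_and = 1 - u - v + gumbel_copula(u, v, theta)
    return interarrival / p_and

# 50-year marginal quantiles (u = v = 1 - 1/50), moderate dependence
t = joint_return_period(0.98, 0.98, theta=2.0)
```

The AND return period always exceeds the marginal one, and it grows as dependence weakens, which is one reason univariate design values are more conservative than joint ones, as the abstract reports.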

  16. Similarity as an organising principle in short-term memory.

    PubMed

    LeCompte, D C; Watkins, M J

    1993-03-01

    The role of stimulus similarity as an organising principle in short-term memory was explored in a series of seven experiments. Each experiment involved the presentation of a short sequence of items that were drawn from two distinct physical classes and arranged such that item class changed after every second item. Following presentation, one item was re-presented as a probe for the 'target' item that had directly followed it in the sequence. Memory for the sequence was considered organised by class if probability of recall was higher when the probe and target were from the same class than when they were from different classes. Such organisation was found when one class was auditory and the other was visual (spoken vs. written words, and sounds vs. pictures). It was also found when both classes were auditory (words spoken in a male voice vs. words spoken in a female voice) and when both classes were visual (digits shown in one location vs. digits shown in another). It is concluded that short-term memory can be organised on the basis of sensory modality and on the basis of certain features within both the auditory and visual modalities.

  17. Design of partially supervised classifiers for multispectral image data

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David

    1993-01-01

    A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for just one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible requirements on representative prior statistical class characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.
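The first approach, significance testing against the single known class, can be caricatured in one dimension: fit a density to the training samples of the known class and accept points that fall inside a central acceptance region. This is a toy stand-in for the paper's optimal acceptance-probability estimate, with assumed Gaussian form and fixed thresholds:

```python
import math

def fit_gaussian(samples):
    """Maximum-likelihood mean and variance of a 1-D Gaussian."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def one_class_classifier(train, alpha=0.05):
    """Accept a point as the known class if it lies inside the
    central 1 - alpha region of the fitted Gaussian. Only two
    significance levels are wired in for this sketch."""
    mu, var = fit_gaussian(train)
    z_crit = 1.96 if alpha == 0.05 else 2.576  # two-sided thresholds
    def classify(x):
        return abs(x - mu) / math.sqrt(var) <= z_crit
    return classify

clf = one_class_classifier([4.8, 5.1, 5.0, 4.9, 5.2])
```

Everything rejected by such a test falls into the "unknown" remainder, which the paper's second algorithm then clusters to discover the other classes.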

  18. Singular optimal control and the identically non-regular problem in the calculus of variations

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.

    1985-01-01

    A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.
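For reference, the generalized Legendre-Clebsch condition mentioned above is usually stated as follows (standard form from the optimal-control literature, not quoted from the paper): along a singular arc of order q, with Hamiltonian H and scalar control u appearing linearly,

```latex
(-1)^{q}\,\frac{\partial}{\partial u}\!\left[\frac{d^{2q}}{dt^{2q}}\,\frac{\partial H}{\partial u}\right] \;\ge\; 0 .
```

On a singular arc \partial H / \partial u vanishes identically, so optimality information must come from these higher time derivatives, which is what makes the identically non-regular problems of the title degenerate for the classical Legendre condition.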

  19. Assessing ecological departure from reference conditions with the Fire Regime Condition Class (FRCC) Mapping Tool

    Treesearch

    Stephen W. Barrett; Thomas DeMeo; Jeffrey L. Jones; J.D. Zeiler; Lee C. Hutter

    2006-01-01

    Knowledge of ecological departure from a range of reference conditions provides a critical context for managing sustainable ecosystems. Fire Regime Condition Class (FRCC) is a qualitative measure characterizing possible departure from historical fire regimes. The FRCC Mapping Tool was developed as an ArcMap extension utilizing the protocol identified by the Interagency...

  20. Apparent inferiority of first-time breeders in the kittiwake: The role of heterogeneity among age classes

    USGS Publications Warehouse

    Cam, E.; Monnat, J.-Y.

    2000-01-01

    1. Many studies have provided evidence that first-time breeders have a lower survival, a lower probability of success, or of breeding, in the following year. Hypotheses based on reproductive costs have often been proposed to explain this. However, because of the intrinsic relationship between age and experience, the apparent inferiority of first-time breeders at the population level may result from selection, and experience may not influence performance within each individual. In this paper we address the question of phenotypic correlations between fitness components. This addresses differences in individual quality, a prerequisite for a selection process to occur. We also test the hypothesis of an influence of experience on these components while taking age and reproductive success into account: two factors likely to play a key role in a selection process. 2. Using data from a long-term study on the kittiwake, we found that first-time breeders have a lower probability of success, a lower survival and a lower probability of breeding in the next year than experienced breeders. However, neither experienced nor inexperienced breeders have a lower survival or a lower probability of breeding in the following year than birds that skipped a breeding opportunity. This suggests heterogeneity in quality among individuals. 3. Failed birds have a lower survival and a lower probability of breeding in the following year regardless of experience. This can be interpreted in the light of the selection hypothesis. The inferiority of inexperienced breeders may be linked to a higher proportion of lower-quality individuals in younger age classes. When age and breeding success are controlled for, there is no evidence of an influence of experience on survival or future breeding probability. 4. Using data from individuals whose reproductive life lasted the same number of years, we investigated the influence of experience on reproductive performance within individuals. 
There is no strong evidence that a process operating within individuals explains the improvement in performance observed at the population level.

  1. Probability based models for estimation of wildfire risk

    Treesearch

    Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit

    2004-01-01

    We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...
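The three probabilities in the definition above are linked by the product rule: the unconditional probability of a large fire in a cell-day is the probability of ignition times the conditional probability of a large fire given ignition. The numbers below are placeholders, not the fitted model's estimates:

```python
def large_fire_probability(p_ignition, p_large_given_ignition):
    """Unconditional probability of a large fire for one cell-day:
    P(large) = P(ignition) * P(large | ignition)."""
    return p_ignition * p_large_given_ignition

# hypothetical values: 1% ignition chance, 5% chance a fire grows large
p = large_fire_probability(0.01, 0.05)
```

Modeling the two factors separately lets each respond to different covariates (e.g. ignition to weather, growth to fuels), then recombining them yields the mapped risk.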

  2. Voluntary climate change mitigation actions of young adults: a classification of mitigators through latent class analysis.

    PubMed

    Korkala, Essi A E; Hugg, Timo T; Jaakkola, Jouni J K

    2014-01-01

    Encouraging individuals to take action is important for the overall success of climate change mitigation. Campaigns promoting climate change mitigation could address particular groups of the population on the basis of what kind of mitigation actions the group is already taking. To increase the knowledge of such groups performing similar mitigation actions we conducted a population-based cross-sectional study in Finland. The study population comprised 1623 young adults who returned a self-administered questionnaire (response rate 64%). Our aims were to identify groups of people engaged in similar climate change mitigation actions and to study the gender differences in the grouping. We also determined if socio-demographic characteristics can predict group membership. We performed latent class analysis using 14 mitigation actions as manifest variables. Three classes were identified among men: the Inactive (26%), the Semi-active (63%) and the Active (11%) and two classes among women: the Semi-active (72%) and the Active (28%). The Active among both genders were likely to have mitigated climate change through several actions, such as recycling, using environmentally friendly products, preferring public transport, and conserving energy. The Semi-Active had most probably recycled and preferred public transport because of climate change. The Inactive, a class identified among men only, had very probably done nothing to mitigate climate change. Among males, being single or divorced predicted little involvement in climate change mitigation. Among females, those without tertiary degree and those with annual income €≥16801 were less involved in climate change mitigation. Our results illustrate to what extent young adults are engaged in climate change mitigation, which factors predict little involvement in mitigation and give insight to which segments of the public could be the audiences of targeted mitigation campaigns.
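The latent class machinery used here can be illustrated with a minimal EM fit for binary indicators: each class has a weight pi[c] and item-response probabilities theta[c][j] = P(item j endorsed | class c). This is an illustrative sketch on toy data; the study used dedicated LCA software and 14 indicators:

```python
import random

def lca_em(data, n_classes=2, n_iter=200, seed=0):
    """Minimal EM for latent class analysis with binary items."""
    rng = random.Random(seed)
    n_items = len(data[0])
    pi = [1.0 / n_classes] * n_classes
    theta = [[rng.uniform(0.25, 0.75) for _ in range(n_items)]
             for _ in range(n_classes)]
    for _ in range(n_iter):
        # E-step: posterior class membership for each respondent
        post = []
        for row in data:
            like = []
            for c in range(n_classes):
                p = pi[c]
                for j, x in enumerate(row):
                    p *= theta[c][j] if x else 1 - theta[c][j]
                like.append(p)
            z = sum(like)
            post.append([p / z for p in like])
        # M-step: update class weights and item-response probabilities
        for c in range(n_classes):
            w = sum(p[c] for p in post)
            pi[c] = w / len(data)
            for j in range(n_items):
                theta[c][j] = sum(p[c] * row[j]
                                  for p, row in zip(post, data)) / w
    return pi, theta

# two cleanly separated toy classes: all-endorse vs. none-endorse
pi, theta = lca_em([[1, 1, 1]] * 10 + [[0, 0, 0]] * 10)
```

The fitted theta values are the item-response (conditional) probabilities the abstract interprets, and pi gives the class sizes (e.g. the 26/63/11% split among men).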

  3. Voluntary Climate Change Mitigation Actions of Young Adults: A Classification of Mitigators through Latent Class Analysis

    PubMed Central

    Korkala, Essi A. E.; Hugg, Timo T.; Jaakkola, Jouni J. K.

    2014-01-01

    Encouraging individuals to take action is important for the overall success of climate change mitigation. Campaigns promoting climate change mitigation could address particular groups of the population on the basis of what kind of mitigation actions the group is already taking. To increase the knowledge of such groups performing similar mitigation actions we conducted a population-based cross-sectional study in Finland. The study population comprised 1623 young adults who returned a self-administered questionnaire (response rate 64%). Our aims were to identify groups of people engaged in similar climate change mitigation actions and to study the gender differences in the grouping. We also determined if socio-demographic characteristics can predict group membership. We performed latent class analysis using 14 mitigation actions as manifest variables. Three classes were identified among men: the Inactive (26%), the Semi-active (63%) and the Active (11%) and two classes among women: the Semi-active (72%) and the Active (28%). The Active among both genders were likely to have mitigated climate change through several actions, such as recycling, using environmentally friendly products, preferring public transport, and conserving energy. The Semi-Active had most probably recycled and preferred public transport because of climate change. The Inactive, a class identified among men only, had very probably done nothing to mitigate climate change. Among males, being single or divorced predicted little involvement in climate change mitigation. Among females, those without tertiary degree and those with annual income €≥16801 were less involved in climate change mitigation. Our results illustrate to what extent young adults are engaged in climate change mitigation, which factors predict little involvement in mitigation and give insight to which segments of the public could be the audiences of targeted mitigation campaigns. PMID:25054549

  4. Heterogeneity in 10-Year Course Trajectories of Moderate to Severe Major Depressive Disorder: A Danish National Register-Based Study.

    PubMed

    Musliner, Katherine L; Munk-Olsen, Trine; Laursen, Thomas M; Eaton, William W; Zandi, Peter P; Mortensen, Preben B

    2016-04-01

    Evidence suggests that long-term trajectories of major depressive disorder (MDD) are heterogeneous. The Danish Psychiatric Central Research Register (DPCRR) provides a rare opportunity to examine patterns and correlates of long-term trajectories in a large sample of patients with moderate to severe MDD. To characterize patterns and correlates of 10-year course trajectories of MDD in the DPCRR. A cohort containing 11 640 individuals born in Denmark in 1955 or later with their first recorded MDD diagnosis in the DPCRR between 1995 and 2002 was established. Patients were followed for 10 years from the date of their initial MDD diagnosis. Data were obtained from Danish civil and psychiatric national registers in June 2013 and were analyzed from April 4, 2014, to December 17, 2015. Correlates of trajectory class membership were sex, characteristics of the first recorded MDD episode (ie, age, severity, inpatient treatment, and record of suicide attempt or self-harm), and psychiatric diagnoses in parents (ie, depression, bipolar disorder, schizophrenia-spectrum disorders, substance abuse, and anxiety or somatoform disorders). The outcome variable was past-year contact at a psychiatric hospital with a main diagnosis of MDD during each of the 10 years following the initial MDD diagnosis. Trajectories were modeled using latent class growth analysis. The sample included 11 640 individuals (7493 [64.4%] women) aged 18 to 48 years (mean [SD], 31.4 [7.3]) at their first recorded MDD diagnosis. Four trajectory classes were identified: brief contact (77.0%) (characterized by low probability of contact after 2 years); prolonged initial contact (12.8%) (characterized by high decreasing probability of contact during the first 5 years); later reentry (7.1%) (characterized by moderate probability of contact during the second 5 years); and persistent contact (3.1%) (characterized by high or moderate probability of contact throughout). 
Female sex (odds ratio [OR] range, 1.82-2.22), inpatient treatment (OR range, 1.40-1.50), and severity at first recorded MDD episode (OR range: moderate, 1.61-1.84; severe, 1.93-2.23; and psychotic, 2.73-3.07) were associated with more severe trajectories. Parental anxiety (OR, 1.34 [95% CI, 1.10-1.63]) and depression (OR, 1.63 [95% CI, 1.28-2.09]) were associated with the prolonged initial contact and later reentry classes, respectively. Parental schizophrenia was associated with the persistent contact class (OR range, 2.55-3.04). Most people treated for moderate to severe MDD in Danish psychiatric hospitals do not receive additional MDD treatment after 2 years; however, a minority receive specialty treatment for up to a decade. Observable heterogeneity in the course may be indicative of underlying etiologic differences.

  5. Analyzing remote sensing geobotanical trends in Quetico Provincial Park, Ontario, Canada, using digital elevation data

    NASA Technical Reports Server (NTRS)

    Warner, Timothy A.; Campagna, David J.; Levandowski, Don W.; Cetin, Haluk; Evans, Carla S.

    1991-01-01

    A 10 x 13-km area in Quetico Provincial Park, Canada has been studied using a digital elevation model to separate different drainage classes and to examine the influence of site factors and lithology on vegetation. Landsat Thematic Mapper data have been classified into six forest classes of varying deciduous-coniferous cover through nPDF, a procedure based on probability density functions. It is shown that forests growing on mafic lithologies are enriched in deciduous species, compared to those growing on granites. Of the forest classes found on mafics, the highest coniferous component was on north-facing slopes, and the highest deciduous component on south-facing slopes. Granites showed no substantial variation between site classes. The digital-elevation-derived site data are considered an important tool in geobotanical investigations.
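    Classification by class-conditional probability density functions, the idea behind nPDF, can be sketched with one-dimensional Gaussian densities; the class statistics and pixel value below are invented, and the actual nPDF procedure is not reproduced here:

```python
# Minimal class-conditional classifier in the spirit of PDF-based schemes
# like nPDF: assign a pixel to the class whose 1-D Gaussian density is
# highest. Class means/SDs and pixel values are invented for illustration.
from math import exp, pi, sqrt

classes = {                     # per-class band statistics (illustrative)
    "conifer":   (55.0, 8.0),
    "deciduous": (90.0, 10.0),
    "mixed":     (72.0, 9.0),
}

def gaussian_pdf(x, mean, sd):
    return exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

def classify(pixel_value):
    # Maximum class-conditional density decides the label.
    return max(classes, key=lambda c: gaussian_pdf(pixel_value, *classes[c]))

print(classify(60.0))    # density-wise closest to the conifer class
```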

  6. Packaging Concerns and Techniques for Large Devices: Challenges for Complex Electronics

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.; Sampson, Michael J.

    2010-01-01

    NASA is going to have to accept the use of non-hermetic packages for complex devices. There are a large number of packaging options available. Space applications subject the packages to stresses they were probably not designed for (vacuum, for instance). NASA has to find a way of having assurance in the integrity of the packages. There are manufacturers interested in qualifying non-hermetic packages to MIL-PRF-38535 Class V. Government space users agree that Class V should be for hermetic packages only. NASA is working on a new class for non-hermetic packages for M38535 Appendix B, "Class Y". Testing for package integrity will be required but can be package specific, as described by a Package Integrity Test Plan. The plan is developed by the manufacturer and approved by DSCC and government space users.

  7. Targeting the probability versus cost of feared outcomes in public speaking anxiety.

    PubMed

    Nelson, Elizabeth A; Deacon, Brett J; Lickel, James J; Sy, Jennifer T

    2010-04-01

    Cognitive-behavioral theory suggests that social phobia is maintained, in part, by overestimates of the probability and cost of negative social events. Indeed, empirically supported cognitive-behavioral treatments directly target these cognitive biases through the use of in vivo exposure or behavioral experiments. While cognitive-behavioral theories and treatment protocols emphasize the importance of targeting probability and cost biases in the reduction of social anxiety, few studies have examined specific techniques for reducing probability and cost bias, and thus the relative efficacy of exposure to the probability versus cost of negative social events is unknown. In the present study, 37 undergraduates with high public speaking anxiety were randomly assigned to a single-session intervention designed to reduce either the perceived probability or the perceived cost of negative outcomes associated with public speaking. Compared to participants in the probability treatment condition, those in the cost treatment condition demonstrated significantly greater improvement on measures of public speaking anxiety and cost estimates for negative social events. The superior efficacy of the cost treatment condition was mediated by greater treatment-related changes in social cost estimates. The clinical implications of these findings are discussed. Published by Elsevier Ltd.

  8. Three-dimensional obstacle classification in laser range data

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter; Bers, Karl-Heinz

    1998-10-01

    The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser rangefinders, which are presently being flight tested and evaluated at German proving grounds, provide a possible solution: they have a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from wires at over 500 m range (depending on the system) with a high hit-and-detect probability. Despite the efficiency of the sensor, acceptance of current obstacle warning systems by test pilots is not very high, mainly due to the systems' inadequacies in obstacle recognition and visualization. This has motivated the development and testing of more advanced 3D scene analysis algorithms at FGAN-FIM to replace the obstacle recognition component of current warning systems. The basic ideas are to increase the recognition probability and reduce the false alarm rate for hard-to-extract obstacles such as wires by exploiting more readily recognizable objects such as terrain, poles, pylons, and trees, and by implementing a hierarchical classification procedure that generates a parametric description of the terrain surface as well as the class, position, orientation, size and shape of all objects in the scene. The algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition.

  9. Complexity, information loss, and model building: from neuro- to cognitive dynamics

    NASA Astrophysics Data System (ADS)

    Arecchi, F. Tito

    2007-06-01

    A scientific problem described within a given code is mapped to a corresponding computational problem. We call (algorithmic) complexity the bit length of the shortest instruction which solves the problem. Deterministic chaos in general affects a dynamical system, making the corresponding problem experimentally and computationally heavy, since one must reset the initial conditions at a rate higher than that of information loss (Kolmogorov entropy). One can control chaos by adding to the system new degrees of freedom (information swapping: information lost by chaos is replaced by that arising from the new degrees of freedom). This implies a change of code, or a new augmented model. Within a single code, changing hypotheses is equivalent to fixing different sets of control parameters, each with a different a-priori probability, to be then confirmed and transformed into an a-posteriori probability via Bayes theorem. Sequential application of Bayes rule is nothing other than the Darwinian strategy in evolutionary biology. The sequence is a steepest-ascent algorithm, which stops once maximum probability has been reached. At this point the hypothesis exploration stops. By changing code (and hence the set of relevant variables) one can start again to formulate new classes of hypotheses. We call semantic complexity the number of accessible scientific codes, or models, that describe a situation. It is however a fuzzy concept, insofar as this number changes due to interaction of the operator with the system under investigation. These considerations are illustrated with reference to a cognitive task, starting from synchronization of neuron arrays in a perceptual area and tracing the putative path toward model building.
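    The sequential Bayes step described above can be sketched in a few lines; the coin-tossing hypotheses and likelihood values are invented for illustration:

```python
# Sequential Bayesian updating over a fixed set of hypotheses.
# The hypotheses and likelihoods below are illustrative, not from the paper.

def bayes_update(prior, likelihood):
    """One application of Bayes' rule: posterior is prop. to likelihood * prior."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two competing hypotheses about a coin: fair vs. biased towards heads.
priors = {"fair": 0.5, "biased": 0.5}
lik_heads = {"fair": 0.5, "biased": 0.8}
lik_tails = {"fair": 0.5, "biased": 0.2}

posterior = priors
for outcome in "HHTHHH":                      # observed sequence
    posterior = bayes_update(posterior, lik_heads if outcome == "H" else lik_tails)

# Repeated updating climbs toward the maximum-probability hypothesis,
# the "steepest ascent" behaviour the abstract describes.
print(max(posterior, key=posterior.get))      # "biased" for this sequence
```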

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D.

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.

  11. Abstracting Sequences: Reasoning That Is a Key to Academic Achievement.

    PubMed

    Pasnak, Robert; Kidd, Julie K; Gadzichowski, K Marinka; Gallington, Debbie A; Schmerold, Katrina Lea; West, Heather

    2015-01-01

    The ability to understand sequences of items may be an important cognitive ability. To test this proposition, 8 first-grade children from each of 36 classes were randomly assigned to four conditions. Some were taught sequences that represented increasing or decreasing values, or were symmetrical, or were rotations of an object through 6 or 8 positions. Control children received equal numbers of sessions on mathematics, reading, or social studies. Instruction was conducted three times weekly in 15-min sessions for seven months. In May, the children taught sequences applied their understanding to novel sequences, and scored as well or better on three standardized reading tests as the control children. They outscored all children on tests of mathematics concepts, and scored better than control children on some mathematics scales. These findings indicate that developing an understanding of sequences is a form of abstraction, probably involving fluid reasoning, that provides a foundation for academic achievement in early education.

  12. Hematopoietic Stem Cell Transplantation in Thalassemia and Sickle Cell Anemia

    PubMed Central

    Lucarelli, Guido; Isgrò, Antonella; Sodani, Pietro; Gaziev, Javid

    2012-01-01

    The globally widespread single-gene disorders β-thalassemia and sickle cell anemia (SCA) can only be cured by allogeneic hematopoietic stem cell transplantation (HSCT). HSCT treatment of thalassemia has substantially improved over the last two decades, with advancements in preventive strategies, control of transplant-related complications, and preparative regimens. A risk class–based transplantation approach results in disease-free survival probabilities of 90%, 84%, and 78% for class 1, 2, and 3 thalassemia patients, respectively. Because of disease advancement, adult thalassemia patients have a higher risk for transplant-related toxicity and a 65% cure rate. Patients without matched donors could benefit from haploidentical mother-to-child transplantation. There is a high cure rate for children with SCA who receive HSCT following myeloablative conditioning protocols. Novel non-myeloablative transplantation protocols could make HSCT available to adult SCA patients who were previously excluded from allogeneic stem cell transplantation. PMID:22553502

  13. Low dosages: new chemotherapeutic weapons on the battlefield of immune-related disease

    PubMed Central

    Liu, Jing; Zhao, Jie; Hu, Liang; Cao, Yuchun; Huang, Bo

    2011-01-01

    Chemotherapeutic drugs eliminate tumor cells at relatively high doses and are considered weapons against tumors in clinics and hospitals. However, despite their ability to induce cellular apoptosis, chemotherapeutic drugs should probably be regarded more as a class of cell regulators than cell killers, if the dosage used and the fact that their targets are involved in basic molecular events are considered. Unfortunately, the regulatory properties of chemotherapeutic drugs are usually hidden or masked by the massive cell death induced by high doses. Recent evidence has begun to suggest that low dosages of chemotherapeutic drugs might profoundly regulate various intracellular aspects of normal cells, especially immune cells. Here, we discuss the immune regulatory roles of three kinds of chemotherapeutic drugs under low-dose conditions and propose low dosages as potential new chemotherapeutic weapons on the battlefield of immune-related disease. PMID:21423201

  14. Typical performance of approximation algorithms for NP-hard problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-11-01

    Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
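    The leaf-removal algorithm mentioned above admits a compact sketch; the graph is illustrative, not one of the paper's random-graph ensembles:

```python
# Leaf-removal sketch for minimum vertex cover: repeatedly take a leaf's
# neighbor into the cover and delete both vertices. On graphs whose core is
# empty this yields a minimum cover; otherwise an unresolved core remains,
# which is the regime where the typical-performance threshold appears.

def leaf_removal(adj):
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    cover = set()
    while True:
        leaf = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if leaf is None:
            break
        (neighbor,) = adj[leaf]
        cover.add(neighbor)
        for v in (leaf, neighbor):                # delete both endpoints
            for u in adj.pop(v):
                adj.get(u, set()).discard(v)
    core = {v for v, ns in adj.items() if ns}     # leftover core, if any
    return cover, core

# A small path graph 0-1-2-3-4 (a tree, so no core remains).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
cover, core = leaf_removal(adj)
print(cover, core)
```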

  15. Automatic classification of visual evoked potentials based on wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz

    2017-04-01

    Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the time-domain characteristic extremes called waves. The decision process is compound, therefore diagnosis significantly depends on the experience of the doctor. The authors developed a procedure - based on wavelet decomposition and linear discriminant analysis - that ensures automatic classification of visual evoked potentials. The algorithm makes it possible to assign an individual case to the normal or pathological class. The proposed classifier has 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
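    The full pipeline (multi-level wavelet decomposition followed by linear discriminant analysis) cannot be reconstructed from the abstract, but its basic building block, one level of a Haar wavelet decomposition, can be sketched; the signal values are invented:

```python
# One level of a Haar wavelet decomposition: split a signal into
# approximation (smooth) and detail coefficients. This is only the
# feature-extraction building block; the paper's multi-level decomposition
# and linear discriminant analysis are not reproduced here.
from math import sqrt

def haar_level(signal):
    assert len(signal) % 2 == 0
    approx = [(a + b) / sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_level(sig)
print(approx)   # smoothed half-length signal
print(detail)   # local differences (edges / transients)
```

With this normalization the transform preserves signal energy, so no information is lost between levels.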

  16. 49 CFR 572.36 - Test conditions and instrumentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... be mounted with its sensitive axis colinear with the pendulum's longitudinal centerline. (h) The... acceleration—Class 1000 (2) Neck forces—Class 1000 (3) Neck moments—Class 600 (4) Neck pendulum acceleration—Class 60 (5) Thorax and thorax pendulum acceleration—Class 180 (6) Thorax deflection—Class 180 (7) Knee...

  17. 49 CFR 572.36 - Test conditions and instrumentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... be mounted with its sensitive axis colinear with the pendulum's longitudinal centerline. (h) The... acceleration—Class 1000 (2) Neck forces—Class 1000 (3) Neck moments—Class 600 (4) Neck pendulum acceleration—Class 60 (5) Thorax and thorax pendulum acceleration—Class 180 (6) Thorax deflection—Class 180 (7) Knee...

  18. 49 CFR 572.36 - Test conditions and instrumentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... be mounted with its sensitive axis colinear with the pendulum's longitudinal centerline. (h) The... acceleration—Class 1000 (2) Neck forces—Class 1000 (3) Neck moments—Class 600 (4) Neck pendulum acceleration—Class 60 (5) Thorax and thorax pendulum acceleration—Class 180 (6) Thorax deflection—Class 180 (7) Knee...

  19. 49 CFR 572.36 - Test conditions and instrumentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... be mounted with its sensitive axis colinear with the pendulum's longitudinal centerline. (h) The... acceleration—Class 1000 (2) Neck forces—Class 1000 (3) Neck moments—Class 600 (4) Neck pendulum acceleration—Class 60 (5) Thorax and thorax pendulum acceleration—Class 180 (6) Thorax deflection—Class 180 (7) Knee...

  20. 49 CFR 572.36 - Test conditions and instrumentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... be mounted with its sensitive axis colinear with the pendulum's longitudinal centerline. (h) The... acceleration—Class 1000 (2) Neck forces—Class 1000 (3) Neck moments—Class 600 (4) Neck pendulum acceleration—Class 60 (5) Thorax and thorax pendulum acceleration—Class 180 (6) Thorax deflection—Class 180 (7) Knee...

  1. Where can pixel counting area estimates meet user-defined accuracy requirements?

    NASA Astrophysics Data System (ADS)

    Waldner, François; Defourny, Pierre

    2017-08-01

    Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimates are below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
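    The pixel counting estimator and its bias under asymmetric errors can be illustrated with a toy example; the class sizes, error rates and pixel size below are invented, not taken from the South African case study:

```python
# Pixel counting under asymmetric classification errors (illustrative
# numbers). The true area is recovered only when omission and commission
# errors happen to cancel; otherwise the estimator is biased.

true_crop, true_other = 6000, 4000          # pixels truly in each class
omission = 0.10    # fraction of crop pixels missed by the classifier
commission = 0.05  # fraction of other pixels wrongly labelled crop
pixel_area_ha = 0.09                        # e.g. 30 m x 30 m pixels

mapped_crop = true_crop * (1 - omission) + true_other * commission
estimate = mapped_crop * pixel_area_ha
truth = true_crop * pixel_area_ha
bias = (estimate - truth) / truth
print(f"estimated {estimate:.0f} ha vs true {truth:.0f} ha, bias {bias:+.1%}")
```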

  2. Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine

    NASA Astrophysics Data System (ADS)

    Gao, Feng; Dong, Junyu; Li, Bo; Xu, Qizhi; Xie, Cui

    2016-10-01

    Change detection is of high practical value to hazard assessment, crop growth monitoring, and urban sprawl detection. A synthetic aperture radar (SAR) image is an ideal information source for performing change detection since it is independent of atmospheric and sunlight conditions. Existing SAR image change detection methods usually generate a difference image (DI) first and use clustering methods to classify the pixels of the DI into changed and unchanged classes. Some useful information may get lost in the DI generation process. This paper proposes an SAR image change detection method based on the neighborhood-based ratio (NR) and an extreme learning machine (ELM). The NR operator is used to obtain pixels of interest that have a high probability of being changed or unchanged. Then, image patches centered at these pixels are generated, and an ELM is employed to train a model using these patches. Finally, pixels in both original SAR images are classified by the pretrained ELM model. The preclassification result and the ELM classification result are combined to form the final change map. The experimental results obtained on three real SAR image datasets and one simulated dataset show that the proposed method is robust to speckle noise and effective in detecting change information among multitemporal SAR images.
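    The abstract does not specify the NR operator itself, but the classical per-pixel log-ratio difference image it refines can be sketched with a fixed threshold; the grids and threshold are invented:

```python
# Classical log-ratio difference image for two co-registered SAR-like
# intensity grids, followed by a fixed threshold. The paper's
# neighborhood-based ratio (NR) refines this per-pixel ratio with local
# context; that refinement is not reproduced here.
from math import log

img1 = [[10, 10, 10], [10, 50, 10], [10, 10, 10]]
img2 = [[10, 10, 10], [10, 12, 10], [10, 10, 11]]

def log_ratio_di(a, b, eps=1e-6):
    # eps guards against zero intensities in either image.
    return [[abs(log((x + eps) / (y + eps))) for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

di = log_ratio_di(img1, img2)
threshold = 1.0                 # pixels above it are declared "changed"
changed = [(i, j) for i, row in enumerate(di)
           for j, v in enumerate(row) if v > threshold]
print(changed)                  # only the center pixel changed strongly
```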

  3. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
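    DILI samplers build on proposals that remain well defined on function space; the simplest such dimension-independent proposal is the preconditioned Crank-Nicolson (pCN) step, sketched here in one dimension with a toy Gaussian likelihood (the operator weighting and Hessian information that distinguish DILI are not reproduced):

```python
# Preconditioned Crank-Nicolson (pCN) MCMC: the simplest dimension-
# independent proposal for targets with a density w.r.t. a Gaussian
# reference measure. The likelihood below is a toy stand-in for an
# expensive forward model.
import random
from math import sqrt, exp

random.seed(0)

def neg_log_lik(u, data=1.5, noise=0.5):
    return 0.5 * ((data - u) / noise) ** 2

def pcn(n_steps, beta=0.3):
    u, samples, accepts = 0.0, [], 0
    for _ in range(n_steps):
        xi = random.gauss(0.0, 1.0)               # draw from the N(0,1) prior
        v = sqrt(1.0 - beta ** 2) * u + beta * xi
        # The pCN acceptance ratio involves only the likelihood; the
        # Gaussian prior cancels, which is what makes the proposal
        # invariant to the discretization level.
        if random.random() < min(1.0, exp(neg_log_lik(u) - neg_log_lik(v))):
            u, accepts = v, accepts + 1
        samples.append(u)
    return samples, accepts / n_steps

samples, acc_rate = pcn(5000)
post_mean = sum(samples) / len(samples)
print(f"acceptance rate {acc_rate:.2f}, posterior mean ~ {post_mean:.2f}")
```

For this conjugate toy problem the exact posterior mean is 1.2, which the chain should approach.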

  4. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    PubMed

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
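    The model-averaging step can be sketched as follows; the marginal likelihoods and relative-risk estimates are invented, and a real BMA would compute marginal likelihoods from the fitted time series models:

```python
# Bayesian model averaging over a small model set (illustrative numbers):
# posterior model probabilities weight each model's risk estimate, so the
# final estimate accounts for model uncertainty.

models = {                      # marginal likelihood and RR estimate per model
    "M1": {"marginal_lik": 0.010, "rr": 1.08},
    "M2": {"marginal_lik": 0.030, "rr": 1.15},
    "M3": {"marginal_lik": 0.010, "rr": 1.25},
}
prior = 1 / len(models)         # equal prior model probabilities

z = sum(m["marginal_lik"] * prior for m in models.values())
post = {k: m["marginal_lik"] * prior / z for k, m in models.items()}
rr_bma = sum(post[k] * models[k]["rr"] for k in models)
print(post, round(rr_bma, 3))
```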

  5. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
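    A minimal sketch of the classical approach the paper improves on: computing P(t) = exp(tQ) by truncating the matrix-exponential series, which is feasible only for tiny state spaces (the two-state rates here are invented, and the result is checked against the closed form):

```python
# Transition probabilities for a tiny CTMC via a truncated matrix-
# exponential series, P(t) = exp(tQ). For large or uncountable state
# spaces this is exactly what becomes infeasible, motivating the
# generating-function and compressed-sensing approaches in the paper.
from math import exp

a, b, t = 2.0, 1.0, 0.7                      # rates 0->1, 1->0, elapsed time
Q = [[-a, a], [b, -b]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(Q, t, terms=30):
    P = [[1.0, 0.0], [0.0, 1.0]]             # running sum, starts at identity
    term = [[1.0, 0.0], [0.0, 1.0]]          # current series term (tQ)^n / n!
    tQ = [[t * q for q in row] for row in Q]
    for n in range(1, terms):
        term = [[x / n for x in row] for row in mat_mul(term, tQ)]
        P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return P

P = expm(Q, t)
# Closed form for the two-state chain: P00(t) = b/(a+b) + a/(a+b)*exp(-(a+b)t)
p00_exact = b / (a + b) + a / (a + b) * exp(-(a + b) * t)
print(P[0][0], p00_exact)
```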

  6. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  7. 49 CFR 572.155 - Test conditions and instrumentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation potentiometer response (if used)—CFC 60. (3) Thorax: (i) Spine and pendulum accelerations—Class 180; (ii) Shoulder forces—Class 600; (4...

  8. 49 CFR 572.155 - Test conditions and instrumentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation potentiometer response (if used)—CFC 60. (3) Thorax: (i) Spine and pendulum accelerations—Class 180; (ii) Shoulder forces—Class 600; (4...

  9. 49 CFR 572.155 - Test conditions and instrumentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation potentiometer response (if used)—CFC 60. (3) Thorax: (i) Spine and pendulum accelerations—Class 180; (ii) Shoulder forces—Class 600; (4...

  10. 49 CFR 572.155 - Test conditions and instrumentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation potentiometer response (if used)—CFC 60. (3) Thorax: (i) Spine and pendulum accelerations—Class 180; (ii) Shoulder forces—Class 600; (4...

  11. 49 CFR 572.155 - Test conditions and instrumentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Moments—Class 600; (iii) Pendulum acceleration—Class 180; (iv) Rotation potentiometer response (if used)—CFC 60. (3) Thorax: (i) Spine and pendulum accelerations—Class 180; (ii) Shoulder forces—Class 600; (4...

  12. Aerobic Conditioning Class.

    ERIC Educational Resources Information Center

    Johnson, Neil R.

    1980-01-01

    An aerobic exercise class that focuses on the conditioning of the cardiovascular and muscular systems is presented. Students complete data cards on heart rate, pulse, and exercises to be completed during the forty minute course. (CJ)

  13. Role of bioinformatics in establishing microRNAs as modulators of abiotic stress responses: the new revolution

    PubMed Central

    Tripathi, Anita; Goswami, Kavita; Sanan-Mishra, Neeti

    2015-01-01

    microRNAs (miRs) are a class of 21–24 nucleotide long non-coding RNAs responsible for regulating the expression of associated genes, mainly by cleavage or translational inhibition of the target transcripts. With this silencing capability, miRs act as an important component in the regulation of plant responses under various stress conditions. In recent years, with drastic changes in environmental and soil conditions, different types of stresses have emerged as a major challenge for plant growth and productivity. The identification and profiling of miRs has itself been a challenge for researchers, given their small size and the large number of probable sequences in the genome. Application of computational approaches has expedited the process of identifying miRs and profiling their expression under different conditions. The development of high-throughput sequencing (HTS) techniques has provided access to global profiles of miRs for understanding their mode of action in plants. The introduction of various bioinformatics databases and tools has revolutionized the study of miRs and other small RNAs. This review focuses on the role of bioinformatics approaches in the identification and study of the regulatory roles of plant miRs in the adaptive response to stresses. PMID:26578966

  14. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
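    The stochastic approach can be illustrated with a minimal Monte Carlo model: peaks placed uniformly at random, with a separation declared successful when all spacings exceed a minimum width (a simplification of the paper's model; the component count, spacing and trial count are invented):

```python
# Monte Carlo estimate of the probability that m randomly placed peaks are
# all separated by at least a minimum spacing d, a stand-in for the paper's
# constant-peak-width ("gradient") case on a unit-length separation window.
import random

random.seed(42)

def p_success(m, d, trials=100_000):
    hits = 0
    for _ in range(trials):
        peaks = sorted(random.random() for _ in range(m))
        if all(b - a >= d for a, b in zip(peaks, peaks[1:])):
            hits += 1
    return hits / trials

m, d = 3, 0.1
est = p_success(m, d)
exact = (1 - (m - 1) * d) ** m          # classical uniform-spacings result
print(f"simulated {est:.3f} vs analytic {exact:.3f}")
```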

  15. Evidence-Based Medicine as a Tool for Undergraduate Probability and Statistics Education

    PubMed Central

    Masel, J.; Humphrey, P. T.; Blackburn, B.; Levine, J. A.

    2015-01-01

    Most students have difficulty reasoning about chance events, and misconceptions regarding probability can persist or even strengthen following traditional instruction. Many biostatistics classes sidestep this problem by prioritizing exploratory data analysis over probability. However, probability itself, in addition to statistics, is essential both to the biology curriculum and to informed decision making in daily life. One area in which probability is particularly important is medicine. Given the preponderance of pre-health students, in addition to more general interest in medicine, we capitalized on students’ intrinsic motivation in this area to teach both probability and statistics. We use the randomized controlled trial as the centerpiece of the course, because it exemplifies the most salient features of the scientific method, and the application of critical thinking to medicine. The other two pillars of the course are biomedical applications of Bayes’ theorem and science and society content. Backward design from these three overarching aims was used to select appropriate probability and statistics content, with a focus on eliciting and countering previously documented misconceptions in their medical context. Pretest/posttest assessments using the Quantitative Reasoning Quotient and Attitudes Toward Statistics instruments are positive, bucking several negative trends previously reported in statistics education. PMID:26582236
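    A biomedical application of Bayes' theorem of the kind the course centers on can be sketched as follows; the prevalence, sensitivity and specificity values are invented for illustration:

```python
# Bayes' theorem for a diagnostic test: the positive predictive value is
# often surprisingly low for rare conditions, a classic probability
# misconception. Numbers are illustrative, not from the article.

prevalence = 0.01          # P(disease)
sensitivity = 0.95         # P(positive | disease)
specificity = 0.90         # P(negative | no disease)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_pos      # P(disease | positive)
print(f"P(disease | positive test) = {ppv:.3f}")
```

Even with a fairly accurate test, most positives here are false positives because the disease is rare.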

  16. Learning difficulties of senior high school students based on probability understanding levels

    NASA Astrophysics Data System (ADS)

    Anggara, B.; Priatna, N.; Juandi, D.

    2018-05-01

    Identifying students' difficulties in learning the concept of probability helps teachers prepare appropriate learning processes and overcome obstacles that may arise in later lessons. This study assessed students' levels of understanding of probability and identified their difficulties as part of identifying epistemological obstacles to the concept. It employed a qualitative, descriptive approach involving 55 class XII students. Data were collected through a diagnostic test of probability learning difficulties, observation, and interviews, and were used to determine levels of understanding and the learning difficulties the students experienced. Test results and classroom observation showed that the mean cognitive level was level 2, indicating that students held appropriate quantitative information about probability concepts but that it might be incomplete or incorrectly applied. The main difficulties lay in constructing sample spaces, events, and mathematical models for probability problems; students also struggled with the principles of events and with prerequisite concepts.
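    The difficulty of constructing sample spaces and events can be made concrete with a small enumeration. This is a generic textbook example, not an item from the study's diagnostic test:

```python
from itertools import product
from fractions import Fraction

# Sample space for rolling two fair dice: all 36 ordered outcomes
space = list(product(range(1, 7), repeat=2))

# Event: the faces sum to 7
event = [outcome for outcome in space if sum(outcome) == 7]

# P(event) = |event| / |sample space|
prob = Fraction(len(event), len(space))  # 6/36 = 1/6
```

    Explicitly listing the sample space and then filtering it into an event separates the two steps students most often conflate.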

  17. Gothic pedagogy and Victorian reform treatises.

    PubMed

    Kehler, Grace

    2008-01-01

    This paper considers the work of bodily affect in three Victorian reform treatises about the industrial working classes: Kay's The Moral and Physical Condition of the Working Classes Employed in the Cotton Manufacture in Manchester, Chadwick's Report on the Sanitary Condition of the Labouring Population of Great Britain, and Engels's The Condition of the Working Class in England. Employing a gothic technology that graphically illustrates and appeals to the sensations, these treatises provide a striking instance of the extent to which Victorian attempts at social reform were routed through the visceral, sensible knowledge of the body. Since, however, the gothic tends toward the excessive, a second crucial feature of its technology entails the arousal of conflicting sensations that problematize class relations.

  18. Mixture class recovery in GMM under varying degrees of class separation: frequentist versus Bayesian estimation.

    PubMed

    Depaoli, Sarah

    2013-06-01

    Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
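    As a rough illustration of how class separation drives recovery, here is a minimal EM fit of a two-component one-dimensional Gaussian mixture. It is a simplified stand-in for the maximum-likelihood condition in the study, not the study's model (GMM fits latent-class growth trajectories, not scalar data); the variances are fixed at 1 and all names and settings are invented for the sketch.

```python
import math
import random

def norm_pdf(x, mu):
    """Unit-variance normal density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def em_two_gaussians(data, iters=100):
    """EM for a two-component 1-D Gaussian mixture with known unit
    variances; returns the two means and the first mixing weight."""
    mu1, mu2, pi1 = min(data), max(data), 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            a = pi1 * norm_pdf(x, mu1)
            b = (1.0 - pi1) * norm_pdf(x, mu2)
            r.append(a / (a + b))
        # M-step: re-estimate the means and the mixing weight
        n1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
        pi1 = n1 / len(data)
    return mu1, mu2, pi1

random.seed(0)
# Well-separated classes: means six standard deviations apart
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(6.0, 1.0) for _ in range(200)])
mu1, mu2, pi1 = em_two_gaussians(data)
```

    With this much separation EM recovers the means and the 50/50 class proportions well; shrinking the gap between the true means degrades recovery, which is the phenomenon the article studies in the far richer growth-trajectory setting.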

  19. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
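    A hedged sketch of the kind of workflow the framework describes: a dasymetric step distributes a known regional total over cells using ancillary data, and Monte Carlo draws from the implied multinomial propagate the resulting uncertainty into a downstream statistic. The weighting rule, the per-capita demand figure, and every name below are hypothetical, not the authors' method.

```python
import random
from collections import Counter

def dasymetric_weights(areas, densities):
    """Each cell's share of the regional total is taken proportional to
    area x ancillary land-use density (a hypothetical weighting rule)."""
    raw = [a * d for a, d in zip(areas, densities)]
    total = sum(raw)
    return [r / total for r in raw]

def mc_peak_demand(total_pop, weights, per_capita_kw=1.5, trials=2000, seed=42):
    """Monte Carlo: each trial draws cell populations from the
    multinomial implied by the weights, then evaluates a downstream
    statistic (here, the largest single-cell demand in kW)."""
    rng = random.Random(seed)
    cells = range(len(weights))
    results = []
    for _ in range(trials):
        counts = Counter(rng.choices(cells, weights=weights, k=total_pop))
        results.append(max(counts.values()) * per_capita_kw)
    return sum(results) / trials
```

    For example, `dasymetric_weights([1, 2, 1], [1, 1, 4])` gives the high-density cell a 4/7 share, so with 1,000 people the simulated mean peak-cell demand comes out near 4/7 × 1000 × 1.5 ≈ 857 kW.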

  20. Epigenetic silencing of MLH1 in endometrial cancers is associated with larger tumor volume, increased rate of lymph node positivity and reduced recurrence-free survival.

    PubMed

    Cosgrove, Casey M; Cohn, David E; Hampel, Heather; Frankel, Wendy L; Jones, Dan; McElroy, Joseph P; Suarez, Adrian A; Zhao, Weiqiang; Chen, Wei; Salani, Ritu; Copeland, Larry J; O'Malley, David M; Fowler, Jeffrey M; Yilmaz, Ahmet; Chassen, Alexis S; Pearlman, Rachel; Goodfellow, Paul J; Backes, Floor J

    2017-09-01

    To determine the relationship between mismatch repair (MMR) classification and clinicopathologic features, including tumor volume, and to explore outcomes by MMR class in a contemporary cohort. Single-institution cohort evaluating MMR classification for endometrial cancers (EC). MMR immunohistochemistry (IHC) ± microsatellite instability (MSI) testing and reflex MLH1 methylation testing were performed. Tumors with MMR abnormalities by IHC or MSI and MLH1 methylation were classified as epigenetic MMR deficiency, while those without MLH1 methylation were classified as probable MMR mutations. Clinicopathologic characteristics were analyzed. 466 endometrial cancers were classified: 75% as MMR proficient, 20% as epigenetic MMR defects, and 5% as probable MMR mutations. Epigenetic MMR defects were associated with advanced stage, higher grade, presence of lymphovascular space invasion, and older age. MMR class was significantly associated with tumor volume, an association not previously reported. The median volume of epigenetic MMR defect tumors was 10,220 mm³, compared with 3,321 mm³ and 2,846 mm³ for MMR-proficient and probable MMR mutation tumors, respectively (P<0.0001). Higher tumor volume was associated with lymph node involvement. Endometrioid EC cases with epigenetic MMR defects had significantly reduced recurrence-free survival (RFS). Among advanced-stage (III/IV) endometrioid EC, the epigenetic MMR defect group was more likely to recur than the MMR-proficient group (47.7% vs 3.4%) despite receiving similar adjuvant therapy. In contrast, there was no difference in the number of early-stage recurrences across the MMR classes. MMR testing that includes MLH1 methylation analysis defines a subset of tumors with worse prognostic features and reduced RFS. Copyright © 2017 Elsevier Inc. All rights reserved.
