Sample records for model generally captured

  1. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part I: boron neutron capture therapy models.

    PubMed

    Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.

  2. Estimating state-transition probabilities for unobservable states using capture-recapture/resighting data

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.

    2002-01-01

    Temporary emigration was identified some time ago as causing potential problems in capture-recapture studies, and in the last five years approaches have been developed for dealing with special cases of this general problem. Temporary emigration can be viewed more generally as involving transitions to and from an unobservable state, and frequently the state itself is one of biological interest (e.g., 'nonbreeder'). Development of models that permit estimation of relevant parameters in the presence of an unobservable state requires either extra information (e.g., as supplied by Pollock's robust design) or the following classes of model constraints: reducing the order of Markovian transition probabilities, imposing a degree of determinism on transition probabilities, removing state specificity of survival probabilities, and imposing temporal constancy of parameters. The objective of the work described in this paper is to investigate estimability of model parameters under a variety of models that include an unobservable state. Beginning with a very general model and no extra information, we used numerical methods to systematically investigate the use of ancillary information and constraints to yield models that are useful for estimation. The result is a catalog of models for which estimation is possible. An example analysis of sea turtle capture-recapture data under two different models showed similar point estimates but increased precision for the model that incorporated ancillary data (the robust design) when compared to the model with deterministic transitions only. This comparison and the results of our numerical investigation of model structures lead to design suggestions for capture-recapture studies in the presence of an unobservable state.

  3. Overview of the High Performance Antiproton Trap (HiPAT) Experiment

    NASA Technical Reports Server (NTRS)

    Martin, James; Chakrabarti, Suman; Pearson, Boise; Sims, W. Herbert; Lewis, Raymond; Fant, Wallace; Rodgers, Stephen (Technical Monitor)

    2002-01-01

    A general overview of the High Performance Antiproton Trap (HiPAT) Experiment is presented. The topics include: 1) Why Antimatter? 2) HiPAT Applicability; 3) Approach-Goals; 4) HiPAT General Layout; 5) Sizing For Containment; 6) Laboratory Operations; 7) Vacuum System Cleaning; 8) Ion Production Via Electron Gun; 9) Particle Capture Via Ion Sources; 10) Ion Beam Steering/Focusing; 11) Ideal Ion Stacking Sequence; 12) Setup For Dynamic Capture; 13) Dynamic Capture of H(+) Ions; 14) Dynamic Capture; 15) Radio Frequency Particle Detection; 16) Radio Frequency Antenna Modeling; and 17) R.F. Stabilization-Low Frequencies. A short presentation of propulsion applications of Antimatter is also given. This paper is in viewgraph form.

  4. A goodness-of-fit test for capture-recapture model M(t) under closure

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and uses chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
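
A minimal sketch of the setting this record describes: fitting model M(t) to simulated closed-population data and computing an illustrative Pearson chi-square over capture histories. This is not the paper's partitioned residual test, and all simulation settings below are hypothetical.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Hypothetical closed population under model M(t): N animals,
# occasion-specific capture probabilities p_t.
N_true = 500
p = np.array([0.3, 0.2, 0.4, 0.25])
T = p.size
hist = (rng.random((N_true, T)) < p).astype(int)
hist = hist[hist.any(axis=1)]          # only captured animals are observed
M = len(hist)                          # number of distinct animals caught
n_t = hist.sum(axis=0)                 # captures per occasion

# MLE of N under M(t): solve (1 - M/N) = prod_t (1 - n_t/N) on a grid.
Ns = np.arange(M, 10 * M).astype(float)
gap = np.abs((1 - M / Ns) - np.prod(1 - n_t[:, None] / Ns, axis=0))
N_hat = float(Ns[gap.argmin()])
p_hat = n_t / N_hat

# Illustrative Pearson chi-square over all 2^T capture histories; the
# never-captured cell is filled in with N_hat - M. (The published test
# instead partitions a residual-based statistic into components.)
obs = {h: 0 for h in product((0, 1), repeat=T)}
for row in hist:
    obs[tuple(row)] += 1
obs[(0,) * T] = N_hat - M
chi2 = 0.0
for h, o in obs.items():
    e = N_hat * np.prod([p_hat[t] if h[t] else 1.0 - p_hat[t] for t in range(T)])
    chi2 += (o - e) ** 2 / e
```
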

  5. Individual heterogeneity and identifiability in capture-recapture models

    USGS Publications Warehouse

    Link, W.A.

    2004-01-01

    Individual heterogeneity in detection probabilities is a far more serious problem for capture-recapture modeling than has previously been recognized. In this note, I illustrate that population size is not an identifiable parameter under the general closed population mark-recapture model Mh. The problem of identifiability is obvious if the population includes individuals with pi = 0, but persists even when it is assumed that individual detection probabilities are bounded away from zero. Identifiability may be attained within parametric families of distributions for pi, but not among parametric families of distributions. Consequently, in the presence of individual heterogeneity in detection probability, capture-recapture analysis is strongly model dependent.

  6. A measurement model for general noise reaction in response to aircraft noise.

    PubMed

    Kroesen, Maarten; Schreckenberg, Dirk

    2011-01-01

    In this paper a measurement model for general noise reaction (GNR) in response to aircraft noise is developed to assess the performance of aircraft noise annoyance and a direct measure of general reaction as indicators of this concept. For this purpose GNR is conceptualized as a superordinate latent construct underlying particular manifestations. This conceptualization is empirically tested through estimation of a second-order factor model. Data from a community survey at Frankfurt Airport are used for this purpose (N=2206). The data fit the hypothesized factor structure well and support the conceptualization of GNR as a superordinate construct. It is concluded that noise annoyance and a direct measure of general reaction to noise capture a large part of the negative feelings and emotions in response to aircraft noise but are unable to capture all relevant variance. The paper concludes with recommendations for the valid measurement of community reaction and several directions for further research.

  7. A Generalized Radiation Model for Human Mobility: Spatial Scale, Searching Direction and Trip Constraint.

    PubMed

    Kang, Chaogui; Liu, Yu; Guo, Diansheng; Qin, Kun

    2015-01-01

    We generalized the recently introduced "radiation model", as an analog to the generalization of the classic "gravity model", to consolidate its nature of universality for modeling diverse mobility systems. By imposing the appropriate scaling exponent λ, normalization factor κ and system constraints including searching direction and trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and spatial scales, including two different countries and four different cities. Our analytical results also indicated that the generalized radiation model outperformed alternative mobility models in various empirical analyses.
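
The flux formula behind this record can be sketched as a minimal implementation of the radiation model with the paper's λ and κ attached; exactly where λ and κ enter in the generalized model is an assumption here (we apply λ to the intervening opportunities s_ij and use κ as a multiplicative normalization), and all names are ours.

```python
import numpy as np

def radiation_flux(pop, dist, T_out, lam=1.0, kappa=1.0):
    """Trip matrix under a (generalized) radiation model.

    pop:   (n,) populations m_i
    dist:  (n, n) pairwise distances
    T_out: (n,) total trips leaving each origin
    lam, kappa: scaling exponent / normalization; applying lam to the
    intervening opportunities s_ij is an assumption of this sketch.
    """
    n = len(pop)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # s_ij: total population strictly closer to i than j is,
            # excluding the populations at i and j themselves
            closer = dist[i] < dist[i, j]
            closer[i] = False
            closer[j] = False
            s = pop[closer].sum() ** lam
            T[i, j] = kappa * T_out[i] * pop[i] * pop[j] / (
                (pop[i] + s) * (pop[i] + pop[j] + s)
            )
    return T
```

With λ = κ = 1 this reduces to the parameter-free radiation model; the searching-direction and trip OD constraints discussed in the abstract would enter as restrictions on which (i, j) pairs are evaluated and on how rows or columns are normalized.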

  8. A Generalized Radiation Model for Human Mobility: Spatial Scale, Searching Direction and Trip Constraint

    PubMed Central

    Kang, Chaogui; Liu, Yu; Guo, Diansheng; Qin, Kun

    2015-01-01

    We generalized the recently introduced “radiation model”, as an analog to the generalization of the classic “gravity model”, to consolidate its nature of universality for modeling diverse mobility systems. By imposing the appropriate scaling exponent λ, normalization factor κ and system constraints including searching direction and trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and spatial scales, including two different countries and four different cities. Our analytical results also indicated that the generalized radiation model outperformed alternative mobility models in various empirical analyses. PMID:26600153

  9. Stability of a slotted ALOHA system with capture effect

    NASA Astrophysics Data System (ADS)

    Onozato, Yoshikuni; Liu, Jin; Noguchi, Shoichi

    1989-02-01

    The stability of a slotted ALOHA system with capture effect is investigated under a general communication environment where terminals are divided into two groups (low-power and high-power) and the capture effect is modeled by capture probabilities. An approximate analysis is developed using catastrophe theory, in which the effects of system and user parameters on the stability are characterized by the cusp catastrophe. Particular attention is given to the low-power group, since it must bear the strain under the capture effect. The stability conditions of the two groups are given explicitly by bifurcation sets.
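
The two-group capture setting can be illustrated with a small Monte Carlo sketch. Group sizes, transmission probabilities, and the single capture probability c_hi are hypothetical, and the capture rule is deliberately simplified relative to the paper's model.

```python
import numpy as np

def aloha_throughput(n_lo, n_hi, q_lo, q_hi, c_hi, slots=200_000, seed=42):
    """Monte Carlo throughput (successes per slot) of slotted ALOHA with a
    simplified capture rule: a sole transmitter always succeeds, and in a
    collision involving at least one high-power terminal, one high-power
    packet is captured with probability c_hi."""
    rng = np.random.default_rng(seed)
    lo = rng.binomial(n_lo, q_lo, slots)      # low-power transmitters/slot
    hi = rng.binomial(n_hi, q_hi, slots)
    total = lo + hi
    ok = int((total == 1).sum())              # uncontended successes
    coll_hi = int(((total > 1) & (hi >= 1)).sum())
    ok += int((rng.random(coll_hi) < c_hi).sum())
    return ok / slots
```

Sweeping the transmission probabilities and plotting throughput against offered load is one way to expose the bistable region that the paper characterizes with the cusp catastrophe.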

  10. Solar wind driven empirical forecast models of the time derivative of the ground magnetic field

    NASA Astrophysics Data System (ADS)

    Wintoft, Peter; Wik, Magnus; Viljanen, Ari

    2015-03-01

    Empirical models are developed to provide 10-30-min forecasts of the magnitude of the time derivative of the local horizontal ground geomagnetic field (|dBh/dt|) over Europe. The models are driven by ACE solar wind data. A major part of the work has been devoted to the search and selection of datasets to support the model development. To simplify the problem, but at the same time capture sudden changes, 30-min maximum values of |dBh/dt| are forecast with a cadence of 1 min. Models are tested both with and without the use of ACE SWEPAM plasma data. It is shown that the models generally capture sudden increases in |dBh/dt| that are associated with sudden impulses (SI). The SI is the dominant disturbance source for geomagnetic latitudes below 50° N, with minor contributions from substorms. However, on occasion, large disturbances associated with geomagnetic pulsations can be seen. At higher latitudes, longer-lasting disturbances associated with substorms are generally also captured. It is also shown that the models using only solar wind magnetic field data as input perform in most cases as well as models with plasma data. The models have been verified using different approaches, including the extremal dependence index, which is suitable for rare events.
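
The target construction described here (30-min maxima of |dBh/dt| at 1-min cadence) can be sketched as a rolling maximum; this shows only the target series, not the forecast models themselves.

```python
import numpy as np

def forecast_target(dbdt_1min, horizon=30):
    """For each minute, the maximum of |dB_h/dt| over the next `horizon`
    minutes, at 1-min cadence (a sketch of the target construction)."""
    w = np.lib.stride_tricks.sliding_window_view(dbdt_1min, horizon)
    return w.max(axis=1)
```
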

  11. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    NASA Astrophysics Data System (ADS)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were applied and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
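
A toy version of the GLUE workflow can be sketched as follows; the forward model, prior range, likelihood measure, and behavioural threshold are all hypothetical stand-ins for the groundwater model and head data of the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical forward model standing in for the groundwater flow model:
# it maps log10 hydraulic conductivity to three observation-well heads
# and is exact at logK = -6.5 (all numbers are illustrative).
obs_heads = np.array([10.2, 9.8, 10.5])

def model_heads(logK):
    return np.array([10.2, 9.8, 10.5]) + 0.5 * (logK + 6.5)

# 1. Sample parameter sets from a prior range (Monte Carlo step).
logK = rng.uniform(-8.0, -5.0, 5000)

# 2. GLUE likelihood measure: here inverse sum of squared head errors,
#    one of many admissible choices; keep only "behavioural" runs.
sse = np.array([((model_heads(k) - obs_heads) ** 2).sum() for k in logK])
L = 1.0 / sse
behavioural = L > np.quantile(L, 0.90)

# 3. Likelihood-weighted summaries (here of the parameter itself; the
#    study weights predicted capture-zone boundaries the same way).
w = L[behavioural] / L[behavioural].sum()
logK_mean = float((w * logK[behavioural]).sum())
```
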

  12. Use of Factor Mixture Modeling to Capture Spearman's Law of Diminishing Returns

    ERIC Educational Resources Information Center

    Reynolds, Matthew R.; Keith, Timothy Z.; Beretvas, S. Natasha

    2010-01-01

    Spearman's law of diminishing returns (SLODR) posits that at higher levels of general cognitive ability the general factor ("g") performs less well in explaining individual differences in cognitive test performance. Research has generally supported SLODR, but previous research has required the a priori division of respondents into…

  13. Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model

    NASA Astrophysics Data System (ADS)

    Vila, J.; Fernández-Sáez, J.; Zaera, R.

    2018-04-01

    In this paper we study the coupled axial-transverse nonlinear vibrations of a kind of one-dimensional structured solid by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, reproducing the behavior of the 1D nonlinear discrete model adequately.
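
The Taylor-series continualization step mentioned here can be sketched in standard form (the specific truncation and inertia-gradient terms of the authors' model may differ). For a chain with particle spacing ℓ and displacement field u(x, t):

```latex
u_{n\pm 1}(t) = u(x \pm \ell, t)
             = u \pm \ell\,u_x + \frac{\ell^2}{2}\,u_{xx}
               \pm \frac{\ell^3}{6}\,u_{xxx} + \frac{\ell^4}{24}\,u_{xxxx} + \cdots
\quad\Rightarrow\quad
u_{n+1} - 2u_n + u_{n-1} = \ell^2\,u_{xx} + \frac{\ell^4}{12}\,u_{xxxx} + \cdots
```

Truncating after the ℓ² term recovers the classical continuum, while the ℓ⁴/12 term introduces the internal length scale that produces the reported scale effects once the wavelength becomes comparable to ℓ.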

  14. Human Judgment and Decision Making: A Proposed Decision Model Using Sequential Processing

    DTIC Science & Technology

    1985-08-01

    to the issues noted above is called policy capturing (Szilagyi and Wallace, 1983). The purpose of policy capturing is to develop a decision making...papers have been written on this general subject. A concise overview of this discipline is found in Szilagyi and Wallace (1983). Basically, decision models... Szilagyi, A. and Wallace, H. Organizational Behavior and Performance (3rd Ed.), Scott, Foresman and Company, 1983. Taylor, R. L. and Wilsted, W. D

  15. Parasitoid competition and the dynamics of host-parasitoid models

    Treesearch

    Andrew D. Taylor

    1988-01-01

    Both parasitoids and predators compete intraspecifically for prey or hosts. The nature of this competition, however, is potentially much more complex and varied for parasitoids than for predators. With predators, prey are generally consumed upon capture and thus cease to be bones of contention: competition is simply for discovery (or capture) of prey. In contrast,...

  16. Two-proton capture on the 68Se nucleus with a new self-consistent cluster model

    NASA Astrophysics Data System (ADS)

    Hove, D.; Garrido, E.; Jensen, A. S.; Sarriguren, P.; Fynbo, H. O. U.; Fedorov, D. V.; Zinner, N. T.

    2018-07-01

    We investigate the two-proton capture reaction of the prominent rapid proton capture waiting point nucleus, 68Se, that produces the borromean nucleus 70Kr (68Se + p + p). We apply a recently formulated general model where the core nucleus, 68Se, is treated in the mean-field approximation and the three-body problem of the two valence protons and the core is solved exactly. We compare results obtained using two popular Skyrme interactions, SLy4 and SkM*. We calculate E2 electromagnetic two-proton dissociation and capture cross sections, and derive the temperature-dependent capture rates. We vary the unknown 2+ resonance energy without changing any of the structures computed self-consistently for both core and valence particles. We find rates increasing quickly with temperature below 2-4 GK, after which the rates vary by about a factor of two, independent of the 2+ resonance energy. The capture mechanism is sequential through the f5/2 proton-core resonance, but the continuum background contributes significantly.

  17. Modeling Amorphous Microporous Polymers for CO2 Capture and Separations.

    PubMed

    Kupgan, Grit; Abbott, Lauren J; Hart, Kyle E; Colina, Coray M

    2018-06-13

    This review concentrates on the advances of atomistic molecular simulations to design and evaluate amorphous microporous polymeric materials for CO2 capture and separations. A description of atomistic molecular simulations is provided, including simulation techniques, structural generation approaches, relaxation and equilibration methodologies, and considerations needed for validation of simulated samples. The review provides general guidelines and a comprehensive update of the recent literature (since 2007) to promote the acceleration of the discovery and screening of amorphous microporous polymers for CO2 capture and separation processes.

  18. A technical, economic, and environmental assessment of amine-based CO2 capture technology for power plant greenhouse gas control.

    PubMed

    Rao, Anand B; Rubin, Edward S

    2002-10-15

    Capture and sequestration of CO2 from fossil fuel power plants is gaining widespread interest as a potential method of controlling greenhouse gas emissions. Performance and cost models of an amine (MEA)-based CO2 absorption system for postcombustion flue gas applications have been developed and integrated with an existing power plant modeling framework that includes multipollutant control technologies for other regulated emissions. The integrated model has been applied to study the feasibility and cost of carbon capture and sequestration at both new and existing coal-burning power plants. The cost of carbon avoidance was shown to depend strongly on assumptions about the reference plant design, details of the CO2 capture system design, interactions with other pollution control systems, and method of CO2 storage. The CO2 avoidance cost for retrofit systems was found to be generally higher than for new plants, mainly because of the higher energy penalty resulting from less efficient heat integration as well as site-specific difficulties typically encountered in retrofit applications. For all cases, a small reduction in CO2 capture cost was afforded by the SO2 emission trading credits generated by amine-based capture systems. Efforts are underway to model a broader suite of carbon capture and sequestration technologies for more comprehensive assessments in the context of multipollutant environmental management.
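
The cost-of-avoidance comparison described in this record rests on a standard metric, sketched below with purely illustrative numbers (not the paper's results).

```python
def cost_of_co2_avoided(coe_ref, coe_cc, em_ref, em_cc):
    """Standard cost-of-CO2-avoided metric for comparing plants with and
    without capture: (COE_cc - COE_ref) / (em_ref - em_cc), with COE the
    cost of electricity in $/MWh and em the emission rate in tCO2/MWh."""
    return (coe_cc - coe_ref) / (em_ref - em_cc)

# Illustrative numbers only: capture raises COE from $50 to $75/MWh
# while cutting emissions from 0.80 to 0.10 tCO2/MWh.
avoidance_cost = cost_of_co2_avoided(50.0, 75.0, 0.80, 0.10)
```

The retrofit-versus-new-plant finding in the abstract corresponds to a larger COE increase (the energy penalty) for the same emission reduction, which drives this ratio up.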

  19. Screening of metal-organic frameworks for carbon dioxide capture from flue gas using a combined experimental and modeling approach.

    PubMed

    Yazaydin, A Ozgür; Snurr, Randall Q; Park, Tae-Hong; Koh, Kyoungmoo; Liu, Jian; Levan, M Douglas; Benin, Annabelle I; Jakubczak, Paulina; Lanuza, Mary; Galloway, Douglas B; Low, John J; Willis, Richard R

    2009-12-30

    A diverse collection of 14 metal-organic frameworks (MOFs) was screened for CO2 capture from flue gas using a combined experimental and modeling approach. Adsorption measurements are reported for the screened MOFs at room temperature up to 1 bar. These data are used to validate a generalized strategy for molecular modeling of CO2 and other small molecules in MOFs. MOFs possessing a high density of open metal sites are found to adsorb significant amounts of CO2 even at low pressure. An excellent correlation is found between the heat of adsorption and the amount of CO2 adsorbed below 1 bar. Molecular modeling can aid in selection of adsorbents for CO2 capture from flue gas by screening a large number of MOFs.

  20. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part II: gadolinium neutron capture therapy models and therapeutic effects.

    PubMed

    Wangerin, K; Culbertson, C N; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for gadolinium neutron capture therapy (GdNCT) related modeling. The validity of the COG NCT model was established in Part I of this study; here the calculation was extended to analyze the effect of various gadolinium concentrations on the dose distribution and cell-kill effect of the GdNCT modality and to determine the optimum therapeutic conditions for treating brain cancers. The computational results were compared with those from the widely used MCNP code. The differences between the COG and MCNP predictions were generally small and suggest that the COG code can be applied to similar research problems in NCT. Results of this study also showed that a concentration of 100 ppm gadolinium in the tumor was most beneficial when using an epithermal neutron beam.

  1. Commercial aspects of semi-reusable launch systems

    NASA Astrophysics Data System (ADS)

    Obersteiner, M. H.; Müller, H.; Spies, H.

    2003-07-01

    This paper presents a business planning model for a commercial space launch system. The financing model is based on market analyses and projections combined with market capture models. An operations model is used to derive the annual cash income. Parametric cost modeling, development and production schedules are used for quantifying the annual expenditures, the internal rate of return, break even point of positive cash flow and the respective prices per launch. Alternative consortia structures, cash flow methods, capture rates and launch prices are used to examine the sensitivity of the model. Then the model is applied for a promising semi-reusable launcher concept, showing the general achievability of the commercial approach and the necessary pre-conditions.
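
The financial quantities named here (internal rate of return, break-even point of positive cash flow) can be sketched as follows; the cash flows are hypothetical placeholders, not the launcher business case.

```python
def npv(rate, cashflows):
    """Net present value of annual cash flows; cashflows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return by bisection (assumes one sign change
    in NPV over the bracket, i.e. a conventional investment profile)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def breakeven_year(cashflows):
    """First year in which cumulative cash flow turns positive, or None."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total > 0:
            return t
    return None
```

In the paper's setting, the annual expenditures come from parametric cost models and schedules, and the annual income from the market-capture and operations models; the sensitivity study amounts to re-running these functions over alternative capture rates and launch prices.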

  2. Statistical inference for capture-recapture experiments

    USGS Publications Warehouse

    Pollock, Kenneth H.; Nichols, James D.; Brownie, Cavell; Hines, James E.

    1990-01-01

    This monograph presents a detailed, practical exposition on the design, analysis, and interpretation of capture-recapture studies. The Lincoln-Petersen model (Chapter 2) and the closed population models (Chapter 3) are presented only briefly because these models have been covered in detail elsewhere. The Jolly-Seber open population model, which is central to the monograph, is covered in detail in Chapter 4. In Chapter 5 we consider the "enumeration" or "calendar of captures" approach, which is widely used by mammalogists and other vertebrate ecologists. We strongly recommend that it be abandoned in favor of analyses based on the Jolly-Seber model. We consider 2 restricted versions of the Jolly-Seber model. We believe the first of these, which allows losses (mortality or emigration) but not additions (births or immigration), is likely to be useful in practice. Another series of restrictive models requires the assumptions of a constant survival rate or a constant survival rate and a constant capture rate for the duration of the study. Detailed examples are given that illustrate the usefulness of these restrictions. There often can be a substantial gain in precision over Jolly-Seber estimates. In Chapter 5 we also consider 2 generalizations of the Jolly-Seber model. The temporary trap response model allows newly marked animals to have different survival and capture rates for 1 period. The other generalization is the cohort Jolly-Seber model. Ideally all animals would be marked as young, and age effects considered by using the Jolly-Seber model on each cohort separately. In Chapter 6 we present a detailed description of an age-dependent Jolly-Seber model, which can be used when 2 or more identifiable age classes are marked. In Chapter 7 we present a detailed description of the "robust" design. Under this design each primary period contains several secondary sampling periods.
We propose an estimation procedure based on closed and open population models that allows for heterogeneity and trap response of capture rates (hence the name robust design). We begin by considering just 1 age class and then extend to 2 age classes. When there are 2 age classes it is possible to distinguish immigrants and births. In Chapter 8 we give a detailed discussion of the design of capture-recapture studies. First, capture-recapture is compared to other possible sampling procedures. Next, the design of capture-recapture studies to minimize assumption violations is considered. Finally, we consider the precision of parameter estimates and present figures on proportional standard errors for a variety of initial parameter values to aid the biologist about to plan a study. A new program, JOLLY, has been written to accompany the material on the Jolly-Seber model (Chapter 4) and its extensions (Chapter 5). Another new program, JOLLYAGE, has been written for a special case of the age-dependent model (Chapter 6) where there are only 2 age classes. In Chapter 9 a brief description of the different versions of the 2 programs is given. Chapter 10 gives a brief description of some alternative approaches that were not considered in this monograph. We believe that an excellent overall view of capture-recapture models may be obtained by reading the monograph by White et al. (1982) emphasizing closed models and then reading this monograph where we concentrate on open models. The important recent monograph by Burnham et al. (1987) could then be read if there were interest in the comparison of different populations.
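
As a minimal concrete anchor for the Lincoln-Petersen material of Chapter 2, Chapman's bias-corrected two-sample estimator can be sketched as follows (notation ours, not the monograph's).

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed
    population size: n1 animals marked in sample 1, n2 caught in
    sample 2, of which m2 carry marks."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```

The open-population Jolly-Seber machinery that the monograph centres on generalizes this idea by re-estimating marked-population size and survival between every pair of occasions.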

  3. Modeling misidentification errors that result from use of genetic tags in capture-recapture studies

    USGS Publications Warehouse

    Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.

    2011-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.

  4. Estimation of sex-specific survival from capture-recapture data when sex is not always known

    USGS Publications Warehouse

    Nichols, J.D.; Kendall, W.L.; Hines, J.E.; Spendelow, J.A.

    2004-01-01

    Many animals lack obvious sexual dimorphism, making assignment of sex difficult even for observed or captured animals. For many such species it is possible to assign sex with certainty only at some occasions; for example, when they exhibit certain types of behavior. A common approach to handling this situation in capture-recapture studies has been to group capture histories into those of animals eventually identified as male and female and those for which sex was never known. Because group membership is dependent on the number of occasions at which an animal was caught or observed (known sex animals, on average, will have been observed at more occasions than unknown-sex animals), survival estimates for known-sex animals will be positively biased, and those for unknown animals will be negatively biased. In this paper, we develop capture-recapture models that incorporate sex ratio and sex assignment parameters that permit unbiased estimation in the face of this sampling problem. We demonstrate the magnitude of bias in the traditional capture-recapture approach to this sampling problem, and we explore properties of estimators from other ad hoc approaches. The model is then applied to capture-recapture data for adult Roseate Terns (Sterna dougallii) at Falkner Island, Connecticut, 1993-2002. Sex ratio among adults in this population favors females, and we tested the hypothesis that this population showed sex-specific differences in adult survival. Evidence was provided for higher survival of adult females than males, as predicted. We recommend use of this modeling approach for future capture-recapture studies in which sex cannot always be assigned to captured or observed animals. We also place this problem in the more general context of uncertainty in state classification in multistate capture-recapture models.

  5. A Typology for Modeling Processes in Clinical Guidelines and Protocols

    NASA Astrophysics Data System (ADS)

    Tu, Samson W.; Musen, Mark A.

    We analyzed the graphical representations that are used by various guideline-modeling methods to express process information embodied in clinical guidelines and protocols. From this analysis, we distilled four modeling formalisms and the processes they typically model: (1) flowcharts for capturing problem-solving processes, (2) disease-state maps that link decision points in managing patient problems over time, (3) plans that specify sequences of activities that contribute toward a goal, (4) workflow specifications that model care processes in an organization. We characterized the four approaches and showed that each captures some aspect of what a guideline may specify. We believe that a general guideline-modeling system must provide explicit representation for each type of process.

  6. Scientists' perspectives on consent in the context of biobanking research

    PubMed Central

    Master, Zubin; Campo-Engelstein, Lisa; Caulfield, Timothy

    2015-01-01

    Most bioethics studies have focused on capturing the views of patients and the general public on research ethics issues related to informed consent for biobanking and only a handful of studies have examined the perceptions of scientists. Capturing the opinions of scientists is important because they are intimately involved with biobanks as collectors and users of samples and health information. In this study, we performed interviews with scientists followed by qualitative analysis to capture the diversity of perspectives on informed consent. We found that the majority of scientists in our study reported their preference for a general consent approach although they do not believe there to be a consensus on consent type. Despite their overall desire for a general consent model, many reported several concerns including donors needing some form of assurance that nothing unethical will be done with their samples and information. Finally, scientists reported mixed opinions about incorporating exclusion clauses in informed consent as a means of limiting some types of contentious research as a mechanism to assure donors that their samples and information are being handled appropriately. This study is one of the first to capture the views of scientists on informed consent in biobanking. Future studies should attempt to generalize findings on the perspectives of different scientists on informed consent for biobanking. PMID:25074466

  7. Estimation and modeling of electrofishing capture efficiency for fishes in wadeable warmwater streams

    USGS Publications Warehouse

    Price, A.; Peterson, James T.

    2010-01-01

    Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
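    The recommended adjustment for site- and species-specific capture efficiency amounts to dividing raw counts by the estimated efficiency. A minimal sketch with hypothetical numbers (not the study's data):

```python
# Hypothetical single-pass counts and estimated capture efficiencies;
# adjusted abundance = raw count / capture efficiency.
catch = {"Catostomidae": 42, "Ictaluridae": 9}
efficiency = {"Catostomidae": 0.60, "Ictaluridae": 0.15}

adjusted = {sp: n / efficiency[sp] for sp, n in catch.items()}
# Low-efficiency taxa (here Ictaluridae) receive the largest correction.
```

Note how the low-efficiency family ends up with an adjusted abundance close to the high-efficiency one despite a much smaller raw count, which is exactly the bias the authors warn about.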

  8. Size-sex variation in survival rates and abundance of pig frogs, Rana grylio, in northern Florida wetlands

    USGS Publications Warehouse

    Wood, K.V.; Nichols, J.D.; Percival, H.F.; Hines, J.E.

    1998-01-01

    During 1991-1993, we conducted capture-recapture studies on pig frogs, Rana grylio, in seven study locations in northcentral Florida. Resulting data were used to test hypotheses about variation in survival probability over different size-sex classes of pig frogs. We developed multistate capture-recapture models for the resulting data and used them to estimate survival rates and frog abundance. Tests provided strong evidence of survival differences among size-sex classes, with adult females showing the highest survival probabilities. Adult males and juvenile frogs had lower survival rates that were similar to each other. Adult females were more abundant than adult males in most locations at most sampling occasions. We recommended probabilistic capture-recapture models in general, and multistate models in particular, for robust estimation of demographic parameters in amphibian populations.

  9. The role of capture spiral silk properties in the diversification of orb webs.

    PubMed

    Tarakanova, Anna; Buehler, Markus J

    2012-12-07

    Among a myriad of spider web geometries, the orb web presents a fascinating, exquisite example in architecture and evolution. Orb webs can be divided into two categories according to the capture silk used in construction: cribellate orb webs (composed of pseudoflagelliform silk) coated with dry cribellate threads and ecribellate orb webs (composed of flagelliform silk fibres) coated by adhesive glue droplets. Cribellate capture silk is generally stronger but less-extensible than viscid capture silk, and a body of phylogenic evidence suggests that cribellate capture silk is more closely related to the ancestral form of capture spiral silk. Here, we use a coarse-grained web model to investigate how the mechanical properties of spiral capture silk affect the behaviour of the whole web, illustrating that more elastic capture spiral silk yields a decrease in web system energy absorption, suggesting that the function of the capture spiral shifted from prey capture to other structural roles. Additionally, we observe that in webs with more extensible capture silk, the effect of thread strength on web performance is reduced, indicating that thread elasticity is a dominant driving factor in web diversification.

  10. Multiscale Modeling of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Mital, Subodh K.; Pineda, Evan J.; Arnold, Steven M.

    2015-01-01

    Results of multiscale modeling simulations of the nonlinear response of SiC/SiC ceramic matrix composites are reported, wherein the microstructure of the ceramic matrix is captured. This micro scale architecture, which contains free Si material as well as the SiC ceramic, is responsible for residual stresses that play an important role in the subsequent thermo-mechanical behavior of the SiC/SiC composite. Using the novel Multiscale Generalized Method of Cells recursive micromechanics theory, the microstructure of the matrix, as well as the microstructure of the composite (fiber and matrix) can be captured.

  11. General Nonlinear Ferroelectric Model v. Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Wen; Robbins, Josh

    2017-03-14

    The purpose of this software is to function as a generalized ferroelectric material model. The material model is designed to work with existing finite element packages by providing updated information on material properties that are nonlinear and dependent on loading history. The two major nonlinear phenomena this model captures are domain-switching and phase transformation. The software itself does not contain potentially sensitive material information and instead provides a framework for different physical phenomena observed within ferroelectric materials. The model is calibrated to a specific ferroelectric material through input parameters provided by the user.

  12. A Bayesian Model of the Memory Colour Effect.

    PubMed

    Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
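    The cue-integration idea behind such a Bayesian account can be sketched with two Gaussians: a prior for the typical object colour and a likelihood for the (grey) sensory measurement. This is a minimal sketch with hypothetical values, not the authors' model:

```python
import numpy as np

def posterior_gaussian(mu_prior, var_prior, mu_obs, var_obs):
    """Combine a Gaussian prior (typical object colour) with a Gaussian
    likelihood (sensory measurement) by precision weighting."""
    w = var_obs / (var_prior + var_obs)          # weight on the prior mean
    mu_post = w * mu_prior + (1.0 - w) * mu_obs
    var_post = (var_prior * var_obs) / (var_prior + var_obs)
    return mu_post, var_post

# A colourimetrically grey measurement (chromaticity 0) combined with a
# prior for the typical colour (here 1.0) pulls the percept off grey.
mu, var = posterior_gaussian(mu_prior=1.0, var_prior=4.0, mu_obs=0.0, var_obs=1.0)
```

The posterior mean lands slightly toward the typical colour, mirroring the memory colour effect measured by achromatic adjustment.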

  13. A Bayesian Model of the Memory Colour Effect

    PubMed Central

    Olkkonen, Maria; Gegenfurtner, Karl R.

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects. PMID:29760874

  14. Model Analysis and Model Creation: Capturing the Task-Model Structure of Quantitative Item Domains. Research Report. ETS RR-06-11

    ERIC Educational Resources Information Center

    Deane, Paul; Graf, Edith Aurora; Higgins, Derrick; Futagi, Yoko; Lawless, René

    2006-01-01

    This study focuses on the relationship between item modeling and evidence-centered design (ECD); it considers how an appropriately generalized item modeling software tool can support systematic identification and exploitation of task-model variables, and then examines the feasibility of this goal, using linear-equation items as a test case. The…

  15. Capture Versus Capture Zones: Clarifying Terminology Related to Sources of Water to Wells.

    PubMed

    Barlow, Paul M; Leake, Stanley A; Fienen, Michael N

    2018-03-15

    The term capture, related to the source of water derived from wells, has been used in two distinct yet related contexts by the hydrologic community. The first is a water-budget context, in which capture refers to decreases in the rates of groundwater outflow and (or) increases in the rates of recharge along head-dependent boundaries of an aquifer in response to pumping. The second is a transport context, in which capture zone refers to the specific flowpaths that define the three-dimensional, volumetric portion of a groundwater flow field that discharges to a well. A closely related issue that has become associated with the source of water to wells is streamflow depletion, which refers to the reduction in streamflow caused by pumping, and is a type of capture. Rates of capture and streamflow depletion are calculated by use of water-budget analyses, most often with groundwater-flow models. Transport models, particularly particle-tracking methods, are used to determine capture zones to wells. In general, however, transport methods are not useful for quantifying actual or potential streamflow depletion or other types of capture along aquifer boundaries. To clarify the sometimes subtle differences among these terms, we describe the processes and relations among capture, capture zones, and streamflow depletion, and provide proposed terminology to distinguish among them. Published 2018. This article is a U.S. Government work and is in the public domain in the USA. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.

  16. Capture-recapture studies for multiple strata including non-markovian transitions

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.; Pollock, K.H.; Hestbeck, J.B.

    1993-01-01

    We consider capture-recapture studies where release and recapture data are available from each of a number of strata on every capture occasion. Strata may, for example, be geographic locations or physiological states. Movement of animals among strata occurs with unknown probabilities, and estimation of these unknown transition probabilities is the objective. We describe a computer routine for carrying out the analysis under a model that assumes Markovian transitions and under reduced parameter versions of this model. We also introduce models that relax the Markovian assumption and allow 'memory' to operate (i.e., allow dependence of the transition probabilities on the previous state). For these models, we suggest an analysis based on a conditional likelihood approach. Methods are illustrated with data from a large study on Canada geese (Branta canadensis) banded in three geographic regions. The assumption of Markovian transitions is rejected convincingly for these data, emphasizing the importance of the more general models that allow memory.
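    The contrast between Markovian and 'memory' transition probabilities can be illustrated with simple counts over hypothetical stratum sequences (illustrative only; the paper's analysis uses a conditional likelihood, and real capture histories include non-detections):

```python
from collections import Counter

# Hypothetical complete stratum (region) sequences for four individuals
histories = ["AABAB", "ABABB", "BABAA", "AABBA"]

first = Counter()   # (current, next) pairs -> first-order (Markov) model
second = Counter()  # (previous, current, next) triples -> memory model
for h in histories:
    first.update(zip(h, h[1:]))
    second.update(zip(h, h[1:], h[2:]))

def p_markov(cur, nxt):
    tot = sum(v for (c, _), v in first.items() if c == cur)
    return first[(cur, nxt)] / tot

def p_memory(prev, cur, nxt):
    tot = sum(v for (pp, c, _), v in second.items() if (pp, c) == (prev, cur))
    return second[(prev, cur, nxt)] / tot
```

If p_memory(prev, cur, nxt) varies with prev for the same (cur, nxt), the Markovian assumption fails, which is the situation the goose data exhibited.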

  17. Parameter-expanded data augmentation for Bayesian analysis of capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, Robert M.

    2012-01-01

    Data augmentation (DA) is a flexible tool for analyzing closed and open population models of capture-recapture data, especially models which include sources of heterogeneity among individuals. The essential concept underlying DA, as we use the term, is based on adding "observations" to create a dataset composed of a known number of individuals. This new (augmented) dataset, which includes the unknown number of individuals N in the population, is then analyzed using a new model that includes a reformulation of the parameter N in the conventional model of the observed (unaugmented) data. In the context of capture-recapture models, we add a set of "all zero" encounter histories which are not, in practice, observable. The model of the augmented dataset is a zero-inflated version of either a binomial or a multinomial base model. Thus, our use of DA provides a general approach for analyzing both closed and open population models of all types. In doing so, this approach provides a unified framework for the analysis of a huge range of models that are treated as unrelated "black boxes" and named procedures in the classical literature. As a practical matter, analysis of the augmented dataset by MCMC is greatly simplified compared to other methods that require specialized algorithms. For example, complex capture-recapture models of an augmented dataset can be fitted with popular MCMC software packages (WinBUGS or JAGS) by providing a concise statement of the model's assumptions that usually involves only a few lines of pseudocode. In this paper, we review the basic technical concepts of data augmentation, and we provide examples of analyses of closed-population models (M0, Mh, distance sampling, and spatial capture-recapture models) and open-population models (Jolly-Seber) with individual effects.
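    A minimal sketch of the data-augmentation idea for closed-population model M0, fitted here by maximum likelihood rather than MCMC, with hypothetical capture frequencies (illustrative only, not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Hypothetical capture frequencies (captures out of K occasions) for
# n = 10 observed individuals, augmented with all-zero histories to M = 50.
K = 5
y_obs = np.array([1, 2, 1, 3, 1, 1, 2, 4, 1, 2])
M = 50
y = np.concatenate([y_obs, np.zeros(M - len(y_obs))])

def neg_loglik(theta):
    psi = 1 / (1 + np.exp(-theta[0]))  # inclusion ("real individual") prob.
    p = 1 / (1 + np.exp(-theta[1]))    # per-occasion detection probability
    # Zero-inflated binomial: an augmented row is real with probability psi
    lik = psi * binom.pmf(y, K, p) + (1 - psi) * (y == 0)
    return -np.sum(np.log(lik))

res = minimize(neg_loglik, x0=[0.0, 0.0])
psi_hat = 1 / (1 + np.exp(-res.x[0]))
N_hat = psi_hat * M  # estimated population size, E[N] = psi * M
```

In the Bayesian treatment the same zero-inflated model is written in a few lines of BUGS/JAGS code and N is recovered as a derived quantity.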

  18. Goodness-of-fit tests for open capture-recapture models

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1985-01-01

    General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.

  19. Optimizing local capture of atrial fibrillation by rapid pacing: study of the influence of tissue dynamics.

    PubMed

    Uldry, Laurent; Virag, Nathalie; Jacquemet, Vincent; Vesin, Jean-Marc; Kappenberger, Lukas

    2010-12-01

    While successful termination by pacing of organized atrial tachycardias has been observed in patients, rapid pacing of atrial fibrillation (AF) can induce a local capture of the atrial tissue but in general no termination. The purpose of this study was to perform a systematic evaluation of the ability to capture AF by rapid pacing in a biophysical model of the atria with different dynamics in terms of conduction velocity (CV) and action potential duration (APD). Rapid pacing was applied during 30 s at five locations on the atria, for pacing cycle lengths in the range 60-110% of the mean AF cycle length (AFCL(mean)). Local AF capture could be achieved using rapid pacing at pacing sites located distal to major anatomical obstacles. Optimal pacing cycle lengths were found in the range 74-80% AFCL(mean) (capture window width: 14.6 ± 3% AFCL(mean)). An increase/decrease in CV or APD led to a significant shrinking/stretching of the capture window. Capture did not depend on AFCL, but did depend on the atrial substrate as characterized by an estimate of its wavelength, a better capture being achieved at shorter wavelengths. This model-based study suggests that a proper selection of the pacing site and cycle length can influence local capture results and that atrial tissue properties (CV and APD) are determinants of the response to rapid pacing.

  20. Spatial regression methods capture prediction uncertainty in species distribution model projections through time

    Treesearch

    Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz

    2013-01-01

    The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...

  1. Interpersonal distance modeling during fighting activities.

    PubMed

    Dietrich, Gilles; Bredin, Jonathan; Kerlirzin, Yves

    2010-10-01

    The aim of this article is to elaborate a general framework for modeling dual opposition activities or, more generally, dual interaction. The main hypothesis is that opposition behavior can be measured directly from a global variable and that the relative distance between the two subjects can be this parameter. Moreover, this parameter should be considered a multidimensional parameter, depending not only on the dynamics of the subjects but also on their "internal" parameters, such as sociological and/or emotional states. A standard, simple mechanical formalization will be used to model this multifactorial distance. To illustrate this general modeling methodology, the model was compared with actual data from an opposition activity, Japanese fencing (kendo). The model captures not only coupled coordination but, more generally, interaction in two-subject activities.

  2. A capture-recapture survival analysis model for radio-tagged animals

    USGS Publications Warehouse

    Pollock, K.H.; Bunck, C.M.; Winterstein, S.R.; Chen, C.-L.; North, P.M.; Nichols, J.D.

    1995-01-01

    In recent years, survival analysis of radio-tagged animals has developed using methods based on the Kaplan-Meier method used in medical and engineering applications (Pollock et al., 1989a,b). An important assumption of this approach is that all tagged animals with a functioning radio can be relocated at each sampling time with probability 1. This assumption may not always be reasonable in practice. In this paper, we show how a general capture-recapture model can be derived which allows for some probability (less than one) for animals to be relocated. This model is not simply a Jolly-Seber model because it is possible to relocate both dead and live animals, unlike when traditional tagging is used. The model can also be viewed as a generalization of the Kaplan-Meier procedure, thus linking the Jolly-Seber and Kaplan-Meier approaches to survival estimation. We present maximum likelihood estimators and discuss testing between submodels. We also discuss model assumptions and their validity in practice. An example is presented based on canvasback data collected by G. M. Haramis of Patuxent Wildlife Research Center, Laurel, Maryland, USA.
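    For reference, the standard Kaplan-Meier product-limit estimator that the model generalizes can be sketched as follows (illustrative only; the paper's capture-recapture extension relaxes the assumption that every radioed animal is relocated):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival curve; events[i] = 1 is a death, 0 censored.
    Assumes distinct event times for simplicity."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, at_risk, curve = 1.0, len(times), []
    for e in events:
        if e == 1:
            surv *= (at_risk - 1) / at_risk  # step down at each death
        at_risk -= 1                         # deaths and censorings leave risk set
        curve.append(surv)
    return np.array(curve)

# Three animals: deaths at t=1 and t=3, a censoring (e.g. lost signal) at t=2
curve = kaplan_meier([3, 1, 2], [1, 1, 0])
```

When relocation probability is below one, the risk set itself becomes uncertain, which is where the Jolly-Seber-style modeling enters.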

  3. Application of a multistate model to estimate culvert effects on movement of small fishes

    USGS Publications Warehouse

    Norman, J.R.; Hagler, M.M.; Freeman, Mary C.; Freeman, B.J.

    2009-01-01

    While it is widely acknowledged that culverted road-stream crossings may impede fish passage, effects of culverts on movement of nongame and small-bodied fishes have not been extensively studied and studies generally have not accounted for spatial variation in capture probabilities. We estimated probabilities for upstream and downstream movement of small (30-120 mm standard length) benthic and water column fishes across stream reaches with and without culverts at four road-stream crossings over a 4-6-week period. Movement and reach-specific capture probabilities were estimated using multistate capture-recapture models. Although none of the culverts were complete barriers to passage, only a bottomless-box culvert appeared to permit unrestricted upstream and downstream movements by benthic fishes based on model estimates of movement probabilities. At two box culverts that were perched above the water surface at base flow, observed movements were limited to water column fishes and to intervals when runoff from storm events raised water levels above the perched level. Only a single fish was observed to move through a partially embedded pipe culvert. Estimates for probabilities of movement over distances equal to at least the length of one culvert were low (e.g., generally ≤0.03, estimated for 1-2-week intervals) and had wide 95% confidence intervals as a consequence of few observed movements to nonadjacent reaches. Estimates of capture probabilities varied among reaches by a factor of 2 to over 10, illustrating the importance of accounting for spatially variable capture rates when estimating movement probabilities with capture-recapture data. Longer-term studies are needed to evaluate temporal variability in stream fish passage at culverts (e.g., in relation to streamflow variability) and to thereby better quantify the degree of population fragmentation caused by road-stream crossings with culverts. © American Fisheries Society 2009.

  4. Bayesian inference in camera trapping studies for a class of spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba

    2009-01-01

    We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is a hierarchical model composed of two components: a point process model describing the distribution of individuals in space (or their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models, where the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models using a formulation based on data augmentation. We apply the models to camera trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. For this study, 120 camera locations were used, but cameras were only operational at 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.
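    The abstract states that capture probability is a function of distance between home range centre and trap; one common choice for that function (assumed here for illustration, not stated in the abstract) is half-normal, with hypothetical parameters lam0 and sigma:

```python
import numpy as np

def detection_prob(s, traps, lam0=0.5, sigma=30.0):
    """Half-normal encounter model: capture probability at each trap
    declines with distance from the activity (home range) centre s."""
    d2 = np.sum((traps - s) ** 2, axis=1)     # squared centre-trap distances
    return lam0 * np.exp(-d2 / (2 * sigma**2))

# Two camera traps; an individual centred on the first trap
traps = np.array([[0.0, 0.0], [100.0, 0.0]])
p = detection_prob(np.array([0.0, 0.0]), traps)
# p[0] equals lam0 at the centre; p[1] is far smaller 100 units away
```

Treating the centres s as random effects turns this into the generalized linear mixed model structure the authors describe.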

  5. The role of capture spiral silk properties in the diversification of orb webs

    PubMed Central

    Tarakanova, Anna; Buehler, Markus J.

    2012-01-01

    Among a myriad of spider web geometries, the orb web presents a fascinating, exquisite example in architecture and evolution. Orb webs can be divided into two categories according to the capture silk used in construction: cribellate orb webs (composed of pseudoflagelliform silk) coated with dry cribellate threads and ecribellate orb webs (composed of flagelliform silk fibres) coated by adhesive glue droplets. Cribellate capture silk is generally stronger but less-extensible than viscid capture silk, and a body of phylogenic evidence suggests that cribellate capture silk is more closely related to the ancestral form of capture spiral silk. Here, we use a coarse-grained web model to investigate how the mechanical properties of spiral capture silk affect the behaviour of the whole web, illustrating that more elastic capture spiral silk yields a decrease in web system energy absorption, suggesting that the function of the capture spiral shifted from prey capture to other structural roles. Additionally, we observe that in webs with more extensible capture silk, the effect of thread strength on web performance is reduced, indicating that thread elasticity is a dominant driving factor in web diversification. PMID:22896566

  6. Simulation of charge breeding of rubidium using Monte Carlo charge breeding code and generalized ECRIS model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, L.; Cluggish, B.; Kim, J. S.

    2010-02-15

    A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: a free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of highly charged ion state outputs under the free boundary condition, and a similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.

  7. Measurements of neutron capture cross sections on 70Zn at 0.96 and 1.69 MeV

    NASA Astrophysics Data System (ADS)

    Punte, L. R. M.; Lalremruata, B.; Otuka, N.; Suryanarayana, S. V.; Iwamoto, Y.; Pachuau, Rebecca; Satheesh, B.; Thanga, H. H.; Danu, L. S.; Desai, V. V.; Hlondo, L. R.; Kailas, S.; Ganesan, S.; Nayak, B. K.; Saxena, A.

    2017-02-01

    The cross sections of the 70Zn(n,γ)71mZn (T1/2 = 3.96 ± 0.05 h) reaction have been measured relative to the 197Au(n,γ)198Au cross sections at 0.96 and 1.69 MeV using a 7Li(p,n)7Be neutron source and the activation technique. The cross section of this reaction has been measured for the first time in the MeV region. The new experimental cross sections have been compared with theoretical predictions by TALYS-1.6 with various level-density models and γ-ray strength functions, as well as with the TENDL-2015 library. The TALYS-1.6 calculation with the generalized superfluid level-density model and the Kopecky-Uhl generalized Lorentzian γ-ray strength function reproduced the new experimental cross sections at both incident energies. The 70Zn(n,γ)71g+mZn total capture cross sections have also been derived by applying the evaluated isomeric ratios in the TENDL-2015 library to the measured partial capture cross sections. The spectrum-averaged total capture cross sections derived in the present paper agree well with the JENDL-4.0 library at 0.96 MeV, whereas they lie between the TENDL-2015 and JENDL-4.0 libraries at 1.69 MeV.

  8. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    Terrestrial Laser Scanners (TLS) have been used widely to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications. However, different applications require different kinds of 3D models. Since a building is an important object, CityGML has defined a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the point cloud are explored. TLS will be used to capture all the building details to generate multiple LODs. In previous works, this task usually involves the integration of several sensors. In this research, however, the point cloud from TLS will be processed to generate the LOD3 model. LOD2 and LOD1 will then be generalized from the resulting LOD3 model. The result of this research is a guided process for generating multi-LOD 3D building models, starting from LOD3, using TLS. Lastly, the visualization of the multi-LOD model will also be shown.

  9. Item Response Theory Using Hierarchical Generalized Linear Models

    ERIC Educational Resources Information Center

    Ravand, Hamdollah

    2015-01-01

    Multilevel models (MLMs) are flexible in that they can be employed to obtain item and person parameters, test for differential item functioning (DIF) and capture both local item and person dependence. Papers on the MLM analysis of item response data have focused mostly on theoretical issues where applications have been add-ons to simulation…

  10. An Ensemble System Based on Hybrid EGARCH-ANN with Different Distributional Assumptions to Predict S&P 500 Intraday Volatility

    NASA Astrophysics Data System (ADS)

    Lahmiri, S.; Boukadoum, M.

    2015-10-01

    Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under normal, Student's t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, to achieve complementarity. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on mean absolute error and mean squared error, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intraday volatility at one-minute and five-minute time horizons.
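    The EGARCH(1,1) conditional-variance recursion at the core of such hybrids can be sketched as follows (assumed illustrative parameter values; normal innovations, so E|z| = sqrt(2/pi)):

```python
import numpy as np

def egarch_variance(returns, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95):
    """EGARCH(1,1) log-variance recursion:
    ln s2[t] = omega + beta*ln s2[t-1] + alpha*(|z| - E|z|) + gamma*z,
    where z is the previous standardized return."""
    Eabs = np.sqrt(2 / np.pi)            # E|z| for standard normal z
    log_s2 = np.empty(len(returns))
    log_s2[0] = np.log(np.var(returns))  # initialize at sample variance
    for t in range(1, len(returns)):
        z = returns[t - 1] / np.exp(0.5 * log_s2[t - 1])
        log_s2[t] = (omega + beta * log_s2[t - 1]
                     + alpha * (abs(z) - Eabs) + gamma * z)
    return np.exp(log_s2)

rng = np.random.default_rng(0)
s2 = egarch_variance(rng.normal(0.0, 0.01, 500))
```

Because the recursion is in log variance, positivity of s2 holds without parameter constraints, and the gamma term captures the leverage (sign) effect; the paper's hybrids feed such features to a BPNN.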

  11. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
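    A minimal discrete-time sketch of a univariate nonlinear Hawkes conditional intensity with a log link (as in a Poisson PP-GLM) and an exponential history kernel, using hypothetical parameters mu, alpha, tau:

```python
import numpy as np

def hawkes_intensity(spikes, mu=0.2, alpha=0.8, tau=5.0):
    """Conditional intensity of a discrete-time nonlinear Hawkes process:
    past spikes excite the rate through an exponentially decaying filter;
    the exp link keeps the intensity positive."""
    lam = np.empty(len(spikes))
    h = 0.0                              # filtered spike history
    for t in range(len(spikes)):
        lam[t] = np.exp(np.log(mu) + alpha * h)
        h = h * np.exp(-1.0 / tau) + spikes[t]  # decay, then add new spike
    return lam

spikes = np.zeros(20)
spikes[5] = 1
lam = hawkes_intensity(spikes)
# the intensity rises above the baseline mu right after the spike at t=5
```

In the multivariate PP-GLM setting, each neuron's log intensity is a linear function of filtered ensemble spike histories plus exogenous covariates, estimated by GLM fitting.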

  12. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics, and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capturing collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.
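
    The discrete-time PP-GLM view of a nonlinear Hawkes process can be sketched in a minimal univariate simulation; the exponential link, kernel shape, and parameter values below are illustrative assumptions, not taken from the review.

```python
import numpy as np

def simulate_hawkes_glm(n_steps, baseline, kernel, rng):
    """Simulate a univariate discrete-time nonlinear Hawkes process.

    Conditional intensity per bin:
        lambda_t = exp(baseline + sum_k kernel[k] * spike_{t-1-k}),
    with spikes drawn as Bernoulli(min(lambda_t, 1)) for unit-width bins.
    """
    spikes = np.zeros(n_steps, dtype=int)
    lam = np.zeros(n_steps)
    K = len(kernel)
    for t in range(n_steps):
        hist = spikes[max(0, t - K):t][::-1]          # most recent bin first
        drive = baseline + np.dot(kernel[:len(hist)], hist)
        lam[t] = np.exp(drive)                        # nonlinear (exp) link
        spikes[t] = rng.random() < min(lam[t], 1.0)
    return spikes, lam

rng = np.random.default_rng(1)
kernel = 0.8 * np.exp(-np.arange(10) / 2.0)  # excitatory self-history filter
spikes, lam = simulate_hawkes_glm(2000, baseline=-3.0, kernel=kernel, rng=rng)
```

    In the multivariate setting described in the review, `kernel` becomes a matrix of coupling filters across neurons, and the filter coefficients are estimated by maximizing the point-process likelihood within the GLM framework.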

  13. The Development of the General Factor of Psychopathology 'p Factor' Through Childhood and Adolescence.

    PubMed

    Murray, Aja Louise; Eisner, Manuel; Ribeaud, Denis

    2016-11-01

    Recent studies have suggested that the structure of psychopathology may be usefully represented in terms of a general factor of psychopathology (p-factor) capturing variance common to a broad range of symptoms transcending diagnostic domains, in addition to specific factors capturing variance common to smaller subsets of more closely related symptoms. Little is known about how the general co-morbidity captured by this p-factor develops and whether general co-morbidity increases or decreases over childhood and adolescence. We evaluated two competing hypotheses: (1) dynamic mutualism, which predicts growth in general co-morbidity and associated p-factor strength over time, and (2) p-differentiation, which predicts that manifestations of liabilities towards psychopathology become increasingly specific over time. Data came from the Zurich Project on the Social Development of Children and Youths (z-proso), a longitudinal study of a normative sample (approx. 50% male) measured at 8 time points from ages 7 to 15. We operationalised general co-morbidity as p-factor strength in a bi-factor model and used omega hierarchical to track how this changed over development. In contrast to the predictions of both dynamic mutualism and p-differentiation, p-factor strength remained relatively constant over the studied period, suggesting that such processes do not govern the interplay between psychopathological symptoms during this phase of development. Future research should focus on earlier phases of development and on factors that maintain the consistency of symptom-general covariation across this period.
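
    The operationalisation of p-factor strength via omega hierarchical in a bi-factor model can be illustrated with hypothetical standardized loadings; the factor structure and values below are invented for illustration and are not the z-proso estimates.

```python
import numpy as np

def omega_hierarchical(general, specifics, resid_var):
    """Omega hierarchical for a bi-factor model with orthogonal factors.

    omega_h = (sum of general-factor loadings)^2 / total score variance,
    where total variance = (sum g)^2 + sum over specific factors of
    (sum s)^2 + sum of residual variances.
    """
    g2 = np.sum(general) ** 2
    s2 = sum(np.sum(s) ** 2 for s in specifics)
    total = g2 + s2 + np.sum(resid_var)
    return g2 / total

# Hypothetical loadings: 6 symptoms, one general p-factor, and two
# specific factors (e.g. internalizing, externalizing) of 3 symptoms each.
g = np.array([0.6, 0.5, 0.6, 0.5, 0.6, 0.5])
s_int = np.array([0.4, 0.3, 0.4])
s_ext = np.array([0.3, 0.4, 0.3])
resid = 1.0 - g**2 - np.concatenate([s_int, s_ext])**2
oh = omega_hierarchical(g, [s_int, s_ext], resid)
```

    A value of `oh` near 1 would indicate that total-score variance is dominated by the general factor; tracking this quantity across the 8 measurement waves is how the study assesses whether p-factor strength grows or differentiates.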

  14. Implementation of a Smeared Crack Band Model in a Micromechanics Framework

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh, which is subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.

  15. An adaptive strategy for reducing Feral Cat predation on endangered hawaiian birds

    USGS Publications Warehouse

    Hess, S.C.; Banko, P.C.; Hansen, H.

    2009-01-01

    Despite the long history of Feral Cats Felis catus in Hawai'i, there has been little research to provide strategies to improve control programmes and reduce depredation on endangered species. Our objective was to develop a predictive model to determine how landscape features on Mauna Kea, such as habitat, elevation, and proximity to roads, may affect the number of Feral Cats captured at each trap. We used log-link generalized linear models and QAICc model ranking criteria to determine the effect of these factors. We found that the number of cats captured per trap was related to effort, habitat type, and whether traps were located on the west or north slope of Mauna Kea. We recommend an adaptive management strategy to minimize trapping interference by non-target Small Indian Mongoose Herpestes auropunctatus with toxicants, to focus trapping efforts in Māmane Sophora chrysophylla habitat on the west slope of Mauna Kea, and to cluster traps near others that have previously captured multiple cats.
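
    A log-link (Poisson) generalized linear model of trap-level capture counts, with trapping effort entering as an offset, can be sketched on synthetic data in the spirit of the analysis above; the covariate, effect sizes, and effort values are illustrative assumptions, not the study's estimates.

```python
import numpy as np

def poisson_glm(X, y, offset, n_iter=25):
    """Fit a log-link Poisson GLM by Newton-Raphson (IRLS).

    Model: y_i ~ Poisson(mu_i), log(mu_i) = offset_i + X_i . beta,
    where the offset carries log trapping effort (e.g. trap-nights).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(offset + X @ beta)
        score = X.T @ (y - mu)              # gradient of the log-likelihood
        info = X.T @ (X * mu[:, None])      # Fisher information
        beta = beta + np.linalg.solve(info, score)
    return beta

rng = np.random.default_rng(2)
n = 300
habitat = rng.integers(0, 2, n)             # 1 = focal habitat (illustrative)
effort = rng.uniform(5, 30, n)              # trap-nights per trap
X = np.column_stack([np.ones(n), habitat])
true_beta = np.array([-2.0, 0.8])           # intercept, habitat effect
y = rng.poisson(np.exp(np.log(effort) + X @ true_beta))
beta_hat = poisson_glm(X, y, offset=np.log(effort))
```

    Exponentiating `beta_hat[1]` gives the multiplicative change in expected captures per unit effort for the focal habitat; model ranking with QAICc would additionally inflate the variance by an overdispersion factor.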

  16. Estimation of population size using open capture-recapture models

    USGS Publications Warehouse

    McDonald, T.L.; Amstrup, Steven C.

    2001-01-01

    One of the most important needs for wildlife managers is an accurate estimate of population size. Yet, for many species, including most marine species and large mammals, accurate and precise estimation of numbers is one of the most difficult of all research challenges. Open-population capture-recapture models have proven useful in many situations to estimate survival probabilities but typically have not been used to estimate population size. We show that open-population models can be used to estimate population size by developing a Horvitz-Thompson-type estimate of population size and an estimator of its variance. Our population size estimate keys on the probability of capture at each trap occasion and therefore is quite general and can be made a function of external covariates measured during the study. Here we define the estimator and investigate its bias, variance, and variance estimator via computer simulation. Computer simulations make extensive use of real data taken from a study of polar bears (Ursus maritimus) in the Beaufort Sea. The population size estimator is shown to be useful because it was negligibly biased in all situations studied. The variance estimator is shown to be useful in all situations, but caution is warranted in cases of extreme capture heterogeneity.
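
    The Horvitz-Thompson-type estimate described above can be sketched for a single occasion. In this toy example the capture probabilities are known by construction; in practice they would come from the fitted open-population model, possibly as a function of external covariates.

```python
import numpy as np

def horvitz_thompson_N(caught, p):
    """Horvitz-Thompson-type abundance estimate for one occasion.

    caught : boolean array, whether each animal was caught on the occasion
    p      : capture probability for each of those animals
    N_hat = sum over animals caught on the occasion of 1 / p_i.
    """
    caught = np.asarray(caught, dtype=bool)
    return np.sum(1.0 / np.asarray(p)[caught])

rng = np.random.default_rng(3)
N_true = 500
p_i = rng.uniform(0.2, 0.6, N_true)   # heterogeneous capture probabilities
caught = rng.random(N_true) < p_i     # one simulated trapping occasion
N_hat = horvitz_thompson_N(caught, p_i)
```

    Each captured animal is weighted by the inverse of its capture probability, so animals that were unlikely to be caught stand in for more uncaught ones; this is also why extreme capture heterogeneity (very small p_i) inflates the variance, as the abstract cautions.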

  17. Estimation by capture-recapture of recruitment and dispersal over several sites

    USGS Publications Warehouse

    Lebreton, J.D.; Hines, J.E.; Pradel, R.; Nichols, J.D.; Spendelow, J.A.

    2003-01-01

    Dispersal in animal populations is intimately linked with accession to reproduction, i.e. recruitment, and population regulation. Dispersal processes are thus a key component of population dynamics to the same extent as reproduction or mortality processes. Despite the growing interest in spatial aspects of population dynamics, the methodology for estimating dispersal, particularly in relation to recruitment, is limited. In many animal populations, in particular vertebrates, the impossibility of following individuals over space and time in an exhaustive way leads to the need to frame the estimation of dispersal in the context of capture-recapture methodology. We present here a class of age-dependent multistate capture-recapture models for the simultaneous estimation of natal dispersal, breeding dispersal, and age-dependent recruitment. These models are suitable for populations in which individuals are marked at birth and then recaptured over several sites. Under simple constraints, they can be used in populations where non-breeders are not observed, as is often the case with colonial waterbirds monitored on their breeding grounds. Biological questions can be addressed by comparing models differing in structure, according to the generalized linear model philosophy broadly used in capture-recapture methodology. We illustrate the potential of this approach by an analysis of recruitment and dispersal in the roseate tern Sterna dougallii.

  18. Statistical downscaling of precipitation using long short-term memory recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra

    2017-11-01

    Hydrological impacts of global climate change on regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation, temperature, etc. In this study, we propose a new statistical downscaling model based on a Recurrent Neural Network with Long Short-Term Memory that captures the spatio-temporal dependencies in local rainfall. Previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains, and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets, one on precipitation in the Mahanadi basin in India and the other on precipitation in the Campbell River basin in Canada. Our autoencoder-coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.
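
    The core recurrence of the LSTM units used in such downscaling models can be sketched in plain NumPy; the dimensions, weight scales, and gate stacking order (input, forget, output, candidate) are illustrative assumptions, and a real model would be trained end to end in a deep learning framework.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One forward step of a standard LSTM cell (gates stacked i, f, o, g)."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    z = W @ x + U @ h + b                 # (4H,) pre-activations
    H = len(h)
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])                  # candidate cell update
    c_new = f * c + i * g                 # gated memory update
    h_new = o * np.tanh(c_new)            # hidden state / output
    return h_new, c_new

rng = np.random.default_rng(4)
D, H = 5, 8                               # predictor dim (e.g. GCM variables), hidden dim
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):        # a 10-step predictor sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

    The cell state `c` is what lets the network carry information across many time steps, which is the property exploited here to capture temporal dependence in local rainfall.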

  19. Progressive Failure of a Unidirectional Fiber-Reinforced Composite Using the Method of Cells: Discretization Objective Computational Results

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh, which is subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.

  20. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
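
    For one protocol in this family, removal sampling, the multinomial cell probabilities have a simple closed form; the sketch below assumes a constant per-occasion capture probability p (a standard simplification, not the full covariate-dependent models discussed above).

```python
import numpy as np

def removal_cell_probs(p, n_occasions):
    """Multinomial cell probabilities for a removal-sampling protocol.

    An animal is first removed on occasion j with probability
    pi_j = p * (1 - p)^(j - 1); with probability (1 - p)^J it is
    never captured over J occasions.
    """
    j = np.arange(1, n_occasions + 1)
    pi = p * (1.0 - p) ** (j - 1)
    return np.append(pi, (1.0 - p) ** n_occasions)  # last cell: never caught

# Cells: removed on occasion 1, 2, 3, or never captured.
probs = removal_cell_probs(p=0.3, n_occasions=3)
```

    The observed removal counts are then multinomial with these cell probabilities conditional on local abundance N, which is itself given a prior (e.g. Poisson) in the N-mixture formulation.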

  1. Generalizing the dynamic field theory of spatial cognition across real and developmental time scales

    PubMed Central

    Simmering, Vanessa R.; Spencer, John P.; Schutte, Anne R.

    2008-01-01

    Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective. PMID:17716632

  2. Qualitative dynamics semantics for SBGN process description.

    PubMed

    Rougny, Adrien; Froidevaux, Christine; Calzone, Laurence; Paulevé, Loïc

    2016-06-16

    Qualitative dynamics semantics provide a coarse-grained modeling of network dynamics by abstracting away kinetic parameters. They make it possible to capture general features of system dynamics, such as attractors or reachability properties, for which scalable analyses exist. The Systems Biology Graphical Notation Process Description language (SBGN-PD) has become a standard to represent reaction networks. However, no qualitative dynamics semantics taking into account all the main features available in SBGN-PD had been proposed so far. We propose two qualitative dynamics semantics for SBGN-PD reaction networks, namely the general semantics and the stories semantics, which we formalize using asynchronous automata networks. While the general semantics extends the standard Boolean semantics of reaction networks by taking into account all the main features of SBGN-PD, the stories semantics makes it possible to model several molecules of a network with a single variable. The obtained qualitative models can be checked against dynamical properties and therefore validated with respect to biological knowledge. We apply our framework to reason on the qualitative dynamics of a large network (more than 200 nodes) modeling the regulation of the cell cycle by RB/E2F. The proposed semantics provide a direct formalization of SBGN-PD networks in dynamical qualitative models that can be further analyzed using standard tools for discrete models. The dynamics in the stories semantics have a lower dimension than those of the general semantics and prune multiple behaviors (which can be considered spurious) by enforcing mutual exclusiveness between the activity of different nodes of the same story. Overall, the qualitative semantics for SBGN-PD efficiently capture important dynamical features of reaction network models and can be exploited to further refine them.

  3. Selective gas capture via kinetic trapping

    DOE PAGES

    Kundu, Joyjit; Pascal, Tod; Prendergast, David; ...

    2016-07-13

    Conventional approaches to the capture of CO2 by metal-organic frameworks focus on equilibrium conditions, and frameworks that contain little CO2 in equilibrium are often rejected as carbon-capture materials. Here we use a statistical mechanical model, parameterized by quantum mechanical data, to suggest that metal-organic frameworks can be used to separate CO2 from a typical flue gas mixture when used under nonequilibrium conditions. The origin of this selectivity is an emergent gas-separation mechanism that results from the acquisition by different gas types of different mobilities within a crowded framework. The resulting distribution of gas types within the framework is in general spatially and dynamically heterogeneous. Our results suggest that relaxing the requirement of equilibrium can substantially increase the parameter space of conditions and materials for which selective gas capture can be effected.

  4. Energetic neutrinos from heavy-neutralino annihilation in the Sun. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kamionkowski, Marc

    1991-01-01

    Neutralinos may be captured in the sun and annihilated therein producing high-energy neutrinos. Present limits on the flux of such neutrinos from underground detectors such as IMB and Kamiokande 2 may be used to rule out certain supersymmetric dark matter candidates, while in many other supersymmetric models the rates are large enough that if neutralinos do reside in the galactic halo, observation of a neutrino signal may be possible in the near future. Neutralinos that are either nearly pure Higgsino or a Higgsino/gaugino combination are generally captured in the sun by coherent scattering off nuclei via exchange of the lightest Higgs boson. If the squark mass is not much greater than the neutralino mass, then capture of neutralinos that are primarily gaugino occurs predominantly by spin-dependent scattering off hydrogen in the sun. The neutrino signal from annihilation of WIMPs with masses in the range of 80 to 1000 GeV in the sun should generally be stronger than that from weakly interacting massive particle (WIMP) annihilation in the earth, and detection rates for mixed-state neutralinos are generally higher than those for Higgsinos or gauginos.

  5. Estimating temporary emigration and breeding proportions using capture-recapture data with Pollock's robust design

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.; Hines, J.E.

    1997-01-01

    Statistical inference for capture-recapture studies of open animal populations typically relies on the assumption that all emigration from the studied population is permanent. However, there are many instances in which this assumption is unlikely to be met. We define two general models for the process of temporary emigration, completely random and Markovian. We then consider effects of these two types of temporary emigration on Jolly-Seber (Seber 1982) estimators and on estimators arising from the full-likelihood approach of Kendall et al. (1995) to robust design data. Capture-recapture data arising from Pollock's (1982) robust design provide the basis for obtaining unbiased estimates of demographic parameters in the presence of temporary emigration and for estimating the probability of temporary emigration. We present a likelihood-based approach to dealing with temporary emigration that permits estimation under different models of temporary emigration and yields tests for completely random and Markovian emigration. In addition, we use the relationship between capture probability estimates based on closed and open models under completely random temporary emigration to derive three ad hoc estimators for the probability of temporary emigration, two of which should be especially useful in situations where capture probabilities are heterogeneous among individual animals. Ad hoc and full-likelihood estimators are illustrated for small mammal capture-recapture data sets. We believe that these models and estimators will be useful for testing hypotheses about the process of temporary emigration, for estimating demographic parameters in the presence of temporary emigration, and for estimating probabilities of temporary emigration. These latter estimates are frequently of ecological interest as indicators of animal movement and, in some sampling situations, as direct estimates of breeding probabilities and proportions.
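
    One of the ad hoc estimators alluded to above exploits the relation between open- and closed-model capture probabilities under completely random temporary emigration; the capture probabilities below are illustrative numbers, not estimates from the small-mammal data sets.

```python
def temporary_emigration(p_open, p_closed):
    """Ad hoc estimator of completely random temporary emigration.

    Under completely random temporary emigration, the open-model capture
    probability is diluted by the fraction of animals temporarily off the
    study area: p_open = (1 - gamma) * p_closed, hence
    gamma_hat = 1 - p_open / p_closed.
    """
    return 1.0 - p_open / p_closed

# Illustrative values: open-model estimate 0.24, closed-model estimate 0.40.
gamma_hat = temporary_emigration(p_open=0.24, p_closed=0.40)
```

    Here the robust design supplies both ingredients: closed-model estimates within primary periods and open-model estimates across them, so `gamma_hat` can be computed per primary period.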

  6. Geologic and Landuse Controls of the Risk for Domestic Well Pollution from Septic Tank Leachate

    NASA Astrophysics Data System (ADS)

    Horn, J.; Harter, T.

    2006-12-01

    A highly resolved three-dimensional groundwater model containing a domestic drinking water well and its surrounding gravel pack is simulated with MODFLOW. Typical recharge rates, domestic well depths, and well sealing lengths are obtained by analyzing well log data from eastern Stanislaus County, California, an area with a significant rural and suburban population relying on domestic wells and septic tank systems. The domestic well model is run for a range of hydraulic conductivities of both the gravel pack and the aquifer. Reverse particle tracking with MODPATH 3D is carried out to determine the capture zone of the well as a function of hydraulic conductivity. The resulting capture zone is divided into two areas. Particles entering the top of the well screen represent water that flows downward through the gravel pack from below the well seal and above the well screen; the source area associated with these particles forms a narrow well-ward elongation of the main capture zone, which represents that of particles flowing horizontally across the gravel pack into the well screen. The properties of the modeled capture zones are compared to existing analytical capture zone models. A clear influence of the gravel pack on capture zone shape and size is shown. Using the information on capture zone geometry, a risk assessment tool is developed to estimate the chance that a domestic well capture zone intersects at least one septic tank drainfield in a checkerboard of rural or suburban lots of a given size, with random drainfield and domestic well placement. Risk is computed as a function of aquifer and gravel pack hydraulic conductivity and as a function of lot size. We show the risk of collocation of a septic tank leach field with a domestic well capture zone for various scenarios. This risk is generally highest for high hydraulic conductivities of the gravel pack and the aquifer, limited anisotropy, and higher septic system densities. Under typical conditions, the risk of septic leachate reaching a domestic well is significant and may range from 5% to over 50%.

  7. The non-storm time corrugated upper thermosphere: What is beyond MSIS?

    NASA Astrophysics Data System (ADS)

    Liu, Huixin; Thayer, Jeff; Zhang, Yongliang; Lee, Woo Kyoung

    2017-06-01

    Observations in the recent decade have revealed many thermospheric density corrugations/perturbations under nonstorm conditions (Kp < 2). They are generally not captured by empirical models like Mass Spectrometer Incoherent Scatter (MSIS) but are operationally important for long-term orbital evolution of Low Earth Orbiting satellites and theoretically for coupling processes in the atmosphere-ionosphere system. We review these density corrugations by classifying them into three types which are driven respectively by the lower atmosphere, ionosphere, and solar wind/magnetosphere. Model capabilities in capturing these features are discussed. A summary table of these corrugations is included to provide a quick guide on their magnitudes, occurring latitude, local time, and season.

  8. Competing opinions and stubbornness: Connecting models to data.

    PubMed

    Burghardt, Keith; Rand, William; Girvan, Michelle

    2016-03-01

    We introduce a general contagionlike model for competing opinions that includes dynamic resistance to alternative opinions. We show that this model can describe candidate vote distributions, spatial vote correlations, and a slow approach to opinion consensus with sensible parameter values. These empirical properties of large group dynamics, previously understood using distinct models, may be different aspects of human behavior that can be captured by a more unified model, such as the one introduced in this paper.

  9. Estimating breeding proportions and testing hypotheses about costs of reproduction with capture-recapture data

    USGS Publications Warehouse

    Nichols, James D.; Hines, James E.; Pollock, Kenneth H.; Hinz, Robert L.; Link, William A.

    1994-01-01

    The proportion of animals in a population that breeds is an important determinant of population growth rate. Usual estimates of this quantity from field sampling data assume that the probability of appearing in the capture or count statistic is the same for animals that do and do not breed. A similar assumption is required by most existing methods used to test ecologically interesting hypotheses about reproductive costs using field sampling data. However, in many field sampling situations breeding and nonbreeding animals are likely to exhibit different probabilities of being seen or caught. In this paper, we propose the use of multistate capture-recapture models for these estimation and testing problems. This methodology permits a formal test of the hypothesis of equal capture/sighting probabilities for breeding and nonbreeding individuals. Two estimators of breeding proportion (and associated standard errors) are presented, one for the case of equal capture probabilities and one for the case of unequal capture probabilities. The multistate modeling framework also yields formal tests of hypotheses about reproductive costs to future reproduction or survival or both fitness components. The general methodology is illustrated using capture-recapture data on female meadow voles, Microtus pennsylvanicus. Resulting estimates of the proportion of reproductively active females showed strong seasonal variation, as expected, with low breeding proportions in midwinter. We found no evidence of reproductive costs extracted in subsequent survival or reproduction. We believe that this methodological framework has wide application to problems in animal ecology concerning breeding proportions and phenotypic reproductive costs.
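
    The correction for unequal capture probabilities between breeders and nonbreeders amounts to weighting each count by its estimated capture probability; the counts and probabilities below are illustrative numbers, not the meadow vole estimates.

```python
def breeding_proportion(n_b, n_n, p_b, p_n):
    """Breeding proportion corrected for state-specific capture probability.

    With n_b breeders and n_n nonbreeders captured, and estimated capture
    probabilities p_b and p_n for the two states, the corrected estimate is
        tau_hat = (n_b / p_b) / (n_b / p_b + n_n / p_n).
    When p_b == p_n this reduces to the naive proportion n_b / (n_b + n_n).
    """
    nb_hat = n_b / p_b   # estimated number of breeders present
    nn_hat = n_n / p_n   # estimated number of nonbreeders present
    return nb_hat / (nb_hat + nn_hat)

# Illustrative: nonbreeders are half as catchable as breeders.
tau = breeding_proportion(n_b=60, n_n=20, p_b=0.5, p_n=0.25)
```

    In the multistate framework described above, p_b and p_n come from the fitted model, and the hypothesis p_b = p_n can be tested formally before choosing between the two estimators.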

  10. Machine learning-based diagnosis of melanoma using macro images.

    PubMed

    Gautam, Diwakar; Ahmed, Mushtaq; Meena, Yogesh Kumar; Ul Haq, Ahtesham

    2018-05-01

    Cancer poses a grave threat to human society. Melanoma, a skin cancer, originates in the skin layers and penetrates deep into the subcutaneous layers. There is extensive research on melanoma diagnosis using dermatoscopic images captured through a dermatoscope. While designing a diagnostic model for general handheld imaging systems is an emerging trend, this article proposes a computer-aided decision support system for macro images captured by a general-purpose camera. General imaging conditions are adversely affected by nonuniform illumination, which further affects the extraction of relevant information. To mitigate this, we process an image to define a smooth illumination surface using a multistage illumination compensation approach, and the infected region is extracted using the proposed multimode segmentation method. The lesion information is enumerated as a feature set comprising geometry, photometry, border series, and texture measures. The redundancy in the feature set is reduced using information theory methods, and a classification boundary is modeled to distinguish benign and malignant samples using support vector machine, random forest, neural network, and fast discriminative mixed-membership-based naive Bayesian classifiers. Moreover, the experimental outcome is supported by hypothesis testing and boxplot representation of classification losses. The simulation results prove the significance of the proposed model, which shows improved performance compared with competing methods. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Topological phases in the Haldane model with spin–spin on-site interactions

    NASA Astrophysics Data System (ADS)

    Rubio-García, A.; García-Ripoll, J. J.

    2018-04-01

    Ultracold atom experiments allow the study of topological insulators, such as the non-interacting Haldane model. In this work we study a generalization of the Haldane model with spin–spin on-site interactions that can be implemented on such experiments. We focus on measuring the winding number, a topological invariant, of the ground state, which we compute using a mean-field calculation that effectively captures long-range correlations and a matrix product state computation in a lattice with 64 sites. Our main result is that we show how the topological phases present in the non-interacting model survive until the interactions are comparable to the kinetic energy. We also demonstrate the accuracy of our mean-field approach in efficiently capturing long-range correlations. Based on state-of-the-art ultracold atom experiments, we propose an implementation of our model that can give information about the topological phases.

  12. Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO 2 capture: Original Research Article: Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO 2 capture

    DOE PAGES

    Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling; ...

    2018-03-25

    In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both hydrodynamics and CO2 capture efficiency for an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibrations. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.

  14. An assessment of precipitation and surface air temperature over China by regional climate models

    NASA Astrophysics Data System (ADS)

    Wang, Xueyuan; Tang, Jianping; Niu, Xiaorui; Wang, Shuyu

    2016-12-01

    An analysis of 20-year summertime simulations of present-day climate (1989-2008) over China by four regional climate models coupled with different land surface models is carried out. The climatic means, interannual variability, linear trends, and extremes are examined, with a focus on precipitation and near-surface air temperature. The models are able to reproduce the basic features of the observed summer mean precipitation and temperature over China, including the regional detail induced by topographic forcing. Overall, model performance is better for temperature than for precipitation. The models reasonably reproduce the major anomalies and standard deviations over China and the five subregions studied, and they generally reproduce the spatial pattern of high interannual variability over wet regions and low variability over dry regions. The models also capture well the increase of the temperature gradient toward the north with latitude. Both the observed and simulated linear trends of precipitation show a drying tendency over the Yangtze River Basin and a wetting tendency over South China. The models capture well the relatively small temperature trends over large areas of China, and they reasonably simulate extreme precipitation indices such as heavy-rain days and the heavy-precipitation fraction. Most of the models also perform well in capturing both the sign and magnitude of the daily maximum and minimum temperatures over China.

  15. Impact of Media Richness and Flow on E-Learning Technology Acceptance

    ERIC Educational Resources Information Center

    Liu, Su-Houn; Liao, Hsiu-Li; Pratt, Jean A.

    2009-01-01

    Advances in e-learning technologies parallel a general increase in sophistication by computer users. The use of just one theory or model, such as the technology acceptance model, is no longer sufficient to study the intended use of e-learning systems. Rather, a combination of theories must be integrated in order to fully capture the complexity of…

  16. Predicting the spread of all invasive forest pests in the United States

    Treesearch

    Emma J. Hudgins; Andrew M. Liebhold; Brian Leung; Regan Early

    2017-01-01

    We tested whether a general spread model could capture macroecological patterns across all damaging invasive forest pests in the United States. We showed that a common constant dispersal kernel model, simulated from the discovery date, explained 67.94% of the variation in range size across all pests, and had 68.00% locational accuracy between predicted and observed...
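The constant-kernel idea can be caricatured in a few lines: every pest's invaded range is approximated as a disc whose radius grows at one shared rate from its discovery date. The rate c and the dates below are hypothetical placeholders, not the study's fitted values.

```python
import math

c = 5.0  # shared radial spread rate, km/year (assumed for illustration)

def predicted_range_km2(discovery_year, current_year=2017):
    # Disc-shaped range growing linearly in radius since discovery.
    radius = c * (current_year - discovery_year)
    return math.pi * radius ** 2

for year in (1900, 1950, 2000):
    print(year, round(predicted_range_km2(year)))
```

A single shared rate is a strong simplification, which is why the fraction of range-size variation such a model explains (about 68% in the study) is the interesting quantity.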

  17. Improvement of Progressive Damage Model to Predicting Crashworthy Composite Corrugated Plate

    NASA Astrophysics Data System (ADS)

    Ren, Yiru; Jiang, Hongyong; Ji, Wenyuan; Zhang, Hanyu; Xiang, Jinwu; Yuan, Fuh-Gwo

    2018-02-01

    To predict the crashworthiness of composite corrugated plates, different single and stacked shell models are evaluated and compared, and a stacked shell progressive damage model combined with continuum damage mechanics is proposed and investigated. To simulate and predict the failure behavior, both intra- and inter-laminar failure modes are considered. The tiebreak contact method, 1D spot-weld elements, and cohesive elements are adopted in the stacked shell models, and a surface-based cohesive behavior is used to capture delamination in the proposed model. The impact loads and failure behavior of the proposed and conventional progressive damage models are compared. Results show that the single shell model can reproduce the impact load curve but cannot simulate delamination. The general stacked shell model can simulate the interlaminar failure behavior. The improved stacked shell model with continuum damage mechanics and cohesive elements not only agrees well with the measured impact load but also captures the fiber failure, matrix debonding, and interlaminar failure of the composite structure.
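The delamination ingredient of such models, a cohesive traction-separation law, is easy to sketch. The bilinear form below is the generic textbook shape (linear elastic branch, then linear softening to zero traction); the stiffness and separation parameters are illustrative, not the paper's calibrated values.

```python
def traction(delta, delta0=0.01, delta_f=0.1, k=1000.0):
    """Bilinear cohesive law: traction as a function of separation delta."""
    if delta <= delta0:      # linear elastic branch up to damage onset
        return k * delta
    if delta >= delta_f:     # fully debonded: no load transfer
        return 0.0
    # Linear softening between damage onset and final failure.
    return k * delta0 * (delta_f - delta) / (delta_f - delta0)

for d in (0.005, 0.01, 0.05, 0.1):
    print(d, traction(d))
```

The area under this curve is the fracture energy, which is what a surface-based cohesive behavior dissipates as the delamination front advances.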

  18. Suicidal Ideation and Interpersonal Needs: Factor Structure of a Short Version of the Interpersonal Needs Questionnaire in an At-Risk Military Sample.

    PubMed

    Allan, Nicholas P; Gros, Daniel F; Hom, Melanie A; Joiner, Thomas E; Stecker, Tracy

    2016-01-01

    The interpersonal-psychological theory of suicide posits that perceived burdensomeness (PB; i.e., the belief that others would be better off if one were dead) and thwarted belongingness (TB; i.e., the belief that one lacks meaningful social connections) are both necessary risk factors for the development of suicidal ideation. To test these relations, measures are needed that are well validated, especially in samples of at-risk adults. The current study was designed to examine the factor structure of an eight-item version of the Interpersonal Needs Questionnaire (INQ) in a sample of 405 U.S. past and current military personnel (M age  = 31.57 years, SD = 7.28; 90.4% male) who endorsed either current suicidal ideation and/or a past suicide attempt. Analyses were conducted using confirmatory factor analysis (CFA). A bifactor model comprising a general factor, labeled interpersonal needs, and two specific factors, labeled PB and TB, fit the data best. The general factor captured a high proportion of overall variance (81.9%). In contrast, the TB factor captured only a modest amount of variance in items meant to capture this factor (59.1%) and the PB factor captured very little variance in items meant to capture this factor (13.5%). Further, only the interpersonal needs factor was associated with lifetime and past-week suicidal ideation as well as suicidal ideation frequency and duration. The current findings indicate that, for the INQ-8 in high-risk military personnel, a general interpersonal needs factor accounted for the relations PB and TB share with suicidal ideation.

  19. Do little interactions get lost in dark random forests?

    PubMed

    Wright, Marvin N; Ziegler, Andreas; König, Inke R

    2016-03-31

    Random forests have often been claimed to uncover interaction effects. However, if and how interaction effects can be differentiated from marginal effects remains unclear. In extensive simulation studies, we investigate whether random forest variable importance measures capture or detect gene-gene interactions. By capturing interactions, we mean the ability to identify a variable that acts through an interaction with another one; by detection, we mean the ability to identify an interaction effect as such. Of the single importance measures, the Gini importance captured interaction effects in most of the simulated scenarios; however, these effects were masked by marginal effects of other variables. With the permutation importance, the proportion of captured interactions was lower in all cases. The pairwise importance measures performed about equally well, with a slight advantage for the joint variable importance method. However, the overall fraction of detected interactions was low, and in almost all scenarios the detection fraction in a model with only marginal effects was larger than in a model with an interaction effect only. Random forests are thus generally capable of capturing gene-gene interactions, but current variable importance measures cannot detect them as interactions; in most cases, interactions are masked by marginal effects and cannot be differentiated from them. Consequently, caution is warranted when claiming that random forests uncover interactions.
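The distinction between capturing and detecting can be illustrated with a small simulation in the spirit of the study (though far simpler): an outcome generated by a pure two-variable interaction with no marginal effects, plus noise variables, with Gini importances inspected afterwards. Sample sizes and settings below are arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 600, 10
X = rng.integers(0, 2, size=(n, p)).astype(float)
# Pure x0-x1 interaction (XOR): neither variable has a marginal effect.
y = X[:, 0].astype(int) ^ X[:, 1].astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
imp = rf.feature_importances_          # Gini importance
ranked = np.argsort(imp)[::-1]
print(ranked[:2])  # the interacting variables should rank on top
```

High importance for x0 and x1 here is "capturing": the measure flags the right variables but says nothing about *why* they matter, which is exactly the detection gap the paper describes.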

  20. An optimization model for carbon capture & storage/utilization vs. carbon trading: A case study of fossil-fired power plants in Turkey.

    PubMed

    Ağralı, Semra; Üçtuğ, Fehmi Görkem; Türkmen, Burçin Atılgan

    2018-06-01

    We consider fossil-fired power plants that operate in an environment where a cap-and-trade system is in place. These plants must choose between carbon capture and storage (CCS), carbon capture and utilization (CCU), and carbon trading in order to comply with the emission limits enforced by the government. We develop a mixed-integer programming model that decides on the capacities of carbon capture units (if it is optimal to install them), the transportation network that needs to be built for transporting the captured carbon, and the locations of storage sites (if they are to be built). The main restrictions on the system are the minimum and maximum capacities of the different parts of the pipeline network, the amount of carbon that can be sold to companies for utilization, and the capacities of the storage sites. Under these restrictions, the model minimizes the net present value of the sum of the costs associated with installing and operating the carbon capture unit and transporting the carbon, the storage cost in the case of CCS, and the cost (or revenue) resulting from the emissions trading system, minus the revenue from selling carbon to other entities for utilization. We implement the model in the General Algebraic Modeling System (GAMS) using data for two coal-fired power plants located in different regions of Turkey, with enhanced oil recovery (EOR) as the process in which the carbon would be utilized. The results show that CCU is preferable to CCS as long as there is sufficient demand in the EOR market. The distance between the location of emission and the location of utilization/storage, together with the capacity limits on the pipes, is an important factor in deciding between carbon capture and carbon trading. At carbon prices over $15/ton, carbon capture becomes preferable to carbon trading. These results show that, as far as Turkey is concerned, CCU should be prioritized as a means of reducing nationwide carbon emissions in an environmentally and economically rewarding manner. The model developed in this study is generic and can be applied to any industry at any location, as long as the required inputs are available. Copyright © 2018 Elsevier Ltd. All rights reserved.
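Stripped of capacities, pipelines, and siting, the core economic trade-off the MIP formalizes is a per-ton cost comparison. All figures below are invented for illustration; the roughly $15/ton threshold reported above comes from the paper's full model, not from this sketch.

```python
def cheapest_option(carbon_price, capture_cost=20.0, eor_revenue=8.0):
    """Cost per ton of CO2 under each option (hypothetical figures)."""
    options = {
        "trade": carbon_price,               # buy allowances on the market
        "ccs":   capture_cost,               # capture and store
        "ccu":   capture_cost - eor_revenue, # capture, sell for EOR
    }
    return min(options, key=options.get)

print(cheapest_option(10.0))   # low carbon price: trading wins
print(cheapest_option(25.0))   # high carbon price: capture with EOR wins
```

In the full model the break-even price shifts with pipeline capacity, distance to the storage or utilization site, and EOR demand, which is why it has to be solved as an optimization rather than a lookup.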

  1. Argon Bubble Transport and Capture in Continuous Casting with an External Magnetic Field Using GPU-Based Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Jin, Kai

    Continuous casting produces over 95% of the world's steel today, so even small improvements to this important industrial process can have a large economic impact. In the continuous casting of steel, argon gas is usually injected at the slide gate or stopper rod to prevent clogging, but entrapped bubbles may cause defects in the final product, and many defects are related to the transient fluid flow in the mold region of the caster. An electromagnetic braking (EMBr) device is often used at high casting speeds to modify the mold flow and reduce the surface velocity and its fluctuations. This work studies the physics of the continuous casting process, including the effects of EMBr on the flow in the mold region and the transport and capture of bubbles during solidification. A computationally efficient Reynolds-averaged Navier-Stokes (RANS) model and a high-fidelity large eddy simulation (LES) model are used to understand the motion of the molten steel. A general-purpose multi-GPU Navier-Stokes solver, CUFLOW, is developed, and a coherent-structure Smagorinsky LES model is implemented to model the turbulent flow. A two-way coupled Lagrangian particle tracking model is added to track the motion of argon bubbles, and a particle/bubble capture model based on a force balance at dendrite tips is validated and used to study the capture of argon bubbles by the solidifying steel shell. To investigate the effects of EMBr on the turbulent molten steel flow and bubble transport, an electrical potential method is implemented to solve the magnetohydrodynamic equations. Volume-of-fluid (VOF) simulations are carried out to quantify the additional resistance force on moving argon bubbles caused by a transverse magnetic field; a modified drag coefficient is extrapolated from the results and used in the two-way coupled Eulerian-Lagrangian model to predict argon bubble transport in a caster with EMBr. A hook capture model is also developed to understand the effects of hooks on argon bubble capture.

  2. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation approach is proposed for quantifying the confidence level of mass transfer predictions from a computational fluid dynamics (CFD) model, in which solvent-based carbon dioxide (CO2) capture is simulated and the simulation results are compared to parallel bench-scale experimental data. Two unit problems of increasing complexity are proposed to break down the complex physical and chemical processes of solvent-based CO2 capture into relatively simpler problems that separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e., CO2 mass transfer across a falling monoethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, namely Henry's constant and gas diffusivity, are calibrated to produce posterior distributions, which will be used as the input for the second unit problem addressing the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
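The Bayesian calibration step can be illustrated with a toy grid posterior for two transport-like parameters. The forward model flux = h*sqrt(d) below is a stand-in chosen only to mimic the joint-identifiability structure (a Henry's-constant-like h and a diffusivity-like d entering through a product); it is not the paper's CFD model, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
h_true, d_true, sigma = 2.0, 0.5, 0.05
obs = h_true * np.sqrt(d_true) + sigma * rng.standard_normal(20)

h_grid = np.linspace(0.5, 4.0, 81)
d_grid = np.linspace(0.1, 1.0, 81)
H, D = np.meshgrid(h_grid, d_grid, indexing="ij")
pred = H * np.sqrt(D)  # toy forward model on the parameter grid

# Gaussian likelihood times a flat prior, normalised over the grid.
log_post = (-0.5 * ((obs[None, None, :] - pred[..., None]) / sigma) ** 2).sum(axis=-1)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Only the product h*sqrt(d) is identified by this data, so the posterior
# concentrates on a ridge rather than a point.
i, j = np.unravel_index(post.argmax(), post.shape)
print(h_grid[i] * np.sqrt(d_grid[j]))  # close to h_true*sqrt(d_true) ~ 1.41
```

This ridge structure is why propagating full posterior distributions (rather than point estimates) into the next unit problem matters.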

  3. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

    A guide to data collection, modeling, and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including occurrence or occupancy models for estimating species distribution; abundance models based on many sampling protocols, including distance sampling; capture-recapture models with individual effects; spatial capture-recapture models based on camera trapping and related methods; population and metapopulation dynamic models; and models of biodiversity, community structure, and dynamics.
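The hierarchical separation of state and observation processes can be shown with the simplest member of this family, a site-occupancy model: an occupancy probability psi governs the latent ecological state, a detection probability p governs the observations, and the likelihood mixes the two for all-zero detection histories. Everything below is simulated and illustrative.

```python
import math
import random
from collections import Counter

random.seed(0)
psi_true, p_true, n_sites, n_visits = 0.6, 0.4, 400, 5

# Simulate detection counts: latent occupancy, then imperfect detection.
def simulate():
    hists = []
    for _ in range(n_sites):
        occupied = random.random() < psi_true
        hists.append(sum(occupied and (random.random() < p_true)
                         for _ in range(n_visits)))
    return Counter(hists)  # frequency of each detection count 0..n_visits

counts = simulate()

def neg_log_lik(psi, p):
    ll = 0.0
    for d, n_d in counts.items():
        if d > 0:   # at least one detection: the site is certainly occupied
            ll += n_d * (math.log(psi) + d * math.log(p)
                         + (n_visits - d) * math.log(1 - p))
        else:       # never detected: occupied-but-missed, or truly empty
            ll += n_d * math.log(psi * (1 - p) ** n_visits + (1 - psi))
    return -ll

# Crude grid search in place of a proper optimiser.
grid = [i / 100 for i in range(5, 96)]
psi_hat, p_hat = min(((a, b) for a in grid for b in grid),
                     key=lambda ab: neg_log_lik(*ab))
print(psi_hat, p_hat)  # near 0.6 and 0.4
```

The all-zero term is where the hierarchy earns its keep: without it, non-detection and absence would be conflated and occupancy underestimated.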

  4. Variation in capture efficiency of a beach seine for small fishes

    USGS Publications Warehouse

    Parsley, M.J.; Palmer, D.E.; Burkhardt, R.W.

    1989-01-01

    We determined the capture efficiency of a beach seine as a means of improving abundance estimates of small fishes in littoral areas. Capture efficiency for 14 taxa (individual species or species groups) was determined by seining within an enclosure at night over fine and coarse substrates in the John Day Reservoir, Oregon–Washington. Mean efficiency ranged from 12% for prickly sculpin Cottus asper captured over coarse substrates to 96% for peamouth Mylocheilus caurinus captured over fine substrates. Mean capture efficiency for a taxon (genus or species) was generally higher over fine substrates than over coarse substrates, although mean capture efficiencies over fine substrates were significantly greater for only 3 of 10 taxa. Capture efficiency generally was not influenced by fish density or by water temperature (range, 8–26°C). Conclusions about the relative abundance of taxa captured by seining can change substantially after capture efficiencies are taken into account.
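The practical consequence of the last sentence is a simple division: raw catches are scaled by taxon-specific capture efficiencies. The numbers below are invented, but they show how a ranking of relative abundance can flip once efficiency is taken into account.

```python
# Hypothetical raw catches and capture efficiencies for two taxa
# (the efficiency values echo the extremes reported in the abstract).
catch = {"sculpin": 60, "peamouth": 80}
efficiency = {"sculpin": 0.12, "peamouth": 0.96}

# Efficiency-corrected abundance estimate: catch / efficiency.
adjusted = {sp: catch[sp] / efficiency[sp] for sp in catch}
print(adjusted)  # sculpin ~500 vs peamouth ~83: the apparent ranking flips
```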

  5. Representing functions/procedures and processes/structures for analysis of effects of failures on functions and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Leifker, Daniel B.

    1991-01-01

    Current qualitative device and process models represent only the structure and behavior of physical systems. However, systems in the real world include goal-oriented activities that generally cannot be represented easily using current modeling techniques. An extension of a qualitative modeling system, known as functional modeling, that captures goal-oriented activities explicitly is proposed, and it is shown how such models may be used to support intelligent automation and fault management.
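One way to picture functional modeling is as an explicit mapping from goal-oriented functions to the devices that implement them, so that a device failure can be propagated to the functions (and hence operations) it affects. The function and device names below are invented.

```python
# Hypothetical function-to-device mapping; a real functional model would
# also carry behavior, but the failure-propagation query looks like this.
functions = {
    "maintain_cabin_pressure": ["valve_a", "controller_1"],
    "circulate_coolant":       ["pump_b", "controller_1"],
}

def affected_functions(failed_device):
    """Functions degraded when a given device fails."""
    return sorted(f for f, devs in functions.items() if failed_device in devs)

print(affected_functions("controller_1"))  # shared device: both functions hit
```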

  6. Assessing the Evaluative Content of Personality Questionnaires Using Bifactor Models.

    PubMed

    Biderman, Michael D; McAbee, Samuel T; Job Chen, Zhuo; Hendy, Nhung T

    2018-01-01

    Exploratory bifactor models with keying factors were applied to item response data for the NEO-FFI-3 and HEXACO-PI-R questionnaires. Loadings on a general factor and positive and negative keying factors correlated with independent estimates of item valence, suggesting that item valence influences responses to these questionnaires. Correlations between personality domain scores and measures of self-esteem, depression, and positive and negative affect were all reduced significantly when the influence of evaluative content represented by the general and keying factors was removed. Findings support the need to model personality inventories in ways that capture reactions to evaluative item content.

  7. A constitutive model for magnetostriction based on thermodynamic framework

    NASA Astrophysics Data System (ADS)

    Ho, Kwangsoo

    2016-08-01

    This work presents a general framework for the continuum-based formulation of dissipative materials with magneto-mechanical coupling from the viewpoint of irreversible thermodynamics. The thermodynamically consistent model developed for magnetic hysteresis is extended to include the magnetostrictive effect. The dissipative and hysteretic response of magnetostrictive materials is captured through the introduction of internal state variables. The evolution rates of the magnetostrictive strain and the magnetization are derived from thermodynamic and dissipative potentials in accordance with the general principles of thermodynamics. It is then demonstrated that the constitutive model is capable of describing the magneto-mechanical behavior by comparing simulation results with experimental data reported in the literature.

  8. Decoherence and discrete symmetries in deformed relativistic kinematics

    NASA Astrophysics Data System (ADS)

    Arzano, Michele

    2018-01-01

    Models of deformed Poincaré symmetries based on group valued momenta have long been studied as effective modifications of relativistic kinematics possibly capturing quantum gravity effects. In this contribution we show how they naturally lead to a generalized quantum time evolution of the type proposed to model fundamental decoherence for quantum systems in the presence of an evaporating black hole. The same structures which determine such generalized evolution also lead to a modification of the action of discrete symmetries and of the CPT operator. These features can in principle be used to put phenomenological constraints on models of deformed relativistic symmetries using precision measurements of neutral kaons.

  9. Hydrogen sulfide capture by limestone and dolomite at elevated pressure. 2: Sorbent particle conversion modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zevenhoven, C.A.P.; Yrjas, K.P.; Hupa, M.M.

    1996-03-01

    The physical structure of a limestone or dolomite to be used for in-bed sulfur capture in fluidized-bed gasifiers has a great impact on the efficiency of sulfur capture and sorbent use. In this study an unreacted shrinking core model with variable effective diffusivity is applied to sulfidation test data from a pressurized thermogravimetric apparatus (P-TGA) for a set of physically and chemically different limestone and dolomite samples. The particle size was 250-300 µm for all sorbents, which were characterized by chemical composition analysis, particle density measurement, mercury porosimetry, and BET internal surface measurement. Tests were done under typical conditions for a pressurized fluidized-bed gasifier, i.e., 20% CO2, 950 °C, 20 bar. At these conditions the limestone remains uncalcined, while the dolomite is half-calcined. Additional tests were done at low CO2 partial pressures, yielding calcined limestone and fully calcined dolomite. The generalized model allows determination of values for the initial reaction rate and the product layer diffusivity.
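For reference, the fixed-diffusivity textbook case of the unreacted shrinking core model with product-layer diffusion control obeys t/tau = 1 - 3(1-X)^(2/3) + 2(1-X) for a spherical particle (the paper's variable-diffusivity generalization modifies this). The sketch below inverts the textbook relation numerically to get conversion X at a given dimensionless time.

```python
def conversion(t_over_tau):
    """Invert t/tau = 1 - 3(1-X)^(2/3) + 2(1-X) for X by bisection."""
    g = lambda X: 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X) - t_over_tau
    lo, hi = 0.0, 1.0
    for _ in range(60):          # g is monotone increasing in X on [0, 1]
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

for frac in (0.1, 0.5, 1.0):
    print(frac, round(conversion(frac), 3))
```

For example, at t/tau = 0.5 the relation gives X = 0.875 exactly, and full conversion is reached at t = tau.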

  10. Exploring Duopoly Markets with Conjectural Variations

    ERIC Educational Resources Information Center

    Julien, Ludovic A.; Musy, Olivier; Saïdi, Aurélien W.

    2014-01-01

    In this article, the authors investigate competitive firm behaviors in a two-firm environment assuming linear cost and demand functions. By introducing conjectural variations, they capture the different market structures as specific configurations of a more general model. Conjectural variations are based on the assumption that each firm believes…

  11. De Novo Design of Bioactive Small Molecules by Artificial Intelligence

    PubMed Central

    Merk, Daniel; Friedrich, Lukas; Grisoni, Francesca

    2018-01-01

    Abstract Generative artificial intelligence offers a fresh view on molecular design. We present the first‐time prospective application of a deep learning model for designing new druglike compounds with desired activities. For this purpose, we trained a recurrent neural network to capture the constitution of a large set of known bioactive compounds represented as SMILES strings. By transfer learning, this general model was fine‐tuned on recognizing retinoid X and peroxisome proliferator‐activated receptor agonists. We synthesized five top‐ranking compounds designed by the generative model. Four of the compounds revealed nanomolar to low‐micromolar receptor modulatory activity in cell‐based assays. Apparently, the computational model intrinsically captured relevant chemical and biological knowledge without the need for explicit rules. The results of this study advocate generative artificial intelligence for prospective de novo molecular design, and demonstrate the potential of these methods for future medicinal chemistry. PMID:29319225

  12. A Generalized Mixture Framework for Multi-label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
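A stripped-down version of the ensemble can be put together from off-the-shelf classifier chains: fit several chains with different label orders and mix their posterior label probabilities. The uniform mixing weights below stand in for the learned mixtures-of-experts weights of the paper, and the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=300, n_classes=4,
                                      n_labels=2, random_state=0)

# Two chains that decompose P(Y1..Y4 | X) in opposite label orders.
orders = ([0, 1, 2, 3], [3, 2, 1, 0])
chains = [ClassifierChain(LogisticRegression(max_iter=1000),
                          order=o, random_state=0).fit(X, Y) for o in orders]

# Uniform mixture of the chains' posterior label probabilities.
proba = np.mean([c.predict_proba(X) for c in chains], axis=0)
pred = (proba > 0.5).astype(int)
print("subset accuracy:", (pred == Y).all(axis=1).mean())
```

Each order encodes a different factorization of the joint label distribution, so the mixture can represent dependency structures that no single chain captures.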

  13. Semantic Likelihood Models for Bayesian Inference in Human-Robot Interaction

    NASA Astrophysics Data System (ADS)

    Sweet, Nicholas

    Autonomous systems, particularly unmanned aerial systems (UAS), remain limited in autonomous capabilities, largely due to a poor understanding of their environment. Current sensors simply do not match human perceptive capabilities, impeding progress toward full autonomy. Recent work has shown the value of humans as sources of information within a human-robot team; in target applications, communicating human-generated 'soft data' to autonomous systems enables higher levels of autonomy through large, efficient information gains. This requires development of a 'human sensor model' that allows soft data fusion through Bayesian inference to update the probabilistic belief representations maintained by autonomous systems. Current human sensor models that capture linguistic inputs as semantic information are limited in their ability to generalize likelihood functions for semantic statements: they must be learned from dense data; they do not exploit the contextual information embedded within groundings; and they often limit human input to restrictive and simplistic interfaces. This work provides mechanisms to synthesize human sensor models from constraints based on easily attainable a priori knowledge, develops compression techniques to capture information-dense semantics, and investigates the problem of capturing and fusing semantic information contained within unstructured natural language. A robotic experimental testbed is also developed to validate these contributions.
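The basic fusion step such a human sensor model supports is plain Bayes' rule on a gridded belief: posterior proportional to semantic likelihood times prior. The "near the landmark" likelihood below is a made-up logistic function of distance, standing in for a learned or synthesized semantic likelihood; the grid and landmark location are arbitrary.

```python
import numpy as np

n = 20
prior = np.full((n, n), 1.0 / n ** 2)   # uniform prior belief over a 20x20 grid
landmark = (5, 5)

ys, xs = np.mgrid[0:n, 0:n]
dist = np.hypot(ys - landmark[0], xs - landmark[1])

# Likelihood of the human statement "the target is near the landmark":
# ~1 well inside a 4-cell radius, decaying smoothly to ~0 far away.
lik = 1.0 / (1.0 + np.exp(dist - 4.0))

posterior = lik * prior
posterior /= posterior.sum()

print(np.unravel_index(posterior.argmax(), posterior.shape))  # near (5, 5)
```

One soft observation like this can collapse a uniform belief onto a small region, which is the "large, efficient information gain" the abstract refers to.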

  14. A mixture theory approach to model co- and counter-current two-phase flow in porous media accounting for viscous coupling

    NASA Astrophysics Data System (ADS)

    Qiao, Y.; Andersen, P. Ø.; Evje, S.; Standnes, D. C.

    2018-02-01

    It is well known that relative permeabilities can depend on the flow configuration; they are commonly lower during counter-current flow than during co-current flow. Conventional models must deal with this by manually changing the relative permeability curves depending on the observed flow regime. In this paper we use a novel two-phase momentum-equation approach based on general mixture theory to generate effective relative permeabilities in which this dependence (and others) is automatically captured. In particular, the formulation includes two viscous coupling effects: (i) viscous drag between the flowing phases and the stagnant porous rock and (ii) viscous drag caused by momentum transfer between the flowing phases. The resulting generalized model predicts that during co-current flow the faster-moving fluid accelerates the slower one but is itself decelerated, while during counter-current flow both fluids are decelerated. The implications of these mechanisms are demonstrated by investigating recovery of oil from a matrix block surrounded by water due to a combination of gravity drainage and spontaneous imbibition, a situation highly relevant for naturally fractured reservoirs. We implement relative permeability data obtained experimentally through co-current flooding experiments and then explore the model behavior for flow cases ranging from counter-current dominated to co-current dominated. In particular, it is demonstrated how the proposed model offers possible improvements over conventional modeling by providing generalized mobility functions that automatically capture different flow regimes more correctly with one and the same parameter set.
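The qualitative effect of the cross-coupling terms can be seen with a two-by-two mobility matrix acting on the phase driving forces: with a non-diagonal matrix, opposite driving forces (counter-current flow) yield smaller phase fluxes than equal driving forces (co-current flow). The matrix entries below are invented (only assumed symmetric and positive definite); they are not the paper's closures.

```python
# Coupled mobility matrix: diagonal terms are the usual phase mobilities,
# off-diagonal terms model momentum transfer between the flowing phases.
lam = [[1.0, 0.3],
       [0.3, 0.5]]

def fluxes(force_w, force_o):
    """Phase fluxes for given (signed) driving forces on water and oil."""
    uw = lam[0][0] * force_w + lam[0][1] * force_o
    uo = lam[1][0] * force_w + lam[1][1] * force_o
    return uw, uo

co = fluxes(1.0, 1.0)        # co-current: both phases driven the same way
counter = fluxes(1.0, -1.0)  # counter-current: opposite driving forces

print(co, counter)  # cross terms boost co-current, retard counter-current flux
```

Dividing these fluxes by the driving forces gives effective mobilities that are automatically lower in the counter-current case, which is the behavior the abstract describes.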

  15. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    USGS Publications Warehouse

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km of their home-range centers. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
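The spatial ingredient of such models is a detection function that decays with the distance between a trap and an animal's home-range center, commonly half-normal. The sketch below uses an illustrative scale sigma chosen so the 95% activity radius comes out near the 1.83 km reported in the abstract; both p0 and sigma are assumptions, not the study's estimates.

```python
import math

sigma = 0.75  # km; spatial scale of the half-normal detection function (assumed)

def p_detect(dist_km, p0=0.3):
    """Half-normal detection probability at a given trap-to-center distance."""
    return p0 * math.exp(-dist_km ** 2 / (2 * sigma ** 2))

# For a bivariate-normal activity model, 95% of activity falls within
# sigma * sqrt(q), where q = 5.99 is the chi-square(2) 95% quantile.
r95 = sigma * math.sqrt(5.99)
print(round(r95, 2))  # ~1.84 km, echoing the abstract's scale
```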

  16. Revisiting the Procedures for the Vector Data Quality Assurance in Practice

    NASA Astrophysics Data System (ADS)

    Erdoğan, M.; Torun, A.; Boyacı, D.

    2012-07-01

    Immense use of topographical data in spatial data visualization, business GIS (Geographic Information Systems) solutions and applications, and mobile and location-based services has forced topographic-data providers to create standard, up-to-date and complete data sets in a sustainable frame. Data quality has been studied and researched for more than two decades. There are countless references on its semantics, its conceptual and logical representations, and many applications on spatial databases and GIS. However, there is a gap between research and practice in the sense of spatial data quality, which increases the costs and decreases the efficiency of data production. Spatial data quality is well known to both academia and industry, but usually in different contexts. Research on spatial data quality has identified several issues of practical use, such as descriptive information, metadata, fulfillment of spatial relationships among data, integrity measures, geometric constraints, etc. Industry and data producers realize them in three stages: pre-, co- and post-data capturing. The pre-data-capturing stage covers semantic modelling, data definition, cataloguing, modelling, data dictionary and schema creation processes. The co-data-capturing stage covers general rules of spatial relationships, data- and model-specific rules such as topologic and model-building relationships, geometric thresholds, data extraction guidelines, and the object-object, object-belonging-class, object-non-belonging-class and class-class relationships to be taken into account during data capturing. The post-data-capturing stage covers specified QC (quality check) benchmarks and checking compliance with general and specific rules. The vector data quality criteria differ between the views of producers and users, but these criteria are generally driven by the needs, expectations and feedback of the users. This paper presents a practical method which closes the gap between theory and practice. 
Putting spatial data quality concepts into development and application requires the conceptual, logical and, most importantly, physical existence of the data model, rules and knowledge of their realization in the form of geo-spatial data. The applicable metrics and thresholds are determined on this concrete base. This study discusses the application of geo-spatial data quality issues and QA (quality assurance) and QC procedures in topographic data production. First we introduce the MGCP (Multinational Geospatial Co-production Program) data profile of the NATO (North Atlantic Treaty Organization) DFDD (DGIWG Feature Data Dictionary), the requirements of the data owner, the view of data producers for both data capturing and QC, and finally QA to fulfil user needs. Then our new, practical approach, which divides quality assessment into three phases, is introduced. Finally, the implementation of our approach to realize the metrics, measures and thresholds of the quality definitions is discussed. In this paper, especially geometry and semantics quality and the quality control procedures that can be performed by the producers are discussed. Applicable best practices from our experience with quality-control techniques and with regulations that define the objectives and data-production procedures are given in the final remarks. These quality control procedures should include visual checks over the source data, captured vector data and printouts, some automatic checks that can be performed by software, and some semi-automatic checks involving interaction with quality control personnel. Finally, these quality control procedures should ensure the geometric, semantic, attribution and metadata quality of vector data.

  17. Numerical simulation of groundwater flow for the Yakima River basin aquifer system, Washington

    USGS Publications Warehouse

    Ely, D.M.; Bachmann, M.P.; Vaccaro, J.J.

    2011-01-01

    Five applications (scenarios) of the model were completed to obtain a better understanding of the relation between pumpage and surface-water resources and groundwater levels. For the first three scenarios, the calibrated transient model was used to simulate conditions without: (1) pumpage from all hydrogeologic units, (2) pumpage from basalt hydrogeologic units, and (3) exempt-well pumpage. The simulation results indicated potential streamflow capture by the existing pumpage from 1960 through 2001. The quantity of streamflow capture generally was inversely related to the total quantity of pumpage eliminated in the model scenarios. For the fourth scenario, the model simulated 1994 through 2001 under existing conditions with additional pumpage estimated for pending groundwater applications. The differences between the calibrated model streamflow and this scenario indicated additional decreases in streamflow of 91 cubic feet per second in the model domain. Existing conditions representing 1994 through 2001 were projected through 2025 for the fifth scenario and indicated additional streamflow decreases of 38 cubic feet per second and groundwater-level declines.

  18. Modelling of volatility in monetary transmission mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobešová, Anna; Klepáč, Václav; Kolman, Pavel

    2015-03-10

    The aim of this paper is to compare different approaches to modeling volatility in the monetary transmission mechanism. For this purpose we built a time-varying parameter VAR (TVP-VAR) model with stochastic volatility and a VAR-DCC-GARCH model with conditional variance. Data from three European countries are included in the analysis: the Czech Republic, Germany and Slovakia. Results show that the VAR-DCC-GARCH system captures higher volatility of the observed variables, but the main trends and detected breaks are generally identical in both approaches.

  19. Generalized Dicke Nonequilibrium Dynamics in Trapped Ions

    NASA Astrophysics Data System (ADS)

    Genway, Sam; Li, Weibin; Ates, Cenap; Lanyon, Benjamin P.; Lesanovsky, Igor

    2014-01-01

    We explore trapped ions as a setting to investigate nonequilibrium phases in a generalized Dicke model of dissipative spins coupled to phonon modes. We find a rich dynamical phase diagram including superradiantlike regimes, dynamical phase coexistence, and phonon-lasing behavior. A particular advantage of trapped ions is that these phases and transitions among them can be probed in situ through fluorescence. We demonstrate that the main physical insights are captured by a minimal model and consider an experimental realization with Ca+ ions trapped in a linear Paul trap with a dressing scheme to create effective two-level systems with a tunable dissipation rate.

  20. Simultaneous capture of metal, sulfur and chlorine by sorbents during fluidized bed incineration.

    PubMed

    Ho, T C; Chuang, T C; Chelluri, S; Lee, Y; Hopper, J R

    2001-01-01

    Metal capture experiments were carried out in an atmospheric fluidized bed incinerator to investigate the effect of sulfur and chlorine on metal capture efficiency and the potential for simultaneous capture of metal, sulfur and chlorine by sorbents. In addition to experimental investigation, the effect of sulfur and chlorine on the metal capture process was also theoretically investigated through performing equilibrium calculations based on the minimization of system free energy. The observed results indicate that, in general, the existence of sulfur and chlorine enhances the efficiency of metal capture, especially at low to medium combustion temperatures. The capture mechanisms appear to include particulate scrubbing and chemisorption, depending on the type of sorbent. Among the three sorbents tested, calcined limestone is capable of capturing all three air pollutants simultaneously. The results also indicate that a mixture of the three sorbents, in general, captures more metals than a single sorbent during the process. In addition, the existence of sulfur and chlorine apparently enhances the metal capture process.

  1. THE RADIATIVE NEUTRON CAPTURE ON 2H, 6Li, 7Li, 12C AND 13C AT ASTROPHYSICAL ENERGIES

    NASA Astrophysics Data System (ADS)

    Dubovichenko, Sergey; Dzhazairov-Kakhramanov, Albert; Burkova, Natalia

    2013-05-01

    The continued interest in the study of radiative neutron capture on atomic nuclei is due, on the one hand, to the important role this process plays in the analysis of many fundamental properties of nuclei and nuclear reactions and, on the other hand, to the wide use of capture cross-section data in various applications of nuclear physics and nuclear astrophysics, as well as to the importance of analyzing primordial nucleosynthesis in the Universe. This paper describes results for radiative neutron capture on certain light atomic nuclei at thermal and astrophysical energies. These processes are considered within the framework of the potential cluster model (PCM), a general description of which was given earlier. The use of intercluster potentials obtained from phase-shift analysis is demonstrated in calculations of the radiative capture characteristics. The capture reactions considered are not part of stellar thermonuclear cycles, but are involved in the basic reaction chain of primordial nucleosynthesis in the course of the formation of the Universe.

  2. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    NASA Astrophysics Data System (ADS)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands or providing uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regionally aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
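    As a minimal illustration of double-seasonal structure (harmonic regression on synthetic data, rather than the seasonal time series models evaluated in the study), the sketch below fits daily and weekly cycles to invented hourly demand and forecasts one week ahead:

```python
import numpy as np

# Synthetic hourly demand with daily (24 h) and weekly (168 h) cycles.
rng = np.random.default_rng(0)
t = np.arange(24 * 7 * 8)  # eight weeks of hourly observations
y = (100
     + 20 * np.sin(2 * np.pi * t / 24)
     + 10 * np.sin(2 * np.pi * t / 168)
     + rng.normal(0, 2, t.size))

def seasonal_design(t):
    # Harmonic regressors for both seasonal periods.
    return np.column_stack([
        np.ones(t.size),
        np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24),
        np.sin(2 * np.pi * t / 168), np.cos(2 * np.pi * t / 168),
    ])

# Hold out the final week and forecast it from the fitted cycles.
beta, *_ = np.linalg.lstsq(seasonal_design(t[:-168]), y[:-168], rcond=None)
forecast = seasonal_design(t[-168:]) @ beta
rmse = np.sqrt(np.mean((forecast - y[-168:])**2))
print(f"one-week-ahead RMSE: {rmse:.2f}")
```

    A seasonal ARIMA model would additionally capture the short-lag autocorrelation of the residuals, which is what enables the prediction intervals discussed above.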

  3. Inferring species interactions through joint mark–recapture analysis

    USGS Publications Warehouse

    Yackulic, Charles B.; Korman, Josh; Yard, Michael D.; Dzul, Maria C.

    2018-01-01

    Introduced species are frequently implicated in declines of native species. In many cases, however, evidence linking introduced species to native declines is weak. Failure to make strong inferences regarding the role of introduced species can hamper attempts to predict population viability and delay effective management responses. For many species, mark–recapture analysis is the most rigorous form of demographic analysis. However, to our knowledge, there are no mark–recapture models that allow for joint modeling of interacting species. Here, we introduce a two‐species mark–recapture population model in which the vital rates (and capture probabilities) of one species are allowed to vary in response to the abundance of the other species. We use a simulation study to explore bias and choose an approach to model selection. We then use the model to investigate species interactions between endangered humpback chub (Gila cypha) and introduced rainbow trout (Oncorhynchus mykiss) in the Colorado River between 2009 and 2016. In particular, we test hypotheses about how two environmental factors (turbidity and temperature), intraspecific density dependence, and rainbow trout abundance are related to survival, growth, and capture of juvenile humpback chub. We also project the long‐term effects of different rainbow trout abundances on adult humpback chub abundances. Our simulation study suggests this approach has minimal bias under potentially challenging circumstances (i.e., low capture probabilities) that characterized our application and that model selection using indicator variables could reliably identify the true generating model even when process error was high. When the model was applied to rainbow trout and humpback chub, we identified negative relationships between rainbow trout abundance and the survival, growth, and capture probability of juvenile humpback chub. 
Effects of interspecific interactions on survival and capture probability were strongly supported, whereas support for the growth effect was weaker. Environmental factors were also identified to be important and in many cases stronger than interspecific interactions, and there was still substantial unexplained variation in growth and survival rates. The general approach presented here for combining mark–recapture data for two species is applicable in many other systems and could be modified to model abundance of the invader via other modeling approaches.

  4. Discontinuous Galerkin methods for modeling Hurricane storm surge

    NASA Astrophysics Data System (ADS)

    Dawson, Clint; Kubatko, Ethan J.; Westerink, Joannes J.; Trahan, Corey; Mirabito, Christopher; Michoski, Craig; Panda, Nishant

    2011-09-01

    Storm surge due to hurricanes and tropical storms can result in significant loss of life, property damage, and long-term damage to coastal ecosystems and landscapes. Computer modeling of storm surge can be used for two primary purposes: forecasting of surge as storms approach land for emergency planning and evacuation of coastal populations, and hindcasting of storms for determining risk, development of mitigation strategies, coastal restoration and sustainability. Storm surge is modeled using the shallow water equations, coupled with wind forcing and in some events, models of wave energy. In this paper, we will describe a depth-averaged (2D) model of circulation in spherical coordinates. Tides, riverine forcing, atmospheric pressure, bottom friction, the Coriolis effect and wind stress are all important for characterizing the inundation due to surge. The problem is inherently multi-scale, both in space and time. To model these problems accurately requires significant investments in acquiring high-fidelity input (bathymetry, bottom friction characteristics, land cover data, river flow rates, levees, raised roads and railways, etc.), accurate discretization of the computational domain using unstructured finite element meshes, and numerical methods capable of capturing highly advective flows, wetting and drying, and multi-scale features of the solution. The discontinuous Galerkin (DG) method appears to allow for many of the features necessary to accurately capture storm surge physics. The DG method was developed for modeling shocks and advection-dominated flows on unstructured finite element meshes. It easily allows for adaptivity in both mesh (h) and polynomial order (p) for capturing multi-scale spatial events. Mass conservative wetting and drying algorithms can be formulated within the DG method. In this paper, we will describe the application of the DG method to hurricane storm surge. 
We discuss the general formulation, and new features which have been added to the model to better capture surge in complex coastal environments. These features include modifications to the method to handle spherical coordinates and maintain still flows, improvements in the stability post-processing (i.e. slope-limiting), and the modeling of internal barriers for capturing overtopping of levees and other structures. We will focus on applications of the model to recent events in the Gulf of Mexico, including Hurricane Ike.

  5. Long short-term memory for speaker generalization in supervised speech separation

    PubMed Central

    Chen, Jitong; Wang, DeLiang

    2017-01-01

    Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. To improve speaker generalization, a separation model based on long short-term memory (LSTM) is proposed, which naturally accounts for temporal dynamics of speech. Systematic evaluation shows that the proposed model substantially outperforms a DNN-based model on unseen speakers and unseen noises in terms of objective speech intelligibility. Analyzing LSTM internal representations reveals that LSTM captures long-term speech contexts. It is also found that the LSTM model is more advantageous for low-latency speech separation: even without future frames, it performs better than the DNN model with future frames. The proposed model represents an effective approach for speaker- and noise-independent speech separation. PMID:28679261
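    The time-frequency mask that such models are trained to estimate can be illustrated with the ideal ratio mask (IRM), a common supervised-separation training target; the toy spectra below are random arrays, not real speech features:

```python
import numpy as np

# Ideal ratio mask (IRM) on toy time-frequency power spectra; supervised
# separation trains a network to predict a mask like this from noisy
# acoustic features.
rng = np.random.default_rng(1)
speech_power = rng.uniform(0.0, 4.0, (100, 64))  # |S|^2 per T-F unit
noise_power = rng.uniform(0.0, 4.0, (100, 64))   # |N|^2 per T-F unit

irm = np.sqrt(speech_power / (speech_power + noise_power))

# Applying the mask to the noisy magnitude attenuates noise-dominated
# T-F units while preserving speech-dominated ones.
noisy_mag = np.sqrt(speech_power + noise_power)
estimate = irm * noisy_mag
print("mask range:", irm.min(), irm.max())
```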

  6. Extracting Baseline Electricity Usage Using Gradient Tree Boosting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Taehoon; Lee, Dongeun; Choi, Jaesik

    To understand how specific interventions affect a process observed over time, we need to control for the other factors that influence outcomes. Such a model that captures all factors other than the one of interest is generally known as a baseline. In our study of how different pricing schemes affect residential electricity consumption, the baseline would need to capture the impact of outdoor temperature along with many other factors. In this work, we examine a number of different data mining techniques and demonstrate Gradient Tree Boosting (GTB) to be an effective method to build the baseline. We train GTB on data prior to the introduction of new pricing schemes, and apply the known temperature following the introduction of new pricing schemes to predict electricity usage with the expected temperature correction. Our experiments and analyses show that the baseline models generated by GTB capture the core characteristics over the two years with the new pricing schemes. In contrast to the majority of regression based techniques which fail to capture the lag between the peak of daily temperature and the peak of electricity usage, the GTB generated baselines are able to correctly capture the delay between the temperature peak and the electricity peak. Furthermore, subtracting this temperature-adjusted baseline from the observed electricity usage, we find that the resulting values are more amenable to interpretation, which demonstrates that the temperature-adjusted baseline is indeed effective.
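    A hedged sketch of the baseline idea using scikit-learn's gradient boosting (the synthetic data and lag structure are invented for illustration and are not the study's data): with lagged temperature features, the tree ensemble can capture a usage peak that trails the temperature peak.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Invented data: usage tracks temperature with a 3-hour lag, which a
# regression on the current temperature alone would miss.
rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # 60 days of hourly data
temp = 20 + 8 * np.sin(2 * np.pi * (hours - 15) / 24)  # peak mid-afternoon
usage = 5 + 0.4 * np.roll(temp, 3) + rng.normal(0, 0.2, hours.size)

# Features: current temperature plus 1-6 hour lags (drop the first day
# to avoid the wrap-around introduced by np.roll).
X = np.column_stack([np.roll(temp, k) for k in range(7)])[24:]
y = usage[24:]

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:1000], y[:1000])
print(f"held-out R^2: {model.score(X[1000:], y[1000:]):.3f}")
```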

  7. The effect of capturing the correct turbulence dissipation rate in BHR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarzkopf, John Dennis; Ristorcelli, Raymond

    In this manuscript, we discuss the shortcoming of a quasi-equilibrium assumption made in the BHR closure model. Turbulence closure models generally assume fully developed turbulence, which is not applicable to 1) non-equilibrium turbulence (e.g. change in mean pressure gradient) or 2) laminar-turbulence transition flows. Based on DNS data, we show that the current BHR dissipation equation [modeled based on the fully developed turbulence phenomenology] does not capture important features of nonequilibrium flows. To demonstrate our thesis, we use the BHR equations to predict a non-equilibrium flow both with the BHR dissipation and the dissipation from DNS. We find that the prediction can be substantially improved, both qualitatively and quantitatively, with the correct dissipation rate. We conclude that a new set of nonequilibrium phenomenological assumptions must be used to develop a new model equation for the dissipation to accurately predict the turbulence time scale used by other models.

  8. Non-extensitivity vs. informative moments for financial models —A unifying framework and empirical results

    NASA Astrophysics Data System (ADS)

    Herrmann, K.

    2009-11-01

    Information-theoretic approaches still play a minor role in financial market analysis. Nonetheless, two very similar approaches have evolved during the last years, one in so-called econophysics and the other in econometrics. Both generalize the notion of GARCH processes in an information-theoretic sense and are able to capture kurtosis better than traditional models. In this article we present both approaches in a more general framework. The latter allows the derivation of a wide range of new models. We choose a third model using an entropy measure suggested by Kapur. In an application to financial market data, we find that all considered models - with similar flexibility in terms of skewness and kurtosis - lead to very similar results.
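    For reference, the GARCH(1,1) recursion that both approaches generalize can be sketched as follows; parameter values are illustrative:

```python
import numpy as np

# GARCH(1,1): sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
def garch_variance(returns, omega, alpha, beta):
    var = np.empty(returns.size)
    var[0] = omega / (1 - alpha - beta)  # unconditional variance
    for t in range(1, returns.size):
        var[t] = omega + alpha * returns[t - 1]**2 + beta * var[t - 1]
    return var

# Simulate returns driven by the same recursion (illustrative parameters).
rng = np.random.default_rng(2)
omega, alpha, beta = 0.1, 0.1, 0.85
r = np.empty(2000)
v = omega / (1 - alpha - beta)
for t in range(r.size):
    r[t] = np.sqrt(v) * rng.standard_normal()
    v = omega + alpha * r[t]**2 + beta * v

kurt = np.mean(r**4) / np.mean(r**2)**2
print(f"sample kurtosis: {kurt:.2f}")  # GARCH typically yields kurtosis > 3
```

    The volatility clustering produced by the recursion is what generates the fat tails (excess kurtosis) that the information-theoretic generalizations aim to capture even better.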

  9. Modeling for Battery Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

    For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanisms has advanced. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles; electrical circuit equivalent models are an example. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each type has its advantages and disadvantages. The former has the advantage of being computationally efficient, but has limited accuracy and robustness due to its approximations and, as a result, cannot represent aging well. The latter has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. 
In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures the effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are made that retain computational efficiency for online implementation. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development is presented, including model validation results and EOD prediction results.

  10. A Simulation Study Comparing Epidemic Dynamics on Exponential Random Graph and Edge-Triangle Configuration Type Contact Network Models

    PubMed Central

    Rolls, David A.; Wang, Peng; McBryde, Emma; Pattison, Philippa; Robins, Garry

    2015-01-01

    We compare two broad types of empirically grounded random network models in terms of their abilities to capture both network features and simulated Susceptible-Infected-Recovered (SIR) epidemic dynamics. The types of network models are exponential random graph models (ERGMs) and extensions of the configuration model. We use three kinds of empirical contact networks, chosen to provide both variety and realistic patterns of human contact: a highly clustered network, a bipartite network and a snowball sampled network of a “hidden population”. In the case of the snowball sampled network we present a novel method for fitting an edge-triangle model. In our results, ERGMs consistently capture clustering as well or better than configuration-type models, but the latter models better capture the node degree distribution. Despite the additional computational requirements to fit ERGMs to empirical networks, the use of ERGMs provides only a slight improvement in the ability of the models to recreate epidemic features of the empirical network in simulated SIR epidemics. Generally, SIR epidemic results from using configuration-type models fall between those from a random network model (i.e., an Erdős-Rényi model) and an ERGM. The addition of subgraphs of size four to edge-triangle type models does improve agreement with the empirical network for smaller densities in clustered networks. Additional subgraphs do not make a noticeable difference in our example, although we would expect the ability to model cliques to be helpful for contact networks exhibiting household structure. PMID:26555701
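    The simulated SIR dynamics used for such comparisons can be sketched on any contact network given as an adjacency matrix; the network below is a small Erdős-Rényi stand-in, not one of the paper's empirical networks:

```python
import numpy as np

# Discrete-time SIR simulation on a contact network given as an
# adjacency matrix (a small Erdos-Renyi stand-in for illustration).
def sir_epidemic(adj, beta, gamma, seed, rng, max_steps=1000):
    n = adj.shape[0]
    state = np.zeros(n, dtype=int)  # 0 = S, 1 = I, 2 = R
    state[seed] = 1
    for _ in range(max_steps):
        infected = state == 1
        if not infected.any():
            break
        # P(infection) = 1 - (1 - beta)^k for an S node with k infected
        # neighbors; each I node recovers with probability gamma.
        k = adj[:, infected].sum(axis=1)
        new_inf = (state == 0) & (rng.random(n) < 1 - (1 - beta) ** k)
        recovered = infected & (rng.random(n) < gamma)
        state[new_inf] = 1
        state[recovered] = 2
    return np.mean(state == 2)  # final epidemic size as a fraction

rng = np.random.default_rng(4)
n = 200
upper = np.triu((rng.random((n, n)) < 0.05).astype(int), 1)
adj = upper + upper.T  # symmetric adjacency, no self-loops
print(f"final epidemic size: {sir_epidemic(adj, 0.3, 0.2, 0, rng):.2f}")
```

    Replacing the adjacency matrix with one drawn from a fitted ERGM or configuration-type model is exactly the comparison the study performs: the same epidemic process is run on networks generated by each model and on the empirical network.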

  11. Repairing Femoral Fractures: A Model Lesson in Biomaterial Science

    ERIC Educational Resources Information Center

    Sakakeeny, Jarred

    2006-01-01

    Biomaterial science is a rapidly growing field that has scientists and doctors searching for new ways to repair the body. A merger between medicine and engineering, biomaterials can be complex subject matter, and it can certainly capture the minds of middle school students. In the lesson described in this article, seventh graders generally learn…

  12. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume, the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is, surprisingly, simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration, and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The models revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order-of-magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. 
The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
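    The blending step of focus stacking (the final stage of AFS as described above) can be sketched with a simple per-pixel sharpness rule; this is a generic illustration, not the authors' registration-and-blending pipeline:

```python
import numpy as np

# Per-pixel blending step of a focus stack: keep the value from
# whichever registered image is locally sharpest, scored here by
# Laplacian magnitude.
def laplacian_mag(img):
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def focus_stack(images):
    sharpness = np.stack([laplacian_mag(im) for im in images])
    best = np.argmax(sharpness, axis=0)  # sharpest image index per pixel
    stacked = np.stack(images)
    return np.take_along_axis(stacked, best[None], axis=0)[0]

# Toy stack: one image with local contrast, one defocused (flat) image.
rng = np.random.default_rng(5)
sharp = rng.random((32, 32))              # high local contrast
blurry = np.full((32, 32), sharp.mean())  # no local contrast
out = focus_stack([blurry, sharp])
print("fraction of pixels taken from the sharp image:", np.mean(out == sharp))
```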

  13. Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.

    PubMed

    Caglar, Mehmet Umut; Pal, Ranadip

    2013-01-01

    Probabilistic models are regularly applied in genetic regulatory network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including Stochastic Master Equations and Probabilistic Boolean Networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the Stochastic Master Equation is a fundamental model that can describe the system under investigation in fine detail, but applying this model is enormously expensive computationally. On the other hand, the Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results from applying the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used Stochastic Simulation Algorithm for equivalent accuracy.
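    The Zassenhaus idea of approximating the exponential of a sum by a product of exponentials can be sketched numerically (generic small matrices, not the paper's tensor representation):

```python
import numpy as np
from scipy.linalg import expm

# Zassenhaus formula: exp(A + B) = exp(A) exp(B) exp(-[A, B] / 2) times
# higher-order commutator factors. Keeping the first commutator factor is
# already more accurate than the plain product exp(A) exp(B).
rng = np.random.default_rng(3)
A = 0.1 * rng.normal(size=(4, 4))
B = 0.1 * rng.normal(size=(4, 4))

exact = expm(A + B)
plain = expm(A) @ expm(B)                   # drops all commutators
comm = A @ B - B @ A
zass = expm(A) @ expm(B) @ expm(-comm / 2)  # keeps the [A, B] factor

err_plain = np.linalg.norm(exact - plain)
err_zass = np.linalg.norm(exact - zass)
print(f"plain product error {err_plain:.2e}, Zassenhaus error {err_zass:.2e}")
```

    The computational benefit comes from the right-hand side: when A and B are individually sparse or structured, their exponentials can be applied far more cheaply than the exponential of the full sum.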

  14. Context and competition in the capture of visual attention.

    PubMed

    Hickey, Clayton; Theeuwes, Jan

    2011-10-01

    Competition-based models of visual attention propose that perceptual ambiguity is resolved through inhibition, which is stronger when objects share a greater number of neural receptive fields (RFs). According to this theory, the misallocation of attention to a salient distractor--that is, the capture of attention--can be indexed in RF-scaled interference costs. We used this pattern to investigate distractor-related costs in visual search across several manipulations of temporal context. Distractor costs are generally larger under circumstances in which the distractor can be defined by features that have recently characterised the target, suggesting that capture occurs in these trials. However, our results show that search for a target in the presence of a salient distractor also produces RF-scaled costs when the features defining the target and distractor do not vary from trial to trial. Contextual differences in distractor costs appear to reflect something other than capture, perhaps a qualitative difference in the type of attentional mechanism deployed to the distractor.

  15. Temporal Topic Modeling to Assess Associations between News Trends and Infectious Disease Outbreaks.

    PubMed

    Ghosh, Saurav; Chakraborty, Prithwish; Nsoesie, Elaine O; Cohn, Emily; Mekaru, Sumiko R; Brownstein, John S; Ramakrishnan, Naren

    2017-01-19

    In retrospective assessments, internet news reports have been shown to capture early reports of unknown infectious disease transmission prior to official laboratory confirmation. In general, media interest and reporting peaks and wanes during the course of an outbreak. In this study, we quantify the extent to which media interest during infectious disease outbreaks is indicative of trends of reported incidence. We introduce an approach that uses supervised temporal topic models to transform large corpora of news articles into temporal topic trends. The key advantages of this approach include: applicability to a wide range of diseases and ability to capture disease dynamics, including seasonality, abrupt peaks and troughs. We evaluated the method using data from multiple infectious disease outbreaks reported in the United States of America (U.S.), China, and India. We demonstrate that temporal topic trends extracted from disease-related news reports successfully capture the dynamics of multiple outbreaks such as whooping cough in U.S. (2012), dengue outbreaks in India (2013) and China (2014). Our observations also suggest that, when news coverage is uniform, efficient modeling of temporal topic trends using time-series regression techniques can estimate disease case counts with increased precision before official reports by health organizations.
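    The final step described above, estimating case counts from temporal topic trends with time-series regression, can be sketched with a toy lagged linear model (all numbers are hypothetical, and the supervised topic model itself is not reproduced here):

    ```python
    def fit_line(x, y):
        """Closed-form ordinary least squares for y = a + b*x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
        return my - b * mx, b

    # hypothetical weekly topic-trend intensities and reported case counts;
    # news interest is assumed here to lead official case reports by one week
    topic = [1.0, 2.0, 5.0, 9.0, 7.0, 4.0, 2.0]
    cases = [5.0, 7.0, 12.0, 27.0, 47.0, 37.0, 22.0]

    # regress cases in week t on topic intensity in week t-1
    a, b = fit_line(topic[:-1], cases[1:])
    next_week_estimate = a + b * topic[-1]
    ```

    With a one-week lead, the fitted line can produce a case-count estimate before the next official report arrives, which is the sense in which the abstract speaks of increased precision ahead of health-organization reporting.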

  16. Temporal Topic Modeling to Assess Associations between News Trends and Infectious Disease Outbreaks

    NASA Astrophysics Data System (ADS)

    Ghosh, Saurav; Chakraborty, Prithwish; Nsoesie, Elaine O.; Cohn, Emily; Mekaru, Sumiko R.; Brownstein, John S.; Ramakrishnan, Naren

    2017-01-01

    In retrospective assessments, internet news reports have been shown to capture early reports of unknown infectious disease transmission prior to official laboratory confirmation. In general, media interest and reporting peaks and wanes during the course of an outbreak. In this study, we quantify the extent to which media interest during infectious disease outbreaks is indicative of trends of reported incidence. We introduce an approach that uses supervised temporal topic models to transform large corpora of news articles into temporal topic trends. The key advantages of this approach include: applicability to a wide range of diseases and ability to capture disease dynamics, including seasonality, abrupt peaks and troughs. We evaluated the method using data from multiple infectious disease outbreaks reported in the United States of America (U.S.), China, and India. We demonstrate that temporal topic trends extracted from disease-related news reports successfully capture the dynamics of multiple outbreaks such as whooping cough in U.S. (2012), dengue outbreaks in India (2013) and China (2014). Our observations also suggest that, when news coverage is uniform, efficient modeling of temporal topic trends using time-series regression techniques can estimate disease case counts with increased precision before official reports by health organizations.

  17. Temporal Topic Modeling to Assess Associations between News Trends and Infectious Disease Outbreaks

    PubMed Central

    Ghosh, Saurav; Chakraborty, Prithwish; Nsoesie, Elaine O.; Cohn, Emily; Mekaru, Sumiko R.; Brownstein, John S.; Ramakrishnan, Naren

    2017-01-01

    In retrospective assessments, internet news reports have been shown to capture early reports of unknown infectious disease transmission prior to official laboratory confirmation. In general, media interest and reporting peaks and wanes during the course of an outbreak. In this study, we quantify the extent to which media interest during infectious disease outbreaks is indicative of trends of reported incidence. We introduce an approach that uses supervised temporal topic models to transform large corpora of news articles into temporal topic trends. The key advantages of this approach include: applicability to a wide range of diseases and ability to capture disease dynamics, including seasonality, abrupt peaks and troughs. We evaluated the method using data from multiple infectious disease outbreaks reported in the United States of America (U.S.), China, and India. We demonstrate that temporal topic trends extracted from disease-related news reports successfully capture the dynamics of multiple outbreaks such as whooping cough in U.S. (2012), dengue outbreaks in India (2013) and China (2014). Our observations also suggest that, when news coverage is uniform, efficient modeling of temporal topic trends using time-series regression techniques can estimate disease case counts with increased precision before official reports by health organizations. PMID:28102319

  18. An Extended Passive Motion Paradigm for Human-Like Posture and Movement Planning in Redundant Manipulators

    PubMed Central

    Tommasino, Paolo; Campolo, Domenico

    2017-01-01

    A major challenge in robotics and computational neuroscience concerns the posture/movement problem in the presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed us to capture postural strategies such as Donders' law. In this work, after presenting this general model, specified as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is actually able to also predict movements. More specifically, the passive motion paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit experimental postural strategies. Here, we show how joint damping, in particular its anisotropy, directly affects task-space movements. Rather than using damping parameters to fit task-space motions a posteriori, we make the a priori hypothesis that damping is proportional to stiffness. Remarkably, this allows a postural-fitted model to also capture dynamic performance such as curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in the literature. PMID:29249954

  19. One size does not fit all: Adapting mark-recapture and occupancy models for state uncertainty

    USGS Publications Warehouse

    Kendall, W.L.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Multistate capture-recapture models continue to be employed with greater frequency to test hypotheses about metapopulation dynamics and life history, and more recently disease dynamics. In recent years efforts have begun to adjust these models for cases where there is uncertainty about an animal's state upon capture. These efforts can be categorized into models that permit misclassification between two states to occur in either direction or one direction, where state is certain for a subset of individuals or is always uncertain, and where estimation is based on one sampling occasion per period of interest or multiple sampling occasions per period. State uncertainty also arises in modeling patch occupancy dynamics. I consider several case studies involving bird and marine mammal studies that illustrate how misclassified states can arise, and outline model structures for properly utilizing the data that are produced. In each case misclassification occurs in only one direction (thus there is a subset of individuals or patches where state is known with certainty), and there are multiple sampling occasions per period of interest. For the cases involving capture-recapture data I allude to a general model structure that could include each example as a special case. However, this collection of cases also illustrates how difficult it is to develop a model structure that can be directly useful for answering every ecological question of interest and account for every type of data from the field.
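    For readers new to this literature, the multistate machinery above generalizes much simpler closed-population estimators. A sketch of the classical two-sample capture-recapture abundance estimate (Chapman's bias-corrected form; the counts are hypothetical, and this ignores the state-uncertainty issues the abstract addresses):

    ```python
    def chapman_estimate(n1, n2, m2):
        """Bias-corrected two-sample capture-recapture abundance estimate:
        n1 animals marked on occasion 1, n2 captured on occasion 2,
        m2 of which were already marked."""
        return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

    # e.g. 100 marked, 80 caught on a second occasion, 20 of them recaptures
    n_hat = chapman_estimate(100, 80, 20)
    ```

    The multistate models in the abstract extend this logic to transitions among states (including unobservable ones), which is why they require either extra information such as the robust design or explicit constraints on transition probabilities.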

  20. Applying generalized linear models as an explanatory tool of sex steroids, thyroid hormones and their relationships with environmental and physiologic factors in immature East Pacific green sea turtles (Chelonia mydas).

    PubMed

    Labrada-Martagón, Vanessa; Méndez-Rodríguez, Lia C; Mangel, Marc; Zenteno-Savín, Tania

    2013-09-01

    Generalized linear models were fitted to evaluate the relationship between 17β-estradiol (E2), testosterone (T) and thyroxine (T4) levels in immature East Pacific green sea turtles (Chelonia mydas) and their body condition, size, mass, blood biochemistry parameters, handling time, year, season and site of capture. According to external (tail size) and morphological (<77.3 cm straight carapace length) characteristics, 95% of the individuals were juveniles. Hormone levels, assessed on sea turtles subjected to a capture stress protocol, were <34.7 nmol T L(-1), <532.3 pmol E2 L(-1) and <43.8 nmol T4 L(-1). The statistical model explained biologically plausible metabolic relationships between hormone concentrations and blood biochemistry parameters (e.g. glucose, cholesterol) and the potential effect of environmental variables (season and study site). The variables handling time and year did not contribute significantly to explaining hormone levels. Differences in sex steroids between season and study sites found by the models coincided with specific nutritional, physiological and body condition differences related to the specific habitat conditions. The models correctly predicted the median levels of the measured hormones in green sea turtles, which confirms the fitted model's utility. It is suggested that quantitative predictions could be possible when the model is tested with additional data. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. XML-Based SHINE Knowledge Base Interchange Language

    NASA Technical Reports Server (NTRS)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  2. Electron-capture Rates for pf-shell Nuclei in Stellar Environments and Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Suzuki, Toshio; Honma, Michio; Mori, Kanji; Famiano, Michael A.; Kajino, Toshitaka; Hidakai, Jun; Otsuka, Takaharu

    Gamow-Teller strengths in pf-shell nuclei obtained with a new shell-model Hamiltonian, GXPF1J, are used to evaluate electron-capture rates in pf-shell nuclei in stellar environments. The nuclear weak rates with GXPF1J, which are generally smaller than previous evaluations for proton-rich nuclei, are applied to nucleosynthesis in type Ia supernova explosions. The updated rates are found to lead to less production of neutron-rich nuclei such as 58Ni and 54Cr, thus pointing toward a solution of the problem of over-production of neutron-rich isotopes of iron-group nuclei compared to the solar abundance.

  3. Age Mediation of Frontoparietal Activation during Visual Feature Search

    PubMed Central

    Madden, David J.; Parks, Emily L.; Davis, Simon W.; Diaz, Michele T.; Potter, Guy G.; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto

    2014-01-01

    Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19 – 29 years of age) and 21 older adults (60 – 87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. PMID:25102420
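    The mediational analyses reported above compare models in which one variable transmits the effect of another on the outcome. A generic sketch of the regression-based logic (indirect effect a*b in the Baron-Kenny tradition; the data are synthetic and noise-free, not the study's measurements):

    ```python
    def solve(A, b):
        """Gauss-Jordan elimination with partial pivoting for a small A x = b."""
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for i in range(n):
            p = max(range(i, n), key=lambda r: abs(M[r][i]))
            M[i], M[p] = M[p], M[i]
            for r in range(n):
                if r != i and M[r][i] != 0.0:
                    f = M[r][i] / M[i][i]
                    M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def ols(X, y):
        """OLS coefficients via the normal equations (X'X) beta = X'y."""
        k = len(X[0])
        XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
        Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
        return solve(XtX, Xty)

    # synthetic example of pure mediation: x -> m -> y, no direct x -> y path
    x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    e = [1.0, -1.0, 0.0, 0.0, -1.0, 1.0]          # perturbation orthogonal to x
    m = [2.0 * xi + ei for xi, ei in zip(x, e)]   # mediator
    y = [3.0 * mi for mi in m]                    # outcome depends only on m

    a = ols([[1.0, xi] for xi in x], m)[1]                            # x -> m
    direct, b = ols([[1.0, xi, mi] for xi, mi in zip(x, m)], y)[1:]   # x, m -> y
    indirect = a * b
    ```

    In this construction the direct path vanishes and the indirect effect a*b carries the whole association, the pattern a mediation test looks for; the study's analyses apply the same comparison to age, frontal activation, and attentional capture.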

  4. Electricity from fossil fuels without CO2 emissions: assessing the costs of carbon dioxide capture and sequestration in U.S. electricity markets.

    PubMed

    Johnson, T L; Keith, D W

    2001-10-01

    The decoupling of fossil-fueled electricity production from atmospheric CO2 emissions via CO2 capture and sequestration (CCS) is increasingly regarded as an important means of mitigating climate change at a reasonable cost. Engineering analyses of CO2 mitigation typically compare the cost of electricity for a base generation technology to that for a similar plant with CO2 capture and then compute the carbon emissions mitigated per unit of cost. It can be hard to interpret mitigation cost estimates from this plant-level approach when a consistent base technology cannot be identified. In addition, neither engineering analyses nor general equilibrium models can capture the economics of plant dispatch. A realistic assessment of the costs of carbon sequestration as an emissions abatement strategy in the electric sector therefore requires a systems-level analysis. We discuss various frameworks for computing mitigation costs and introduce a simplified model of electric sector planning. Results from a "bottom-up" engineering-economic analysis for a representative U.S. North American Electric Reliability Council (NERC) region illustrate how the penetration of CCS technologies and the dispatch of generating units vary with the price of carbon emissions and thereby determine the relationship between mitigation cost and emissions reduction.
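    The plant-level metric described above, cost per unit of carbon emissions mitigated, is usually expressed as the cost of CO2 avoided. A one-function sketch with illustrative numbers (not values from this study):

    ```python
    def cost_of_co2_avoided(coe_ref, coe_cc, em_ref, em_cc):
        """Extra cost of electricity per tonne of CO2 avoided.
        coe_*: cost of electricity in $/MWh for the reference plant and the
        capture plant; em_*: emission rates in tCO2/MWh."""
        return (coe_cc - coe_ref) / (em_ref - em_cc)

    # illustrative: $60/MWh and 0.8 tCO2/MWh for the reference plant,
    # $95/MWh and 0.1 tCO2/MWh for the same plant with capture
    avoided_cost = cost_of_co2_avoided(60.0, 95.0, 0.8, 0.1)  # $/tCO2
    ```

    The abstract's point is that this ratio is only meaningful when a consistent reference plant exists; systems-level dispatch modeling is needed precisely when it does not.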

  5. Electricity from Fossil Fuels without CO2 Emissions: Assessing the Costs of Carbon Dioxide Capture and Sequestration in U.S. Electricity Markets.

    PubMed

    Johnson, Timothy L; Keith, David W

    2001-10-01

    The decoupling of fossil-fueled electricity production from atmospheric CO2 emissions via CO2 capture and sequestration (CCS) is increasingly regarded as an important means of mitigating climate change at a reasonable cost. Engineering analyses of CO2 mitigation typically compare the cost of electricity for a base generation technology to that for a similar plant with CO2 capture and then compute the carbon emissions mitigated per unit of cost. It can be hard to interpret mitigation cost estimates from this plant-level approach when a consistent base technology cannot be identified. In addition, neither engineering analyses nor general equilibrium models can capture the economics of plant dispatch. A realistic assessment of the costs of carbon sequestration as an emissions abatement strategy in the electric sector therefore requires a systems-level analysis. We discuss various frameworks for computing mitigation costs and introduce a simplified model of electric sector planning. Results from a "bottom-up" engineering-economic analysis for a representative U.S. North American Electric Reliability Council (NERC) region illustrate how the penetration of CCS technologies and the dispatch of generating units vary with the price of carbon emissions and thereby determine the relationship between mitigation cost and emissions reduction.

  6. Stochastic Energy Deployment System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-11-30

    SEDS is an economy-wide energy model of the U.S. The model captures dynamics between supply, demand, and pricing of the major energy types consumed and produced within the U.S. These dynamics are captured by including: the effects of macroeconomics; the resources and costs of primary energy types such as oil, natural gas, coal, and biomass; the conversion of primary fuels into energy products like petroleum products, electricity, biofuels, and hydrogen; and lastly the end-use consumption attributable to residential and commercial buildings, light and heavy transportation, and industry. Projections from SEDS extend to the year 2050 by one-year time steps and are generally projected at the national level. SEDS differs from other economy-wide energy models in that it explicitly accounts for uncertainty in technology, markets, and policy. SEDS has been specifically developed to avoid the computational burden, and sometimes fruitless labor, that comes from modeling significantly low-level details. Instead, SEDS focuses on the major drivers within the energy economy and evaluates the impact of uncertainty around those drivers.

  7. Strengthening of Ocean Heat Uptake Efficiency Associated with the Recent Climate Hiatus

    NASA Technical Reports Server (NTRS)

    Watanabe, Masahiro; Kamae, Youichi; Yoshimori, Masakazu; Oka, Akira; Sato, Makiko; Ishii, Masayoshi; Mochizuki, Takashi; Kimoto, Masahide

    2013-01-01

    The rate of increase of global-mean surface air temperature (SAT(sub g)) has apparently slowed during the last decade. We investigated the extent to which state-of-the-art general circulation models (GCMs) can capture this hiatus period by using multimodel ensembles of historical climate simulations. While the SAT(sub g) linear trend for the last decade is not captured by their ensemble means regardless of differences in model generation and external forcing, it is barely represented by an 11-member ensemble of a GCM, suggesting an internal origin of the hiatus associated with active heat uptake by the oceans. Besides, we found opposite changes in ocean heat uptake efficiency (k), weakening in models and strengthening in nature, which explain why the models tend to overestimate the SAT(sub g) trend. The weakening of k commonly found in GCMs seems to be an inevitable response of the climate system to global warming, suggesting the recovery from hiatus in coming decades.

  8. De Novo Design of Bioactive Small Molecules by Artificial Intelligence.

    PubMed

    Merk, Daniel; Friedrich, Lukas; Grisoni, Francesca; Schneider, Gisbert

    2018-01-01

    Generative artificial intelligence offers a fresh view on molecular design. We present the first-time prospective application of a deep learning model for designing new druglike compounds with desired activities. For this purpose, we trained a recurrent neural network to capture the constitution of a large set of known bioactive compounds represented as SMILES strings. By transfer learning, this general model was fine-tuned on recognizing retinoid X and peroxisome proliferator-activated receptor agonists. We synthesized five top-ranking compounds designed by the generative model. Four of the compounds revealed nanomolar to low-micromolar receptor modulatory activity in cell-based assays. Apparently, the computational model intrinsically captured relevant chemical and biological knowledge without the need for explicit rules. The results of this study advocate generative artificial intelligence for prospective de novo molecular design, and demonstrate the potential of these methods for future medicinal chemistry. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  9. Evaluation of an 18-year CMAQ simulation: Seasonal variations and long-term temporal changes in sulfate and nitrate

    NASA Astrophysics Data System (ADS)

    Civerolo, Kevin; Hogrefe, Christian; Zalewsky, Eric; Hao, Winston; Sistla, Gopal; Lynn, Barry; Rosenzweig, Cynthia; Kinney, Patrick L.

    2010-10-01

    This paper compares spatial and seasonal variations and temporal trends in modeled and measured concentrations of sulfur and nitrogen compounds in wet and dry deposition over an 18-year period (1988-2005) over a portion of the northeastern United States. Substantial emissions reduction programs occurred over this time period, including Title IV of the Clean Air Act Amendments of 1990, which primarily resulted in large decreases in sulfur dioxide (SO2) emissions by 1995, and nitrogen oxide (NOx) trading programs, which resulted in large decreases in warm season NOx emissions by 2004. Additionally, NOx emissions from mobile sources declined more gradually over this period. The results presented here illustrate the use of both operational and dynamic model evaluation and suggest that the modeling system largely captures the seasonal and long-term changes in sulfur compounds. The modeling system generally captures the long-term trends in nitrogen compounds, but does not reproduce the average seasonal variation or spatial patterns in nitrate.

  10. Engineering and Economic Analysis of an Advanced Ultra-Supercritical Pulverized Coal Power Plant with and without Post-Combustion Carbon Capture Task 7. Design and Economic Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booras, George; Powers, J.; Riley, C.

    2015-09-01

    This report evaluates the economics and performance of two A-USC PC power plants; Case 1 is a conventionally configured A-USC PC power plant with superior emission controls, but without CO2 removal; and Case 2 adds a post-combustion carbon capture (PCC) system to the plant from Case 1, using the design and heat integration strategies from EPRI’s 2015 report, “Best Integrated Coal Plant.” The capture design basis for this case is “partial,” to meet EPA’s proposed New Source Performance Standard, which was initially proposed as 500 kg-CO2/MWh (gross) or 1100 lb-CO2/MWh (gross), but modified in August 2015 to 635 kg-CO2/MWh (gross) or 1400 lb-CO2/MWh (gross). This report draws upon the collective experience of consortium members, with EPRI and General Electric leading the study. General Electric provided the steam cycle analysis as well as the steam turbine design and cost estimating. EPRI performed integrated plant performance analysis using EPRI’s PC Cost model.

  11. Financial Structure and Economic Welfare: Applied General Equilibrium Development Economics.

    PubMed

    Townsend, Robert

    2010-09-01

    This review provides a common framework for researchers thinking about the next generation of micro-founded macro models of growth, inequality, and financial deepening, as well as direction for policy makers targeting microfinance programs to alleviate poverty. Topics include the treatment of financial structure in general equilibrium models: testing for as-if-complete markets or other financial underpinnings; examining dual-sector models with both a perfectly intermediated sector and a sector in financial autarky, as well as a second generation of these models that embeds information problems and other obstacles to trade; designing surveys to capture measures of income, investment/savings, and flow of funds; and aggregating individuals and households to the level of network, village, or national economy. The review concludes with new directions that overcome conceptual and computational limitations.

  12. Financial Structure and Economic Welfare: Applied General Equilibrium Development Economics

    PubMed Central

    Townsend, Robert

    2010-01-01

    This review provides a common framework for researchers thinking about the next generation of micro-founded macro models of growth, inequality, and financial deepening, as well as direction for policy makers targeting microfinance programs to alleviate poverty. Topics include the treatment of financial structure in general equilibrium models: testing for as-if-complete markets or other financial underpinnings; examining dual-sector models with both a perfectly intermediated sector and a sector in financial autarky, as well as a second generation of these models that embeds information problems and other obstacles to trade; designing surveys to capture measures of income, investment/savings, and flow of funds; and aggregating individuals and households to the level of network, village, or national economy. The review concludes with new directions that overcome conceptual and computational limitations. PMID:21037939

  13. Comparison of taxon-specific versus general locus sets for targeted sequence capture in plant phylogenomics.

    PubMed

    Chau, John H; Rahfeldt, Wolfgang A; Olmstead, Richard G

    2018-03-01

    Targeted sequence capture can be used to efficiently gather sequence data for large numbers of loci, such as single-copy nuclear loci. Most published studies in plants have used taxon-specific locus sets developed individually for a clade using multiple genomic and transcriptomic resources. General locus sets can also be developed from loci that have been identified as single-copy and have orthologs in large clades of plants. We identify and compare a taxon-specific locus set and three general locus sets (conserved ortholog set [COSII], shared single-copy nuclear [APVO SSC] genes, and pentatricopeptide repeat [PPR] genes) for targeted sequence capture in Buddleja (Scrophulariaceae) and outgroups. We evaluate their performance in terms of assembly success, sequence variability, and resolution and support of inferred phylogenetic trees. The taxon-specific locus set had the most target loci. Assembly success was high for all locus sets in Buddleja samples. For outgroups, general locus sets had greater assembly success. Taxon-specific and PPR loci had the highest average variability. The taxon-specific data set produced the best-supported tree, but all data sets showed improved resolution over previous non-sequence capture data sets. General locus sets can be a useful source of sequence capture targets, especially if multiple genomic resources are not available for a taxon.

  14. A constrained maximization formulation to analyze deformation of fiber reinforced elastomeric actuators

    NASA Astrophysics Data System (ADS)

    Singh, Gaurav; Krishnan, Girish

    2017-06-01

    Fiber reinforced elastomeric enclosures (FREEs) are soft and smart pneumatic actuators that deform in a predetermined fashion upon inflation. This paper analyzes the deformation behavior of FREEs by formulating a simple calculus of variations problem that involves constrained maximization of the enclosed volume. The model accurately captures the deformed shape for FREEs with any general fiber angle orientation, and its relation with actuation pressure, material properties and applied load. First, the accuracy of the model is verified with existing literature and experiments for the popular McKibben pneumatic artificial muscle actuator with two equal and opposite families of helically wrapped fibers. Then, the model is used to predict and experimentally validate the deformation behavior of novel rotating-contracting FREEs, for which no prior literature exist. The generality of the model enables conceptualization of novel FREEs whose fiber orientations vary arbitrarily along the geometry. Furthermore, the model is deemed to be useful in the design synthesis of fiber reinforced elastomeric actuators for general axisymmetric desired motion and output force requirement.
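    The constrained volume-maximization idea above can be checked numerically for the simplest case, a cylindrical McKibben actuator with inextensible helical fibers: inflation drives the fiber angle toward the volume-maximizing "magic angle" arctan(sqrt(2)) ≈ 54.7°. A sketch under idealized cylinder geometry and unit fiber length (a simplification of the paper's variational model, not its implementation):

    ```python
    import math

    def enclosed_volume(theta, b=1.0, n=1.0):
        """Cylinder volume when inextensible helical fibers of length b,
        wound n times around the cylinder, make angle theta with its axis:
        length L = b*cos(theta), radius r = b*sin(theta) / (2*pi*n)."""
        L = b * math.cos(theta)
        r = b * math.sin(theta) / (2.0 * math.pi * n)
        return math.pi * r * r * L

    # grid search for the volume-maximizing fiber angle on (0, 90) degrees
    grid = [math.radians(0.9 * i / 1000) for i in range(1, 100000)]
    theta_star = max(grid, key=enclosed_volume)
    ```

    Fibers wound below this angle contract the actuator on inflation and fibers wound above it extend it, which is why the fiber orientation can be chosen to program the deformation mode.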

  15. Legacy nutrient dynamics and patterns of catchment response under changing land use and management

    NASA Astrophysics Data System (ADS)

    Attinger, S.; Van, M. K.; Basu, N. B.

    2017-12-01

    Watersheds are complex heterogeneous systems that store, transform, and release water and nutrients under a broad distribution of both natural and anthropogenic controls. Many current watershed models, from complex numerical models to simpler reservoir-type models, are considered to be well-developed in their ability to predict fluxes of water and nutrients to streams and groundwater. They are generally less adept, however, at capturing watershed storage dynamics. In other words, many current models are run with an assumption of steady-state dynamics, and focus on nutrient flows rather than changes in nutrient stocks within watersheds. Although these commonly used modeling approaches may be able to adequately capture short-term watershed dynamics, they are unable to represent the clear nonlinearities or hysteresis responses observed in watersheds experiencing significant changes in nutrient inputs. To address such a lack, we have, in the present work, developed a parsimonious modeling approach designed to capture long-term catchment responses to spatial and temporal changes in nutrient inputs. In this approach, we conceptualize the catchment as a biogeochemical reactor that is driven by nutrient inputs, characterized internally by both biogeochemical degradation and residence or travel time distributions, resulting in a specific nutrient output. For the model simulations, we define a range of different scenarios to represent real-world changes in land use and management implemented to improve water quality. We then introduce the concept of state-space trajectories to describe system responses to these potential changes in anthropogenic forcings. We also increase model complexity, in a stepwise fashion, by dividing the catchment into multiple biogeochemical reactors, coupled in series or in parallel. Using this approach, we attempt to answer the following questions: (1) What level of model complexity is needed to capture observed system responses? 
    (2) How can we explain different patterns of nonlinearity in watershed nutrient dynamics? And (3) how does the accumulation of nutrient legacies within watersheds impact current and future water quality?

  16. Biological signatures of dynamic river networks from a coupled landscape evolution and neutral community model

    NASA Astrophysics Data System (ADS)

    Stokes, M.; Perron, J. T.

    2017-12-01

    Freshwater systems host exceptionally species-rich communities whose spatial structure is dictated by the topology of the river networks they inhabit. Over geologic time, river networks are dynamic; drainage basins shrink and grow, and river capture establishes new connections between previously separated regions. It has been hypothesized that these changes in river network structure influence the evolution of life by exchanging and isolating species, perhaps boosting biodiversity in the process. However, no general model exists to predict the evolutionary consequences of landscape change. We couple a neutral community model of freshwater organisms to a landscape evolution model in which the river network undergoes drainage divide migration and repeated river capture. Neutral community models are macro-ecological models that include stochastic speciation and dispersal to produce realistic patterns of biodiversity. We explore the consequences of three modes of speciation - point mutation, time-protracted, and vicariant (geographic) speciation - by tracking patterns of diversity in time and comparing the final result to an equilibrium solution of the neutral model on the final landscape. Under point mutation, a simple model of stochastic and instantaneous speciation, the results are identical to the equilibrium solution and indicate the dominance of the species-area relationship in forming patterns of diversity. The number of species in a basin is proportional to its area, and regional species richness reaches its maximum when drainage area is evenly distributed among sub-basins. Time-protracted speciation is also modeled as a stochastic process, but in order to produce more realistic rates of diversification, speciation is not assumed to be instantaneous. Rather, each new species must persist for a certain amount of time before it is considered to be established. 
When vicariance (geographic speciation) is included, there is a transient signature of increased regional diversity after river capture. The results indicate that the mode of speciation and the rate of speciation relative to the rate of divide migration determine the evolutionary signature of river capture.
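
    A minimal sketch of the point-mutation mode described above, assuming a linear chain of local communities as a stand-in for a river network (all names and parameter values here are hypothetical, and the coupling to a landscape evolution model is omitted):

```python
import random

def neutral_model(n_sites=10, n_ind=20, nu=0.01, steps=20000, seed=1):
    """Zero-sum neutral dynamics with point-mutation speciation.

    Each site holds n_ind individuals labelled by species id. At every
    step a random individual dies and is replaced either by a brand-new
    species (probability nu) or by a copy of an individual drawn from
    the same or an adjacent site (dispersal along the chain).
    """
    rng = random.Random(seed)
    sites = [[s] * n_ind for s in range(n_sites)]  # one species per site
    next_id = n_sites
    for _ in range(steps):
        i, j = rng.randrange(n_sites), rng.randrange(n_ind)
        if rng.random() < nu:          # point-mutation speciation
            sites[i][j] = next_id
            next_id += 1
        else:                          # dispersal from i or a neighbour
            src = min(max(i + rng.choice([-1, 0, 1]), 0), n_sites - 1)
            sites[i][j] = rng.choice(sites[src])
    return sites

sites = neutral_model()
richness = len({s for site in sites for s in site})
print(richness)
```

    Regional richness can then be tracked while the chain is rewired, mimicking a river capture event.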

  17. The Added Value to Global Model Projections of Climate Change by Dynamical Downscaling: A Case Study over the Continental U.S. using the GISS-ModelE2 and WRF Models

    NASA Technical Reports Server (NTRS)

    Racherla, P. N.; Shindell, D. T.; Faluvegi, G. S.

    2012-01-01

    Dynamical downscaling is being increasingly used for climate change studies, wherein the climates simulated by a coupled atmosphere-ocean general circulation model (AOGCM) for a historical and a future (projected) decade are used to drive a regional climate model (RCM) over a specific area. While previous studies have demonstrated that RCMs can add value to AOGCM-simulated climatologies over different world regions, it is unclear as to whether or not this translates to a better reproduction of the observed climate change therein. We address this issue over the continental U.S. using the GISS-ModelE2 and WRF models, a state-of-the-science AOGCM and RCM, respectively. As configured here, the RCM does not effect holistic improvement in the seasonally and regionally averaged surface air temperature or precipitation for the individual historical decades. Insofar as the climate change between the two decades is concerned, the RCM does improve upon the AOGCM when nudged in the domain proper, but only modestly so. Further, the analysis indicates that there is not a strong relationship between skill in capturing climatological means and skill in capturing climate change. Though additional research would be needed to demonstrate the robustness of this finding in AOGCM/RCM models generally, the evidence indicates that, for climate change studies, the most important factor is the skill of the driving global model itself, suggesting that highest priority should be given to improving the long-range climate skill of AOGCMs.

  18. Weirs: Counting and sampling adult salmonids in streams and rivers

    USGS Publications Warehouse

    Zimmerman, Christian E.; Zabkar, Laura M.; Johnson, David H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.

    2007-01-01

    Weirs—which function as porous barriers built across streams—have long been used to capture migrating fish in flowing waters. For example, the Netsilik peoples of northern Canada used V-shaped weirs constructed of river rocks gathered onsite to capture migrating Arctic char Salvelinus alpinus (Balikci 1970). Similarly, fences constructed of stakes and a latticework of willow branches or staves were used by Native Americans to capture migrating salmon in streams along the West Coast of North America (Stewart 1994). In modern times, weirs have also been used in terminal fisheries and to capture brood fish for use in fish culture. Weirs have been used to gather data on age structure, condition, sex ratio, spawning escapement, abundance, and migratory patterns of fish in streams. One of the critical elements of fisheries management and stock assessment of salmonids is a count of adult fish returning to spawn. Weirs are frequently used to capture or count fish to determine status and trends of populations or direct in-season management of fisheries; generally, weirs are the standard against which other techniques are measured. To evaluate fishery management actions, the number of fish escaping to spawn is often compared to river-specific target spawning requirements (O’Connell and Dempson 1995). A critical factor in these analyses is the determination of total run size (O’Connell 2003). O’Connell compared methods of run-size estimation against absolute counts from a rigid weir and concluded that, given the uncertainty of estimators, the absolute counts obtained at the weir were significantly better than modeled estimates, which deviated as much as 50–60% from actual counts. The use of weirs is generally restricted to streams and small rivers because of construction expense, formation of navigation barriers, and the tendency of weirs to clog with debris, which can cause flooding and collapse of the structure (Hubert 1996).
When feasible, however, weirs are generally regarded as the most accurate technique available to quantify escapement as the result is supposedly an absolute count (Cousens et al. 1982). Weirs also provide the opportunity to capture fish for observation and sampling of biological characteristics and tissues; they may also serve as recapture sites for basin-wide, mark–recapture population estimates. Temporary weirs are useful in monitoring wild populations of salmonids as well as for capturing broodstock for artificial propagation.

  19. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent system and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea in the reduced-order method is from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to display the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. Besides, the reduced-order models are also used to capture crucial passive tracer field that is advected by the baroclinic turbulent flow. 
It is demonstrated that crucial principal statistical quantities like the tracer spectrum and fat-tails in the tracer probability density functions in the most important large scales can be captured efficiently with accuracy using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.

  20. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
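
    The peaks-over-threshold building block of such a framework can be sketched with SciPy; the data, threshold choice, and return level below are purely illustrative (the paper's generalized additive forms for the GPD parameters and the Student's t-process for spatial dependence are omitted):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
gusts = rng.gumbel(loc=20.0, scale=4.0, size=5000)  # synthetic gust maxima

u = np.quantile(gusts, 0.95)           # high threshold
exceed = gusts[gusts > u] - u          # peaks over threshold

# Fit the generalized Pareto to the exceedances, location fixed at 0.
shape, loc, scale = genpareto.fit(exceed, floc=0)

# Level exceeded on average once per 100 threshold exceedances.
rl = u + genpareto.ppf(1 - 1 / 100, shape, loc=0, scale=scale)
print(round(float(u), 2), round(float(rl), 2))
```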

  1. Explaining Moral Behavior.

    PubMed

    Osman, Magda; Wiegmann, Alex

    2017-03-01

    In this review we make a simple theoretical argument: for theory development, computational modeling, and general frameworks for understanding moral psychology, researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models in moral psychology, which tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument we show that a simple value-based decision model can capture a range of core moral behaviors. Crucially, we propose that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.

  2. An evaluation of computerized adaptive testing for general psychological distress: combining GHQ-12 and Affectometer-2 in an item bank for public mental health research.

    PubMed

    Stochl, Jan; Böhnke, Jan R; Pickett, Kate E; Croudace, Tim J

    2016-05-20

    Recent developments in psychometric modeling and technology allow pooling well-validated items from existing instruments into larger item banks and their deployment through methods of computerized adaptive testing (CAT). Use of item response theory-based bifactor methods and integrative data analysis overcomes barriers in cross-instrument comparison. This paper presents the joint calibration of an item bank for researchers keen to investigate population variations in general psychological distress (GPD). Multidimensional item response theory was used on existing health survey data from the Scottish Health Education Population Survey (n = 766) to calibrate an item bank consisting of pooled items from the short common mental disorder screen (GHQ-12) and the Affectometer-2 (a measure of "general happiness"). Computer simulation was used to evaluate usefulness and efficacy of its adaptive administration. A bifactor model capturing variation across a continuum of population distress (while controlling for artefacts due to item wording) was supported. The numbers of items for different required reliabilities in adaptive administration demonstrated promising efficacy of the proposed item bank. Psychometric modeling of the common dimension captured by more than one instrument offers the potential of adaptive testing for GPD using individually sequenced combinations of existing survey items. The potential for linking other item sets with alternative candidate measures of positive mental health is discussed since an optimal item bank may require even more items than these.
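
    The adaptive administration step can be illustrated with maximum-information item selection under a unidimensional 2PL model; the item bank below is invented, and the bifactor structure used in the paper is deliberately reduced to a single dimension:

```python
import math

def p_2pl(theta, a, b):
    """2PL response probability at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, bank, administered):
    """Pick the unadministered item with maximum information at theta."""
    return max((i for i in range(len(bank)) if i not in administered),
               key=lambda i: item_information(theta, *bank[i]))

# Hypothetical item bank: (discrimination a, difficulty b) pairs.
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.1), (1.0, 1.2)]
first = select_next_item(theta=0.0, bank=bank, administered=set())
print(first)  # index of the most informative item at theta = 0
```

    In a full CAT loop, theta would be re-estimated after each response and administration would stop once the target reliability is reached.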

  3. Modeling Progressive Failure of Bonded Joints Using a Single Joint Finite Element

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.; Bednarcyk, Brett A.

    2010-01-01

    Enhanced finite elements are elements with an embedded analytical solution which can capture detailed local fields, enabling more efficient, mesh-independent finite element analysis. In the present study, an enhanced finite element is applied to generate a general framework capable of modeling an array of joint types. The joint field equations are derived using the principle of minimum potential energy, and the resulting solutions for the displacement fields are used to generate shape functions and a stiffness matrix for a single joint finite element. This single finite element thus captures the detailed stress and strain fields within the bonded joint, but it can function within a broader structural finite element model. The costs associated with a fine mesh of the joint can thus be avoided while still obtaining a detailed solution for the joint. Additionally, the capability to model non-linear adhesive constitutive behavior has been included within the method, and progressive failure of the adhesive can be modeled by using a strain-based failure criteria and re-sizing the joint as the adhesive fails. Results of the model compare favorably with experimental and finite element results.

  4. Threshold Evaluation of Emergency Risk Communication for Health Risks Related to Hazardous Ambient Temperature.

    PubMed

    Liu, Yang; Hoppe, Brenda O; Convertino, Matteo

    2018-04-10

    Emergency risk communication (ERC) programs that activate when the ambient temperature is expected to cross certain extreme thresholds are widely used to manage relevant public health risks. In practice, however, the effectiveness of these thresholds has rarely been examined. The goal of this study is to test if the activation criteria based on extreme temperature thresholds, both cold and heat, capture elevated health risks for all-cause and cause-specific mortality and morbidity in the Minneapolis-St. Paul Metropolitan Area. A distributed lag nonlinear model (DLNM) combined with a quasi-Poisson generalized linear model is used to derive the exposure-response functions between daily maximum heat index and mortality (1998-2014) and morbidity (emergency department visits; 2007-2014). Specific causes considered include cardiovascular, respiratory, renal diseases, and diabetes. Six extreme temperature thresholds, corresponding to 1st-3rd and 97th-99th percentiles of local exposure history, are examined. All six extreme temperature thresholds capture significantly increased relative risks for all-cause mortality and morbidity. However, the cause-specific analyses reveal heterogeneity. Extreme cold thresholds capture increased mortality and morbidity risks for cardiovascular and respiratory diseases and extreme heat thresholds for renal disease. Percentile-based extreme temperature thresholds are appropriate for initiating ERC targeting the general population. Tailoring ERC by specific causes may protect some but not all individuals with health conditions exacerbated by hazardous ambient temperature exposure. © 2018 Society for Risk Analysis.
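
    The quasi-Poisson part of the analysis can be sketched in plain NumPy: fit a Poisson regression (log link) by IRLS, then estimate the dispersion from Pearson residuals. The data and the single linear temperature term below are invented, and the DLNM cross-basis for lagged nonlinear exposure is omitted:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression (log link) via IRLS; returns coefficients and mu."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu           # working response
        W = mu                                  # IRLS weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta, np.exp(X @ beta)

rng = np.random.default_rng(2)
heat = rng.uniform(20, 40, size=365)            # daily max heat index (made up)
lam = np.exp(1.0 + 0.03 * (heat - 30))          # true log-linear risk
deaths = rng.poisson(lam)

X = np.column_stack([np.ones_like(heat), heat - 30])
beta, mu = poisson_irls(X, deaths.astype(float))

# Quasi-Poisson: scale the variance by the Pearson dispersion estimate.
phi = np.sum((deaths - mu) ** 2 / mu) / (len(deaths) - X.shape[1])
print(beta, phi)
```

    A dispersion phi well above 1 would signal overdispersion that plain Poisson standard errors understate.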

  5. Field evaluation of a new light trap for phlebotomine sand flies.

    PubMed

    Gaglio, Gabriella; Napoli, Ettore; Falsone, Luigi; Giannetto, Salvatore; Brianti, Emanuele

    2017-10-01

    Light traps are one of the most common attractive methods for the collection of nocturnal insects. Although light traps are generally referred to as "CDC light traps", different models, equipped with incandescent or UV lamps, have been developed. A new light trap, named Laika trap 3.0, equipped with LED lamps and featuring a light, handy design, has recently been introduced to the market. In this study we tested and compared the capture performance of this new trap with that of a classical light trap model under field conditions. From May to November 2013, a Laika trap and a classical light trap were placed biweekly in an area endemic for sand flies. A total of 256 sand fly specimens, belonging to 3 species (Sergentomyia minuta, Phlebotomus perniciosus, Phlebotomus neglectus), were collected during the study period. The Laika trap captured 126 phlebotomine sand flies: P. perniciosus (n=38) and S. minuta (n=88); a similar number of specimens (130) of the same species was captured by the classical light trap, which also collected 3 specimens of P. neglectus. No significant differences were observed in capture efficiency on each day of trapping, in the number of species, or in the sex of the sand flies. According to the results of this study, the Laika trap may be a valid alternative to classical light trap models, especially when a handy design and low power consumption are key factors in field studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Energy dissipation by submarine obstacles during landslide impact on reservoir - potentially avoiding catastrophic dam collapse

    NASA Astrophysics Data System (ADS)

    Kafle, Jeevan; Kattel, Parameshwari; Mergili, Martin; Fischer, Jan-Thomas; Tuladhar, Bhadra Man; Pudasaini, Shiva P.

    2017-04-01

    Dense geophysical mass flows such as landslides, debris flows and debris avalanches may generate super tsunami waves as they impact water bodies such as the sea, hydraulic reservoirs or mountain lakes. Here, we apply a comprehensive and general two-phase, physical-mathematical mass flow model (Pudasaini, 2012) that consists of non-linear and hyperbolic-parabolic partial differential equations for mass and momentum balances, and present novel, high-resolution simulation results for two-phase flows, as a mixture of solid grains and viscous fluid, impacting fluid reservoirs with obstacles. The simulations demonstrate that due to the presence of different obstacles in the water body, the intense flow-obstacle interaction dramatically reduces the flow momentum, resulting in rapid energy dissipation around the obstacles. As obstacle height increases, overtopping decreases but the deflection and capture (holding) of solid mass increase. In addition, the submarine solid mass is captured by the multiple obstacles, and the moving mass decreases both in amount and in speed as each obstacle deflects the flow into two streams and also captures a portion of it. This results in distinct tsunami and submarine flow dynamics with multiple surface water and submarine debris waves. This novel approach can be implemented in the open source GIS modelling framework r.avaflow and applied in hazard mitigation, prevention and relevant engineering or environmental tasks. This applies in particular to process chains, such as debris impacts in lakes and subsequent overtopping. Because the complex flow-obstacle interactions dissipate a large share of the impact energy, such installations can substantially reduce the threat to the integrity of the dam. References: Pudasaini, S. P. (2012): A general two-phase debris flow model. J. Geophys. Res. 117, F03010, doi: 10.1029/2011JF002186.

  7. Carbon capture in vehicles : a review of general support, available mechanisms, and consumer-acceptance issues.

    DOT National Transportation Integrated Search

    2012-05-01

    This survey of the feasibility of introducing carbon capture and storage (CCS) into light vehicles started by reviewing the level of international support for CCS in general. While there have been encouraging signs that CCS is gaining acceptance ...

  8. The pricing of credit default swaps under a generalized mixed fractional Brownian motion

    NASA Astrophysics Data System (ADS)

    He, Xinjiang; Chen, Wenting

    2014-06-01

    In this paper, we consider the pricing of the CDS (credit default swap) under a GMFBM (generalized mixed fractional Brownian motion) model. As the name suggests, the GMFBM model is a generalization of all the FBM (fractional Brownian motion) models used in the literature, and is proved to be able to effectively capture the long-range dependence of stock returns. To develop the pricing mechanics of the CDS, we first derive a sufficient condition for the market modeled under the GMFBM to be arbitrage free. Then under the risk-neutral assumption, the CDS is fairly priced by investigating the two legs of the cash flow involved. The price we obtained involves elementary functions only, and can be easily implemented for practical purposes. Finally, based on numerical experiments, we analyze quantitatively the impacts of different parameters on the price of the CDS. Interestingly, in comparison with all the other FBM models documented in the literature, the results produced from the GMFBM model are in better agreement with those calculated from the classical Black-Scholes model.
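
    A two-term toy version of the mixed process, a * B(t) + b * B_H(t), can be simulated exactly via Cholesky factorization of the fBm covariance; the GMFBM of the paper allows arbitrarily many fBm components, which this sketch does not attempt:

```python
import numpy as np

def fbm(n, H, T=1.0, rng=None):
    """Exact fractional Brownian motion on a grid via Cholesky.

    Uses cov(B_H(s), B_H(t)) = 0.5 * (s^2H + t^2H - |t - s|^2H).
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    return t, L @ rng.standard_normal(n)

rng = np.random.default_rng(0)
t, bm = fbm(256, 0.5, rng=rng)   # H = 0.5 is ordinary Brownian motion
_, bh = fbm(256, 0.8, rng=rng)   # H > 0.5 gives long-range dependence
mixed = 1.0 * bm + 0.5 * bh      # hypothetical mixture weights
print(mixed.shape)
```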

  9. Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models

    USGS Publications Warehouse

    Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin

    2018-01-01

    The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.
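
    The quantity behind a capture map reduces to simple arithmetic: the fraction of the pumping rate supplied by depletion of a source. The numbers below are invented solely to show how nonlinearity makes the fraction depend on the pumping rate, which is the signal a capture difference map displays:

```python
def capture_fraction(baseflow_base, baseflow_pumped, q):
    """Capture fraction: streamflow depletion per unit pumping."""
    return (baseflow_base - baseflow_pumped) / q

# Hypothetical model outputs at one well location (volumes per day).
baseflow_no_pumping = 1000.0
baseflow_at_q1 = 992.0     # with pumping rate q1 = 10
baseflow_at_q2 = 955.0     # with pumping rate q2 = 50

c1 = capture_fraction(baseflow_no_pumping, baseflow_at_q1, q=10.0)
c2 = capture_fraction(baseflow_no_pumping, baseflow_at_q2, q=50.0)

# In a linear model, superposition guarantees c1 == c2; a nonzero
# difference is the nonlinearity signal a capture difference map shows.
capture_difference = c2 - c1
print(c1, c2, capture_difference)
```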

  10. Requirements Modeling with the Aspect-oriented User Requirements Notation (AoURN): A Case Study

    NASA Astrophysics Data System (ADS)

    Mussbacher, Gunter; Amyot, Daniel; Araújo, João; Moreira, Ana

    The User Requirements Notation (URN) is a recent ITU-T standard that supports requirements engineering activities. The Aspect-oriented URN (AoURN) adds aspect-oriented concepts to URN, creating a unified framework that allows for scenario-based, goal-oriented, and aspect-oriented modeling. AoURN is applied to the car crash crisis management system (CCCMS), modeling its functional and non-functional requirements (NFRs). AoURN generally models all use cases, NFRs, and stakeholders as individual concerns and provides general guidelines for concern identification. AoURN handles interactions between concerns, capturing their dependencies and conflicts as well as the resolutions. We present a qualitative comparison of aspect-oriented techniques for scenario-based and goal-oriented requirements engineering. An evaluation based on metrics adapted from the literature and a task-based evaluation suggest that AoURN models are more scalable than URN models and exhibit better modularity, reusability, and maintainability.

  11. Learning and robustness to catch-and-release fishing in a shark social network

    PubMed Central

    Brown, Culum; Planes, Serge

    2017-01-01

    Individuals can play different roles in maintaining connectivity and social cohesion in animal populations and thereby influence population robustness to perturbations. We performed a social network analysis in a reef shark population to assess the vulnerability of the global network to node removal under different scenarios. We found that the network was generally robust to the removal of nodes with high centrality. The network appeared also highly robust to experimental fishing. Individual shark catchability decreased as a function of experience, as revealed by comparing capture frequency and site presence. Altogether, these features suggest that individuals learnt to avoid capture, which ultimately increased network robustness to experimental catch-and-release. Our results also suggest that some caution must be taken when using capture–recapture models often used to assess population size as assumptions (such as equal probabilities of capture and recapture) may be violated by individual learning to escape recapture. PMID:28298593
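
    The node-removal robustness assessment can be sketched on a toy graph: rank nodes by a centrality measure (plain degree here), remove them in order, and track the size of the largest connected component. The network below is hypothetical:

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component after removing nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

# Toy undirected network; degree serves as a simple centrality stand-in.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5), (5, 6), (2, 6)]
adj = {n: set() for n in range(7)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

by_degree = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
sizes = [largest_component(adj, set(by_degree[:k])) for k in range(4)]
print(sizes)
```

    A slow decay of the component size under targeted removal is the robustness property reported for the shark network.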

  12. A Critical Look at Entropy-Based Gene-Gene Interaction Measures.

    PubMed

    Lee, Woojoo; Sjölander, Arvid; Pawitan, Yudi

    2016-07-01

    Several entropy-based measures for detecting gene-gene interaction have been proposed recently. It has been argued that entropy-based measures are preferred because entropy can better capture the nonlinear relationships between genotypes and traits, so they can be useful for detecting gene-gene interactions in complex diseases. These suggested measures look reasonable at an intuitive level, but so far there has been no detailed characterization of the interactions captured by them. Here we study analytically the properties of some entropy-based measures for detecting gene-gene interactions in detail. The relationship between interactions captured by the entropy-based measures and those of logistic regression models is clarified. In general we find that the entropy-based measures can suffer from a lack of specificity in terms of target parameters, i.e., they can detect uninteresting signals as interactions. Numerical studies are carried out to confirm the theoretical findings. © 2016 WILEY PERIODICALS, INC.
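
    The kind of entropy-based measure under discussion (interaction information: the gain of I(G1,G2;Y) over the marginal mutual informations) can be computed directly from a joint genotype-trait table. The XOR example below is a textbook pure interaction, shown only to make the mechanics concrete; it does not reproduce the specificity problem the review analyzes:

```python
import numpy as np

def mutual_info(joint):
    """I(X;Y) in bits from a 2-D joint probability table."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px * py)[nz])).sum())

# p(g1, g2, y): two independent binary loci, trait Y = XOR of the loci
# with 10% noise -- a pure interaction with no marginal effects.
p = np.zeros((2, 2, 2))
for g1 in (0, 1):
    for g2 in (0, 1):
        py1 = 0.9 if g1 ^ g2 else 0.1
        p[g1, g2, 1] = 0.25 * py1
        p[g1, g2, 0] = 0.25 * (1 - py1)

i_g1_y = mutual_info(p.sum(axis=1))       # I(G1;Y) -- zero here
i_g2_y = mutual_info(p.sum(axis=0))       # I(G2;Y) -- zero here
i_joint = mutual_info(p.reshape(4, 2))    # I(G1,G2;Y)
info_gain = i_joint - i_g1_y - i_g2_y     # entropy-based "interaction"
print(round(info_gain, 3))
```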

  13. Mathematical analysis of a power-law form time dependent vector-borne disease transmission model.

    PubMed

    Sardar, Tridip; Saha, Bapi

    2017-06-01

    In the last few years, fractional order derivatives have been used in epidemiology to capture memory phenomena. However, these models do not have proper biological justification in most cases and lack a derivation from a stochastic process. In the present manuscript, using the theory of a stochastic process, we derived a general time dependent single strain vector borne disease model. It is shown that under a certain choice of time dependent transmission kernel this model can be converted into the classical integer order system. When the time-dependent transmission follows a power law form, we showed that the model converts into a vector borne disease model with fractional order transmission. We explicitly derived the disease-free and endemic equilibria of this new fractional order vector borne disease model. Using mathematical properties of nonlinear Volterra type integral equations, it is shown that the unique disease-free state is globally asymptotically stable under a certain condition. We define a threshold quantity which is epidemiologically known as the basic reproduction number (R0). It is shown that if R0 > 1, then the derived fractional order model has a unique endemic equilibrium. We analytically derived the condition for the local stability of the endemic equilibrium. To test the model's capability to capture a real epidemic, we calibrated our newly proposed model to weekly dengue incidence data of San Juan, Puerto Rico for the time period 30th April 1994 to 23rd April 1995. We estimated several parameters, including the order of the fractional derivative of the proposed model, using the aforesaid data. It is shown that our proposed fractional order model can nicely capture a real epidemic. Copyright © 2017 Elsevier Inc. All rights reserved.
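
    The power-law transmission kernel can be made concrete with a discrete-time renewal sketch, in which new infections arise from all past infections weighted by K(u) ~ u^(-alpha). Every parameter value below is illustrative rather than fitted, and the vector compartment of the actual model is collapsed away:

```python
import numpy as np

# Discrete-time renewal process with a power-law (long-memory) kernel,
# the choice the paper links to fractional-order transmission.
T, N, beta, alpha = 200, 10000.0, 2.0, 1.5
u = np.arange(1, T + 1, dtype=float)
K = u ** -alpha
K /= K.sum()                       # normalized transmission kernel

inc = np.zeros(T)                  # incidence per time step
inc[0] = 1.0                       # seed case
S = N - 1.0                        # susceptibles
for t in range(1, T):
    force = beta * (K[:t][::-1] * inc[:t]).sum()   # memory of past cases
    inc[t] = min(S, S / N * force)
    S -= inc[t]

print(round(inc.sum()), round(S))
```

    The heavy tail of K means long-past infections still contribute to the force of infection, which an exponential (Markovian) kernel would forget.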

  14. Micro-porous layer stochastic reconstruction and transport parameter determination

    NASA Astrophysics Data System (ADS)

    El Hannach, Mohamed; Singh, Randhir; Djilali, Ned; Kjeang, Erik

    2015-05-01

    The Micro-Porous Layer (MPL) is a porous, thin layer commonly used in fuel cells at the interfaces between the catalyst layers and gas diffusion media. It is generally made from spherical carbon nanoparticles and PTFE acting as a hydrophobic agent. The scale and brittle nature of the MPL structure make it challenging to study experimentally. In the present work, a 3D stochastic model is developed to virtually reconstruct the MPL structure. The carbon nanoparticle and PTFE phases are fully distinguished by the algorithm. The model is shown to capture the actual structural morphology of the MPL and is validated by comparing the results to available experimental data. The model successfully generates a realistic MPL using a set of parameters introduced to capture specific morphological features of the MPL. A numerical model that resolves diffusive transport at the pore scale is used to compute the effective transport properties of the reconstructed MPLs. A parametric study is conducted to illustrate the capability of the model as an MPL design tool that can be used to guide and optimize the functionality of the material.
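
    The reconstruction idea can be miniaturized to a voxel sketch: drop randomly centred spheres (carbon-particle stand-ins) into a grid until a target porosity is reached. This ignores the PTFE phase and the morphological controls of the actual algorithm; grid size, radius, and porosity target are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros((40, 40, 40), dtype=bool)   # True = solid voxel
radius, target_porosity = 3, 0.5

zyx = np.indices(grid.shape)                # voxel coordinate grids
while 1 - grid.mean() > target_porosity:
    c = rng.integers(0, grid.shape[0], size=3)          # sphere centre
    dist2 = sum((ax - ci) ** 2 for ax, ci in zip(zyx, c))
    grid |= dist2 <= radius ** 2            # solidify the sphere

porosity = 1 - grid.mean()
print(round(float(porosity), 3))
```

    A pore-scale diffusion solver would then be run on the void phase of `grid` to obtain effective transport properties.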

  15. Electron-capture Isotopes Could Constrain Cosmic-Ray Propagation Models

    NASA Astrophysics Data System (ADS)

    Benyamin, David; Shaviv, Nir J.; Piran, Tsvi

    2017-12-01

    Electron capture (EC) isotopes are known to provide constraints on the low-energy behavior of cosmic rays (CRs), such as reacceleration. Here, we study the EC isotopes within the framework of the dynamic spiral-arms CR propagation model in which most of the CR sources reside in the galactic spiral arms. The model was previously used to explain the B/C and sub-Fe/Fe ratios. We show that the known inconsistency between the 49Ti/49V and 51V/51Cr ratios remains also in the spiral-arms model. On the other hand, contrary to the general wisdom that the isotope ratios depend primarily on reacceleration, we find here that the ratios also depend on the halo size (Z_h) and, in spiral-arms models, on the time since the last spiral-arm passage (τ_arm). Namely, EC isotopes can, in principle, provide interesting constraints on the diffusion geometry. However, with the present uncertainties in the lab measurements of both the electron attachment rate and the fragmentation cross sections, no meaningful constraint can be placed.

  16. Using dynamic N-mixture models to test cavity limitation on northern flying squirrel demographic parameters using experimental nest box supplementation.

    PubMed

    Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica

    2014-06-01

    Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. 
Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful to evaluate management effects on animal populations, especially for species that are difficult to detect in situations where individuals cannot be uniquely identified. They also allow investigating the effects of covariates at the site level, when low recapture rates would require restricting classic CMR analyses to a subset of sites with the most captures.
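
    The core of an N-mixture model, estimating abundance from repeated counts of unmarked animals while accounting for imperfect detection, can be sketched as a binomial-Poisson mixture likelihood. This is a simplified single-season illustration with made-up counts, not the Dail and Madsen dynamic model itself:

```python
import math

def nmixture_loglik(counts, lam, p, N_max=100):
    """Log-likelihood of repeated counts at one site under a
    binomial-Poisson N-mixture model: latent abundance N ~ Poisson(lam),
    each count ~ Binomial(N, p). Marginalizes over possible N."""
    total = 0.0
    for N in range(max(counts), N_max + 1):
        pois = math.exp(-lam) * lam ** N / math.factorial(N)
        binoms = 1.0
        for y in counts:
            binoms *= math.comb(N, y) * p ** y * (1 - p) ** (N - y)
        total += pois * binoms
    return math.log(total)

# Repeated counts of 3-4 at a site are far better explained by high
# detection (p = 0.7) than by very low detection (p = 0.1) at the
# same mean abundance lam = 5.
print(nmixture_loglik([3, 4, 3], lam=5.0, p=0.7))
print(nmixture_loglik([3, 4, 3], lam=5.0, p=0.1))
```

Maximizing this likelihood over `lam` and `p` (and, in the dynamic version, over survival and recruitment parameters) is what separates abundance from detectability.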

  17. Single-particle dynamics of the Anderson model: a local moment approach

    NASA Astrophysics Data System (ADS)

    Glossop, Matthew T.; Logan, David E.

    2002-07-01

    A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It captures thereby strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum; as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.

  18. Los Alamos National Laboratory Economic Analysis Capability Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith; Pasqualini, Donatella

Los Alamos National Laboratory has developed two types of models to compute the economic impact of infrastructure disruptions. FastEcon is a fast running model that estimates first-order economic impacts of large scale events such as hurricanes and floods and can be used to identify the amount of economic activity that occurs in a specific area. LANL’s Computable General Equilibrium (CGE) model estimates more comprehensive static and dynamic economic impacts of a broader array of events and captures the interactions between sectors and industries when estimating economic impacts.

  19. The Time Course of Attentional and Oculomotor Capture Reveals a Common Cause

    ERIC Educational Resources Information Center

    Hunt, Amelia R.; von Muhlenen, Adrian; Kingstone, Alan

    2007-01-01

    Eye movements are often misdirected toward a distractor when it appears abruptly, an effect known as oculomotor capture. Fundamental differences between eye movements and attention have led to questions about the relationship of oculomotor capture to the more general effect of sudden onsets on performance, known as attentional capture. This study…

  20. Generalized Reich-Moore R-matrix approximation

    NASA Astrophysics Data System (ADS)

    Arbanas, Goran; Sobes, Vladimir; Holcomb, Andrew; Ducru, Pablo; Pigni, Marco; Wiarda, Dorothea

    2017-09-01

A conventional Reich-Moore approximation (RMA) of R-matrix is generalized into a manifestly unitary form by introducing a set of resonant capture channels treated explicitly in a generalized, reduced R-matrix. The dramatic reduction of channel space witnessed in conventional RMA, from the Nc × Nc full R-matrix to an Np × Np reduced R-matrix, where Nc = Np + Nγ, with Np and Nγ denoting the number of particle and γ-ray channels, respectively, is due to Np ≪ Nγ. The corresponding reduction of channel space in generalized RMA (GRMA) is from the Nc × Nc full R-matrix to N × N, where N = Np + N, and where N is the number of capture channels defined in GRMA. We show that N = Nλ, where Nλ is the number of R-matrix levels. This reduction in channel space, although not as dramatic as in the conventional RMA, could be significant for medium and heavy nuclides, where N < Nγ. The resonant capture channels defined by GRMA accommodate level-level interference (via capture channels) neglected in conventional RMA. The expression for the total capture cross section in GRMA is formally equal to that of the full Nc × Nc R-matrix. This suggests that GRMA could yield improved nuclear data evaluations in the resolved resonance range at the cost of introducing N(N - 1)/2 resonant capture width parameters relative to conventional RMA. Manifest unitarity of GRMA justifies a method advocated by Fröhner and implemented in the SAMMY nuclear data evaluation code for enforcing unitarity of conventional RMA. Capture widths of GRMA are exactly convertible into alternative R-matrix parameters via the Brune transform. Application of idealized statistical methods to GRMA shows that variance among conventional RMA capture widths in extant RMA evaluations could be used to estimate variance among the off-diagonal elements neglected by conventional RMA. Significant departure of capture widths from an idealized distribution may indicate the presence of underlying doorway states.

  1. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    PubMed Central

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
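
    The practical consequence of a shifted cysteine pKa can be illustrated with the Henderson-Hasselbalch relation, a standard textbook formula rather than part of the MEAD model itself; the pKa values below are representative, not taken from the study:

```python
import math

def thiolate_fraction(pH, pKa):
    """Fraction of a cysteine thiol that is deprotonated (thiolate)
    at a given pH, from the Henderson-Hasselbalch equation."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# A typical cysteine (pKa near 8.5) is mostly protonated at pH 7,
# while a strongly perturbed cysteine (pKa near 5) is mostly thiolate.
print(round(thiolate_fraction(7.0, 8.5), 3))
print(round(thiolate_fraction(7.0, 5.0), 3))
```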

  2. Real-time Adaptive Control Using Neural Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. With a neural network, the control laws are nonlinear, and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
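
    The Newton-Raphson minimization at the heart of such a predictive controller can be sketched in one dimension. The quadratic-plus-quartic cost below is hypothetical, standing in for the controller's cost over a prediction horizon, not the paper's neural-network cost:

```python
def newton_raphson_min(dcost, d2cost, u0, iters=10):
    """Minimize a scalar control cost by Newton-Raphson: repeatedly
    step u -= J'(u) / J''(u) using the gradient and (scalar) Hessian."""
    u = u0
    for _ in range(iters):
        u -= dcost(u) / d2cost(u)
    return u

# Hypothetical cost J(u) = (u - 2)^2 + 0.1 u^4 over one control input u.
dJ  = lambda u: 2.0 * (u - 2.0) + 0.4 * u ** 3   # gradient
d2J = lambda u: 2.0 + 1.2 * u ** 2               # Hessian (always > 0)
u_star = newton_raphson_min(dJ, d2J, u0=0.0)
print(u_star, dJ(u_star))  # gradient is driven to ~0 at the minimum
```

The quadratic convergence of Newton-Raphson is what makes the low iteration count, and hence real-time operation, plausible despite the Hessian cost.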

  3. Evaluation of uncertainty in capturing the spatial variability and magnitudes of extreme hydrological events for the uMngeni catchment, South Africa

    NASA Astrophysics Data System (ADS)

    Kusangaya, Samuel; Warburton Toucher, Michele L.; van Garderen, Emma Archer

    2018-02-01

Downscaled General Circulation Model (GCM) outputs are used to project climate change and provide input for hydrological modelling. Given that our understanding of climate change points towards an increasing frequency, altered timing, and greater intensity of extreme hydrological events, there is a need to assess the ability of downscaled GCMs to capture these extreme events. Extreme hydrological events play a significant role in regulating the structure and function of rivers and associated ecosystems. In this study, the Indicators of Hydrologic Alteration (IHA) method was adapted to assess the ability of streamflow simulated using downscaled GCMs (dGCMs) to capture extreme river dynamics (high and low flows), as compared to streamflow simulated using historical climate data from 1960 to 2000. The ACRU hydrological model was used to simulate streamflow for the 13 water management units of the uMngeni Catchment, South Africa. Statistically downscaled climate models obtained from the Climate System Analysis Group at the University of Cape Town were used as input for the ACRU model. Results indicated that high flows and extreme high flows (one-in-ten-year high flows/large flood events) were poorly represented in terms of timing, frequency, and magnitude. Simulated streamflow using dGCM data also captured more low flows and extreme low flows (one-in-ten-year lowest flows) than streamflow simulated using historical climate data. The overall conclusion was that although dGCM output can reasonably be used to simulate overall streamflow, it performs poorly for extreme high and low flows. Streamflow simulations from dGCMs must thus be used with caution in hydrological applications, particularly in design hydrology, as extreme high and low flows are still poorly represented. This arguably calls for further improvement of downscaling techniques in order to generate climate data more relevant and useful for hydrological applications such as design hydrology. Nevertheless, the availability of downscaled climate output provides the potential to explore climate model uncertainties in different hydroclimatic regions at local scales, where forcing data are often less accessible but more accurate at finer spatial scales and with adequate spatial detail.
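
    The "one-in-ten-year" events used in IHA-style comparisons can be sketched as empirical return levels from a series of annual extremes. This is a simplified illustration with made-up flows, not the IHA software:

```python
def return_level(annual_extremes, return_period_yr):
    """Empirical return level: the value exceeded on average once per
    `return_period_yr` years, using the Weibull plotting position,
    where the m-th largest of n values has return period (n + 1) / m."""
    xs = sorted(annual_extremes, reverse=True)
    n = len(xs)
    for m, x in enumerate(xs, start=1):
        if (n + 1) / m <= return_period_yr:
            return x
    return xs[-1]

# 20 hypothetical annual maximum flows (m^3/s)
ann_max = [120, 95, 210, 130, 88, 300, 150, 99, 175, 140,
           110, 260, 105, 90, 160, 230, 125, 100, 185, 145]
print(return_level(ann_max, 10))  # -> 230, the 3rd-largest value here
```

Comparing such return levels between dGCM-driven and observation-driven streamflow series is one concrete way to quantify how poorly the extremes are represented.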

  4. Description and application of capture zone delineation for a wellfield at Hilton Head Island, South Carolina

    USGS Publications Warehouse

    Landmeyer, J.E.

    1994-01-01

Ground-water capture zone boundaries for individual pumped wells in a confined aquifer were delineated by using ground-water models. Both analytical and numerical (semi-analytical) models that more accurately represent the ground-water-flow system were used. All models delineated 2-dimensional boundaries (capture zones) that represent the areal extent of ground-water contribution to a pumped well. The resultant capture zones were evaluated on the basis of the ability of each model to realistically represent the part of the ground-water-flow system that contributed water to the pumped wells. Analytical models used were based on a fixed-radius approach and included: an arbitrary radius model, a calculated fixed radius model based on the volumetric-flow equation with a time-of-travel criterion, and a calculated fixed radius model derived from modification of the Theis model with a drawdown criterion. Numerical models used included the 2-dimensional, finite-difference models RESSQC and MWCAP. The arbitrary radius and Theis analytical models delineated capture zone boundaries that compared least favorably with capture zones delineated using the volumetric-flow analytical model and both numerical models. The numerical models produced more hydrologically reasonable capture zones (oriented parallel to the regional flow direction) than the volumetric-flow equation. The RESSQC numerical model computed more hydrologically realistic capture zones than the MWCAP numerical model by accounting for changes in the shape of capture zones caused by multiple-well interference. The capture zone boundaries generated by using both analytical and numerical models indicated that the currently used 100-foot radius of protection around a wellhead in South Carolina is an underestimate of the extent of ground-water capture for pumped wells in this particular wellfield in the Upper Floridan aquifer. The arbitrary fixed radius of 100 feet was shown to underestimate the upgradient contribution of ground-water flow to a pumped well.
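
    The calculated-fixed-radius method from the volumetric-flow equation reduces to equating the volume pumped over a time-of-travel criterion to the pore volume of a cylinder around the well. This is standard wellhead-protection arithmetic; the well parameters below are hypothetical, not from the Hilton Head study:

```python
import math

def calculated_fixed_radius(Q, t, n, b):
    """Radius r (ft) such that the pore volume of a cylinder of
    saturated thickness b (ft) and porosity n equals the volume pumped:
        Q * t = pi * r^2 * n * b   ->   r = sqrt(Q*t / (pi * n * b))
    with Q in ft^3/day and t in days."""
    return math.sqrt(Q * t / (math.pi * n * b))

# Hypothetical well: 50,000 ft^3/day pumped for a 5-year time of
# travel, porosity 0.25, 100-ft-thick confined aquifer.
r = calculated_fixed_radius(Q=50_000.0, t=5 * 365.25, n=0.25, b=100.0)
print(round(r))  # on the order of 1,000 ft, far beyond a 100-ft radius
```

Even with modest hypothetical inputs, the result dwarfs a 100-foot arbitrary radius, consistent with the abstract's conclusion.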

  5. Capturing a Commander's decision making style

    NASA Astrophysics Data System (ADS)

    Santos, Eugene; Nguyen, Hien; Russell, Jacob; Kim, Keumjoo; Veenhuis, Luke; Boparai, Ramnjit; Stautland, Thomas Kristoffer

    2017-05-01

    A Commander's decision making style represents how he weighs his choices and evaluates possible solutions with regards to his goals. Specifically, in the naval warfare domain, it relates the way he processes a large amount of information in dynamic, uncertain environments, allocates resources, and chooses appropriate actions to pursue. In this paper, we describe an approach to capture a Commander's decision style by creating a cognitive model that captures his decisionmaking process and evaluate this model using a set of scenarios using an online naval warfare simulation game. In this model, we use the Commander's past behaviors and generalize Commander's actions across multiple problems and multiple decision making sequences in order to recommend actions to a Commander in a manner that he may have taken. Our approach builds upon the Double Transition Model to represent the Commander's focus and beliefs to estimate his cognitive state. Each cognitive state reflects a stage in a Commander's decision making process, each action reflects the tasks that he has taken to move himself closer to a final decision, and the reward reflects how close he is to achieving his goal. We then use inverse reinforcement learning to compute a reward for each of the Commander's actions. These rewards and cognitive states are used to compare between different styles of decision making. We construct a set of scenarios in the game where rational, intuitive and spontaneous decision making styles will be evaluated.

  6. Dynamical spreading of small bodies in 1:1 resonance with planets by the diurnal Yarkovsky effect

    NASA Astrophysics Data System (ADS)

    Wang, Xuefeng; Hou, Xiyun

    2017-10-01

A simple model is introduced to describe the inherent dynamics of Trojans in the presence of the diurnal Yarkovsky effect. For different spin statuses, the orbital elements of the Trojans (mainly semimajor axis, eccentricity and inclination) undergo different variations. The variation rate is generally very small, but the total variation of the semimajor axis or the orbit eccentricity over the age of the Solar system may be large enough to send small Trojans out of the regular region (or, vice versa, to capture small bodies in the regular region). In order to demonstrate the analytical analysis, we first carry out numerical simulations in a simple model, and then generalize these to two 'real' systems, namely the Sun-Jupiter system and the Sun-Earth system. In the Sun-Jupiter system, where the motion of Trojans is regular, the Yarkovsky effect gradually alters the libration width or the orbit eccentricity, forcing the Trojan to move from regular regions to chaotic regions, where chaos may eventually cause it to escape. In the Sun-Earth system, where the motion of Trojans is generally chaotic, our limited numerical simulations indicate that the Yarkovsky effect is negligible for Trojans of 100 m in size, and even for larger ones. The Yarkovsky effect on small bodies captured in other 1:1 resonance orbits is also briefly discussed.
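
    The size dependence of the cumulative drift can be illustrated with the standard inverse-diameter scaling of the Yarkovsky drift rate. The normalization constant below is hypothetical and purely illustrative; real rates depend on spin state, obliquity, and thermal properties:

```python
def yarkovsky_drift(D_m, da_dt_1km_au_per_gyr=2e-4):
    """Diurnal Yarkovsky semimajor-axis drift rate (AU/Gyr), scaled
    inversely with diameter (da/dt ~ 1/D), normalized to a
    hypothetical rate for a 1-km body."""
    return da_dt_1km_au_per_gyr * (1000.0 / D_m)

def drift_over_age(D_m, age_gyr=4.5):
    """Total semimajor-axis change over the age of the Solar system,
    assuming a constant drift rate."""
    return yarkovsky_drift(D_m) * age_gyr

# A 100-m Trojan drifts 10x farther than a 1-km one over the same span,
# which is why only the small bodies are walked out of (or into) the
# regular region.
print(drift_over_age(100.0))
print(drift_over_age(1000.0))
```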

  7. A generalized partially linear mean-covariance regression model for longitudinal proportional data, with applications to the analysis of quality of life data from cancer clinical trials.

    PubMed

    Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng

    2017-05-30

Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses or the probability function of the boundary values, and it can capture dynamic effects of time or other variables of interest on both the mean and the covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
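
    The Cholesky device used here has a convenient interpretation: for a within-subject covariance matrix Σ there is a unit lower-triangular T and diagonal D with T Σ Tᵀ = D, so modelling Σ reduces to modelling unconstrained entries of T and D. A generic linear-algebra sketch (not the authors' estimation code), recovering T and D from the standard Cholesky factor:

```python
import numpy as np

def modified_cholesky(sigma):
    """Modified Cholesky decomposition: return (T, D) with T unit
    lower-triangular and D diagonal such that T @ sigma @ T.T = D."""
    C = np.linalg.cholesky(sigma)   # sigma = C @ C.T, C lower-triangular
    d = np.diag(C)
    L = C / d                       # unit lower-triangular: sigma = L @ diag(d^2) @ L.T
    T = np.linalg.inv(L)
    D = np.diag(d ** 2)
    return T, D

# A small hypothetical within-subject covariance matrix.
sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.6],
                  [0.3, 0.6, 1.2]])
T, D = modified_cholesky(sigma)
print(np.allclose(T @ sigma @ T.T, D))  # True
```

Because T and D are unconstrained (beyond D > 0), regression structure can be placed on their entries without risking a non-positive-definite covariance estimate.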

  8. The Capture of Interstellar Dust: The Pure Poynting-Robertson Case

    NASA Technical Reports Server (NTRS)

    Jackson, A. A.

    2001-01-01

The Ulysses and Galileo spacecraft have discovered interstellar dust particles entering the solar system. In general, particle trajectories not altered by Lorentz forces or radiation pressure should encounter the Sun on open orbits. Under Newtonian forces alone these particles return to the interstellar medium. Dissipative forces, such as Poynting-Robertson (PR) and corpuscular drag, and non-dissipative Lorentz forces, can modify open orbits to become closed. In particular, it is possible for the orbits of particles that pass close to the Sun to become closed due to PR drag. Further, solar irradiation will cause modification of the size of the dust particle by evaporation. The combination of these processes gives rise to a class of capture orbits and bound orbits with evaporation. Considering only the case of pure PR drag, a minimum impact parameter is derived for initial capture by Poynting-Robertson drag. Orbits in the solar radiation field are computed numerically, accounting for evaporation, with optical and material properties modeled for idealized interstellar particles. The properties of this kind of particle capture are discussed for the Sun but are applicable to other stars.
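
    The strength of radiation forces on a grain is conventionally summarized by β, the ratio of radiation pressure to solar gravity, together with the PR decay timescale. The sketch below uses the standard order-of-magnitude scalings from the literature (Burns, Lamy & Soter-style coefficients), applied to a hypothetical grain, and is illustrative only:

```python
def beta(radius_cm, density_gcc, q_pr=1.0):
    """Ratio of radiation pressure to solar gravity for a spherical
    grain: beta ~ 5.7e-5 * Q_pr / (rho * s), rho in g/cm^3, s in cm."""
    return 5.7e-5 * q_pr / (density_gcc * radius_cm)

def pr_decay_time_yr(r_au, b):
    """Approximate time (years) for PR drag to collapse a circular
    orbit of radius r_au: t ~ 400 * r^2 / beta."""
    return 400.0 * r_au ** 2 / b

# Hypothetical 1-micron silicate grain (radius 1e-4 cm, 2.5 g/cm^3):
# bound (beta < 1), spiraling in from 5 AU on a ~10^4-10^5 yr timescale.
b = beta(radius_cm=1e-4, density_gcc=2.5)
t_pr = pr_decay_time_yr(5.0, b)
print(b, t_pr)
```

Grains with β > 1 are simply blown out; it is the bound, β < 1 population for which the PR capture orbits discussed in the abstract arise.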

  9. Potential effects of groundwater pumping on water levels, phreatophytes, and spring discharges in Spring and Snake Valleys, White Pine County, Nevada, and adjacent areas in Nevada and Utah

    USGS Publications Warehouse

    Halford, Keith J.; Plume, Russell W.

    2011-01-01

    Assessing hydrologic effects of developing groundwater supplies in Snake Valley required numerical, groundwater-flow models to estimate the timing and magnitude of capture from streams, springs, wetlands, and phreatophytes. Estimating general water-table decline also required groundwater simulation. The hydraulic conductivity of basin fill and transmissivity of basement-rock distributions in Spring and Snake Valleys were refined by calibrating a steady state, three-dimensional, MODFLOW model of the carbonate-rock province to predevelopment conditions. Hydraulic properties and boundary conditions were defined primarily from the Regional Aquifer-System Analysis (RASA) model except in Spring and Snake Valleys. This locally refined model was referred to as the Great Basin National Park calibration (GBNP-C) model. Groundwater discharges from phreatophyte areas and springs in Spring and Snake Valleys were simulated as specified discharges in the GBNP-C model. These discharges equaled mapped rates and measured discharges, respectively. Recharge, hydraulic conductivity, and transmissivity were distributed throughout Spring and Snake Valleys with pilot points and interpolated to model cells with kriging in geologically similar areas. Transmissivity of the basement rocks was estimated because thickness is correlated poorly with transmissivity. Transmissivity estimates were constrained by aquifer-test results in basin-fill and carbonate-rock aquifers. Recharge, hydraulic conductivity, and transmissivity distributions of the GBNP-C model were estimated by minimizing a weighted composite, sum-of-squares objective function that included measurement and Tikhonov regularization observations. Tikhonov regularization observations were equations that defined preferred relations between the pilot points. 
Measured water levels, water levels that were simulated with RASA, depth-to-water beneath distributed groundwater and spring discharges, land-surface altitudes, spring discharge at Fish Springs, and changes in discharge on selected creek reaches were measurement observations. The effects of uncertain distributed groundwater-discharge estimates in Spring and Snake Valleys on transmissivity estimates were bounded with alternative models. Annual distributed groundwater discharges from Spring and Snake Valleys in the alternative models totaled 151,000 and 227,000 acre-feet, respectively, and represented 20 percent differences from the 187,000 acre-feet per year discharged from the GBNP-C model. Transmissivity estimates in the basin fill between Baker and Big Springs changed less than 50 percent between the two alternative models. Potential effects of pumping from Snake Valley were estimated with the Great Basin National Park predictive (GBNP-P) model, which is a transient groundwater-flow model. The hydraulic conductivity of basin fill and transmissivity of basement rock were the GBNP-C model distributions. Specific yields were defined from aquifer tests. Captures of distributed groundwater and spring discharges were simulated in the GBNP-P model using a combination of well and drain packages in MODFLOW. Simulated groundwater captures could not exceed measured groundwater-discharge rates. Four groundwater-development scenarios were investigated where total annual withdrawals ranged from 10,000 to 50,000 acre-feet during a 200-year pumping period. Four additional scenarios also were simulated that added the effects of existing pumping in Snake Valley. Potential groundwater pumping locations were limited to nine proposed points of diversion. Results are presented as maps of groundwater capture and drawdown, time series of drawdowns and discharges from selected wells, and time series of discharge reductions from selected springs and control volumes.
Simulated drawdown propagation was attenuated where groundwater discharge could be captured. General patterns of groundwater capture and water-table declines were similar for all scenarios. Simulated drawdowns greater than 1 ft propagated outside of Spring and Snake Valleys after 200 years of pumping in all scenarios.
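
    Drawdown propagation of the kind mapped in these scenarios can be sketched with the Cooper-Jacob approximation to the Theis solution, a textbook confined-aquifer formula. The well and aquifer parameters below are hypothetical, not from the GBNP MODFLOW models:

```python
import math

def cooper_jacob_drawdown(Q, T, S, r, t):
    """Drawdown s (ft) at radius r (ft) after time t (days) for
    pumping rate Q (ft^3/day), transmissivity T (ft^2/day), and
    storativity S. Valid for small u = r^2 * S / (4 * T * t):
        s ~ (Q / (4 pi T)) * ln(2.25 T t / (r^2 S))."""
    u = r ** 2 * S / (4.0 * T * t)
    if u > 0.05:
        raise ValueError("Cooper-Jacob approximation invalid: u too large")
    return (Q / (4.0 * math.pi * T)) * math.log(2.25 * T * t / (r ** 2 * S))

# Hypothetical basin-fill well after one year of pumping: drawdown
# shrinks with distance and grows only logarithmically with time.
s_near = cooper_jacob_drawdown(Q=100_000.0, T=5_000.0, S=1e-4, r=500.0, t=365.0)
s_far  = cooper_jacob_drawdown(Q=100_000.0, T=5_000.0, S=1e-4, r=5_000.0, t=365.0)
print(round(s_near, 2), round(s_far, 2))
```

The logarithmic spreading of the cone of depression, and its attenuation wherever discharge can be captured instead, is the qualitative behavior the scenario maps display.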

  10. Morphological Idiosyncracies in Classical Arabic: Evidence Favoring Lexical Representations over Rules.

    ERIC Educational Resources Information Center

    Miller, Ann M.

    A lexical representational analysis of Classical Arabic is proposed that captures a generalization that McCarthy's (1979, 1981) autosegmental analysis misses, namely that idiosyncratic characteristics of the derivational binyanim in Arabic are lexical, not morphological. This analysis captures that generalization by treating all the idiosyncracies…

  11. Architecture of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.

    1986-01-01

    Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.

  12. Architecture of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.

    1989-01-01

    Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.

  13. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elasto-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  14. Performance evaluation of automated manufacturing systems using generalized stochastic Petri Nets. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Al-Jaar, Robert Y.; Desrochers, Alan A.

    1989-01-01

The main objective of this research is to develop a generic modeling methodology with a flexible and modular framework to aid in the design and performance evaluation of integrated manufacturing systems using a unified model. After a thorough examination of the available modeling methods, the Petri Net approach was adopted. The concurrent and asynchronous nature of manufacturing systems is easily captured by Petri Net models. Three basic modules were developed: machine, buffer, and Decision Making Unit. The machine and buffer modules are used for modeling transfer lines and production networks. The Decision Making Unit models the functions of a computer node in a complex Decision Making Unit Architecture. The underlying model is a Generalized Stochastic Petri Net (GSPN) that can be used for performance evaluation and structural analysis. GSPNs were chosen because they help manage the complexity of modeling large manufacturing systems. There is no need to enumerate all the possible states of the Markov Chain, since they are automatically generated from the GSPN model.
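
    The flavour of a GSPN machine-buffer module can be sketched as a toy stochastic Petri net: tokens mark places, and enabled timed transitions race with exponentially distributed delays. This illustrative toy is not the thesis's methodology; the place names and rates are invented:

```python
import random

def simulate_gspn(seed=1, max_firings=10_000):
    """Toy GSPN: a machine takes parts from an input buffer
    (transition 'start', rate 2.0) and deposits finished parts
    (transition 'finish', rate 1.0). Tokens mark the places."""
    random.seed(seed)
    marking = {"buffer": 3, "machine_idle": 1, "machine_busy": 0, "done": 0}
    rates = {"start": 2.0, "finish": 1.0}
    elapsed = 0.0
    for _ in range(max_firings):
        enabled = []
        if marking["buffer"] > 0 and marking["machine_idle"] > 0:
            enabled.append("start")
        if marking["machine_busy"] > 0:
            enabled.append("finish")
        if not enabled:
            break  # dead marking: nothing left to fire
        # race condition: each enabled transition samples an exponential
        # delay; the smallest delay fires and tokens move.
        delays = {tr: random.expovariate(rates[tr]) for tr in enabled}
        tr = min(delays, key=delays.get)
        elapsed += delays[tr]
        if tr == "start":
            marking["buffer"] -= 1
            marking["machine_idle"] -= 1
            marking["machine_busy"] += 1
        else:
            marking["machine_busy"] -= 1
            marking["machine_idle"] += 1
            marking["done"] += 1
    return marking, elapsed

final, t = simulate_gspn()
print(final)  # all 3 parts end up in "done"
```

The exponential firing delays are exactly what makes the reachable markings of a GSPN generate a continuous-time Markov chain automatically, as the abstract notes.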

  15. Simulation of tropospheric ozone with MOZART-2: An evaluation study over East Asia

    NASA Astrophysics Data System (ADS)

    Liu, Qianxia; Zhang, Meigen; Wang, Bin

    2005-07-01

Climate changes induced by human activities have attracted a great amount of attention. Accordingly, a coupled system of an atmospheric chemistry model and a climate model is greatly needed in China to better understand the interaction between atmospheric chemical components and the climate. As a first step toward this coupling goal, the three-dimensional global atmospheric chemistry transport model MOZART-2 (the global Model of Ozone and Related Chemical Tracers, version 2), coupled with CAM2 (the Community Atmosphere Model, version 2), is set up, and the model results are compared against observations obtained in East Asia in order to evaluate model performance. Comparison of simulated ozone mixing ratios with ground-level observations at Minamitorishima and Ryori and with ozonesonde data at Naha and Tateno in Japan shows that observed ozone concentrations are reproduced reasonably well at Minamitorishima, while at Ryori they tend to be slightly overestimated in winter and autumn and slightly underestimated in summer. The model also captures the general features of surface CO seasonal variations quite well, although it underestimates CO levels at both Minamitorishima and Ryori; the underestimation is primarily associated with the emission inventory adopted in this study. Compared with the ozonesonde data, the vertical gradient and magnitude of ozone are simulated reasonably well, with a slight overestimation in winter, especially in the upper troposphere. The model also generally captures the seasonal, latitudinal, and altitudinal variations in ozone concentration. Analysis indicates that the underestimation of tropopause height in February contributes to the overestimation of winter ozone in the upper and middle troposphere at Tateno.
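
    Model-versus-observation comparisons of this kind typically reduce to a few summary statistics. A generic sketch with made-up monthly-mean values mimicking the pattern the abstract describes (slight winter overestimation, slight summer underestimation); the numbers are not MOZART-2 output:

```python
import math

def mean_bias(sim, obs):
    """Mean of (simulated - observed); sign shows over/underestimation."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

# Hypothetical monthly-mean surface ozone (ppbv), Jan..Dec.
obs = [45, 48, 52, 55, 50, 42, 35, 33, 38, 44, 46, 44]
sim = [49, 51, 54, 55, 49, 40, 31, 30, 36, 45, 49, 48]
print(round(mean_bias(sim, obs), 2), round(rmse(sim, obs), 2))
```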

  16. Age mediation of frontoparietal activation during visual feature search.

    PubMed

    Madden, David J; Parks, Emily L; Davis, Simon W; Diaz, Michele T; Potter, Guy G; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto

    2014-11-15

    Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19-29 years of age) and 21 older adults (60-87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. Copyright © 2014 Elsevier Inc. All rights reserved.
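
    The mediation logic tested here, whether frontal activation mediates the age effect on capture or age mediates the activation effect, can be sketched with ordinary least-squares regressions in the Baron-Kenny style. The data below are synthetic and purely illustrative (generated so that capture depends on age only through activation):

```python
import numpy as np

def ols_coef(X, y):
    """OLS coefficients (intercept first) via least squares."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    if X.shape[0] != len(y):
        X = X.T
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 500
age = rng.normal(0, 1, n)                       # standardized age
frontal = 0.6 * age + rng.normal(0, 1, n)       # activation rises with age
capture = 0.5 * frontal + rng.normal(0, 1, n)   # capture driven by activation

# Total effect of age on capture...
total = ols_coef(age, capture)[1]
# ...shrinks toward zero once the mediator (frontal) is controlled for,
# the signature of mediation.
direct = ols_coef(np.column_stack([age, frontal]), capture)[1]
print(round(total, 3), round(direct, 3))
```

Formal mediation tests add standard errors for the indirect effect (e.g. bootstrap), but the regression comparison above is the core of the analysis.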

  17. Atmospheric Chemistry of the Carbon Capture Solvent Monoethanolamine (MEA): A Theoretical Study

    NASA Astrophysics Data System (ADS)

    da Silva, G.

    2012-12-01

    The development of amine solvent technology for carbon capture and storage has the potential to create large new sources of amines to the atmosphere. The atmospheric chemistry of amines generally, and carbon capture solvents in particular, is not well understood. We have used quantum chemistry and master equation modelling to investigate the OH radical initiated oxidation of monoethanolamine (NH2CH2CH2OH), or MEA, the archetypal carbon capture solvent. The OH radical can abstract H atoms from either carbon atom in MEA, with negative reaction barriers. Treating these reactions with a two transition state model can reliably reproduce experimental rate constants and their temperature dependence. The products of the MEA + OH reaction, the NH2CHCH2OH and NH2CH2CHOH radicals, undergo subsequent reaction with O2, which has also been studied. In both cases chemically activated reactions that bypass peroxyl radical intermediates dominate, producing 2-iminoethanol + HO2 (from NH2CHCH2OH) or aminoacetaldehyde + HO2 (from NH2CH2CHOH), making the process HOx-neutral. The operation of chemically activated reaction mechanisms has implications for the ozone forming potential of MEA. The products of MEA photo-oxidation are proposed as important species in the formation of both organic and inorganic secondary aerosols, particularly through uptake of the imine 2-iminoethanol and subsequent hydrolysis to ammonia and glycolaldehyde.

  18. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    PubMed

    Whittington, Jesse; Sawaya, Michael A

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. 
The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.
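
A core ingredient of spatial capture-recapture is a detection function that declines with the distance between a trap and an animal's home-range centre. A minimal sketch of the commonly used half-normal form follows; the parameter values are illustrative, not those fitted to the Banff grizzly bear data.

```python
import numpy as np

def halfnormal_detection(d, g0=0.2, sigma=1.5):
    """Half-normal detection function: probability of detecting an animal at
    a trap located distance d (km) from its home-range centre. g0 is the
    detection probability at d = 0; sigma sets the spatial scale of movement."""
    return g0 * np.exp(-d**2 / (2.0 * sigma**2))

# Detection probability falls off smoothly with distance from the centre
distances = np.array([0.0, 1.0, 3.0, 6.0])
p = halfnormal_detection(distances)
```

Because distance to the home-range centre absorbs much of the heterogeneity in individual detection, this structure is what gives spatial models their better fit and less biased survival and recruitment estimates in the comparison above.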

  19. From blackbirds to black holes: Investigating capture-recapture methods for time domain astronomy

    NASA Astrophysics Data System (ADS)

    Laycock, Silas G. T.

    2017-07-01

    In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biased view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence must be estimated from sparse and incomplete data. The class of methods termed capture-recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of capture-recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (neutron stars or black holes accreting from a stellar companion), under a range of observing strategies. We first generate realistic light curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods (the Lincoln-Petersen and Schnabel estimators and related techniques) are compared with newer methods implemented in the Rcapture package. A general exponential model based on the radioactive decay law is introduced and demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle in a fraction (10-50%) of the observing visits required to discover all the sources in the simulation. Capture-recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostatistics community can readily be called from within Python.
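
The Lincoln-Petersen family of estimators referred to above is simple to state. A minimal sketch of Chapman's bias-corrected variant for a two-epoch survey follows; the example counts are invented, not from the paper's simulations.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size.
    n1: sources detected in the first epoch (the "marked" sample),
    n2: sources detected in the second epoch,
    m2: sources detected in both epochs (the "recaptures")."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# e.g. 40 transients seen in survey 1, 35 in survey 2, 14 seen in both
N_hat = chapman_estimate(40, 35, 14)
```

The fewer overlaps there are between the two epochs, the larger the inferred unseen population; with complete overlap the estimate collapses to the number actually observed.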

  20. New quasibound states of the compound nucleus in α-particle capture by the nucleus

    NASA Astrophysics Data System (ADS)

    Maydanyuk, Sergei P.; Zhang, Peng-Ming; Zou, Li-Ping

    2017-07-01

    We generalize Gamow's theory of nuclear decay and capture, which is based on tunneling through the barrier and internal oscillations inside the nucleus. In our formalism an additional factor is obtained which describes the distribution of the wave function of the α particle inside the nuclear region. We discover new, maximally stable states (called quasibound states) of the compound nucleus (CN) formed during the capture of an α particle by the nucleus. With a simple example, we explain why these states cannot appear in traditional calculations of α-capture cross sections based on monotonic barrier penetrabilities, but do appear in a complete description of the evolution of the CN. Our result is obtained by a complete description of the CN evolution, which has the advantages of (1) a clear picture of the formation of the CN and its disintegration, (2) a detailed quantum description of the CN, (3) tests of the calculated amplitudes based on quantum mechanics (not realized in other approaches), and (4) high accuracy of calculations (not achieved in other approaches). These features are illustrated with the capture reaction α + 44Ca. We predict quasibound energy levels and determine fusion probabilities for this reaction. The difference between our approach and the theory of quasistationary states with complex energies, as applied to α capture, is also discussed. We show (1) that that theory, in contrast with our formalism, does not provide calculations of the α-capture cross section (within modern models of α capture), and (2) that the two approaches describe different states of the α-capture process (for the same α-nucleus potential).

  1. A comparison of hematology, plasma chemistry, and injuries in Hickory shad (Alosa mediocris) captured by electrofishing or angling during a spawning run.

    PubMed

    Matsche, Mark A; Rosemary, Kevin; Stence, Charles P

    2017-09-01

    Declines in Hickory shad (Alosa mediocris) populations in Chesapeake Bay have prompted efforts at captive propagation of wild broodfish for stock enhancement and research. The objectives of this study were to evaluate injuries sustained, and immediate and delayed (24 hours) effects on blood variables related to 2 fish capturing methods (electrofishing [EF] and angling). Blood specimens were collected from fish immediately following capture by EF and angling (n = 40 per sex and capture method) from the Susquehanna River (MD, USA). Additional fish (n = 25 per sex and capture method) were collected on the same day, placed in holding tanks and bled 24 hours following capture. Blood data that were non-Gaussian in distribution were transformed (Box-Cox), and effects of sex, method of capture, and holding time were tested using ANOVA with general linear models. Fish were evaluated for injuries by necropsy and radiography. Sex-specific differences were observed for RBC, HGB, PCV, MCH, MCHC, total proteins (TP), globulins, glucose, calcium, AST, CK, and lactate, while RBC, HGB, PCV, MCV, MCH, MCHC, TP, albumin, globulins, glucose, potassium, sodium, AST, CK, and lactate differed significantly by fish capturing method. Electrofishing may have induced greater disruption in blood variables, but mortality (4%) was not significantly different compared to angling. Electrofishing for Hickory shad using a constant DC voltage resulted in numerous hematologic and biochemical changes, with no additional injuries or deaths compared to angling. Capture method must be considered when evaluating fish condition, and blood variables should be partitioned by sex during spawning season. © 2017 American Society for Veterinary Clinical Pathology.
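
The Box-Cox step in the workflow above can be sketched as follows. This is a generic illustration on synthetic data, not the study's analysis; the profile-likelihood grid search is one standard way to choose the transform parameter.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox power transform for strictly positive data:
    (y**lam - 1)/lam for lam != 0, log(y) for lam == 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def best_lambda(y, grid=np.linspace(-2, 2, 81)):
    """Pick lambda by maximizing the Box-Cox profile log-likelihood on a grid."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    def loglik(lam):
        z = boxcox(y, lam)
        return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()
    return max(grid, key=loglik)

# Right-skewed synthetic "blood variable"; for lognormal data the selected
# lambda should sit near 0 (i.e. a log transform).
skewed = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.8, size=500)
lam = best_lambda(skewed)
```

After transforming each non-Gaussian variable this way, group effects (sex, capture method, holding time) can be tested with ordinary linear-model ANOVA, as in the study.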

  2. Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Kogelbauer, Florian; Haller, George

    2018-06-01

    We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction of the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.

  3. A biphasic model for bleeding in soft tissue

    NASA Astrophysics Data System (ADS)

    Chang, Yi-Jui; Chong, Kwitae; Eldredge, Jeff D.; Teran, Joseph; Benharash, Peyman; Dutson, Erik

    2017-11-01

    The modeling of blood passing through soft tissues in the body is important for medical applications. The current study aims to capture the effect of tissue swelling and the transport of blood under bleeding or hemorrhaging conditions. The soft tissue is considered as a non-static poro-hyperelastic material with liquid-filled voids. A biphasic formulation (effectively, a generalization of Darcy's law) is utilized, treating the phases as occupying fractions of the same volume. The interaction between phases is captured through a Stokes-like friction force on their relative velocities and a pressure that penalizes deviations from volume fractions summing to unity. The soft tissue is modeled as a hyperelastic material with a typical J-shaped stress-strain curve, while blood is considered as a Newtonian fluid. The method of Smoothed Particle Hydrodynamics is used to discretize the conservation equations, based on the ease of treating free surfaces in the liquid. Simulations of swelling under acute hemorrhage and of draining under gravity and compression will be demonstrated. Ongoing progress in modeling of organ tissues under injuries and surgical conditions will be discussed.

  4. A system-level model for the microbial regulatory genome.

    PubMed

    Brooks, Aaron N; Reiss, David J; Allard, Antoine; Wu, Wei-Ju; Salvanha, Diego M; Plaisier, Christopher L; Chandrasekaran, Sriram; Pan, Min; Kaur, Amardeep; Baliga, Nitin S

    2014-07-15

    Microbes can tailor transcriptional responses to diverse environmental challenges despite having streamlined genomes and a limited number of regulators. Here, we present data-driven models that capture the dynamic interplay of the environment and genome-encoded regulatory programs of two types of prokaryotes: Escherichia coli (a bacterium) and Halobacterium salinarum (an archaeon). The models reveal how the genome-wide distributions of cis-acting gene regulatory elements and the conditional influences of transcription factors at each of those elements encode programs for eliciting a wide array of environment-specific responses. We demonstrate how these programs partition transcriptional regulation of genes within regulons and operons to re-organize gene-gene functional associations in each environment. The models capture fitness-relevant co-regulation by different transcriptional control mechanisms acting across the entire genome, to define a generalized, system-level organizing principle for prokaryotic gene regulatory networks that goes well beyond existing paradigms of gene regulation. An online resource (http://egrin2.systemsbiology.net) has been developed to facilitate multiscale exploration of conditional gene regulation in the two prokaryotes. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.

  5. Carbon Capture and Utilization in the Industrial Sector.

    PubMed

    Psarras, Peter C; Comello, Stephen; Bains, Praveen; Charoensawadpong, Panunya; Reichelstein, Stefan; Wilcox, Jennifer

    2017-10-03

    The fabrication and manufacturing processes of industrial commodities such as iron, glass, and cement are carbon-intensive, accounting for 23% of global CO2 emissions. As a climate mitigation strategy, CO2 capture from the flue gases of industrial processes, much like capture in the power sector, has not experienced wide adoption given its high associated costs. However, some industrial processes with relatively high CO2 flue-gas concentrations may be viable candidates to cost-competitively supply CO2 for utilization purposes (e.g., polymer manufacturing). This work develops a methodology that determines the levelized cost ($/tCO2) of separating, compressing, and transporting carbon dioxide. A top-down model determines the cost of separating and compressing CO2 across 18 industrial processes. Further, the study calculates the cost of transporting CO2 via pipeline and tanker truck to appropriately paired sinks using a bottom-up cost model and a geo-referencing approach. The results show that truck transportation is generally the low-cost alternative given the relatively small volumes (ca. 100 ktCO2/a). We apply our methodology to a regional case study in Pennsylvania, which shows steel and cement manufacturing paired to suitable sinks as having the lowest levelized cost of capture, compression, and transportation.
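
The levelized-cost idea can be sketched as annualized capital plus operating cost per tonne captured. The capital recovery factor below is the standard annuity formula; the discount rate, lifetime and all dollar figures are illustrative assumptions, not the paper's values.

```python
def levelized_cost_per_tonne(capex, opex_per_year, tonnes_per_year,
                             discount_rate=0.08, lifetime_years=20):
    """Levelized cost ($/tCO2): annualize capital with a capital recovery
    factor (CRF), add yearly operating cost, divide by yearly tonnes captured.
    All inputs here are illustrative, not taken from the paper."""
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years /
           ((1 + discount_rate) ** lifetime_years - 1))
    return (capex * crf + opex_per_year) / tonnes_per_year

# e.g. a $50M capture-and-compression unit, $4M/a operating cost,
# 100 ktCO2/a captured (the volume scale cited in the abstract)
cost = levelized_cost_per_tonne(50e6, 4e6, 100e3)
```

The same per-tonne framing lets separation, compression and transport costs from different models be added and compared across the 18 industrial processes.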

  6. Using data from an encounter sampler to model fish dispersal

    USGS Publications Warehouse

    Obaza, A.; DeAngelis, D.L.; Trexler, J.C.

    2011-01-01

    A method to estimate speed of free-ranging fishes using a passive sampling device is described and illustrated with data from the Everglades, U.S.A. Catch per unit effort (CPUE) from minnow traps embedded in drift fences was treated as an encounter rate and used to estimate speed, when combined with an independent estimate of density obtained by use of throw traps that enclose 1 m² of marsh habitat. Underwater video was used to evaluate capture efficiency and species-specific bias of minnow traps and two sampling studies were used to estimate trap saturation and diel-movement patterns; these results were used to optimize sampling and derive correction factors to adjust species-specific encounter rates for bias and capture efficiency. Sailfin mollies Poecilia latipinna displayed a high frequency of escape from traps, whereas eastern mosquitofish Gambusia holbrooki were most likely to avoid a trap once they encountered it; dollar sunfish Lepomis marginatus were least likely to avoid the trap once they encountered it or to escape once they were captured. Length of sampling and time of day affected CPUE; fishes generally had a very low retention rate over a 24 h sample time and only the Everglades pygmy sunfish Elassoma evergladei were commonly captured at night. Dispersal speed of fishes in the Florida Everglades, U.S.A., was shown to vary seasonally and among species, ranging from 0.05 to 0.15 m s⁻¹ for small poeciliids and fundulids to 0.1 to 1.8 m s⁻¹ for L. marginatus. Speed was generally highest late in the wet season and lowest in the dry season, possibly tied to dispersal behaviours linked to finding and remaining in dry-season refuges. These speed estimates can be used to estimate the diffusive movement rate, which is commonly employed in spatial ecological models.
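
The inversion at the heart of this method can be sketched with a simple ideal-gas encounter model: encounter rate E = c * D * W * v for density D, fence frontage W, speed v, and a geometry constant c. This is a simplified stand-in for the paper's estimator; the constant 2/pi (uniformly random movement directions), the efficiency correction, and all numbers are assumptions for illustration.

```python
def speed_from_encounters(cpue, density, frontage, efficiency=1.0,
                          c=2.0 / 3.14159):
    """Invert an ideal-gas encounter model, E = c * D * W * v, for speed v.
    cpue: corrected encounters per trap per second; density: fish per m^2;
    frontage: effective drift-fence width (m); efficiency: capture probability
    per encounter (from the video-derived correction factors); c: geometry
    constant, 2/pi for uniformly random headings. Illustrative sketch only."""
    return cpue / (efficiency * c * density * frontage)

# e.g. density 5 fish/m^2 (throw traps), 2 m of fence frontage, 60% efficiency
v_est = speed_from_encounters(0.4, 5.0, 2.0, efficiency=0.6)
```

The key design point is that CPUE alone confounds abundance and movement; only with the independent throw-trap density can the encounter rate be converted into a speed.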

  7. Mesoscale eddies in a high resolution OGCM and coupled ocean-atmosphere GCM

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Liu, H.; Lin, P.

    2017-12-01

    The present study describes high-resolution climate modeling efforts, including oceanic, atmospheric and coupled general circulation models (GCMs), at the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics (IAP). The high-resolution OGCM is established based on the latest version of the LASG/IAP Climate system Ocean Model (LICOM2.1), with the horizontal and vertical resolutions increased to 1/10° and 55 layers, respectively. Forced by surface fluxes from reanalysis and observed data, the model has been integrated for more than 80 model years. Compared with the simulation of the coarse-resolution OGCM, the eddy-resolving OGCM not only better simulates the spatial-temporal features of mesoscale eddies and the paths and positions of the western boundary currents but also reproduces the large meander of the Kuroshio Current and its interannual variability. The complex structures of the equatorial Pacific currents and of the currents in the coastal ocean of China are also better captured owing to the increased horizontal and vertical resolution. The high-resolution OGCM was then coupled to NCAR CAM4 at 25 km resolution, in which configuration the mesoscale air-sea interaction processes are better captured.

  8. Capacity Analysis of Multihop Packet Radio Networks under a General Class of Channel Access Protocols and Capture Models

    DTIC Science & Technology

    1987-03-01

    Gitman in [Gitm75]. The system considered consisted of a set of clusters (each with an infinite population of users) that communicate with a central...30, no. 5, pp. 985-995, May 1982. [Gitm75] I. Gitman, "On the Capacity of Slotted ALOHA Networks and Some Design Problems," IEEE Trans. Comm., vol

  9. Conflict: Operational Realism versus Analytical Rigor in Defense Modeling and Simulation

    DTIC Science & Technology

    2012-06-14

    Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Boston: Houghton Mifflin Company, 2002. [7] R. T. Johnson, G...experimentation? In order for an experiment to be considered rigorous, and the results valid, the experiment should be designed using established...addition to the interview, the pilots were administered a written survey, designed to capture their reactions regarding the level of realism present

  10. The Emerging Importance of Business Process Standards in the Federal Government

    DTIC Science & Technology

    2006-02-23

    delivers enough value for its commercialization into the general industry. Today, we are seeing standards such as SOA, BPMN and BPEL hit that...Process Modeling Notation (BPMN) and the Business Process Execution Language (BPEL). BPMN provides a standard representation for capturing and...execution. The combination of BPMN and BPEL offers organizations the potential to standardize processes in a distributed environment, enabling

  11. Design of protonation constant measurement apparatus for carbon dioxide capturing solvents

    NASA Astrophysics Data System (ADS)

    Ma'mun, S.; Amelia, E.; Rahmat, V.; Alwani, D. R.; Kurniawan, D.

    2016-11-01

    The global warming phenomenon has led to world climate change, caused by high concentrations of greenhouse gases (GHG), e.g. carbon dioxide (CO2), in the atmosphere. Carbon dioxide is produced in large amounts by coal-fired power plants, iron and steel production, cement production, chemical and petrochemical manufacturing, natural gas purification, and transportation. Carbon dioxide emissions continue to rise from year to year; efforts to reduce the emissions are therefore required. Amine-based absorption could be deployed for post-combustion capture. Some parameters, e.g. mass transfer coefficients and chemical equilibrium constants, are required for vapor-liquid equilibrium modeling. The protonation constant (pKa), one of those parameters, can be measured experimentally. An experimental setup to measure the pKa of CO2-capturing solvents was therefore designed and validated by measuring the pKa of acetic acid at 30 to 70 °C with a potentiometric titration method. The setup was also used to measure the pKa of MEA at 27 °C. Based on the validation results, and given the generally low vapor pressure of CO2-capturing solvents such as alkanolamines, the setup can be used to measure the pKa of CO2-capturing solvents at temperatures up to 70 °C.
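
For a weak monoprotic acid, the potentiometric approach rests on the Henderson-Hasselbalch relation: at the half-equivalence point [HA] = [A-], so the measured pH equals the pKa. A minimal sketch on a synthetic acetic-acid-like titration curve follows (not the paper's exact data-reduction procedure; the curve and equivalence volume are invented).

```python
import numpy as np

def pka_half_equivalence(v_titrant, ph, v_equiv):
    """Estimate pKa from a potentiometric titration curve: for a weak
    monoprotic acid, pH = pKa at the half-equivalence point (Henderson-
    Hasselbalch with [HA] = [A-]). Linear interpolation on the curve."""
    return float(np.interp(v_equiv / 2.0, v_titrant, ph))

# Synthetic curve for an acetic-acid-like solute, pKa ~ 4.76 near 25 degC;
# 20 mL of titrant reaches equivalence.
v = np.linspace(0.1, 19.9, 100)                 # mL NaOH added
ph_curve = 4.76 + np.log10(v / (20.0 - v))      # Henderson-Hasselbalch
pka = pka_half_equivalence(v, ph_curve, 20.0)
```

Repeating this extraction at each bath temperature yields the pKa-versus-temperature data the apparatus is designed to produce.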

  12. Trends in stratospheric ozone profiles using functional mixed models

    NASA Astrophysics Data System (ADS)

    Park, A. Y.; Guillas, S.; Petropavlovskikh, I.

    2013-05-01

    This paper is devoted to the modeling of altitude-dependent patterns of ozone variations over time. Umkehr ozone profiles (quarters of Umkehr layers) from 1978 to 2011 are investigated at two locations: Boulder (USA) and Arosa (Switzerland). The study consists of two statistical stages. First we approximate the ozone profiles in an appropriate basis. To capture the primary modes of ozone variation without losing essential information, a functional principal component analysis is performed; it penalizes roughness of the function and smooths excessive variations in the shape of the ozone profiles. As a result, data-driven basis functions are obtained. Secondly we estimate the effects of the covariates month, year (trend), the quasi-biennial oscillation, the solar cycle, the Arctic Oscillation and the El Niño-Southern Oscillation cycle on the principal component scores of the ozone profiles over time using generalized additive models. The effects are smooth functions of the covariates, represented by knot-based cubic regression splines. Finally we employ generalized additive mixed-effects models incorporating a more complex error structure that reflects the observed seasonality in the data. The analysis provides more accurate estimates of influences and trends, together with enhanced uncertainty quantification. We are able to capture fine variations in the time evolution of the profiles, such as the semi-annual oscillation. We conclude by showing the trends by altitude over Boulder. The strongly declining trends over 2003-2011 at altitudes of 32-64 hPa show that stratospheric ozone is not yet fully recovering.
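
The smooth-covariate idea can be illustrated with a stripped-down additive fit: a knot-based cubic spline in time plus annual and semi-annual harmonics, estimated by least squares on synthetic data. This is a toy stand-in for the paper's generalized additive mixed models; all signal amplitudes, knots and the noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 34, 1 / 12.0)   # ~34 years of monthly values (cf. 1978-2011)
# Synthetic "PC score": slow trend + annual + semi-annual cycles + noise
y = (-0.02 * t + 0.5 * np.sin(2 * np.pi * t) + 0.2 * np.sin(4 * np.pi * t)
     + rng.normal(0, 0.1, len(t)))

def cubic_spline_basis(x, knots):
    """Truncated-power cubic spline basis: 1, x, x^2, x^3, (x - k)_+^3."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None)**3 for k in knots]
    return np.column_stack(cols)

# Additive model: smooth spline trend in t plus harmonic seasonal terms
X = np.column_stack([cubic_spline_basis(t, knots=np.arange(4, 34, 4)),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta   # beta[14] estimates the semi-annual sine amplitude
```

The paper's models go further by penalizing the spline coefficients and adding a seasonal error structure, but the linear-in-basis estimation above is the computational core.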

  13. Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.

    2015-12-01

    Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.
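
A sub-Gaussian sample can be built as Gaussian noise modulated by a positive random scale. The point-wise scale mixture below is a simplified stand-in for the authors' generalized sub-Gaussian model, used only to show how such mixtures produce the sharper peaks and heavier tails described in the abstract; the lognormal subordinator and its parameter are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sub_gaussian_sample(n, sigma_u=0.5, rng=rng):
    """Scale-mixture sample: Gaussian noise G scaled by sqrt(U), with U a
    positive (here lognormal) random variable. Illustrative parameters."""
    G = rng.normal(0.0, 1.0, n)          # underlying Gaussian values
    U = rng.lognormal(0.0, sigma_u, n)   # positive random scale factors
    return np.sqrt(U) * G

def excess_kurtosis(x):
    """Sample excess kurtosis; 0 for a Gaussian, > 0 for peakier/heavier tails."""
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

y = sub_gaussian_sample(200_000)
k = excess_kurtosis(y)   # positive: non-Gaussian peaks and tails
```

In the authors' model the scale field is spatially correlated, which is what lets increment distributions grow peakier and heavier-tailed as the lag decreases while the variable itself stays only mildly non-Gaussian.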

  14. Highly Coarse-Grained Representations of Transmembrane Proteins

    PubMed Central

    2017-01-01

    Numerous biomolecules and biomolecular complexes, including transmembrane proteins (TMPs), are symmetric or at least have approximate symmetries. Highly coarse-grained models of such biomolecules, aiming at capturing the essential structural and dynamical properties on resolution levels coarser than the residue scale, must preserve the underlying symmetry. However, making these models obey the correct physics is in general not straightforward, especially at the highly coarse-grained resolution where multiple (∼3–30 in the current study) amino acid residues are represented by a single coarse-grained site. In this paper, we propose a simple and fast method of coarse-graining TMPs obeying this condition. The procedure involves partitioning transmembrane domains into contiguous segments of equal length along the primary sequence. For the coarsest (lowest-resolution) mappings, it turns out to be most important to satisfy the symmetry in a coarse-grained model. As the resolution is increased to capture more detail, however, it becomes gradually more important to match modular repeats in the secondary structure (such as helix-loop repeats) instead. A set of eight TMPs of various complexity, functionality, structural topology, and internal symmetry, representing different classes of TMPs (ion channels, transporters, receptors, adhesion, and invasion proteins), has been examined. The present approach can be generalized to other systems possessing exact or approximate symmetry, allowing for reliable and fast creation of multiscale, highly coarse-grained mappings of large biomolecular assemblies. PMID:28043122
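
The partitioning step described above, contiguous segments of (near-)equal length along the primary sequence, can be sketched directly. The residue and site counts below are illustrative, not taken from the eight-protein test set.

```python
def chunk_mapping(n_residues, n_sites):
    """Partition a primary sequence of n_residues into n_sites contiguous
    segments of near-equal length; returns the CG site index of each residue.
    Any remainder residues are spread over the first segments."""
    base, extra = divmod(n_residues, n_sites)
    mapping = []
    for site in range(n_sites):
        length = base + (1 if site < extra else 0)
        mapping.extend([site] * length)
    return mapping

# e.g. a 230-residue transmembrane domain mapped onto 10 CG sites
m = chunk_mapping(230, 10)   # ~23 residues per coarse-grained site
```

Applying the same segment length to each symmetry-related chain is what preserves the oligomeric symmetry at the coarsest resolutions; at finer resolutions the segment boundaries would instead be shifted to match secondary-structure repeats.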

  15. Analysis of fast boundary-integral approximations for modeling electrostatic contributions of molecular binding

    PubMed Central

    Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.

    2013-01-01

    We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561

  16. Satellite attitude motion models for capture and retrieval investigations

    NASA Technical Reports Server (NTRS)

    Cochran, John E., Jr.; Lahr, Brian S.

    1986-01-01

    The primary purpose of this research is to provide mathematical models which may be used in the investigation of various aspects of the remote capture and retrieval of uncontrolled satellites. Emphasis has been placed on analytical models; however, to verify analytical solutions, numerical integration must be used. Also, for satellites of certain types, numerical integration may be the only practical or perhaps the only possible method of solution. First, to provide a basis for analytical and numerical work, uncontrolled satellites were categorized using criteria based on: (1) orbital motions, (2) external angular momenta, (3) internal angular momenta, (4) physical characteristics, and (5) the stability of their equilibrium states. Several analytical solutions for the attitude motions of satellite models were compiled, checked, corrected in some minor respects and their short-term prediction capabilities were investigated. Single-rigid-body, dual-spin and multi-rotor configurations are treated. To verify the analytical models and to see how the true motion of a satellite which is acted upon by environmental torques differs from its corresponding torque-free motion, a numerical simulation code was developed. This code contains a relatively general satellite model and models for gravity-gradient and aerodynamic torques. The spacecraft physical model for the code and the equations of motion are given. The two environmental torque models are described.

  17. Multi-scaling modelling in financial markets

    NASA Astrophysics Data System (ADS)

    Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.

    2007-12-01

    In recent years, a new wave of interest has brought complexity science into finance, where it may provide a guideline for understanding the mechanisms of financial markets, and researchers with different backgrounds have made increasing contributions, introducing new techniques and methodologies. In this paper, Markov-switching multifractal (MSM) models are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular, we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have computed H(q) for data simulated from MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
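
The generalized Hurst exponent H(q) is estimated from the scaling of the q-th absolute moment of increments, E|x(t+tau) - x(t)|^q ~ tau^(q*H(q)). A minimal sketch follows, checked on Brownian motion, where H(q) should be close to 0.5 for all q; the lag range is an illustrative choice.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate the generalized Hurst exponent H(q) from the scaling law
    E|x(t+tau) - x(t)|^q ~ tau^(q*H(q)), via a log-log fit over lags taus.
    Simplified variant of the approach named in the abstract."""
    taus = np.asarray(list(taus), dtype=float)
    K = np.array([np.mean(np.abs(x[int(t):] - x[:-int(t)])**q) for t in taus])
    slope = np.polyfit(np.log(taus), np.log(K), 1)[0]
    return slope / q

# Brownian motion: increments are i.i.d. Gaussian, so H(q) ~ 0.5
bm = np.cumsum(np.random.default_rng(4).normal(size=100_000))
H2 = generalized_hurst(bm, q=2)
```

Multifractality shows up as H(q) varying with q; for the monofractal Brownian benchmark the estimate is flat in q, which makes it a useful sanity check before applying the estimator to returns series.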

  18. Improved short-term variability in the thermosphere-ionosphere-mesosphere-electrodynamics general circulation model

    NASA Astrophysics Data System (ADS)

    Häusler, K.; Hagan, M. E.; Baumgaertner, A. J. G.; Maute, A.; Lu, G.; Doornbos, E.; Bruinsma, S.; Forbes, J. M.; Gasperini, F.

    2014-08-01

    We report on a new source of tidal variability in the National Center for Atmospheric Research thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM). Lower boundary forcing of the TIME-GCM for a simulation of November-December 2009 based on 3-hourly Modern-Era Retrospective Analysis for Research and Application (MERRA) reanalysis data includes day-to-day variations in both diurnal and semidiurnal tides of tropospheric origin. Comparison with TIME-GCM results from a heretofore standard simulation that includes climatological tropospheric tides from the global-scale wave model reveals evidence of the impacts of MERRA forcing throughout the model domain, including measurable tidal variability in the TIME-GCM upper thermosphere. Additional comparisons with measurements made by the Gravity field and steady-state Ocean Circulation Explorer satellite show improved TIME-GCM capability to capture day-to-day variations in thermospheric density for the November-December 2009 period with the new MERRA lower boundary forcing.

  19. Trapping two types of particles with a focused generalized Multi-Gaussian Schell model beam

    NASA Astrophysics Data System (ADS)

    Liu, Xiayin; Zhao, Daomu

    2015-11-01

    We numerically investigate the trapping effect of a focused generalized Multi-Gaussian Schell model (GMGSM) beam of the first kind, which produces a dark-hollow beam profile at the focal plane. By calculating the radiation forces on a Rayleigh dielectric sphere in the focused GMGSM beam, we show that such a beam can trap low-refractive-index particles at the focus and simultaneously capture high-index particles at other positions in the focal plane. The trapping range and stability depend on the values of the beam index N and the coherence width. Under the same conditions, the lower limits on the radii of low-index and high-index particles for stable trapping are shown to differ.

  20. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    PubMed

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
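    The first class's measurement model, the generalized partial credit model, assigns category probabilities P(X = k | theta) proportional to exp(sum_{j<=k} a*(theta - b_j)). A small sketch for one Likert item; the discrimination and threshold values are illustrative, not estimates from the article:

```python
import math

def gpcm_probs(theta, a, b):
    """Category response probabilities under the generalized partial credit
    model: P(X = k) is proportional to exp(sum_{j=1..k} a*(theta - b_j)),
    for k = 0..len(b). Parameters here are made up for illustration."""
    z = [0.0]
    for bj in b:
        z.append(z[-1] + a * (theta - bj))  # cumulative step terms
    ez = [math.exp(v) for v in z]
    s = sum(ez)
    return [v / s for v in ez]

# A 5-point Likert item with thresholds symmetric around the neutral middle:
probs = gpcm_probs(theta=0.0, a=1.2, b=[-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])
```

    For theta = 0 with symmetric thresholds, the neutral middle category is the most probable response, consistent with treating it as informative about endorsement; the competing IRTree class instead routes the middle category through a separate willingness-to-respond variable.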

  1. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models

    PubMed Central

    Whittington, Jesse; Sawaya, Michael A.

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases have focused on single-year models, and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates, but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density intervals averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes.
The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park’s population of grizzly bears requires continued conservation-oriented management actions. PMID:26230262
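    The distinguishing ingredient of the spatial models above is a detection function that decays with the distance between an animal's home-range centre and a trap. The abstract does not state which functional form the authors used; a common choice, assumed here purely for illustration, is the half-normal, with made-up parameter values:

```python
import math

def halfnormal_p(d, g0=0.2, sigma=800.0):
    """Half-normal spatial capture-recapture detection function:
    p(d) = g0 * exp(-d^2 / (2 * sigma^2)), where d is the distance (m)
    from an animal's activity centre to a trap. g0 (detection at distance 0)
    and sigma (spatial scale) are illustrative, not estimates from the study."""
    return g0 * math.exp(-d * d / (2.0 * sigma * sigma))

# Detection probability falls off smoothly with trap distance:
for d in (0.0, 500.0, 1000.0, 2000.0):
    print(d, round(halfnormal_p(d), 4))
```

    Because detection depends on trap geometry, spatial models absorb the distance-driven heterogeneity that biases non-spatial estimators, which is consistent with the bias pattern reported in the simulations.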

  2. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
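    The conditional-heteroscedasticity component of the approach can be sketched as the GARCH(1,1) variance recursion. This shows only the GARCH error part with made-up parameters; the authors combine it with an ARFIMA mean equation to also capture long memory:

```python
def garch_variance(returns, omega=0.1, alpha=0.1, beta=0.8):
    """GARCH(1,1) conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta).
    Parameter values are illustrative, not fitted to HRV data."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Conditional variance jumps after a large shock, then decays geometrically:
vols = garch_variance([0.0, 3.0, 0.0, 0.0, 0.0])
print([round(v, 4) for v in vols])
```

    In the HRV setting, this time-varying variance is what lets the model separate volatility dynamics from the long-memory structure handled by the fractional differencing.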

  3. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to intuitively control the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.

  4. A shock capturing technique for hypersonic, chemically relaxing flows

    NASA Technical Reports Server (NTRS)

    Eberhardt, S.; Brown, K.

    1986-01-01

    A fully coupled, shock capturing technique is presented for chemically reacting flows at high Mach numbers. The technique makes use of a total variation diminishing (TVD) dissipation operator which results in sharp, crisp shocks. The eigenvalues and eigenvectors of the fully coupled system, which includes species conservation equations in addition to the gas dynamics equations, are analytically derived for a general reacting gas. Species production terms for a model dissociating gas are introduced and are included in the algorithm. The convective terms are solved using a first-order TVD scheme while the source terms are solved using a fourth-order Runge-Kutta scheme to enhance stability. Results from one-dimensional numerical experiments are shown for two-species and three-species gases.
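    The TVD property can be illustrated on the scalar advection equation u_t + a u_x = 0: limited upwind fluxes keep the total variation of the solution from growing, so no new oscillations appear at discontinuities. This is a generic minmod-limited sketch, not the authors' coupled scheme, which also handles the reacting-gas eigensystem and Runge-Kutta source-term integration:

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema, otherwise the smaller-magnitude
    slope, which keeps the scheme total variation diminishing."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_advect_step(u, c):
    """One explicit step of linear advection with CFL number 0 < c <= 1,
    using upwind fluxes with minmod-limited slopes (advection speed > 0,
    periodic boundaries). Illustrative sketch only."""
    n = len(u)
    # limited slope in each cell
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # reconstructed value just upwind of face i+1/2
    flux = [u[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
    return [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]

# A square pulse is advected without overshoots or undershoots:
u = [1.0 if 3 <= i <= 6 else 0.0 for i in range(20)]
for _ in range(10):
    u = tvd_advect_step(u, 0.5)
print(round(max(u), 3), round(min(u), 3))
```

    The solution stays within its initial bounds [0, 1], which is the practical meaning of "sharp, crisp shocks" without spurious oscillation.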

  5. Ecological Restoration Programs Induced Amelioration of the Dust Pollution in North China Plain

    NASA Astrophysics Data System (ADS)

    Long, X.; Tie, X.; Li, G.; Junji, C.

    2017-12-01

    Using the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover product (MCD12Q1), we quantitatively evaluate land cover change in China induced by ecological restoration programs (ERPs) by calculating the gridded land use fraction (LUF). From 2011 to 2013, two clear vegetation (grass and forest) protective barriers arise between the dust source region (DSR) and the North China Plain (NCP). The WRF-DUST model is applied to investigate the impact of ERPs on dust pollution from 2 to 8 March 2016, during a national dust storm event over China. Despite some biases, the model reasonably reproduces the temporal variations of the dust storm event, with an IOA of 0.96 and an NMB of 2% for the DSR, and an IOA of 0.83 and an NMB of -15% for the downwind NCP. Generally, the WRF-DUST model captures well the spatial variations and evolution of dust storm events, with an episode-average [PMC] correlation coefficient (R) of 0.77; for the dust storm outbreak and transport evolution, the daily average [PMC] R is 0.9 and 0.73 on 4 and 5 March, respectively. We find that the ERPs generally reduce dust pollution in the NCP, and especially in BTH, with upper-bound dust pollution control benefits of -15.3% (-21.0 μg m-3) for BTH and -6.2% (-9.3 μg m-3) for the NCP. To our knowledge, this is the first model sensitivity study to quantitatively evaluate the impacts of the ERPs on dust pollution in the NCP, based independently on first-hand sources rather than government statistics.
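    The two skill metrics quoted above are standard model-evaluation statistics: the normalized mean bias (NMB) and Willmott's index of agreement (IOA). A short sketch of both (the sample concentrations are illustrative, not the study's data):

```python
def nmb(obs, mod):
    """Normalized mean bias: sum(model - obs) / sum(obs)."""
    return sum(m - o for o, m in zip(obs, mod)) / sum(obs)

def ioa(obs, mod):
    """Willmott's index of agreement; 1 means a perfect match."""
    ob = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for o, m in zip(obs, mod))
    den = sum((abs(m - ob) + abs(o - ob)) ** 2 for o, m in zip(obs, mod))
    return 1.0 - num / den

# Illustrative hourly dust concentrations (observed vs. modeled):
obs = [120.0, 300.0, 540.0, 260.0, 90.0]
mod = [100.0, 330.0, 500.0, 280.0, 110.0]
print(round(nmb(obs, mod), 3), round(ioa(obs, mod), 3))
```

    NMB measures the overall over- or under-prediction (here near zero), while IOA summarizes pointwise agreement relative to variability around the observed mean.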

  6. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    PubMed

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa values in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  7. Bioproducts and environmental quality: Biofuels, greenhouse gases, and water quality

    NASA Astrophysics Data System (ADS)

    Ren, Xiaolin

    Promoting bio-based products is one oft-proposed solution to reduce GHG emissions because the feedstocks capture carbon, offsetting at least partially the carbon discharges resulting from use of the products. However, several life cycle analyses point out that while biofuels may emit less life cycle net carbon emissions than fossil fuels, they may exacerbate other parts of biogeochemical cycles, notably nutrient loads in the aquatic environment. In three essays, this dissertation explores the tradeoff between GHG emissions and nitrogen leaching associated with biofuel production using general equilibrium models. The first essay develops a theoretical general equilibrium model to calculate the second-best GHG tax with the existence of a nitrogen leaching distortion. The results indicate that the second-best GHG tax could be higher or lower than the first-best tax rates depending largely on the elasticity of substitution between fossil fuel and biofuel. The second and third essays employ computable general equilibrium models to further explore the tradeoff between GHG emissions and nitrogen leaching. The computable general equilibrium models also incorporate multiple biofuel pathways, i.e., biofuels made from different feedstocks using different processes, to identify the cost-effective combinations of biofuel pathways under different policies, and the corresponding economic and environmental impacts.

  8. A Comparison Study for DNA Motif Modeling on Protein Binding Microarray.

    PubMed

    Wong, Ka-Chun; Li, Yue; Peng, Chengbin; Wong, Hau-San

    2016-01-01

    Transcription factor binding sites (TFBSs) are relatively short (5-15 bp) and degenerate. Identifying them is a computationally challenging task. In particular, protein binding microarray (PBM) is a high-throughput platform that can measure the DNA binding preference of a protein in a comprehensive and unbiased manner; for instance, a typical PBM experiment can measure binding signal intensities of a protein to all possible DNA k-mers (k = 8∼10). Since proteins can often bind to DNA with different binding intensities, one of the major challenges is to build TFBS (also known as DNA motif) models which can fully capture the quantitative binding affinity data. To learn DNA motif models from the non-convex objective function landscape, several optimization methods are compared and applied to the PBM motif model building problem. In particular, representative methods from different optimization paradigms have been chosen for modeling performance comparison on hundreds of PBM datasets. The results suggest that the multimodal optimization methods are very effective for capturing the binding preference information from PBM data. In particular, we observe a general performance improvement if choosing di-nucleotide modeling over mono-nucleotide modeling. In addition, the models learned by the best-performing method are applied to two independent applications: PBM probe rotation testing and ChIP-Seq peak sequence prediction, demonstrating its biological applicability.
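    The simplest mono-nucleotide motif model referred to above is a position weight matrix (PWM), scored as a log-odds sum over positions; di-nucleotide models extend this by conditioning each position on the preceding base. A toy sketch (the matrix is made up, not learned from PBM data):

```python
import math

def pwm_score(seq, pwm, background=0.25):
    """Log-odds score of a sequence under a mono-nucleotide position weight
    matrix (one base-probability dict per motif position), against a uniform
    background. Independence across positions is the model's key assumption."""
    return sum(math.log2(pwm[i][c] / background) for i, c in enumerate(seq))

# Illustrative 3-position matrix for a hypothetical motif "AGC":
pwm = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
]
print(round(pwm_score("AGC", pwm), 3), round(pwm_score("TTT", pwm), 3))
```

    Fitting such matrix entries to quantitative PBM intensities yields the non-convex objective landscape that motivates the multimodal optimization methods compared in the study.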

  9. Detecting potential anomalies in projections of rainfall trends and patterns using human observations

    NASA Astrophysics Data System (ADS)

    Kohfeld, K. E.; Savo, V.; Sillmann, J.; Morton, C.; Lepofsky, D.

    2016-12-01

    Shifting precipitation patterns are a well-documented consequence of climate change, but their spatial variability is particularly difficult to assess. While the accuracy of global models has increased, specific regional changes in precipitation regimes are not well captured by these models. Typically, researchers who wish to detect trends and patterns in climatic variables, such as precipitation, use instrumental observations. In our study, we combined observations of rainfall by subsistence-oriented communities with several metrics of rainfall estimated from global instrumental records for comparable time periods (1955-2005). This comparison was aimed at identifying: 1) which rainfall metrics best match human observations of changes in precipitation; 2) areas where local communities observe changes not detected by global models. The collated observations (~3800) made by subsistence-oriented communities covered 129 countries (~1830 localities). For comparable time periods, we saw a substantial correspondence between instrumental records and human observations (66-77%) at the same locations, regardless of whether we considered trends in general rainfall, drought, or extreme rainfall. We observed a clustering of mismatches in two specific regions, possibly indicating some climatic phenomena not completely captured by the currently available global models. Many human observations also indicated an increased unpredictability in the start, end, duration, and continuity of the rainy seasons, all of which may hamper the performance of subsistence activities. We suggest that future instrumental metrics should capture this unpredictability of rainfall. This information would be important for thousands of subsistence-oriented communities in planning, coping, and adapting to climate change.

  10. A sprinkling experiment to quantify celerity-velocity differences at the hillslope scale.

    PubMed

    van Verseveld, Willem J; Barnard, Holly R; Graham, Chris B; McDonnell, Jeffrey J; Brooks, J Renée; Weiler, Markus

    2017-01-01

    Few studies have quantified the differences between celerity and velocity of hillslope water flow and explained the processes that control these differences. Here, we assess these differences by combining a 24-day hillslope sprinkling experiment with a spatially explicit hydrologic model analysis. We focused our work on Watershed 10 at the H. J. Andrews Experimental Forest in western Oregon. Celerities estimated from wetting front arrival times were generally much faster than average vertical velocities of δ2H. In the model analysis, this was consistent with an identifiable effective porosity (fraction of total porosity available for mass transfer) parameter, indicating that subsurface mixing was controlled by an immobile soil fraction, resulting in the attenuation of the δ2H input signal in lateral subsurface flow. In addition to the immobile soil fraction, exfiltrating deep groundwater that mixed with lateral subsurface flow captured at the experimental hillslope trench caused further reduction in the δ2H input signal. Finally, our results suggest that soil depth variability played a significant role in the celerity-velocity responses. Deeper upslope soils damped the δ2H input signal, while a shallow soil near the trench controlled the δ2H peak in lateral subsurface flow response. Simulated exit time and residence time distributions with our hillslope hydrologic model showed that water captured at the trench did not represent the entire modeled hillslope domain; the exit time distribution for lateral subsurface flow captured at the trench showed more early time weighting.

  11. A sprinkling experiment to quantify celerity-velocity differences at the hillslope scale

    NASA Astrophysics Data System (ADS)

    van Verseveld, Willem J.; Barnard, Holly R.; Graham, Chris B.; McDonnell, Jeffrey J.; Renée Brooks, J.; Weiler, Markus

    2017-11-01

    Few studies have quantified the differences between celerity and velocity of hillslope water flow and explained the processes that control these differences. Here, we assess these differences by combining a 24-day hillslope sprinkling experiment with a spatially explicit hydrologic model analysis. We focused our work on Watershed 10 at the H. J. Andrews Experimental Forest in western Oregon. Celerities estimated from wetting front arrival times were generally much faster than average vertical velocities of δ2H. In the model analysis, this was consistent with an identifiable effective porosity (fraction of total porosity available for mass transfer) parameter, indicating that subsurface mixing was controlled by an immobile soil fraction, resulting in the attenuation of the δ2H input signal in lateral subsurface flow. In addition to the immobile soil fraction, exfiltrating deep groundwater that mixed with lateral subsurface flow captured at the experimental hillslope trench caused further reduction in the δ2H input signal. Finally, our results suggest that soil depth variability played a significant role in the celerity-velocity responses. Deeper upslope soils damped the δ2H input signal, while a shallow soil near the trench controlled the δ2H peak in lateral subsurface flow response. Simulated exit time and residence time distributions with our hillslope hydrologic model showed that water captured at the trench did not represent the entire modeled hillslope domain; the exit time distribution for lateral subsurface flow captured at the trench showed more early time weighting.

  12. A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments

    PubMed Central

    Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J

    2014-01-01

    Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, Huggins et al. (2010) recently presented a pseudo-likelihood for a multi-sample batch-marking study, using estimating equations to solve for survival and capture probabilities and then deriving abundance estimates with a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. In terms of precision, gains are made when using unique identifiers and employing the CMAS model; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576

  13. Multisensory Integration and Behavioral Plasticity in Sharks from Different Ecological Niches

    PubMed Central

    Gardiner, Jayne M.; Atema, Jelle; Hueter, Robert E.; Motta, Philip J.

    2014-01-01

    The underwater sensory world and the sensory systems of aquatic animals have become better understood in recent decades, but typically have been studied one sense at a time. A comprehensive analysis of multisensory interactions during complex behavioral tasks has remained a subject of discussion without experimental evidence. We set out to generate a general model of multisensory information extraction by aquatic animals. For our model we chose to analyze the hierarchical, integrative, and sometimes alternate use of various sensory systems during the feeding sequence in three species of sharks that differ in sensory anatomy and behavioral ecology. By blocking senses in different combinations, we show that when some of their normal sensory cues were unavailable, sharks were often still capable of successfully detecting, tracking and capturing prey by switching to alternate sensory modalities. While there were significant species differences, odor was generally the first signal detected, leading to upstream swimming and wake tracking. Closer to the prey, as more sensory cues became available, the preferred sensory modalities varied among species, with vision, hydrodynamic imaging, electroreception, and touch being important for orienting to, striking at, and capturing the prey. Experimental deprivation of senses showed how sharks exploit the many signals that comprise their sensory world, each sense coming into play as they provide more accurate information during the behavioral sequence of hunting. The results may be applicable to aquatic hunting in general and, with appropriate modification, to other types of animal behavior. PMID:24695492

  14. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    PubMed

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
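    The two-source Lincoln-Petersen idea described above is straightforward to sketch: hair-snag captures form the first session and rub-tree captures the recapture session. Below is the Chapman bias-corrected variant of the estimator with its approximate variance; the counts are made up for illustration, not data from the Glacier study:

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.
    n1 = individuals identified in session 1 (e.g., hair snags),
    n2 = individuals identified in session 2 (e.g., rub trees),
    m2 = individuals identified by both methods.
    Returns the abundance estimate and its approximate variance."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / \
          ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var

# Hypothetical counts: 60 bears at snags, 45 at rub trees, 20 in common.
n_hat, var = lincoln_petersen(n1=60, n2=45, m2=20)
print(round(n_hat, 1), round(var ** 0.5, 1))
```

    The estimator assumes closure and independence of the two capture types; the correlation between capture probabilities of the two methods flagged in the simulations violates that independence, which is why the authors also evaluate Huggins-Pledger closed-population models.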

  15. Frontiers of Theoretical Research on Shape Memory Alloys: A General Overview

    NASA Astrophysics Data System (ADS)

    Chowdhury, Piyas

    2018-03-01

    In this concise review, general aspects of modeling shape memory alloys (SMAs) are recounted. Different approaches are discussed under four general categories, namely, (a) macro-phenomenological, (b) micromechanical, (c) molecular dynamics, and (d) first principles models. Macro-phenomenological theories, stemming from empirical formulations depicting continuum elastic, plastic, and phase transformation, are primarily of engineering interest, whereby the performance of SMA-made components is investigated. Micromechanical endeavors are generally geared towards understanding microstructural phenomena within continuum mechanics, such as the accommodation of straining due to phase change as well as the role of precipitates. By contrast, molecular dynamics, being a more recently emerging computational technique, concerns attributes of discrete lattice structures, and thus captures SMA deformation mechanisms by means of empirically reconstructing interatomic bonding forces. Finally, ab initio theories utilize a quantum mechanical framework to peek into the atomistic foundation of deformation, and can pave the way for studying the role of solid-state effects. With specific examples, this paper provides concise descriptions of each category along with their relative merits and emphases.

  16. Game Theory for Proactive Dynamic Defense and Attack Mitigation in Cyber-Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Letchford, Joshua

    While there has been a great deal of security research focused on preventing attacks, there has been less work on how one should balance security and resilience investments. In this work we developed and evaluated models that captured both explicit defenses and other mitigations that reduce the impact of attacks. We examined these issues both in more broadly applicable general Stackelberg models and in more specific network and power grid settings. Finally, we compared these solutions to existing work in terms of both solution quality and computational overhead.

  17. New mechanisms of macroion-induced disintegration of charged droplets

    NASA Astrophysics Data System (ADS)

    Consta, Styliani; Oh, Myong In; Malevanets, Anatoly

    2016-10-01

    Molecular modeling has revealed that the presence of charged macromolecules (macroions) in liquid droplets dramatically changes the pathways of droplet fission. These mechanisms are not captured by the traditional theories such as the ion-evaporation and charge-residue models. We review the general mechanisms by which macroions emerge from droplets and the factors that determine droplet fission. These mechanisms include counter-intuitive "star" droplet formations and the extrusion of linear macroions from droplets. These findings may play a direct role in determining macromolecule charge states in electrospray mass spectrometry experiments.

  18. Modeling association among demographic parameters in analysis of open population capture-recapture data.

    PubMed

    Link, William A; Barker, Richard J

    2005-03-01

    We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
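    The CJS factor of such a likelihood can be sketched for the simplest case of constant survival and capture probabilities, conditioning on first release. This is a minimal illustration only; the hierarchical model in the paper additionally models first captures, losses on capture, and bivariate random effects on survival and birth rates, and is fit by MCMC rather than direct maximization:

```python
import math

def cjs_loglik(histories, phi, p):
    """Log-likelihood of 0/1 capture histories under a constant-parameter
    Cormack-Jolly-Seber model, conditioning on first capture.
    phi = survival probability between occasions, p = capture probability."""
    T = len(histories[0])
    # chi[t] = Pr(never observed at occasions t..T-1 | alive at occasion t-1)
    chi = [0.0] * (T + 1)
    chi[T] = 1.0
    for t in range(T - 1, 0, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in histories:
        first = h.index(1)
        last = T - 1 - h[::-1].index(1)
        for t in range(first + 1, last + 1):
            ll += math.log(phi)                    # survived t-1 -> t
            ll += math.log(p if h[t] else 1 - p)   # seen or missed at t
        ll += math.log(chi[last + 1])              # never seen after last capture
    return ll

# Two marked moths: one seen on all three occasions, one only at release.
ll = cjs_loglik([[1, 1, 1], [1, 0, 0]], phi=0.8, p=0.6)
print(round(ll, 3))
```

    The recursion for chi is what lets losses on capture and post-final-capture uncertainty be factored out, mirroring the identifiable-parameter factorization emphasized in the abstract.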

  19. A 1D-2D Shallow Water Equations solver for discontinuous porosity field based on a Generalized Riemann Problem

    NASA Astrophysics Data System (ADS)

    Ferrari, Alessia; Vacondio, Renato; Dazzi, Susanna; Mignosa, Paolo

    2017-09-01

    A novel augmented Riemann Solver capable of handling porosity discontinuities in 1D and 2D Shallow Water Equation (SWE) models is presented. With the aim of accurately approximating the porosity source term, a Generalized Riemann Problem is derived by adding an additional fictitious equation to the SWEs system and imposing mass and momentum conservation across the porosity discontinuity. The modified Shallow Water Equations are theoretically investigated, and the implementation of an augmented Roe Solver in a 1D Godunov-type finite volume scheme is presented. Robust treatment of transonic flows is ensured by introducing an entropy fix based on the wave pattern of the Generalized Riemann Problem. An Exact Riemann Solver is also derived in order to validate the numerical model. As an extension of the 1D scheme, an analogous 2D numerical model is also derived and validated through test cases with radial symmetry. The capability of the 1D and 2D numerical models to capture different wave patterns is assessed against several Riemann Problems with different wave patterns.

  20. Towards refactoring the Molecular Function Ontology with a UML profile for function modeling.

    PubMed

    Burek, Patryk; Loebe, Frank; Herre, Heinrich

    2017-10-04

    Gene Ontology (GO) is the largest resource for cataloging gene products. This resource grows steadily and, naturally, this growth raises issues regarding the structure of the ontology. Moreover, modeling and refactoring large ontologies such as GO is generally far from being simple, as a whole as well as when focusing on certain aspects or fragments. It seems that human-friendly graphical modeling languages such as the Unified Modeling Language (UML) could be helpful in connection with these tasks. We investigate the use of UML for making the structural organization of the Molecular Function Ontology (MFO), a sub-ontology of GO, more explicit. More precisely, we present a UML dialect, called the Function Modeling Language (FueL), which is suited for capturing functions in an ontologically founded way. FueL is equipped, among other features, with language elements that arise from studying patterns of subsumption between functions. We show how to use this UML dialect for capturing the structure of molecular functions. Furthermore, we propose and discuss some refactoring options concerning fragments of MFO. FueL enables the systematic, graphical representation of functions and their interrelations, including making information explicit that is currently either implicit in MFO or is mainly captured in textual descriptions. Moreover, the considered subsumption patterns lend themselves to the methodical analysis of refactoring options with respect to MFO. On this basis we argue that the approach can increase the comprehensibility of the structure of MFO for humans and can support communication, for example, during revision and further development.

  1. A Separable Two-Dimensional Random Field Model of Binary Response Data from Multi-Day Behavioral Experiments.

    PubMed

    Malem-Shinitski, Noa; Zhang, Yingzhuo; Gray, Daniel T; Burke, Sara N; Smith, Anne C; Barnes, Carol A; Ba, Demba

    2018-04-18

    The study of learning in populations of subjects can provide insights into the changes that occur in the brain with aging, drug intervention, and psychiatric disease. We introduce a separable two-dimensional (2D) random field (RF) model for analyzing binary response data acquired during the learning of object-reward associations across multiple days. The method can quantify the variability of performance within a day and across days, and can capture abrupt changes in learning. We apply the method to data from young and aged macaque monkeys performing a reversal-learning task. The method provides an estimate of performance within a day for each age group, and a learning rate across days for each monkey. We find that, as a group, the older monkeys require more trials to learn the object discriminations than do the young monkeys, and that the cognitive flexibility of the younger group is higher. We also use the model estimates of performance as features for clustering the monkeys into two groups. The clustering results in two groups that, for the most part, coincide with those formed by the age groups. Simulation studies suggest that clustering captures inter-individual differences in performance levels. In comparison with generalized linear models, this method is better able to capture the inherent two-dimensional nature of the data and find between group differences. Applied to binary response data from groups of individuals performing multi-day behavioral experiments, the model discriminates between-group differences and identifies subgroups. Copyright © 2018. Published by Elsevier B.V.

  2. Langevin Dynamics with Spatial Correlations as a Model for Electron-Phonon Coupling

    NASA Astrophysics Data System (ADS)

    Tamm, A.; Caro, M.; Caro, A.; Samolyuk, G.; Klintenberg, M.; Correa, A. A.

    2018-05-01

    Stochastic Langevin dynamics has been traditionally used as a tool to describe nonequilibrium processes. When utilized in systems with collective modes, traditional Langevin dynamics relaxes all modes indiscriminately, regardless of their wavelength. We propose a generalization of Langevin dynamics that can capture a differential coupling between collective modes and the bath, by introducing spatial correlations in the random forces. This allows modeling the electronic subsystem in a metal as a generalized Langevin bath endowed with a concept of locality, greatly improving the capabilities of the two-temperature model. The specific form proposed here for the spatial correlations produces a physical wave-vector and polarization dependency of the relaxation produced by the electron-phonon coupling in a solid. We show that the resulting model can be used for describing the path to equilibration of ions and electrons and also as a thermostat to sample the equilibrium canonical ensemble. By extension, the family of models presented here can be applied in general to any dense system, solids, alloys, and dense plasmas. As an example, we apply the model to study the nonequilibrium dynamics of an electron-ion two-temperature Ni crystal.
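
A minimal sketch of the paper's key ingredient, spatially correlated random forces, can be built by convolving white noise with a Gaussian kernel. The chain length, correlation length, and kernel truncation below are illustrative assumptions, and the deterministic and drag parts of the Langevin equation are omitted:

```python
import math
import random

random.seed(0)

N = 64        # ions in a periodic 1D chain (illustrative)
sigma = 2.0   # noise correlation length in lattice units (illustrative)

# Gaussian kernel; convolving white noise with it yields spatially
# correlated random forces, the key ingredient of the generalized bath.
kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-8, 9)]
norm = math.sqrt(sum(w * w for w in kernel))
kernel = [w / norm for w in kernel]   # unit marginal variance after convolution

def correlated_noise():
    white = [random.gauss(0.0, 1.0) for _ in range(N)]
    return [sum(kernel[j + 8] * white[(i + j) % N] for j in range(-8, 9))
            for i in range(N)]

# Estimate the variance and nearest-neighbour covariance of the forces.
M = 400
var0 = cov1 = 0.0
for _ in range(M):
    f = correlated_noise()
    var0 += sum(x * x for x in f) / N
    cov1 += sum(f[i] * f[(i + 1) % N] for i in range(N)) / N
var0, cov1 = var0 / M, cov1 / M
```

The nearest-neighbour covariance is close to the variance, so neighbouring ions feel nearly the same random kick; white (uncorrelated) noise would give a covariance near zero, relaxing all collective modes indiscriminately.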

  3. In-theater piracy: finding where the pirate was

    NASA Astrophysics Data System (ADS)

    Chupeau, Bertrand; Massoudi, Ayoub; Lefèbvre, Frédéric

    2008-02-01

    Pirate copies of feature films are proliferating on the Internet. DVD rip or screener recording methods involve the duplication of officially distributed media, whereas 'cam' versions are illicitly captured with handheld camcorders in movie theaters. Several complementary multimedia forensic techniques, such as copy identification, forensic tracking marks, or sensor forensics, can deter such clandestine recordings. In the case of camcorder capture in a theater, the image is often geometrically distorted, the main artifact being the trapezoidal effect, also known as 'keystoning', due to a capture viewing axis not being perpendicular to the screen. In this paper we propose to analyze the geometric distortions in a pirate copy to determine the camcorder viewing angle to the screen perpendicular and derive the approximate position of the pirate in the theater. The problem is first defined geometrically, by describing the general projection and capture setup, and by identifying unknown parameters and estimates. The estimation approach based on the identification of an eight-parameter homographic model of the 'keystoning' effect is then presented. A validation experiment based on ground truth collected in a real movie theater is reported, and the accuracy of the proposed method is assessed.
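
The eight-parameter homographic model of the keystoning effect can be estimated from four point correspondences by direct linear solution. The distortion parameters and screen coordinates below are hypothetical, and a real pirate-copy analysis would fit many noisy correspondences instead:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for the 8x8 linear system
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def apply_h(h, pt):
    # projective map with h = [h00,h01,h02,h10,h11,h12,h20,h21], h22 = 1
    x, y = pt
    w = h[6] * x + h[7] * y + 1.0
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

def fit_homography(src, dst):
    # two linear equations per correspondence in the 8 unknowns
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b)

# A plausible keystone distortion (hypothetical parameters), recovered
# exactly from the four corners of a 100 x 80 screen.
h_true = [1.0, 0.1, 5.0, 0.05, 1.1, 3.0, 0.0005, 0.001]
src = [(0.0, 0.0), (100.0, 0.0), (100.0, 80.0), (0.0, 80.0)]
dst = [apply_h(h_true, p) for p in src]
h_est = fit_homography(src, dst)
```

With noisy correspondences one would stack more rows and solve in the least-squares sense; the viewing angle then follows from decomposing the estimated homography.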

  4. Capture zone of a multi-well system in bounded peninsula-shaped aquifers.

    PubMed

    Zarei-Doudeji, Somayeh; Samani, Nozar

    2014-08-01

    In this paper we present the equation of the capture zone for a multi-well system in peninsula-shaped confined and unconfined aquifers. The aquifer is rectangular in plan view, bounded along three sides, and extends to infinity at the fourth side. The bounding boundaries are either no-flow (impervious) or in-flow (constant head), so that aquifers with six possible boundary configurations are formed. The well system consists of any number of extraction or injection wells, or a combination of both, with any flow rates. The complex velocity potential equations for such a well-aquifer system are derived to delineate the capture envelope. Solutions are provided for the aquifers with and without a uniform regional flow of any direction. The presented equations are of general character and have no limitations in terms of well numbers, positions and types, extraction/injection rate, and regional flow rate and direction. These solutions are presented in the form of capture-type curves, which are useful tools in the hands of practitioners to design in-situ groundwater remediation systems, to contain contaminant plumes, to evaluate the surface-subsurface water interaction, and to verify numerical models. Copyright © 2014 Elsevier B.V. All rights reserved.
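
For the unbounded, single-well limiting case (no image wells), the complex velocity potential and the resulting stagnation point have a simple closed form. The discharge values below are illustrative, and the paper's bounded peninsula-shaped cases add image-well sums to the same superposition:

```python
import cmath
import math

U, Q = 1.0e-4, 5.0e-3   # uniform flow and well extraction rate (illustrative units)

def complex_potential(z, wells):
    # w(z) = U*z + sum_i Q_i/(2 pi) * ln(z - z_i); Im(w) is the stream function
    w = U * z
    for zi, Qi in wells:
        w += Qi / (2 * math.pi) * cmath.log(z - zi)
    return w

def complex_velocity(z, wells):
    # dw/dz; its zeros are stagnation points bounding the capture envelope
    v = U + 0j
    for zi, Qi in wells:
        v += Qi / (2 * math.pi * (z - zi))
    return v

wells = [(0j, Q)]                              # single extraction well at the origin
z_stag = complex(-Q / (2 * math.pi * U), 0.0)  # analytic stagnation point
half_width = Q / (2 * U)                       # far-upstream capture-zone half-width
```

The dividing streamline through the stagnation point carries the stream-function value Q/2; contouring `complex_potential(...).imag` at that value traces the capture envelope, and extra wells or image wells simply extend the `wells` list.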

  5. Toward Adversarial Online Learning and the Science of Deceptive Machines

    DTIC Science & Technology

    2015-11-14

    noise. Adversaries can take advantage of this inherent blind spot to avoid detection (mimicry). Adversarial label noise is the intentional switching...of classification labels leading to deterministic noise, error that the model cannot capture due to its generalization bias. An experiment in user...potentially infinite and with imperfect information. We will combine Monte-Carlo tree search (MCTS) with reinforcement learning because the manipulation

  6. Unified Static and Dynamic Recrystallization Model for the Minerals of Earth's Mantle Using Internal State Variable Model

    NASA Astrophysics Data System (ADS)

    Cho, H. E.; Horstemeyer, M. F.; Baumgardner, J. R.

    2017-12-01

    In this study, we present an internal state variable (ISV) constitutive model developed to model static and dynamic recrystallization and grain size progression in a unified manner. This method accurately captures the effects of temperature, pressure, and strain rate on recrystallization and grain size. Because this ISV approach treats dislocation density, volume fraction of recrystallization, and grain size as internal variables, this model can simultaneously track their history during the deformation with unprecedented realism. Based on this deformation history, this method can capture realistic mechanical properties such as stress-strain behavior within the microstructure-mechanical property relationship. Also, both the transient grain size during the deformation and the steady-state grain size of dynamic recrystallization can be predicted from the history variable of recrystallization volume fraction. Furthermore, because this model has the capability to simultaneously handle plasticity and creep behaviors (unified creep-plasticity), the mechanisms (static recovery (or diffusion creep), dynamic recovery (or dislocation creep) and hardening) related to dislocation dynamics can also be captured. To model these comprehensive mechanical behaviors, the mathematical formulation of this model includes elasticity to evaluate yield stress, work hardening in treating plasticity, creep, as well as the unified recrystallization and grain size progression. Because pressure sensitivity is especially important for the mantle minerals, we developed a yield function combining Drucker-Prager shear failure and von Mises yield surfaces to model the pressure dependent yield stress, while using pressure dependent work hardening and creep terms. Using these formulations, we calibrated the model against experimental data for the minerals acquired from the literature. We also calibrated it against experimental data for metals to show the general applicability of our model.
An understanding of realistic mantle dynamics can only be acquired once the various deformation regimes and mechanisms are comprehensively modeled. The results of this study demonstrate that this ISV model is a good modeling candidate to help reveal the realistic dynamics of the Earth's mantle.
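
The recrystallized volume fraction that the ISV model tracks as a history variable is classically summarized by JMAK (Avrami) kinetics. The sketch below uses that textbook form with a rule-of-mixtures grain size, as a simple stand-in for the paper's coupled evolution equations; the rate constants and grain sizes are illustrative:

```python
import math

def jmak_fraction(t, k, n):
    # Avrami (JMAK) kinetics: recrystallized volume fraction X(t) = 1 - exp(-k t^n)
    return 1.0 - math.exp(-k * t ** n)

def mean_grain_size(t, k, n, d_def, d_rex):
    # rule-of-mixtures grain size between deformed (d_def) and
    # recrystallized (d_rex) material, weighted by the fraction X(t)
    X = jmak_fraction(t, k, n)
    return (1.0 - X) * d_def + X * d_rex
```

In the ISV framework the fraction is evolved by a rate equation coupled to dislocation density rather than by this closed form, but the two agree on the limiting behavior: no recrystallization at t = 0 and full replacement of the deformed grain size at long times.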

  7. Sulfate and Pb-210 Simulated in a Global Model Using Assimilated Meteorological Fields

    NASA Technical Reports Server (NTRS)

    Chin, Mian; Rood, Richard; Lin, S.-J.; Jacob, Daniel; Muller, Jean-Francois

    1999-01-01

    This report presents simulated distributions of tropospheric sulfate, Pb-210, and their precursors from a global 3-D model. This model is driven by assimilated meteorological fields generated by the Goddard Data Assimilation Office. Model results are compared with observations from surface sites and from multiplatform field campaigns of the Pacific Exploratory Missions (PEM) and the Advanced Composition Explorer (ACE). The model generally captures the seasonal variation of sulfate at the surface sites, and reproduces well the short-term in-situ observations. We will discuss the roles of various processes contributing to the sulfate levels in the troposphere, and the roles of sulfate aerosol in regional and global radiative forcing.

  8. Modeling the Non-Linear Response of Fiber-Reinforced Laminates Using a Combined Damage/Plasticity Model

    NASA Technical Reports Server (NTRS)

    Schuecker, Clara; Davila, Carlos G.; Pettermann, Heinz E.

    2008-01-01

    The present work is concerned with modeling the non-linear response of fiber reinforced polymer laminates. Recent experimental data suggests that the non-linearity is not only caused by matrix cracking but also by matrix plasticity due to shear stresses. To capture the effects of those two mechanisms, a model combining a plasticity formulation with continuum damage has been developed to simulate the non-linear response of laminates under plane stress states. The model is used to compare the predicted behavior of various laminate lay-ups to experimental data from the literature by looking at the degradation of axial modulus and Poisson's ratio of the laminates. The influence of residual curing stresses and the in-situ effect on the predicted response is also investigated. It is shown that predictions of the combined damage/plasticity model, in general, correlate well with the experimental data. The test data shows that there are two different mechanisms that can have opposite effects on the degradation of the laminate Poisson's ratio, which is captured correctly by the damage/plasticity model. Residual curing stresses are found to have a minor influence on the predicted response for the cases considered here. Some open questions remain regarding the prediction of damage onset.

  9. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

    The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
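
The two ingredients of the analysis, Holling functional responses and random versus clustered capture timing, can be sketched as follows. The rates, bout size, and variance-to-mean comparison below are illustrative stand-ins for the paper's fitted models:

```python
import math
import random

random.seed(1)

def holling_type2(N, a, h):
    # a: attack rate, h: handling time, N: prey density; saturates at 1/h
    return a * N / (1.0 + a * h * N)

def holling_type3(N, a, h):
    # sigmoidal variant: low capture rate at low prey density
    return a * N ** 2 / (1.0 + a * h * N ** 2)

def rpois(lam):
    # Knuth's algorithm for a Poisson draw
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def capture_counts(rate, T, trials, bout_size=None):
    # captures in [0, T]: Poisson renewal, or compound Poisson "feeding bouts"
    out = []
    for _ in range(trials):
        t, n = random.expovariate(rate), 0
        while t < T:
            n += 1 if bout_size is None else 1 + rpois(bout_size)
            t += random.expovariate(rate)
        out.append(n)
    return out

def dispersion(xs):
    # variance-to-mean ratio of the capture counts
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return v / m

disp_random = dispersion(capture_counts(0.5, 10.0, 1500))
disp_bouts = dispersion(capture_counts(0.5, 10.0, 1500, bout_size=3.0))
```

A variance-to-mean ratio near 1 indicates Poisson-renewal (random) timing, while values well above 1 indicate clustered captures, the pattern the paper reports for individual northern squawfish.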

  10. Interlinked population balance and cybernetic models for the simultaneous saccharification and fermentation of natural polymers.

    PubMed

    Ho, Yong Kuen; Doshi, Pankaj; Yeoh, Hak Koon; Ngoh, Gek Cheng

    2015-10-01

    Simultaneous Saccharification and Fermentation (SSF) is a process where microbes have to first excrete extracellular enzymes to break polymeric substrates such as starch or cellulose into edible nutrients, followed by in situ conversion of those nutrients into more valuable metabolites via fermentation. As such, SSF is very attractive as a one-pot synthesis method of biological products. However, due to the co-existence of multiple biochemical steps, modeling SSF faces two major challenges. The first is to capture the successive chain-end and/or random scission of the polymeric substrates over time, which determines the rate of generation of various fermentable substrates. The second is to incorporate the response of microbes, including their preferential substrate utilization, to such a complex broth. Each of the above-mentioned challenges has manifested itself in many related areas, and has been competently but separately attacked with two diametrically different tools, i.e., the Population Balance Modeling (PBM) and the Cybernetic Modeling (CM), respectively. To date, they have yet to be applied in unison to SSF, resulting in generally inadequate or haphazard approaches to examining the dynamics and interactions of depolymerization and fermentation. To overcome this unsatisfactory state of affairs, here, the general linkage between PBM and CM is established to model SSF. A notable feature is the flexible linkage, which allows the individual PBM and CM models to be independently modified to the desired levels of detail. A more general treatment of the secretion of extracellular enzyme is also proposed in the CM model. Through a case study on the growth of a recombinant Saccharomyces cerevisiae capable of excreting a chain-end scission enzyme (glucoamylase) on starch, the interlinked model calibrated using data from the literature (Nakamura et al., Biotechnol. Bioeng. 53:21-25, 1997), captured features not attainable by existing approaches. 
In particular, the effect of various enzymatic actions on the temporal evolution of the polymer distribution and how the microbes respond to the diverse polymeric environment can be studied through this framework. © 2015 Wiley Periodicals, Inc.

  11. Generalized estimators of avian abundance from count survey data

    USGS Publications Warehouse

    Royle, J. Andrew

    2004-01-01

    I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling, and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitute a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site level covariates on detection and abundance may be considered, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
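
The hierarchical structure described above, local abundance as a random (Poisson) process observed through an imperfect counting protocol, can be sketched as a binomial-Poisson mixture. The simulation settings and coarse grid search below are illustrative, not the paper's estimation machinery:

```python
import math
import random

random.seed(7)

lam_true, p_true, nsites, nvisits = 5.0, 0.4, 60, 4

def rpois(lam):
    # Knuth's algorithm for a Poisson draw
    L, k, pr = math.exp(-lam), 0, 1.0
    while True:
        pr *= random.random()
        if pr < L:
            return k
        k += 1

# Simulate: local abundance N_i ~ Poisson(lam); counts y_ij ~ Binomial(N_i, p)
counts = []
for _ in range(nsites):
    N = rpois(lam_true)
    counts.append([sum(random.random() < p_true for _ in range(N))
                   for _ in range(nvisits)])

def loglik(lam, p, kmax=30):
    # integrated likelihood: the latent abundance N is summed out per site
    ll = 0.0
    for y in counts:
        site = 0.0
        for N in range(max(y), kmax):
            pois = math.exp(-lam) * lam ** N / math.factorial(N)
            binom = 1.0
            for yi in y:
                binom *= math.comb(N, yi) * p ** yi * (1 - p) ** (N - yi)
            site += pois * binom
        ll += math.log(site)
    return ll

# Coarse grid search for the maximum-likelihood (lam, p)
grid = [(lam, p) for lam in [2.0 + 0.5 * i for i in range(13)]
                 for p in [0.15 + 0.05 * j for j in range(12)]]
lam_hat, p_hat = max(grid, key=lambda t: loglik(*t))
```

Repeated visits are what separate abundance from detectability here; with a single visit only the product of the two is identifiable.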

  12. Mathematical models to characterize early epidemic growth: A Review

    PubMed Central

    Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile

    2016-01-01

    There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-15 Ebola epidemic in West Africa. PMID:27451336

  13. Mathematical models to characterize early epidemic growth: A review

    NASA Astrophysics Data System (ADS)

    Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile

    2016-09-01

    There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-2015 Ebola epidemic in West Africa.
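
The range of early growth profiles discussed in this review, from sub-exponential to exponential, is often summarized with the generalized-growth model dC/dt = r C^p. A minimal sketch of its closed-form solution follows; the parameter values used for checking are illustrative:

```python
import math

def ggm_cases(t, r, p, c0):
    # Cumulative cases for dC/dt = r * C**p with C(0) = c0.
    # p = 1 recovers exponential growth; 0 < p < 1 gives sub-exponential
    # (e.g. polynomial) growth, as observed in some real outbreaks.
    if p == 1.0:
        return c0 * math.exp(r * t)
    return (c0 ** (1 - p) + (1 - p) * r * t) ** (1 / (1 - p))
```

For p = 0.5 the solution grows quadratically in time, which is one way spatial constraints or behavior changes can slow an epidemic below the exponential pace assumed by simple SIR-type models.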

  14. TARGET: Rapid Capture of Process Knowledge

    NASA Technical Reports Server (NTRS)

    Ortiz, C. J.; Ly, H. V.; Saito, T.; Loftin, R. B.

    1993-01-01

    TARGET (Task Analysis/Rule Generation Tool) represents a new breed of tool that blends graphical process flow modeling capabilities with the function of a top-down reporting facility. Since NASA personnel frequently perform tasks that are primarily procedural in nature, TARGET models mission or task procedures and generates hierarchical reports as part of the process capture and analysis effort. Historically, capturing knowledge has proven to be one of the greatest barriers to the development of intelligent systems. Current practice generally requires lengthy interactions between the expert whose knowledge is to be captured and the knowledge engineer whose responsibility is to acquire and represent the expert's knowledge in a useful form. Although much research has been devoted to the development of methodologies and computer software to aid in the capture and representation of some types of knowledge, procedural knowledge has received relatively little attention. In essence, TARGET is one of the first tools of its kind, commercial or institutional, that is designed to support this type of knowledge capture undertaking. This paper will describe the design and development of TARGET for the acquisition and representation of procedural knowledge. The strategies employed by TARGET to support use by knowledge engineers, subject matter experts, programmers and managers will be discussed. This discussion includes the method by which the tool employs its graphical user interface to generate a task hierarchy report. Next, the approach to generate production rules for incorporation in and development of a CLIPS-based expert system will be elaborated. TARGET also permits experts to visually describe procedural tasks as a common medium for knowledge refinement by the expert community and knowledge engineer, making knowledge consensus possible. The paper briefly touches on the verification and validation issues facing the CLIPS rule generation aspects of TARGET. 
A description of efforts to support TARGET's interoperability issues on PCs, Macintoshes and UNIX workstations concludes the paper.

  15. Quantifying the Contribution of Wind-Driven Linear Response to the Seasonal and Interannual Variability of AMOC Volume Transports Across 26.5°N

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; von Storch, J. S.; Haak, H.; Nakayama, K.; Marotzke, J.

    2014-12-01

    Surface wind stress is considered to be an important forcing of the seasonal and interannual variability of Atlantic Meridional Overturning Circulation (AMOC) volume transports. A recent study showed that even linear response to wind forcing captures observed features of the mean seasonal cycle. However, the study did not assess the contribution of wind-driven linear response in realistic conditions against the RAPID/MOCHA array observation or Ocean General Circulation Model (OGCM) simulations, because it applied a linear two-layer model to the Atlantic assuming constant upper layer thickness and density difference across the interface. Here, we quantify the contribution of wind-driven linear response to the seasonal and interannual variability of AMOC transports by comparing wind-driven linear simulations under realistic continuous stratification against the RAPID observation and OGCM (MPI-OM) simulations with 0.4° resolution (TP04) and 0.1° resolution (STORM). All the linear and MPI-OM simulations capture more than 60% of the variance in the observed mean seasonal cycle of the Upper Mid-Ocean (UMO) and Florida Strait (FS) transports, two components of the upper branch of the AMOC. The linear and TP04 simulations also capture 25-40% of the variance in the observed transport time series between Apr 2004 and Oct 2012; the STORM simulation does not capture the observed variance because of the stochastic signal in both datasets. Comparison of half-overlapping 12-month-long segments reveals some periods when the linear and TP04 simulations capture 40-60% of the observed variance, as well as other periods when the simulations capture only 0-20% of the variance. These results show that wind-driven linear response is a major contributor to the seasonal and interannual variability of the UMO and FS transports, and that its contribution varies on an interannual timescale, probably due to the variability of stochastic processes.
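
The "percent of variance captured" comparisons above correspond to a simple skill score. A minimal sketch, assuming both series are first reduced to anomalies:

```python
def variance_captured(obs, sim):
    # Fraction of observed variance captured: 1 - Var(obs' - sim') / Var(obs'),
    # where primes denote series centered on their own means, so only the
    # anomalies are compared. 1.0 is a perfect match; 0.0 means no skill,
    # and negative values mean the simulation is worse than the mean alone.
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    resid = [(o - mo) - (s - ms) for o, s in zip(obs, sim)]
    var = lambda xs: sum(x * x for x in xs) / len(xs)
    return 1.0 - var(resid) / var([o - mo for o in obs])
```

Applied to half-overlapping 12-month-long segments of a transport time series, the same function reproduces the kind of period-by-period skill comparison reported above.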

  16. A Double-Blinded, Randomized Comparison of Medetomidine-Tiletamine-Zolazepam and Dexmedetomidine-Tiletamine-Zolazepam Anesthesia in Free-Ranging Brown Bears (Ursus Arctos)

    PubMed Central

    Cattet, Marc; Zedrosser, Andreas; Stenhouse, Gordon B.; Küker, Susanne; Evans, Alina L.; Arnemo, Jon M.

    2017-01-01

    We compared anesthetic features, blood parameters, and physiological responses to either medetomidine-tiletamine-zolazepam or dexmedetomidine-tiletamine-zolazepam using a double-blinded, randomized experimental design during 40 anesthetic events of free-ranging brown bears (Ursus arctos) either captured by helicopter in Sweden or by culvert trap in Canada. Induction was smooth and predictable with both anesthetic protocols. Induction time, the need for supplemental drugs to sustain anesthesia, and capture-related stress were analyzed using generalized linear models, but anesthetic protocol did not differentially affect these variables. Arterial blood gases and acid-base status, and physiological responses were examined using linear mixed models. We documented acidemia (pH of arterial blood < 7.35), hypoxemia (partial pressure of arterial oxygen < 80 mmHg), and hypercapnia (partial pressure of arterial carbon dioxide ≥ 45 mmHg) with both protocols. Arterial pH and oxygen partial pressure were similar between groups with the latter improving markedly after oxygen supplementation (p < 0.001). We documented dose-dependent effects of both anesthetic protocols on induction time and arterial oxygen partial pressure. The partial pressure of arterial carbon dioxide increased as respiratory rate increased with medetomidine-tiletamine-zolazepam, but not with dexmedetomidine-tiletamine-zolazepam, demonstrating a differential drug effect. Differences in heart rate, respiratory rate, and rectal temperature among bears could not be attributed to the anesthetic protocol. Heart rate increased with increasing rectal temperature (p < 0.001) and ordinal day of capture (p = 0.002). Respiratory rate was significantly higher in bears captured by helicopter in Sweden than in bears captured by culvert trap in Canada (p < 0.001). Rectal temperature significantly decreased over time (p ≤ 0.05). 
Overall, we did not find any benefit of using dexmedetomidine-tiletamine-zolazepam instead of medetomidine-tiletamine-zolazepam in the anesthesia of brown bears. Both drug combinations appeared to be safe and reliable for the anesthesia of free-ranging brown bears captured by helicopter or by culvert trap. PMID:28118413

  17. Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited

    DOE PAGES

    Farmer, William A.; Friedman, Alex

    2015-06-18

    Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering is first considered. The simplified problem is solved three ways: the obliquity factor, Monte-Carlo, and Fokker-Planck finite-difference. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte-Carlo and Fokker-Planck finite-difference approaches do. Here, the obliquity factor and Fokker-Planck finite-difference approaches are then compared in a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons, and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.

  18. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because of the highly discriminative property of a 3-D object's multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of a 3-D object, and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning that views are captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in two ways: in training (i.e., HMM estimation) and in retrieval (i.e., HMM decoding). The query model is trained using these view clusters, and the EVBOR query model works by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme shows better performance than existing methods.
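
The twofold use of the HMM (estimation and decoding) rests on scoring a sequence of view-cluster labels against a query model. A minimal sketch of the scaled forward algorithm follows, with two hypothetical query models over a 3-symbol cluster alphabet; all probabilities are invented for illustration:

```python
import math

def forward_loglik(obs, pi, A, B):
    # scaled forward algorithm: log P(obs | HMM) without numerical underflow
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    c = sum(alpha)
    ll, alpha = math.log(c), [a / c for a in alpha]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[q] * A[q][s] for q in range(len(pi)))
                 for s in range(len(pi))]
        c = sum(alpha)
        ll, alpha = ll + math.log(c), [a / c for a in alpha]
    return ll

# Two hypothetical query models sharing initial and transition probabilities
# but differing in their emission distributions over view-cluster labels.
pi = [0.6, 0.4]
A = [[0.8, 0.2], [0.3, 0.7]]
B_chair = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]
B_table = [[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]]

views = [0, 0, 1, 0, 2]   # cluster labels of a query object's views
score_chair = forward_loglik(views, pi, A, B_chair)
score_table = forward_loglik(views, pi, A, B_table)
best = "chair" if score_chair > score_table else "table"
```

Retrieval then reduces to ranking database objects by the log-likelihood their view sequences achieve under the trained query model.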

  19. Comparison of land-surface humidity between observations and CMIP5 models

    NASA Astrophysics Data System (ADS)

    Dunn, Robert; Willett, Kate; Ciavarella, Andrew; Stott, Peter; Jones, Gareth

    2017-04-01

    We compare the latest observational land-surface humidity dataset, HadISDH, with the CMIP5 model archive spatially and temporally over the period 1973-2015. None of the CMIP5 models or experiments capture the observed temporal behaviour of the globally averaged relative or specific humidity over the entire study period. When using an atmosphere-only model, driven by observed sea-surface temperatures and radiative forcing changes, the behaviour of regional average temperature and specific humidity is better captured, but there is little improvement in the relative humidity. Comparing the observed and historical model climatologies shows that the models are generally cooler everywhere, are drier and less saturated in the tropics and extra-tropics, and have comparable moisture levels but are more saturated in the high latitudes. The spatial pattern of linear trends is relatively similar between the models and HadISDH for temperature and specific humidity, but there are large differences for relative humidity, with less moistening shown in the models over the tropics and very little at high latitudes. The observed temporal behaviour appears to be a robust climate feature rather than observational error; it has been previously documented and is theoretically consistent with faster warming rates over land compared to oceans. Thus, the poor replication in the models, especially in the atmosphere-only model, raises questions about future projections of impacts related to changes in surface relative humidity.

  20. Free boundary models for mosquito range movement driven by climate warming.

    PubMed

    Bao, Wendi; Du, Yihong; Lin, Zhigui; Zhu, Huaiping

    2018-03-01

    As vectors, mosquitoes transmit numerous mosquito-borne diseases. Among the many factors affecting the distribution and density of mosquitoes, climate change and warming have been increasingly recognized as major ones. In this paper, we use three diffusive logistic models with free boundary in one space dimension to explore the impact of climate warming on the movement of mosquito range. First, a general model incorporating temperature change with location and time is introduced. To gain insight into the model, a simplified version with temperature change depending only on location is analyzed theoretically, and its dynamical behavior is completely determined and presented. The general model can be modified into a more realistic one of seasonal succession type, to take into account the seasonal changes of mosquito movements during each year: the general model applies only during the warm seasons, while during the cold season the mosquito range is fixed and the population is assumed to be in hibernation. For both the general model and the seasonal succession model, our numerical simulations indicate that the long-time dynamical behavior is qualitatively similar to that of the simplified model, and the effect of climate warming on the movement of mosquitoes can be easily captured. Moreover, our analysis reveals that hibernation enhances the chances of survival and successful spreading of the mosquitoes, but slows down the spreading speed.
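
    The paper's free boundary (Stefan-type) condition is not reproduced here; as a simplified sketch under that caveat, the following integrates the underlying diffusive logistic equation u_t = d u_xx + u(a - b u) on a fixed large domain with an explicit finite-difference scheme, and tracks the "range front" as the rightmost point where the density exceeds a threshold. All parameter values are illustrative.

```python
import numpy as np

# Diffusive logistic spread on a fixed 1-D domain (illustrative parameters).
d, a, b = 1.0, 1.0, 1.0
L, nx = 200.0, 401
dx = L / (nx - 1)
dt = 0.1                       # explicit stability: dt <= dx^2 / (2 d) = 0.125
u = np.zeros(nx)
u[:5] = 1.0                    # population initially near the left edge

def front(u, thresh=0.1):
    """Rightmost location where the density exceeds the threshold."""
    idx = np.where(u > thresh)[0]
    return idx[-1] * dx if idx.size else 0.0

fronts = []
for step in range(900):        # integrate to t = 90
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (d * lap + u * (a - b * u))
    u[0], u[-1] = u[1], u[-2]  # zero-flux boundaries
    if step % 300 == 299:      # record front at t = 30, 60, 90
        fronts.append(front(u))
```

    The recorded front positions increase roughly linearly in time, the familiar constant-speed spreading of the diffusive logistic (Fisher-KPP) equation that the free boundary models refine.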

  1. An introduction to analyzing dichotomous outcomes in a longitudinal setting: a NIDRR traumatic brain injury model systems communication.

    PubMed

    Pretz, Christopher R; Ketchum, Jessica M; Cuthbert, Jeffery P

    2014-01-01

    An untapped wealth of temporal information is captured within the Traumatic Brain Injury Model Systems National Database, and appropriate longitudinal analyses provide an avenue toward unlocking its value. This article highlights 2 statistical methods for assessing change over time when noncontinuous outcomes are of interest, focusing on dichotomous responses. Specifically, the intent is to familiarize the rehabilitation community with the application of generalized estimating equations and generalized linear mixed models in longitudinal studies. An introduction to each method is provided, and similarities and differences between the 2 are discussed. In addition, to reinforce the ideas and concepts embodied in each approach, we illustrate each method using examples based on data from the Rocky Mountain Regional Brain Injury System.
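
    As a hedged illustration of the simplest case: a GEE fit with an independence working correlation and a logit link reduces to ordinary logistic regression, which can be solved by iteratively reweighted least squares (IRLS). The sketch below uses simulated data, not the Model Systems database; in practice one would reach for a dedicated package (e.g. statsmodels' GEE in Python) to get robust standard errors and richer correlation structures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cross-sectional stand-in data: binary outcome, one covariate.
# True coefficients are chosen for illustration only.
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)

def logistic_irls(X, y, iters=25):
    """Fit logit P(y=1) = X beta by iteratively reweighted least squares.
    Equivalent to a GEE fit with an independence working correlation."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        W = mu * (1.0 - mu)                   # Bernoulli variance weights
        # Newton step: beta += (X' W X)^{-1} X' (y - mu)
        beta = beta + np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - mu))
    return beta

beta_hat = logistic_irls(X, y)
```

    With clustered longitudinal data, the GEE machinery changes only the weighting (through the working correlation) and the variance estimate; the point estimates above are what the independence special case returns.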

  2. Recurrent personality dimensions in inclusive lexical studies: indications for a big six structure.

    PubMed

    Saucier, Gerard

    2009-10-01

    Previous evidence for both the Big Five and the alternative six-factor model has been drawn from lexical studies with relatively narrow selections of attributes. This study examined factors from previous lexical studies using a wider selection of attributes in 7 languages (Chinese, English, Filipino, Greek, Hebrew, Spanish, and Turkish) and found 6 recurrent factors, each with common conceptual content across most of the studies. The previous narrow-selection-based six-factor model outperformed the Big Five in capturing the content of the 6 recurrent wideband factors. Adjective markers of the 6 recurrent wideband factors showed substantial incremental prediction of important criterion variables over and above the Big Five. Correspondence between the wideband 6 and narrowband 6 factors indicates that they are variants of a "Big Six" model that is more general across variable-selection procedures and may be more general across languages and populations.

  3. Plan recognition and generalization in command languages with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Yared, Wael I.; Sheridan, Thomas B.

    1991-01-01

    A method for pragmatic inference as a necessary accompaniment to command languages is proposed. The approach taken focuses on the modeling and recognition of the human operator's intent, which relates sequences of domain actions ('plans') to changes in some model of the task environment. The salient feature of this module is that it captures some of the physical and linguistic contextual aspects of an instruction. This provides a basis for generalization and reinterpretation of the instruction in different task environments. The theoretical development is founded on previous work in computational linguistics and some recent models in the theory of action and intention. To illustrate these ideas, an experimental command language to a telerobot is implemented. The program consists of three different components: a robot graphic simulation, the command language itself, and the domain-independent pragmatic inference module. Examples of task instruction processes are provided to demonstrate the benefits of this approach.

  4. From particle systems to learning processes. Comment on "Collective learning modeling based on the kinetic theory of active particles" by Diletta Burini, Silvana De Lillo, and Livio Gibelli

    NASA Astrophysics Data System (ADS)

    Lachowicz, Mirosław

    2016-03-01

    The very stimulating paper [6] discusses an approach to perception and learning in a large population of living agents. The approach is based on a generalization of kinetic theory methods in which the interactions between agents are described in terms of game theory. Such an approach was already discussed in Ref. [2-4] (see also references therein) in various contexts. The processes of perception and learning are based on the interactions between agents and therefore the general kinetic theory is a suitable tool for modeling them. However the main question that rises is how the perception and learning processes may be treated in the mathematical modeling. How may we precisely deliver suitable mathematical structures that are able to capture various aspects of perception and learning?

  5. Electron capture into large-l Rydberg states of multiply charged ions escaping from solid surfaces

    NASA Astrophysics Data System (ADS)

    Nedeljković, N.; Nedeljković, Lj.; Mirković, M.

    2003-07-01

    We have investigated the electron capture into large-l Rydberg states of multiply charged ionic projectiles (e.g., the core charges Z=6, 7, and 8) escaping solid surfaces with intermediate velocities (v≈1 a.u.) in the normal emergence geometry. A model of the nonresonant electron capture from the solid conduction band into the moving large angular-momentum Rydberg states of the ions is developed through a generalization of our results obtained previously for the low-l cases (l=0, 1, and 2). The model is based on the two-wave-function dynamics of the Demkov-Ostrovskii type. The electron exchange process is described by a mixed flux through a moving plane (“Firsov plane”), placed between the solid surface and the ionic projectile. Due to low eccentricities of the large-l Rydberg systems, the mixed flux must be evaluated through the whole Firsov plane. It is for this purpose that a suitable asymptotic method is developed. For intermediate ionic velocities and for all relevant values of the principal quantum number n≈Z, the population probability Pnl is obtained as a nonlinear l distribution. The theoretical predictions concerning the ions S VI, Cl VII, and Ar VIII are compared with the available results of the beam-foil experiments.

  6. Multivariate moment closure techniques for stochastic kinetic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay between the nonlinearities and the stochastic dynamics arises, which is much harder for such approximations to the true stochastic processes to capture correctly. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
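
    A one-species example conveys the idea of moment closure (this is a generic textbook-style sketch, not one of the paper's multivariate closures). For production 0 -> A at rate b and pair annihilation A + A -> 0 with propensity k n(n-1), the exact moment equations couple the mean and second moment to the third moment <n^3>, which a normal (Gaussian) closure replaces by 3*mu*var + mu^3. Rates below are illustrative.

```python
import numpy as np

# Normal moment closure for 0 -> A (rate b), A + A -> 0 (propensity k n(n-1)).
b, k = 10.0, 0.1

def closed_moment_rhs(mu, m2):
    """Closed ODEs for mu = <n> and m2 = <n^2> under Gaussian closure."""
    var = m2 - mu**2
    m3 = 3.0 * mu * var + mu**3                  # Gaussian closure for <n^3>
    dmu = b - 2.0 * k * (m2 - mu)                # uses <n(n-1)> = m2 - mu
    dm2 = b * (2.0 * mu + 1.0) - 4.0 * k * (m3 - 2.0 * m2 + mu)
    return dmu, dm2

mu, m2 = 0.0, 0.0
dt = 1e-3
for _ in range(20000):                           # Euler integration to t = 20
    dmu, dm2 = closed_moment_rhs(mu, m2)
    mu, m2 = mu + dt * dmu, m2 + dt * dm2
var = m2 - mu**2
```

    The closed system relaxes to a steady state near the deterministic value sqrt(b/(2k)) = 7.07, with a sub-Poissonian variance (var < mean), as expected for an annihilation-dominated system; higher-order and multivariate closures generalize exactly this construction.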

  7. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    PubMed

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
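
    The paper's kernel density summaries and full SMC machinery are not reproduced here, but the shape of an adaptive-tolerance ABC scheme can be sketched on a toy Poisson inference problem. Every detail below (model, prior, tolerance schedule, perturbation kernel) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: infer a Poisson rate from 100 observed counts.
true_rate = 4.0
data = rng.poisson(true_rate, size=100)
s_obs = data.mean()                            # summary statistic

def abc_round(prior_draws, eps):
    """One ABC rejection round: keep parameter draws whose simulated
    summary lies within eps of the observed summary."""
    kept = []
    for theta in prior_draws:
        sim = rng.poisson(theta, size=100).mean()
        if abs(sim - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

# Adaptive tolerance: start loose, shrink eps each round, and re-sample
# around survivors (a crude stand-in for a full SMC-ABC kernel).
particles = rng.uniform(0.0, 10.0, size=2000)  # flat prior on the rate
for eps in (2.0, 1.0, 0.5, 0.25):
    kept = abc_round(particles, eps)
    particles = rng.choice(kept, size=2000) + rng.normal(0, 0.2, size=2000)
    particles = np.clip(particles, 1e-3, None)

posterior_mean = particles.mean()
```

    Each shrinking tolerance concentrates the particle population around parameter values whose simulations match the data, without ever evaluating a likelihood; that is the property that makes ABC attractive for complex individual-based epidemic simulators.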

  8. A biologically inspired approach to modeling unmanned vehicle teams

    NASA Astrophysics Data System (ADS)

    Cortesi, Roger S.; Galloway, Kevin S.; Justh, Eric W.

    2008-04-01

    Cooperative motion control of teams of agile unmanned vehicles presents modeling challenges at several levels. The "microscopic equations" describing individual vehicle dynamics and their interaction with the environment may be known fairly precisely, but are generally too complicated to yield qualitative insights at the level of multi-vehicle trajectory coordination. Interacting particle models are suitable for coordinating trajectories, but require care to ensure that individual vehicles are not driven in a "costly" manner. From the point of view of the cooperative motion controller, the individual vehicle autopilots serve to "shape" the microscopic equations, and we have been exploring the interplay between autopilots and cooperative motion controllers using a multivehicle hardware-in-the-loop simulator. Specifically, we seek refinements to interacting particle models in order to better describe observed behavior, without sacrificing qualitative understanding. A recent analogous example from biology involves introducing a fixed delay into a curvature-control-based feedback law for prey capture by an echolocating bat. This delay captures both neural processing time and the flight-dynamic response of the bat as it uses sensor-driven feedback. We propose a comparable approach for unmanned vehicle modeling; however, in contrast to the bat, with unmanned vehicles we have an additional freedom to modify the autopilot. Simulation results demonstrate the effectiveness of this biologically guided modeling approach.

  9. Modelling eye movements in a categorical search task

    PubMed Central

    Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris

    2013-01-01

    We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720

  10. Non-linear corrections to the time-covariance function derived from a multi-state chemical master equation.

    PubMed

    Scott, M

    2012-08-01

    The time-covariance function captures the dynamics of biochemical fluctuations and contains important information about the underlying kinetic rate parameters. Intrinsic fluctuations in biochemical reaction networks are typically modelled using a master equation formalism. In general, the equation cannot be solved exactly and approximation methods are required. For small fluctuations close to equilibrium, a linearisation of the dynamics provides a very good description of the relaxation of the time-covariance function. As the number of molecules in the system decrease, deviations from the linear theory appear. Carrying out a systematic perturbation expansion of the master equation to capture these effects results in formidable algebra; however, symbolic mathematics packages considerably expedite the computation. The authors demonstrate that non-linear effects can reveal features of the underlying dynamics, such as reaction stoichiometry, not available in linearised theory. Furthermore, in models that exhibit noise-induced oscillations, non-linear corrections result in a shift in the base frequency along with the appearance of a secondary harmonic.
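
    As a concrete illustration of the linearised result mentioned above (for a process simpler than those in the paper): for an immigration-death process, 0 -> A at rate lam and A -> 0 at rate delta*n, linear theory predicts a stationary time-covariance C(tau) = Var * exp(-delta*tau) with Var = lam/delta. The sketch below estimates this from a Gillespie trajectory; the rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Immigration-death process: 0 -> A at rate lam, A -> 0 at rate delta*n.
lam, delta = 20.0, 1.0

def gillespie(t_end, n0=20):
    t, n = 0.0, n0
    times, states = [0.0], [n0]
    while t < t_end:
        a = np.array([lam, delta * n])
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        n += 1 if rng.random() < a[0] / a0 else -1
        times.append(t)
        states.append(n)
    return np.array(times), np.array(states)

times, states = gillespie(500.0)
# Sample the piecewise-constant trajectory on a regular grid (0.05 apart),
# discarding an initial transient.
grid = np.arange(100.0, 500.0, 0.05)
idx = np.searchsorted(times, grid, side="right") - 1
x = states[idx] - states[idx].mean()

def autocov(x, lag_steps):
    return float(np.mean(x[:len(x) - lag_steps] * x[lag_steps:]))

c0 = autocov(x, 0)       # estimates Var = lam/delta = 20
lag = 20                 # tau = 1.0 with grid spacing 0.05
c1 = autocov(x, lag)     # theory: ~ Var * exp(-1)
```

    The estimated ratio c1/c0 sits near exp(-1) ≈ 0.37, the linearised prediction; the non-linear corrections discussed in the paper describe systematic deviations from this exponential form at low molecule numbers.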

  11. Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.

    PubMed

    Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark

    2016-01-01

    Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.

  12. Disruption of River Networks in Nature and Models

    NASA Astrophysics Data System (ADS)

    Perron, J. T.; Black, B. A.; Stokes, M.; McCoy, S. W.; Goldberg, S. L.

    2017-12-01

    Many natural systems display especially informative behavior as they respond to perturbations. Landscapes are no exception. For example, longitudinal elevation profiles of rivers responding to changes in uplift rate can reveal differences among erosional mechanisms that are obscured while the profiles are in equilibrium. The responses of erosional river networks to perturbations, including disruption of their network structure by diversion, truncation, resurfacing, or river capture, may be equally revealing. In this presentation, we draw attention to features of disrupted erosional river networks that a general model of landscape evolution should be able to reproduce, including the consequences of different styles of planetary tectonics and the response to heterogeneous bedrock structure and deformation. A comparison of global drainage directions with long-wavelength topography on Earth, Mars, and Saturn's moon Titan reveals the extent to which persistent and relatively rapid crustal deformation has disrupted river networks on Earth. Motivated by this example and others, we ask whether current models of river network evolution adequately capture the disruption of river networks by tectonic, lithologic, or climatic perturbations. In some cases the answer appears to be no, and we suggest some processes that models may be missing.

  13. Mechanical-Electrochemical-Thermal Simulation of Lithium-Ion Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanagopalan, Shriram; Zhang, Chao; Sprague, Michael A.

    2016-06-01

    Models capture the force response for single-cell and cell-string levels to within 15%-20% accuracy and predict the location for the origin of failure based on the deformation data from the experiments. At the module level, there is some discrepancy due to poor mechanical characterization of the packaging material between the cells. The thermal response (location and value of maximum temperature) agrees qualitatively with experimental data. In general, the X-plane results agree with model predictions to within 20% (pending faulty thermocouples, etc.); the Z-plane results show a bigger variability both between the models and test results, as well as among multiple repeats of the tests. The models are able to capture the timing and sequence in voltage drop observed in the multi-cell experiments; the shapes of the current and temperature profiles need more work to better characterize propagation. The cells within packaging experience about 60% less force under identical impact test conditions, so the packaging on the test articles is robust. However, under slow-crush simulations, the maximum deformation of the cell strings with packaging is about twice that of cell strings without packaging.

  14. High-frequency health data and spline functions.

    PubMed

    Martín-Rodríguez, Gloria; Murillo-Fort, Carlos

    2005-03-30

    Seasonal variations are highly relevant for health service organization. In general, short-run movements of medical magnitudes are important features that managers in this field need in order to make adequate decisions, so the analysis of the seasonal pattern in high-frequency health data is an appealing task. The aim of this paper is to propose procedures that allow the analysis of the seasonal component in this kind of data by means of spline functions embedded in a structural model. In the proposed method, useful adaptations of the traditional spline formulation are developed, and the resulting procedures are capable of capturing periodic variations, whether deterministic or stochastic, in a parsimonious way. Finally, these methodological tools are applied to a series of daily emergency service demand in order to capture simultaneous seasonal variations with different periods.
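
    A minimal sketch of the spline idea, without the paper's structural-model embedding: regress a daily series on a periodic (wraparound) linear-spline basis, so the fitted seasonal curve is continuous across the period boundary. The data below are synthetic stand-ins for daily emergency-service demand.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily series with a period-7 seasonal pattern plus noise.
period, n = 7, 700
t = np.arange(n)
true_season = np.array([5.0, 3.0, 2.0, 2.0, 3.0, 6.0, 9.0])
y = 100 + true_season[t % period] + rng.normal(0, 1.0, n)

def periodic_hat_basis(t, period, n_knots):
    """One 'hat' (linear B-spline) function per knot, with wraparound
    distance so the seasonal curve closes smoothly over the period."""
    knots = np.arange(n_knots) * period / n_knots
    phase = t % period
    width = period / n_knots
    B = np.zeros((len(t), n_knots))
    for j, k in enumerate(knots):
        d = np.minimum(np.abs(phase - k), period - np.abs(phase - k))
        B[:, j] = np.maximum(0.0, 1.0 - d / width)
    return B

B = periodic_hat_basis(t, period, n_knots=7)
X = np.column_stack([np.ones(n), B[:, 1:]])   # drop one column: identifiability
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coef
resid_sd = float((y - fitted).std())
```

    With integer daily phases and one knot per day this reduces to day-of-week dummies; using fewer knots than days is what gives the spline formulation its parsimony, and letting the coefficients evolve over time yields the stochastic seasonal variant described in the abstract.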

  15. Building confidence and credibility amid growing model and computing complexity

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Mahajan, S.; Veneziani, C.; Kennedy, J. H.

    2017-12-01

    As global Earth system models are developed to answer an ever-wider range of science questions, software products that provide robust verification, validation, and evaluation must evolve in tandem. Measuring the degree to which these new models capture past behavior, predict the future, and quantify the certainty of predictions is becoming ever more difficult for reasons that are generally well known, yet still challenging to address. Two specific and divergent needs for analysis of the Accelerated Climate Model for Energy (ACME) model - but with a similar software philosophy - are presented to show how a model developer-based focus can address analysis needs during expansive model changes to provide greater fidelity and execute on multi-petascale computing facilities. A-PRIME is a python script-based quick-look overview of a fully-coupled global model configuration to determine quickly if it captures specific behavior before significant computer time and expense is invested. EVE is an ensemble-based software framework that focuses on verification of performance-based ACME model development, such as compiler or machine settings, to determine the equivalence of relevant climate statistics. The challenges and solutions for analysis of multi-petabyte output data are highlighted from the aspect of the scientist using the software, with the aim of fostering discussion and further input from the community about improving developer confidence and community credibility.

  16. Modelling the dynamics of two political parties in the presence of switching.

    PubMed

    Nyabadza, F; Alassey, Tobge Yawo; Muchatibaya, Gift

    2016-01-01

    This paper generalizes the model proposed by Misra by considering switching between political parties. In the model proposed, the movements of members from political party B to political party C and vice versa are considered, but only as a net movement, by assuming that [Formula: see text] (a constant), which implies that members move either from party B to party C or from party C to party B. In this paper we remodel these movements through switching functions to capture how individuals switch between parties. The results provide a more comprehensive synopsis of the dynamics between two political parties.
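
    A minimal two-compartment sketch of bidirectional switching (the switching functions and all parameters below are invented for illustration, not those of the paper): members leave each party at a per-capita rate that saturates in the size of the destination party, so both flows are active at once rather than only a net flow.

```python
# Two-party switching sketch: members switch between parties B and C at
# per-capita rates depending on the other party's size (hypothetical form).
def switch_rate(other, half_sat=50.0, max_rate=0.1):
    """Saturating attraction toward the other party."""
    return max_rate * other / (half_sat + other)

def step(B, C, dt):
    """One explicit Euler step of the switching dynamics."""
    b_to_c = switch_rate(C) * B
    c_to_b = switch_rate(B) * C
    return B + dt * (c_to_b - b_to_c), C + dt * (b_to_c - c_to_b)

B, C = 80.0, 20.0
total0 = B + C
for _ in range(20000):   # integrate to t = 200
    B, C = step(B, C, 0.01)
```

    Because switching only moves members between the parties, the total membership is conserved exactly, and with this symmetric rate form the system relaxes to equal party sizes; asymmetric switching functions shift that equilibrium, which is the kind of behavior the paper's analysis characterizes.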

  17. A unified stochastic formulation of dissipative quantum dynamics. I. Generalized hierarchical equations

    NASA Astrophysics Data System (ADS)

    Hsieh, Chang-Yu; Cao, Jianshu

    2018-01-01

    We extend a standard stochastic theory to study open quantum systems coupled to a generic quantum environment. We exemplify the general framework by studying a two-level quantum system coupled bilinearly to the three fundamental classes of non-interacting particles: bosons, fermions, and spins. In this unified stochastic approach, the generalized stochastic Liouville equation (SLE) formally captures the exact quantum dissipations when noise variables with appropriate statistics for different bath models are applied. Anharmonic effects of a non-Gaussian bath are precisely encoded in the bath multi-time correlation functions that noise variables have to satisfy. Starting from the SLE, we devise a family of generalized hierarchical equations by averaging out the noise variables and expand bath multi-time correlation functions in a complete basis of orthonormal functions. The general hierarchical equations constitute systems of linear equations that provide numerically exact simulations of quantum dynamics. For bosonic bath models, our general hierarchical equation of motion reduces exactly to an extended version of hierarchical equation of motion which allows efficient simulation for arbitrary spectral densities and temperature regimes. Similar efficiency and flexibility can be achieved for the fermionic bath models within our formalism. The spin bath models can be simulated with two complementary approaches in the present formalism. (I) They can be viewed as an example of non-Gaussian bath models and be directly handled with the general hierarchical equation approach given their multi-time correlation functions. (II) Alternatively, each bath spin can be first mapped onto a pair of fermions and be treated as fermionic environments within the present formalism.

  18. 40 CFR 63.3100 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... be in compliance with the operating limits for emission capture systems and add-on control devices...) You must maintain a log detailing the operation and maintenance of the emission capture systems, add... capture system and add-on control device performance tests have been completed, as specified in § 63.3160...

  19. 40 CFR 63.3100 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... be in compliance with the operating limits for emission capture systems and add-on control devices...) You must maintain a log detailing the operation and maintenance of the emission capture systems, add... capture system and add-on control device performance tests have been completed, as specified in § 63.3160...

  20. 40 CFR 63.3100 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... be in compliance with the operating limits for emission capture systems and add-on control devices...) You must maintain a log detailing the operation and maintenance of the emission capture systems, add... capture system and add-on control device performance tests have been completed, as specified in § 63.3160...

  1. Direct Air Capture of CO2 with an Amine Resin: A Molecular Modeling Study of the CO2 Capturing Process

    PubMed Central

    2017-01-01

    Several reactions, known from other amine systems for CO2 capture, have been proposed for Lewatit R VP OC 1065. The aim of this molecular modeling study is to elucidate the CO2 capture process: the physisorption process prior to CO2 capture, and the subsequent reactions. Molecular modeling shows that the resin has a structure with benzyl amine groups on alternating positions in close vicinity of each other. Based on this structure, the preferred adsorption mode of CO2 and H2O was established. Next, using standard Density Functional Theory, two catalytic reactions responsible for the actual CO2 capture were identified: direct amine-catalyzed and amine-H2O-catalyzed formation of carbamic acid. The latter is a new type of catalysis. Other reactions are unlikely. Quantitative verification of the molecular modeling results against known experimental CO2 adsorption isotherms, applying a dual-site Langmuir adsorption isotherm model, further supports all results of this molecular modeling study. PMID:29142339
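
    The dual-site Langmuir isotherm used for the quantitative verification is straightforward to state: the total loading is the sum of two independent Langmuir terms, one per site type. The parameter values below are illustrative placeholders, not the fitted values for Lewatit VP OC 1065.

```python
# Dual-site Langmuir isotherm: loading q(P) as the sum of two
# independent Langmuir sites (illustrative parameters).
def dual_site_langmuir(P, q1, b1, q2, b2):
    """q_i: site capacities; b_i: affinity constants; P: pressure."""
    return q1 * b1 * P / (1.0 + b1 * P) + q2 * b2 * P / (1.0 + b2 * P)

q1, b1 = 1.5, 50.0   # strong (chemisorption-like) site, hypothetical values
q2, b2 = 1.0, 0.5    # weak (physisorption-like) site, hypothetical values
```

    The model vanishes at zero pressure, rises monotonically, and saturates at q1 + q2; fitting the four parameters to measured uptake is how the modeled site picture is checked against experiment.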

  2. Use of models to map potential capture of surface water

    USGS Publications Warehouse

    Leake, Stanley A.

    2006-01-01

    The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
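
    The mapping procedure described above can be sketched on a 1-D toy aquifer (all grid dimensions, properties, and rates below are illustrative, and this is not MODFLOW or any specific code): with fixed-head (h = 0) boundaries standing in for streams, the no-pumping baseline carries zero boundary flow, so the capture fraction at any time is simply the pumping-induced boundary inflow divided by the withdrawal rate.

```python
import numpy as np

# 1-D confined-aquifer toy model of capture-fraction mapping.
nx, dx = 51, 100.0            # cells, cell size
T, S = 100.0, 1e-3            # transmissivity, storage coefficient
Q, well = 500.0, 25           # withdrawal rate, pumped cell index
dt = 1.0                      # time step (days)

# Implicit FD: (S dx/dt)(h_new - h_old) = (T/dx) * laplacian(h_new) - q
A = np.zeros((nx, nx))
for i in range(nx):
    A[i, i] = S * dx / dt + 2 * T / dx
    if i > 0:
        A[i, i - 1] = -T / dx
    if i < nx - 1:
        A[i, i + 1] = -T / dx
for i in (0, nx - 1):         # fixed-head (h = 0) boundary cells
    A[i, :] = 0.0
    A[i, i] = 1.0

q = np.zeros(nx)
q[well] = Q
h = np.zeros(nx)              # baseline steady state: h = 0, no boundary flow

def capture_fraction(h):
    """Induced inflow across both fixed-head boundaries, as a fraction of Q."""
    inflow = T / dx * (0.0 - h[1]) + T / dx * (0.0 - h[-2])
    return inflow / Q

fractions = []
for step in range(365):       # one year of pumping
    rhs = S * dx / dt * h - q
    rhs[0] = rhs[-1] = 0.0
    h = np.linalg.solve(A, rhs)
    if step in (9, 364):      # record at day 10 and day 365
        fractions.append(capture_fraction(h))
```

    Early on, most of the withdrawal comes from storage and the capture fraction is small; it then climbs toward 1 as the cone of depression reaches the boundaries. Repeating this run with the well placed in each cell, and contouring the fractions, is the mapping procedure the abstract describes.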

  3. Real‐time monitoring and control of the load phase of a protein A capture step

    PubMed Central

    Rüdt, Matthias; Brestrich, Nina; Rolinger, Laura

    2016-01-01

    ABSTRACT The load phase in preparative Protein A capture steps is commonly not controlled in real-time. The load volume is generally based on an offline quantification of the monoclonal antibody (mAb) prior to loading and on a conservative column capacity determined by resin lifetime studies. While this merely reduces productivity in batch mode, the bottleneck of suitable real-time analytics has to be overcome in order to enable continuous mAb purification. In this study, Partial Least Squares Regression (PLS) modeling on UV/Vis absorption spectra was applied to quantify mAb in the effluent of a Protein A capture step during the load phase. A PLS model based on several breakthrough curves with variable mAb titers in the harvested cell culture fluid (HCCF) was successfully calibrated. The PLS model predicted the mAb concentrations in the effluent of a validation experiment with a root mean square error (RMSE) of 0.06 mg/mL. This information was applied to automatically terminate the load phase when a product breakthrough of 1.5 mg/mL was reached. In the second part of the study, the sensitivity of the method was further increased by considering only small mAb concentrations in the calibration and by subtracting an impurity background signal. The resulting PLS model exhibited an RMSE of prediction of 0.01 mg/mL and was successfully applied to terminate the load phase when a product breakthrough of 0.15 mg/mL was reached. The proposed method hence has potential for real-time monitoring and control of capture steps in large-scale production. It might enhance resin capacity utilization, eliminate time-consuming offline analytics, and contribute to the realization of continuous processing. Biotechnol. Bioeng. 2017;114: 368–373. © 2016 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc. PMID:27543789
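
A minimal single-response PLS (NIPALS) calibration on simulated spectra illustrates the idea; this is a generic sketch, not the authors' implementation, and the simulated "spectra" are a toy stand-in for UV/Vis absorption data:

```python
import numpy as np

def pls1(X, y, n_components):
    """Fit a single-response PLS model (NIPALS); return a predictor."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)           # weight vector
        t = Xc @ w                       # scores
        tt = t @ t
        p = Xc.T @ t / tt                # X loadings
        qk = (yc @ t) / tt               # y loading
        Xc = Xc - np.outer(t, p)         # deflation
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression vector
    return lambda Xnew: (Xnew - x_mean) @ B + y_mean

# Simulated breakthrough spectra: absorbance = concentration times a
# pure-component spectrum plus noise (all values illustrative).
rng = np.random.default_rng(0)
wavelengths = np.linspace(0.0, 1.0, 50)
pure = np.exp(-((wavelengths - 0.5) ** 2) / 0.02)
conc = rng.uniform(0.0, 2.0, 40)
spectra = np.outer(conc, pure) + 0.01 * rng.standard_normal((40, 50))

predict = pls1(spectra, conc, 2)
rmse = np.sqrt(np.mean((predict(spectra) - conc) ** 2))
print(round(rmse, 4))  # small in-sample error
```

A load-phase controller in the spirit of the paper would stop loading once `predict` applied to the current effluent spectrum exceeds the chosen breakthrough threshold.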

  4. A simple nonlinear model for the return to isotropy in turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, Sutanu; Speziale, Charles G.

    1990-01-01

    A quadratic nonlinear generalization of the linear Rotta model for the slow pressure-strain correlation of turbulence is developed. The model is shown to satisfy realizability and to give rise to no stable nontrivial equilibrium solutions for the anisotropy tensor in the case of vanishing mean velocity gradients. The absence of stable nontrivial equilibrium solutions is a necessary condition to ensure that the model predicts a return to isotropy for all relaxational turbulent flows. Both the phase space dynamics and the temporal behavior of the model are examined and compared against experimental data for the return to isotropy problem. It is demonstrated that the quadratic model successfully captures the experimental trends which clearly exhibit nonlinear behavior. Direct comparisons are also made with the predictions of the Rotta model and the Lumley model.
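
Schematically, and in standard second-moment-closure notation (anisotropy tensor $b_{ij}$, turbulent kinetic energy $k$, dissipation rate $\varepsilon$; the coefficient labels are generic, not taken from the paper), the quadratic generalization adds a single nonlinear term to Rotta's linear model for the slow pressure-strain correlation:

```latex
b_{ij} = \frac{\overline{u_i u_j}}{2k} - \frac{\delta_{ij}}{3}, \qquad
\frac{\Pi_{ij}^{(\mathrm{slow})}}{\varepsilon}
  = -C_1\, b_{ij} + C_2 \left( b_{ik} b_{kj} - \tfrac{1}{3}\, II\, \delta_{ij} \right),
\qquad II = b_{mn} b_{mn}.
```

Setting $C_2 = 0$ recovers the linear Rotta model; realizability constrains the admissible values of $C_1$ and $C_2$.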

  5. Choice and explanation in medical management: a multiattribute model of artificial intelligence approaches.

    PubMed

    Rennels, G D; Shortliffe, E H; Miller, P L

    1987-01-01

    This paper explores a model of choice and explanation in medical management and makes clear its advantages and limitations. The model is based on multiattribute decision making (MADM) and consists of four distinct strategies for choice and explanation, plus combinations of these four. Each strategy is a restricted form of the general MADM approach, and each makes restrictive assumptions about the nature of the domain. The advantage of tailoring a restricted form of a general technique to a particular domain is that such efforts may better capture the character of the domain and allow choice and explanation to be more naturally modelled. The uses of the strategies for both choice and explanation are illustrated with analyses of several existing medical management artificial intelligence (AI) systems, and also with examples from the management of primary breast cancer. Using the model it is possible to identify common underlying features of these AI systems, since each employs portions of this model in different ways. Thus the model enables better understanding and characterization of the seemingly ad hoc decision making of previous systems.

  6. Interplay between inhibited transport and reaction in nanoporous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackerman, David Michael

    2013-01-01

    This work presents a detailed formulation of reaction and diffusion dynamics of molecules in confined pores such as mesoporous silica and zeolites. A general reaction-diffusion model and discrete Monte Carlo simulations are presented. Both transient and steady-state behavior are covered. Failure of previous mean-field models for these systems is explained and discussed. A coarse-grained, generalized hydrodynamic model is developed that accurately captures the interplay between reaction and restricted transport in these systems. This method incorporates the non-uniform chemical diffusion behavior present in finite pores with multi-component diffusion. Two methods of calculating these diffusion values are developed: a random-walk-based approach and a driven diffusion model based on an extension of Fick's law. The effects of reaction, diffusion, pore length, and catalytic site distribution are investigated. In addition to strictly single-file motion, quasi-single-file diffusion is incorporated into the model to match a range of experimental systems. The connection between these experimental systems and model parameters is made through Langevin dynamics modeling of particles in confined pores.

  7. Numerical Modeling of Hailstorms and Hailstone Growth. Part III: Simulation of an Alberta Hailstorm--Natural and Seeded Cases.

    NASA Astrophysics Data System (ADS)

    Farley, Richard D.

    1987-07-01

    This paper reports on simulations of a multicellular hailstorm case observed during the 1983 Alberta Hail Project. The field operations on that day concentrated on two successive feeder cells which were subjected to controlled seeding experiments. The first of these cells received the placebo treatment and the second was seeded with dry ice. The principal tool of this study is a modified version of the two-dimensional, time-dependent hail category model described in Part I of this series of papers. It is with this model that hail growth processes are investigated, including the simulated effects of cloud seeding techniques as practiced in Alberta. The model simulation of the natural case produces a very good replication of the observed storm, particularly the placebo feeder cell. This is evidenced, in particular, by the high degree of fidelity of the observed and modeled radar reflectivity in terms of magnitudes, structure, and evolution. The character of the hailfall at the surface and the scale of the storm are captured nicely by the model, although cloud-top heights are generally too high, particularly for the mature storm system. Seeding experiments similar to those conducted in the field have also been simulated. These involve seeding the feeder cell early in its active development phase with dry ice (CO2) or silver iodide (AgI) introduced near cloud top. The model simulations of these seeded cases capture some of the observed seeding signatures detected by radar and aircraft. In these model experiments, CO2 seeding produced a stronger response than AgI seeding relative to inhibiting hail formation. For both seeded cases, production of precipitating ice was initially enhanced by the seeding, but retarded slightly in the later stages, the net result being modest increases in surface rainfall, with hail reduced slightly.
In general, the model simulations support several subhypotheses of the operational strategy of the Alberta Research Council regarding the earlier formation of ice, snow, and graupel due to seeding.

  8. Program SPACECAP: software for estimating animal density using spatially explicit capture-recapture models

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas

    2012-01-01

    1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.

  9. Prediction of microcracking in composite laminates under thermomechanical loading

    NASA Technical Reports Server (NTRS)

    Maddocks, Jason R.; Mcmanus, Hugh L.

    1995-01-01

    Composite laminates used in space structures are exposed to both thermal and mechanical loads. Cracks in the matrix form, changing the laminate thermoelastic properties. An analytical methodology is developed to predict microcrack density in a general laminate exposed to an arbitrary thermomechanical load history. The analysis uses a shear lag stress solution in conjunction with an energy-based cracking criterion. Experimental investigation was used to verify the analysis. Correlation between analysis and experiment is generally excellent. The analysis does not capture machining-induced cracking, or observed delayed crack initiation in a few ply groups, but these errors do not prevent the model from being a useful preliminary design tool.

  10. Generalizing ecological site concepts of the Colorado Plateau for landscape-level applications

    USGS Publications Warehouse

    Duniway, Michael C.; Nauman, Travis; Johanson, Jamin K.; Green, Shane; Miller, Mark E.; Bestelmeyer, Brandon T.

    2016-01-01

    Numerous ecological site descriptions in the southern Utah portion of the Colorado Plateau can be difficult to navigate, so we held a workshop aimed at adding value and functionality to the current ecological site system. We created new groups of ecological sites and drafted state-and-transition models for these new groups. We were able to distill the current large number of ecological sites in the study area (ca. 150) into eight ecological site groups that capture important variability in ecosystem dynamics. Several inventory and monitoring programs and landscape-scale planning actions will likely benefit from more generalized ecological site group concepts.

  11. Skeletal muscle tensile strain dependence: hyperviscoelastic nonlinearity

    PubMed Central

    Wheatley, Benjamin B; Morrow, Duane A; Odegard, Gregory M; Kaufman, Kenton R; Donahue, Tammy L Haut

    2015-01-01

    Introduction Computational modeling of skeletal muscle requires characterization at the tissue level. While most skeletal muscle studies focus on hyperelasticity, the goal of this study was to examine and model the nonlinear behavior of both time-independent and time-dependent properties of skeletal muscle as a function of strain. Materials and Methods Nine tibialis anterior muscles from New Zealand White rabbits were subject to five consecutive stress relaxation cycles of roughly 3% strain. Individual relaxation steps were fit with a three-term linear Prony series. Prony series coefficients and relaxation ratio were assessed for strain dependence using a general linear statistical model. A fully nonlinear constitutive model was employed to capture the strain dependence of both the viscoelastic and instantaneous components. Results Instantaneous modulus (p<0.0005) and mid-range relaxation (p<0.0005) increased significantly with strain level, while relaxation at longer time periods decreased with strain (p<0.0005). Time constants and overall relaxation ratio did not change with strain level (p>0.1). Additionally, the fully nonlinear hyperviscoelastic constitutive model provided an excellent fit to experimental data, while other models which included linear components failed to capture muscle function as accurately. Conclusions Material properties of skeletal muscle are strain-dependent at the tissue level. This strain dependence can be included in computational models of skeletal muscle performance with a fully nonlinear hyperviscoelastic model. PMID:26409235
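
Because a Prony series is linear in its coefficients once the time constants are fixed, each relaxation step can be fit by ordinary least squares. The time constants and the synthetic relaxation curve below are illustrative, not the study's values:

```python
import numpy as np

# Hypothetical fixed time constants (s) for a three-term linear
# Prony series: sigma(t) = c0 + sum_i c_i * exp(-t / tau_i).
taus = np.array([0.1, 1.0, 10.0])

def fit_prony(t, stress):
    """Least-squares Prony coefficients with fixed time constants.

    The model is linear in the coefficients, so ordinary least
    squares suffices; returns [c0, c1, c2, c3].
    """
    A = np.column_stack([np.ones_like(t)] +
                        [np.exp(-t / tau) for tau in taus])
    coeffs, *_ = np.linalg.lstsq(A, stress, rcond=None)
    return coeffs

# Synthetic, noise-free relaxation curve to exercise the fit.
t = np.linspace(0.0, 30.0, 300)
true = np.array([2.0, 1.0, 0.8, 0.5])
sigma = (true[0] + true[1] * np.exp(-t / 0.1)
         + true[2] * np.exp(-t / 1.0) + true[3] * np.exp(-t / 10.0))
print(np.round(fit_prony(t, sigma), 3))  # recovers [2.  1.  0.8 0.5]
```

The overall relaxation ratio discussed in the abstract corresponds to the long-time plateau `c0` divided by the instantaneous value `c0 + c1 + c2 + c3`.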

  12. Modeling nitrous oxide production during biological nitrogen removal via nitrification and denitrification: extensions to the general ASM models.

    PubMed

    Ni, Bing-Jie; Ruscalleda, Maël; Pellicer-Nàcher, Carles; Smets, Barth F

    2011-09-15

    Nitrous oxide (N(2)O) can be formed during biological nitrogen (N) removal processes. In this work, a mathematical model is developed that describes N(2)O production and consumption during activated sludge nitrification and denitrification. The well-known ASM process models are extended to capture N(2)O dynamics during both nitrification and denitrification in biological N removal. Six additional processes and three additional reactants, all involved in known biochemical reactions, have been added. The validity and applicability of the model are demonstrated by comparing simulations with experimental data on N(2)O production from four different mixed culture nitrification and denitrification reactor study reports. Modeling results confirm that hydroxylamine oxidation by ammonium oxidizers (AOB) occurs 10 times slower when NO(2)(-) participates as final electron acceptor compared to the oxic pathway. Among the four denitrification steps, the last one (N(2)O reduction to N(2)) seems to be inhibited first when O(2) is present. Overall, N(2)O production can account for 0.1-25% of the consumed N in different nitrification and denitrification systems, which can be well simulated by the proposed model. In conclusion, we provide a modeling structure, which adequately captures N(2)O dynamics in autotrophic nitrification and heterotrophic denitrification driven biological N removal processes and which can form the basis for ongoing refinements.

  13. Food abundance, prey morphology, and diet specialization influence individual sea otter tool use

    USGS Publications Warehouse

    Fujii, Jessica A.; Ralls, Katherine; Tinker, M. Tim

    2017-01-01

    Sea otters are well-known tool users, employing objects such as rocks or shells to break open invertebrate prey. We used a series of generalized linear mixed effect models to examine observational data on prey capture and tool use from 211 tagged individuals from 5 geographically defined study areas throughout the sea otter’s range in California. Our best supported model was able to explain 75% of the variation in the frequency of tool use by individual sea otters with only ecological and demographic variables. In one study area, where sea otter food resources were abundant, all individuals had similar diets focusing on preferred prey items and used tools at low to moderate frequencies (4–38% of prey captures). In the remaining areas, where sea otters were food-limited, individuals specialized on different subsets of the available prey and had a wider range of average tool-use frequency (0–98% of prey captures). The prevalence of difficult-to-access prey in individual diets was a major predictor of tool use and increased the likelihood of using tools on prey that were not difficult to access as well. Age, sex, and feeding habitat also contributed to the probability of tool use but to a smaller extent. We developed a conceptual model illustrating how food abundance, the prevalence of difficult-to-access prey, and individual diet specialization interacted to determine the likelihood that individual sea otters would use tools and considered the model’s relevance to other tool-using species.
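
A stripped-down, fixed-effects caricature of such an analysis (no random effects, one simulated covariate; not the authors' model or data) fits a logistic regression of tool use on prey difficulty by gradient ascent:

```python
import numpy as np

# Simulated data: each row is one prey capture; "difficult" flags a
# difficult-to-access prey item.  Effect sizes are invented.
rng = np.random.default_rng(1)
n = 2000
difficult = rng.integers(0, 2, n).astype(float)
true_beta = np.array([-2.0, 3.0])            # intercept, prey effect
X = np.column_stack([np.ones(n), difficult])
p = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
used_tool = rng.binomial(1, p)

# Maximum likelihood by gradient ascent on the Bernoulli
# log-likelihood (gradient of the mean log-likelihood is X'(y - mu)/n).
beta = np.zeros(2)
for _ in range(5000):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    beta += 1.0 * X.T @ (used_tool - mu) / n
print(np.round(beta, 2))  # close to the true values (-2, 3)
```

The study's mixed models additionally include random effects for individual otters, which is what lets diet specialization enter as an individual-level predictor.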

  14. The evolution of ecosystem ascendency in a complex systems based model.

    PubMed

    Brinck, Katharina; Jensen, Henrik Jeldtoft

    2017-09-07

    General patterns in ecosystem development can shed light on the driving forces behind ecosystem formation and recovery and have long been of interest. In recent years, the need for integrative and process-oriented approaches to capture ecosystem growth, development and organisation, as well as the scope of information theory as a descriptive tool, has been addressed from various sides. However, data collection on ecological network flows is difficult and tedious, and comprehensive models are lacking. We use a hierarchical version of the Tangled Nature Model of evolutionary ecology to study the relationship between structure, flow and organisation in model ecosystems, their development over evolutionary time scales and their relation to ecosystem stability. Our findings support the validity of ecosystem ascendency as a meaningful measure of ecosystem organisation, which increases over evolutionary time scales and significantly drops during periods of disturbance. The results suggest a general trend towards both higher integrity and increased stability driven by functional and structural ecosystem coadaptation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Shape Transformations of Epithelial Shells

    PubMed Central

    Misra, Mahim; Audoly, Basile; Kevrekidis, Ioannis G.; Shvartsman, Stanislav Y.

    2016-01-01

    Regulated deformations of epithelial sheets are frequently foreshadowed by patterning of their mechanical properties. The connection between patterns of cell properties and the emerging tissue deformations is studied in multiple experimental systems, but the general principles remain poorly understood. For instance, it is in general unclear what determines the direction in which the patterned sheet is going to bend and whether the resulting shape transformation will be discontinuous or smooth. Here these questions are explored computationally, using vertex models of epithelial shells assembled from prismlike cells. In response to rings and patches of apical cell contractility, model epithelia smoothly deform into invaginated or evaginated shapes similar to those observed in embryos and tissue organoids. Most of the observed effects can be captured by a simpler model with polygonal cells, modified to include the effects of the apicobasal polarity and natural curvature of epithelia. Our models can be readily extended to include the effects of multiple constraints and used to describe a wide range of morphogenetic processes. PMID:27074691

  16. S2O - A software tool for integrating research data from general purpose statistic software into electronic data capture systems.

    PubMed

    Bruland, Philipp; Dugas, Martin

    2017-01-07

    Data capture for clinical registries or pilot studies is often performed in spreadsheet-based applications like Microsoft Excel or IBM SPSS. Usually, data is transferred into statistics software, such as SAS, R or IBM SPSS Statistics, for analysis afterwards. Spreadsheet-based solutions suffer from several drawbacks: it is generally not possible to ensure sufficient rights and role management, and it is not traced who changed which data, when, and why. Therefore, such systems cannot comply with regulatory requirements for electronic data capture in clinical trials. In contrast, Electronic Data Capture (EDC) software enables a reliable, secure and auditable collection of data. In this regard, most EDC vendors support the CDISC ODM standard to define, communicate and archive clinical trial metadata and patient data. Advantages of EDC systems are support for multi-user and multicenter clinical trials as well as auditable data. At present, migration from spreadsheet-based data collection to EDC systems is labor-intensive and time-consuming. Hence, the objectives of this research work are to develop a mapping model, implement a converter between the IBM SPSS format and the CDISC ODM standard, and evaluate this approach regarding syntactic and semantic correctness. A mapping model between IBM SPSS and CDISC ODM data structures was developed. SPSS variables and patient values can be mapped and converted into ODM. Statistical and display attributes from SPSS do not correspond to any ODM elements; study-related ODM elements are not available in SPSS. The S2O converting tool was implemented as a command-line tool using the SPSS-internal Java plugin. Syntactic and semantic correctness was validated with different ODM tools and by reverse transformation from ODM into the SPSS format. Clinical data values were also successfully transformed into the ODM structure.
    Transformation between IBM SPSS data files and the ODM standard for the definition and exchange of trial data is feasible. S2O facilitates migration from Excel- or SPSS-based data collections towards reliable EDC systems. Thereby, advantages of EDC systems, such as a reliable software architecture for secure and traceable data collection and, in particular, compliance with regulatory requirements, become achievable.
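
The core of such a variable-level mapping can be illustrated with a few lines of stdlib Python. The variable dictionaries, OID scheme, and type mapping below are invented for illustration and are not S2O's actual implementation; the `ItemDef` element with `OID`, `Name`, and `DataType` attributes follows the CDISC ODM metadata model:

```python
import xml.etree.ElementTree as ET

# Hypothetical SPSS-style variable metadata and a toy type mapping.
spss_variables = [
    {"name": "AGE", "type": "numeric", "label": "Age at enrollment"},
    {"name": "SEX", "type": "string", "label": "Sex"},
]
odm_type = {"numeric": "float", "string": "text"}

def to_itemdefs(variables):
    """Render variables as ODM-style ItemDef elements."""
    root = ET.Element("MetaDataVersion")
    for var in variables:
        ET.SubElement(root, "ItemDef", OID="IT." + var["name"],
                      Name=var["label"], DataType=odm_type[var["type"]])
    return ET.tostring(root, encoding="unicode")

print(to_itemdefs(spss_variables))
```

A real converter must additionally emit the surrounding ODM, Study, and form/item-group structure and map the patient-level values, which is where most of S2O's work lies.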

  17. Classification framework for partially observed dynamical systems

    NASA Astrophysics Data System (ADS)

    Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira

    2017-04-01

    We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much more simple than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.
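
As a toy illustration of learning in the model space (not the authors' test beds), one can represent each observed series by the posterior over a single AR(1) coefficient and classify by comparing posterior means to class prototypes; the priors, noise levels, and class values below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(a, n=300):
    """AR(1) series x_t = a * x_{t-1} + unit Gaussian noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.standard_normal()
    return x

def posterior(x):
    """Conjugate-normal posterior (mean, variance) of the AR
    coefficient, assuming unit noise and an N(0, 1) prior."""
    sxx = np.sum(x[:-1] ** 2)
    sxy = np.sum(x[:-1] * x[1:])
    prec = 1.0 + sxx            # prior precision + data precision
    return sxy / prec, 1.0 / prec

# Two hypothetical classes of dynamics, labeled by their coefficient.
class_means = {"slow": 0.9, "fast": 0.2}

def classify(x):
    m, _ = posterior(x)
    return min(class_means, key=lambda c: abs(class_means[c] - m))

print(classify(simulate(0.9)), classify(simulate(0.2)))
```

The paper's point is the same representation shift in a richer setting: data items live in the space of posterior distributions over model parameters rather than the space of raw observations or point estimates.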

  18. Mixed models approaches for joint modeling of different types of responses.

    PubMed

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

    In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received considerable attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that combining random effects to capture association with a further correction for overdispersion can improve the model's fit considerably, and that the resulting models make it possible to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.

  19. The shadow map: a general contact definition for capturing the dynamics of biomolecular folding and function.

    PubMed

    Noel, Jeffrey K; Whitford, Paul C; Onuchic, José N

    2012-07-26

    Structure-based models (SBMs) are simplified models of the biomolecular dynamics that arise from funneled energy landscapes. We recently introduced an all-atom SBM that explicitly represents the atomic geometry of a biomolecule. While this initial study showed the robustness of the all-atom SBM Hamiltonian to changes in many of the energetic parameters, an important aspect, which has not been explored previously, is the definition of native interactions. In this study, we propose a general definition for generating atomically grained contact maps called "Shadow". The Shadow algorithm initially considers all atoms within a cutoff distance and then, controlled by a screening parameter, discards the occluded contacts. We show that this choice of contact map is not only well behaved for protein folding, since it produces consistently cooperative folding behavior in SBMs, but also desirable for exploring the dynamics of macromolecular assemblies, since it distributes energy similarly between RNAs and proteins despite their disparate internal packing. All-atom structure-based models employing Shadow contact maps provide a general framework for exploring the geometrical features of biomolecules, especially the connections between folding and function.
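
A much-simplified sketch of the occlusion idea (not the Shadow algorithm itself): accept atom pairs within a cutoff, then discard a pair whenever a third atom lies within a screening radius of the segment joining them. The cutoff and screening values are illustrative:

```python
import numpy as np

def seg_dist(p, a, b):
    """Distance from point p to the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def shadow_contacts(coords, cutoff=4.5, screen=1.0):
    """Contact pairs within cutoff whose line of sight is unblocked."""
    n = len(coords)
    contacts = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) > cutoff:
                continue
            occluded = any(
                seg_dist(coords[k], coords[i], coords[j]) < screen
                for k in range(n) if k not in (i, j))
            if not occluded:
                contacts.append((i, j))
    return contacts

# Three collinear "atoms": the middle one shadows the outer pair.
coords = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
print(shadow_contacts(coords))  # [(0, 1), (1, 2)]
```

The published algorithm works with atomic radii and per-atom screening geometry rather than a single point-to-segment test, but the effect is the same: occluded pairs never become native contacts.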

  20. Model validation of simple-graph representations of metabolism

    PubMed Central

    Holme, Petter

    2009-01-01

    The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
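
The two best-performing representations can be constructed directly from a reaction list; the toy reactions below are illustrative:

```python
# Each reaction is a (substrates, products) pair of sets.
reactions = [
    ({"A", "B"}, {"C"}),      # A + B -> C
    ({"C"}, {"D", "E"}),      # C -> D + E
]

def substrate_product_edges(rxns):
    """Directed edges from each substrate to each product."""
    return {(s, p) for subs, prods in rxns for s in subs for p in prods}

def substance_edges(rxns):
    """Undirected edges among all substances sharing a reaction."""
    edges = set()
    for subs, prods in rxns:
        species = sorted(subs | prods)
        edges |= {(u, v) for i, u in enumerate(species)
                  for v in species[i + 1:]}
    return edges

print(sorted(substrate_product_edges(reactions)))
print(sorted(substance_edges(reactions)))
```

In the study, modularity recovered from graphs built this way is compared against the modularity designed into the model reaction system, which is what ranks the representations.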

  1. Discussion of “Bayesian design of experiments for industrial and scientific applications via gaussian processes”

    DOE PAGES

    Anderson-Cook, Christine M.; Burke, Sarah E.

    2016-10-18

    First, we would like to commend Dr. Woods on his thought-provoking paper and insightful presentation at the 4th Annual Stu Hunter conference. We think that the material presented highlights some important needs in the area of design of experiments for generalized linear models (GLMs). In addition, we agree with Dr. Woods that design of experiments for GLMs does implicitly require expert judgement about model parameters, and hence using a Bayesian approach to capture this knowledge is a natural strategy to summarize what is known while incorporating the associated uncertainty about that information.

  3. Elucidating the Origin of the Attractive Force among Hydrophilic Macroions

    PubMed Central

    Liu, Zhuonan; Liu, Tianbo; Tsige, Mesfin

    2016-01-01

    A coarse-grained simulation approach is applied to provide a general understanding of various soluble, hydrophilic macroionic solutions, especially the strong attractions among like-charged soluble macroions and the consequent spontaneous, reversible formation of blackberry structures with tunable sizes. This model captures essential molecular details of the macroions and their interactions in polar solvents. Results using this model yield conclusions consistent with the experimental observations, from the nature of the attractive force among macroions (counterion-mediated attraction) to the blackberry formation mechanism. The conclusions can be applied to various macroionic solutions, from inorganic molecular clusters to dendrimers and biomacromolecules. PMID:27215898

  4. Assessing temporally and spatially resolved PM 2.5 exposures for epidemiological studies using satellite aerosol optical depth measurements

    NASA Astrophysics Data System (ADS)

    Kloog, Itai; Koutrakis, Petros; Coull, Brent A.; Lee, Hyung Joo; Schwartz, Joel

    2011-11-01

    Land use regression (LUR) models provide good estimates of spatially resolved long-term exposures, but are poor at capturing short-term exposures. Satellite-derived Aerosol Optical Depth (AOD) measurements have the potential to provide spatio-temporally resolved predictions of both long- and short-term exposures, but previous studies have generally shown relatively low predictive power. Our objective was to extend our previous work on day-specific calibrations of AOD data using ground PM2.5 measurements by incorporating commonly used LUR variables and meteorological variables, thus benefiting from both the spatial resolution of the LUR models and the spatio-temporal resolution of the satellite models. We then use spatial smoothing to predict PM2.5 concentrations for days and locations with missing AOD measures. We used mixed models with random slopes for day to calibrate AOD data for 2000-2008 across New England with monitored PM2.5 measurements. We then used a generalized additive mixed model with spatial smoothing to estimate PM2.5 in location-day pairs with missing AOD, using regional measured PM2.5, AOD values in neighboring cells, and land use. Finally, local (100 m) land use terms were used to model the difference between grid cell prediction and monitored value to capture very local traffic particles. Out-of-sample ten-fold cross-validation was used to quantify the accuracy of our predictions. For days with available AOD data we found high out-of-sample R2 (mean out-of-sample R2 = 0.830, year-to-year variation 0.725-0.904). For days without AOD values, our model performance was also excellent (mean out-of-sample R2 = 0.810, year-to-year variation 0.692-0.887). Importantly, these R2 are for daily, rather than monthly or yearly, values. Our model allows one to assess short-term and long-term human exposures in order to investigate both the acute and chronic effects of ambient particles, respectively.
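
The day-specific calibration idea can be caricatured with independent per-day linear fits of ground PM2.5 on AOD. The paper's mixed model instead shares information across days via random slopes; that pooling is omitted here, and all numbers are simulated:

```python
import numpy as np

# Simulated monitoring network: the AOD-to-PM2.5 slope varies by day
# (e.g. with boundary-layer height), which is why a single pooled
# regression performs poorly.
rng = np.random.default_rng(3)
n_days, n_sites = 5, 60
true_slopes = rng.uniform(20.0, 40.0, n_days)
aod = rng.uniform(0.05, 0.6, (n_days, n_sites))
pm25 = (5.0 + true_slopes[:, None] * aod
        + rng.normal(0.0, 1.0, (n_days, n_sites)))

# One ordinary least-squares fit per day; polyfit returns [slope,
# intercept] for degree 1.
fitted = np.array([np.polyfit(aod[d], pm25[d], 1)[0]
                   for d in range(n_days)])
print(np.round(fitted - true_slopes, 1))  # per-day slope errors
```

The fitted day-specific slopes are then what allow AOD grids to be turned into daily PM2.5 surfaces, with the spatial-smoothing stage filling in days and cells where AOD is missing.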

  5. Analysis of Gas-Particle Flows through Multi-Scale Simulations

    NASA Astrophysics Data System (ADS)

    Gu, Yile

Multi-scale structures are inherent in gas-solid flows, which renders modeling efforts challenging. On one hand, detailed simulations, in which the fine structures are resolved and particle properties can be directly specified, can account for complex flow behaviors, but they are too computationally expensive to apply to larger systems. On the other hand, coarse-grained simulations demand far less computation but necessitate constitutive models that are often not readily available for given particle properties. The present study focuses on addressing this issue, as it seeks to provide a general framework through which one can obtain the required constitutive models from detailed simulations. To demonstrate the viability of this general framework, in which closures can be proposed for different particle properties, we focus on the van der Waals force of interaction between particles. We start with Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations, where the fine structures are resolved and the van der Waals force between particles can be directly specified, and obtain the closures for stress and drag that are required for coarse-grained simulations. Specifically, we develop a new cohesion model that appropriately accounts for the van der Waals force between particles in CFD-DEM simulations. We then validate this cohesion model and the CFD-DEM approach by showing that they qualitatively capture experimental results in which the addition of small particles to gas fluidization reduces bubble sizes. Based on the DEM and CFD-DEM simulation results, we propose stress models that account for the van der Waals force between particles. Finally, we apply machine learning, specifically neural networks, to obtain a drag model that captures the effects of fine structures and inter-particle cohesion. We show that this approach using neural networks, which can readily be applied to closures other than drag, can take advantage of the large amount of data generated from simulations, and therefore offers superior modeling performance over traditional approaches.
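The neural-network closure idea can be sketched with a small one-hidden-layer network fitted to a synthetic drag-correction surface. The target function, network size, and training settings below are arbitrary stand-ins, not the thesis closures; the point is only the workflow of regressing a closure on simulation-generated data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "filtered drag correction" data: inputs are a solids fraction
# and a cohesion number; the target is an invented smooth function.
X = rng.uniform(0.0, 1.0, (500, 2))
y = ((1.0 - X[:, 0]) ** 2 * (1.0 + 0.5 * X[:, 1]))[:, None]

# One-hidden-layer tanh network trained by full-batch gradient descent.
n_h = 16
W1 = rng.normal(0, 0.5, (2, n_h)); b1 = np.zeros(n_h)
W2 = rng.normal(0, 0.5, (n_h, 1)); b2 = np.zeros(1)

losses = []
lr = 0.05
for step in range(2000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared-error loss.
    g_out = 2.0 * err / len(X)
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])   # training error drops substantially
```

In practice the inputs would be filtered flow quantities from the resolved CFD-DEM runs, and the trained network would replace an algebraic drag correlation in the coarse-grained solver.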

  6. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.

  7. Modeling association among demographic parameters in analysis of open population capture-recapture data

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2005-01-01

We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution; thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
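The Metropolis–Hastings machinery relied on above can be illustrated in miniature: a random-walk sampler for a single survival probability under a binomial likelihood with a flat prior. The data and proposal scale below are invented, and the actual CJS likelihood over full capture histories is far richer than this toy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: of n marked animals released, k survive to the next occasion.
n, k = 200, 140

def log_post(phi):
    # Binomial log-likelihood with a flat prior on the survival rate phi.
    if not 0.0 < phi < 1.0:
        return -np.inf
    return k * np.log(phi) + (n - k) * np.log(1.0 - phi)

# Random-walk Metropolis-Hastings: propose, then accept with
# probability min(1, posterior ratio).
samples = []
phi = 0.5
lp = log_post(phi)
for _ in range(20000):
    prop = phi + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        phi, lp = prop, lp_prop
    samples.append(phi)

post_mean = float(np.mean(samples[5000:]))   # discard burn-in
print(post_mean)   # close to the MLE k/n = 0.70
```

The candidate-generation scheme in the paper tunes proposals to the CJS structure; this sketch uses a plain Gaussian random walk.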

  8. Angular momentum and torques in a simulation of the atmosphere's response to the 1982-83 El Nino

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponte, R.M.; Rosen, R.D.; Boer, G.J.

Anomalies in the angular momentum of the atmosphere (M) during the 1982-83 El Nino event and the torques responsible for these anomalies are investigated using output from the Canadian Climate Centre general circulation model. Model values of M during the year of the event are generally larger than those for the model climatology, thereby capturing the observed tendency toward higher values of M during El Nino. Differences exist between the model and observations in the timing and amplitude of the largest anomalies, but these differences may be due to natural variability and not necessarily directly associated with the 1982-83 El Nino conditions. In late September and October 1982, the model atmosphere acquires momentum more rapidly than usual, leading to the development of the largest deviations from mean conditions at the end of October. A secondary maximum in the departure from mean M values occurs in January 1983 and is related to a general strengthening of westerly momentum anomalies over the model's tropical and midlatitude regions. Both mountain and tangential stress torques are involved in this episode, but no particular mechanism or region dominates the anomalous exchange of momentum. 24 refs., 10 figs., 1 tab.

  9. Creating markets for captured carbon: Retrofit of Abbott Power Plant and Future Utilization of Captured CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Kevin C.; Lu, Yongqi; Patel, Vinod

The successful implementation of CCUS requires the confluence of technology, regulatory, and financial factors. One of the factors that impact this confluence is the ability to utilize and monetize captured CO2. The generally accepted utilization approach has been CO2-based Enhanced Oil Recovery (EOR), yet this is not always feasible and/or a preferable approach. There is a need to be able to explore a multitude of utilization approaches in order to identify a portfolio of potential utilization mechanisms. This portfolio must be adapted based on the economy of the region. In response to this need, the University of Illinois has formed a Carbon Dioxide Utilization and Reduction (COOULR) Center. The open nature of the university, coupled with a university policy to reduce CO2 emissions, provides a model for the issues communities will face when attempting to reduce emissions while still maintaining reliable and affordable power. This Center is one of the key steps in the formation of a market for captured CO2. Furthermore, the goal of the Center is to not only evaluate technologies, but also demonstrate at a large pilot scale how communities may be able to adjust to the need to reduce GHG emissions.

  10. Creating markets for captured carbon: Retrofit of Abbott Power Plant and Future Utilization of Captured CO2

    DOE PAGES

    O'Brien, Kevin C.; Lu, Yongqi; Patel, Vinod; ...

    2017-01-01

The successful implementation of CCUS requires the confluence of technology, regulatory, and financial factors. One of the factors that impact this confluence is the ability to utilize and monetize captured CO2. The generally accepted utilization approach has been CO2-based Enhanced Oil Recovery (EOR), yet this is not always feasible and/or a preferable approach. There is a need to be able to explore a multitude of utilization approaches in order to identify a portfolio of potential utilization mechanisms. This portfolio must be adapted based on the economy of the region. In response to this need, the University of Illinois has formed a Carbon Dioxide Utilization and Reduction (COOULR) Center. The open nature of the university, coupled with a university policy to reduce CO2 emissions, provides a model for the issues communities will face when attempting to reduce emissions while still maintaining reliable and affordable power. This Center is one of the key steps in the formation of a market for captured CO2. Furthermore, the goal of the Center is to not only evaluate technologies, but also demonstrate at a large pilot scale how communities may be able to adjust to the need to reduce GHG emissions.

  11. The limited importance of size-asymmetric light competition and growth of pioneer species in early secondary forest succession in Vietnam.

    PubMed

    van Kuijk, Marijke; Anten, N P R; Oomen, R J; van Bentum, D W; Werger, M J A

    2008-08-01

It is generally believed that asymmetric competition for light plays a predominant role in determining the course of succession by increasing size inequalities between plants. Size-related growth is the product of size-related light capture and light-use efficiency (LUE). We used a canopy model to calculate light capture and photosynthetic rates of pioneer species in sequential vegetation stages of a young secondary forest stand. Growth of the same saplings was followed in time as succession proceeded. Photosynthetic rate per unit plant mass (P(mass); mol C g(-1) day(-1)), a proxy for plant growth, was calculated as the product of light-capture efficiency (Phi(mass); mol photosynthetic photon flux density (PPFD) g(-1) day(-1)) and LUE (mol C mol PPFD(-1)). Species showed different morphologies and photosynthetic characteristics, but their light-capture and light-use efficiencies, and thus P(mass), did not differ much. This was also observed in the field: plant growth was not size-asymmetric. The size hierarchy that was present from the very beginning of succession remained for at least the first 5 years. We conclude, therefore, that in slow-growing regenerating vegetation stands, the importance of asymmetric competition for light and growth can be much less than is often assumed.

  12. Constant-parameter capture-recapture models

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  13. Overgeneral autobiographical memory in healthy young and older adults: Differential age effects on components of the capture and rumination, functional avoidance, and impaired executive control (CaRFAX) model.

    PubMed

    Ros, Laura; Latorre, Jose M; Serrano, Juan P; Ricarte, Jorge J

    2017-08-01

The CaRFAX model (Williams et al., 2007) has been used to explain the causes of overgeneral autobiographical memory (OGM; the difficulty in retrieving specific autobiographical memories), a cognitive phenomenon generally associated with different psychopathologies. This model proposes 3 different mechanisms to explain OGM: capture and rumination (CaR), functional avoidance (FA), and impaired executive functions (X). However, the complete CaRFAX model has not been tested in nonclinical populations. This study aims to assess the usefulness of the CaRFAX model in explaining OGM in 2 healthy samples, a young sample and an older sample, to test for possible age-related differences in the underlying causes of OGM. A total of 175 young (age range: 19-36 years) and 175 older (age range: 53-88 years) participants completed measures of brooding rumination (CaR), functional avoidance (FA), and executive tasks (X). Using structural equation modeling, we found that memory specificity is mainly associated with lower functional avoidance and higher executive functions in the older group, but only with executive functions in young participants. We discuss the different roles of the emotional regulation strategies used by young and older people and their relation to the CaRFAX model in explaining OGM in healthy people. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Stratospheric temperatures and tracer transport in a nudged 4-year middle atmosphere GCM simulation

    NASA Astrophysics Data System (ADS)

    van Aalst, M. K.; Lelieveld, J.; Steil, B.; Brühl, C.; Jöckel, P.; Giorgetta, M. A.; Roelofs, G.-J.

    2005-02-01

We have performed a 4-year simulation with the Middle Atmosphere General Circulation Model MAECHAM5/MESSy, while slightly nudging the model's meteorology in the free troposphere (below 113 hPa) towards ECMWF analyses. We show that the nudging technique, which leaves the middle atmosphere almost entirely free, enables comparisons with synoptic observations. The model successfully reproduces many specific features of the interannual variability, including details of the Antarctic vortex structure. In the Arctic, the model captures general features of the interannual variability, but falls short in reproducing the timing of sudden stratospheric warmings. A detailed comparison of the nudged model simulations with ECMWF data shows that the model simulates realistic stratospheric temperature distributions and variabilities, including the temperature minima in the Antarctic vortex. Some small (a few K) model biases were also identified, including a summer cold bias at both poles, and a general cold bias in the lower stratosphere, most pronounced in midlatitudes. A comparison of tracer distributions with HALOE observations shows that the model successfully reproduces specific aspects of the instantaneous circulation. The main tracer transport deficiencies occur in the polar lowermost stratosphere. These are related to the tropopause altitude as well as the tracer advection scheme and model resolution. The additional nudging of equatorial zonal winds, forcing the quasi-biennial oscillation, significantly improves stratospheric temperatures and tracer distributions.

  15. Jump-Diffusion models and structural changes for asset forecasting in hydrology

    NASA Astrophysics Data System (ADS)

    Tranquille Temgoua, André Guy; Martel, Richard; Chang, Philippe J. J.; Rivera, Alfonso

    2017-04-01

Impacts of climate change on surface water and groundwater are of concern in many regions of the world, since water is an essential natural resource. Jump-Diffusion models are generally used in economics and related fields but not in hydrology, although they could potentially be applied to the analysis and forecasting of hydrologic data series. The present study uses Jump-Diffusion models, extended with structural changes, to detect fluctuations in hydrologic processes related to climate change. The model implicitly assumes that modifications in river flowrates fall into three categories: (a) normal changes due to irregular precipitation events, especially in tropical regions, causing major disturbances in hydrologic processes (this component is modelled by a discrete Brownian motion); (b) abnormal, sudden, and non-persistent modifications in hydrologic processes, which are handled by Poisson processes; and (c) persistent hydrologic fluctuations, characterized by structural changes in hydrological data related to climate variability. The objective of this paper is to add structural changes to diffusion models with jumps, in order to capture the persistence of hydrologic fluctuations. Indirectly, the idea is to observe whether there are structural changes of discharge/recharge over the study area, and to find an efficient and flexible model capable of capturing a wide variety of hydrologic processes. Structural changes in hydrological data are estimated using nonlinear discrete filters via the Method of Simulated Moments (MSM). An application is given using sensitive parameters, such as the baseflow index and the recession coefficient, to capture discharge/recharge. Historical datasets are examined by Volume Spread Analysis (VSA) to detect real-time and random perturbations in hydrologic processes. The application of the method allows more accurate hydrologic parameters to be established. The impact of this study is perceptible in forecasting floods and groundwater recession. Keywords: hydrologic processes, Jump-Diffusion models, structural changes, forecast, climate change
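The basic jump-diffusion building block described above, Brownian "normal" variability plus Poisson-driven jumps, can be sketched with an Euler scheme. The drift, volatility, and jump parameters below are illustrative values, not fitted to any hydrologic record, and the structural-change component is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler scheme for a jump-diffusion anomaly process:
#   dX = mu*dt + sigma*dW + J*dN
# with dW Brownian increments and dN a Poisson jump counter.
mu, sigma = 0.0, 0.4        # drift and diffusion of "normal" variability
lam, jump_sd = 0.1, 2.0     # jump intensity (per step) and jump size s.d.
n_steps, dt = 5000, 1.0

x = np.empty(n_steps + 1)
x[0] = 0.0
n_jumps = 0
for t in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    dN = rng.poisson(lam * dt)          # 0 on most steps, occasionally >= 1
    n_jumps += dN
    jump = rng.normal(0.0, jump_sd) * dN
    x[t + 1] = x[t] + mu * dt + sigma * dW + jump

print(n_jumps)   # roughly lam * n_steps jumps on average
```

Adding structural changes would amount to letting (mu, sigma, lam) switch values at estimated change points.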

  16. Micro air vehicle motion tracking and aerodynamic modeling

    NASA Astrophysics Data System (ADS)

    Uhlig, Daniel V.

    Aerodynamic performance of small-scale fixed-wing flight is not well understood, and flight data are needed to gain a better understanding of the aerodynamics of micro air vehicles (MAVs) flying at Reynolds numbers between 10,000 and 30,000. Experimental studies have shown the aerodynamic effects of low Reynolds number flow on wings and airfoils, but the amount of work that has been conducted is not extensive and mostly limited to tests in wind and water tunnels. In addition to wind and water tunnel testing, flight characteristics of aircraft can be gathered through flight testing. The small size and low weight of MAVs prevent the use of conventional on-board instrumentation systems, but motion tracking systems that use off-board triangulation can capture flight trajectories (position and attitude) of MAVs with minimal onboard instrumentation. Because captured motion trajectories include minute noise that depends on the aircraft size, the trajectory results were verified in this work using repeatability tests. From the captured glide trajectories, the aerodynamic characteristics of five unpowered aircraft were determined. Test results for the five MAVs showed the forces and moments acting on the aircraft throughout the test flights. In addition, the airspeed, angle of attack, and sideslip angle were also determined from the trajectories. Results for low angles of attack (less than approximately 20 deg) showed the lift, drag, and moment coefficients during nominal gliding flight. For the lift curve, the results showed a linear curve until stall that was generally less than finite wing predictions. The drag curve was well described by a polar. The moment coefficients during the gliding flights were used to determine longitudinal and lateral stability derivatives. The neutral point, weather-vane stability and the dihedral effect showed some variation with different trim speeds (different angles of attack). 
In the gliding flights, the aerodynamic characteristics exhibited quasi-steady effects caused by small variations in the angle of attack. The quasi-steady effects, or small unsteady effects, caused variations in the aerodynamic characteristics (particularly incrementing the lift curve), and the magnitude of the influence depended on the angle-of-attack rate. In addition to nominal gliding flight, MAVs in general are capable of flying over a wide flight envelope including agile maneuvers such as perching, hovering, deep stall, and maneuvering in confined spaces. From the captured motion trajectories, the aerodynamic characteristics during the numerous unsteady flights were gathered without the complexity required for unsteady wind tunnel tests. Experimental results for the MAVs show large flight envelopes that included high angles of attack (on the order of 90 deg) and high angular rates, and the aerodynamic coefficients exhibited dynamic stall hysteresis loops and large values. From the large number of unsteady high angle-of-attack flights, an aerodynamic modeling method was developed and refined for unsteady MAV flight at high angles of attack. The method was based on a separation parameter that depended on the time history of the angle of attack and angle-of-attack rate. The separation parameter accounted for the time lag inherent in the longitudinal characteristics during dynamic maneuvers. The method was applied to three MAVs and showed general agreement with unsteady experimental results and with nominal gliding flight results. The flight tests with the MAVs indicate that modern motion tracking systems are capable of capturing the flight trajectories, and the captured trajectories can be used to determine the aerodynamic characteristics. From the captured trajectories, low Reynolds number MAV flight is explored in both nominal gliding flight and unsteady high angle-of-attack flight.
Building on the experimental results, a modeling method for the longitudinal characteristics is developed that is applicable to the full flight envelope.
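A separation-parameter model of the kind described, with a first-order time lag on the angle of attack and its rate, is commonly written in the Goman–Khrabrov form. The static separation curve, time constants, pitching motion, and lift expression below are illustrative assumptions, not the thesis values.

```python
import numpy as np

# Goman-Khrabrov-style separation dynamics:
#   tau1 * dX/dt + X = X0(alpha - tau2 * dalpha/dt)
# where X in [0, 1] is the separation parameter and X0 its static value.
tau1, tau2 = 0.3, 0.1       # illustrative time constants (s)

def X0(alpha):
    # Static separation state vs angle of attack (rad); sigmoid stand-in.
    return 1.0 / (1.0 + np.exp((alpha - 0.35) / 0.05))

dt, n_steps = 0.001, 4000
t = np.arange(n_steps) * dt
alpha = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / 4.0)) * 0.6   # pitch 0 -> 0.6 rad -> 0
dalpha = np.gradient(alpha, dt)

X = np.empty(n_steps)
X[0] = X0(alpha[0])
for i in range(n_steps - 1):
    # Explicit Euler integration of the lag equation.
    X[i + 1] = X[i] + dt * (X0(alpha[i] - tau2 * dalpha[i]) - X[i]) / tau1

# A lift coefficient with separation effects (Kirchhoff-type blending).
CL = 2.0 * np.pi * alpha * ((1.0 + np.sqrt(np.clip(X, 0.0, 1.0))) / 2.0) ** 2
print(X.min(), X.max())   # X sweeps from attached (~1) to separated (~0)
```

Pitching up and down through stall traces different X histories, which is what produces the hysteresis loops in the unsteady coefficients.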

  17. Effects of sampling conditions on DNA-based estimates of American black bear abundance

    USGS Publications Warehouse

    Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. 
The capture probability for the larger of 2 mixture proportions of the population (i.e., pA or pB, depending on the value of π) was most important for predicting accuracy and precision, whereas capture probabilities of both mixture proportions (pA and pB) were important to explain variation in coverage. Based on sampling conditions similar to parameter estimates from the empirical dataset (pA = 0.30, pB = 0.05, N = 250, π = 0.15, and k = 10), predicted accuracy and precision were low (60% and 53%, respectively), whereas coverage was high (94%). Increasing pB, the capture probability for the predominant but most difficult-to-capture proportion of the population, was most effective to improve accuracy under those conditions. However, manipulation of other parameters may be more effective under different conditions. In general, the probabilities of obtaining accurate and precise estimates were best when p ≥ 0.2. Our regression models can be used by managers to evaluate specific sampling scenarios and guide development of sampling frameworks or to assess reliability of DNA-based capture-mark-recapture studies.
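The simulation setup can be sketched by generating capture histories under the two-point heterogeneity mixture with the sampling conditions quoted above (pA = 0.30, pB = 0.05, π = 0.15, N = 250, k = 10). This sketch only generates data and counts detections; it does not refit the closed-population models in Program MARK.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-point heterogeneity mixture: a fraction pi of animals has capture
# probability pA per occasion; the rest has pB.
N, k = 250, 10
pA, pB, pi = 0.30, 0.05, 0.15

is_A = rng.uniform(size=N) < pi
p = np.where(is_A, pA, pB)
hist = rng.uniform(size=(N, k)) < p[:, None]   # True = captured that occasion

detected = hist.any(axis=1)
n_detected = int(detected.sum())
print(n_detected)   # many low-p animals are never detected

# Expected fraction ever detected under the mixture:
expected = pi * (1 - (1 - pA) ** k) + (1 - pi) * (1 - (1 - pB) ** k)
print(expected * N)
```

Repeating this over a grid of (pA, pB, π, N, k) and fitting each replicate is what produces the accuracy, coverage, and precision surfaces analyzed in the paper.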

  18. A Techno-Economic Assessment of Hybrid Cooling Systems for Coal- and Natural-Gas-Fired Power Plants with and without Carbon Capture and Storage.

    PubMed

    Zhai, Haibo; Rubin, Edward S

    2016-04-05

    Advanced cooling systems can be deployed to enhance the resilience of thermoelectric power generation systems. This study developed and applied a new power plant modeling option for a hybrid cooling system at coal- or natural-gas-fired power plants with and without amine-based carbon capture and storage (CCS) systems. The results of the plant-level analyses show that the performance and cost of hybrid cooling systems are affected by a range of environmental, technical, and economic parameters. In general, when hot periods last the entire summer, the wet unit of a hybrid cooling system needs to share about 30% of the total plant cooling load in order to minimize the overall system cost. CCS deployment can lead to a significant increase in the water use of hybrid cooling systems, depending on the level of CO2 capture. Compared to wet cooling systems, widespread applications of hybrid cooling systems can substantially reduce water use in the electric power sector with only a moderate increase in the plant-level cost of electricity generation.

  19. On the evolution process of two-component dark matter in the Sun

    NASA Astrophysics Data System (ADS)

    Chen, Chian-Shu; Lin, Yen-Hsun

    2018-04-01

We introduce the dark matter (DM) evolution process in the Sun under a two-component DM (2DM) scenario. Both DM species χ and ξ, with masses heavier than 1 GeV, are considered. In this picture, both species can be captured by the Sun through DM-nucleus scattering and DM self-scatterings, e.g. χχ and ξξ collisions. In addition, heterogeneous self-scattering due to χξ collisions is possible in essentially any 2DM model. This newly introduced scattering naturally couples the evolution processes of the two DM species, which were previously assumed to evolve independently. Moreover, the heterogeneous self-scattering mutually enhances the number of DM particles captured in the Sun. This effect is significant over a broad range of the DM mass spectrum. We have studied this phenomenon and its implication for the solar-captured DM annihilation rate; it would be crucial to DM indirect detection when the two masses are close. The general formalism of the 2DM evolution in the Sun, as well as its kinematics, is presented.
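The evolution can be caricatured with a single-species rate equation that keeps the three ingredients of the paper's picture: capture, self-capture, and annihilation. The coefficients below are arbitrary illustrative values; the actual two-component system couples two such equations through the heterogeneous self-scattering term.

```python
import math

# Toy evolution of the captured DM number N(t) in the Sun:
#   dN/dt = Cc + Cs*N - Ca*N**2
# (capture by nuclei, self-capture, annihilation; arbitrary units).
Cc, Cs, Ca = 1.0, 0.1, 1e-3

# Forward-Euler integration until the population equilibrates.
dt, n_steps = 0.01, 50000
N = 0.0
for _ in range(n_steps):
    N += dt * (Cc + Cs * N - Ca * N ** 2)

# Equilibrium is the positive root of Ca*N^2 - Cs*N - Cc = 0.
N_eq = (Cs + math.sqrt(Cs ** 2 + 4.0 * Ca * Cc)) / (2.0 * Ca)
print(N, N_eq)   # the integration settles onto the equilibrium value
```

At equilibrium the annihilation rate is fixed by the capture terms, which is why the capture enhancement from heterogeneous self-scattering feeds directly into indirect-detection signals.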

  20. Multiprofessional Primary Care Units: What Affects the Clinical Performance of Italian General Practitioners?

    PubMed

    Armeni, Patrizio; Compagni, Amelia; Longo, Francesco

    2014-08-01

Multiprofessional primary care models promise to deliver better care and reduce waste. This study evaluates the impact of such a model, the primary care unit (PCU), on three outcomes. A multilevel analysis within a "pre- and post-PCU" study design and a cross-sectional analysis were conducted on 215 PCUs located in the Emilia-Romagna region in Italy. Seven dimensions captured a set of processes and services characterizing a well-functioning PCU, or its degree of vitality. The impact of each dimension on outcomes was evaluated. The analyses show that certain dimensions of PCU vitality (i.e., the possibility for general practitioners to meet and share patients) can lead to better outcomes. However, dimensions related to the interaction and joint work of general practitioners with other professionals tend not to have a significant or positive impact. This suggests that more effort needs to be invested to realize all the potential benefits of the PCU's multiprofessional approach to care. © The Author(s) 2014.

  1. Structural validation of the Self-Compassion Scale with a German general population sample

    PubMed Central

    Kwakkenbos, Linda; Moran, Chelsea; Thombs, Brett; Albani, Cornelia; Bourkas, Sophia; Zenger, Markus; Brahler, Elmar; Körner, Annett

    2018-01-01

Background Published validation studies have reported different factor structures for the Self-Compassion Scale (SCS). The objective of this study was to assess the factor structure of the SCS in a large general population sample representative of the German population. Methods A German population sample completed the SCS and other self-report measures. Confirmatory factor analysis (CFA) in MPlus was used to test six models previously found in factor analytic studies (unifactorial model, two-factor model, three-factor model, six-factor model, a hierarchical (second-order) model with six first-order factors and two second-order factors, and a model with arbitrarily assigned items to six factors). In addition, three bifactor models were also tested: bifactor model #1 with two group factors (the SCS positive items, called SCS positive, and the SCS negative items, called SCS negative) and one general factor (overall SCS); bifactor model #2, a two-tier model with six group factors, three (the SCS positive subscales) corresponding to one general dimension (SCS positive) and three (the SCS negative subscales) corresponding to the second general dimension (SCS negative); and bifactor model #3 with six group factors (the six SCS subscales) and one general factor (overall SCS). Results The two-factor model, the six-factor model, and the hierarchical model showed less than ideal, but acceptable, fit. The model fit indices for these models were comparable, with no apparent advantage of the six-factor model over the two-factor model. The one-factor model, the three-factor model, and bifactor model #3 showed poor fit. The other two bifactor models showed strong support for two factors: SCS positive and SCS negative. Conclusion The main results of this study are that, among the German general population, six SCS factors and two SCS factors fit the data reasonably well.
While six factors can be modelled, the three negative factors and the three positive factors, respectively, did not reflect reliable or meaningful variance beyond the two summative positive and negative item factors. As such, we recommend the use of two subscale scores to capture a positive factor and a negative factor when administering the German SCS to general population samples and we strongly advise against the use of a total score across all SCS items. PMID:29408888

  2. Comparative analysis of the effects of electron and hole capture on the power characteristics of a semiconductor quantum-well laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolova, Z. N., E-mail: Zina.Sokolova@mail.ioffe.ru; Pikhtin, N. A.; Tarasov, I. S.

    The operating characteristics of a semiconductor quantum-well laser calculated using three models are compared. These models are (i) a model that ignores differences between the electron and hole parameters and uses the electron parameters for both types of charge carriers; (ii) a model that ignores these differences and uses the hole parameters for both types of charge carriers; and (iii) a model taking into account the asymmetry between the electron and hole parameters. It is shown that, at the same velocity of electron and hole capture into an unoccupied quantum well, the laser characteristics obtained using the three models differ considerably. These differences are due to the difference in the filling of the electron and hole subbands in a quantum well. The electron subband is more occupied than the hole subband. As a result, at the same velocities of electron and hole capture into an empty quantum well, the effective electron-capture velocity is lower than the effective hole-capture velocity. Specifically, it is shown that for the laser structure studied, a hole-capture velocity of 5 × 10⁵ cm/s into an empty quantum well and the corresponding electron-capture velocity of 3 × 10⁶ cm/s describe rapid capture, at which the light–current characteristic of the laser remains virtually linear up to high pump-current densities. However, an electron-capture velocity of 5 × 10⁵ cm/s and a corresponding hole-capture velocity of 8.4 × 10⁴ cm/s describe slow capture, causing significant sublinearity in the light–current characteristic.
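The qualitative point, that equal empty-well capture velocities yield unequal effective velocities once subband filling differs, can be sketched as a one-line scaling (not the paper's full rate-equation model; the filling fractions below are made-up numbers):

```python
# Illustrative sketch: take the effective capture velocity as the
# empty-well capture velocity scaled by the fraction of unoccupied
# final states in the quantum-well subband. Filling fractions are
# invented for illustration only.

def effective_capture_velocity(v_empty_well, subband_filling):
    """v_empty_well in cm/s; subband_filling in [0, 1)."""
    return v_empty_well * (1.0 - subband_filling)

if __name__ == "__main__":
    # Electrons: faster empty-well capture but a more occupied subband.
    v_e = effective_capture_velocity(3e6, subband_filling=0.9)
    # Holes: slower empty-well capture but a nearly empty subband.
    v_h = effective_capture_velocity(5e5, subband_filling=0.1)
    print(v_e, v_h)  # electron effective velocity ends up below the hole one
```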

  3. Estimating juvenile Chinook salmon (Oncorhynchus tshawytscha) abundance from beach seine data collected in the Sacramento–San Joaquin Delta and San Francisco Bay, California

    USGS Publications Warehouse

    Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble

    2016-06-17

    Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. 
Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.
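The closed-population special case that the authors' open N-mixture approach relaxes can be sketched with the classical two-pass removal (depletion) estimator, which assumes closure and equal capture probability across passes:

```python
# Classical two-pass removal (depletion) estimator for a CLOSED
# population: this is the textbook baseline, not the authors' model,
# which explicitly relaxes the closure assumption.

def two_pass_removal(c1, c2):
    """c1, c2: counts removed on the first and second pass.
    Returns (N_hat, p_hat) where p_hat = (c1 - c2)/c1 and
    N_hat = c1**2 / (c1 - c2). Valid only when catch declines."""
    if c1 <= c2:
        raise ValueError("estimator undefined unless catch declines (c1 > c2)")
    p_hat = (c1 - c2) / c1
    n_hat = c1 ** 2 / (c1 - c2)
    return n_hat, p_hat

if __name__ == "__main__":
    n_hat, p_hat = two_pass_removal(60, 20)
    print(n_hat, p_hat)  # 90.0 fish, capture probability ~0.67
```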

  4. Comparative efficiency of two models of CO2 traps in the collection of free-living stages of ixodides.

    PubMed

    Guedes, Elizângela; de Azevedo Prata, Márcia Cristina; dos Reis, Eder Sebastião; Cançado, Paulo Henrique Duarte; Leite, Romário Cerqueira

    2012-12-01

    Traps using carbon dioxide (CO₂) as a chemical attractant are known to be effective for capturing nymphs and adults of some free-living tick species such as Amblyomma cajennense and Amblyomma parvum. Although the main source of CO₂ is dry ice, a chemical trap that uses 20% lactic acid (C₃H₆O₃) and calcium carbonate (CaCO₃) has been tested as an alternative source of CO₂ wherever dry ice is difficult to obtain. The objective of this paper was to test and compare the efficiency of these two models of traps during a study of the population dynamics of A. cajennense and Amblyomma dubitatum in Coronel Pacheco, Minas Gerais, Brazil. From May 2006 to April 2008, eight CO₂ traps, four with dry ice and four chemical, were placed in the pasture every 14 days at pre-established areas over a 1.0-m² white cotton flannel cloth with a capture device consisting of double-sided adhesive tapes fixed onto the four corners of the flannels. On every collection day, a cotton flannel without any chemical attractant was placed in the same area of the pasture as a control for the traps' capture efficiency. After 1 h, the white flannels were collected and placed in plastic bags for later identification and counting of the ticks. A total of 2,133 nymphs of Amblyomma sp., 328 adults of A. cajennense, and 292 adults of A. dubitatum were collected. Of this total, the dry ice traps captured 1,087 nymphs (51%), 188 A. cajennense (58.2%), and 151 A. dubitatum (53%), while the chemical traps captured 1,016 nymphs (47.6%), 133 A. cajennense (41%), and 133 A. dubitatum (46.5%); 30 nymphs (1.4%), 7 A. cajennense (0.8%), and 8 A. dubitatum (0.5%) were found on the control flannel. The capture potentials for ticks, nymphs, and adults of the two trap models tested were statistically similar (p > 0.05).
These results confirm the efficiency of the chemical trap enabling its use in areas of either difficult access or too distant from a dry ice supplier as is the case of forest areas where studies about ixodological fauna are generally carried out.
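The reported trap comparison is the kind of contingency-table test that can be reproduced from the counts in the abstract. The sketch below computes a Pearson chi-square statistic (the abstract does not state which test was used, so this is an assumed, illustrative analysis); 5.991 is the standard chi-square critical value for df = 2, alpha = 0.05.

```python
# Pearson chi-square statistic for a table of observed counts,
# stdlib only. Applied to the dry-ice vs. chemical trap counts
# from the abstract as an illustration.

def chi_square(table):
    """table: list of rows of observed counts. Returns the Pearson
    chi-square statistic sum((O - E)**2 / E)."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

if __name__ == "__main__":
    # Rows: dry-ice trap, chemical trap.
    # Columns: nymphs, A. cajennense adults, A. dubitatum adults.
    observed = [[1087, 188, 151], [1016, 133, 133]]
    stat = chi_square(observed)
    print(stat < 5.991)  # True: no significant difference at alpha = 0.05
```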

  5. Approximation methods of European option pricing in multiscale stochastic volatility model

    NASA Astrophysics Data System (ADS)

    Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei

    2017-01-01

    In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as the volatility smile/skew and fat-tailed asset return distributions suggest that the constant-volatility assumption may not be realistic. General stochastic volatility models, e.g. the Heston model, GARCH models and the SABR volatility model, in which the variance/volatility itself typically follows a mean-reverting stochastic process, have been shown to be superior at capturing these empirical facts. However, in order to capture more features of the volatility smile, a two-factor stochastic volatility model of double Heston type is more useful, as shown in Christoffersen, Heston and Jacobs [12]. We consider a modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate, respectively. We consider the European option pricing problem under one type of multiscale stochastic volatility model in which the two volatility processes act as independent factors in the asset price process. The novelty of this paper is an approximate analytical solution using an asymptotic expansion method, which extends the authors' earlier research in Canhanga et al. [5, 6]. In addition, we propose a numerical approximate solution using Monte-Carlo simulation. For completeness and for comparison, we also implement the semi-analytical solution of Chiarella and Ziveyi [11] using the method of characteristics, Fourier and bivariate Laplace transforms.
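The Monte-Carlo route mentioned above can be sketched with a plain Euler scheme in which the variance is the sum of two independent mean-reverting (CIR-type) factors, one fast- and one slow-reverting. All parameter values below are toy numbers, not the paper's calibration, and the reflection (`abs`) step is one simple way to keep the variance factors non-negative:

```python
import math
import random

# Toy Monte-Carlo pricer for a European call when total variance is the
# sum of a fast- and a slow-reverting CIR-type factor. Illustrative
# parameters only; not the paper's model calibration.

def mc_call_two_factor(s0, strike, r, t, n_paths=2000, n_steps=50, seed=1):
    rng = random.Random(seed)
    dt = t / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        s = s0
        v_fast, v_slow = 0.04, 0.02  # initial variances of the two factors
        for _ in range(n_steps):
            z1, z2, z3 = (rng.gauss(0, 1) for _ in range(3))
            # fast factor: strong mean reversion; slow factor: weak
            v_fast = abs(v_fast + 5.0 * (0.04 - v_fast) * dt
                         + 0.3 * math.sqrt(v_fast * dt) * z1)
            v_slow = abs(v_slow + 0.5 * (0.02 - v_slow) * dt
                         + 0.1 * math.sqrt(v_slow * dt) * z2)
            v = v_fast + v_slow
            s *= math.exp((r - 0.5 * v) * dt + math.sqrt(v * dt) * z3)
        payoff_sum += max(s - strike, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

if __name__ == "__main__":
    price = mc_call_two_factor(s0=100.0, strike=100.0, r=0.02, t=1.0)
    print(round(price, 2))  # rough at-the-money call price estimate
```

In practice one would replace the crude reflection step with a full-truncation or Milstein scheme and add variance-reduction (antithetic variates, control variates) before trusting the estimates.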

  6. Combining band recovery data and Pollock's robust design to model temporary and permanent emigration

    USGS Publications Warehouse

    Lindberg, M.S.; Kendall, W.L.; Hines, J.E.; Anderson, M.G.

    2001-01-01

    Capture-recapture models are widely used to estimate demographic parameters of marked populations. Recently, this statistical theory has been extended to modeling dispersal of open populations. Multistate models can be used to estimate movement probabilities among subdivided populations if multiple sites are sampled. Frequently, however, sampling is limited to a single site. Models described by Burnham (1993, in Marked Individuals in the Study of Bird Populations, 199-213), which combined open population capture-recapture and band-recovery models, can be used to estimate permanent emigration when sampling is limited to a single population. Similarly, Kendall, Nichols, and Hines (1997, Ecology 51, 563-578) developed models to estimate temporary emigration under Pollock's (1982, Journal of Wildlife Management 46, 757-760) robust design. We describe a likelihood-based approach to simultaneously estimate temporary and permanent emigration when sampling is limited to a single population. We use a sampling design that combines the robust design and recoveries of individuals obtained immediately following each sampling period. We present a general form for our model where temporary emigration is a first-order Markov process, and we discuss more restrictive models. We illustrate these models with analysis of data on marked Canvasback ducks. Our analysis indicates that probability of permanent emigration for adult female Canvasbacks was 0.193 (SE = 0.082) and that birds that were present at the study area in year i - 1 had a higher probability of presence in year i than birds that were not present in year i - 1.
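The likelihood building block behind such models is the probability of an individual capture history summed over unobservable state sequences. The forward-recursion sketch below is a generic illustration of a first-order Markov observable/unobservable formulation, not the authors' full model (which also uses band recoveries and the robust design); the parameter names `g_oo` and `g_uu` are hypothetical notation.

```python
# Probability of a capture history under a CJS-style model with a
# first-order Markov unobservable ("temporary emigrant") state.
# Generic sketch only; parameter names are illustrative notation.

def history_probability(history, phi, p, g_oo, g_uu):
    """history: tuple of 0/1 with a leading 1 (initial capture).
    phi: survival; p: capture probability in the observable state;
    g_oo: P(stay observable); g_uu: P(stay unobservable).
    Conditions on first capture; capture is impossible while
    unobservable or dead."""
    # forward probabilities over states: observable, unobservable, dead
    a_o, a_u, a_d = 1.0, 0.0, 0.0
    for seen in history[1:]:
        n_d = a_d + (1.0 - phi) * (a_o + a_u)      # death is absorbing
        n_o = phi * (a_o * g_oo + a_u * (1.0 - g_uu))
        n_u = phi * (a_o * (1.0 - g_oo) + a_u * g_uu)
        if seen:
            a_o, a_u, a_d = n_o * p, 0.0, 0.0
        else:
            a_o, a_u, a_d = n_o * (1.0 - p), n_u, n_d
    return a_o + a_u + a_d

if __name__ == "__main__":
    # With no emigration (g_oo = g_uu = 1) this collapses to plain CJS.
    print(history_probability((1, 1), 0.8, 0.6, 1.0, 1.0))  # phi*p = 0.48
```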

  7. Strong potential wave functions with elastic channel distortion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macek, J.; Taulbjerg, K.

    1989-06-01

    The strong-potential Born (SPB) approximation is analyzed in a channel-distorted-wave approach. Channel-distorted SPB wave functions are reduced to a conventional form in which the standard off-energy-shell factor g has been replaced by a modified factor γ, which represents a suitable average of g over the momentum distribution of the distorted-channel function. The modified factor is evaluated in a physically realistic model for the distortion potential, and it is found that γ is well represented by a slowly varying phase factor. The channel-distorted SPB approximation is accordingly identical to the impulse approximation if the phase variation of γ can be ignored. This is generally the case in applications to radiative electron capture and, to a good approximation, for ordinary capture at not too small velocities.

  8. Minimum required capture radius in a coplanar model of the aerial combat problem

    NASA Technical Reports Server (NTRS)

    Breakwell, J. V.; Merz, A. W.

    1977-01-01

    Coplanar aerial combat is modeled with constant speeds and specified turn rates. The minimum capture radius which will always permit capture, regardless of the initial conditions, is calculated. This 'critical' capture radius is also the maximum range which the evader can guarantee indefinitely if the initial range, for example, is large. A composite barrier is constructed which gives the boundary, at any heading, of relative positions for which the capture radius is less than critical.

  9. 48 CFR 304.602 - General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false General. 304.602 Section 304.602 Federal Acquisition Regulations System HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATIVE MATTERS Contract Reporting 304.602 General. HHS' Departmental Contracts Information System (DCIS) captures...

  10. Revisiting the Zingiberales: using multiplexed exon capture to resolve ancient and recent phylogenetic splits in a charismatic plant lineage

    PubMed Central

    Iles, William J.D.; Barrett, Craig F.; Smith, Selena Y.; Specht, Chelsea D.

    2016-01-01

    The Zingiberales are an iconic order of monocotyledonous plants comprising eight families with distinctive and diverse floral morphologies and representing an important ecological element of tropical and subtropical forests. While the eight families are demonstrated to be monophyletic, phylogenetic relationships among these families remain unresolved. Neither combined morphological and molecular studies nor recent attempts to resolve family relationships using sequence data from whole plastomes has resulted in a well-supported, family-level phylogenetic hypothesis of relationships. Here we approach this challenge by leveraging the complete genome of one member of the order, Musa acuminata, together with transcriptome information from each of the other seven families to design a set of nuclear loci that can be enriched from highly divergent taxa with a single array-based capture of indexed genomic DNA. A total of 494 exons from 418 nuclear genes were captured for 53 ingroup taxa. The entire plastid genome was also captured for the same 53 taxa. Of the total genes captured, 308 nuclear and 68 plastid genes were used for phylogenetic estimation. The concatenated plastid and nuclear dataset supports the position of Musaceae as sister to the remaining seven families. Moreover, the combined dataset recovers known intra- and inter-family phylogenetic relationships with generally high bootstrap support. This is a flexible and cost effective method that gives the broader plant biology community a tool for generating phylogenomic scale sequence data in non-model systems at varying evolutionary depths. PMID:26819846

  11. Revisiting the Zingiberales: using multiplexed exon capture to resolve ancient and recent phylogenetic splits in a charismatic plant lineage.

    PubMed

    Sass, Chodon; Iles, William J D; Barrett, Craig F; Smith, Selena Y; Specht, Chelsea D

    2016-01-01

    The Zingiberales are an iconic order of monocotyledonous plants comprising eight families with distinctive and diverse floral morphologies and representing an important ecological element of tropical and subtropical forests. While the eight families are demonstrated to be monophyletic, phylogenetic relationships among these families remain unresolved. Neither combined morphological and molecular studies nor recent attempts to resolve family relationships using sequence data from whole plastomes has resulted in a well-supported, family-level phylogenetic hypothesis of relationships. Here we approach this challenge by leveraging the complete genome of one member of the order, Musa acuminata, together with transcriptome information from each of the other seven families to design a set of nuclear loci that can be enriched from highly divergent taxa with a single array-based capture of indexed genomic DNA. A total of 494 exons from 418 nuclear genes were captured for 53 ingroup taxa. The entire plastid genome was also captured for the same 53 taxa. Of the total genes captured, 308 nuclear and 68 plastid genes were used for phylogenetic estimation. The concatenated plastid and nuclear dataset supports the position of Musaceae as sister to the remaining seven families. Moreover, the combined dataset recovers known intra- and inter-family phylogenetic relationships with generally high bootstrap support. This is a flexible and cost effective method that gives the broader plant biology community a tool for generating phylogenomic scale sequence data in non-model systems at varying evolutionary depths.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the "generalist" (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth-rate maximization and "hedging" against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.

  13. Estimating taxonomic diversity, extinction rates, and speciation rates from fossil data using capture-recapture models

    USGS Publications Warehouse

    Nichols, J.D.; Pollock, K.H.

    1983-01-01

    Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models that should be useful in paleobiological analyses and discuss the assumptions that underlie them. They illustrate these models with examples and discuss aspects of study design.
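One of the simplest members of this model family is the two-sample Lincoln-Petersen estimator in Chapman's bias-corrected form, shown below as a generic illustration (the counts are invented, and the paper itself uses richer multi-sample models):

```python
# Chapman's bias-corrected Lincoln-Petersen estimator of the size of a
# closed pool (animals, or taxa in a fossil-sampling analogy) from two
# sampling occasions. Example counts are invented for illustration.

def chapman_estimate(n1, n2, m2):
    """n1: items found in sample 1; n2: items found in sample 2;
    m2: items found in both. Returns the estimated pool size."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

if __name__ == "__main__":
    # e.g. 50 taxa recovered in one sample, 41 in another, 20 shared
    print(chapman_estimate(50, 41, 20))  # 101.0
```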

  14. The Husting dilemma: A methodological note

    USGS Publications Warehouse

    Nichols, J.D.; Hepp, G.R.; Pollock, K.H.; Hines, J.E.

    1987-01-01

    Recently, Gill (1985) discussed the interpretation of capture history data resulting from his own studies on the red-spotted newt, Notophthalmus viridescens , and work by Husting (1965) on spotted salamanders, Ambystoma maculatum. Gill (1985) noted that gaps in capture histories (years in which individuals were not captured, preceded and followed by years in which they were) could result from either of two very different possibilities: (1) failure of the animal to return to the fenced pond to breed (the alternative Husting (1965) favored), or (2) return of the animal to the breeding pond, but failure of the investigator to capture it and detect its presence. The authors agree entirely with Gill (1985) that capture history data such as his or those of Husting (1965) should be analyzed using models that recognize the possibility of 'census error,' and that it is important to try to distinguish between such 'error' and skipped breeding efforts. The purpose of this note is to point out the relationship between Gill's (1985:347) null model and certain capture-recapture models, and to use capture-recapture models and tests to analyze the original data of Husting (1965).

  15. IOOS modeling subsystem: vision and implementation strategy

    USGS Publications Warehouse

    Rosenfeld, Leslie; Chao, Yi; Signell, Richard P.

    2012-01-01

    Numerical modeling is vital to achieving the U.S. IOOS® goals of predicting, understanding and adapting to change in the ocean and Great Lakes. In the next decade IOOS should cultivate a holistic approach to coastal ocean prediction, and encourage more balanced investment among the observing, modeling and information management subsystems. We believe the vision of a prediction framework driven by observations, and leveraging advanced technology and understanding of the ocean and Great Lakes, would lead to a new era for IOOS that would not only produce more powerful information, but would also capture broad community support, particularly from the general public, thus allowing IOOS to develop into the comprehensive information system that was envisioned at the outset.

  16. Concept Model on Topological Learning

    NASA Astrophysics Data System (ADS)

    Ae, Tadashi; Kioi, Kazumasa

    2010-11-01

    We discuss a new model for concepts based on topological learning, where the learning process on the neural network is represented by mathematical topology. The topological learning of neural networks is summarized by a quotient of the input space, and the hierarchical step induces a tree in which each node corresponds to a quotient. In general, concept acquisition is a difficult problem, but the emotion associated with a subject can be captured by posing questions to a person. A kind of concept is therefore captured by such data, and the answer sheet can be mapped into a topology consisting of trees. In this paper, we discuss a way of mapping emotional concepts to a topological learning model.

  17. Defect interactions in anisotropic two-dimensional fluids

    NASA Astrophysics Data System (ADS)

    Stannarius, Ralf; Harth, Kirsten

    Disclinations in liquid crystals bear striking analogies to defect structures in a wide variety of physical systems, making them excellent models for studying fundamental properties of defect interactions. Freely suspended smectic-C films behave like quasi-2D polar nematics. An experimental procedure is introduced to capture high-strength disclinations in localized spots. After they are released in a controlled way, the motion of the mutually repelling topological charges is studied. We demonstrate that the classical models, based on the elastic one-constant approximation, fail to describe their dynamics correctly. In realistic liquid crystals, the models work only in ideal configurations; in general, additional director walls modify the interactions substantially. Funded by DFG within project STA 425/28-1.
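The classical one-constant prediction that the abstract says breaks down can be sketched explicitly: like-signed defects of strengths s1, s2 repel with a Coulomb-like force ~ 2πK s1 s2 / r, so under overdamped motion with drag ξ the separation grows as r(t) = sqrt(r0² + 4πK s1 s2 t / ξ). All parameter values below are arbitrary illustrative numbers.

```python
import math

# One-constant-approximation sketch: overdamped separation of two
# mutually repelling topological charges, closed form vs. Euler
# integration of dr/dt = (2*pi*K*s1*s2 / r) / xi.

def separation(t, r0, k=1.0, s1=1.0, s2=1.0, xi=1.0):
    """Closed-form separation at time t, starting from r0."""
    return math.sqrt(r0 ** 2 + 4.0 * math.pi * k * s1 * s2 * t / xi)

def separation_euler(t, r0, dt=1e-4, k=1.0, s1=1.0, s2=1.0, xi=1.0):
    """Explicit Euler integration of the same overdamped dynamics."""
    r = r0
    for _ in range(int(t / dt)):
        r += dt * (2.0 * math.pi * k * s1 * s2 / (xi * r))  # velocity = force/drag
    return r

if __name__ == "__main__":
    print(separation(1.0, 1.0), separation_euler(1.0, 1.0))  # close agreement
```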

  18. A simple nonlinear model for the return to isotropy in turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, Sutanu; Speziale, Charles G.

    1989-01-01

    A quadratic nonlinear generalization of the linear Rotta model for the slow pressure-strain correlation of turbulence is developed. The model is shown to satisfy realizability and to give rise to no stable non-trivial equilibrium solutions for the anisotropy tensor in the case of vanishing mean velocity gradients. The absence of stable non-trivial equilibrium solutions is a necessary condition to ensure that the model predicts a return to isotropy for all relaxational turbulent flows. Both the phase space dynamics and the temporal behavior of the model are examined and compared against experimental data for the return to isotropy problem. It is demonstrated that the quadratic model successfully captures the experimental trends which clearly exhibit nonlinear behavior. Direct comparisons are also made with the predictions of the Rotta model and the Lumley model.
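The structure of such a quadratic relaxation can be sketched for the anisotropy tensor restricted to its diagonal (principal-axis) form: a linear Rotta-like term plus a trace-free quadratic term. This is a schematic illustration of the model class; the coefficient values below are illustrative, not the calibrated ones from the paper.

```python
# Schematic quadratic (Rotta-plus-nonlinear) return-to-isotropy
# relaxation: db_i/dt* = -C1*b_i + C2*(b_i**2 - II/3), where II is the
# second invariant sum(b_i**2). The quadratic term is trace-free by
# construction, so sum(b_i) = 0 is preserved. Coefficients illustrative.

def relax(b, c1=3.4, c2=1.8, dt=1e-3, steps=2000):
    """b: list of 3 diagonal anisotropy components summing to zero."""
    b = list(b)
    for _ in range(steps):
        ii = sum(x * x for x in b)                 # second invariant
        b = [x + dt * (-c1 * x + c2 * (x * x - ii / 3.0)) for x in b]
    return b

if __name__ == "__main__":
    b0 = [0.2, -0.05, -0.15]
    bt = relax(b0)
    print(bt)  # components relax toward zero; the trace stays zero
```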

  19. Evaluation of XHVRB for Capturing Explosive Shock Desensitization

    NASA Astrophysics Data System (ADS)

    Tuttle, Leah; Schmitt, Robert; Kittell, Dave; Harstad, Eric

    2017-06-01

    Explosive shock desensitization phenomena have been recognized for some time. It has been demonstrated that pressure-based reactive flow models do not adequately capture the basic nature of this explosive behavior. Historically, replacing the local pressure with a shock-captured pressure has dramatically improved numerical modeling approaches. Models based upon shock pressure or functions of entropy have recently been developed. A pseudo-entropy-based formulation using the History Variable Reactive Burn model, as proposed by Starkenberg, was implemented in the Eulerian shock physics code CTH. Improvements to the shock-capturing algorithm were made. The model is demonstrated to reproduce single-shock behavior consistent with published pop-plot data. It is also demonstrated to capture a desensitization effect based on available literature data, and to qualitatively capture dead zones from desensitization in 2D corner-turning experiments. This model shows promise for use in modeling and simulation problems relevant to desensitization phenomena. Issues are identified with the current implementation, and future work is proposed for improving and expanding model capabilities. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  20. A global scale mechanistic model of photosynthetic capacity (LUNA V1.0)

    DOE PAGES

    Ali, Ashehad A.; Xu, Chonggang; Rogers, Alistair; ...

    2016-02-12

    Although plant photosynthetic capacity as determined by the maximum carboxylation rate (i.e., Vc,max25) and the maximum electron transport rate (i.e., Jmax25) at a reference temperature (generally 25 °C) is known to vary considerably in space and time in response to environmental conditions, it is typically parameterized in Earth system models (ESMs) with tabulated values associated with plant functional types. In this study, we have developed a mechanistic model of leaf utilization of nitrogen for assimilation (LUNA) to predict photosynthetic capacity at the global scale under different environmental conditions. We adopt an optimality hypothesis for nitrogen allocation among light capture, electron transport, carboxylation and respiration. The LUNA model is able to reasonably capture the measured spatial and temporal patterns of photosynthetic capacity, as it explains ~55% of the global variation in observed values of Vc,max25 and ~65% of the variation in the observed values of Jmax25. Model simulations with LUNA under current and future climate conditions demonstrate that modeled values of Vc,max25 are most affected in high-latitude regions under future climates. In conclusion, ESMs that relate the values of Vc,max25 or Jmax25 to plant functional types only are likely to substantially overestimate future global photosynthesis.
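The optimality idea can be illustrated with a deliberately reduced toy: split a fixed nitrogen budget between two competing functions so that the co-limited rate is maximal. The efficiency constants `kc` and `kj` below are made-up numbers, and the two-way split is a simplification of LUNA's four-way allocation.

```python
# Toy nitrogen-allocation optimality sketch: maximize the co-limited
# assimilation rate min(kc*Nc, kj*Nj) subject to Nc + Nj = N, by grid
# search. kc, kj are invented efficiencies, not LUNA parameters.

def best_allocation(n_total, kc, kj, n_grid=10000):
    best = (-1.0, 0.0)
    for i in range(n_grid + 1):
        nc = n_total * i / n_grid
        a = min(kc * nc, kj * (n_total - nc))
        if a > best[0]:
            best = (a, nc)
    return best  # (assimilation, nitrogen allocated to carboxylation)

if __name__ == "__main__":
    a, nc = best_allocation(1.0, kc=2.0, kj=3.0)
    # the optimum equalizes the two limitations: 2*nc = 3*(1 - nc) -> nc = 0.6
    print(round(nc, 3), round(a, 3))
```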

  1. Comparison and Assessment of Three Advanced Land Surface Models in Simulating Terrestrial Water Storage Components over the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Youlong; Mocko, David; Huang, Maoyi

    2017-03-01

    In preparation for the next-generation North American Land Data Assimilation System (NLDAS), three advanced land surface models (CLM4.0, Noah-MP, and CLSM-F2.5) were run from 1979 to 2014 within the NLDAS-based framework. Monthly total water storage anomaly (TWSA) and its individual water storage components were evaluated against satellite-based and in situ observations, and reference reanalysis products at basin-wide and statewide scales. In general, all three models are able to reasonably capture the monthly and interannual variability and magnitudes of TWSA. However, the contributions of the anomalies of individual water components to TWSA are very dependent on the model and basin. A major contributor to the TWSA is the anomaly of total column soil moisture content (SMCA) for CLM4.0 and Noah-MP, or the groundwater storage anomaly (GWSA) for CLSM-F2.5, although other components such as the anomaly of snow water equivalent (SWEA) also play some role. For each individual water storage component, the models are able to capture broad features such as monthly and interannual variability. However, there are large inter-model differences and quantitative uncertainties in this study. Therefore, it should be thought of as a preliminary synthesis and analysis.
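The anomaly series compared above are formed in the standard way: subtract each calendar month's climatological mean from the monthly values. A minimal sketch, with made-up data values:

```python
# Minimal monthly-anomaly computation of the kind used for TWSA:
# anomaly = monthly value minus that calendar month's mean over all
# years in the record. The data below are invented.

def monthly_anomaly(series):
    """series: list of (year, month, value) tuples. Returns the same
    tuples with value replaced by its monthly anomaly."""
    by_month = {}
    for _, month, value in series:
        by_month.setdefault(month, []).append(value)
    clim = {m: sum(v) / len(v) for m, v in by_month.items()}
    return [(y, m, v - clim[m]) for y, m, v in series]

if __name__ == "__main__":
    data = [(2000, 1, 10.0), (2001, 1, 14.0), (2000, 7, 3.0), (2001, 7, 5.0)]
    print(monthly_anomaly(data))
    # January mean is 12.0, July mean is 4.0: anomalies -2, 2, -1, 1
```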

  2. Extending SME to Handle Large-Scale Cognitive Modeling.

    PubMed

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
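The flavor of greedy merging, sort candidate correspondences by score, then accept each one that stays consistent with those already chosen, can be shown in a toy form. This is an illustrative sketch only: SME merges structured kernel mappings, not flat pairs, and the one-to-one check below stands in for its full structural-consistency constraints.

```python
# Toy greedy merge: sort candidate correspondences by descending score
# (the O(n log n) step), then a linear pass accepting each candidate
# that keeps the mapping one-to-one. Not SME's actual data structures.

def greedy_merge(candidates):
    """candidates: list of (score, base_item, target_item) tuples.
    Returns a one-to-one mapping as a list of (base, target) pairs."""
    base_used, target_used, mapping = set(), set(), []
    for score, b, t in sorted(candidates, reverse=True):
        if b not in base_used and t not in target_used:
            mapping.append((b, t))
            base_used.add(b)
            target_used.add(t)
    return mapping

if __name__ == "__main__":
    cands = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"),
             (0.4, "sun", "electron"), (0.3, "mass", "charge")]
    print(greedy_merge(cands))
    # [('sun', 'nucleus'), ('planet', 'electron'), ('mass', 'charge')]
```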

  3. Courses of action for effects based operations using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Haider, Sajjad; Levis, Alexander H.

    2006-05-01

    This paper presents an Evolutionary Algorithms (EAs) based approach to identifying effective courses of action (COAs) in Effects Based Operations. The approach uses Timed Influence Nets (TINs) as the underlying mathematical model to capture a dynamic uncertain situation. TINs provide a concise graph-theoretic probabilistic approach to specifying the cause-and-effect relationships that exist among the variables of interest (actions, desired effects, and other uncertain events) in a problem domain. The purpose of building these TIN models is to identify and analyze several alternative courses of action. The current practice is to use trial-and-error techniques, which are not only labor intensive but also produce sub-optimal results and cannot model constraints among actionable events. The EA-based approach presented in this paper aims to overcome these limitations. The approach generates multiple COAs that are close to one another in terms of achieving the desired effect. The purpose of generating multiple COAs is to give several alternatives to a decision maker. Moreover, the alternative COAs can be generalized based on the relationships that exist among the actions and their execution timings. The approach also allows a system analyst to capture certain types of constraints among actionable events.
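The evolutionary search itself can be sketched with a tiny genetic algorithm: each COA is a bit vector of actions, fitness is a scored effect with a penalty for violating a constraint between actions. Everything below (weights, the exclusivity constraint, GA settings) is an invented toy; the paper's fitness comes from Timed Influence Net propagation, which is not reproduced here.

```python
import random

# Toy GA over courses of action: bit i says whether action i is executed.
# Fitness is a made-up effect score with a penalty when two mutually
# exclusive actions are both selected. Illustrative sketch only.

WEIGHTS = [0.4, 0.5, 0.2, 0.3]

def fitness(coa):
    score = sum(w for w, bit in zip(WEIGHTS, coa) if bit)
    if coa[0] and coa[1]:          # constraint: actions 0 and 1 exclusive
        score -= 1.0
    return score

def evolve(pop_size=20, generations=40, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # elitist truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:             # single-bit mutation
                child[rng.randrange(4)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, round(fitness(best), 3))  # an evolved COA and its score
```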

  4. Topic segmentation via community detection in complex networks

    NASA Astrophysics Data System (ADS)

    de Arruda, Henrique F.; Costa, Luciano da F.; Amancio, Diego R.

    2016-06-01

    Many real systems have been modeled in terms of network concepts, and written texts are a particular example of information networks. In recent years, the use of network methods to analyze language has allowed the discovery of several interesting effects, including the proposition of novel models to explain the emergence of fundamental universal patterns. While syntactical networks, one of the most prevalent networked models of written texts, display both scale-free and small-world properties, such a representation fails in capturing other textual features, such as the organization in topics or subjects. We propose a novel network representation whose main purpose is to capture the semantical relationships of words in a simple way. To do so, we link all words co-occurring in the same semantic context, which is defined in a threefold way. We show that the proposed representations favor the emergence of communities of semantically related words, and this feature may be used to identify relevant topics. The proposed methodology to detect topics was applied to segment selected Wikipedia articles. We found that, in general, our methods outperform traditional bag-of-words representations, which suggests that a high-level textual representation may be useful to study the semantical features of texts.
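
    The representation can be illustrated with a small sketch: link all words that co-occur in the same sentence (one simplified stand-in for the paper's threefold definition of semantic context), then read communities off the graph. Connected components serve here as a toy substitute for a real community detection algorithm, which suffices when the example topics share no vocabulary.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(sentences):
    """Link every pair of words appearing in the same sentence."""
    graph = defaultdict(set)
    for sent in sentences:
        for u, v in combinations(set(sent.lower().split()), 2):
            graph[u].add(v)
            graph[v].add(u)
    return graph

def communities(graph):
    """Toy community detection: connected components via depth-first search."""
    seen, comps = set(), []
    for node in list(graph):
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

comps = communities(cooccurrence_graph([
    "stars orbit galaxies", "galaxies contain stars",
    "enzymes catalyze reactions", "reactions need enzymes"]))
```

    On a connected graph built from a real article, a proper community detection method would replace the component step.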

  5. A holographic model for the fractional quantum Hall effect

    NASA Astrophysics Data System (ADS)

    Lippert, Matthew; Meyer, René; Taliotis, Anastasios

    2015-01-01

    Experimental data for fractional quantum Hall systems can to a large extent be explained by assuming the existence of a Γ0(2) modular symmetry group commuting with the renormalization group flow and hence mapping different phases of two-dimensional electron gases into each other. Based on this insight, we construct a phenomenological holographic model which captures many features of the fractional quantum Hall effect. Using an -invariant Einstein-Maxwell-axio-dilaton theory capturing the important modular transformation properties of quantum Hall physics, we find dyonic dilatonic black hole solutions which are gapped and have a Hall conductivity equal to the filling fraction, as expected for quantum Hall states. We also provide several technical results on the general behavior of the gauge field fluctuations around these dyonic dilatonic black hole solutions: we specify a sufficient criterion for IR normalizability of the fluctuations, demonstrate the preservation of the gap under the action, and prove that the singularity of the fluctuation problem in the presence of a magnetic field is an accessory singularity. We finish with a preliminary investigation of the possible IR scaling solutions of our model and some speculations on how they could be important for the observed universality of quantum Hall transitions.

  6. Analytic Couple Modeling Introducing Device Design Factor, Fin Factor, Thermal Diffusivity Factor, and Inductance Factor

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    A set of convenient thermoelectric device solutions have been derived in order to capture a number of factors which are previously only resolved with numerical techniques. The concise conversion efficiency equations derived from governing equations provide intuitive and straightforward design guidelines. These guidelines allow for better device design without requiring detailed numerical modeling. The analytical modeling accounts for factors such as i) variable temperature boundary conditions, ii) lateral heat transfer, iii) temperature variable material properties, and iv) transient operation. New dimensionless parameters, similar to the figure of merit, are introduced including the device design factor, fin factor, thermal diffusivity factor, and inductance factor. These new device factors allow for the straightforward description of phenomena generally captured only with numerical work. As an example, a device design factor of 0.38, which accounts for thermal resistance of the hot and cold shoes, can be used to calculate a conversion efficiency of 2.28 while the ideal conversion efficiency based on figure of merit alone would be 6.15. Likewise, an ideal couple with efficiency of 6.15 will be reduced to 5.33 when lateral heat is accounted for with a fin factor of 1.0.

  7. Topic segmentation via community detection in complex networks.

    PubMed

    de Arruda, Henrique F; Costa, Luciano da F; Amancio, Diego R

    2016-06-01

    Many real systems have been modeled in terms of network concepts, and written texts are a particular example of information networks. In recent years, the use of network methods to analyze language has allowed the discovery of several interesting effects, including the proposition of novel models to explain the emergence of fundamental universal patterns. While syntactical networks, one of the most prevalent networked models of written texts, display both scale-free and small-world properties, such a representation fails in capturing other textual features, such as the organization in topics or subjects. We propose a novel network representation whose main purpose is to capture the semantical relationships of words in a simple way. To do so, we link all words co-occurring in the same semantic context, which is defined in a threefold way. We show that the proposed representations favor the emergence of communities of semantically related words, and this feature may be used to identify relevant topics. The proposed methodology to detect topics was applied to segment selected Wikipedia articles. We found that, in general, our methods outperform traditional bag-of-words representations, which suggests that a high-level textual representation may be useful to study the semantical features of texts.

  8. Combined sphere-spheroid particle model for the retrieval of the microphysical aerosol parameters via regularized inversion of lidar data

    NASA Astrophysics Data System (ADS)

    Samaras, Stefanos; Böckmann, Christine; Nicolae, Doina

    2016-06-01

    In this work we propose a two-step advancement of the Mie spherical-particle model accounting for particle non-sphericity. First, a generalized model (GM), naturally two-dimensional (2D), is introduced, which in turn triggers analogous 2D re-definitions of microphysical parameters. We consider a spheroidal-particle approach where the size distribution is additionally dependent on aspect ratio. Second, we incorporate the notion of a sphere-spheroid particle mixture (PM) weighted by a non-sphericity percentage. The efficiency of these two models is investigated running synthetic data retrievals with two different regularization methods to account for the inherent instability of the inversion procedure. Our preliminary studies show that a retrieval with the PM model reduces the fitting errors, improves the microphysical parameter retrieval, and is at least as efficient as the GM. While the general trend of the initial size distributions is captured in our numerical experiments, the reconstructions are subject to artifacts. Finally, our approach is applied to a measurement case yielding acceptable results.

  9. On the Helix Propensity in Generalized Born Solvent Descriptions of Modeling the Dark Proteome

    PubMed Central

    Olson, Mark A.

    2017-01-01

    Intrinsically disordered proteins that populate the so-called “Dark Proteome” offer challenging benchmarks of atomistic simulation methods to accurately model conformational transitions on a multidimensional energy landscape. This work explores the application of parallel tempering with implicit solvent models as a computational framework to capture the conformational ensemble of an intrinsically disordered peptide derived from the Ebola virus protein VP35. A recent X-ray crystallographic study reported a protein-peptide interface where the VP35 peptide underwent a folding transition from a disordered form to a helix-β-turn-helix topological fold upon molecular association with the Ebola protein NP. An assessment is provided of the accuracy of two generalized Born solvent models (GBMV2 and GBSW2) using the CHARMM force field and applied with temperature-based replica exchange dynamics to calculate the disorder propensity of the peptide and its probability density of states in a continuum solvent. A further comparison is presented of applying an explicit/implicit solvent hybrid replica exchange simulation of the peptide to determine the effect of modeling water interactions at the all-atom resolution. PMID:28197405

  10. On the Helix Propensity in Generalized Born Solvent Descriptions of Modeling the Dark Proteome.

    PubMed

    Olson, Mark A

    2017-01-01

    Intrinsically disordered proteins that populate the so-called "Dark Proteome" offer challenging benchmarks of atomistic simulation methods to accurately model conformational transitions on a multidimensional energy landscape. This work explores the application of parallel tempering with implicit solvent models as a computational framework to capture the conformational ensemble of an intrinsically disordered peptide derived from the Ebola virus protein VP35. A recent X-ray crystallographic study reported a protein-peptide interface where the VP35 peptide underwent a folding transition from a disordered form to a helix-β-turn-helix topological fold upon molecular association with the Ebola protein NP. An assessment is provided of the accuracy of two generalized Born solvent models (GBMV2 and GBSW2) using the CHARMM force field and applied with temperature-based replica exchange dynamics to calculate the disorder propensity of the peptide and its probability density of states in a continuum solvent. A further comparison is presented of applying an explicit/implicit solvent hybrid replica exchange simulation of the peptide to determine the effect of modeling water interactions at the all-atom resolution.

  11. Nonequilibrium Interfacial Tension in Simple and Complex Fluids

    NASA Astrophysics Data System (ADS)

    Truzzolillo, Domenico; Mora, Serge; Dupas, Christelle; Cipelletti, Luca

    2016-10-01

    Interfacial tension between immiscible phases is a well-known phenomenon, which manifests itself in everyday life, from the shape of droplets and foam bubbles to the capillary rise of sap in plants or the locomotion of insects on a water surface. More than a century ago, Korteweg generalized this notion by arguing that stresses at the interface between two miscible fluids act transiently as an effective, nonequilibrium interfacial tension, before homogenization is eventually reached. In spite of its relevance in fields as diverse as geosciences, polymer physics, multiphase flows, and fluid removal, experiments and theoretical works on the interfacial tension of miscible systems are still scarce, and mostly restricted to molecular fluids. This leaves crucial questions unanswered, concerning the very existence of the effective interfacial tension, its stabilizing or destabilizing character, and its dependence on the fluids' composition and concentration gradients. We present an extensive set of measurements on miscible complex fluids that demonstrate the existence and the stabilizing character of the effective interfacial tension, unveil new regimes beyond Korteweg's predictions, and quantify its dependence on the nature of the fluids and the composition gradient at the interface. We introduce a simple yet general model that generalizes nonequilibrium interfacial stresses to arbitrary mixtures, beyond Korteweg's small gradient regime, and show that the model captures remarkably well both our new measurements and literature data on molecular and polymer fluids. Finally, we briefly discuss the relevance of our model to a variety of interface-driven problems, from phase separation to fracture, which are not adequately captured by current approaches based on the assumption of small gradients.

  12. Modeling and Measurements of Multiphase Flow and Bubble Entrapment in Steel Continuous Casting

    NASA Astrophysics Data System (ADS)

    Jin, Kai; Thomas, Brian G.; Ruan, Xiaoming

    2016-02-01

    In steel continuous casting, argon gas is usually injected to prevent clogging, but the bubbles also affect the flow pattern, and may become entrapped to form defects in the final product. To investigate this behavior, plant measurements were conducted, and a computational model was applied to simulate turbulent flow of the molten steel and the transport and capture of argon gas bubbles into the solidifying shell in a continuous slab caster. First, the flow field was solved with an Eulerian k- ɛ model of the steel, which was two-way coupled with a Lagrangian model of the large bubbles using a discrete random walk method to simulate their turbulent dispersion. The flow predicted on the top surface agreed well with nailboard measurements and indicated strong cross flow caused by biased flow of Ar gas due to the slide-gate orientation. Then, the trajectories and capture of over two million bubbles (25 μm to 5 mm diameter range) were simulated using two different capture criteria (simple and advanced). Results with the advanced capture criterion agreed well with measurements of the number, locations, and sizes of captured bubbles, especially for larger bubbles. The relative capture fraction of 0.3 pct was close to the measured 0.4 pct for 1 mm bubbles and occurred mainly near the top surface. About 85 pct of smaller bubbles were captured, mostly deeper down in the caster. Due to the biased flow, more bubbles were captured on the inner radius, especially near the nozzle. On the outer radius, more bubbles were captured near the narrow face. The model presented here is an efficient tool to study the capture of bubbles and inclusion particles in solidification processes.
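
    The discrete-random-walk treatment of turbulent dispersion can be sketched in simplified form: each Lagrangian step adds to the local mean velocity a Gaussian fluctuation with per-component standard deviation sqrt(2k/3) for isotropic turbulence. All numbers below are illustrative; the full model also includes drag, buoyancy, eddy interaction times, and the capture criteria at the solidifying shell.

```python
import math
import random

def track_bubble(k_turb=0.01, u_mean=(0.5, -0.2), dt=0.01, steps=200, seed=1):
    """Advance one bubble in 2D: mean drift plus isotropic Gaussian velocity
    fluctuations with sigma = sqrt(2k/3) per component."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * k_turb / 3.0)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        ux = u_mean[0] + rng.gauss(0.0, sigma)
        uy = u_mean[1] + rng.gauss(0.0, sigma)
        x += ux * dt
        y += uy * dt
        path.append((x, y))
    return path

path = track_bubble()
```

    Over many steps the trajectory tracks the mean flow while the random walk spreads bubbles across streamlines, which is what allows some to reach, and be captured by, the shell front.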

  13. Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.

    PubMed

    Zhang, Yue; Berhane, Kiros

    2016-01-01

    We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.

  14. On the generality of the displaywide contingent orienting hypothesis: can a visual onset capture attention without top-down control settings for displaywide onset?

    PubMed

    Yeh, Su-Ling; Liao, Hsin-I

    2010-10-01

    The contingent orienting hypothesis (Folk, Remington, & Johnston, 1992) states that attentional capture is contingent on top-down control settings induced by task demands. Past studies supporting this hypothesis have identified three kinds of top-down control settings: for target-specific features, for the strategy to search for a singleton, and for visual features in the target display as a whole. Previously, we have found stimulus-driven capture by onset that was not contingent on the first two kinds of settings (Yeh & Liao, 2008). The current study aims to test the third kind: the displaywide contingent orienting hypothesis (Gibson & Kelsey, 1998). Specifically, we ask whether an onset stimulus can still capture attention in the spatial cueing paradigm when attentional control settings for the displaywide onset of the target are excluded by making all letters in the target display emerge from placeholders. Results show that a preceding uninformative onset cue still captured attention to its location in a stimulus-driven fashion, whereas a color cue captured attention only when it was contingent on the setting for displaywide color. These results raise doubts as to the generality of the displaywide contingent orienting hypothesis and help delineate the boundary conditions on this hypothesis. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Modeling and Control of Airport Queueing Dynamics under Severe Flow Restrictions

    NASA Technical Reports Server (NTRS)

    Carr, Francis; Evans, Antony; Clarke, John-Paul; Deron, Eric

    2003-01-01

    Based on field observations and interviews with controllers at BOS and EWR, we identify the closure of local departure fixes as the most severe class of airport departure restrictions. A set of simple queueing dynamics and traffic rules are developed to model departure traffic under such restrictions. The validity of the proposed model is tested via Monte Carlo simulation against 10 hours of actual operations data collected during a case-study at EWR on June 29, 2000. In general, the model successfully reproduces the aggregate departure congestion. An analysis of the average error over 40 simulation runs indicates that flow-rate restrictions also significantly impact departure traffic; work is underway to capture these effects. Several applications and what-if scenarios are discussed for future evaluation using the calibrated model.
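
    The core queueing dynamics under a departure-fix closure can be captured in a few lines. The single-server abstraction and the numbers below are illustrative, not the calibrated EWR model.

```python
def simulate_departures(arrivals, service_per_step, closures):
    """arrivals[t]: aircraft pushing back at step t; closures: set of steps
    when the departure fix is closed (no takeoffs). Returns the queue length
    at the end of each step."""
    queue, series = 0, []
    for t, a in enumerate(arrivals):
        queue += a
        if t not in closures:
            queue -= min(queue, service_per_step)
        series.append(queue)
    return series

# Fix closed during steps 1-2: the queue builds, then drains at the fixed rate.
series = simulate_departures([2, 2, 2, 2], 2, {1, 2})
```

    Even this toy version shows the characteristic congestion signature: queue growth during the closure followed by a slow recovery limited by the service rate.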

  16. Constitutive equations for the cyclic behaviour of short carbon fibre-reinforced thermoplastics and identification on a uniaxial database

    NASA Astrophysics Data System (ADS)

    Leveuf, Louis; Navrátil, Libor; Le Saux, Vincent; Marco, Yann; Olhagaray, Jérôme; Leclercq, Sylvain

    2018-01-01

    A constitutive model for the cyclic behaviour of short carbon fibre-reinforced thermoplastics for aeronautical applications is proposed. First, an extended experimental database is generated in order to highlight the specificities of the studied material. This database is composed of complex tests and is used to design a relevant constitutive model able to capture the cyclic behaviour of the material. A general 3D formulation of the model is then proposed, and an identification strategy is defined to identify its parameters. Finally, a validation of the identification is performed by challenging the prediction of the model to the tests that were not used for the identification. An excellent agreement between the numerical results and the experimental data is observed revealing the capabilities of the model.

  17. Extending Primitive Spatial Data Models to Include Semantics

    NASA Astrophysics Data System (ADS)

    Reitsma, F.; Batcheller, J.

    2009-04-01

    Our traditional geospatial data model involves associating some measurable quality, such as temperature, or observable feature, such as a tree, with a point or region in space and time. When capturing data we implicitly subscribe to some kind of conceptualisation. If we can make this explicit in an ontology and associate it with the captured data, we can leverage formal semantics to reason with the concepts represented in our spatial data sets. To do so, we extend our fundamental representation of geospatial data by including in our basic data model a URI that links it to the ontology defining our conceptualisation. We thus extend Goodchild et al.'s geo-atom [1] with the addition of a URI: (x, Z, z(x), URI). This provides us with pixel or feature level knowledge and the ability to create layers of data from a set of pixels or features that might be drawn from a database based on their semantics. Using open source tools, we present a prototype that involves simple reasoning as a proof of concept. References [1] M.F. Goodchild, M. Yuan, and T.J. Cova. Towards a general theory of geographic representation in GIS. International Journal of Geographical Information Science, 21(3):239-260, 2007.
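
    The extended geo-atom (x, Z, z(x), URI) can be sketched as a simple record type. The field names, example coordinates, and ontology URI below are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoAtom:
    """Geo-atom (x, Z, z(x)) extended with a URI that links the measured
    property to its defining concept in an ontology."""
    location: tuple       # x: a point in space-time, e.g. (lon, lat, time)
    property_name: str    # Z: the measurable property
    value: float          # z(x): the property's value at x
    ontology_uri: str     # link to the concept definition (hypothetical URI)

atom = GeoAtom((174.8, -36.9, "2009-01-01"), "temperature", 21.5,
               "http://example.org/ontology#AirTemperature")
```

    Because each pixel or feature carries its own URI, layers can be assembled by querying on semantics (e.g. all atoms whose concept is a subclass of Temperature) rather than on attribute names alone.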

  18. Unsteady numerical simulation of the flow in the U9 Kaplan turbine model

    NASA Astrophysics Data System (ADS)

    Javadi, Ardalan; Nilsson, Håkan

    2014-03-01

    The Reynolds-averaged Navier-Stokes equations with the RNG k-ε turbulence model closure are utilized to simulate the unsteady turbulent flow throughout the whole flow passage of the U9 Kaplan turbine model. The U9 Kaplan turbine model comprises 20 stationary guide vanes and 6 rotating blades (696.3 RPM), working at best efficiency load (0.71 m3/s). The computations are conducted using a general finite volume method, using the OpenFOAM CFD code. A dynamic mesh is used together with a sliding GGI interface to include the effect of the rotating runner. The guide vane clearance is included, as are the hub and tip clearances of the runner. An analysis is conducted of the unsteady behavior of the flow field, the pressure fluctuation in the draft tube, and the coherent structures of the flow. The tangential and axial velocity distributions at three sections in the draft tube are compared against LDV measurements. The numerical result is in reasonable agreement with the experimental data, and the important flow physics close to the hub in the draft tube is captured. The hub and tip vortices and an on-axis forced vortex are captured. The numerical results show that the frequency of the forced vortex is 1/5 of the runner rotation frequency.

  19. Predicting population dynamics from the properties of individuals: a cross-level test of dynamic energy budget theory.

    PubMed

    Martin, Benjamin T; Jager, Tjalling; Nisbet, Roger M; Preuss, Thomas G; Grimm, Volker

    2013-04-01

    Individual-based models (IBMs) are increasingly used to link the dynamics of individuals to higher levels of biological organization. Still, many IBMs are data hungry, species specific, and time-consuming to develop and analyze. Many of these issues would be resolved by using general theories of individual dynamics as the basis for IBMs. While such theories have frequently been examined at the individual level, few cross-level tests exist that also try to predict population dynamics. Here we performed a cross-level test of dynamic energy budget (DEB) theory by parameterizing an individual-based model using individual-level data of the water flea, Daphnia magna, and comparing the emerging population dynamics to independent data from population experiments. We found that DEB theory successfully predicted population growth rates and peak densities but failed to capture the decline phase. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small- and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detect gaps in current individual-level theories and ultimately will lead to theory development and the establishment of a generic basis for individual-based models and ecology.

  20. Prey should hide more randomly when a predator attacks more persistently.

    PubMed

    Gal, Shmuel; Alpern, Steve; Casas, Jérôme

    2015-12-06

    When being searched for and then (if found) pursued by a predator, a prey animal has a choice between choosing very randomly among hiding locations so as to be hard to find or alternatively choosing a location from which it is more likely to successfully flee if found. That is, the prey can choose to be hard to find or hard to catch, if found. In our model, capture of prey requires both finding it and successfully pursuing it. We model this dilemma as a zero-sum repeated game between predator and prey, with the eventual capture probability as the pay-off to the predator. We find that the more random hiding strategy is better when the chances of repeated pursuit, which are known to be related to area topography, are high. Our results extend earlier results of Gal and Casas, where there was at most only a single pursuit. In that model, hiding randomly was preferred by the prey when the predator has only a few looks. Thus, our new multistage model shows that the effect of more potential looks is opposite. Our results can be viewed as a generalization of search games to the repeated game context and are in accordance with observed escape behaviour of different animals. © 2015 The Author(s).
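
    The hide-hard-to-find versus hide-hard-to-catch dilemma can be illustrated with a single-stage 2x2 zero-sum game solved in closed form (the paper's repeated, multistage game is much richer; the payoff numbers below are invented capture probabilities, and the formula assumes the game has no saddle point).

```python
def solve_2x2_zero_sum(m):
    """Mixed-strategy solution of a 2x2 zero-sum game [[a, b], [c, d]] with no
    saddle point. Payoffs are the predator's capture probabilities; the prey
    (column player) minimizes."""
    (a, b), (c, d) = m
    denom = a - b - c + d
    p_row1 = (d - c) / denom          # predator's weight on its first strategy
    q_col1 = (d - b) / denom          # prey's weight on its first hiding option
    value = (a * d - b * c) / denom   # equilibrium capture probability
    return p_row1, q_col1, value

# Rows: predator searches thoroughly / quickly; columns: prey hides randomly /
# in a flee-friendly spot. Illustrative capture probabilities:
p, q, v = solve_2x2_zero_sum([[0.8, 0.3], [0.2, 0.6]])
```

    At equilibrium both players randomize, and the game value (here about 0.47) lies strictly between the pure-strategy guarantees, which is exactly why hiding "more randomly" can pay.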

  1. Prey should hide more randomly when a predator attacks more persistently

    PubMed Central

    Gal, Shmuel; Alpern, Steve; Casas, Jérôme

    2015-01-01

    When being searched for and then (if found) pursued by a predator, a prey animal has a choice between choosing very randomly among hiding locations so as to be hard to find or alternatively choosing a location from which it is more likely to successfully flee if found. That is, the prey can choose to be hard to find or hard to catch, if found. In our model, capture of prey requires both finding it and successfully pursuing it. We model this dilemma as a zero-sum repeated game between predator and prey, with the eventual capture probability as the pay-off to the predator. We find that the more random hiding strategy is better when the chances of repeated pursuit, which are known to be related to area topography, are high. Our results extend earlier results of Gal and Casas, where there was at most only a single pursuit. In that model, hiding randomly was preferred by the prey when the predator has only a few looks. Thus, our new multistage model shows that the effect of more potential looks is opposite. Our results can be viewed as a generalization of search games to the repeated game context and are in accordance with observed escape behaviour of different animals. PMID:26631332

  2. Using hierarchical Bayesian multi-species mixture models to estimate tandem hoop-net based habitat associations and detection probabilities of fishes in reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2015-01-01

    Species distribution models are useful tools to evaluate habitat relationships of fishes. We used hierarchical Bayesian multispecies mixture models to evaluate the relationships of both detection and abundance with habitat of reservoir fishes caught using tandem hoop nets. A total of 7,212 fish from 12 species were captured, and the majority of the catch was composed of Channel Catfish Ictalurus punctatus (46%), Bluegill Lepomis macrochirus (25%), and White Crappie Pomoxis annularis (14%). Detection estimates ranged from 8% to 69%, and modeling results suggested that fishes were primarily influenced by reservoir size and context, water clarity and temperature, and land-use types. Species were differentially abundant within and among habitat types, and some fishes were found to be more abundant in turbid, less impacted (e.g., by urbanization and agriculture) reservoirs with longer shoreline lengths; whereas, other species were found more often in clear, nutrient-rich impoundments that had generally shorter shoreline length and were surrounded by a higher percentage of agricultural land. Our results demonstrated that habitat and reservoir characteristics may differentially benefit species and assemblage structure. This study provides a useful framework for evaluating capture efficiency for not only hoop nets but other gear types used to sample fishes in reservoirs.

  3. 'Cape capture': Geologic data and modeling results suggest the holocene loss of a Carolina Cape

    USGS Publications Warehouse

    Thieler, E.R.; Ashton, A.D.

    2011-01-01

    For more than a century, the origin and evolution of the set of cuspate forelands known as the Carolina Capes (Hatteras, Lookout, Fear, and Romain) off the eastern coast of the United States have been discussed and debated. The consensus conceptual model is not only that these capes existed through much or all of the Holocene transgression, but also that their number has not changed. Here we describe bathymetric, lithologic, seismic, and chronologic data that suggest another cape may have existed between Capes Hatteras and Lookout during the early to middle Holocene. This cape likely formed at the distal end of the Neuse-Tar-Pamlico fluvial system during the early Holocene transgression, when this portion of the shelf was flooded ca. 9 cal (calibrated) kyr B.P., and was probably abandoned by ca. 4 cal kyr B.P., when the shoreline attained its present general configuration. Previously proposed mechanisms for cape formation suggest that the large-scale, rhythmic pattern of the Carolina Capes arose from a hydrodynamic template or the preexisting geologic framework. Numerical modeling, however, suggests that the number and spacing of capes can be dynamic, and that a coast can self-organize in response to a high-angle-wave instability in shoreline shape. In shoreline evolution model simulations, smaller cuspate forelands are subsumed by larger neighbors over millennial time scales through a process of 'cape capture.' The suggested former cape in Raleigh Bay represents the first interpreted geological evidence of dynamic abandonment suggested by the self-organization hypothesis. Cape capture may be a widespread process in coastal environments with large-scale rhythmic shoreline features; its preservation in the sedimentary record will vary according to geologic setting, physical processes, and sea-level history. © 2011 Geological Society of America.

  4. Numerical simulation of terrain-induced mesoscale circulation in the Chiang Mai area, Thailand

    NASA Astrophysics Data System (ADS)

    Sathitkunarat, Surachai; Wongwises, Prungchan; Pan-Aram, Rudklao; Zhang, Meigen

    2008-11-01

    The regional atmospheric modeling system (RAMS) was applied to Chiang Mai province, a mountainous area in Thailand, to study terrain-induced mesoscale circulations. Eight cases in wet and dry seasons under different weather conditions were analyzed to show thermal and dynamic impacts on local circulations. This is the first application of RAMS in Thailand, and in particular the first to investigate the effect of mountainous terrain on the simulated meteorology. Analysis of model results indicates that the model can reproduce major features of local circulation and diurnal variations in temperatures. For evaluating the model performance, model results were compared with observed wind speed, wind direction, and temperature monitored at a meteorological tower. Comparison shows that the modeled values are generally in good agreement with observations and that the model captured many of the observed features.

  5. Capturing nonlocal interaction effects in the Hubbard model: Optimal mappings and limits of applicability

    NASA Astrophysics Data System (ADS)

    van Loon, E. G. C. P.; Schüler, M.; Katsnelson, M. I.; Wehling, T. O.

    2016-10-01

    We investigate the Peierls-Feynman-Bogoliubov variational principle to map Hubbard models with nonlocal interactions to effective models with only local interactions. We study the renormalization of the local interaction induced by nearest-neighbor interaction and assess the quality of the effective Hubbard models in reproducing observables of the corresponding extended Hubbard models. We compare the renormalization of the local interactions as obtained from numerically exact determinant quantum Monte Carlo to approximate but more generally applicable calculations using dual boson, dynamical mean field theory, and the random phase approximation. These more approximate approaches are crucial for any application with real materials in mind. Furthermore, we use the dual boson method to calculate observables of the extended Hubbard models directly and benchmark these against determinant quantum Monte Carlo simulations of the effective Hubbard model.

  6. Estimating survival of radio-tagged birds

    USGS Publications Warehouse

    Bunck, C.M.; Pollock, K.H.; Lebreton, J.-D.; North, P.M.

    1993-01-01

    Parametric and nonparametric methods for estimating survival of radio-tagged birds are described. The general assumptions of these methods are reviewed. An estimate based on the assumption of constant survival throughout the period is emphasized in the overview of parametric methods. Two nonparametric methods, the Kaplan-Meier estimate of the survival function and the log rank test, are explained in detail. The link between these nonparametric methods and traditional capture-recapture models is discussed along with considerations in designing studies that use telemetry techniques to estimate survival.
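
    The Kaplan-Meier product-limit estimate discussed above has a compact computational form. The sketch below is illustrative only (the event times and at-risk counts are invented, not data from the study):

```python
# Kaplan-Meier survival estimate for radio-tagged animals (illustrative).
# At each event time t_i, d_i deaths occur among n_i animals still at risk;
# the survival function is S(t) = prod over t_i <= t of (1 - d_i / n_i).

def kaplan_meier(event_times, deaths, at_risk):
    """Return (time, survival) pairs in time order."""
    surv = 1.0
    curve = []
    for t, d, n in sorted(zip(event_times, deaths, at_risk)):
        surv *= 1.0 - d / n
        curve.append((t, surv))
    return curve

# Example: deaths at weeks 2, 5, and 9 with 20, 18, and 15 birds at risk.
curve = kaplan_meier([2, 5, 9], [1, 2, 1], [20, 18, 15])
```

    Staggered entry of newly tagged birds is handled naturally: the at-risk count at each event time includes only animals under observation at that time.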

  7. Recommendations on Implementing the Energy Conservation Building Code in Visakhapatnam, AP, India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Madanagobalane, Samhita S.; Yu, Sha

    Visakhapatnam can play an important role in improving energy efficiency in its buildings by implementing ECBC. This document seeks to capture stakeholder recommendations on an implementation road map, which can help all market players plan ahead. Visakhapatnam also has an opportunity to serve as a role model for other Smart Cities, and for Indian cities more generally. The road map and steps that VUDA adopts to implement ECBC can provide helpful examples to these other cities.

  8. Estimating snow leopard population abundance using photography and capture-recapture techniques

    USGS Publications Warehouse

    Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.

    2006-01-01

    Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters during January and March 2003-2004. We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 (SE = 0.22) individuals per 100 km2 in 2003 to 4.45 (SE = 0.16) individuals per 100 km2 in 2004. We believe the density disparity between years is attributable to different trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary for generating a statistically robust estimate of population density and abundance based on CMR models.
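
    Program CAPTURE fits closed-population models beyond the scope of a short example, but the underlying capture-recapture logic can be illustrated with the two-occasion Lincoln-Petersen estimator in Chapman's bias-corrected form (all numbers below are invented, not from the study):

```python
# Chapman's bias-corrected Lincoln-Petersen abundance estimator (illustrative).
# n1: individuals identified on occasion 1; n2: on occasion 2;
# m2: individuals identified on both occasions.

def chapman_estimate(n1, n2, m2):
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# e.g. 7 individuals photographed in winter 1, 6 in winter 2, 5 seen in both:
n_hat = chapman_estimate(7, 6, 5)
```

    The estimator assumes a closed population and equal catchability, which is why camera placement and trap density matter so much to the resulting estimates.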

  9. Minimizing capture-related stress on white-tailed deer with a capture collar

    USGS Publications Warehouse

    DelGiudice, G.D.; Kunkel, K.E.; Mech, L.D.; Seal, U.S.

    1990-01-01

    We compared the effect of 3 capture methods for white-tailed deer (Odocoileus virginianus) on blood indicators of acute excitement and stress from 1 February to 20 April 1989. Eleven adult females were captured by Clover trap or cannon net between 1 February and 9 April 1989 in northeastern Minnesota [USA]. These deer were fitted with radio-controlled capture collars, and 9 deer were recaptured 7-33 days later. Trapping method affected serum cortisol (P < 0.0001), hemoglobin (Hb) (P < 0.06), and packed cell volume (PCV) (P < 0.07). Cortisol concentrations were lower (P < 0.0001) in capture-collared deer (0.54 ± 0.07 [SE] µg/dL) compared to Clover-trapped (4.37 ± 0.69 µg/dL) and cannon-netted (3.88 ± 0.82 µg/dL) deer. Capture-collared deer were minimally stressed compared to deer captured by traditional methods. Use of the capture collar should permit more accurate interpretation of blood profiles of deer for assessment of condition and general health.

  10. NAO and its relationship with the Northern Hemisphere mean surface temperature in CMIP5 simulations

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofan; Li, Jianping; Sun, Cheng; Liu, Ting

    2017-04-01

    The North Atlantic Oscillation (NAO) is one of the most prominent teleconnection patterns in the Northern Hemisphere and has recently been found to be both an internal source and a useful predictor of the multidecadal variability of the Northern Hemisphere mean surface temperature (NHT). In this study, we examine how well the variability of the NAO and the NHT is reproduced in historical simulations generated by the 40 models that constitute Phase 5 of the Coupled Model Intercomparison Project (CMIP5). All of the models are able to capture the basic characteristics of the interannual NAO pattern reasonably well, whereas the simulated decadal NAO patterns show less consistency with the observations. The NAO fluctuations over multidecadal time scales are underestimated by almost all models. Regarding the NHT multidecadal variability, the models generally represent the externally forced variations well but tend to underestimate the internally generated NHT variability. With respect to the performance of the models in reproducing the NAO-NHT relationship, 14 models capture the observed decadal lead of the NAO, and model discrepancies in the representation of this linkage derive mainly from their different treatment of the underlying physical processes associated with the Atlantic Multidecadal Oscillation (AMO) and the Atlantic meridional overturning circulation (AMOC). This study suggests that one way to improve the simulation of the multidecadal variability of the internal NHT lies in better simulation of the multidecadal variability of the NAO and its delayed effect on NHT variability via slow ocean processes.

  11. Preschool Psychopathology Reported by Parents in 23 Societies: Testing the Seven-Syndrome Model of the Child Behavior Checklist for Ages 1.5–5

    PubMed Central

    Ivanova, Masha Y.; Achenbach, Thomas M.; Rescorla, Leslie A.; Harder, Valerie S.; Ang, Rebecca P.; Bilenberg, Niels; Bjarnadottir, Gudrun; Capron, Christiane; De Pauw, Sarah S.W.; Dias, Pedro; Dobrean, Anca; Doepfner, Manfred; Duyme, Michele; Eapen, Valsamma; Erol, Nese; Esmaeili, Elaheh Mohammad; Ezpeleta, Lourdes; Frigerio, Alessandra; Gonçalves, Miguel M.; Gudmundsson, Halldor S.; Jeng, Suh-Fang; Jetishi, Pranvera; Jusiene, Roma; Kim, Young-Ah; Kristensen, Solvejg; Lecannelier, Felipe; Leung, Patrick W.L.; Liu, Jianghong; Montirosso, Rosario; Oh, Kyung Ja; Plueck, Julia; Pomalima, Rolando; Shahini, Mimoza; Silva, Jaime R.; Simsek, Zynep; Sourander, Andre; Valverde, Jose; Van Leeuwen, Karla G.; Woo, Bernardine S.C.; Wu, Yen-Tzu; Zubrick, Stephen R.; Verhulst, Frank C.

    2014-01-01

    Objective: To test the fit of a seven-syndrome model to ratings of preschoolers' problems by parents in very diverse societies. Method: Parents of 19,106 children 18 to 71 months of age from 23 societies in Asia, Australasia, Europe, the Middle East, and South America completed the Child Behavior Checklist for Ages 1.5–5 (CBCL/1.5–5). Confirmatory factor analyses were used to test the seven-syndrome model separately for each society. Results: The primary model fit index, the root mean square error of approximation (RMSEA), indicated acceptable to good fit for each society. Although a six-syndrome model combining the Emotionally Reactive and Anxious/Depressed syndromes also fit the data for nine societies, it fit less well than the seven-syndrome model for seven of the nine societies. Other fit indices yielded less consistent results than the RMSEA. Conclusions: The seven-syndrome model provides one way to capture patterns of children's problems that are manifested in ratings by parents from many societies. Clinicians working with preschoolers from these societies can thus assess and describe parents' ratings of behavioral, emotional, and social problems in terms of the seven syndromes. The results illustrate possibilities for culture-general taxonomic constructs of preschool psychopathology. Problems not captured by the CBCL/1.5–5 may form additional syndromes, and other syndrome models may also fit the data. PMID:21093771
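
    The RMSEA used as the primary fit index above is a simple function of the model chi-square, its degrees of freedom, and the sample size. A minimal sketch with illustrative inputs (not values from the study):

```python
import math

# RMSEA from a chi-square fit statistic (illustrative inputs).
# Conventional reading: < 0.05 good fit, < 0.08 acceptable fit.

def rmsea(chi2, df, n):
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

fit = rmsea(chi2=150.0, df=100, n=1000)  # ~0.022, i.e. good fit
```

    Note that RMSEA rewards parsimony through df and is clamped at zero when the chi-square falls below its degrees of freedom.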

  12. Preschool psychopathology reported by parents in 23 societies: testing the seven-syndrome model of the child behavior checklist for ages 1.5-5.

    PubMed

    Ivanova, Masha Y; Achenbach, Thomas M; Rescorla, Leslie A; Harder, Valerie S; Ang, Rebecca P; Bilenberg, Niels; Bjarnadottir, Gudrun; Capron, Christiane; De Pauw, Sarah S W; Dias, Pedro; Dobrean, Anca; Doepfner, Manfred; Duyme, Michele; Eapen, Valsamma; Erol, Nese; Esmaeili, Elaheh Mohammad; Ezpeleta, Lourdes; Frigerio, Alessandra; Gonçalves, Miguel M; Gudmundsson, Halldor S; Jeng, Suh-Fang; Jetishi, Pranvera; Jusiene, Roma; Kim, Young-Ah; Kristensen, Solvejg; Lecannelier, Felipe; Leung, Patrick W L; Liu, Jianghong; Montirosso, Rosario; Oh, Kyung Ja; Plueck, Julia; Pomalima, Rolando; Shahini, Mimoza; Silva, Jaime R; Simsek, Zynep; Sourander, Andre; Valverde, Jose; Van Leeuwen, Karla G; Woo, Bernardine S C; Wu, Yen-Tzu; Zubrick, Stephen R; Verhulst, Frank C

    2010-12-01

    To test the fit of a seven-syndrome model to ratings of preschoolers' problems by parents in very diverse societies. Parents of 19,106 children 18 to 71 months of age from 23 societies in Asia, Australasia, Europe, the Middle East, and South America completed the Child Behavior Checklist for Ages 1.5-5 (CBCL/1.5-5). Confirmatory factor analyses were used to test the seven-syndrome model separately for each society. The primary model fit index, the root mean square error of approximation (RMSEA), indicated acceptable to good fit for each society. Although a six-syndrome model combining the Emotionally Reactive and Anxious/Depressed syndromes also fit the data for nine societies, it fit less well than the seven-syndrome model for seven of the nine societies. Other fit indices yielded less consistent results than the RMSEA. The seven-syndrome model provides one way to capture patterns of children's problems that are manifested in ratings by parents from many societies. Clinicians working with preschoolers from these societies can thus assess and describe parents' ratings of behavioral, emotional, and social problems in terms of the seven syndromes. The results illustrate possibilities for culture-general taxonomic constructs of preschool psychopathology. Problems not captured by the CBCL/1.5-5 may form additional syndromes, and other syndrome models may also fit the data. Copyright © 2010 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  13. Mitigating Seabird Bycatch during Hauling by Pelagic Longline Vessels

    PubMed Central

    Gilman, Eric; Chaloupka, Milani; Wiedoff, Brett; Willson, Jeremy

    2014-01-01

    Bycatch in longline fisheries threatens the viability of some seabird populations. The Hawaii longline swordfish fishery reduced seabird captures by an order of magnitude, primarily through mitigating bycatch during setting. Now, 75% of captures occur during hauling. We fit observer data to a generalized additive regression model with mixed effects to determine the significance of the effect of various factors on the standardized seabird haul catch rate. Density of albatrosses attending vessels during hauling, leader length and year had the largest model effects. The standardized haul catch rate significantly increased with increased albatross density during hauling. The standardized catch rate was significantly higher the longer the leader: shorter leaders place weighted swivels closer to hooks, reducing the likelihood of baited hooks becoming available to surface-scavenging albatrosses. There was a significant linear increasing temporal trend in the standardized catch rate, possibly partly due to an observed increasing temporal trend in the local abundance of albatrosses attending vessels during hauling. Swivel weight, Beaufort scale and season were also significant but smaller model effects. Most (81%) haul captures were on branchlines actively being retrieved. Future haul mitigation research should therefore focus on reducing bird access to hooks as crew coil branchlines, including the methods identified here of shorter leaders and heavier swivels, and other potentially effective methods, including faster branchline coiling and shielding the area where hooks become accessible. The proportion of Laysan albatross (Phoebastria immutabilis) captures that occurred during hauling was significantly higher (1.6 times) than that for black-footed albatrosses (P. nigripes), perhaps due to differences in the time of day of foraging and in daytime scavenging competitiveness; mitigating haul bycatch would therefore be a larger benefit to Laysan albatrosses. Locally, these findings identify opportunities to nearly eliminate seabird bycatch. Globally, they fill a gap in knowledge of methods to mitigate seabird bycatch during pelagic longline hauling. PMID:24400096

  14. Evaluating the importance of characterizing soil structure and horizons in parameterizing a hydrologic process model

    USGS Publications Warehouse

    Mirus, Benjamin B.

    2015-01-01

    Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested here. For cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.
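
    Pedotransfer functions of the kind evaluated above typically return parameters of a closed-form water-retention curve, commonly the van Genuchten form. A minimal sketch of that curve, with placeholder parameter values (not values from the study):

```python
import math

# van Genuchten soil-water retention curve (illustrative parameters).
# theta_r, theta_s: residual and saturated water contents;
# alpha (1/m) and n (> 1) shape the curve; m = 1 - 1/n.

def vg_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (h >= 0, in m)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Saturated at zero suction; drains toward theta_r as suction increases.
theta_sat = vg_theta(0.0, theta_r=0.05, theta_s=0.40, alpha=3.5, n=1.8)
theta_dry = vg_theta(10.0, theta_r=0.05, theta_s=0.40, alpha=3.5, n=1.8)
```

    The "steeper" retention curves mentioned above correspond roughly to larger n, which sharpens the transition between the saturated and residual regimes.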

  15. Captures of Boll Weevils (Coleoptera: Curculionidae) in Relation to Trap Orientation and Distance From Brush Lines.

    PubMed

    Spurgeon, Dale W

    2016-04-01

    Eradication programs for the boll weevil (Anthonomus grandis grandis Boheman) rely on pheromone-baited traps to trigger insecticide treatments and monitor program progress. A key objective of monitoring in these programs is the timely detection of incipient weevil populations to limit or prevent re-infestation. Therefore, improvements in the effectiveness of trapping would enhance efforts to achieve and maintain eradication. Association of pheromone traps with woodlots and other prominent vegetation is reported to increase captures of weevils, but the spatial scale over which this effect occurs is unknown. The influences of trap distance (0, 10, and 20 m) and orientation (leeward or windward) to brush lines on boll weevil captures were examined during three noncropping seasons (October to February) in the Rio Grande Valley of Texas. Differences in numbers of captured weevils and in the probability of capture between traps at 10 or 20 m from brush, although often statistically significant, were generally small and variable. Variations in boll weevil population levels, wind directions, and wind speeds apparently contributed to this variability. In contrast, traps closely associated with brush (0 m) generally captured larger numbers of weevils, and offered a higher probability of weevil capture compared with traps away from brush. These increases in the probability of weevil capture were as high as 30%. Such increases in the ability of traps to detect low-level boll weevil populations indicate trap placement with respect to prominent vegetation is an important consideration in maximizing the effectiveness of trap-based monitoring for the boll weevil.

  16. A Generalized Maxwell Model for Creep Behavior of Artery Opening Angle

    PubMed Central

    Zhang, W.; Guo, X.; Kassab, G. S.

    2009-01-01

    An artery ring springs open into a sector after a radial cut. The opening angle characterizes the residual strain in the unloaded state, which is fundamental to understanding stress and strain in the vessel wall. A recent study revealed that the opening angle decreases with time if the artery is cut from the loaded state, while it increases if the cut is made from the no-load state due to viscoelasticity. In both cases, the opening angle approaches the same value in 3 hours. This implies that the characteristic relaxation time is about 10,000 sec. Here, the creep function of a generalized Maxwell model (a spring in series with six Voigt bodies) is used to predict the temporal change of opening angle in multiple time scales. It is demonstrated that the theoretical model captures the salient features of the experimental results. The proposed creep function may be extended to study the viscoelastic response of blood vessels under various loading conditions. PMID:19045526
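
    The creep function of the model above (a spring in series with Voigt bodies) can be written down directly; the moduli and retardation times below are illustrative placeholders, not values fitted to the artery data:

```python
import math

# Creep compliance of a spring (modulus e0) in series with Voigt bodies:
# J(t) = 1/e0 + sum_i (1 - exp(-t / tau_i)) / E_i.
# Spreading the tau_i over decades captures creep on multiple time scales.

def creep_compliance(t, e0, voigt):
    """voigt: list of (E_i, tau_i) pairs; returns J(t)."""
    j = 1.0 / e0
    for e_i, tau_i in voigt:
        j += (1.0 - math.exp(-t / tau_i)) / e_i
    return j

bodies = [(2.0, 10.0), (5.0, 100.0), (10.0, 1000.0)]
j0 = creep_compliance(0.0, e0=1.0, voigt=bodies)      # instantaneous: 1/e0
j_late = creep_compliance(1e6, e0=1.0, voigt=bodies)  # long-time plateau
```

    The compliance rises monotonically from the instantaneous elastic value 1/e0 toward the plateau 1/e0 + sum(1/E_i), mirroring the opening angle's approach to its equilibrium value over hours.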

  17. Morphology, Kinematics, and Dynamics: The Mechanics of Suction Feeding in Fishes.

    PubMed

    Day, Steven W; Higham, Timothy E; Holzman, Roi; Van Wassenbergh, Sam

    2015-07-01

    Suction feeding is pervasive among aquatic vertebrates, and our understanding of the functional morphology and biomechanics of suction feeding has recently been advanced by combining experimental and modeling approaches. Key advances include the visualization of the patterns of flow in front of the mouth of a feeding fish, the measurement of pressure inside their mouth cavity, and the employment of analytical and computational models. Here, we review the key components of the morphology and kinematics of the suction-feeding system of anatomically generalized, adult ray-finned fishes, followed by an overview of the hydrodynamics involved. In the suction-feeding apparatus, a strong mechanistic link among morphology, kinematics, and the capture of prey is manifested through the hydrodynamic interactions between the suction flows and solid surfaces (the mouth cavity and the prey). It is therefore a powerful experimental system in which the ecology and evolution of the capture of prey can be studied based on first principles. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  18. EFFECTS OF TURBULENCE, ECCENTRICITY DAMPING, AND MIGRATION RATE ON THE CAPTURE OF PLANETS INTO MEAN MOTION RESONANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ketchum, Jacob A.; Adams, Fred C.; Bloch, Anthony M.

    2011-01-01

    Pairs of migrating extrasolar planets often lock into mean motion resonance as they drift inward. This paper studies the convergent migration of giant planets (driven by a circumstellar disk) and determines the probability that they are captured into mean motion resonance. The probability that such planets enter resonance depends on the type of resonance, the migration rate, the eccentricity damping rate, and the amplitude of the turbulent fluctuations. This problem is studied both through direct integrations of the full three-body problem and via semi-analytic model equations. In general, the probability of resonance decreases with increasing migration rate, and with increasing levels of turbulence, but increases with eccentricity damping. Previous work has shown that the distributions of orbital elements (eccentricity and semimajor axis) for observed extrasolar planets can be reproduced by migration models with multiple planets. However, these results depend on resonance locking, and this study shows that entry into, and maintenance of, mean motion resonance depends sensitively on the migration rate, eccentricity damping, and turbulence.

  19. Data-Model Comparisons of the October, 2002 Event Using the Space Weather Modeling Framework

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Chappell, C. R.; Schunk, R. W.; Barakat, A. R.; Eccles, V.; Glocer, A.; Kistler, L. M.; Haaland, S.; Moore, T. E.

    2014-12-01

    The September 27 - October 4, 2002 time period has been selected by the Geospace Environment Modeling Ionospheric Outflow focus group for community collaborative study because of its high magnetospheric activity and extensive data coverage. The FAST, Polar, and Cluster missions, as well as others, all made key observations during this period, creating a prime event for data-model comparisons. The GEM community has come together to simulate this period using many different methods in order to evaluate models, compare results, and expand our knowledge of ionospheric outflow and its effects on global dynamics. This paper presents Space Weather Modeling Framework (SWMF) simulations of this important period compared against observations from the Polar TIDE, Cluster CODIF and EFW instruments. Density and velocity of oxygen and hydrogen throughout the lobes, plasmasheet, and inner magnetosphere will be the focus of these comparisons. For these simulations, the SWMF couples the multifluid version of BATS-R-US MHD to a variety of ionospheric outflow models of varying complexity. The simplest is outflow arising from constant MHD inner boundary conditions. Two first-principles-based models are also leveraged: the Polar Wind Outflow Model (PWOM), a fluid treatment of outflow dynamics, and the Generalized Polar Wind (GPW) model, which combines fluid and particle-in-cell approaches. Each model is capable of capturing a different set of energization mechanisms, yielding different outflow results. The data-model comparisons will illustrate how well each approach captures reality and which energization mechanisms are most important. This work will also assess our current capability to reproduce ionosphere-magnetosphere mass coupling.

  20. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes feature in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationary and moment conditions, we establish a consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
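
    The univariate MEM(1,1) used above for clustering effects can be sketched in a few lines; the parameter values and the unit-mean exponential innovation below are illustrative choices, not estimates from the dissertation:

```python
import random

# MEM(1,1) for a nonnegative series (e.g. trade durations):
# x_t = mu_t * eps_t, with mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}
# and E[eps_t] = 1, so mu_t is the conditional mean of x_t.

def simulate_mem(n, omega=0.1, alpha=0.1, beta=0.8, seed=42):
    rng = random.Random(seed)
    mu, x, series = 1.0, 1.0, []
    for _ in range(n):
        mu = omega + alpha * x + beta * mu
        x = mu * rng.expovariate(1.0)  # unit-mean positive innovation
        series.append(x)
    return series

xs = simulate_mem(500)  # clustering: large values tend to follow large values
```

    With alpha + beta < 1 the process is stationary, with unconditional mean omega / (1 - alpha - beta); the multiplicative structure guarantees the simulated series stays nonnegative, the defining feature of MEM.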

  1. The Impact of Updated Zr Neutron-capture Cross Sections and New Asymptotic Giant Branch Models on our Understanding of the s process and the origin of stardust

    DOE PAGES

    Lugaro, M.; Tagliente, Giuseppe; Karakas, Amanda I.; ...

    2013-12-13

    We present model predictions for the Zr isotopic ratios produced by slow neutron captures in C-rich asymptotic giant branch (AGB) stars of masses 1.25-4 M⊙ and metallicities Z = 0.01-0.03, and compare them to data from single meteoritic stardust silicon carbide (SiC) and high-density graphite grains that condensed in the outflows of these stars. We compare predictions produced using the Zr neutron-capture cross sections from Bao et al. and from n_TOF experiments at CERN, and present a new evaluation for the neutron-capture cross section of the unstable isotope Zr-95, the branching point leading to the production of Zr-96. The new cross sections generally present an improved match with the observational data, except for the Zr-92/Zr-94 ratios, which are on average still substantially higher than predicted. The Zr-96/Zr-94 ratios can be explained using our range of initial stellar masses, with the most Zr-96-depleted grains originating from AGB stars of masses 1.8-3 M⊙ and the others from either lower or higher masses. The Zr-90,Zr-91/Zr-94 variations measured in the grains are well reproduced by the range of stellar metallicities considered here, which is the same needed to cover the Si composition of the grains produced by the chemical evolution of the Galaxy. The Zr-92/Zr-94 versus Si-29/Si-28 positive correlation observed in the available data suggests that stellar metallicity rather than rotation plays the major role in covering the Zr-90,Zr-91,Zr-92/Zr-94 spread.

  2. Elucidating the origin of the attractive force among hydrophilic macroions

    DOE PAGES

    Liu, Zhuonan; Liu, Tianbo; Tsige, Mesfin

    2016-05-24

    In this study, a coarse-grained simulation approach is applied to provide a general understanding of various soluble, hydrophilic macroionic solutions, especially the strong attractions among the like-charged soluble macroions and the consequent spontaneous, reversible formation of blackberry structures with tunable sizes. This model captures essential molecular details of the macroions and their interactions in polar solvents. Results using this model yield conclusions consistent with the experimental observations, from the nature of the attractive force among macroions (counterion-mediated attraction) to the blackberry formation mechanism. The conclusions can be applied to various macroionic solutions, from inorganic molecular clusters to dendrimers and biomacromolecules.

  4. Topological model of composite fermions in the cyclotron band generator picture: New insights

    NASA Astrophysics Data System (ADS)

    Staśkiewicz, Beata

    2018-03-01

    Combinatorial group theory in the braid groups correlates well with the unusual "anyon" statistics of particles in the 2D Hall system in the fractional quantum regime. On this background, a cyclotron band generator has been derived as a modification and generalization of the band generator, first established to solve the word and conjugacy problems in braid-group terms. The topological commensurability condition has been expressed through canonical-factor-like terms, based on the concept of parallel descending cycles. Owing to this, the general hierarchy of correlated states in the lowest Landau level, describing the fractional quantum Hall effect hierarchy, can be captured mathematically in terms of cyclotron band generators, especially for states beyond the conventional composite-fermion model. It is also shown that cyclotron braid subgroups, developed for the interpretation of Laughlin correlations, are a special case of right-angled Artin groups.

  5. Role of resolution in regional climate change projections over China

    NASA Astrophysics Data System (ADS)

    Shi, Ying; Wang, Guiling; Gao, Xuejie

    2017-11-01

    This paper investigates the sensitivity of projected future climate changes over China to the horizontal resolution of a regional climate model RegCM4.4 (RegCM), using RCP8.5 as an example. Model validation shows that RegCM performs better in reproducing the spatial distribution and magnitude of present-day temperature, precipitation and climate extremes than the driving global climate model HadGEM2-ES (HadGEM, at 1.875° × 1.25° resolution), but little difference is found between the simulations at 50 and 25 km resolutions. Comparison with observational data at different resolutions confirmed the added value of the RCM and finer model resolutions in better capturing the probability distribution of precipitation. However, HadGEM and RegCM at both resolutions project a similar pattern of significant future warming during both winter and summer, and a similar pattern of winter precipitation changes including a dominant increase in most areas of northern China and little change or decrease in the southern part. Projected precipitation changes in summer diverge among the three models, especially over eastern China, with a general increase in HadGEM, little change in RegCM at 50 km, and a mix of increase and decrease in RegCM at 25 km resolution. Changes of temperature-related extremes (annual number of days with daily maximum temperature > 25 °C, the maximum value of daily maximum temperature, and the minimum value of daily minimum temperature) in the three simulations, especially in the two RegCM simulations, are very similar to each other; so are the precipitation-related extremes (maximum consecutive dry days, maximum consecutive 5-day precipitation, and extremely wet days' total amount). Overall, results from this study indicate a very low sensitivity of projected changes in this region to model resolution. While fine resolution is critical for capturing the spatial variability of the control climate, it may not be as important for capturing the climate response to homogeneous forcing (in this case greenhouse gas concentration changes).

  6. Challenges Associated with Estimating Utility in Wet Age-Related Macular Degeneration: A Novel Regression Analysis to Capture the Bilateral Nature of the Disease.

    PubMed

    Hodgson, Robert; Reason, Timothy; Trueman, David; Wickstead, Rose; Kusel, Jeanette; Jasilek, Adam; Claxton, Lindsay; Taylor, Matthew; Pulikottil-Jacob, Ruth

    2017-10-01

    The estimation of utility values for the economic evaluation of therapies for wet age-related macular degeneration (AMD) is a particular challenge. Previous economic models in wet AMD have been criticized for failing to capture the bilateral nature of wet AMD by modelling visual acuity (VA) and utility values associated with the better-seeing eye only. Here we present a de novo regression analysis using generalized estimating equations (GEE) applied to a previous dataset of time trade-off (TTO)-derived utility values from a sample of the UK population that wore contact lenses to simulate visual deterioration in wet AMD. This analysis allows utility values to be estimated as a function of VA in both the better-seeing eye (BSE) and worse-seeing eye (WSE). VAs in both the BSE and WSE were found to be statistically significant (p < 0.05) when regressed separately. When included without an interaction term, only the coefficient for VA in the BSE was significant (p = 0.04), but when an interaction term between VA in the BSE and WSE was included, only the constant term (mean TTO utility value) was significant, potentially a result of the collinearity between the VA of the two eyes. The lack of both formal model fit statistics from the GEE approach and of theoretical knowledge to support the superiority of one model over another makes it difficult to select the best model. Limitations of this analysis arise from the potential influence of collinearity between the VA of both eyes, and from the use of contact lenses to reflect VA states to obtain the original dataset. Whilst further research is required to elicit more accurate utility values for wet AMD, this novel regression analysis provides a possible source of utility values to allow future economic models to capture the quality-of-life impact of changes in VA in both eyes. Novartis Pharmaceuticals UK Limited.
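
    The collinearity problem the authors describe can be illustrated with a toy ordinary-least-squares fit (not the paper's GEE analysis, and not its TTO dataset; all data, coefficients, and scales below are synthetic): when VA in the worse-seeing eye closely tracks VA in the better-seeing eye, the joint design matrix becomes ill-conditioned and individual coefficient estimates lose precision.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic (hypothetical) data: VA in the worse-seeing eye closely tracks
# VA in the better-seeing eye, making the two predictors collinear.
bse = rng.uniform(0.0, 1.0, n)            # VA, better-seeing eye
wse = bse + rng.normal(0.3, 0.05, n)      # VA, worse-seeing eye
utility = 0.9 - 0.4 * bse + rng.normal(0.0, 0.05, n)

def fit(predictors, y):
    # Ordinary least squares via lstsq; also returns the condition number
    # of the design matrix (large value = ill-conditioned, unstable fit).
    X = np.column_stack([np.ones(len(y))] + predictors)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, np.linalg.cond(X)

coef_bse, cond_bse = fit([bse], utility)
coef_joint, cond_joint = fit([bse, wse], utility)
print("BSE alone:", coef_bse, "cond:", round(cond_bse, 1))
print("BSE + WSE:", coef_joint, "cond:", round(cond_joint, 1))
```

The joint design matrix shows a much larger condition number than the single-predictor fit, which is one numerical symptom of the collinearity discussed in the abstract.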

  7. Explanation-based generalization of partially ordered plans

    NASA Technical Reports Server (NTRS)

    Kambhampati, Subbarao; Kedar, Smadar

    1991-01-01

    Most previous work in analytic generalization of plans dealt with totally ordered plans. These methods cannot be directly applied to generalizing partially ordered plans, since they do not capture all interactions among plan operators for all total orders of such plans. We introduce a new method for generalizing partially ordered plans. This method is based on providing explanation-based generalization (EBG) with explanations which systematically capture the interactions among plan operators for all the total orders of a partially-ordered plan. The explanations are based on the Modal Truth Criterion which states the necessary and sufficient conditions for ensuring the truth of a proposition at any point in a plan, for a class of partially ordered plans. The generalizations obtained by this method guarantee successful and interaction-free execution of any total order of the generalized plan. In addition, the systematic derivation of the generalization algorithms from the Modal Truth Criterion obviates the need for carrying out a separate formal proof of correctness of the EBG algorithms.

  8. Modeling the viscosity of polydisperse suspensions: Improvements in prediction of limiting behavior

    NASA Astrophysics Data System (ADS)

    Mwasame, Paul M.; Wagner, Norman J.; Beris, Antony N.

    2016-06-01

    The present study develops a fully consistent extension of the approach pioneered by Farris ["Prediction of the viscosity of multimodal suspensions from unimodal viscosity data," Trans. Soc. Rheol. 12, 281-301 (1968)] to describe the viscosity of polydisperse suspensions, significantly improving upon our previous model [P. M. Mwasame, N. J. Wagner, and A. N. Beris, "Modeling the effects of polydispersity on the viscosity of noncolloidal hard sphere suspensions," J. Rheol. 60, 225-240 (2016)]. The new model captures the Farris limit of large size differences between consecutive particle size classes in a suspension. Moreover, the new model includes a further generalization that enables its application to real, complex suspensions that deviate from ideal non-colloidal suspension behavior. The capability of the new model to predict the viscosity of complex suspensions is illustrated by comparison against experimental data.
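
    The Farris limit mentioned above can be sketched numerically. In the limit of widely separated size classes, each coarser class sees the finer suspension as an effective continuous medium, so the relative viscosities of the classes multiply. Assuming (hypothetically) a Krieger-Dougherty law for each unimodal class (this is an illustrative choice, not the authors' new model), the well-known viscosity reduction of a bimodal suspension at fixed total solids fraction follows:

```python
def krieger_dougherty(phi, phi_max=0.64, intrinsic=2.5):
    # Assumed unimodal law: relative viscosity of a monodisperse suspension.
    return (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

def farris_bimodal(phi_fine, phi_coarse):
    # Farris limit: the fine class occupies the liquid left over by the
    # coarse class, and relative viscosities multiply.
    phi_fine_eff = phi_fine / (1.0 - phi_coarse)
    return krieger_dougherty(phi_fine_eff) * krieger_dougherty(phi_coarse)

eta_bimodal = farris_bimodal(0.20, 0.20)       # 40% solids, two classes
eta_monodisperse = krieger_dougherty(0.40)     # 40% solids, one class
print(eta_bimodal, eta_monodisperse)
```

At the same total solids fraction the bimodal suspension comes out less viscous than the monodisperse one, which is the qualitative behavior the Farris construction is designed to capture.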

  9. Hydrodynamics of bacterial colonies: A model

    NASA Astrophysics Data System (ADS)

    Lega, J.; Passot, T.

    2003-03-01

    We propose a hydrodynamic model for the evolution of bacterial colonies growing on soft agar plates. This model consists of reaction-diffusion equations for the concentrations of nutrients, water, and bacteria, coupled to a single hydrodynamic equation for the velocity field of the bacteria-water mixture. It captures the dynamics inside the colony as well as on its boundary and allows us to identify a mechanism for collective motion towards fresh nutrients, which, in its modeling aspects, is similar to classical chemotaxis. As shown in numerical simulations, our model reproduces both usual colony shapes and typical hydrodynamic motions, such as the whirls and jets recently observed in wet colonies of Bacillus subtilis. The approach presented here could be extended to different experimental situations and provides a general framework for the use of advection-reaction-diffusion equations in modeling bacterial colonies.
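
    A minimal one-dimensional reaction-diffusion sketch (omitting the paper's hydrodynamic velocity equation, and using made-up parameter values) illustrates the kind of nutrient-consumption front such coupled equations produce:

```python
import numpy as np

# Toy 1-D sketch, explicit Euler; the velocity-field coupling of the
# full model is omitted, and all parameter values are illustrative.
nx, dx, dt, steps = 100, 0.5, 0.01, 2000
Db, Dn = 0.1, 1.0            # diffusivities of bacteria and nutrient
growth, uptake = 1.0, 0.5    # reaction rates

b = np.zeros(nx); b[:5] = 1.0    # bacterial inoculum at the left edge
n = np.ones(nx)                  # uniform initial nutrient

def laplacian(u):
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    out[0], out[-1] = out[1], out[-2]   # crude no-flux boundaries
    return out

for _ in range(steps):
    reaction = growth * b * n            # nutrient-limited growth
    b = b + dt * (Db * laplacian(b) + reaction)
    n = n + dt * (Dn * laplacian(n) - uptake * reaction)

print(b.max(), n.min())
```

The bacterial density grows and spreads while the nutrient field is depleted behind the advancing front, the basic mechanism onto which the paper's hydrodynamic equation adds collective motion.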

  10. ALAMEDA, a Structural–Functional Model for Faba Bean Crops: Morphological Parameterization and Verification

    PubMed Central

    RUIZ-RAMOS, MARGARITA; MÍNGUEZ, M. INÉS

    2006-01-01

    • Background Plant structural (i.e. architectural) models explicitly describe plant morphology by providing detailed descriptions of the display of leaf and stem surfaces within heterogeneous canopies and thus provide the opportunity for modelling the functioning of plant organs in their microenvironments. The outcome is a class of structural–functional crop models that combines advantages of current structural and process approaches to crop modelling. ALAMEDA is such a model. • Methods The formalism of Lindenmayer systems (L-systems) was chosen for the development of a structural model of the faba bean canopy, providing both numerical and dynamic graphical outputs. It was parameterized according to the results obtained through detailed morphological and phenological descriptions that capture the detailed geometry and topology of the crop. The analysis distinguishes between relationships of general application for all sowing dates and stem ranks and others valid only for all stems of a single crop cycle. • Results and Conclusions The results reveal that in faba bean, structural parameterization valid for the entire plant may be drawn from a single stem. ALAMEDA was formed by linking the structural model to the growth model ‘Simulation d'Allongement des Feuilles’ (SAF) with the ability to simulate approx. 3500 crop organs and components of a group of nine plants. Model performance was verified for organ length, plant height and leaf area. The L-system formalism was able to capture the complex architecture of canopy leaf area of this indeterminate crop and, with the growth relationships, generate a 3D dynamic crop simulation. Future development and improvement of the model are discussed. PMID:16390842
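
    A toy L-system (with hypothetical production rules, not ALAMEDA's actual parameterization) shows the rewriting formalism: each derivation step expands the apex symbol A into an internode, a bracketed leaf, and a new apex.

```python
# Hypothetical rules for illustration only:
# A = apex, I = internode, L = leaf, [ ] = branch delimiters.
axiom = "A"
rules = {"A": "I[L]A"}   # symbols without a rule are copied unchanged

def derive(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

stem = derive(axiom, rules, 4)
print(stem, "leaves:", stem.count("L"))   # → I[L]I[L]I[L]I[L]A leaves: 4
```

Interpreting the resulting string geometrically (turtle graphics or, as in ALAMEDA, measured organ dimensions) is what turns such a grammar into a 3D structural model.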

  11. Observations of cross-Saharan transport of water vapour via cycle of cold pools and moist convection

    NASA Astrophysics Data System (ADS)

    Trzeciak, Tomasz; Garcia-Carreras, Luis; Marsham, John H.

    2017-04-01

    A scarcity of observational data has long limited our ability to study meteorological processes in the Sahara. The Sahara is a key component of the West African monsoon and the world's largest dust source, but its representation is a major uncertainty in global models. Past studies have shown that there is a persistent warm and dry model bias throughout the Sahara, and this has been attributed to the lack of convectively generated cold pools in the models, which can ventilate the central Sahara from its margins. Here we present an observed case from June 2012 which explains how cold pools are able to transport water vapour across a large area of the Sahara over a period of several days. A daily cycle is found to occur, where deep convection in the evening generates moist cold pools that then feed the next day's convection; the new convection in turn generates new cold pools, providing a vertical recycling of moisture. Trajectories driven by analyses can capture the general direction of transport, but not its full extent, especially at night when cold pools are most active, highlighting the difficulties models face in capturing these processes. These results show the importance of cold pools for moisture transport, dust and clouds in the region, and demonstrate the need to include these processes in models to improve the representation of the Saharan atmosphere.

  12. Dengue forecasting in São Paulo city with generalized additive models, artificial neural networks and seasonal autoregressive integrated moving average models.

    PubMed

    Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco

    2018-01-01

    Globally, the number of dengue cases has been on the increase since 1990 and this trend has also been found in Brazil and its most populated city, São Paulo. Surveillance systems based on predictions allow for timely decision-making processes and, in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance, predicted epidemics of unprecedented magnitude and its performance was 3.16 times higher than the benchmark and 1.47 times higher than the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning were straightforward and required seconds or at most a few minutes. These are desired characteristics to provide timely results for decision makers. However, it should be noted that predictions are made just one month ahead, and this is a limitation that future studies could try to reduce.
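
    The benchmark comparison can be sketched on synthetic seasonal data (not the São Paulo case series; the series, lags, and error metric below are illustrative assumptions): a naive same-month-last-year forecast versus a simple lag regression, both evaluated one month ahead.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)  # ten years of monthly counts (synthetic)
cases = 50 + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)

test = cases[108:]

# Naive benchmark: repeat the same month of the previous year.
naive_pred = cases[96:108]

# Simple one-month-ahead regression on the values 1 and 12 months earlier.
def design(series, idx):
    return np.column_stack([np.ones(len(idx)), series[idx - 1], series[idx - 12]])

fit_idx = np.arange(12, 108)
coef, *_ = np.linalg.lstsq(design(cases, fit_idx), cases[fit_idx], rcond=None)
lag_pred = design(cases, np.arange(108, 120)) @ coef

mae = lambda p, y: np.mean(np.abs(p - y))
print("naive MAE:", mae(naive_pred, test), "lag-model MAE:", mae(lag_pred, test))
```

Scoring every candidate against the same held-out year and the same naive benchmark is the essence of the comparison reported in the abstract, whatever the forecasting model family.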

  13. Assessment of the NASA Space Shuttle Program's Problem Reporting and Corrective Action System

    NASA Technical Reports Server (NTRS)

    Korsmeyer, D. J.; Schreiner, J. A.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper documents the general findings and recommendations of the Design for Safety Program's Study of the Space Shuttle Program's (SSP) Problem Reporting and Corrective Action (PRACA) System. The goals of this Study were to evaluate and quantify the technical aspects of the SSP's PRACA systems, and to recommend enhancements addressing specific deficiencies in preparation for future system upgrades. The Study determined that the extant SSP PRACA systems accomplished a project-level support capability through the use of a large pool of domain experts and a variety of distributed formal and informal database systems. This operational model is vulnerable to staff turnover and loss of the vast corporate knowledge that is not currently being captured by the PRACA system. A need for a Program-level PRACA system providing improved insight, unification, knowledge capture, and collaborative tools was defined in this study.

  14. Tertiary instability of zonal flows within the Wigner-Moyal formulation of drift turbulence

    NASA Astrophysics Data System (ADS)

    Zhu, Hongxuan; Ruiz, D. E.; Dodin, I. Y.

    2017-10-01

    The stability of zonal flows (ZFs) is analyzed within the generalized Hasegawa-Mima model. The necessary and sufficient condition for a ZF instability, which is also known as the tertiary instability, is identified. The qualitative physics behind the tertiary instability is explained using the recently developed Wigner-Moyal formulation and the corresponding wave kinetic equation (WKE) in the geometrical-optics (GO) limit. By analyzing the drifton phase-space trajectories, we find that the corrections to the WKE proposed in Ref. are critical for capturing the spatial scales characteristic of the tertiary instability. That said, we also find that this instability itself cannot, in principle, be adequately described within a GO formulation. Using the Wigner-Moyal equations, which capture diffraction, we analytically derive the tertiary-instability growth rate and compare it with numerical simulations. The research was sponsored by the U.S. Department of Energy.

  15. A general health policy model: update and applications.

    PubMed Central

    Kaplan, R M; Anderson, J P

    1988-01-01

    This article describes the development of a General Health Policy Model that can be used for program evaluation, population monitoring, clinical research, and policy analysis. An important component of the model, the Quality of Well-being scale (QWB), combines preference-weighted measures of symptoms and functioning to provide a numerical point-in-time expression of well-being, ranging from 0 for death to 1.0 for asymptomatic optimum functioning. The level of wellness at particular points in time is governed by the prognosis (transition rates or probabilities) generated by the underlying disease or injury under different treatment (control) variables. Well-years result from integrating the level of wellness, or health-related quality of life, over the life expectancy. Several issues relevant to the application of the model are discussed. It is suggested that a quality of life measure need not have separate components for social and mental health. Social health has been difficult to define; social support may be a poor criterion for resource allocation; and some evidence suggests that aspects of mental health are captured by the general measure. Although it has been suggested that measures of child health should differ from those used for adults, we argue that a separate conceptualization of child health creates new problems for policy analysis. After offering several applications of the model for the evaluation of prevention programs, we conclude that many of the advantages of general measures have been overlooked and should be given serious consideration in future studies. PMID:3384669
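
    The well-years calculation described above reduces to integrating QWB-weighted wellness over the remaining life expectancy. A sketch with a purely hypothetical utility trajectory (yearly QWB samples over a 10-year life expectancy, trapezoidal integration):

```python
# Hypothetical QWB trajectory sampled yearly (1.0 = asymptomatic optimum
# functioning, 0 = death); values are illustrative, not from the article.
qwb = [0.85, 0.85, 0.80, 0.80, 0.75, 0.70, 0.70, 0.65, 0.55, 0.40, 0.0]

# Well-years = area under the wellness curve (trapezoidal rule, 1-year steps).
well_years = sum((qwb[i] + qwb[i + 1]) / 2 for i in range(len(qwb) - 1))
print(well_years)   # ≈ 6.6 well-years over a 10-year life expectancy
```

Comparing well-years with and without an intervention is what makes the measure usable for the program evaluations and policy analyses the article discusses.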

  16. Modelling the Course of an HIV Infection: Insights from Ecology and Evolution

    PubMed Central

    Alizon, Samuel; Magnus, Carsten

    2012-01-01

    The Human Immunodeficiency Virus (HIV) is one of the most threatening viral agents. This virus infects approximately 33 million people, many of whom are unaware of their status because, except for flu-like symptoms right at the beginning of the infection during the acute phase, the disease progresses more or less symptom-free for 5 to 10 years. During this asymptomatic phase, the virus slowly destroys the immune system until the onset of AIDS when opportunistic infections like pneumonia or Kaposi’s sarcoma can overcome immune defenses. Mathematical models have played a decisive role in estimating important parameters (e.g., virion clearance rate or life-span of infected cells). However, most models only account for the acute and asymptomatic latency phase and cannot explain the progression to AIDS. Models that account for the whole course of the infection rely on different hypotheses to explain the progression to AIDS. The aim of this study is to review these models, present their technical approaches and discuss the robustness of their biological hypotheses. Among the few models capturing all three phases of an HIV infection, we can distinguish between those that mainly rely on population dynamics and those that involve virus evolution. Overall, the modeling quest to capture the dynamics of an HIV infection has improved our understanding of the progression to AIDS but, more generally, it has also led to the insight that population dynamics and evolutionary processes can be necessary to explain the course of an infection. PMID:23202449

  17. Communication: Fragment-based Hamiltonian model of electronic charge-excitation gaps and gap closure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valone, S. M.; Pilania, G.; Liu, X. Y.

    2015-11-14

    Capturing key electronic properties such as charge excitation gaps within models at or above the atomic scale presents an ongoing challenge to understanding molecular, nanoscale, and condensed phase systems. One strategy is to describe the system in terms of properties of interacting material fragments, but it is unclear how to accomplish this for charge-excitation and charge-transfer phenomena. Hamiltonian models such as the Hubbard model provide formal frameworks for analyzing gap properties but are couched purely in terms of states of electrons, rather than the states of the fragments at the scale of interest. The recently introduced Fragment Hamiltonian (FH) model uses fragments in different charge states as its building blocks, enabling a uniform, quantum-mechanical treatment that captures the charge-excitation gap. These gaps are preserved in terms of inter-fragment charge-transfer hopping integrals T and on-fragment parameters U^(FH). The FH model generalizes the standard Hubbard model (a single intra-band hopping integral t and on-site repulsion U) from quantum states for electrons to quantum states for fragments. We demonstrate that even for simple two-fragment and multi-fragment systems, gap closure is enabled once T exceeds the threshold set by U^(FH), thus providing new insight into the nature of metal-insulator transitions. This result is in contrast to the standard Hubbard model for 1D rings, for which Lieb and Wu proved that gap closure was impossible, regardless of the choices for t and U.
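
    A deliberately crude band-broadening estimate (not the FH model itself, and not its actual threshold formula) conveys the qualitative result: an on-fragment charge-excitation cost U survives only while it exceeds the bandwidth opened by the hopping integral T over z neighboring fragments.

```python
# Toy estimate only: gap ~ U minus a hopping bandwidth ~ 2*z*T.
# U, T, z and the linear form are illustrative assumptions.
def charge_gap(U, T, z=2):
    return max(0.0, U - 2 * z * T)

U = 4.0
for T in (0.5, 1.0, 1.5):
    print(T, charge_gap(U, T))   # gap closes once T exceeds U/(2z) = 1.0
```

The point of the sketch is only the existence of a hopping threshold beyond which the gap closes, the qualitative behavior the FH model exhibits and the 1D Hubbard ring provably does not.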

  18. Transport coefficient computation based on input/output reduced order models

    NASA Astrophysics Data System (ADS)

    Hurst, Joshua L.

    The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods to capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions (PBC). Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived, and special attention is given to potentials that are evaluated with PBC. For the Gosling description, Linear Time Invariant (LTI) models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full order LTI models can be well approximated by reduced order LTI models. For the Lees-Edwards SLLOD type model, nonlinear ODEs will be approximated by a Linear Time Varying (LTV) model about some nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) will be used to assess the suitability of reduced order models for this system description. An immediate application of the derived LTV models is Quasilinearization or Waveform Relaxation.
    Quasilinearization is Newton's method applied to the ODE operator equation. It is a recursive method that solves nonlinear ODEs by solving an LTV system at each iteration to obtain a new, closer solution. LTV models are derived for both Gosling and Lees-Edwards type models. Particular attention is given to SLLOD Lees-Edwards models because they are in the form most amenable to performing a Taylor series expansion, and are the most commonly used models for examining viscosity. With linear models developed, a method is presented to calculate viscosity based on LTI Gosling models, but it is shown to have some limitations. To address these issues, LTV SLLOD models are analyzed with both Balanced Truncation and POD, and both show that significant order reduction is possible. By examining the singular values of both techniques, it is shown that Balanced Truncation has the potential to offer greater reduction, which should be expected as it is based on the input/output mapping instead of just the state information as in POD. Obtaining reduced order systems that capture the property of interest is challenging. For Balanced Truncation, reduced order models for 1-D LJ and FENE systems are obtained and are shown to capture the output of interest fairly well. However, numerical challenges currently limit this analysis to small order systems. Suggestions are presented to extend this method to larger systems. In addition, reduced 2nd order systems are obtained from POD. Here the challenge is extending the solution beyond the original period used for the projection, in particular identifying the manifold the solution travels along. The remaining challenges are presented and discussed.
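
    The POD step described above amounts to an SVD of a snapshot matrix: singular values rank the modes, and truncating to the leading modes gives the reduced-order basis. A minimal sketch on synthetic trajectories (assumed dimensions; not the thesis's LJ/FENE systems):

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshot matrix: each column is one state of a (synthetic) trajectory
# that actually lives in a 3-D subspace of a 50-D state space.
basis = rng.normal(size=(50, 3))
amplitudes = rng.normal(size=(3, 200))
snapshots = basis @ amplitudes

# POD = SVD of the snapshot matrix; singular values rank the modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

r = 3
reduced = U[:, :r]                          # reduced-order POD basis
approx = reduced @ (reduced.T @ snapshots)  # project snapshots onto it
err = np.linalg.norm(snapshots - approx) / np.linalg.norm(snapshots)
print(s[:5], err)
```

The sharp drop in the singular values after the third mode is exactly the signature used in the thesis to judge how much order reduction is possible; Balanced Truncation plays the analogous role but weighs modes by input/output importance rather than state energy.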

  19. Co-occurrence and seasonal and environmental distributions of the sandflies Lutzomyia longipalpis and Nyssomyia whitmani in the city of Puerto Iguazú, northeastern Argentina.

    PubMed

    Santini, M S; Fernández, M S; Cavia, R; Salomón, O D

    2018-06-01

    The aim of this work was to study the distribution of Phlebotominae (Diptera: Psychodidae) abundance in time and space in an area in northeastern Argentina with vector transmission of visceral and tegumentary leishmaniasis. For this, 51 households were selected using a 'worst scenario' criterion, where one light trap was set during two consecutive nights in peridomiciles at the transitions between the four seasons, and the environment was surveyed simultaneously. The relationships of phlebotomine assemblage structure and the most abundant species with seasonality and environmental variables were evaluated using a canonical correspondence analysis and generalized linear mixed models, respectively. A total of 5110 individuals were captured. Lutzomyia longipalpis (Lutz & Neiva, 1912) and Nyssomyia whitmani (Antunes & Coutinho, 1939) were the most abundant species captured in all samplings (98.3% of the total capture). The period of highest abundance of Lu. longipalpis was early autumn, and it was distributed in the most urbanized areas. Nyssomyia whitmani occupied mainly the less urbanized areas, showing peaks of abundance in early spring and summer. Other species were captured in low numbers and showed seasonal-spatial variations similar to those of Ny. whitmani. We confirmed Leishmania spp. vector persistence throughout the year in spatial patches of high abundance even during the less favorable season. © 2017 The Royal Entomological Society.

  20. An analytical model for light backscattering by coccoliths and coccospheres of Emiliania huxleyi.

    PubMed

    Fournier, Georges; Neukermans, Griet

    2017-06-26

    We present an analytical model for light backscattering by coccoliths and coccolithophores of the marine calcifying phytoplankter Emiliania huxleyi. The model is based on the separation of the effects of diffraction, refraction, and reflection on scattering, a valid assumption for particle sizes typical of coccoliths and coccolithophores. Our model results match closely with results from an exact scattering code that uses complex particle geometry and our model also mimics well abrupt transitions in scattering magnitude. Finally, we apply our model to predict changes in the spectral backscattering coefficient during an Emiliania huxleyi bloom with results that closely match in situ measurements. Because our model captures the key features that control the light backscattering process, it can be generalized to coccoliths and coccolithophores of different morphologies which can be obtained from size-calibrated electron microphotographs. Matlab codes of this model are provided as supplementary material.

  1. On a generalized laminate theory with application to bending, vibration, and delamination buckling in composite laminates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbero, E.J.

    1989-01-01

    In this study, a computational model for accurate analysis of composite laminates, including laminates with delaminated interfaces, is developed. An accurate prediction of stress distributions, including interlaminar stresses, is obtained by using the Generalized Laminate Plate Theory of Reddy, in which a layer-wise linear approximation of the displacements through the thickness is used. Analytical as well as finite-element solutions of the theory are developed for bending and vibrations of laminated composite plates for the linear theory. Geometrical nonlinearity, including buckling and postbuckling, is included and used to perform stress analysis of laminated plates. A general two-dimensional theory of laminated cylindrical shells is also developed in this study. Geometrical nonlinearity and transverse compressibility are included. Delaminations between layers of composite plates are modelled by jump discontinuity conditions at the interfaces. The theory includes multiple delaminations through the thickness. Geometric nonlinearity is included to capture layer buckling. The strain energy release rate distribution along the boundary of delaminations is computed by a novel algorithm. The computational models presented herein are accurate for global behavior and particularly appropriate for the study of local effects.

  2. Spatial capture-recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sollmann, Rahel; Gardner, Beth

    2013-01-01

    Spatial Capture-Recapture provides a revolutionary extension of traditional capture-recapture methods for studying animal populations using data from live trapping, camera trapping, DNA sampling, acoustic sampling, and related field methods. This book is a conceptual and methodological synthesis of spatial capture-recapture modeling. As a comprehensive how-to manual, this reference contains detailed examples of a wide range of relevant spatial capture-recapture models for inference about population size and spatial and temporal variation in demographic parameters. Practicing field biologists studying animal populations will find this book to be a useful resource, as will graduate students and professionals in ecology, conservation biology, and fisheries and wildlife management.
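
    A core ingredient of the spatial capture-recapture models the book covers is a distance-dependent detection function linking each trap to an animal's latent activity center. A commonly used half-normal form (with illustrative parameter values, not values from the book) can be sketched as:

```python
import math

# Half-normal detection model: probability that a trap at distance d from
# an animal's activity center detects it. p0 and sigma are illustrative.
def detection_prob(d, p0=0.6, sigma=2.0):
    return p0 * math.exp(-d**2 / (2 * sigma**2))

for d in (0.0, 2.0, 6.0):
    print(d, round(detection_prob(d), 3))
```

Fitting p0 and sigma to the spatial pattern of captures across a trap array is what lets these models separate detectability from density, the central inference problem of the book.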

  3. Distribution, density, and biomass of introduced small mammals in the southern mariana islands

    USGS Publications Warehouse

    Wiewel, A.S.; Adams, A.A.Y.; Rodda, G.H.

    2009-01-01

    Although it is generally accepted that introduced small mammals have detrimental effects on island ecology, our understanding of these effects is frequently limited by incomplete knowledge of small mammal distribution, density, and biomass. Such information is especially critical in the Mariana Islands, where small mammal density is inversely related to effectiveness of Brown Tree Snake (Boiga irregularis) control tools, such as mouse-attractant traps. We used mark-recapture sampling to determine introduced small mammal distribution, density, and biomass in the major habitats of Guam, Rota, Saipan, and Tinian, including grassland, Leucaena forest, and native limestone forest. Of the five species captured, Rattus diardii (sensu Robins et al. 2007) was most common across habitats and islands. In contrast, Mus musculus was rarely captured at forested sites, Suncus murinus was not captured on Rota, and R. exulans and R. norvegicus captures were uncommon. Modeling indicated that neophobia, island, sex, reproductive status, and rain amount influenced R. diardii capture probability, whereas time, island, and capture heterogeneity influenced S. murinus and M. musculus capture probability. Density and biomass were much greater on Rota, Saipan, and Tinian than on Guam, most likely a result of Brown Tree Snake predation pressure on the latter island. Rattus diardii and M. musculus density and biomass were greatest in grassland, whereas S. murinus density and biomass were greatest in Leucaena forest. The high densities documented during this research suggest that introduced small mammals (especially R. diardii) are impacting abundance and diversity of the native fauna and flora of the Mariana Islands. Further, Brown Tree Snake control and management tools that rely on mouse attractants will be less effective on Rota, Saipan, and Tinian than on Guam. 
    If the Brown Tree Snake becomes established on these islands, high-density introduced small mammal populations will likely facilitate and support a high-density Brown Tree Snake population, even as native species are reduced or extirpated. © 2009 by University of Hawai'i Press. All rights reserved.

  4. Experimental and theoretical studies of implant assisted magnetic drug targeting

    NASA Astrophysics Data System (ADS)

    Aviles, Misael O.

    One way to achieve drug targeting in the body is to incorporate magnetic nanoparticles into drug carriers and then retain them at the site using an externally applied magnetic field. This process is referred to as magnetic drug targeting (MDT). However, the main limitation of MDT is that an externally applied magnetic field alone may not be able to retain a sufficient number of magnetic drug carrier particles (MDCPs) to justify its use. Such a limitation might not exist when high gradient magnetic separation (HGMS) principles are applied to assist MDT by means of ferromagnetic implants. It was hypothesized that an Implant-Assisted MDT (IA-MDT) system would increase the retention of the MDCPs at a target site where an implant had been previously located, since the magnetic forces are produced internally. With this in mind, the overall objective of this work was to demonstrate the feasibility of an IA-MDT system through mathematical modeling and in vitro experimentation. The mathematical models were developed and used to demonstrate the behavior and limitations of IA-MDT, and the in vitro experiments were designed and used to validate the models and to further elucidate the important parameters that affect the performance of the system. IA-MDT was studied with three plausible implants, ferromagnetic stents, seed particles, and wires. All implants were studied theoretically and experimentally using flow through systems with polymer particles containing magnetite nanoparticles as MDCPs. In the stent studies, a wire coil or mesh was simply placed in a flow field and the capture of the MDCPs was studied. In the other cases, a porous polymer matrix was used as a surrogate capillary tissue scaffold to study the capture of the MDCPs using wires or particle seeds as the implant, with the seeds either fixed within the polymer matrix or captured prior to capturing the MDCPs. An in vitro heart tissue perfusion model was also used to study the use of stents.
In general, all the results demonstrated that IA-MDT is indeed feasible and that careful modification of the MDCP and implant properties is fundamental to the success of this technology.

  5. Strategies for Reduced-Order Models in Uncertainty Quantification of Complex Turbulent Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Qi, Di

Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced-order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, is designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. Effects in complicated flow systems, including strong nonlinear and non-Gaussian interactions, are considered, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models.
Most importantly, empirical information theory and statistical linear response theory are applied in the training phase for calibrating model errors to achieve optimal imperfect model parameters; and total statistical energy dynamics are introduced to improve the model sensitivity in the prediction phase especially when strong external perturbations are exerted. The validity of reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models with increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.

  6. Dynamical influence processes on networks: general theory and applications to social contagion.

    PubMed

    Harris, Kameron Decker; Danforth, Christopher M; Dodds, Peter Sheridan

    2013-08-01

    We study binary state dynamics on a network where each node acts in response to the average state of its neighborhood. By allowing varying amounts of stochasticity in both the network and node responses, we find different outcomes in random and deterministic versions of the model. In the limit of a large, dense network, however, we show that these dynamics coincide. We construct a general mean-field theory for random networks and show this predicts that the dynamics on the network is a smoothed version of the average response function dynamics. Thus, the behavior of the system can range from steady state to chaotic depending on the response functions, network connectivity, and update synchronicity. As a specific example, we model the competing tendencies of imitation and nonconformity by incorporating an off-threshold into standard threshold models of social contagion. In this way, we attempt to capture important aspects of fashions and societal trends. We compare our theory to extensive simulations of this "limited imitation contagion" model on Poisson random graphs, finding agreement between the mean-field theory and stochastic simulations.
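The limited imitation contagion idea can be sketched directly: each node responds to the average state of its neighborhood, turning on only when that average lies between an on-threshold and an off-threshold. The synchronous update and the threshold values 0.2/0.8 below are illustrative assumptions, not the paper's exact response functions.

```python
import random

def poisson_random_graph(n, mean_degree, seed=0):
    """Erdos-Renyi G(n, p) graph; node degrees are approximately Poisson."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def step(states, nbrs, phi_on=0.2, phi_off=0.8):
    """Synchronous update: a node is active next step iff the active
    fraction of its neighborhood lies in [phi_on, phi_off] (imitation
    below the off-threshold, nonconformity above it)."""
    new = {}
    for i, ns in nbrs.items():
        if not ns:
            new[i] = states[i]  # isolated node keeps its state
            continue
        f = sum(states[j] for j in ns) / len(ns)
        new[i] = 1 if phi_on <= f <= phi_off else 0
    return new
```

Iterating `step` on a Poisson random graph reproduces the qualitative behavior described above: depending on the thresholds and connectivity, trajectories settle to a steady state or keep fluctuating.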

  7. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
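The flavor of the taxi-out queuing model can be sketched by treating the runway as a single FIFO server: each departure waits for the runway to clear, then occupies it for a fixed service time. The single-server reduction and constant service time are simplifying assumptions, not the paper's calibrated model.

```python
def taxi_out_times(pushback, runway_service):
    """Runway as a single FIFO server: returns the taxi-out time
    (queueing delay plus takeoff-roll service) for each departure,
    processed in pushback order."""
    free_at = 0.0
    waits = []
    for t in sorted(pushback):
        start = max(t, free_at)           # wait until the runway is free
        free_at = start + runway_service  # runway occupied during the roll
        waits.append(free_at - t)
    return waits
```

With pushbacks every minute and a 2-minute service time, delays accumulate linearly, which is the congestion behavior such queuing models are built to capture.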

  8. A Wavelet Polarization Decomposition Net Model for Polarimetric SAR Image Classification

    NASA Astrophysics Data System (ADS)

    He, Chu; Ou, Dan; Yang, Teng; Wu, Kun; Liao, Mingsheng; Chen, Erxue

    2014-11-01

In this paper, a deep model based on wavelet texture is proposed for Polarimetric Synthetic Aperture Radar (PolSAR) image classification, inspired by the recent success of deep learning methods. Our model is intended to learn powerful and informative representations that improve generalization in complex scene classification tasks. Given the influence of speckle noise in PolSAR images, wavelet polarization decomposition is applied first to obtain basic and discriminative texture features, which are then embedded into a Deep Neural Network (DNN) to compose multi-layer higher-level representations. We demonstrate that the model produces a powerful representation that captures information from PolSAR images that is otherwise difficult to extract, and that it shows promising results in comparison with traditional SAR image classification methods on the SAR image dataset.

  9. Investigating changes in brain network properties in HIV-associated neurocognitive disease (HAND) using mutual connectivity analysis (MCA)

    NASA Astrophysics Data System (ADS)

    Abidin, Anas Zainul; D'Souza, Adora M.; Nagarajan, Mahesh B.; Wismüller, Axel

    2016-03-01

About 50% of subjects infected with HIV present deficits in cognitive domains, which are known collectively as HIV-associated neurocognitive disorder (HAND). The underlying synaptodendritic damage can be captured using resting-state functional MRI, as has been demonstrated by a few earlier studies. Such damage may induce topological changes in brain connectivity networks. We test this hypothesis by capturing the functional interdependence of 90 brain network nodes using a Mutual Connectivity Analysis (MCA) framework with non-linear time series modeling based on Generalized Radial Basis Function (GRBF) neural networks. The network nodes are selected based on the regions defined in the Automated Anatomical Labeling (AAL) atlas. Each node is represented by the average time series of the voxels of that region. The resulting networks are then characterized using graph-theoretic measures that quantify various network topology properties at both a global and a local level. We tested for differences in these properties in network graphs obtained for 10 subjects (6 male and 4 female, 5 HIV+ and 5 HIV-). Global network properties captured some differences between these subject cohorts, though significant differences were seen only with the clustering coefficient measure. Local network properties, such as local efficiency and the degree of connections, captured significant differences in regions of the frontal lobe, precentral and cingulate cortex, among a few others. These results suggest that our method can effectively capture differences in brain network connectivity properties revealed by resting-state functional MRI in neurological disease states such as HAND.
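The clustering coefficient, the one global measure reported as significant above, is the fraction of a node's neighbor pairs that are themselves connected; a minimal sketch on an adjacency-set representation:

```python
def local_clustering(nbrs, i):
    """Fraction of pairs of i's neighbors that are themselves linked."""
    ns = list(nbrs[i])
    k = len(ns)
    if k < 2:
        return 0.0  # undefined for degree < 2; conventionally set to 0
    links = sum(1 for a in range(k) for b in range(a + 1, k)
                if ns[b] in nbrs[ns[a]])
    return 2.0 * links / (k * (k - 1))

def average_clustering(nbrs):
    """Network-level clustering: mean of the local coefficients."""
    return sum(local_clustering(nbrs, i) for i in nbrs) / len(nbrs)
```

In a study like this one, `nbrs` would be built by thresholding the MCA interdependence matrix over the 90 AAL regions; group differences are then tested on the resulting coefficients.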

  10. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    PubMed Central

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; Ebrahim, Ali; Saunders, Michael A.; Palsson, Bernhard O.

    2016-01-01

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. This flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models. PMID:27857205

  11. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    DOE PAGES

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; ...

    2016-11-18

Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.

  12. Utility of Policy Capturing as an Approach to Graduate Admissions Decision Making.

    ERIC Educational Resources Information Center

    Schmidt, Frank L.; And Others

    1978-01-01

    The present study examined and evaluated the application of linear policy-capturing models to the real-world decision task of graduate admissions. Utility of the policy-capturing models was great enough to be of practical significance, and least-squares weights showed no predictive advantage over equal weights. (Author/CTM)

  13. Sampling the stream landscape: Improving the applicability of an ecoregion-level capture probability model for stream fishes

    USGS Publications Warehouse

    Mollenhauer, Robert; Mouser, Joshua B.; Brewer, Shannon K.

    2018-01-01

    Temporal and spatial variability in streams result in heterogeneous gear capture probability (i.e., the proportion of available individuals identified) that confounds interpretation of data used to monitor fish abundance. We modeled tow-barge electrofishing capture probability at multiple spatial scales for nine Ozark Highland stream fishes. In addition to fish size, we identified seven reach-scale environmental characteristics associated with variable capture probability: stream discharge, water depth, conductivity, water clarity, emergent vegetation, wetted width–depth ratio, and proportion of riffle habitat. The magnitude of the relationship between capture probability and both discharge and depth varied among stream fishes. We also identified lithological characteristics among stream segments as a coarse-scale source of variable capture probability. The resulting capture probability model can be used to adjust catch data and derive reach-scale absolute abundance estimates across a wide range of sampling conditions with similar effort as used in more traditional fisheries surveys (i.e., catch per unit effort). Adjusting catch data based on variable capture probability improves the comparability of data sets, thus promoting both well-informed conservation and management decisions and advances in stream-fish ecology.
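The adjustment described above amounts to dividing catch by an estimated capture probability (a Horvitz-Thompson-style correction). The logistic form is standard for such models, but the coefficients and covariates below are hypothetical placeholders, not the paper's fitted values:

```python
import math

def capture_probability(beta0, betas, covariates):
    """Logistic-scale capture probability from reach covariates
    (e.g. discharge, depth, conductivity). Coefficients are hypothetical."""
    eta = beta0 + sum(b * x for b, x in zip(betas, covariates))
    return 1.0 / (1.0 + math.exp(-eta))

def adjusted_abundance(catch, p):
    """Reach-scale absolute abundance estimate: N-hat = C / p-hat."""
    return catch / p
```

For example, a catch of 30 fish under an estimated capture probability of 0.5 implies roughly 60 fish available, which is what makes catches comparable across sampling conditions.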

  14. The role of the basic state in the ENSO-monsoon relationship and implications for predictability

    NASA Astrophysics Data System (ADS)

    Turner, A. G.; Inness, P. M.; Slingo, J. M.

    2005-04-01

    The impact of systematic model errors on a coupled simulation of the Asian summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the general-circulation model. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the general-circulation model, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.

  15. Annual survival of Snail Kites in Florida: Radio telemetry versus capture-resighting data

    USGS Publications Warehouse

    Bennetts, R.E.; Dreitz, V.J.; Kitchens, W.M.; Hines, J.E.; Nichols, J.D.

    1999-01-01

    We estimated annual survival of Snail Kites (Rostrhamus sociabilis) in Florida using the Kaplan-Meier estimator with data from 271 radio-tagged birds over a three-year period and capture-recapture (resighting) models with data from 1,319 banded birds over a six-year period. We tested the hypothesis that survival differed among three age classes using both data sources. We tested additional hypotheses about spatial and temporal variation using a combination of data from radio telemetry and single- and multistrata capture-recapture models. Results from these data sets were similar in their indications of the sources of variation in survival, but they differed in some parameter estimates. Both data sources indicated that survival was higher for adults than for juveniles, but they did not support delineation of a subadult age class. Our data also indicated that survival differed among years and regions for juveniles but not for adults. Estimates of juvenile survival using radio telemetry data were higher than estimates using capture-recapture models for two of three years (1992 and 1993). Ancillary evidence based on censored birds indicated that some mortality of radio-tagged juveniles went undetected during those years, resulting in biased estimates. Thus, we have greater confidence in our estimates of juvenile survival using capture-recapture models. Precision of estimates reflected the number of parameters estimated and was surprisingly similar between radio telemetry and single-stratum capture-recapture models, given the substantial differences in sample sizes. Not having to estimate resighting probability likely offsets, to some degree, the smaller sample sizes from our radio telemetry data. Precision of capture-recapture models was lower using multistrata models where region-specific parameters were estimated than using single-stratum models, where spatial variation in parameters was not taken into account.
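The Kaplan-Meier estimator applied to the radio-telemetry data can be sketched as follows; birds whose fates become unknown (e.g. a lost radio signal) enter as censored observations, which is exactly where the undetected-mortality bias discussed above creeps in.

```python
def kaplan_meier(times, events):
    """times: follow-up time per animal; events: 1 = death observed,
    0 = censored. Returns the survival curve as [(t, S(t))] at each
    observed death time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk  # product-limit step
            curve.append((t, surv))
        n_at_risk -= at_t  # deaths and censored both leave the risk set
    return curve
```

If censored birds were actually unrecorded deaths, treating them as censored inflates S(t), which is the direction of bias the authors inferred for juvenile survival.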

  16. Estimating time-varying conditional correlations between stock and foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Tastan, Hüseyin

    2006-02-01

This study explores the dynamic interaction between stock market returns and changes in nominal exchange rates. Many financial variables are known to exhibit fat tails and an autoregressive variance structure. It is well known that unconditional covariance and correlation coefficients also vary significantly over time, and a multivariate generalized autoregressive conditional heteroskedasticity (MGARCH) model is able to capture the time-varying variance-covariance matrix for stock market returns and changes in exchange rates. The model is applied to daily Euro-Dollar exchange rates and two stock market indexes from the US economy: the Dow Jones Industrial Average Index and the S&P 500 Index. News impact surfaces are also drawn from the model estimates to show the effects of idiosyncratic shocks in the respective markets.
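A full MGARCH fit is involved, but the idea of a time-varying conditional correlation can be illustrated with an exponentially weighted (RiskMetrics-style) recursion; this is a simplified stand-in for the paper's model, and the decay factor 0.94 is a conventional illustrative choice.

```python
def ewma_correlation(x, y, lam=0.94):
    """Exponentially weighted variances and covariance of two return
    series, giving a time-varying correlation estimate per step."""
    vx = vy = cxy = None
    rhos = []
    for a, b in zip(x, y):
        if vx is None:
            vx, vy, cxy = a * a, b * b, a * b  # initialize from first obs
        else:
            vx = lam * vx + (1 - lam) * a * a
            vy = lam * vy + (1 - lam) * b * b
            cxy = lam * cxy + (1 - lam) * a * b
        rhos.append(cxy / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0)
    return rhos
```

Fed with daily stock-index returns and exchange-rate changes, the output traces how their comovement strengthens and weakens over time, the same quantity the MGARCH variance-covariance matrix tracks with a richer parameterization.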

  17. Resource Tracking Model Updates and Trade Studies

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Stambaugh, Imelda; Moore, Michael

    2016-01-01

    The Resource Tracking Model has been updated to capture system manager and project manager inputs. Both the Trick/General Use Nodal Network Solver Resource Tracking Model (RTM) simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier Reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated in the increased recycle of vehicle oxygen. Case studies have been run to show the relative effect of performance changes on vehicle resources.

  18. Nature of the Congested Traffic and Quasi-steady States of the General Motor Models

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Xu, Xihua; Pang, John Z. F.; Monterola, Christopher

    2015-03-01

We look at the general motor (GM) class of microscopic traffic models and analyze some of the universal features of the (multi-)cluster solutions, including the emergence of an intrinsic scale and the quasisoliton dynamics. We show that the GM models can capture the essential physics of real traffic dynamics, especially the phase transition from the free flow to the congested phase, from which the wide moving jams emerge (the F-S-J transition pioneered by B.S. Kerner). In particular, the congested phase can be associated either with the multi-cluster quasi-steady states or with their more homogeneous precursor states. In both cases the states can last for a long time, and the narrow clusters will eventually grow and merge, leading to the formation of the wide moving jams. We present a general method to fit the empirical parameters so that both quantitative and qualitative macroscopic empirical features can be reproduced with a minimal GM model. We present numerical results for the traffic dynamics both with and without a bottleneck, including various types of spontaneous and induced ``synchronized flow,'' as well as the evolution of wide moving jams. We also discuss the implications for the nature of different phases in traffic dynamics.
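The simplest member of the GM class sets a follower's acceleration proportional to its velocity difference with the car ahead; a minimal sketch, with an illustrative sensitivity and time step, and with the headway- and velocity-dependent factors of the fuller GM models omitted:

```python
def gm_follower(v0, leader_v, alpha=0.5, dt=0.1, steps=200):
    """Simplest GM car-following rule: dv/dt = alpha * (v_leader - v).
    Euler-integrates the follower's speed; it relaxes exponentially
    toward the leader's speed with rate alpha."""
    v = v0
    for _ in range(steps):
        v += alpha * (leader_v - v) * dt
    return v
```

The richer GM variants multiply the sensitivity by powers of speed and inverse headway, and it is those nonlinear factors, coupled over many vehicles, that produce the cluster solutions and jam formation analyzed in the paper.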

  19. The problem of latent attentional capture: Easy visual search conceals capture by task-irrelevant abrupt onsets.

    PubMed

    Gaspelin, Nicholas; Ruthruff, Eric; Lien, Mei-Ching

    2016-08-01

Researchers are sharply divided regarding whether irrelevant abrupt onsets capture spatial attention. Numerous studies report that they do and a roughly equal number report that they do not. This puzzle has inspired numerous attempts at reconciliation, none gaining general acceptance. The authors propose that abrupt onsets routinely capture attention, but the size of observed capture effects depends critically on how long attention dwells on distractor items which, in turn, depends critically on search difficulty. In a series of spatial cuing experiments, the authors show that irrelevant abrupt onsets produce robust capture effects when visual search is difficult, but not when search is easy. Critically, this effect occurs even when search difficulty varies randomly across trials, preventing any strategic adjustments of the attentional set that could modulate probability of capture by the onset cue. The authors argue that easy visual search provides an insensitive test for stimulus-driven capture by abrupt onsets: even though onsets truly capture attention, the effects of capture can be latent. This observation helps to explain previous failures to find capture by onsets, nearly all of which used an easy visual search.

  20. The Problem of Latent Attentional Capture: Easy Visual Search Conceals Capture by Task-Irrelevant Abrupt Onsets

    PubMed Central

    Gaspelin, Nicholas; Ruthruff, Eric; Lien, Mei-Ching

    2016-01-01

    Researchers are sharply divided regarding whether irrelevant abrupt onsets capture spatial attention. Numerous studies report that they do and a roughly equal number report that they do not. This puzzle has inspired numerous attempts at reconciliation, none gaining general acceptance. We propose that abrupt onsets routinely capture attention, but the size of observed capture effects depends critically on how long attention dwells on distractor items which, in turn, depends critically on search difficulty. In a series of spatial cuing experiments, we show that irrelevant abrupt onsets produce robust capture effects when visual search is difficult, but not when search is easy. Critically, this effect occurs even when search difficulty varies randomly across trials, preventing any strategic adjustments of the attentional set that could modulate probability of capture by the onset cue. We argue that easy visual search provides an insensitive test for stimulus-driven capture by abrupt onsets: even though onsets truly capture attention, the effects of capture can be latent. This observation helps to explain previous failures to find capture by onsets, nearly all of which employed an easy visual search. PMID:26854530

  1. Estimation of capture zones and drawdown at the Northwest and West Well Fields, Miami-Dade County, Florida, using an unconstrained Monte Carlo analysis: recent (2004) and proposed conditions

    USGS Publications Warehouse

    Brakefield, Linzy K.; Hughes, Joseph D.; Langevin, Christian D.; Chartier, Kevin

    2013-01-01

Travel-time capture zones and drawdown for two production well fields, used for drinking-water supply in Miami-Dade County, southeastern Florida, were delineated by the U.S. Geological Survey using an unconstrained Monte Carlo analysis. The well fields, designed to supply a combined total of approximately 250 million gallons of water per day, pump from the highly transmissive Biscayne aquifer in the urban corridor between the Everglades and Biscayne Bay. A transient groundwater flow model was developed and calibrated to field data to ensure an acceptable match between simulated and observed values for aquifer heads and net exchange of water between the aquifer and canals. Steady-state conditions were imposed on the transient model and a post-processing backward particle-tracking approach was implemented. Multiple stochastic realizations of horizontal hydraulic conductivity, conductance of canals, and effective porosity were simulated for steady-state conditions representative of dry, average, and wet hydrologic conditions to calculate travel-time capture zones of potential source areas of the well fields. Quarry lakes, formed as a product of rock-mining activities, whose effects have previously not been considered in estimation of capture zones, were represented using high hydraulic-conductivity, high-porosity cells, with the bulk hydraulic conductivity of each cell calculated based on estimates of aquifer hydraulic conductivity, lake depths, and aquifer thicknesses. A post-processing adjustment, based on residence times calculated from lake outflows and known lake volumes, was used to adjust particle endpoints to account for residence-time-based mixing in the lakes. Drawdown contours of 0.1 and 0.25 foot were delineated for the dry, average, and wet hydrologic conditions as well.
In addition, 95-percent confidence intervals (CIs) were calculated for the capture zones and drawdown contours to delineate a zone of uncertainty about the median estimates. Results of the Monte Carlo simulations indicate particle travel distances at the Northwest Well Field (NWWF) and West Well Field (WWF) are greatest to the west, towards the Everglades. The man-made quarry lakes substantially affect particle travel distances. In general near the NWWF, the capture zones in areas with lakes were smaller in areal extent than capture zones in areas without lakes. It is possible that contamination could reach the well fields quickly, within 10 days in some cases, if it were introduced into lakes nearest to supply wells, with one of the lakes being only approximately 650 feet from the nearest supply well. In addition to estimating drawdown and travel-time capture zones of 10, 30, 100, and 210 days for the NWWF and the WWF under more recent conditions, two proposed scenarios were evaluated with Monte Carlo simulations: the potential hydrologic effects of proposed Everglades groundwater seepage mitigation and quarry-lake expansion. The seepage mitigation scenario included the addition of two proposed anthropogenic features to the model: (1) an impermeable horizontal flow barrier east of the L-31N canal along the western model boundary between the Everglades and the urban areas of Miami-Dade County, and (2) a recharge canal along the Dade-Broward Levee near the NWWF. Capture zones and drawdown for the WWF were substantially affected by the addition of the barrier, which eliminates flow from the western boundary into the active model domain, shifting the predominant capture zone source area from the west more to the north and south. The 95-percent CI for the 210-day capture zone moved slightly in the NWWF as a result of the recharge canal. The lake-expansion scenario incorporated a proposed increase in the number and surface area of lakes by an additional 25 square miles. 
This scenario represents a 150-percent increase from the 2004 lake surface area near both well fields, but with the majority of increase proposed near the NWWF. The lake-expansion scenario substantially decreased the extent of the 210-day capture zone of the NWWF, which is limited to the lakes nearest the well field under proposed conditions.
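The residence-time adjustment described in this record can be sketched as a back-of-envelope calculation: each lake a particle crosses adds roughly volume/outflow to its advective travel time. The numbers below are illustrative, not values from the report.

```python
def lake_residence_time(volume, outflow_per_day):
    """First-order estimate: mean residence time (days) = volume / outflow,
    both in consistent volume units."""
    return volume / outflow_per_day

def adjusted_travel_time(advective_days, lakes):
    """Add an estimated residence time for each lake a particle crosses;
    lakes is a list of (volume, outflow_per_day) pairs."""
    return advective_days + sum(lake_residence_time(v, q) for v, q in lakes)
```

This is why the lakes shrink some capture zones while creating fast pathways elsewhere: a well-mixed lake delays a particle by its residence time, but a lake sitting 650 feet from a supply well leaves almost no advective distance to cover.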

  2. TRIC: Capturing the direct cellular targets of promoter‐bound transcriptional activators

    PubMed Central

    Dugan, Amanda; Pricer, Rachel; Katz, Micah

    2016-01-01

Transcriptional activators coordinate the dynamic assembly of multiprotein coactivator complexes required for gene expression to occur. Here we combine the power of in vivo covalent chemical capture with p-benzoyl-L-phenylalanine (Bpa), a genetically incorporated photo-crosslinking amino acid, and chromatin immunoprecipitation (ChIP) to capture the direct protein interactions of the transcriptional activator VP16 with the general transcription factor TBP at the GAL1 promoter in live yeast. PMID:27213278

  3. Decadal climate predictability in the southern Indian Ocean captured by SINTEX-F using a simple SST-nudging scheme.

    PubMed

    Morioka, Yushi; Doi, Takeshi; Behera, Swadhin K

    2018-01-26

Decadal climate variability in the southern Indian Ocean has a great influence on southern African climate through modulation of the atmospheric circulation. Although many efforts have been made toward understanding the physical mechanisms, the predictability of the decadal climate variability, in particular the internally generated variability independent of external atmospheric forcing, remains poorly understood. This study investigates the predictability of the decadal climate variability in the southern Indian Ocean using a coupled general circulation model, called SINTEX-F. The ensemble members of the decadal reforecast experiments were initialized with a simple sea surface temperature (SST) nudging scheme. The observed positive and negative peaks during the late 1990s and late 2000s are well reproduced in the reforecast experiments initiated from 1994 and 1999, respectively. The experiments initiated from 1994 successfully capture warm SST and high sea-level pressure anomalies propagating from the South Atlantic to the southern Indian Ocean. The experiments initiated from 1999 skillfully predict the phase change from a positive to a negative peak. These results suggest that the simple SST-nudging initialization captures the essential ingredients of the internally generated decadal climate variability in the southern Indian Ocean and its predictability.
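SST nudging amounts to Newtonian relaxation of the model's sea surface temperature toward observations; a one-line sketch, where the 5-day restoring timescale is purely illustrative and not SINTEX-F's actual setting:

```python
def nudge_sst(model_sst, obs_sst, dt_days, tau_days=5.0):
    """One relaxation step: pull the model SST toward the observed value
    with e-folding (restoring) timescale tau_days."""
    return model_sst + (obs_sst - model_sst) * dt_days / tau_days
```

Applied every time step during initialization, the model-observation mismatch decays exponentially, leaving the coupled model's ocean state close to observations when the free-running reforecast begins.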

  4. Neutron Capture Gamma-Ray Libraries for Nuclear Applications

    NASA Astrophysics Data System (ADS)

    Sleaford, B. W.; Firestone, R. B.; Summers, N.; Escher, J.; Hurst, A.; Krticka, M.; Basunia, S.; Molnar, G.; Belgya, T.; Revay, Z.; Choi, H. D.

    2011-06-01

    The neutron capture reaction is useful in identifying and analyzing the gamma-ray spectrum from an unknown assembly as it gives unambiguous information on its composition. This can be done passively or actively where an external neutron source is used to probe an unknown assembly. There are known capture gamma-ray data gaps in the ENDF libraries used by transport codes for various nuclear applications. The Evaluated Gamma-ray Activation file (EGAF) is a new thermal neutron capture database of discrete line spectra and cross sections for over 260 isotopes that was developed as part of an IAEA Coordinated Research Project. EGAF is being used to improve the capture gamma production in ENDF libraries. For medium to heavy nuclei the quasi continuum contribution to the gamma cascades is not experimentally resolved. The continuum contains up to 90% of all the decay energy and is modeled here with the statistical nuclear structure code DICEBOX. This code also provides a consistency check of the level scheme nuclear structure evaluation. The calculated continuum is of sufficient accuracy to include in the ENDF libraries. This analysis also determines new total thermal capture cross sections and provides an improved RIPL database. For higher energy neutron capture there is less experimental data available making benchmarking of the modeling codes more difficult. We are investigating the capture spectra from higher energy neutrons experimentally using surrogate reactions and modeling this with Hauser-Feshbach codes. This can then be used to benchmark CASINO, a version of DICEBOX modified for neutron capture at higher energy. This can be used to simulate spectra from neutron capture at incident neutron energies up to 20 MeV to improve the gamma-ray spectrum in neutron data libraries used for transport modeling of unknown assemblies.

  5. Generalized seasonal autoregressive integrated moving average models for count data with application to malaria time series with low case numbers.

    PubMed

    Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope

    2013-01-01

    With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
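    The observation-driven model class can be sketched with a minimal negative-binomial GARMA(1,0)-style generator with deterministic seasonality. All parameter values below are illustrative assumptions, not the authors' fitted Sri Lanka model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    beta0, phi, amp, r = 1.5, 0.6, 0.8, 5.0  # intercept, AR term, seasonal amplitude, NB dispersion
    c = 0.1                                   # offset so log(y + c) is defined at zero counts
    n, period = 120, 12                       # ten years of monthly counts

    y = np.zeros(n)
    for t in range(n):
        season = amp * np.sin(2 * np.pi * t / period)
        if t == 0:
            eta = beta0 + season
        else:
            # observation-driven link: the past (transformed) count feeds the linear predictor
            eta = beta0 + season + phi * (np.log(y[t - 1] + c) - beta0)
        mu = np.exp(eta)
        p = r / (r + mu)                      # NB parameterization with mean mu, size r
        y[t] = rng.negative_binomial(r, p)
    ```

    Because the observation is drawn from a negative binomial rather than a Gaussian, the simulated series stays integer-valued and non-negative even when the seasonal trough drives the mean close to zero, which is exactly the low-count regime the abstract highlights.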

  6. Generalized Seasonal Autoregressive Integrated Moving Average Models for Count Data with Application to Malaria Time Series with Low Case Numbers

    PubMed Central

    Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope

    2013-01-01

    Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448

  7. Asymmetric capture of Dirac dark matter by the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Clementz, Stefan

    2015-08-18

    Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei, as well as the case where the capture process is considered to be a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured numbers of particles are competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in this type of model.
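    The fluctuation part of the Poisson-process argument can be made concrete with a toy simulation: if captures of particles and anti-particles are independent Poisson processes with equal expected counts, statistical fluctuations alone produce a number asymmetry of order sqrt(N). The expected count used here is invented for illustration; the abstract's full argument also involves differing cross sections and annihilation, which this sketch omits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    expected_captures = 1.0e6      # hypothetical expected captures per species
    trials = 10_000                # independent realizations of the capture history

    n_dm = rng.poisson(expected_captures, size=trials)
    n_anti = rng.poisson(expected_captures, size=trials)
    asymmetry = np.abs(n_dm - n_anti)

    mean_asym = asymmetry.mean()
    # The difference of two Poisson(lam) variables has std = sqrt(2*lam), and for a
    # near-Gaussian difference E|X| = sqrt(2/pi) * std.
    predicted = np.sqrt(2 / np.pi) * np.sqrt(2 * expected_captures)
    ```

    The simulated mean asymmetry matches the sqrt(2 lambda) prediction, showing that even a perfectly symmetric halo leaves a residual particle/anti-particle imbalance in any single capture history.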

  8. Stochastic Car-Following Model for Explaining Nonlinear Traffic Phenomena

    NASA Astrophysics Data System (ADS)

    Meng, Jianping; Song, Tao; Dong, Liyun; Dai, Shiqiang

    There is a common time parameter representing the sensitivity or the lag (response) time of drivers in many car-following models. From the viewpoint of traffic psychology, this parameter can be considered the perception-response time (PRT). Generally, this parameter was set to a constant in previous models. However, the PRT is actually not a constant but a random variable described by the lognormal distribution. Thus probability can be naturally introduced into car-following models by recovering the probability of the PRT. To demonstrate this idea, a specific stochastic model is constructed based on the optimal velocity model. By conducting simulations under periodic boundary conditions, it is found that some important traffic phenomena, such as hysteresis and phantom traffic jams, can be reproduced more realistically. In particular, an interesting experimental feature of traffic jams, i.e., two moving jams propagating in parallel with constant speed stably and sustainably, is successfully captured by the present model.
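    The construction can be sketched as an optimal velocity model on a ring road in which each driver's sensitivity is 1/PRT, with the PRT drawn from a lognormal distribution. The OV function and all parameter values are generic Bando-type choices, not the paper's calibrated model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def optimal_velocity(dx, v_max=2.0, h_c=4.0):
        """Bando-type optimal velocity as a function of the headway dx."""
        return v_max * (np.tanh(dx - h_c) + np.tanh(h_c)) / (1.0 + np.tanh(h_c))

    n_cars, road_length, dt, steps = 20, 100.0, 0.05, 2000
    prt = rng.lognormal(mean=np.log(0.8), sigma=0.2, size=n_cars)  # seconds, per driver
    sensitivity = 1.0 / prt                                        # replaces the constant a

    x = np.linspace(0.0, road_length, n_cars, endpoint=False)
    x += rng.normal(0.0, 0.1, n_cars)      # small perturbation to trigger jam formation
    v = optimal_velocity(np.full(n_cars, road_length / n_cars))

    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % road_length   # periodic boundary conditions
        v += dt * sensitivity * (optimal_velocity(headway) - v)
        v = np.maximum(v, 0.0)                          # no backward motion
        x = (x + dt * v) % road_length
    ```

    Drawing the sensitivity once per driver is one possible reading of "recovering the probability of PRT"; resampling it each time step would be another.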

  9. The use of auxiliary variables in capture-recapture and removal experiments

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1984-01-01

    The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
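    The linear logistic model for capture probability can be written logit(p_i) = b0 + b1 * x_i, where x_i is an auxiliary variable such as body weight, and fitted by maximum likelihood. The sketch below simulates such data and fits it with a plain Newton-Raphson iteration; the coefficients and covariate ranges are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Simulate capture indicators whose probability depends on body weight
    b0_true, b1_true = -2.0, 0.08
    weight = rng.uniform(10.0, 60.0, size=500)     # hypothetical body weights
    p_capture = 1.0 / (1.0 + np.exp(-(b0_true + b1_true * weight)))
    captured = rng.binomial(1, p_capture)

    # Newton-Raphson on the Bernoulli log-likelihood (logistic regression MLE)
    X = np.column_stack([np.ones_like(weight), weight])
    beta = np.zeros(2)
    for _ in range(25):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (captured - p)                        # score vector
        hess = -(X * (p * (1 - p))[:, None]).T @ X         # observed information (negated)
        beta -= np.linalg.solve(hess, grad)                # Newton ascent step
    ```

    The recovered coefficients approach the generating values, illustrating how a continuous auxiliary variable enters the capture probability rather than being ignored or discretized.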

  10. Analyzing developmental processes on an individual level using nonstationary time series modeling.

    PubMed

    Molenaar, Peter C M; Sinclair, Katerina O; Rovine, Michael J; Ram, Nilam; Corneal, Sherry E

    2009-01-01

    Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in developmental processes over time using a multivariate nonstationary time series model. They apply this model to describe the changing relationships between a biological son and father and a stepson and stepfather at the individual level. The authors also explain how to use an extended Kalman filter with iteration and smoothing estimator to capture how dynamics change over time. Finally, they suggest further applications of the multivariate nonstationary time series model and detail the next steps in the development of statistical models used to analyze individual-level data.
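    A simplified linear analogue of the filtering idea can be sketched as follows: a slowly drifting AR(1) coefficient is treated as a random-walk state and tracked with a scalar Kalman filter. The authors use an extended Kalman filter with iteration and smoothing on a multivariate model; this one-dimensional, filter-only version with invented parameters only illustrates the principle of estimating time-varying dynamics.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    T, q, r = 300, 1e-3, 0.25          # series length, state noise var, observation noise var
    phi_true = 0.2 + 0.6 * np.linspace(0, 1, T)   # coefficient drifts from 0.2 to 0.8
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi_true[t] * y[t - 1] + rng.normal(0, np.sqrt(r))

    phi_hat, P = 0.0, 1.0              # state estimate and its variance
    estimates = np.zeros(T)
    for t in range(1, T):
        P += q                          # predict: random-walk state
        H = y[t - 1]                    # observation "matrix" is the lagged value
        S = H * P * H + r               # innovation variance
        K = P * H / S                   # Kalman gain
        phi_hat += K * (y[t] - H * phi_hat)
        P *= (1 - K * H)
        estimates[t] = phi_hat
    ```

    The filtered estimate follows the drifting coefficient, which is the individual-level, nonstationary quantity of interest; a smoothing pass, as the authors describe, would reduce the remaining lag and noise.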

  11. Orthogonal-blendshape-based editing system for facial motion capture data.

    PubMed

    Li, Qing; Deng, Zhigang

    2008-01-01

    The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
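    The core equivalence, that editing truncated-PCA weights is editing the motion capture data, can be sketched with synthetic frames standing in for a real capture session (the data, the mode count, and the edit below are all illustrative).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "motion capture" frames: rows are frames, columns stacked coordinates
    n_frames, n_coords, k = 200, 30, 5
    frames = rng.normal(size=(n_frames, n_coords)) @ rng.normal(size=(n_coords, n_coords)) * 0.1

    mean = frames.mean(axis=0)
    u, s, vt = np.linalg.svd(frames - mean, full_matrices=False)
    basis = vt[:k]                               # top-k eigenvectors: the orthogonal "blendshapes"
    weights = (frames - mean) @ basis.T          # per-frame blendshape weights (PCA coefficients)

    # Editing one frame = editing its weights, then reconstructing
    edited = weights[10].copy()
    edited[0] += 1.0                             # shift the first mode of frame 10
    edited_frame = mean + edited @ basis
    reconstructed = mean + weights[10] @ basis   # unedited reconstruction for comparison
    ```

    Because the basis rows are orthonormal, each weight can be adjusted independently; the paper builds one such basis per anatomical face region and adds constrained weight propagation on top.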

  12. 26 CFR 1.475(a)-4 - Valuation safe harbor.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... achieved a predictable net cash flow (for example, a synthetic annuity) that reflects the captured bid-ask spread. This net cash flow is generally impervious to market fluctuations in the values on which the... cash flow attributable to the capture of these spreads. (3) Summary of paragraphs. Paragraph (b) of...

  13. Modelling biological behaviours with the unified modelling language: an immunological case study and critique.

    PubMed

    Read, Mark; Andrews, Paul S; Timmis, Jon; Kumar, Vipin

    2014-10-06

    We present a framework to assist the diagrammatic modelling of complex biological systems using the unified modelling language (UML). The framework comprises three levels of modelling, ranging in scope from the dynamics of individual model entities to system-level emergent properties. By way of an immunological case study of the mouse disease experimental autoimmune encephalomyelitis, we show how the framework can be used to produce models that capture and communicate the biological system, detailing how biological entities, interactions and behaviours lead to higher-level emergent properties observed in the real world. We demonstrate how the UML can be successfully applied within our framework, and provide a critique of UML's ability to capture concepts fundamental to immunology and biology more generally. We show how specialized, well-explained diagrams with less formal semantics can be used where no suitable UML formalism exists. We highlight UML's lack of expressive ability concerning cyclic feedbacks in cellular networks, and the compounding concurrency arising from huge numbers of stochastic, interacting agents. To compensate for this, we propose several additional relationships for expressing these concepts in UML's activity diagram. We also demonstrate the ambiguous nature of class diagrams when applied to complex biology, and question their utility in modelling such dynamic systems. Models created through our framework are non-executable, and expressly free of simulation implementation concerns. They are a valuable complement and precursor to simulation specifications and implementations, focusing purely on thoroughly exploring the biology, recording hypotheses and assumptions, and serve as a communication medium detailing exactly how a simulation relates to the real biology.

  14. Beta-Poisson model for single-cell RNA-seq data analyses.

    PubMed

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under the GPL-3 license at https://github.com/nghiavtr/BPSC. Contact: yudi.pawitan@ki.se or mattias.rantalainen@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
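    The generative idea can be sketched by compounding a Poisson rate with a Beta-distributed per-cell scaling: counts ~ Poisson(lambda * p) with p ~ Beta(alpha, beta). The parameter values below are illustrative, not BPSC's fitted values; a U-shaped Beta (small alpha and beta) is what produces the bimodality that a plain gamma-Poisson cannot.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    alpha, beta, lam = 0.3, 0.5, 100.0   # small alpha pushes mass toward zero expression
    n_cells = 5000

    p = rng.beta(alpha, beta, size=n_cells)   # per-cell expression fraction
    counts = rng.poisson(lam * p)             # beta-Poisson mixed counts

    zero_frac = np.mean(counts == 0)          # mode of non-expressing cells
    high_frac = np.mean(counts > lam * 0.5)   # mode of strongly expressing cells
    ```

    Both modes carry appreciable mass at once, which is exactly the pattern seen for highly expressed genes in single-cell data and missed by unimodal count models.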

  15. The model for Fundamentals of Endovascular Surgery (FEVS) successfully defines the competent endovascular surgeon.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean

    2015-12-01

    Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicon and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicon model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovement, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  16. Modelling biological behaviours with the unified modelling language: an immunological case study and critique

    PubMed Central

    Read, Mark; Andrews, Paul S.; Timmis, Jon; Kumar, Vipin

    2014-01-01

    We present a framework to assist the diagrammatic modelling of complex biological systems using the unified modelling language (UML). The framework comprises three levels of modelling, ranging in scope from the dynamics of individual model entities to system-level emergent properties. By way of an immunological case study of the mouse disease experimental autoimmune encephalomyelitis, we show how the framework can be used to produce models that capture and communicate the biological system, detailing how biological entities, interactions and behaviours lead to higher-level emergent properties observed in the real world. We demonstrate how the UML can be successfully applied within our framework, and provide a critique of UML's ability to capture concepts fundamental to immunology and biology more generally. We show how specialized, well-explained diagrams with less formal semantics can be used where no suitable UML formalism exists. We highlight UML's lack of expressive ability concerning cyclic feedbacks in cellular networks, and the compounding concurrency arising from huge numbers of stochastic, interacting agents. To compensate for this, we propose several additional relationships for expressing these concepts in UML's activity diagram. We also demonstrate the ambiguous nature of class diagrams when applied to complex biology, and question their utility in modelling such dynamic systems. Models created through our framework are non-executable, and expressly free of simulation implementation concerns. They are a valuable complement and precursor to simulation specifications and implementations, focusing purely on thoroughly exploring the biology, recording hypotheses and assumptions, and serve as a communication medium detailing exactly how a simulation relates to the real biology. PMID:25142524

  17. Predicting adsorptive removal of chlorophenol from aqueous solution using artificial intelligence based modeling approaches.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Ojha, Priyanka; Rai, Premanjali

    2013-04-01

    The research aims to develop artificial intelligence (AI)-based model to predict the adsorptive removal of 2-chlorophenol (CP) in aqueous solution by coconut shell carbon (CSC) using four operational variables (pH of solution, adsorbate concentration, temperature, and contact time), and to investigate their effects on the adsorption process. Accordingly, based on a factorial design, 640 batch experiments were conducted. Nonlinearities in experimental data were checked using Brock-Dechert-Scheinkman (BDS) statistics. Five nonlinear models were constructed to predict the adsorptive removal of CP in aqueous solution by CSC using four variables as input. Performances of the constructed models were evaluated and compared using statistical criteria. BDS statistics revealed strong nonlinearity in experimental data. Performance of all the models constructed here was satisfactory. Radial basis function network (RBFN) and multilayer perceptron network (MLPN) models performed better than generalized regression neural network, support vector machines, and gene expression programming models. Sensitivity analysis revealed that the contact time had the highest effect on adsorption, followed by the solution pH, temperature, and CP concentration. The study concluded that all the models constructed here were capable of capturing the nonlinearity in data. A better generalization and predictive performance of RBFN and MLPN models suggested that these can be used to predict the adsorption of CP in aqueous solution using CSC.

  18. Dynamics of Postcombustion CO2 Capture Plants: Modeling, Validation, and Case Study

    PubMed Central

    2017-01-01

    The capture of CO2 from power plant flue gases provides an opportunity to mitigate emissions that are harmful to the global climate. While the process of CO2 capture using an aqueous amine solution is well-known from experience in other technical sectors (e.g., acid gas removal in the gas processing industry), its operation combined with a power plant still needs investigation because in this case, the interaction with power plants that are increasingly operated dynamically poses control challenges. This article presents the dynamic modeling of CO2 capture plants followed by a detailed validation using transient measurements recorded from the pilot plant operated at the Maasvlakte power station in the Netherlands. The model predictions are in good agreement with the experimental data related to the transient changes of the main process variables such as flow rate, CO2 concentrations, temperatures, and solvent loading. The validated model was used to study the effects of fast power plant transients on the capture plant operation. A relevant result of this work is that an integrated CO2 capture plant might enable more dynamic operation of retrofitted fossil fuel power plants because the large amount of steam needed by the capture process can be diverted rapidly to and from the power plant. PMID:28413256

  19. Modeling radiation belt electron dynamics during GEM challenge intervals with the DREAM3D diffusion model

    NASA Astrophysics Data System (ADS)

    Tu, Weichao; Cunningham, G. S.; Chen, Y.; Henderson, M. G.; Camporeale, E.; Reeves, G. D.

    2013-10-01

    As a response to the Geospace Environment Modeling (GEM) "Global Radiation Belt Modeling Challenge," a 3D diffusion model is used to simulate the radiation belt electron dynamics during two intervals of the Combined Release and Radiation Effects Satellite (CRRES) mission, 15 August to 15 October 1990 and 1 February to 31 July 1991. The 3D diffusion model, developed as part of the Dynamic Radiation Environment Assimilation Model (DREAM) project, includes radial, pitch angle, and momentum diffusion and mixed pitch angle-momentum diffusion, which are driven by dynamic wave databases from the statistical CRRES wave data, including plasmaspheric hiss, lower-band, and upper-band chorus. By comparing the DREAM3D model outputs to the CRRES electron phase space density (PSD) data, we find that, with a data-driven boundary condition at Lmax = 5.5, the electron enhancements can generally be explained by radial diffusion, though additional local heating from chorus waves is required. Because the PSD reductions are included in the boundary condition at Lmax = 5.5, our model captures the fast electron dropouts over a large L range, producing better model performance compared to previously published results. Plasmaspheric hiss produces electron losses inside the plasmasphere, but the model still sometimes overestimates the PSD there. Test simulations using reduced radial diffusion coefficients or increased pitch angle diffusion coefficients inside the plasmasphere suggest that better wave models and more realistic radial diffusion coefficients, both inside and outside the plasmasphere, are needed to improve the model performance. Statistically, the results show that, with the data-driven outer boundary condition, including radial diffusion and plasmaspheric hiss is sufficient to model the electrons during geomagnetically quiet times, but to best capture the radiation belt variations during active times, pitch angle and momentum diffusion from chorus waves are required.
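    The radial-diffusion backbone of such models can be sketched with a 1D explicit finite-difference solve of df/dt = L^2 d/dL (D_LL / L^2 df/dL) - f/tau, with a fixed enhanced outer boundary standing in for the data-driven condition at Lmax. The diffusion coefficient's magnitude and steep L-dependence, the loss time, and the boundary value are all illustrative numbers, not DREAM3D's.

    ```python
    import numpy as np

    nL, dt, steps = 50, 60.0, 5000                 # grid points, time step [s], steps (~3.5 days)
    L = np.linspace(2.0, 5.5, nL)
    dL = L[1] - L[0]
    D0, tau = 1.0e-6, 5.0 * 86400.0                # D_LL scale [1/s] at L = 5.5, loss time [s]
    D = D0 * (L / 5.5) ** 10                       # schematic steep L-dependence of D_LL

    f = np.full(nL, 1.0e-3)                        # phase space density (arbitrary units)
    f[-1] = 1.0                                    # enhanced PSD held at the outer boundary

    for _ in range(steps):
        Lhalf = 0.5 * (L[1:] + L[:-1])
        Dhalf = 0.5 * (D[1:] + D[:-1]) / Lhalf**2
        F = Dhalf * np.diff(f) / dL                # flux at cell interfaces
        f[1:-1] += dt * (L[1:-1] ** 2 * np.diff(F) / dL - f[1:-1] / tau)
        f[0] = f[1]                                # zero-gradient inner boundary
        f[-1] = 1.0                                # data-driven-style outer boundary
    ```

    Because D_LL falls steeply with L, the boundary enhancement diffuses inward only over the outer L shells on this timescale, which is why additional local heating (chorus) is invoked to explain enhancements deeper in the belt.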

  20. Direct comparisons of ice cloud macro- and microphysical properties simulated by the Community Atmosphere Model version 5 with HIPPO aircraft observations

    NASA Astrophysics Data System (ADS)

    Wu, Chenglai; Liu, Xiaohong; Diao, Minghui; Zhang, Kai; Gettelman, Andrew; Lu, Zheng; Penner, Joyce E.; Lin, Zhaohui

    2017-04-01

    In this study we evaluate cloud properties simulated by the Community Atmosphere Model version 5 (CAM5) using in situ measurements from the HIAPER Pole-to-Pole Observations (HIPPO) campaign for the period of 2009 to 2011. The modeled wind and temperature are nudged towards reanalysis. Model results collocated with HIPPO flight tracks are directly compared with the observations, and model sensitivities to the representations of ice nucleation and growth are also examined. Generally, CAM5 is able to capture specific cloud systems in terms of vertical configuration and horizontal extension. In total, the model reproduces 79.8 % of observed cloud occurrences inside model grid boxes, and an even higher fraction (94.3 %) for ice clouds (T ≤ -40 °C). The missing cloud occurrences in the model are primarily ascribed to the fact that the model cannot account for the high spatial variability of observed relative humidity (RH). Furthermore, model RH biases are mostly attributed to the discrepancies in water vapor, rather than temperature. At the micro-scale of ice clouds, the model captures the observed increase of ice crystal mean sizes with temperature, albeit with smaller sizes than the observations. The model underestimates the observed ice number concentration (Ni) and ice water content (IWC) for ice crystals larger than 75 µm in diameter. Modeled IWC and Ni are more sensitive to the threshold diameter for autoconversion of cloud ice to snow (Dcs), while simulated ice crystal mean size is more sensitive to ice nucleation parameterizations than to Dcs. Our results highlight the need for further improvements to the sub-grid RH variability and ice nucleation and growth in the model.

  1. Newly-Developed 3D GRMHD Code and its Application to Jet Formation

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.

    2006-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various one-dimensional test problems in both special and general relativity using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous model. Preliminary results show jet formation from a geometrically thin accretion disk near non-rotating and rotating black holes. We will discuss how the jet properties depend on the rotation of the black hole and the magnetic field strength.
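    The HLL flux at the heart of such shock-capturing schemes can be illustrated on the simplest possible conservation law, the 1D inviscid Burgers equation u_t + (u^2/2)_x = 0, rather than full GRMHD. The wave-speed estimates and setup below are standard textbook choices, not the code's actual implementation.

    ```python
    import numpy as np

    def hll_flux(uL, uR):
        """HLL approximate Riemann flux for the Burgers flux f(u) = u^2/2."""
        fL, fR = 0.5 * uL**2, 0.5 * uR**2
        sL = np.minimum(uL, uR)                 # left/right wave-speed bounds for Burgers
        sR = np.maximum(uL, uR)
        denom = np.where(sR > sL, sR - sL, 1.0)  # guard the sL == sR case
        mid = (sR * fL - sL * fR + sL * sR * (uR - uL)) / denom
        return np.where(sL >= 0, fL, np.where(sR <= 0, fR, mid))

    nx, dt, steps = 200, 0.002, 400
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.where(x < 0.5, 1.0, 0.0)             # Riemann problem: right-moving shock

    for _ in range(steps):
        F = hll_flux(u, np.roll(u, -1))         # interface fluxes at i+1/2, periodic grid
        u = u - dt / dx * (F - np.roll(F, 1))   # conservative finite-volume update
    ```

    The conservative update keeps the total integral of u exactly fixed and captures the shock without oscillations, the two properties that motivate using such fluxes in the relativistic MHD setting.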

  2. Syndromes of the global water crisis - exploring the emergent dynamics through socio-hydrological modeling

    NASA Astrophysics Data System (ADS)

    Kuil, Linda; Levy, Morgan; Pavao-Zuckerman, Mitch; Penny, Gopal; Scott, Christopher; Srinivasan, Veena; Thompson, Sally; Troy, Tara

    2014-05-01

    There is a great variety of human water systems at the global scale due to the types and timing of water supply/availability, and the high diversity in water use, management, and abstraction methods. Importantly, this is largely driven by differences in welfare, social values, institutional frameworks, and cultural traditions of communities. The observed trend of a growing world population in combination with changing habits that generally increase our water consumption per capita implies that an increasing number of communities will face water scarcity. Over the years much research has been done in order to increase our understanding of human water systems and their associated water problems, using both top-down and bottom-up approaches. Despite these efforts, the challenge has remained to generalize findings beyond the areas of interests and to establish a common framework in order to compare and learn from different cases as a basis for finding solutions. In a recent analysis of multiple interdisciplinary subnational water resources case studies, it was shown that a suite of distinct resource utilization patterns leading to a water crisis can be identified, namely: 1) groundwater depletion, 2) ecological destruction, 3) drought-driven conflicts, 4) unmet subsistence needs, 5) resource capture by elite and 6) water reallocation to nature (Srinivasan et al., 2012). The effects of these syndromes on long-lasting human wellbeing can be grouped in the following outcomes: unsustainability, vulnerability, chronic scarcity and adaptation. The aim of this group collaboration is to build on this work through the development of a socio-hydrological model that is capable of reproducing the above syndromes and outcomes, ultimately giving insight into the different pathways leading to the syndromes. The resulting model will be distinct compared to existing model frameworks for two reasons. 
First of all, feedback loops between the hydrological, the environmental and the human agency components of the model are central to the model structure, thereby accounting for the co-evolutionary nature of human-water systems. Second, the model is designed to be general and integrative aimed at the simulation of emergent qualitative dynamics of the human-water system. The explicit inclusion of feedbacks and the aim of the model to capture the general dynamics as opposed to case-specific trajectories will allow us to deepen our fundamental understanding of the causal pathways leading to water crises across multiple locations. All authors contributed equally to this work. Srinivasan, V., Lambin, E.F., Gorelick, S.M., Thompson, B.H., Rozelle, S., 2012. The nature and causes of the global water crisis: Syndromes from a meta-analysis of coupled human-water studies. Water Resources Research 48(10), doi:10.1029/2011WR011087

  3. Physical-mathematical model of condensation process of the sub-micron dust capture in sprayer scrubber

    NASA Astrophysics Data System (ADS)

    Shilyaev, M. I.; Khromova, E. M.; Grigoriev, A. V.; Tumashova, A. V.

    2011-09-01

    A physical-mathematical model of the heat and mass exchange process and condensation capture of sub-micron dust particles on the droplets of dispersed liquid in a sprayer scrubber is proposed and analysed. A satisfactory agreement of computed results and experimental data on soot capturing from the cracking gases is obtained.

  4. A minimalistic approach to static and dynamic electron correlations: Amending generalized valence bond method with extended random phase approximation correlation correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Koushik; Jawulski, Konrad; Pastorczak, Ewa

    A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.

  5. From Visual Exploration to Storytelling and Back Again.

    PubMed

    Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M

    2016-06-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).

  6. An Assessment of CFD/CSD Prediction State-of-the-Art by Using the HART II International Workshop Data

    NASA Technical Reports Server (NTRS)

    Smith, Marilyn J.; Lim, Joon W.; vanderWall, Berend G.; Baeder, James D.; Biedron, Robert T.; Boyd, D. Douglas, Jr.; Jayaraman, Buvana; Jung, Sung N.; Min, Byung-Young

    2012-01-01

    Over the past decade, there have been significant advancements in the accuracy of rotor aeroelastic simulations with the application of computational fluid dynamics methods coupled with computational structural dynamics codes (CFD/CSD). The HART II International Workshop database, which includes descent operating conditions with strong blade-vortex interactions (BVI), provides a unique opportunity to assess the ability of CFD/CSD to capture these physics. In addition to a baseline case with BVI, two additional cases with 3/rev higher harmonic blade root pitch control (HHC) are available for comparison. The collaboration during the workshop permits assessment of structured, unstructured, and hybrid overset CFD/CSD methods from across the globe on the dynamics, aerodynamics, and wake structure. Evaluation of the plethora of CFD/CSD methods indicates that the numerical variables most important for accurately capturing BVI are a two-equation or detached eddy simulation (DES)-based turbulence model and a sufficiently small time step. An appropriate trade-off between grid fidelity and spatial accuracy schemes also appears to be pertinent for capturing BVI on the advancing rotor disk. Overall, the CFD/CSD methods achieve broadly similar accuracy; cost-effective hybrid Navier-Stokes/Lagrangian wake methods provide accuracies within 50% of the full CFD/CSD methods for most parameters of interest, except for those highly influenced by torsion. The importance of modeling the fuselage is observed, and other computational requirements are discussed.

  7. From Visual Exploration to Storytelling and Back Again

    PubMed Central

    Gratzl, S.; Lex, A.; Gehlenborg, N.; Cosgrove, N.; Streit, M.

    2016-01-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author “Vistories”, visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract) PMID:27942091

  8. Diagnosing observed characteristics of the wet season across Africa to identify deficiencies in climate model simulations

    NASA Astrophysics Data System (ADS)

    Dunning, C.; Black, E.; Allan, R. P.

    2017-12-01

    The seasonality of rainfall over Africa plays a key role in determining socio-economic impacts for agricultural stakeholders, influences energy supply from hydropower, affects the length of the malaria transmission season and impacts surface water supplies. Hence, failure or delays of these rains can lead to significant socio-economic impacts. Diagnosing and interpreting interannual variability and long-term trends in seasonality, and analysing the physical driving mechanisms, requires a robust definition of African precipitation seasonality, applicable to both observational datasets and model simulations. Here we present a methodology for objectively determining the onset and cessation of multiple wet seasons across the whole of Africa. Compatibility with known physical drivers of African rainfall, consistency with indigenous methods, and generally strong agreement between satellite-based rainfall data sets confirm that the method is capturing the correct seasonal progression of African rainfall. Application of this method to observational datasets reveals that over East Africa cessation of the short rains is 5 days earlier in La Niña years, and the failure of the rains and subsequent humanitarian disaster is associated with shorter as well as weaker rainy seasons over this region. The method is used to examine the representation of the seasonality of African precipitation in CMIP5 model simulations. Overall, atmosphere-only and fully coupled CMIP5 historical simulations represent essential aspects of the seasonal cycle; patterns of seasonal progression of the rainy season are captured, and for the most part mean model onset/cessation dates agree with mean observational dates to within 18 days. However, unlike the atmosphere-only simulations, the coupled simulations do not capture the biannual regime over the southern West African coastline, linked to errors in Gulf of Guinea sea surface temperature. Application to both observational and climate model datasets, and good agreement with agricultural onset methods, indicate the potential applicability of this method to a variety of meteorological and climate impact studies.
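The record does not spell out its onset/cessation definition; as an illustration only, one widely used objective approach (a cumulative-rainfall-anomaly criterion in the spirit of Liebmann-type methods) can be sketched as follows. The function name and the synthetic rainfall series are invented for demonstration.

```python
# Illustrative sketch (not this paper's exact algorithm) of an objective
# wet-season onset/cessation definition based on the cumulative daily
# rainfall anomaly. The synthetic rainfall series is invented.

def onset_cessation(daily_rain):
    """Onset: day after the cumulative anomaly of daily rainfall
    (rain minus the annual-mean daily rain) reaches its minimum.
    Cessation: day at which the cumulative anomaly reaches its maximum."""
    mean_rain = sum(daily_rain) / len(daily_rain)
    cum, anomaly = 0.0, []
    for r in daily_rain:
        cum += r - mean_rain
        anomaly.append(cum)
    onset = anomaly.index(min(anomaly)) + 2      # first day of the rains
    cessation = anomaly.index(max(anomaly)) + 1  # last day of the rains
    return onset, cessation

# Synthetic single-wet-season year: dry except days 120-270.
rain = [8.0 if 120 <= d <= 270 else 0.0 for d in range(1, 366)]
onset, cessation = onset_cessation(rain)
print(onset, cessation)
```

On real data, a climatological day-of-year mean would replace the single-year mean and the anomaly series would usually be smoothed before locating the turning points.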

  9. Change in BMI accurately predicted by social exposure to acquaintances.

    PubMed

    Oloritun, Rahman O; Ouarda, Taha B M J; Moturu, Sai; Madan, Anmol; Pentland, Alex Sandy; Khayal, Inas

    2013-01-01

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, using social interaction data automatically captured via mobile phones together with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R(2). This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, with personal health-related information to explain change in BMI. This is the first study taking into account both interactions at different levels of social exposure and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also of social interactions with the people we are exposed to, even people we may not consider close friends.
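The model-comparison step described above (fitting candidate linear models and ranking them by AIC) can be illustrated with a minimal, self-contained sketch. The synthetic data and a single-predictor OLS standing in for the LASSO-selected multiple regression are hypothetical, not the study's data or code.

```python
import math, random

def ols_aic(x, y):
    """Fit y = a + b*x by ordinary least squares and return the Gaussian
    AIC (up to an additive constant): n*ln(RSS/n) + 2k."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    k = 3  # intercept, slope, error variance
    return n * math.log(rss / n) + 2 * k

random.seed(0)
n = 100
social = [random.gauss(0, 1) for _ in range(n)]     # hypothetical exposure score
noise_var = [random.gauss(0, 1) for _ in range(n)]  # irrelevant predictor
bmi_change = [2.0 * s + random.gauss(0, 0.5) for s in social]

aic_social = ols_aic(social, bmi_change)
aic_noise = ols_aic(noise_var, bmi_change)
print(aic_social < aic_noise)  # the informative predictor wins on AIC
```

A lower AIC flags the model that explains the response with less unexplained variance per parameter, which is how the study's "best model" was identified.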

  10. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    NASA Astrophysics Data System (ADS)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.

  11. A New Method for Computing Three-Dimensional Capture Fraction in Heterogeneous Regional Systems using the MODFLOW Adjoint Code

    NASA Astrophysics Data System (ADS)

    Clemo, T. M.; Ramarao, B.; Kelly, V. A.; Lavenue, M.

    2011-12-01

    Capture is a measure of the impact of groundwater pumping upon groundwater and surface water systems. The computation of capture through analytical or numerical methods has been the subject of articles in the literature for several decades (Bredehoeft et al., 1982). Most recently Leake et al. (2010) described a systematic way to produce capture maps in three-dimensional systems using a numerical perturbation approach in which capture from streams was computed using unit rate pumping at many locations within a MODFLOW model. The Leake et al. (2010) method advances the current state of computing capture. A limitation stems from the computational demand required by the perturbation approach, wherein days or weeks of computational time might be required to obtain a robust measure of capture. In this paper, we present an efficient method to compute capture in three-dimensional systems based upon adjoint states. The efficiency of the adjoint method will enable uncertainty analysis to be conducted on capture calculations. The USGS and INTERA have collaborated to extend the MODFLOW Adjoint code (Clemo, 2007) to include stream-aquifer interaction and have applied it to one of the examples used in Leake et al. (2010), the San Pedro Basin MODFLOW model. With five layers and 140,800 grid blocks per layer, the San Pedro Basin model provided an ideal example data set to compare the capture computed from the perturbation and the adjoint methods. The capture fraction map produced from the perturbation method for the San Pedro Basin model required significant computational time, and therefore the pumping-well locations were limited to 1530 locations in layer 4. The 1530 direct simulations of capture require approximately 76 CPU hours. Had capture been simulated in each grid block in each layer, as is done in the adjoint method, the CPU time would have been on the order of 4 years.
The MODFLOW-Adjoint produced the capture fraction map of the San Pedro Basin model at 704,000 grid blocks (140,800 grid blocks x 5 layers) in just 6 minutes. The capture fraction maps from the perturbation and adjoint methods agree closely. The results of this study indicate that the adjoint capture method and its associated computational efficiency will enable scientists and engineers facing water resource management decisions to evaluate the sensitivity and uncertainty of impacts to regional water resource systems as part of groundwater supply strategies. Bredehoeft, J.D., S.S. Papadopulos, and H.H. Cooper Jr, Groundwater: The water budget myth. In Scientific Basis of Water-Resources Management, ed. National Research Council (U.S.), Geophysical Study Committee, 51-57. Washington D.C.: National Academy Press, 1982. Clemo, Tom, MODFLOW-2005 Ground-Water Model-Users Guide to Adjoint State based Sensitivity Process (ADJ), BSU CGISS 07-01, Center for the Geophysical Investigation of the Shallow Subsurface, Boise State University, 2007. Leake, S.A., H.W. Reeves, and J.E. Dickinson, A New Capture Fraction Method to Map How Pumpage Affects Surface Water Flow, Ground Water, 48(5), 670-700, 2010.
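The speed-up the authors report (one adjoint solve replacing one forward run per pumping location) mirrors a standard linear-algebra identity: for a linear model A h = b and a scalar response J = c·h, every sensitivity dJ/db_i follows from a single adjoint solve Aᵀλ = c. The 3x3 system below is invented for illustration; it is a schematic analogy, not the MODFLOW-Adjoint implementation.

```python
# Schematic contrast of perturbation vs adjoint sensitivities for A h = b
# with scalar response J = c . h (loosely analogous to "capture").
# Perturbation: one extra solve per source i. Adjoint: one solve of
# A^T lam = c, after which dJ/db_i = lam_i for every i at once.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

A = [[4.0, 1.0, 0.0], [2.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # invented, non-symmetric
b = [1.0, 2.0, 0.5]
c = [1.0, 0.0, 1.0]

# Perturbation approach: finite-difference dJ/db_i, one solve per source.
J0 = sum(ci * hi for ci, hi in zip(c, solve3(A, b)))
eps = 1e-6
grad_fd = []
for i in range(3):
    bp = b[:]
    bp[i] += eps
    grad_fd.append((sum(ci * hi for ci, hi in zip(c, solve3(A, bp))) - J0) / eps)

# Adjoint approach: one solve of the transposed system gives all sensitivities.
At = [[A[i][j] for i in range(3)] for j in range(3)]
grad_adj = solve3(At, c)
print(all(abs(p - q) < 1e-4 for p, q in zip(grad_fd, grad_adj)))
```

With N candidate pumping locations the perturbation route costs N forward solves while the adjoint route costs one, which is why the adjoint map over all 704,000 grid blocks took minutes rather than years.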

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Clementz, Stefan, E-mail: emb@kth.se, E-mail: scl@kth.se

    Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei, as well as the case where the capture process is treated as a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured particle numbers are competitive with those of asymmetric dark matter models over a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be sought in these types of models.

  13. Generalized receptor law governs phototaxis in the phytoplankton Euglena gracilis

    PubMed Central

    Giometto, Andrea; Altermatt, Florian; Maritan, Amos; Stocker, Roman; Rinaldo, Andrea

    2015-01-01

    Phototaxis, the process through which motile organisms direct their swimming toward or away from light, is implicated in key ecological phenomena (including algal blooms and diel vertical migration) that shape the distribution, diversity, and productivity of phytoplankton and thus energy transfer to higher trophic levels in aquatic ecosystems. Phototaxis also finds important applications in biofuel reactors and microbiopropellers and is argued to serve as a benchmark for the study of biological invasions in heterogeneous environments owing to the ease of generating stochastic light fields. Despite its ecological and technological relevance, an experimentally tested, general theoretical model of phototaxis seems unavailable to date. Here, we present accurate measurements of the behavior of the alga Euglena gracilis when exposed to controlled light fields. Analysis of E. gracilis’ phototactic accumulation dynamics over a broad range of light intensities proves that the classic Keller–Segel mathematical framework for taxis provides an accurate description of both positive and negative phototaxis only when phototactic sensitivity is modeled by a generalized “receptor law,” a specific nonlinear response function to light intensity that drives algae toward beneficial light conditions and away from harmful ones. The proposed phototactic model captures the temporal dynamics of both cells’ accumulation toward light sources and their dispersion upon light cessation. The model could thus be of use in integrating models of vertical phytoplankton migrations in marine and freshwater ecosystems, and in the design of bioreactors. PMID:25964338
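The model class described here can be written schematically as a Keller–Segel advection-diffusion equation with a light-dependent drift; the symbols below are generic placeholders rather than the paper's exact notation.

```latex
% Schematic Keller--Segel form; generic symbols, not the paper's notation.
\frac{\partial \rho}{\partial t}
  = \nabla \cdot \bigl( D\,\nabla\rho \;-\; \rho\,\chi(I)\,\nabla I \bigr)
```

Here \(\rho\) is cell density, \(I\) the light intensity, \(D\) the cell diffusivity, and \(\chi(I)\) the phototactic sensitivity. The classic receptor law takes the form \(\chi(I) \propto K/(K+I)^2\); the "generalized receptor law" of the abstract replaces this with a nonlinear response that changes sign at high intensity, so the drift is toward light at beneficial intensities (positive phototaxis) and away from it at harmful ones (negative phototaxis).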

  14. Generalized receptor law governs phototaxis in the phytoplankton Euglena gracilis.

    PubMed

    Giometto, Andrea; Altermatt, Florian; Maritan, Amos; Stocker, Roman; Rinaldo, Andrea

    2015-06-02

    Phototaxis, the process through which motile organisms direct their swimming toward or away from light, is implicated in key ecological phenomena (including algal blooms and diel vertical migration) that shape the distribution, diversity, and productivity of phytoplankton and thus energy transfer to higher trophic levels in aquatic ecosystems. Phototaxis also finds important applications in biofuel reactors and microbiopropellers and is argued to serve as a benchmark for the study of biological invasions in heterogeneous environments owing to the ease of generating stochastic light fields. Despite its ecological and technological relevance, an experimentally tested, general theoretical model of phototaxis seems unavailable to date. Here, we present accurate measurements of the behavior of the alga Euglena gracilis when exposed to controlled light fields. Analysis of E. gracilis' phototactic accumulation dynamics over a broad range of light intensities proves that the classic Keller-Segel mathematical framework for taxis provides an accurate description of both positive and negative phototaxis only when phototactic sensitivity is modeled by a generalized "receptor law," a specific nonlinear response function to light intensity that drives algae toward beneficial light conditions and away from harmful ones. The proposed phototactic model captures the temporal dynamics of both cells' accumulation toward light sources and their dispersion upon light cessation. The model could thus be of use in integrating models of vertical phytoplankton migrations in marine and freshwater ecosystems, and in the design of bioreactors.

  15. Sequence Capture versus Restriction Site Associated DNA Sequencing for Shallow Systematics.

    PubMed

    Harvey, Michael G; Smith, Brian Tilston; Glenn, Travis C; Faircloth, Brant C; Brumfield, Robb T

    2016-09-01

    Sequence capture and restriction site associated DNA sequencing (RAD-Seq) are two genomic enrichment strategies for applying next-generation sequencing technologies to systematics studies. At shallow timescales, such as within species, RAD-Seq has been widely adopted among researchers, although there has been little discussion of the potential limitations and benefits of RAD-Seq and sequence capture. We discuss a series of issues that may impact the utility of sequence capture and RAD-Seq data for shallow systematics in non-model species. We review prior studies that used both methods, and investigate differences between the methods by re-analyzing existing RAD-Seq and sequence capture data sets from a Neotropical bird (Xenops minutus). We suggest that the strengths of RAD-Seq data sets for shallow systematics are the wide dispersion of markers across the genome, the relative ease and cost of laboratory work, the deep coverage and read overlap at recovered loci, and the high overall information that results. Sequence capture's benefits include flexibility and repeatability in the genomic regions targeted, success using low-quality samples, more straightforward read orthology assessment, and higher per-locus information content. The utility of a method in systematics, however, rests not only on its performance within a study, but on the comparability of data sets and inferences with those of prior work. In RAD-Seq data sets, comparability is compromised by low overlap of orthologous markers across species and the sensitivity of genetic diversity in a data set to an interaction between the level of natural heterozygosity in the samples examined and the parameters used for orthology assessment. 
In contrast, sequence capture of conserved genomic regions permits interrogation of the same loci across divergent species, which is preferable for maintaining comparability among data sets and studies for the purpose of drawing general conclusions about the impact of historical processes across biotas. We argue that sequence capture should be given greater attention as a method of obtaining data for studies in shallow systematics and comparative phylogeography.

  16. Effects of heat exchanger tubes on hydrodynamics and CO2 capture of a sorbent-based fluidized bed reactor

    DOE PAGES

    Lai, Canhai; Xu, Zhijie; Li, Tingwen; ...

    2017-08-05

    In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered sub-grid models to capture the effect of unresolved details in the coarser mesh, allowing simulations with reasonable accuracy and manageable computational effort. Previously developed filtered models for horizontal cylinder drag, heat transfer, and reaction kinetics have been modified to derive 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical tubes) on the adsorber's hydrodynamics and CO2 capture performance are then examined. A one-dimensional three-region process model is briefly introduced for comparison purposes. The CFD model matches the process model reasonably well while providing additional information about the flow field that is not available from the process model.

  17. Multiscale model reduction for shale gas transport in poroelastic fractured media

    NASA Astrophysics Data System (ADS)

    Akkutlu, I. Yucel; Efendiev, Yalchin; Vasilyeva, Maria; Wang, Yuhe

    2018-01-01

    Inherently coupled flow and geomechanics processes in fractured shale media have implications for shale gas production. The system involves highly complex geo-textures comprised of a heterogeneous anisotropic fracture network spatially embedded in an ultra-tight matrix. In addition, nonlinearities due to viscous flow, diffusion, and desorption in the matrix and high-velocity gas flow in the fractures complicate the transport. In this paper, we develop a multiscale model reduction approach to couple gas flow and geomechanics in fractured shale media. A Discrete Fracture Model (DFM) is used to treat the complex network of fractures on a fine grid. The coupled flow and geomechanics equations are solved using a fixed stress-splitting scheme by solving the pressure equation using a continuous Galerkin method and the displacement equation using an interior penalty discontinuous Galerkin method. We develop a coarse grid approximation and coupling using the Generalized Multiscale Finite Element Method (GMsFEM). GMsFEM constructs the multiscale basis functions in a systematic way to capture the fracture networks and their interactions with the shale matrix. Numerical results and an error analysis are provided showing that the proposed approach accurately captures the coupled process using a few multiscale basis functions, i.e. a small fraction of the degrees of freedom of the fine-scale problem.

  18. Simulation of summertime ozone over North America

    NASA Technical Reports Server (NTRS)

    Jacob, Daniel J.; Logan, Jennifer A.; Yevich, Rose M.; Gardner, Geraldine M.; Spivakovsky, Clarisa M.; Wofsy, Steven C.; Munger, J. W.; Sillman, Sanford; Prather, Michael J.; Rogers, Michael O.

    1993-01-01

    The concentrations of O3 and its precursors over North America are simulated for three summer months with a 3D, continental-scale photochemical model using meteorological input from the Goddard Institute for Space Studies (GISS) GCM. The model has 4 x 5 deg grid resolution and represents nonlinear chemistry in urban and industrial plumes with a subgrid nested scheme. Simulated median afternoon O3 concentrations at rural U.S. sites are within 5 ppb of observations in most cases, except in the south central U.S., where concentrations are overpredicted by 15-20 ppb. The model successfully captures the development of regional high-O3 episodes over the northeastern United States on the back side of weak, warm, stagnant anticyclones. Simulated concentrations of CO and nonmethane hydrocarbons are generally in good agreement with observations, concentrations of NO(x) are underpredicted by 10-30 percent, and concentrations of PANs are overpredicted by a factor of 2 to 3. The overprediction of PANs is attributed to flaws in the photochemical mechanism, including excessive production from oxidation of isoprene, and may also reflect an underestimate of PANs deposition. Subgrid nonlinear chemistry as captured by the nested plumes scheme decreases the net O3 production computed in the U.S. boundary layer by 8 percent on average.

  19. Uncovering a latent multinomial: Analysis of mark-recapture data with misidentification

    USGS Publications Warehouse

    Link, W.A.; Yoshizaki, J.; Bailey, L.L.; Pollock, K.H.

    2010-01-01

    Natural tags based on DNA fingerprints or natural features of animals are now becoming very widely used in wildlife population biology. However, classic capture-recapture models do not allow for misidentification of animals which is a potentially very serious problem with natural tags. Statistical analysis of misidentification processes is extremely difficult using traditional likelihood methods but is easily handled using Bayesian methods. We present a general framework for Bayesian analysis of categorical data arising from a latent multinomial distribution. Although our work is motivated by a specific model for misidentification in closed population capture-recapture analyses, with crucial assumptions which may not always be appropriate, the methods we develop extend naturally to a variety of other models with similar structure. Suppose that observed frequencies f are a known linear transformation f = A'x of a latent multinomial variable x with cell probability vector π = π(θ). Given that full conditional distributions [θ | x] can be sampled, implementation of Gibbs sampling requires only that we can sample from the full conditional distribution [x | f, θ], which is made possible by knowledge of the null space of A'. We illustrate the approach using two data sets with individual misidentification, one simulated, the other summarizing recapture data for salamanders based on natural marks. © 2009, The International Biometric Society.
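The core difficulty the Bayesian machinery addresses is that the observed frequencies f = A'x do not determine the latent counts x: distinct latent vectors differing by an element of the null space of A' yield identical observations. A toy example makes this concrete; the matrix and count vectors below are invented for illustration, not taken from the paper.

```python
# Toy illustration of the latent-multinomial structure f = A' x:
# two distinct latent count vectors give identical observed frequencies,
# differing exactly by a vector in the null space of A'. Invented numbers.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Hypothetical 2x3 "observation" matrix A' collapsing three latent
# capture-history counts into two observable frequencies.
At = [[1, 1, 0],
      [0, 1, 1]]

x1 = [2, 3, 1]          # one latent count vector
null = [1, -1, 1]       # A' @ null == 0, so adding it leaves f unchanged
x2 = [a + b for a, b in zip(x1, null)]  # a different latent vector

f1 = matvec(At, x1)
f2 = matvec(At, x2)
print(f1, f2)
```

Gibbs sampling of [x | f, θ] then amounts to proposing moves of x along such null-space vectors, which keeps f fixed by construction.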

  20. Uncovering a Latent Multinomial: Analysis of Mark-Recapture Data with Misidentification

    USGS Publications Warehouse

    Link, W.A.; Yoshizaki, J.; Bailey, L.L.; Pollock, K.H.

    2009-01-01

    Natural tags based on DNA fingerprints or natural features of animals are now becoming very widely used in wildlife population biology. However, classic capture-recapture models do not allow for misidentification of animals which is a potentially very serious problem with natural tags. Statistical analysis of misidentification processes is extremely difficult using traditional likelihood methods but is easily handled using Bayesian methods. We present a general framework for Bayesian analysis of categorical data arising from a latent multinomial distribution. Although our work is motivated by a specific model for misidentification in closed population capture-recapture analyses, with crucial assumptions which may not always be appropriate, the methods we develop extend naturally to a variety of other models with similar structure. Suppose that observed frequencies f are a known linear transformation f=A'x of a latent multinomial variable x with cell probability vector pi= pi(theta). Given that full conditional distributions [theta | x] can be sampled, implementation of Gibbs sampling requires only that we can sample from the full conditional distribution [x | f, theta], which is made possible by knowledge of the null space of A'. We illustrate the approach using two data sets with individual misidentification, one simulated, the other summarizing recapture data for salamanders based on natural marks.

  1. Recommended Isolated-Line Profile for Representing High-Resolution Spectroscopic Transitions

    NASA Astrophysics Data System (ADS)

    Tennyson, J.; Bernath, P. F.; Campargue, A.; Császár, A. G.; Daumont, L.; Gamache, R. R.; Hodges, J. T.; Lisak, D.; Naumenko, O. V.; Rothman, L. S.; Tran, H.; Hartmann, J.-M.; Zobov, N. F.; Buldyreva, J.; Boone, C. D.; De Vizia, M. Domenica; Gianfrani, L.; McPheat, R.; Weidmann, D.; Murray, J.; Ngo, N. H.; Polyansky, O. L.

    2014-06-01

    Recommendations of an IUPAC Task Group, formed in 2011 on "Intensities and line shapes in high-resolution spectra of water isotopologues from experiment and theory" (Project No. 2011-022-2-100), on line profiles of isolated high-resolution rotational-vibrational transitions perturbed by neutral gas-phase molecules are presented. The well-documented inadequacies of the Voigt profile, used almost universally by databases and radiative-transfer codes to represent pressure effects and Doppler broadening in isolated vibrational-rotational and pure rotational transitions of the water molecule, have resulted in the development of a variety of alternative line profile models. These models capture more of the physics of the influence of pressure on line shapes but, in general, at the price of greater complexity. The Task Group recommends that the partially-Correlated quadratic-Speed-Dependent Hard-Collision profile should be adopted as the appropriate model for high-resolution spectroscopy. For simplicity this should be called the Hartmann-Tran profile (HTP). This profile is sophisticated enough to capture the various collisional contributions to the isolated line shape, can be computed in a straightforward and rapid manner, and reduces to simpler profiles, including the Voigt profile, under certain simplifying assumptions. For further details see: J. Tennyson et al, Pure Appl. Chem., 2014, in press.
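For context on the profile hierarchy mentioned here: the Voigt profile, to which the recommended HTP reduces under simplifying assumptions, is the convolution of a Gaussian (Doppler broadening) with a Lorentzian (pressure broadening) component. A minimal stdlib-only numerical sketch, with illustrative widths and grid choices:

```python
import math

# Minimal numerical sketch: the Voigt profile as the convolution of a
# Gaussian (Doppler) with a Lorentzian (pressure-broadening) component.
# Grid, sigma, and gamma are illustrative choices.

def gaussian(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def lorentzian(x, gamma):
    return gamma / (math.pi * (x * x + gamma * gamma))

def voigt(x, sigma, gamma, half_width=40.0, dt=0.05):
    """Direct numerical convolution V(x) = integral of G(t) L(x - t) dt."""
    n = int(2 * half_width / dt)
    total = 0.0
    for i in range(n + 1):
        t = -half_width + i * dt
        total += gaussian(t, sigma) * lorentzian(x - t, gamma)
    return total * dt

sigma, gamma = 1.0, 0.5
peak = voigt(0.0, sigma, gamma)
# Area under the profile should be close to 1 (unit normalization),
# up to truncation of the slowly decaying Lorentzian tails.
area = sum(voigt(x * 0.2, sigma, gamma) for x in range(-150, 151)) * 0.2
print(round(peak, 3), round(area, 2))
```

In practice the Voigt function is evaluated via the complex Faddeeva function rather than brute-force quadrature; the direct integral here only illustrates the definition that the more sophisticated HTP generalizes.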

  2. Informational Entropy and Bridge Scour Estimation under Complex Hydraulic Scenarios

    NASA Astrophysics Data System (ADS)

    Pizarro, Alonso; Link, Oscar; Fiorentino, Mauro; Samela, Caterina; Manfreda, Salvatore

    2017-04-01

    Bridges are important for society because they allow social, cultural and economic connectivity. Flood events can compromise the safety of bridge piers, up to complete collapse. The bridge scour phenomenon has traditionally been described by empirical formulae deduced from hydraulic laboratory experiments. The range of applicability of such models is restricted by the specific hydraulic conditions or flume geometry used for their derivation (e.g., water depth, mean flow velocity, pier diameter and sediment properties). We seek to identify a general formulation able to capture the main dynamics of the process over a wide range of hydraulic and geometric configurations, allowing the analysis to be extended to different contexts. Therefore, exploiting the Principle of Maximum Entropy (POME) and applying it to the recently proposed dimensionless effective flow work, W*, we derived a simple model characterized by only one parameter. The proposed Bridge Scour Entropic (BRISENT) model performs well under complex hydraulic conditions as well as under steady-state flow. Moreover, despite containing only one parameter, the model was able to capture the evolution of scour in several hydraulic configurations. Furthermore, results show that the model parameter is controlled by the geometric configuration of the experiment, which offers a possible strategy for a priori calibration of the model parameter. The BRISENT model is therefore a good candidate for estimating time-dependent scour depth under complex hydraulic scenarios, and the authors are keen to apply the idea to describing scour behavior during a real flood event. Keywords: Informational entropy, Sediment transport, Bridge pier scour, Effective flow work.

  3. Analysis of mortality data from the former USSR: age-period-cohort analysis.

    PubMed

    Willekens, F; Scherbov, S

    1992-01-01

    The objective of this article is to review research on age-period-cohort (APC) analysis of mortality and to trace the effects of contemporary and historical factors on mortality change in the former USSR. Several events in USSR history have exerted a lasting influence on its people. These influences may be captured by an APC model in which the period effects measure the impact of contemporary factors and the cohort effects measure the past history of individuals that cannot be attributed to age or stage in the life cycle. APC models are extensively applied in the study of mortality. This article presents the statistical theory of APC models and shows that they belong to the family of generalized linear models. The parameters of an APC model may therefore be estimated by any package for loglinear analysis that allows for hybrid loglinear models.
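The GLM view described above can be made concrete with a small synthetic example (hypothetical counts, not the USSR tables): an APC model is a Poisson loglinear regression on age, period and cohort dummies, fitted by iteratively reweighted least squares (IRLS), the algorithm underlying standard loglinear-analysis packages. The sketch also exhibits the well-known APC identification problem, since cohort = period - age makes the full design rank-deficient.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical age x period table of event counts.
n_age, n_per = 5, 6
age = np.repeat(np.arange(n_age), n_per)
per = np.tile(np.arange(n_per), n_age)
coh = per - age + (n_age - 1)          # cohort index = period - age, shifted

def dummies(idx, k):
    """Treatment-coded dummies with the first level as reference."""
    d = np.zeros((idx.size, k - 1))
    for j in range(1, k):
        d[idx == j, j - 1] = 1.0
    return d

# The full APC design is rank-deficient by exactly one dimension: the
# linear dependence cohort = period - age (the APC identification problem).
full = np.column_stack([np.ones(age.size), dummies(age, n_age),
                        dummies(per, n_per), dummies(coh, n_age + n_per - 1)])
assert np.linalg.matrix_rank(full) == full.shape[1] - 1

# Dropping one cohort column (equating the first two cohort effects) is
# one of the standard arbitrary identifying constraints.
X = np.column_stack([np.ones(age.size), dummies(age, n_age),
                     dummies(per, n_per),
                     dummies(coh, n_age + n_per - 1)[:, 1:]])

true_beta = rng.normal(0.0, 0.2, X.shape[1])
y = rng.poisson(np.exp(X @ true_beta + 3.0))   # simulated counts

# Poisson GLM fitted by IRLS.
beta = np.zeros(X.shape[1])
beta[0] = np.log(y.mean())
for _ in range(50):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu               # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

mu = np.exp(X @ beta)
# At the Poisson MLE with an intercept, fitted and observed totals agree.
print(round(float(abs(mu.sum() - y.sum())), 6))
```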

  4. Evaluation of geometrically personalized THUMS pedestrian model response against sedan-pedestrian PMHS impact test data.

    PubMed

    Chen, Huipeng; Poulard, David; Forman, Jason; Crandall, Jeff; Panzer, Matthew B

    2018-07-04

    Evaluating the biofidelity of pedestrian finite element models (PFEM) using postmortem human subjects (PMHS) is a challenge because differences in anthropometry between PMHS and PFEM could limit a model's capability to accurately capture cadaveric responses. Geometrical personalization via morphing can modify the PFEM geometry to match the specific PMHS anthropometry, which could alleviate this issue. In this study, the Total Human Model for Safety (THUMS) PFEM (Ver 4.01) was compared to the cadaveric response in vehicle-pedestrian impacts using geometrically personalized models. The AM50 THUMS PFEM was used as the baseline model, and 2 morphed PFEM were created to the anthropometric specifications of 2 obese PMHS used in a previous pedestrian impact study with a mid-size sedan. The same measurements as those obtained during the PMHS tests were calculated from the simulations (kinematics, accelerations, strains), and biofidelity metrics based on signal correlation (CORrelation and Analysis, CORA) were established to compare the response of the models to the experiments. Injury outcomes were predicted deterministically (through strain-based thresholds) and probabilistically (with injury risk functions) and compared with the injuries reported in the necropsy. The baseline model could not accurately capture all aspects of the PMHS kinematics, strain, and injury risks, whereas the morphed models reproduced biofidelic responses in terms of trajectory (CORA score = 0.927 ± 0.092), velocities (0.975 ± 0.027), accelerations (0.862 ± 0.072), and strains (0.707 ± 0.143). The personalized THUMS models also generally predicted injuries consistent with those identified during posttest autopsy. The study highlights the need to control for pedestrian anthropometry when validating pedestrian human body models against PMHS data. The information provided in the current study could be useful for improving model biofidelity for vehicle-pedestrian impact scenarios.

  5. Contingent capture and inhibition of return: a comparison of mechanisms.

    PubMed

    Prinzmetal, William; Taylor, Jordan A; Myers, Loretta Barry; Nguyen-Espino, Jacqueline

    2011-09-01

    We investigated the cause(s) of two effects associated with involuntary attention in the spatial cueing task: contingent capture and inhibition of return (IOR). Previously, we found that there were two mechanisms of involuntary attention in this task: (1) a (serial) search mechanism that predicts a larger cueing effect in reaction time with more display locations and (2) a decision (threshold) mechanism that predicts a smaller cueing effect with more display locations (Prinzmetal et al. 2010). In the present study, contingent capture and IOR had completely different patterns of results when we manipulated the number of display locations and the presence of distractors. Contingent capture was best described by a search model, whereas the inhibition of return was best described by a decision model. Furthermore, we fit a linear ballistic accumulator model to the results and IOR was accounted for by a change of threshold, whereas the results from contingent capture experiments could not be fit with a change of threshold and were better fit by a search model.

  6. Systematics of capture and fusion dynamics in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui

    2017-03-01

    We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to take into account, in an effective way, the couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is an asymmetric Gaussian. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections, together with experimental cross sections, of 220 reaction systems with 182 ⩽ Z_P Z_T ⩽ 1640 are tabulated. The results show that the ECC model, together with the empirical formulas for the parameters of the barrier distribution, works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei, as well as valuable information on capture and fusion dynamics.
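The barrier-distribution idea can be sketched numerically: fold a single-barrier penetration formula over an asymmetric Gaussian distribution of barrier heights. The sketch below uses the classic Wong single-barrier expression as the kernel and entirely hypothetical parameter values; the paper's empirical formulas for the distribution parameters are not reproduced here.

```python
import numpy as np

# Illustrative parameters (hypothetical, not the paper's fitted values).
hbar_omega = 4.0     # barrier curvature parameter (MeV)
R_B = 11.0           # barrier radius (fm)
B0, w_left, w_right = 85.0, 2.0, 4.0   # peak and left/right widths (MeV)

B = np.linspace(B0 - 5 * w_left, B0 + 5 * w_right, 2001)
dB = B[1] - B[0]

# Asymmetric Gaussian barrier distribution: different widths on either
# side of the peak, normalised numerically on the grid.
w = np.where(B < B0, w_left, w_right)
D = np.exp(-0.5 * ((B - B0) / w) ** 2)
D /= D.sum() * dB

def wong(E, barrier):
    """Wong single-barrier capture cross section (fm^2 -> mb via x10)."""
    s = 2.0 * np.pi * (E - barrier) / hbar_omega
    return 10.0 * (hbar_omega * R_B**2 / (2.0 * E)) * np.logaddexp(0.0, s)

def sigma_capture(E):
    """Fold the single-barrier formula over the barrier distribution."""
    return float(np.sum(D * wong(E, B)) * dB)

for E in (80.0, 85.0, 95.0):
    print(E, round(sigma_capture(E), 1))   # rises steeply through the barrier
```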

  7. Circulation and rainfall climatology of a 10-year (1979 - 1988) integration with the Goddard Laboratory for atmospheres general circulation model

    NASA Technical Reports Server (NTRS)

    Kim, J.-H.; Sud, Y. C.

    1993-01-01

    A 10-year (1979-1988) integration of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) under the Atmospheric Model Intercomparison Project (AMIP) is analyzed and compared with observations. The first-moment fields of circulation variables, as well as hydrological variables including precipitation, evaporation, and soil moisture, are presented. Our goals are (1) to produce a benchmark documentation of the GLA GCM for future model improvements; (2) to examine systematic errors between the simulated and the observed circulation, precipitation, and hydrologic cycle; (3) to examine the interannual variability of the simulated atmosphere and compare it with observations; and (4) to examine the ability of the model to capture major climate anomalies in response to events such as El Nino and La Nina. The 10-year mean seasonal and annual simulated circulation compares quite reasonably with the analyzed circulation, except in the polar regions and areas of high orography. Precipitation over the tropics is quite well simulated, and the signal of El Nino/La Nina episodes can be easily identified. The time series of evaporation and soil moisture in the 12 biomes of the biosphere also show reasonable patterns compared with the estimated evaporation and soil moisture.

  8. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). 
This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
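The core of the distribution-function approach — integrating a nonlinear flux over a within-day intensity distribution instead of evaluating it at the daily-average intensity — can be illustrated with infiltration-excess runoff. The numbers and the exponential intensity assumption below are purely illustrative, not the paper's calibrated model.

```python
import numpy as np

F_C = 10.0     # infiltration capacity, mm/h (hypothetical)
HOURS = 24.0

def runoff_daily_average(daily_total_mm):
    """Daily-lumped model: runoff only if the mean intensity exceeds F_C."""
    return max(daily_total_mm / HOURS - F_C, 0.0) * HOURS

def runoff_cdf_approach(daily_total_mm, wet_fraction):
    """Distribution approach, assuming exponential intensities during the
    wet fraction of the day: E[max(I - F_C, 0)] = m * exp(-F_C / m)
    for an exponential with mean m."""
    m = daily_total_mm / (HOURS * wet_fraction)   # mean wet-period intensity
    return m * np.exp(-F_C / m) * HOURS * wet_fraction

P = 40.0   # mm of rain in one day
print(runoff_daily_average(P))                 # 0.0: mean intensity < F_C
print(round(runoff_cdf_approach(P, 0.25), 2))  # positive: peaks exceed F_C
```

The daily-average model predicts no runoff at all for this day, while the distribution approach recovers the runoff produced by short high-intensity bursts — the effect the abstract describes.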

  9. Modelling foraging movements of diving predators: a theoretical study exploring the effect of heterogeneous landscapes on foraging efficiency

    PubMed Central

    Bartoń, Kamil A.; Scott, Beth E.; Travis, Justin M.J.

    2014-01-01

    Foraging in the marine environment presents particular challenges for air-breathing predators. Information about prey capture rates and the strategies that diving predators use to maximise prey encounter rates and foraging success is still largely lacking and difficult to obtain by observation. Moreover, with the growing awareness of potential climate change impacts and the increasing interest in the development of renewable energy sources, it is unknown how the foraging activity of diving predators such as seabirds will respond to the presence of underwater structures and the potential corresponding changes in prey distributions. Motivated by this issue, we developed a theoretical model to gain a general understanding of how the foraging efficiency of diving predators may vary according to landscape structure and foraging strategy. Our theoretical model highlights that animal movements, intervals between prey captures and foraging efficiency are likely to depend critically on the distribution of the prey resource and the size and distribution of introduced underwater structures. For multiple-prey loaders, changes in prey distribution affected the searching time necessary to catch a set amount of prey, which in turn affected foraging efficiency. The spatial aggregation of prey around small devices (∼9 × 9 m) created a valuable habitat for successful foraging, resulting in shorter intervals between prey captures and higher foraging efficiency. The presence of large devices (∼24 × 24 m), however, represented an obstacle to predator movement, thus increasing the intervals between prey captures. In contrast, for single-prey loaders the introduction of spatial aggregation of the resources did not represent an advantage, suggesting that their foraging efficiency is more strongly affected by other factors, such as the time to find the first prey item, which occurred faster in the presence of large devices.
The development of this theoretical model represents a useful starting point to understand the energetic reasons for a range of potential predator responses to spatial heterogeneity and environmental uncertainties in terms of search behaviour and predator–prey interactions. We highlight future directions that integrated empirical and modelling studies should take to improve our ability to predict how diving predators will be impacted by the deployment of manmade structures in the marine environment. PMID:25250211

  10. Intelligent control of robotic arm/hand systems for the NASA EVA retriever using neural networks

    NASA Technical Reports Server (NTRS)

    Mclauchlan, Robert A.

    1989-01-01

    Adaptive/general learning algorithms using varying neural network models are considered for the intelligent control of robotic arm plus dextrous hand/manipulator systems. Results are summarized and discussed for the use of the Barto/Sutton/Anderson neuronlike, unsupervised learning controller as applied to the stabilization of an inverted pendulum on a cart system. Recommendations are made for the application of the controller and a kinematic analysis for trajectory planning to simple object retrieval (chase/approach and capture/grasp) scenarios in two dimensions.

  11. Using a hybrid subtyping model to capture patterns and dimensionality of depressive and anxiety symptomatology in the general population.

    PubMed

    Wardenaar, Klaas J; Wanders, Rob B K; Ten Have, Margreet; de Graaf, Ron; de Jonge, Peter

    2017-06-01

    Researchers have tried to identify more homogeneous subtypes of major depressive disorder (MDD) with latent class analyses (LCA). However, this approach does not do justice to the dimensional nature of psychopathology. In addition, anxiety and functioning levels have seldom been integrated in subtyping efforts. Therefore, this study used a hybrid discrete-dimensional approach to identify subgroups with shared patterns of depressive and anxiety symptomatology, while accounting for functioning levels. The Comprehensive International Diagnostic Interview (CIDI) 1.1 was used to assess previous-year depressive and anxiety symptoms in the Netherlands Mental Health Survey and Incidence Study-1 (NEMESIS-1; n=5583). The data were analyzed with factor analyses, LCA and hybrid mixed-measurement item response theory (MM-IRT) with and without functioning covariates. Finally, the classes' predictors (measured one year earlier) and outcomes (measured two years later) were investigated. A 3-class MM-IRT model with functioning covariates best described the data and consisted of a 'healthy' class (74.2%) and two symptomatic classes ('sleep/energy' [13.4%]; 'mood/anhedonia' [12.4%]). Factors including older age, urbanicity, higher severity and presence of 1-year MDD predicted membership of either symptomatic class vs. the healthy class. Both symptomatic classes showed poorer 2-year outcomes (i.e. disorders, poor functioning) than the healthy class. The odds of MDD after two years were especially increased in the mood/anhedonia class. A limitation is that symptoms were assessed for the previous year, whereas functioning was assessed for the present. The heterogeneity of depression and anxiety symptomatology is optimally captured by a hybrid discrete-dimensional subtyping model. Importantly, accounting for functioning levels helps to capture clinically relevant interpersonal differences. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. An Ontology-Based Archive Information Model for the Planetary Science Community

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris

    2008-01-01

    The Planetary Data System (PDS) information model is a mature but complex model that has been used to capture over 30 years of planetary science data for the PDS archive. As the de facto information model for the planetary science data archive, it is being adopted by the International Planetary Data Alliance (IPDA) as their archive data standard. However, after seventeen years of evolutionary change the model needs refinement. First, a formal specification is needed to explicitly capture the model in a commonly accepted data engineering notation. Second, the core and essential elements of the model need to be identified to help simplify the overall archive process. A team of PDS technical staff members has captured the PDS information model in an ontology modeling tool. Using the resulting knowledge base, work continues to identify the core elements, identify problems and issues, and then test proposed modifications to the model. The final deliverables of this work will include specifications for the next-generation PDS information model and the initial set of IPDA archive data standards. Having the information model captured in an ontology modeling tool also makes the model suitable for use by Semantic Web applications.

  13. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design; while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  14. Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; DeCroix, David; Breault, Ronald W.

    2013-07-30

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design; while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  15. Controlling reactivity of nanoporous catalyst materials by tuning reaction product-pore interior interactions: Statistical mechanical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Ackerman, David M.; Lin, Victor S.-Y.

    2013-04-02

    Statistical mechanical modeling is performed of a catalytic conversion reaction within a functionalized nanoporous material to assess the effect of varying the reaction product-pore interior interaction from attractive to repulsive. A strong enhancement in reactivity is observed not just due to the shift in reaction equilibrium towards completion but also due to enhanced transport within the pore resulting from reduced loading. The latter effect is strongest for highly restricted transport (single-file diffusion), and applies even for irreversible reactions. The analysis is performed utilizing a generalized hydrodynamic formulation of the reaction-diffusion equations which can reliably capture the complex interplay between reaction and restricted transport.
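The interplay between reaction and restricted transport that the abstract describes can be illustrated, in a much cruder mean-field form than the paper's generalized hydrodynamic treatment, with a steady-state reaction-diffusion solve for a reactant converting inside a pore open at both ends. All parameters are hypothetical; lowering the effective diffusivity stands in for the transport limitation at high loading.

```python
import numpy as np

L, N = 1.0, 101     # pore length (arbitrary units) and grid points
k = 5.0             # first-order conversion rate
h = L / (N - 1)

def steady_state_profile(D):
    """Solve D c'' - k c = 0 with c = 1 at both pore mouths (finite
    differences), returning the reactant concentration profile."""
    A = np.zeros((N, N))
    b = np.zeros(N)
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 1.0
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = D / h**2
        A[i, i] = -2.0 * D / h**2 - k
    return np.linalg.solve(A, b)

# Low effective diffusivity (strongly restricted transport) starves the
# pore interior of reactant; high diffusivity keeps it supplied.
c_slow = steady_state_profile(D=0.01)
c_fast = steady_state_profile(D=1.0)
print(round(float(c_slow[N // 2]), 4), round(float(c_fast[N // 2]), 4))
```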

  16. Modeling and Analysis of Large Amplitude Flight Maneuvers

    NASA Technical Reports Server (NTRS)

    Anderson, Mark R.

    2004-01-01

    Analytical methods for stability analysis of large amplitude aircraft motion have been slow to develop because many nonlinear system stability assessment methods are restricted to a state-space dimension of less than three. The proffered approach is to create regional cell-to-cell maps for strategically located two-dimensional subspaces within the higher-dimensional model state-space. These regional solutions capture nonlinear behavior better than linearized point solutions. They also avoid the computational difficulties that emerge when attempting to create a cell map for the entire state-space. Example stability results are presented for a general aviation aircraft and a micro-aerial vehicle configuration. The analytical results are consistent with characteristics that were discovered during previous flight-testing.
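A cell-to-cell map of the kind described above can be sketched in a few lines: discretize a 2-D subspace into cells, map each cell centre through a fixed integration interval of the dynamics, and follow the resulting cell map to find equilibrium cells and their approximate domains of attraction. The damped pendulum below is a stand-in for the aircraft dynamics; the system, grid, and step sizes are illustrative only.

```python
import numpy as np

def step(state, dt=0.05, n=40):
    """Integrate theta' = omega, omega' = -sin(theta) - 0.5*omega
    for n Euler steps (a toy nonlinear system, not aircraft dynamics)."""
    th, om = state
    for _ in range(n):
        th, om = th + dt * om, om + dt * (-np.sin(th) - 0.5 * om)
    return np.array([th, om])

# Grid of cells covering the 2-D region of interest.
n_cells = 41
th_edges = np.linspace(-np.pi, np.pi, n_cells + 1)
om_edges = np.linspace(-2.0, 2.0, n_cells + 1)

def cell_index(state):
    i = np.searchsorted(th_edges, state[0]) - 1
    j = np.searchsorted(om_edges, state[1]) - 1
    if 0 <= i < n_cells and 0 <= j < n_cells:
        return i * n_cells + j
    return -1                      # "sink" cell: trajectory left the region

# Centre-point cell map: one image cell per cell.
centres_th = 0.5 * (th_edges[:-1] + th_edges[1:])
centres_om = 0.5 * (om_edges[:-1] + om_edges[1:])
image = np.empty(n_cells * n_cells, dtype=int)
for i, th in enumerate(centres_th):
    for j, om in enumerate(centres_om):
        image[i * n_cells + j] = cell_index(step((th, om)))

def terminal_cell(c, max_iter=200):
    """Follow the cell map until a self-mapped cell or the sink is hit."""
    for _ in range(max_iter):
        if c == -1 or image[c] == c:
            return c
        c = image[c]
    return c

attractor_of = np.array([terminal_cell(c) for c in range(n_cells**2)])
origin_cell = cell_index(np.array([0.0, 0.0]))
basin_size = int(np.sum(attractor_of == origin_cell))
print(basin_size)   # cells attracted to the equilibrium at the origin
```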

  17. Exertional myopathy in a grizzly bear (Ursus arctos) captured by leghold snare.

    PubMed

    Cattet, Marc; Stenhouse, Gordon; Bollinger, Trent

    2008-10-01

    We diagnosed exertional myopathy (EM) in a grizzly bear (Ursus arctos) that died approximately 10 days after capture by leghold snare in west-central Alberta, Canada, in June 2003. The diagnosis was based on history, post-capture movement data, gross necropsy, histopathology, and serum enzyme levels. We were unable to determine whether EM was the primary cause of death because autolysis precluded accurate evaluation of all tissues. Nevertheless, comparison of serum aspartate aminotransferase and creatine kinase concentrations and survival between the affected bear and other grizzly bears captured by leghold snare in the same research project suggests that EM also occurred in other bears, but that it is not generally a cause of mortality. We propose, however, that the occurrence of nonfatal EM in grizzly bears after capture by leghold snare has potential implications for the use of this capture method, including negative effects on wildlife welfare and research data.

  18. Electrochemical capture and release of carbon dioxide

    DOE PAGES

    Rheinhardt, Joseph H.; Singh, Poonam; Tarakeshwar, Pilarisetty; ...

    2017-01-18

    Understanding the chemistry of carbon dioxide is key to effecting changes in atmospheric concentrations. One area of intense interest is CO2 capture in chemically reversible cycles relevant to carbon capture technologies. Most CO2 capture methods involve thermal cycles in which a nucleophilic agent captures CO2 from impure gas streams (e.g., flue gas), followed by a thermal process in which pure CO2 is released. Several reviews have detailed progress in these approaches. A less explored strategy uses electrochemical cycles to capture CO2 and release it in pure form. These cycles typically rely on electrochemical generation of nucleophiles that attack CO2 at the electrophilic carbon atom, forming a CO2 adduct. Then, CO2 is released in pure form via a subsequent electrochemical step. In this Perspective, we describe electrochemical cycles for CO2 capture and release, emphasizing electrogenerated nucleophiles. We also discuss some advantages and disadvantages inherent in this general approach.

  19. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David; Agarwal, Deborah A.; Sun, Xin

    2011-09-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  20. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.; Agarwal, D.; Sun, X.

    2011-01-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  1. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
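The contour-analysis step described above can be sketched as follows: fit a periodic B-spline to the contour points, evaluate the signed curvature of the parametric curve, and take curvature extrema as candidate cut points between body parts. A synthetic five-lobed contour stands in for a real extracted silhouette, so the shapes and thresholds here are illustrative only.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic closed contour with five "limb-like" protrusions.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
r = 1.0 + 0.3 * np.cos(5.0 * t)
x, y = r * np.cos(t), r * np.sin(t)

tck, u = splprep([x, y], s=0.0, per=True)    # periodic B-spline fit
uu = np.linspace(0.0, 1.0, 2000, endpoint=False)
dx, dy = splev(uu, tck, der=1)
ddx, ddy = splev(uu, tck, der=2)

# Signed curvature of the parametric spline curve.
kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Strict local maxima above half the peak curvature ~ protrusion tips.
n = len(uu)
peaks = [i for i in range(n)
         if kappa[i] > kappa[i - 1] and kappa[i] > kappa[(i + 1) % n]
         and kappa[i] > 0.5 * kappa.max()]
print(len(peaks))   # one dominant curvature peak expected per lobe
```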

  2. Capture mechanism in Palaeotropical pitcher plants (Nepenthaceae) is constrained by climate

    PubMed Central

    Moran, Jonathan A.; Gray, Laura K.; Clarke, Charles; Chin, Lijin

    2013-01-01

    Background and Aims Nepenthes (Nepenthaceae, approx. 120 species) are carnivorous pitcher plants with a centre of diversity comprising the Philippines, Borneo, Sumatra and Sulawesi. Nepenthes pitchers use three main mechanisms for capturing prey: epicuticular waxes inside the pitcher; a wettable peristome (a collar-shaped structure around the opening); and viscoelastic fluid. Previous studies have provided evidence suggesting that the first mechanism may be more suited to seasonal climates, whereas the latter two might be more suited to perhumid environments. In this study, this idea was tested using climate envelope modelling. Methods A total of 94 species, comprising 1978 populations, were grouped by prey capture mechanism (large peristome, small peristome, waxy, waxless, viscoelastic, non-viscoelastic, ‘wet' syndrome and ‘dry' syndrome). Nineteen bioclimatic variables were used to model habitat suitability at approx. 1 km resolution for each group, using Maxent, a presence-only species distribution modelling program. Key Results Prey capture groups putatively associated with perhumid conditions (large peristome, waxless, viscoelastic and ‘wet' syndrome) had more restricted areas of probable habitat suitability than those associated putatively with less humid conditions (small peristome, waxy, non-viscoelastic and ‘dry' syndrome). Overall, the viscoelastic group showed the most restricted area of modelled suitable habitat. Conclusions The current study is the first to demonstrate that the prey capture mechanism in a carnivorous plant is constrained by climate. Nepenthes species employing peristome-based and viscoelastic fluid-based capture are largely restricted to perhumid regions; in contrast, the wax-based mechanism allows successful capture in both perhumid and more seasonal areas. 
Possible reasons for the maintenance of peristome-based and viscoelastic fluid-based capture mechanisms in Nepenthes are discussed in relation to the costs and benefits associated with a given prey capture strategy. PMID:23975653

  3. Capture mechanism in Palaeotropical pitcher plants (Nepenthaceae) is constrained by climate.

    PubMed

    Moran, Jonathan A; Gray, Laura K; Clarke, Charles; Chin, Lijin

    2013-11-01

    Nepenthes (Nepenthaceae, approx. 120 species) are carnivorous pitcher plants with a centre of diversity comprising the Philippines, Borneo, Sumatra and Sulawesi. Nepenthes pitchers use three main mechanisms for capturing prey: epicuticular waxes inside the pitcher; a wettable peristome (a collar-shaped structure around the opening); and viscoelastic fluid. Previous studies have provided evidence suggesting that the first mechanism may be more suited to seasonal climates, whereas the latter two might be more suited to perhumid environments. In this study, this idea was tested using climate envelope modelling. A total of 94 species, comprising 1978 populations, were grouped by prey capture mechanism (large peristome, small peristome, waxy, waxless, viscoelastic, non-viscoelastic, 'wet' syndrome and 'dry' syndrome). Nineteen bioclimatic variables were used to model habitat suitability at approx. 1 km resolution for each group, using Maxent, a presence-only species distribution modelling program. Prey capture groups putatively associated with perhumid conditions (large peristome, waxless, viscoelastic and 'wet' syndrome) had more restricted areas of probable habitat suitability than those associated putatively with less humid conditions (small peristome, waxy, non-viscoelastic and 'dry' syndrome). Overall, the viscoelastic group showed the most restricted area of modelled suitable habitat. The current study is the first to demonstrate that the prey capture mechanism in a carnivorous plant is constrained by climate. Nepenthes species employing peristome-based and viscoelastic fluid-based capture are largely restricted to perhumid regions; in contrast, the wax-based mechanism allows successful capture in both perhumid and more seasonal areas. Possible reasons for the maintenance of peristome-based and viscoelastic fluid-based capture mechanisms in Nepenthes are discussed in relation to the costs and benefits associated with a given prey capture strategy.

  4. Deepwater sculpin status and recovery in Lake Ontario

    USGS Publications Warehouse

    Weidel, Brian C.; Walsh, Maureen; Connerton, Michael J.; Lantry, Brian F.; Lantry, Jana R.; Holden, Jeremy P.; Yuille, Michael J.; Hoyle, James A.

    2017-01-01

    Deepwater sculpin are important in oligotrophic lakes as one of the few fishes that use deep profundal habitats and link invertebrates in those habitats to piscivores. In Lake Ontario the species was once abundant; however, drastic declines in the mid-1900s led some to suggest the species had been extirpated and ultimately led Canadian and U.S. agencies to elevate the species' conservation status. Following two decades of surveys with no captures, deepwater sculpin were first caught in low numbers in 1996, and by the early 2000s there were indications of population recovery. We updated the status of Lake Ontario deepwater sculpin through 2016 to inform resource management and conservation. Our data set comprised 8431 bottom trawls sampled from 1996 to 2016, in U.S. and Canadian waters spanning depths from 5 to 225 m. Annual density estimates generally increased from 1996 through 2016, and an exponential model estimated the rate of population increase at ~ 59% per year. The mean total length and the proportion of fish greater than the estimated length at maturation (~ 116 mm) generally increased until a peak in 2013. In addition, the mean length of all deepwater sculpin captured in a trawl significantly increased with depth. Across all years examined, deepwater sculpin densities generally increased with depth, increasing sharply at depths > 150 m. Bottom trawl observations suggest the Lake Ontario deepwater sculpin population has recovered, and current densities and biomass densities may now be similar to those of the other Great Lakes.
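The exponential growth estimate in the abstract can be illustrated with a short sketch (synthetic data only; the survey's actual densities are not reproduced here): a rate of ~59% per year corresponds to fitting log-density against year and back-transforming the slope.

```python
import numpy as np

# Hypothetical illustration: fit N(t) = N0 * exp(r * t) to noisy annual
# density estimates by log-linear least squares.
years = np.arange(1996, 2017)
r_true = np.log(1.59)                       # a 59% annual increase -> r ~ 0.464
rng = np.random.default_rng(0)
density = 0.1 * np.exp(r_true * (years - years[0])) * rng.lognormal(0.0, 0.2, years.size)

slope, intercept = np.polyfit(years - years[0], np.log(density), 1)
annual_increase = np.exp(slope) - 1         # back-transform to percent per year
print(f"estimated rate of increase: {annual_increase:.0%} per year")
```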

  5. Generalization of value in reinforcement learning by humans

    PubMed Central

    Wimmer, G. Elliott; Daw, Nathaniel D.; Shohamy, Daphna

    2012-01-01

    Research in decision making has focused on the role of dopamine and its striatal targets in guiding choices via learned stimulus-reward or stimulus-response associations, behavior that is well-described by reinforcement learning (RL) theories. However, basic RL is relatively limited in scope and does not explain how learning about stimulus regularities or relations may guide decision making. A candidate mechanism for this type of learning comes from the domain of memory, which has highlighted a role for the hippocampus in learning of stimulus-stimulus relations, typically dissociated from the role of the striatum in stimulus-response learning. Here, we used fMRI and computational model-based analyses to examine the joint contributions of these mechanisms to RL. Humans performed an RL task with added relational structure, modeled after tasks used to isolate hippocampal contributions to memory. On each trial participants chose one of four options, but the reward probabilities for pairs of options were correlated across trials. This (uninstructed) relationship between pairs of options potentially enabled an observer to learn about options’ values based on experience with the other options and to generalize across them. We observed BOLD activity related to learning in the striatum and also in the hippocampus. By comparing a basic RL model to one augmented to allow feedback to generalize between correlated options, we tested whether choice behavior and BOLD activity were influenced by the opportunity to generalize across correlated options. Although such generalization goes beyond standard computational accounts of RL and striatal BOLD, both choices and striatal BOLD were better explained by the augmented model. Consistent with the hypothesized role for the hippocampus in this generalization, functional connectivity between the ventral striatum and hippocampus was modulated, across participants, by the ability of the augmented model to capture participants’ choice. 
Our results thus point toward an interactive model in which striatal RL systems may employ relational representations typically associated with the hippocampus. PMID:22487039
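The augmented model described above can be caricatured in a few lines (a hedged sketch with hypothetical option names and parameter values, not the authors' fitted model): feedback for a chosen option also nudges the value of its correlated partner, scaled by a generalization weight.

```python
import random

def update(Q, chosen, partner, reward, alpha=0.3, g=0.5):
    delta = reward - Q[chosen]        # prediction error for the chosen option
    Q[chosen] += alpha * delta        # basic RL update
    Q[partner] += alpha * g * delta   # generalized update to the correlated option
    return Q

Q = {"A1": 0.5, "A2": 0.5, "B1": 0.5, "B2": 0.5}
pairs = {"A1": "A2", "A2": "A1", "B1": "B2", "B2": "B1"}
random.seed(1)
for _ in range(400):
    choice = random.choice(list(Q))
    reward = 1.0 if choice.startswith("A") else 0.0   # the A pair pays off
    update(Q, choice, pairs[choice], reward)
print(Q)  # values of the correlated options move together
```

Setting g = 0 recovers basic RL, which is the model comparison the abstract describes.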

  6. Precipitation extreme changes exceeding moisture content increases in MIROC and IPCC climate models

    PubMed Central

    Sugiyama, Masahiro; Shiogama, Hideo; Emori, Seita

    2010-01-01

    Precipitation extreme changes are often assumed to scale with, or to be constrained by, the change in atmospheric moisture content. Studies have generally confirmed the scaling based on moisture content for the midlatitudes but identified deviations for the tropics. In fact, half of the twelve selected Intergovernmental Panel on Climate Change (IPCC) models exhibit increases faster than the climatological-mean precipitable water change for high percentiles of tropical daily precipitation, albeit with significant intermodel scatter. Decomposition of the precipitation extreme changes reveals that the variations among models can be attributed primarily to differences in the upward velocity. Both the amplitude and the vertical profile of vertical motion are found to affect precipitation extremes. A recently proposed scaling that incorporates these dynamical effects can capture the basic features of precipitation changes in both the tropics and the midlatitudes. In particular, the increases in tropical precipitation extremes significantly exceed the precipitable water change in the Model for Interdisciplinary Research on Climate (MIROC), a coupled general circulation model with the highest resolution among the IPCC climate models, whose precipitation characteristics have been shown to reasonably match those of observations. The expected intensification of tropical disturbances points to the possibility of precipitation extreme increases beyond the moisture content increase, as found in MIROC and some of the IPCC models. PMID:20080720
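The scaling argument can be made concrete with a toy calculation (illustrative numbers only): thermodynamic scaling ties extreme-precipitation change to the moisture increase, roughly the Clausius-Clapeyron rate of ~7% per kelvin, while a dynamical factor from changes in upward velocity can push tropical extremes beyond it.

```python
def extreme_change_percent(delta_t_k, cc_rate=7.0, dynamic_factor=1.0):
    """Percent change in precipitation extremes for a warming of delta_t_k kelvin."""
    return cc_rate * delta_t_k * dynamic_factor

print(extreme_change_percent(3.0))                      # moisture-only scaling
print(extreme_change_percent(3.0, dynamic_factor=1.5))  # with dynamical amplification
```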

  7. Modeling for (physical) biologists: an introduction to the rule-based approach

    PubMed Central

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-01-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions. PMID:26178138
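The idea of a rule as "required reactant properties + transformation + rate law" can be sketched as follows (hypothetical names, not BioNetGen/BNGL syntax):

```python
from dataclasses import dataclass

@dataclass
class Protein:
    name: str
    site: str  # state of a particular site, e.g. "U" (unmodified) or "P" (phosphorylated)

def phosphorylation_rule(molecules, k=0.1):
    """Generalized reaction: any protein whose site is 'U' may become 'P' at rate k."""
    propensity = 0.0
    for m in molecules:
        if m.site == "U":   # the property a reactant must possess
            propensity += k # mass-action rate law per matching molecule
    return propensity

pool = [Protein("EGFR", "U"), Protein("EGFR", "P"), Protein("Shc", "U")]
print(phosphorylation_rule(pool))  # two molecules match the rule
```

The rule applies to any protein with a matching site state, which is what makes the reaction "generalized" in the sense of the abstract.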

  8. Validated simulator for space debris removal with nets and other flexible tethers applications

    NASA Astrophysics Data System (ADS)

    Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil

    2016-12-01

    In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for the dynamics of net-shaped elastic bodies and their interactions with rigid bodies has been developed. Its main application is to aid net design and to test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid- and flexible-body dynamics. Flexible bodies were implemented using the Cosserat rod model, which allows simulation of flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with the simulation is possible, e.g. for implementing control. The underlying model has been experimentally validated; because of the significant influence of gravity, the experiment had to be performed in microgravity conditions. The validation experiment, flown on a parabolic flight, was a downscaled version of the Envisat capture process. The prepacked net was launched towards the satellite model; it expanded, hit the model and wrapped around it. The whole process was recorded with two fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare the net dynamics with the respective simulations and thereby validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft operated by the National Research Council in Ottawa, Canada. Validation results show that the model reflects the physics of the phenomenon accurately enough to be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation.
    Results are presented and typical use cases are discussed, showing that the software may be used to design throw-nets for space debris capturing, but also to simulate the deorbitation process, a chaser control system or general interactions between rigid and elastic bodies, all in a convenient and efficient way. The presented work was led by SKA Polska under ESA contract, within the CleanSpace initiative.
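For intuition about the flexible-body side, here is a greatly simplified sketch (a 1D mass-spring-damper chain stepped with explicit Euler, not the validated Cosserat-rod model): a stretched tether segment relaxes back toward its rest length.

```python
import numpy as np

n, k, c, m = 10, 50.0, 5.0, 0.1   # nodes, stiffness, damping, node mass (all hypothetical)
rest, dt = 0.1, 1e-3              # segment rest length, time step
pos = np.linspace(0.0, rest * (n - 1), n)  # 1D node positions along the tether
vel = np.zeros(n)
pos[-1] += 0.05                   # stretch the free end

for _ in range(20000):            # 20 s of simulated time
    force = np.zeros(n)
    for i in range(n - 1):
        stretch = (pos[i + 1] - pos[i]) - rest
        rel_vel = vel[i + 1] - vel[i]
        f = k * stretch + c * rel_vel   # spring + damper on each segment
        force[i] += f
        force[i + 1] -= f
    vel[1:] += (force[1:] / m) * dt     # node 0 stays clamped
    pos[1:] += vel[1:] * dt

print(round(pos[-1] - pos[-2], 3))      # last segment settles back near its rest length
```

A real net/tether simulator of the kind described above adds bending, torsion and full contact dynamics; this chain only illustrates the stretching-and-damping behaviour.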

  9. Communication: Fragment-based Hamiltonian model of electronic charge-excitation gaps and gap closure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valone, Steven Michael; Pilania, Ghanshyam; Liu, Xiang-Yang

    Capturing key electronic properties such as charge excitation gaps within models at or above the atomic scale presents an ongoing challenge to understanding molecular, nanoscale, and condensed phase systems. One strategy is to describe the system in terms of properties of interacting material fragments, but it is unclear how to accomplish this for charge-excitation and charge-transfer phenomena. Hamiltonian models such as the Hubbard model provide formal frameworks for analyzing gap properties but are couched purely in terms of states of electrons, rather than the states of the fragments at the scale of interest. The recently introduced Fragment Hamiltonian (FH) model uses fragments in different charge states as its building blocks, enabling a uniform, quantum-mechanical treatment that captures the charge-excitation gap. These gaps are preserved in terms of inter-fragment charge-transfer hopping integrals T and on-fragment parameters U(FH). The FH model generalizes the standard Hubbard model (a single intra-band hopping integral t and on-site repulsion U) from quantum states for electrons to quantum states for fragments. In this paper, we demonstrate that even for simple two-fragment and multi-fragment systems, gap closure is enabled once T exceeds the threshold set by U(FH), thus providing new insight into the nature of metal-insulator transitions. This result is in contrast to the standard Hubbard model for 1D rings, for which Lieb and Wu proved that gap closure was impossible, regardless of the choices for t and U.
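The contrast case can be checked numerically from textbook results (a sketch of the standard Hubbard dimer only, not the FH model): the exact sector ground energies of the two-site Hubbard model give a charge gap that stays positive for any t > 0.

```python
import math

def charge_gap(t, U):
    """Charge gap E(3) + E(1) - 2*E(2) of the two-site Hubbard dimer.

    Exact sector ground energies: E(1) = -t, E(2) = (U - sqrt(U^2 + 16 t^2))/2,
    E(3) = U - t, giving gap = sqrt(U^2 + 16 t^2) - 2 t.
    """
    return math.sqrt(U ** 2 + 16 * t ** 2) - 2 * t

for t, U in [(1.0, 0.0), (1.0, 4.0), (10.0, 1.0)]:
    print(t, U, charge_gap(t, U))  # positive in every case
```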

  10. Communication: Fragment-based Hamiltonian model of electronic charge-excitation gaps and gap closure

    DOE PAGES

    Valone, Steven Michael; Pilania, Ghanshyam; Liu, Xiang-Yang; ...

    2015-11-13

    Capturing key electronic properties such as charge excitation gaps within models at or above the atomic scale presents an ongoing challenge to understanding molecular, nanoscale, and condensed phase systems. One strategy is to describe the system in terms of properties of interacting material fragments, but it is unclear how to accomplish this for charge-excitation and charge-transfer phenomena. Hamiltonian models such as the Hubbard model provide formal frameworks for analyzing gap properties but are couched purely in terms of states of electrons, rather than the states of the fragments at the scale of interest. The recently introduced Fragment Hamiltonian (FH) model uses fragments in different charge states as its building blocks, enabling a uniform, quantum-mechanical treatment that captures the charge-excitation gap. These gaps are preserved in terms of inter-fragment charge-transfer hopping integrals T and on-fragment parameters U(FH). The FH model generalizes the standard Hubbard model (a single intra-band hopping integral t and on-site repulsion U) from quantum states for electrons to quantum states for fragments. In this paper, we demonstrate that even for simple two-fragment and multi-fragment systems, gap closure is enabled once T exceeds the threshold set by U(FH), thus providing new insight into the nature of metal-insulator transitions. This result is in contrast to the standard Hubbard model for 1D rings, for which Lieb and Wu proved that gap closure was impossible, regardless of the choices for t and U.

  11. Modeling fuel succession

    USGS Publications Warehouse

    Davis, Brett; Van Wagtendonk, Jan W.; Beck, Jen; van Wagtendonk, Kent A.

    2009-01-01

    Surface fuels data are of critical importance for supporting fire incident management, risk assessment, and fuel management planning, but the development of surface fuels data can be expensive and time consuming. The data development process is extensive, generally beginning with acquisition of remotely sensed spatial data such as aerial photography or satellite imagery (Keane and others 2001). The spatial vegetation data are then crosswalked to a set of fire behavior fuel models that describe the available fuels (the burnable portions of the vegetation) (Anderson 1982, Scott and Burgan 2005). Finally, spatial fuels data are used as input to tools such as FARSITE and FlamMap to model current and potential fire spread and behavior (Finney 1998, Finney 2006). The capture date of the remotely sensed data defines the period for which the vegetation, and, therefore, fuels, data are most accurate. The more time that passes after the capture date, the less accurate the data become due to vegetation growth and processes such as fire. Subsequently, the results of any fire simulation based on these data become less accurate as the data age. Because of the amount of labor and expense required to develop these data, keeping them updated may prove to be a challenge. In this article, we describe the Sierra Nevada Fuel Succession Model, a modeling tool that can quickly and easily update surface fuel models with a minimum of additional input data. Although it was developed for use by Yosemite, Sequoia, and Kings Canyon National Parks, it is applicable to much of the central and southern Sierra Nevada. Furthermore, the methods used to develop the model have national applicability.

  12. Influence of atrial substrate on local capture induced by rapid pacing of atrial fibrillation.

    PubMed

    Rusu, Alexandru; Jacquemet, Vincent; Vesin, Jean-Marc; Virag, Nathalie

    2014-05-01

    Preliminary studies showed that the septum area was the only location allowing local capture of both atria during rapid pacing of atrial fibrillation (AF) from a single site. The present model-based study investigated the influence of atrial substrate on the ability to capture AF when pacing the septum. Three biophysical models of AF with an identical anatomy from human atria but with different AF substrates were used: (i) AF based on multiple wavelets, (ii) AF based on heterogeneities in vagal activation, (iii) AF based on heterogeneities in repolarization. A fourth anatomical model without Bachmann's bundle (BB) was also implemented. Rapid pacing was applied from the septum at pacing cycle lengths in the range of 50-100% of the AF cycle length. Local capture was automatically assessed with 24 pairs of electrodes evenly distributed over the atrial surface. The results were averaged over 16 AF simulations. In the homogeneous substrate, AF capture could reach 80% of the atrial surface. Heterogeneities degraded the ability to capture during AF. In the vagal substrate, capture tended to be more regular and the degradation of capture was not directly related to the spatial extent of the heterogeneities. In the third substrate, heterogeneities induced wave anchorings and wavebreaks even in areas close to the pacing site, with a more dramatic effect on AF capture. Finally, BB did not significantly affect the ability to capture. Atrial fibrillation substrate had a significant effect on rapid pacing outcomes. The response to therapeutic pacing may therefore be specific to each patient.

  13. Simulating Eastern- and Central-Pacific Type ENSO Using a Simple Coupled Model

    NASA Astrophysics Data System (ADS)

    Fang, Xianghui; Zheng, Fei

    2018-06-01

    Severe biases exist in state-of-the-art general circulation models (GCMs) in capturing realistic central-Pacific (CP) El Niño structures. At the same time, many observational analyses have emphasized that thermocline (TH) feedback and zonal advective (ZA) feedback play dominant roles in the development of eastern-Pacific (EP) and CP El Niño-Southern Oscillation (ENSO), respectively. In this work, a simple linear air-sea coupled model, which can accurately depict the strength distribution of the TH and ZA feedbacks in the equatorial Pacific, is used to investigate these two types of El Niño. The results indicate that the model can reproduce the main characteristics of CP ENSO if the TH feedback is switched off and the ZA feedback is retained as the only positive feedback, confirming the dominant role played by ZA feedback in the development of CP ENSO. Further experiments indicate that, through a simple nonlinear control approach, many ENSO characteristics, including the existence of both CP and EP El Niño and the asymmetries between El Niño and La Niña, can be successfully captured using the simple linear air-sea coupled model. These analyses indicate that an accurate depiction of the climatological sea surface temperature distribution and the related ZA feedback, which are the subject of severe biases in GCMs, is very important in simulating a realistic CP El Niño.

  14. Comparative system identification of flower tracking performance in three hawkmoth species reveals adaptations for dim light vision.

    PubMed

    Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon

    2017-04-05

    Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also has consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. Much less explored, however, are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model; however, in several cases they could be explained through the addition of a second model parameter, a simple scaling term that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).
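The delay-model idea can be sketched with a toy transfer function (parameter values hypothetical, not the fitted ones): modeling lower light as a longer sensorimotor delay tau in H(s) = k e^(-s tau)/(1 + sT) increases phase lag across the flower-motion frequency band.

```python
import numpy as np

def phase_lag_deg(freq_hz, tau, T=0.05):
    """Total phase lag (degrees) of H(s) = k * exp(-s*tau) / (1 + s*T)."""
    w = 2 * np.pi * np.asarray(freq_hz)
    return np.degrees(w * tau + np.arctan(w * T))

freqs = np.array([0.5, 2.0, 5.0])  # Hz, roughly a flower-motion band
print(np.round(phase_lag_deg(freqs, tau=0.05), 1))  # shorter ("bright") delay
print(np.round(phase_lag_deg(freqs, tau=0.12), 1))  # longer ("dim") delay: larger lag
```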

  15. A generalized multi-dimensional mathematical model for charging and discharging processes in a supercapacitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A

    A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double-layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume-averaging process, is a widely used concept for multiphase flow equations ([28], [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22], [39], [12]. The formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing upscaled parameters from lower length-scale simulations and experiments. Despite irregular geometric configurations, including the porous electrode, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrate how this new formulation can be applied to non-planar supercapacitors.

  16. Vaccination Against Porcine Circovirus-2 Reduces Severity of Tuberculosis in Wild Boar.

    PubMed

    Risco, David; Bravo, María; Martínez, Remigio; Torres, Almudena; Gonçalves, Pilar; Cuesta, Jesús; García-Jiménez, Waldo; Cerrato, Rosario; Iglesias, Rocío; Galapero, Javier; Serrano, Emmanuel; Gómez, Luis; Fernández-Llario, Pedro; Hermoso de Mendoza, Javier

    2018-03-09

    Tuberculosis (TB) in wild boar (Sus scrofa) may be affected by coinfections with other pathogens, such as porcine circovirus type 2 (PCV2). Therefore, sanitary measures focused on controlling PCV2 could be useful in reducing the impact of TB in this wild suid. The aim of this study was to explore whether vaccination against PCV2 targeting young animals affects TB prevalence and TB severity in wild boar. The study was conducted on a game estate in mid-western Spain. Seventy animals of ages ranging from 4 to 8 months were captured, individually identified, vaccinated against PCV2 and released, forming a vaccinated group. Non-captured animals cohabiting with the vaccinated wild boar constituted the control group. Animals from both groups were hunted between 2013 and 2016, and a TB diagnosis based on pathological assessment and microbiological culture was made for all of them. The effect of PCV2 vaccination on TB prevalence and severity was explored using generalized linear models. Whereas TB prevalence was similar in the vaccinated and control groups (54.55 vs. 57.78%), vaccinated animals were less likely to develop generalized TB lesions. Furthermore, the mean TB severity score was significantly lower in vaccinated animals (1.55 vs. 2.42), suggesting a positive effect of PCV2 vaccination.

  17. Inter-decadal modulation of ENSO teleconnections to the Indian Ocean in a coupled model: Special emphasis on decay phase of El Niño

    NASA Astrophysics Data System (ADS)

    Chowdary, J. S.; Parekh, Anant; Gnanaseelan, C.; Sreenivas, P.

    2014-01-01

    Inter-decadal modulation of El Niño-Southern Oscillation (ENSO) teleconnections to the tropical Indian Ocean (TIO) is investigated in the coupled general circulation model Climate Forecast System (CFS) using a hundred-year integration. The model is able to capture the periodicity of El Niño variability, which is similar to that of the observations. The maximum TIO/north Indian Ocean (NIO) SST warming associated with El Niño (during the spring following the decay phase of El Niño) is well captured by the model. Detailed analysis reveals that surface heat flux variations mainly contribute to the El Niño-forced TIO SST variations in both the observations and the model. However, the spring warming is nearly stationary throughout the model integration period, indicating poor inter-decadal El Niño teleconnections. The observations, on the other hand, display maximum SST warming with strong seasonality from epoch to epoch. The model's El Niño decay, delayed by more than two seasons, results in persistent TIO/NIO SST warming through the following December, unlike in the observations. Ocean wave adjustments and persistent westerly wind anomalies over the equatorial Pacific are responsible for the late decay of El Niño in the model. Consistently late decay of El Niño throughout the model integration period (low variance) is mainly responsible for the poor inter-decadal ENSO teleconnections to the TIO/NIO. This study indicates that the model needs to reproduce El Niño decay-phase variability correctly to obtain decadal modulations in ENSO teleconnections.

  18. Cross-scale assessment of potential habitat shifts in a rapidly changing climate

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Holcombe, Tracy R.; Bella, Elizabeth S.; Carlson, Matthew L.; Graziano, Gino; Lamb, Melinda; Seefeldt, Steven S.; Morisette, Jeffrey T.

    2014-01-01

    We assessed the ability of climatic, environmental, and anthropogenic variables to predict areas of high risk for plant invasion, and we consider the relative importance and contribution of these predictor variables at two spatial scales in a region of rapidly changing climate. We created predictive distribution models, using Maxent, for three highly invasive plant species (Canada thistle, white sweetclover, and reed canarygrass) in Alaska at both a regional scale and a local scale. Regional-scale models encompassed southern coastal Alaska and were developed from topographic and climatic data at a 2 km (1.2 mi) spatial resolution. These models were also applied to future climate (2030). Local-scale models were spatially nested within the regional area; these models incorporated physiographic and anthropogenic variables at a 30 m (98.4 ft) resolution. Regional and local models performed well (AUC values > 0.7), with the exception of one species at each spatial scale. Regional models predict an increase in the area of suitable habitat for all species by 2030, with a general shift to higher-elevation areas; however, the distribution of each species was driven by different climate and topographical variables. In contrast, local models indicate that distance to rights-of-way and elevation are associated with habitat suitability for all three species at this spatial level. Combining results from regional models, capturing long-term distribution, and local models, capturing near-term establishment and distribution, offers a new and effective tool for highlighting at-risk areas and provides insight into how variables acting at different scales contribute to suitability predictions. The combination also provides an easy comparison, highlighting agreement between the two scales and areas where long-term distribution factors predict suitability while near-term factors do not, and vice versa.
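The cross-scale combination can be sketched as a simple overlay (thresholds and values hypothetical): flag cells where both the coarse regional surface and the fine local surface exceed a suitability threshold.

```python
import numpy as np

# Toy 2x2 suitability grids standing in for resampled Maxent outputs.
regional = np.array([[0.8, 0.3], [0.6, 0.9]])   # long-term climatic suitability
local    = np.array([[0.7, 0.6], [0.2, 0.8]])   # near-term establishment suitability
threshold = 0.5

agree_high = (regional > threshold) & (local > threshold)
print(agree_high)   # True where both scales predict risk
```

Cells where only one surface exceeds the threshold correspond to the abstract's cases where long-term factors predict suitability while near-term factors do not, or vice versa.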

  19. The dynamics of temperature and light on the growth of phytoplankton.

    PubMed

    Chen, Ming; Fan, Meng; Liu, Rui; Wang, Xiaoyu; Yuan, Xing; Zhu, Huaiping

    2015-11-21

    Motivated by lab and field observations of the hump-shaped effects of water temperature and light on the growth of phytoplankton, a bottom-up nutrient-phytoplankton model, which incorporates the combined effects of temperature and light, is proposed and analyzed to explore the dynamics of phytoplankton blooms. The population growth model reasonably captures the observed dynamics qualitatively. An ecological reproductive index is defined to characterize the growth of the phytoplankton, which also allows a comprehensive analysis of the role of temperature and light in the growth and reproductive characteristics of phytoplankton in general. The model provides a framework for studying the mechanisms of phytoplankton dynamics in shallow lakes and may even be employed to study controlled phytoplankton blooms. Copyright © 2015 Elsevier Ltd. All rights reserved.
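The hump-shaped forcing can be sketched with standard functional forms (assumed for illustration, not necessarily the paper's): a Gaussian temperature response multiplied by Steele's (1962) photoinhibition curve for light.

```python
import math

def growth_rate(temp_c, light, mu_max=1.2, t_opt=22.0, t_width=6.0, i_opt=150.0):
    """Hump-shaped growth in both drivers; all parameter values hypothetical."""
    f_temp = math.exp(-((temp_c - t_opt) / t_width) ** 2)     # Gaussian temperature response
    f_light = (light / i_opt) * math.exp(1 - light / i_opt)   # Steele photoinhibition curve
    return mu_max * f_temp * f_light

# Both factors peak at their optima, so growth is maximal at (t_opt, i_opt).
print(growth_rate(22, 150))                          # the maximum rate
print(growth_rate(12, 150), growth_rate(22, 400))    # too cool or over-lit: reduced
```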

  20. A model for combined targeting and tracking tasks in computer applications.

    PubMed

    Senanayake, Ransalu; Hoffmann, Errol R; Goonetilleke, Ravindra S

    2013-11-01

    Current models for targeted tracking are discussed and shown to be inadequate as a means of understanding the combined task of tracking, as in Drury's paradigm, with a final target to be aimed at, as in Fitts' paradigm. It is shown that the task has to be split into components that are, in general, performed sequentially, each with a movement time dependent on the difficulty of that component of the task. In some cases the task time may be controlled by Fitts' task difficulty, and in others it may be dominated by Drury's task difficulty. Based on an experiment that captured movement times for combinations of visually controlled and ballistic movements, a model for movement time in targeted tracking was developed.
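The sequential-component idea can be sketched directly from the two classic laws (coefficients hypothetical): total time is a Drury steering term plus a Fitts aimed-movement term.

```python
import math

def steering_time(D, W, a=0.1, b=0.05):
    return a + b * (D / W)                 # Drury's steering law, ID = D / W

def fitts_time(D, W, a=0.1, b=0.15):
    return a + b * math.log2(D / W + 1)    # Fitts' law, ID = log2(D / W + 1)

def combined_time(track_len, tunnel_w, target_d, target_w):
    # Sequential components, as the abstract describes: steer the tunnel,
    # then make the terminal aimed movement to the target.
    return steering_time(track_len, tunnel_w) + fitts_time(target_d, target_w)

print(combined_time(300, 30, 100, 10))
```

Whichever term dominates the sum corresponds to the abstract's observation that task time may be controlled by either difficulty measure.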

  1. Interaction with a field: a simple integrable model with backreaction

    NASA Astrophysics Data System (ADS)

    Mouchet, Amaury

    2008-09-01

    The classical model of an oscillator linearly coupled to a string captures, at a low price in technique, many general features of more realistic models describing a particle interacting with a field or an atom in an electromagnetic cavity. The scattering matrix and the asymptotic in- and out-waves on the string can be computed exactly, and the phenomenon of resonant scattering can be introduced in the simplest way. The dissipation induced by the coupling of the oscillator to the string can be studied completely. In the case of a d'Alembert string, the backreaction leads to an Abraham-Lorentz-Dirac-like equation. In the case of a Klein-Gordon string, one can see explicitly how radiation governs the (meta)stability of the (quasi)bound mode.

  2. Evaluation of different strategies for magnetic particle functionalization with DNA aptamers.

    PubMed

    Pérez-Ruiz, Elena; Lammertyn, Jeroen; Spasic, Dragana

    2016-12-25

    The optimal bio-functionalization of magnetic particles is essential for developing magnetic particle-based bioassays. Whereas functionalization with antibodies is generally well established, immobilization of DNA probes, such as aptamers, is not yet fully explored. In this work, four different types of commercially available magnetic particles, coated with streptavidin, maleimide or carboxyl groups, were evaluated for their surface coverage with aptamer bioreceptors, their efficiency in capturing target protein, and non-specific protein adsorption on their surface. A recently developed aptamer against the peanut allergen Ara h 1 protein was used as a model system. Conjugation of biotinylated Ara h 1 aptamer to the streptavidin particles led to the highest surface coverage, whereas the coverage of maleimide particles was 25% lower. Carboxylated particles appeared to be inadequate for DNA functionalization. Streptavidin particles also showed the greatest target-capturing efficiency, comparable to that of particles functionalized with anti-Ara h 1 antibody. The performance of streptavidin particles was additionally tested in a sandwich assay with the aptamer as a capture receptor on the particle surface. The limit of detection obtained was comparable to that of the same assay with an antibody as capture receptor, and superior to previously reported values using the same aptamer in similar assay schemes with different detection platforms. These results point to the promising application of the Ara h 1 aptamer-functionalized particles in bioassay development. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Greenhouse gas mitigation in a carbon constrained world - the role of CCS in Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumacher, Katja; Sands, Ronald D.

    2009-01-05

    In a carbon constrained world, at least four classes of greenhouse gas mitigation options are available: energy efficiency, switching to low-carbon or carbon-free energy sources, introduction of carbon dioxide capture and storage along with electric generating technologies, and reductions in emissions of non-CO2 greenhouse gases. The contribution of each option to overall greenhouse gas mitigation varies by cost, scale, and timing. In particular, carbon dioxide capture and storage (CCS) promises to allow for low-emissions fossil-fuel based power generation. This is particularly relevant for Germany, where electricity generation is largely coal-based and, at the same time, ambitious climate targets are in place. Our objective is to provide a balanced analysis of the various classes of greenhouse gas mitigation options, with a particular focus on CCS for Germany. We simulate the potential role of advanced fossil-fuel based electricity generating technologies with CCS (IGCC, NGCC), as well as the potential for retrofitting existing and newly built fossil plants with CCS, from the present through 2050. We employ a computable general equilibrium (CGE) economic model as a core model and integrating tool.

  4. Multi-dimensional upwinding-based implicit LES for the vorticity transport equations

    NASA Astrophysics Data System (ADS)

    Foti, Daniel; Duraisamy, Karthik

    2017-11-01

    Complex turbulent flows such as rotorcraft and wind turbine wakes are characterized by the presence of strong coherent structures that can be compactly described by vorticity variables. The vorticity-velocity formulation of the incompressible Navier-Stokes equations is employed to increase numerical efficiency. Compared to the traditional velocity-pressure formulation, high-order numerical methods and sub-grid scale models for the vorticity transport equation (VTE) have not been fully investigated, and consistent treatment of the convection and stretching terms also needs to be addressed. Our belief is that, by carefully designing sharp gradient-capturing numerical schemes, coherent structures can be more efficiently captured using the vorticity-velocity formulation. In this work, a multidimensional upwind approach for the VTE is developed using the generalized Riemann problem-based scheme devised by Parish et al. (Computers & Fluids, 2016). The algorithm obtains high resolution by augmenting the upwind fluxes with transverse- and normal-direction corrections. The approach is investigated with several canonical vortex-dominated flows, including isolated and interacting vortices and turbulent flows. The capability of the technique to represent sub-grid scale effects is also assessed. This work was supported by a Navy contract titled ``Turbulence Modelling Across Disparate Length Scales for Naval Computational Fluid Dynamics Applications,'' through Continuum Dynamics, Inc.
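
    The paper's multidimensional high-resolution scheme cannot be reconstructed from the abstract, but the baseline it improves upon, first-order upwinding of an advected vorticity field, can be sketched in one dimension:

```python
def upwind_step(omega, u, dx, dt):
    """One first-order upwind update of vorticity omega advected at speed u.

    Periodic boundaries; the upwind direction follows the sign of u. This is
    a 1-D scalar sketch only: the paper's scheme is multidimensional and
    higher order, with transverse- and normal-direction flux corrections.
    """
    n = len(omega)
    c = u * dt / dx  # Courant number; |c| <= 1 for stability
    new = [0.0] * n
    for i in range(n):
        if u >= 0:
            new[i] = omega[i] - c * (omega[i] - omega[i - 1])
        else:
            new[i] = omega[i] - c * (omega[(i + 1) % n] - omega[i])
    return new
```

    With periodic boundaries this update conserves total vorticity exactly, but its first-order numerical diffusion smears sharp gradients, which is precisely the deficiency the generalized Riemann problem-based corrections target.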

  5. Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People

    NASA Astrophysics Data System (ADS)

    Thomik, Andreas; Faisal, A. Aldo

    2015-03-01

    Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of the spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information-theoretically efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of ``eigenmotions'' which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects, with small deviations accounting for the individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of ``eigenmotion'' neurons, and are consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
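
    Sparse encoding against a learned dictionary is often performed with a greedy method such as matching pursuit; the minimal sketch below illustrates the idea (it is not the specific encoding or dictionary-learning algorithm used in the study):

```python
def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy sparse coding: repeatedly pick the atom most correlated
    with the residual and subtract its projection.

    dictionary: list of unit-norm atom vectors (the "eigenmotions").
    Returns a sparse coefficient list, one entry per atom.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    residual = list(signal)
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_atoms):
        # Atom with the largest absolute projection onto the residual
        k = max(range(len(dictionary)),
                key=lambda j: abs(dot(residual, dictionary[j])))
        c = dot(residual, dictionary[k])
        coeffs[k] += c
        residual = [r - c * d for r, d in zip(residual, dictionary[k])]
    return coeffs
```

    Most coefficients stay zero, so each movement snippet is described by the sparse activation of a few dictionary elements, the coding property the abstract appeals to.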

  6. A Method to Capture Macroslip at Bolted Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Ronald Neil; Heitman, Lili Anne Akin

    2015-10-01

    Relative motion at bolted connections can occur under large shock loads as the internal shear force in the connection overcomes the frictional resistive force. This macroslip dissipates energy and reduces the response of the components above the bolted connection. There is a need to capture macroslip behavior in structural dynamics models; linear models and many nonlinear models are not able to predict macroslip effectively. The proposed method is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. The model includes both static and dynamic friction. The joints are preloaded, and the pinning effect when a bolt shank impacts the inside diameter of a through hole is captured. Substructure representations of the components are included to account for component flexibility and dynamics. The method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test its adequacy.

  7. A Method to Capture Macroslip at Bolted Interfaces [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Ronald Neil; Heitman, Lili Anne Akin

    2016-01-01

    Relative motion at bolted connections can occur under large shock loads as the internal shear force in the connection overcomes the frictional resistive force. This macroslip dissipates energy and reduces the response of the components above the bolted connection. There is a need to capture macroslip behavior in structural dynamics models; linear models and many nonlinear models are not able to predict macroslip effectively. The proposed method is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. The model includes both static and dynamic friction. The joints are preloaded, and the pinning effect when a bolt shank impacts the inside diameter of a through hole is captured. Substructure representations of the components are included to account for component flexibility and dynamics. The method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test its adequacy.

  8. A MECHANISTIC MODEL FOR MERCURY CAPTURE WITH IN-SITU GENERATED TITANIA PARTICLES: ROLE OF WATER VAPOR

    EPA Science Inventory

    A mechanistic model to predict the capture of gas-phase mercury species using in-situ generated titania nanosize particles activated by UV irradiation is developed. The model is an extension of a recently reported model [1] for photochemical reactions that accounts for the rates of...

  9. Nonlinear Thermoelastic Model for SMAs and SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2004-01-01

    A constitutive mathematical model has been developed that predicts the nonlinear thermomechanical behaviors of shape-memory alloys (SMAs) and of shape-memory alloy hybrid composite (SMAHC) structures, which are composite-material structures that contain embedded SMA actuators. SMAHC structures have been investigated for their potential utility in a variety of applications in which there are requirements for static or dynamic control of the shapes of structures, control of the thermoelastic responses of structures, or control of noise and vibrations. The present model overcomes deficiencies of prior, overly simplistic or qualitative models that have proven ineffective or intractable for engineering of SMAHC structures. The model is sophisticated enough to capture the essential features of the mechanics of SMAHC structures yet simple enough to accommodate input from fundamental engineering measurements, and is in a form that is amenable to implementation in general-purpose structural analysis environments.

  10. Requirements engineering for cross-sectional information chain models

    PubMed Central

    Hübner, U; Cruel, E; Gök, M; Garthaus, M; Zimansky, M; Remmers, H; Rienhoff, O

    2012-01-01

    Despite the wealth of literature on requirements engineering, little is known about engineering very generic, innovative and emerging requirements, such as those for cross-sectional information chains. The IKM health project aims at building information chain reference models for the care of patients with chronic wounds, cancer-related pain and back pain. Our question therefore was how to appropriately capture information and process requirements that are both generally applicable and practically useful. To this end, we started with recommendations from clinical guidelines and put them up for discussion in Delphi surveys and expert interviews. Despite the heterogeneity we encountered in all three methods, it was possible to obtain requirements suitable for building reference models. We evaluated three modelling languages and then chose to write the models in UML (class and activity diagrams). On the basis of the current project results, the pros and cons of our approach are discussed. PMID:24199080

  11. Effects of surface wave breaking on the oceanic boundary layer

    NASA Astrophysics Data System (ADS)

    He, Hailun; Chen, Dake

    2011-04-01

    Existing laboratory studies suggest that surface wave breaking may exert a significant impact on the formation and evolution of the oceanic surface boundary layer, which plays an important role in the coupled ocean-atmosphere system. However, present climate models either neglect the effects of wave breaking or treat them implicitly through crude parameterizations. Here we use a one-dimensional ocean model (the General Ocean Turbulence Model, GOTM) to investigate the effects of wave breaking on the oceanic boundary layer on diurnal to seasonal time scales. First, a set of idealized experiments is carried out to demonstrate the basic physics and the necessity of including wave breaking. The model is then applied to simulating observations at the northern North Sea and Ocean Weather Station Papa, which shows that properly accounting for wave breaking effects can improve model performance and help it successfully capture the observed upper-ocean variability.

  12. A Catchment-Based Approach to Modeling Land Surface Processes in a GCM. Part 2; Parameter Estimation and Model Demonstration

    NASA Technical Reports Server (NTRS)

    Ducharne, Agnes; Koster, Randal D.; Suarez, Max J.; Stieglitz, Marc; Kumar, Praveen

    2000-01-01

    The viability of a new catchment-based land surface model (LSM) developed for use with general circulation models is demonstrated. First, simple empirical functions -- tractable enough for operational use in the LSM -- are established that faithfully capture the control of topography on the subgrid variability of soil moisture and the surface water budget, as predicted by theory. Next, the full LSM is evaluated offline. Using forcing and validation datasets developed for PILPS Phase 2c, the minimally calibrated model is shown to reproduce observed evaporation and runoff fluxes successfully in the Red-Arkansas River Basin. A complementary idealized study that employs the range of topographic variability seen over North America demonstrates that the simulated surface water budget does vary strongly with topography, which can, by itself, induce variations in annual evaporation as high as 20%.

  13. Physics of Intact Capture of Cometary Coma Dust Samples

    NASA Astrophysics Data System (ADS)

    Anderson, William

    2011-06-01

    In 1986, Tom Ahrens and I developed a simple model for hypervelocity capture in low density foams, aimed in particular at the suggestion that such techniques could be used to capture dust during flyby of an active comet nucleus. While the model was never published in printed form, it became known to many in the cometary dust sampling community. More sophisticated models have been developed since, but our original model still retains superiority for some applications and elucidates the physics of the capture process in a more intuitive way than the more recent models. The model makes use of the small value of the Hugoniot intercept typical of highly distended media to invoke analytic expressions with functional forms common to fluid dynamics. The model successfully describes the deceleration and ablation of a particle that is large enough to see the foam as a low density continuum. I will present that model, updated with improved calculations of the temperature in the shocked foam, and show its continued utility in elucidating the phenomena of hypervelocity penetration of low-density foams.

  14. Use of the Open Field Maze to measure locomotor and anxiety-like behavior in mice.

    PubMed

    Seibenhener, Michael L; Wooten, Michael C

    2015-02-06

    Animal models have proven invaluable to researchers trying to answer questions regarding the mechanisms of behavior. The Open Field Maze is one of the most commonly used platforms for measuring behaviors in animal models. It is a fast and relatively easy test that provides a variety of behavioral information, ranging from general ambulatory ability to data regarding the emotionality of the subject animal. As it relates to rodent models, the procedure allows the study of different strains of mice or rats, both laboratory-bred and wild-caught. The technique also readily lends itself to the investigation of different pharmacological compounds for anxiolytic or anxiogenic effects. Here, a protocol for using the open field maze to describe mouse behaviors is detailed, and a simple analysis of general locomotor ability and anxiety-related emotional behaviors between two strains of C57BL/6 mice is performed. Briefly, using the described protocol, we show wild-type mice exhibited significantly fewer anxiety-related behaviors than did age-matched knockout mice, while both strains exhibited similar ambulatory ability.
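
    One common open-field measure of anxiety-like behaviour is the fraction of time spent in the central zone of the arena. A minimal sketch, assuming a square arena with a centered square zone (the dimensions are illustrative, not taken from the protocol):

```python
def center_time_fraction(track, arena=40.0, center=20.0):
    """Fraction of samples spent in the central zone of a square arena.

    track: list of (x, y) positions in cm, origin at an arena corner.
    The central zone is a centered square of side `center`; less time in
    the center is conventionally read as more anxiety-like behaviour.
    """
    lo = (arena - center) / 2.0
    hi = lo + center
    in_center = sum(1 for x, y in track if lo <= x <= hi and lo <= y <= hi)
    return in_center / len(track)
```

    Applied to tracked coordinates from two strains, a lower fraction in one group would correspond to the "more anxiety-related behaviors" comparison reported above.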

  15. A Design Support Framework through Dynamic Deployment of Hypothesis and Verification in the Design Process

    NASA Astrophysics Data System (ADS)

    Nomaguch, Yutaka; Fujita, Kikuo

    This paper proposes a design support framework, named DRIFT (Design Rationale Integration Framework of Three layers), which dynamically captures and manages hypothesis and verification in the design process. The core of DRIFT is a three-layered design process model of action, model operation and argumentation. This model integrates various design support tools and captures the design operations performed on them. The action level captures the sequence of design operations. The model operation level captures the transition of design states, recording a design snapshot across design tools. The argumentation level captures the process of setting problems and alternatives. Linking the three levels makes it possible to automatically and efficiently capture and manage iterative hypothesis-and-verification processes through design operations across design tools. In DRIFT, such linkage is extracted through templates of design operations, which are derived from the patterns embedded in design tools such as Design-For-X (DFX) approaches, and design tools are integrated through an ontology-based representation of design concepts. An argumentation model, gIBIS (graphical Issue-Based Information System), is used for representing dependencies among problems and alternatives, and a Truth Maintenance System (TMS) mechanism is used for managing multiple hypothetical design stages. This paper also demonstrates a prototype implementation of DRIFT and its application to a simple design problem, and concludes with a discussion of future issues.

  16. Temporal variability of local abundance, sex ratio and activity in the Sardinian chalk hill blue butterfly

    USGS Publications Warehouse

    Casula, P.; Nichols, J.D.

    2003-01-01

    When capturing and marking of individuals is possible, the application of newly developed capture-recapture models can remove several sources of bias in the estimation of population parameters such as local abundance and sex ratio. For example, observation of distorted sex ratios in counts or captures can reflect either different abundances of the sexes or different sex-specific capture probabilities, and capture-recapture models can help distinguish between these two possibilities. Robust design models and a model selection procedure based on information-theoretic methods were applied to study the local population structure of the endemic Sardinian chalk hill blue butterfly, Polyommatus coridon gennargenti. Seasonal variations of abundance, plus daily and weather-related variations of active populations of males and females were investigated. Evidence was found of protandry and male pioneering of the breeding space. Temporary emigration probability, which describes the proportion of the population not exposed to capture (e.g. absent from the study area) during the sampling process, was estimated, differed between sexes, and was related to temperature, a factor known to influence animal activity. The correlation between temporary emigration and average daily temperature suggested interpreting temporary emigration as inactivity of animals. Robust design models were used successfully to provide a detailed description of the population structure and activity in this butterfly and are recommended for studies of local abundance and animal activity in the field.
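
    The robust design models applied here are considerably richer than any closed-form estimator, but the basic logic of capture-recapture abundance estimation can be illustrated with the classic two-occasion Chapman-corrected Lincoln-Petersen estimator:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate.

    n1: animals marked and released on the first occasion
    n2: animals captured on the second occasion
    m2: marked animals among the n2 captures
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```

    This simple estimator assumes equal capture probability for all animals and no temporary emigration; the robust design models in the study exist precisely to relax those assumptions, e.g. by estimating sex-specific and temperature-dependent temporary emigration.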

  17. Attrition-enhanced sulfur capture by limestone particles in fluidized beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saastamoinen, J.J.; Shimizu, T.

    2007-02-14

    Sulfur capture by limestone particles in fluidized beds is a well-established technology. The underlying chemical and physical phenomena of the process have been extensively studied and modeled. However, most studies have focused on the relatively brief initial stage of the process, which extends from a few minutes to hours, yet the residence time of the particles in the boiler is much longer. Following the initial stage, a dense product layer forms on the particle surface, which decreases the rate of sulfur capture and the degree of utilization of the sorbent. Attrition can enhance sulfur capture by removing this layer. A particle model for sulfur capture has been incorporated with an attrition model. After the initial stage, the rate of sulfur capture stabilizes, so that attrition removes the surface at the same rate as diffusion and chemical reaction produce new product in a thin surface layer of a particle. An analytical solution for the conversion of particles in this regime is presented. The solution includes the effects of the attrition rate, diffusion, chemical kinetics, pressure, and SO2 concentration, relative to conversion-dependent diffusivity and the rate of chemical reaction. The particle model leads to models that describe the conversion of limestone in both fly ash and bottom ash. These are incorporated with residence time (or reactor) models to calculate the average conversion of the limestone in fly ash and bottom ash, as well as the efficiency of sulfur capture. Data from a large-scale pressurized fluidized bed are compared with the model results.

  18. A new capture fraction method to map how pumpage affects surface water flow.

    PubMed

    Leake, Stanley A; Reeves, Howard W; Dickinson, Jesse E

    2010-01-01

    All groundwater pumped is balanced by removal of water somewhere, initially from storage in the aquifer and later from capture in the form of increase in recharge and decrease in discharge. Capture that results in a loss of water in streams, rivers, and wetlands now is a concern in many parts of the United States. Hydrologists commonly use analytical and numerical approaches to study temporal variations in sources of water to wells for select points of interest. Much can be learned about coupled surface/groundwater systems, however, by looking at the spatial distribution of theoretical capture for select times of interest. Development of maps of capture requires (1) a reasonably well-constructed transient or steady state model of an aquifer with head-dependent flow boundaries representing surface water features or evapotranspiration and (2) an automated procedure to run the model repeatedly and extract results, each time with a well in a different location. This paper presents new methods for simulating and mapping capture using three-dimensional groundwater flow models and presents examples from Arizona, Oregon, and Michigan.
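
    The automated procedure in requirement (2) can be sketched as a loop that re-runs the model with a well in each candidate cell and records stream depletion per unit pumpage. Here `run_model` is a stand-in for one real groundwater-model run (e.g. a MODFLOW simulation), not an actual modeling interface:

```python
def map_capture_fraction(grid, run_model, pump_rate):
    """Map the fraction of pumpage captured from surface water, cell by cell.

    run_model(cell, rate) is a stand-in for one groundwater-model run with
    a well at `cell`; it must return the simulated stream depletion (L^3/T).
    Returns {cell: depletion / pumpage}, the capture fraction in [0, 1].
    """
    fractions = {}
    for cell in grid:
        depletion = run_model(cell, pump_rate)
        fractions[cell] = depletion / pump_rate
    return fractions
```

    Contouring the resulting dictionary over the model grid yields the kind of capture map the paper describes, showing where pumping most strongly depletes streams for a chosen time of interest.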

  19. Performance analysis of a generalized upset detection procedure

    NASA Technical Reports Server (NTRS)

    Blough, Douglas M.; Masson, Gerald M.

    1987-01-01

    A general procedure for upset detection in complex systems, called the data block capture and analysis upset monitoring process, is described and analyzed. The process consists of repeatedly recording a fixed amount of data from a set of predetermined observation lines of the system being monitored (i.e., capturing a block of data), and then analyzing the captured block in an attempt to determine whether the system is functioning correctly. The algorithm which analyzes the data blocks can be characterized in terms of the amount of time it requires to examine a data block of a given length to ascertain the existence of features/conditions that have been predetermined to characterize the upset-free behavior of the system. The performance of linear, quadratic, and logarithmic data analysis algorithms is rigorously characterized in terms of three performance measures: (1) the probability of correctly detecting an upset; (2) the expected number of false alarms; and (3) the expected latency in detecting upsets.
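
    The capture-and-analysis loop can be sketched directly; `within_bounds` below is a hypothetical linear-time analysis function, standing in for whatever upset-free characterization a real system would use:

```python
def monitor(stream, block_size, analyze):
    """Data block capture and analysis: read fixed-size blocks, flag upsets.

    stream: iterable of samples from the monitored observation lines.
    analyze(block) returns True when the block matches upset-free behaviour.
    Yields the index of each block whose analysis fails (a detected upset).
    """
    block = []
    index = 0
    for sample in stream:
        block.append(sample)
        if len(block) == block_size:
            if not analyze(block):
                yield index
            block = []
            index += 1

def within_bounds(block, lo=0, hi=1):
    """A linear-time analysis: every sample within the expected bounds."""
    return all(lo <= s <= hi for s in block)
```

    Detection latency in this scheme is at least one block (a fault is only visible after the block containing it is fully captured and analyzed), which is why latency appears alongside detection probability and false alarms as a performance measure.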

  20. Serial assessment of the physiological status of leatherback turtles (Dermochelys coriacea) during direct capture events in the northwestern Atlantic Ocean: comparison of post-capture and pre-release data.

    PubMed

    Innis, Charles J; Merigo, Constance; Cavin, Julie M; Hunt, Kathleen; Dodge, Kara L; Lutcavage, Molly

    2014-01-01

    The physiological status of seven leatherback turtles (Dermochelys coriacea) was assessed at two time points during ecological research capture events in the northwestern Atlantic Ocean. Data were collected as soon as possible after securing each turtle onboard the capture vessel and again immediately prior to release. Measured parameters included sea surface temperature, body temperature, morphometric data, sex, heart rate, respiratory rate and various haematological and blood biochemical variables. Results indicated generally stable physiological status in comparison to previously published studies of this species. However, blood pH and blood potassium concentrations increased significantly between the two time points (P = 0.0018 and P = 0.0452, respectively). Turtles were affected by a mild initial acidosis (mean [SD] temperature-corrected pH = 7.29 [0.07]), and blood pH increased prior to release (mean [SD] = 7.39 [0.07]). Initial blood potassium concentrations were considered normal (mean [SD] = 4.2 [0.9] mmol/l), but turtles experienced a mild to moderate increase in blood potassium concentrations during the event (mean [SD] pre-release potassium = 5.9 [1.7] mmol/l, maximum = 8.5 mmol/l). While these data support the general safety of direct capture for study of this species, the observed changes in blood potassium concentrations are of potential concern due to possible adverse effects of hyperkalaemia on cardiac function. The results of this study highlight the importance of physiological monitoring during scientific capture events. The results are also likely to be relevant to unintentional leatherback capture events (e.g. fisheries interactions), when interactions may be more prolonged or extreme.
