Sample records for model captures important

  1. Do little interactions get lost in dark random forests?

    PubMed

    Wright, Marvin N; Ziegler, Andreas; König, Inke R

    2016-03-31

Random forests have often been claimed to uncover interaction effects. However, if and how interaction effects can be differentiated from marginal effects remains unclear. In extensive simulation studies, we investigate whether random forest variable importance measures capture or detect gene-gene interactions. By capturing interactions, we mean the ability to identify a variable that acts through an interaction with another one, while detection is the ability to identify an interaction effect as such. Of the single importance measures, the Gini importance captured interaction effects in most of the simulated scenarios; however, these effects were masked by marginal effects of other variables. With the permutation importance, the proportion of captured interactions was lower in all cases. Pairwise importance measures performed about equally, with a slight advantage for the joint variable importance method. However, the overall fraction of detected interactions was low. In almost all scenarios, the detection fraction in a model with only marginal effects was larger than in a model with an interaction effect only. Random forests are generally capable of capturing gene-gene interactions, but current variable importance measures are unable to detect them as interactions. In most cases, interactions are masked by marginal effects and cannot be differentiated from marginal effects. Consequently, caution is warranted when claiming that random forests uncover interactions.
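The capture-versus-detection distinction above can be made concrete with a from-scratch sketch of permutation importance on a simulated pure-interaction effect. The XOR "fitted model" below is a stand-in for a trained random forest, and all variable names and rates are illustrative, not taken from the study.

```python
# Toy illustration: permutation importance on a pure two-way interaction.
# The exact XOR rule stands in for a fitted random forest classifier.
import random

random.seed(0)
n = 2000
X = [[random.randint(0, 1) for _ in range(5)] for _ in range(n)]
y = [row[0] ^ row[1] for row in X]           # pure interaction of x0 and x1

def model(row):                              # stand-in for a fitted classifier
    return row[0] ^ row[1]

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, j, n_repeats=10):
    """Mean accuracy drop when column j is shuffled."""
    base = accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[j] for row in X]
        random.shuffle(col)
        Xp = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
        drops.append(base - accuracy(Xp, y))
    return sum(drops) / n_repeats

imp = [permutation_importance(X, y, j) for j in range(5)]
# x0 and x1 carry all the importance ("capturing" succeeds), but nothing
# in the scores says they act jointly rather than marginally, so the
# interaction is not "detected" as such.
print([round(v, 3) for v in imp])
```

Shuffling either interacting column alone destroys the signal, so each scores high individually; this is exactly why a large importance score cannot distinguish an interaction partner from a variable with a strong marginal effect.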

  2. Information technology in the foxhole.

    PubMed

    Eyestone, S M

    1995-08-01

    The importance of digital data capture at the point of health care service within the military environment is highlighted. Current paper-based data capture does not allow for efficient data reuse throughout the medical support information domain. A simple, high-level process and data flow model is used to demonstrate the importance of data capture at point of service. The Department of Defense is developing a personal digital assistant, called MEDTAG, that accomplishes point of service data capture in the field using a prototype smart card as a data store in austere environments.

  3. Contributions of Microtubule Dynamic Instability and Rotational Diffusion to Kinetochore Capture.

    PubMed

    Blackwell, Robert; Sweezy-Schindler, Oliver; Edelmaier, Christopher; Gergely, Zachary R; Flynn, Patrick J; Montes, Salvador; Crapo, Ammon; Doostan, Alireza; McIntosh, J Richard; Glaser, Matthew A; Betterton, Meredith D

    2017-02-07

    Microtubule dynamic instability allows search and capture of kinetochores during spindle formation, an important process for accurate chromosome segregation during cell division. Recent work has found that microtubule rotational diffusion about minus-end attachment points contributes to kinetochore capture in fission yeast, but the relative contributions of dynamic instability and rotational diffusion are not well understood. We have developed a biophysical model of kinetochore capture in small fission-yeast nuclei using hybrid Brownian dynamics/kinetic Monte Carlo simulation techniques. With this model, we have studied the importance of dynamic instability and microtubule rotational diffusion for kinetochore capture, both to the lateral surface of a microtubule and at or near its end. Over a range of biologically relevant parameters, microtubule rotational diffusion decreased capture time, but made a relatively small contribution compared to dynamic instability. At most, rotational diffusion reduced capture time by 25%. Our results suggest that while microtubule rotational diffusion can speed up kinetochore capture, it is unlikely to be the dominant physical mechanism for typical conditions in fission yeast. In addition, we found that when microtubules undergo dynamic instability, lateral captures predominate even in the absence of rotational diffusion. Counterintuitively, adding rotational diffusion to a dynamic microtubule increases the probability of end-on capture. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  4. The use of auxiliary variables in capture-recapture and removal experiments

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1984-01-01

    The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
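The linear logistic link described above can be sketched in a few lines. The coefficients and the weight covariate below are invented for illustration, not estimates from the paper.

```python
# Capture probability as a linear logistic function of a continuous
# auxiliary variable (here, hypothetical body weight in grams):
#   logit(p) = beta0 + beta1 * x
# beta0 and beta1 are illustrative values, not fitted estimates.
import math

def capture_probability(x, beta0=-2.0, beta1=0.05):
    """Linear logistic binary regression model for capture probability."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

# Under these assumed coefficients, a heavier animal is more catchable:
p_small = capture_probability(20.0)   # 20 g animal
p_large = capture_probability(80.0)   # 80 g animal
print(round(p_small, 3), round(p_large, 3))
```

In practice the same form accommodates environmental covariates (temperature) or effort, with maximum likelihood used to estimate the betas under a closed-population model.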

  5. FRAP Analysis: Accounting for Bleaching during Image Capture

    PubMed Central

    Wu, Jun; Shekhar, Nandini; Lele, Pushkar P.; Lele, Tanmay P.

    2012-01-01

    The analysis of Fluorescence Recovery After Photobleaching (FRAP) experiments involves mathematical modeling of the fluorescence recovery process. An important feature of FRAP experiments that tends to be ignored in the modeling is that there can be a significant loss of fluorescence due to bleaching during image capture. In this paper, we explicitly include the effects of bleaching during image capture in the model for the recovery process, instead of correcting for the effects of bleaching using reference measurements. Using experimental examples, we demonstrate the usefulness of such an approach in FRAP analysis. PMID:22912750
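One plausible way to fold acquisition bleaching into the recovery model, as the paper advocates doing (rather than correcting with a reference region), is to attenuate an ideal recovery curve by a per-frame bleaching factor. The single-exponential recovery and all rate constants below are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: FRAP recovery with bleaching during image capture built into
# the model. k_on, k_bleach, and frame_interval are invented values.
import math

def frap_signal(t, k_on=0.5, k_bleach=0.02, frame_interval=0.5):
    """Ideal recovery 1 - exp(-k_on * t), attenuated by bleaching that
    accumulates with each image acquired up to time t."""
    n_frames = t / frame_interval                 # images taken by time t
    recovery = 1.0 - math.exp(-k_on * t)
    return recovery * math.exp(-k_bleach * n_frames)

# Ignoring acquisition bleaching overestimates the recovered fraction:
with_bleach = frap_signal(10.0)
ideal = 1.0 - math.exp(-0.5 * 10.0)
print(round(with_bleach, 3), round(ideal, 3))
```

Fitting `k_on` with the bleaching term in place avoids the bias that arises when a significant fraction of the fluorescence loss comes from imaging itself.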

  6. The Husting dilemma: A methodological note

    USGS Publications Warehouse

    Nichols, J.D.; Hepp, G.R.; Pollock, K.H.; Hines, J.E.

    1987-01-01

Recently, Gill (1985) discussed the interpretation of capture history data resulting from his own studies on the red-spotted newt, Notophthalmus viridescens, and work by Husting (1965) on spotted salamanders, Ambystoma maculatum. Gill (1985) noted that gaps in capture histories (years in which individuals were not captured, preceded and followed by years in which they were) could result from either of two very different possibilities: (1) failure of the animal to return to the fenced pond to breed (the alternative Husting (1965) favored), or (2) return of the animal to the breeding pond, but failure of the investigator to capture it and detect its presence. The authors agree entirely with Gill (1985) that capture history data such as his or those of Husting (1965) should be analyzed using models that recognize the possibility of 'census error,' and that it is important to try to distinguish between such 'error' and skipped breeding efforts. The purpose of this note is to point out the relationship between Gill's (1985:347) null model and certain capture-recapture models, and to use capture-recapture models and tests to analyze the original data of Husting (1965).

  7. On valuing patches: estimating contributions to metapopulation growth with reverse-time capture-recapture modeling

    Treesearch

    Jamie S. Sanderlin; Peter M. Waser; James E. Hines; James D. Nichols

    2012-01-01

Metapopulation ecology has historically been rich in theory, yet analytical approaches for inferring demographic relationships among local populations have been few. We show how reverse-time multi-state capture-recapture models can be used to estimate the importance of local recruitment and interpopulation dispersal to metapopulation growth. We use 'contribution...

  8. Experimental and Theoretical Understanding of Neutron Capture on Uranium Isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ullmann, John Leonard

    2017-09-21

Neutron capture cross sections on uranium isotopes are important quantities needed to model nuclear explosion performance, nuclear reactor design, nuclear test diagnostics, and nuclear forensics. It has been difficult to calculate capture accurately, and factors of 2 or more between calculation and measurements are not uncommon, although normalization to measurements of the average capture width and nuclear level density can improve the result. The calculations of capture for 233,235,237,239U are further complicated by the need to accurately include the fission channel.

  9. Investigations of potential bias in the estimation of lambda using Pradel's (1996) model for capture-recapture data

    USGS Publications Warehouse

    Hines, James E.; Nichols, James D.

    2002-01-01

Pradel's (1996) temporal symmetry model permitting direct estimation and modelling of population growth rate, λi, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λ̂i. We next consider trap response in capture probabilities and heterogeneous capture probabilities and compute large-sample and simulation-based approximations of resulting bias in λ̂i. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λi that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λ̂i′ is the appropriate estimator.
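The trap-response problem flagged above can be illustrated with a toy simulation on the simplest capture-recapture estimator (two-sample Lincoln-Petersen) rather than Pradel's model itself: animals that become "trap-shy" after first capture depress recaptures and bias the abundance estimate, and hence any growth-rate estimate built from successive abundances. All population sizes and capture rates below are invented for illustration.

```python
# Toy demonstration of trap-response bias in a Lincoln-Petersen estimate.
# p_first and p_shy (capture probability halved after first capture) are
# hypothetical values chosen to make the bias visible.
import random

random.seed(1)
N = 5000                       # true population size
p_first, p_shy = 0.3, 0.15     # capture prob.; lower after first capture

def lincoln_petersen(trap_response):
    caught1 = [random.random() < p_first for _ in range(N)]
    n1 = sum(caught1)
    p2 = lambda was_caught: p_shy if (trap_response and was_caught) else p_first
    caught2 = [random.random() < p2(was) for was in caught1]
    n2 = sum(caught2)
    m2 = sum(a and b for a, b in zip(caught1, caught2))  # marked recaptures
    return n1 * n2 / m2                                  # abundance estimate

n_hat_ok = lincoln_petersen(trap_response=False)
n_hat_biased = lincoln_petersen(trap_response=True)
# Trap shyness depresses m2, inflating the estimate well above N:
print(round(n_hat_ok), round(n_hat_biased))
```

The same mechanism propagates into λ̂: if successive abundance estimates carry trap-response bias, their ratio is distorted as well, which is why the abstract singles out trap response as an especially important assumption violation.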

  10. Multivariate Models of Parent-Late Adolescent Gender Dyads: The Importance of Parenting Processes in Predicting Adjustment

    ERIC Educational Resources Information Center

    McKinney, Cliff; Renk, Kimberly

    2008-01-01

    Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample…

  11. Use of models to map potential capture of surface water

    USGS Publications Warehouse

    Leake, Stanley A.

    2006-01-01

    The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
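The per-cell bookkeeping described above can be sketched with a placeholder model. `run_model` below is a toy stand-in for a transient ground-water model run (not MODFLOW or any real code); its distance-based response is an invented assumption used only to exercise the capture-fraction arithmetic.

```python
# Schematic of mapping potential capture: baseline run, then one run per
# candidate well cell; capture fraction = (increase in inflow + decrease
# in outflow) / withdrawal rate. The model itself is a toy stand-in.
def run_model(well_cell=None, withdrawal=0.0):
    """Toy transient-run stand-in returning (inflow, outflow) totals.

    A pumped cell induces capture in proportion to an assumed
    proximity to a stream located at cell 0."""
    inflow, outflow = 100.0, 100.0
    if well_cell is not None and withdrawal > 0.0:
        frac = 1.0 / (1.0 + well_cell)       # nearer cells capture more
        inflow += 0.5 * frac * withdrawal    # induced recharge
        outflow -= 0.5 * frac * withdrawal   # reduced natural discharge
    return inflow, outflow

withdrawal = 10.0
base_in, base_out = run_model()              # baseline, no withdrawal

capture_fraction = {}
for cell in range(5):                        # one model run per cell
    q_in, q_out = run_model(cell, withdrawal)
    capture_fraction[cell] = ((q_in - base_in) + (base_out - q_out)) / withdrawal

# Values near 1 mean nearly all pumping is supplied by capture; in the
# real procedure these fractions are contoured over the grid to make the map.
print({c: round(f, 2) for c, f in capture_fraction.items()})
```

In a real application each "run" is a full transient simulation for the chosen time frame, and the linearity checks and boundary considerations noted in the abstract determine whether a single withdrawal rate suffices.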

  12. Neutron Capture Measurements on 97Mo with the DANCE Array

    NASA Astrophysics Data System (ADS)

    Walker, Carrie L.

Neutron capture is a process that is crucial to understanding nucleosynthesis, reactors, and nuclear weapons. Precise knowledge of neutron capture cross sections and level densities is necessary in order to model these high-flux environments. High-confidence spin and parity assignments for neutron resonances are of critical importance to this end. For nuclei in the A=100 mass region, the p-wave neutron strength function is at a maximum and the s-wave strength function is at a minimum, producing up to six possible Jπ combinations. Parity determination becomes important to assigning spins in this mass region, and the large number of spin groups adds complexity to the problem. In this work, spins and parities for 97Mo resonances are assigned, and best-fit models for the photon strength function and level density are determined. The neutron capture cross section for 97Mo is also determined, as are resonance parameters for neutron energies ranging from 16 eV to 2 keV.

  13. A Numerical Analysis on the Effects of Self-Excited Tip Flow Unsteadiness and Upstream Blade Row Interactions on the Performance Predictions of a Transonic Compressor

    NASA Astrophysics Data System (ADS)

    Heberling, Brian

Computational fluid dynamics (CFD) simulations can offer a detailed view of the complex flow fields within an axial compressor and greatly aid the design process. However, the desire for quick turnaround times raises the question of how exact the model must be. At design conditions, steady CFD simulating an isolated blade row can accurately predict the performance of a rotor. However, as a compressor is throttled and the mass flow rate decreased, the axial flow becomes weaker, making the capture of unsteadiness, wakes, and other flow features more important. The unsteadiness of the tip clearance flow and the upstream blade wake can have a significant impact on a rotor. At off-design conditions, time-accurate simulations or modeling multiple blade rows can become necessary to obtain accurate performance predictions. Unsteady and multi-bladerow simulations are computationally expensive, especially when used in conjunction. It is therefore important to understand which features must be modeled to accurately capture a compressor's performance. CFD simulations of a transonic axial compressor throttling from the design point to stall are presented. The importance of capturing the unsteadiness of the rotor tip clearance flow versus capturing upstream blade-row interactions is examined through steady and unsteady, single- and multi-bladerow computations. It is shown that there are significant differences at near-stall conditions between the different types of simulations.

  14. Hybrid automata models of cardiac ventricular electrophysiology for real-time computational applications.

    PubMed

    Andalam, Sidharta; Ramanna, Harshavardhan; Malik, Avinash; Roop, Parthasarathi; Patel, Nitish; Trew, Mark L

    2016-08-01

    Virtual heart models have been proposed for closed loop validation of safety-critical embedded medical devices, such as pacemakers. These models must react in real-time to off-the-shelf medical devices. Real-time performance can be obtained by implementing models in computer hardware, and methods of compiling classes of Hybrid Automata (HA) onto FPGA have been developed. Models of ventricular cardiac cell electrophysiology have been described using HA which capture the complex nonlinear behavior of biological systems. However, many models that have been used for closed-loop validation of pacemakers are highly abstract and do not capture important characteristics of the dynamic rate response. We developed a new HA model of cardiac cells which captures dynamic behavior and we implemented the model in hardware. This potentially enables modeling the heart with over 1 million dynamic cells, making the approach ideal for closed loop testing of medical devices.
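A minimal hybrid-automaton sketch conveys the structure described above: discrete modes with simple continuous dynamics and guard conditions triggering mode switches. The modes, rates, and thresholds below are illustrative inventions, not the paper's HA model of cardiac electrophysiology.

```python
# Toy hybrid automaton of an excitable cell: three modes (rest, upstroke,
# repolarizing), each with linear dynamics, and threshold guards between
# them. All dynamics and constants are illustrative.
def step(mode, v, stim, dt):
    """One Euler step of the hybrid automaton; returns (mode, v)."""
    if mode == "rest":
        v += dt * (-0.5 * v + stim)          # leak toward 0, driven by stimulus
        if v >= 1.0:                         # guard: threshold crossed -> fire
            mode = "upstroke"
    elif mode == "upstroke":
        v += dt * 10.0 * (30.0 - v)          # fast rise toward peak
        if v >= 29.0:                        # guard: peak reached
            mode = "repolarizing"
    else:                                     # "repolarizing"
        v += dt * (-0.2 * v)                 # slow decay back toward rest
        if v <= 0.1:                         # guard: recovered
            mode = "rest"
    return mode, v

dt, mode, v = 0.05, "rest", 0.0
modes_seen, v_max, t = {"rest"}, 0.0, 0.0
while t < 200.0:
    stim = 2.0 if 10.0 <= t < 12.0 else 0.0  # brief stimulus pulse
    mode, v = step(mode, v, stim, dt)
    modes_seen.add(mode)
    v_max = max(v_max, v)
    t += dt

print(modes_seen, round(v_max, 1), mode)     # full cycle, back at rest
```

Because each mode's dynamics are so simple, a step like this maps naturally onto fixed-point arithmetic in an FPGA pipeline, which is what makes real-time simulation of very many cells feasible.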

  15. Neutron capture cross sections of Kr

    NASA Astrophysics Data System (ADS)

    Fiebiger, Stefan; Baramsai, Bayarbadrakh; Couture, Aaron; Krtička, Milan; Mosby, Shea; Reifarth, René; O'Donnell, John; Rusev, Gencho; Ullmann, John; Weigand, Mario; Wolf, Clemens

    2018-01-01

Neutron capture and β⁻ decay are competing branches of the s-process nucleosynthesis path at 85Kr [1], which makes it an important branching point. Knowledge of its neutron capture cross section is therefore essential to constrain stellar models of nucleosynthesis. Despite its importance for different fields, no direct measurement of the cross section of 85Kr in the keV regime has been performed. The currently reported uncertainties are still on the order of 50% [2, 3]. Neutron capture cross section measurements on a 4% enriched 85Kr gas enclosed in a stainless steel cylinder were performed at Los Alamos National Laboratory (LANL) using the Detector for Advanced Neutron Capture Experiments (DANCE). 85Kr is a radioactive isotope with a half-life of 10.8 years. As this was a low-enrichment sample, the main contaminants, the stable krypton isotopes 83Kr and 86Kr, were also investigated; that material was highly enriched and contained in pressurized stainless steel spheres.

  16. Neutron Capture Rates and the r-Process Abundance Pattern in Shocked Neutrino-Driven Winds

    NASA Astrophysics Data System (ADS)

    Barringer, Daniel; Surman, Rebecca

    2009-10-01

The r-process is an important process in nucleosynthesis in which nuclei will undergo rapid neutron captures. Models of the r-process require nuclear data such as neutron capture rates for thousands of individual nuclei, many of which lie far from stability. Among the potential sites for the r-process, and the one that we investigate, is the shocked neutrino-driven wind in core-collapse supernovae. Here we examine the importance of the neutron capture rates of specific, individual nuclei in the second r-process abundance peak occurring at A ≈ 130 for a range of parameterized neutrino-driven wind trajectories. Of specific interest are the nuclei whose capture rates affect the abundances of nuclei outside of the A ≈ 130 peak. We found that increasing the neutron capture rate for a number of nuclei including 135In, 132Sn, 133Sb, 137Sb, and 136Te can produce changes in the resulting abundance pattern of up to 13%.

  17. Investigations of potential bias in the estimation of lambda using Pradel's (1996) model for capture-recapture data

    USGS Publications Warehouse

    Hines, J.E.; Nichols, J.D.

    2002-01-01

Pradel's (1996) temporal symmetry model permitting direct estimation and modelling of population growth rate, λi, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λ̂i. We next consider trap response in capture probabilities and heterogeneous capture probabilities and compute large-sample and simulation-based approximations of resulting bias in λ̂i. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λi that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λ̂i′ is the appropriate estimator.

  18. A HIERARCHICAL MODELING FRAMEWORK FOR GEOLOGICAL STORAGE OF CARBON DIOXIDE

    EPA Science Inventory

Carbon Capture and Storage, or CCS, is likely to be an important technology in a carbon-constrained world. CCS will involve subsurface injection of massive amounts of captured CO2, on a scale that has not previously been approached. The unprecedented scale of t...

  19. FY07 LDRD Final Report Neutron Capture Cross-Section Measurements at DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, W; Agvaanluvsan, U; Wilk, P

    2008-02-08

We have measured neutron capture cross sections intended to address defense science problems, including mix and the Quantification of Margins and Uncertainties (QMU), and to provide details about the statistical decay of excited nuclei. A major part of this project included developing the ability to produce radioactive targets. The cross-section measurements were made using the white neutron source at the Los Alamos Neutron Science Center, the detector array called DANCE (the Detector for Advanced Neutron Capture Experiments), and targets important for astrophysics and stockpile stewardship. DANCE is at the leading edge of neutron capture physics and represents a major leap forward in capability. The detector array was recently built with LDRD money. Our measurements are a significant part of the early results from the new experimental DANCE facility. Neutron capture reactions are important for basic nuclear science, including astrophysics and the statistics of the γ-ray cascades, and for applied science, including stockpile science and technology. We were most interested in neutron capture with neutron energies in the range between 1 eV and a few hundred keV, with targets important to basic science, and the s-process in particular. Of particular interest were neutron capture cross-section measurements of rare isotopes, especially radioactive isotopes. A strong collaboration between universities and Los Alamos due to the Academic Alliance was in place at the start of our project. Our project gave Livermore leverage in focusing on Livermore interests. The Lawrence Livermore Laboratory did not have a resident expert in cross-section measurements; this project allowed us to develop this expertise. For many radionuclides, the cross sections for destruction, especially (n,γ), are not well known, and there is no adequate model that describes neutron capture.
The modeling problem is significant because, at low energies where capture reactions are important, the neutron reaction cross sections show resonance behavior or follow the 1/v law of the incident neutrons. In the case of odd-odd nuclei, the modeling problem is particularly difficult because the degenerate states (rotational bands) present in even-even nuclei have separated in energy. Our work included interpretation of the γ-ray spectra to compare with the Statistical Model and provides information on level density and statistical decay. Neutron capture cross sections are of programmatic interest to defense sciences because many elements were added to nuclear devices in order to determine various details of the nuclear detonation, including fission yields, fusion yields, and mix. Both product nuclei created by (n,2n) reactions and reactant nuclei are transmuted by neutron capture during the explosion. Very few of the (n,γ) cross sections for reactions that create products measured by radiochemists have ever been experimentally determined; most are calculated by radiochemical equivalences. Our new experimentally measured capture cross sections directly impact our knowledge about the uncertainties in device performance, which enhances our capability to carry out our stockpile stewardship program. Europium and gadolinium cross sections are important for both astrophysics and defense programs. Measurements made prior to this project on stable europium targets differ by 30-40%, which was considered to be significantly disparate. Of the gadolinium isotopes, 151Gd is important for stockpile stewardship, and 153Gd is of high interest to astrophysics; neither of these (radioactive) gadolinium (n,γ) cross sections had been measured. Additional stable gadolinium isotopes, including 157,160Gd, are of interest to astrophysics.
Historical measurements of gadolinium isotopes, including 152,154Gd, had disagreements similar to the 30-40% disagreements found in the historical europium data. Actinide capture cross-section measurements are important for both stockpile stewardship and for nuclear forensics. We focused on the 242mAm(n,γ) measurement, as there was no existing capture measurement for this isotope. The cross-section measurements (cross section vs. En) were made at the Detector for Advanced Neutron Capture Experiments. DANCE comprises a highly segmented array of barium fluoride (BaF2) crystals specifically designed for neutron capture-gamma measurements using small radioactive targets (less than one milligram). A picture of half the array, along with a photo of one crystal, is shown in Fig. 1. DANCE provides the world's leading capability for measurements of neutron capture cross sections with radioactive targets. DANCE is a 4π calorimeter and uses the intense spallation neutron source at the Lujan Center at the Los Alamos National Laboratory. The detector array consists of 159 barium fluoride crystals arranged in a sphere around the target.

  20. Origin of the main r-process elements

    NASA Astrophysics Data System (ADS)

    Otsuki, K.; Truran, J.; Wiescher, M.; Gorres, J.; Mathews, G.; Frekers, D.; Mengoni, A.; Bartlett, A.; Tostevin, J.

    2006-07-01

    The r-process is supposed to be a primary process which assembles heavy nuclei from a photo-dissociated nucleon gas. Hence, the reaction flow through light elements can be important as a constraint on the conditions for the r-process. We have studied the impact of di-neutron capture and the neutron-capture of light (Z<10) elements on r-process nucleosynthesis in three different environments: neutrino-driven winds in Type II supernovae; the prompt explosion of low mass supernovae; and neutron star mergers. Although the effect of di-neutron capture is not significant for the neutrino-driven wind model or low-mass supernovae, it becomes significant in the neutron-star merger model. The neutron-capture of light elements, which has been studied extensively for neutrino-driven wind models, also impacts the other two models. We show that it may be possible to identify the astrophysical site for the main r-process if the nuclear physics uncertainties in current r-process calculations could be reduced.

  1. Surface Adsorption in Nonpolarizable Atomic Models.

    PubMed

    Whitmer, Jonathan K; Joshi, Abhijeet A; Carlton, Rebecca J; Abbott, Nicholas L; de Pablo, Juan J

    2014-12-09

Many ionic solutions exhibit species-dependent properties, including surface tension and the salting-out of proteins. These effects may be loosely quantified in terms of the Hofmeister series, first identified in the context of protein solubility. Here, our interest is to develop atomistic models capable of capturing Hofmeister effects rigorously. Importantly, we aim to capture this dependence in computationally cheap "hard" ionic models, which do not exhibit dynamic polarization. To do this, we have performed an investigation detailing the effects of the water model on these properties. Though incredibly important, the role of water models in simulations of ionic solutions and biological systems is essentially unexplored. We quantify this via the ion-dependent surface attraction of the halide series (Cl, Br, I) and, in so doing, determine the relative importance of various hypothesized contributions to ionic surface free energies. Importantly, we demonstrate that surface adsorption can be obtained with hard ionic models when combined with a thermodynamically accurate representation of the water molecule (TIP4Q). The effect observed in simulations of iodide is commensurate with previous calculations of the surface potential of mean force in rigid molecular dynamics and polarizable density-functional models. Our calculations are direct simulation evidence of the subtle but sensitive role of water thermodynamics in atomistic simulations.

  2. Subtask 2.18 - Advancing CO2 Capture Technology: Partnership for CO2 Capture (PCO2C) Phase III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kay, John; Azenkeng, Alexander; Fiala, Nathan

    2016-03-31

Industries and utilities continue to investigate ways to decrease their carbon footprint. Carbon capture and storage (CCS) can enable existing power generation facilities to meet the current national CO2 reduction goals. The Partnership for CO2 Capture Phase III focused on several important research areas in an effort to find ways to decrease the cost of capture across both precombustion and postcombustion platforms. Two flue gas pretreatment technologies for postcombustion capture, an SO2 reduction scrubbing technology from Cansolv Technologies Inc. and the Tri-Mer filtration technology that combines particulate, NOx, and SO2 control, were evaluated on the Energy & Environmental Research Center's (EERC's) pilot-scale test system. Pretreating the flue gas should enable more efficient, and therefore less expensive, CO2 capture. Both technologies were found to be effective in pretreating flue gas prior to CO2 capture. Two new postcombustion capture solvents were tested, one from the Korea Carbon Capture and Sequestration R&D Center (KCRC) and one from CO2 Solutions Incorporated. Both of these solvents showed the ability to capture CO2 while requiring less regeneration energy, which would reduce the cost of capture. Hydrogen separation membranes from the Commonwealth Scientific and Industrial Research Organisation were evaluated through precombustion testing. They are composed of a vanadium alloy, which is less expensive than the palladium alloys that are typically used. Their performance was comparable to that of other membranes that have been tested at the EERC. Aspen Plus® software was used to model the KCRC and CO2 Solutions solvents and found that they would result in significantly improved overall plant performance. The modeling effort also showed that the parasitic steam load at partial capture of 45% is less than half that of 90% overall capture, indicating savings that could be accrued if 90% capture is not required.
Modeling of three regional power plants using the Carnegie Mellon Integrated Environmental Control Model showed that, among other things, the use of a bypass during partial capture may minimize the size of the capture tower(s) and result in a slight reduction in the revenue required to operate the capture facility. The results reinforced that a one-size-fits-all approach cannot be taken to adding capture to a power plant. Laboratory testing indicated that Fourier transform infrared spectroscopy could be used to continuously sample stack emissions at CO2 capture facilities to detect and quantify any residual amine or its degradation products, particularly nitrosamines. The information gathered during Phase III is important for utility stakeholders as they determine how to reduce their CO2 emissions in a carbon-constrained world. This subtask was funded through the EERC–U.S. Department of Energy (DOE) Joint Program on Research and Development for Fossil Energy-Related Resources Cooperative Agreement No. DE-FC26-08NT43291. Nonfederal funding was provided by the North Dakota Industrial Commission, PPL Montana, Nebraska Public Power District, Tri-Mer Corporation, Montana–Dakota Utilities Co., Basin Electric Power Cooperative, KCRC/Korean Institute of Energy Research, Cansolv Technologies, and CO2 Solutions, Inc.

  3. Simulation of mercury capture by sorbent injection using a simplified model.

    PubMed

    Zhao, Bingtao; Zhang, Zhongxiao; Jin, Jing; Pan, Wei-Ping

    2009-10-30

    Mercury pollution from fossil fuel combustion and solid waste incineration is becoming a worldwide environmental concern. As an effective control technology, powdered sorbent injection (PSI) has been successfully used for mercury capture from flue gas, with the advantages of low cost and easy operation. In order to predict the mercury capture efficiency of PSI more conveniently, a simplified model based on the theory of mass transfer, isothermal adsorption, and mass balance is developed in this paper. Comparisons between theoretical results of this model and experimental results by Meserole et al. [F.B. Meserole, R. Chang, T.R. Carrey, J. Machac, C.F.J. Richardson, Modeling mercury removal by sorbent injection, J. Air Waste Manage. Assoc. 49 (1999) 694-704] demonstrate that the simplified model provides good predictive accuracy. Moreover, the effects of key parameters, including the mass transfer coefficient, sorbent concentration, sorbent physical properties, and sorbent adsorption capacity, on mercury adsorption efficiency are compared and evaluated. Finally, a sensitivity analysis indicates that the injected sorbent concentration plays the most important role in mercury capture efficiency.
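    A minimal numerical sketch of the kind of mass-transfer-limited model the abstract describes (the first-order exponential form and all parameter values below are illustrative assumptions, not the paper's actual equations):

    ```python
    import math

    def removal_efficiency(k_g, area_per_mass, sorbent_conc, residence_time):
        """First-order, mass-transfer-limited Hg removal (hypothetical form).

        k_g            gas-film mass-transfer coefficient [m/s]
        area_per_mass  sorbent external surface area per unit mass [m^2/g]
        sorbent_conc   injected sorbent concentration [g/m^3]
        residence_time gas residence time in the duct [s]
        """
        return 1.0 - math.exp(-k_g * area_per_mass * sorbent_conc * residence_time)

    # Removal rises with injected sorbent concentration, the parameter the
    # abstract identifies as most influential, but with diminishing returns
    # as the exponential saturates.
    base = removal_efficiency(k_g=0.05, area_per_mass=2.0, sorbent_conc=5.0, residence_time=2.0)
    double_conc = removal_efficiency(0.05, 2.0, 10.0, 2.0)
    ```

    In this simplified form, doubling the sorbent concentration raises the removal fraction from about 0.63 to about 0.86; a full model would also account for the adsorption isotherm and finite sorbent capacity.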

  4. Estimation of weapon-radius versus maneuverability trade-off for air-to-air combat

    NASA Technical Reports Server (NTRS)

    Kelley, H. J.; Lefton, L.

    1977-01-01

    A chase in a horizontal plane between a pursuer with a large capture radius and a more maneuverable evading vehicle is examined with constant-speed vehicle models. An approximation to the 'sidestepping' maneuver of the Homicidal Chauffeur Game is modified to account for the effect of evader turning rate, and an estimate of capture radius required is so obtained which agrees remarkably well with Cockayne's point-capture result. The maneuver assumes central importance for barrier surfaces appearing in the Game of Two Cars. Results are given for required weapon capture-radius in terms of the maneuverability of the two vehicles. Some calculations of capture radius are presented.

  5. Modeling misidentification errors that result from use of genetic tags in capture-recapture studies

    USGS Publications Warehouse

    Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.

    2011-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
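    The overestimation mechanism can be demonstrated with a toy two-occasion simulation (a hypothetical setup using the Chapman form of the Lincoln-Petersen estimator; the paper's estimators are more general):

    ```python
    import random

    def chapman_estimate(n_true=500, p_capture=0.4, p_misid=0.0, seed=7):
        """Closed population, two capture occasions. A genotyping error on
        the second occasion creates a 'ghost' identity, so a recaptured
        animal is not recognized as a recapture. (Toy model.)"""
        rng = random.Random(seed)
        n1 = n2 = m = 0  # occasion-1 captures, occasion-2 captures, recognized recaptures
        for _ in range(n_true):
            c1 = rng.random() < p_capture
            c2 = rng.random() < p_capture
            if c1:
                n1 += 1
            if c2:
                n2 += 1
                if c1 and rng.random() >= p_misid:
                    m += 1  # genotype correctly matched to occasion-1 sample
        # Chapman's bias-corrected Lincoln-Petersen estimator
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    no_error = chapman_estimate(p_misid=0.0)
    with_error = chapman_estimate(p_misid=0.2)  # misreads shrink m, inflating the estimate
    ```

    Because ghost identities suppress the recapture count m while leaving n1 and n2 unchanged, the estimate with misidentification is biased upward relative to the error-free case.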

  6. Introducing WISDEM: An Integrated System Modeling for Wind Turbines and Plant (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykes, K.; Graf, P.; Scott, G.

    2015-01-01

    The National Wind Technology Center wind energy systems engineering initiative has developed an analysis platform to leverage its research capabilities toward integrating wind energy engineering and cost models across wind plants. This Wind-Plant Integrated System Design & Engineering Model (WISDEM) platform captures the important interactions between various subsystems to achieve a better understanding of how to improve system-level performance and achieve system-level cost reductions. This work illustrates a few case studies with WISDEM that focus on the design and analysis of wind turbines and plants at different system levels.

  7. Evaluation of neutron total and capture cross sections on 99Tc in the unresolved resonance region

    NASA Astrophysics Data System (ADS)

    Iwamoto, Nobuyuki; Katabuchi, Tatsuya

    2017-09-01

    The long-lived fission product technetium-99 is one of the most important radioisotopes for nuclear transmutation. Reliable nuclear data over a wide energy range, up to a few MeV, are indispensable for developing environmental-load-reducing technology. Statistical analyses of resolved resonances were performed by using the truncated Porter-Thomas distribution, a coupled-channels optical model, a nuclear level density model, and Bayes' theorem on conditional probability. The total and capture cross sections were calculated with the nuclear reaction model code CCONE. The resulting cross sections are statistically consistent between the resolved and unresolved resonance regions. The evaluated capture data reproduce those recently measured at ANNRI of J-PARC/MLF above the resolved resonance region up to 800 keV.

  8. Mobile Modelling for Crowdsourcing Building Interior Data

    NASA Astrophysics Data System (ADS)

    Rosser, J.; Morley, J.; Jackson, M.

    2012-06-01

    Indoor spatial data forms an important foundation to many ubiquitous computing applications. It gives context to users operating location-based applications, provides an important source of documentation of buildings and can be of value to computer systems where an understanding of environment is required. Unlike external geographic spaces, no centralised body or agency is charged with collecting or maintaining such information. Widespread deployment of mobile devices provides a potential tool that would allow rapid model capture and update by a building's users. Here we introduce some of the issues involved in volunteering building interior data and outline a simple mobile tool for capture of indoor models. The nature of indoor data is inherently private; however in-depth analysis of this issue and legal considerations are not discussed in detail here.

  9. Verification of Orthogrid Finite Element Modeling Techniques

    NASA Technical Reports Server (NTRS)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam-and-shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  10. Designing capture trajectories to unstable periodic orbits around Europa

    NASA Technical Reports Server (NTRS)

    Russell, Ryan P.; Lam, Try

    2006-01-01

    The hostile environment of third body perturbations restricts a mission designer's ability to find well-behaved reproducible capture trajectories when dealing with limited control authority as is typical with low-thrust missions. The approach outlined in this paper confronts this shortcoming by utilizing dynamical systems theory and an extensive preexisting database of Restricted Three Body Problem (RTBP) periodic orbits. The stable manifolds of unstable periodic orbits are utilized to attract a spacecraft towards Europa. By selecting an appropriate periodic orbit, a mission designer can control important characteristics of the captured state including stability, minimum altitudes, characteristic inclinations, and characteristic radii among others. Several free parameters are optimized in the non-trivial mapping from the RTBP to a more realistic model. Although the ephemeris capture orbit is ballistic by design, low-thrust is used to target the state that leads to the capture orbit, control the spacecraft after arriving on the unstable quasi-periodic orbit, and begin the spiral down towards the science orbit. The approach allows a mission designer to directly target fuel efficient captures at Europa in an ephemeris model. Furthermore, it provides structure and controllability to the design of capture trajectories that reside in a chaotic environment.

  11. Modelling individual difference in visual categorization.

    PubMed

    Shen, Jianhong; Palmeri, Thomas J

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.

  12. Modelling individual difference in visual categorization

    PubMed Central

    Shen, Jianhong; Palmeri, Thomas J.

    2016-01-01

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization. PMID:28154496

  13. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

    ERIC Educational Resources Information Center

    Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

    2010-01-01

    Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…

  14. 3D Hall MHD-EPIC Simulations of Ganymede's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Toth, G.; Jia, X.

    2017-12-01

    Fully kinetic modeling of a complete 3D magnetosphere is still computationally expensive and not feasible on current computers. While magnetohydrodynamic (MHD) models have been successfully applied to a wide range of plasma simulations, they cannot capture some important kinetic effects. We have recently developed a new modeling tool that embeds the implicit particle-in-cell (PIC) model iPIC3D into the Block-Adaptive-Tree-Solarwind-Roe-Upwind-Scheme (BATS-R-US) magnetohydrodynamic model. This results in a kinetic model of the regions where kinetic effects are important. In addition to the MHD-EPIC modeling of the magnetosphere, the improved model presented here is now able to represent the moon as a resistive body. We use a stretched spherical grid with adaptive mesh refinement (AMR) to capture the resistive body and its boundary. A semi-implicit scheme is employed for solving the magnetic induction equation to allow time steps that are not limited by the resistivity. We have applied the model to Ganymede, the only moon in the solar system known to possess a strong intrinsic magnetic field, and included finite resistivity beneath the moon's surface to model the electrical properties of the interior in a self-consistent manner. The kinetic effects of electrons and ions on the dayside magnetopause and tail current sheet are captured with iPIC3D. Magnetic reconnection under the different upstream background conditions of several Galileo flybys is simulated to study the global reconnection rate and the magnetospheric dynamics.

  15. Meso-scale framework for modeling granular material using computed tomography

    DOE PAGES

    Turner, Anne K.; Kim, Felix H.; Penumadu, Dayakar; ...

    2016-03-17

    Numerical modeling of unconsolidated granular materials involves multiple nonlinear phenomena. Accurately capturing these phenomena, including grain deformation and intergranular forces, depends on resolving contact regions several orders of magnitude smaller than the grain size. Here, we investigate a method for capturing the morphology of the individual particles using computed X-ray and neutron tomography, which allows for accurate characterization of the interaction between grains. The ability of these numerical approaches to determine stress concentrations at grain contacts is important in order to capture catastrophic splitting of individual grains, which has been shown to play a key role in the plastic behavior of the granular material on the continuum level. Discretization approaches, including mesh refinement and finite element type selection, are presented to capture high stress concentrations at contact points between grains. The effect of a grain’s coordination number on the stress concentrations is also investigated.

  16. Effects of self-relevant cues and cue valence on autobiographical memory specificity in dysphoria.

    PubMed

    Matsumoto, Noboru; Mochizuki, Satoshi

    2017-04-01

    Reduced autobiographical memory specificity (rAMS) is a characteristic memory bias observed in depression. To corroborate the capture hypothesis in the CaRFAX (capture and rumination, functional avoidance, executive capacity and control) model, we investigated the effects of self-relevant cues and cue valence on rAMS using an adapted Autobiographical Memory Test conducted with a nonclinical population. Hierarchical linear modelling indicated that the main effects of depression and self-relevant cues elicited rAMS. Moreover, the three-way interaction among valence, self-relevance, and depression scores was significant. A simple slope test revealed that dysphoric participants experienced rAMS in response to highly self-relevant positive cues and low self-relevant negative cues. These results partially supported the capture hypothesis in nonclinical dysphoria. It is important to consider cue valence in future studies examining the capture hypothesis.

  17. A semi-analytical model for the acoustic impedance of finite length circular holes with mean flow

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Morgans, Aimee S.

    2016-12-01

    The acoustic response of a circular hole with mean flow passing through it is highly relevant to Helmholtz resonators, fuel injectors, perforated plates, screens, liners and many other engineering applications. A widely used analytical model [M.S. Howe. "On the theory of unsteady high Reynolds number flow through a circular aperture", Proc. of the Royal Soc. A. 366, 1725 (1979), 205-223] which assumes an infinitesimally short hole was recently shown to be insufficient for predicting the impedance of holes with a finite length. In the present work, an analytical model based on Green's function method is developed to take the hole length into consideration for "short" holes. The importance of capturing the modified vortex noise accurately is shown. The vortices shed at the hole inlet edge are convected to the hole outlet and further downstream to form a vortex sheet. This couples with the acoustic waves and this coupling has the potential to generate as well as absorb acoustic energy in the low frequency region. The impedance predicted by this model shows the importance of capturing the path of the shed vortex. When the vortex path is captured accurately, the impedance predictions agree well with previous experimental and CFD results, for example predicting the potential for generation of acoustic energy at higher frequencies. For "long" holes, a simplified model which combines Howe's model with plane acoustic waves within the hole is developed. It is shown that the most important effect in this case is the acoustic non-compactness of the hole.

  18. A closure test for time-specific capture-recapture data

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    The assumption of demographic closure in the analysis of capture-recapture data under closed-population models is of fundamental importance. Yet, little progress has been made in the development of omnibus tests of the closure assumption. We present a closure test for time-specific data that, in principle, tests the null hypothesis of closed-population model M(t) against the open-population Jolly-Seber model as a specific alternative. This test is chi-square, and can be decomposed into informative components that can be interpreted to determine the nature of closure violations. The test is most sensitive to permanent emigration and least sensitive to temporary emigration, and is of intermediate sensitivity to permanent or temporary immigration. This test is a versatile tool for testing the assumption of demographic closure in the analysis of capture-recapture data.

  19. Cycle development and design for CO2 capture from flue gas by vacuum swing adsorption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun Zhang; Paul A. Webley

    CO2 capture and storage is an important component in the development of clean power generation processes. One CO2 capture technology is gas-phase adsorption, specifically pressure (or vacuum) swing adsorption. The complexity of these processes makes evaluation and assessment of new adsorbents difficult and time-consuming. In this study, we have developed a simple model specifically targeted at CO2 capture by pressure swing adsorption and validated our model by comparison with data from a fully instrumented pilot-scale pressure swing adsorption process. The model captures non-isothermal effects as well as nonlinear adsorption and nitrogen coadsorption. Using the model and our apparatus, we have designed and studied a large number of cycles for CO2 capture. We demonstrate that by careful management of adsorption fronts and assembly of cycles based on understanding of the roles of individual steps, we are able to quickly assess the effect of adsorbents and process parameters on capture performance and identify optimal operating regimes and cycles. We recommend this approach in contrast to exhaustive parametric studies, which tend to depend on specifics of the chosen cycle and adsorbent. We show that appropriate combinations of process steps can yield excellent process performance and demonstrate how pressure drop, heat loss, etc. affect process performance through their effect on adsorption fronts and profiles. Finally, cyclic temperature profiles along the adsorption column can be readily used to infer concentration profiles - this has proved to be a very useful tool in cyclic function definition. Our research reveals excellent promise for the application of pressure/vacuum swing adsorption technology in the arena of CO2 capture from flue gases. 20 refs., 6 figs., 2 tabs.

  20. Cycle development and design for CO2 capture from flue gas by vacuum swing adsorption.

    PubMed

    Zhang, Jun; Webley, Paul A

    2008-01-15

    CO2 capture and storage is an important component in the development of clean power generation processes. One CO2 capture technology is gas-phase adsorption, specifically pressure (or vacuum) swing adsorption. The complexity of these processes makes evaluation and assessment of new adsorbents difficult and time-consuming. In this study, we have developed a simple model specifically targeted at CO2 capture by pressure swing adsorption and validated our model by comparison with data from a fully instrumented pilot-scale pressure swing adsorption process. The model captures nonisothermal effects as well as nonlinear adsorption and nitrogen coadsorption. Using the model and our apparatus, we have designed and studied a large number of cycles for CO2 capture. We demonstrate that by careful management of adsorption fronts and assembly of cycles based on understanding of the roles of individual steps, we are able to quickly assess the effect of adsorbents and process parameters on capture performance and identify optimal operating regimes and cycles. We recommend this approach in contrast to exhaustive parametric studies which tend to depend on specifics of the chosen cycle and adsorbent. We show that appropriate combinations of process steps can yield excellent process performance and demonstrate how the pressure drop, and heat loss, etc. affect process performance through their effect on adsorption fronts and profiles. Finally, cyclic temperature profiles along the adsorption column can be readily used to infer concentration profiles-this has proved to be a very useful tool in cyclic function definition. Our research reveals excellent promise for the application of pressure/vacuum swing adsorption technology in the arena of CO2 capture from flue gases.

  1. THE RADIATIVE NEUTRON CAPTURE ON 2H, 6Li, 7Li, 12C AND 13C AT ASTROPHYSICAL ENERGIES

    NASA Astrophysics Data System (ADS)

    Dubovichenko, Sergey; Dzhazairov-Kakhramanov, Albert; Burkova, Natalia

    2013-05-01

    The continued interest in the study of radiative neutron capture on atomic nuclei is due, on the one hand, to the important role played by this process in the analysis of many fundamental properties of nuclei and nuclear reactions and, on the other hand, to the wide use of capture cross-section data in various applications of nuclear physics and nuclear astrophysics, as well as to the importance of the analysis of primordial nucleosynthesis in the Universe. This paper is devoted to the description of results for the processes of radiative neutron capture on certain light atomic nuclei at thermal and astrophysical energies. These processes are considered within the framework of the potential cluster model (PCM), a general description of which was given earlier. The use of the obtained results, based on intercluster potentials from phase shift analysis, is demonstrated in calculations of the radiative capture characteristics. The considered capture reactions are not part of stellar thermonuclear cycles, but they are involved in the basic reaction chain of primordial nucleosynthesis in the course of the formation of the Universe.

  2. S-factor for radiative capture reactions for light nuclei at astrophysical energies

    NASA Astrophysics Data System (ADS)

    Ghasemi, Reza; Sadeghi, Hossein

    2018-06-01

    The astrophysical S-factors of thermonuclear reactions, including radiative capture reactions, and their analysis in the frame of different theoretical models are the main source of information on these nuclear processes. We have studied the importance of radiative capture reactions in the framework of a potential model. Investigation of these reactions at astrophysical energies is of great interest for astrophysics and nuclear physics, for developing correct models of the burning and evolution of stars. Direct experimental measurements are very difficult, and sometimes impossible, because these reactions occur at low energies. In this paper we calculate radiative capture astrophysical S-factors for nuclei in the mass region A < 17. We calculate the astrophysical factor for the electric dipole transition E1, the magnetic dipole transition M1, and the electric quadrupole transition E2 by using the M3Y potential for non-resonant and resonant captures. We then obtain the parameters of the central and spin-orbit parts of the M3Y potential and the spectroscopic factors for the reaction channels. Good agreement with experimental data and other theoretical methods is achieved for the astrophysical S-factors reported here.
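    For context, the astrophysical S-factor used throughout this entry follows the standard convention of factoring the Coulomb penetrability out of the capture cross section (this is the textbook definition, not a result specific to this paper):

    ```latex
    S(E) = \sigma(E)\, E \, e^{2\pi\eta(E)},
    \qquad
    \eta(E) = \frac{Z_1 Z_2 e^2}{\hbar v},
    ```

    where \(\sigma(E)\) is the capture cross section, \(\eta\) is the Sommerfeld parameter, \(Z_1, Z_2\) are the charges of the interacting nuclei, and \(v\) is their relative velocity. Because the exponential removes the steep low-energy Coulomb suppression, \(S(E)\) varies slowly and can be extrapolated to the astrophysical energies where direct measurement is impractical.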

  3. Augmenting superpopulation capture-recapture models with population assignment data

    USGS Publications Warehouse

    Wen, Zhi; Pollock, Kenneth; Nichols, James; Waser, Peter

    2011-01-01

    Ecologists applying capture-recapture models to animal populations sometimes have access to additional information about individuals' populations of origin (e.g., information about genetics, stable isotopes, etc.). Tests that assign an individual's genotype to its most likely source population are increasingly used. Here we show how to augment a superpopulation capture-recapture model with such information. We consider a single superpopulation model without age structure, and split each entry probability into separate components due to births in situ and immigration. We show that it is possible to estimate these two probabilities separately. We first consider the case of perfect information about population of origin, where we can distinguish individuals born in situ from immigrants with certainty. Then we consider the more realistic case of imperfect information, where we use genetic or other information to assign probabilities to each individual's origin as in situ or outside the population. We use a resampling approach to impute the true population of origin from imperfect assignment information. The integration of data on population of origin with capture-recapture data allows us to determine the contributions of immigration and in situ reproduction to the growth of the population, an issue of importance to ecologists. We illustrate our new models with capture-recapture and genetic assignment data from a population of banner-tailed kangaroo rats Dipodomys spectabilis in Arizona.

  4. An Efficient Analysis Methodology for Fluted-Core Composite Structures

    NASA Technical Reports Server (NTRS)

    Oremont, Leonard; Schultz, Marc R.

    2012-01-01

    The primary loading condition in launch-vehicle barrel sections is axial compression, and it is therefore important to understand the compression behavior of any structures, structural concepts, and materials considered in launch-vehicle designs. This understanding will necessarily come from a combination of test and analysis. However, certain potentially beneficial structures and structural concepts do not lend themselves to commonly used simplified analysis methods, and therefore innovative analysis methodologies must be developed if these structures and structural concepts are to be considered. This paper discusses such an analysis technique for the fluted-core sandwich composite structural concept. The presented technique is based on commercially available finite-element codes, and uses shell elements to capture behavior that would normally require solid elements to capture the detailed mechanical response of the structure. The shell thicknesses and offsets using this analysis technique are parameterized, and the parameters are adjusted through a heuristic procedure until this model matches the mechanical behavior of a more detailed shell-and-solid model. Additionally, the detailed shell-and-solid model can be strategically placed in a larger, global shell-only model to capture important local behavior. Comparisons between shell-only models, experiments, and more detailed shell-and-solid models show excellent agreement. The discussed analysis methodology, though only discussed in the context of fluted-core composites, is widely applicable to other concepts.

  5. Capture-recapture studies for multiple strata including non-markovian transitions

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.; Pollock, K.H.; Hestbeck, J.B.

    1993-01-01

    We consider capture-recapture studies where release and recapture data are available from each of a number of strata on every capture occasion. Strata may, for example, be geographic locations or physiological states. Movement of animals among strata occurs with unknown probabilities, and estimation of these unknown transition probabilities is the objective. We describe a computer routine for carrying out the analysis under a model that assumes Markovian transitions and under reduced-parameter versions of this model. We also introduce models that relax the Markovian assumption and allow 'memory' to operate (i.e., allow dependence of the transition probabilities on the previous state). For these models, we suggest an analysis based on a conditional likelihood approach. Methods are illustrated with data from a large study on Canada geese (Branta canadensis) banded in three geographic regions. The assumption of Markovian transitions is rejected convincingly for these data, emphasizing the importance of the more general models that allow memory.
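    In the idealized case where every animal is detected on every occasion, the Markovian transition probabilities reduce to simple transition counts; the sketch below (assuming perfect detection, which the paper's likelihood-based models do not require, and two hypothetical strata 'N' and 'S') illustrates what is being estimated:

    ```python
    import random
    from collections import Counter

    def estimate_transitions(histories, strata):
        """Maximum-likelihood first-order Markov transition matrix from
        complete state histories: count transitions a -> b and normalize
        each row. (Toy version of the multistratum estimation problem.)"""
        counts = Counter()
        for h in histories:
            for a, b in zip(h, h[1:]):
                counts[(a, b)] += 1
        return {a: {b: counts[(a, b)] / max(1, sum(counts[(a, c)] for c in strata))
                    for b in strata} for a in strata}

    # Simulate geese moving between two hypothetical strata with known
    # Markovian transition probabilities, then recover them by counting.
    rng = random.Random(0)
    true_p = {'N': {'N': 0.8, 'S': 0.2}, 'S': {'N': 0.3, 'S': 0.7}}
    histories = []
    for _ in range(2000):
        state, h = 'N', ['N']
        for _ in range(4):  # four transitions per animal
            state = 'N' if rng.random() < true_p[state]['N'] else 'S'
            h.append(state)
        histories.append(h)
    est = estimate_transitions(histories, ['N', 'S'])
    ```

    A 'memory' model of the kind the paper introduces would instead condition each transition on the previous two states, which this first-order counting scheme cannot represent.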

  6. Multiscale Modeling of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Mital, Subodh K.; Pineda, Evan J.; Arnold, Steven M.

    2015-01-01

    Results of multiscale modeling simulations of the nonlinear response of SiC/SiC ceramic matrix composites are reported, wherein the microstructure of the ceramic matrix is captured. This microscale architecture, which contains free Si material as well as the SiC ceramic, is responsible for residual stresses that play an important role in the subsequent thermo-mechanical behavior of the SiC/SiC composite. Using the novel Multiscale Generalized Method of Cells recursive micromechanics theory, the microstructure of the matrix, as well as the microstructure of the composite (fiber and matrix), can be captured.

  7. Linking animal-borne video to accelerometers reveals prey capture variability.

    PubMed

    Watanabe, Yuuki Y; Takahashi, Akinori

    2013-02-05

    Understanding foraging is important in ecology, as it determines the energy gains and, ultimately, the fitness of animals. However, monitoring prey captures of individual animals is difficult. Direct observations using animal-borne videos have short recording periods, and indirect signals (e.g., stomach temperature) are never validated in the field. We took an integrated approach to monitor prey captures by a predator by deploying a video camera (lasting for 85 min) and two accelerometers (on the head and back, lasting for 50 h) on free-swimming Adélie penguins. The movies showed that penguins moved their heads rapidly to capture krill in midwater and fish (Pagothenia borchgrevinki) underneath the sea ice. Captures were remarkably fast (two krill per second in swarms) and efficient (244 krill or 33 P. borchgrevinki in 78-89 min). Prey captures were detected by the signal of head acceleration relative to body acceleration with high sensitivity and specificity (0.83-0.90), as shown by receiver-operating characteristic analysis. Extension of signal analysis to the entire behavioral records showed that krill captures were spatially and temporally more variable than P. borchgrevinki captures. Notably, the frequency distribution of krill capture rate closely followed a power-law model, indicating that the foraging success of penguins depends on a small number of very successful dives. The three steps illustrated here (i.e., video observations, linking video to behavioral signals, and extension of signal analysis) are unique approaches to understanding the spatial and temporal variability of ecologically important events such as foraging.
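
    The detection idea, flagging captures where head acceleration departs from body acceleration and scoring the detector by sensitivity and specificity, can be sketched as follows (the threshold and signals are made up; the paper's actual signal processing is more involved):

```python
def detect_captures(head_acc, body_acc, threshold):
    """Flag a capture wherever head acceleration departs from body
    acceleration by more than `threshold` (a simplified stand-in for
    the paper's signal processing)."""
    return [abs(h - b) > threshold for h, b in zip(head_acc, body_acc)]

def sensitivity_specificity(predicted, actual):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    return tp / (tp + fn), tn / (tn + fp)

head = [0.1, 2.0, 0.2, 1.8, 0.1]           # head-mounted accelerometer
body = [0.1, 0.3, 0.2, 0.2, 0.1]           # back-mounted accelerometer
truth = [False, True, False, True, False]  # video-validated captures
pred = detect_captures(head, body, 1.0)
print(sensitivity_specificity(pred, truth))  # → (1.0, 1.0)
```

    Sweeping the threshold and plotting sensitivity against (1 − specificity) would give the receiver-operating characteristic curve mentioned in the abstract.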

  8. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    USGS Publications Warehouse

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response, are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters, illustrating the importance of rigorous modeling of sex differences in capture-recapture models.

  9. Informing neutron capture nucleosynthesis on short-lived nuclei with (d,p) reactions

    NASA Astrophysics Data System (ADS)

    Cizewski, Jolie A.; Ratkiewicz, Andrew; Escher, Jutta E.; Lepailleur, Alexandre; Pain, Steven D.; Potel, Gregory

    2018-01-01

    Neutron capture on unstable nuclei is important in understanding abundances in r-process nucleosynthesis. Previously, the non-elastic breakup of the deuteron in the (d,p) reaction has been shown to provide a neutron that can be captured by the nucleus and the gamma-ray decay of the subsequent compound nucleus can be modelled to predict the gamma-ray decay of the compound nucleus in the (n,γ) reaction. Preliminary results from the 95Mo(d,pγ) reaction in normal kinematics support the (d,pγ) reaction as a valid surrogate for neutron capture. The techniques to measure the (d,pγ) reaction in inverse kinematics have been developed.

  10. Comparing Models and Methods for the Delineation of Stream Baseflow Contribution Areas

    NASA Astrophysics Data System (ADS)

    Chow, R.; Frind, M.; Frind, E. O.; Jones, J. P.; Sousa, M.; Rudolph, D. L.; Nowak, W.

    2016-12-01

    This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to parameter non-uniqueness, discretization schemes, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternate approach that provides probability intervals for the baseflow contribution areas. In situations where the two approaches agree, the confidence in the delineation is reinforced.
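
    Reverse particle tracking, one of the two delineation methods compared, amounts to integrating particle positions backward through the velocity field; the uniform toy field below stands in for a calibrated groundwater model:

```python
def track_backward(pos, velocity_at, dt, steps):
    """Step a particle backward along the flow field to trace where
    stream water originated.  A calibrated groundwater model would
    supply velocity_at; here any callable (x, y) -> (vx, vy) works."""
    x, y = pos
    for _ in range(steps):
        vx, vy = velocity_at(x, y)
        x -= dt * vx  # reverse tracking: subtract the velocity
        y -= dt * vy
    return x, y

# Toy uniform flow field (1.0, 0.5); trace a stream cell back 10 steps.
print(track_backward((10.0, 5.0), lambda x, y: (1.0, 0.5), 1.0, 10))  # → (0.0, 0.0)
```

    The sensitivity to the tracking algorithm reported in the study arises because real fields are interpolated from a discretized model, so different integration and interpolation schemes trace different paths.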

  11. Delineating baseflow contribution areas for streams - A model and methods comparison

    NASA Astrophysics Data System (ADS)

    Chow, Reynold; Frind, Michael E.; Frind, Emil O.; Jones, Jon P.; Sousa, Marcelo R.; Rudolph, David L.; Molson, John W.; Nowak, Wolfgang

    2016-12-01

    This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to model characteristics, discretization schemes, delineation methods, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternative and more reliable approach that provides probability intervals for the baseflow contribution areas, taking uncertainty into account. The two approaches can be used together to enhance the confidence in the final outcome.

  12. A novel metadata management model to capture consent for record linkage in longitudinal research studies.

    PubMed

    McMahon, Christiana; Denaxas, Spiros

    2017-11-06

    Informed consent is an important feature of longitudinal research studies as it enables the linking of the baseline participant information with administrative data. The lack of standardized models to capture consent elements can lead to substantial challenges; a structured approach to capturing consent-related metadata can address these. Our objectives were to: a) explore the state-of-the-art for recording consent; b) identify key elements of consent required for record linkage; and c) create and evaluate a novel metadata management model to capture consent-related metadata. The main methodological components of our work were: a) a systematic literature review and qualitative analysis of consent forms; b) the development and evaluation of a novel metadata model. We qualitatively analyzed 61 manuscripts and 30 consent forms. We extracted data elements related to obtaining consent for linkage. We created a novel metadata management model for consent and evaluated it by comparison with the existing standards and by iteratively applying it to case studies. The developed model can facilitate the standardized recording of consent for linkage in longitudinal research studies and enable the linkage of external participant data. Furthermore, it can provide a structured way of recording consent-related metadata and facilitate the harmonization and streamlining of processes.
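
    A consent-for-linkage metadata record might be represented as a small structured type; the field names here are assumptions for illustration, not the authors' published model:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Illustrative consent-for-linkage metadata; field names are
    assumptions, not the authors' published model."""
    participant_id: str
    consent_given: bool
    consent_date: str          # ISO 8601, e.g. "2017-11-06"
    linkable_sources: list = field(default_factory=list)

record = ConsentRecord("p001", True, "2017-11-06")
record.linkable_sources.append("hospital_admissions")
print(record)
```

    Using `default_factory` gives each record its own list of linkable sources, so consent scope can differ per participant without shared state.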

  13. Bayesian inference in camera trapping studies for a class of spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba

    2009-01-01

    We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is a hierarchical model composed of two components: a point process model describing the distribution of individuals in space (or their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models, where the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models using a formulation based on data augmentation. We apply the models to camera trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. For this study, 120 camera locations were used, but cameras were only operational at 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.
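
    The distance-dependent capture probability in such SCR models is commonly a half-normal function of the distance between home-range center and trap; a minimal sketch (the paper's exact parameterization may differ):

```python
import math

def encounter_prob(center, trap, p0, sigma):
    """Half-normal SCR detection function: baseline probability p0
    decays with squared distance between home-range center and trap."""
    d2 = (center[0] - trap[0]) ** 2 + (center[1] - trap[1]) ** 2
    return p0 * math.exp(-d2 / (2.0 * sigma ** 2))

# Probability falls off as the trap moves away from the activity center.
for trap_x in (0.0, 1.0, 2.0):
    print(encounter_prob((0.0, 0.0), (trap_x, 0.0), p0=0.3, sigma=1.0))
```

    In the Bayesian formulation described above, the unobserved centers are random effects, sampled alongside p0 and sigma via data augmentation.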

  14. Characteristics of fishing operations, environment and life history contributing to small cetacean bycatch in the northeast Atlantic.

    PubMed

    Brown, Susie; Reid, David; Rogan, Emer

    2014-01-01

    Fisheries bycatch is a key threat to cetacean species globally. Managing the impact requires an understanding of the conditions under which animals are caught and the sections of the population affected. We used observer data collected on an albacore tuna gillnet fishery in the northeast Atlantic, to assess operational and environmental factors contributing to bycatch of common and striped dolphins, using generalised linear models and model averaging. Life history demographics of the captured animals were also investigated. In both species, young males dominated the catch. The age ratio of common dolphins was significantly different from that estimated for the population in the region, based on life tables (G = 17.1, d.f. = 2, p = 0.002). Skewed age and sex ratios may reflect varying vulnerability to capture, through differences in behaviour or segregation in populations. Adult females constituted the second largest portion of the bycatch for both species, with potential consequences for population sustainability. Depth was the most important parameter influencing bycatch of both species and reflected what is known about common and striped dolphin habitat use in the region: the probability of catching common dolphins decreased, and that of striped dolphins increased, with increasing depth. Striped dolphin capture was similarly influenced by the extent to which operations were conducted in daylight, with the probability of capture increasing with increased operations in the pre-sunset and post-sunrise period, potentially driven by increased ability of observers to record animals during daylight operations, or by diurnal movements increasing contact with the fishery. Effort, based on net length and soak time, had little influence on the probability of capturing either species.
Our results illustrate the importance of assessing the demographic of the animals captured during observer programmes and, perhaps more importantly, suggest that effort restrictions alone may not be sufficient to eradicate bycatch in areas where driftnets and small cetaceans co-occur.

  15. Characteristics of Fishing Operations, Environment and Life History Contributing to Small Cetacean Bycatch in the Northeast Atlantic

    PubMed Central

    Brown, Susie; Reid, David; Rogan, Emer

    2014-01-01

    Fisheries bycatch is a key threat to cetacean species globally. Managing the impact requires an understanding of the conditions under which animals are caught and the sections of the population affected. We used observer data collected on an albacore tuna gillnet fishery in the northeast Atlantic, to assess operational and environmental factors contributing to bycatch of common and striped dolphins, using generalised linear models and model averaging. Life history demographics of the captured animals were also investigated. In both species, young males dominated the catch. The age ratio of common dolphins was significantly different from that estimated for the population in the region, based on life tables (G = 17.1, d.f. = 2, p = 0.002). Skewed age and sex ratios may reflect varying vulnerability to capture, through differences in behaviour or segregation in populations. Adult females constituted the second largest portion of the bycatch for both species, with potential consequences for population sustainability. Depth was the most important parameter influencing bycatch of both species and reflected what is known about common and striped dolphin habitat use in the region: the probability of catching common dolphins decreased, and that of striped dolphins increased, with increasing depth. Striped dolphin capture was similarly influenced by the extent to which operations were conducted in daylight, with the probability of capture increasing with increased operations in the pre-sunset and post-sunrise period, potentially driven by increased ability of observers to record animals during daylight operations, or by diurnal movements increasing contact with the fishery. Effort, based on net length and soak time, had little influence on the probability of capturing either species.
Our results illustrate the importance of assessing the demographic of the animals captured during observer programmes and, perhaps more importantly, suggest that effort restrictions alone may not be sufficient to eradicate bycatch in areas where driftnets and small cetaceans co-occur. PMID:25121802

  16. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  17. Modelling evapotranspiration during precipitation deficits: Identifying critical processes in a land surface model

    DOE PAGES

    Ukkola, Anna M.; Pitman, Andy J.; Decker, Mark; ...

    2016-06-21

    Surface fluxes from land surface models (LSMs) have traditionally been evaluated against monthly, seasonal or annual mean states. The limited ability of LSMs to reproduce observed evaporative fluxes under water-stressed conditions has been previously noted, but very few studies have systematically evaluated these models during rainfall deficits. We evaluated latent heat fluxes simulated by the Community Atmosphere Biosphere Land Exchange (CABLE) LSM across 20 flux tower sites at sub-annual to inter-annual timescales, in particular focusing on model performance during seasonal-scale rainfall deficits. The importance of key model processes in capturing the latent heat flux was explored by employing alternative representations of hydrology, leaf area index, soil properties and stomatal conductance. We found that the representation of hydrological processes was critical for capturing observed declines in latent heat during rainfall deficits. By contrast, the effects of soil properties, LAI and stomatal conductance were highly site-specific. Whilst the standard model performs reasonably well at annual scales as measured by common metrics, it grossly underestimates latent heat during rainfall deficits. A new version of CABLE, with a more physically consistent representation of hydrology, captures the variation in the latent heat flux during seasonal-scale rainfall deficits better than earlier versions, but remaining biases point to future research needs. Lastly, our results highlight the importance of evaluating LSMs under water-stressed conditions and across multiple plant functional types and climate regimes.

  18. Particle-scale CO2 adsorption kinetics modeling considering three reaction mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Dong-Myung; Sun, Xin

    2013-09-01

    In the presence of water (H2O), dry and wet adsorption of carbon dioxide (CO2) and physical adsorption of H2O happen concurrently in a sorbent particle. The three reactions depend on each other and have a complicated, but important, effect on CO2 capture via a solid sorbent. In this study, transport phenomena in the sorbent were modeled, including the three reactions, and a numerical solving procedure for the model also was explained. The reaction variable distributions in the sorbent and their average values were calculated, and simulation results were compared with experimental data to validate the proposed model. Some differences, caused by thermodynamic parameters, were observed between them. However, the developed model reasonably simulated the adsorption behaviors of a sorbent. The weight gained by each adsorbed species, CO2 and H2O, is difficult to determine experimentally. It is known that more CO2 can be captured in the presence of water. Still, it is not yet known quantitatively how much more CO2 the sorbent can capture, nor is it known how much dry and wet adsorption separately account for CO2 capture. This study addresses those questions by modeling CO2 adsorption in a particle and simulating the adsorption process using the model. As the adsorption temperature was varied across several values, the adsorbed amount of each species was calculated. The captured CO2 in the sorbent particle was compared quantitatively between dry and wet conditions. As the adsorption temperature decreased, wet adsorption increased. However, dry adsorption was reduced.
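
    The flavor of such kinetics models can be conveyed with a much simpler linear-driving-force sketch, a single first-order uptake equation standing in for the three coupled reactions:

```python
def simulate_loading(q_eq, k, dt, steps):
    """Forward-Euler integration of a linear-driving-force model
    dq/dt = k * (q_eq - q): sorbent loading q relaxes toward its
    equilibrium value q_eq.  A one-equation stand-in for the paper's
    three coupled adsorption reactions."""
    q = 0.0
    for _ in range(steps):
        q += dt * k * (q_eq - q)
    return q

# Loading approaches equilibrium as the simulation runs longer.
print(simulate_loading(1.0, 1.0, 0.01, 1000))
```

    The particle-scale model described above couples three such equations (dry CO2, wet CO2, and H2O physisorption) through shared species concentrations and temperature-dependent equilibria.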

  19. Landscape effects on diets of two canids in Northwestern Texas: A multinomial modeling approach

    USGS Publications Warehouse

    Lemons, P.R.; Sedinger, J.S.; Herzog, M.P.; Gipson, P.S.; Gilliland, R.L.

    2010-01-01

    Analyses of feces, stomach contents, and regurgitated pellets are common techniques for assessing diets of vertebrates and typically contain more than 1 food item per sampling unit. When analyzed, these individual food items have traditionally been treated as independent, which represents pseudoreplication. When food types are recorded as present or absent, these samples can be treated as multinomial vectors of food items, with each vector representing 1 realization of a possible diet. We suggest such data have a similar structure to capture histories for closed-capture, capture-mark-recapture data. To assess the effects of landscapes and presence of a potential competitor, we used closed-capture models implemented in program MARK to analyze diet data generated from feces of swift foxes (Vulpes velox) and coyotes (Canis latrans) in northwestern Texas. The best models of diet contained season and location for both swift foxes and coyotes, but year accounted for less variation, suggesting that landscape type is an important predictor of diets of both species. Models containing the effect of coyote reduction were not competitive (ΔQAICc = 53.6685), consistent with the hypothesis that presence of coyotes did not influence diet of swift foxes. Our findings suggest that landscape type may have important influences on diets of both species. We believe that multinomial models represent an effective approach to assess hypotheses when diet studies have a data structure similar to ours. © 2010 American Society of Mammalogists.
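
    The encoding the authors describe, each scat sample as a presence/absence vector analogous to a closed-capture encounter history, can be sketched directly (the food items are hypothetical):

```python
# Hypothetical food items recorded as present/absent in each scat sample.
FOOD_ITEMS = ["rodent", "insect", "bird", "vegetation"]

def to_history(items_found):
    """Encode one scat sample as a 0/1 string, analogous to a
    closed-capture encounter history in program MARK."""
    return "".join("1" if item in items_found else "0" for item in FOOD_ITEMS)

print(to_history({"rodent", "vegetation"}))  # → 1001
```

    Treating each sample as one multinomial vector, rather than each food item as an independent observation, is what avoids the pseudoreplication the authors criticize.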

  20. Polar bears in the Beaufort Sea: A 30-year mark-recapture case history

    USGS Publications Warehouse

    Amstrup, Steven C.; McDonald, T.L.; Stirling, I.

    2001-01-01

    Knowledge of population size and trend is necessary to manage anthropogenic risks to polar bears (Ursus maritimus). Despite capturing over 1,025 females between 1967 and 1998, previously calculated estimates of the size of the southern Beaufort Sea (SBS) population have been unreliable. We improved estimates of numbers of polar bears by modeling heterogeneity in capture probability with covariates. Important covariates referred to the year of the study, age of the bear, capture effort, and geographic location. Our choice of best approximating model was based on the inverse relationship between variance in parameter estimates and likelihood of the fit and suggested a growth from ≈ 500 to over 1,000 females during this study. The mean coefficient of variation on estimates for the last decade of the study was 0.16—the smallest yet derived. A similar model selection approach is recommended for other projects where a best model is not identified by likelihood criteria alone.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kress, Joel David

    The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric analysis (TGA) data.

  2. Methodology to develop crash modification functions for road safety treatments with fully specified and hierarchical models.

    PubMed

    Chen, Yongsheng; Persaud, Bhagwant

    2014-09-01

    Crash modification factors (CMFs) for road safety treatments are developed as multiplicative factors that are used to reflect the expected changes in safety performance associated with changes in highway design and/or the traffic control features. However, current CMFs have methodological drawbacks. For example, variability with application circumstance is not well understood, and, as important, correlation is not addressed when several CMFs are applied multiplicatively. These issues can be addressed by developing safety performance functions (SPFs) with components of crash modification functions (CM-Functions), an approach that includes all CMF-related variables, along with others, while capturing quantitative and other effects of factors and accounting for cross-factor correlations. CM-Functions can capture the safety impact of factors through a continuous and quantitative approach, avoiding the problematic categorical analysis that is often used to capture CMF variability. There are two formulations to develop such SPFs with CM-Function components - fully specified models and hierarchical models. Based on sample datasets from two Canadian cities, both approaches are investigated in this paper. While both model formulations yielded promising results and reasonable CM-Functions, the hierarchical model was found to be more suitable in retaining homogeneity of first-level SPFs, while addressing CM-Functions in sub-level modeling. In addition, hierarchical models better capture the correlations between different impact factors. Copyright © 2014 Elsevier Ltd. All rights reserved.
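
    The multiplicative SPF-times-CMFs structure being critiqued can be sketched as follows; the functional form is a common SPF shape, but the coefficients are invented, not fitted values from the paper:

```python
import math

def predicted_crashes(aadt, length_km, cmfs, alpha=-7.0, beta=0.8):
    """Illustrative safety performance function: a base crash frequency
    exp(alpha) * AADT^beta * length, scaled by multiplicative crash
    modification factors.  Coefficients are invented for illustration."""
    mean = math.exp(alpha) * aadt ** beta * length_km
    for cmf in cmfs:
        mean *= cmf  # each treatment multiplies the expected frequency
    return mean

base = predicted_crashes(5000, 2.0, [])
treated = predicted_crashes(5000, 2.0, [0.9, 0.8])  # two combined CMFs
print(round(treated / base, 6))  # → 0.72
```

    Multiplying the two CMFs assumes their effects are independent; the correlation problem the authors address is precisely that this assumption often fails, which motivates estimating CM-Functions jointly inside the SPF.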

  3. Beyond the Papilionoids – What can We Learn from Chamaecrista?

    USDA-ARS?s Scientific Manuscript database

    Expanding legume research beyond the model papilionoids is necessary if we wish to capture more of the diversity of the enormous, economically important legume family. Chamaecrista fasciculata is emerging as a non-papilionoid model, belonging to the paraphyletic subfamily Caesalpinioideae within th...

  4. Decision exploration lab: a visual analytics solution for decision management.

    PubMed

    Broeksema, Bertjan; Baudel, Thomas; Telea, Arthur G; Crisafulli, Paolo

    2013-12-01

    We present a visual analytics solution designed to address prevalent issues in the area of Operational Decision Management (ODM). In ODM, which has its roots in Artificial Intelligence (Expert Systems) and Management Science, it is increasingly important to align business decisions with business goals. In our work, we consider decision models (executable models of the business domain) as ontologies that describe the business domain, and production rules that describe the business logic of decisions to be made over this ontology. Executing a decision model produces an accumulation of decisions made over time for individual cases. We are interested, first, to get insight in the decision logic and the accumulated facts by themselves. Secondly and more importantly, we want to see how the accumulated facts reveal potential divergences between the reality as captured by the decision model, and the reality as captured by the executed decisions. We illustrate the motivation, added value for visual analytics, and our proposed solution and tooling through a business case from the car insurance industry.

  5. Linking animal-borne video to accelerometers reveals prey capture variability

    PubMed Central

    Watanabe, Yuuki Y.; Takahashi, Akinori

    2013-01-01

    Understanding foraging is important in ecology, as it determines the energy gains and, ultimately, the fitness of animals. However, monitoring prey captures of individual animals is difficult. Direct observations using animal-borne videos have short recording periods, and indirect signals (e.g., stomach temperature) are never validated in the field. We took an integrated approach to monitor prey captures by a predator by deploying a video camera (lasting for 85 min) and two accelerometers (on the head and back, lasting for 50 h) on free-swimming Adélie penguins. The movies showed that penguins moved their heads rapidly to capture krill in midwater and fish (Pagothenia borchgrevinki) underneath the sea ice. Captures were remarkably fast (two krill per second in swarms) and efficient (244 krill or 33 P. borchgrevinki in 78–89 min). Prey captures were detected by the signal of head acceleration relative to body acceleration with high sensitivity and specificity (0.83–0.90), as shown by receiver-operating characteristic analysis. Extension of signal analysis to the entire behavioral records showed that krill captures were spatially and temporally more variable than P. borchgrevinki captures. Notably, the frequency distribution of krill capture rate closely followed a power-law model, indicating that the foraging success of penguins depends on a small number of very successful dives. The three steps illustrated here (i.e., video observations, linking video to behavioral signals, and extension of signal analysis) are unique approaches to understanding the spatial and temporal variability of ecologically important events such as foraging. PMID:23341596

  6. Delineating baseflow contribution areas for streams - A model and methods comparison.

    PubMed

    Chow, Reynold; Frind, Michael E; Frind, Emil O; Jones, Jon P; Sousa, Marcelo R; Rudolph, David L; Molson, John W; Nowak, Wolfgang

    2016-12-01

    This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to model characteristics, discretization schemes, delineation methods, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternative and more reliable approach that provides probability intervals for the baseflow contribution areas, taking uncertainty into account. The two approaches can be used together to enhance the confidence in the final outcome. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  7. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    The popularity of Terrestrial Laser Scanners (TLS) for capturing three-dimensional (3D) objects has led to their wide use in various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualizing objects in a city environment in 3D can be useful for many applications. However, different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four levels of detail (LOD). In this research, the advantages of TLS for capturing buildings, and the process of modelling from the point cloud, are explored. TLS is used to capture all the building details needed to generate multiple LODs. In previous works, this task usually involved the integration of several sensors. In this research, however, the point cloud from TLS is processed to generate the LOD3 model; LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guiding process for generating multi-LOD 3D building models starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model is also shown.

  8. Use of Multivariate Techniques to Validate and Improve the Current USAF Pilot Candidate Selection Model

    DTIC Science & Technology

    2003-03-01

    organizations. Reducing attrition rates through optimal selection decisions can "reduce training cost, improve job performance, and enhance... capturing the weights for use in the SNR method is not straightforward. A special VBA application had to be written to capture and organize the network... before the VBA application can be used. Appendix D provides the VBA code used to import and organize the network weights and input standardization

  9. Genetic and Environmental Influences on Behavior: Capturing All the Interplay

    ERIC Educational Resources Information Center

    Johnson, Wendy

    2007-01-01

    Basic quantitative genetic models of human behavioral variation have made clear that individual differences in behavior cannot be understood without acknowledging the importance of genetic influences. Yet these basic models estimate average, population-level genetic and environmental influences, obscuring differences that might exist within the…

  10. The Effect of modeled recharge distribution on simulated groundwater availability and capture

    USGS Publications Warehouse

    Tillman, Fred D.; Pool, Donald R.; Leake, Stanley A.

    2015-01-01

    Simulating groundwater flow in basin-fill aquifers of the semiarid southwestern United States commonly requires decisions about how to distribute aquifer recharge. Precipitation can recharge basin-fill aquifers by direct infiltration and transport through faults and fractures in the high-elevation areas, by flowing overland through high-elevation areas to infiltrate at basin-fill margins along mountain fronts, by flowing overland to infiltrate along ephemeral channels that often traverse basins in the area, or by some combination of these processes. The importance of accurately simulating recharge distributions is a current topic of discussion among hydrologists and water managers in the region, but no comparative study has been performed to analyze the effects of different recharge distributions on groundwater simulations. This study investigates the importance of the distribution of aquifer recharge in simulating regional groundwater flow in basin-fill aquifers by calibrating a groundwater-flow model to four different recharge distributions, all with the same total amount of recharge. Similarities are seen in results from steady-state models for optimized hydraulic conductivity values, fit of simulated to observed hydraulic heads, and composite scaled sensitivities of conductivity parameter zones. Transient simulations with hypothetical storage properties and pumping rates produce similar capture rates and storage change results, but differences are noted in the rate of drawdown at some well locations owing to the differences in optimized hydraulic conductivity. Depending on whether the purpose of the groundwater model is to simulate changes in groundwater levels or changes in storage and capture, the distribution of aquifer recharge may or may not be of primary importance.

  11. A mechanistic model for mercury capture with in situ-generated titania particles: role of water vapor.

    PubMed

    Rodríguez, Sylian; Almquist, Catherine; Lee, Tai Gyu; Furuuchi, Masami; Hedrick, Elizabeth; Biswas, Pratim

    2004-02-01

    A mechanistic model to predict the capture of gas-phase mercury (Hg) species using in situ-generated titania nanosize particles activated by UV irradiation is developed. The model is an extension of a recently reported model for photochemical reactions by Almquist and Biswas that accounts for the rates of electron-hole pair generation, the adsorption of the compound to be oxidized, and the adsorption of water vapor. The role of water vapor in the removal efficiency of Hg was investigated to evaluate the rates of Hg oxidation at different water vapor concentrations. As the water vapor concentration is increased, more hydroxyl radical species are generated on the surface of the titania particle, increasing the number of active sites for the photooxidation and capture of Hg. At very high water vapor concentrations, competitive adsorption is expected to be important and reduce the number of sites available for photooxidation of Hg. The predictions of the developed phenomenological model agreed well with the measured Hg oxidation rates in this study and with the data on oxidation of organic compounds reported in the literature.
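    The qualitative water-vapor behavior described (promotion at low concentrations, competitive inhibition at high concentrations) is characteristic of Langmuir-Hinshelwood kinetics. The sketch below uses a generic rate expression of that form to reproduce the rise-then-fall trend; it is not the authors' actual model, and all constants are invented.

```python
# Generic Langmuir-Hinshelwood-type rate: both Hg and water must adsorb,
# and both compete for the same surface sites (hence the squared term).
# Constants are illustrative, not fitted to any data.

def rate(c_hg, c_w, k=1.0, K_hg=2.0, K_w=0.5):
    coverage = (1.0 + K_hg * c_hg + K_w * c_w) ** 2
    return k * (K_hg * c_hg) * (K_w * c_w) / coverage

# Oxidation rate at fixed Hg level, sweeping water vapor concentration:
rates = [rate(0.1, c_w) for c_w in (0.1, 1.0, 10.0, 100.0)]
print([round(r, 4) for r in rates])  # rises with water, then falls again
```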

  12. Automated Transformation of CDISC ODM to OpenClinica.

    PubMed

    Gessner, Sophia; Storck, Michael; Hegselmann, Stefan; Dugas, Martin; Soto-Rey, Iñaki

    2017-01-01

    Due to the increasing use of electronic data capture systems for clinical research, interest in saving resources by automatically generating and reusing case report forms in clinical studies is growing. OpenClinica, an open-source electronic data capture system, enables the reuse of metadata only in its own Excel import template, hampering the reuse of metadata defined in other standard formats. One of these standard formats is the Operational Data Model, for metadata, administrative and clinical data in clinical studies. This work suggests a mapping from the Operational Data Model to OpenClinica and describes the implementation of a converter that automatically generates OpenClinica-conformant case report forms based on metadata in the Operational Data Model.
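    A minimal sketch of the kind of transformation such a converter performs: reading ItemDef metadata from a CDISC ODM document and flattening it into rows resembling OpenClinica's Excel import columns. The ODM snippet, column names, and type mapping here are simplified illustrations, not the full standard or the published tool.

```python
import xml.etree.ElementTree as ET

# Toy ODM metadata fragment (ODM 1.3 namespace); a real file carries far
# more structure (forms, item groups, code lists, measurement units).
ODM = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
  <Study OID="S1"><MetaDataVersion OID="M1">
    <ItemDef OID="I.AGE" Name="Age" DataType="integer"/>
    <ItemDef OID="I.SEX" Name="Sex" DataType="text"/>
  </MetaDataVersion></Study>
</ODM>"""

NS = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}
TYPE_MAP = {"integer": "INT", "text": "ST"}  # assumed OpenClinica type codes

root = ET.fromstring(ODM)
rows = [
    {"ITEM_NAME": item.get("Name"),
     "DATA_TYPE": TYPE_MAP.get(item.get("DataType"), "ST")}
    for item in root.findall(".//odm:ItemDef", NS)
]
print(rows)
```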

  13. Data integration for inference about spatial processes: A model-based approach to test and account for data inconsistency

    PubMed Central

    Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio

    2017-01-01

    Recently-developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency to first compare model estimates using different combinations of data, and then, by acknowledging data-type differences, evaluate parameter consistency. We demonstrate that opportunistic data lends itself naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and consequently, of population size. Our work supports the use of a single modeling framework to combine spatially-referenced data while also accounting for parameter consistency. PMID:28973034

  14. Estimating population density and connectivity of American mink using spatial capture-recapture.

    PubMed

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
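    The "ecological distance" idea above replaces straight-line distance with the cost of the cheapest path through the landscape. A minimal sketch using Dijkstra's algorithm on a small resistance grid; the grid and cost values are invented, and the published model embeds such distances inside the SCR encounter-probability function rather than computing them in isolation.

```python
import heapq

# Least-cost ("ecological") distance between two cells of a resistance
# grid, via Dijkstra. Moving into a cell costs that cell's value.

def least_cost_distance(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Water cells (cost 1) are cheap for mink to traverse; upland (cost 10) is not.
grid = [
    [1, 1, 10],
    [10, 1, 10],
    [10, 1, 1],
]
result = least_cost_distance(grid, (0, 0), (2, 2))
print(result)  # 4.0: the path follows the water corridor
```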

  15. Capture and dissociation in the complex-forming CH + H2 → CH2 + H, CH + H2 reactions.

    PubMed

    González, Miguel; Saracibar, Amaia; Garcia, Ernesto

    2011-02-28

    The rate coefficients for the capture process CH + H2 → CH3 and the reactions CH + H2 → CH2 + H (abstraction) and CH + H2 (exchange) have been calculated in the 200-800 K temperature range, using the quasiclassical trajectory (QCT) method and the most recent global potential energy surface. The reactions, which are of interest in combustion and in astrochemistry, proceed via the formation of long-lived CH3 collision complexes, in which the three H atoms become equivalent. QCT rate coefficients for capture are in quite good agreement with experiments. However, an important zero-point energy (ZPE) leakage problem occurs in the QCT calculations for the abstraction, exchange, and inelastic exit channels. To account for this issue, a pragmatic but accurate approach has been applied, leading to good agreement with experimental abstraction rate coefficients. Exchange rate coefficients have also been calculated using this approach. Finally, calculations employing QCT capture/phase space theory (PST) models have been carried out, leading to abstraction rate coefficients similar to those from the QCT and previous quantum mechanical capture/PST methods. This suggests that QCT capture/PST models are a good alternative to the QCT method for this and similar systems.

  16. Wind Turbine Modeling Overview for Control Engineers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, P. J.; Butterfield, S. B.

    2009-01-01

    Accurate modeling of wind turbine systems is of paramount importance for controls engineers seeking to reduce loads and optimize energy capture of operating turbines in the field. When designing control systems, engineers often employ a series of models developed in the different disciplines of wind energy. The limitations and coupling of each of these models is explained to highlight how these models might influence control system design.

  17. MAC/FAC: A Model of Similarity-Based Retrieval. Technical Report #59.

    ERIC Educational Resources Information Center

    Forbus, Kenneth D.; And Others

    A model of similarity-based retrieval is presented that attempts to capture these seemingly contradictory psychological phenomena: (1) structural commonalities are weighed more heavily than surface commonalities in soundness or similarity judgments (when both members are present); (2) superficial similarity is more important in retrieval from…

  18. Technical and Energy Performance of an Advanced, Aqueous Ammonia-Based CO2 Capture Technology for a 500 MW Coal-Fired Power Station.

    PubMed

    Li, Kangkang; Yu, Hai; Feron, Paul; Tade, Moses; Wardhaugh, Leigh

    2015-08-18

    Using a rate-based model, we assessed the technical feasibility and energy performance of an advanced aqueous-ammonia-based postcombustion capture process integrated with a coal-fired power station. The capture process consists of three identical process trains in parallel, each containing a CO2 capture unit, an NH3 recycling unit, a water separation unit, and a CO2 compressor. A sensitivity study of important parameters, such as NH3 concentration, lean CO2 loading, and stripper pressure, was performed to minimize the energy consumption involved in the CO2 capture process. Process modifications of the rich-split process and the interheating process were investigated to further reduce the solvent regeneration energy. The integrated capture system was then evaluated in terms of the mass balance and the energy consumption of each unit. The results show that our advanced ammonia process is technically feasible and energy-competitive, with a low net power-plant efficiency penalty of 7.7%.

  19. Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers

    PubMed Central

    Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.

    2012-01-01

    Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces a high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority in the presence of high capture heterogeneity with small sample sizes, or comparable estimation in the presence of low capture heterogeneity with a large sample size when compared to other existing models. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411 and 1367 animals were present in six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559

  20. Estimation of population size using open capture-recapture models

    USGS Publications Warehouse

    McDonald, T.L.; Amstrup, Steven C.

    2001-01-01

    One of the most important needs for wildlife managers is an accurate estimate of population size. Yet, for many species, including most marine species and large mammals, accurate and precise estimation of numbers is one of the most difficult of all research challenges. Open-population capture-recapture models have proven useful in many situations to estimate survival probabilities but typically have not been used to estimate population size. We show that open-population models can be used to estimate population size by developing a Horvitz-Thompson-type estimate of population size and an estimator of its variance. Our population size estimate keys on the probability of capture at each trap occasion and therefore is quite general and can be made a function of external covariates measured during the study. Here we define the estimator and investigate its bias, variance, and variance estimator via computer simulation. Computer simulations make extensive use of real data taken from a study of polar bears (Ursus maritimus) in the Beaufort Sea. The population size estimator is shown to be useful because it was negligibly biased in all situations studied. The variance estimator is shown to be useful in all situations, but caution is warranted in cases of extreme capture heterogeneity.
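    The Horvitz-Thompson idea described above is simple to state: each captured animal with capture probability p_i contributes 1/p_i to the population size estimate, so hard-to-catch animals stand in for more of their uncaptured peers. A toy sketch with invented capture probabilities:

```python
# Horvitz-Thompson-type population size estimate: sum of inverse capture
# probabilities over the animals actually captured. The probabilities
# below are invented; in the study they come from the fitted
# open-population capture-recapture model and its covariates.

def horvitz_thompson(capture_probs):
    return sum(1.0 / p for p in capture_probs)

# Five captured bears with unequal capture probabilities:
p = [0.5, 0.25, 0.2, 0.5, 0.25]
estimate = horvitz_thompson(p)
print(estimate)  # 2 + 4 + 5 + 2 + 4 = 17.0
```

The intuition matches the abstract's caution about extreme capture heterogeneity: a very small p_i contributes a very large, high-variance term to the sum.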

  1. Modeling misidentification errors in capture-recapture studies using photographic identification of evolving marks

    USGS Publications Warehouse

    Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.

    2009-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
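    The overestimation mechanism can be illustrated with a toy simulation: when a recaptured animal's mark is misread, it is recorded as a "new" individual (a ghost), which deflates the recapture count and inflates a closed-population estimate. The sketch below uses the Chapman form of the Lincoln-Petersen estimator; the population size, capture probability, and misidentification rate are invented, and this is not the authors' estimator.

```python
import random

# Toy two-occasion closed-population study with misidentification ghosts.

def lincoln_petersen(n1, n2, m):
    """Chapman form of the Lincoln-Petersen estimator."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def simulate(N, p, misid, rng):
    first = [i for i in range(N) if rng.random() < p]
    second = [i for i in range(N) if rng.random() < p]
    marked = set(first)
    recaptures = 0
    for i in second:
        # A marked animal whose evolving mark is misread becomes a ghost:
        # it still counts toward n2 but not toward the recaptures m.
        if i in marked and rng.random() > misid:
            recaptures += 1
    return lincoln_petersen(len(first), len(second), recaptures)

rng = random.Random(1)
runs = 2000
clean = sum(simulate(100, 0.4, 0.0, rng) for _ in range(runs)) / runs
noisy = sum(simulate(100, 0.4, 0.2, rng) for _ in range(runs)) / runs
print(round(clean), round(noisy))  # misidentification pushes the estimate up
```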

  2. Estimation of M1 scissors mode strength for deformed nuclei in the medium- to heavy-mass region by statistical Hauser-Feshbach model calculations

    NASA Astrophysics Data System (ADS)

    Mumpower, M. R.; Kawano, T.; Ullmann, J. L.; Krtička, M.; Sprouse, T. M.

    2017-08-01

    Radiative neutron capture is an important nuclear reaction whose accurate description is needed for many applications ranging from nuclear technology to nuclear astrophysics. The description of such a process relies on the Hauser-Feshbach theory, which requires the nuclear optical potential, level density, and γ-strength function as model inputs. It has recently been suggested that the M1 scissors mode may explain discrepancies between theoretical calculations and evaluated data. We explore statistical model calculations with the strength of the M1 scissors mode estimated to be dependent on the nuclear deformation of the compound system. We show that this form of the M1 scissors mode improves the theoretical description of evaluated data and the match to experiment in both the fission product and actinide regions. Since the scissors mode occurs in the range of a few keV to a few MeV, it may also impact the neutron capture cross sections of neutron-rich nuclei that participate in the rapid neutron capture process of nucleosynthesis. We comment on the possible impact on nucleosynthesis by evaluating neutron capture rates for neutron-rich nuclei with the M1 scissors mode active.

  3. A policy-capturing study of the simultaneous effects of fit with jobs, groups, and organizations.

    PubMed

    Kristof-Brown, Amy L; Jansen, Karen J; Colbert, Amy E

    2002-10-01

    The authors report an experimental policy-capturing study that examines the simultaneous impact of person-job (PJ), person-group (PG), and person-organization (PO) fit on work satisfaction. Using hierarchical linear modeling, the authors determined that all 3 types of fit had important, independent effects on satisfaction. Work experience explained systematic differences in how participants weighted each type of fit. Multiple interactions also showed participants used complex strategies for combining fit cues.

  4. Trap configuration and spacing influences parameter estimates in spatial capture-recapture models

    USGS Publications Warehouse

    Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew

    2014-01-01

    An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
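    The trap-spacing guidance above can be made concrete with the half-normal detection function commonly used in spatial capture-recapture, where the probability that a trap at distance d from an individual's activity center detects it is p(d) = p0 * exp(-d^2 / (2 * sigma^2)). The parameter values below are invented; the point is that at d = 2*sigma the probability has already fallen to p0 * e^-2, which is why spacing traps much beyond twice the spatial scale parameter leaves few detections to inform sigma.

```python
import math

# Half-normal detection model underlying spatial capture-recapture.

def detection_prob(d, p0, sigma):
    """Detection probability at distance d from the activity center."""
    return p0 * math.exp(-d * d / (2 * sigma * sigma))

p0, sigma = 0.3, 2.0  # illustrative: baseline detection, spatial scale (km)
for spacing in (1.0, 2 * sigma, 4 * sigma):
    print(spacing, round(detection_prob(spacing, p0, sigma), 4))
```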

  5. Stochastic molecular model of enzymatic hydrolysis of cellulose for ethanol production

    PubMed Central

    2013-01-01

    Background During cellulosic ethanol production, cellulose hydrolysis is achieved by the synergistic action of a cellulase enzyme complex consisting of multiple enzymes with different modes of action. Enzymatic hydrolysis of cellulose is one of the bottlenecks in the commercialization of the process due to low hydrolysis rates and the high cost of enzymes. A robust hydrolysis model that can predict the hydrolysis profile under various scenarios can act as an important forecasting tool to improve the hydrolysis process. However, multiple factors affecting hydrolysis, namely cellulose structure and complex enzyme-substrate interactions, make it difficult to develop mathematical kinetic models that can simulate hydrolysis in the presence of multiple enzymes with high fidelity. In this study, a comprehensive hydrolysis model based on a stochastic molecular modeling approach, in which each hydrolysis event is translated into a discrete event, is presented. The model captures the structural features of cellulose, enzyme properties (modes of action, synergism, inhibition), and, most importantly, the dynamic morphological changes in the substrate that directly affect the enzyme-substrate interactions during hydrolysis. Results Cellulose was modeled as a group of microfibrils consisting of elementary fibril bundles, where each elementary fibril was represented as a three-dimensional matrix of glucose molecules. Hydrolysis of cellulose was simulated using a Monte Carlo simulation technique. Cellulose hydrolysis results predicted by model simulations agree well with the experimental data from the literature. Coefficients of determination for model predictions and experimental values were in the range of 0.75 to 0.96 for Avicel hydrolysis by CBH I action. The model was able to simulate the synergistic action of multiple enzymes during hydrolysis. 
The model simulations captured the important experimental observations: the effects of structural properties, enzyme inhibition, and enzyme loading on hydrolysis and on the degree of synergism among enzymes. Conclusions The model was effective in capturing the dynamic behavior of cellulose hydrolysis during the action of individual as well as multiple cellulases. Simulations were in qualitative and quantitative agreement with experimental data. Several experimentally observed phenomena were simulated without the need for any additional assumptions or parameter changes, confirming the validity of using the stochastic molecular modeling approach to describe cellulose hydrolysis quantitatively and qualitatively. PMID:23638989
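    A minimal sketch in the spirit of the discrete-event approach described: an exo-acting cellulase (CBH-like) is modeled as cleaving cellobiose units from chain ends, one Monte Carlo event at a time. The chain lengths, event count, and single-enzyme kinetics are invented for illustration; the published model tracks a full three-dimensional glucose matrix, multiple enzyme classes, and inhibition.

```python
import random

# Each Monte Carlo step is one discrete hydrolysis event: the enzyme
# attacks a random chain end and releases one cellobiose (2 glucose units).

def simulate_hydrolysis(chains, events, rng):
    """chains: list of chain lengths in glucose units (modified in place)."""
    released = 0
    for _ in range(events):
        candidates = [i for i, n in enumerate(chains) if n >= 2]
        if not candidates:
            break  # substrate exhausted
        i = rng.choice(candidates)
        chains[i] -= 2
        released += 2
    return released

rng = random.Random(0)
chains = [30] * 10  # ten accessible surface chains of 30 glucose units
out = simulate_hydrolysis(chains, events=50, rng=rng)
print(out, sum(chains))  # 100 glucose units released, 200 remaining
```

Even this toy shows the approach's appeal: substrate morphology (the shrinking chain list) and enzyme action evolve together event by event, with no closed-form rate law required.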

  6. Evaluation of trap capture in a geographically closed population of brown treesnakes on Guam

    USGS Publications Warehouse

    Tyrrell, C.L.; Christy, M.T.; Rodda, G.H.; Yackel Adams, A.A.; Ellingson, A.R.; Savidge, J.A.; Dean-Bradley, K.; Bischof, R.

    2009-01-01

    1. Open population mark-recapture analysis of unbounded populations accommodates some types of closure violations (e.g. emigration, immigration). In contrast, closed population analysis of such populations readily allows estimation of capture heterogeneity and behavioural response, but requires crucial assumptions about closure (e.g. no permanent emigration) that are suspect and rarely tested empirically. 2. In 2003, we erected a double-sided barrier to prevent movement of snakes in or out of a 5-ha semi-forested study site in northern Guam. This geographically closed population of >100 snakes was monitored using a series of transects for visual searches and a 13 × 13 trapping array, with the aim of marking all snakes within the site. Forty-five marked snakes were also supplemented into the resident population to quantify the efficacy of our sampling methods. We used the program MARK to analyse trap captures (101 occasions), referenced to census data from visual surveys, and quantified heterogeneity, behavioural response, and size bias in trappability. Analytical inclusion of untrapped individuals greatly improved precision in the estimation of some covariate effects. 3. A novel discovery was that trap captures for individual snakes consisted of asynchronous bouts of high capture probability lasting about 7 days (ephemeral behavioural effect). There was modest behavioural response (trap happiness) and significant latent (unexplained) heterogeneity, with small influences on capture success of date, gender, residency status (translocated or not), and body condition. 4. Trapping was shown to be an effective tool for eradicating large brown treesnakes Boiga irregularis (>900 mm snout-vent length, SVL). 5. Synthesis and applications. Mark-recapture modelling is commonly used by ecological managers to estimate populations. However, existing models involve making assumptions about either closure violations or response to capture. 
Physical closure of our population on a landscape scale allowed us to determine the relative importance of covariates influencing capture probability (body size, trappability periods, and latent heterogeneity). This information was used to develop models in which different segments of the population could be assigned different probabilities of capture, and suggests that modelling of open populations should incorporate easily measured, but potentially overlooked, parameters such as body size or condition. © 2008 The Authors.

  7. Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions

    NASA Astrophysics Data System (ADS)

    Nabi, Jameel-Un; Böyükata, Mahmut

    2017-01-01

    The structure and the weak interaction mediated rates of the heavy waiting point (WP) nuclei 80Zr, 84Mo, 88Ru, 92Pd and 96Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of WP nuclei, energy rates of β-delayed protons and their emission probabilities were then calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with previous calculations. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. For rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for 80Zr; for the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the conclusion that electron capture rates form an integral part of the weak rates under rp-process conditions and play an important role in nuclear model calculations.

  8. Driving personalized medicine: capturing maximum net present value and optimal return on investment.

    PubMed

    Roth, Mollie; Keeling, Peter; Smart, Dave

    2010-01-01

    In order for personalized medicine to meet its potential future promise, a closer focus on the work being carried out today and the foundation it will provide for that future is imperative. While big picture perspectives of this still nascent shift in the drug-development process are important, it is more important that today's work on the first wave of targeted therapies is used to build specific benchmarking and financial models against which further such therapies may be more effectively developed. Today's drug-development teams need a robust tool to identify the exact drivers that will ensure the successful launch and rapid adoption of targeted therapies, and financial metrics to determine the appropriate resource levels to power those drivers. This special report will describe one such benchmarking and financial model that is specifically designed for the personalized medicine field and will explain how the use of this or similar models can help to capture the maximum net present value of targeted therapies and help to realize optimal return on investment.

  9. Assessing the Impact of Capture on Wild Animals: The Case Study of Chemical Immobilisation on Alpine Ibex.

    PubMed

    Brivio, Francesca; Grignolio, Stefano; Sica, Nicoletta; Cerise, Stefano; Bassano, Bruno

    2015-01-01

    The importance of capturing wild animals for research and conservation projects is widely shared. As this activity continues to become more common, the need to assess its negative effects increases so as to ensure ethical standards and the validity of research results. Increasing evidence has revealed that indirect (physiological and behavioural) effects of capture are as important as direct risks (death or injury) and that different capture methodologies can cause heterogeneous effects. We investigated the influence of chemical immobilisation on Alpine ibex (Capra ibex): during the days following the capture we collected data on spatial behaviour, activity levels of both males and females, and male hormone levels. Moreover, we recorded the reproductive status of each marked female during the breeding seasons of 15 years. Then, using several a priori models, we investigated the effects of capture, taking into account biological factors and changes in environmental conditions. Our results showed that chemical immobilisation did not affect either spatial behaviour (for both males and females) or male hormone levels, though both sexes showed reduced activity levels up to two days after the capture. The capture did not significantly affect the likelihood that a female would give birth in the following summer. Our findings highlighted the limited impact of chemical immobilisation on ibex biology, as we detected alteration of activity levels only immediately after capture compared to the following days (i.e., baseline situation). Hence, the comparison of our findings with previous research showed that our methodology is one of the least invasive procedures for capturing large mammals. Nonetheless, in areas characterised by high predator density, we suggest that released animals be carefully monitored for several hours after capture. Moreover, to avoid biased results, researchers should not use data collected during the first days after handling.

  10. Assessing the Impact of Capture on Wild Animals: The Case Study of Chemical Immobilisation on Alpine Ibex

    PubMed Central

    Brivio, Francesca; Grignolio, Stefano; Sica, Nicoletta; Cerise, Stefano; Bassano, Bruno

    2015-01-01

    The importance of capturing wild animals for research and conservation projects is widely shared. As this activity continues to become more common, the need to assess its negative effects increases so as to ensure ethical standards and the validity of research results. Increasing evidence has revealed that indirect (physiological and behavioural) effects of capture are as important as direct risks (death or injury) and that different capture methodologies can cause heterogeneous effects. We investigated the influence of chemical immobilisation on Alpine ibex (Capra ibex): during the days following the capture we collected data on spatial behaviour, activity levels of both males and females, and male hormone levels. Moreover, we recorded the reproductive status of each marked female during the breeding seasons of 15 years. Then, using several a priori models, we investigated the effects of capture, taking into account biological factors and changes in environmental conditions. Our results showed that chemical immobilisation did not affect either spatial behaviour (for both males and females) or male hormone levels, though both sexes showed reduced activity levels up to two days after the capture. The capture did not significantly affect the likelihood that a female would give birth in the following summer. Our findings highlighted the limited impact of chemical immobilisation on ibex biology, as we detected alteration of activity levels only immediately after capture compared to the following days (i.e., baseline situation). Hence, the comparison of our findings with previous research showed that our methodology is one of the least invasive procedures for capturing large mammals. Nonetheless, in areas characterised by high predator density, we suggest that released animals be carefully monitored for several hours after capture. Moreover, to avoid biased results, researchers should not use data collected during the first days after handling. PMID:26111118

  11. Statistical inference for capture-recapture experiments

    USGS Publications Warehouse

    Pollock, Kenneth H.; Nichols, James D.; Brownie, Cavell; Hines, James E.

    1990-01-01

    This monograph presents a detailed, practical exposition on the design, analysis, and interpretation of capture-recapture studies. The Lincoln-Petersen model (Chapter 2) and the closed population models (Chapter 3) are presented only briefly because these models have been covered in detail elsewhere. The Jolly-Seber open population model, which is central to the monograph, is covered in detail in Chapter 4. In Chapter 5 we consider the "enumeration" or "calendar of captures" approach, which is widely used by mammalogists and other vertebrate ecologists. We strongly recommend that it be abandoned in favor of analyses based on the Jolly-Seber model. We consider 2 restricted versions of the Jolly-Seber model. We believe the first of these, which allows losses (mortality or emigration) but not additions (births or immigration), is likely to be useful in practice. Another series of restrictive models requires the assumptions of a constant survival rate or a constant survival rate and a constant capture rate for the duration of the study. Detailed examples are given that illustrate the usefulness of these restrictions. There often can be a substantial gain in precision over Jolly-Seber estimates. In Chapter 5 we also consider 2 generalizations of the Jolly-Seber model. The temporary trap response model allows newly marked animals to have different survival and capture rates for 1 period. The other generalization is the cohort Jolly-Seber model. Ideally all animals would be marked as young, and age effects considered by using the Jolly-Seber model on each cohort separately. In Chapter 6 we present a detailed description of an age-dependent Jolly-Seber model, which can be used when 2 or more identifiable age classes are marked. In Chapter 7 we present a detailed description of the "robust" design. Under this design each primary period contains several secondary sampling periods.
We propose an estimation procedure based on closed and open population models that allows for heterogeneity and trap response of capture rates (hence the name robust design). We begin by considering just 1 age class and then extend to 2 age classes. When there are 2 age classes it is possible to distinguish immigrants and births. In Chapter 8 we give a detailed discussion of the design of capture-recapture studies. First, capture-recapture is compared to other possible sampling procedures. Next, the design of capture-recapture studies to minimize assumption violations is considered. Finally, we consider the precision of parameter estimates and present figures on proportional standard errors for a variety of initial parameter values to aid the biologist about to plan a study. A new program, JOLLY, has been written to accompany the material on the Jolly-Seber model (Chapter 4) and its extensions (Chapter 5). Another new program, JOLLYAGE, has been written for a special case of the age-dependent model (Chapter 6) where there are only 2 age classes. In Chapter 9 a brief description of the different versions of the 2 programs is given. Chapter 10 gives a brief description of some alternative approaches that were not considered in this monograph. We believe that an excellent overall view of capture-recapture models may be obtained by reading the monograph by White et al. (1982) emphasizing closed models and then reading this monograph where we concentrate on open models. The important recent monograph by Burnham et al. (1987) could then be read if there were interest in the comparison of different populations.
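
The Lincoln-Petersen model of Chapter 2 reduces to a one-line estimator. A hedged sketch follows, including Chapman's bias-corrected variant; the counts are hypothetical, and this is not the monograph's JOLLY or JOLLYAGE software.

```python
# Illustrative only: the Lincoln-Petersen abundance estimator and
# Chapman's bias-corrected variant. Counts below are hypothetical.

def lincoln_petersen(n1: int, n2: int, m2: int) -> float:
    """n1 animals marked in sample 1, n2 caught in sample 2,
    m2 of the second sample already marked."""
    if m2 == 0:
        raise ValueError("no recaptures: estimate undefined")
    return n1 * n2 / m2

def chapman(n1: int, n2: int, m2: int) -> float:
    """Bias-corrected variant, defined even when m2 == 0."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

print(lincoln_petersen(100, 80, 20))   # 400.0
print(round(chapman(100, 80, 20), 1))  # 388.6
```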

  12. Investigation of magnetic nanoparticle targeting in a simplified model of small vessel aneurysm

    NASA Astrophysics Data System (ADS)

    Mirzababaei, S. N.; Gorji, Tahereh B.; Baou, M.; Gorji-Bandpy, M.; Fatouraee, Nasser

    2017-03-01

    An in simulacra study was conducted to investigate the capture efficiency (CE) of magnetic nanoparticles (MNPs) in an aneurysm model, under the effect of a bipolar permanent magnetic system positioned in the vicinity of the model vessel. The bipolar magnetic system with an active space of 9 cm was designed by FEMM software. The MNPs were magnetite nanoparticles synthesized by the hydrothermal method and characterized by X-ray diffraction, Fourier transform infrared spectroscopy, scanning electron microscopy, and magnetometer measurements. Ferrofluid velocity, magnetic field strength, and aneurysm volume all proved to be important parameters that affect the capture of MNPs. Overall, the results of this in simulacra study confirmed the effectiveness of magnetic targeting for possible aneurysm embolization.

  13. Hydrodynamic capture of microswimmers into sphere-bound orbits.

    PubMed

    Takagi, Daisuke; Palacci, Jérémie; Braunschweig, Adam B; Shelley, Michael J; Zhang, Jun

    2014-03-21

    Self-propelled particles can exhibit surprising non-equilibrium behaviors, and how they interact with obstacles or boundaries remains an important open problem. Here we show that chemically propelled micro-rods can be captured, with little change in their speed, into close orbits around solid spheres resting on or near a horizontal plane. We show that this interaction between sphere and particle is short-range, occurring even for spheres smaller than the particle length, and for a variety of sphere materials. We consider a simple model, based on lubrication theory, of a force- and torque-free swimmer driven by a surface slip (the phoretic propulsion mechanism) and moving near a solid surface. The model demonstrates capture, or movement towards the surface, and yields speeds independent of distance. This study reveals the crucial aspects of activity-driven interactions of self-propelled particles with passive objects, and brings into question the use of colloidal tracers as probes of active matter.

  14. The effect of modeled recharge distribution on simulated groundwater availability and capture.

    PubMed

    Tillman, F D; Pool, D R; Leake, S A

    2015-01-01

    Simulating groundwater flow in basin-fill aquifers of the semiarid southwestern United States commonly requires decisions about how to distribute aquifer recharge. Precipitation can recharge basin-fill aquifers by direct infiltration and transport through faults and fractures in the high-elevation areas, by flowing overland through high-elevation areas to infiltrate at basin-fill margins along mountain fronts, by flowing overland to infiltrate along ephemeral channels that often traverse basins in the area, or by some combination of these processes. The importance of accurately simulating recharge distributions is a current topic of discussion among hydrologists and water managers in the region, but no comparative study has been performed to analyze the effects of different recharge distributions on groundwater simulations. This study investigates the importance of the distribution of aquifer recharge in simulating regional groundwater flow in basin-fill aquifers by calibrating a groundwater-flow model to four different recharge distributions, all with the same total amount of recharge. Similarities are seen in results from steady-state models for optimized hydraulic conductivity values, fit of simulated to observed hydraulic heads, and composite scaled sensitivities of conductivity parameter zones. Transient simulations with hypothetical storage properties and pumping rates produce similar capture rates and storage change results, but differences are noted in the rate of drawdown at some well locations owing to the differences in optimized hydraulic conductivity. Depending on whether the purpose of the groundwater model is to simulate changes in groundwater levels or changes in storage and capture, the distribution of aquifer recharge may or may not be of primary importance. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  15. Towards an Integrated Conceptual Model of International Student Adjustment and Adaptation

    ERIC Educational Resources Information Center

    Schartner, Alina; Young, Tony Johnstone

    2016-01-01

    Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…

  16. Transverse thermal conductivity of porous materials made from aligned nano- and microcylindrical pores

    NASA Astrophysics Data System (ADS)

    Prasher, Ravi

    2006-09-01

    Nanoporous and microporous materials made from aligned cylindrical pores play important roles in present technologies and will play even bigger roles in future technologies. The insight into the phonon thermal conductivity of these materials is important and relevant in many technologies and applications. Since the mean free path of phonons can be comparable to the pore size and interpore distance, diffusion-approximation based effective medium models cannot be used to predict the thermal conductivity of these materials. Strictly speaking, the Boltzmann transport equation (BTE) must be solved to capture the ballistic nature of thermal transport; however, solving BTE in such a complex network of pores is impractical. As an alternative, we propose an approximate ballistic-diffusive microscopic effective medium model for predicting the thermal conductivity of phonons in two-dimensional nanoporous and microporous materials made from aligned cylindrical pores. The model captures the size effects due to the pore diameter and the interpore distance and reduces to diffusion-approximation based models for macroporous materials. The results are in good agreement with experimental data.
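
This is not the model proposed in the abstract, only a generic sketch of the idea it describes: graft a ballistic size effect onto a diffusive effective-medium estimate. Here Matthiessen's rule caps the phonon mean free path at the interpore spacing, and a 2D dilution factor handles porosity; all parameter values are hypothetical.

```python
# Generic ballistic-diffusive sketch (not Prasher's actual model):
# boundary scattering shortens the mean free path via Matthiessen's rule,
# then a 2D Maxwell-Garnett-type factor accounts for porosity.

def k_effective(k_bulk: float, mfp_bulk: float, spacing: float, porosity: float) -> float:
    """Transverse thermal conductivity (W/m-K) of a 2D aligned-pore solid."""
    # Matthiessen's rule: 1/mfp_eff = 1/mfp_bulk + 1/spacing; since k scales
    # with the mean free path, this gives k_bulk / (1 + mfp_bulk / spacing).
    k_size = k_bulk / (1.0 + mfp_bulk / spacing)
    # 2D dilution factor for cylindrical pores transverse to the pore axis.
    return k_size * (1.0 - porosity) / (1.0 + porosity)

print(k_effective(150.0, 100e-9, 10e-9, 0.3))  # strong size effect
print(k_effective(150.0, 100e-9, 10e-6, 0.3))  # near-diffusive limit
```

In the macroporous limit (spacing much larger than the mean free path) the size term vanishes and the expression reduces to the diffusion-based effective-medium result, matching the behavior the abstract describes.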

  17. Estimating breeding proportions and testing hypotheses about costs of reproduction with capture-recapture data

    USGS Publications Warehouse

    Nichols, James D.; Hines, James E.; Pollock, Kenneth H.; Hinz, Robert L.; Link, William A.

    1994-01-01

    The proportion of animals in a population that breeds is an important determinant of population growth rate. Usual estimates of this quantity from field sampling data assume that the probability of appearing in the capture or count statistic is the same for animals that do and do not breed. A similar assumption is required by most existing methods used to test ecologically interesting hypotheses about reproductive costs using field sampling data. However, in many field sampling situations breeding and nonbreeding animals are likely to exhibit different probabilities of being seen or caught. In this paper, we propose the use of multistate capture-recapture models for these estimation and testing problems. This methodology permits a formal test of the hypothesis of equal capture/sighting probabilities for breeding and nonbreeding individuals. Two estimators of breeding proportion (and associated standard errors) are presented, one for the case of equal capture probabilities and one for the case of unequal capture probabilities. The multistate modeling framework also yields formal tests of hypotheses about reproductive costs to future reproduction or survival or both fitness components. The general methodology is illustrated using capture-recapture data on female meadow voles, Microtus pennsylvanicus. Resulting estimates of the proportion of reproductively active females showed strong seasonal variation, as expected, with low breeding proportions in midwinter. We found no evidence of reproductive costs extracted in subsequent survival or reproduction. We believe that this methodological framework has wide application to problems in animal ecology concerning breeding proportions and phenotypic reproductive costs.
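
The unequal-capture-probability estimator described above amounts to inflating each observed count by its estimated capture probability before taking the ratio. A minimal sketch, assuming point estimates of the two capture probabilities are available; all numbers are hypothetical, not the meadow vole data.

```python
# Sketch of the unequal-capture-probability logic: divide each count by
# its capture probability to estimate group sizes, then take the ratio.
# p_b and p_n would come from the multistate model; numbers are invented.

def breeding_proportion(n_b: int, n_n: int, p_b: float, p_n: float) -> float:
    nb_hat = n_b / p_b  # estimated number of breeders in the population
    nn_hat = n_n / p_n  # estimated number of nonbreeders
    return nb_hat / (nb_hat + nn_hat)

# Breeders here are twice as catchable, so the naive ratio 60/100 = 0.6
# overstates the true breeding proportion:
print(round(breeding_proportion(60, 40, 0.6, 0.3), 3))  # 0.429
```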

  18. An optimization model for carbon capture & storage/utilization vs. carbon trading: A case study of fossil-fired power plants in Turkey.

    PubMed

    Ağralı, Semra; Üçtuğ, Fehmi Görkem; Türkmen, Burçin Atılgan

    2018-06-01

    We consider fossil-fired power plants that operate in an environment where a cap and trade system is in operation. These plants need to choose between carbon capture and storage (CCS), carbon capture and utilization (CCU), or carbon trading in order to obey emissions limits enforced by the government. We develop a mixed-integer programming model that decides on the capacities of carbon capture units, if it is optimal to install them, the transportation network that needs to be built for transporting the carbon captured, and the locations of storage sites, if the model decides to build them. Main restrictions on the system are the minimum and maximum capacities of the different parts of the pipeline network, the amount of carbon that can be sold to companies for utilization, and the capacities on the storage sites. Under these restrictions, the model aims to minimize the net present value of the sum of the costs associated with installation and operation of the carbon capture unit and the transportation of carbon, the storage cost in case of CCS, the cost (or revenue) that results from the emissions trading system, and finally the negative revenue of selling the carbon to other entities for utilization. We implement the model on General Algebraic Modeling System (GAMS) by using data associated with two coal-fired power plants located in different regions of Turkey. We choose enhanced oil recovery (EOR) as the process in which carbon would be utilized. The results show that CCU is preferable to CCS as long as there is sufficient demand in the EOR market. The distance between the location of emission and the location of utilization/storage, and the capacity limits on the pipes, are important factors in deciding between carbon capture and carbon trading. At carbon prices over $15/ton, carbon capture becomes preferable to carbon trading.
These results show that as far as Turkey is concerned, CCU should be prioritized as a means of reducing nation-wide carbon emissions in an environmentally and economically rewarding manner. The model developed in this study is generic, and it can be applied to any industry at any location, as long as the required inputs are available. Copyright © 2018 Elsevier Ltd. All rights reserved.
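
The capture-versus-trading decision reported above can be illustrated with a toy breakeven comparison. The $15/ton threshold comes from the abstract, but the flat per-ton capture cost is a hypothetical stand-in for the model's full net-present-value calculation, not data from the study.

```python
# Toy breakeven logic only: at carbon prices above the per-ton cost of
# capture, capturing beats buying allowances. The capture cost here is a
# placeholder for the mixed-integer model's NPV of capture + transport +
# storage/utilization.

def cheaper_option(carbon_price: float, capture_cost_per_ton: float = 15.0) -> str:
    """Pick the lower cost per ton of CO2 emitted: trade or capture."""
    return "capture" if carbon_price > capture_cost_per_ton else "trading"

for price in (10.0, 15.0, 20.0):
    print(price, cheaper_option(price))
```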

  19. Argon Bubble Transport and Capture in Continuous Casting with an External Magnetic Field Using GPU-Based Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Jin, Kai

    Continuous casting produces over 95% of steel in the world today, hence even small improvements to this important industrial process can have large economic impact. In the continuous casting of steel, argon gas is usually injected at the slide gate or stopper rod to prevent clogging, but entrapped bubbles may cause defects in the final product. Many defects in this process are related to the transient fluid flow in the mold region of the caster. An electromagnetic braking (EMBr) device is often used at high casting speeds to modify the mold flow and reduce surface velocity and fluctuations. This work studies the physics of the continuous casting process, including the effects of EMBr on the fluid flow in the mold region and the transport and capture of bubbles during solidification. A computationally efficient Reynolds-averaged Navier-Stokes (RANS) model and a high fidelity Large Eddy Simulation (LES) model are used to understand the motion of the molten steel flow. A general purpose multi-GPU Navier-Stokes solver, CUFLOW, is developed. A Coherent-Structure Smagorinsky LES model is implemented to model the turbulent flow. A two-way coupled Lagrangian particle tracking model is added to track the motion of argon bubbles. A particle/bubble capture model based on force balance at dendrite tips is validated and used to study the capture of argon bubbles by the solidifying steel shell. To investigate the effects of EMBr on the turbulent molten steel flow and bubble transport, an electrical potential method is implemented to solve the magnetohydrodynamics equations. Volume of Fluid (VOF) simulations are carried out to understand the additional resistance force on moving argon bubbles caused by adding a transverse magnetic field. A modified drag coefficient is extrapolated from the results and used in the two-way coupled Eulerian-Lagrangian model to predict the argon bubble transport in a caster with EMBr.
A hook capture model is developed to understand the effects of hooks on argon bubble capture.

  20. ELECTRON-CAPTURE AND β-DECAY RATES FOR sd-SHELL NUCLEI IN STELLAR ENVIRONMENTS RELEVANT TO HIGH-DENSITY O–NE–MG CORES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Toshio; Toki, Hiroshi; Nomoto, Ken’ichi, E-mail: suzuki@phys.chs.nihon-u.ac.jp

    Electron-capture and β-decay rates for nuclear pairs in the sd-shell are evaluated at high densities and high temperatures relevant to the final evolution of electron-degenerate O–Ne–Mg cores of stars with initial masses of 8–10 M{sub ⊙}. Electron capture induces a rapid contraction of the electron-degenerate O–Ne–Mg core. The outcome of rapid contraction depends on the evolutionary changes in the central density and temperature, which are determined by the competing processes of contraction, cooling, and heating. The fate of the stars is determined by these competitions, whether they end up with electron-capture supernovae or Fe core-collapse supernovae. Since the competing processes are induced by electron capture and β-decay, accurate weak rates are crucially important. The rates are obtained for pairs with A = 20, 23, 24, 25, and 27 by shell-model calculations in the sd-shell with the USDB Hamiltonian. Effects of Coulomb corrections on the rates are evaluated. The rates for pairs with A = 23 and 25 are important for nuclear Urca processes that determine the cooling rate of the O–Ne–Mg core, while those for pairs with A = 20 and 24 are important for the core contraction and heat generation rates in the core. We provide these nuclear rates at stellar environments in tables with fine enough meshes at various densities and temperatures for studies of astrophysical processes sensitive to the rates. In particular, the accurate rate tables are crucially important for the final fates of not only O–Ne–Mg cores but also a wider range of stars, such as C–O cores of lower-mass stars.

  1. Technical and economic evaluation of biogas capture and treatment for the Piedras Blancas landfill in Córdoba, Argentina.

    PubMed

    Francisca, Franco Matías; Montoro, Marcos Alexis; Glatstein, Daniel Alejandro

    2017-05-01

    Landfill gas (LFG) management is one of the most important tasks for landfill operation and closure because of its impact on global warming. The aim of this work is to present a case history evaluating an LFG capture and treatment system for the present landfill facility in Córdoba, Argentina. The results may be relevant for many developing countries around the world where landfill gas is not being properly managed. LFG generation is evaluated by modeling gas production with the zero-order model, the Landfill Gas Emissions Model (LandGEM; U.S. Environmental Protection Agency [EPA]), the Scholl Canyon model, and the triangular model. Variability in waste properties, weather, and landfill management conditions is analyzed in order to evaluate the feasibility of implementing different treatment systems. The results show the advantages of capturing and treating LFG in order to reduce the emissions of gases responsible for global warming and to determine the revenue rate needed for the project's financial requirements. This particular project reduces by half the emission of equivalent tons of carbon dioxide (CO2) compared with the situation where there is no gas treatment. In addition, the study highlights the need for a change in electricity prices if the project is to be economically feasible in the current Argentinean electrical market. Methane has 21 times the greenhouse warming potential of carbon dioxide, so it is of great importance to manage landfill biogas emissions adequately. In addition, it is environmentally advantageous to use this gas as an alternative energy source, since doing so avoids methane emissions while reducing fossil fuel consumption and the associated carbon dioxide emissions. The analysis indicated that biogas capture with energy generation yields roughly one-third of the equivalent carbon dioxide emissions; however, a change in Argentinean electricity market fees is required to guarantee the financial feasibility of the project.
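
Several of the models named above share a first-order decay structure. A LandGEM-style sketch follows; the decay constant `k`, methane generation potential `L0`, and the waste acceptance schedule are placeholders, not values calibrated for the Piedras Blancas landfill.

```python
import math

# First-order decay sketch of landfill methane generation: each year's
# accepted waste mass M_i contributes k * L0 * M_i * exp(-k * age_i).
# k (1/yr), L0 (m3 CH4 per Mg waste), and the schedule are hypothetical.

def methane_rate(year: int, waste_by_year: dict, k: float = 0.05, L0: float = 100.0) -> float:
    """Methane generation rate (m3/yr) in `year` from earlier waste."""
    q = 0.0
    for accept_year, mass in waste_by_year.items():
        age = year - accept_year
        if age > 0:  # waste starts generating the year after acceptance
            q += k * L0 * mass * math.exp(-k * age)
    return q

waste = {2000: 50000, 2001: 50000}  # Mg accepted per year
print(round(methane_rate(2005, waste)))
```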

  2. Modeling and optimal design of CO2 Direct Air Capture systems in large arrays

    NASA Astrophysics Data System (ADS)

    Sadri Irani, Samaneh; Luzzatto-Fegiz, Paolo

    2017-11-01

    As noted by the 2014 IPCC report, while the rise in atmospheric CO2 would be slowed by emissions reductions, removing atmospheric CO2 is an important part of possible paths to climate stabilization. Direct Air Capture of CO2 with chemicals (DAC) is one of several proposed carbon capture technologies. There is an ongoing debate on whether DAC is an economically viable approach to alleviate climate change. In addition, like all air capture strategies, DAC is strongly constrained by the net-carbon problem, namely the need to control CO2 emissions associated with the capture process (for example, if DAC is not powered by renewables). Research to date has focused on the chemistry and economics of individual DAC devices. However, the fluid mechanics of their large-scale deployment has not been examined in the literature, to the best of our knowledge. In this presentation, we develop a model for flow through an array of DAC devices, varying their lateral extent and their separation. We build on a recent theory of canopy flows, introducing terms for CO2 entrainment into the array boundary layer, and transport into the farm. In addition, we examine the possibility of driving flow passively by wind, thereby reducing energy consumption. The optimal operational design is established considering the total cost, drag force, energy consumption and total CO2 capture.

  3. Selection based on the size of the black tie of the great tit may be reversed in urban habitats.

    PubMed

    Senar, Juan Carlos; Conroy, Michael J; Quesada, Javier; Mateos-Gonzalez, Fernando

    2014-07-01

    A standard approach to model how selection shapes phenotypic traits is the analysis of capture-recapture data relating trait variation to survival. Divergent selection, however, has never been analyzed by the capture-recapture approach. Most reported examples of differences between urban and nonurban animals reflect behavioral plasticity rather than divergent selection. The aim of this paper was to use a capture-recapture approach to test the hypothesis that divergent selection can also drive local adaptation in urban habitats. We focused on the size of the black breast stripe (i.e., tie width) of the great tit (Parus major), a sexual ornament used in mate choice. Urban great tits display smaller tie sizes than forest birds. Because tie size is mostly genetically determined, it could potentially respond to selection. We analyzed capture/recapture data of male great tits in Barcelona city (N = 171) and in a nearby (7 km) forest (N = 324) from 1992 to 2008 using MARK. When modelling recapture rate, we found it to be strongly influenced by tie width, so that both for urban and forest habitats, birds with smaller ties were more trap-shy and more cautious than their larger-tied counterparts. When modelling survival, we found that survival prospects in forest great tits increased the larger their tie width (i.e., directional positive selection), but the reverse was found for urban birds, with individuals displaying smaller ties showing higher survival (i.e., directional negative selection). As melanin-based tie size seems to be related to personality, and both are heritable, results may be explained by cautious personalities being favored in urban environments. More importantly, our results show that divergent selection can be an important mechanism in local adaptation to urban habitats and that capture-recapture is a powerful tool to test it.
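
The divergent selection pattern reported above, survival increasing with tie width in the forest but decreasing in the city, can be illustrated with a logit-scale survival model whose slope flips sign between habitats. The coefficients below are invented for illustration, not MARK estimates from the study.

```python
import math

# Hypothetical illustration of divergent directional selection: apparent
# survival on a logit scale with opposite-sign slopes on tie width in the
# two habitats. Intercept, slopes, and the 20 mm centering are invented.

def survival(tie_width_mm: float, habitat: str) -> float:
    slopes = {"forest": 0.15, "urban": -0.15}  # opposite selection gradients
    logit = 0.2 + slopes[habitat] * (tie_width_mm - 20.0)
    return 1.0 / (1.0 + math.exp(-logit))

for w in (15.0, 20.0, 25.0):
    print(w, round(survival(w, "forest"), 2), round(survival(w, "urban"), 2))
```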

  4. High-fidelity simulation of transcutaneous cardiac pacing: characteristics and limitations of available high-fidelity simulators, and description of an alternative two-mannequin model.

    PubMed

    Robitaille, Arnaud; Perron, Roger; Germain, Jean-François; Tanoubi, Issam; Georgescu, Mihai

    2015-04-01

    Transcutaneous cardiac pacing (TCP) is a potentially lifesaving technique that is part of the recommended treatment for symptomatic bradycardia. Transcutaneous cardiac pacing, however, is used uncommonly, and its successful application is not straightforward. Simulation could, therefore, play an important role in the teaching and assessment of TCP competence. However, even the highest-fidelity mannequins available on the market have important shortcomings, which limit the potential of simulation. Six criteria defining clinical competency in TCP were established and used as a starting point in the creation of an improved TCP simulator. The goal was a model that could be used to assess experienced clinicians, an objective that justifies the additional effort required by the increased fidelity. The proposed 2-mannequin model (TMM) combines a highly modified Human Patient Simulator with a SimMan 3G, the latter being used solely to provide the electrocardiography (ECG) tracing. The TMM improves the potential of simulation to assess experienced clinicians (1) by reproducing key features of TCP, like using the same multifunctional pacing electrodes used clinically, allowing dual ECG monitoring, and responding with upper body twitching when stimulated, but equally importantly (2) by reproducing key pitfalls of the technique, like allowing pacing electrode misplacement and reproducing false signs of ventricular capture, commonly, but erroneously, used clinically to establish that effective pacing has been achieved (like body twitching, electrical artifact on the ECG, and electrical capture without ventricular capture). The proposed TMM uses a novel combination of 2 high-fidelity mannequins to improve TCP simulation until upgraded mannequins become commercially available.

  5. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation approach is proposed for quantifying the confidence level of mass transfer predictions from a computational fluid dynamics (CFD) model, in which solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to parallel bench-scale experimental data. Two unit problems with increasing levels of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems that separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e., CO2 mass transfer across a falling monoethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Parallel bench-scale experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, namely Henry's constant and gas diffusivity, are calibrated to produce posterior distributions, which will be used as input for the second unit problem addressing the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
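    As a toy illustration of the Bayesian calibration step described above, the sketch below fits a single transport-like parameter h on a grid posterior. The linear forward model, noise level, and parameter range are all invented stand-ins for the paper's CFD model and its calibrated quantities:

```python
import numpy as np

# Toy Bayesian calibration: recover a transport-like parameter h from noisy
# synthetic "experimental" data via a grid posterior with a flat prior.
# The linear forward model h*x is an invented stand-in for the CFD simulation.
rng = np.random.default_rng(0)
h_true, sigma = 2.0, 0.1
x = np.linspace(0.5, 1.5, 20)                 # operating conditions
y_obs = h_true * x + rng.normal(0, sigma, x.size)

h_grid = np.linspace(1.0, 3.0, 401)
log_post = np.array([-0.5 * np.sum((y_obs - h * x) ** 2) / sigma**2
                     for h in h_grid])        # Gaussian likelihood, flat prior
post = np.exp(log_post - log_post.max())
post /= post.sum()                            # normalized posterior on the grid
h_map = h_grid[post.argmax()]
print(round(h_map, 2))                        # near the true value 2.0
```

The resulting posterior, rather than a point estimate, is what would be propagated as input uncertainty to the next unit problem.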

  6. MPCV Exercise Operational Volume Analysis

    NASA Technical Reports Server (NTRS)

    Godfrey, A.; Humphreys, B.; Funk, J.; Perusek, G.; Lewandowski, B. E.

    2017-01-01

    In order to minimize the loss of bone and muscle mass during spaceflight, the Multi-purpose Crew Vehicle (MPCV) will include an exercise device and enough free space within the cabin for astronauts to use the device effectively. The NASA Digital Astronaut Project (DAP) has been tasked with using computational modeling to aid in determining whether or not the available operational volume is sufficient for in-flight exercise. Motion capture data were acquired using a 12-camera Smart DX system (BTS Bioengineering, Brooklyn, NY), while exercisers performed 9 resistive exercises without volume restrictions in a 1g environment. Data were collected from two male subjects, one being in the 99th percentile of height and the other in the 50th percentile of height, using between 25 and 60 motion capture markers. Motion capture data were also recorded as a third subject, also near the 50th percentile in height, performed aerobic rowing during a parabolic flight. A motion capture system and algorithms developed previously and presented at last year's HRP-IWS were utilized to collect and process the data from the parabolic flight [1]. These motions were applied to a scaled version of a biomechanical model within the biomechanical modeling software OpenSim [2], and the volume sweeps of the motions were visually assessed against an imported CAD model of the operational volume. Further numerical analysis was performed using Matlab (Mathworks, Natick, MA) and the OpenSim API. This analysis determined the location of every marker in space over the duration of the exercise motion, and the distance of each marker to the nearest surface of the volume. Containment of the exercise motions within the operational volume was determined on a per-exercise and per-subject basis. The orientation of the exerciser and the angle of the footplate were two important factors upon which containment was dependent. 
Regions where the exercise motion exceeds the bounds of the operational volume have been identified by determining which markers from the motion capture exceed the operational volume and by how much. A credibility assessment of this analysis was performed in accordance with NASA-STD-7009 prior to delivery to the MPCV program.
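    The per-marker containment check described above can be sketched as follows; the box-shaped operational volume and the marker trajectories are invented placeholders for the actual CAD model and motion capture data:

```python
import numpy as np

# Simplified containment analysis: the operational volume is reduced to an
# axis-aligned box (all dimensions invented); a real analysis would query a
# CAD surface mesh via the OpenSim API.
volume_min = np.array([-0.5, -0.5, 0.0])    # m
volume_max = np.array([0.5, 0.5, 2.0])      # m

def excursion(markers):
    """Distance of each marker outside the box (0 if contained)."""
    below = np.maximum(volume_min - markers, 0.0)
    above = np.maximum(markers - volume_max, 0.0)
    return np.linalg.norm(below + above, axis=-1)

# markers: (frames, n_markers, 3) trajectories from motion capture
markers = np.zeros((100, 30, 3))
markers[50, 0] = [0.7, 0.0, 1.0]            # one marker leaves the volume
d = excursion(markers)
contained = bool(np.all(d == 0.0))
print(contained, d.max())                   # reports the worst excursion
```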

  7. A Hybrid Multi-Scale Model of Crystal Plasticity for Handling Stress Concentrations

    DOE PAGES

    Sun, Shang; Ramazani, Ali; Sundararaghavan, Veera

    2017-09-04

    Microstructural effects become important at regions of stress concentrators such as notches, cracks and contact surfaces. A multiscale model is presented that efficiently captures microstructural details at such critical regions. The approach is based on a multiresolution mesh that includes an explicit microstructure representation at critical regions where stresses are localized. At regions farther away from the stress concentration, a reduced order model that statistically captures the effect of the microstructure is employed. The statistical model is based on a finite element representation of the orientation distribution function (ODF). As an illustrative example, we have applied the multiscaling method to compute the stress intensity factor K_I around the crack tip in a wedge-opening load specimen. The approach is verified with an analytical solution within the linear elasticity approximation and is then extended to allow modeling of microstructural effects on crack tip plasticity.

  8. A Hybrid Multi-Scale Model of Crystal Plasticity for Handling Stress Concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Shang; Ramazani, Ali; Sundararaghavan, Veera

    Microstructural effects become important at regions of stress concentrators such as notches, cracks and contact surfaces. A multiscale model is presented that efficiently captures microstructural details at such critical regions. The approach is based on a multiresolution mesh that includes an explicit microstructure representation at critical regions where stresses are localized. At regions farther away from the stress concentration, a reduced order model that statistically captures the effect of the microstructure is employed. The statistical model is based on a finite element representation of the orientation distribution function (ODF). As an illustrative example, we have applied the multiscaling method to compute the stress intensity factor K_I around the crack tip in a wedge-opening load specimen. The approach is verified with an analytical solution within the linear elasticity approximation and is then extended to allow modeling of microstructural effects on crack tip plasticity.

  9. Reliable Communication Models in Interdependent Critical Infrastructure Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun

    Modern critical infrastructure networks are becoming increasingly interdependent where the failures in one network may cascade to other dependent networks, causing severe widespread national-scale failures. A number of previous efforts have been made to analyze the resiliency and robustness of interdependent networks based on different models. However, communication network, which plays an important role in today's infrastructures to detect and handle failures, has attracted little attention in the interdependency studies, and no previous models have captured enough practical features in the critical infrastructure networks. In this paper, we study the interdependencies between communication network and other kinds of critical infrastructure networks with an aim to identify vulnerable components and design resilient communication networks. We propose several interdependency models that systematically capture various features and dynamics of failures spreading in critical infrastructure networks. We also discuss several research challenges in building reliable communication solutions to handle failures in these models.

  10. Estimation of M1 scissors mode strength for deformed nuclei in the medium- to heavy-mass region by statistical Hauser-Feshbach model calculations

    DOE PAGES

    Mumpower, Matthew Ryan; Kawano, Toshihiko; Ullmann, John Leonard; ...

    2017-08-17

    Radiative neutron capture is an important nuclear reaction whose accurate description is needed for many applications ranging from nuclear technology to nuclear astrophysics. The description of such a process relies on the Hauser-Feshbach theory which requires the nuclear optical potential, level density, and γ-strength function as model inputs. It has recently been suggested that the M1 scissors mode may explain discrepancies between theoretical calculations and evaluated data. We explore statistical model calculations with the strength of the M1 scissors mode estimated to be dependent on the nuclear deformation of the compound system. We show that the form of the M1 scissors mode improves the theoretical description of evaluated data and the match to experiment in both the fission product and actinide regions. Since the scissors mode occurs in the range of a few keV to a few MeV, it may also impact the neutron capture cross sections of neutron-rich nuclei that participate in the rapid neutron capture process of nucleosynthesis. As a result, we comment on the possible impact on nucleosynthesis by evaluating neutron capture rates for neutron-rich nuclei with the M1 scissors mode active.

  11. Indoor Modelling Benchmark for 3D Geometry Extraction

    NASA Astrophysics Data System (ADS)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance has laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments demand considerable operator time (and therefore cost). Automation is seen as a way to address this by reducing the user's workload, and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result, this paper presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each accompanied by an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  12. Neutron Capture Energies for Flux Normalization and Approximate Model for Gamma-Smeared Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Liu, Yuxuan

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) Virtual Environment for Reactor Applications (VERA) neutronics simulator MPACT has used a single recoverable fission energy for each fissionable nuclide, assuming that all recoverable energy comes only from the fission reaction, for which capture energy is merged with fission energy. This approach includes approximations and requires improvement by separating capture energy from the merged effective recoverable energy. This report documents the procedure to generate recoverable neutron capture energies and the development of a program called CapKappa to generate capture energies. Recoverable neutron capture energies have been generated by using CapKappa with the evaluated nuclear data file (ENDF)/B-7.0 and 7.1 cross section and decay libraries. The new capture kappas were compared to the current SCALE-6.2 and the CASMO-5 capture kappas. These new capture kappas have been incorporated into the Simplified AMPX 51- and 252-group libraries, and they can be used for the AMPX multigroup (MG) libraries and the SCALE code package. The CASL VERA neutronics simulator MPACT does not include a gamma transport capability, which limits its ability to explicitly estimate local energy deposition from fission, neutron and gamma slowing down, and capture. Since the mean free path of gamma rays is typically much longer than that of neutrons, and the total gamma energy is about 10% of the total energy, the gamma-smeared power distribution is different from the fission power distribution. Explicit local energy deposition through neutron and gamma transport calculations is important in multi-physics whole core simulation with thermal-hydraulic feedback. Therefore, the gamma transport capability should be incorporated into the CASL neutronics simulator MPACT. However, this task will be time-consuming, as it requires developing the neutron-induced gamma production and gamma cross section libraries. 
This study investigates an approximate model to estimate the gamma-smeared power distribution without performing any gamma transport calculation. A simple approximate gamma smearing model has been investigated based on the observations that pinwise gamma energy depositions are almost flat over a fuel assembly and that assembly-wise gamma energy deposition is proportional to kappa-fission energy deposition. The approximate gamma smearing model works well for single-assembly cases and can partly improve the gamma-smeared power distribution for the whole core model. Although the power distributions can be improved by the approximate gamma smearing model, explicitly obtaining local energy deposition remains an open issue. A new simple approach or a gamma transport/diffusion capability may need to be incorporated into MPACT to estimate local energy deposition for more robust multi-physics simulation.
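    A minimal sketch of an approximate gamma smearing model of this kind, assuming a fixed 10% gamma share of total energy deposited flat over each assembly; the pin powers and assembly layout below are illustrative, not from the report:

```python
import numpy as np

# Approximate gamma smearing: a fraction f_gamma of each pin's fission energy
# is redeposited uniformly ("flat") over that pin's assembly, per the two
# observations quoted in the abstract. Values are illustrative.
f_gamma = 0.10   # gamma share of total energy (~10% per the text)

def smear(pin_power, assembly_id):
    """pin_power: per-pin kappa-fission power; assembly_id: assembly index per pin."""
    smeared = (1.0 - f_gamma) * pin_power
    for a in np.unique(assembly_id):
        mask = assembly_id == a
        # flat gamma deposition: assembly gamma energy shared equally among pins
        smeared[mask] += f_gamma * pin_power[mask].sum() / mask.sum()
    return smeared

pins = np.array([1.2, 1.0, 0.8, 0.6])       # normalized pin powers
asm = np.array([0, 0, 1, 1])                # two assemblies of two pins each
out = smear(pins, asm)
print(out)                                  # peak pins flatten slightly
```

Total power is conserved; only its spatial distribution flattens within each assembly, which is the effect the approximate model is meant to capture.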

  13. The non-storm time corrugated upper thermosphere: What is beyond MSIS?

    NASA Astrophysics Data System (ADS)

    Liu, Huixin; Thayer, Jeff; Zhang, Yongliang; Lee, Woo Kyoung

    2017-06-01

    Observations in the recent decade have revealed many thermospheric density corrugations/perturbations under nonstorm conditions (Kp < 2). They are generally not captured by empirical models like Mass Spectrometer Incoherent Scatter (MSIS) but are operationally important for long-term orbital evolution of Low Earth Orbiting satellites and theoretically for coupling processes in the atmosphere-ionosphere system. We review these density corrugations by classifying them into three types which are driven respectively by the lower atmosphere, ionosphere, and solar wind/magnetosphere. Model capabilities in capturing these features are discussed. A summary table of these corrugations is included to provide a quick guide on their magnitudes, occurring latitude, local time, and season.

  14. Simulating spatial and temporally related fire weather

    Treesearch

    Isaac C. Grenfell; Mark Finney; Matt Jolly

    2010-01-01

    Use of fire behavior models has assumed an increasingly important role for managers of wildfire incidents to make strategic decisions. For fire risk assessments and danger rating at very large spatial scales, these models depend on fire weather variables or fire danger indices. Here, we describe a method to simulate fire weather at a national scale that captures the...

  15. Transforming Patient-Centered Care: Development of the Evidence Informed Decision Making through Engagement Model.

    PubMed

    Moore, Jennifer E; Titler, Marita G; Kane Low, Lisa; Dalton, Vanessa K; Sampselle, Carolyn M

    2015-01-01

    In response to the passage of the Affordable Care Act in the United States, clinicians and researchers are critically evaluating methods to engage patients in implementing evidence-based care to improve health outcomes. However, most models on implementation only target clinicians or health systems as the adopters of evidence. Patients are largely ignored in these models. A new implementation model that captures the complex but important role of patients in the uptake of evidence may be a critical missing link. Through a process of theory evaluation and development, we explore patient-centered concepts (patient activation and shared decision making) within an implementation model by mapping qualitative data from an elective induction of labor study to assess the model's ability to capture these key concepts. The process demonstrated that a new, patient-centered model for implementation is needed. In response, the Evidence Informed Decision Making through Engagement Model is presented. We conclude that, by fully integrating women into an implementation model, outcomes that are important to both the clinician and patient will improve. In the interest of providing evidence-based care to women during pregnancy and childbirth, it is essential that care is patient centered. The inclusion of concepts discussed in this article has the potential to extend beyond maternity care and influence other clinical areas. Utilizing the newly developed Evidence Informed Decision Making through Engagement Model provides a framework for utilizing evidence and translating it into practice while acknowledging the important role that women have in the process. Published by Elsevier Inc.

  16. Optimizing Aerobot Exploration of Venus

    NASA Astrophysics Data System (ADS)

    Ford, Kevin S.

    1997-03-01

    Venus Flyer Robot (VFR) is an aerobot: an autonomous balloon probe designed for remote exploration of Earth's sister planet in 2003. VFR's simple navigation and control system permits travel to virtually any location on Venus, but it can survive for only a limited duration in the harsh Venusian environment. To help address this limitation, we develop: (1) a global circulation model that captures the most important characteristics of the Venusian atmosphere; (2) a simple aerobot model that captures thermal restrictions faced by VFR at Venus; and (3) one exact and two heuristic algorithms that, using abstractions (1) and (2), construct routes making the best use of VFR's limited lifetime. We demonstrate this modeling by planning several small example missions and a prototypical mission that explores numerous interesting sites recently documented in the planetary geology literature.

  17. Optimizing Aerobot Exploration of Venus

    NASA Technical Reports Server (NTRS)

    Ford, Kevin S.

    1997-01-01

    Venus Flyer Robot (VFR) is an aerobot: an autonomous balloon probe designed for remote exploration of Earth's sister planet in 2003. VFR's simple navigation and control system permits travel to virtually any location on Venus, but it can survive for only a limited duration in the harsh Venusian environment. To help address this limitation, we develop: (1) a global circulation model that captures the most important characteristics of the Venusian atmosphere; (2) a simple aerobot model that captures thermal restrictions faced by VFR at Venus; and (3) one exact and two heuristic algorithms that, using abstractions (1) and (2), construct routes making the best use of VFR's limited lifetime. We demonstrate this modeling by planning several small example missions and a prototypical mission that explores numerous interesting sites recently documented in the planetary geology literature.

  18. Mitotic wavefronts mediated by mechanical signaling in early Drosophila embryos

    NASA Astrophysics Data System (ADS)

    Kang, Louis; Idema, Timon; Liu, Andrea; Lubensky, Tom

    2013-03-01

    Mitosis in the early Drosophila embryo demonstrates spatial and temporal correlations in the form of wavefronts that travel across the embryo in each cell cycle. This coordinated phenomenon requires a signaling mechanism, which we suggest is mechanical in origin. We have constructed a theoretical model that supports nonlinear wavefront propagation in a mechanically-excitable medium. Previously, we have shown that this model captures quantitatively the wavefront speed as it varies with cell cycle number, for reasonable values of the elastic moduli and damping coefficient of the medium. Now we show that our model also captures the displacements of cell nuclei in the embryo in response to the traveling wavefront. This new result further supports that mechanical signaling may play an important role in mediating mitotic wavefronts.

  19. Modeling spatial variation in avian survival and residency probabilities

    USGS Publications Warehouse

    Saracco, James F.; Royle, J. Andrew; DeSante, David F.; Gardner, Beth

    2010-01-01

    The importance of understanding spatial variation in processes driving animal population dynamics is widely recognized. Yet little attention has been paid to spatial modeling of vital rates. Here we describe a hierarchical spatial autoregressive model to provide spatially explicit year-specific estimates of apparent survival (phi) and residency (pi) probabilities from capture-recapture data. We apply the model to data collected on a declining bird species, Wood Thrush (Hylocichla mustelina), as part of a broad-scale bird-banding network, the Monitoring Avian Productivity and Survivorship (MAPS) program. The Wood Thrush analysis showed variability in both phi and pi among years and across space. Spatial heterogeneity in residency probability was particularly striking, suggesting the importance of understanding the role of transients in local populations. We found broad-scale spatial patterning in Wood Thrush phi and pi that lend insight into population trends and can direct conservation and research. The spatial model developed here represents a significant advance over approaches to investigating spatial pattern in vital rates that aggregate data at coarse spatial scales and do not explicitly incorporate spatial information in the model. Further development and application of hierarchical capture-recapture models offers the opportunity to more fully investigate spatiotemporal variation in the processes that drive population changes.
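    The residency/survival structure underlying such capture-recapture analyses can be illustrated with the cell probabilities of a single capture history. This is a bare-bones CJS-with-transients sketch, not the paper's hierarchical spatial model:

```python
def cjs_history_prob(history, phi, p, pi):
    """Probability of a post-first-capture history (list of 0/1) under a CJS
    model with residency probability pi, apparent survival phi, and recapture
    probability p. Transients leave after first capture, so only residents
    contribute survival and recapture terms."""
    T = len(history)
    # chi[t] = P(never seen after occasion t | resident alive at t)
    chi = [0.0] * (T + 1)
    chi[T] = 1.0
    for t in range(T - 1, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    if any(history):
        last = max(i for i, h in enumerate(history) if h)
        prob = 1.0
        for t in range(last + 1):
            prob *= phi * (p if history[t] else 1 - p)
        return pi * prob * chi[last + 1]   # only residents are resighted
    # all-zero history: transient, or a resident never recaptured
    return (1 - pi) + pi * chi[0]

# e.g. a bird recaptured in both following years (parameters illustrative)
print(cjs_history_prob([1, 1], phi=0.6, p=0.5, pi=0.8))
```

The probabilities over all possible histories sum to one, which is a quick sanity check on any such cell-probability formulation.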

  20. Microporoelastic Modeling of Organic-Rich Shales

    NASA Astrophysics Data System (ADS)

    Khosh Sokhan Monfared, S.; Abedi, S.; Ulm, F. J.

    2014-12-01

    Organic-rich shale is an extremely complex, naturally occurring geo-composite. The heterogeneous nature of organic-rich shale and its anisotropic behavior pose grand challenges for characterization, modeling and engineering design. The intricacy of organic-rich shale, in the context of its mechanical and poromechanical properties, originates in the presence of organic/inorganic constituents and their interfaces as well as the occurrence of porosity and elastic anisotropy, at multiple length scales. To capture the first-order mechanisms responsible for the complex behavior of organic-rich shale, we introduce an original approach for micromechanical modeling of organic-rich shales which accounts for the effect of organic maturity on the overall elasticity through morphology considerations. This morphology contribution is captured by means of an effective media theory that bridges the gap between immature and mature systems through the choice of the system's microtexture: namely, a matrix-inclusion morphology (Mori-Tanaka) for immature systems and a polycrystal/granular morphology for mature systems. We also show that interfaces play a role in the effective elasticity of mature, organic-rich shales. The models are calibrated by means of ultrasonic pulse velocity measurements of elastic properties and validated by means of nanoindentation results. Sensitivity analyses using Spearman's Partial Rank Correlation Coefficient show the importance of porosity and Total Organic Carbon (TOC) as key input parameters for accurate model predictions. These modeling developments pave the way to reach a "unique" set of clay properties and highlight the importance of depositional environment, burial and diagenetic processes on the overall mechanical and poromechanical behavior of organic-rich shale. 
These developments also emphasize the importance of understanding and modeling the effects of clay elasticity and organic maturity on overall rock behavior, which is critical for a practical rock physics model that accounts for time-dependent phenomena and can be employed for seismic inversion.

  1. Recent Development of an Earth Science App - FieldMove Clino

    NASA Astrophysics Data System (ADS)

    Vaughan, Alan; Collins, Nathan; Krus, Mike; Rourke, Peter

    2014-05-01

    As geological modelling and analysis move into 3D digital space, it becomes increasingly important to be able to rapidly integrate new data with existing databases, without the potential degradation caused by repeated manual transcription of numeric, graphical and meta-data. Digital field mapping offers significant benefits when compared with traditional paper mapping techniques, in that it can directly and interactively feed and be guided by downstream geological modelling and analysis. One of the most important pieces of equipment used by field geologists is the compass clinometer. Midland Valley's development team have recently released their highly anticipated FieldMove Clino app. FieldMove Clino is a digital compass-clinometer for data capture on a smartphone. The app allows the user to use their phone as a traditional hand-held bearing compass, as well as a digital compass-clinometer for rapidly measuring and capturing the georeferenced location and orientation of planar and linear features in the field. The user can also capture and store digital photographs and text notes. FieldMove Clino supports online Google Maps as well as offline maps, so that the user can import their own georeferenced basemaps. Data can be exported as comma-separated values (.csv) or Move™ (.mve) files and then imported directly into FieldMove™, Move™ or other applications. Midland Valley is currently pioneering tablet-based mapping and, along with its industrial and academic partners, will be using the application in field-based projects throughout this year and will be integrating feedback in further developments of this technology.

  2. Design of high productivity antibody capture by protein A chromatography using an integrated experimental and modeling approach.

    PubMed

    Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G

    2012-06-15

    An integrated experimental and modeling approach for the design of high productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required in order to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the best design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5 mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (or approximately 2 weeks). Copyright © 2012 Elsevier B.V. All rights reserved.
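    A back-of-the-envelope check of the quoted productivity figure: the column dimensions are taken from the abstract, while the per-cycle capture mass and cycle time below are illustrative guesses, not values from the paper:

```python
import math

# Productivity = mass captured per cycle * cycles per day / column volume,
# expressed in kg of IgG per litre of resin per day. Column dimensions from
# the abstract; per-cycle mass and cycle time are invented for illustration.
diameter_cm, length_cm = 0.5, 8.5          # 5 mm diameter, 8.5 cm length
volume_L = math.pi * (diameter_cm / 2) ** 2 * length_cm / 1000.0

def productivity_kg_per_L_day(mass_g_per_cycle, cycle_time_min):
    cycles_per_day = 24 * 60 / cycle_time_min
    return mass_g_per_cycle * cycles_per_day / 1000.0 / volume_L

# ~0.1 g IgG captured per 30-minute cycle lands near the reported optimum
print(round(productivity_kg_per_L_day(0.1, 30), 2))
```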

  3. Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models

    USGS Publications Warehouse

    Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin

    2018-01-01

    The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.
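    The bias mechanism described above can be illustrated with a toy nonlinear response; stream_leakage() below is an invented stand-in for a head-dependent boundary in a groundwater model, not a real groundwater-modeling API:

```python
# Toy sketch of capture-fraction bias from applying superposition to a
# nonlinear system. Under superposition, capture fraction is independent of
# pumping rate; with a head-dependent (nonlinear) boundary it is not.
def stream_leakage(pumping):
    # invented nonlinear response: streamflow capture saturates at high rates
    return 0.9 * pumping / (1.0 + 0.002 * pumping)

def capture_fraction(pumping):
    return stream_leakage(pumping) / pumping

q_ref, q_big = 10.0, 400.0
superposition_estimate = capture_fraction(q_ref)   # assumed rate-independent
actual = capture_fraction(q_big)
bias = superposition_estimate - actual
print(round(superposition_estimate, 3), round(actual, 3), round(bias, 3))
```

A capture difference map, in this analogy, maps how strongly the fraction depends on pumping rate at each well location.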

  4. Exercise Sensing and Pose Recovery Inference Tool (ESPRIT) - A Compact Stereo-based Motion Capture Solution For Exercise Monitoring

    NASA Technical Reports Server (NTRS)

    Lee, Mun Wai

    2015-01-01

    Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.

  5. A Self-Assessment Stereo Capture Model Applicable to the Internet of Things

    PubMed Central

    Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of the objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems—toed-in camera configuration and parallel camera configuration—are taken into consideration respectively. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004

  6. Attention capture without awareness in a non-spatial selection task.

    PubMed

    Oriet, Chris; Pandey, Mamata; Kawahara, Jun-Ichiro

    2017-02-01

    Distractors presented prior to a critical target in a rapid sequence of visually-presented items induce a lag-dependent deficit in target identification, particularly when the distractor shares a task-relevant feature of the target. Presumably, such capture of central attention is important for bringing a target into awareness. The results of the present investigation suggest that greater capture of attention by a distractor is not accompanied by greater awareness of it. Moreover, awareness tends to be limited to superficial characteristics of the target such as colour. The findings are interpreted within the context of a model that assumes sudden increases in arousal trigger selection of information for consolidation in working memory. In this conceptualization, prolonged analysis of distractor items sharing task-relevant features leads to larger target identification deficits (i.e., greater capture) but no increase in awareness. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Quenching measurements and modeling of a boron-loaded organic liquid scintillator

    DOE PAGES

    Westerdale, S.; Xu, J.; Shields, E.; ...

    2017-08-03

    We present that organic liquid scintillators are used in a wide variety of applications in experimental nuclear and particle physics. Boron-loaded scintillators are particularly useful for detecting neutron captures, due to the high thermal neutron capture cross section of 10B. These scintillators are commonly used in neutron detectors, including the DarkSide-50 neutron veto, where the neutron may produce a signal when it scatters off protons in the scintillator or when it captures on 10B. Reconstructing the energy of these recoils is complicated by scintillation quenching. Understanding how nuclear recoils are quenched in these scintillators is an important and difficult problem.more » In this article, we present a set of measurements of neutron-induced proton recoils in a boron-loaded organic liquid scintillator at recoil energies ranging from 57–467 keV, and we compare these measurements to predictions from different quenching models. We find that a modified Birks' model whose denominator is quadratic in dE/dx best describes the measurements, with χ2/NDF=1.6. In conclusion, this result will help model nuclear recoil scintillation in similar detectors and can be used to improve their neutron tagging efficiency.« less
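The modified Birks' model favored above (denominator quadratic in dE/dx) can be sketched numerically. The stopping-power curve and the kB and C coefficients below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def quenched_light(E_keV, stopping_power, kB=0.01, C=1e-5, n=2000):
    """Light yield from a modified Birks' law whose denominator is
    quadratic in dE/dx:
        dL/dE = 1 / (1 + kB*(dE/dx) + C*(dE/dx)**2)
    integrated over recoil energy with the trapezoid rule.
    kB and C are illustrative, not measured coefficients."""
    E = np.linspace(0.0, E_keV, n)
    dEdx = stopping_power(E)
    dLdE = 1.0 / (1.0 + kB * dEdx + C * dEdx**2)
    return np.sum(0.5 * (dLdE[1:] + dLdE[:-1]) * np.diff(E))

def toy_dEdx(E_keV):
    """Toy proton stopping power: large at low energy, falling with E."""
    return 500.0 / (1.0 + E_keV / 50.0)

# Quenching factor: quenched light per unit energy, relative to no quenching.
qf_100 = quenched_light(100.0, toy_dEdx) / 100.0
qf_300 = quenched_light(300.0, toy_dEdx) / 300.0
```

Because dE/dx falls as the recoil energy rises, the quenching factor grows with energy, which is the qualitative behavior such models are fit to reproduce.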

  11. Study on Capturing Functional Requirements of the New Product Based on Evolution

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Song, Liya; Bai, Zhonghang; Zhang, Peng

    In order to survive in an increasingly competitive global marketplace, it is important for corporations to forecast the evolutionary direction of new products rapidly and effectively. Most products are developed based on the design of existing products. In product design, capturing functional requirements is a key step. Function evolves continuously, driven by the evolution of needs and technologies, so the functional requirements of a new product can be forecast based on the functions of an existing product. Eight laws of function evolution are put forward in this paper, and a process model for capturing the functional requirements of a new product based on function evolution is proposed. An example illustrates the design process.

  12. Dwarf galaxies: a lab to investigate the neutron capture elements production

    NASA Astrophysics Data System (ADS)

    Cescutti, Gabriele

    2018-06-01

    In this contribution, I focus on the neutron capture elements observed in the spectra of old stars in the Galactic halo and in ultra-faint galaxies. Adopting a stochastic chemical evolution model and the Galactic halo as a benchmark, I present new constraints on the rate and time scales of r-process events, based on the discovery of r-process-rich stars in the ultra-faint galaxy Reticulum 2. I also show that an s-process activated by rotation in massive stars can play an important role in the production of heavy elements.

  13. Improvements in continuum modeling for biomolecular systems

    NASA Astrophysics Data System (ADS)

    Yu, Qiao; Lu, Ben-Zhuo

    2016-01-01

    Modeling of biomolecular systems plays an essential role in understanding biological processes, such as ionic flow across channels, protein modification or interaction, and cell signaling. The continuum model described by the Poisson-Boltzmann (PB)/Poisson-Nernst-Planck (PNP) equations has made great contributions towards simulation of these processes. However, the model has shortcomings in its commonly used form and cannot capture (or cannot accurately capture) some important physical properties of the biological systems. Considerable efforts have been made to improve the continuum model to account for discrete particle interactions and to make progress in numerical methods to provide accurate and efficient simulations. This review will summarize recent main improvements in continuum modeling for biomolecular systems, with focus on the size-modified models, the coupling of the classical density functional theory and the PNP equations, the coupling of polar and nonpolar interactions, and numerical progress. Project supported by the National Natural Science Foundation of China (Grant No. 91230106) and the Chinese Academy of Sciences Program for Cross & Cooperative Team of the Science & Technology Innovation.
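The simplest member of the PB family, the linearized (Debye-Hückel) equation, makes the continuum picture concrete. A minimal 1D finite-difference sketch, with unit-free illustrative parameters (phi0, kappa, and domain length are placeholders), solved against the known exponential screening profile:

```python
import numpy as np

# Linearized Poisson-Boltzmann (Debye-Hueckel) in 1D:
#   d^2(phi)/dx^2 = kappa^2 * phi,  phi(0) = phi0,  phi(L) = 0
# Second-order central differences on the interior nodes; the two boundary
# values are known and moved to the right-hand side.
def solve_linear_pb(phi0=1.0, kappa=1.0, L=10.0, n=200):
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, -2.0 / h**2 - kappa**2)
    idx = np.arange(n - 3)
    A[idx, idx + 1] = 1.0 / h**2
    A[idx + 1, idx] = 1.0 / h**2
    b = np.zeros(n - 2)
    b[0] = -phi0 / h**2                 # known boundary value at x = 0
    phi = np.concatenate([[phi0], np.linalg.solve(A, b), [0.0]])
    return x, phi

x, phi = solve_linear_pb()
exact = np.exp(-x)                      # analytic screening profile for kappa=1
```

For a long enough domain the numerical potential tracks the exponential decay closely; the nonlinear PB and PNP systems discussed in the review replace this linear solve with iterative or coupled schemes.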

  14. Exploring new topography-based subgrid spatial structures for improving land surface modeling

    DOE PAGES

    Tesfa, Teklu K.; Leung, Lai-Yung Ruby

    2017-02-22

    Topography plays an important role in land surface processes through its influence on atmospheric forcing, soil and vegetation properties, and river network topology and drainage area. Land surface models with a spatial structure that captures spatial heterogeneity, which is directly affected by topography, may improve the representation of land surface processes. Previous studies found that land surface modeling, using subbasins instead of structured grids as computational units, improves the scalability of simulated runoff and streamflow processes. In this study, new land surface spatial structures are explored by further dividing subbasins into subgrid structures based on topographic properties, including surface elevation,more » slope and aspect. Two methods (local and global) of watershed discretization are applied to derive two types of subgrid structures (geo-located and non-geo-located) over the topographically diverse Columbia River basin in the northwestern United States. In the global method, a fixed elevation classification scheme is used to discretize subbasins. The local method utilizes concepts of hypsometric analysis to discretize each subbasin, using different elevation ranges that also naturally account for slope variations. The relative merits of the two methods and subgrid structures are investigated for their ability to capture topographic heterogeneity and the implications of this on representations of atmospheric forcing and land cover spatial patterns. Results showed that the local method reduces the standard deviation (SD) of subgrid surface elevation in the study domain by 17 to 19 % compared to the global method, highlighting the relative advantages of the local method for capturing subgrid topographic variations. The comparison between the two types of subgrid structures showed that the non-geo-located subgrid structures are more consistent across different area threshold values than the geo-located subgrid structures. 
Altogether the local method and non-geo-located subgrid structures effectively and robustly capture topographic, climatic and vegetation variability, which is important for land surface modeling.« less
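The contrast between the global (fixed elevation classes) and local (per-subbasin, hypsometry-inspired) discretizations can be illustrated with synthetic elevations; the two Gaussian "subbasins" and the quartile-based local bands below are toy stand-ins for the paper's actual watershed data and classification schemes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic subbasins with very different relief (elevations in meters).
subbasins = [rng.normal(500, 80, 2000), rng.normal(2500, 400, 2000)]

def band_sd(elev, edges):
    """Mean within-band standard deviation of elevation for given band edges."""
    labels = np.digitize(elev, edges[1:-1])
    return np.mean([elev[labels == k].std() for k in np.unique(labels)])

# Global method: one fixed elevation classification applied to every subbasin.
all_elev = np.concatenate(subbasins)
global_edges = np.linspace(all_elev.min(), all_elev.max(), 5)
global_sd = np.mean([band_sd(e, global_edges) for e in subbasins])

# Local method: quantile-based edges derived per subbasin, so each subbasin's
# own elevation range sets its band boundaries (the hypsometric idea).
local_sd = np.mean(
    [band_sd(e, np.quantile(e, [0, 0.25, 0.5, 0.75, 1])) for e in subbasins]
)
```

Because the local bands adapt to each subbasin's relief, the residual within-band elevation spread is smaller, mirroring the 17-19 % SD reduction reported above.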

  15. Exploring new topography-based subgrid spatial structures for improving land surface modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tesfa, Teklu K.; Leung, Lai-Yung Ruby

    Topography plays an important role in land surface processes through its influence on atmospheric forcing, soil and vegetation properties, and river network topology and drainage area. Land surface models with a spatial structure that captures spatial heterogeneity, which is directly affected by topography, may improve the representation of land surface processes. Previous studies found that land surface modeling, using subbasins instead of structured grids as computational units, improves the scalability of simulated runoff and streamflow processes. In this study, new land surface spatial structures are explored by further dividing subbasins into subgrid structures based on topographic properties, including surface elevation,more » slope and aspect. Two methods (local and global) of watershed discretization are applied to derive two types of subgrid structures (geo-located and non-geo-located) over the topographically diverse Columbia River basin in the northwestern United States. In the global method, a fixed elevation classification scheme is used to discretize subbasins. The local method utilizes concepts of hypsometric analysis to discretize each subbasin, using different elevation ranges that also naturally account for slope variations. The relative merits of the two methods and subgrid structures are investigated for their ability to capture topographic heterogeneity and the implications of this on representations of atmospheric forcing and land cover spatial patterns. Results showed that the local method reduces the standard deviation (SD) of subgrid surface elevation in the study domain by 17 to 19 % compared to the global method, highlighting the relative advantages of the local method for capturing subgrid topographic variations. The comparison between the two types of subgrid structures showed that the non-geo-located subgrid structures are more consistent across different area threshold values than the geo-located subgrid structures. 
Altogether the local method and non-geo-located subgrid structures effectively and robustly capture topographic, climatic and vegetation variability, which is important for land surface modeling.« less

  16. Rapid and efficient uranium(VI) capture by phytic acid/polyaniline/FeOOH composites.

    PubMed

    Wei, Xintao; Liu, Qi; Zhang, Hongsen; Liu, Jingyuan; Chen, Rongrong; Li, Rumin; Li, Zhangshuang; Liu, Peili; Wang, Jun

    2018-02-01

    Uranium plays an indispensable role in nuclear energy, but there are limited land resources to meet the ever-growing demand; therefore, a need exists to develop efficient materials for capturing uranium from water. Herein, we synthesize a promising adsorbent of phytic acid/polyaniline/FeOOH composites (PA/PANI/FeOOH) by oxidative polymerization. Phytic acid, acting as a gelator and dopant, plays an important role in the formation of polyaniline (PANI). The PA/PANI/FeOOH exhibits high adsorption capacity (q_m = 555.8 mg g^-1, T = 298 K), rapid adsorption rate (within 5 min), excellent selectivity and cyclic stability. In addition, the results show that the adsorption isotherm is well fitted by the Langmuir isotherm model, and the adsorption kinetics agree with a pseudo-second-order model. XPS analysis indicates that the removal of uranium is mainly attributed to abundant amine and imine groups on the surface of PA/PANI/FeOOH. Importantly, the removal of uranium from low concentrations of simulated seawater is highly efficient, with a removal rate exceeding 92%. From our study, superior adsorption capacities, along with a low-cost, environmentally friendly and facile synthesis, reveal PA/PANI/FeOOH as a promising material for uranium capture. Copyright © 2017. Published by Elsevier Inc.
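The Langmuir fit mentioned above is commonly done via its linearized form, C/q = C/q_m + 1/(K·q_m). A sketch with synthetic data: q_m = 555.8 mg/g is the reported capacity, but K = 0.05 L/mg and the concentration grid are illustrative assumptions:

```python
import numpy as np

def langmuir(C, qm, K):
    """Langmuir isotherm: q_e = q_m * K * C / (1 + K * C)."""
    return qm * K * C / (1.0 + K * C)

# Synthetic equilibrium data from assumed parameters (K is illustrative).
C = np.linspace(5.0, 400.0, 20)          # equilibrium concentration, mg/L
q = langmuir(C, 555.8, 0.05)             # adsorbed amount, mg/g

# Linearized Langmuir fit:  C/q = C/q_m + 1/(K*q_m)  -> ordinary least squares
slope, intercept = np.polyfit(C, C / q, 1)
qm_fit = 1.0 / slope                     # recovered maximum capacity
K_fit = slope / intercept                # recovered affinity constant
```

With noiseless synthetic data the regression recovers the generating parameters exactly; with real isotherm data the same regression yields q_m and K plus a goodness-of-fit check.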

  17. Astrophysical 3He(α ,γ )7Be and 3H(α ,γ )7Li direct capture reactions in a potential-model approach

    NASA Astrophysics Data System (ADS)

    Tursunov, E. M.; Turakulov, S. A.; Kadyrov, A. S.

    2018-03-01

    The astrophysical 3He(α,γ)7Be and 3H(α,γ)7Li direct capture processes are studied in the framework of the two-body model with potentials of a simple Gaussian form, which describe correctly the phase shifts in the s, p, d, and f waves, as well as the binding energy and the asymptotic normalization constant of the ground p3/2 and the first excited p1/2 bound states. It is shown that the E1 transition from the initial s wave to the final p waves is strongly dominant in both capture reactions. On this basis the s-wave potential parameters are adjusted to reproduce the new data of the LUNA Collaboration around 100 keV and the newest data at the Gamow peak estimated with the help of the observed neutrino fluxes from the Sun, S34(23 (+6/-5) keV) = 0.548 ± 0.054 keV b, for the astrophysical S factor of the capture process 3He(α,γ)7Be. The resulting model describes well the astrophysical S factor in the low-energy big-bang nucleosynthesis region of 180-400 keV; however, it has a tendency to underestimate the data above 0.5 MeV. The energy dependence of the S factor is mostly consistent with the data and the results of the no-core shell model with continuum, but substantially different from the fermionic molecular dynamics model predictions. Two-body potentials, adjusted for the properties of the 7Be nucleus, 3He+α elastic scattering data, and the astrophysical S factor of the 3He(α,γ)7Be direct capture reaction, are able to reproduce the properties of the 7Li nucleus, the binding energies of the ground 3/2- and first excited 1/2- states, and phase shifts of the 3H+α elastic scattering in partial waves. Most importantly, these potential models can successfully describe both the absolute value and the energy dependence of the existing experimental data for the mirror astrophysical 3H(α,γ)7Li capture reaction without any additional adjustment of the parameters.
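The S factor quoted above relates to the capture cross section through the Sommerfeld parameter, S(E) = σ(E)·E·exp(2πη). A sketch of that conversion, using the common non-relativistic approximation 2πη ≈ 31.29·Z1·Z2·√(μ[amu]/E[keV]); the nuclear masses used for the reduced mass are rounded values:

```python
import math

def two_pi_eta(Z1, Z2, mu_amu, E_keV):
    """Sommerfeld factor 2*pi*eta in the standard approximation
    31.29 * Z1 * Z2 * sqrt(mu[amu] / E[keV])."""
    return 31.29 * Z1 * Z2 * math.sqrt(mu_amu / E_keV)

def s_factor(sigma_b, Z1, Z2, mu_amu, E_keV):
    """Astrophysical S factor (keV b): S(E) = sigma(E) * E * exp(2*pi*eta).
    The exponential removes the steep Coulomb-penetration energy dependence,
    leaving a slowly varying nuclear quantity."""
    return sigma_b * E_keV * math.exp(two_pi_eta(Z1, Z2, mu_amu, E_keV))

MU_3HE_4HE = 3.016 * 4.003 / (3.016 + 4.003)    # reduced mass in amu (rounded)

# Invert the quoted S34(23 keV) = 0.548 keV b to the underlying cross section.
sigma_23 = 0.548 / (23.0 * math.exp(two_pi_eta(2, 2, MU_3HE_4HE, 23.0)))
```

The minuscule cross section that comes out is exactly why solar-fusion energies are probed via the S factor (and, as above, via solar neutrino fluxes) rather than by direct measurement at the Gamow peak.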

  18. Modeling irrigation behavior in groundwater systems

    NASA Astrophysics Data System (ADS)

    Foster, Timothy; Brozović, Nicholas; Butler, Adrian P.

    2014-08-01

    Integrated hydro-economic models have been widely applied to water management problems in regions of intensive groundwater-fed irrigation. However, policy interpretations may be limited as most existing models do not explicitly consider two important aspects of observed irrigation decision making, namely the limits on instantaneous irrigation rates imposed by well yield and the intraseasonal structure of irrigation planning. We develop a new modeling approach for determining irrigation demand that is based on observed farmer behavior and captures the impacts on production and water use of both well yield and climate. Through a case study of irrigated corn production in the Texas High Plains region of the United States we predict optimal irrigation strategies under variable levels of groundwater supply, and assess the limits of existing models for predicting land and groundwater use decisions by farmers. Our results show that irrigation behavior exhibits complex nonlinear responses to changes in groundwater availability. Declining well yields induce large reductions in the optimal size of irrigated area and irrigation use as constraints on instantaneous application rates limit the ability to maintain sufficient soil moisture to avoid negative impacts on crop yield. We demonstrate that this important behavioral response to limited groundwater availability is not captured by existing modeling approaches, which therefore may be unreliable predictors of irrigation demand, agricultural profitability, and resilience to climate change and aquifer depletion.

  19. The mechanisms underlying overgeneral autobiographical memory: An evaluative review of evidence for the CaR-FA-X model

    PubMed Central

    Sumner, Jennifer A.

    2011-01-01

    Overgeneral autobiographical memory (OGM) has been found to be an important cognitive phenomenon with respect to depression and trauma-related psychopathology (e.g., posttraumatic stress disorder), and researchers have been interested in better understanding the factors that contribute to this proposed vulnerability factor. The most prominent model of mechanisms underlying OGM to date is Williams et al.’s (2007) CaR-FA-X model. This model proposes that three processes influence OGM: capture and rumination, functional avoidance, and impaired executive control. The author reviews the current state of support for the CaR-FA-X model by evaluating 38 studies that have examined OGM and one or more mechanisms of the model. Collectively, these studies reveal robust support for associations between OGM and both rumination and impaired executive control. OGM also appears to be a cognitive avoidance strategy, and there is evidence that avoiding the retrieval of specific memories reduces distress after an aversive event, at least in the short term. Important issues that have been left unresolved are highlighted, including the nature of the capture phenomenon, the role of trauma in functional avoidance, and the developmental nature of functional avoidance. Recommendations for future research that will enhance understanding of the factors that contribute to OGM are suggested. PMID:22142837

  20. Description and application of capture zone delineation for a wellfield at Hilton Head Island, South Carolina

    USGS Publications Warehouse

    Landmeyer, J.E.

    1994-01-01

    Ground-water capture zone boundaries for individual pumped wells in a confined aquifer were delineated by using ground-water models. Both analytical and numerical (semi-analytical) models that more accurately represent the ground-water-flow system were used. All models delineated 2-dimensional boundaries (capture zones) that represent the areal extent of ground-water contribution to a pumped well. The resultant capture zones were evaluated on the basis of the ability of each model to realistically represent the part of the ground-water-flow system that contributed water to the pumped wells. Analytical models used were based on a fixed-radius approach and included: an arbitrary radius model, a calculated fixed radius model based on the volumetric-flow equation with a time-of-travel criterion, and a calculated fixed radius model derived from modification of the Theis model with a drawdown criterion. Numerical models used included the 2-dimensional, finite-difference models RESSQC and MWCAP. The arbitrary radius and Theis analytical models delineated capture zone boundaries that compared least favorably with capture zones delineated using the volumetric-flow analytical model and both numerical models. The numerical models produced more hydrologically reasonable capture zones (oriented parallel to the regional flow direction) than the volumetric-flow equation. The RESSQC numerical model computed more hydrologically realistic capture zones than the MWCAP numerical model by accounting for changes in the shape of capture zones caused by multiple-well interference. The capture zone boundaries generated by using both analytical and numerical models indicated that the currently used 100-foot radius of protection around a wellhead in South Carolina is an underestimate of the extent of ground-water capture for pumped wells in this particular wellfield in the Upper Floridan aquifer. The arbitrary fixed radius of 100 feet was shown to underestimate the upgradient contribution of ground-water flow to a pumped well.
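The calculated-fixed-radius method from the volumetric-flow equation reduces to r = sqrt(Q·t / (π·n·b)). A sketch with hypothetical inputs (the pumping rate, travel time, porosity, and aquifer thickness below are not values from the Hilton Head study):

```python
import math

def capture_radius_ft(Q_cfd, t_days, porosity, b_ft):
    """Calculated fixed radius from the volumetric-flow equation:
        r = sqrt(Q * t / (pi * n * b))
    Q: pumping rate (ft^3/day), t: time-of-travel criterion (days),
    n: effective porosity, b: aquifer thickness (ft)."""
    return math.sqrt(Q_cfd * t_days / (math.pi * porosity * b_ft))

# Illustrative inputs only (roughly 100 gal/min, 5-year travel time).
r = capture_radius_ft(Q_cfd=19_250, t_days=5 * 365, porosity=0.3, b_ft=100)
```

Even with modest assumed inputs the computed radius is several hundred feet, consistent with the abstract's finding that a fixed 100-foot protection radius understates the capture extent.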

  1. Inferring species interactions through joint mark–recapture analysis

    USGS Publications Warehouse

    Yackulic, Charles B.; Korman, Josh; Yard, Michael D.; Dzul, Maria C.

    2018-01-01

    Introduced species are frequently implicated in declines of native species. In many cases, however, evidence linking introduced species to native declines is weak. Failure to make strong inferences regarding the role of introduced species can hamper attempts to predict population viability and delay effective management responses. For many species, mark–recapture analysis is the most rigorous form of demographic analysis. However, to our knowledge, there are no mark–recapture models that allow for joint modeling of interacting species. Here, we introduce a two‐species mark–recapture population model in which the vital rates (and capture probabilities) of one species are allowed to vary in response to the abundance of the other species. We use a simulation study to explore bias and choose an approach to model selection. We then use the model to investigate species interactions between endangered humpback chub (Gila cypha) and introduced rainbow trout (Oncorhynchus mykiss) in the Colorado River between 2009 and 2016. In particular, we test hypotheses about how two environmental factors (turbidity and temperature), intraspecific density dependence, and rainbow trout abundance are related to survival, growth, and capture of juvenile humpback chub. We also project the long‐term effects of different rainbow trout abundances on adult humpback chub abundances. Our simulation study suggests this approach has minimal bias under potentially challenging circumstances (i.e., low capture probabilities) that characterized our application and that model selection using indicator variables could reliably identify the true generating model even when process error was high. When the model was applied to rainbow trout and humpback chub, we identified negative relationships between rainbow trout abundance and the survival, growth, and capture probability of juvenile humpback chub. 
Effects of interspecific interactions on survival and capture probability were strongly supported, whereas support for the growth effect was weaker. Environmental factors were also identified as important, and in many cases stronger than interspecific interactions, and there was still substantial unexplained variation in growth and survival rates. The general approach presented here for combining mark–recapture data for two species is applicable in many other systems and could be modified to model abundance of the invader via other modeling approaches.
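The core idea of letting one species' vital rates respond to the other's abundance is typically expressed on the logit scale. A minimal sketch: the intercept and slope below are hypothetical coefficients, not the study's estimates, with a negative slope encoding the reported negative trout effect on juvenile chub survival:

```python
import math

def survival_prob(beta0, beta1, competitor_abundance):
    """Interval survival modeled on the logit scale as a linear function of
    the other species' abundance (coefficients are hypothetical):
        logit(phi) = beta0 + beta1 * N_competitor"""
    eta = beta0 + beta1 * competitor_abundance
    return 1.0 / (1.0 + math.exp(-eta))

# beta1 < 0 encodes the negative interspecific effect reported in the study.
phi_low_trout = survival_prob(1.0, -0.0004, 1_000)
phi_high_trout = survival_prob(1.0, -0.0004, 10_000)
```

Embedded in a mark–recapture likelihood, the same linear predictor (and analogous ones for growth and capture probability) is what allows the data to support or reject an interspecific effect via model selection on beta1.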

  2. Discontinuous Galerkin methods for modeling Hurricane storm surge

    NASA Astrophysics Data System (ADS)

    Dawson, Clint; Kubatko, Ethan J.; Westerink, Joannes J.; Trahan, Corey; Mirabito, Christopher; Michoski, Craig; Panda, Nishant

    2011-09-01

    Storm surge due to hurricanes and tropical storms can result in significant loss of life, property damage, and long-term damage to coastal ecosystems and landscapes. Computer modeling of storm surge can be used for two primary purposes: forecasting of surge as storms approach land for emergency planning and evacuation of coastal populations, and hindcasting of storms for determining risk, development of mitigation strategies, coastal restoration and sustainability. Storm surge is modeled using the shallow water equations, coupled with wind forcing and in some events, models of wave energy. In this paper, we will describe a depth-averaged (2D) model of circulation in spherical coordinates. Tides, riverine forcing, atmospheric pressure, bottom friction, the Coriolis effect and wind stress are all important for characterizing the inundation due to surge. The problem is inherently multi-scale, both in space and time. To model these problems accurately requires significant investments in acquiring high-fidelity input (bathymetry, bottom friction characteristics, land cover data, river flow rates, levees, raised roads and railways, etc.), accurate discretization of the computational domain using unstructured finite element meshes, and numerical methods capable of capturing highly advective flows, wetting and drying, and multi-scale features of the solution. The discontinuous Galerkin (DG) method appears to allow for many of the features necessary to accurately capture storm surge physics. The DG method was developed for modeling shocks and advection-dominated flows on unstructured finite element meshes. It easily allows for adaptivity in both mesh (h) and polynomial order (p) for capturing multi-scale spatial events. Mass conservative wetting and drying algorithms can be formulated within the DG method. In this paper, we will describe the application of the DG method to hurricane storm surge. 
We discuss the general formulation, and new features which have been added to the model to better capture surge in complex coastal environments. These features include modifications to the method to handle spherical coordinates and maintain still flows, improvements in the stability post-processing (i.e. slope-limiting), and the modeling of internal barriers for capturing overtopping of levees and other structures. We will focus on applications of the model to recent events in the Gulf of Mexico, including Hurricane Ike.

  3. Complex Greenland outlet glacier flow captured

    PubMed Central

    Aschwanden, Andy; Fahnestock, Mark A.; Truffer, Martin

    2016-01-01

    The Greenland Ice Sheet is losing mass at an accelerating rate due to increased surface melt and flow acceleration in outlet glaciers. Quantifying future dynamic contributions to sea level requires accurate portrayal of outlet glaciers in ice sheet simulations, but to date poor knowledge of subglacial topography and limited model resolution have prevented reproduction of complex spatial patterns of outlet flow. Here we combine a high-resolution ice-sheet model coupled to uniformly applied models of subglacial hydrology and basal sliding, and a new subglacial topography data set to simulate the flow of the Greenland Ice Sheet. Flow patterns of many outlet glaciers are well captured, illustrating fundamental commonalities in outlet glacier flow and highlighting the importance of efforts to map subglacial topography. Success in reproducing present day flow patterns shows the potential for prognostic modelling of ice sheets without the need for spatially varying parameters with uncertain time evolution. PMID:26830316

  4. Policy Capturing with Local Models: The Application of the AID technique in Modeling Judgment

    DTIC Science & Technology

    1972-12-01

    or coding phases have upon the derived policy model. Particularly important aspects of these subtasks include: 1) initial identification and coding of...

  5. A powerful and efficient set test for genetic markers that handles confounders

    PubMed Central

    Listgarten, Jennifer; Lippert, Christoph; Kang, Eun Yong; Xiang, Jing; Kadie, Carl M.; Heckerman, David

    2013-01-01

    Motivation: Approaches for testing sets of variants, such as a set of rare or common variants within a gene or pathway, for association with complex traits are important. In particular, set tests allow for aggregation of weak signal within a set, can capture interplay among variants and reduce the burden of multiple hypothesis testing. Until now, these approaches did not address confounding by family relatedness and population structure, a problem that is becoming more important as larger datasets are used to increase power. Results: We introduce a new approach for set tests that handles confounders. Our model is based on the linear mixed model and uses two random effects—one to capture the set association signal and one to capture confounders. We also introduce a computational speedup for two random-effects models that makes this approach feasible even for extremely large cohorts. Using this model with both the likelihood ratio test and score test, we find that the former yields more power while controlling type I error. Application of our approach to richly structured Genetic Analysis Workshop 14 data demonstrates that our method successfully corrects for population structure and family relatedness, whereas application of our method to a 15 000 individual Crohn’s disease case–control cohort demonstrates that it additionally recovers genes not recoverable by univariate analysis. Availability: A Python-based library implementing our approach is available at http://mscompbio.codeplex.com. Contact: jennl@microsoft.com or lippert@microsoft.com or heckerma@microsoft.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23599503
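The two-random-effects structure described above can be written as y ~ N(0, σ²_set·K_set + σ²_conf·K_conf + σ²_e·I). A direct (un-accelerated) sketch of that likelihood on simulated data; the sample sizes, variance-component values, and linear kernels are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_set = 50, 10
G_set = rng.standard_normal((n, p_set))   # variants in the tested set
G_conf = rng.standard_normal((n, 200))    # background variants (confounders)

def kinship(G):
    """Linear (realized-relationship) kernel from a genotype-like matrix."""
    return G @ G.T / G.shape[1]

def neg_log_lik(y, s2_set, s2_conf, s2_e):
    """Negative log-likelihood of y ~ N(0, s2_set*K_set + s2_conf*K_conf + s2_e*I):
    one random effect for the set signal, one for confounders, evaluated
    directly without the paper's computational speedups."""
    K = s2_set * kinship(G_set) + s2_conf * kinship(G_conf) + s2_e * np.eye(n)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y) + n * np.log(2 * np.pi))

# Simulated trait with a genuine set effect; null vs. alternative likelihoods
# (variance components are fixed by hand here rather than maximized).
y = G_set @ rng.standard_normal(p_set) * 0.5 + rng.standard_normal(n)
ll_null = -neg_log_lik(y, 0.0, 0.2, 1.0)
ll_alt = -neg_log_lik(y, 2.5, 0.2, 1.0)
```

A likelihood ratio or score test then compares fits with and without the set component; the paper's contribution includes making this evaluation fast enough for very large cohorts.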

  6. Capturing strain localization behind a geosynthetic-reinforced soil wall

    NASA Astrophysics Data System (ADS)

    Lai, Timothy Y.; Borja, Ronaldo I.; Duvernay, Blaise G.; Meehan, Richard L.

    2003-04-01

    This paper presents the results of finite element (FE) analyses of shear strain localization that occurred in cohesionless soils supported by a geosynthetic-reinforced retaining wall. The innovative aspects of the analyses include capturing the localized deformation and the accompanying collapse mechanism using a recently developed embedded strong discontinuity model. The case study analysed, reported in previous publications, consists of a 3.5-m tall, full-scale reinforced wall model deforming in plane strain and loaded to failure by a surcharge applied at the surface. Results of the analysis suggest strain localization developing from the toe of the wall and propagating upward to the ground surface, forming a curved failure surface. This is in agreement with a well-documented failure mechanism experienced by the physical wall model, in which internal failure surfaces developed behind the wall as a result of the surface loading. Important features of the analyses include mesh sensitivity studies and a comparison of the localization properties predicted by different pre-localization constitutive models, including a family of three-invariant elastoplastic constitutive models appropriate for frictional/dilatant materials. Results of the analysis demonstrate the potential of the enhanced FE method for capturing a collapse mechanism characterized by the presence of a failure, or slip, surface through earthen materials.

  7. Linking vegetation structure, function and physiology through spectroscopic remote sensing

    NASA Astrophysics Data System (ADS)

    Serbin, S.; Singh, A.; Couture, J. J.; Shiklomanov, A. N.; Rogers, A.; Desai, A. R.; Kruger, E. L.; Townsend, P. A.

    2015-12-01

    Terrestrial ecosystem process models require detailed information on ecosystem states and canopy properties to properly simulate the fluxes of carbon (C), water and energy from the land to the atmosphere and assess the vulnerability of ecosystems to perturbations. Current models fail to adequately capture the magnitude, spatial variation, and seasonality of terrestrial C uptake and storage, leading to significant uncertainties in the size and fate of the terrestrial C sink. By and large, these parameter and process uncertainties arise from inadequate spatial and temporal representation of plant traits, vegetation structure, and functioning. With increases in computational power and changes to model architecture and approaches, it is now possible for models to leverage detailed, data-rich and spatially explicit descriptions of ecosystems to inform parameter distributions and trait tradeoffs. In this regard, spectroscopy and imaging spectroscopy data have been shown to be invaluable observational datasets to capture broad-scale spatial and, eventually, temporal dynamics in important vegetation properties. We illustrate the linkage of plant traits and spectral observations to supply key data constraints for model parameterization. These constraints can come either in the form of the raw spectroscopic data (reflectance, absorptance) or physiological traits derived from spectroscopy. In this presentation we highlight our ongoing work to build ecological scaling relationships between critical vegetation characteristics and optical properties across diverse and complex canopies, including temperate broadleaf and conifer forests, Mediterranean vegetation, Arctic systems, and agriculture.
We focus on work at the leaf, stand, and landscape scales, illustrating the importance of capturing the underlying variability in a range of parameters (including vertical variation within canopies) to enable more efficient scaling of traits related to functional diversity of ecosystems.

  8. Enhanced science-stakeholder communication to improve ecosystem model performances for climate change impact assessments.

    PubMed

    Jönsson, Anna Maria; Anderbrant, Olle; Holmér, Jennie; Johansson, Jacob; Schurgers, Guy; Svensson, Glenn P; Smith, Henrik G

    2015-04-01

    In recent years, climate impact assessments of relevance to the agricultural and forestry sectors have received considerable attention. Current ecosystem models commonly capture the effect of a warmer climate on biomass production, but they rarely sufficiently capture potential losses caused by pests, pathogens and extreme weather events. In addition, alternative management regimes may not be integrated in the models. A way to improve the quality of climate impact assessments is to increase science-stakeholder collaboration and, in a two-way dialogue, link empirical experience and impact modelling with policy and strategies for sustainable management. In this paper we give a brief overview of different ecosystem modelling methods, discuss how to include ecological and management aspects, and highlight the importance of science-stakeholder communication. In doing so, we hope to stimulate a discussion among the science-stakeholder communities on how to quantify the potential for climate change adaptation by improving the realism of the models.

  9. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a component of neuro-spike communication. To this end, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs) followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other variability sources. To capture the stochasticity of the calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution constructed from the Normal and Logistic distributions.

  10. Analysis of precision and accuracy in a simple model of machine learning

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2017-12-01

    Machine learning is a procedure whereby a model of the world is constructed from a training set of examples. It is important that the model capture relevant features of the training set and, at the same time, make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
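
    The trade-off this record studies can be reproduced in a few lines: training error falls as the polynomial degree grows, while the error on examples outside the training set eventually worsens. The target function, noise level, and degrees below are arbitrary choices for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_f = lambda x: np.sin(2 * np.pi * x)
    x_train = rng.uniform(0, 1, 15)
    y_train = true_f(x_train) + rng.normal(0, 0.2, 15)
    x_test = np.linspace(0, 1, 200)          # examples not in the training set

    errors = {}
    for deg in (1, 3, 9):
        coef = np.polyfit(x_train, y_train, deg)      # least-squares polynomial fit
        train_rmse = np.sqrt(np.mean((np.polyval(coef, x_train) - y_train) ** 2))
        test_rmse = np.sqrt(np.mean((np.polyval(coef, x_test) - true_f(x_test)) ** 2))
        errors[deg] = (train_rmse, test_rmse)
        print(f"degree {deg}: train RMSE {train_rmse:.3f}, test RMSE {test_rmse:.3f}")
    ```

    Because higher-degree fits nest lower-degree ones, training RMSE is non-increasing in the degree; test RMSE is what distinguishes a model that generalizes from one that merely memorizes.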

  11. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    PubMed

    Whittington, Jesse; Sawaya, Michael A

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. 
The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.
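
    The defining feature of the spatial models in this record is that detection probability decays with the distance between a trap and an animal's home range centre, commonly via a half-normal form. A minimal sketch (the baseline probability and spatial scale below are arbitrary, not the study's estimates):

    ```python
    import math

    def p_detect(d, p0=0.2, sigma=2.0):
        """Half-normal detection function: baseline probability p0 at the
        activity centre, decaying with trap distance d (km) at scale sigma."""
        return p0 * math.exp(-d * d / (2.0 * sigma * sigma))

    for d in (0.0, 2.0, 6.0):
        print(f"d = {d} km: p = {p_detect(d):.4f}")
    ```

    Non-spatial models implicitly assume this curve is flat, which is one source of the biases the simulations above report.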

  12. Rational and Mechanistic Perspectives on Reinforcement Learning

    ERIC Educational Resources Information Center

    Chater, Nick

    2009-01-01

    This special issue describes important recent developments in applying reinforcement learning models to capture neural and cognitive function. But reinforcement learning, as a theoretical framework, can apply at two very different levels of description: "mechanistic" and "rational." Reinforcement learning is often viewed in mechanistic terms--as…

  13. A simulation study on the abatement of CO2 emissions by de-absorption with monoethanolamine.

    PubMed

    Greer, T; Bedelbayev, A; Igreja, J M; Gomes, J F; Lie, B

    2010-01-01

    Because of the adverse effect of CO2 from fossil fuel combustion on the earth's ecosystems, identifying the most cost-effective method for CO2 capture is an important area of research. The predominant process for CO2 capture currently employed by industry is chemical absorption in amine solutions. A dynamic model for the de-absorption process was developed with monoethanolamine (MEA) solution. Henry's law was used for modelling the vapour-phase equilibrium of the CO2, and fugacity ratios calculated by the Peng-Robinson equation of state (EOS) were used for H2O, MEA, N2 and O2. Chemical reactions between CO2 and MEA were included in the model along with the enhancement factor for chemical absorption. Liquid and vapour energy balances were developed to calculate the liquid and vapour temperatures, respectively.
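
    The Henry's-law treatment of the CO2 vapour-liquid equilibrium mentioned above takes a simple linear form, p = H·x. A one-line sketch (the Henry constant below is a placeholder, not a fitted value; in the model it would be temperature-dependent):

    ```python
    def co2_equilibrium_pressure(x_co2, henry_kpa=4000.0):
        """Henry's law: equilibrium CO2 partial pressure p = H * x, where x is
        the liquid-phase CO2 mole fraction (H here is an illustrative constant)."""
        return henry_kpa * x_co2

    print(co2_equilibrium_pressure(0.01))
    ```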

  14. The zoom lens of attention: Simulating shuffled versus normal text reading using the SWIFT model

    PubMed Central

    Schad, Daniel J.; Engbert, Ralf

    2012-01-01

    Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading. PMID:22754295

  15. Effects of sampling conditions on DNA-based estimates of American black bear abundance

    USGS Publications Warehouse

    Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. 
The capture probability for the larger of 2 mixture proportions of the population (i.e., pA or pB, depending on the value of π) was most important for predicting accuracy and precision, whereas capture probabilities of both mixture proportions (pA and pB) were important to explain variation in coverage. Based on sampling conditions similar to parameter estimates from the empirical dataset (pA = 0.30, pB = 0.05, N = 250, π = 0.15, and k = 10), predicted accuracy and precision were low (60% and 53%, respectively), whereas coverage was high (94%). Increasing pB, the capture probability for the predominant but most difficult to capture proportion of the population, was most effective to improve accuracy under those conditions. However, manipulation of other parameters may be more effective under different conditions. In general, the probabilities of obtaining accurate and precise estimates were best when p≥ 0.2. Our regression models can be used by managers to evaluate specific sampling scenarios and guide development of sampling frameworks or to assess reliability of DNA-based capture-mark-recapture studies.
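
    Under the two-point mixture heterogeneity model, the chance that an individual is detected at least once over k occasions follows directly from the mixture weights. Evaluating it at the empirical values quoted above (pA = 0.30, pB = 0.05, π = 0.15, k = 10):

    ```python
    def p_star(pA, pB, pi, k):
        """Probability of at least one capture in k occasions under a two-point
        finite mixture, where pi is the proportion of the population in mixture A."""
        return 1.0 - (pi * (1.0 - pA) ** k + (1.0 - pi) * (1.0 - pB) ** k)

    print(round(p_star(0.30, 0.05, 0.15, 10), 3))   # → 0.487
    ```

    Roughly half the population is never sampled under these conditions, which is consistent with the low predicted accuracy and precision reported above.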

  16. Demography and population dynamics of the mouse opossum (Thylamys elegans) in semi-arid Chile: seasonality, feedback structure and climate.

    PubMed Central

    Lima, M.; Stenseth, N. C.; Yoccoz, N. G.; Jaksic, F. M.

    2001-01-01

    Here we present, to the authors' knowledge for the very first time for a small marsupial, a thorough analysis of the demography and population dynamics of the mouse opossum (Thylamys elegans) in western South America. We test the relative importance of feedback structure and climatic factors (rainfall and the Southern Oscillation Index) in explaining the temporal variation in the demography of the mouse opossum. The demographic information was incorporated into a stage-structured population dynamics model and the model's predictions were compared with observed patterns. The mouse opossum's capture rates showed seasonal (within-year) and between-year variability, with individuals having higher capture rates during late summer and autumn and lower capture rates during winter and spring. There was also a strong between-year effect on capture probabilities. The reproductive (the fraction of reproductively active individuals) and recruitment rates showed a clear seasonal and a between-year pattern of variation with the peak of reproductive activity occurring during winter and early spring. In addition, the fraction of reproductive individuals was positively related to annual rainfall, while population density and annual rainfall positively influenced the recruitment rate. The survival rates were negatively related to annual rainfall. The average finite population growth rate during the study period was estimated to be 1.011 +/- 0.0019 from capture-recapture estimates. While the annual growth rate estimated from the seasonal linear matrix models was 1.026, the subadult and adult survival and maturation rates represent between 54% (winter) and 81% (summer) of the impact on the annual growth rate. PMID:11571053
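
    The annual growth rate from a stage-structured matrix model, as used above, is the dominant eigenvalue of the projection matrix. A sketch with a hypothetical two-stage (subadult, adult) matrix; the vital rates below are invented, not the study's estimates:

    ```python
    import numpy as np

    # rows/columns: [subadult, adult]; top row = fecundity contributions,
    # bottom row = maturation and adult survival (hypothetical values)
    A = np.array([[0.0, 1.2],
                  [0.5, 0.8]])
    lam = max(abs(np.linalg.eigvals(A)))   # dominant eigenvalue = annual growth rate
    print(round(float(lam), 3))
    ```

    A value of lam above 1 indicates a growing population; sensitivity of lam to each matrix entry is what quantifies the "impact on the annual growth rate" reported above.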

  17. How many tigers Panthera tigris are there in Huai Kha Khaeng Wildlife Sanctuary, Thailand? An estimate using photographic capture-recapture sampling

    USGS Publications Warehouse

    Simcharoen, S.; Pattanavibool, A.; Karanth, K.U.; Nichols, J.D.; Kumar, N.S.

    2007-01-01

    We used capture-recapture analyses to estimate the density of a tiger Panthera tigris population in the tropical forests of Huai Kha Khaeng Wildlife Sanctuary, Thailand, from photographic capture histories of 15 distinct individuals. The closure test results (z = 0.39, P = 0.65) provided some evidence in support of the demographic closure assumption. Fit of eight plausible closed models to the data indicated more support for model Mh, which incorporates individual heterogeneity in capture probabilities. This model generated an average capture probability of 0.42 and an abundance estimate (SE) of 19 (9.65) tigers. The sampled area (SE) of 477.2 (58.24) km2 yielded a density estimate (SE) of 3.98 (0.51) tigers per 100 km2. Huai Kha Khaeng Wildlife Sanctuary could therefore hold 113 tigers and the entire Western Forest Complex c. 720 tigers. Although based on field protocols that constrained us to use sub-optimal analyses, this estimated tiger density is comparable to tiger densities in Indian reserves that support moderate prey abundances. However, tiger densities in well-protected Indian reserves with high prey abundances are three times higher. If given adequate protection we believe that the Western Forest Complex of Thailand could potentially harbour >2,000 wild tigers, highlighting its importance for global tiger conservation. The monitoring approaches we recommend here would be useful for managing this tiger population.
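
    The density estimate follows directly from the abundance estimate and the effective sampled area; a quick check of the arithmetic:

    ```python
    N_hat, A_hat = 19.0, 477.2          # estimated abundance and sampled area (km^2)
    D_hat = N_hat / A_hat * 100.0       # tigers per 100 km^2
    print(round(D_hat, 2))              # → 3.98
    ```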

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buitrago, Paula A.; Morrill, Mike; Lighty, JoAnn S.

    This report presents experimental and modeling mercury oxidation and adsorption data. Fixed-bed and single-particle models of mercury adsorption were developed. The experimental data were obtained with two reactors: a 300-W, methane-fired, tubular, quartz-lined reactor for studying homogeneous oxidation reactions and a fixed-bed reactor, also of quartz, for studying heterogeneous reactions. The latter was attached to the exit of the former to provide realistic combustion gases. The fixed-bed reactor contained one gram of coconut-shell carbon and remained at a temperature of 150°C. All methane, air, SO 2, and halogen species were introduced through the burner to produce a radical pool representative of real combustion systems. A Tekran 2537A Analyzer coupled with a wet conditioning system provided speciated mercury concentrations. At 150°C and in the absence of HCl or HBr, the mercury uptake was about 20%. The addition of 50 ppm HCl caused complete capture of all elemental and oxidized mercury species. In the absence of halogens, SO 2 increased the mercury adsorption efficiency to up to 30 percent. The extent of adsorption decreased with increasing SO 2 concentration when halogens were present. Increasing the HCl concentration to 100 ppm lessened the effect of SO 2. The fixed-bed model incorporates Langmuir adsorption kinetics and was developed to predict adsorption of elemental mercury and the effect of multiple flue gas components. This model neglects intraparticle diffusional resistances and is only applicable to pulverized carbon sorbents. It roughly describes experimental data from the literature. The current version includes the ability to account for competitive adsorption between mercury, SO 2, and NO 2. The single particle model simulates in-flight sorbent capture of elemental mercury. This model was developed to include Langmuir and Freundlich isotherms, rate equations, sorbent feed rate, and intraparticle diffusion.
The Freundlich isotherm more accurately described in-flight mercury capture. Using these parameters, very little intraparticle diffusion was evident. Consistent with other data, smaller particles resulted in higher mercury uptake due to available surface area. Therefore, it is important to capture the particle size distribution in the model. At typical full-scale sorbent feed rates, the calculations under-predicted adsorption, suggesting that wall effects can account for as much as 50 percent of the removal, making it an important factor in entrained-mercury adsorption models.
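
    The two isotherms compared in the report have standard forms: Langmuir saturates at a monolayer capacity, while Freundlich is an unbounded power law. A sketch of their shapes (the parameter values below are arbitrary, chosen only to show the qualitative difference):

    ```python
    def langmuir(p, q_max=1.0, K=2.0):
        """Langmuir isotherm: adsorbed amount vs partial pressure, saturating at q_max."""
        return q_max * K * p / (1.0 + K * p)

    def freundlich(p, K_f=1.0, n=2.0):
        """Freundlich isotherm: empirical power law q = K_f * p**(1/n), no saturation."""
        return K_f * p ** (1.0 / n)

    for p in (0.1, 1.0, 10.0):
        print(f"p = {p}: Langmuir {langmuir(p):.3f}, Freundlich {freundlich(p):.3f}")
    ```

    The lack of a saturation plateau is one reason a Freundlich fit can track in-flight capture better when sorbent loading stays far below monolayer coverage.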

  20. Sustainable Capture: Concepts for Managing Stream-Aquifer Systems.

    PubMed

    Davids, Jeffrey C; Mehl, Steffen W

    2015-01-01

    Most surface water bodies (i.e., streams, lakes, etc.) are connected to the groundwater system to some degree, so that changes to surface water bodies (either diversions or importations) can change flows in aquifer systems, and pumping from an aquifer can reduce discharge to, or induce additional recharge from, streams, springs, and lakes. The timescales of these interactions are often very long (decades), making sustainable management of these systems difficult if relying only on observations of system responses. Instead, management scenarios are often analyzed based on numerical modeling. In this paper we propose a framework and metrics that can be used to relate the Theis concepts of capture to sustainable measures of stream-aquifer systems. We introduce four concepts: Sustainable Capture Fractions, Sustainable Capture Thresholds, Capture Efficiency, and Sustainable Groundwater Storage that can be used as the basis for developing metrics for sustainable management of stream-aquifer systems. We demonstrate their utility on a hypothetical stream-aquifer system where pumping captures both streamflow and discharge to phreatophytes in different amounts depending on pumping location. In particular, Capture Efficiency (CE) can be easily understood by scientists and non-scientists alike, and readily identifies vulnerabilities to sustainable stream-aquifer management when its value exceeds 100%. © 2014, National Ground Water Association.
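
    Of the four metrics, Capture Efficiency admits a one-line definition. The formulation below is a plausible reading of the abstract (captured discharge change per unit change in pumping, expressed in percent), not the paper's exact equation:

    ```python
    def capture_efficiency(delta_capture, delta_pumping):
        """CE (%): change in captured stream/phreatophyte discharge per unit
        change in pumping. Values above 100% flag an unsustainable configuration."""
        return 100.0 * delta_capture / delta_pumping

    print(f"{capture_efficiency(1.2, 1.0):.1f}")   # → 120.0 (exceeds 100%)
    ```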

  1. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  2. Progress on the Europium Neutron-Capture Study using DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agvaanluvsan, U; Becker, J A; Macri, R A

    2006-09-05

    The accurate measurement of neutron-capture cross sections of the Eu isotopes is important for many reasons including nuclear astrophysics and nuclear diagnostics. Neutron capture excitation functions of {sup 151,153}Eu targets were measured recently using a 4{pi} {gamma}-ray calorimeter array DANCE located at the Los Alamos Neutron Science Center for E{sub n} = 0.1-100 keV. The progress on the data analysis efforts is given in the present paper. The {gamma}-ray multiplicity distributions for the Eu targets and Be backing are significantly different. The {gamma}-ray multiplicity distribution is found to be the same for different neutron energies for both {sup 151}Eu and {sup 153}Eu. The statistical simulation to model the {gamma}-ray decay cascade is summarized.

  3. Is it growing exponentially fast? -- Impact of assuming exponential growth for characterizing and forecasting epidemics with initial near-exponential growth dynamics.

    PubMed

    Chowell, Gerardo; Viboud, Cécile

    2016-10-01

    The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follow early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias on the growth parameter, and 3) the impact on short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
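
    The contrast between exponential and near-exponential early growth can be made concrete with the generalized-growth equation dC/dt = rC^p, which reduces to exponential growth at p = 1 and gives sub-exponential (e.g., polynomial) growth for p < 1. The parameter values below are illustrative, not fitted to any outbreak:

    ```python
    import numpy as np

    def generalized_growth(r, p, c0, t):
        """Closed-form solution of dC/dt = r * C**p with 'deceleration' parameter p.
        p = 1 recovers exponential growth; p < 1 gives sub-exponential growth."""
        t = np.asarray(t, dtype=float)
        if p == 1.0:
            return c0 * np.exp(r * t)
        return ((1.0 - p) * r * t + c0 ** (1.0 - p)) ** (1.0 / (1.0 - p))

    t = np.linspace(0.0, 10.0, 6)
    print("exponential (p=1.0):   ", np.round(generalized_growth(0.5, 1.0, 1.0, t), 1))
    print("sub-exponential (p=0.8):", np.round(generalized_growth(0.5, 0.8, 1.0, t), 1))
    ```

    Fitting the p = 1 curve to data generated by the p < 1 process illustrates exactly the bias in the growth parameter and the inflated forecasts that the simulation study above quantifies.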

  4. Using infrastructure optimization to reduce greenhouse gas emissions from oil sands extraction and processing.

    PubMed

    Middleton, Richard S; Brandt, Adam R

    2013-02-05

    The Alberta oil sands are a significant source of oil production and greenhouse gas emissions, and their importance will grow as the region is poised for decades of growth. We present an integrated framework that simultaneously considers economic and engineering decisions for the capture, transport, and storage of oil sands CO(2) emissions. The model optimizes CO(2) management infrastructure at a variety of carbon prices for the oil sands industry. Our study reveals several key findings. We find that the oil sands industry lends itself well to development of CO(2) trunk lines due to geographic coincidence of sources and sinks. This reduces the relative importance of transport costs compared to nonintegrated transport systems. Also, the amount of managed oil sands CO(2) emissions, and therefore the CCS infrastructure, is very sensitive to the carbon price; significant capture and storage occurs only above 110$/tonne CO(2) in our simulations. Deployment of infrastructure is also sensitive to CO(2) capture decisions and technology, particularly the fraction of capturable CO(2) from oil sands upgrading and steam generation facilities. The framework will help stakeholders and policy makers understand how CCS infrastructure, including an extensive pipeline system, can be safely and cost-effectively deployed.

  5. The limited importance of size-asymmetric light competition and growth of pioneer species in early secondary forest succession in Vietnam.

    PubMed

    van Kuijk, Marijke; Anten, N P R; Oomen, R J; van Bentum, D W; Werger, M J A

    2008-08-01

    It is generally believed that asymmetric competition for light plays a predominant role in determining the course of succession by increasing size inequalities between plants. Size-related growth is the product of size-related light capture and light-use efficiency (LUE). We have used a canopy model to calculate light capture and photosynthetic rates of pioneer species in sequential vegetation stages of a young secondary forest stand. Growth of the same saplings was followed in time as succession proceeded. Photosynthetic rate per unit plant mass (Pmass: mol C g^-1 day^-1), a proxy for plant growth, was calculated as the product of light-capture efficiency [PHImass: mol photosynthetic photon flux density (PPFD) g^-1 day^-1] and LUE (mol C mol^-1 PPFD). Species showed different morphologies and photosynthetic characteristics, but their light-capture and light-use efficiencies, and thus Pmass, did not differ much. This was also observed in the field: plant growth was not size-asymmetric. The size hierarchy present from the very beginning of succession remained for at least the first 5 years. We conclude, therefore, that in slow-growing regenerating vegetation stands, the importance of asymmetric competition for light and growth can be much less than is often assumed.
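The decomposition used in this abstract can be made concrete with hypothetical numbers: a sapling that captures less light per gram of biomass can reach the same growth proxy through higher light-use efficiency.

```python
def p_mass(phi_mass, lue):
    # Pmass (mol C g^-1 day^-1) = PHImass (mol PPFD g^-1 day^-1) * LUE (mol C mol^-1 PPFD)
    return phi_mass * lue

# Two hypothetical saplings with contrasting morphologies:
efficient_user = p_mass(phi_mass=0.020, lue=0.030)  # lower capture, higher LUE
big_capturer = p_mass(phi_mass=0.030, lue=0.020)    # higher capture, lower LUE
```

Equal products illustrate the study's finding: differences in light capture and light-use efficiency can compensate for one another, leaving Pmass, and hence growth, similar across species.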

  6. Integrating resource selection information with spatial capture-recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  7. The effect of capturing the correct turbulence dissipation rate in BHR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarzkopf, John Dennis; Ristorcelli, Raymond

    In this manuscript, we discuss the shortcoming of a quasi-equilibrium assumption made in the BHR closure model. Turbulence closure models generally assume fully developed turbulence, which is not applicable to 1) non-equilibrium turbulence (e.g., a change in mean pressure gradient) or 2) laminar-turbulence transition flows. Based on DNS data, we show that the current BHR dissipation equation [modeled based on the fully developed turbulence phenomenology] does not capture important features of non-equilibrium flows. To demonstrate our thesis, we use the BHR equations to predict a non-equilibrium flow both with the BHR dissipation and the dissipation from DNS. We find that the prediction can be substantially improved, both qualitatively and quantitatively, with the correct dissipation rate. We conclude that a new set of non-equilibrium phenomenological assumptions must be used to develop a new model equation for the dissipation to accurately predict the turbulence time scale used by other models.

  8. A unified model explains commonness and rarity on coral reefs.

    PubMed

    Connolly, Sean R; Hughes, Terry P; Bellwood, David R

    2017-04-01

    Abundance patterns in ecological communities have important implications for biodiversity maintenance and ecosystem functioning. However, ecological theory has been largely unsuccessful at capturing multiple macroecological abundance patterns simultaneously. Here, we propose a parsimonious model that unifies widespread ecological relationships involving local aggregation, species-abundance distributions, and species associations, and we test this model against the metacommunity structure of reef-building corals and coral reef fishes across the western and central Pacific. For both corals and fishes, the unified model simultaneously captures local species-abundance distributions, interspecific variation in the strength of spatial aggregation, patterns of community similarity, species accumulation, and regional species richness remarkably well, performing far better than alternative models also examined here and in previous work on coral reefs. Our approach contributes to the development of synthetic theory for large-scale patterns of community structure in nature, and to addressing ongoing challenges in biodiversity conservation at macroecological scales. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  9. RACER: a Coarse-Grained RNA Model for Capturing Folding Free Energy in Molecular Dynamics Simulations

    NASA Astrophysics Data System (ADS)

    Cheng, Sara; Bell, David; Ren, Pengyu

    RACER is a coarse-grained RNA model that can be used in molecular dynamics simulations to predict native structures and sequence-specific variation of free energy of various RNA structures. RACER is capable of accurate prediction of native structures of duplexes and hairpins (average RMSD of 4.15 angstroms), and RACER can capture sequence-specific variation of free energy in excellent agreement with experimentally measured stabilities (r-squared = 0.98). The RACER model implements a new effective non-bonded potential and re-parameterization of hydrogen bond and Debye-Huckel potentials. Insights from the RACER model include the importance of treating pairing and stacking interactions separately in order to distinguish folded from unfolded states, and the identification of hydrogen-bonding, base stacking, and electrostatic interactions as essential driving forces for RNA folding. Future applications of the RACER model include predicting free energy landscapes of more complex RNA structures and use of RACER for multiscale simulations.
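The Debye-Hückel electrostatics mentioned above have the standard screened-Coulomb form; a minimal sketch follows (the constants are generic textbook approximations for water at 298 K, not RACER's fitted parameters).

```python
import math

def debye_huckel_energy(r_nm, q1=-1.0, q2=-1.0, ionic_strength=0.1, temp_k=298.0):
    # Screened Coulomb (Debye-Huckel) interaction between two point charges:
    #   U(r) = kB*T * lB * q1 * q2 * exp(-r / lambda_D) / r
    # lB ~ 0.714 nm: Bjerrum length of water at 298 K.
    # lambda_D ~ 0.304 / sqrt(I) nm: Debye length for a monovalent salt of molarity I.
    bjerrum_nm = 0.714
    debye_nm = 0.304 / math.sqrt(ionic_strength)
    kbt_joule = 1.380649e-23 * temp_k
    return kbt_joule * bjerrum_nm * q1 * q2 * math.exp(-r_nm / debye_nm) / r_nm

u_near = debye_huckel_energy(1.0)  # two backbone charges 1 nm apart: repulsive
u_far = debye_huckel_energy(2.0)   # salt screening makes the repulsion decay quickly
```

The exponential screening is why monovalent salt concentration strongly modulates the electrostatic contribution to RNA folding stability.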

  10. Gyrofluid Modeling of Turbulent, Kinetic Physics

    NASA Astrophysics Data System (ADS)

    Despain, Kate Marie

    2011-12-01

    Gyrofluid models to describe plasma turbulence combine the advantages of fluid models, such as lower dimensionality and well-developed intuition, with those of gyrokinetics models, such as finite Larmor radius (FLR) effects. This allows gyrofluid models to be more tractable computationally while still capturing much of the physics related to the FLR of the particles. We present a gyrofluid model derived to capture the behavior of slow solar wind turbulence and describe the computer code developed to implement the model. In addition, we describe the modifications we made to a gyrofluid model and code that simulate plasma turbulence in tokamak geometries. Specifically, we describe a nonlinear phase mixing phenomenon, part of the E × B term, that was previously missing from the model. An inherently FLR effect, it plays an important role in predicting turbulent heat flux and diffusivity levels for the plasma. We demonstrate this importance by comparing results from the updated code to studies done previously by gyrofluid and gyrokinetic codes. We further explain what would be necessary to couple the updated gyrofluid code, gryffin, to a turbulent transport code, thus allowing gryffin to play a role in predicting profiles for fusion devices such as ITER and to explore novel fusion configurations. Such a coupling would require the use of Graphical Processing Units (GPUs) to make the modeling process fast enough to be viable. Consequently, we also describe our experience with GPU computing and demonstrate that we are poised to complete a gryffin port to this innovative architecture.

  11. Neuronal network model of interictal and recurrent ictal activity

    NASA Astrophysics Data System (ADS)

    Lopes, M. A.; Lee, K.-E.; Goltsev, A. V.

    2017-12-01

    We propose a neuronal network model which undergoes a saddle node on an invariant circle bifurcation as the mechanism of the transition from the interictal to the ictal (seizure) state. In the vicinity of this transition, the model captures important dynamical features of both interictal and ictal states. We study the nature of interictal spikes and early warnings of the transition predicted by this model. We further demonstrate that recurrent seizures emerge due to the interaction between two networks.
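The saddle node on an invariant circle (SNIC) mechanism invoked above has a standard normal form, the theta neuron; a minimal sketch (drive values hypothetical, not the paper's network model) shows the quiescent/oscillatory switch that stands in for the interictal-to-ictal transition.

```python
import math

def simulate_theta(drive, steps=200_000, dt=1e-3):
    # Theta-neuron normal form of a SNIC bifurcation:
    #   d(theta)/dt = 1 - cos(theta) + (1 + cos(theta)) * drive
    # drive < 0: a stable/unstable fixed-point pair exists (quiescent, "interictal");
    # drive > 0: the pair has annihilated and the phase rotates (spiking, "ictal").
    theta, rotations = -math.pi / 2, 0
    for _ in range(steps):
        theta += dt * (1 - math.cos(theta) + (1 + math.cos(theta)) * drive)
        if theta > math.pi:          # one full pass through the firing phase
            theta -= 2 * math.pi
            rotations += 1
    return rotations

interictal = simulate_theta(-0.1)   # settles onto the stable fixed point
ictal = simulate_theta(0.1)         # sustained oscillation past the bifurcation
```

Near the bifurcation (drive just below zero) the dynamics slow down dramatically, which is the regime in which early-warning signatures of the transition can appear.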

  12. Predicting Culex pipiens/restuans population dynamics by interval lagged weather data

    PubMed Central

    2013-01-01

    Background Culex pipiens/restuans mosquitoes are important vectors for a variety of arthropod borne viral infections. In this study, the associations between 20 years of mosquito capture data and the time lagged environmental quantities daytime length, temperature, precipitation, relative humidity and wind speed were used to generate a predictive model for the population dynamics of this vector species. Methods Mosquito population in the study area was represented by averaged time series of mosquito counts captured at 6 sites in Cook County (Illinois, USA). Cross-correlation maps (CCMs) were compiled to investigate the association between mosquito abundances and environmental quantities. The results obtained from the CCMs were incorporated into a Poisson regression to generate a predictive model. To optimize the predictive model the time lags obtained from the CCMs were adjusted using a genetic algorithm. Results CCMs for weekly data showed a highly positive correlation of mosquito abundances with daytime length 4 to 5 weeks prior to capture (quantified by a Spearman rank order correlation of rS = 0.898) and with temperature during the 2 weeks prior to capture (rS = 0.870). Maximal negative correlations were found for wind speed averaged over the 3 weeks prior to capture (rS = −0.621). Cx. pipiens/restuans population dynamics was predicted by integrating the CCM results in Poisson regression models. They were used to simulate the average seasonal cycle of the mosquito abundance. Verification with observations resulted in a correlation of rS = 0.899 for daily and rS = 0.917 for weekly data. Applying the optimized models to the entire 20-year time series also resulted in a suitable fit with rS = 0.876 for daily and rS = 0.899 for weekly data. Conclusions The study demonstrates the application of interval lagged weather data to predict mosquito abundances with feasible accuracy, especially when related to weekly Cx. pipiens/restuans populations. PMID:23634763
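A cross-correlation map scans start and end lags of a window-averaged predictor against the count series; a minimal synthetic sketch of that scan (Pearson rather than Spearman correlation, for brevity, and entirely made-up data):

```python
import numpy as np

def lag_window_mean(x, start_lag, end_lag):
    # Average of predictor x over the lag interval [start_lag, end_lag] weeks
    # before each observation -- the quantity scanned in a cross-correlation map.
    out = np.full(len(x), np.nan)
    for t in range(end_lag, len(x)):
        out[t] = x[t - end_lag : t - start_lag + 1].mean()
    return out

rng = np.random.default_rng(0)
weeks = np.arange(104)
temp = 10 + 8 * np.sin(2 * np.pi * weeks / 52)                    # synthetic seasonal temperature
counts = np.exp(0.25 * np.roll(temp, 2)) + rng.normal(0, 1, 104)  # respond to temp ~2 weeks back

# Scan all lag windows up to 3 weeks; keep the best-correlated one.
best = max(
    ((s, e, np.corrcoef(lag_window_mean(temp, s, e)[4:], counts[4:])[0, 1])
     for s in range(4) for e in range(s, 4)),
    key=lambda w: w[2],
)
```

On this synthetic series, the best window brackets the true 2-week lag, which is the kind of result the CCM feeds into the Poisson regression step.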

  13. Predicting mixed-gas adsorption equilibria on activated carbon for precombustion CO2 capture.

    PubMed

    García, S; Pis, J J; Rubiera, F; Pevida, C

    2013-05-21

    We present experimentally measured adsorption isotherms of CO2, H2, and N2 on a phenol-formaldehyde resin-based activated carbon, which had been previously synthesized for the separation of CO2 in a precombustion capture process. The single component adsorption isotherms were measured in a magnetic suspension balance at three different temperatures (298, 318, and 338 K) and over a large range of pressures (from 0 to 3000-4000 kPa). These values cover the temperature and pressure conditions likely to be found in a precombustion capture scenario, where CO2 needs to be separated from a CO2/H2/N2 gas stream at high pressure (~1000-1500 kPa) and with a high CO2 concentration (~20-40 vol %). Data on the pure component isotherms were correlated using the Langmuir, Sips, and dual-site Langmuir (DSL) models, i.e., a two-, three-, and four-parameter model, respectively. By using the pure component isotherm fitting parameters, adsorption equilibrium was then predicted for multicomponent gas mixtures by the extended models. The DSL model was formulated considering the energetic site-matching concept, recently addressed in the literature. Experimental gas-mixture adsorption equilibrium data were calculated from breakthrough experiments conducted in a lab-scale fixed-bed reactor and compared with the predictions from the models. Breakthrough experiments were carried out at a temperature of 318 K and five different pressures (300, 500, 1000, 1500, and 2000 kPa) where two different CO2/H2/N2 gas mixtures were used as the feed gas in the adsorption step. The DSL model was found to be the one that most accurately predicted the CO2 adsorption equilibrium in the multicomponent mixture. The results presented in this work highlight the importance of experimental measurements of mixture adsorption equilibria, which are essential for discriminating between models and selecting the one that most closely reflects the actual process.
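A pure-component Langmuir fit of the kind compared above can be sketched with synthetic data (parameter values hypothetical, not the paper's): the linearized form p/q = p/q_max + 1/(q_max·b) turns the two-parameter fit into ordinary least squares.

```python
import numpy as np

def langmuir(p_kpa, q_max, b):
    # Langmuir isotherm: loading q = q_max * b * p / (1 + b * p)
    return q_max * b * p_kpa / (1 + b * p_kpa)

# Synthetic "measured" CO2 loadings from known parameters
# (q_max = 8 mmol/g, b = 2e-3 kPa^-1; both illustrative).
p = np.array([100.0, 300.0, 500.0, 1000.0, 1500.0, 2000.0, 3000.0])
q = langmuir(p, 8.0, 2e-3)

# Linearization: p/q = p/q_max + 1/(q_max*b) is linear in p,
# so a least-squares line recovers both parameters.
slope, intercept = np.polyfit(p, p / q, 1)
q_max_fit = 1.0 / slope
b_fit = slope / intercept
```

The Sips and dual-site Langmuir models add parameters to this form; the paper's point is that only mixture measurements can discriminate between such closely fitting alternatives.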

  14. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models

    PubMed Central

    Whittington, Jesse; Sawaya, Michael A.

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal’s home range centre and trap locations on detection probability. Most studies comparing biases of non-spatial and spatial capture-recapture models have focused on single-year models, and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786–1.071) for females, 0.844 (0.703–0.975) for males, and 0.882 (0.779–0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758–1.024) for females, 0.825 (0.700–0.948) for males, and 0.863 (0.771–0.957) for both sexes.
The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park’s population of grizzly bears requires continued conservation-oriented management actions. PMID:26230262
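The core ingredient distinguishing spatial from non-spatial capture-recapture is an encounter probability that decays with distance from the animal's home-range centre; the commonly used half-normal form can be sketched as follows (g0 and sigma values hypothetical):

```python
import math

def detection_prob(trap_xy, centre_xy, g0=0.2, sigma=2.0):
    # Half-normal encounter model used in spatial capture-recapture:
    #   p(d) = g0 * exp(-d^2 / (2 * sigma^2)),
    # where d is the distance from the home-range centre to the trap,
    # g0 is baseline detection at the centre, and sigma scales space use.
    d2 = (trap_xy[0] - centre_xy[0]) ** 2 + (trap_xy[1] - centre_xy[1]) ** 2
    return g0 * math.exp(-d2 / (2 * sigma ** 2))

near = detection_prob((1.0, 0.0), (0.0, 0.0))  # trap close to the centre
far = detection_prob((6.0, 0.0), (0.0, 0.0))   # trap far from the centre
```

Because non-spatial models ignore this distance structure, heterogeneity in exposure to traps leaks into the survival and recruitment estimates, producing the biases the simulations reveal.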

  15. NATO Human View Architecture and Human Networks

    NASA Technical Reports Server (NTRS)

    Handley, Holly A. H.; Houston, Nancy P.

    2010-01-01

    The NATO Human View is a system architectural viewpoint that focuses on the human as part of a system. Its purpose is to capture the human requirements and to inform on how the human impacts the system design. The viewpoint contains seven static models that include different aspects of the human element, such as roles, tasks, constraints, training and metrics. It also includes a Human Dynamics component to perform simulations of the human system under design. One of the static models, termed Human Networks, focuses on the human-to-human communication patterns that occur as a result of ad hoc or deliberate team formation, especially teams distributed across space and time. Parameters of human teams that affect system performance can be captured in this model. Human centered aspects of networks, such as differences in operational tempo (sense of urgency), priorities (common goal), and team history (knowledge of the other team members), can be incorporated. The information captured in the Human Network static model can then be included in the Human Dynamics component so that the impact of distributed teams is represented in the simulation. As the NATO militaries transform to a more networked force, the Human View architecture is an important tool that can be used to make recommendations on the proper mix of technological innovations and human interactions.

  16. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.

    Increased coupling between critical infrastructure networks, such as power and communication systems, has important implications for the reliability and security of these systems. To understand the effects of power-communication coupling, several researchers have studied models of interdependent networks and reported that increased coupling can increase vulnerability. However, these conclusions come largely from models that have substantially different mechanisms of cascading failure, relative to those found in actual power and communication networks, and that do not capture the benefits of connecting systems with complementary capabilities. In order to understand the importance of these details, this paper compares network vulnerability in simple topological models and in models that more accurately capture the dynamics of cascading in power systems. First, we compare a simple model of topological contagion to a model of cascading in power systems and find that the power grid model shows a higher level of vulnerability, relative to the contagion model. Second, we compare a percolation model of topological cascading in coupled networks to three different models of power networks coupled to communication systems. Again, the more accurate models suggest very different conclusions than the percolation model. In all but the most extreme case, the physics-based power grid models indicate that increased power-communication coupling decreases vulnerability. This is opposite from what one would conclude from the percolation model, in which zero coupling is optimal. Only in an extreme case, in which communication failures immediately cause grid failures, did we find that increased coupling can be harmful. Together, these results suggest design strategies for reducing the risk of cascades in interdependent infrastructure systems.

  17. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    DOE PAGES

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.; ...

    2017-03-20

    Increased coupling between critical infrastructure networks, such as power and communication systems, has important implications for the reliability and security of these systems. To understand the effects of power-communication coupling, several researchers have studied models of interdependent networks and reported that increased coupling can increase vulnerability. However, these conclusions come largely from models that have substantially different mechanisms of cascading failure, relative to those found in actual power and communication networks, and that do not capture the benefits of connecting systems with complementary capabilities. In order to understand the importance of these details, this paper compares network vulnerability in simple topological models and in models that more accurately capture the dynamics of cascading in power systems. First, we compare a simple model of topological contagion to a model of cascading in power systems and find that the power grid model shows a higher level of vulnerability, relative to the contagion model. Second, we compare a percolation model of topological cascading in coupled networks to three different models of power networks coupled to communication systems. Again, the more accurate models suggest very different conclusions than the percolation model. In all but the most extreme case, the physics-based power grid models indicate that increased power-communication coupling decreases vulnerability. This is opposite from what one would conclude from the percolation model, in which zero coupling is optimal. Only in an extreme case, in which communication failures immediately cause grid failures, did we find that increased coupling can be harmful. Together, these results suggest design strategies for reducing the risk of cascades in interdependent infrastructure systems.

  18. The mechanisms underlying overgeneral autobiographical memory: an evaluative review of evidence for the CaR-FA-X model.

    PubMed

    Sumner, Jennifer A

    2012-02-01

    Overgeneral autobiographical memory (OGM) has been found to be an important cognitive phenomenon with respect to depression and trauma-related psychopathology (e.g., posttraumatic stress disorder), and researchers have been interested in better understanding the factors that contribute to this proposed vulnerability factor. The most prominent model of mechanisms underlying OGM to date is Williams et al.'s (2007) CaR-FA-X model. This model proposes that three processes influence OGM: capture and rumination, functional avoidance, and impaired executive control. The author reviews the current state of support for the CaR-FA-X model by evaluating 38 studies that have examined OGM and one or more mechanisms of the model. Collectively, these studies reveal robust support for associations between OGM and both rumination and impaired executive control. OGM also appears to be a cognitive avoidance strategy, and there is evidence that avoiding the retrieval of specific memories reduces distress after an aversive event, at least in the short term. Important issues that have been left unresolved are highlighted, including the nature of the capture phenomenon, the role of trauma in functional avoidance, and the developmental nature of functional avoidance. Recommendations for future research that will enhance understanding of the factors that contribute to OGM are suggested. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with the largest variability in high-dimensional turbulent system and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea in the reduced-order method is from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to display the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are used to capture the crucial passive tracer field advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities, such as the tracer spectrum and the fat tails of the tracer probability density functions at the most important large scales, can be captured efficiently and accurately by the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.

  20. Coffee Agroforests Remain Beneficial for Neotropical Bird Community Conservation across Seasons

    PubMed Central

    Peters, Valerie E.; Cooper, Robert J.; Carroll, C. Ron

    2013-01-01

    Coffee agroforestry systems and secondary forests have been shown to support similar bird communities, but comparing these habitat types is challenged by potential biases due to differences in detectability between habitats. Furthermore, seasonal dynamics may influence bird communities differently in different habitat types and therefore seasonal effects should be considered in comparisons. To address these issues, we incorporated seasonal effects and factors potentially affecting bird detectability into models to compare avian community composition and dynamics between coffee agroforests and secondary forest fragments. In particular, we modeled community composition and community dynamics of bird functional groups based on habitat type (coffee agroforest vs. secondary forest) and season while accounting for variation in capture probability (i.e. detectability). The models we used estimated capture probability to be similar between habitat types for each dietary guild, but omnivores had a lower capture probability than frugivores and insectivores. Although apparent species richness was higher in coffee agroforest than secondary forest, model results indicated that omnivores and insectivores were more common in secondary forest when accounting for heterogeneity in capture probability. Our results largely support the notion that shade-coffee can serve as a surrogate habitat for secondary forest with respect to avian communities. Small coffee agroforests embedded within the typical tropical countryside matrix of secondary forest patches and small-scale agriculture, therefore, may host avian communities that resemble those of surrounding secondary forest, and may serve as viable corridors linking patches of forest within these landscapes. This information is an important step toward effective landscape-scale conservation in Neotropical agricultural landscapes. PMID:24058437

  1. A holographic model for the fractional quantum Hall effect

    NASA Astrophysics Data System (ADS)

    Lippert, Matthew; Meyer, René; Taliotis, Anastasios

    2015-01-01

    Experimental data for fractional quantum Hall systems can to a large extent be explained by assuming the existence of a Γ0(2) modular symmetry group commuting with the renormalization group flow and hence mapping different phases of two-dimensional electron gases into each other. Based on this insight, we construct a phenomenological holographic model which captures many features of the fractional quantum Hall effect. Using a modular-invariant Einstein-Maxwell-axio-dilaton theory capturing the important modular transformation properties of quantum Hall physics, we find dyonic dilatonic black hole solutions which are gapped and have a Hall conductivity equal to the filling fraction, as expected for quantum Hall states. We also provide several technical results on the general behavior of the gauge field fluctuations around these dyonic dilatonic black hole solutions: we specify a sufficient criterion for IR normalizability of the fluctuations, demonstrate the preservation of the gap under the modular group action, and prove that the singularity of the fluctuation problem in the presence of a magnetic field is an accessory singularity. We finish with a preliminary investigation of the possible IR scaling solutions of our model and some speculations on how they could be important for the observed universality of quantum Hall transitions.

  2. Separable Bilayer Microfiltration Device for Viable Label-free Enrichment of Circulating Tumour Cells

    NASA Astrophysics Data System (ADS)

    Zhou, Ming-Da; Hao, Sijie; Williams, Anthony J.; Harouaka, Ramdane A.; Schrand, Brett; Rawal, Siddarth; Ao, Zheng; Brennaman, Randall; Gilboa, Eli; Lu, Bo; Wang, Shuwen; Zhu, Jiyue; Datar, Ram; Cote, Richard; Tai, Yu-Chong; Zheng, Si-Yang

    2014-12-01

    The analysis of circulating tumour cells (CTCs) in cancer patients could provide important information for therapeutic management. Enrichment of viable CTCs could permit performance of functional analyses on CTCs to broaden understanding of metastatic disease. However, this has not been widely accomplished. Addressing this challenge, we present a separable bilayer (SB) microfilter for viable size-based CTC capture. Unlike other single-layer CTC microfilters, the precise gap between the two layers and the architecture of pore alignment result in drastic reduction in mechanical stress on CTCs, capturing them viably. Using multiple cancer cell lines spiked in healthy donor blood, the SB microfilter demonstrated high capture efficiency (78-83%), high retention of cell viability (71-74%), high tumour cell enrichment against leukocytes (1.7-2 × 10^3), and widespread ability to establish cultures post-capture (100% of cell lines tested). In a metastatic mouse model, SB microfilters successfully enriched viable mouse CTCs from 0.4-0.6 mL whole mouse blood samples and established in vitro cultures for further genetic and functional analysis. Our preliminary studies reflect the efficacy of the SB microfilter device to efficiently and reliably enrich viable CTCs in animal model studies, constituting an exciting technology for new insights in cancer research.

  3. Electricity from fossil fuels without CO2 emissions: assessing the costs of carbon dioxide capture and sequestration in U.S. electricity markets.

    PubMed

    Johnson, T L; Keith, D W

    2001-10-01

    The decoupling of fossil-fueled electricity production from atmospheric CO2 emissions via CO2 capture and sequestration (CCS) is increasingly regarded as an important means of mitigating climate change at a reasonable cost. Engineering analyses of CO2 mitigation typically compare the cost of electricity for a base generation technology to that for a similar plant with CO2 capture and then compute the carbon emissions mitigated per unit of cost. It can be hard to interpret mitigation cost estimates from this plant-level approach when a consistent base technology cannot be identified. In addition, neither engineering analyses nor general equilibrium models can capture the economics of plant dispatch. A realistic assessment of the costs of carbon sequestration as an emissions abatement strategy in the electric sector therefore requires a systems-level analysis. We discuss various frameworks for computing mitigation costs and introduce a simplified model of electric sector planning. Results from a "bottom-up" engineering-economic analysis for a representative U.S. North American Electric Reliability Council (NERC) region illustrate how the penetration of CCS technologies and the dispatch of generating units vary with the price of carbon emissions and thereby determine the relationship between mitigation cost and emissions reduction.
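The plant-level metric this abstract critiques is the standard cost of CO2 avoided; a minimal sketch with hypothetical round numbers:

```python
def cost_of_co2_avoided(coe_base, coe_capture, em_base, em_capture):
    # Plant-level mitigation-cost metric:
    #   (cost-of-electricity difference, $/MWh) / (emissions difference, tCO2/MWh)
    # = dollars per tonne of CO2 avoided, relative to a chosen base plant.
    return (coe_capture - coe_base) / (em_base - em_capture)

# Hypothetical base coal plant vs. the same plant with CO2 capture.
cost = cost_of_co2_avoided(coe_base=50.0, coe_capture=75.0,
                           em_base=0.8, em_capture=0.1)
```

The abstract's point is that this number depends entirely on the choice of base technology, which is why a systems-level dispatch analysis is needed when no consistent base plant exists.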

  4. Electricity from Fossil Fuels without CO2 Emissions: Assessing the Costs of Carbon Dioxide Capture and Sequestration in U.S. Electricity Markets.

    PubMed

    Johnson, Timothy L; Keith, David W

    2001-10-01

    The decoupling of fossil-fueled electricity production from atmospheric CO2 emissions via CO2 capture and sequestration (CCS) is increasingly regarded as an important means of mitigating climate change at a reasonable cost. Engineering analyses of CO2 mitigation typically compare the cost of electricity for a base generation technology to that for a similar plant with CO2 capture and then compute the carbon emissions mitigated per unit of cost. It can be hard to interpret mitigation cost estimates from this plant-level approach when a consistent base technology cannot be identified. In addition, neither engineering analyses nor general equilibrium models can capture the economics of plant dispatch. A realistic assessment of the costs of carbon sequestration as an emissions abatement strategy in the electric sector therefore requires a systems-level analysis. We discuss various frameworks for computing mitigation costs and introduce a simplified model of electric sector planning. Results from a "bottom-up" engineering-economic analysis for a representative U.S. North American Electric Reliability Council (NERC) region illustrate how the penetration of CCS technologies and the dispatch of generating units vary with the price of carbon emissions and thereby determine the relationship between mitigation cost and emissions reduction.

  5. A Dynamical Systems Model for Understanding Behavioral Interventions for Weight Loss

    NASA Astrophysics Data System (ADS)

    Navarro-Barrientos, J.-Emeterio; Rivera, Daniel E.; Collins, Linda M.

    We propose a dynamical systems model that captures the daily fluctuations of human weight change, incorporating both physiological and psychological factors. The model consists of an energy balance integrated with a mechanistic behavioral model inspired by the Theory of Planned Behavior (TPB); the latter describes how important variables in a behavioral intervention can influence healthy eating habits and increased physical activity over time. The model can be used to inform behavioral scientists in the design of optimized interventions for weight loss and body composition change.
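
    The energy-balance core of such a model can be sketched as a simple discrete-time simulation. All parameter values below are illustrative rules of thumb, not the calibrated values of the paper:

```python
# Minimal energy-balance sketch: weight changes by the daily surplus or deficit
# divided by the energy density of tissue (~7700 kcal/kg is a common rule of thumb).
ENERGY_DENSITY = 7700.0  # kcal per kg of body mass (assumed)

def simulate_weight(w0, intake_kcal, activity_factor=1.4, days=30):
    """Euler-step weight trajectory. Expenditure uses a crude resting-metabolic
    proxy (22 kcal/kg/day, an illustrative assumption) scaled by activity."""
    w = w0
    traj = [w]
    for _ in range(days):
        expenditure = 22.0 * w * activity_factor
        w += (intake_kcal - expenditure) / ENERGY_DENSITY
        traj.append(w)
    return traj

# A 90 kg subject eating 2200 kcal/day drifts toward a lower equilibrium weight.
traj = simulate_weight(w0=90.0, intake_kcal=2200.0, days=60)
```

    The paper's contribution is to couple such an energy balance to a behavioral (TPB-inspired) model that makes intake and activity themselves dynamic variables.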

  6. Modeling association among demographic parameters in analysis of open population capture-recapture data.

    PubMed

    Link, William A; Barker, Richard J

    2005-03-01

    We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
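
    The Metropolis-Hastings machinery the authors rely on can be illustrated with a toy stand-in for the CJS likelihood: a single survival rate with a flat prior and binomial data. This is a sketch of the sampling idea only, not the paper's candidate-generation scheme:

```python
import math
import random

def metropolis_survival(survivors, marked, n_iter=5000, step=0.05, seed=1):
    """Random-walk Metropolis sampler for a survival rate phi, with a flat
    prior and binomial likelihood (a toy stand-in for the CJS likelihood)."""
    random.seed(seed)

    def log_lik(phi):
        if not 0.0 < phi < 1.0:
            return float("-inf")
        return survivors * math.log(phi) + (marked - survivors) * math.log(1.0 - phi)

    phi, samples = 0.5, []
    for _ in range(n_iter):
        proposal = phi + random.uniform(-step, step)
        ratio = log_lik(proposal) - log_lik(phi)
        # Symmetric proposal, so accept with probability min(1, posterior ratio).
        if ratio >= 0.0 or random.random() < math.exp(ratio):
            phi = proposal
        samples.append(phi)
    return samples

samples = metropolis_survival(survivors=60, marked=100)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # near 0.6
```

    The hierarchical model in the paper replaces the flat prior with a bivariate distribution over survival and birth rates, which is what lets it capture correlation between the two.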

  7. The Importance of Capturing Topographic Features for Modeling Groundwater Flow and Transport in Mountainous Watersheds

    NASA Astrophysics Data System (ADS)

    Wang, C.; Gomez-Velez, J. D.; Wilson, J. L.

    2017-12-01

    Groundwater plays a key role in runoff generation and stream water chemistry from reach to watershed scales. The spatial distribution of ridges and streams can influence the spatial patterns of groundwater recharge and drainage, especially in mountainous terrains where these features are more prominent. However, typical modeling efforts simplify or ignore some of these features due to computational limitations, without a systematic investigation of the implications for flow and transport within the watershed. In this study, we investigate the effect of capturing key topographic features on modeled groundwater flow and transport characteristics in a mountainous watershed. We build model scenarios of different topographic complexity levels (TCLs), each representing streams and ridges at a different level of detail. Modeled baseflow and groundwater mean residence time (MRT) are used to quantify the differences among TCLs. Our results show that capturing the streams and ridges has a significant influence on simulated groundwater flow and transport patterns. Topographic complexity controls the proportion of baseflow generated from local, intermediate, and regional flow paths, thus influencing the amount and MRT of baseflow flowing into streams of different Horton-Strahler orders. We further simulate the concentration of solute exported into streams from subsurface chemical weathering. The concentration of chemical weathering products in streams is less sensitive to model TCL due to the thermodynamic constraint on the equilibrium concentration of the chemical weathering. We also tested the influence of geology on the effect of TCL. The effect of TCL is consistent under different geological conditions; however, it is enhanced in models with low hydraulic conductivity because more of the flow is forced into shallow and local flow paths. All of these changes can affect our ability to interpret environmental tracer data and predict the bio- and geochemical evolution of stream water in mountainous watersheds.

  8. Data-Model Comparisons of the October, 2002 Event Using the Space Weather Modeling Framework

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Chappell, C. R.; Schunk, R. W.; Barakat, A. R.; Eccles, V.; Glocer, A.; Kistler, L. M.; Haaland, S.; Moore, T. E.

    2014-12-01

    The September 27 - October 4, 2002 time period has been selected by the Geospace Environment Modeling Ionospheric Outflow focus group for community collaborative study because of its high magnetospheric activity and extensive data coverage. The FAST, Polar, and Cluster missions, as well as others, all made key observations during this period, creating a prime event for data-model comparisons. The GEM community has come together to simulate this period using many different methods in order to evaluate models, compare results, and expand our knowledge of ionospheric outflow and its effects on global dynamics. This paper presents Space Weather Modeling Framework (SWMF) simulations of this important period compared against observations from the Polar TIDE, Cluster CODIF and EFW instruments. Density and velocity of oxygen and hydrogen throughout the lobes, plasmasheet, and inner magnetosphere will be the focus of these comparisons. For these simulations, the SWMF couples the multifluid version of BATS-R-US MHD to a variety of ionospheric outflow models of varying complexity. The simplest is outflow arising from constant MHD inner boundary conditions. Two first-principles-based models are also leveraged: the Polar Wind Outflow Model (PWOM), a fluid treatment of outflow dynamics, and the Generalized Polar Wind (GPW) model, which combines fluid and particle-in-cell approaches. Each model is capable of capturing a different set of energization mechanisms, yielding different outflow results. The data-model comparisons will illustrate how well each approach captures reality and which energization mechanisms are most important. This work will also assess our current capability to reproduce ionosphere-magnetosphere mass coupling.

  9. Capturing spatial and temporal patterns of widespread, extreme flooding across Europe

    NASA Astrophysics Data System (ADS)

    Busby, Kathryn; Raven, Emma; Liu, Ye

    2013-04-01

    Statistical characterisation of physical hazards is an integral part of probabilistic catastrophe models used by the reinsurance industry to estimate losses from large-scale events. Extreme flood events are not restricted by country boundaries, which poses an issue for reinsurance companies as their exposures often extend beyond them. We discuss challenges and solutions that allow us to appropriately capture the spatial and temporal dependence of extreme hydrological events on a continental scale, which in turn enables us to generate an industry-standard stochastic event set for estimating financial losses from widespread flooding. In presenting our event set methodology, we focus on explaining how extreme value theory (EVT) and dependence modelling are used to account for short, inconsistent hydrological data from different countries, and how to make appropriate statistical decisions that best characterise the nature of flooding across Europe. The consistency of input data is of vital importance when identifying historical flood patterns. Collating data from numerous sources inherently causes inconsistencies, and we demonstrate our robust approach to assessing the data and refining it to compile a single consistent dataset. This dataset is then extrapolated using a parameterised EVT distribution to estimate extremes. Our method then captures the dependence of flood events across countries using an advanced multivariate extreme value model. Throughout, important statistical decisions are explored, including: (1) distribution choice; (2) the threshold to apply for extracting extreme data points; (3) a regional analysis; (4) the definition of a flood event, which is often linked to the reinsurance industry's hours clause; and (5) handling of missing values. Finally, having modelled the historical patterns of flooding across Europe, we sample from this model to generate our stochastic event set comprising thousands of events over thousands of years. We then briefly illustrate how this is applied within a probabilistic model to estimate catastrophic loss curves used by the reinsurance industry.
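
    The peaks-over-threshold step in such an EVT workflow can be sketched in a few lines. The method-of-moments fit below is a simple stand-in for the full likelihood-based fitting the authors would use, and the "flow" record is synthetic:

```python
import statistics

def fit_gpd_mom(data, threshold):
    """Fit a Generalized Pareto Distribution to threshold exceedances by the
    method of moments (a simple stand-in for full EVT likelihood fitting).
    Returns (shape xi, scale sigma, number of exceedances)."""
    exc = [x - threshold for x in data if x > threshold]
    m, v = statistics.mean(exc), statistics.variance(exc)
    xi = 0.5 * (1.0 - m * m / v)   # from GPD mean/variance relations
    sigma = m * (1.0 - xi)
    return xi, sigma, len(exc)

# Illustrative synthetic "annual peak flow" record (arbitrary units, assumed data).
flows = [120, 95, 210, 180, 160, 300, 140, 250, 175, 400, 130, 220, 190, 280, 350]
xi, sigma, n_exc = fit_gpd_mom(flows, threshold=150)
```

    The threshold choice itself is decision (2) in the abstract's list: too low a threshold biases the fit, too high a threshold leaves too few exceedances.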

  10. Spatial organization of the budding yeast genome in the cell nucleus and identification of specific chromatin interactions from multi-chromosome constrained chromatin model.

    PubMed

    Gürsoy, Gamze; Xu, Yun; Liang, Jie

    2017-07-01

    Nuclear landmarks and biochemical factors play important roles in the organization of the yeast genome. The interaction patterns of budding yeast, as measured by genome-wide 3C studies, are largely recapitulated by model polymer genomes subject to landmark constraints. However, the origin of inter-chromosomal interactions, the specific roles of individual landmarks, and the roles of biochemical factors in yeast genome organization remain unclear. Here we describe a multi-chromosome constrained self-avoiding chromatin model (mC-SAC) to gain understanding of the budding yeast genome organization. With significantly improved sampling of genome structures, both intra- and inter-chromosomal interaction patterns from genome-wide 3C studies are accurately captured in our model at higher resolution than in previous studies. We show that nuclear confinement is a key determinant of the intra-chromosomal interactions, and centromere tethering is responsible for the inter-chromosomal interactions. In addition, important genomic elements such as fragile sites and tRNA genes are found to be clustered spatially, largely due to centromere tethering. We uncovered previously unknown interactions that were not captured by genome-wide 3C studies, which are found to be enriched with tRNA genes, RNAPIII and TFIIS binding. Moreover, we identified specific high-frequency genome-wide 3C interactions that are unaccounted for by polymer effects under landmark constraints. These interactions are enriched with important genes and likely play biological roles.

  11. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.

  12. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
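
    The Volterra functional power series underlying the IO synapse model can be illustrated with a discrete-time expansion truncated at second order. The kernels below are toy values chosen for illustration, not kernels identified from the mechanistic synapse model:

```python
def volterra_response(x, h1, h2):
    """Discrete-time Volterra series truncated at second order:
    y[n] = sum_i h1[i]*x[n-i] + sum_{i,j} h2[i][j]*x[n-i]*x[n-j].
    h1 is a list of first-order kernel values; h2 a square list of lists."""
    M = len(h1)
    y = []
    for n in range(len(x)):
        acc = sum(h1[i] * x[n - i] for i in range(M) if n - i >= 0)
        acc += sum(h2[i][j] * x[n - i] * x[n - j]
                   for i in range(M) for j in range(M)
                   if n - i >= 0 and n - j >= 0)
        y.append(acc)
    return y

# Toy kernels: a decaying linear response plus a weak quadratic self-interaction.
h1 = [1.0, 0.5, 0.25]
h2 = [[0.1, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 0.0]]
y = volterra_response([1.0, 0.0, 0.0, 1.0], h1, h2)
```

    The second-order (and, in the paper, third-order) terms are what let such a representation capture nonlinear input-output behavior that a purely linear filter would miss, at far lower cost than simulating the full kinetic scheme.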

  13. (n,γ) Experiments on tin isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baramsai, B.; Mitchell, G. E.; Walker, C. L.

    2013-04-19

    Neutron capture experiments on highly enriched 117,119Sn isotopes were performed with the DANCE detector array located at the Los Alamos Neutron Science Center. The DANCE detector provides detailed information about the multi-step γ-ray cascade following neutron capture. Analysis of the experimental data provides important information to improve understanding of the neutron capture reaction, including a test of the statistical model, the assignment of spins and parities of neutron resonances, and information concerning the Photon Strength Function (PSF) and Level Density (LD) below the neutron separation energy. Preliminary results for the (n,γ) reaction on 117,119Sn are presented. Resonance spins of the odd-A tin isotopes were almost completely unknown; resonance spins and parities have been assigned via analysis of the multi-step γ-ray spectra and directional correlations.

  14. Application of a multistate model to estimate culvert effects on movement of small fishes

    USGS Publications Warehouse

    Norman, J.R.; Hagler, M.M.; Freeman, Mary C.; Freeman, B.J.

    2009-01-01

    While it is widely acknowledged that culverted road-stream crossings may impede fish passage, effects of culverts on movement of nongame and small-bodied fishes have not been extensively studied and studies generally have not accounted for spatial variation in capture probabilities. We estimated probabilities for upstream and downstream movement of small (30-120 mm standard length) benthic and water column fishes across stream reaches with and without culverts at four road-stream crossings over a 4-6-week period. Movement and reach-specific capture probabilities were estimated using multistate capture-recapture models. Although none of the culverts were complete barriers to passage, only a bottomless-box culvert appeared to permit unrestricted upstream and downstream movements by benthic fishes based on model estimates of movement probabilities. At two box culverts that were perched above the water surface at base flow, observed movements were limited to water column fishes and to intervals when runoff from storm events raised water levels above the perched level. Only a single fish was observed to move through a partially embedded pipe culvert. Estimates for probabilities of movement over distances equal to at least the length of one culvert were low (e.g., generally ≤0.03, estimated for 1-2-week intervals) and had wide 95% confidence intervals as a consequence of few observed movements to nonadjacent reaches. Estimates of capture probabilities varied among reaches by a factor of 2 to over 10, illustrating the importance of accounting for spatially variable capture rates when estimating movement probabilities with capture-recapture data. Longer-term studies are needed to evaluate temporal variability in stream fish passage at culverts (e.g., in relation to streamflow variability) and to thereby better quantify the degree of population fragmentation caused by road-stream crossings with culverts. © American Fisheries Society 2009.

  15. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

    The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.

  16. Fermionic topological quantum states as tensor networks

    NASA Astrophysics Data System (ADS)

    Wille, C.; Buerschaper, O.; Eisert, J.

    2017-06-01

    Tensor network states, and in particular projected entangled pair states, play an important role in the description of strongly correlated quantum lattice systems. They not only serve as variational states in numerical simulation methods but also provide a framework for classifying phases of quantum matter, capturing notions of topological order in a stringent and rigorous language. The rapid development in this field for spin models and bosonic systems has not yet been mirrored by an analogous development for fermionic models. In this work, we introduce a tensor network formalism capable of capturing notions of topological order for quantum systems with fermionic components. At the heart of the formalism are axioms of fermionic matrix-product operator injectivity, stable under concatenation. Building upon that, we formulate a Grassmann number tensor network ansatz for the ground state of fermionic twisted quantum double models. A specific focus is put on the paradigmatic example of the fermionic toric code. This work shows that the program of describing topologically ordered systems using tensor networks carries over to fermionic models.

  17. Modeling On-Body DTN Packet Routing Delay in the Presence of Postural Disconnections.

    PubMed

    Quwaider, Muhannad; Taghizadeh, Mahmoud; Biswas, Subir

    2011-01-01

    This paper presents a stochastic modeling framework for store-and-forward packet routing in Wireless Body Area Networks (WBAN) with postural partitioning. A prototype WBAN has been constructed for experimentally characterizing and capturing on-body topology disconnections in the presence of ultrashort range radio links, unpredictable RF attenuation, and human postural mobility. Delay modeling techniques for evaluating single-copy on-body DTN routing protocols are then developed. End-to-end routing delay for a series of protocols, including opportunistic, randomized, and two other mechanisms that capture multiscale topological localities in human postural movements, has been evaluated. The performance of the analyzed protocols is then evaluated experimentally and via simulation to compare with the results obtained from the developed model. Finally, a mechanism for evaluating the topological importance of individual on-body sensor nodes is developed. It is shown that such information can be used for selectively reducing the on-body sensor count without substantially sacrificing the packet delivery delay.

  18. Modeling On-Body DTN Packet Routing Delay in the Presence of Postural Disconnections

    PubMed Central

    Quwaider, Muhannad; Taghizadeh, Mahmoud; Biswas, Subir

    2014-01-01

    This paper presents a stochastic modeling framework for store-and-forward packet routing in Wireless Body Area Networks (WBAN) with postural partitioning. A prototype WBAN has been constructed for experimentally characterizing and capturing on-body topology disconnections in the presence of ultrashort range radio links, unpredictable RF attenuation, and human postural mobility. Delay modeling techniques for evaluating single-copy on-body DTN routing protocols are then developed. End-to-end routing delay for a series of protocols, including opportunistic, randomized, and two other mechanisms that capture multiscale topological localities in human postural movements, has been evaluated. The performance of the analyzed protocols is then evaluated experimentally and via simulation to compare with the results obtained from the developed model. Finally, a mechanism for evaluating the topological importance of individual on-body sensor nodes is developed. It is shown that such information can be used for selectively reducing the on-body sensor count without substantially sacrificing the packet delivery delay. PMID:25530749

  19. Network community-based model reduction for vortical flows

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko

    2018-06-01

    A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and to predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to the integrating nature of the system reduction.

  20. A capture-recapture survival analysis model for radio-tagged animals

    USGS Publications Warehouse

    Pollock, K.H.; Bunck, C.M.; Winterstein, S.R.; Chen, C.-L.; North, P.M.; Nichols, J.D.

    1995-01-01

    In recent years, survival analysis of radio-tagged animals has developed using methods based on the Kaplan-Meier method used in medical and engineering applications (Pollock et al., 1989a,b). An important assumption of this approach is that all tagged animals with a functioning radio can be relocated at each sampling time with probability 1. This assumption may not always be reasonable in practice. In this paper, we show how a general capture-recapture model can be derived which allows for some probability (less than one) for animals to be relocated. This model is not simply a Jolly-Seber model because it is possible to relocate both dead and live animals, unlike when traditional tagging is used. The model can also be viewed as a generalization of the Kaplan-Meier procedure, thus linking the Jolly-Seber and Kaplan-Meier approaches to survival estimation. We present maximum likelihood estimators and discuss testing between submodels. We also discuss model assumptions and their validity in practice. An example is presented based on canvasback data collected by G. M. Haramis of Patuxent Wildlife Research Center, Laurel, Maryland, USA.
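
    The Kaplan-Meier procedure that this capture-recapture model generalizes can be sketched in a few lines; the data below are toy values, with censoring standing in for radio failure rather than death:

```python
def kaplan_meier(event_times, censored):
    """Kaplan-Meier survival estimate. event_times: time of death or censoring
    for each animal; censored: parallel booleans (True = censored, e.g. radio
    lost, so the death time is unobserved). Returns [(time, S(time)), ...]."""
    death_times = sorted(set(t for t, c in zip(event_times, censored) if not c))
    surv, s = [], 1.0
    for t in death_times:
        at_risk = sum(1 for et in event_times if et >= t)
        deaths = sum(1 for et, c in zip(event_times, censored)
                     if et == t and not c)
        s *= 1.0 - deaths / at_risk   # product-limit update
        surv.append((t, s))
    return surv

# Toy data: 6 radio-tagged animals; deaths at t=2 and t=5, the rest censored.
curve = kaplan_meier([2, 3, 5, 6, 6, 7], [False, True, False, True, True, True])
```

    The key assumption this sketch shares with the classical approach, and which the paper relaxes, is that an animal's status is known with certainty at each sampling time (relocation probability 1).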

  1. Assessment of Solid Sorbent Systems for Post-Combustion Carbon Dioxide Capture at Coal-Fired Power Plants

    NASA Astrophysics Data System (ADS)

    Glier, Justin C.

    In an effort to lower future CO2 emissions, a wide range of technologies are being developed to scrub CO2 from the flue gases of fossil fuel-based electric power and industrial plants. This thesis models one of several early-stage post-combustion CO2 capture technologies, the solid sorbent-based CO2 capture process, and presents performance and cost estimates of this system on pulverized coal power plants. The spreadsheet-based software package Microsoft Excel was used in conjunction with AspenPlus modelling results and the Integrated Environmental Control Model to develop performance and cost estimates for the solid sorbent-based CO2 capture technology. A reduced order model also was created to facilitate comparisons among multiple design scenarios. Assumptions about plant financing and utilization, as well as uncertainties in heat transfer and material design that affect heat exchanger and reactor design, were found to produce a wide range of cost estimates for solid sorbent-based systems. With uncertainties included, costs for a supercritical power plant with solid sorbent-based CO2 capture ranged from $167 to $533 per megawatt-hour for a first-of-a-kind installation (with all costs in constant 2011 US dollars) based on a 90% confidence interval. The median cost was $209/MWh. Post-combustion solid sorbent-based CO2 capture technology is then evaluated in terms of the potential cost for a mature system based on historic experience as technologies are improved with sequential iterations of the currently available system. The range of costs for a supercritical power plant with solid sorbent-based CO2 capture was found to be $118 to $189 per megawatt-hour, with a nominal value of $163 per megawatt-hour, given the expected range of technological improvement in the capital and operating costs and efficiency of the power plant after 100 GW of cumulative worldwide experience. 
These results suggest that the solid sorbent-based system will not be competitive with currently available liquid amine-systems in the absence of significant new improvements in solid sorbent properties and process system design to reduce the heat exchange surface area in the regenerator and cross-flow heat exchanger. Finally, the importance of these estimates for policy makers is discussed.

  2. Comparing post-combustion CO2 capture operation at retrofitted coal-fired power plants in the Texas and Great Britain electric grids

    NASA Astrophysics Data System (ADS)

    Cohen, Stuart M.; Chalmers, Hannah L.; Webber, Michael E.; King, Carey W.

    2011-04-01

    This work analyses the carbon dioxide (CO2) capture system operation within the Electric Reliability Council of Texas (ERCOT) and Great Britain (GB) electric grids using a previously developed first-order hourly electricity dispatch and pricing model. The grids are compared in their 2006 configuration with the addition of coal-based CO2 capture retrofits and emissions penalties from 0 to 100 US dollars per metric ton of CO2 (USD/tCO2). CO2 capture flexibility is investigated by comparing inflexible CO2 capture systems to flexible ones that can choose between full- and zero-load CO2 capture depending on which operating mode has lower costs or higher profits. Comparing these two grids is interesting because they have similar installed capacity and peak demand, and both are isolated electricity systems with competitive wholesale electricity markets. However, differences in capacity mix, demand patterns, and fuel markets produce diverging behaviours of CO2 capture at coal-fired power plants. Coal-fired facilities are primarily base load in ERCOT for a large range of CO2 prices but are comparably later in the dispatch order in GB and consequently often supply intermediate load. As a result, the ability to capture CO2 is more important for ensuring dispatch of coal-fired facilities in GB than in ERCOT when CO2 prices are high. In GB, higher overall coal prices mean that CO2 prices must be slightly higher than in ERCOT before the emissions savings of CO2 capture offset capture energy costs. However, once CO2 capture is economical, operating CO2 capture on half the coal fleet in each grid achieves greater emissions reductions in GB because the total coal-based capacity is 6 GW greater than in ERCOT. The market characteristics studied suggest greater opportunity for flexible CO2 capture to improve operating profits in ERCOT, but profit improvements can be offset by a flexibility cost penalty.
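
    The full- versus zero-load capture decision described in the abstract can be caricatured as an hourly per-MWh profit comparison. All numbers and parameter values below are illustrative assumptions, not values from the study:

```python
def capture_mode(power_price, co2_price, fuel_cost, base_emissions,
                 capture_rate=0.9, energy_penalty=0.25):
    """Pick the more profitable operating mode for a coal unit with flexible
    CO2 capture. Prices in $/MWh and $/tCO2; emissions in tCO2/MWh.
    Returns ('capture' or 'vent', per-MWh operating profit)."""
    # Venting: full output is sold, but the CO2 price applies to all emissions.
    vent = power_price - fuel_cost - co2_price * base_emissions
    # Capturing: part of the output is lost to the capture energy penalty,
    # and the CO2 price applies only to residual (uncaptured) emissions.
    cap = (power_price * (1.0 - energy_penalty) - fuel_cost
           - co2_price * base_emissions * (1.0 - capture_rate))
    return ("capture", cap) if cap > vent else ("vent", vent)

mode_low, _ = capture_mode(power_price=60, co2_price=5,
                           fuel_cost=20, base_emissions=0.9)
mode_high, _ = capture_mode(power_price=60, co2_price=60,
                            fuel_cost=20, base_emissions=0.9)
```

    At a low CO2 price the emissions penalty is cheaper than the capture energy cost, so the unit vents; at a high CO2 price the comparison flips, which is the flexibility the paper evaluates across the ERCOT and GB markets.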

  3. Hyperspectral evaluation of Venturia inaequalis management using the disease predictive model RIMpro in the northeastern U.S.

    USDA-ARS?s Scientific Manuscript database

    Use of hyperspectral spectroradiometers allows information on different light bands to be captured, enabling much easier identification of plant health status. Apple scab, caused by the ascomycete Venturia inaequalis, is globally the most important disease in the production of apples. RIMpro...

  4. Message Integrity Model for Wireless Sensor Networks

    ERIC Educational Resources Information Center

    Qleibo, Haider W.

    2009-01-01

    WSNs are susceptible to a variety of attacks. These attacks vary in the way they are performed and executed; they include, but are not limited to, node capture, physical tampering, denial of service, and message alteration. It is of paramount importance to protect data gathered by WSNs and defend the network against illegal access and malicious…

  5. Gravitational Capture of Small Bodies by Gas Drag Developed Using Hydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Pereira de Lima, Nicole; Neto, E. V.

    2013-05-01

    The giant planets of the Solar System have two kinds of satellites, the regular and the irregular ones. The irregular ones are supposed to have come from other regions and been captured by the planet. Using the dynamics of the three-body problem it is possible to explain the gravitational capture of these satellites, except for the fact that such captures are only temporary. For this reason an additional effect is needed to turn these temporary captures into permanent ones. In this work we explore the gas drag mechanism. In the last stage of the giant planets' formation, a gas envelope formed around each of them. During the flyby of a satellite this envelope can dissipate enough energy to make it a "prisoner" of the planet. We have made some simulations considering the classical case, in which the gas was characterized by ordinary differential equations describing its velocity and density. However, this model is a simplified case. To make our model more realistic we use the hydrodynamic model, which required some modifications to the earlier code. One important change was the way the gas is described: in the new model a region (called a cell), rather than a point, is used to characterize the gas. After making some adjustments we checked the precision of the cells and verified their correlation with other parameters. At this step we have to test the new code, trying to reproduce and improve all results obtained before. Meanwhile we are using the software Fargo, which creates the hydrodynamic gas to be used as input to the code. After this analysis we will let the gas evolve in time in order to acquire a higher level of realism in this study.

  6. Direct Capture Technologies for Genomics-Guided Discovery of Natural Products.

    PubMed

    Chan, Andrew N; Santa Maria, Kevin C; Li, Bo

    2016-01-01

    Microbes are important producers of natural products, which have played key roles in understanding biology and treating disease. However, the full potential of microbes to produce natural products has yet to be realized; the overwhelming majority of natural product gene clusters encoded in microbial genomes remain "cryptic", and have not been expressed or characterized. In contrast to the fast-growing number of genomic sequences and bioinformatic tools, methods to connect these genes to natural product molecules are still limited, creating a bottleneck in genome-mining efforts to discover novel natural products. Here we review developing technologies that leverage the power of homologous recombination to directly capture natural product gene clusters and express them in model hosts for isolation and structural characterization. Although direct capture is still in its early stages of development, it has been successfully utilized in several different classes of natural products. These early successes will be reviewed, and the methods will be compared and contrasted with existing traditional technologies. Lastly, we will discuss the opportunities for the development of direct capture in other organisms, and possibilities to integrate direct capture with emerging genome-editing techniques to accelerate future study of natural products.

  7. A Multiscale Virtual Fabrication and Lattice Modeling Approach for the Fatigue Performance Prediction of Asphalt Concrete

    NASA Astrophysics Data System (ADS)

    Dehghan Banadaki, Arash

    Predicting the ultimate performance of asphalt concrete under realistic loading conditions is the main key to developing better-performing materials, designing long-lasting pavements, and performing reliable lifecycle analysis for pavements. The fatigue performance of asphalt concrete depends on the mechanical properties of the constituent materials, namely asphalt binder and aggregate. This dependent link between performance and mechanical properties is extremely complex, and experimental techniques often are used to try to characterize the performance of hot mix asphalt. However, given the seemingly uncountable number of mixture designs and loading conditions, it is simply not economical to try to understand and characterize the material behavior solely by experimentation. It is well known that analytical and computational modeling methods can be combined with experimental techniques to reduce the costs associated with understanding and characterizing the mechanical behavior of the constituent materials. This study aims to develop a multiscale micromechanical lattice-based model to predict cracking in asphalt concrete using component material properties. The proposed algorithm, while capturing different phenomena for different scales, also minimizes the need for laboratory experiments. The developed methodology builds on a previously developed lattice model and the viscoelastic continuum damage model to link the component material properties to the mixture fatigue performance. The resulting lattice model is applied to predict the dynamic modulus mastercurves for different scales. A framework for capturing the so-called structuralization effects is introduced that significantly improves the accuracy of the modulus prediction. Furthermore, air voids are added to the model to help capture this important micromechanical feature that affects the fatigue performance of asphalt concrete as well as the modulus value. 
The effects of rate dependency are captured by implementing the viscoelastic fracture criterion. In the end, an efficient cyclic loading framework is developed to evaluate the damage accumulation in the material that is caused by long-sustained cyclic loads.

  8. Analysis of accuracy of digital elevation models created from captured data by digital photogrammetry method

    NASA Astrophysics Data System (ADS)

    Hudec, P.

    2011-12-01

    A digital elevation model (DEM) is an important part of many geoinformatic applications. For the creation of DEM, spatial data collected by geodetic measurements in the field, photogrammetric processing of aerial survey photographs, laser scanning and secondary sources (analogue maps) are used. It is very important from a user's point of view to know the vertical accuracy of a DEM. The article describes the verification of the vertical accuracy of a DEM for the region of Medzibodrožie, which was created using digital photogrammetry for the purposes of water resources management and modeling and resolving flood cases based on geodetic measurements in the field.
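    Vertical accuracy checks of this kind are usually summarized by the root-mean-square error of DEM elevations against independently surveyed check points. A minimal sketch with invented heights (the actual assessment relied on field geodetic measurements):

```python
import math

def dem_rmse(dem_heights, checkpoint_heights):
    """Root-mean-square error of DEM elevations against surveyed
    check points (both lists in the same units, e.g. metres)."""
    residuals = [d - c for d, c in zip(dem_heights, checkpoint_heights)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# hypothetical DEM heights vs. surveyed check-point heights (m)
rmse = dem_rmse([101.2, 99.8, 100.5], [101.0, 100.0, 100.3])
```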

  9. Modelling and simulation techniques for membrane biology.

    PubMed

    Burrage, Kevin; Hancock, John; Leier, André; Nicolau, Dan V

    2007-07-01

    One of the most important aspects of Computational Cell Biology is the understanding of the complicated dynamical processes that take place on plasma membranes. These processes are often so complicated that purely temporal models cannot always adequately capture the dynamics. On the other hand, spatial models can have large computational overheads. In this article, we review some of these issues with respect to chemistry, membrane microdomains and anomalous diffusion and discuss how to select appropriate modelling and simulation paradigms based on some or all the following aspects: discrete, continuous, stochastic, delayed and complex spatial processes.

  10. Assessing the ability of potential evapotranspiration models in capturing dynamics of evaporative demand across various biomes and climatic regimes with ChinaFLUX measurements

    NASA Astrophysics Data System (ADS)

    Zheng, Han; Yu, Guirui; Wang, Qiufeng; Zhu, Xianjin; Yan, Junhua; Wang, Huimin; Shi, Peili; Zhao, Fenghua; Li, Yingnian; Zhao, Liang; Zhang, Junhui; Wang, Yanfen

    2017-08-01

    Estimates of atmospheric evaporative demand have been widely required for a variety of hydrological analyses, with potential evapotranspiration (PET) being an important measure representing evaporative demand of actual vegetated surfaces under given meteorological conditions. In this study, we assessed the ability of various PET models in capturing long-term (typically 2003-2011) dynamics of evaporative demand at eight ecosystems across various biomes and climatic regimes in China. Prior to assessing PET dynamics, we first examined the reasonability of fourteen PET models in representing the magnitudes of evaporative demand using eddy-covariance actual evapotranspiration (AET) as an indicator. Results showed that the robustness of the fourteen PET models differed somewhat across the sites, and only three PET models could produce reasonable magnitudes of evaporative demand (i.e., PET ≥ AET on average) for all eight sites: the (i) Penman, (ii) Priestley-Taylor, and (iii) Linacre models. Then, we assessed the ability of these three PET models in capturing dynamics of evaporative demand by comparing the annual and seasonal trends in PET against the equivalent trends in AET and precipitation (P) for particular sites. Results indicated that all three PET models could largely reproduce the dynamics in evaporative demand under energy-limited conditions at both annual and seasonal scales, while only the Penman and Linacre models could represent dynamics in evaporative demand under water-limited conditions. However, the Linacre model was unable to reproduce the seasonal switches between water- and energy-limited states for some sites. Our findings demonstrate that the choice of PET model is essential for evaporative demand analyses and other related hydrological analyses at different temporal and spatial scales.
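    Of the models named above, the Priestley-Taylor formulation is compact enough to sketch directly. The coefficients (alpha = 1.26, psychrometric constant gamma, latent heat of vaporization lam) and the input values below are common textbook defaults chosen for illustration, not site-calibrated values from this study:

```python
import math

def slope_svp(t_c):
    """Slope of the saturation vapour pressure curve (kPa/degC)
    at air temperature t_c (degC), via the Tetens formula."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def priestley_taylor_pet(rn, g, t_c, alpha=1.26, gamma=0.066, lam=2.45):
    """Daily Priestley-Taylor PET (mm/day) from net radiation rn and
    soil heat flux g (both MJ m-2 day-1) and air temperature t_c (degC)."""
    delta = slope_svp(t_c)
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

# an illustrative mid-latitude summer day
pet = priestley_taylor_pet(rn=12.0, g=1.0, t_c=20.0)
```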

  11. Applying citizen-science data and mark-recapture models to estimate numbers of migrant golden eagles in an important bird area in eastern North America

    USGS Publications Warehouse

    Dennhardt, Andrew J.; Duerr, Adam E.; Brandes, David; Katzner, Todd

    2017-01-01

    Estimates of population abundance are important to wildlife management and conservation. However, it can be difficult to characterize the numbers of broadly distributed, low-density, and elusive bird species. Although Golden Eagles (Aquila chrysaetos) are rare, difficult to detect, and broadly distributed, they are concentrated during their autumn migration at monitoring sites in eastern North America. We used hawk-count data collected by citizen scientists in a virtual mark–recapture modeling analysis to estimate the numbers of Golden Eagles that migrate in autumn along Kittatinny Ridge, an Important Bird Area in Pennsylvania, USA. In order to evaluate the sensitivity of our abundance estimates to variation in eagle capture histories, we applied candidate models to 8 different sets of capture histories, constructed with or without age-class information and using known mean flight speeds ± 1, 2, 4, or 6 SE for eagles to travel between hawk-count sites. Although some abundance estimates were produced by models that poorly fitted the data (ĉ > 3.0), 2 sets of population estimates were produced by acceptably performing models (ĉ ≤ 3.0). Application of these models to count data from November, 2002–2011, suggested a mean population abundance of 1,354 ± 117 SE (range: 873–1,938). We found that Golden Eagles left the ridgeline at different rates and in different places along the route, and that typically <50% of individuals were detected at the hawk-count sites. Our study demonstrates a useful technique for estimating population abundance that may be applicable to other migrant species that are repeatedly detected at multiple monitoring sites along a topographic diversion or leading line.
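    The virtual mark–recapture analysis rests on the same logic as classical two-sample abundance estimation. The sketch below shows Chapman's bias-corrected Lincoln–Petersen estimator with invented sample sizes; it illustrates the principle only and is not the multi-site model applied in the study:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate.
    n1: animals marked in the first sample; n2: size of the second
    sample; m2: marked animals recaptured in the second sample."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# hypothetical counts: 200 marked, 150 resighted, 30 of them marked
N_hat = chapman_estimate(n1=200, n2=150, m2=30)
```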

  12. Engineering of PDMS Surfaces for use in Microsystems for Capture and Isolation of Complex and Biomedically Important Proteins: Epidermal Growth Factor Receptor as a Model System

    PubMed Central

    Lowe, Aaron M.; Ozer, Byram H.; Wiepz, Gregory J.; Bertics, Paul J.; Abbott, Nicholas L.

    2009-01-01

    Elastomers based on poly(dimethylsiloxane) (PDMS) are promising materials for fabrication of a wide range of microanalytical systems due to their mechanical and optical properties and ease of processing. To date, however, quantitative studies that demonstrate reliable and reproducible methods for attachment of binding groups that capture complex receptor proteins of relevance to biomedical applications of PDMS microsystems have not been reported. Herein we describe methods that lead to the reproducible capture of a transmembrane protein, the human epidermal growth factor (EGF) receptor, onto PDMS surfaces presenting covalently immobilized antibodies for EGF receptor, and subsequent isolation of the captured receptor by mechanical transfer of the receptor onto a chemically functionalized surface of a gold film for detection. This result is particularly significant because the physical properties of transmembrane proteins make this class of proteins a difficult one to analyze. We benchmark the performance of antibodies to the human EGF receptor covalently immobilized on PDMS against the performance of the same antibodies physisorbed to conventional surfaces utilized in ELISA assays through the use of EGF receptor that was 32P-radiolabeled in its autophosphorylation domain. These results reveal that two pan-reactive antibodies for the EGF receptor (H11 and 111.6) and one phosphospecific EGF receptor antibody (pY1068) capture the receptor on both PDMS and ELISA plates. When using H11 antibody to capture EGF receptor and subsequent treatment with a stripping buffer (NaOH and sodium dodecylsulfate) to isolate the receptor, the signal-to-background obtained using the PDMS surface was 82:1, exceeding the signal-to-background measured on the ELISA plate (<48:1). We also characterized the isolation of captured EGF receptor by mechanical contact of the PDMS surface with a chemically functionalized gold film. 
The efficiency of mechanical transfer of the transmembrane protein from the PDMS surface was found to be 75–81%. However, the transfer of non-specifically bound protein was substantially less than 75%, thus leading to the important finding that mechanical transfer of the EGF receptor leads to an approximately four-fold increase in signal-to-background from 20:1 to 88:1. The signal-to-background obtained following mechanical transfer is also better than that obtained using ELISA plates and stripping buffer (<48:1). The EGF receptor is a clinically important protein and the target of numerous anticancer agents and thus these results, when combined, provide guidance for the design of PDMS-based microanalytical systems for the capture and isolation of complex and clinically important transmembrane proteins. PMID:18651079

  13. Engineering of PDMS surfaces for use in microsystems for capture and isolation of complex and biomedically important proteins: epidermal growth factor receptor as a model system.

    PubMed

    Lowe, Aaron M; Ozer, Byram H; Wiepz, Gregory J; Bertics, Paul J; Abbott, Nicholas L

    2008-08-01

    Elastomers based on poly(dimethylsiloxane) (PDMS) are promising materials for fabrication of a wide range of microanalytical systems due to their mechanical and optical properties and ease of processing. To date, however, quantitative studies that demonstrate reliable and reproducible methods for attachment of binding groups that capture complex receptor proteins of relevance to biomedical applications of PDMS microsystems have not been reported. Herein we describe methods that lead to the reproducible capture of a transmembrane protein, the human epidermal growth factor (EGF) receptor, onto PDMS surfaces presenting covalently immobilized antibodies for EGF receptor, and subsequent isolation of the captured receptor by mechanical transfer of the receptor onto a chemically functionalized surface of a gold film for detection. This result is particularly significant because the physical properties of transmembrane proteins make this class of proteins a difficult one to analyze. We benchmark the performance of antibodies to the human EGF receptor covalently immobilized on PDMS against the performance of the same antibodies physisorbed to conventional surfaces utilized in ELISA assays through the use of EGF receptor that was (32)P-radiolabeled in its autophosphorylation domain. These results reveal that two pan-reactive antibodies for the EGF receptor (clones H11 and 111.6) and one phosphospecific EGF receptor antibody (clone pY1068) capture the receptor on both PDMS and ELISA plates. When using H11 antibody to capture EGF receptor and subsequent treatment with a stripping buffer (NaOH and sodium dodecylsulfate) to isolate the receptor, the signal-to-background obtained using the PDMS surface was 82 : 1, exceeding the signal-to-background measured on the ELISA plate (<48 : 1). We also characterized the isolation of captured EGF receptor by mechanical contact of the PDMS surface with a chemically functionalized gold film. 
The efficiency of mechanical transfer of the transmembrane protein from the PDMS surface was found to be 75-81%. However, the transfer of non-specifically bound protein was substantially less than 75%, thus leading to the important finding that mechanical transfer of the EGF receptor leads to an approximately four-fold increase in signal-to-background from 20 : 1 to 88 : 1. The signal-to-background obtained following mechanical transfer is also better than that obtained using ELISA plates and stripping buffer (<48 : 1). The EGF receptor is a clinically important protein and the target of numerous anticancer agents and thus these results, when combined, provide guidance for the design of PDMS-based microanalytical systems for the capture and isolation of complex and clinically important transmembrane proteins.

  14. Climate driven crop planting date in the ACME Land Model (ALM): Impacts on productivity and yield

    NASA Astrophysics Data System (ADS)

    Drewniak, B.

    2017-12-01

    Climate is one of the key drivers of crop suitability and productivity in a region. The influence of climate and weather on the growing season determine the amount of time crops spend in each growth phase, which in turn impacts productivity and, more importantly, yields. Planting date can have a strong influence on yields with earlier planting generally resulting in higher yields, a sensitivity that is also present in some crop models. Furthermore, planting date is already changing and may continue, especially if longer growing seasons caused by future climate change drive early (or late) planting decisions. Crop models need an accurate method to predict plant date to allow these models to: 1) capture changes in crop management to adapt to climate change, 2) accurately model the timing of crop phenology, and 3) improve crop simulated influences on carbon, nutrient, energy, and water cycles. Previous studies have used climate as a predictor for planting date. Climate as a plant date predictor has more advantages than fixed plant dates. For example, crop expansion and other changes in land use (e.g., due to changing temperature conditions), can be accommodated without additional model inputs. As such, a new methodology to implement a predictive planting date based on climate inputs is added to the Accelerated Climate Model for Energy (ACME) Land Model (ALM). The model considers two main sources of climate data important for planting: precipitation and temperature. This method expands the current temperature threshold planting trigger and improves the estimated plant date in ALM. Furthermore, the precipitation metric for planting, which synchronizes the crop growing season with the wettest months, allows tropical crops to be introduced to the model. This presentation will demonstrate how the improved model enhances the ability of ALM to capture planting date compared with observations. 
More importantly, the impact of changing the planting date and introducing tropical crops will be explored. Those impacts include discussions on productivity, yield, and influences on carbon and energy fluxes.
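    The combined temperature-and-precipitation planting trigger described above can be sketched as a simple rule: plant in the month that is wettest among those warm enough to support the crop. The threshold and climatology below are invented for illustration and are not the values used in ALM:

```python
def pick_planting_month(monthly_temp, monthly_precip, t_min=10.0):
    """Return the index (0-11) of the wettest month whose mean temperature
    exceeds t_min (degC), or None if no month is warm enough. A toy
    stand-in for a climate-driven planting-date rule."""
    candidates = [m for m, t in enumerate(monthly_temp) if t > t_min]
    if not candidates:
        return None
    return max(candidates, key=lambda m: monthly_precip[m])

# a hypothetical tropical climatology: warm all year, rains peaking mid-year
temp = [24, 25, 26, 27, 27, 26, 25, 25, 26, 26, 25, 24]
rain = [20, 15, 10, 30, 80, 150, 220, 180, 120, 60, 40, 25]
month = pick_planting_month(temp, rain)
```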

  15. Direct Air Capture of CO2 with an Amine Resin: A Molecular Modeling Study of the CO2 Capturing Process

    PubMed Central

    2017-01-01

    Several reactions, known from other amine systems for CO2 capture, have been proposed for Lewatit® VP OC 1065. The aim of this molecular modeling study is to elucidate the CO2 capture process: the physisorption process prior to CO2 capture and the subsequent reactions. Molecular modeling yields that the resin has a structure with benzylamine groups on alternating positions in close vicinity of each other. Based on this structure, the preferred adsorption mode of CO2 and H2O was established. Next, using standard Density Functional Theory, two catalytic reactions responsible for the actual CO2 capture were identified: direct amine- and amine-H2O-catalyzed formation of carbamic acid. The latter is a new type of catalysis. Other reactions are unlikely. Quantitative verification of the molecular modeling results against known experimental CO2 adsorption isotherms, applying a dual-site Langmuir adsorption isotherm model, further supports the results of this molecular modeling study. PMID:29142339
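    The dual-site Langmuir model mentioned above expresses total loading as the sum of two independent single-site Langmuir terms. A minimal sketch, with hypothetical capacity (q1, q2) and affinity (b1, b2) parameters rather than the fitted values from the study:

```python
def dual_site_langmuir(p, q1, b1, q2, b2):
    """Total loading q(p) as the sum of two independent Langmuir terms:
    site capacities q1, q2 and affinity constants b1, b2 (illustrative units)."""
    return q1 * b1 * p / (1 + b1 * p) + q2 * b2 * p / (1 + b2 * p)

# loading rises monotonically with pressure and saturates at q1 + q2
q_low = dual_site_langmuir(0.5, q1=1.0, b1=2.0, q2=0.5, b2=0.1)
q_sat = dual_site_langmuir(1e9, q1=1.0, b1=2.0, q2=0.5, b2=0.1)
```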

  16. Movement patterns and study area boundaries: Influences on survival estimation in capture-mark-recapture studies

    USGS Publications Warehouse

    Horton, G.E.; Letcher, B.H.

    2008-01-01

    The inability to account for the availability of individuals in the study area during capture-mark-recapture (CMR) studies and the resultant confounding of parameter estimates can make correct interpretation of CMR model parameter estimates difficult. Although important advances based on the Cormack-Jolly-Seber (CJS) model have resulted in estimators of true survival that work by unconfounding either death or recapture probability from availability for capture in the study area, these methods rely on the researcher's ability to select a method that is correctly matched to emigration patterns in the population. If incorrect assumptions regarding site fidelity (non-movement) are made, it may be difficult or impossible as well as costly to change the study design once the incorrect assumption is discovered. Subtleties in characteristics of movement (e.g. life history-dependent emigration, nomads vs territory holders) can lead to mixtures in the probability of being available for capture among members of the same population. The result of these mixtures may be only a partial unconfounding of emigration from other CMR model parameters. Biologically-based differences in individual movement can combine with constraints on study design to further complicate the problem. Because of the intricacies of movement and its interaction with other parameters in CMR models, quantification of and solutions to these problems are needed. Based on our work with stream-dwelling populations of Atlantic salmon Salmo salar, we used a simulation approach to evaluate existing CMR models under various mixtures of movement probabilities. The Barker joint data model provided unbiased estimates of true survival under all conditions tested. The CJS and robust design models provided similarly unbiased estimates of true survival but only when emigration information could be incorporated directly into individual encounter histories. 
For the robust design model, Markovian emigration (future availability for capture depends on an individual's current location) was a difficult emigration pattern to detect unless survival and especially recapture probability were high. Additionally, when local movement was high relative to study area boundaries and movement became more diffuse (e.g. a random walk), local movement and permanent emigration were difficult to distinguish and had consequences for correctly interpreting the survival parameter being estimated (apparent survival vs true survival). © 2008 The Authors.
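    Capture-history data of the kind analysed here can be simulated directly, which is, at much greater sophistication, what the authors' simulation approach does. The toy generator below assumes constant apparent survival phi and recapture probability p, with no emigration mixtures, so it illustrates only the simplest CJS data-generating process:

```python
import random

def simulate_cjs_histories(n, occasions, phi, p, seed=0):
    """Simulate capture histories under a toy CJS data-generating process
    with constant apparent survival phi and recapture probability p.
    All n individuals are marked and released at occasion 1."""
    rng = random.Random(seed)
    histories = []
    for _ in range(n):
        alive = True
        history = [1]  # marked at the first occasion
        for _ in range(occasions - 1):
            alive = alive and rng.random() < phi  # survive the interval?
            history.append(1 if alive and rng.random() < p else 0)
        histories.append(history)
    return histories

histories = simulate_cjs_histories(n=20000, occasions=4, phi=0.8, p=0.6)
# the expected recapture fraction at occasion 2 is roughly phi * p = 0.48
frac_occ2 = sum(h[1] for h in histories) / len(histories)
```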

  17. Microdosimetric Modeling of Biological Effectiveness for Boron Neutron Capture Therapy Considering Intra- and Intercellular Heterogeneity in 10B Distribution.

    PubMed

    Sato, Tatsuhiko; Masunaga, Shin-Ichiro; Kumada, Hiroaki; Hamada, Nobuyuki

    2018-01-17

    We here propose a new model for estimating the biological effectiveness for boron neutron capture therapy (BNCT) considering intra- and intercellular heterogeneity in 10B distribution. The new model was developed from our previously established stochastic microdosimetric kinetic model that determines the surviving fraction of cells irradiated with any radiations. In the model, the probability density of the absorbed doses in microscopic scales is the fundamental physical index for characterizing the radiation fields. A new computational method was established to determine the probability density for application to BNCT using the Particle and Heavy Ion Transport code System PHITS. The parameters used in the model were determined from the measured surviving fraction of tumor cells administrated with two kinds of 10B compounds. The model quantitatively highlighted the indispensable need to consider the synergetic effect and the dose dependence of the biological effectiveness in the estimate of the therapeutic effect of BNCT. The model can predict the biological effectiveness of newly developed 10B compounds based on their intra- and intercellular distributions, and thus, it can play important roles not only in treatment planning but also in drug discovery research for future BNCT.

  18. Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition

    NASA Astrophysics Data System (ADS)

    Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph

    2018-05-01

    This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.

  19. Evaluating the Vertical Distribution of Ozone and its Relationship to Pollution Events in Air Quality Models using Satellite Data

    NASA Astrophysics Data System (ADS)

    Osterman, G. B.; Neu, J. L.; Eldering, A.; Pinder, R. W.; Tang, Y.; McQueen, J.

    2014-12-01

    Most regional scale models that are used for air quality forecasts and ozone source attribution do not adequately capture the distribution of ozone in the mid- and upper troposphere, but it is unclear how this shortcoming relates to their ability to simulate surface ozone. We combine ozone profile data from the NASA Earth Observing System (EOS) Tropospheric Emission Spectrometer (TES) and a new joint product from TES and the Ozone Monitoring Instrument along with ozonesonde measurements and EPA AirNow ground station ozone data to examine air quality events during August 2006 in the Community Multi-Scale Air Quality (CMAQ) and National Air Quality Forecast Capability (NAQFC) models. We present both aggregated statistics and case-study analyses with the goal of assessing the relationship between the models' ability to reproduce surface air quality events and their ability to capture the vertical distribution of ozone. We find that the models lack the mid-tropospheric ozone variability seen in TES and the ozonesonde data, and discuss the conditions under which this variability appears to be important for surface air quality.

  20. A simple shear limited, single size, time dependent flocculation model

    NASA Astrophysics Data System (ADS)

    Kuprenas, R.; Tran, D. A.; Strom, K.

    2017-12-01

    This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models because cohesive particles can create aggregates that are orders of magnitude larger than their unflocculated state. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to that of the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementing into larger sediment transport models; however, the model tends to over-predict the dependency of the floc size on concentration. It was found that modifying the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term was added within the breakup kernel of the W98 formulation. The new formulation is a single-size, shear-limited, and time-dependent flocculation model that effectively captures the dependency of the equilibrium floc size on both suspended sediment concentration and the time to equilibrium. The overall behavior of the new model is explored and shown to align well with other studies on flocculation. Winterwerp, J. C. (1998). A simple model for turbulence induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
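    The shear-limited, time-dependent behaviour described above can be sketched with a toy growth-breakup ODE integrated by forward Euler. The functional forms and coefficients ka and kb below are placeholders for illustration (not the calibrated Winterwerp kernels or the new breakup term introduced in this work), but they reproduce the two qualitative features discussed: equilibrium size grows with suspended sediment concentration, and the Kolmogorov micro length scale caps the floc size:

```python
def simulate_floc_size(d0, d_kolmogorov, c, G, ka, kb, dt, steps):
    """Forward-Euler integration of a toy growth-breakup floc-size model:
    dD/dt = ka*c*G*D - kb*G**1.5 * D * (D - d0),
    with D capped at the Kolmogorov micro length scale d_kolmogorov.
    d0: primary particle size (m); c: concentration; G: shear rate (1/s);
    ka, kb: illustrative aggregation/breakup coefficients."""
    d = d0
    for _ in range(steps):
        growth = ka * c * G * d                  # aggregation: scales with c and G
        breakup = kb * G ** 1.5 * d * (d - d0)   # breakup strengthens as flocs grow
        d = min(d + dt * (growth - breakup), d_kolmogorov)
    return d

# equilibrium floc size increases with suspended sediment concentration
d_low = simulate_floc_size(1e-5, 1e-4, c=0.05, G=10.0, ka=1.0, kb=1e4, dt=0.01, steps=5000)
d_high = simulate_floc_size(1e-5, 1e-4, c=0.10, G=10.0, ka=1.0, kb=1e4, dt=0.01, steps=5000)
```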

  1. Attenuated coupled cluster: a heuristic polynomial similarity transformation incorporating spin symmetry projection into traditional coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Gomez, John A.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2017-11-01

    In electronic structure theory, restricted single-reference coupled cluster (CC) captures weak correlation but fails catastrophically under strong correlation. Spin-projected unrestricted Hartree-Fock (SUHF), on the other hand, misses weak correlation but captures a large portion of strong correlation. The theoretical description of many important processes, e.g. molecular dissociation, requires a method capable of accurately capturing both weak and strong correlation simultaneously, and would likely benefit from a combined CC-SUHF approach. Based on what we have recently learned about SUHF written as particle-hole excitations out of a symmetry-adapted reference determinant, we here propose a heuristic CC doubles model to attenuate the dominant spin collective channel of the quadratic terms in the CC equations. Proof of principle results presented here are encouraging and point to several paths forward for improving the method further.

  2. Program SPACECAP: software for estimating animal density using spatially explicit capture-recapture models

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas

    2012-01-01

    1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.

  3. Satellite passive microwave detection of surface water inundation changes over the pan-Arctic from AMSR

    NASA Astrophysics Data System (ADS)

    Du, J.; Kimball, J. S.; Jones, L. A.; Watts, J. D.

    2016-12-01

    Climate is one of the key drivers of crop suitability and productivity in a region. The influence of climate and weather on the growing season determine the amount of time crops spend in each growth phase, which in turn impacts productivity and, more importantly, yields. Planting date can have a strong influence on yields with earlier planting generally resulting in higher yields, a sensitivity that is also present in some crop models. Furthermore, planting date is already changing and may continue, especially if longer growing seasons caused by future climate change drive early (or late) planting decisions. Crop models need an accurate method to predict plant date to allow these models to: 1) capture changes in crop management to adapt to climate change, 2) accurately model the timing of crop phenology, and 3) improve crop simulated influences on carbon, nutrient, energy, and water cycles. Previous studies have used climate as a predictor for planting date. Climate as a plant date predictor has more advantages than fixed plant dates. For example, crop expansion and other changes in land use (e.g., due to changing temperature conditions), can be accommodated without additional model inputs. As such, a new methodology to implement a predictive planting date based on climate inputs is added to the Accelerated Climate Model for Energy (ACME) Land Model (ALM). The model considers two main sources of climate data important for planting: precipitation and temperature. This method expands the current temperature threshold planting trigger and improves the estimated plant date in ALM. Furthermore, the precipitation metric for planting, which synchronizes the crop growing season with the wettest months, allows tropical crops to be introduced to the model. This presentation will demonstrate how the improved model enhances the ability of ALM to capture planting date compared with observations. 
More importantly, the impact of changing the planting date and introducing tropical crops will be explored. Those impacts include discussions on productivity, yield, and influences on carbon and energy fluxes.
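The two climate triggers described above can be sketched in a few lines. This is an illustrative reading only: the threshold values, window lengths, and function names below are assumptions, not the actual ALM implementation.

```python
# Hedged sketch of climate-driven planting-date rules (all thresholds and
# windows are illustrative assumptions, not ALM's actual values).

def temperature_trigger_day(daily_tmean, threshold=10.0, window=10):
    """Return the first day-of-year index at which the running mean
    temperature over `window` days exceeds `threshold` (deg C)."""
    for day in range(window, len(daily_tmean) + 1):
        if sum(daily_tmean[day - window:day]) / window > threshold:
            return day - 1  # last day of the qualifying window
    return None  # no planting trigger this year

def wettest_window_start(monthly_precip, window=3):
    """Return the starting month (0-based) of the wettest consecutive
    `window`-month period, wrapping around the year -- a proxy for
    synchronizing tropical crops with the wet season."""
    n = len(monthly_precip)
    totals = [sum(monthly_precip[(m + k) % n] for k in range(window))
              for m in range(n)]
    return max(range(n), key=totals.__getitem__)
```

In a full crop model the two triggers would be combined per grid cell and crop type; here they are kept separate for clarity.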

  4. An Assessment of CFD/CSD Prediction State-of-the-Art by Using the HART II International Workshop Data

    NASA Technical Reports Server (NTRS)

    Smith, Marilyn J.; Lim, Joon W.; vanderWall, Berend G.; Baeder, James D.; Biedron, Robert T.; Boyd, D. Douglas, Jr.; Jayaraman, Buvana; Jung, Sung N.; Min, Byung-Young

    2012-01-01

Over the past decade, there have been significant advancements in the accuracy of rotor aeroelastic simulations with the application of computational fluid dynamics methods coupled with computational structural dynamics codes (CFD/CSD). The HART II International Workshop database, which includes descent operating conditions with strong blade-vortex interactions (BVI), provides a unique opportunity to assess the ability of CFD/CSD to capture these physics. In addition to a baseline case with BVI, two additional cases with 3/rev higher harmonic blade root pitch control (HHC) are available for comparison. The collaboration during the workshop permits assessment of structured, unstructured, and hybrid overset CFD/CSD methods from across the globe on the dynamics, aerodynamics, and wake structure. Evaluation of the plethora of CFD/CSD methods indicates that the most important numerical variables associated with most accurately capturing BVI are a two-equation or detached eddy simulation (DES)-based turbulence model and a sufficiently small time step. An appropriate trade-off between grid fidelity and spatial accuracy schemes also appears to be pertinent for capturing BVI on the advancing rotor disk. Overall, the CFD/CSD methods generally fall within the same accuracy; cost-effective hybrid Navier-Stokes/Lagrangian wake methods provide accuracies within 50% of the full CFD/CSD methods for most parameters of interest, except for those highly influenced by torsion. The importance of modeling the fuselage is observed, and other computational requirements are discussed.

  5. Supporting Intrapersonal Development in Substance Use Disorder Programs: A Conceptual Framework for Client Assessment.

    PubMed

    Turpin, Aaron; Shier, Micheal L

    2017-01-01

    Improvements to intrapersonal development of clients involved with substance use disorder treatment programs has widely been recognized as contributing to the intended goal of reducing substance misuse behaviors. This study sought to identify a broad framework of primary outcomes related to the intrapersonal development of clients in treatment for substance misuse. Using qualitative research methods, individual interviews were conducted with program participants (n = 41) at three treatment programs to identify the ways in which respondents experienced intrapersonal development through participation in treatment. The findings support the development of a conceptual model that captures the importance and manifestation of achieving improvements in the following outcomes: self-awareness, coping ability, self-worth, outlook, and self-determination. The findings provide a conceptual framework for client assessment that captures a broad range of the important intrapersonal development factors utilized as indicators for client development and recovery that should be measured in tandem during assessment.

  6. Modeling of circulating fluidized beds for post-combustion carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, A.; Shadle, L.; Miller, D.

    2011-01-01

A compartment based model for a circulating fluidized bed reactor has been developed based on experimental observations of riser hydrodynamics. The model uses a cluster based approach to describe the two-phase behavior of circulating fluidized beds. Fundamental mass balance equations have been derived to describe the movement of both gas and solids through the system. Additional work is being performed to develop the correlations required to describe the hydrodynamics of the system. Initial testing of the model with experimental data shows promising results and highlights the importance of including end effects within the model.

  7. Ocean Modeling in an Eddying Regime

    NASA Astrophysics Data System (ADS)

    Hecht, Matthew W.; Hasumi, Hiroyasu

This monograph is the first to survey progress in realistic simulation in a strongly eddying regime made possible by recent increases in computational capability. Its contributors comprise the leading researchers in this important and constantly evolving field. Divided into three parts (Oceanographic Processes and Regimes: Fundamental Questions; Ocean Dynamics and State: From Regional to Global Scale; and Modeling at the Mesoscale: State of the Art and Future Directions), the volume details important advances in physical oceanography based on eddy resolving ocean modeling. It captures the state of the art and discusses issues that ocean modelers must consider in order to effectively contribute to advancing current knowledge, from subtleties of the underlying fluid dynamical equations to meaningful comparison with oceanographic observations and leading-edge model development. It summarizes many of the important results which have emerged from ocean modeling in an eddying regime, for those interested broadly in the physical science. More technical topics are intended to address the concerns of those actively working in the field.

  8. Atmospheric Chemistry of the Carbon Capture Solvent Monoethanolamine (MEA): A Theoretical Study

    NASA Astrophysics Data System (ADS)

    da Silva, G.

    2012-12-01

    The development of amine solvent technology for carbon capture and storage has the potential to create large new sources of amines to the atmosphere. The atmospheric chemistry of amines generally, and carbon capture solvents in particular, is not well understood. We have used quantum chemistry and master equation modelling to investigate the OH radical initiated oxidation of monoethanolamine (NH2CH2CH2OH), or MEA, the archetypal carbon capture solvent. The OH radical can abstract H atoms from either carbon atom in MEA, with negative reaction barriers. Treating these reactions with a two transition state model can reliably reproduce experimental rate constants and their temperature dependence. The products of the MEA + OH reaction, the NH2CHCH2OH and NH2CH2CHOH radicals, undergo subsequent reaction with O2, which has also been studied. In both cases chemically activated reactions that bypass peroxyl radical intermediates dominate, producing 2-iminoethanol + HO2 (from NH2CHCH2OH) or aminoacetaldehyde + HO2 (from NH2CH2CHOH), making the process HOx-neutral. The operation of chemically activated reaction mechanisms has implications for the ozone forming potential of MEA. The products of MEA photo-oxidation are proposed as important species in the formation of both organic and inorganic secondary aerosols, particularly through uptake of the imine 2-iminoethanol and subsequent hydrolysis to ammonia and glycolaldehyde.

  9. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elastoplastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  10. The Earth Microbiome Project and modeling the planet's microbial potential (Invited)

    NASA Astrophysics Data System (ADS)

    Gilbert, J. A.

    2013-12-01

The understanding of Earth's climate and ecology requires multiscale observations of the biosphere, of which microbial life is a major component. However, acquiring and processing physical samples of soil, water and air at the spatial and temporal resolution needed to capture the immense variation in microbial dynamics would require a herculean effort and immense financial resources dwarfing even the most ambitious projects to date. To overcome this hurdle we created the Earth Microbiome Project, a crowd-sourced effort to acquire physical samples from researchers around the world that are, importantly, contextualized with physical, chemical and biological data detailing the environmental properties of that sample in the location and time it was acquired. The EMP leverages these existing efforts to target a systematic analysis of microbial taxonomic and functional dynamics across a vast array of environmental parameter gradients. The EMP captures the environmental gradients, location, time and sampling protocol information about every sample donated by our valued collaborators. Physical samples are then processed using a standardized DNA extraction, PCR, and shotgun sequencing protocol to generate comparable data regarding the microbial community structure and function in each sample. To date we have processed >17,000 samples from 40 different biomes. One of the key goals of the EMP is to map the spatiotemporal variability of microbial communities to capture the changes in important functional processes that need to be appropriately expressed in models to provide reliable forecasts of ecosystem phenotype across our changing planet. This is essential if we are to develop economically sound strategies to be good stewards of our Earth. The EMP recognizes that environments are composed of complex sets of interdependent parameters and that the development of useful predictive computational models of both terrestrial and atmospheric systems requires recognition and accommodation of sources of uncertainty.

  11. Initial Risk Analysis and Decision Making Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.

    2012-02-01

Commercialization of new carbon capture simulation initiative (CCSI) technology will include two key elements of risk management, namely, technical risk (will process and plant performance be effective, safe, and reliable) and enterprise risk (can project losses and costs be controlled within the constraints of market demand to maintain profitability and investor confidence). Both of these elements of risk are incorporated into the risk analysis subtask of Task 7. Thus far, this subtask has developed a prototype demonstration tool that quantifies risk based on the expected profitability of expenditures when retrofitting carbon capture technology on a stylized 650 MW pulverized coal electric power generator. The prototype is based on the selection of specific technical and financial factors believed to be important determinants of the expected profitability of carbon capture, subject to uncertainty. The uncertainty surrounding the technical performance and financial variables selected thus far is propagated in a model that calculates the expected profitability of investments in carbon capture and measures risk in terms of variability in expected net returns from these investments. Given the preliminary nature of the results of this prototype, additional work is required to expand the scope of the model to include additional risk factors, additional information on extant and proposed risk factors, the results of a qualitative risk factor elicitation process, and feedback from utilities and other interested parties involved in the carbon capture project. Additional information on proposed distributions of these risk factors will be integrated into a commercial implementation framework for the purpose of a comparative technology investment analysis.

  12. Conceptual-level workflow modeling of scientific experiments using NMR as a case study

    PubMed Central

    Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R

    2007-01-01

    Background Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. 
In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy. PMID:17263870

  13. Conceptual-level workflow modeling of scientific experiments using NMR as a case study.

    PubMed

    Verdi, Kacy K; Ellis, Heidi Jc; Gryk, Michael R

    2007-01-30

Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy.

  14. Modeling User Behavior and Attention in Search

    ERIC Educational Resources Information Center

    Huang, Jeff

    2013-01-01

    In Web search, query and click log data are easy to collect but they fail to capture user behaviors that do not lead to clicks. As search engines reach the limits inherent in click data and are hungry for more data in a competitive environment, mining cursor movements, hovering, and scrolling becomes important. This dissertation investigates how…

  15. Estimation of parameters and basic reproduction ratio for Japanese encephalitis transmission in the Philippines using sequential Monte Carlo filter

    USDA-ARS?s Scientific Manuscript database

    We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. This method can also capture the...
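A minimal bootstrap (sequential importance resampling) particle filter can illustrate the method's core loop. The toy state-space model here, a Gaussian random walk observed with Gaussian noise, is an assumption for illustration; the authors' JE model is a stochastic epidemic model.

```python
import math
import random

# Bootstrap particle filter sketch for a toy state-space model (assumed
# Gaussian random walk + Gaussian observations, not the JE model).

def particle_filter(observations, n_particles=500, proc_sd=1.0, obs_sd=1.0, seed=0):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # 1) Propagate each particle through the process model.
        particles = [x + rng.gauss(0.0, proc_sd) for x in particles]
        # 2) Weight by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # 3) Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The same propagate-weight-resample loop extends to joint state-and-parameter estimation by augmenting the particle state, which is what makes the approach adapt naturally as new incidence data arrive.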

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooker, A.; Gonder, J.; Lopp, S.

The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy’s Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across its range and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures key aspects for summing petroleum use and greenhouse gas emissions. This includes capturing the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, new vehicle option introduction rate limits, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches in key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.
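The attribute-weighted multinomial logit at the core of the sales estimate can be sketched as follows; the attribute names and utility weights are hypothetical, and ADOPT's actual utility function (with nonlinear, income-dependent importances) is richer.

```python
import math

# Illustrative multinomial-logit market-share calculation. Attribute names
# and weights are hypothetical, not ADOPT's actual utility specification.

def logit_shares(vehicles, weights):
    """vehicles: list of dicts mapping attribute -> value.
    weights: dict mapping attribute -> utility weight.
    Returns the predicted market share of each vehicle."""
    utilities = [sum(weights[a] * v[a] for a in weights) for v in vehicles]
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]
```

A mixed logit extends this by drawing the weights from a distribution per simulated consumer and averaging the resulting shares, which is how heterogeneity of attribute importance enters.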

  17. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system, then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
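The "90% capture with 95% confidence" criterion amounts to a lower-quantile test over the simulation ensemble. A minimal sketch, with toy ensemble data standing in for the multiphase reactive-flow runs:

```python
# Sketch of selecting an operating point by a lower-quantile criterion.
# The ensemble data are toy numbers; in the study each list would be the
# capture efficiencies from an ensemble of upscaled-model simulations.

def min_flow_for_confidence(ensembles, target=0.90, confidence=0.95):
    """ensembles: dict mapping gas flow rate -> list of simulated capture
    efficiencies. Returns the smallest flow rate whose empirical
    (1 - confidence) lower quantile of efficiency meets `target`,
    or None if no candidate qualifies."""
    for rate in sorted(ensembles):
        effs = sorted(ensembles[rate])
        # Conservative (floor) index of the lower quantile.
        idx = int((1.0 - confidence) * (len(effs) - 1))
        if effs[idx] >= target:
            return rate
    return None
```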

  18. Modeling association among demographic parameters in analysis of open population capture-recapture data

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2005-01-01

    We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
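For readers unfamiliar with the CJS factor mentioned above, a minimal likelihood for the simplest case (constant survival phi and capture probability p, conditioning on first capture) can be written directly; the paper's hierarchical model with correlated demographic rates generalizes this.

```python
import math

# Minimal Cormack-Jolly-Seber log-likelihood with constant survival (phi)
# and capture probability (p). Illustrative only: the paper's model adds
# first captures, losses on capture, and random survival/birth rates.

def cjs_log_likelihood(histories, phi, p):
    """histories: list of 0/1 capture-history tuples; conditions on first capture."""
    loglik = 0.0
    for h in histories:
        first = h.index(1)
        last = max(i for i, x in enumerate(h) if x == 1)
        # Between first and last capture the animal is known to be alive.
        for t in range(first + 1, last + 1):
            loglik += math.log(phi)                      # survived the interval
            loglik += math.log(p if h[t] else 1.0 - p)   # seen or missed
        # After last capture: chi = Pr(never seen again), by backward recursion.
        chi = 1.0
        for _ in range(len(h) - 1 - last):
            chi = 1.0 - phi + phi * (1.0 - p) * chi
        loglik += math.log(chi)
    return loglik
```

In a Bayesian treatment like the paper's, this likelihood would be evaluated inside a Metropolis-Hastings sampler with phi and p (or their logits) drawn from hierarchical priors.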

  19. A theoretical study of concentration profiles of primary cytochemical-enzyme reaction products in membrane-bound cell organelles and its application to lysosomal acid phosphatase.

    PubMed

    Cornelisse, C J; Hermens, W T; Joe, M T; Duijndam, W A; van Duijn, P

    1976-11-01

    A numerical method was developed for computing the steady-state concentration gradient of a diffusible enzyme reaction product in a membrane-limited compartment of a simplified theoretical cell model. In cytochemical enzyme reactions proceeding according to the metal-capture principle, the local concentration of the primary reaction product is an important factor in the onset of the precipitation process and in the distribution of the final reaction product. The following variables were incorporated into the model: enzyme activity, substrate concentration, Km, diffusion coefficient of substrate and product, particle radius and cell radius. The method was applied to lysosomal acid phosphatase. Numerical values for the variables were estimated from experimental data in the literature. The results show that the calculated phosphate concentrations inside lysosomes are several orders of magnitude lower than the critical concentrations for efficient phosphate capture found in a previous experimental model study. Reasons for this apparent discrepancy are discussed.
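The kind of steady-state computation described can be illustrated with a one-dimensional analogue solved by relaxation. Grid size, source strength, and boundary conditions below are assumptions for illustration, not the paper's radial cell model.

```python
# Toy steady-state reaction-diffusion solve by Gauss-Seidel relaxation:
# a 1-D slab analogue of a membrane-limited compartment with an internal
# source of diffusible reaction product. All values are illustrative.

def steady_state_profile(n=51, source_cells=10, production=1.0, D=1.0, iters=20000):
    """Solve D*c'' + S(x) = 0 on [0, 1] with a reflecting boundary at x=0
    (symmetry) and a perfect sink c(1)=0; S equals `production` on the
    first `source_cells` grid points and zero elsewhere."""
    dx = 1.0 / (n - 1)
    c = [0.0] * n
    for _ in range(iters):
        # Reflecting boundary: ghost node c[-1] == c[1], source applies at x=0.
        c[0] = c[1] + production * dx * dx / (2.0 * D)
        for i in range(1, n - 1):
            s = production if i < source_cells else 0.0
            c[i] = 0.5 * (c[i - 1] + c[i + 1]) + s * dx * dx / (2.0 * D)
        # c[n-1] stays 0 (perfect sink at the outer boundary).
    return c
```

The converged profile is highest at the center of the source region and falls off toward the sink, the qualitative shape that determines where precipitation of the reaction product can begin.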

  20. Automatic annotation of histopathological images using a latent topic model based on non-negative matrix factorization

    PubMed Central

    Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.

    2011-01-01

    Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies, first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures, second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image, and, third, a probabilistic annotation model that links visual appearance of morphological and architectural features associated to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, which improved a baseline annotation method based on support vector machines in a 64% and 24%, respectively. PMID:22811960
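Non-negative matrix factorization itself, the decomposition underlying the latent topic model, can be sketched with the standard multiplicative updates (Lee-Seung style, Frobenius objective); this is a generic, unoptimized illustration rather than the authors' annotation pipeline.

```python
import random

# Generic NMF via multiplicative updates: factor a non-negative matrix V
# (e.g., image-by-visual-word counts) as V ~ W @ H with W, H >= 0.
# Pure-Python and unoptimized by design; illustrative only.

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)      # H <- H * (W^T V)/(W^T W H)
        H = [[H[a][j] * num[a][j] / (den[a][j] + eps) for j in range(m)]
             for a in range(k)]
        WH = matmul(W, H)
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)      # W <- W * (V H^T)/(W H H^T)
        W = [[W[i][a] * num[i][a] / (den[i][a] + eps) for a in range(k)]
             for i in range(n)]
    return W, H
```

The rows of H play the role of latent topics (recurring visual patterns), and each image's row of W gives its mixture over those topics, which a probabilistic annotation model can then link to labels.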

  1. Analytical mesoscale modeling of aeolian sand transport

    NASA Astrophysics Data System (ADS)

    Lämmel, Marc; Kroy, Klaus

    2017-11-01

    The mesoscale structure of aeolian sand transport determines a variety of natural phenomena studied in planetary and Earth science. We analyze it theoretically beyond the mean-field level, based on the grain-scale transport kinetics and splash statistics. A coarse-grained analytical model is proposed and verified by numerical simulations resolving individual grain trajectories. The predicted height-resolved sand flux and other important characteristics of the aeolian transport layer agree remarkably well with a comprehensive compilation of field and wind-tunnel data, suggesting that the model robustly captures the essential mesoscale physics. By comparing the predicted saturation length with field data for the minimum sand-dune size, we elucidate the importance of intermittent turbulent wind fluctuations for field measurements and reconcile conflicting previous models for this most enigmatic emergent aeolian scale.

  2. Attentional capture under high perceptual load.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2010-12-01

    Attentional capture by abrupt onsets can be modulated by several factors, including the complexity, or perceptual load, of a scene. We have recently demonstrated that observers are less likely to be captured by abruptly appearing, task-irrelevant stimuli when they perform a search that is high, as opposed to low, in perceptual load (Cosman & Vecera, 2009), consistent with perceptual load theory. However, recent results indicate that onset frequency can influence stimulus-driven capture, with infrequent onsets capturing attention more often than did frequent onsets. Importantly, in our previous task, an abrupt onset was present on every trial, and consequently, attentional capture might have been affected by both onset frequency and perceptual load. In the present experiment, we examined whether onset frequency influences attentional capture under conditions of high perceptual load. When onsets were presented frequently, we replicated our earlier results; attentional capture by onsets was modulated under conditions of high perceptual load. Importantly, however, when onsets were presented infrequently, we observed robust capture effects. These results conflict with a strong form of load theory and, instead, suggest that exposure to the elements of a task (e.g., abrupt onsets) combines with high perceptual load to modulate attentional capture by task-irrelevant information.

  3. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part I: boron neutron capture therapy models.

    PubMed

    Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.

  4. Constant-parameter capture-recapture models

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  5. β-decay studies of r-process nuclei at NSCL

    NASA Astrophysics Data System (ADS)

    Pereira, J.; Aprahamian, A.; Arndt, O.; Becerril, A.; Elliot, T.; Estrade, A.; Galaviz, D.; Hennrich, S.; Hosmer, P.; Schnorrenberger, L.; Kessler, R.; Kratz, K.-L.; Lorusso, G.; Mantica, P. F.; Matos, M.; Montes, F.; Pfeiffer, B.; Quinn, M.; Santi, P.; Schatz, H.; Schertz, F.; Smith, E.; Tomlin, B. E.; Walters, W. B.; Wöhr, A.

    2008-06-01

    Observed neutron-capture elemental abundances in metal-poor stars, along with ongoing analysis of the extremely metal-poor Eu-enriched sub-class provide new guidance for astrophysical models aimed at finding the r-process sites. The present paper emphasizes the importance of nuclear physics parameters entering in these models, particularly β-decay properties of neutron-rich nuclei. In this context, several r-process motivated β-decay experiments performed at the National Superconducting Cyclotron Laboratory (NSCL) are presented, including a summary of results and impact on model calculations.

  6. Early Formulation Model-centric Engineering on NASA's Europa Mission Concept Study

    NASA Technical Reports Server (NTRS)

    Bayer, Todd; Chung, Seung; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Chris; Gontijo, I.; Lewis, Kari; Moshir, Mehrdad; Rasmussen, Robert

    2012-01-01

    By leveraging the existing Model-Based Systems Engineering (MBSE) infrastructure at JPL and adding a modest investment, the Europa Mission Concept Study made striking advances in mission concept capture and analysis. This effort has reaffirmed the importance of architecting and successfully harnessed the synergistic relationship of system modeling to mission architecting. It clearly demonstrated that MBSE can provide greater agility than traditional systems engineering methods. This paper will describe the successful application of MBSE in the dynamic environment of early mission formulation, the significant results produced and lessons learned in the process.

  7. Model-based decision making in early clinical development: minimizing the impact of a blood pressure adverse event.

    PubMed

    Stroh, Mark; Addy, Carol; Wu, Yunhui; Stoch, S Aubrey; Pourkavoos, Nazaneen; Groff, Michelle; Xu, Yang; Wagner, John; Gottesdiener, Keith; Shadle, Craig; Wang, Hong; Manser, Kimberly; Winchell, Gregory A; Stone, Julie A

    2009-03-01

    We describe how modeling and simulation guided program decisions following a randomized placebo-controlled single-rising oral dose first-in-man trial of compound A where an undesired transient blood pressure (BP) elevation occurred in fasted healthy young adult males. We proposed a lumped-parameter pharmacokinetic-pharmacodynamic (PK/PD) model that captured important aspects of the BP homeostasis mechanism. Four conceptual units characterized the feedback PD model: a sinusoidal BP set point, an effect compartment, a linear effect model, and a system response. To explore approaches for minimizing the BP increase, we coupled the PD model to a modified PK model to guide oral controlled-release (CR) development. The proposed PK/PD model captured the central tendency of the observed data. The simulated BP response obtained with theoretical release rate profiles suggested some amelioration of the peak BP response with CR. This triggered subsequent CR formulation development; we used actual dissolution data from these candidate CR formulations in the PK/PD model to confirm a potential benefit in the peak BP response. Though this paradigm has yet to be tested in the clinic, our model-based approach provided a common rational framework to more fully utilize the limited available information for advancing the program.
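The four conceptual units of the feedback PD model can be illustrated with a minimal sketch. This is not the paper's fitted model: all rate constants, the dose, and the effect slope below are hypothetical, and the system (feedback) response unit is omitted for brevity, leaving the sinusoidal set point, effect compartment, and linear effect model.

```python
import math

# Hypothetical parameters for illustration only -- not the paper's fitted values.
KE, KE0 = 0.3, 0.5            # elimination and effect-compartment rates (1/h)
SLOPE = 3.0                   # linear effect model: mmHg per unit effect-site conc.
BASE, AMP = 120.0, 5.0        # sinusoidal (circadian) BP set point, mmHg

def bp_course(dose, ka, t_end=24.0, dt=0.01):
    """Euler integration of a 1-compartment oral PK model whose effect
    compartment drives a linear shift above a sinusoidal BP set point."""
    gut, c, ce, t, bp = dose, 0.0, 0.0, 0.0, []
    while t < t_end:
        gut += -ka * gut * dt                 # absorption from the gut
        c += (ka * gut - KE * c) * dt         # central-compartment concentration
        ce += KE0 * (c - ce) * dt             # effect-site lag
        set_point = BASE + AMP * math.sin(2.0 * math.pi * t / 24.0)
        bp.append(set_point + SLOPE * ce)     # linear effect model
        t += dt
    return bp

peak_ir = max(bp_course(dose=10.0, ka=1.2))   # fast absorption (immediate release)
peak_cr = max(bp_course(dose=10.0, ka=0.2))   # slower "CR-like" input rate
```

Under these assumptions, slowing the absorption rate flattens the effect-site peak, which is the qualitative rationale for exploring a controlled-release formulation.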

  8. A multi-scale methodology for comparing GCM and RCM results over the Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Samuels, Rana; Krichak, Simon; Breitgand, Joseph; Alpert, Pinhas

    2010-05-01

    The importance of skillful climate modeling is increasingly being realized as results are incorporated into environmental, economic, and even business planning. Global circulation models (GCMs) employed by the IPCC provide results at spatial scales of hundreds of kilometers, which is useful for understanding global trends but not appropriate as input to the regional and local impacts models used to inform policy and development. To address this shortcoming, regional climate models (RCMs), which dynamically downscale the results of the GCMs, are used. In this study we present first results of a dynamically downscaled RCM focusing on the Eastern Mediterranean region. For the historical 1960-2000 time period, results at spatial scales of both 25 km and 50 km are compared with historical station data from 5 locations across Israel as well as with the results of 3 GCMs (ECHAM5, NOAA GFDL, and CCCMA) at annual, monthly and daily time scales. Results from a recently completed Japanese GCM at a spatial scale of 20 km are also included. For the historical validation period, we show that as spatial resolution increases, the skill in capturing annual and inter-annual temperature and rainfall also increases. However, for intra-seasonal rainfall characteristics important for hydrological and agricultural planning (e.g. dry and wet spells, number of rain days), the GCM results (including the 20 km Japanese model) capture the historical trends better than the dynamically downscaled RegCM. For future scenarios of temperature and precipitation changes, we compare results across the models for the available time periods, generating a range of future trends.

  9. The Arctic clouds from model simulations and long-term observations at Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Zhao, Ming

    The Arctic is a region that is very sensitive to global climate change while also experiencing significant changes in its surface air temperature, sea-ice cover, atmospheric circulation, precipitation, snowfall, biogeochemical cycling, and land surface. Although previous studies have shown that arctic clouds play an important role in arctic climate change, they are poorly understood and poorly simulated in climate models due to limited observations. Furthermore, most of these studies were based on short-term experiments and typically only cover the warm seasons, which does not provide a full understanding of the seasonal cycle of arctic clouds. To address these concerns and to improve our understanding of arctic clouds, six years of observational and retrieval data from 1999 to 2004 at the Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) North Slope of Alaska (NSA) Barrow site are used to study arctic clouds and related radiative processes. In particular, we focus on the liquid-ice mass partition in the mixed-phase cloud layer. Statistical results show that aerosol type and concentration are important factors that impact mixed-phase stratus (MPS) cloud microphysical properties: liquid water path (LWP) and liquid water fraction (LWF) decrease with increasing cloud condensation nuclei (CCN) number concentration, and the high dust loading and dust occurrence in spring are possible reasons for the much lower LWF than in the other seasons. The importance of liquid-ice mass partition for surface radiation budgets was analyzed by comparing cloud longwave radiative forcings under the same LWP but different ice water path (IWP) ranges. Results show that the ice phase enhances the surface cloud longwave (LW) forcing by 8-9 W m-2 in moderately thin MPS. This result provides observational evidence for the aerosol glaciation effect in moderately thin MPS, which is largely unknown so far.
The above new insights are important for guiding model parameterizations of liquid-ice mass partition in arctic mixed-phase clouds, and serve as a test bed for cloud models and cloud microphysical schemes. The observational data between 1999 and 2007 are used to assess the performance of the European Centre for Medium-Range Weather Forecasts (ECMWF) model in the Arctic region. The ECMWF model-simulated near-surface humidity had season-dependent biases as large as 20%, and the model also had difficulty representing boundary layer (BL) temperature inversion height and strength during the transition seasons. Although the ECMWF model captured the seasonal variation of surface heat fluxes, it had sensible heat flux biases over 20 W m-2 in most of the cold months. Furthermore, even though the model captured the general seasonal variations of low-level cloud fraction (LCF) and LWP, it still overestimated the LCF by 20% or more and underestimated the LWP by over 50% in the cold season. On average, the ECMWF model underestimated LWP by ~30 g m-2 but more accurately predicted ice water path for BL clouds. For BL mixed-phase clouds, the model-predicted water-ice mass partition was significantly lower than the observations, largely due to the temperature dependence of water-ice mass partition used in the model. The new cloud and BL schemes of the ECMWF model that were implemented after 2003 resulted in only minor improvements in BL cloud simulations in summer. These results indicate that significant improvements in cold season BL and mixed-phase cloud processes in the model are needed. In this study, single-layer MPS clouds were also simulated by the Weather Research and Forecasting (WRF) model under different microphysical schemes and different ice nuclei (IN) number concentrations.
Results show that, with a proper IN concentration, the WRF model with the Morrison microphysical scheme can reasonably capture the observed seasonal differences in temperature-dependent liquid-ice mass partition. However, the WRF simulations underestimate both LWP and IWP, indicating a deficiency in capturing the radiative impacts of arctic MPS clouds.

  10. Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations

    NASA Astrophysics Data System (ADS)

    Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod

    2016-11-01

    Currently, in order to capture realistic atmospheric turbulence effects, wind turbine LES simulations require computationally expensive precursor simulations; at times, the precursor simulation is more computationally expensive than the wind turbine simulation itself. The precursor simulations are important because they capture turbulence in the atmosphere, and this turbulence affects power production estimates. On the other hand, POD analysis has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimension models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations, while maintaining a high level of turbulent information and allowing the turbulent inflow to be quickly applied to multi-turbine wind farms. This is done by comparing a pure LES precursor wind turbine simulation with simulations that use reduced POD mode inflow conditions. The study shows the feasibility of using lower-dimension models as turbulent inflow for LES wind turbine simulations. Overall, the power production estimate and the velocity field of the wind turbine wake are well captured, with small errors.
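The snapshot-POD decomposition behind such lower-dimension inflow models can be sketched with a thin SVD. The toy snapshot matrix below (one coherent mode plus noise) is an illustrative stand-in for the 3-D LES precursor data the study actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy snapshot matrix: each column is one velocity-field sample in time.
n_points, n_snapshots = 200, 50
x = np.linspace(0.0, 2.0 * np.pi, n_points)[:, None]
t = np.linspace(0.0, 1.0, n_snapshots)[None, :]
snapshots = np.sin(x) * np.cos(2.0 * np.pi * t) \
    + 0.05 * rng.standard_normal((n_points, n_snapshots))

# Subtract the temporal mean; POD modes are the left singular vectors
# of the fluctuation matrix.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

# Reduced-order reconstruction keeping the r most energetic modes.
r = 3
low_dim = mean + (U[:, :r] * s[:r]) @ Vt[:r, :]

# Fraction of fluctuation energy captured by the retained modes.
energy_captured = (s[:r] ** 2).sum() / (s ** 2).sum()
```

Feeding `low_dim`-style reconstructions as inflow, rather than full precursor fields, is the cost-saving idea the study evaluates.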

  11. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
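A common full-reference metric between encoder input and the reconstructed sequence is PSNR. This minimal sketch, with a synthetic frame and a simulated additive low-light noise (illustrative only, not the paper's noise model), shows the measurement:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Full-reference peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
# Simulated additive noise, as might arise in low-light capture.
noisy = np.clip(frame + rng.normal(0.0, 5.0, frame.shape), 0.0, 255.0)
```

Comparing such scores across lighting, color-processing, and compression settings is how the combined system-level effect described above can be quantified.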

  12. Comparative analysis of the effects of electron and hole capture on the power characteristics of a semiconductor quantum-well laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolova, Z. N., E-mail: Zina.Sokolova@mail.ioffe.ru; Pikhtin, N. A.; Tarasov, I. S.

    The operating characteristics of a semiconductor quantum-well laser calculated using three models are compared. These models are (i) a model not taking into account differences between the electron and hole parameters and using the electron parameters for both types of charge carriers; (ii) a model not taking into account differences between the electron and hole parameters and using the hole parameters for both types of charge carriers; and (iii) a model taking into account the asymmetry between the electron and hole parameters. It is shown that, at the same velocity of electron and hole capture into an unoccupied quantum well, the laser characteristics obtained using the three models differ considerably. These differences are due to a difference between the filling of the electron and hole subbands in a quantum well. The electron subband is more occupied than the hole subband. As a result, at the same velocities of electron and hole capture into an empty quantum well, the effective electron-capture velocity is lower than the effective hole-capture velocity. Specifically, it is shown that for the laser structure studied the hole-capture velocity of 5 × 10^5 cm/s into an empty quantum well and the corresponding electron-capture velocity of 3 × 10^6 cm/s describe the rapid capture of these carriers, at which the light–current characteristic of the laser remains virtually linear up to high pump-current densities. However, an electron-capture velocity of 5 × 10^5 cm/s and a corresponding hole-capture velocity of 8.4 × 10^4 cm/s describe the slow capture of these carriers, causing significant sublinearity in the light–current characteristic.

  13. Estimating juvenile Chinook salmon (Oncorhynchus tshawytscha) abundance from beach seine data collected in the Sacramento–San Joaquin Delta and San Francisco Bay, California

    USGS Publications Warehouse

    Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble

    2016-06-17

    Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. 
Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.
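For contrast with the open N-mixture analysis described above, the classical closed-population two-pass removal (depletion) estimator, whose closure and constant-capture assumptions the paper's models relax, can be written in a few lines (the catch numbers below are hypothetical):

```python
def two_pass_removal(c1, c2):
    """Classical two-pass removal (depletion) estimator for a closed population.
    Assumes geographic closure and equal capture probability on each pass.
    c1, c2: catches on the first and second depletion pass."""
    if c2 >= c1:
        raise ValueError("catch must decline between passes for this estimator")
    p_hat = (c1 - c2) / c1        # estimated per-haul capture probability
    n_hat = c1 ** 2 / (c1 - c2)   # estimated site abundance
    return n_hat, p_hat

# Hypothetical example: 40 then 10 juvenile Chinook in successive seine hauls.
n_hat, p_hat = two_pass_removal(40, 10)
```

When closure fails between hauls, as the paper found, estimators of this form are biased, which motivates modeling availability explicitly.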

  14. Substratum interfacial energetic effects on the attachment of marine bacteria

    NASA Astrophysics Data System (ADS)

    Ista, Linnea Kathryn

    Biofilms represent an ancient, ubiquitous and influential form of life on earth. Biofilm formation is initiated by attachment of bacterial cells from an aqueous suspension onto a suitable attachment substratum. While in certain well-studied cases initial attachment and subsequent biofilm formation are mediated by specific ligand-receptor pairs on the bacteria and attachment substratum, in the open environment, including the ocean, attachment is assumed to be non-specific and mediated by processes similar to those that drive adsorption of colloids at the water-solid interface. Colloidal principles are studied to determine the molecular and physicochemical interactions involved in the attachment of the model marine bacterium, Cobetia marina, to model self-assembled monolayer (SAM) surfaces. In the simplest application of colloidal principles, the wettability of attachment substrata, as measured by the advancing contact angle of water (θAW) on the surface, is frequently used as an approximation for the surface tension. We demonstrate the applicability of this approach for attachment of C. marina and algal zoospores and extend it to the development of a means to control attachment and release of microorganisms by altering and tuning surface θAW. In many cases, however, θAW does not capture all the information necessary to model attachment of bacteria to attachment substrata; SAMs with similar θAW attach different numbers of bacteria. More advanced colloidal models of initial bacterial attachment have evolved over the last several decades, with the model proposed by van Oss, Chaudhury and Good (VCG) emerging as preeminent. The VCG model enables calculation of interfacial tensions by dividing these into the two major interactions thought to be important at biointerfaces: apolar Lifshitz-van der Waals interactions and polar Lewis acid-base (including hydrogen bonding) interactions.
These interfacial tensions are combined to yield ΔGadh, the free energy associated with attachment of bacteria to a substratum. We use VCG to model ΔGadh and interfacial tensions as they relate to model bacterial attachment on SAMs that accumulate cells to different degrees. Even with the more complex interactions measured by VCG, the surface energy of the attachment substratum alone was insufficient to predict attachment. VCG was then employed to model attachment of C. marina to a series of SAMs varying systematically in the number of ethylene glycol residues present in the molecule; an identical series has previously been shown to vary dramatically in the number of cells attached as a function of the ethylene glycols present. Our results indicate that while VCG adequately models the interfacial tension between water and ethylene glycol SAMs in a manner that predicts bacterial attachment, ΔGadh as calculated by VCG neither qualitatively nor quantitatively reflects the attachment data. The VCG model thus fails to capture specific information regarding the interactions between the attaching bacteria, water, and the SAM. We show that while hydrogen-bond-accepting interactions are very well captured by this model, the ability of SAMs and bacteria to donate hydrogen bonds is not adequately described as the VCG model is currently applied. We also describe ways in which VCG fails to capture two specific biological aspects that may be important in bacterial attachment to surfaces: 1) specific interactions between molecules on the surface and bacteria, and 2) bacterial cell surface heterogeneities that may be important in differential attachment to different substrata.
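For reference, the standard relations of the VCG model referred to above can be written out. This is the textbook form of the model (apolar Lifshitz-van der Waals plus polar acid-base components, with B = bacterium, S = substratum, W = water), not equations transcribed from the dissertation:

```latex
% Surface tension components of phase i:
\gamma_i = \gamma_i^{LW} + \gamma_i^{AB},
\qquad \gamma_i^{AB} = 2\sqrt{\gamma_i^{+}\,\gamma_i^{-}}

% Interfacial tension between phases 1 and 2:
\gamma_{12} = \left(\sqrt{\gamma_1^{LW}} - \sqrt{\gamma_2^{LW}}\right)^{2}
 + 2\left(\sqrt{\gamma_1^{+}\gamma_1^{-}} + \sqrt{\gamma_2^{+}\gamma_2^{-}}
 - \sqrt{\gamma_1^{+}\gamma_2^{-}} - \sqrt{\gamma_1^{-}\gamma_2^{+}}\right)

% Free energy of bacterial adhesion across water:
\Delta G_{adh} = \gamma_{BS} - \gamma_{BW} - \gamma_{SW}
```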

  15. Novel mathematical model to estimate ball impact force in soccer.

    PubMed

    Iga, Takahito; Nunome, Hiroyuki; Sano, Shinya; Sato, Nahoko; Ikegami, Yasuo

    2017-11-22

    Assessing ball impact force during soccer kicking is important from both performance and chronic injury prevention perspectives. We aimed to verify the appropriateness of previous models used to estimate ball impact force and to propose an improved model that better captures the time history of ball impact force. A soccer ball was fired directly onto a force platform (10 kHz) at five realistic kicking ball velocities, and ball behaviour was captured by a high-speed camera (5,000 Hz). The time history of ball impact force was estimated using three existing models and two new models. A new mathematical model that took into account a rapid change in ball surface area and heterogeneous ball deformation showed a distinctive advantage in estimating the peak forces and their occurrence times and in reproducing the time history of ball impact forces more precisely, thereby reinforcing the possible mechanics of 'footballer's ankle'. Ball impact time was also systematically shortened as ball velocity increased, in contrast to the practical understanding of how faster ball velocities are produced; the aspect of ball contact time must therefore be considered carefully from a practical point of view.

  16. An efficient energy response model for liquid scintillator detectors

    NASA Astrophysics Data System (ADS)

    Lebanowski, Logan; Wan, Linyan; Ji, Xiangpan; Wang, Zhe; Chen, Shaomin

    2018-05-01

    Liquid scintillator detectors are playing an increasingly important role in low-energy neutrino experiments. In this article, we describe a generic energy response model of liquid scintillator detectors that provides energy estimations of sub-percent accuracy. This model fits a minimal set of physically-motivated parameters that capture the essential characteristics of scintillator response and that can naturally account for changes in scintillator over time, helping to avoid associated biases or systematic uncertainties. The model employs a one-step calculation and look-up tables, yielding an immediate estimation of energy and an efficient framework for quantifying systematic uncertainties and correlations.

  17. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: Evaluating the surface energy budget in a Regional Climate Model with automatic weather station observations

    NASA Astrophysics Data System (ADS)

    Steffensen Schmidt, Louise; Aðalgeirsdóttir, Guðfinna; Guðmundsson, Sverrir; Langen, Peter L.; Pálsson, Finnur; Mottram, Ruth; Gascoin, Simon; Björnsson, Helgi

    2017-04-01

    The evolution of the surface mass balance of Vatnajökull ice cap, Iceland, from 1981 to the present day is estimated by using the Regional Climate Model HIRHAM5 to simulate the surface climate. A new albedo parametrization is used for the simulation, which describes the albedo as an exponential decay with time. In addition, it utilizes a new background map of the ice albedo created from MODIS data. The simulation is validated against observed daily values of weather parameters from five Automatic Weather Stations (AWSs) from 2001-2014, as well as mass balance measurements from 1995-2014. The modelled albedo is overestimated at the AWS sites in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and the model not accounting for dust and ash deposition during dust storms and volcanic eruptions. A comparison with the specific summer, winter, and annual mass balance for all of Vatnajökull from 1995-2014 shows a good overall fit during the summer, with the model underestimating the balance by only 0.04 m w. eq. on average. The winter balance, on the other hand, is overestimated by 0.5 m w. eq. on average, mostly due to an overestimation of the precipitation at the highest areas of the ice cap. A simple correction of the accumulation at these points reduced the error to 0.15 m w. eq. The model captures the evolution of the specific mass balance well; for example, it captures an observed shift in the balance in the mid-1990s, which gives us confidence in the results for the entire model run. Our results show the importance of bare ice albedo for modelled mass balance and that processes not currently accounted for in RCMs, such as dust storms, are an important source of uncertainty in estimates of the snow melt rate.

  18. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    USGS Publications Warehouse

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
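The equidistribution idea behind such mesh relocation, concentrating grid points where a monitor function is large, can be sketched in a static one-dimensional form. This is a simplified de Boor-style analogue of the moving mesh PDE relocation, with an arc-length monitor and a tanh "sharp front" as stand-in data, not the paper's scheme:

```python
import numpy as np

def equidistribute(x, u, alpha=1.0):
    """One equidistribution step: relocate mesh nodes so each cell holds an
    equal share of the arc-length-type monitor M = sqrt(1 + alpha * u_x^2)."""
    ux = np.gradient(u, x)
    monitor = np.sqrt(1.0 + alpha * ux ** 2)
    # Cumulative integral of the monitor function (trapezoid rule).
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))))
    # Place the new nodes at equal increments of the cumulative monitor.
    targets = np.linspace(0.0, cum[-1], len(x))
    return np.interp(targets, cum, x)

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50.0 * (x - 0.5))          # sharp front at x = 0.5
x_new = equidistribute(x, u, alpha=100.0)
```

After the step, most of the 41 nodes cluster around the front at x = 0.5 while the domain endpoints stay fixed, which is how a moving front can be resolved with few grid points.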

  19. An Ensemble System Based on Hybrid EGARCH-ANN with Different Distributional Assumptions to Predict S&P 500 Intraday Volatility

    NASA Astrophysics Data System (ADS)

    Lahmiri, S.; Boukadoum, M.

    2015-10-01

    Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under normal, Student-t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, so as to achieve complementarity. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on mean absolute error and mean squared error, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intra-day volatility on one- and five-minute time horizons.
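The EGARCH recursion underlying each hybrid model can be sketched with Gaussian innovations and illustrative parameters. This is only the simulation direction of a generic EGARCH(1,1); the paper fits the model to data, also considers Student-t and GED innovations, and couples the result to a BPNN:

```python
import math
import random

def egarch_path(n, omega=-0.1, alpha=0.15, beta=0.95, gamma=-0.05, seed=7):
    """Simulate returns under EGARCH(1,1) with standard normal innovations z:
       log s2_t = omega + beta * log s2_{t-1}
                + alpha * (|z| - E|z|) + gamma * z   (gamma: leverage term)
    Parameters are illustrative, not fitted values."""
    rng = random.Random(seed)
    e_abs_z = math.sqrt(2.0 / math.pi)      # E|z| for a standard normal z
    log_s2 = omega / (1.0 - beta)           # start at the unconditional level
    returns, sigmas = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        sigma = math.exp(0.5 * log_s2)
        sigmas.append(sigma)
        returns.append(sigma * z)
        log_s2 = omega + beta * log_s2 + alpha * (abs(z) - e_abs_z) + gamma * z
    return returns, sigmas

rets, sigs = egarch_path(1000)
```

Because the recursion acts on log-variance, positivity of the conditional volatility is automatic, and a negative leverage term lets negative shocks raise volatility more than positive ones.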

  20. A biphasic model for bleeding in soft tissue

    NASA Astrophysics Data System (ADS)

    Chang, Yi-Jui; Chong, Kwitae; Eldredge, Jeff D.; Teran, Joseph; Benharash, Peyman; Dutson, Erik

    2017-11-01

    The modeling of blood passing through soft tissues in the body is important for medical applications. The current study aims to capture the effect of tissue swelling and the transport of blood under bleeding or hemorrhaging conditions. The soft tissue is considered as a non-static poro-hyperelastic material with liquid-filled voids. A biphasic formulation, effectively a generalization of Darcy's law, is utilized, treating the phases as occupying fractions of the same volume. The interaction between phases is captured through a Stokes-like friction force on their relative velocities and a pressure that penalizes deviations from volume fractions summing to unity. The soft tissue is modeled as a hyperelastic material with a typical J-shaped stress-strain curve, while blood is considered as a Newtonian fluid. The method of Smoothed Particle Hydrodynamics is used to discretize the conservation equations, based on the ease of treating free surfaces in the liquid. Simulations of swelling under acute hemorrhage and of draining under gravity and compression will be demonstrated. Ongoing progress in modeling of organ tissues under injuries and surgical conditions will be discussed.

  1. Selective attention moderates the relationship between attentional capture by signals of nondrug reward and illicit drug use.

    PubMed

    Albertella, Lucy; Copeland, Jan; Pearson, Daniel; Watson, Poppy; Wiers, Reinout W; Le Pelley, Mike E

    2017-06-01

    The current study examined whether cognitive control moderates the association between (non-drug) reward-modulated attentional capture and use of alcohol and other drugs (AOD). Participants were 66 university students who completed an assessment including questions about AOD use, a visual search task to measure value-modulated attentional capture, and a goal-directed selective attention task as a measure of cognitive control. The association between the effect of value-modulated attentional capture and illicit drug use was moderated by level of cognitive control. Among participants with lower levels of cognitive control, value-modulated attentional capture was associated with illicit drug use. This was not the case among participants with higher levels of cognitive control, who instead showed a significant association between illicit drug use and self-reported impulsivity, as well as alcohol use. These results provide support for models that view addictive behaviours as resulting from interaction and competition between automatic and more reflective processes. That is, the mechanisms that ultimately drive addictive behaviour may differ between people low or high in cognitive control. This has important implications for understanding the development and maintenance of substance use disorders and potentially their treatment and prevention. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Minimum required capture radius in a coplanar model of the aerial combat problem

    NASA Technical Reports Server (NTRS)

    Breakwell, J. V.; Merz, A. W.

    1977-01-01

    Coplanar aerial combat is modeled with constant speeds and specified turn rates. The minimum capture radius which will always permit capture, regardless of the initial conditions, is calculated. This 'critical' capture radius is also the maximum range which the evader can guarantee indefinitely if the initial range, for example, is large. A composite barrier is constructed which gives the boundary, at any heading, of relative positions for which the capture radius is less than critical.

  3. Estimating taxonomic diversity, extinction rates, and speciation rates from fossil data using capture-recapture models

    USGS Publications Warehouse

    Nichols, J.D.; Pollock, K.H.

    1983-01-01

    Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
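As a hedged illustration of the capture-recapture idea in this setting, the classic two-sample Chapman-corrected Lincoln-Petersen estimator (a textbook special case, not necessarily among the specific models the authors describe) treats taxa observed in one time interval as "marked" and estimates total richness from re-encounters:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator for a closed
    'population' of taxa: n1 taxa encountered in the first sampling interval,
    n2 in the second, m2 encountered in both.
    Returns the estimated total richness."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical counts: 50 taxa in interval 1, 40 in interval 2, 20 shared.
richness = chapman_estimate(50, 40, 20)
```

The estimator corrects for the fact that incomplete fossil preservation (encounter probability below one) makes raw counts underestimate true diversity.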

  4. Time-Series Approaches for Forecasting the Number of Hospital Daily Discharged Inpatients.

    PubMed

    Ting Zhu; Li Luo; Xinli Zhang; Yingkang Shi; Wenwu Shen

    2017-03-01

    For hospitals where decisions regarding acceptable rates of elective admissions are made in advance based on expected available bed capacity and emergency requests, accurate predictions of inpatient bed capacity are especially useful for capacity reservation purposes. Given the remaining unoccupied beds at the end of each day, the bed capacity of the next day can be obtained from forecasts of the number of patients discharged during the next day. The features of fluctuations in daily discharges, such as trend, seasonal cycles, special-day effects, and autocorrelation, complicate decision optimization, while time-series models can capture these features well. This research compares three models in generating forecasts of daily discharges: a model combining seasonal regression and ARIMA, a multiplicative seasonal ARIMA (MSARIMA) model, and a combinatorial model based on MSARIMA and weighted Markov chain models. The models are applied to three years of discharge data of an entire hospital. Several performance measures, such as the direction of the symmetry value, normalized mean squared error, and mean absolute percentage error, are utilized to capture under- and overprediction in model selection. The findings indicate that daily discharges can be forecast by using the proposed models. A number of important practical implications are discussed, such as the use of accurate forecasts in discharge planning, admission scheduling, and capacity reservation.
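As a minimal baseline for the weekly seasonal cycle the paper's models exploit, a seasonal-naive forecast and the MAPE criterion can be sketched as follows (the discharge counts are synthetic, not the paper's data):

```python
def seasonal_naive_forecast(history, season=7, horizon=7):
    """Forecast the next `horizon` daily discharge counts by repeating the value
    observed one seasonal cycle (here, one week) earlier -- the simple baseline
    that seasonal-ARIMA-style models are meant to beat."""
    return [history[-season + (h % season)] for h in range(horizon)]

def mape(actual, forecast):
    """Mean absolute percentage error, one of the selection criteria above."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# Two synthetic weeks of daily discharge counts with a weekly pattern.
history = [32, 28, 30, 35, 40, 22, 18, 33, 27, 31, 36, 41, 21, 19]
fc = seasonal_naive_forecast(history, season=7, horizon=7)
```

Comparing MAPE (and asymmetric under/over-prediction measures) between this baseline and the richer models is the model-selection pattern the abstract describes.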

  5. Unified Deep Learning Architecture for Modeling Biology Sequence.

    PubMed

    Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang

    2017-10-09

    Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences using traditional sequencing models, characteristics, such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences, usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory or a gated recurrent unit to capture long-range interactions by designing the optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm to support the training of sequence models capable of processing variable-length sequences. Additionally, the merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through the use of a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with our results indicating the ability of the model to obtain predictions of protein residue interactions that exceeded the accuracy of current popular approaches by 10% based on multiple benchmarks.

  6. Characterizing and Assessing a Large-Scale Software Maintenance Organization

    NASA Technical Reports Server (NTRS)

    Briand, Lionel; Melo, Walcelio; Seaman, Carolyn; Basili, Victor

    1995-01-01

    One important component of a software process is the organizational context in which the process is enacted. This component is often missing or incomplete in current process modeling approaches. One technique for modeling this perspective is the Actor-Dependency (AD) Model. This paper reports on a case study which used this approach to analyze and assess a large software maintenance organization. Our goal was to identify the approach's strengths and weaknesses while providing practical recommendations for improvement and research directions. The AD model was found to be very useful in capturing the important properties of the organizational context of the maintenance process, and aided in the understanding of the flaws found in this process. However, a number of opportunities for extending and improving the AD model were identified. Among others, there is a need to incorporate quantitative information to complement the qualitative model.

  7. A Computer Model of Insect Traps in a Landscape

    NASA Astrophysics Data System (ADS)

    Manoukis, Nicholas C.; Hall, Brian; Geib, Scott M.

    2014-11-01

    Attractant-based trap networks are important elements of invasive insect detection, pest control, and basic research programs. We present a landscape-level, spatially explicit model of trap networks, focused on detection, that incorporates variable attractiveness of traps and a movement model for insect dispersion. We describe the model and validate its behavior using field trap data on networks targeting two species, Ceratitis capitata and Anoplophora glabripennis. Our model will assist efforts to optimize trap networks by 1) introducing an accessible and realistic mathematical characterization of the operation of a single trap that lends itself easily to parametrization via field experiments and 2) allowing direct quantification and comparison of sensitivity between trap networks. Results from the two case studies indicate that the relationship between number of traps and their spatial distribution and capture probability under the model is qualitatively dependent on the attractiveness of the traps, a result with important practical consequences.
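
The core of such a network-sensitivity calculation can be sketched generically. The exponential distance decay, the independence between traps, and every number below are illustrative assumptions, not the parametrization used by the authors:

```python
import math

def detection_probability(insect_xy, traps, lam=0.05):
    """Probability that an insect at insect_xy is captured by at least
    one trap, assuming (hypothetically) each trap's capture probability
    decays exponentially with distance and traps act independently:
        p_i = a_i * exp(-lam * d_i),   P = 1 - prod(1 - p_i)
    traps: list of (x, y, attractiveness) tuples.
    """
    x, y = insect_xy
    p_miss = 1.0
    for tx, ty, a in traps:
        d = math.hypot(x - tx, y - ty)
        p_miss *= 1.0 - a * math.exp(-lam * d)
    return 1.0 - p_miss

traps = [(0, 0, 0.8), (50, 0, 0.8)]
print(round(detection_probability((25, 0), traps), 3))
```

Under a formulation like this, network sensitivity depends jointly on trap spacing and attractiveness, consistent with the qualitative dependence the abstract reports.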

  8. Capture Their Attention: Capturing Lessons Using Screen Capture Software

    ERIC Educational Resources Information Center

    Drumheller, Kristina; Lawler, Gregg

    2011-01-01

    When students miss classes for university activities such as athletic and academic events, they inevitably miss important class material. Students can get notes from their peers or visit professors to find out what they missed, but when students miss new and challenging material these steps are sometimes not enough. Screen capture and recording…

  9. Evaluation of XHVRB for Capturing Explosive Shock Desensitization

    NASA Astrophysics Data System (ADS)

    Tuttle, Leah; Schmitt, Robert; Kittell, Dave; Harstad, Eric

    2017-06-01

    Explosive shock desensitization phenomena have been recognized for some time. It has been demonstrated that pressure-based reactive flow models do not adequately capture the basic nature of this explosive behavior. Historically, replacing the local pressure with a shock-captured pressure has dramatically improved numerical modeling approaches. Models based on shock pressure or functions of entropy have recently been developed. A pseudo-entropy-based formulation using the History Variable Reactive Burn model, as proposed by Starkenberg, was implemented in the Eulerian shock physics code CTH, and improvements were made to the shock-capturing algorithm. The model is demonstrated to reproduce single-shock behavior consistent with published pop-plot data. It is also demonstrated to capture a desensitization effect consistent with available literature data, and to qualitatively capture dead zones from desensitization in 2D corner-turning experiments. This model shows promise for use in modeling and simulation problems relevant to desensitization phenomena. Issues with the current implementation are identified, and future work is proposed for improving and expanding model capabilities. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  10. Demographic Characteristics of a Maine Woodcock Population and Effects of Habitat Management

    USGS Publications Warehouse

    Dwyer, T.J.; Sepik, G.F.; Derleth, E.L.; McAuley, D.G.

    1988-01-01

    A population of American woodcock (Scolopax minor) was studied on a 3,401-ha area of the Moosehorn National Wildlife Refuge in northeastern Maine from 1976 through 1985. During 1976-83, from 4 to 64 clearcuts were created each year, opening up large contiguous blocks of forest. A combination of mist nets, ground traps, nightlighting techniques, and trained dogs was used to capture and band 1,884 birds during the first 5 years. Capture and recapture data (totaling 3,009 observations) were used with both demographically closed and open population models to estimate population size and, for open population models, summer survival. Flying young, especially young males, represented the greatest proportion of all captures; analysis showed that young males were more prone to capture than young females. Male courtship began about 24 March each year, usually when there was still snow in wooded areas. Males ~2 years old dominated singing grounds during April each year, but this situation changed and first-year males dominated singing grounds in May. Singing males shifted from older established singing grounds to new clearcuts soon after we initiated forest management. Many males were subdominant at singing grounds despite an abundance of unoccupied openings. Three hundred adult females were captured and, except for 1978, the majority were ~2 years old. The year in which female homing rate was lowest (1979) was preceded by the year with the largest number of 1-year-old brood female captures and a summer drought. Summer survival of young was lowest in 1978 and was attributed to summer drought. The year 1979 had an abnormally cool and wet spring and was the poorest for production of young. Capture ratios of young-to-adult females obtained by nightlighting could be used to predict production on our study area. Closed population model estimates did not seem to fit either young or adult data sets well. Instead, a partially open capture-recapture model that allowed death but no immigration seemed to fit best. Only the number of males in the population changed significantly during the study. An increase from 88 males in 1976 to 156 in 1980 was attributed to habitat management. Singing-male surveys on our area detected little change in the number of singing males, but our independent population estimates from mark-recapture data showed a larger total male population by 1980. Annual density estimates for all age and sex classes ranged from 19 to 25 birds/100 ha. A hypothesis on the breeding system of the American woodcock is presented, as well as a discussion of management implications, including the importance of creating high-quality habitat on private lands.

  11. Reasoning about energy in qualitative simulation

    NASA Technical Reports Server (NTRS)

    Fouche, Pierre; Kuipers, Benjamin J.

    1992-01-01

    While the possible behaviors of a mechanism that are consistent with an incomplete state of knowledge can be predicted through qualitative modeling and simulation, spurious behaviors, corresponding to no solution of any ordinary differential equation consistent with the model, may be generated. The present method for energy-related reasoning eliminates an important source of spurious behaviors, as demonstrated by its application to a nonlinear, proportional-integral controlled system. It is shown that qualitative properties of such a system, including stability and zero-offset control, are captured by the simulation.

  12. Time-domain simulation of nonlinear radiofrequency phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, Thomas G.; Austin, Travis M.; Smithe, David N.

    Nonlinear effects associated with the physics of radiofrequency wave propagation through a plasma are investigated numerically in the time domain, using both fluid and particle-in-cell (PIC) methods. We find favorable comparisons between parametric decay instability scenarios observed on the Alcator C-MOD experiment [J. C. Rost, M. Porkolab, and R. L. Boivin, Phys. Plasmas 9, 1262 (2002)] and PIC models. The capability of fluid models to capture important nonlinear effects characteristic of wave-plasma interaction (frequency doubling, cyclotron resonant absorption) is also demonstrated.

  13. Time-domain simulation of nonlinear radiofrequency phenomena

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Austin, Travis M.; Smithe, David N.; Loverich, John; Hakim, Ammar H.

    2013-01-01

    Nonlinear effects associated with the physics of radiofrequency wave propagation through a plasma are investigated numerically in the time domain, using both fluid and particle-in-cell (PIC) methods. We find favorable comparisons between parametric decay instability scenarios observed on the Alcator C-MOD experiment [J. C. Rost, M. Porkolab, and R. L. Boivin, Phys. Plasmas 9, 1262 (2002)] and PIC models. The capability of fluid models to capture important nonlinear effects characteristic of wave-plasma interaction (frequency doubling, cyclotron resonant absorption) is also demonstrated.

  14. Factors Affecting Acceptance & Use of ReWIND: Validating the Extended Unified Theory of Acceptance and Use of Technology

    ERIC Educational Resources Information Center

    Nair, Pradeep Kumar; Ali, Faizan; Leong, Lim Chee

    2015-01-01

    Purpose: This study aims to explain the factors affecting students' acceptance and usage of a lecture capture system (LCS)--ReWIND--in a Malaysian university based on the extended unified theory of acceptance and use of technology (UTAUT2) model. Technological advances have become an important feature of universities' plans to improve the…

  15. Wind Turbine Optimization with WISDEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykes, Katherine L; Damiani, Rick R; Graf, Peter A

    This presentation for the Fourth Wind Energy Systems Engineering Workshop explains the NREL wind energy systems engineering initiative-developed analysis platform and research capability to capture important system interactions to achieve a better understanding of how to improve system-level performance and achieve system-level cost reductions. Topics include Wind-Plant Integrated System Design and Engineering Model (WISDEM) and multidisciplinary design analysis and optimization.

  16. Metabolic and physiochemical responses to a whole-lake experimental increase in dissolved organic carbon in a north-temperate lake

    Treesearch

    Jacob A. Zwart; Nicola Craig; Patrick T. Kelly; Stephen D. Sebestyen; Christopher T. Solomon; Brian C. Weidel; Stuart E. Jones

    2016-01-01

    Over the last several decades, many lakes globally have increased in dissolved organic carbon (DOC), calling into question how lake functions may respond to increasing DOC. Unfortunately, our basis for making predictions is limited to spatial surveys, modeling, and laboratory experiments, which may not accurately capture important whole-ecosystem processes. In this...

  17. How Low Can You Go? The Importance of Quantifying Minimum Generation Levels for Renewable Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul L; Brinkman, Gregory L; Mai, Trieu T

    One of the significant limitations of solar and wind deployment is declining value caused by the limited correlation of renewable energy supply and electricity demand as well as limited flexibility of the power system. Limited flexibility can result from thermal and hydro plants that cannot turn off or reduce output due to technical or economic limits. These limits include the operating range of conventional thermal power plants, the need for process heat from combined heat and power plants, and restrictions on hydro unit operation. To appropriately analyze regional and national energy policies related to renewable deployment, these limits must be accurately captured in grid planning models. In this work, we summarize data sources and methods for U.S. power plants that can be used to capture minimum generation levels in grid planning tools, such as production cost models. We also provide case studies for two locations in the U.S. (California and Texas) that demonstrate the sensitivity of variable generation (VG) curtailment to grid flexibility assumptions, which shows the importance of analyzing (and documenting) minimum generation levels in studies of increased VG penetration.

  18. Detecting potential anomalies in projections of rainfall trends and patterns using human observations

    NASA Astrophysics Data System (ADS)

    Kohfeld, K. E.; Savo, V.; Sillmann, J.; Morton, C.; Lepofsky, D.

    2016-12-01

    Shifting precipitation patterns are a well-documented consequence of climate change, but their spatial variability is particularly difficult to assess. While the accuracy of global models has increased, specific regional changes in precipitation regimes are not well captured by these models. Typically, researchers who wish to detect trends and patterns in climatic variables, such as precipitation, use instrumental observations. In our study, we combined observations of rainfall by subsistence-oriented communities with several metrics of rainfall estimated from global instrumental records for comparable time periods (1955-2005). This comparison was aimed at identifying: 1) which rainfall metrics best match human observations of changes in precipitation; and 2) areas where local communities observe changes not detected by global models. The collated observations (~3800) made by subsistence-oriented communities covered 129 countries (~1830 localities). For comparable time periods, we saw substantial correspondence between instrumental records and human observations (66-77%) at the same locations, regardless of whether we considered trends in general rainfall, drought, or extreme rainfall. We observed a clustering of mismatches in two specific regions, possibly indicating climatic phenomena not completely captured by the currently available global models. Many human observations also indicated increased unpredictability in the start, end, duration, and continuity of the rainy seasons, all of which may hamper the performance of subsistence activities. We suggest that future instrumental metrics should capture this unpredictability of rainfall. This information would be important for thousands of subsistence-oriented communities in planning for, coping with, and adapting to climate change.

  19. Local and landscape scale factors influencing edge effects on woodland salamanders.

    PubMed

    Moseley, Kurtis R; Ford, W Mark; Edwards, John W

    2009-04-01

    We examined local and landscape-scale variable influence on the depth and magnitude of edge effects on woodland salamanders in mature mixed mesophytic and northern hardwood forest adjacent to natural gas well sites maintained as wildlife openings. We surveyed woodland salamander occurrence from June-August 2006 at 33 gas well sites in the Monongahela National Forest, West Virginia. We used an information-theoretic approach to test nine a priori models explaining landscape-scale effects on woodland salamander capture proportion within 20 m of field edge. Salamander capture proportion was greater within 0-60 m than 61-100 m of field edges. Similarly, available coarse woody debris proportion was greater within 0-60 m than 61-100 m of field edge. Our ASPECT model, which incorporated the single variable aspect, received the strongest support for explaining landscape-scale effects on salamander capture proportion within 20 m of opening edge. The ASPECT model indicated that fewer salamanders occurred within 20 m of opening edges on drier, hotter southwestern aspects than on moister, cooler northeastern aspects. Our results suggest that forest habitat adjacent to maintained edges and with sufficient cover still can provide suitable habitat for woodland salamander species in central Appalachian mixed mesophytic and northern hardwood forests. Additionally, our modeling results support the contention that edge effects are more severe on southwesterly aspects. These results underscore the importance of distinguishing among different edge types as well as placing survey locations within a landscape context when investigating edge impacts on woodland salamanders.

  20. Modeling and Measurements of Multiphase Flow and Bubble Entrapment in Steel Continuous Casting

    NASA Astrophysics Data System (ADS)

    Jin, Kai; Thomas, Brian G.; Ruan, Xiaoming

    2016-02-01

    In steel continuous casting, argon gas is usually injected to prevent clogging, but the bubbles also affect the flow pattern, and may become entrapped to form defects in the final product. To investigate this behavior, plant measurements were conducted, and a computational model was applied to simulate turbulent flow of the molten steel and the transport and capture of argon gas bubbles into the solidifying shell in a continuous slab caster. First, the flow field was solved with an Eulerian k- ɛ model of the steel, which was two-way coupled with a Lagrangian model of the large bubbles using a discrete random walk method to simulate their turbulent dispersion. The flow predicted on the top surface agreed well with nailboard measurements and indicated strong cross flow caused by biased flow of Ar gas due to the slide-gate orientation. Then, the trajectories and capture of over two million bubbles (25 μm to 5 mm diameter range) were simulated using two different capture criteria (simple and advanced). Results with the advanced capture criterion agreed well with measurements of the number, locations, and sizes of captured bubbles, especially for larger bubbles. The relative capture fraction of 0.3 pct was close to the measured 0.4 pct for 1 mm bubbles and occurred mainly near the top surface. About 85 pct of smaller bubbles were captured, mostly deeper down in the caster. Due to the biased flow, more bubbles were captured on the inner radius, especially near the nozzle. On the outer radius, more bubbles were captured near the narrow face. The model presented here is an efficient tool to study the capture of bubbles and inclusion particles in solidification processes.

  1. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

    Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is used to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.
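
The defining feature of a CVNN is that weights, inputs, and activations are complex numbers. A minimal sketch of a single forward pass, using a split-type activation (tanh applied separately to the real and imaginary parts, one common CVNN choice); the activation choice and all values are illustrative, not taken from this paper:

```python
import math

def cvnn_neuron(inputs, weights, bias):
    """Forward pass of one complex-valued neuron: a complex weighted
    sum followed by a split activation on the real/imaginary parts."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return complex(math.tanh(z.real), math.tanh(z.imag))

out = cvnn_neuron([1 + 2j, 0.5 - 1j], [0.3 + 0.1j, -0.2 + 0.4j], 0.1 + 0j)
print(out)
```

An ABC-style optimizer would then search over the real and imaginary components of these weights to minimize forecasting error on the traffic series.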

  2. Hidden Markov models of biological primary sequence information.

    PubMed Central

    Baldi, P; Chauvin, Y; Hunkapiller, T; McClure, M A

    1994-01-01

    Hidden Markov model (HMM) techniques are used to model families of biological sequences. A smooth and convergent algorithm is introduced to iteratively adapt the transition and emission parameters of the models from the examples in a given family. The HMM approach is applied to three protein families: globins, immunoglobulins, and kinases. In all cases, the models derived capture the important statistical characteristics of the family and can be used for a number of tasks, including multiple alignments, motif detection, and classification. For K sequences of average length N, this approach yields an effective multiple-alignment algorithm that requires O(KN²) operations, linear in the number of sequences. PMID:8302831
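
The scoring step underlying such classification is the standard forward algorithm, which computes the likelihood of a sequence under a discrete HMM. The toy two-state model below is illustrative, not one of the protein-family models from the paper:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: P(obs | model) for a discrete HMM.
    Runs in O(len(obs) * len(states)**2) time."""
    # initialize with start probabilities times first emission
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # recurse: sum over predecessor states, then emit
        alpha = {
            s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

# toy two-state model over a binary alphabet (all numbers illustrative)
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
print(forward(("x", "y"), states, start_p, trans_p, emit_p))
```

Sequence-family HMMs add match/insert/delete state structure on top of this recursion, which is what yields the O(KN²) multiple-alignment cost cited in the abstract.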

  3. Unified Static and Dynamic Recrystallization Model for the Minerals of Earth's Mantle Using Internal State Variable Model

    NASA Astrophysics Data System (ADS)

    Cho, H. E.; Horstemeyer, M. F.; Baumgardner, J. R.

    2017-12-01

    In this study, we present an internal state variable (ISV) constitutive model developed to capture static and dynamic recrystallization and grain size progression in a unified manner. This method accurately captures temperature, pressure, and strain-rate effects on recrystallization and grain size. Because this ISV approach treats dislocation density, recrystallization volume fraction, and grain size as internal variables, the model can simultaneously track their history during deformation with unprecedented realism. Based on this deformation history, the method can capture realistic mechanical properties, such as stress-strain behavior, in the microstructure-mechanical property relationship. Both the transient grain size during deformation and the steady-state grain size of dynamic recrystallization can be predicted from the history variable of recrystallization volume fraction. Furthermore, because the model can simultaneously handle plasticity and creep behaviors (unified creep-plasticity), the mechanisms related to dislocation dynamics (static recovery or diffusion creep, dynamic recovery or dislocation creep, and hardening) can also be captured. To model these comprehensive mechanical behaviors, the mathematical formulation includes elasticity to evaluate yield stress, work hardening to treat plasticity, creep, and the unified recrystallization and grain size progression. Because pressure sensitivity is especially important for mantle minerals, we developed a yield function combining Drucker-Prager shear failure and von Mises yield surfaces to model the pressure-dependent yield stress, while using pressure-dependent work hardening and creep terms. Using these formulations, we calibrated the model against experimental data for the minerals acquired from the literature. We also calibrated it against experimental data for metals to show the general applicability of our model. 
Understanding of realistic mantle dynamics can only be acquired once the various deformation regimes and mechanisms are comprehensively modeled. The results of this study demonstrate that this ISV model is a good modeling candidate to help reveal the realistic dynamics of the Earth's mantle.

  4. Variability of Phenology and Fluxes of Water and Carbon with Observed and Simulated Soil Moisture in the Ent Terrestrial Biosphere Model (Ent TBM Version 1.0.1.0.0)

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Moorcroft, P. R.; Aleinov, Igor; Puma, M. J.; Kiang, N. Y.

    2015-01-01

    The Ent Terrestrial Biosphere Model (Ent TBM) is a mixed-canopy dynamic global vegetation model developed specifically for coupling with land surface hydrology and general circulation models (GCMs). This study describes the leaf phenology submodel implemented in the Ent TBM version 1.0.1.0.0 coupled to the carbon allocation scheme of the Ecosystem Demography (ED) model. The phenology submodel adopts a combination of responses to temperature (growing degree days and frost hardening), soil moisture (linearity of stress with relative saturation) and radiation (light length). Growth of leaves, sapwood, fine roots, stem wood and coarse roots is updated on a daily basis. We evaluate the performance in reproducing observed leaf seasonal growth as well as water and carbon fluxes for four plant functional types at five Fluxnet sites, with both observed and prognostic hydrology, and observed and prognostic seasonal leaf area index. The phenology submodel is able to capture the timing and magnitude of leaf-out and senescence for temperate broadleaf deciduous forest (Harvard Forest and Morgan-Monroe State Forest, US), C3 annual grassland (Vaira Ranch, US) and California oak savanna (Tonzi Ranch, US). For evergreen needleleaf forest (Hyytiälä, Finland), the phenology submodel captures the effect of frost hardening of photosynthetic capacity on seasonal fluxes and leaf area. We address the importance of customizing parameter sets of vegetation soil moisture stress response to the particular land surface hydrology scheme. We identify model deficiencies that reveal important dynamics and parameter needs.
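
Growing degree days, one of the temperature responses named above, is a simple accumulation over daily temperatures. This sketch uses an illustrative base temperature of 10 °C, not a parameter value from the Ent TBM:

```python
def growing_degree_days(daily_tmin, daily_tmax, base=10.0):
    """Accumulate growing degree days (GDD), the temperature index
    commonly used by phenology submodels to trigger leaf-out.
    Each day contributes max(0, mean temperature - base)."""
    total = 0.0
    for tmin, tmax in zip(daily_tmin, daily_tmax):
        total += max(0.0, (tmin + tmax) / 2.0 - base)
    return total

# three days of illustrative min/max temperatures in degrees C
print(growing_degree_days([8, 10, 12], [18, 20, 24]))
```

Leaf-out is then typically triggered once the accumulated GDD crosses a plant-functional-type-specific threshold.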

  5. Development of a microscale land use regression model for predicting NO2 concentrations at a heavy trafficked suburban area in Auckland, NZ.

    PubMed

    Weissert, L F; Salmond, J A; Miskell, G; Alavi-Shoshtari, M; Williams, D E

    2018-04-01

    Land use regression (LUR) analysis has become a key method to explain air pollutant concentrations at unmeasured sites at city or country scales, but little is known about the applicability of LUR at microscales. We present a microscale LUR model developed for a heavily trafficked section of road in Auckland, New Zealand. We also test the within-city transferability of LUR models developed at different spatial scales (local scale and city scale). Nitrogen dioxide (NO2) was measured during summer at 40 sites and a LUR model was developed based on standard criteria. The results showed that LUR models are able to capture microscale variability, with the model explaining 66% of the variability in NO2 concentrations. Predictor variables identified at this scale were street width, distance to major road, presence of awnings, and number of bus stops, with the latter three also being important determinants at the local scale. This highlights the importance of street and building configurations for individual exposure at the street level. However, within-city transferability was limited, with the number of bus stops being the only significant predictor variable at all spatial scales and locations tested, indicating the strong influence of diesel emissions related to bus traffic. These findings show that air quality monitoring at high spatial density within cities is necessary to capture small-scale variability in NO2 concentrations at the street level and to assess individual exposure to traffic-related air pollutants. Copyright © 2017. Published by Elsevier B.V.
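
At its core, a LUR model is an ordinary least squares regression of measured concentrations on land-use predictors. A self-contained sketch solving the normal equations; the predictor names and all numbers are hypothetical, not data or coefficients from this study:

```python
def ols(X, y):
    """Ordinary least squares fit of y ~ intercept + X, solving the
    normal equations (X'X) b = X'y by Gauss-Jordan elimination."""
    rows = [[1.0] + list(x) for x in X]      # prepend intercept column
    k = len(rows[0])
    # build the augmented normal-equation matrix [X'X | X'y]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(rows, y))]
         for i in range(k)]
    for i in range(k):                        # Gauss-Jordan elimination
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for j in range(k):
            if j != i:
                factor = A[j][i]
                A[j] = [vj - factor * vi for vj, vi in zip(A[j], A[i])]
    return [row[-1] for row in A]             # [intercept, b1, b2, ...]

# hypothetical predictors per site: (street_width_m, n_bus_stops)
X = [(10, 0), (12, 1), (8, 2), (15, 0), (9, 3)]
y = [10.0, 13.0, 22.0, 5.0, 26.0]            # hypothetical NO2 values
b = ols(X, y)
print([round(v, 3) for v in b])
```

In practice, LUR development iterates this fit over many candidate land-use variables, keeping only predictors that meet preset sign and significance criteria.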

  6. Variability of phenology and fluxes of water and carbon with observed and simulated soil moisture in the Ent Terrestrial Biosphere Model (Ent TBM version 1.0.1.0.0)

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Moorcroft, P. R.; Aleinov, I.; Puma, M. J.; Kiang, N. Y.

    2015-12-01

    The Ent Terrestrial Biosphere Model (Ent TBM) is a mixed-canopy dynamic global vegetation model developed specifically for coupling with land surface hydrology and general circulation models (GCMs). This study describes the leaf phenology submodel implemented in the Ent TBM version 1.0.1.0.0 coupled to the carbon allocation scheme of the Ecosystem Demography (ED) model. The phenology submodel adopts a combination of responses to temperature (growing degree days and frost hardening), soil moisture (linearity of stress with relative saturation) and radiation (light length). Growth of leaves, sapwood, fine roots, stem wood and coarse roots is updated on a daily basis. We evaluate the performance in reproducing observed leaf seasonal growth as well as water and carbon fluxes for four plant functional types at five Fluxnet sites, with both observed and prognostic hydrology, and observed and prognostic seasonal leaf area index. The phenology submodel is able to capture the timing and magnitude of leaf-out and senescence for temperate broadleaf deciduous forest (Harvard Forest and Morgan-Monroe State Forest, US), C3 annual grassland (Vaira Ranch, US) and California oak savanna (Tonzi Ranch, US). For evergreen needleleaf forest (Hyytiälä, Finland), the phenology submodel captures the effect of frost hardening of photosynthetic capacity on seasonal fluxes and leaf area. We address the importance of customizing parameter sets of vegetation soil moisture stress response to the particular land surface hydrology scheme. We identify model deficiencies that reveal important dynamics and parameter needs.

  7. Deep Neural Networks as a Computational Model for Human Shape Sensitivity

    PubMed Central

    Op de Beeck, Hans P.

    2016-01-01

    Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional ‘deep’ neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic to human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated to form the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, which is so characteristic in human development. PMID:27124699

  8. Effects of Reduced Terrestrial LiDAR Point Density on High-Resolution Grain Crop Surface Models in Precision Agriculture

    PubMed Central

    Hämmerle, Martin; Höfle, Bernhard

    2014-01-01

    3D geodata play an increasingly important role in precision agriculture, e.g., for modeling in-field variations of grain crop features such as height or biomass. A common data capturing method is LiDAR, which often requires expensive equipment and produces large datasets. This study contributes to the improvement of 3D geodata capturing efficiency by assessing the effect of reduced scanning resolution on crop surface models (CSMs). The analysis is based on high-end LiDAR point clouds of grain crop fields of different varieties (rye and wheat) and nitrogen fertilization stages (100%, 50%, 10%). Lower scanning resolutions are simulated by keeping every n-th laser beam with increasing step widths n. For each iteration step, high-resolution CSMs (0.01 m² cells) are derived and assessed regarding their coverage relative to a seamless CSM derived from the original point cloud, standard deviation of elevation and mean elevation. Reducing the resolution to, e.g., 25% still leads to a coverage of >90% and a mean CSM elevation of >96% of measured crop height. CSM types (maximum elevation or 90th-percentile elevation) react differently to reduced scanning resolutions in different crops (variety, density). The results can help to assess the trade-off between CSM quality and minimum requirements regarding equipment and capturing set-up. PMID:25521383
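The thinning-and-regridding procedure described (keep every n-th point, rebuild the CSM, compare coverage) can be sketched in a few lines; the cell size and sample points below are illustrative, not the study's data:

```python
# Simulate reduced scanning resolution by keeping every n-th point, then
# rebuild a crop surface model (CSM) as the per-cell maximum elevation.
# Cell size and sample points are illustrative only.

def thin_points(points, n):
    """Keep every n-th point (n=1 keeps all)."""
    return points[::n]

def build_csm(points, cell=0.1):
    """Map (x, y, z) points to {(col, row): max elevation in that cell}."""
    csm = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        csm[key] = max(csm.get(key, z), z)
    return csm

points = [(0.05, 0.05, 1.2), (0.07, 0.02, 1.4),
          (0.15, 0.05, 1.1), (0.25, 0.08, 1.3)]
full = build_csm(points)
reduced = build_csm(thin_points(points, 2))
coverage = len(reduced) / len(full)   # fraction of cells still covered
```

Sweeping n and recording coverage and mean elevation reproduces the kind of quality-vs-resolution curve the study evaluates.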

  9. Kinetics of atrial repolarization alternans in a free-behaving ovine model.

    PubMed

    Jousset, Florian; Tenkorang, Joanna; Vesin, Jean-Marc; Pascale, Patrizio; Ruchat, Patrick; Rollin, Anne Garderes; Fromer, Martin; Narayan, Sanjiv M; Pruvot, Etienne

    2012-09-01

    Kinetics of Atrial Repolarization Alternans. Repolarization alternans (Re-ALT), a beat-to-beat alternation in action potential repolarization, promotes dispersion of repolarization, wavebreaks, and reentry. Recently, Re-ALT has been shown to play an important role in the transition from rapid pacing to atrial fibrillation (AF) in humans. The detailed kinetics of atrial Re-ALT, however, has not been reported so far. We developed a chronic free-behaving ovine pacing model to study the kinetics of atrial Re-ALT as a function of pacing rate. Thirteen sheep were chronically implanted with 2 pacemakers for the recording of broadband right atrial unipolar electrograms and delivery of rapid pacing protocols. Beat-to-beat differences in the atrial T-wave apex amplitude as a measure of Re-ALT and activation time were analyzed at incremental pacing rates until the effective refractory period (ERP) defined as stable 2:1 capture. Atrial Re-ALT appeared intermittently but without periodicity, and increased in amplitude as a function of pacing rate until ERP. Intermittent 2:1 atrial capture was observed at pacing cycle lengths 40 ms above ERP, and increased in duration as a function of pacing rate. Episodes of rapid pacing-induced AF were rare, and were preceded by Re-ALT or complex oscillations of atrial repolarization, but without intermittent capture. We show in vivo that atrial Re-ALT developed and increased in magnitude with rate until stable 2:1 capture. In rare instances where capture failure did not occur, Re-ALT and complex oscillations of repolarization surged and preceded AF initiation. (J Cardiovasc Electrophysiol, Vol. 23, pp. 1003-1012, September 2012). © 2012 Wiley Periodicals, Inc.

  10. Protein (multi-)location prediction: utilizing interdependencies via a generative model

    PubMed Central

    Shatkay, Hagit

    2015-01-01

    Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins; however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier, MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505
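The gain from modeling location inter-dependencies can be illustrated with a toy conditional table: the probability of a second location is conditioned on the first rather than assumed independent. All probabilities below are invented for illustration and are not MDLoc's learned parameters:

```python
# Toy contrast between a dependent and an independent multi-location model.
# All probabilities are invented for illustration.

P_LOC = {"cytoplasm": 0.5, "nucleus": 0.3, "mitochondrion": 0.2}

# P(second location | first location) -- the inter-dependency table
P_COND = {
    "cytoplasm": {"nucleus": 0.6, "mitochondrion": 0.1},
    "nucleus": {"cytoplasm": 0.7, "mitochondrion": 0.05},
}

def joint_prob(first, second):
    """P(first, second) under the dependent model."""
    return P_LOC[first] * P_COND.get(first, {}).get(second, 0.0)

def independent_prob(first, second):
    """P(first, second) if locations were treated as independent."""
    return P_LOC[first] * P_LOC[second]

dep = joint_prob("cytoplasm", "nucleus")        # 0.5 * 0.6  = 0.30
ind = independent_prob("cytoplasm", "nucleus")  # 0.5 * 0.3  = 0.15
```

Under the dependent model, cytoplasm-nucleus co-localization is twice as likely as independence would suggest, which is exactly the kind of structure an independence-assuming predictor misses.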

  11. Protein (multi-)location prediction: utilizing interdependencies via a generative model.

    PubMed

    Simha, Ramanuja; Briesemeister, Sebastian; Kohlbacher, Oliver; Shatkay, Hagit

    2015-06-15

    Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein's function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins; however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. We introduce a probabilistic generative model for protein localization, and develop a system based on it-which we call MDLoc-that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier, MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. © The Author 2015. Published by Oxford University Press.

  12. Exploring the importance of quantum effects in nucleation: The archetypical Ne_n case

    NASA Astrophysics Data System (ADS)

    Unn-Toc, Wesley; Halberstadt, Nadine; Meier, Christoph; Mella, Massimo

    2012-07-01

    The effect of quantum mechanics (QM) on the details of the nucleation process is explored employing Ne clusters as test cases due to their semi-quantal nature. In particular, we investigate the impact of quantum mechanics on both condensation and dissociation rates in the framework of the microcanonical ensemble. Using both classical trajectories and two semi-quantal approaches (zero point averaged dynamics, ZPAD, and Gaussian-based time dependent Hartree, G-TDH) to model cluster and collision dynamics, we simulate the dissociation and monomer capture for Ne_8 as a function of the cluster internal energy, impact parameter and collision speed. The results for the capture probability P_s(b) as a function of the impact parameter suggest that classical trajectories always underestimate capture probabilities with respect to ZPAD, albeit at most by 15%-20% in the cases we studied. They also do so in some important situations when using G-TDH. More interestingly, dissociation rates k_diss are grossly overestimated by classical mechanics, at least by one order of magnitude. We interpret both behaviours as mainly due to the reduced amount of kinetic energy available to a quantum cluster for a chosen total internal energy. We also find that the decrease in monomer dissociation energy due to zero point energy effects plays a key role in defining dissociation rates. In fact, semi-quantal and classical results for k_diss seem to follow a common "corresponding states" behaviour when the proper definitions of internal and dissociation energies are used in a transition state model estimation of the evaporation rate constants.

  13. A reaction-transport model for calcite precipitation and evaluation of infiltration fluxes in unsaturated fractured rock.

    PubMed

    Xu, Tianfu; Sonnenthal, Eric; Bodvarsson, Gudmundur

    2003-06-01

    The percolation flux in the unsaturated zone (UZ) is an important parameter addressed in site characterization and flow and transport modeling of the potential nuclear-waste repository at Yucca Mountain, NV, USA. The US Geological Survey (USGS) has documented hydrogenic calcite abundances in fractures and lithophysal cavities at Yucca Mountain to provide constraints on percolation fluxes in the UZ. The purpose of this study was to investigate the relationship between percolation flux and measured calcite abundances using reactive transport modeling. Our model considers the following essential factors affecting calcite precipitation: (1) infiltration, (2) the ambient geothermal gradient, (3) gaseous CO2 diffusive transport and partitioning in liquid and gas phases, (4) fracture-matrix interaction for water flow and chemical constituents, and (5) water-rock interaction. Over a bounding range of 2-20 mm/year infiltration rate, the simulated calcite distributions capture the trend in calcite abundances measured in a deep borehole (WT-24) by the USGS. The calcite is found predominantly in fractures in the welded tuffs, which is also captured by the model simulations. Simulations showed that from about 2 to 6 mm/year, the amount of calcite precipitated in the welded Topopah Spring tuff is sensitive to the infiltration rate. This dependence decreases at higher infiltration rates owing to a modification of the geothermal gradient from the increased percolation flux. The model also confirms the conceptual model for higher percolation fluxes in the fractures compared to the matrix in the welded units, and the significant contribution of Ca from water-rock interaction. This study indicates that reactive transport modeling of calcite deposition can yield important constraints on the unsaturated zone infiltration-percolation flux and provide useful insight into processes such as fracture-matrix interaction as well as conditions and parameters controlling calcite deposition.

  14. Blood-brain barrier-on-a-chip: Microphysiological systems that capture the complexity of the blood-central nervous system interface.

    PubMed

    Phan, Duc Tt; Bender, R Hugh F; Andrejecsk, Jillian W; Sobrino, Agua; Hachey, Stephanie J; George, Steven C; Hughes, Christopher Cw

    2017-11-01

    The blood-brain barrier is a dynamic and highly organized structure that strictly regulates the molecules allowed to cross the brain vasculature into the central nervous system. Blood-brain barrier pathology has been associated with a number of central nervous system diseases, including vascular malformations, stroke/vascular dementia, Alzheimer's disease, multiple sclerosis, and various neurological tumors including glioblastoma multiforme. There is a compelling need for representative models of this critical interface. Current research relies heavily on animal models (mostly mice) or on two-dimensional (2D) in vitro models, neither of which fully captures the complexities of the human blood-brain barrier. Physiological differences between humans and mice make translation to the clinic problematic, while monolayer cultures cannot capture the inherently three-dimensional (3D) nature of the blood-brain barrier, which includes close association of the abluminal side of the endothelium with astrocyte foot-processes and pericytes. Here we discuss the central nervous system diseases associated with blood-brain barrier pathology and recent advances in the development of novel 3D blood-brain barrier-on-a-chip systems that better mimic the physiological complexity and structure of the human blood-brain barrier, and provide an outlook on how these blood-brain barrier-on-a-chip systems can be used for central nervous system disease modeling. Impact statement The field of microphysiological systems is rapidly evolving as new technologies are introduced and our understanding of organ physiology develops. In this review, we focus on blood-brain barrier (BBB) models, with a particular emphasis on how they relate to neurological disorders such as Alzheimer's disease, multiple sclerosis, stroke, cancer, and vascular malformations. We emphasize the importance of capturing the three-dimensional nature of the brain and the unique architecture of the BBB, something that until recently had not been well modeled by in vitro systems. Our hope is that this review will provide a launch pad for new ideas and methodologies that can provide us with truly physiological BBB models capable of yielding new insights into the function of this critical interface.

  15. Infrastructure effects on estuarine wetlands increase their vulnerability to sea level rise

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jose; Saco, Patricia; Sandi, Steven; Saintilan, Neil; Riccardi, Gerardo

    2017-04-01

    At regional and global scales, coastal management and planning for future sea level rise scenarios is typically supported by modelling tools that predict the expected inundation extent. These tools rely on a number of simplifying assumptions that, in some cases, may result in important miscalculation of inundation effects. One such case is estuarine wetlands, where vegetation strongly depends on both the magnitude and the timing of inundation. Many coastal wetlands display flow restrictions due to infrastructure or drainage works, which produce alterations to inundation patterns that cannot be captured by conventional models. In this contribution we explore the effects of flow restrictions on inundation patterns under sea level rise conditions in estuarine wetlands. We use a spatially distributed dynamic wetland ecogeomorphological model that incorporates the effects of flow restrictions due to culverts, bridges and weirs, as well as vegetation, and also considers that vegetation changes as a consequence of increasing inundation. We also consider the ability of vegetation to capture sediment and produce accretion. We apply our model to an estuarine wetland in Australia and show that it predicts a much faster wetland loss due to sea level rise than conventional approaches.

  16. Beyond the standard two-film theory: Computational fluid dynamics simulations for carbon dioxide capture in a wetted wall column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.
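For contrast, the STFT estimate that the CFD model goes beyond reduces to a one-line calculation: a single liquid-side coefficient from a diffusivity and an assumed stagnant-film thickness, applied uniformly. The film thickness and concentrations below are rough illustrative assumptions, not measured values:

```python
# Standard two-film theory (STFT) estimate of the liquid-side mass transfer
# coefficient: k_L = D / delta, flux = k_L * (C_i - C_b).
# delta, C_interface and C_bulk are illustrative assumptions.

D_CO2 = 1.9e-9       # m^2/s, approximate CO2 diffusivity in water
delta = 5.0e-5       # m, assumed stagnant liquid film thickness
k_L = D_CO2 / delta  # m/s, one uniform coefficient -- the STFT limitation

C_interface = 30.0   # mol/m^3, CO2 at the gas-liquid interface
C_bulk = 5.0         # mol/m^3, CO2 in the bulk liquid
flux = k_L * (C_interface - C_bulk)   # mol/(m^2 s), uniform over the column
```

The abstract's point is that k_L is not actually uniform: local velocities, instabilities and geometry make it vary along the column, which only a resolved CFD model captures.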

  17. Beyond the standard two-film theory: Computational fluid dynamics simulations for carbon dioxide capture in a wetted wall column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.

  18. Analysis and synthesis of laughter

    NASA Astrophysics Data System (ADS)

    Sundaram, Shiva; Narayanan, Shrikanth

    2004-10-01

    There is much enthusiasm in the text-to-speech community for the synthesis of emotional and natural speech. One idea being proposed is to include emotion-dependent paralinguistic cues during synthesis to convey emotions effectively. This requires modeling and synthesis techniques for various cues for different emotions. Motivated by this, a technique to synthesize human laughter is proposed. Laughter is a complex mechanism of expression with high variability in types and usage in human-human communication. People have their own characteristic way of laughing. Laughter can be seen as a controlled/uncontrolled physiological process of a person resulting from an initial excitation in context. A parametric model based on damped simple harmonic motion is developed here to capture these diversities effectively while maintaining an individual's characteristics. The accuracy of the model is constrained by the limited laughter/speech data available from actual humans and by the need for ease of synthesis. Analysis techniques are also developed to determine the parameters of the model for a given individual or laughter type. Finally, the effectiveness of the model in capturing individual characteristics and naturalness, compared to real human laughter, has been analyzed. Through this, the factors involved in individual human laughter, and their importance, can be better understood.
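A damped simple harmonic oscillator of the kind the model is built on has a closed-form solution, which can serve as a crude amplitude envelope over the successive calls of a laugh bout. All parameter values below are illustrative, not fitted to any speaker:

```python
import math

# Damped simple harmonic motion: x(t) = A * exp(-zeta*omega*t) * cos(omega_d*t),
# used here as a toy amplitude envelope over a laugh bout. The amplitude,
# damping ratio and frequency are illustrative values only.

def damped_shm(t, amp=1.0, zeta=0.15, omega=2 * math.pi * 5.0):
    """Displacement of an underdamped (zeta < 1) oscillator at time t."""
    omega_d = omega * math.sqrt(1.0 - zeta ** 2)   # damped natural frequency
    return amp * math.exp(-zeta * omega * t) * math.cos(omega_d * t)

# Sample the envelope at the onsets of successive laugh "calls" (every 0.2 s):
envelope = [abs(damped_shm(0.2 * k)) for k in range(5)]
# Successive peaks decay roughly geometrically, mimicking a dying-out laugh;
# zeta and omega would be the per-individual parameters fitted from data.
```

Fitting the damping ratio and frequency per speaker is one plausible way such a parametric model could retain an individual's laughing style.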

  19. Beyond the standard two-film theory: Computational fluid dynamics simulations for carbon dioxide capture in a wetted wall column

    DOE PAGES

    Wang, Chao; Xu, Zhijie; Lai, Canhai; ...

    2018-03-27

    The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.

  20. Simply criminal: predicting burglars' occupancy decisions with a simple heuristic.

    PubMed

    Snook, Brent; Dhami, Mandeep K; Kavanagh, Jennifer M

    2011-08-01

    Rational choice theories of criminal decision making assume that offenders weight and integrate multiple cues when making decisions (i.e., are compensatory). We tested this assumption by comparing how well a compensatory strategy called Franklin's Rule captured burglars' decision policies regarding residence occupancy compared to a non-compensatory strategy (i.e., Matching Heuristic). Forty burglars each decided on the occupancy of 20 randomly selected photographs of residences (for which actual occupancy was known when the photo was taken). Participants also provided open-ended reports on the cues that influenced their decisions in each case, and then rated the importance of eight cues (e.g., deadbolt visible) over all decisions. Burglars predicted occupancy beyond chance levels. The Matching Heuristic was a significantly better predictor of burglars' decisions than Franklin's Rule, and cue use in the Matching Heuristic better corresponded to the cue ecological validities in the environment than cue use in Franklin's Rule. The most important cue in burglars' models was also the most ecologically valid or predictive of actual occupancy (i.e., vehicle present). The majority of burglars correctly identified the most important cue in their models, and the open-ended technique showed greater correspondence between self-reported and captured cue use than the rating over decision technique. Our findings support a limited rationality perspective to understanding criminal decision making, and have implications for crime prevention.
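The two decision strategies compared above can be sketched side by side: Franklin's Rule weights and sums all cues (compensatory), while the Matching Heuristic searches cues in order and decides on the first one that matches (non-compensatory). The cue names, weights and threshold below are illustrative stand-ins, not the study's fitted values:

```python
# Two policy-capturing models of burglars' occupancy decisions.
# Cue names, weights, search order and threshold are illustrative.

CUES = ["vehicle_present", "lights_on", "deadbolt_visible"]

def franklins_rule(cues, weights, threshold=0.5):
    """Compensatory: weighted sum of binary cue values vs. a threshold."""
    score = sum(weights[c] * cues[c] for c in CUES)
    return score >= threshold

def matching_heuristic(cues, search_order):
    """Non-compensatory: predict 'occupied' on the first cue that matches
    its critical value; default to 'unoccupied' if none does."""
    for cue in search_order:
        if cues[cue] == 1:
            return True
    return False

weights = {"vehicle_present": 0.6, "lights_on": 0.3, "deadbolt_visible": 0.1}

residence = {"vehicle_present": 1, "lights_on": 0, "deadbolt_visible": 0}
occupied_fr = franklins_rule(residence, weights)    # 0.6 >= 0.5
occupied_mh = matching_heuristic(residence, CUES)   # first cue matches

# The strategies can diverge: weaker cues sum below the threshold for
# Franklin's Rule, but the heuristic still fires on one matching cue.
residence2 = {"vehicle_present": 0, "lights_on": 1, "deadbolt_visible": 1}
```

The study's finding is that the one-cue-at-a-time strategy predicted burglars' judgments better, consistent with a limited-rationality account.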

  1. Quantifying the Contribution of Wind-Driven Linear Response to the Seasonal and Interannual Variability of AMOC Volume Transports Across 26.5°N

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; von Storch, J. S.; Haak, H.; Nakayama, K.; Marotzke, J.

    2014-12-01

    Surface wind stress is considered to be an important forcing of the seasonal and interannual variability of Atlantic Meridional Overturning Circulation (AMOC) volume transports. A recent study showed that even linear response to wind forcing captures observed features of the mean seasonal cycle. However, the study did not assess the contribution of wind-driven linear response in realistic conditions against the RAPID/MOCHA array observation or Ocean General Circulation Model (OGCM) simulations, because it applied a linear two-layer model to the Atlantic assuming constant upper layer thickness and density difference across the interface. Here, we quantify the contribution of wind-driven linear response to the seasonal and interannual variability of AMOC transports by comparing wind-driven linear simulations under realistic continuous stratification against the RAPID observation and OGCM (MPI-OM) simulations with 0.4° resolution (TP04) and 0.1° resolution (STORM). All the linear and MPI-OM simulations capture more than 60% of the variance in the observed mean seasonal cycle of the Upper Mid-Ocean (UMO) and Florida Strait (FS) transports, two components of the upper branch of the AMOC. The linear and TP04 simulations also capture 25-40% of the variance in the observed transport time series between Apr 2004 and Oct 2012; the STORM simulation does not capture the observed variance because of the stochastic signal in both datasets. Comparison of half-overlapping 12-month-long segments reveals some periods when the linear and TP04 simulations capture 40-60% of the observed variance, as well as other periods when the simulations capture only 0-20% of the variance. These results show that wind-driven linear response is a major contributor to the seasonal and interannual variability of the UMO and FS transports, and that its contribution varies on an interannual timescale, probably due to the variability of stochastic processes.

  2. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    USGS Publications Warehouse

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low (<0.5 L/min) pumping rates during well purging and sampling captures primarily lateral flow from the formation through the well-screened interval at a depth coincident with the pump intake. However, if the intake is adjacent to a low hydraulic conductivity part of the screened formation, this scenario will induce vertical groundwater flow to the pump intake from parts of the screened interval with high hydraulic conductivity. Because less formation water will initially be captured during pumping, a substantial volume of water already in the well (preexisting screen water or screen storage) will be captured during this initial time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of the time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which were based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of arrival time of formation water, which has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
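The in-well vertical travel time idea above admits a simple volumetric sketch: treat the well as a pipe, so the upward velocity is the pumping rate over the casing cross-section, and the travel time is distance over velocity. The well radius, intake offset and pumping rate below are illustrative assumptions, and the study's analytical model additionally accounts for layered inflow:

```python
import math

# Rough in-well vertical travel time from a high-conductivity inflow zone up
# to the pump intake, treating the well as a pipe: v = Q / (pi * r^2),
# t = L / v. Dimensions and pumping rate are illustrative assumptions only.

def vertical_travel_time_min(distance_m, radius_m, pump_rate_l_min):
    """Minutes for inflowing formation water to reach the pump intake."""
    area_m2 = math.pi * radius_m ** 2        # casing cross-sectional area
    q_m3_min = pump_rate_l_min / 1000.0      # L/min -> m^3/min
    return distance_m * area_m2 / q_m3_min

# 5 cm diameter well, intake 2 m above the productive layer, 0.4 L/min:
t_min = vertical_travel_time_min(2.0, 0.025, 0.4)
# Note this equals the in-well (screen storage) volume over that 2 m divided
# by the pumping rate: purging must at least exchange the preexisting water.
```

Even this crude estimate shows why low-flow purging of a well with a poorly placed intake can take tens of minutes before formation water arrives, consistent with field parameter stabilization times.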

  3. Assessing the usefulness of the photogrammetric method in the process of capturing data on parcel boundaries

    NASA Astrophysics Data System (ADS)

    Benduch, Piotr; Pęska-Siwik, Agnieszka

    2017-06-01

    A parcel is the most important object of the real estate cadastre. Its primary spatial attributes are its boundaries, which determine the extent of property rights. Capturing data on boundaries should be performed in a way that ensures sufficiently high accuracy and reliability. In recent years, as part of the project "ZSIN - Construction of Integrated Real Estate Information System - Stage I", actions were taken in the territories of the participating districts to modernize the register of land and buildings. In many cases, this process was carried out based on photogrammetric materials, which applicable regulations allow. This paper, drawing on documentation from the National Geodetic and Cartographic Documentation Center and on the authors' own surveys, attempts to assess the applicability of the photogrammetric method to capturing data on the boundaries of cadastral parcels. The scope of the research, most importantly, included the accuracy with which the position of a boundary point could be determined using photogrammetric surveys carried out on a terrain model created from processed aerial photographs. The article demonstrates how this information is recorded in the cadastral database, as well as the resulting legal consequences. Moreover, the level of reliability of the entered values of selected attributes of boundary points was assessed.

  4. The utility of live video capture to enhance debriefing following transcatheter aortic valve replacement.

    PubMed

    Seamans, David P; Louka, Boshra F; Fortuin, F David; Patel, Bhavesh M; Sweeney, John P; Lanza, Louis A; DeValeria, Patrick A; Ezrre, Kim M; Ramakrishna, Harish

    2016-10-01

    The surgical and procedural specialties are continually evolving their methods to include more complex and technically difficult cases. These cases can be longer and incorporate multiple teams in a different model of operating room synergy. Patients are frequently older, with comorbidities adding to the complexity of these cases. Recording of this environment has become more feasible recently with advancements in the video and audio capture systems often used in the simulation realm. We began using live capture to record a new procedure shortly after starting these cases in our institution. This has provided continued assessment and evaluation of live procedures. The goal was to improve human factors and situational challenges through review and debriefing. B-Line Medical's LiveCapture video system was used to record successive transcatheter aortic valve replacement (TAVR) procedures in our cardiac catheterization laboratory. An illustrative case is used to discuss analysis and debriefing of the case using this system. An illustrative case is presented that resulted in long-term changes to our approach to these cases. The video capture documented rare events during one of our TAVR procedures. Analysis and debriefing led to definitive changes in our practice. While there are hurdles to the use of this technology in every institution, the ongoing use of video capture, analysis, and debriefing may play an important role in the future of patient safety and human factors analysis in the operating environment.

  5. The utility of live video capture to enhance debriefing following transcatheter aortic valve replacement

    PubMed Central

    Seamans, David P.; Louka, Boshra F.; Fortuin, F. David; Patel, Bhavesh M.; Sweeney, John P.; Lanza, Louis A.; DeValeria, Patrick A.; Ezrre, Kim M.; Ramakrishna, Harish

    2016-01-01

    Background: The surgical and procedural specialties are continually evolving their methods to include more complex and technically difficult cases. These cases can be longer and incorporate multiple teams in a different model of operating room synergy. Patients are frequently older, with comorbidities adding to the complexity of these cases. Recording of this environment has become more feasible recently with advancements in the video and audio capture systems often used in the simulation realm. Aims: We began using live capture to record a new procedure shortly after starting these cases in our institution. This has provided continued assessment and evaluation of live procedures. The goal was to improve human factors and situational challenges through review and debriefing. Setting and Design: B-Line Medical's LiveCapture video system was used to record successive transcatheter aortic valve replacement (TAVR) procedures in our cardiac catheterization laboratory. An illustrative case is used to discuss analysis and debriefing of the case using this system. Results and Conclusions: An illustrative case is presented that resulted in long-term changes to our approach to these cases. The video capture documented rare events during one of our TAVR procedures. Analysis and debriefing led to definitive changes in our practice. While there are hurdles to the use of this technology in every institution, the ongoing use of video capture, analysis, and debriefing may play an important role in the future of patient safety and human factors analysis in the operating environment. PMID:27762242

  6. Agent-based modeling: a new approach for theory building in social psychology.

    PubMed

    Smith, Eliot R; Conrey, Frederica R

    2007-02-01

    Most social and psychological phenomena occur not as the result of isolated decisions by individuals but rather as the result of repeated interactions between multiple individuals over time. Yet the theory-building and modeling techniques most commonly used in social psychology are less than ideal for understanding such dynamic and interactive processes. This article describes an alternative approach to theory building, agent-based modeling (ABM), which involves simulation of large numbers of autonomous agents that interact with each other and with a simulated environment and the observation of emergent patterns from their interactions. The authors believe that the ABM approach is better able than prevailing approaches in the field, variable-based modeling (VBM) techniques such as causal modeling, to capture the types of complex, dynamic, interactive processes so important in the social world. The article elaborates several important contrasts between ABM and VBM and offers specific recommendations for learning more and applying the ABM approach.
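
    As a concrete illustration of the ABM approach described above, the following minimal sketch (not from the article; the update rule and all parameters are invented for illustration) simulates autonomous agents that repeatedly interact in pairs, producing an emergent pattern of opinion clusters that no single agent's rule prescribes:

```python
import random

def run_opinion_abm(n_agents=100, steps=2000, seed=42):
    """Minimal bounded-confidence opinion model: each step, two random
    agents compare opinions and, if close enough, average them."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    bound = 0.2  # agents ignore partners whose opinion differs by more
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and abs(opinions[i] - opinions[j]) < bound:
            midpoint = (opinions[i] + opinions[j]) / 2
            opinions[i] = opinions[j] = midpoint
    return opinions

# The emergent pattern (a few opinion clusters rather than consensus)
# is a property of the repeated interactions, not of any single rule.
final = run_opinion_abm()
```

    A variable-based analysis of the same data would see only the final distribution; the ABM makes the interaction process that generated it explicit.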

  7. Dynamics of aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1991-01-01

    The focus of this research was to address the modeling, including model reduction, of flexible aerospace vehicles, with special emphasis on models used in dynamic analysis and/or guidance and control system design. It is critical that a model capture the key aspects of the system being modeled; in this work, therefore, the aspects of vehicle dynamics critical to control design were emphasized. In this regard, fundamental contributions were made in the areas of stability robustness analysis techniques, model reduction techniques, and literal approximations for key dynamic characteristics of flexible vehicles. All these areas are related. In the development of a model, approximations are always involved, so control systems designed using these models must be robust against the resulting uncertainties.

  8. Analysis and Synthesis of Load Forecasting Data for Renewable Integration Studies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steckler, N.; Florita, A.; Zhang, J.

    2013-11-01

    As renewable energy constitutes greater portions of the generation fleet, the importance of modeling uncertainty as part of integration studies also increases. In pursuit of optimal system operations, it is important to capture not only the definitive behavior of power plants, but also the risks associated with systemwide interactions. This research examines the dependence of load forecast errors on external predictor variables such as temperature, day type, and time of day. The analysis was utilized to create statistically relevant instances of sequential load forecasts with only a time series of historic, measured load available. The creation of such load forecasts relies on Bayesian techniques for informing and updating the model, thus providing a basis for networked and adaptive load forecast models in future operational applications.
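
    The Bayesian informing-and-updating idea can be sketched with a conjugate normal-normal update of the mean forecast error; this is a hypothetical simplification (known observation variance, invented numbers), not the paper's actual model:

```python
def update_normal_mean(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update of the mean forecast error,
    assuming a known observation variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Start with a vague prior on the error bias (in MW) and sharpen it as
# measured forecast errors arrive, as an adaptive model might.
mean, var = 0.0, 100.0
for err in [12.0, 9.5, 11.2, 10.4]:
    mean, var = update_normal_mean(mean, var, err, obs_var=4.0)
```

    After a few observations the posterior concentrates near the empirical bias, which is the sense in which such a model "adapts" in operation.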

  9. Simulation of black carbon aerosol distribution over India: A sensitivity study to different convective schemes

    NASA Astrophysics Data System (ADS)

    Ghosh, Sudipta; Dey, Sagnik; Das, Sushant; Venkataraman, Chandra; Patil, Nitin U.

    2017-04-01

    Black carbon (BC) aerosols absorb solar radiation, thereby causing a warming at the top-of-the-atmosphere (TOA), in contrast to most other aerosol species, which scatter radiation and cause a cooling at TOA. BC is considered an important contributor to global warming, second only to CO2, with a net radiative forcing of 1.1 W/m². BC also has important regional climate effects because of its spatially non-uniform heating and cooling, so understanding its spatio-temporal distribution over India is essential. In this study, we have used a regional climate model, RegCM4.5, to simulate the BC distribution over India with a focus on BC estimation. The importance of incorporating a regional emission inventory has been shown, and the sensitivity of the BC distribution to various convective schemes in the model has been explored. The model output has been validated with in-situ observations. The regional inventory captures a larger columnar burden of BC and OC than the global inventory. The difference in BC burden is clear at many places, with the largest difference (from 2 × 10⁻¹¹ kg m⁻² s⁻¹ with the global inventory to 4 × 10⁻¹¹ kg m⁻² s⁻¹ with the regional inventory) observed over the Indo-Gangetic Basin. This difference is mainly attributed to local sources such as kerosene lamp burning, residential cooking on solid biomass fuels, and agricultural residue burning, which are not considered in the global inventory. The difference is also noticeable for OC. Thus the simulated BC burden increases with the incorporation of the regional emission inventory, underscoring the importance of regional inventories for improved simulation and estimation of aerosols in this region. The BC distribution is also sensitive to the choice of convective scheme, with the Emanuel scheme capturing a smaller BC burden during the monsoon than the Tiedtke scheme. Further long-term simulations with the customized model are required to examine the impact of BC.
Keywords: Black carbon, RegCM4, regional emission inventory, convective schemes.

  10. Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin

    2017-01-02

    In cases where field or experimental measurements are not available, computer models can represent real physical or engineering systems and reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods, based on Gaussian processes, for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop the Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.
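
    The core treed idea, partitioning the input space so that a simple model of the computer-model/data discrepancy can be fitted per sub-region, can be sketched for a single binary split (no Gaussian processes or MCMC here, and all data invented):

```python
def best_binary_split(x, residuals):
    """One node of a binary-tree partition: choose the split of a single
    input axis that minimizes the summed squared deviation of the
    model-data discrepancy about a per-region constant."""
    def sse(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)
    pairs = sorted(zip(x, residuals))
    best_total, best_split = float("inf"), None
    for k in range(1, len(pairs)):
        left = [r for _, r in pairs[:k]]
        right = [r for _, r in pairs[k:]]
        total = sse(left) + sse(right)
        if total < best_total:
            best_total = total
            best_split = (pairs[k - 1][0] + pairs[k][0]) / 2
    return best_split

# The discrepancy jumps at x = 0.5, so the split should land there:
xs = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
res = [0.0, 0.1, -0.1, 0.0, 2.0, 2.1, 1.9, 2.0]
split = best_binary_split(xs, res)
```

    The full BTC method replaces the per-region constants with Gaussian process calibrations and samples the tree structure itself within MCMC.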

  11. The 'robust' capture-recapture design allows components of recruitment to be estimated

    USGS Publications Warehouse

    Pollock, K.H.; Kendall, W.L.; Nichols, J.D.; Lebreton, J.-D.; North, P.M.

    1993-01-01

    The 'robust' capture-recapture design (Pollock, 1982) allows analyses which combine features of closed population model analyses (Otis et al., 1978; White et al., 1982) and open population model analyses (Pollock et al., 1990). Estimators obtained under these analyses are more robust to unequal catchability than traditional Jolly-Seber estimators (Pollock, 1982; Pollock et al., 1990; Kendall, 1992). The robust design also allows estimation of population size, survival rate, and recruitment numbers for all periods of the study, unlike Jolly-Seber type models. The major advantage of this design that we emphasize in this short review paper is that it allows separate estimation of immigration and in situ recruitment numbers for a two or more age class model (Nichols and Pollock, 1990). This is contrasted with the age-dependent Jolly-Seber model (Pollock, 1981; Stokes, 1984; Pollock et al., 1990), which provides separate estimates for immigration and in situ recruitment for all but the first two age classes where there is at least a three age class model. The ability to achieve this separation of recruitment components can be very important to population modelers and wildlife managers, as many species can only be separated into two easily identified age classes in the field.
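
    Within each primary period of the robust design, a closed-population abundance estimate can be formed; a minimal two-sample building block is Chapman's bias-corrected Lincoln-Petersen estimator (a generic sketch with invented counts, not the full robust-design estimator):

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator for
    a closed population: n1 animals marked in the first sample, n2
    caught in the second sample, m2 of which were recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# 200 animals marked, 150 caught later, 30 of them already marked:
N_hat = chapman_estimate(200, 150, 30)
```

    The robust design repeats such closed-population estimates within primary periods and links them with open-population survival estimates across periods.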

  12. Goal-Based Domain Modeling as a Basis for Cross-Disciplinary Systems Engineering

    NASA Astrophysics Data System (ADS)

    Jarke, Matthias; Nissen, Hans W.; Rose, Thomas; Schmitz, Dominik

    Small and medium-sized enterprises (SMEs) are important drivers for innovation. In particular, project-driven SMEs that closely cooperate with their customers have specific needs in regard to information engineering of their development process. They need a fast requirements capture since this is most often included in the (unpaid) offer development phase. At the same time, they need to maintain and reuse the knowledge and experiences they have gathered in previous projects extensively as it is their core asset. The situation is complicated further if the application field crosses disciplinary boundaries. To bridge the gaps and perspectives, we focus on shared goals and dependencies captured in models at a conceptual level. Such a model-based approach also offers a smarter connection to subsequent development stages, including a high share of automated code generation. In the approach presented here, the agent- and goal-oriented formalism i* is therefore extended by domain models to facilitate information organization. This extension permits a domain model-based similarity search, and a model-based transformation towards subsequent development stages. Our approach also addresses the evolution of domain models reflecting the experiences from completed projects. The approach is illustrated with a case study on software-intensive control systems in an SME of the automotive domain.

  13. Capturing the Elite in Marine Conservation in Northeast Kalimantan.

    PubMed

    Kusumawati, Rini; Visser, Leontine

    This article takes the existence of power networks of local elites as a social fact of fundamental importance and the starting point for the study of patronage in the governance of the coastal waters of East Kalimantan. We address the question of how to capture the elites for project implementation, rather than assuming the inevitability of elite capture of project funds. We analyze the multiple-scale networks of local power holders (punggawa) and the collaboration and friction between the political-economic interests and historical values of local actors and the scientific motivations of international environmental organizations. We describe how collaboration and friction between members of the elite challenge models that categorically exclude or co-opt local elites in foreign projects. In-depth ethnographic study of these networks shows their resilience through flows of knowledge and power in a highly volatile coastal environment. Results indicate the need for inclusion in decision making of local entrepreneurs and, indirectly, their dependents in decentralized coastal governance.

  14. Mathematical models to characterize early epidemic growth: A Review

    PubMed Central

    Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile

    2016-01-01

    There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-15 Ebola epidemic in West Africa. PMID:27451336

  15. Mathematical models to characterize early epidemic growth: A review

    NASA Astrophysics Data System (ADS)

    Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile

    2016-09-01

    There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-2015 Ebola epidemic in West Africa.
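
    The sub-exponential versus exponential growth profiles surveyed in this review can be illustrated with the generalized-growth model dC/dt = r*C^p, where p = 1 recovers exponential growth and 0 < p < 1 gives sub-exponential growth (the parameter values below are invented for illustration):

```python
def generalized_growth(r, p, c0, days, dt=0.1):
    """Generalized-growth model dC/dt = r * C**p for cumulative cases C,
    integrated with forward Euler: p = 1 gives exponential epidemic
    growth, while 0 < p < 1 gives sub-exponential (e.g. polynomial)
    growth."""
    c = c0
    for _ in range(round(days / dt)):
        c += dt * r * c ** p
    return c

exp_cases = generalized_growth(r=0.3, p=1.0, c0=10, days=30)  # exponential
sub_cases = generalized_growth(r=0.3, p=0.7, c0=10, days=30)  # sub-exponential
```

    Over 30 days the exponential profile outgrows the sub-exponential one by orders of magnitude, which is why distinguishing the two early in an outbreak matters so much for forecasting.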

  16. Simple dynamical models capturing the key features of the Central Pacific El Niño.

    PubMed

    Chen, Nan; Majda, Andrew J

    2016-10-18

    The Central Pacific El Niño (CP El Niño) has been frequently observed in recent decades. The phenomenon is characterized by an anomalous warm sea surface temperature (SST) confined to the central Pacific and has different teleconnections from the traditional El Niño. Here, simple models are developed and shown to capture the key mechanisms of the CP El Niño. The starting model involves coupled atmosphere-ocean processes that are deterministic, linear, and stable. Then, systematic strategies are developed for incorporating several major mechanisms of the CP El Niño into the coupled system. First, simple nonlinear zonal advection with no ad hoc parameterization of the background SST gradient is introduced that creates coupled nonlinear advective modes of the SST. Second, due to the recent multidecadal strengthening of the easterly trade wind, a stochastic parameterization of the wind bursts including a mean easterly trade wind anomaly is coupled to the simple atmosphere-ocean processes. Effective stochastic noise in the wind burst model facilitates the intermittent occurrence of the CP El Niño with realistic amplitude and duration. In addition to the anomalous warm SST in the central Pacific, other major features of the CP El Niño such as the rising branch of the anomalous Walker circulation being shifted to the central Pacific and the eastern Pacific cooling with a shallow thermocline are all captured by this simple coupled model. Importantly, the coupled model succeeds in simulating a series of CP El Niño events lasting 5 y, which resembles the two CP El Niño episodes during 1990-1995 and 2002-2006.
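
    The flavor of a stochastic wind-burst parameterization with a mean easterly anomaly can be conveyed by a damped, noise-driven process; the sketch below is an illustrative Ornstein-Uhlenbeck-type process with invented parameters, not the authors' coupled model:

```python
import math
import random

def simulate_wind_bursts(mean=-0.5, damping=0.2, noise=1.0,
                         days=500, dt=0.5, seed=1):
    """Damped, noise-driven (Ornstein-Uhlenbeck-type) wind-burst
    amplitude relaxing toward a mean easterly anomaly (negative values
    denote easterlies); the white-noise forcing yields intermittent
    bursts around that mean."""
    rng = random.Random(seed)
    a, path = 0.0, []
    for _ in range(int(days / dt)):
        a += dt * damping * (mean - a) + math.sqrt(dt) * noise * rng.gauss(0, 1)
        path.append(a)
    return path

bursts = simulate_wind_bursts()
mean_anomaly = sum(bursts) / len(bursts)
```

    In the paper's framework, such a stochastic wind forcing is what drives the coupled SST dynamics intermittently into CP El Niño states.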

  17. Data-Model and Inter-Model Comparisons of the GEM Outflow Events Using the Space Weather Modeling Framework

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Eccles, J. V.; Barakat, A. R.; Kistler, L. M.; Haaland, S.; Schunk, R. W.; Chappell, C. R.

    2015-12-01

    Two storm periods were selected by the Geospace Environment Modeling Ionospheric Outflow focus group for community collaborative study because of their high magnetospheric activity and extensive data coverage: the September 27 - October 4, 2002 corotating interaction region event and the October 22 - 29, 2002 coronal mass ejection event. During both events, the FAST, Polar, Cluster, and other missions made key observations, creating prime periods for data-model comparison. The GEM community has come together to simulate this period using many different methods in order to evaluate models, compare results, and expand our knowledge of ionospheric outflow and its effects on global dynamics. This paper presents Space Weather Modeling Framework (SWMF) simulations of these important periods compared against observations from the Polar TIDE, Cluster CODIF and EFW instruments. Emphasis will be given to the second event. Density and velocity of oxygen and hydrogen throughout the lobes, plasma sheet, and inner magnetosphere will be the focus of these comparisons. For these simulations, the SWMF couples the multifluid version of BATS-R-US MHD to a variety of ionospheric outflow models of varying complexity. The simplest is outflow arising from constant MHD inner boundary conditions. Two first-principles-based models are also leveraged: the Polar Wind Outflow Model (PWOM), a fluid treatment of outflow dynamics, and the Generalized Polar Wind (GPW) model, which combines fluid and particle-in-cell approaches. Each model is capable of capturing a different set of energization mechanisms, yielding different outflow results. The data-model comparisons will illustrate how well each approach captures reality and which energization mechanisms are most important. Inter-model comparisons will illustrate how the different outflow specifications affect the magnetosphere.
Specifically, it is found that the GPW provides increased heavy ion outflow over a broader spatial range than the alternative models, improving comparisons in some regions but degrading the agreement in others. This work will also assess our current capability to reproduce ionosphere-magnetosphere mass coupling.

  18. Controlling imported malaria cases in the United States of America.

    PubMed

    Dembele, Bassidy; Yakubu, Abdul-Aziz

    2017-02-01

    We extend the mathematical malaria epidemic model framework of Dembele et al. and use it to "capture" the 2013 Centers for Disease Control and Prevention (CDC) reported data on the 2011 number of imported malaria cases in the USA. Furthermore, we use our "fitted" malaria models for the top 20 countries of malaria acquisition by USA residents to study the impact of protecting USA residents from malaria infection when they travel to malaria endemic areas, the impact of protecting residents of malaria endemic regions from mosquito bites, and the impact of killing mosquitoes in those endemic areas on the CDC number of imported malaria cases in the USA. To significantly reduce the number of imported malaria cases in the USA, for each top 20 country of malaria acquisition by USA travelers, we compute the optimal proportion of USA international travelers that must be protected against malaria infection and the optimal proportion of mosquitoes that must be killed.
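
    The effect of protecting a proportion of travelers can be illustrated with a deliberately simple expected-cases calculation (hypothetical traveler numbers, per-trip risk, and efficacy; the paper's fitted transmission models are far richer):

```python
def expected_imported_cases(travelers, risk_per_trip, protected_fraction,
                            protection_efficacy=0.9):
    """Expected infections acquired abroad when a fraction of travelers
    is protected with a given efficacy (all numbers hypothetical)."""
    unprotected = travelers * (1.0 - protected_fraction)
    protected = travelers * protected_fraction
    return risk_per_trip * (unprotected + protected * (1.0 - protection_efficacy))

baseline = expected_imported_cases(100000, 0.002, protected_fraction=0.0)
with_protection = expected_imported_cases(100000, 0.002, protected_fraction=0.6)
```

    The optimization in the paper goes further: it also couples this traveler-side protection to vector-control measures in the endemic source countries.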

  19. Highly Coarse-Grained Representations of Transmembrane Proteins

    PubMed Central

    2017-01-01

    Numerous biomolecules and biomolecular complexes, including transmembrane proteins (TMPs), are symmetric or at least have approximate symmetries. Highly coarse-grained models of such biomolecules, aiming at capturing the essential structural and dynamical properties on resolution levels coarser than the residue scale, must preserve the underlying symmetry. However, making these models obey the correct physics is in general not straightforward, especially at the highly coarse-grained resolution where multiple (∼3–30 in the current study) amino acid residues are represented by a single coarse-grained site. In this paper, we propose a simple and fast method of coarse-graining TMPs obeying this condition. The procedure involves partitioning transmembrane domains into contiguous segments of equal length along the primary sequence. For the coarsest (lowest-resolution) mappings, it turns out to be most important to satisfy the symmetry in a coarse-grained model. As the resolution is increased to capture more detail, however, it becomes gradually more important to match modular repeats in the secondary structure (such as helix-loop repeats) instead. A set of eight TMPs of various complexity, functionality, structural topology, and internal symmetry, representing different classes of TMPs (ion channels, transporters, receptors, adhesion, and invasion proteins), has been examined. The present approach can be generalized to other systems possessing exact or approximate symmetry, allowing for reliable and fast creation of multiscale, highly coarse-grained mappings of large biomolecular assemblies. PMID:28043122
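
    The partitioning step described above, contiguous segments of (near) equal length along the primary sequence with one segment per coarse-grained site, can be sketched directly (the sequence below is a made-up toy helix, not a real TMP):

```python
def partition_segments(sequence, n_sites):
    """Partition a primary sequence into n_sites contiguous segments of
    as-equal-as-possible length, one segment per coarse-grained site."""
    base, extra = divmod(len(sequence), n_sites)
    segments, start = [], 0
    for k in range(n_sites):
        length = base + (1 if k < extra else 0)
        segments.append(sequence[start:start + length])
        start += length
    return segments

# A 22-residue toy transmembrane helix mapped onto 4 CG sites:
segs = partition_segments("LLVFAILGVALLIWLLFGRKKA", 4)
```

    Applying the same segment lengths to every symmetry-related chain is what keeps the coarse-grained model consistent with the protein's internal symmetry.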

  20. Combustion Of Metals In Reduced Gravity And Extraterrestrial Environments

    NASA Technical Reports Server (NTRS)

    Abbud-Madrid, A.; Modak, A.; Branch, M. C.

    2003-01-01

    The recent focus of this research project has been to model the combustion of isolated metal droplets and, in particular, to couple the existing theories and formulations of phenomena such as condensation, reaction kinetics, radiation, and surface reactions to formulate a more complete combustion model. A fully transient, one-dimensional (spherical symmetry) numerical model that uses detailed chemical kinetics, multi-component molecular transport mechanisms, condensation kinetics, and gas phase radiation heat transfer was developed. A coagulation model was used to simulate the particulate formation of MgO. The model was used to simulate the combustion of an Mg droplet in pure O2 and CO2. Methanol droplet combustion is considered as a test case for the solution method for both quasi-steady and fully transient simulations. Although some important processes unique to methanol combustion, such as water absorption at the surface, are not included in the model, the results are in sufficient agreement with the published data. Since the major part of the heat released in combustion of Mg, and in combustion of metals in general, is due to the condensation of the metal oxide, it is very important to capture the condensation processes correctly. Using the modified nucleation theory, an Arrhenius type rate expression is derived to calculate the condensation rate of MgO. This expression can be easily included in the CHEMKIN reaction mechanism format. Although very little property data is available for MgO, the condensation rate expression derived using the existing data is able to capture the condensation of MgO. An appropriate choice of the reference temperature to calculate the rate coefficients allows the model to correctly predict the subsequent heat release and hence the flame temperature.
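
    The Arrhenius-type rate expression mentioned above has the standard form k(T) = A * exp(-Ea / (R * T)); the sketch below uses placeholder values for A and Ea, not the MgO condensation data derived in the paper:

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def arrhenius_rate(A, Ea, T):
    """Arrhenius-type rate expression k(T) = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Placeholder pre-exponential factor and activation energy (not MgO data);
# the rate rises steeply with temperature for a fixed activation energy:
k_low = arrhenius_rate(A=1.0e6, Ea=150.0e3, T=1500.0)
k_high = arrhenius_rate(A=1.0e6, Ea=150.0e3, T=3000.0)
```

    Casting the condensation rate in this form is what allows it to be included directly in a CHEMKIN-style reaction mechanism, as the abstract notes.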

  1. Integration of Extended MHD and Kinetic Effects in Global Magnetosphere Models

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Wang, L.; Maynard, K. R. M.; Raeder, J.; Bhattacharjee, A.

    2015-12-01

    Computational models of Earth's geospace environment are an important tool to investigate the science of the coupled solar-wind -- magnetosphere -- ionosphere system, complementing satellite and ground observations with a global perspective. They are also crucial in understanding and predicting space weather, in particular under extreme conditions. Traditionally, global models have employed the one-fluid MHD approximation, which captures large-scale dynamics quite well. However, in Earth's nearly collisionless plasma environment it breaks down on small scales, where ion and electron dynamics and kinetic effects become important, and greatly change the reconnection dynamics. A number of approaches have recently been taken to advance global modeling, e.g., including multiple ion species, adding Hall physics in a Generalized Ohm's Law, embedding local PIC simulations into a larger fluid domain and also some work on simulating the entire system with hybrid or fully kinetic models, the latter however being too computationally expensive to be run at realistic parameters. We will present an alternate approach, i.e., a multi-fluid moment model that is derived rigorously from the Vlasov-Maxwell system. The advantage is that the computational cost remains manageable, as we are still solving fluid equations. While the evolution equation for each moment is exact, it depends on the next higher-order moment, so that truncating the hierarchy and closing the system to capture the essential kinetic physics is crucial. We implement 5-moment (density, momentum, scalar pressure) and 10-moment (includes pressure tensor) versions of the model, and use local approximations for the heat flux to close the system. We test these closures by local simulations where we can compare directly to PIC / hybrid codes, and employ them in global simulations using the next-generation OpenGGCM to contrast them to MHD / Hall-MHD results and compare with observations.

  2. The Emerging Importance of Business Process Standards in the Federal Government

    DTIC Science & Technology

    2006-02-23

    delivers enough value for its commercialization into the general industry. Today, we are seeing standards such as SOA, BPMN and BPEL hit that...Process Modeling Notation (BPMN) and the Business Process Execution Language (BPEL). BPMN provides a standard representation for capturing and...execution. The combination of BPMN and BPEL offers organizations the potential to standardize processes in a distributed environment, enabling

  3. A goodness-of-fit test for capture-recapture model M(t) under closure

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and is based on chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
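
    The test above is built from chi-square statistics on partitioned components of the capture-history residuals; the generic Pearson chi-square building block looks like this (the counts below are invented, and this is not the paper's full partitioned test):

```python
def pearson_chi_square(observed, expected):
    """Pearson chi-square statistic comparing observed capture counts
    with counts expected under the fitted model."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented counts over four capture occasions:
obs = [34, 41, 28, 37]
exp = [35.0, 38.5, 30.0, 36.5]
x2 = pearson_chi_square(obs, exp)
# x2 would be referred to a chi-square distribution with the test's
# degrees of freedom; small values indicate adequate fit.
```

    The paper's contribution is in how such components are constructed and combined so that the overall test is fully efficient and frequently computable.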

  4. "Hit-and-Run" transcription: de novo transcription initiated by a transient bZIP1 "hit" persists after the "run".

    PubMed

    Doidy, Joan; Li, Ying; Neymotin, Benjamin; Edwards, Molly B; Varala, Kranthi; Gresham, David; Coruzzi, Gloria M

    2016-02-03

    Dynamic transcriptional regulation is critical for an organism's response to environmental signals and yet remains elusive to capture. Such transcriptional regulation is mediated by master transcription factors (TF) that control large gene regulatory networks. Recently, we described a dynamic mode of TF regulation named "hit-and-run". This model proposes that master TF can interact transiently with a set of targets, but the transcription of these transient targets continues after the TF dissociation from the target promoter. However, experimental evidence validating active transcription of the transient TF-targets is still lacking. Here, we show that active transcription continues after transient TF-target interactions by tracking de novo synthesis of RNAs made in response to TF nuclear import. To do this, we introduced an affinity-labeled 4-thiouracil (4tU) nucleobase to specifically isolate newly synthesized transcripts following conditional TF nuclear import. Thus, we extended the TARGET system (Transient Assay Reporting Genome-wide Effects of Transcription factors) to include 4tU-labeling and named this new technology TARGET-tU. Our proof-of-principle example is the master TF Basic Leucine Zipper 1 (bZIP1), a central integrator of metabolic signaling in plants. Using TARGET-tU, we captured newly synthesized mRNAs made in response to bZIP1 nuclear import at a time when bZIP1 is no longer detectably bound to its target. Thus, the analysis of de novo transcriptomics demonstrates that bZIP1 may act as a catalyst TF to initiate a transcriptional complex ("hit"), after which active transcription by RNA polymerase continues without the TF being bound to the gene promoter ("run"). Our findings provide experimental proof for active transcription of transient TF-targets supporting a "hit-and-run" mode of action. This dynamic regulatory model allows a master TF to catalytically propagate rapid and broad transcriptional responses to changes in environment.
Thus, the functional read-out of de novo transcripts produced by transient TF-target interactions allowed us to capture new models for genome-wide transcriptional control.

  5. Meta-analyses of the proportion of Japanese encephalitis virus infection in vectors and vertebrate hosts.

    PubMed

    Oliveira, Ana R S; Cohnstaedt, Lee W; Strathe, Erin; Hernández, Luciana Etcheverry; McVey, D Scott; Piaggio, José; Cernicchiaro, Natalia

    2017-09-07

    Japanese encephalitis (JE) is a zoonosis in Southeast Asia vectored by mosquitoes infected with the Japanese encephalitis virus (JEV). Japanese encephalitis is considered an emerging exotic infectious disease with potential for introduction in currently JEV-free countries. Pigs and ardeid birds are reservoir hosts and play a major role on the transmission dynamics of the disease. The objective of the study was to quantitatively summarize the proportion of JEV infection in vectors and vertebrate hosts from data pertaining to observational studies obtained in a systematic review of the literature on vector and host competence for JEV, using meta-analyses. Data gathered in this study pertained to three outcomes: proportion of JEV infection in vectors, proportion of JEV infection in vertebrate hosts, and minimum infection rate (MIR) in vectors. Random-effects subgroup meta-analysis models were fitted by species (mosquito or vertebrate host species) to estimate pooled summary measures, as well as to compute the variance between studies. Meta-regression models were fitted to assess the association between different predictors and the outcomes of interest and to identify sources of heterogeneity among studies. Predictors included in all models were mosquito/vertebrate host species, diagnostic methods, mosquito capture methods, season, country/region, age category, and number of mosquitoes per pool. Mosquito species, diagnostic method, country, and capture method represented important sources of heterogeneity associated with the proportion of JEV infection; host species and region were considered sources of heterogeneity associated with the proportion of JEV infection in hosts; and diagnostic and mosquito capture methods were deemed important contributors of heterogeneity for the MIR outcome. Our findings provide reference pooled summary estimates of vector competence for JEV for some mosquito species, as well as of sources of variability for these outcomes.
Moreover, this work provides useful guidelines when interpreting vector and host infection proportions or prevalence from observational studies, and contributes to further our understanding of vector and vertebrate host competence for JEV, elucidating information on the relative importance of vectors and hosts on JEV introduction and transmission.
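    The random-effects pooling described above can be sketched with the classic DerSimonian-Laird estimator. The study effects (here logit-transformed infection proportions) and their variances are hypothetical, not values from the review:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effects with a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    # Re-weight each study with tau^2 added to its within-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical logit-proportions of JEV infection from five studies
effects = [-2.1, -1.7, -2.6, -1.9, -2.3]
variances = [0.04, 0.09, 0.06, 0.12, 0.05]
pooled, se, tau2 = dersimonian_laird(effects, variances)
```

A meta-regression would additionally regress these effects on study-level predictors (species, diagnostic method, region) using the same inverse-variance weights.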

  6. Assessing the role of informal sector in WEEE management systems: A System Dynamics approach.

    PubMed

    Ardi, Romadhani; Leisten, Rainer

    2016-11-01

    Although generally ignored by academia and regulators, the informal sector plays important roles in Waste Electrical and Electronic Equipment (WEEE) management systems, especially in developing countries. This study aims: (1) to capture and model the variety of informal operations in WEEE management systems, (2) to capture the dynamics existing within the informal sector, and (3) to assess the role of the informal sector as a key player in WEEE management systems, influencing both its own future operations and those of its counterpart, the formal sector. Using System Dynamics as the methodology and India as the reference system, this study is able to explain the reasons behind, on the one hand, the superiority of the informal sector in WEEE management systems and, on the other hand, the failure of the formal systems. Additionally, this study reveals the important role of the second-hand market as the determinant of the rise and fall of the informal sector in the future. Copyright © 2015 Elsevier Ltd. All rights reserved.
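    A System Dynamics model of this kind reduces to stocks updated by flows over time. The toy stock-and-flow sketch below captures only the flavor of the approach; all rates, initial stocks, and the second-hand-demand feedback are invented for illustration and are not calibrated to India:

```python
def simulate_weee(years=20, dt=0.25):
    """Minimal stock-and-flow sketch: annual WEEE inflow is split between
    informal and formal collection, with the informal share driven by a
    slowly eroding second-hand-market demand index."""
    informal, formal = 100.0, 20.0       # stocks (kt of WEEE handled)
    second_hand_demand = 1.0             # relative demand index
    history = []
    t = 0.0
    while t < years:
        inflow = 50.0                    # WEEE generation (kt/yr)
        informal_share = second_hand_demand / (second_hand_demand + 0.5)
        informal += dt * (inflow * informal_share - 0.1 * informal)
        formal += dt * (inflow * (1 - informal_share) - 0.1 * formal)
        second_hand_demand *= (1 - 0.02 * dt)   # demand slowly erodes
        history.append((t, informal, formal))
        t += dt
    return history

history = simulate_weee()
```

As the second-hand demand index declines, the informal share of the inflow falls, mirroring the paper's point that the second-hand market determines the rise and fall of the informal sector.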

  7. PADME (Phobos And Deimos and Mars Environment): A Proposed NASA Discovery Mission to Investigate the Two Moons of Mars

    NASA Technical Reports Server (NTRS)

    Lee, Pascal; Benna, Mehdi; Britt, Daniel; Colaprete, Anthony; Davis, Warren; Delory, Greg; Elphic, Richard; Fulsang, Ejner; Genova, Anthony; Glavin, Daniel

    2015-01-01

    After 40 years of solar system exploration by spacecraft, the origin of Mars's satellites remains vexingly unknown. There are three prevailing hypotheses concerning their origin: H1: they are captured small bodies from the outer main belt or beyond; H2: they are reaccreted Mars impact ejecta; H3: they are remnants of Mars's formation. There are many variants of these hypotheses, but as stated, these three capture the key ideas and constraints on their nature. So far, data and modeling have not allowed any one of these hypotheses to be verified or excluded. Each hypothesis has important implications for the evolution of the solar system, the formation and evolution of planets and satellites, and the delivery of water and organics to early Mars and early Earth. Determining the origin of Phobos and Deimos is identified by NASA and the NRC Decadal Survey as the most important science goal at these bodies.

  8. Characteristics of subgrid-resolved-scale dynamics in anisotropic turbulence, with application to rough-wall boundary layers

    NASA Astrophysics Data System (ADS)

    Juneja, Anurag; Brasseur, James G.

    1999-10-01

    Large-eddy simulation (LES) of the atmospheric boundary layer (ABL) using eddy viscosity subgrid-scale (SGS) models is known to poorly predict mean shear at the first few grid cells near the ground, a rough surface with no viscous sublayer. It has recently been shown that convective motions carry this localized error vertically to infect the entire ABL, and that the error is more a consequence of the SGS model than grid resolution in the near-surface inertial layer. Our goal was to determine what first-order errors in the predicted SGS terms lead to spurious expectation values, and what basic dynamics in the filtered equation for resolved scale (RS) velocity must be captured by SGS models to correct the deficiencies. Our analysis is of general relevance to LES of rough-wall high Reynolds number boundary layers, where the essential difficulty in the closure is the importance of the SGS acceleration terms, a consequence of necessary under-resolution of relevant energy-containing motions at the first few grid levels, leading to potentially strong couplings between the anisotropies in resolved velocity and predicted SGS dynamics. We analyze these two issues (under-resolution and anisotropy) in the absence of a wall using two direct numerical simulation datasets of homogeneous turbulence with very different anisotropic structure characteristic of the near-surface ABL: shear- and buoyancy-generated turbulence. We uncover three important issues which should be addressed in the design of SGS closures near rough walls and we provide a priori tests for the SGS model. First, we identify a strong spurious coupling between the anisotropic structure of the resolved velocity field and predicted SGS dynamics which can create a feedback loop to incorrectly enhance certain components of the predicted velocity field. 
Second, we find that eddy viscosity and "similarity" SGS models do not contain enough degrees of freedom to capture, at a sufficient level of accuracy, both RS-SGS energy flux and SGS-RS dynamics. Third, to correctly capture pressure transport near a wall, closures must be made more flexible to accommodate proper partitioning between SGS stress divergence and SGS pressure gradient.
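    For reference, the simplest member of the eddy-viscosity family critiqued above is the Smagorinsky closure, nu_t = (Cs * Delta)^2 |S|; the sketch below evaluates it for a single velocity-gradient tensor, with a nominal Smagorinsky constant and unit filter width (values chosen for illustration only):

```python
import numpy as np

def smagorinsky_nu_t(dudx, cs=0.17, delta=1.0):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 |S| from a 3x3
    velocity-gradient tensor; |S| = sqrt(2 S_ij S_ij), with S the
    symmetric strain-rate tensor."""
    s = 0.5 * (dudx + np.swapaxes(dudx, -1, -2))      # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s, axis=(-1, -2)))
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 1, the near-wall configuration discussed above
grad = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
nu_t = smagorinsky_nu_t(grad)
```

Because nu_t depends only on the local strain-rate magnitude, such a model aligns the SGS stress with the resolved strain, which is precisely the lack of degrees of freedom the abstract identifies.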

  9. A tree-like Bayesian structure learning algorithm for small-sample datasets from complex biological model systems.

    PubMed

    Yin, Weiwei; Garimalla, Swetha; Moreno, Alberto; Galinski, Mary R; Styczynski, Mark P

    2015-08-28

    There are increasing efforts to bring high-throughput systems biology techniques to bear on complex animal model systems, often with a goal of learning about underlying regulatory network structures (e.g., gene regulatory networks). However, complex animal model systems typically have significant limitations on cohort sizes, number of samples, and the ability to perform follow-up and validation experiments. These constraints are particularly problematic for many current network learning approaches, which require large numbers of samples and may predict many more regulatory relationships than actually exist. Here, we test the idea that by leveraging the accuracy and efficiency of classifiers, we can construct high-quality networks that capture important interactions between variables in datasets with few samples. We start from a previously-developed tree-like Bayesian classifier and generalize its network learning approach to allow for arbitrary depth and complexity of tree-like networks. Using four diverse sample networks, we demonstrate that this approach performs consistently better at low sample sizes than the Sparse Candidate Algorithm, a representative approach for comparison because it is known to generate Bayesian networks with high positive predictive value. We develop and demonstrate a resampling-based approach to enable the identification of a viable root for the learned tree-like network, important for cases where the root of a network is not known a priori. We also develop and demonstrate an integrated resampling-based approach to the reduction of variable space for the learning of the network. Finally, we demonstrate the utility of this approach via the analysis of a transcriptional dataset of a malaria challenge in a non-human primate model system, Macaca mulatta, suggesting the potential to capture indicators of the earliest stages of cellular differentiation during leukopoiesis. 
We demonstrate that by starting from effective and efficient approaches for creating classifiers, we can identify interesting tree-like network structures with significant ability to capture the relationships in the training data. This approach represents a promising strategy for inferring networks with high positive predictive value under the constraint of small numbers of samples, meeting a need that will only continue to grow as more high-throughput studies are applied to complex model systems.
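    The general idea of learning a tree-structured network from data can be illustrated with the classical Chow-Liu algorithm, which builds a maximum spanning tree over pairwise mutual information. This is a related textbook method, not the authors' classifier-based algorithm:

```python
import numpy as np
from itertools import combinations

def mutual_info(x, y):
    """Empirical mutual information between two discrete variables."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_edges(data):
    """Chow-Liu tree: maximum spanning tree (Kruskal) over pairwise MI."""
    n_vars = data.shape[1]
    scored = sorted(((mutual_info(data[:, i], data[:, j]), i, j)
                     for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = []
    for mi, i, j in scored:
        ri, rj = find(i), find(j)
        if ri != rj:            # add the edge unless it would close a cycle
            parent[ri] = rj
            edges.append((i, j))
    return edges
```

On small-sample data the pairwise MI estimates are noisy, which is why resampling-based root selection and variable reduction, as in the paper, become important.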

  10. Spatial capture-recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sollmann, Rahel; Gardner, Beth

    2013-01-01

    Spatial Capture-Recapture provides a revolutionary extension of traditional capture-recapture methods for studying animal populations using data from live trapping, camera trapping, DNA sampling, acoustic sampling, and related field methods. This book is a conceptual and methodological synthesis of spatial capture-recapture modeling. As a comprehensive how-to manual, this reference contains detailed examples of a wide range of relevant spatial capture-recapture models for inference about population size and spatial and temporal variation in demographic parameters. Practicing field biologists studying animal populations will find this book to be a useful resource, as will graduate students and professionals in ecology, conservation biology, and fisheries and wildlife management.
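    At the core of spatial capture-recapture models is an encounter model in which detection probability decays with distance from an animal's activity center. A common choice is the half-normal form; the baseline detection probability, scale parameter, and trap coordinates below are arbitrary illustrative values:

```python
import numpy as np

def scr_detection_prob(traps, activity_center, p0=0.3, sigma=0.5):
    """Half-normal SCR encounter model: p(trap) = p0 * exp(-d^2 / (2 sigma^2)),
    where d is the distance from the activity center to the trap."""
    d2 = np.sum((traps - activity_center) ** 2, axis=1)
    return p0 * np.exp(-d2 / (2.0 * sigma ** 2))

# A transect of three traps; the animal is centered on the first trap
traps = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
p = scr_detection_prob(traps, np.array([0.0, 0.0]))
```

Fitting an SCR model amounts to estimating p0, sigma, and the density of activity centers from the spatial pattern of detections across traps.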

  11. Models to capture the potential for disease transmission in domestic sheep flocks.

    PubMed

    Schley, David; Whittle, Sophie; Taylor, Michael; Kiss, Istvan Zoltan

    2012-09-15

    Successful control of livestock diseases requires an understanding of how they spread amongst animals and between premises. Mathematical models can offer important insight into the dynamics of disease, especially when built upon experimental and/or field data. Here the dynamics of a range of epidemiological models are explored in order to determine which models perform best in capturing real-world heterogeneities at sufficient resolution. Individual-based network models are considered together with one- and two-class compartmental models, for which the final epidemic size is calculated as a function of the probability of disease transmission occurring during a given physical contact between two individuals. For numerical results, the special cases of a viral disease with a fast recovery rate (foot-and-mouth disease) and a bacterial disease with a slow recovery rate (brucellosis) amongst sheep are considered. Quantitative results from observational studies of physical contact amongst domestic sheep are applied, and results from differently structured flocks (ewes with newborn lambs, ewes with nearly weaned lambs, and ewes only) are compared. These indicate that the breeding cycle leads to significant changes in the expected basic reproduction ratio of diseases. The observed heterogeneity of contacts amongst animals is best captured by full network simulations; simple compartmental models nevertheless describe the key features of an outbreak but, as expected, often overestimate its speed. Here the weights of contacts are heterogeneous, with many low-weight links. However, due to the well-connected nature of the networks, this has little effect and differences between models remain small. These results indicate that simple compartmental models can be a useful tool for modelling real-world flocks; their applicability will be greater still for more homogeneously mixed livestock, which could be promoted by higher-intensity farming practices.
Copyright © 2012 Elsevier B.V. All rights reserved.
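    In the simplest homogeneously mixed SIR case, the final epidemic size computed from per-contact transmission probabilities obeys the implicit relation z = 1 - exp(-R0 * z). A sketch of its numerical solution (this is the textbook relation, not the authors' network formulation):

```python
import math

def final_epidemic_size(r0, tol=1e-10):
    """Solve z = 1 - exp(-R0 * z) for the attack rate z by fixed-point
    iteration; z = 0 is the only solution when R0 <= 1."""
    z = 0.5 if r0 > 1 else 0.0
    for _ in range(1000):
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z
```

For example, R0 = 2 gives an attack rate of roughly 80% of the flock, while any R0 at or below 1 gives no major outbreak; network heterogeneity, as the abstract notes, shifts these values.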

  12. An evaluation of the efficiency of minnow traps for estimating the abundance of minnows in desert spring systems

    USGS Publications Warehouse

    Peterson, James T.; Scheerer, Paul D.; Clements, Shaun

    2015-01-01

    Desert springs are sensitive aquatic ecosystems that pose unique challenges to natural resource managers and researchers. Among the most important of these is the need to accurately quantify population parameters for resident fish, particularly when the species are of special conservation concern. We evaluated the efficiency of baited minnow traps for estimating the abundance of two at-risk species, Foskett Speckled Dace Rhinichthys osculus ssp. and Borax Lake Chub Gila boraxobius, in desert spring systems in southeastern Oregon. We evaluated alternative sample designs using simulation and found that capture–recapture designs with four capture occasions would maximize the accuracy of estimates and minimize fish handling. We implemented the design and estimated capture and recapture probabilities using the Huggins closed-capture estimator. Trap capture probabilities averaged 23% and 26% for Foskett Speckled Dace and Borax Lake Chub, respectively, but differed substantially among sample locations, through time, and nonlinearly with fish body size. Recapture probabilities for Foskett Speckled Dace were, on average, 1.6 times greater than (first) capture probabilities, suggesting "trap-happy" behavior. Comparison of population estimates from the Huggins model with the commonly used Lincoln–Petersen estimator indicated that the latter underestimated Foskett Speckled Dace and Borax Lake Chub population sizes by 48% and 20%, respectively. These biases were due to variability in capture and recapture probabilities. Simulation of fish monitoring that included the range of capture and recapture probabilities observed indicated that variability in capture and recapture probabilities over time negatively affected the ability to detect annual decreases of up to 20% in fish population size. Failure to account for variability in capture and recapture probabilities can lead to poor-quality data and study inferences.
Therefore, we recommend that fishery researchers and managers employ sample designs and estimators that can account for this variability.
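    For contrast with the Huggins model discussed above, the Lincoln–Petersen idea (here in Chapman's bias-corrected form) assumes every fish has the same capture probability, which is exactly the assumption this study found violated. The counts below are hypothetical:

```python
def chapman_estimate(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population
    size from a two-occasion mark-recapture survey."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical survey: 120 fish marked; of 150 caught later, 30 were marked
n_hat = chapman_estimate(120, 150, 30)
```

When capture and recapture probabilities differ (e.g., trap-happy behavior), the recapture count is inflated and this estimator is biased low, consistent with the underestimates reported above.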

  13. Post-audits of Three Groundwater Models for Evaluating Plume Containment

    NASA Astrophysics Data System (ADS)

    Andersen, P. F.

    2003-12-01

    Groundwater extraction systems were designed using numerical models at three sites within a U.S. Army Ammunition Plant in Tennessee. Each site, and hence model, has unique qualities such as boundary conditions, extensiveness of the contaminant plume, and quantity and quality of hydrogeologic data. Performance of each of these extraction systems has been evaluated throughout their operation, providing an opportunity to perform post-audits on the accuracy of the groundwater models that were used in their design. Areas of comparison between the models and the observed response in the natural systems include hydraulic head, drawdown, horizontal and vertical gradients, and extent of capture zones. The results of the post-audits show the importance of using all available data in the construction and calibration of the models, the importance of having sufficient data, and the critical nature of an accurate conceptual model. The post-audits also show that although it may be possible to assess the accuracy of the model predictions, it is often not possible to explain the reasons for discrepancies between predicted and observed results. From a practical perspective, parameter uncertainty is important to account for in the development of the models and subsequent design of the extraction systems.

  14. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably improve understanding of airport dynamics and may provide quantitative estimates of the benefits of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model of airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
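    The taxi-out queuing idea can be sketched as a discrete-time single-server queue: aircraft push back at some rate and a single runway serves departures. The pushback and runway rates below are invented, not values from the paper:

```python
import random

def simulate_taxi_out(pushback_rate=0.8, runway_rate=0.7, minutes=600, seed=1):
    """Discrete-time sketch of a departure queue: each minute an aircraft
    pushes back with probability pushback_rate, and the runway clears one
    queued aircraft with probability runway_rate. Returns the final queue
    length and the time-averaged queue length."""
    rng = random.Random(seed)
    queue = 0
    total_queue = 0
    for _ in range(minutes):
        if rng.random() < pushback_rate:
            queue += 1
        if queue > 0 and rng.random() < runway_rate:
            queue -= 1
        total_queue += queue
    return queue, total_queue / minutes
```

With the demand rate above the runway service rate, the queue grows over the banked departure push, which is the congestion behavior a hub-airport taxi-out model must capture.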

  15. Utility of Policy Capturing as an Approach to Graduate Admissions Decision Making.

    ERIC Educational Resources Information Center

    Schmidt, Frank L.; And Others

    1978-01-01

    The present study examined and evaluated the application of linear policy-capturing models to the real-world decision task of graduate admissions. Utility of the policy-capturing models was great enough to be of practical significance, and least-squares weights showed no predictive advantage over equal weights. (Author/CTM)
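    The equal-weights finding can be reproduced in spirit on synthetic admissions ratings. The predictors, "true" judge weights, and noise level below are fabricated for illustration: in-sample, least-squares weights fit best by construction, but unit (equal) weights predict almost as well.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical standardized predictors (e.g., test score, GPA, letters score)
x = rng.standard_normal((200, 3))
true_w = np.array([0.5, 0.4, 0.3])
ratings = x @ true_w + 0.5 * rng.standard_normal(200)  # a judge's ratings

beta, *_ = np.linalg.lstsq(x, ratings, rcond=None)     # least-squares policy
ls_pred = x @ beta
eq_pred = x.sum(axis=1)                                # unit-weight policy

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]
```

Comparing corr(ls_pred, ratings) with corr(eq_pred, ratings) typically shows only a small gap, the classic robustness-of-equal-weights result the abstract reports.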

  16. Sampling the stream landscape: Improving the applicability of an ecoregion-level capture probability model for stream fishes

    USGS Publications Warehouse

    Mollenhauer, Robert; Mouser, Joshua B.; Brewer, Shannon K.

    2018-01-01

    Temporal and spatial variability in streams results in heterogeneous gear capture probability (i.e., the proportion of available individuals identified) that confounds interpretation of data used to monitor fish abundance. We modeled tow-barge electrofishing capture probability at multiple spatial scales for nine Ozark Highland stream fishes. In addition to fish size, we identified seven reach-scale environmental characteristics associated with variable capture probability: stream discharge, water depth, conductivity, water clarity, emergent vegetation, wetted width–depth ratio, and proportion of riffle habitat. The magnitude of the relationship between capture probability and both discharge and depth varied among stream fishes. We also identified lithological characteristics among stream segments as a coarse-scale source of variable capture probability. The resulting capture probability model can be used to adjust catch data and derive reach-scale absolute abundance estimates across a wide range of sampling conditions with effort similar to that of more traditional fisheries surveys (i.e., catch per unit effort). Adjusting catch data based on variable capture probability improves the comparability of data sets, thus promoting both well-informed conservation and management decisions and advances in stream-fish ecology.
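    The catch adjustment itself is just division of the raw count by the modeled capture probability; the counts and probability below are hypothetical:

```python
def absolute_abundance(catch, capture_prob):
    """Adjust a raw catch count by an estimated capture probability:
    N_hat = C / p_hat."""
    if not 0 < capture_prob <= 1:
        raise ValueError("capture probability must be in (0, 1]")
    return catch / capture_prob

# 42 fish captured at an estimated reach-scale capture probability of 0.35
n_hat = absolute_abundance(42, 0.35)   # roughly 120 fish
```

Because p_hat varies with discharge, depth, clarity, and the other covariates listed above, the same raw catch can imply very different absolute abundances in different reaches.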

  17. Atmospheric CO2 capture by algae: Negative carbon dioxide emission path.

    PubMed

    Moreira, Diana; Pires, José C M

    2016-09-01

    Carbon dioxide is one of the most important greenhouse gases; the increase of its concentration in the atmosphere is associated with climate change and global warming. Besides CO2 capture at large emission point sources, capture of this pollutant from the atmosphere may be required due to the significant contribution of diffuse sources. Technologies that remove CO2 from the atmosphere (creating a negative CO2 balance) are called negative emission technologies. Bioenergy with Carbon Capture and Storage may play an important role in CO2 mitigation. It combines bioenergy production with carbon capture and storage, keeping carbon dioxide in geological reservoirs. Algae have high potential as a source of biomass, as they present high photosynthetic efficiencies and high biomass yields. Their biomass has a wide range of applications, which can improve the economic viability of the process. Thus, this paper aims to assess atmospheric CO2 capture by algal cultures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Annual survival of Snail Kites in Florida: Radio telemetry versus capture-resighting data

    USGS Publications Warehouse

    Bennetts, R.E.; Dreitz, V.J.; Kitchens, W.M.; Hines, J.E.; Nichols, J.D.

    1999-01-01

    We estimated annual survival of Snail Kites (Rostrhamus sociabilis) in Florida using the Kaplan-Meier estimator with data from 271 radio-tagged birds over a three-year period and capture-recapture (resighting) models with data from 1,319 banded birds over a six-year period. We tested the hypothesis that survival differed among three age classes using both data sources. We tested additional hypotheses about spatial and temporal variation using a combination of data from radio telemetry and single- and multistrata capture-recapture models. Results from these data sets were similar in their indications of the sources of variation in survival, but they differed in some parameter estimates. Both data sources indicated that survival was higher for adults than for juveniles, but they did not support delineation of a subadult age class. Our data also indicated that survival differed among years and regions for juveniles but not for adults. Estimates of juvenile survival using radio telemetry data were higher than estimates using capture-recapture models for two of three years (1992 and 1993). Ancillary evidence based on censored birds indicated that some mortality of radio-tagged juveniles went undetected during those years, resulting in biased estimates. Thus, we have greater confidence in our estimates of juvenile survival using capture-recapture models. Precision of estimates reflected the number of parameters estimated and was surprisingly similar between radio telemetry and single-stratum capture-recapture models, given the substantial differences in sample sizes. Not having to estimate resighting probability likely offsets, to some degree, the smaller sample sizes from our radio telemetry data. Precision of capture-recapture models was lower using multistrata models where region-specific parameters were estimated than using single-stratum models, where spatial variation in parameters was not taken into account.
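    The Kaplan-Meier estimator used for the radio-tagged birds reduces to a short product-limit computation. The follow-up times and death indicators below are made up for illustration:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate; event=1 is a death,
    event=0 a right-censored individual (e.g. a lost radio tag)."""
    # Process deaths before censorings at tied times
    data = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    for t, ev in data:
        if ev:
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1
    return curve

# Hypothetical follow-up times (months) and death indicators for five birds
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

The abstract's point about censored juveniles shows the estimator's sensitivity: undetected mortality coded as censoring (event=0) inflates the survival curve, which is why the capture-recapture estimates were preferred for juveniles.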

  19. Hybrid stochastic simulations of intracellular reaction-diffusion systems.

    PubMed

    Kalantzis, Georgios

    2009-06-01

    With the observation that stochasticity is important in biological systems, chemical kinetics have begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient, but they fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
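    The exact stochastic algorithm of Gillespie referenced above can be sketched in a few lines; the birth-death example and its rate constants are illustrative, not a system from the paper:

```python
import math
import random

def gillespie(rates, stoich, x0, t_max, seed=0):
    """Gillespie's exact SSA: rates(x) returns reaction propensities,
    stoich the state-change vectors, x0 the initial copy numbers."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    traj = [(t, tuple(x))]
    while t < t_max:
        a = rates(x)
        a0 = sum(a)
        if a0 == 0:
            break
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        r, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):                # pick reaction j with prob a_j/a0
            acc += aj
            if r < acc:
                break
        x = [xi + s for xi, s in zip(x, stoich[j])]
        traj.append((t, tuple(x)))
    return traj

# Simple birth-death process: 0 -> A at rate 5, A -> 0 at rate 0.5 per molecule
traj = gillespie(lambda x: [5.0, 0.5 * x[0]], [(1,), (-1,)], [0], 50.0)
```

A hybrid scheme of the kind described above would route the high-propensity reactions through rate equations and reserve this event-by-event loop for the rare ones.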

  20. Using Dynamic Multi-Task Non-Negative Matrix Factorization to Detect the Evolution of User Preferences in Collaborative Filtering

    PubMed Central

    Ju, Bin; Qian, Yuntao; Ye, Minchao; Ni, Rong; Zhu, Chenxi

    2015-01-01

    Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time. PMID:26270539
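    The non-negative factorization at the heart of the model can be illustrated with the standard Lee-Seung multiplicative updates for a single matrix; the paper's multi-task, dynamically coupled version builds on this basic step. The matrix below is random, purely for illustration:

```python
import numpy as np

def nmf(v, rank, iters=500, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H under Frobenius loss;
    W and H stay elementwise non-negative throughout."""
    rng = np.random.default_rng(seed)
    n, m = v.shape
    w = rng.random((n, rank)) + 1e-3
    h = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        h *= (w.T @ v) / (w.T @ w @ h + 1e-9)
        w *= (v @ h.T) / (w @ h @ h.T + 1e-9)
    return w, h

v = np.random.default_rng(1).random((6, 4))   # toy item-user co-occurrence
w, h = nmf(v, rank=2)
err = np.linalg.norm(v - w @ h)
```

In the paper's setting, a shared basis matrix (the analogue of W) is factored jointly across time-sliced co-occurrence matrices, and a state transition matrix links the successive factor-user matrices (the analogues of H).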

  2. Southwestern USA Drought over Multiple Millennia

    NASA Astrophysics Data System (ADS)

    Salzer, M. W.; Kipfmueller, K. F.

    2014-12-01

    Severe to extreme drought conditions currently exist across much of the American West. There is increasing concern that climate change may be worsening droughts in the West and particularly the Southwest. Thus, it is important to understand the role of natural variability and to place current conditions in a long-term context. We present a tree-ring derived reconstruction of regional-scale precipitation for the Southwestern USA over several millennia. A network of 48 tree-ring chronologies from California, Nevada, Utah, Arizona, New Mexico, and Colorado was used. All of the chronologies are at least 1,000 years long. The network was subjected to data reduction through PCA and a "nested" multiple linear regression reconstruction approach. The regression model was able to capture 72% of the variance in September-August precipitation over the last 1,000 years and 53% of the variance over the first millennium of the Common Era. Variance captured and spatial coverage further declined back in time as the shorter chronologies dropped out of the model, eventually reaching 24% of variance captured at 3250 BC. Results show regional droughts on decadal- to multi-decadal scales have been prominent and persistent phenomena in the region over the last several millennia. Anthropogenic warming is likely to exacerbate the effects of future droughts on human and other biotic populations.

  3. Design of a framework for the deployment of collaborative independent rare disease-centric registries: Gaucher disease registry model.

    PubMed

    Bellgard, Matthew I; Napier, Kathryn R; Bittles, Alan H; Szer, Jeffrey; Fletcher, Sue; Zeps, Nikolajs; Hunter, Adam A; Goldblatt, Jack

    2018-02-01

    Orphan drug clinical trials are often adversely affected by a lack of high-quality treatment efficacy data that can be reliably compared across large patient cohorts derived from multiple governmental and country jurisdictions. It is critical that these patient data be captured with limited corporate involvement. For some time, there have been calls to develop collaborative, non-proprietary, patient-centric registries for post-market surveillance of aspects related to orphan drug efficacy. There is an urgent need for the development and sustainable deployment of these 'independent' registries that can capture comprehensive clinical, genetic and therapeutic information on patients with rare diseases. We therefore extended an open-source registry platform, the Rare Disease Registry Framework (RDRF), to establish an Independent Rare Disease Registry (IRDR). We engaged with an established rare disease community for Gaucher disease to determine system requirements, methods of data capture, consent, and reporting. A non-proprietary IRDR model is presented that can serve as an autonomous data repository, but more importantly ensures that the relevant data can be made available to appropriate stakeholders in a secure, timely and efficient manner to improve clinical decision-making and the lives of those with a rare disease. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Land surface albedo and vegetation feedbacks enhanced the millennium drought in south-east Australia

    NASA Astrophysics Data System (ADS)

    Evans, Jason P.; Meng, Xianhong; McCabe, Matthew F.

    2017-01-01

    In this study, we have examined the ability of a regional climate model (RCM) to simulate the extended drought that occurred throughout the period of 2002 through 2007 in south-east Australia. In particular, the ability to reproduce the two drought peaks in 2002 and 2006 was investigated. Overall, the RCM was found to reproduce both the temporal and the spatial structure of the drought-related precipitation anomalies quite well, despite using climatological seasonal surface characteristics such as vegetation fraction and albedo. This result concurs with previous studies that found that about two-thirds of the precipitation decline can be attributed to the El Niño-Southern Oscillation (ENSO). Simulation experiments that allowed the vegetation fraction and albedo to vary as observed illustrated that the intensity of the drought was underestimated by about 10 % when using climatological surface characteristics. These results suggest that in terms of drought development, capturing the feedbacks related to vegetation and albedo changes may be as important as capturing the soil moisture-precipitation feedback. In order to improve our modelling of multi-year droughts, the challenge is to capture all these related surface changes simultaneously, and provide a comprehensive description of land surface-precipitation feedback during the droughts development.

  5. Use of Machine Learning Techniques for Identification of Robust Teleconnections to East African Rainfall Variability in Observations and Models

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, Franklin R.; Funk, Chris

    2014-01-01

    Providing advance warning of East African rainfall variations is a particular focus of several groups, including those participating in the Famine Early Warning Systems Network. Both seasonal and long-term model projections of climate variability are being used to examine the societal impacts of hydrometeorological variability on seasonal-to-interannual and longer time scales. The NASA/USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of both seasonal and climate model projections to develop downscaled scenarios for use in impact modeling. The utility of these projections relies on the ability of current models to capture the embedded relationships between East African rainfall and evolving forcing within the coupled ocean-atmosphere-land climate system. Previous studies have posited relationships between variations in El Niño, the Walker circulation, Pacific decadal variability (PDV), and anthropogenic forcing. This study applies machine learning methods (e.g., clustering, probabilistic graphical models, nonlinear PCA) to observational datasets in an attempt to expose the importance of local and remote forcing mechanisms of East African rainfall variability. The ability of the NASA Goddard Earth Observing System (GEOS-5) coupled model to capture the associated relationships will be evaluated using Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations.

  6. Neutron Capture Gamma-Ray Libraries for Nuclear Applications

    NASA Astrophysics Data System (ADS)

    Sleaford, B. W.; Firestone, R. B.; Summers, N.; Escher, J.; Hurst, A.; Krticka, M.; Basunia, S.; Molnar, G.; Belgya, T.; Revay, Z.; Choi, H. D.

    2011-06-01

    The neutron capture reaction is useful in identifying and analyzing the gamma-ray spectrum from an unknown assembly as it gives unambiguous information on its composition. This can be done passively or actively where an external neutron source is used to probe an unknown assembly. There are known capture gamma-ray data gaps in the ENDF libraries used by transport codes for various nuclear applications. The Evaluated Gamma-ray Activation file (EGAF) is a new thermal neutron capture database of discrete line spectra and cross sections for over 260 isotopes that was developed as part of an IAEA Coordinated Research Project. EGAF is being used to improve the capture gamma production in ENDF libraries. For medium to heavy nuclei the quasi continuum contribution to the gamma cascades is not experimentally resolved. The continuum contains up to 90% of all the decay energy and is modeled here with the statistical nuclear structure code DICEBOX. This code also provides a consistency check of the level scheme nuclear structure evaluation. The calculated continuum is of sufficient accuracy to include in the ENDF libraries. This analysis also determines new total thermal capture cross sections and provides an improved RIPL database. For higher energy neutron capture there is less experimental data available making benchmarking of the modeling codes more difficult. We are investigating the capture spectra from higher energy neutrons experimentally using surrogate reactions and modeling this with Hauser-Feshbach codes. This can then be used to benchmark CASINO, a version of DICEBOX modified for neutron capture at higher energy. This can be used to simulate spectra from neutron capture at incident neutron energies up to 20 MeV to improve the gamma-ray spectrum in neutron data libraries used for transport modeling of unknown assemblies.

  7. Estimation of sex-specific survival from capture-recapture data when sex is not always known

    USGS Publications Warehouse

    Nichols, J.D.; Kendall, W.L.; Hines, J.E.; Spendelow, J.A.

    2004-01-01

    Many animals lack obvious sexual dimorphism, making assignment of sex difficult even for observed or captured animals. For many such species it is possible to assign sex with certainty only at some occasions; for example, when they exhibit certain types of behavior. A common approach to handling this situation in capture-recapture studies has been to group capture histories into those of animals eventually identified as male and female and those for which sex was never known. Because group membership is dependent on the number of occasions at which an animal was caught or observed (known-sex animals, on average, will have been observed at more occasions than unknown-sex animals), survival estimates for known-sex animals will be positively biased, and those for unknown-sex animals will be negatively biased. In this paper, we develop capture-recapture models that incorporate sex ratio and sex assignment parameters that permit unbiased estimation in the face of this sampling problem. We demonstrate the magnitude of bias in the traditional capture-recapture approach to this sampling problem, and we explore properties of estimators from other ad hoc approaches. The model is then applied to capture-recapture data for adult Roseate Terns (Sterna dougallii) at Falkner Island, Connecticut, 1993-2002. Sex ratio among adults in this population favors females, and we tested the hypothesis that this population showed sex-specific differences in adult survival. Evidence was provided for higher survival of adult females than males, as predicted. We recommend use of this modeling approach for future capture-recapture studies in which sex cannot always be assigned to captured or observed animals. We also place this problem in the more general context of uncertainty in state classification in multistate capture-recapture models.
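
    The selection bias described above is easy to reproduce in a toy simulation: animals eventually identified to sex are, by construction, those captured more often. The sketch below (invented parameters, not the paper's model) shows the effect on mean capture counts, which is what biases naive group-wise survival estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy capture histories: every animal has the same capture probability per
    # occasion, and sex is identified with some probability at each capture.
    # All parameter values are invented for illustration.
    n_animals, n_occasions = 5000, 8
    p_capture, p_sex_id = 0.3, 0.4

    captured = rng.random((n_animals, n_occasions)) < p_capture
    sexed_at = captured & (rng.random((n_animals, n_occasions)) < p_sex_id)
    known_sex = sexed_at.any(axis=1)          # eventually identified to sex

    n_caps = captured.sum(axis=1)
    seen = n_caps > 0                         # only observed animals enter the data

    mean_known = n_caps[seen & known_sex].mean()
    mean_unknown = n_caps[seen & ~known_sex].mean()
    # known-sex animals were necessarily captured more often on average,
    # even though every animal had identical capture probability
    ```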

  8. Capturing the Patient’s Experience: Using Qualitative Methods to Develop a Measure of Patient-Reported Symptom Burden: An Example from Ovarian Cancer

    PubMed Central

    Williams, Loretta A.; Agarwal, Sonika; Bodurka, Diane C.; Saleeba, Angele K.; Sun, Charlotte C.; Cleeland, Charles S.

    2013-01-01

    Context Experts in patient-reported outcome (PRO) measurement emphasize the importance of including patient input in the development of PRO measures. Although best methods for acquiring this input are not yet identified, patient input early in instrument development ensures that instrument content captures information most important and relevant to patients in understandable terms. Objectives The M. D. Anderson Symptom Inventory (MDASI) is a reliable, valid PRO instrument for assessing cancer symptom burden. We report a qualitative (open-ended, in-depth) interviewing method that can be used to incorporate patient input into PRO symptom measure development, with our experience in constructing a MDASI module for ovarian cancer (MDASI-OC) as a model. Methods Fourteen patients with ovarian cancer (OC) described symptoms experienced at the time of the study, at diagnosis, and during prior treatments. Researchers and clinicians used content analysis of interview transcripts to identify symptoms in patient language. Symptoms were ranked on the basis of the number of patients mentioning them and by clinician assessment of relevance. Results Forty-two symptoms were mentioned. Eight OC-specific items will be added to the 13 core symptom items and six interference items of the MDASI in a test version of the MDASI-OC based on the number of patients mentioning them and clinician assessment of importance. The test version is undergoing psychometric evaluation. Conclusion The qualitative interviewing process, used to develop the test MDASI-OC, systematically captures common symptoms important to patients with ovarian cancer. This methodology incorporates the patient experience recommended by experts in PRO instrument development. PMID:23615044

  9. Asymmetric capture of Dirac dark matter by the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Clementz, Stefan

    2015-08-18

    Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei, as well as the case where the capture process is treated as a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured numbers of particles are competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.
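
    The basic mechanism, unequal capture rates feeding a population depleted by pair annihilation, can be sketched with toy rate equations (all rates invented, not the paper's calculation):

    ```python
    # Toy rate equations for the numbers of captured dark matter (N_plus) and
    # anti-dark matter (N_minus) particles in the Sun:
    #   dN(+/-)/dt = C(+/-) - A * N_plus * N_minus
    # Unequal capture rates C_plus != C_minus stand in for different cross
    # sections on solar nuclei; all values are illustrative.
    C_plus, C_minus, A = 1.0, 0.6, 1e-3

    N_plus = N_minus = 0.0
    dt, steps = 0.01, 200_000   # integrate to t = 2000 with forward Euler
    for _ in range(steps):
        ann = A * N_plus * N_minus          # pair annihilation removes one of each
        N_plus += (C_plus - ann) * dt
        N_minus += (C_minus - ann) * dt

    # Annihilation cancels in the difference, so the asymmetry grows as
    # N_plus - N_minus = (C_plus - C_minus) * t, while N_minus saturates.
    ```

    The point of the sketch is that annihilation only removes particle-antiparticle pairs, so any difference in capture rates accumulates into a growing asymmetry.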

  10. Moving beyond Watson-Crick models of coarse grained DNA dynamics.

    PubMed

    Linak, Margaret C; Tourdot, Richard; Dorfman, Kevin D

    2011-11-28

    DNA produces a wide range of structures in addition to the canonical B-form of double-stranded DNA. Some of these structures are stabilized by Hoogsteen bonds. We developed an experimentally parameterized, coarse-grained model that incorporates such bonds. The model reproduces many of the microscopic features of double-stranded DNA and captures the experimental melting curves for a number of short DNA hairpins, even when the open state forms complicated secondary structures. We demonstrate the utility of the model by simulating the folding of a thrombin aptamer, which contains G-quartets, and strand invasion during triplex formation. Our results highlight the importance of including Hoogsteen bonding in coarse-grained models of DNA.
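
    The experimental melting curves mentioned above are commonly summarized by a two-state (van 't Hoff) model, which gives the sigmoidal shape a simulated curve is compared against. The sketch below uses illustrative values for the opening enthalpy and melting temperature, not parameters from the paper's coarse-grained model.

    ```python
    import numpy as np

    # Minimal two-state model of a DNA hairpin melting curve: closed <-> open,
    # with opening enthalpy dH_open and melting temperature Tm (illustrative).
    R = 1.987e-3                 # gas constant, kcal/(mol*K)
    dH_open, Tm = 40.0, 330.0    # kcal/mol, K

    def fraction_closed(T):
        # K_open = exp(-dG/RT) with dS = dH/Tm, so dG = dH * (1 - T/Tm)
        K_open = np.exp(-dH_open / R * (1.0 / T - 1.0 / Tm))
        return 1.0 / (1.0 + K_open)

    T = np.linspace(300.0, 360.0, 121)
    melt_curve = fraction_closed(T)   # sigmoidal: ~1 when cold, 0.5 at Tm, ~0 when hot
    ```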

  11. Orthogonal-blendshape-based editing system for facial motion capture data.

    PubMed

    Li, Qing; Deng, Zhigang

    2008-01-01

    The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
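
    The core construction, a truncated PCA basis treated as a blendshape model whose weights can be edited, can be sketched with synthetic data (a generic PCA sketch, not the authors' region-segmented system):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in for captured facial motion: 200 frames of a 30-dimensional
    # marker region, lying (by construction) in a 5-dimensional subspace.
    frames = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 30))

    mean = frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    k = 5
    basis = Vt[:k]                        # truncated PCA basis: the "blendshapes"
    weights = (frames - mean) @ basis.T   # per-frame blendshape weights (PCA coefficients)

    # Editing a weight sequence edits the reconstructed motion directly
    edited = weights.copy()
    edited[:, 0] += 2.0                   # exaggerate the dominant mode in every frame
    edited_frames = mean + edited @ basis
    ```

    Because the basis is orthonormal, adjusting one coefficient changes only that mode of the reconstruction, which is what makes weight editing equivalent to editing the motion capture sequence.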

  12. Decision insight into stakeholder conflict for ERN.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siirola, John; Tidwell, Vincent Carroll; Benz, Zachary O.

    Participatory modeling has become an important tool in facilitating resource decision making and dispute resolution. Approaches to modeling that are commonly used in this context often do not adequately account for important human factors. Current techniques provide insights into how certain human activities and variables affect resource outcomes; however, they do not directly simulate the complex variables that shape how, why, and under what conditions different human agents behave in ways that affect resources and human interactions related to them. Current approaches also do not adequately reveal how the effects of individual decisions scale up to have systemic-level effects in complex resource systems. This lack of integration prevents the development of more robust models to support decision making and dispute resolution processes. Development of integrated tools is further hampered by the fact that collection of primary data for decision-making modeling is costly and time consuming. This project seeks to develop a new approach to resource modeling that incorporates both technical and behavioral modeling techniques into a single decision-making architecture. The modeling platform is enhanced by use of traditional and advanced processes and tools for expedited data capture. Specific objectives of the project are: (1) Develop a proof of concept for a new technical approach to resource modeling that combines the computational techniques of system dynamics and agent-based modeling, (2) Develop an iterative, participatory modeling process supported with traditional and advanced data capture techniques that may be utilized to facilitate decision making, dispute resolution, and collaborative learning processes, and (3) Examine potential applications of this technology and process.
The development of this decision support architecture included both the engineering of the technology and the development of a participatory method to build and apply the technology. Stakeholder interaction with the model and associated data capture was facilitated through two very different modes of engagement: one a standard interface involving radio buttons, slider bars, graphs, and plots; the other an immersive serious-gaming interface. The decision support architecture developed through this project was piloted in the Middle Rio Grande Basin to examine how these tools might be utilized to promote enhanced understanding and decision-making in the context of complex water resource management issues. Potential applications of this architecture, and its capacity to lead to enhanced understanding and decision-making, were assessed through qualitative interviews with study participants who represented key stakeholders in the basin.

  13. Dynamics of Postcombustion CO2 Capture Plants: Modeling, Validation, and Case Study

    PubMed Central

    2017-01-01

    The capture of CO2 from power plant flue gases provides an opportunity to mitigate emissions that are harmful to the global climate. While the process of CO2 capture using an aqueous amine solution is well-known from experience in other technical sectors (e.g., acid gas removal in the gas processing industry), its operation combined with a power plant still needs investigation because in this case, the interaction with power plants that are increasingly operated dynamically poses control challenges. This article presents the dynamic modeling of CO2 capture plants followed by a detailed validation using transient measurements recorded from the pilot plant operated at the Maasvlakte power station in the Netherlands. The model predictions are in good agreement with the experimental data related to the transient changes of the main process variables such as flow rate, CO2 concentrations, temperatures, and solvent loading. The validated model was used to study the effects of fast power plant transients on the capture plant operation. A relevant result of this work is that an integrated CO2 capture plant might enable more dynamic operation of retrofitted fossil fuel power plants because the large amount of steam needed by the capture process can be diverted rapidly to and from the power plant. PMID:28413256

  14. Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.

    PubMed

    Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark

    2016-01-01

    Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.
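
    The state-annotation idea, inferring a most-likely sequence of morphological states from noisy shape measurements, can be sketched with a hand-rolled Viterbi decoder. This is a generic two-state HMM on invented data, not the SAPHIRE implementation.

    ```python
    import numpy as np

    def viterbi(obs, log_pi, log_A, log_emit):
        """Most likely state path; log_emit[t, s] = log p(obs_t | state s)."""
        T, S = log_emit.shape
        delta = log_pi + log_emit[0]
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A          # scores[i, j]: state i -> j
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_emit[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Two morphological states, e.g. "spread" (mean shape score 0.2) and
    # "round" (0.8); all numbers are illustrative.
    means, sigma = np.array([0.2, 0.8]), 0.1
    obs = np.array([0.21, 0.18, 0.25, 0.74, 0.82, 0.79])
    log_emit = -0.5 * ((obs[:, None] - means) / sigma) ** 2  # Gaussian log-density up to a constant
    log_pi = np.log([0.5, 0.5])
    log_A = np.log([[0.9, 0.1], [0.1, 0.9]])                 # sticky transitions

    states = viterbi(obs, log_pi, log_A, log_emit)           # [0, 0, 0, 1, 1, 1]
    ```

    In the paper's setting the emission model is fit per cell, and the decoded state sequence plus the transition rates become the phenotypic profile.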

  15. Ring Current Pressure Estimation with RAM-SCB Using Data Assimilation and Van Allen Probe Flux Data

    NASA Astrophysics Data System (ADS)

    Godinez, H. C.; Yu, Y.; Henderson, M. G.; Larsen, B.; Jordanova, V.

    2015-12-01

    Capturing and subsequently modeling the influence of tail plasma injections on the inner magnetosphere is particularly important for understanding the formation and evolution of Earth's ring current. In this study, the ring current distribution is estimated with the Ring Current-Atmosphere Interactions Model with Self-Consistent Magnetic field (RAM-SCB) using, for the first time, data assimilation techniques and particle flux data from the Van Allen Probes. The state of the ring current within the RAM-SCB is corrected via an ensemble-based data assimilation technique by using proton flux from one of the Van Allen Probes to capture the enhancement of the ring current following an isolated substorm event on July 18, 2013. The results show significant improvement in the estimation of the ring current particle distributions in the RAM-SCB model, leading to better agreement with observations. This newly implemented data assimilation technique in the global modeling of the ring current thus provides a promising tool to better characterize the effect of substorm injections in the near-Earth regions. The work is part of the Space Hazards Induced near Earth by Large, Dynamic Storms (SHIELDS) project at Los Alamos National Laboratory.
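
    An ensemble-based analysis step of the kind used here can be sketched generically. The code below is a stochastic ensemble Kalman filter update on an invented two-variable state, not the RAM-SCB scheme: the observed variable is pulled toward the measurement, and the unobserved one is corrected through the ensemble covariance.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def enkf_update(X, y, H, R):
        """Stochastic EnKF analysis. X: (n_state, n_ens) ensemble; y: obs;
        H: observation operator; R: observation error covariance."""
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)       # state anomalies
        HX = H @ X
        HA = HX - HX.mean(axis=1, keepdims=True)    # observed-space anomalies
        P_hh = HA @ HA.T / (n_ens - 1) + R
        P_xh = A @ HA.T / (n_ens - 1)
        K = P_xh @ np.linalg.inv(P_hh)              # Kalman gain
        # perturbed observations, one per ensemble member
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - HX)

    # Prior ensemble for a toy two-variable state; only variable 0 (a flux
    # proxy) is observed. All numbers are invented.
    X = rng.multivariate_normal([10.0, 5.0], [[4.0, 2.0], [2.0, 3.0]], 200).T
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.5]])
    y = np.array([14.0])

    Xa = enkf_update(X, y, H, R)   # analysis ensemble
    ```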

  16. Rethinking work-health models for the new global economy: a qualitative analysis of emerging dimensions of work.

    PubMed

    Polanyi, Michael; Tompa, Emile

    2004-01-01

    Technology change, rising international trade and investment, and increased competition are changing the organization, distribution and nature of work in industrialized countries. To enhance productivity, employers are striving to increase innovation while minimizing costs. This is leading to an intensification of work demands on core employees and the outsourcing or casualization of more marginal tasks, often to contingent workers. The two prevailing models of work and health - demand-control and effort-reward imbalance - may not capture the full range of experiences of workers in today's increasingly flexible and competitive economies. To explore this proposition, we conducted a secondary qualitative analysis of interviews with 120 American workers [6]. Our analysis identifies aspects of work affecting the quality of workers' experiences that are largely overlooked by popular work-health models: the nature of social interactions with customers and clients; workers' belief in, and perception of, the importance of the product of their work. We suggest that the quality of work experiences is partly determined by the objective characteristics of the work environment, but also by the fit of the work environment with the worker's needs, interests, desires and personality, something not adequately captured in current models.

  17. Bayesian Treed Multivariate Gaussian Process with Adaptive Design: Application to a Carbon Capture Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik

    2014-05-16

    Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of high-resolution simulations is often prohibitive, making parametric studies at different input values impractical. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and prior distributions facilitates the different Markov chain Monte Carlo (MCMC) moves. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.

  18. Coupling biochemistry and hydrodynamics captures hyperactivated sperm motility in a simple flagellar model

    PubMed Central

    Olson, Sarah D.; Suarez, Susan S.; Fauci, Lisa J.

    2011-01-01

    Hyperactivation in mammalian sperm is characterized by highly asymmetrical waveforms and an increase in the amplitude of flagellar bends. It is important for the sperm to be able to achieve hyperactivated motility in order to reach and fertilize the egg. Calcium (Ca2+) dynamics are known to play a large role in the initiation and maintenance of hyperactivated motility. Here we present an integrative model that couples the CatSper channel mediated Ca2+ dynamics of hyperactivation to a mechanical model of an idealized sperm flagellum in a 3-d viscous, incompressible fluid. The mechanical forces are due to passive stiffness properties and active bending moments that are a function of the local Ca2+ concentration along the length of the flagellum. By including an asymmetry in bending moments to reflect an asymmetry in the axoneme’s response to Ca2+, we capture the transition from activated motility to hyperactivated motility. We examine the effects of elastic properties of the flagellum and the Ca2+ dynamics on the overall swimming patterns. The swimming velocities of the model flagellum compare well with data for hyperactivated mouse sperm. PMID:21669209

  19. The MP (Materialization Pattern) Model for Representing Math Educational Standards

    NASA Astrophysics Data System (ADS)

    Choi, Namyoun; Song, Il-Yeol; An, Yuan

    Representing natural languages with UML has been an important research issue for various reasons. Little work has been done, however, on modeling imperative-mood sentences, the sentence structure used in math educational standard statements. In this paper, we propose the MP (Materialization Pattern) model, which captures the semantics of English sentences used in math educational standards. The MP model is based on Reed-Kellogg sentence diagrams and creates MP schemas with UML notation. The MP model explicitly represents the semantics of the sentences by extracting math concepts, and the cognitive processes applied to them, from math educational standard statements, and it simplifies modeling. The MP model is also designed to support aligning math educational standard statements via schema matching.

  20. Physical-mathematical model of condensation process of the sub-micron dust capture in sprayer scrubber

    NASA Astrophysics Data System (ADS)

    Shilyaev, M. I.; Khromova, E. M.; Grigoriev, A. V.; Tumashova, A. V.

    2011-09-01

    A physical-mathematical model of the heat and mass exchange process and condensation capture of sub-micron dust particles on the droplets of dispersed liquid in a sprayer scrubber is proposed and analysed. A satisfactory agreement of computed results and experimental data on soot capturing from the cracking gases is obtained.

  1. A New Method for Computing Three-Dimensional Capture Fraction in Heterogeneous Regional Systems using the MODFLOW Adjoint Code

    NASA Astrophysics Data System (ADS)

    Clemo, T. M.; Ramarao, B.; Kelly, V. A.; Lavenue, M.

    2011-12-01

    Capture is a measure of the impact of groundwater pumping upon groundwater and surface water systems. The computation of capture through analytical or numerical methods has been the subject of articles in the literature for several decades (Bredehoeft et al., 1982). Most recently, Leake et al. (2010) described a systematic way to produce capture maps in three-dimensional systems using a numerical perturbation approach in which capture from streams was computed using unit-rate pumping at many locations within a MODFLOW model. The Leake et al. (2010) method advances the current state of computing capture. A limitation stems from the computational demand of the perturbation approach, wherein days or weeks of computational time might be required to obtain a robust measure of capture. In this paper, we present an efficient method to compute capture in three-dimensional systems based upon adjoint states. The efficiency of the adjoint method will enable uncertainty analysis to be conducted on capture calculations. The USGS and INTERA have collaborated to extend the MODFLOW Adjoint code (Clemo, 2007) to include stream-aquifer interaction and have applied it to one of the examples used in Leake et al. (2010), the San Pedro Basin MODFLOW model. With five layers and 140,800 grid blocks per layer, the San Pedro Basin model provided an ideal example data set with which to compare the capture computed from the perturbation and adjoint methods. The capture fraction map produced from the perturbation method for the San Pedro Basin model required significant computational time, and the pumping-well locations were therefore limited to 1530 locations in layer 4. The 1530 direct simulations of capture required approximately 76 CPU hours. Had capture been simulated in each grid block in each layer, as is done in the adjoint method, the CPU time would have been on the order of 4 years. 
The MODFLOW-Adjoint produced the capture fraction map of the San Pedro Basin model at 704,000 grid blocks (140,800 grid blocks x 5 layers) in just 6 minutes. The capture fraction maps from the perturbation and adjoint methods agree closely. The results of this study indicate that the adjoint capture method and its associated computational efficiency will enable scientists and engineers facing water resource management decisions to evaluate the sensitivity and uncertainty of impacts to regional water resource systems as part of groundwater supply strategies. Bredehoeft, J.D., S.S. Papadopulos, and H.H. Cooper Jr, Groundwater: The water budget myth. In Scientific Basis of Water-Resources Management, ed. National Research Council (U.S.), Geophysical Study Committee, 51-57. Washington D.C.: National Academy Press, 1982. Clemo, Tom, MODFLOW-2005 Ground-Water Model-Users Guide to Adjoint State based Sensitivity Process (ADJ), BSU CGISS 07-01, Center for the Geophysical Investigation of the Shallow Subsurface, Boise State University, 2007. Leake, S.A., H.W. Reeves, and J.E. Dickinson, A New Capture Fraction Method to Map How Pumpage Affects Surface Water Flow, Ground Water, 48(5), 670-700, 2010.
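
    The perturbation-versus-adjoint comparison above can be sketched on a generic linear system (a stand-in for the groundwater equations, not MODFLOW itself): for a steady state A h = b and a capture metric c = q . h, the perturbation route costs one extra solve per pumping location, while a single adjoint solve A^T lam = q yields every sensitivity dc/db_i at once.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in steady-state flow system A h = b: h is head, b holds
    # source/sink (pumping) terms, and the "capture" metric is a weighted
    # sum c = q . h over stream-connected cells. All values are invented.
    n = 50
    A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    q = rng.standard_normal(n)

    # Perturbation approach: one extra forward solve per pumping location
    h0 = np.linalg.solve(A, b)
    c0 = q @ h0
    eps = 1e-6
    sens_direct = np.array([
        (q @ np.linalg.solve(A, b + eps * np.eye(n)[i]) - c0) / eps
        for i in range(n)
    ])

    # Adjoint approach: one solve of A^T lam = q gives all sensitivities,
    # since c = q . (A^-1 b) implies dc/db = A^-T q
    lam = np.linalg.solve(A.T, q)
    ```

    The n forward solves and the single adjoint solve agree to numerical precision, which is the efficiency argument made in the abstract.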

  2. Incorporating grazing into an eco-hydrologic model: Simulating coupled human and natural systems in rangelands

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Liu, M.; Tague, C.; Choate, J. S.; Evans, R. D.; Johnson, K. A.; Adam, J. C.

    2013-12-01

    Rangelands provide an opportunity to investigate the coupled feedbacks between human activities and natural ecosystems. These areas comprise at least one-third of the Earth's surface and provide ecological support for birds, insects, wildlife and agricultural animals including grazing lands for livestock. Capturing the interactions among water, carbon, and nitrogen cycles within the context of regional scale patterns of climate and management is important to understand interactions, responses, and feedbacks between rangeland systems and humans, as well as provide relevant information to stakeholders and policymakers. The overarching objective of this research is to understand the full consequences, intended and unintended, of human activities and climate over time in rangelands by incorporating dynamics related to rangeland management into an eco-hydrologic model that also incorporates biogeochemical and soil processes. Here we evaluate our model over ungrazed and grazed sites for different rangeland ecosystems. The Regional Hydro-ecologic Simulation System (RHESSys) is a process-based, watershed-scale model that couples water with carbon and nitrogen cycles. Climate, soil, vegetation, and management effects within the watershed are represented in a nested landscape hierarchy to account for heterogeneity and the lateral movement of water and nutrients. We incorporated a daily time-series of plant biomass loss from rangeland to represent grazing. The TRY Plant Trait Database was used to parameterize genera of shrubs and grasses in different rangeland types, such as tallgrass prairie, Intermountain West cold desert, and shortgrass steppe. In addition, other model parameters captured the reallocation of carbon and nutrients after grass defoliation. Initial simulations were conducted at the Curlew Valley site in northern Utah, a former International Geosphere-Biosphere Programme Desert Biome site. 
We found that grasses were most sensitive to model parameters affecting the daily-to-yearly ratio of net primary productivity allocation of carbon, non-structural carbohydrate pool, rate of root turnover, and leaf on/off days. We also ran RHESSys over AmeriFlux sites representing a spectrum of rangeland ecosystems, such as at Konza Prairie (Kansas), Fort Peck (Montana), and Corral Pocket (Utah), as well as grazed versus ungrazed sites. We evaluated RHESSys using net ecosystem exchange. Competition between rangeland vegetation types with different physiological parameters, such as carbon:nitrogen ratio and specific leaf area, within a single site was also tested. Preliminary results indicated both species-specific parameters and allocation controls were important to capturing the ecosystem response to environmental conditions. Furthermore, the addition of a grazing component allowed us to better capture impacts of management at grazed sites. Future research will involve incorporation of other grazing processes, such as impacts of excreta and increased nutrient availability and cycling.

  3. Application of an improved magnetic immunosorbent in an Ephesia chip designed for circulating tumor cell capture.

    PubMed

    Svobodova, Zuzana; Kucerova, Jana; Autebert, Julien; Horak, Daniel; Bruckova, Lenka; Viovy, Jean-Louis; Bilkova, Zuzana

    2014-02-01

    In this study, we describe a particular step in developing a microfluidic device for capture and detection of circulating tumor cells, specifically the preparation of an immunosorbent for implementation into the separation chip. We highlight some of the most important specifics connected with superparamagnetic microspheres for microfluidic purposes. Factors such as nonspecific adsorption on microfluidic channels, interactions with model cell lines, and tendency to aggregate were investigated. Poly(glycidyl methacrylate) microspheres with carboxyl groups were employed for this purpose. To address the aforementioned challenges, the microspheres were coated with hydrazide-PEG-hydrazide, and subsequently anti-epithelial cell adhesion molecule (EpCAM) antibody was immobilized. The prepared anti-EpCAM immunosorbent was pretested using model cell lines with differing EpCAM density (MCF7, SKBR3, A549, and Raji) in a batchwise arrangement. Finally, the entire system was implemented and studied in an Ephesia chip and an evaluation was performed by the MCF7 cell line. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Image and in situ data integration to derive sawgrass density for surface flow modelling in the Everglades, Florida, USA

    USGS Publications Warehouse

    Jones, J.W.

    2000-01-01

    The US Geological Survey is building models of the Florida Everglades to be used in managing south Florida surface water flows for habitat restoration and maintenance. Because of the low gradients in the Everglades, vegetation structural characteristics are very important and greatly influence surface water flow and distribution. Vegetation density is being evaluated as an index of surface resistance to flow. Digital multispectral videography (DMSV) has been captured over several sites just before field collection of vegetation data. Linear regression has been used to establish a relationship between normalized difference vegetation index (NDVI) values computed from the DMSV and field-collected biomass and density estimates. Spatial analysis applied to the DMSV data indicates that thematic mapper (TM) resolution is at the limit required to capture land surface heterogeneity. The TM data collected close to the time of the DMSV will be used to derive a regional sawgrass density map.
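
    The NDVI-to-density calibration described above amounts to a simple linear regression of field measurements against an image-derived index. A sketch with synthetic reflectances (all numbers invented, not the DMSV data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic per-pixel reflectances standing in for the red and
    # near-infrared image bands
    red = rng.uniform(0.05, 0.30, 100)
    nir = rng.uniform(0.30, 0.60, 100)
    ndvi = (nir - red) / (nir + red)     # normalized difference vegetation index

    # Field-collected "density" constructed to depend linearly on NDVI, plus noise
    density = 2.5 * ndvi + 0.3 + rng.normal(0.0, 0.02, 100)

    slope, intercept = np.polyfit(ndvi, density, 1)   # the calibration regression
    ```

    Once fit, the regression can be applied to NDVI computed from coarser imagery (here, TM) to map density regionally.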

  6. Non-linear corrections to the time-covariance function derived from a multi-state chemical master equation.

    PubMed

    Scott, M

    2012-08-01

    The time-covariance function captures the dynamics of biochemical fluctuations and contains important information about the underlying kinetic rate parameters. Intrinsic fluctuations in biochemical reaction networks are typically modelled using a master equation formalism. In general, the equation cannot be solved exactly and approximation methods are required. For small fluctuations close to equilibrium, a linearisation of the dynamics provides a very good description of the relaxation of the time-covariance function. As the number of molecules in the system decreases, deviations from the linear theory appear. Carrying out a systematic perturbation expansion of the master equation to capture these effects results in formidable algebra; however, symbolic mathematics packages considerably expedite the computation. The authors demonstrate that non-linear effects can reveal features of the underlying dynamics, such as reaction stoichiometry, not available in linearised theory. Furthermore, in models that exhibit noise-induced oscillations, non-linear corrections result in a shift in the base frequency along with the appearance of a secondary harmonic.
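
As an illustration of the linearised regime described above (generic symbols, not taken from the paper): for a single species fluctuating about a stable steady state, the linear-noise approximation gives an exponentially relaxing stationary time-covariance,

```latex
C(\tau) \;=\; \langle \delta n(t)\, \delta n(t+\tau) \rangle \;=\; \sigma^{2}\, e^{-\gamma |\tau|},
```

where \(\gamma\) is the linear relaxation rate and \(\sigma^{2}\) the stationary variance. The non-linear corrections discussed in the abstract modify this simple exponential form and, in oscillatory systems, add harmonic content as molecule numbers decrease.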

  7. Online, automatic, ionospheric maps: IRI-PLAS-MAP

    NASA Astrophysics Data System (ADS)

    Arikan, F.; Sezen, U.; Gulyaeva, T. L.; Cilibas, O.

    2015-04-01

    Global and regional behavior of the ionosphere is an important component of space weather. The peak height and critical frequency of the ionospheric layer of maximum ionization, namely hmF2 and foF2, and the total number of electrons on a ray path, the Total Electron Content (TEC), are the most investigated and monitored values of the ionosphere for capturing and observing ionospheric variability. Typically, ionospheric models such as the International Reference Ionosphere (IRI) can provide the electron density profile, critical parameters of ionospheric layers, and ionospheric electron content for a given location, date, and time. Yet the IRI model is limited to only the foF2 STORM option in reflecting the dynamics of ionospheric/plasmaspheric/geomagnetic storms. Global Ionospheric Maps (GIM) are provided by IGS analysis centers for the global TEC distribution estimated from ground-based GPS stations, which can capture the actual dynamics of the ionosphere and plasmasphere, but this service is not available for other ionospheric observables. In this study, a unique and original space weather service is introduced as IRI-PLAS-MAP from http://www.ionolab.org

  8. Process modeling of an advanced NH₃ abatement and recycling technology in the ammonia-based CO₂ capture process.

    PubMed

    Li, Kangkang; Yu, Hai; Tade, Moses; Feron, Paul; Yu, Jingwen; Wang, Shujuan

    2014-06-17

    An advanced NH3 abatement and recycling process that makes great use of the waste heat in flue gas was proposed to solve the problems of ammonia slip, NH3 makeup, and flue gas cooling in the ammonia-based CO2 capture process. The rigorous rate-based model, RateFrac in Aspen Plus, was thermodynamically and kinetically validated against experimental data from the open literature and from CSIRO pilot trials at Munmorah Power Station, Australia, respectively. After a thorough sensitivity analysis and process improvement, the NH3 recycling efficiency reached as high as 99.87%, and the NH3 exhaust concentration was only 15.4 ppmv. Most importantly, the energy consumption of the NH3 abatement and recycling system was only 59.34 kJ/kg CO2 of electricity. The evaluation of mass balance and temperature stability shows that this NH3 recovery process is technically effective and feasible, and therefore a promising prospect for industrial application.

  9. Design principles of nuclear receptor signaling: how complex networking improves signal transduction

    PubMed Central

    Kolodkin, Alexey N; Bruggeman, Frank J; Plant, Nick; Moné, Martijn J; Bakker, Barbara M; Campbell, Moray J; van Leeuwen, Johannes P T M; Carlberg, Carsten; Snoep, Jacky L; Westerhoff, Hans V

    2010-01-01

    The topology of nuclear receptor (NR) signaling is captured in a systems biological graphical notation. This enables us to identify a number of 'design' aspects of the topology of these networks that might appear unnecessarily complex or even functionally paradoxical. In realistic kinetic models of increasing complexity, calculations show how these features correspond to potentially important design principles, e.g.: (i) cytosolic 'nuclear' receptor may shuttle signal molecules to the nucleus, (ii) the active export of NRs may ensure that there is sufficient receptor protein to capture ligand at the cytoplasmic membrane, (iii) a design of three conveyor belts dissipating GTP free energy greatly aids the response, (iv) the active export of importins may prevent sequestration of NRs by importins in the nucleus, and (v) the unspecific nature of the nuclear pore may ensure signal-flux robustness. In addition, the models developed are suitable for implementation in specific cases of NR-mediated signaling, to predict individual receptor functions and differential sensitivity toward physiological and pharmacological ligands. PMID:21179018

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Clementz, Stefan, E-mail: emb@kth.se, E-mail: scl@kth.se

    Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei, as well as the case where the capture process is considered to be a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured number of particles is competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.
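
The Poisson picture of the capture process can be illustrated with a toy simulation. The rates, units, and neglect of annihilation below are all assumptions made for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical capture rates (arbitrary units) for dark matter and
# anti-dark matter, taken to differ because their scattering cross
# sections on solar nuclei are assumed unequal.
rate_dm, rate_anti = 100.0, 80.0
t = 1000.0  # elapsed time in the same arbitrary units (annihilation neglected)

# Model capture as two independent Poisson processes, as in the abstract.
n_dm = rng.poisson(rate_dm * t)
n_anti = rng.poisson(rate_anti * t)

# Fractional particle/anti-particle asymmetry among captured dark matter.
asymmetry = (n_dm - n_anti) / (n_dm + n_anti)
```

With unequal rates, the expected asymmetry (rate_dm - rate_anti)/(rate_dm + rate_anti) dominates the Poisson fluctuations once the captured numbers are large.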

  11. 77 FR 4059 - Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof; Receipt...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ... Images, and Components Thereof; Receipt of Complaint; Solicitation of Comments Relating to the Public... Devices for Capturing and Transmitting Images, and Components Thereof, DN 2869; the Commission is... importation of certain electronic devices for capturing and transmitting images, and components thereof. The...

  12. Combining multistate capture-recapture data with tag recoveries to estimate demographic parameters

    USGS Publications Warehouse

    Kendall, W.L.; Conn, P.B.; Hines, J.E.

    2006-01-01

    Matrix population models that allow an animal to occupy more than one state over time are important tools for population and evolutionary ecologists. Definition of state can vary, including location for metapopulation models and breeding state for life history models. For populations whose members can be marked and subsequently re-encountered, multistate mark-recapture models are available to estimate the survival and transition probabilities needed to construct population models. Multistate models have proved extremely useful in this context, but they often require a substantial amount of data and restrict estimation of transition probabilities to those areas or states subjected to formal sampling effort. At the same time, for many species, there are considerable tag recovery data provided by the public that could be modeled in order to increase precision and to extend inference to a greater number of areas or states. Here we present a statistical model for combining multistate capture-recapture data (e.g., from a breeding ground study) with multistate tag recovery data (e.g., from wintering grounds). We use this method to analyze data from a study of Canada Geese (Branta canadensis) in the Atlantic Flyway of North America. Our analysis produced marginal improvement in precision, due to relatively few recoveries, but we demonstrate how precision could be further improved with increases in the probability that a retrieved tag is reported.

  13. Health literacy and public health: a systematic review and integration of definitions and models.

    PubMed

    Sørensen, Kristine; Van den Broucke, Stephan; Fullam, James; Doyle, Gerardine; Pelikan, Jürgen; Slonska, Zofia; Brand, Helmut

    2012-01-25

    Health literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy. A systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model. The review resulted in 17 definitions of health literacy and 12 conceptual models. Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion setting, respectively. Based upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings.

  14. Constraining Centennial-Scale Ecosystem-Climate Interactions with a Pre-colonial Forest Reconstruction across the Upper Midwest and Northeastern United States

    NASA Astrophysics Data System (ADS)

    Matthes, J. H.; Dietze, M.; Fox, A. M.; Goring, S. J.; McLachlan, J. S.; Moore, D. J.; Poulter, B.; Quaife, T. L.; Schaefer, K. M.; Steinkamp, J.; Williams, J. W.

    2014-12-01

    Interactions between ecological systems and the atmosphere are the result of dynamic processes with system memories that persist from seconds to centuries. Adequately capturing long-term biosphere-atmosphere exchange within earth system models (ESMs) requires an accurate representation of changes in plant functional types (PFTs) through time and space, particularly at timescales associated with ecological succession. However, most model parameterization and development have occurred using datasets that span less than a decade. We tested the ability of ESMs to capture the ecological dynamics observed in paleoecological and historical data spanning the last millennium. Focusing on an area from the Upper Midwest to New England, we examined differences in the magnitude and spatial pattern of PFT distributions and ecotones between historic datasets and the CMIP5 inter-comparison project's large-scale ESMs. We then conducted a 1000-year model inter-comparison using six state-of-the-art biosphere models at sites that bridged regional temperature and precipitation gradients. The distribution of ecosystem characteristics in modeled climate space reveals widely disparate relationships between modeled climate and vegetation that led to large differences in long-term biosphere-atmosphere fluxes for this region. Model simulations revealed that both the interaction between climate and vegetation and the representation of ecosystem dynamics within models were important controls on biosphere-atmosphere exchange.

  15. Standardised animal models of host microbial mutualism

    PubMed Central

    Macpherson, A J; McCoy, K D

    2015-01-01

    An appreciation of the importance of interactions between microbes and multicellular organisms is currently driving research in biology and biomedicine. Many human diseases involve interactions between the host and the microbiota, so investigating the mechanisms involved is important for human health. Although microbial ecology measurements capture considerable diversity of the communities between individuals, this diversity is highly problematic for reproducible experimental animal models that seek to establish the mechanistic basis for interactions within the overall host-microbial superorganism. Conflicting experimental results may be explained away through unknown differences in the microbiota composition between vivaria or between the microenvironment of different isolated cages. In this position paper, we propose standardised criteria for stabilised and defined experimental animal microbiotas to generate reproducible models of human disease that are suitable for systematic experimentation and are reproducible across different institutions. PMID:25492472

  16. How well do satellite observations and models capture diurnal variation in aerosols over the Korean Peninsula?

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Xian, P.; Campbell, J. R.

    2016-12-01

    Aerosol sources, sinks, and transport processes have important variations over the diurnal cycle. Advances in geostationary satellite observation have made it possible to retrieve aerosol properties over a larger fraction of the diurnal cycle in many areas. However, the conditions for retrieval of aerosol from space also have systematic diurnal variation, which must be considered when interpreting satellite data. We used surface PM2.5 observations from the Korean National Institute for Environmental Research, together with the dense network of AERONET sun photometers deployed in Korea for the KORUS-AQ mission in spring 2016, to examine diurnal variations in aerosol conditions and quantify the effect of systematic diurnal processes on daily integrated aerosol quantities of forcing and PM2.5 24-hour exposure. Time-resolved observations of aerosols from in situ data were compared to polar and geostationary satellite observations to evaluate these questions: 1) How well is diurnal variation observed in situ captured by satellite products? 2) Do the satellite products show evidence of systematic biases related to diurnally varying observing conditions? 3) What is the implication of diurnal variation for aerosol forcing estimates based on observations near solar noon? The diurnal variation diagnosed from observations was also compared to the output of the Navy Aerosol Analysis and Prediction System (NAAPS), to examine the ability of this model to capture aerosol diurnal variation. Finally, we discuss the implications of the observed diurnal variation for assimilation of aerosol observations into forecast models.

  17. Investigations of Flow Over a Hemisphere Using Numerical Simulations (Postprint)

    DTIC Science & Technology

    2015-06-22

    ranging from missile defense, remote sensing, and imaging. An important aspect of these applications is determining the effective beam-on-target...Stokes (URANS), detached eddy simulation (DES), and hybrid RANS/LES. The numerical results were compared with the experiment conducted at Auburn...turret. Using the DES and hybrid RANS/LES turbulence models, Loci-Chem was able to capture the unsteady flow structures, such as the shear layer

  18. Investigating the effects of forest structure on the small mammal community in frequent-fire coniferous forests using capture-recapture models for stratified populations

    Treesearch

    Rahel Sollmann; Angela M. White; Beth Gardner; Patricia N. Manley

    2015-01-01

    Small mammals comprise an important component of forest vertebrate communities. Our understanding of how small mammals use forested habitat has relied heavily on studies in forest systems not naturally prone to frequent disturbances. Small mammal populations that evolved in frequent-fire forests, however, may be less restricted to specific habitat conditions due to the...

  19. Work-Based Learning: A Practical Approach for Learning to Work and Working to Learn. A Case Study on Decision-Makers' Professional Development in Iran

    ERIC Educational Resources Information Center

    Arani, Mohammad Reza Sarkar; Alagamandan, Jafar; Tourani, Heidar

    2004-01-01

    The work-based learning model of human resource development has captured a great deal of attention and has gained increasing importance in higher education in recent years. Work-based learning is a powerful phenomenon that attempts to help policy-makers, managers and curriculum developers improve the quality of the decision and organizational…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Couture, Aaron Joseph

    This report documents aspects of direct and indirect neutron capture. The importance of neutron capture rates and methods to determine them are presented. The following conclusions are drawn: direct neutron capture measurements remain a backbone of experimental study; work is being done to take increased advantage of indirect methods for neutron capture; both instrumentation and facilities are making new measurements possible; more work is needed on the nuclear theory side to understand what is needed furthest from stability.

  1. Consumer Views: Importance of Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, Mark

    This presentation includes data captured by the National Renewable Energy Laboratory (NREL) to support the U.S. Department of Energy's Vehicle Technologies Office (VTO) research efforts. The data capture consumer views on the importance of fuel economy amongst other vehicle attributes and views on which alternative fuel types would be the best and worst replacements for gasoline.

  2. Effects of heat exchanger tubes on hydrodynamics and CO 2 capture of a sorbent-based fluidized bed reactor

    DOE PAGES

    Lai, Canhai; Xu, Zhijie; Li, Tingwen; ...

    2017-08-05

    In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on the detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered sub-grid models to capture the effect of the unresolved details in the coarser mesh for simulations with reasonable accuracy and manageable computational effort. Previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical tubes) on the adsorber's hydrodynamics and CO2 capture performance are then examined. A one-dimensional three-region process model is briefly introduced for comparison purposes. The CFD model matches the process model reasonably well while providing additional information about the flow field that is not available from the process model.

  3. Spectroscopic Analyses of Neutron Capture Elements in Open Clusters

    NASA Astrophysics Data System (ADS)

    O'Connell, Julia E.

    The evolution of elements as a function of age throughout the Milky Way disk provides strong constraints for galaxy evolution models and on star formation epochs. In an effort to provide such constraints, we conducted an investigation into r- and s-process elemental abundances for a large sample of open clusters as part of an optical follow-up to the SDSS-III/APOGEE-1 near-infrared survey. To obtain data for neutron-capture abundance analysis, we conducted a long-term observing campaign spanning three years (2013-2016) using the McDonald Observatory Otto Struve 2.1-meter telescope and Sandiford Cass Echelle Spectrograph (SES, R = λ/Δλ ≈ 60,000). The SES provides a wavelength range of ˜1400 Å, making it uniquely suited to investigate a number of other important chemical abundances as well as the neutron-capture elements. For this study, we derive abundances for 18 elements covering four nucleosynthetic families (light, iron-peak, neutron-capture, and alpha elements) for ˜30 open clusters within 6 kpc of the Sun with ages ranging from ˜80 Myr to ˜10 Gyr. Both equivalent width (EW) measurements and spectral synthesis methods were employed to derive abundances for all elements. Initial estimates for model stellar atmospheres (effective temperature and surface gravity) were provided by the APOGEE data set, and then re-derived for our optical spectra by removing abundance trends as a function of excitation potential and reduced width log(EW/λ). With the exception of Ba II and Zr I, abundance analyses for all neutron-capture elements were performed by generating synthetic spectra from the new stellar parameters. In order to remove molecular contamination or blending from nearby atomic features, the synthetic spectra were modeled by a best-fit Gaussian to the observed data. Nd II shows a slight enhancement in all cluster stars, while other neutron-capture elements follow solar abundance trends.
Ba II shows a large cluster-to-cluster abundance spread, consistent with other open cluster abundance studies. From log(Age) ˜8.5, this large spread as a function of age appears to replicate the findings of an earlier, much-debated study by Orazi et al. (2009), which found a linear trend of decreasing barium abundance with increasing age.

  4. Re-evaluating neonatal-age models for ungulates: Does model choice affect survival estimates?

    USGS Publications Warehouse

    Grovenburg, Troy W.; Monteith, Kevin L.; Jacques, Christopher N.; Klaver, Robert W.; DePerno, Christopher S.; Brinkman, Todd J.; Monteith, Kyle B.; Gilbert, Sophie L.; Smith, Joshua B.; Bleich, Vernon C.; Swanson, Christopher C.; Jenks, Jonathan A.

    2014-01-01

    New-hoof growth is regarded as the most reliable metric for predicting age of newborn ungulates, but variation in estimated age among hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation on survival estimates. During 2001–2009, we captured and radiocollared 174 newborn (≤24-hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown given the potential inaccuracies among equations used to estimate age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. 
Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i.e., weekly versus daily) for estimating survival.

  5. Simulating the impact of glaciations on continental groundwater flow systems: 2. Model application to the Wisconsinian glaciation over the Canadian landscape

    NASA Astrophysics Data System (ADS)

    Lemieux, J.-M.; Sudicky, E. A.; Peltier, W. R.; Tarasov, L.

    2008-09-01

    A 3-D groundwater flow and brine transport numerical model of the entire Canadian landscape, down to a depth of 10 km, is constructed in order to capture the impacts of the Wisconsinian glaciation on the continental groundwater flow system. The numerical development of the model is presented in the companion paper of Lemieux et al. (2008b). Although the scale of the model prevents the use of a detailed geological model, commonly occurring geological materials that exhibit relatively consistent hydrogeological properties over the continent justify the simplifications while still allowing the capture of large-scale flow system trends. The model includes key processes pertaining to coupled groundwater flow and glaciation modeling, such as density-dependent (i.e., brine) flow, hydromechanical loading, subglacial infiltration, isostasy, and permafrost development. The surface boundary conditions are specified with the results of a glacial systems model. The significant impact of the ice sheet on groundwater flow is evident from increases in the hydraulic head values below the ice sheet of as much as 3000 m, down to a depth of 1.5 km into the subsurface. Results also indicate that the groundwater flow system after glaciation did not fully revert to its initial condition and that it is still recovering from the glacial perturbation. This suggests that the current groundwater flow system cannot be interpreted solely on the basis of present-day boundary conditions, and it is likely that several thousands of years of additional equilibration time will be necessary for the system to reach a new quasi-steady state. Finally, we find permafrost to have a large impact on the rate of dissipation of the high hydraulic heads that build at depth, and capturing its accurate distribution is important to explain the current hydraulic head distribution across the Canadian landscape.

  6. Comparison of hoop-net trapping and visual surveys to monitor abundance of the Rio Grande cooter (Pseudemys gorzugi).

    PubMed

    Mali, Ivana; Duarte, Adam; Forstner, Michael R J

    2018-01-01

    Abundance estimates play an important part in the regulatory and conservation decision-making process. It is important to correct monitoring data for imperfect detection when using these data to track spatial and temporal variation in abundance, especially in the case of rare and elusive species. This paper presents the first attempt to estimate abundance of the Rio Grande cooter (Pseudemys gorzugi) while explicitly considering the detection process. Specifically, in 2016 we monitored this rare species at two sites along the Black River, New Mexico via traditional baited hoop-net traps and less invasive visual surveys to evaluate the efficacy of these two sampling designs. We fitted the Huggins closed-capture estimator to estimate capture probabilities using the trap data and distance sampling models to estimate detection probabilities using the visual survey data. We found that only the visual survey with the highest number of observed turtles resulted in similar abundance estimates to those estimated using the trap data. However, the estimates of abundance from the remaining visual survey data were highly variable and often underestimated abundance relative to the estimates from the trap data. We suspect this pattern is related to changes in the basking behavior of the species and, thus, the availability of turtles to be detected even though all visual surveys were conducted when environmental conditions were similar. Regardless, we found that riverine habitat conditions limited our ability to properly conduct visual surveys at one site. Collectively, this suggests visual surveys may not be an effective sample design for this species in this river system. When analyzing the trap data, we found capture probabilities to be highly variable across sites and between age classes and that recapture probabilities were much lower than initial capture probabilities, highlighting the importance of accounting for detectability when monitoring this species.
Although baited hoop-net traps seem to be an effective sampling design, it is important to note that this method required a relatively high trap effort to reliably estimate abundance. This information will be useful when developing a larger-scale, long-term monitoring program for this species of concern.
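
The core idea of correcting counts for imperfect detection can be shown with a much simpler closed-population estimator than the Huggins model used in the study. Below is the Chapman-corrected Lincoln-Petersen estimator for two trapping occasions, with fabricated counts for illustration:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman-corrected Lincoln-Petersen abundance estimator for a closed
    population: n1 animals marked on occasion 1, n2 captured on occasion 2,
    m2 of those already marked. Returns the estimated population size."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical hoop-net counts over two trapping occasions.
n_hat = chapman_estimate(n1=45, n2=38, m2=12)  # -> 137.0
```

The Huggins approach generalizes this logic by modeling individual capture probabilities (e.g., as functions of site or age class) and conditioning on animals captured at least once.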

  7. Contingent capture and inhibition of return: a comparison of mechanisms.

    PubMed

    Prinzmetal, William; Taylor, Jordan A; Myers, Loretta Barry; Nguyen-Espino, Jacqueline

    2011-09-01

    We investigated the cause(s) of two effects associated with involuntary attention in the spatial cueing task: contingent capture and inhibition of return (IOR). Previously, we found that there were two mechanisms of involuntary attention in this task: (1) a (serial) search mechanism that predicts a larger cueing effect in reaction time with more display locations and (2) a decision (threshold) mechanism that predicts a smaller cueing effect with more display locations (Prinzmetal et al. 2010). In the present study, contingent capture and IOR had completely different patterns of results when we manipulated the number of display locations and the presence of distractors. Contingent capture was best described by a search model, whereas the inhibition of return was best described by a decision model. Furthermore, we fit a linear ballistic accumulator model to the results and IOR was accounted for by a change of threshold, whereas the results from contingent capture experiments could not be fit with a change of threshold and were better fit by a search model.

  8. Systematics of capture and fusion dynamics in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui

    2017-03-01

    We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to effectively take into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections together with experimental cross sections of 220 reaction systems with 182 ⩽ ZPZT ⩽ 1640 are tabulated. The results show that the ECC model together with the empirical formulas for the parameters of the barrier distribution works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
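    The ECC recipe of folding a single-barrier cross section with a barrier distribution can be sketched as follows. The asymmetric Gaussian form comes from the abstract; the classical sharp-cutoff expression σ(E, B) = πR²(1 − B/E) for E > B and all parameter values (centroid, widths, radius) are illustrative assumptions, not the paper's empirical formulas:

```python
import numpy as np

def barrier_distribution(B, Bm, w_left, w_right):
    """Asymmetric Gaussian: different widths below and above the centroid Bm."""
    w = np.where(B < Bm, w_left, w_right)
    return np.exp(-((B - Bm) ** 2) / (2.0 * w ** 2))

def capture_cross_section(E, Bm=85.0, w_left=2.0, w_right=4.0, R=11.0):
    """Fold the classical above-barrier cross section
    sigma(E, B) = pi R^2 (1 - B/E) for E > B (0 otherwise)
    over the normalised barrier distribution D(B).  Units: fm^2."""
    B = np.linspace(Bm - 6.0 * w_left, Bm + 6.0 * w_right, 4001)
    dB = B[1] - B[0]
    D = barrier_distribution(B, Bm, w_left, w_right)
    D /= D.sum() * dB                         # normalise to unit area
    sigma = np.where(E > B, np.pi * R ** 2 * (1.0 - B / E), 0.0)
    return float((D * sigma).sum() * dB)
```

    Around E ≈ Bm the folded cross section rises smoothly rather than switching on sharply, which is the sub-barrier enhancement the barrier distribution is meant to capture.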

  9. Multi-model ensemble hydrological simulation using a BP Neural Network for the upper Yalongjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia

    2018-06-01

    Hydrological models are important and effective tools for representing complex hydrological processes. Different models have different strengths in capturing the various aspects of hydrological processes. Relying on a single model usually leads to simulation uncertainties. Ensemble approaches, based on multi-model hydrological simulations, can improve application performance over single models. In this study, the upper Yalongjiang River Basin was selected as a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and used for independent simulations with the same input and initial values. Then, the back-propagation (BP) neural network method was employed to combine the results from the three models. The results show that the BP ensemble simulation is more accurate than the single-model simulations.
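    The combination step can be sketched with a minimal back-propagation network in plain NumPy. The synthetic "observed" series and the three imperfect "model" simulations below are invented stand-ins for the SWAT, VIC and BTOPMC outputs; the network simply learns a mapping from the three simulations to the observation:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic observed discharge and three imperfect model simulations
t = np.linspace(0.0, 4.0 * np.pi, 200)
obs = 5.0 + 3.0 * np.sin(t)
sims = np.stack([1.2 * obs + 0.5,                       # biased high
                 0.8 * obs - 0.3,                       # biased low
                 obs + rng.normal(0.0, 0.5, t.size)],   # unbiased but noisy
                axis=1)

# standardise inputs so the tanh units stay in their responsive range
X = (sims - sims.mean(axis=0)) / sims.std(axis=0)

# one hidden layer: 3 inputs -> 4 tanh units -> 1 linear output
W1 = rng.normal(0.0, 0.5, (3, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

lr, n = 0.05, len(obs)
for _ in range(3000):                        # full-batch gradient descent
    h, pred = forward(X)
    err = pred - obs                         # n * d(0.5*MSE)/d(pred)
    gW2 = (h.T @ err[:, None]) / n           # output-layer gradients
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1.0 - h ** 2)   # back-propagate through tanh
    gW1 = (X.T @ dh) / n
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, ensemble = forward(X)
mse = lambda s: float(np.mean((s - obs) ** 2))
```

    With this seeded toy data the trained ensemble tracks the observation more closely than either biased simulation, which is the kind of gain the paper reports for the BP combination of the three hydrological models.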

  10. High-frequency health data and spline functions.

    PubMed

    Martín-Rodríguez, Gloria; Murillo-Fort, Carlos

    2005-03-30

    Seasonal variations are highly relevant for health service organization. In general, short-run movements of medical magnitudes are important features that managers in this field need for making adequate decisions. Thus, the analysis of the seasonal pattern in high-frequency health data is an appealing task. The aim of this paper is to propose procedures that allow the analysis of the seasonal component in this kind of data by means of spline functions embedded in a structural model. In the proposed method, useful adaptations of the traditional spline formulation are developed, and the resulting procedures are capable of capturing periodic variations, whether deterministic or stochastic, in a parsimonious way. Finally, these methodological tools are applied to a daily series of emergency service demand in order to capture simultaneous seasonal variations with different periods.
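    A minimal illustration of a periodic seasonal component fitted with splines: a periodic cubic spline through weekday means of a daily demand series. This is a deterministic simplification, not the paper's structural-model embedding, and the demand numbers are invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)

# synthetic daily emergency-service demand with a weekly pattern (Mon..Sun)
weekly = np.array([100.0, 95.0, 92.0, 90.0, 96.0, 120.0, 130.0])
n_weeks = 20
y = np.tile(weekly, n_weeks) + rng.normal(0.0, 2.0, 7 * n_weeks)
dow = np.arange(y.size) % 7

# weekday means serve as spline knots; repeat day 0 to close the period
means = np.array([y[dow == d].mean() for d in range(7)])
spline = CubicSpline(np.arange(8.0), np.append(means, means[0]),
                     bc_type='periodic')

# smooth periodic seasonal component, evaluable at fractional days too
seasonal = spline(dow.astype(float))
```

    The `bc_type='periodic'` condition forces matching values and derivatives at the period boundary, giving the parsimonious smooth cycle the abstract describes; a stochastic seasonal would additionally let the knot values evolve over time within a state-space model.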

  11. Comparison of modelled runoff with observed proglacial discharge across the western margin of the Greenland ice sheet

    NASA Astrophysics Data System (ADS)

    Moustafa, S.; Rennermalm, A.; van As, D.; Overeem, I.; Tedesco, M.; Mote, T. L.; Koenig, L.; Smith, L. C.; Hagedorn, B.; Sletten, R. S.; Mikkelsen, A. B.; Hasholt, B.; Hall, D. K.; Fettweis, X.; Pitcher, L. H.; Hubbard, A.

    2017-12-01

    Greenland ice sheet surface ablation now dominates its total mass loss contributions to sea-level rise. Despite the increasing importance of Greenland's sea-level contribution, a quantitative inter-comparison between modeled and measured melt, runoff and discharge across multiple drainage basins is conspicuously lacking. Here we investigate the accuracy of model discharge estimates from the Modèle Atmosphérique Régional (MAR v3.5.2) regional climate model by comparison with in situ proglacial river discharge measurements at three West Greenland drainage basins - North River (Thule), Watson River (Kangerlussuaq), and Naujat Kuat River (Nuuk). At each target catchment, we: 1) determine optimal drainage basin delineations; 2) assess primary drivers of melt; 3) evaluate MAR at daily, 5-, 10- and 20-day time scales; and 4) identify potential sources for model-observation discrepancies. Our results reveal that MAR resolves daily discharge variability poorly in the Nuuk and Thule basins (r2 = 0.4-0.5), but does capture variability over 5-, 10-, and 20-day means (r2 > 0.7). Model agreement with river flow data, though, is reduced during periods of peak discharge, particularly for the exceptional melt and discharge events of July 2012. Daily discharge is best captured by MAR across the Watson River basin, whilst there is lower correspondence between modeled and observed discharge at the Thule and Naujat Kuat River basins. We link the main sources of model error to an underestimation of cloud cover, an overestimation of surface albedo, and an apparent warm bias in near-surface air temperatures. For future inter-comparison, we recommend using observations from catchments that have a self-contained and well-defined drainage area and an accurate discharge record over multiple years coincident with a reliable automatic weather station record. 
Our study highlights the importance of improving MAR modeled surface albedo, cloud cover representation, and delay functions to reduce model error and to improve prediction of Greenland's future runoff contribution to global sea level rise.
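    The daily-versus-multi-day contrast in the abstract has a simple statistical core: averaging shrinks day-scale error while preserving the slower seasonal signal, so r² rises with window length. A toy illustration with invented discharge series (not the MAR or gauge data):

```python
import numpy as np

def r_squared(sim, obs):
    """Coefficient of determination of the simulation against observations."""
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def block_mean(x, n):
    """Non-overlapping n-day means (trailing partial block dropped)."""
    m = x.size // n
    return x[:m * n].reshape(m, n).mean(axis=1)

rng = np.random.default_rng(2)
days = np.arange(120)
obs = 50.0 + 30.0 * np.sin(2.0 * np.pi * days / 120.0)  # slow seasonal cycle
sim = obs + rng.normal(0.0, 15.0, days.size)            # day-scale model error

r2_daily = r_squared(sim, obs)
r2_10day = r_squared(block_mean(sim, 10), block_mean(obs, 10))
```

    Note the caveat this toy hides: averaging only helps when the daily errors are roughly unbiased, which is why the systematic albedo, cloud and temperature biases named in the abstract degrade MAR even at 20-day scales during peak melt.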

  12. An Ontology-Based Archive Information Model for the Planetary Science Community

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris

    2008-01-01

    The Planetary Data System (PDS) information model is a mature but complex model that has been used to capture over 30 years of planetary science data for the PDS archive. As the de facto information model for the planetary science data archive, it is being adopted by the International Planetary Data Alliance (IPDA) as their archive data standard. However, after seventeen years of evolutionary change, the model needs refinement. First, a formal specification is needed to explicitly capture the model in a commonly accepted data engineering notation. Second, the core and essential elements of the model need to be identified to help simplify the overall archive process. A team of PDS technical staff members has captured the PDS information model in an ontology modeling tool. Using the resulting knowledge base, work continues to identify the core elements, identify problems and issues, and then test proposed modifications to the model. The final deliverables of this work will include specifications for the next-generation PDS information model and the initial set of IPDA archive data standards. Having the information model captured in an ontology modeling tool also makes the model suitable for use by Semantic Web applications.

  13. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  14. Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; DeCroix, David; Breault, Ronald W.

    2013-07-30

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  15. Compact fission counter for DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C Y; Chyzh, A; Kwan, E

    2010-11-06

    The Detector for Advanced Neutron Capture Experiments (DANCE) consists of 160 BaF2 crystals with equal solid-angle coverage. DANCE is a 4π γ-ray calorimeter designed to study neutron-capture reactions on small quantities of radioactive and rare stable nuclei. These reactions are important for radiochemistry applications and for modeling element production in stars. A capture event is recognized by its summed γ-ray energy, which equals the reaction Q-value and is unique for a given capture reaction. For a select group of actinides, where neutron-induced fission competes favorably with neutron capture, an additional signature is needed to distinguish between fission and capture γ rays in the DANCE measurement. This can be accomplished by introducing a detector system that tags fission fragments and thus establishes a unique signature for the fission event. Once this system is implemented, one has the opportunity to study not only capture but also fission reactions. A parallel-plate avalanche counter (PPAC) has many advantages for the detection of heavy charged particles such as fission fragments, including fast timing, resistance to radiation damage, and tolerance of high counting rates. A PPAC can also be tuned to be insensitive to α particles, which is important for experiments with α-emitting actinides. A PPAC is therefore an ideal detector for experiments requiring a fast and clean trigger for fission. A PPAC with an ingenious design was fabricated in 2006 by integrating amplifiers into the target assembly. However, this counter proved unsuitable for this application because of issues with amplifier stability and with the ability to separate fission fragments from α particles. Therefore, a new design was needed. An LLNL proposal to develop a new PPAC for DANCE was funded by NA22 in FY09. 
The design goal is to minimize the mass of the counter while maintaining stable operation under extreme radioactivity and the ability to separate fission fragments from α particles. The following sections describe the design and performance of this new compact PPAC for studying neutron-induced reactions on actinides using DANCE at LANL.

  16. Data Model Management for Space Information Systems

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, Chris

    2006-01-01

    The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time-consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example, the export of the knowledge base to RDF/XML and RDFS/XML and the use of open-source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. 
This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool. We will also describe the current effort to provide interoperability with the European Space Agency (ESA)/Planetary Science Archive (PSA) which is critically dependent on a common data model.

  17. Effects of the f(R) and f(G) Gravities and the Exotic Particle on Primordial Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Kusakabe, Motohiko; Koh, Seoktae; Kim, K. S.; Cheoun, Myung-Ki; Kajino, Toshitaka; Mathews, Grant J.

    The plateau Li/H abundance of metal-poor stars is smaller than that predicted by the standard big bang nucleosynthesis (BBN) model by a factor of ~3, for the baryon density determined from Planck. This discrepancy may be caused by a non-standard cosmic thermal history or by reactions of a hypothetical particle. We consider BBN in specific modified gravity models characterized by f(R) and f(G) terms in the gravitational actions. These models have cosmic expansion rates different from that of the standard model, and the abundances of all light elements are affected. The modified gravities are constrained mainly by observational deuterium abundances. No solution is found for the Li problem because a significant modification of the expansion rate results in a large change of the D abundance. This result contrasts sharply with that of a BBN model including a long-lived negatively charged massive particle X-. The 7Be nuclide is destroyed via recombination with an X- followed by radiative proton capture. The X- particle selectively decreases the abundance of 7Be, and the primordial abundance of 7Li originating from the electron capture of 7Be is reduced. We draw an important theoretical lesson: some physical process must have operated preferentially on 7Be nuclei.

  18. Computation of rotor aerodynamic loads in forward flight using a full-span free wake analysis

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.; Wachspress, Daniel A.; Boschitsch, Alexander H.; Chua, Kiat

    1990-01-01

    The development of an advanced computational analysis of unsteady aerodynamic loads on isolated helicopter rotors in forward flight is described. The primary technical focus of the development was the implementation of a freely distorting filamentary wake model composed of curved vortex elements laid out along contours of constant vortex sheet strength in the wake. This model captures the wake generated by the full span of each rotor blade and makes possible a unified treatment of the shed and trailed vorticity in the wake. This wake model was coupled to a modal analysis of the rotor blade dynamics and a vortex lattice treatment of the aerodynamic loads to produce a comprehensive model for rotor performance and air loads in forward flight dubbed RotorCRAFT (Computation of Rotor Aerodynamics in Forward Flight). The technical background on the major components of this analysis are discussed and the correlation of predictions of performance, trim, and unsteady air loads with experimental data from several representative rotor configurations is examined. The primary conclusions of this study are that the RotorCRAFT analysis correlates well with measured loads on a variety of configurations and that application of the full span free wake model is required to capture several important features of the vibratory loading on rotor blades in forward flight.

  19. Model validation of simple-graph representations of metabolism

    PubMed Central

    Holme, Petter

    2009-01-01

    The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information (lists of chemical reactions) available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected to each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
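    The two best-performing representations are straightforward to construct from a reaction list. A toy sketch with hypothetical reactions, not a real metabolic database:

```python
# each reaction: (set of substrates, set of products)
reactions = [({"A", "B"}, {"C"}),
             ({"C"}, {"D", "E"}),
             ({"E", "F"}, {"A"})]

def substrate_product_edges(rxns):
    """Directed edges from each substrate to each product of a reaction."""
    edges = set()
    for substrates, products in rxns:
        edges |= {(s, p) for s in substrates for p in products}
    return edges

def substance_edges(rxns):
    """Undirected edges between all substances sharing a reaction."""
    edges = set()
    for substrates, products in rxns:
        members = sorted(substrates | products)
        edges |= {(a, b) for i, a in enumerate(members)
                  for b in members[i + 1:]}
    return edges
```

    For the three toy reactions this yields 6 directed substrate–product edges and 9 undirected substance edges; the substance network is denser because every pair of co-occurring compounds is linked, which is one reason the two reductions encode modularity differently.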

  20. Comparison of snow melt properties across multiple spatial scales and landscape units in interior sub-Arctic boreal Alaskan watersheds

    NASA Astrophysics Data System (ADS)

    Bennett, K. E.; Cherry, J. E.; Hiemstra, C. A.; Bolton, W. R.

    2013-12-01

    Interior sub-Arctic Alaskan snow cover is rapidly changing and requires further study for correct parameterization in physically based models. This project undertook field studies during the 2013 snow melt season to capture snow depth, snow temperature profiles, and snow cover extent to compare with observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor at four different sites underlain by discontinuous permafrost. The 2013 melt season, which turned out to be the latest snow melt period on record, was monitored using manual field measurements (SWE, snow depth data collection), iButtons to record temperature of the snow pack, GoPro cameras to capture time-lapse imagery of the snow melt, and low-level orthoimagery collected at ~1500 m using a Navion L17a plane fitted with a Nikon D3s camera. Sites were selected across a range of landscape conditions, including a north-facing black spruce hill slope, a south-facing birch forest, an open tundra site, and a high alpine meadow. Initial results from the adjacent north- and south-facing sites indicate a highly sensitive system where snow cover melts over just a few days, illustrating the importance of high-resolution temporal data capture at these locations. Field observations, iButtons and GoPro cameras show that the MODIS data captures the melt conditions at the south and the north site with accuracy (2.5% and 6.5% snow cover fraction present on date of melt, respectively), but MODIS data for the north site is less variable around the melt period, owing to open conditions and sparse tree cover. However, due to the rapid melt rate trajectory, shifting the melt date estimate by a day results in a doubling of the snow cover fraction estimate observed by MODIS. 
This information can assist in approximating uncertainty associated with remote sensing data that is being used to populate hydrologic and snow models (the Sacramento Soil Moisture Accounting model, coupled with SNOW-17, and the Variable Infiltration Capacity hydrologic model) and provide greater understanding of error and resultant model sensitivities associated with regional observations of snow cover across the sub-Arctic boreal landscape.

  1. New evidence on the tool-assisted hunting exhibited by chimpanzees (Pan troglodytes verus) in a savannah habitat at Fongoli, Sénégal.

    PubMed

    Pruetz, J D; Bertolani, P; Ontl, K Boyer; Lindshield, S; Shelley, M; Wessling, E G

    2015-04-01

    For anthropologists, meat eating by primates like chimpanzees (Pan troglodytes) warrants examination given the emphasis on hunting in human evolutionary history. As referential models, apes provide insight into the evolution of hominin hunting, given their phylogenetic relatedness and the challenges of reconstructing extinct hominin behaviour from palaeoanthropological evidence. Among chimpanzees, adult males are usually the main hunters, capturing vertebrate prey by hand. Savannah chimpanzees (P. t. verus) at Fongoli, Sénégal are the only known non-human population that systematically hunts vertebrate prey with tools, making them an important source for hypotheses of early hominin behaviour based on analogy. Here, we test the hypothesis that sex and age patterns occur in tool-assisted hunting (n=308 cases) at Fongoli and differ from those of chimpanzees elsewhere, and we compare tool-assisted hunting to the overall hunting pattern. Males accounted for 70% of all captures but hunted with tools less than expected based on their representation on hunting days. Females accounted for most tool-assisted hunting. We propose that social tolerance at Fongoli, along with the tool-assisted hunting method, permits individuals other than adult males to capture and retain control of prey, which is uncommon for chimpanzees. We assert that tool-assisted hunting could similarly have been important for early hominins.

  2. Targeting climate diversity in conservation planning to build resilience to climate change

    USGS Publications Warehouse

    Heller, Nicole E.; Kreitler, Jason R.; Ackerly, David; Weiss, Stuart; Recinos, Amanda; Branciforte, Ryan; Flint, Lorraine E.; Flint, Alan L.; Micheli, Elisabeth

    2015-01-01

    Climate change is raising challenging concerns for systematic conservation planning. Are methods based on the current spatial patterns of biodiversity effective given long-term climate change? Some conservation scientists argue that planning should focus on protecting the abiotic diversity in the landscape, which drives patterns of biological diversity, rather than focusing on the distribution of focal species, which shift in response to climate change. Climate is one important abiotic driver of biodiversity patterns, as different climates host different biological communities and genetic pools. We propose that conservation networks that capture the full range of climatic diversity in a region will improve the resilience of biotic communities to climate change compared to networks that do not. In this study we used historical and future hydro-climate projections from the high-resolution Basin Characterization Model to explore the utility of directly targeting climatic diversity in planning. Using the spatial planning tool, Marxan, we designed conservation networks to capture the diversity of climate types, at the regional and sub-regional scale, and compared them to networks we designed to capture the diversity of vegetation types. By focusing on the Conservation Lands Network (CLN) of the San Francisco Bay Area as a real-world case study, we compared the potential resilience of networks by examining two factors: the range of climate space captured, and climatic stability to 18 future climates, reflecting different emission scenarios and global climate models. We found that the climate-based network planned at the sub-regional scale captured a greater range of climate space and showed higher climatic stability than the vegetation-based and regional-scale networks. At the same time, differences among network scenarios are small relative to the variance in climate stability across global climate models. 
Across different projected futures, topographically heterogeneous areas consistently show greater climate stability than homogenous areas. The analysis suggests that utilizing high-resolution climate and hydrological data in conservation planning improves the likely resilience of biodiversity to climate change. We used these analyses to suggest new conservation priorities for the San Francisco Bay Area.

  3. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David; Agarwal, Deborah A.; Sun, Xin

    2011-09-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  4. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.; Agarwal, D.; Sun, X.

    2011-01-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  5. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
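    The curvature-based decomposition rests on the standard parametric-curve formula κ = |x'y'' − y'x''| / (x'² + y'²)^(3/2) evaluated along the spline fit of the contour. A sketch using a periodic cubic spline (a special case of the B-spline parameterization) on a synthetic circular contour, where the curvature should equal 1/R everywhere, serves as a sanity check:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# synthetic closed contour: a circle of radius R = 2 (true kappa = 0.5)
R = 2.0
theta = np.linspace(0.0, 2.0 * np.pi, 81)
x = R * np.cos(theta)
y = R * np.sin(theta)
x[-1], y[-1] = x[0], y[0]              # close the contour exactly

sx = CubicSpline(theta, x, bc_type='periodic')
sy = CubicSpline(theta, y, bc_type='periodic')

def curvature(t):
    """kappa(t) = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) along the contour."""
    dx, dy = sx(t, 1), sy(t, 1)
    ddx, ddy = sx(t, 2), sy(t, 2)
    return np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

t = np.linspace(0.0, 2.0 * np.pi, 400)
kappa = curvature(t)
```

    On a real silhouette, local maxima of κ mark candidate segmentation points (neck, armpits, crotch) used to anchor the skeleton; the circle here merely verifies the formula on a known shape.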

  6. Capture mechanism in Palaeotropical pitcher plants (Nepenthaceae) is constrained by climate

    PubMed Central

    Moran, Jonathan A.; Gray, Laura K.; Clarke, Charles; Chin, Lijin

    2013-01-01

    Background and Aims Nepenthes (Nepenthaceae, approx. 120 species) are carnivorous pitcher plants with a centre of diversity comprising the Philippines, Borneo, Sumatra and Sulawesi. Nepenthes pitchers use three main mechanisms for capturing prey: epicuticular waxes inside the pitcher; a wettable peristome (a collar-shaped structure around the opening); and viscoelastic fluid. Previous studies have provided evidence suggesting that the first mechanism may be more suited to seasonal climates, whereas the latter two might be more suited to perhumid environments. In this study, this idea was tested using climate envelope modelling. Methods A total of 94 species, comprising 1978 populations, were grouped by prey capture mechanism (large peristome, small peristome, waxy, waxless, viscoelastic, non-viscoelastic, ‘wet’ syndrome and ‘dry’ syndrome). Nineteen bioclimatic variables were used to model habitat suitability at approx. 1 km resolution for each group, using Maxent, a presence-only species distribution modelling program. Key Results Prey capture groups putatively associated with perhumid conditions (large peristome, waxless, viscoelastic and ‘wet’ syndrome) had more restricted areas of probable habitat suitability than those associated putatively with less humid conditions (small peristome, waxy, non-viscoelastic and ‘dry’ syndrome). Overall, the viscoelastic group showed the most restricted area of modelled suitable habitat. Conclusions The current study is the first to demonstrate that the prey capture mechanism in a carnivorous plant is constrained by climate. Nepenthes species employing peristome-based and viscoelastic fluid-based capture are largely restricted to perhumid regions; in contrast, the wax-based mechanism allows successful capture in both perhumid and more seasonal areas. 
Possible reasons for the maintenance of peristome-based and viscoelastic fluid-based capture mechanisms in Nepenthes are discussed in relation to the costs and benefits associated with a given prey capture strategy. PMID:23975653

  7. Capture mechanism in Palaeotropical pitcher plants (Nepenthaceae) is constrained by climate.

    PubMed

    Moran, Jonathan A; Gray, Laura K; Clarke, Charles; Chin, Lijin

    2013-11-01

    Nepenthes (Nepenthaceae, approx. 120 species) are carnivorous pitcher plants with a centre of diversity comprising the Philippines, Borneo, Sumatra and Sulawesi. Nepenthes pitchers use three main mechanisms for capturing prey: epicuticular waxes inside the pitcher; a wettable peristome (a collar-shaped structure around the opening); and viscoelastic fluid. Previous studies have provided evidence suggesting that the first mechanism may be more suited to seasonal climates, whereas the latter two might be more suited to perhumid environments. In this study, this idea was tested using climate envelope modelling. A total of 94 species, comprising 1978 populations, were grouped by prey capture mechanism (large peristome, small peristome, waxy, waxless, viscoelastic, non-viscoelastic, 'wet' syndrome and 'dry' syndrome). Nineteen bioclimatic variables were used to model habitat suitability at approx. 1 km resolution for each group, using Maxent, a presence-only species distribution modelling program. Prey capture groups putatively associated with perhumid conditions (large peristome, waxless, viscoelastic and 'wet' syndrome) had more restricted areas of probable habitat suitability than those associated putatively with less humid conditions (small peristome, waxy, non-viscoelastic and 'dry' syndrome). Overall, the viscoelastic group showed the most restricted area of modelled suitable habitat. The current study is the first to demonstrate that the prey capture mechanism in a carnivorous plant is constrained by climate. Nepenthes species employing peristome-based and viscoelastic fluid-based capture are largely restricted to perhumid regions; in contrast, the wax-based mechanism allows successful capture in both perhumid and more seasonal areas. Possible reasons for the maintenance of peristome-based and viscoelastic fluid-based capture mechanisms in Nepenthes are discussed in relation to the costs and benefits associated with a given prey capture strategy.

  8. Estimation and modeling of electrofishing capture efficiency for fishes in wadeable warmwater streams

    USGS Publications Warehouse

    Price, A.; Peterson, James T.

    2010-01-01

    Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provides estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
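
    The suggested adjustment for incomplete capture can be sketched as a simple catch-over-efficiency estimate; the efficiency values below are invented for illustration and are not the study's fitted values:

    ```python
    def adjusted_abundance(catch, capture_efficiency):
        """Estimate true abundance from a single-pass catch given a known
        capture efficiency p: N_hat = catch / p (Horvitz-Thompson-style
        adjustment)."""
        if not 0 < capture_efficiency <= 1:
            raise ValueError("capture efficiency must be in (0, 1]")
        return catch / capture_efficiency

    # Hypothetical: 12 ictalurids caught at an assumed 15% efficiency vs
    # 12 catostomids at an assumed 60% efficiency -- identical catches
    # imply very different abundances.
    print(round(adjusted_abundance(12, 0.15), 1))  # 80.0
    print(round(adjusted_abundance(12, 0.60), 1))  # 20.0
    ```

    This is why the authors stress species-specific efficiencies: using a single pooled efficiency across families would badly misrank true abundances.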

  9. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). 
Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can have. In the last section, we briefly summarize the presented material and point out trends for future development.

  10. Designing attractive models via automated identification of chaotic and oscillatory dynamical regimes.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Rose, Anna; Moon, Simon; Dallman, Margaret J; Stumpf, Michael P H

    2011-10-04

    Chaos and oscillations continue to capture the interest of both the scientific and public domains. Yet despite the importance of these qualitative features, most attempts at constructing mathematical models of such phenomena have taken an indirect, quantitative approach, for example, by fitting models to a finite number of data points. Here we develop a qualitative inference framework that allows us to both reverse-engineer and design systems exhibiting these and other dynamical behaviours by directly specifying the desired characteristics of the underlying dynamical attractor. This change in perspective, from quantitative to qualitative dynamics, provides fundamentally new insights into the properties of dynamical systems.
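
    As a toy example of automatically identifying a chaotic versus an oscillatory regime (using the logistic map rather than the authors' inference framework), one can estimate the largest Lyapunov exponent and check its sign:

    ```python
    import math

    def lyapunov_logistic(r, x0=0.2, n_transient=500, n=5000):
        """Estimate the largest Lyapunov exponent of the logistic map
        x -> r*x*(1-x). A positive exponent indicates chaos; a negative
        one indicates a fixed point or periodic oscillation."""
        x = x0
        for _ in range(n_transient):      # discard transient behaviour
            x = r * x * (1 - x)
        s = 0.0
        for _ in range(n):                # average log of the local stretching
            x = r * x * (1 - x)
            s += math.log(abs(r * (1 - 2 * x)))
        return s / n

    print(lyapunov_logistic(3.9) > 0)   # True: chaotic regime
    print(lyapunov_logistic(3.2) < 0)   # True: stable period-2 oscillation
    ```

    Scanning such a classifier over parameter space is one crude way of mapping where qualitative behaviours live, which is the design problem the paper attacks far more directly.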

  11. Incorporating heterogeneity into the transmission dynamics of a waterborne disease model.

    PubMed

    Collins, O C; Govinder, K S

    2014-09-07

    We formulate a mathematical model that captures the essential dynamics of waterborne disease transmission to study the effects of heterogeneity on the spread of the disease. The effects of heterogeneity on some important mathematical features of the model such as the basic reproduction number, type reproduction number and final outbreak size are analysed accordingly. We conduct a real-world application of this model by using it to investigate the heterogeneity in transmission in the recent cholera outbreak in Haiti. By evaluating the measure of heterogeneity between the administrative departments in Haiti, we discover a significant difference in the dynamics of the cholera outbreak between the departments. Copyright © 2014 Elsevier Ltd. All rights reserved.
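
    For a flavor of the quantities involved, the basic reproduction number of a generic SIWR (susceptible-infectious-water-recovered) waterborne-disease model, a common form in this literature though not necessarily the authors' exact formulation, can be computed in closed form. All parameter values below are hypothetical:

    ```python
    def r0_siwr(beta_i, beta_w, xi, delta, gamma, mu=0.0):
        """Basic reproduction number for a simple SIWR model with
        direct transmission beta_i, water-mediated transmission beta_w,
        shedding rate xi, pathogen decay delta, recovery gamma, and
        mortality mu (initial susceptible fraction normalized to 1).
        Next-generation-matrix result for this model family:
            R0 = (beta_i + beta_w * xi / delta) / (gamma + mu)
        """
        return (beta_i + beta_w * xi / delta) / (gamma + mu)

    # Hypothetical per-region parameters illustrating heterogeneity in the
    # water-mediated route between two administrative regions:
    print(round(r0_siwr(0.15, beta_w=0.4, xi=0.5, delta=0.7, gamma=0.25), 2))  # 1.74
    print(round(r0_siwr(0.15, beta_w=0.1, xi=0.5, delta=0.7, gamma=0.25), 2))  # 0.89
    ```

    With identical direct transmission, one region sustains an outbreak (R0 > 1) and the other does not, which is the kind of between-department heterogeneity the study quantifies.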

  12. Landmark lecture on cardiac intensive care and anaesthesia: continuum and conundrums.

    PubMed

    Laussen, Peter C

    2017-12-01

    Cardiac anesthesia and critical care provide an important continuum of care for patients with congenital heart disease. Clinicians in both areas work in complex environments in which the interactions between humans and technology are critical. Understanding our contributions to outcomes (modifiable risk) and our ability to perceive and predict an evolving clinical state (low failure-to-predict rate) are important performance metrics. Improved methods for capturing continuous physiologic signals will allow for new and interactive approaches to data visualization, and for sophisticated and iterative data modeling that will help define a patient's phenotype and response to treatment (precision physiology).

  13. Improving the Projections of Vegetation Biogeography by Integrating Climate Envelope Models and Dynamic Global Vegetation Models

    NASA Astrophysics Data System (ADS)

    Case, M. J.; Kim, J. B.

    2015-12-01

    Assessing changes in vegetation is increasingly important for conservation planning in the face of climate change. Dynamic global vegetation models (DGVMs) are important tools for assessing such changes. DGVMs have been applied at regional scales to create projections of range expansions and contractions of plant functional types. Many DGVMs use a number of algorithms to determine the biogeography of plant functional types. One such DGVM, MC2, uses a series of decision trees based on bioclimatic thresholds, while others, such as LPJ, combine emergent properties with a limited set of bioclimatic threshold-based rules. Although both approaches have been used widely, we demonstrate that these biogeography outputs perform poorly at continental scales when compared to existing potential vegetation maps. Specifically, we found that MC2's algorithm for determining leaf physiognomy is too simplistic to capture arid and semi-arid vegetation in much of the western U.S., as is its algorithm for determining the broadleaf and needleleaf mix in the Southeast. With LPJ, we found that the bioclimatic thresholds used to allow seedling establishment are too broad and fail to capture regional-scale biogeography of the plant functional types. In response, we demonstrate a new approach to determining the biogeography of plant functional types by integrating the climatic thresholds produced for individual tree species by a series of climate envelope models with the biogeography algorithms of MC2 and LPJ. Using this approach, we find that MC2 and LPJ perform considerably better when compared to potential vegetation maps.

  14. A Learning Theory Conceptual Foundation for Using Capture Technology in Teaching

    ERIC Educational Resources Information Center

    Berardi, Victor; Blundell, Greg

    2014-01-01

    Lecture capture technologies are increasingly being used by instructors, programs, and institutions to deliver online lectures and courses. This lecture capture movement is important as it increases access to education opportunities that were not possible before, it can improve efficiency, and it can increase student engagement. However, this is…

  15. Snug as a Bug: Goodness of Fit and Quality of Models.

    PubMed

    Jupiter, Daniel C

    In elucidating risk factors, or attempting to make predictions about the behavior of subjects in our biomedical studies, we often build statistical models. These models are meant to capture some aspect of reality, or some real-world process underlying the phenomena we are examining. However, no model is perfect, and it is thus important to have tools to assess how accurate models are. In this commentary, we delve into the various roles that our models can play. Then we introduce the notion of the goodness of fit of models and lay the groundwork for further study of diagnostic tests for assessing both the fidelity of our models and the statistical assumptions underlying them. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
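
    As a first taste of goodness of fit, the coefficient of determination (R-squared) measures how snugly predictions track observations; the data below are invented:

    ```python
    def r_squared(observed, predicted):
        """Coefficient of determination: the fraction of variance in the
        observed values explained by the model's predictions."""
        mean_obs = sum(observed) / len(observed)
        ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
        ss_tot = sum((o - mean_obs) ** 2 for o in observed)
        return 1 - ss_res / ss_tot

    obs = [2.0, 4.1, 5.9, 8.2]
    pred = [2.1, 4.0, 6.0, 8.0]   # a model that fits snugly
    print(round(r_squared(obs, pred), 3))  # 0.997
    ```

    A high R-squared alone does not validate a model's assumptions, which is exactly why the commentary goes on to discuss formal diagnostic tests.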

  16. A technical, economic, and environmental assessment of amine-based CO2 capture technology for power plant greenhouse gas control.

    PubMed

    Rao, Anand B; Rubin, Edward S

    2002-10-15

    Capture and sequestration of CO2 from fossil fuel power plants is gaining widespread interest as a potential method of controlling greenhouse gas emissions. Performance and cost models of an amine (MEA)-based CO2 absorption system for postcombustion flue gas applications have been developed and integrated with an existing power plant modeling framework that includes multipollutant control technologies for other regulated emissions. The integrated model has been applied to study the feasibility and cost of carbon capture and sequestration at both new and existing coal-burning power plants. The cost of carbon avoidance was shown to depend strongly on assumptions about the reference plant design, details of the CO2 capture system design, interactions with other pollution control systems, and method of CO2 storage. The CO2 avoidance cost for retrofit systems was found to be generally higher than for new plants, mainly because of the higher energy penalty resulting from less efficient heat integration as well as site-specific difficulties typically encountered in retrofit applications. For all cases, a small reduction in CO2 capture cost was afforded by the SO2 emission trading credits generated by amine-based capture systems. Efforts are underway to model a broader suite of carbon capture and sequestration technologies for more comprehensive assessments in the context of multipollutant environmental management.
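
    The cost of CO2 avoidance discussed above has a standard definition in capture-cost studies: the increase in the cost of electricity divided by the reduction in the emission rate. A sketch with hypothetical plant numbers (not values from this assessment):

    ```python
    def cost_of_co2_avoided(coe_ref, coe_capture, co2_ref, co2_capture):
        """Cost of CO2 avoided ($/tonne):
            (COE_capture - COE_ref) / (CO2_ref - CO2_capture)
        where COE is the cost of electricity in $/MWh and CO2 is the
        emission rate in tonne CO2/MWh."""
        return (coe_capture - coe_ref) / (co2_ref - co2_capture)

    # Hypothetical plant: COE rises from $50 to $80/MWh while the
    # emission rate drops from 0.80 to 0.10 t/MWh.
    print(round(cost_of_co2_avoided(50.0, 80.0, 0.80, 0.10), 1))  # 42.9
    ```

    Note that this metric differs from the cost per tonne *captured*, because the capture system's own energy penalty raises emissions per net MWh delivered.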

  17. Scientists' perspectives on consent in the context of biobanking research

    PubMed Central

    Master, Zubin; Campo-Engelstein, Lisa; Caulfield, Timothy

    2015-01-01

    Most bioethics studies have focused on capturing the views of patients and the general public on research ethics issues related to informed consent for biobanking and only a handful of studies have examined the perceptions of scientists. Capturing the opinions of scientists is important because they are intimately involved with biobanks as collectors and users of samples and health information. In this study, we performed interviews with scientists followed by qualitative analysis to capture the diversity of perspectives on informed consent. We found that the majority of scientists in our study reported their preference for a general consent approach although they do not believe there to be a consensus on consent type. Despite their overall desire for a general consent model, many reported several concerns including donors needing some form of assurance that nothing unethical will be done with their samples and information. Finally, scientists reported mixed opinions about incorporating exclusion clauses in informed consent as a means of limiting some types of contentious research as a mechanism to assure donors that their samples and information are being handled appropriately. This study is one of the first to capture the views of scientists on informed consent in biobanking. Future studies should attempt to generalize findings on the perspectives of different scientists on informed consent for biobanking. PMID:25074466

  18. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

    Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, 2) designing methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D needs to be resolved to insure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme needs to be taken into account while designing the calibration and processing methodology.

  19. Radiative capture reactions in astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brune, Carl R.; Davids, Barry

    Here, the radiative capture reactions of greatest importance in nuclear astrophysics are identified and placed in their stellar contexts. Recent experimental efforts to estimate their thermally averaged rates are surveyed.

  20. Radiative capture reactions in astrophysics

    DOE PAGES

    Brune, Carl R.; Davids, Barry

    2015-08-07

    Here, the radiative capture reactions of greatest importance in nuclear astrophysics are identified and placed in their stellar contexts. Recent experimental efforts to estimate their thermally averaged rates are surveyed.

  1. The Influence of Temperature on Time-Dependent Deformation and Failure in Granite: A Mesoscale Modeling Approach

    NASA Astrophysics Data System (ADS)

    Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick

    2017-09-01

    An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. 
We anticipate that the modeling presented herein will assist in the management and optimization of geotechnical engineering projects within granite.
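
    The qualitative trends reported (creep accelerates with temperature and differential stress, shortening time-to-failure) can be illustrated with a toy power-law/Arrhenius creep model; the parameter values are illustrative and unrelated to the paper's calibrated mesoscale model:

    ```python
    import math

    def creep_strain_rate(stress_mpa, temp_c, a=1e-12, q=60e3, n=15):
        """Toy power-law/Arrhenius creep law: rate = A * sigma^n * exp(-Q/RT).
        A, Q (J/mol), and the stress exponent n are purely illustrative."""
        gas_const = 8.314            # J/(mol K)
        temp_k = temp_c + 273.15
        return a * stress_mpa ** n * math.exp(-q / (gas_const * temp_k))

    def time_to_failure(stress_mpa, temp_c, failure_strain=0.01):
        """Time for the accumulated creep strain to reach a failure threshold."""
        return failure_strain / creep_strain_rate(stress_mpa, temp_c)

    # Higher temperature and higher stress both shorten time-to-failure:
    print(time_to_failure(150, 23) > time_to_failure(150, 90))   # True
    print(time_to_failure(150, 23) > time_to_failure(170, 23))   # True
    ```

    The mesoscale model in the paper goes much further, resolving heterogeneity and microcrack interaction, but the monotonic temperature and stress dependence it reproduces follows this same intuition.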

  2. Influence of atrial substrate on local capture induced by rapid pacing of atrial fibrillation.

    PubMed

    Rusu, Alexandru; Jacquemet, Vincent; Vesin, Jean-Marc; Virag, Nathalie

    2014-05-01

    Preliminary studies showed that the septum area was the only location allowing local capture of both atria during rapid pacing of atrial fibrillation (AF) from a single site. The present model-based study investigated the influence of atrial substrate on the ability to capture AF when pacing the septum. Three biophysical models of AF with an identical anatomy from human atria but with different AF substrates were used: (i) AF based on multiple wavelets, (ii) AF based on heterogeneities in vagal activation, (iii) AF based on heterogeneities in repolarization. A fourth anatomical model without Bachmann's bundle (BB) was also implemented. Rapid pacing was applied from the septum at pacing cycle lengths in the range of 50-100% of AF cycle length. Local capture was automatically assessed with 24 pairs of electrodes evenly distributed on the atrial surface. The results were averaged over 16 AF simulations. In the homogeneous substrate, AF capture could reach 80% of the atrial surface. Heterogeneities degraded the ability to capture during AF. In the vagal substrate, the capture tended to be more regular and the degradation of the capture was not directly related to the spatial extent of the heterogeneities. In the third substrate, heterogeneities induced wave anchorings and wavebreaks even in areas close to the pacing site, with a more dramatic effect on AF capture. Finally, BB did not significantly affect the ability to capture. Atrial fibrillation substrate had a significant effect on rapid pacing outcomes. The response to therapeutic pacing may therefore be specific to each patient.

  3. A coarse-grained model for synergistic action of multiple enzymes on cellulose

    DOE PAGES

    Asztalos, Andrea; Daniels, Marcus; Sethi, Anurag; ...

    2012-08-01

    In this study, degradation of cellulose to glucose requires the cooperative action of three classes of enzymes, collectively known as cellulases. Endoglucanases randomly bind to cellulose surfaces and generate new chain ends by hydrolyzing β-1,4-D-glycosidic bonds. Exoglucanases bind to free chain ends and hydrolyze glycosidic bonds in a processive manner, releasing cellobiose units. Then, β-glucosidases hydrolyze soluble cellobiose to glucose. Optimal synergistic action of these enzymes is essential for efficient digestion of cellulose. Experiments show that as hydrolysis proceeds and the cellulose substrate becomes more heterogeneous, the overall degradation slows down. As catalysis occurs on the surface of crystalline cellulose, several factors affect the overall hydrolysis. Therefore, spatial models of cellulose degradation must capture effects such as enzyme crowding and surface heterogeneity, which have been shown to lead to a reduction in hydrolysis rates. As a result, we present a coarse-grained stochastic model for capturing the key events associated with the enzymatic degradation of cellulose at the mesoscopic level. This functional model accounts for the mobility and action of a single cellulase enzyme as well as the synergy of multiple endo- and exo-cellulases on a cellulose surface. The quantitative description of cellulose degradation is calculated on a spatial model by including free and bound states of both endo- and exo-cellulases with explicit reactive surface terms (e.g., hydrogen bond breaking, covalent bond cleavages) and corresponding reaction rates. The dynamical evolution of the system is simulated by including physical interactions between cellulases and cellulose. In conclusion, our coarse-grained model reproduces the qualitative behavior of endoglucanases and exoglucanases by accounting for the spatial heterogeneity of the cellulose surface as well as other spatial factors such as enzyme crowding. 
Importantly, it captures the endo-exo synergism of cellulase enzyme cocktails. This model constitutes a critical step towards testing hypotheses and understanding approaches for maximizing synergy and substrate properties with a goal of cost-effective enzymatic hydrolysis.

  4. A Clustering Graph Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winlaw, Manda; De Sterck, Hans; Sanders, Geoffrey

    In very simple terms, a network can be defined as a collection of points joined together by lines. Thus, networks can be used to represent connections between entities in a wide variety of fields including engineering, science, medicine, and sociology. Many large real-world networks share a surprising number of properties, leading to a strong interest in model development research, and techniques for building synthetic networks that capture these similarities and replicate real-world graphs have been developed. Modeling these real-world networks serves two purposes. First, building models that mimic the patterns and properties of real networks helps to understand the implications of these patterns and helps determine which patterns are important. If we develop a generative process to synthesize real networks, we can also examine which growth processes are plausible and which are not. Secondly, high-quality, large-scale network data is often not available, because of economic, legal, technological, or other obstacles [7]. Thus, there are many instances where the systems of interest cannot be represented by a single exemplar network. As one example, consider the field of cybersecurity, where systems require testing across diverse threat scenarios and validation across diverse network structures. In these cases, where there is no single exemplar network, the systems must instead be modeled as a collection of networks in which the variation among them may be just as important as their common features. By developing processes to build synthetic models, so-called graph generators, we can build synthetic networks that capture both the essential features of a system and realistic variability. Then we can use such synthetic graphs to perform tasks such as simulations, analysis, and decision making. We can also use synthetic graphs to performance-test graph analysis algorithms, including clustering algorithms and anomaly detection algorithms.
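
    A minimal graph generator with planted community structure, in the spirit of (but much simpler than) the clustering generator described above, can be sketched as follows:

    ```python
    import random

    def planted_partition(n_per_block, n_blocks, p_in, p_out, seed=0):
        """Generate a synthetic graph with planted community structure:
        nodes in the same block connect with probability p_in, nodes in
        different blocks with probability p_out. p_in >> p_out yields a
        clustered graph; varying the seed yields realistic variability."""
        rng = random.Random(seed)
        n = n_per_block * n_blocks
        block = [v // n_per_block for v in range(n)]
        edges = set()
        for u in range(n):
            for v in range(u + 1, n):
                p = p_in if block[u] == block[v] else p_out
                if rng.random() < p:
                    edges.add((u, v))
        return n, edges, block

    n, edges, block = planted_partition(20, 3, p_in=0.5, p_out=0.02)
    within = sum(1 for u, v in edges if block[u] == block[v])
    print(round(within / len(edges), 2))  # most edges are intra-block
    ```

    Regenerating with different seeds gives a collection of structurally similar but non-identical graphs, which is exactly the "no single exemplar" use case described above.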

  5. Estimating above-ground biomass on mountain meadows and pastures through remote sensing

    NASA Astrophysics Data System (ADS)

    Barrachina, M.; Cristóbal, J.; Tulla, A. F.

    2015-06-01

    Extensive stock-breeding systems developed in mountain areas like the Pyrenees are crucial for local farming economies and depend largely on above-ground biomass (AGB) in the form of grass produced on meadows and pastureland. In this study, a multiple linear regression analysis technique based on in-situ biomass collection and vegetation and wetness indices derived from Landsat-5 TM data is successfully applied in a mountainous Pyrenees area to model AGB. Temporal thoroughness of the data is ensured by using a large series of images. Results of on-site AGB collection show the importance for AGB models to capture the high interannual and intraseasonal variability that results from both meteorological conditions and farming practices. AGB models yield best results at midsummer and end of summer before mowing operations by farmers, with a mean R2, RMSE and PE for 2008 and 2009 midsummer of 0.76, 95 g m-2 and 27%, respectively; and with a mean R2, RMSE and PE for 2008 and 2009 end of summer of 0.74, 128 g m-2 and 36%, respectively. Although vegetation indices are a priori more related with biomass production, wetness indices play an important role in modeling AGB, being statistically selected more frequently (more than 50%) than traditional vegetation indices such as NDVI (around 27%). This suggests that middle infrared bands are crucial descriptors of AGB. The methodology applied in this work compares favorably with previous studies in mountain areas, owing to its ability to capture natural and anthropogenic variations in AGB, which are the key to increasing AGB modeling accuracy.
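
    The multiple-linear-regression step can be sketched in pure Python via the normal equations. The index values and AGB responses below are synthetic (generated from a known linear relation) purely to show the mechanics:

    ```python
    def solve(a, b):
        """Gauss-Jordan elimination with partial pivoting for a small
        dense linear system a x = b."""
        n = len(a)
        m = [row[:] + [b[i]] for i, row in enumerate(a)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(n):
                if r != col:
                    f = m[r][col] / m[col][col]
                    m[r] = [x - f * y for x, y in zip(m[r], m[col])]
        return [m[i][n] / m[i][i] for i in range(n)]

    def fit_ols(rows, y):
        """Ordinary least squares via the normal equations (X'X) b = X'y,
        with an intercept column prepended."""
        X = [[1.0] + list(r) for r in rows]
        k = len(X[0])
        xtx = [[sum(xi[i] * xi[j] for xi in X) for j in range(k)] for i in range(k)]
        xty = [sum(xi[i] * yi for xi, yi in zip(X, y)) for i in range(k)]
        return solve(xtx, xty)

    # Synthetic plot data: AGB (g m^-2) generated as 100 + 200*NDVI + 300*wetness.
    ndvi = [0.55, 0.62, 0.70, 0.48, 0.66]
    wet = [0.30, 0.12, 0.25, 0.08, 0.18]
    agb = [300.0, 260.0, 315.0, 220.0, 286.0]
    coefs = fit_ols(list(zip(ndvi, wet)), agb)
    print([round(c, 3) for c in coefs])  # [100.0, 200.0, 300.0]
    ```

    The real analysis additionally performs stepwise selection over many candidate indices and dates, which is how the wetness indices' higher selection frequency was established.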

  6. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE PAGES

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    2016-11-08

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
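
    A toy version of the two-phase accumulation behavior described above, a single-compartment uptake/efflux balance whose import rate drops when the second phase begins, can be simulated by forward Euler; the rates and the switching rule are illustrative, not the paper's fitted model:

    ```python
    def simulate_accumulation(k_in1, k_in2, k_out, t_switch, t_end, dt=0.01):
        """Forward-Euler integration of dA/dt = k_in(t) - k_out * A,
        where the import rate steps down at t_switch (e.g., reduced
        membrane permeability entering the second phase)."""
        a, t, trace = 0.0, 0.0, []
        while t < t_end:
            k_in = k_in1 if t < t_switch else k_in2
            a += (k_in - k_out * a) * dt
            t += dt
            trace.append(a)
        return trace

    trace = simulate_accumulation(k_in1=1.0, k_in2=0.3, k_out=0.5,
                                  t_switch=5.0, t_end=20.0)
    # Rapid initial uptake toward k_in1/k_out, then relaxation to the
    # lower second-phase plateau k_in2/k_out:
    print(round(trace[-1], 2))  # 0.6
    ```

    Fitting the import/export rates and the switch time of such a model to measured time courses is, in spirit, how the paper extracts phase-specific rate estimates from data.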

  7. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  8. Site-level model intercomparison of high latitude and high altitude soil thermal dynamics in tundra and barren landscapes

    NASA Astrophysics Data System (ADS)

    Ekici, A.; Chadburn, S.; Chaudhary, N.; Hajdu, L. H.; Marmy, A.; Peng, S.; Boike, J.; Burke, E.; Friend, A. D.; Hauck, C.; Krinner, G.; Langer, M.; Miller, P. A.; Beer, C.

    2015-07-01

    Modeling soil thermal dynamics at high latitudes and altitudes requires representations of physical processes such as snow insulation, soil freezing and thawing, and subsurface conditions such as soil water/ice content and soil texture. We have compared six different land models: JSBACH, ORCHIDEE, JULES, COUP, HYBRID8 and LPJ-GUESS, at four different sites with distinct cold region landscape types, to identify the importance of physical processes in capturing observed temperature dynamics in soils. The sites include alpine, high Arctic, wet polygonal tundra and non-permafrost Arctic, thus showing how a range of models can represent distinct soil temperature regimes. For all sites, snow insulation is of major importance for estimating topsoil conditions. However, soil physics is essential for the subsoil temperature dynamics and thus the active layer thicknesses. This analysis shows that land models need more realistic surface processes, such as detailed snow dynamics and moss cover with changing thickness and wetness, along with better representations of subsoil thermal dynamics.

  9. Modeling for Battery Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

    For any battery-powered vehicles (be it unmanned aerial vehicles, small passenger aircraft, or assets in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in last few years as the understanding of underlying electrochemical mechanics has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomist. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness, due to the approximations used in developed model, and as a result of such approximations, cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and thus not suited well for online prognostic applications. 
    In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures the effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are made that retain computational efficiency for online implementation of the model. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development is presented, including model validation results and EOD prediction results.
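    The equivalent-circuit family mentioned in the abstract can be illustrated with a minimal internal-resistance discharge model. This is a generic sketch with made-up parameter values, not the electrochemistry-based NASA model the record describes:

```python
# Minimal internal-resistance equivalent-circuit discharge sketch.
# Parameter values (r_internal, voltage limits) are illustrative assumptions.

def predict_eod(capacity_ah, current_a, r_internal=0.05,
                v_full=4.2, v_cutoff=3.0, dt_s=1.0):
    """Simulate constant-current discharge; return seconds to end of discharge."""
    soc = 1.0                                   # state of charge (1.0 = full)
    t = 0.0
    while soc > 0.0:
        # Crude linear open-circuit-voltage fit versus state of charge
        ocv = v_cutoff + (v_full - v_cutoff) * soc
        if ocv - current_a * r_internal <= v_cutoff:   # terminal voltage at cutoff
            break
        soc -= current_a * dt_s / (capacity_ah * 3600.0)
        t += dt_s
    return t
```

    Even this toy model reproduces the qualitative behaviour that matters for prognostics: a larger load current pulls the terminal voltage down through the IR drop and shortens the predicted time to EOD.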

  10. Revisiting the Zingiberales: using multiplexed exon capture to resolve ancient and recent phylogenetic splits in a charismatic plant lineage

    PubMed Central

    Iles, William J.D.; Barrett, Craig F.; Smith, Selena Y.; Specht, Chelsea D.

    2016-01-01

    The Zingiberales are an iconic order of monocotyledonous plants comprising eight families with distinctive and diverse floral morphologies and representing an important ecological element of tropical and subtropical forests. While the eight families are demonstrated to be monophyletic, phylogenetic relationships among these families remain unresolved. Neither combined morphological and molecular studies nor recent attempts to resolve family relationships using sequence data from whole plastomes have resulted in a well-supported, family-level phylogenetic hypothesis of relationships. Here we approach this challenge by leveraging the complete genome of one member of the order, Musa acuminata, together with transcriptome information from each of the other seven families to design a set of nuclear loci that can be enriched from highly divergent taxa with a single array-based capture of indexed genomic DNA. A total of 494 exons from 418 nuclear genes were captured for 53 ingroup taxa. The entire plastid genome was also captured for the same 53 taxa. Of the total genes captured, 308 nuclear and 68 plastid genes were used for phylogenetic estimation. The concatenated plastid and nuclear dataset supports the position of Musaceae as sister to the remaining seven families. Moreover, the combined dataset recovers known intra- and inter-family phylogenetic relationships with generally high bootstrap support. This is a flexible and cost-effective method that gives the broader plant biology community a tool for generating phylogenomic scale sequence data in non-model systems at varying evolutionary depths. PMID:26819846

  11. Efficient data management in a large-scale epidemiology research project.

    PubMed

    Meyer, Jens; Ostrzinski, Stefan; Fredrich, Daniel; Havemann, Christoph; Krafczyk, Janina; Hoffmann, Wolfgang

    2012-09-01

    This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data, as well as "electronic Case Report Forms" (eCRFs), was developed in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of data. Data privacy is ensured by a multi-layered role/right system for access control and de-identification of identifying data. A well-defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data in the magnitude of approximately 5 terabytes without experiencing any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  12. Revisiting the Zingiberales: using multiplexed exon capture to resolve ancient and recent phylogenetic splits in a charismatic plant lineage.

    PubMed

    Sass, Chodon; Iles, William J D; Barrett, Craig F; Smith, Selena Y; Specht, Chelsea D

    2016-01-01

    The Zingiberales are an iconic order of monocotyledonous plants comprising eight families with distinctive and diverse floral morphologies and representing an important ecological element of tropical and subtropical forests. While the eight families are demonstrated to be monophyletic, phylogenetic relationships among these families remain unresolved. Neither combined morphological and molecular studies nor recent attempts to resolve family relationships using sequence data from whole plastomes have resulted in a well-supported, family-level phylogenetic hypothesis of relationships. Here we approach this challenge by leveraging the complete genome of one member of the order, Musa acuminata, together with transcriptome information from each of the other seven families to design a set of nuclear loci that can be enriched from highly divergent taxa with a single array-based capture of indexed genomic DNA. A total of 494 exons from 418 nuclear genes were captured for 53 ingroup taxa. The entire plastid genome was also captured for the same 53 taxa. Of the total genes captured, 308 nuclear and 68 plastid genes were used for phylogenetic estimation. The concatenated plastid and nuclear dataset supports the position of Musaceae as sister to the remaining seven families. Moreover, the combined dataset recovers known intra- and inter-family phylogenetic relationships with generally high bootstrap support. This is a flexible and cost-effective method that gives the broader plant biology community a tool for generating phylogenomic scale sequence data in non-model systems at varying evolutionary depths.

  13. Capture and X-ray diffraction studies of protein microcrystals in a microfluidic trap array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyubimov, Artem Y.

    A microfluidic platform has been developed for the capture and X-ray analysis of protein microcrystals, affording a means to improve the efficiency of XFEL and synchrotron experiments. X-ray free-electron lasers (XFELs) promise to enable the collection of interpretable diffraction data from samples that are refractory to data collection at synchrotron sources. At present, however, more efficient sample-delivery methods that minimize the consumption of microcrystalline material are needed to allow the application of XFEL sources to a wide range of challenging structural targets of biological importance. Here, a microfluidic chip is presented in which microcrystals can be captured at fixed, addressable points in a trap array from a small volume (<10 µl) of a pre-existing slurry grown off-chip. The device can be mounted on a standard goniostat for conducting diffraction experiments at room temperature without the need for flash-cooling. Proof-of-principle tests with a model system (hen egg-white lysozyme) demonstrated the high efficiency of the microfluidic approach for crystal harvesting, permitting the collection of sufficient data from only 265 single-crystal still images to permit determination and refinement of the structure of the protein. This work shows that microfluidic capture devices can be readily used to facilitate data collection from protein microcrystals grown in traditional laboratory formats, enabling analysis when cryopreservation is problematic or when only small numbers of crystals are available. Such microfluidic capture devices may also be useful for data collection at synchrotron sources.

  14. Realized detection and capture probabilities for giant gartersnakes (Thamnophis gigas) using modified floating aquatic funnel traps

    USGS Publications Warehouse

    Halstead, Brian J.; Skalos, Shannon M.; Casazza, Michael L.; Wylie, Glenn D.

    2015-01-01

    Detection and capture probabilities for giant gartersnakes (Thamnophis gigas) are very low, and successfully evaluating the effects of variables or experimental treatments on giant gartersnake populations will require greater detection and capture probabilities than those that had been achieved with standard trap designs. Previous research identified important trap modifications that can increase the probability of snakes entering traps and help prevent the escape of captured snakes. The purpose of this study was to quantify detection and capture probabilities obtained using the most successful modification to commercially available traps to date (2015), and examine the ability of realized detection and capture probabilities to achieve benchmark levels of precision in occupancy and capture-mark-recapture studies.

  15. A Method to Capture Macroslip at Bolted Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Ronald Neil; Heitman, Lili Anne Akin

    2015-10-01

    Relative motion at bolted connections can occur for large shock loads as the internal shear force in the bolted connection overcomes the frictional resistive force. This macroslip in a structure dissipates energy and reduces the response of the components above the bolted connection. There is a need to be able to capture macroslip behavior in a structural dynamics model. A linear model and many nonlinear models are not able to predict macroslip effectively. The proposed method to capture macroslip is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded, and the pinning effect when a bolt shank impacts a through-hole inside diameter is captured. Substructure representations of the components are included to account for component flexibility and dynamics. This method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test the adequacy of the method.

  16. A Method to Capture Macroslip at Bolted Interfaces [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Ronald Neil; Heitman, Lili Anne Akin

    2016-01-01

    Relative motion at bolted connections can occur for large shock loads as the internal shear force in the bolted connection overcomes the frictional resistive force. This macroslip in a structure dissipates energy and reduces the response of the components above the bolted connection. There is a need to be able to capture macroslip behavior in a structural dynamics model. A linear model and many nonlinear models are not able to predict macroslip effectively. The proposed method to capture macroslip is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded, and the pinning effect when a bolt shank impacts a through-hole inside diameter is captured. Substructure representations of the components are included to account for component flexibility and dynamics. This method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test the adequacy of the method.

  17. A MECHANISTIC MODEL FOR MERCURY CAPTURE WITH IN-SITU GENERATED TITANIA PARTICLES: ROLE OF WATER VAPOR

    EPA Science Inventory

    A mechanistic model to predict the capture of gas phase mercury species using in-situ generated titania nanosize particles activated by UV irradiation is developed. The model is an extension of a recently reported model [1] for photochemical reactions that accounts for the rates of...

  18. Physics of Intact Capture of Cometary Coma Dust Samples

    NASA Astrophysics Data System (ADS)

    Anderson, William

    2011-06-01

    In 1986, Tom Ahrens and I developed a simple model for hypervelocity capture in low density foams, aimed in particular at the suggestion that such techniques could be used to capture dust during flyby of an active comet nucleus. While the model was never published in printed form, it became known to many in the cometary dust sampling community. More sophisticated models have been developed since, but our original model still retains superiority for some applications and elucidates the physics of the capture process in a more intuitive way than the more recent models. The model makes use of the small value of the Hugoniot intercept typical of highly distended media to invoke analytic expressions with functional forms common to fluid dynamics. The model successfully describes the deceleration and ablation of a particle that is large enough to see the foam as a low density continuum. I will present that model, updated with improved calculations of the temperature in the shocked foam, and show its continued utility in elucidating the phenomena of hypervelocity penetration of low-density foams.
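    The continuum-drag deceleration such a model describes can be sketched in a few lines. For a sphere decelerating in a uniform low-density medium, dv/dx = -(3 Cd ρ_foam)/(8 ρ_p r) v, which integrates to an exponential velocity decay with depth. The constants below (drag coefficient, densities, stopping speed) are illustrative assumptions, not values from the 1986 model:

```python
import math

# Illustrative continuum-drag sketch of a particle slowing in low-density
# foam; all numerical parameters are assumptions, not the paper's values.

def penetration_depth(v0, radius, rho_particle, rho_foam, cd=1.0, v_stop=100.0):
    """Depth (m) at which speed drops from v0 to v_stop under quadratic drag.

    For a sphere, m = (4/3)*pi*r^3*rho_p and A = pi*r^2, so
    dv/dx = -k*v with k = 3*cd*rho_foam / (8*rho_particle*radius),
    giving v(x) = v0 * exp(-k*x).
    """
    k = 3.0 * cd * rho_foam / (8.0 * rho_particle * radius)
    return math.log(v0 / v_stop) / k
```

    With a 10 µm silicate grain (r = 5e-6 m, ρ_p ≈ 3000 kg/m³) entering 20 kg/m³ foam at 6 km/s, this predicts stopping within roughly a centimetre, consistent in order of magnitude with intact-capture track lengths.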

  19. Missing pieces to modeling the Arctic-Boreal puzzle

    NASA Astrophysics Data System (ADS)

    Fisher, Joshua B.; Hayes, Daniel J.; Schwalm, Christopher R.; Huntzinger, Deborah N.; Stofferahn, Eric; Schaefer, Kevin; Luo, Yiqi; Wullschleger, Stan D.; Goetz, Scott; Miller, Charles E.; Griffith, Peter; Chadburn, Sarah; Chatterjee, Abhishek; Ciais, Philippe; Douglas, Thomas A.; Genet, Hélène; Ito, Akihiko; Neigh, Christopher S. R.; Poulter, Benjamin; Rogers, Brendan M.; Sonnentag, Oliver; Tian, Hanqin; Wang, Weile; Xue, Yongkang; Yang, Zong-Liang; Zeng, Ning; Zhang, Zhen

    2018-02-01

    NASA has launched the decade-long Arctic-Boreal Vulnerability Experiment (ABoVE). While the initial phases focus on field and airborne data collection, early integration with modeling activities is important to benefit future modeling syntheses. We compiled feedback from ecosystem modeling teams on key data needs, which encompass carbon biogeochemistry, vegetation, permafrost, hydrology, and disturbance dynamics. A suite of variables was identified as part of this activity with a critical requirement that they are collected concurrently and representatively over space and time. Individual projects in ABoVE may not capture all these needs, and thus there is both demand and opportunity for the augmentation of field observations, and synthesis of the observations that are collected, to ensure that science questions and integrated modeling activities are successfully implemented.

  20. As-built data capture of complex piping using photogrammetry technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morray, J.P.; Ziu, C.G.

    1995-11-01

    Plant owners face an increasingly difficult and expensive task of updating drawings, both regarding the plant logic and physical layout. Through the use of photogrammetry technology, H-H spectrum has created a complete operating plant data capture service, with the result that the task of recording accurate plant configurations has become assured and economical. The technology has proven to be extremely valuable for the capture of complex piping configurations, as well as entire plant facilities, and yields accuracy within 1/4 inch. The method uses photographs and workstation technology to quickly document and compute the plant layout, with all components, regardless of size, included in the resulting model. The system has the capability to compute actual 3-D coordinates of any point based on previous triangulations, allowing for an immediate assessment of accuracy. This ensures a consistent level of accuracy, which is impossible to achieve in a manual approach. Due to the speed of the process, the approach is very important in hazardous/difficult environments such as nuclear power facilities or offshore platforms.

  1. A Design Support Framework through Dynamic Deployment of Hypothesis and Verification in the Design Process

    NASA Astrophysics Data System (ADS)

    Nomaguch, Yutaka; Fujita, Kikuo

    This paper proposes a design support framework, named DRIFT (Design Rationale Integration Framework of Three layers), which dynamically captures and manages hypothesis and verification in the design process. The core of DRIFT is a three-layered design process model of action, model operation and argumentation. This model integrates various design support tools and captures design operations performed on them. The action level captures the sequence of design operations. The model operation level captures the transition of design states, recording a design snapshot across design tools. The argumentation level captures the process of setting problems and alternatives. The linkage of the three levels makes it possible to automatically and efficiently capture and manage iterative hypothesis and verification processes through design operations over design tools. In DRIFT, such a linkage is extracted through templates of design operations, which are derived from the patterns embedded in design tools such as Design-For-X (DFX) approaches, and design tools are integrated through ontology-based representation of design concepts. An argumentation model, gIBIS (graphical Issue-Based Information System), is used for representing dependencies among problems and alternatives. A mechanism of TMS (Truth Maintenance System) is used for managing multiple hypothetical design stages. This paper also demonstrates a prototype implementation of DRIFT and its application to a simple design problem. It concludes with a discussion of some future issues.

  2. Temporal variability of local abundance, sex ratio and activity in the Sardinian chalk hill blue butterfly

    USGS Publications Warehouse

    Casula, P.; Nichols, J.D.

    2003-01-01

    When capturing and marking of individuals is possible, the application of newly developed capture-recapture models can remove several sources of bias in the estimation of population parameters such as local abundance and sex ratio. For example, observation of distorted sex ratios in counts or captures can reflect either different abundances of the sexes or different sex-specific capture probabilities, and capture-recapture models can help distinguish between these two possibilities. Robust design models and a model selection procedure based on information-theoretic methods were applied to study the local population structure of the endemic Sardinian chalk hill blue butterfly, Polyommatus coridon gennargenti. Seasonal variations of abundance, plus daily and weather-related variations of active populations of males and females were investigated. Evidence was found of protandry and male pioneering of the breeding space. Temporary emigration probability, which describes the proportion of the population not exposed to capture (e.g. absent from the study area) during the sampling process, was estimated, differed between sexes, and was related to temperature, a factor known to influence animal activity. The correlation between temporary emigration and average daily temperature suggested interpreting temporary emigration as inactivity of animals. Robust design models were used successfully to provide a detailed description of the population structure and activity in this butterfly and are recommended for studies of local abundance and animal activity in the field.
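    As a toy illustration of how marked-animal data separate abundance from capture probability, here is the classic bias-corrected Lincoln-Petersen (Chapman) estimator for a two-sample study. The paper itself uses far richer robust-design models with temporary emigration, so this is only a conceptual sketch:

```python
# Chapman's bias-corrected Lincoln-Petersen abundance estimator:
# a minimal two-sample capture-recapture sketch (the study uses
# robust-design models, which generalize this idea).

def chapman_estimate(n1, n2, m2):
    """n1 animals marked in sample 1, n2 caught in sample 2,
    m2 of which were recaptures; returns estimated abundance."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def capture_probability(n1, m2):
    """Naive per-occasion capture probability of marked animals."""
    return m2 / n1
```

    Applying the estimator separately to males and females shows how an apparently skewed sex ratio in the raw counts can be explained by different capture probabilities rather than different abundances.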

  3. Design Evaluation for Personnel, Training and Human Factors (DEPTH) Final Report.

    DTIC Science & Technology

    1998-01-17

    human activity was primarily intended to facilitate man-machine design analyses of complex systems. By importing computer aided design (CAD) data, the human figure models and analysis algorithms can help to ensure components can be seen, reached, lifted and removed by most maintainers. These simulations are also useful for logistics data capture, training, and task analysis. DEPTH was also found to be useful in obtaining task descriptions for technical

  4. Time-Filtered Navier-Stokes Approach and Emulation of Turbulence-Chemistry Interaction

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Wey, Thomas; Shih, Tsan-Hsing

    2013-01-01

    This paper describes the time-filtered Navier-Stokes approach capable of capturing unsteady flow structures important for turbulent mixing and an accompanying subgrid model directly accounting for the major processes in turbulence-chemistry interaction. They have been applied to the computation of two-phase turbulent combustion occurring in a single-element lean-direct-injection combustor. Some of the preliminary results from this computational effort are presented in this paper.

  5. Impact of Domain Analysis on Reuse Methods

    DTIC Science & Technology

    1989-11-06

    return on the investment. The potential negative effects a "bad" domain analysis has on developing systems in the domain also increases the risks of a...importance of domain analysis as part of a software reuse program. A particular goal is to assist in avoiding the potential negative effects of ad hoc or...are specification objects discovered by performing object-oriented analysis. Object-based analysis approaches thus serve to capture a model of reality

  6. Attrition-enhanced sulfur capture by limestone particles in fluidized beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saastamoinen, J.J.; Shimizu, T.

    2007-02-14

    Sulfur capture by limestone particles in fluidized beds is a well-established technology. The underlying chemical and physical phenomena of the process have been extensively studied and modeled. However, most of the studies have been focused on the relatively brief initial stage of the process, which extends from a few minutes to hours, yet the residence time of the particles in the boiler is much longer. Following the initial stage, a dense product layer will be formed on the particle surface, which decreases the rate of sulfur capture and the degree of utilization of the sorbent. Attrition can enhance sulfur capture by removing this layer. A particle model for sulfur capture has been incorporated with an attrition model. After the initial stage, the rate of sulfur capture stabilizes, so that attrition removes the surface at the same rate as diffusion and chemical reaction produces new product in a thin surface layer of a particle. An analytical solution for the conversion of particles for this regime is presented. The solution includes the effects of the attrition rate, diffusion, chemical kinetics, pressure, and SO{sub 2} concentration, relative to conversion-dependent diffusivity and the rate of chemical reaction. The particle model results in models that describe the conversion of limestone in both fly ash and bottom ash. These are incorporated with the residence time (or reactor) models to calculate the average conversion of the limestone in fly ash and bottom ash, as well as the efficiency of sulfur capture. Data from a large-scale pressurized fluidized bed are compared with the model results.

  7. A new capture fraction method to map how pumpage affects surface water flow.

    PubMed

    Leake, Stanley A; Reeves, Howard W; Dickinson, Jesse E

    2010-01-01

    All groundwater pumped is balanced by removal of water somewhere, initially from storage in the aquifer and later from capture in the form of increase in recharge and decrease in discharge. Capture that results in a loss of water in streams, rivers, and wetlands now is a concern in many parts of the United States. Hydrologists commonly use analytical and numerical approaches to study temporal variations in sources of water to wells for select points of interest. Much can be learned about coupled surface/groundwater systems, however, by looking at the spatial distribution of theoretical capture for select times of interest. Development of maps of capture requires (1) a reasonably well-constructed transient or steady state model of an aquifer with head-dependent flow boundaries representing surface water features or evapotranspiration and (2) an automated procedure to run the model repeatedly and extract results, each time with a well in a different location. This paper presents new methods for simulating and mapping capture using three-dimensional groundwater flow models and presents examples from Arizona, Oregon, and Michigan.
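    The automated procedure the abstract describes, running the model once per candidate well location and recording the resulting change in surface-water flow, can be sketched as a simple loop. `run_model` below is a stand-in for a real groundwater simulation (e.g. a MODFLOW run), not an actual API:

```python
# Sketch of the capture-mapping loop: one model run per candidate well
# location. `run_model` is a hypothetical stand-in for a real simulation
# that returns simulated surface-water flow with a well at `well_cell`.

def capture_fraction_map(grid_cells, run_model, baseline_sw_flow, pump_rate):
    """Return {cell: fraction of pumpage captured from surface water}."""
    fractions = {}
    for cell in grid_cells:
        sw_flow = run_model(well_cell=cell, rate=pump_rate)
        # Capture fraction: reduction in surface-water flow per unit pumpage
        fractions[cell] = (baseline_sw_flow - sw_flow) / pump_rate
    return fractions
```

    Contouring the resulting dictionary over the model grid yields the capture maps described in the paper: fractions near 1 indicate locations where pumping comes almost entirely at the expense of streams and wetlands.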

  8. Silica Aerogel Captures Cosmic Dust Intact

    NASA Technical Reports Server (NTRS)

    Tsou, P.

    1994-01-01

    The mesostructure of silica aerogel resembles strings of grapes, ranging in size from 10 to 100 angstroms. This fine mesostructure transmits nearly 90 percent of incident visible light, while providing sufficiently gentle dissipation of the kinetic energy of hypervelocity cosmic dust particles to permit their intact capture. We introduced silica aerogel in 1987 as a capture medium to take advantage of its low density, fine mesostructure and, most importantly, its transparency, allowing optical location of captured micron-sized particles.

  9. Innovative approach for transcriptomic analysis of obligate intracellular pathogen: selective capture of transcribed sequences of Ehrlichia ruminantium

    PubMed Central

    2009-01-01

    Background Whole genome transcriptomic analysis is a powerful approach to elucidate the molecular mechanisms controlling the pathogenesis of obligate intracellular bacteria. However, the major hurdle resides in the low quantity of prokaryotic mRNAs extracted from host cells. Our model Ehrlichia ruminantium (ER), the causative agent of heartwater, is transmitted by the tick Amblyomma variegatum. This bacterium affects wild and domestic ruminants and is present in Sub-Saharan Africa and the Caribbean islands. Because of its strictly intracellular location, which constitutes a limitation for its extensive study, the molecular mechanisms involved in its pathogenicity are still poorly understood. Results We successfully adapted the SCOTS method (Selective Capture of Transcribed Sequences) to the model Rickettsiales ER to capture mRNAs. Southern blots and RT-PCR revealed an enrichment of ER's cDNAs and a diminution of ribosomal contaminants after three rounds of capture. qRT-PCR and whole-genome ER microarray hybridizations demonstrated that the SCOTS method introduced only a limited bias on gene expression. Indeed, we confirmed the differential gene expression between poorly and highly expressed genes before and after SCOTS captures. The comparative gene expression obtained from ER microarray data, on samples before and after SCOTS at 96 hpi, was significantly correlated (R² = 0.7). Moreover, the SCOTS method is crucial for microarray analysis of ER, especially for early time points post-infection. There was low detection of transcripts for untreated samples, whereas 24% and 70.7% were revealed for SCOTS samples at 24 and 96 hpi, respectively. Conclusions We conclude that the SCOTS method is of key importance for the transcriptomic analysis of ER and can potentially be used for other Rickettsiales.
This study constitutes the first step for further gene expression analyses that will lead to a better understanding of both ER pathogenicity and the adaptation of obligate intracellular bacteria to their environment. PMID:20034374

  10. Systems Analysis of Physical Absorption of CO2 in Ionic Liquids for Pre-Combustion Carbon Capture.

    PubMed

    Zhai, Haibo; Rubin, Edward S

    2018-04-17

    This study develops an integrated technical and economic modeling framework to investigate the feasibility of ionic liquids (ILs) for precombustion carbon capture. The IL 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide is modeled as a potential physical solvent for CO2 capture at integrated gasification combined cycle (IGCC) power plants. The analysis reveals that the energy penalty of the IL-based capture system comes mainly from compression of the process and product streams and from solvent pumping, while the major capital cost components are the compressors and absorbers. On the basis of the plant-level analysis, the cost of CO2 avoided by the IL-based capture and storage system is estimated to be $63 per tonne of CO2. Technical and economic comparisons between IL- and Selexol-based capture systems at the plant level show that an IL-based system could be a feasible option for CO2 capture. Improving the CO2 solubility of ILs can simplify the capture process configuration and lower the process energy and cost penalties to further enhance the viability of this technology.
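    The "cost of CO2 avoided" metric cited in the abstract is a standard plant-level comparison: the increase in cost of electricity divided by the reduction in emissions per unit of generation. The numbers in the usage note below are illustrative, not the paper's IGCC values:

```python
# Standard cost-of-CO2-avoided metric used in plant-level capture studies.
# Inputs: cost of electricity (COE, $/MWh) and emission rate (tCO2/MWh)
# for the reference plant and the plant with capture.

def cost_of_co2_avoided(coe_ref, coe_capture, em_ref, em_capture):
    """Returns $ per tonne of CO2 avoided."""
    return (coe_capture - coe_ref) / (em_ref - em_capture)
```

    For example, with hypothetical values of $80/MWh and 0.8 tCO2/MWh for the reference plant versus $120/MWh and 0.1 tCO2/MWh with capture, the metric works out to about $57 per tonne avoided.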

  11. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
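    The core arithmetic of the abstract's approach, dividing the catch by the cumulative capture probability, can be sketched directly. Here cumulative probability over several passes is built from a per-pass probability; the per-pass value in the usage note is a made-up illustration, not a fitted value from the study:

```python
# Abundance estimated by dividing total catch by cumulative capture
# probability. In the study the probability comes from a fitted logistic
# regression; here a constant per-pass probability stands in for it.

def cumulative_capture_prob(p_per_pass, n_passes):
    """Probability a fish is caught at least once over n_passes."""
    return 1.0 - (1.0 - p_per_pass) ** n_passes

def estimate_abundance(total_catch, p_per_pass, n_passes):
    """Catch divided by cumulative capture probability."""
    return total_catch / cumulative_capture_prob(p_per_pass, n_passes)
```

    For instance, 75 fish caught over two passes with an assumed per-pass probability of 0.5 implies a cumulative probability of 0.75 and an estimated abundance of 100.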

  12. Geological Sequestration Training and Research Program in Capture and Transport: Development of the Most Economical Separation Method for CO2 Capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vahdat, Nader

    2013-09-30

    The project provided hands-on training and networking opportunities to undergraduate students in the area of carbon dioxide (CO2) capture and transport, through fundamental research study focused on advanced separation methods that can be applied to the capture of CO2 resulting from the combustion of fossil fuels for power generation. The project team’s approach to achieve its objectives was to leverage existing Carbon Capture and Storage (CCS) course materials and teaching methods to create and implement an annual CCS short course for the Tuskegee University community; conduct a survey of CO2 separation and capture methods; utilize data to verify and develop computer models for CO2 capture and build CCS networks and hands-on training experiences. The objectives accomplished as a result of this project were: (1) A comprehensive survey of CO2 capture methods was conducted and mathematical models were developed to compare the potential economics of the different methods based on the total cost per year per unit of CO2 avoidance; and (2) Training was provided to introduce the latest CO2 capture technologies and deployment issues to the university community.

  13. Vegetation Removal from Uav Derived Dsms, Using Combination of RGB and NIR Imagery

    NASA Astrophysics Data System (ADS)

    Skarlatos, D.; Vlachos, M.

    2018-05-01

    Current advancements in photogrammetric software, along with the affordability and wide adoption of Unmanned Aerial Vehicles (UAVs), allow for rapid, timely and accurate 3D modelling and mapping of small to medium sized areas. Although the importance and applications of large-format aerial cameras, photographs, and LIDAR data in Digital Surface Model (DSM) production are well documented in the literature, this is not the case for UAV photography. Additionally, the main disadvantage of photogrammetry is the inability to map the dead ground (terrain) in areas that include vegetation. This paper assesses the use of near-infrared imagery captured by small UAV platforms to automatically remove vegetation from Digital Surface Models (DSMs) and obtain a Digital Terrain Model (DTM). Two areas were tested, based on the availability of ground reference points both under trees and among vegetation, as well as on open terrain. RGB and near-infrared UAV photography was captured and processed using Structure from Motion (SfM) and Multi View Stereo (MVS) algorithms to generate DSMs and corresponding colour and NIR orthoimages with pixel sizes of 0.2 m and 0.25 m, respectively, for the two test sites. The orthophotos were then used to eliminate the vegetation from the DSMs using the NDVI index, thresholding and masking. Following that, different interpolation algorithms, according to the test sites, were applied to fill in the gaps and create DTMs. Finally, a statistical analysis was made using reference terrain points captured in the field, both on dead ground and under vegetation, to evaluate the accuracy of the whole process and assess the overall accuracy of the derived DTMs in contrast with the DSMs.
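The NDVI-threshold-and-mask step, followed by gap interpolation, can be illustrated with a minimal sketch. The threshold value and the simple inverse-distance gap filler are illustrative choices, not the parameters or interpolators used in the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index per pixel."""
    return (nir - red) / (nir + red + eps)

def vegetation_masked_dsm(dsm, nir, red, threshold=0.3):
    """Set DSM cells flagged as vegetation (NDVI above threshold) to NaN,
    leaving gaps to be filled by interpolation."""
    out = dsm.astype(float).copy()
    out[ndvi(nir, red) > threshold] = np.nan
    return out

def fill_gaps_idw(dsm, power=2.0):
    """Very simple gap filling: inverse-distance weighting from all valid
    cells (fine for a sketch; production DTMs need robust interpolation)."""
    filled = dsm.copy()
    rows, cols = np.indices(dsm.shape)
    valid = ~np.isnan(dsm)
    vr, vc, vz = rows[valid], cols[valid], dsm[valid]
    for r, c in zip(*np.where(np.isnan(dsm))):
        d2 = (vr - r) ** 2 + (vc - c) ** 2
        w = 1.0 / d2 ** (power / 2.0)
        filled[r, c] = np.sum(w * vz) / np.sum(w)
    return filled
```

In the paper's workflow the NDVI comes from the co-registered NIR and RGB orthoimages, and the filled surface is then compared against field-surveyed reference points.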

  14. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and variable generation (VG) resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data are available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
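A minimal version of such an 8760 approximation treats capacity value as the average VG output, as a fraction of nameplate, over the highest net-load hours of the year. The hour count and the data in the test are invented for illustration; the ReEDS implementation is more elaborate.

```python
import numpy as np

def capacity_value(load, vg_gen, vg_capacity, top_hours=100):
    """Approximate capacity value as the mean VG output (fraction of
    nameplate) during the `top_hours` highest net-load hours.
    `load` and `vg_gen` are hourly arrays of length 8760 (MW)."""
    net_load = load - vg_gen
    peak_idx = np.argsort(net_load)[-top_hours:]
    return vg_gen[peak_idx].mean() / vg_capacity
```

Using net load (load minus VG) rather than load alone captures the declining marginal CV of VG as penetration grows, which is the trend the statistical methods tend to miss.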

  15. Conceptualizing intragroup and intergroup dynamics within a controlled crowd evacuation.

    PubMed

    Elzie, Terra; Frydenlund, Erika; Collins, Andrew J; Robinson, R Michael

    2015-01-01

    Social dynamics play a critical role in successful pedestrian evacuations. Crowd modeling research has made progress in capturing the way individual and group dynamics affect evacuations; however, few studies have simultaneously examined how individuals and groups interact with one another during egress. To address this gap, the researchers present a conceptual agent-based model (ABM) designed to study the ways in which autonomous, heterogeneous, decision-making individuals negotiate intragroup and intergroup behavior while exiting a large venue. A key feature of this proposed model is the examination of the dynamics among and between various groupings, where heterogeneity at the individual level dynamically affects group behavior and subsequently group/group interactions. ABM provides a means of representing the important social factors that affect decision making among diverse social groups. Expanding on the 2013 work of Vizzari et al., the researchers focus specifically on social factors and decision making at the individual/group and group/group levels to more realistically portray dynamic crowd systems during a pedestrian evacuation. By developing a model with individual, intragroup, and intergroup interactions, the ABM provides a more representative approximation of real-world crowd egress. The simulation will enable more informed planning by disaster managers, emergency planners, and other decision makers. This pedestrian behavioral concept is one piece of a larger simulation model. Future research will build toward an integrated model capturing decision-making interactions between pedestrians and vehicles that affect evacuation outcomes.

  16. Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture

    NASA Astrophysics Data System (ADS)

    Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo

    2018-03-01

    For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. Reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared regarding performance in calculating phase equilibria of CO2-impurity binary mixtures and with the collected literature data. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.
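To give a flavor of the calculations behind such comparisons, here is a sketch of the cubic underlying the simplest family compared (Soave-Redlich-Kwong), solved for the compressibility factor of pure CO2. The critical constants are approximate literature values, and this pure-component form is not the predictive-SRK mixture model evaluated in the paper.

```python
import numpy as np

R = 8.314  # J/(mol K)

def srk_Z(T, P, Tc, Pc, omega):
    """Real roots of the SRK cubic in the compressibility factor Z
    (smallest root is liquid-like, largest is vapor-like)."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.42748 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Z^3 - Z^2 + (A - B - B^2) Z - A B = 0
    roots = np.roots([1.0, -1.0, A - B - B ** 2, -A * B])
    return sorted(z.real for z in roots if abs(z.imag) < 1e-8)

# CO2: Tc ~ 304.13 K, Pc ~ 7.377 MPa, omega ~ 0.224
Z = srk_Z(310.0, 1.0e5, 304.13, 7.377e6, 0.224)
```

At 1 bar and 310 K the vapor root is close to 1, as expected for a near-ideal gas; phase-equilibrium calculations repeat this root-finding inside fugacity-matching loops for each component.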

  17. Hall viscosity and geometric response in the Chern-Simons matrix model of the Laughlin states

    NASA Astrophysics Data System (ADS)

    Lapa, Matthew F.; Hughes, Taylor L.

    2018-05-01

    We study geometric aspects of the Laughlin fractional quantum Hall (FQH) states using a description of these states in terms of a matrix quantum mechanics model known as the Chern-Simons matrix model (CSMM). This model was proposed by Polychronakos as a regularization of the noncommutative Chern-Simons theory description of the Laughlin states proposed earlier by Susskind. Both models can be understood as describing the electrons in a FQH state as forming a noncommutative fluid, i.e., a fluid occupying a noncommutative space. Here, we revisit the CSMM in light of recent work on geometric response in the FQH effect, with the goal of determining whether the CSMM captures this aspect of the physics of the Laughlin states. For this model, we compute the Hall viscosity, Hall conductance in a nonuniform electric field, and the Hall viscosity in the presence of anisotropy (or intrinsic geometry). Our calculations show that the CSMM captures the guiding center contribution to the known values of these quantities in the Laughlin states, but lacks the Landau orbit contribution. The interesting correlations in a Laughlin state are contained entirely in the guiding center part of the state/wave function, and so we conclude that the CSMM accurately describes the most important aspects of the physics of the Laughlin FQH states, including the Hall viscosity and other geometric properties of these states, which are of current interest.

  18. 19 CFR 12.27 - Importation or exportation of wild animals or birds, or the dead bodies thereof illegally...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... birds, or the dead bodies thereof illegally captured or killed, etc. 12.27 Section 12.27 Customs Duties... SPECIAL CLASSES OF MERCHANDISE Wild Animals, Birds, and Insects § 12.27 Importation or exportation of wild animals or birds, or the dead bodies thereof illegally captured or killed, etc. Customs officers shall...

  19. 19 CFR 12.27 - Importation or exportation of wild animals or birds, or the dead bodies thereof illegally...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... birds, or the dead bodies thereof illegally captured or killed, etc. 12.27 Section 12.27 Customs Duties... SPECIAL CLASSES OF MERCHANDISE Wild Animals, Birds, and Insects § 12.27 Importation or exportation of wild animals or birds, or the dead bodies thereof illegally captured or killed, etc. Customs officers shall...

  20. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging sensors employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
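The statistical heart of the method, GP regression with a sensor-noise term, can be sketched in closed form. The RBF kernel and parameter values are illustrative, and this naive version costs O(N^3); the paper's fast exact algorithm additionally exploits the grid structure to reach O(N^(3/2)).

```python
import numpy as np

def rbf(X1, X2, length_scale=1.0):
    """Squared-exponential kernel between two sets of 2D pixel coordinates."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_interpolate(coords, values, query, noise_var=0.01, length_scale=1.0):
    """Posterior mean of a zero-mean GP at `query`, given noisy
    observations `values` at `coords`. The noise variance plays the
    role of the per-pixel sensor-noise estimate in the abstract."""
    K = rbf(coords, coords, length_scale) + noise_var * np.eye(len(coords))
    k_star = rbf(query, coords, length_scale)
    return k_star @ np.linalg.solve(K, values)
```

For a division-of-focal-plane polarimeter, each of the four polarization channels would be interpolated on its own subgrid of the sensor.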

  1. Aquatic prey capture in snakes: the link between morphology, behavior and hydrodynamics

    NASA Astrophysics Data System (ADS)

    Segall, Marion; Herrel, Anthony; Godoy-Diana, Ramiro; Funevol Team; Pmmh Team

    2017-11-01

    Natural selection favors animals that are the most successful in their fitness-related behaviors, such as foraging. Secondary adaptations pose the problem of re-adapting an already 'hypothetically optimized' phenotype to new constraints. When animals forage underwater, they face strong physical constraints, particularly when capturing prey. The capture requires the predator to be fast and to generate a high acceleration to catch the prey. This involves two main constraints due to the surrounding fluid: drag and added mass. Both of these constraints are related to the shape of the animal. We experimentally explore the relationship between shape and performance in the context of an aquatic strike. As a model, we use 3D-printed snake heads of different shapes and frontal strike kinematics based on in vivo observations. By using direct force measurements, we compare the drag and added mass generated by aquatic and non-aquatic snake models during a strike. Our results show that drag is optimized in aquatic snakes. Added mass appears less important than drag for snakes during an aquatic strike. The flow features associated with the measured hydrodynamic forces allow us to propose a mechanism by which the head shape of aquatic snakes is well adapted to catching prey underwater. Funding: Region Ile de France and the doctoral school Frontieres du Vivant (FdV) - Programme Bettencourt.

  2. Integrating Image-Based Phenomics and Association Analysis to Dissect the Genetic Architecture of Temporal Salinity Responses in Rice

    PubMed Central

    Knecht, Avi C.; Wang, Dong

    2015-01-01

    Salinity affects a significant portion of arable land and is particularly detrimental for irrigated agriculture, which provides one-third of the global food supply. Rice (Oryza sativa), the most important food crop, is salt sensitive. The genetic resources for salt tolerance in rice germplasm exist but are underutilized due to the difficulty in capturing the dynamic nature of physiological responses to salt stress. The genetic basis of these physiological responses is predicted to be polygenic. In an effort to address this challenge, we generated temporal imaging data from 378 diverse rice genotypes across 14 d of 90 mM NaCl stress and developed a statistical model to assess the genetic architecture of dynamic salinity-induced growth responses in rice germplasm. A genomic region on chromosome 3 was strongly associated with the early growth response and was captured using visible range imaging. Fluorescence imaging identified four genomic regions linked to salinity-induced fluorescence responses. A region on chromosome 1 regulates both the fluorescence shift indicative of the longer term ionic stress and the early growth rate decline during salinity stress. We present, to our knowledge, a new approach to capture dynamic plant responses to the environment and elucidate the genetic basis of these responses using a longitudinal genome-wide association model. PMID:26111541

  3. Effects of asymmetric medical insurance subsidy on hospitals competition under non-price regulation.

    PubMed

    Wang, Chan; Nie, Pu-Yan

    2016-11-15

    Poor medical care and high fees are two major problems in the world health care system. As a result, health care insurance system reform is a major issue in developing countries, such as China. Governments should take the effect of health care insurance system reform on the competition of hospitals into account when they implement a reform. This article aims to capture the influences of asymmetric medical insurance subsidy and the importance of medical quality to patients on hospital competition under non-price regulation. We establish a three-stage duopoly model with quantity and quality competition. In the model, qualitative difference and asymmetric medical insurance subsidy among hospitals are considered. The government decides subsidy (or reimbursement) ratios in the first stage. Hospitals choose the quality in the second stage and then set the quantity in the third stage. We obtain our conclusions by mathematical model analyses and all the results are achieved by backward induction. The importance of medical quality to patients has a stronger influence on the small hospital, while subsidy has a greater effect on the large hospital. Meanwhile, the importance of medical quality to patients strengthens competition, but the subsidy effect weakens it. Besides, the difference in subsidy ratios affects the relationship between subsidy and hospital competition. Furthermore, we derive the optimal reimbursement ratio based on social welfare maximization. More importantly, this paper finds that the higher the management efficiency of the medical insurance investment funds is, the higher the best subsidy ratio is. This paper states that subsidy is a two-edged sword. On one hand, subsidy stimulates medical demand. On the other hand, subsidy raises prices and inhibits hospital competition. Therefore, the government must set an appropriate subsidy ratio difference between large and small hospitals to maximize the total social welfare.
For a developing country with limited medical resources and great difference in hospitals such as China, adjusting the reimbursement ratios between different level hospitals and increasing medical quality are two reasonable methods for the sustainable development of its health system.

  4. Hierarchical calibration and validation of computational fluid dynamics models for solid sorbent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Pan, Wenxiao

    2016-01-01

    To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibrations at each unit problem level. A Bayesian calibration procedure is employed and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
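The posterior-becomes-prior chaining is the key mechanism in this hierarchy. A toy one-parameter version with conjugate normal updates (all numbers invented) looks like:

```python
import numpy as np

def normal_update(prior_mean, prior_var, data, obs_var):
    """Conjugate normal-normal update for one calibration parameter:
    returns the posterior mean and variance given observations with
    known measurement variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / obs_var)
    return post_mean, post_var

# Hierarchy: the posterior from the simple unit problem becomes the
# prior for the next, more complex one (data values are made up).
mean, var = 0.0, 10.0                        # vague initial prior
for unit_data in [np.array([1.2, 0.9, 1.1]),   # bench-top level
                  np.array([1.05, 1.0])]:      # next-tier level
    mean, var = normal_update(mean, var, unit_data, obs_var=0.04)
```

Each tier shrinks the parameter uncertainty, which is exactly what carries the quantified confidence up the simple-to-complex hierarchy.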

  5. Magnetic Capture of a Molecular Biomarker from Synovial Fluid in a Rat Model of Knee Osteoarthritis

    PubMed Central

    Yarmola, Elena G.; Shah, Yash; Arnold, David P.; Dobson, Jon; Allen, Kyle D.

    2015-01-01

    Biomarker development for osteoarthritis (OA) often begins in rodent models, but can be limited by an inability to aspirate synovial fluid from a rodent stifle (similar to the human knee). To address this limitation, we have developed a magnetic nanoparticle-based technology to collect biomarkers from a rodent stifle, termed magnetic capture. Using a common OA biomarker - the c-terminus telopeptide of type II collagen (CTXII) - magnetic capture was optimized in vitro using bovine synovial fluid and then tested in a rat model of knee OA. Anti-CTXII antibodies were conjugated to the surface of superparamagnetic iron oxide-containing polymeric particles. Using these anti-CTXII particles, magnetic capture was able to estimate the level of CTXII in 25 µL aliquots of bovine synovial fluid; and under controlled conditions, this estimate was unaffected by synovial fluid viscosity. Following in vitro testing, anti-CTXII particles were tested in a rat monoiodoacetate model of knee OA. CTXII could be magnetically captured from a rodent stifle without the need to aspirate fluid and showed 10-fold changes in CTXII levels from OA-affected joints relative to contralateral control joints. Combined, these data demonstrate the ability and sensitivity of magnetic capture for post-mortem analysis of OA biomarkers in the rat. PMID:26136062

  6. Magnetic Capture of a Molecular Biomarker from Synovial Fluid in a Rat Model of Knee Osteoarthritis.

    PubMed

    Yarmola, Elena G; Shah, Yash; Arnold, David P; Dobson, Jon; Allen, Kyle D

    2016-04-01

    Biomarker development for osteoarthritis (OA) often begins in rodent models, but can be limited by an inability to aspirate synovial fluid from a rodent stifle (similar to the human knee). To address this limitation, we have developed a magnetic nanoparticle-based technology to collect biomarkers from a rodent stifle, termed magnetic capture. Using a common OA biomarker--the c-terminus telopeptide of type II collagen (CTXII)--magnetic capture was optimized in vitro using bovine synovial fluid and then tested in a rat model of knee OA. Anti-CTXII antibodies were conjugated to the surface of superparamagnetic iron oxide-containing polymeric particles. Using these anti-CTXII particles, magnetic capture was able to estimate the level of CTXII in 25 μL aliquots of bovine synovial fluid; and under controlled conditions, this estimate was unaffected by synovial fluid viscosity. Following in vitro testing, anti-CTXII particles were tested in a rat monoiodoacetate model of knee OA. CTXII could be magnetically captured from a rodent stifle without the need to aspirate fluid and showed tenfold changes in CTXII levels from OA-affected joints relative to contralateral control joints. Combined, these data demonstrate the ability and sensitivity of magnetic capture for post-mortem analysis of OA biomarkers in the rat.

  7. The underestimated potential of solar energy to mitigate climate change

    NASA Astrophysics Data System (ADS)

    Creutzig, Felix; Agoston, Peter; Goldschmidt, Jan Christoph; Luderer, Gunnar; Nemet, Gregory; Pietzcker, Robert C.

    2017-09-01

    The Intergovernmental Panel on Climate Change's fifth assessment report emphasizes the importance of bioenergy and carbon capture and storage for achieving climate goals, but it does not identify solar energy as a strategically important technology option. That is surprising given the strong growth, large resource, and low environmental footprint of photovoltaics (PV). Here we explore how models have consistently underestimated PV deployment and identify the reasons for underlying bias in models. Our analysis reveals that rapid technological learning and technology-specific policy support were crucial to PV deployment in the past, but that future success will depend on adequate financing instruments and the management of system integration. We propose that with coordinated advances in multiple components of the energy system, PV could supply 30-50% of electricity in competitive markets.

  8. Ab initio theory and modeling of water.

    PubMed

    Chen, Mohan; Ko, Hsin-Yu; Remsing, Richard C; Calegari Andrade, Marcos F; Santra, Biswajit; Sun, Zhaoru; Selloni, Annabella; Car, Roberto; Klein, Michael L; Perdew, John P; Wu, Xifan

    2017-10-10

    Water is of the utmost importance for life and technology. However, a genuinely predictive ab initio model of water has eluded scientists. We demonstrate that a fully ab initio approach, relying on the strongly constrained and appropriately normed (SCAN) density functional, provides such a description of water. SCAN accurately describes the balance among covalent bonds, hydrogen bonds, and van der Waals interactions that dictates the structure and dynamics of liquid water. Notably, SCAN captures the density difference between water and ice Ih at ambient conditions, as well as many important structural, electronic, and dynamic properties of liquid water. These successful predictions of the versatile SCAN functional open the gates to study complex processes in aqueous phase chemistry and the interactions of water with other materials in an efficient, accurate, and predictive, ab initio manner.

  9. Ab initio theory and modeling of water

    PubMed Central

    Chen, Mohan; Ko, Hsin-Yu; Remsing, Richard C.; Calegari Andrade, Marcos F.; Santra, Biswajit; Sun, Zhaoru; Selloni, Annabella; Car, Roberto; Klein, Michael L.; Perdew, John P.; Wu, Xifan

    2017-01-01

    Water is of the utmost importance for life and technology. However, a genuinely predictive ab initio model of water has eluded scientists. We demonstrate that a fully ab initio approach, relying on the strongly constrained and appropriately normed (SCAN) density functional, provides such a description of water. SCAN accurately describes the balance among covalent bonds, hydrogen bonds, and van der Waals interactions that dictates the structure and dynamics of liquid water. Notably, SCAN captures the density difference between water and ice Ih at ambient conditions, as well as many important structural, electronic, and dynamic properties of liquid water. These successful predictions of the versatile SCAN functional open the gates to study complex processes in aqueous phase chemistry and the interactions of water with other materials in an efficient, accurate, and predictive, ab initio manner. PMID:28973868

  10. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while not removing the secondary objects in the scene. However, it is still difficult to determine the important and salient objects such that these objects are not distorted after resizing the input image. In this paper, we develop a novel depth-aware single image seam carving approach by taking advantage of modern depth cameras such as the Kinect sensor, which captures the RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph cut based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and more seams through distant objects. To the best of our knowledge, our algorithm is the first work to use the true depth map captured by a Kinect depth camera for single image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
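A minimal sketch of the two ingredients: the standard dynamic-programming seam search, plus a depth term that raises the energy of near pixels so seams prefer distant objects. The additive weighting is illustrative; the paper's JND-based significance and graph-cut optimization are more involved.

```python
import numpy as np

def min_vertical_seam(energy):
    """Dynamic-programming search for the minimum-energy vertical seam.
    Returns one column index per row."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            cost[r, c] += cost[r - 1, lo:hi].min()
    seam = [int(np.argmin(cost[-1]))]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        seam.append(lo + int(np.argmin(cost[r, lo:min(c + 2, w)])))
    return seam[::-1]

def depth_aware_energy(gradient_mag, depth, alpha=1.0):
    """Boost the energy of near (small-depth) pixels so seams route
    through distant objects; alpha trades gradient vs. depth terms."""
    near = depth.max() - depth
    return gradient_mag + alpha * near
```

Removing the returned seam (one pixel per row) and repeating shrinks the image width while the depth term steers seams away from foreground objects.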

  11. Connecting Provenance with Semantic Descriptions in the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Nemani, R. R.

    2012-12-01

    NASA Earth Exchange (NEX) is a data, modeling and knowledge collaboratory that houses NASA satellite data, climate data and ancillary data, where a focused community may come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform. Some of the main goals of NEX are transparency and repeatability, and to that extent we have been adding components that enable tracking of provenance of both scientific processes and the datasets produced by these processes. As scientific processes become more complex, they are often developed collaboratively, and it becomes increasingly important for the research team to be able to track the development of the process and the datasets that are produced along the way. Additionally, we want to be able to link the processes and the datasets developed on NEX to existing information and knowledge, so that users can query and compare the provenance of any dataset or process with regard to component-specific attributes such as data quality, geographic location, related publications, user comments and annotations, etc. We have developed several ontologies that describe datasets and workflow components available on NEX using the OWL ontology language, as well as a simple ontology that provides a linking mechanism to the collected provenance information. The provenance is captured in two ways: we utilize the existing provenance infrastructure of VisTrails, which is used as a workflow engine on NEX, and we extend the captured provenance using the PROV data model expressed through the PROV-O ontology. We do this in order to link and query the provenance more easily in the context of the existing NEX information and knowledge. The captured provenance graph is processed and stored using RDFLib with a MySQL backend and can be queried using either RDFLib or SPARQL. As a concrete example, we show how this information is captured during an anomaly detection process in large satellite datasets.

  12. Effective secondary fracture prevention: implementation of a global benchmarking of clinical quality using the IOF Capture the Fracture® Best Practice Framework tool.

    PubMed

    Javaid, M K; Kyer, C; Mitchell, P J; Chana, J; Moss, C; Edwards, M H; McLellan, A R; Stenmark, J; Pierroz, D D; Schneider, M C; Kanis, J A; Akesson, K; Cooper, C

    2015-11-01

    Fracture Liaison Services are the best model to prevent secondary fractures. The International Osteoporosis Foundation developed a Best Practice Framework to provide a quality benchmark. After a year of implementation, we confirmed that a single framework with set criteria is able to benchmark services across healthcare systems worldwide. Despite evidence for the clinical effectiveness of secondary fracture prevention, translation in the real-world setting remains disappointing. Where implemented, a wide variety of service models are used to deliver effective secondary fracture prevention. To support use of effective models of care across the globe, the International Osteoporosis Foundation's Capture the Fracture® programme developed a Best Practice Framework (BPF) tool of criteria and standards to provide a quality benchmark. We now report findings after the first 12 months of implementation. A questionnaire for the BPF was created and made available to institutions on the Capture the Fracture website. Responses from institutions were used to assign gold, silver, bronze or black (insufficient) levels of achievement mapped across five domains. Through an interactive process with the institution, a final score was determined and published on the Capture the Fracture website's Fracture Liaison Service (FLS) map. Sixty hospitals across six continents submitted their questionnaires. The hospitals served populations from 20,000 to 15 million and were a mix of private and publicly funded. Each FLS managed 146 to 6200 fragility fracture patients per year, with a total of 55,160 patients across all sites. Overall, 27 hospitals scored gold, 23 silver and 10 bronze. The pathway for hip fracture patients had the highest proportion of gold grading, while vertebral fracture had the lowest. In the first 12 months, we have successfully tested the BPF tool in a range of health settings across the globe.
Initial findings confirm a significant heterogeneity in service provision and highlight the importance of a global approach to ensure high quality secondary fracture prevention services.

  13. Parameter-expanded data augmentation for Bayesian analysis of capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, Robert M.

    2012-01-01

    Data augmentation (DA) is a flexible tool for analyzing closed and open population models of capture-recapture data, especially models which include sources of heterogeneity among individuals. The essential concept underlying DA, as we use the term, is based on adding "observations" to create a dataset composed of a known number of individuals. This new (augmented) dataset, which includes the unknown number of individuals N in the population, is then analyzed using a new model that includes a reformulation of the parameter N in the conventional model of the observed (unaugmented) data. In the context of capture-recapture models, we add a set of "all zero" encounter histories which are not, in practice, observable. The model of the augmented dataset is a zero-inflated version of either a binomial or a multinomial base model. Thus, our use of DA provides a general approach for analyzing both closed and open population models of all types. In doing so, this approach provides a unified framework for the analysis of a huge range of models that are treated as unrelated "black boxes" and named procedures in the classical literature. As a practical matter, analysis of the augmented dataset by MCMC is greatly simplified compared to other methods that require specialized algorithms. For example, complex capture-recapture models of an augmented dataset can be fitted with popular MCMC software packages (WinBUGS or JAGS) by providing a concise statement of the model's assumptions that usually involves only a few lines of pseudocode. In this paper, we review the basic technical concepts of data augmentation, and we provide examples of analyses of closed-population models (M0, Mh, distance sampling, and spatial capture-recapture models) and open-population models (Jolly-Seber) with individual effects.
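For the closed-population model M0, the zero-inflated likelihood of the augmented dataset is easy to write down directly: each of the M rows (observed plus all-zero pseudo-individuals) is real with probability psi and, if real, is detected Binomial(K, p) times. The detection counts in the test are made up for illustration.

```python
from math import comb, log

def binom_pmf(y, K, p):
    """Binomial probability of y detections in K occasions."""
    return comb(K, y) * p ** y * (1.0 - p) ** (K - y)

def augmented_loglik(psi, p, counts, K, M):
    """Zero-inflated binomial log-likelihood for model M0 under data
    augmentation: `counts` are the detection frequencies of the n
    observed individuals; M is the size of the augmented dataset."""
    ll = sum(log(psi * binom_pmf(y, K, p)) for y in counts)
    # each augmented all-zero history: real but never detected, or not real
    p_zero = psi * (1.0 - p) ** K + (1.0 - psi)
    ll += (M - len(counts)) * log(p_zero)
    return ll
```

In a BUGS/JAGS analysis this same structure appears as a latent indicator z[i] ~ Bernoulli(psi) per augmented row, and N is recovered as the sum of the z's.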

  14. Discussion of “Bayesian design of experiments for industrial and scientific applications via gaussian processes”

    DOE PAGES

    Anderson-Cook, Christine M.; Burke, Sarah E.

    2016-10-18

First, we would like to commend Dr. Woods on his thought-provoking paper and insightful presentation at the 4th Annual Stu Hunter conference. We think that the material presented highlights some important needs in the area of design of experiments for generalized linear models (GLMs). In addition, we agree with Dr. Woods that design of experiments for GLMs does implicitly require expert judgement about model parameters, and hence using a Bayesian approach to capture this knowledge is a natural strategy to summarize what is known with the opportunity to incorporate associated uncertainty about that information.

  15. Discussion of “Bayesian design of experiments for industrial and scientific applications via gaussian processes”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine M.; Burke, Sarah E.

First, we would like to commend Dr. Woods on his thought-provoking paper and insightful presentation at the 4th Annual Stu Hunter conference. We think that the material presented highlights some important needs in the area of design of experiments for generalized linear models (GLMs). In addition, we agree with Dr. Woods that design of experiments for GLMs does implicitly require expert judgement about model parameters, and hence using a Bayesian approach to capture this knowledge is a natural strategy to summarize what is known with the opportunity to incorporate associated uncertainty about that information.

  16. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.

  17. Power-Law Modeling of Cancer Cell Fates Driven by Signaling Data to Reveal Drug Effects

    PubMed Central

    Zhang, Fan; Wu, Min; Kwoh, Chee Keong; Zheng, Jie

    2016-01-01

Extracellular signals are captured and transmitted by signaling proteins inside a cell. An important type of cellular response to the signals is the cell fate decision, e.g., apoptosis. However, the underlying mechanisms of cell fate regulation are still unclear, thus comprehensive and detailed kinetic models are not yet available. Alternatively, data-driven models are promising to bridge signaling data with the phenotypic measurements of cell fates. The traditional linear model for data-driven modeling of signaling pathways has its limitations because it assumes that a cell fate is proportional to the activities of signaling proteins, which is unlikely in complex biological systems. Therefore, we propose a power-law model to relate the activities of all the measured signaling proteins to the probabilities of cell fates. In our experiments, we compared our nonlinear power-law model with the linear model on three cancer datasets with phosphoproteomics and cell fate measurements, which demonstrated that the nonlinear model has superior performance on cell fate prediction. By in silico simulation of virtual protein knock-down, the proposed model is able to reveal drug effects which can complement traditional approaches such as binding affinity analysis. Moreover, our model is able to capture cell line specific information to distinguish one cell line from another in cell fate prediction. Our results show that the power-law data-driven model is able to perform better in cell fate prediction and provide more insights into the signaling pathways for cancer cell fates than the linear model. PMID:27764199
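The power-law idea can be illustrated with a toy single-protein example. This sketch is hypothetical (the variable names and the one-predictor setup are ours, not the paper's): a power-law relation fate = c · activity^w becomes linear after a log transform, so ordinary least squares in log space recovers the exponent and the scale.

```python
import math
import random

# Hypothetical illustration: fit fate = c * activity^w by taking logs,
# log(fate) = log(c) + w * log(activity), and applying simple OLS.
random.seed(0)
c_true, w_true = 0.8, 1.7
xs = [random.uniform(0.1, 5.0) for _ in range(100)]   # protein activities
ys = [c_true * x**w_true for x in xs]                 # noiseless for illustration

lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
w_hat = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)                # OLS slope = exponent
c_hat = math.exp(my - w_hat * mx)                     # OLS intercept = log scale
```

With several proteins the model multiplies activity terms, so the log transform yields a multiple regression; the nonlinearity the paper exploits lives entirely in the exponents.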

  18. Subgrid Modeling of AGN-driven Turbulence in Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Scannapieco, Evan; Brüggen, Marcus

    2008-10-01

    Hot, underdense bubbles powered by active galactic nuclei (AGNs) are likely to play a key role in halting catastrophic cooling in the centers of cool-core galaxy clusters. We present three-dimensional simulations that capture the evolution of such bubbles, using an adaptive mesh hydrodynamic code, FLASH3, to which we have added a subgrid model of turbulence and mixing. While pure hydro simulations indicate that AGN bubbles are disrupted into resolution-dependent pockets of underdense gas, proper modeling of subgrid turbulence indicates that this is a poor approximation to a turbulent cascade that continues far beyond the resolution limit. Instead, Rayleigh-Taylor instabilities act to effectively mix the heated region with its surroundings, while at the same time preserving it as a coherent structure, consistent with observations. Thus, bubbles are transformed into hot clouds of mixed material as they move outward in the hydrostatic intracluster medium (ICM), much as large airbursts lead to a distinctive "mushroom cloud" structure as they rise in the hydrostatic atmosphere of Earth. Properly capturing the evolution of such clouds has important implications for many ICM properties. In particular, it significantly changes the impact of AGNs on the distribution of entropy and metals in cool-core clusters such as Perseus.

  19. Gamow-Teller Strength Distributions for pf-shell Nuclei and its Implications in Astrophysics

    NASA Astrophysics Data System (ADS)

    Rahman, M.-U.; Nabi, J.-U.

    2009-08-01

The pf-shell nuclei are present in abundance in the pre-supernova and supernova phases, and these nuclei are considered to play an important role in the dynamics of core-collapse supernovae. The B(GT) values are calculated for the pf-shell nuclei 55Co and 57Zn using the pn-QRPA theory. The calculated B(GT) strengths differ from earlier reported shell model calculations; however, the results are in good agreement with the experimental data. These B(GT) strengths are used in the calculations of weak decay rates, which play a decisive role in core-collapse supernova dynamics and nucleosynthesis. Unlike previous calculations, the so-called Brink's hypothesis is not assumed in the present calculation, which leads to a more realistic estimate of weak decay rates. The electron capture rates are calculated over a wide grid of temperature (0.01 × 10^9 to 30 × 10^9 K) and density (10 to 10^11 g cm^-3). Our rates are enhanced compared to the reported shell model rates. This enhancement is attributed partly to the freedom of selecting a much larger model space, allowing consideration of many more excited states in the present electron capture rate calculations.

  20. Numerical Simulation of the 9-10 June 1972 Black Hills Storm Using CSU RAMS

    NASA Technical Reports Server (NTRS)

    Nair, U. S.; Hjelmfelt, Mark R.; Pielke, Roger A., Sr.

    1997-01-01

    Strong easterly flow of low-level moist air over the eastern slopes of the Black Hills on 9-10 June 1972 generated a storm system that produced a flash flood, devastating the area. Based on observations from this storm event, and also from the similar Big Thompson 1976 storm event, conceptual models have been developed to explain the unusually high precipitation efficiency. In this study, the Black Hills storm is simulated using the Colorado State University Regional Atmospheric Modeling System. Simulations with homogeneous and inhomogeneous initializations and different grid structures are presented. The conceptual models of storm structure proposed by previous studies are examined in light of the present simulations. Both homogeneous and inhomogeneous initialization results capture the intense nature of the storm, but the inhomogeneous simulation produced a precipitation pattern closer to the observed pattern. The simulations point to stationary tilted updrafts, with precipitation falling out to the rear as the preferred storm structure. Experiments with different grid structures point to the importance of removing the lateral boundaries far from the region of activity. Overall, simulation performance in capturing the observed behavior of the storm system was enhanced by use of inhomogeneous initialization.

  1. "I knew it was wrong the moment I got the order": A narrative thematic analysis of moral injury in combat veterans.

    PubMed

    Held, Philip; Klassen, Brian J; Hall, Joanne M; Friese, Tanya R; Bertsch-Gout, Marcel M; Zalta, Alyson K; Pollack, Mark H

    2018-05-03

Moral injury is a nascent construct intended to capture reactions to events that violate deeply held beliefs and moral values. Although a model of moral injury has been proposed, many of the theoretical propositions of this model have yet to be systematically studied. We conducted semistructured interviews with eight veterans who reported experiencing morally injurious events during war zone deployments. Using narrative thematic analysis, five main themes and associated subthemes emerged from the data. The main themes capture the timing of the event, contextual factors that affected the decision-making process during the morally injurious event, reactions to the morally injurious event, search for purpose and meaning, and opening up. The findings from the present study supported an existing model of moral injury, while extending it in several important ways. Preliminary clinical recommendations and directions for future research are discussed based on the study findings. These include directly exploring the context surrounding the morally injurious event, examining the veterans' moral appraisals, and helping them assume appropriate responsibility for their actions to reduce excessive self-blame. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Challenges in modeling spatiotemporally varying phytoplankton blooms in the Northwestern Arabian Sea and Gulf of Oman

    NASA Astrophysics Data System (ADS)

    Sedigh Marvasti, S.; Gnanadesikan, A.; Bidokhti, A. A.; Dunne, J. P.; Ghader, S.

    2016-02-01

Recent years have shown an increase in harmful algal blooms in the Northwest Arabian Sea and Gulf of Oman, raising the question of whether climate change will accelerate this trend. This has led us to examine whether the Earth System Models used to simulate phytoplankton productivity accurately capture bloom dynamics in this region - both in terms of the annual cycle and interannual variability. Satellite data (SeaWIFS ocean color) show two climatological blooms in this region, a wintertime bloom peaking in February and a summertime bloom peaking in September. On a regional scale, interannual variability of the wintertime bloom is dominated by cyclonic eddies which vary in location from one year to another. Two coarse (1°) models with relatively complex biogeochemistry (TOPAZ) capture the annual cycle but neither the eddies nor the interannual variability. An eddy-resolving model (GFDL CM2.6) with simpler biogeochemistry (miniBLING) displays larger interannual variability, but overestimates the wintertime bloom and captures eddy-bloom coupling in the south but not in the north. The models fail to capture both the magnitude of the wintertime bloom and its modulation by eddies, in part because of their failure to capture the observed sharp thermocline and/or nutricline in this region. Where CM2.6 is able to capture such features, in the southern part of the basin, eddies modulate diffusive nutrient supply to the surface (a mechanism not previously emphasized in the literature). For the model to simulate the observed wintertime blooms within cyclones, it will be necessary to represent this relatively unusual nutrient structure as well as the cyclonic eddies. This is a challenge in the northern Arabian Sea, as it requires capturing the details of the outflow from the Persian Gulf - something that is poorly done in global models.

  3. Age-structured mark-recapture analysis: A virtual-population-analysis-based model for analyzing age-structured capture-recapture data

    USGS Publications Warehouse

    Coggins, L.G.; Pine, William E.; Walters, C.J.; Martell, S.J.D.

    2006-01-01

We present a new model to estimate capture probabilities, survival, abundance, and recruitment using traditional Jolly-Seber capture-recapture methods within a standard fisheries virtual population analysis framework. This approach compares the numbers of marked and unmarked fish at age captured in each year of sampling with predictions based on estimated vulnerabilities and abundance in a likelihood function. Recruitment to the earliest age at which fish can be tagged is estimated by using a virtual population analysis method to back-calculate the expected numbers of unmarked fish at risk of capture. By using information from both marked and unmarked animals in a standard fisheries age structure framework, this approach is well suited to the sparse data situations common in long-term capture-recapture programs with variable sampling effort. © Copyright by the American Fisheries Society 2006.

  4. Contributions of microtubule rotation and dynamic instability to kinetochore capture

    NASA Astrophysics Data System (ADS)

    Sweezy-Schindler, Oliver; Edelmaier, Christopher; Blackwell, Robert; Glaser, Matt; Betterton, Meredith

    2014-03-01

    The capture of lost kinetochores (KCs) by microtubules (MTs) is a crucial part of prometaphase during mitosis. Microtubule dynamic instability has been considered the primary mechanism of KC capture, but recent work discovered that lateral KC attachment to pivoting MTs enabled rapid capture even with significantly reduced MT dynamics. We aim to understand the relative contributions of MT rotational diffusion and dynamic instability to KC capture, as well as KC capture through end-on and/or lateral attachment. Our model consists of rigid MTs and a spherical KC, which are allowed to diffuse inside a spherical nuclear envelope consistent with the geometry of fission yeast. For simplicity, we include a single spindle pole body, which is anchored to the nuclear membrane, and its associated polar MTs. Brownian dynamics treats the diffusion of the MTs and KC and kinetic Monte Carlo models stochastic processes such as dynamic instability. NSF 1546021.

  5. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    PubMed

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

In a medical environment such as an Intensive Care Unit, there are many possible sources of error, and one important source is human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design models. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by human intellectual tasks.

  6. a New Multi-Criteria Evaluation Model Based on the Combination of Non-Additive Fuzzy Ahp, Choquet Integral and Sugeno λ-MEASURE

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Samiei, M.; Salari, H. R.; Karami, N.

    2017-09-01

This paper proposes a new model for multi-criteria evaluation under uncertain conditions. In this model we consider the interaction between criteria, one of the most challenging issues especially in the presence of uncertainty. In this case the usual pairwise comparisons and weighted sums cannot be used to calculate the importance of criteria and to aggregate them. Our model is based on the combination of non-additive fuzzy linguistic preference relation AHP (FLPRAHP), the Choquet integral and the Sugeno λ-measure. The proposed model captures fuzzy preferences of users and fuzzy values of criteria, and uses the Sugeno λ-measure to determine the importance of criteria and their interaction. Then, integrating the Choquet integral and FLPRAHP, all the interactions between criteria are taken into account with the fewest comparisons, and the final score for each alternative is determined. The model thus accounts for a comprehensive set of interactions between criteria, which leads to more reliable results. An illustrative example presents the effectiveness and capability of the proposed model to evaluate different alternatives in a multi-criteria decision problem.
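The aggregation machinery named above can be sketched concretely. This is a hypothetical illustration (densities and values are invented, and the paper's FLPRAHP weighting step is omitted): given criterion densities g_i, the Sugeno λ is the unique nonzero root of Π(1 + λ·g_i) = 1 + λ in (−1, ∞), subset measures follow as g(A) = (Π_{i∈A}(1 + λ·g_i) − 1)/λ, and the discrete Choquet integral aggregates criterion values against that measure.

```python
# Sketch: discrete Choquet integral with respect to a Sugeno λ-measure.
def solve_lambda(g, tol=1e-12):
    # Find the nonzero root of  Π(1 + λ·g_i) − (1 + λ) = 0  by bisection.
    def h(lam):
        prod = 1.0
        for gi in g:
            prod *= 1.0 + lam * gi
        return prod - (1.0 + lam)
    s = sum(g)
    if abs(s - 1.0) < tol:
        return 0.0                                  # additive case: λ = 0
    lo, hi = (1e-9, 1e6) if s < 1.0 else (-1.0 + 1e-9, -1e-9)
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def measure(subset, g, lam):
    if lam == 0.0:
        return sum(g[i] for i in subset)
    prod = 1.0
    for i in subset:
        prod *= 1.0 + lam * g[i]
    return (prod - 1.0) / lam

def choquet(values, g):
    # Sort criteria by value; weight each increment by the measure of the
    # coalition of criteria whose value is at least that large.
    lam = solve_lambda(g)
    order = sorted(range(len(values)), key=lambda i: values[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        total += (values[i] - prev) * measure(order[k:], g, lam)
        prev = values[i]
    return total

g = [0.2, 0.3, 0.4]        # hypothetical criterion densities (sum < 1 → λ > 0)
lam = solve_lambda(g)
```

A sanity check on any such implementation: the measure of the full criterion set must be 1, and aggregating identical values must return that value.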

  7. A Multilevel AR(1) Model: Allowing for Inter-Individual Differences in Trait-Scores, Inertia, and Innovation Variance.

    PubMed

    Jongerling, Joran; Laurenceau, Jean-Philippe; Hamaker, Ellen L

    2015-01-01

    In this article we consider a multilevel first-order autoregressive [AR(1)] model with random intercepts, random autoregression, and random innovation variance (i.e., the level 1 residual variance). Including random innovation variance is an important extension of the multilevel AR(1) model for two reasons. First, between-person differences in innovation variance are important from a substantive point of view, in that they capture differences in sensitivity and/or exposure to unmeasured internal and external factors that influence the process. Second, using simulation methods we show that modeling the innovation variance as fixed across individuals, when it should be modeled as a random effect, leads to biased parameter estimates. Additionally, we use simulation methods to compare maximum likelihood estimation to Bayesian estimation of the multilevel AR(1) model and investigate the trade-off between the number of individuals and the number of time points. We provide an empirical illustration by applying the extended multilevel AR(1) model to daily positive affect ratings from 89 married women over the course of 42 consecutive days.
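The model structure described above can be illustrated with a short simulation. This is a hypothetical sketch (the hyperparameter values are invented, not from the study): each individual draws a trait score mu_i, an inertia parameter phi_i, and a person-specific innovation SD sigma_i, and the series follows y_t = mu_i + phi_i(y_{t−1} − mu_i) + e_t with e_t ~ N(0, sigma_i²).

```python
import math
import random

# Simulate a multilevel AR(1) panel with random intercepts, random
# autoregression, and random (log-normal) innovation variance.
random.seed(42)

def simulate_person(T=42):
    mu = random.gauss(5.0, 1.0)                            # trait score
    phi = min(max(random.gauss(0.3, 0.1), -0.9), 0.9)      # inertia, kept stationary
    sigma = math.exp(random.gauss(-0.5, 0.3))              # person-specific innovation SD
    y = [mu]
    for _ in range(T - 1):
        y.append(mu + phi * (y[-1] - mu) + random.gauss(0, sigma))
    return y

# 89 individuals over 42 days, matching the dimensions of the empirical example.
panel = [simulate_person() for _ in range(89)]
```

Fitting this model (rather than simulating from it) would be done with multilevel Bayesian software; the simulation just makes explicit which three quantities vary across persons.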

  8. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    2014-05-01

Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for twice-a-day forecasts at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions, and try to understand the important features that may contribute to the success of the forecast.

  9. Generic solar photovoltaic system dynamic simulation model specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Abraham; Behnke, Michael Robert; Elliott, Ryan Thomas

This document is intended to serve as a specification for generic solar photovoltaic (PV) system positive-sequence dynamic models to be implemented by software developers and approved by the WECC MVWG for use in bulk system dynamic simulations in accordance with NERC MOD standards. Two specific dynamic models are included in the scope of this document. The first, a Central Station PV System model, is intended to capture the most important dynamic characteristics of large scale (> 10 MW) PV systems with a central Point of Interconnection (POI) at the transmission level. The second, a Distributed PV System model, is intended to represent an aggregation of smaller, distribution-connected systems that comprise a portion of a composite load that might be modeled at a transmission load bus.

  10. Spatial capture-recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture-recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
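The "ecological distance" ingredient can be sketched on its own. This is a hypothetical illustration (the grid, resistance values, and step-cost rule are ours): least-cost path length between two cells of a resistance raster, computed with Dijkstra's algorithm; in an SCR model this distance would replace Euclidean distance inside the encounter-probability function.

```python
import heapq

# Least-cost ("ecological") distance on a resistance grid via Dijkstra.
def least_cost(resistance, start, goal):
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                              # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Step cost: mean resistance of the two cells (one convention).
                nd = d + 0.5 * (resistance[r][c] + resistance[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

grid = [[1, 1, 1],
        [1, 9, 1],   # high-resistance barrier in the center
        [1, 1, 1]]
```

With the barrier in place, the corner-to-corner path detours around the center cell, so the ecological distance between the corners exceeds what Euclidean distance alone would suggest for a uniform landscape.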

  11. Individual heterogeneity and identifiability in capture-recapture models

    USGS Publications Warehouse

    Link, W.A.

    2004-01-01

    Individual heterogeneity in detection probabilities is a far more serious problem for capture-recapture modeling than has previously been recognized. In this note, I illustrate that population size is not an identifiable parameter under the general closed population mark-recapture model Mh. The problem of identifiability is obvious if the population includes individuals with pi = 0, but persists even when it is assumed that individual detection probabilities are bounded away from zero. Identifiability may be attained within parametric families of distributions for pi, but not among parametric families of distributions. Consequently, in the presence of individual heterogeneity in detection probability, capture-recapture analysis is strongly model dependent.

  12. Health literacy and public health: A systematic review and integration of definitions and models

    PubMed Central

    2012-01-01

    Background Health literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy. Methods A systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model. Results The review resulted in 17 definitions of health literacy and 12 conceptual models. Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion setting, respectively. Conclusions Based upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings. PMID:22276600

  13. Unsteady numerical simulation of the flow in the U9 Kaplan turbine model

    NASA Astrophysics Data System (ADS)

    Javadi, Ardalan; Nilsson, Håkan

    2014-03-01

The Reynolds-averaged Navier-Stokes equations with the RNG k-ε turbulence model closure are utilized to simulate the unsteady turbulent flow throughout the whole flow passage of the U9 Kaplan turbine model. The U9 Kaplan turbine model comprises 20 stationary guide vanes and 6 rotating blades (696.3 RPM), operating at the best-efficiency load (0.71 m3/s). The computations are conducted using a general finite volume method, using the OpenFOAM CFD code. A dynamic mesh is used together with a sliding GGI interface to include the effect of the rotating runner. The guide vane clearance is included, as are the hub and tip clearances in the runner. An analysis is conducted of the unsteady behavior of the flow field, the pressure fluctuation in the draft tube, and the coherent structures of the flow. The tangential and axial velocity distributions at three sections in the draft tube are compared against LDV measurements. The numerical result is in reasonable agreement with the experimental data, and the important flow physics close to the hub in the draft tube is captured. The hub and tip vortices and an on-axis forced vortex are captured. The numerical results show that the frequency of the forced vortex is 1/5 of the runner rotation frequency.

  14. Degree versus direction: a comparison of four handedness classification schemes through the investigation of lateralised semantic priming.

    PubMed

    Kaploun, Kristen A; Abeare, Christopher A

    2010-09-01

    Four classification systems were examined using lateralised semantic priming in order to investigate whether degree or direction of handedness better captures the pattern of lateralised semantic priming. A total of 85 participants completed a lateralised semantic priming task and three handedness questionnaires. The classification systems tested were: (1) the traditional right- vs left-handed (RHs vs LHs); (2) a four-factor model of strong and weak right- and left-handers (SRHs, WRHs, SLHs, WLHs); (3) strong- vs mixed-handed (SHs vs MHs); and (4) a three-factor model of consistent left- (CLHs), inconsistent left- (ILHs), and consistent right-handers (CRHs). Mixed-factorial ANOVAs demonstrated significant visual field (VF) by handedness interactions for all but the third model. Results show that LHs, SLHs, CLHs, and ILHs responded faster to LVF targets, whereas RHs, SRHs, and CRHs responded faster to RVF targets; no significant VF by handedness interaction was found between SHs and MHs. The three-factor model better captures handedness group divergence on lateralised semantic priming by incorporating the direction of handedness as well as the degree. These findings help explain some of the variance in language lateralisation, demonstrating that direction of handedness is as important as degree. The need for greater consideration of handedness subgroups in laterality research is highlighted.

  15. Computational Modeling and Simulation of Developmental ...

    EPA Pesticide Factsheets

    Developmental and Reproductive Toxicity (DART) testing is important for assessing the potential consequences of drug and chemical exposure on human health and well-being. Complexity of pregnancy and the reproductive cycle makes DART testing challenging and costly for traditional (animal-based) methods. A compendium of in vitro data from ToxCast/Tox21 high-throughput screening (HTS) programs is available for predictive toxicology. ‘Predictive DART’ will require an integrative strategy that mobilizes HTS data into in silico models that capture the relevant embryology. This lecture addresses progress on EPA's 'virtual embryo'. The question of how tissues and organs are shaped during development is crucial for understanding (and predicting) human birth defects. While ToxCast HTS data may predict developmental toxicity with reasonable accuracy, mechanistic models are still necessary to capture the relevant biology. Subtle microscopic changes induced chemically may amplify to an adverse outcome but coarse changes may override lesion propagation in any complex adaptive system. Modeling system dynamics in a developing tissue is a multiscale problem that challenges our ability to predict toxicity from in vitro profiling data (ToxCast/Tox21). (DISCLAIMER: The views expressed in this presentation are those of the presenter and do not necessarily reflect the views or policies of the US EPA). This was an invited seminar presentation to the National Institute for Public H

  16. Hybrid CFD/CAA Modeling for Liftoff Acoustic Predictions

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Liever, Peter A.

    2011-01-01

This paper presents development efforts at the NASA Marshall Space Flight Center to establish a hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) simulation system for launch vehicle liftoff acoustics environment analysis. Acoustic prediction engineering tools based on empirical jet acoustic strength and directivity models or scaled historical measurements are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. CFD-based modeling approaches are now able to capture the important details of the vehicle-specific plume flow environment, identify the noise generation sources, and allow assessment of the influence of launch pad geometric details and sound mitigation measures such as water injection. However, CFD methodologies are numerically too dissipative to accurately capture the propagation of the acoustic waves in the large CFD models. The hybrid CFD/CAA approach combines the high-fidelity CFD analysis capable of identifying the acoustic sources with a fast and efficient Boundary Element Method (BEM) that accurately propagates the acoustic field from the source locations. The BEM approach was chosen for its ability to properly account for reflections and scattering of acoustic waves from launch pad structures. The paper will present an overview of the technology components of the CFD/CAA framework and discuss plans for demonstration and validation against test data.

  17. Role of resolution in regional climate change projections over China

    NASA Astrophysics Data System (ADS)

    Shi, Ying; Wang, Guiling; Gao, Xuejie

    2017-11-01

    This paper investigates the sensitivity of projected future climate changes over China to the horizontal resolution of a regional climate model, RegCM4.4 (RegCM), using RCP8.5 as an example. Model validation shows that RegCM performs better in reproducing the spatial distribution and magnitude of present-day temperature, precipitation and climate extremes than the driving global climate model HadGEM2-ES (HadGEM, at 1.875° × 1.25° resolution), but little difference is found between the simulations at 50 and 25 km resolutions. Comparison with observational data at different resolutions confirmed the added value of the RCM and finer model resolutions in better capturing the probability distribution of precipitation. However, HadGEM and RegCM at both resolutions project a similar pattern of significant future warming during both winter and summer, and a similar pattern of winter precipitation changes, including a dominant increase in most areas of northern China and little change or decrease in the southern part. Projected precipitation changes in summer diverge among the three models, especially over eastern China, with a general increase in HadGEM, little change in RegCM at 50 km, and a mix of increase and decrease in RegCM at 25 km resolution. Changes in temperature-related extremes (annual number of days with daily maximum temperature > 25 °C, the maximum value of daily maximum temperature, and the minimum value of daily minimum temperature) in the three simulations, especially in the two RegCM simulations, are very similar to each other; so are the precipitation-related extremes (maximum consecutive dry days, maximum consecutive 5-day precipitation and extremely wet days' total amount). Overall, results from this study indicate a very low sensitivity of projected changes in this region to model resolution.
While fine resolution is critical for capturing the spatial variability of the control climate, it may not be as important for capturing the climate response to homogeneous forcing (in this case greenhouse gas concentration changes).

  18. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application

    PubMed Central

    Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-01-01

    Background The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. Objective The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Methods Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. Results The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. Conclusions This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. PMID:28903894
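    The context-based Web controls described above guide data entry based on pre-established dependencies between fields. A minimal sketch of that idea, using an invented form specification (the element names, attributes, and dependency rules below are illustrative assumptions, not the schema used in the actual application):

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical form specification: field ids, types, and "depends-on"
    # rules are invented for illustration only.
    FORM_SPEC = """
    <form name="fundus-exam">
      <field id="abnormality" type="choice" options="none,drusen,hemorrhage"/>
      <field id="location" type="text" depends-on="abnormality" unless="none"/>
      <field id="sketch" type="ink" depends-on="abnormality" unless="none"/>
    </form>
    """

    def visible_fields(spec_xml, answers):
        """Return the ids of fields to display, given answers entered so far."""
        root = ET.fromstring(spec_xml)
        shown = []
        for field in root.findall("field"):
            parent = field.get("depends-on")
            if parent is None:
                # Independent field: always shown.
                shown.append(field.get("id"))
            elif answers.get(parent) not in (None, field.get("unless")):
                # Dependent field: shown once its parent has a qualifying answer.
                shown.append(field.get("id"))
        return shown

    print(visible_fields(FORM_SPEC, {}))                         # -> ['abnormality']
    print(visible_fields(FORM_SPEC, {"abnormality": "drusen"}))  # -> all three fields
    ```

    Storing the form layout as XML in this way is what allows immediate changes in form content without redeploying the application: the rendering logic stays fixed while the specification document evolves.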

  19. Evaluating the importance of characterizing soil structure and horizons in parameterizing a hydrologic process model

    USGS Publications Warehouse

    Mirus, Benjamin B.

    2015-01-01

    Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested. For the cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.
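    The pedotransfer functions discussed above typically map soil texture to parameters of a water retention model such as the widely used van Genuchten (1980) curve. A minimal sketch (the parameter values below are illustrative, roughly loam-like, and not taken from this study):

    ```python
    def van_genuchten_theta(psi, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56):
        """Volumetric water content at matric suction psi (cm of water, >= 0).

        theta_r/theta_s: residual/saturated water content (cm^3/cm^3)
        alpha (1/cm), n (-): shape parameters a pedotransfer function
        would estimate from texture; values here are illustrative only.
        """
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

    # Saturated at zero suction; water content declines toward theta_r
    # as suction increases.
    print(van_genuchten_theta(0.0))    # -> 0.43 (saturated)
    print(van_genuchten_theta(330.0))  # between theta_r and theta_s
    ```

    In this parameterization, a "steeper" retention curve of the kind the study's analysis points to corresponds to a larger shape parameter n, which concentrates drainage over a narrower suction range.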

  20. EpiK: A Knowledge Base for Epidemiological Modeling and Analytics of Infectious Diseases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, S. M. Shamimul; Fox, Edward A.; Bisset, Keith

    Computational epidemiology seeks to develop computational methods to study the distribution and determinants of health-related states or events (including disease), and the application of this study to the control of diseases and other health problems. Recent advances in computing and data sciences have led to the development of innovative modeling environments to support this important goal. The datasets used to drive the dynamic models, as well as the data produced by these models, present unique challenges owing to their size, heterogeneity and diversity. These datasets form the basis of effective and easy-to-use decision support and analytical environments. As a result, it is important to develop scalable data management systems to store, manage and integrate these datasets. In this paper, we develop EpiK—a knowledge base that facilitates the development of decision support and analytical environments to support epidemic science. An important goal is to develop a framework that links the input as well as output datasets to facilitate effective spatio-temporal and social reasoning that is critical in planning and intervention analysis before and during an epidemic. The data management framework links modeling workflow data and its metadata using a controlled vocabulary. The metadata captures information about storage, the mapping between the linked model and the physical layout, and relationships to support services. EpiK is designed to support agent-based modeling and analytics frameworks—aggregate models can be seen as special cases and are thus supported. We use semantic web technologies to create a representation of the datasets that encapsulates both the location and the schema heterogeneity. The choice of RDF as a representation language is motivated by the diversity and growth of the datasets that need to be integrated. 
A query bank is developed—the queries capture a broad range of questions that can be posed and answered during a typical case study pertaining to disease outbreaks. The queries are constructed using SPARQL Protocol and RDF Query Language (SPARQL) over EpiK. EpiK can hide schema and location heterogeneity while efficiently supporting queries that span the computational epidemiology modeling pipeline: from model construction to simulation output. As a result, we show that the performance of benchmark queries varies significantly with respect to the choice of hardware underlying the database and RDF engine.
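    The core idea behind the RDF representation above is that model inputs and outputs become subject-predicate-object triples that can be answered with graph pattern queries. A toy, stdlib-only sketch of that idea (the `sim:`/`epik:` vocabulary below is invented for illustration; EpiK itself stores RDF and is queried via SPARQL over a real triple store):

    ```python
    # Hypothetical triples linking simulation runs to their metadata.
    TRIPLES = {
        ("sim:run42", "epik:usesModel", "model:agentBased"),
        ("sim:run42", "epik:region", "region:VA"),
        ("sim:run42", "epik:peakDay", "day:34"),
        ("sim:run43", "epik:usesModel", "model:aggregate"),
        ("sim:run43", "epik:region", "region:VA"),
    }

    def match(triples, s=None, p=None, o=None):
        """Return triples matching a pattern; None acts like a SPARQL variable."""
        return [(ts, tp, to) for (ts, tp, to) in triples
                if s in (None, ts) and p in (None, tp) and o in (None, to)]

    # "Which runs cover region VA?" -- analogous to the SPARQL pattern
    #   SELECT ?run WHERE { ?run epik:region region:VA }
    runs = sorted(t[0] for t in match(TRIPLES, p="epik:region", o="region:VA"))
    print(runs)  # -> ['sim:run42', 'sim:run43']
    ```

    Because every fact is a uniform triple, queries like this one do not depend on which table or file a value came from, which is how the triple representation hides schema and location heterogeneity.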
