NASA Astrophysics Data System (ADS)
Harman, C. J.
2015-12-01
Surface water hydrologic models are increasingly used to analyze the transport of solutes, such as nitrate, through the landscape. However, many of these models cannot adequately capture the effect of groundwater flow paths, which can have long travel times and accumulate legacy contaminants, releasing them to streams over decades. If these long lag times are not accounted for, the short-term efficacy of management activities to reduce nitrogen loads may be overestimated. Models that adopt a simple 'well-mixed' assumption, leading to an exponential transit time distribution at steady state, cannot adequately capture the broadly skewed nature of groundwater transit times in typical watersheds. Here I will demonstrate how StorAge Selection functions can be used to capture the long lag times of groundwater in the subwatershed-based hydrologic framework typical of models like SWAT, HSPF, HBV, PRMS and others. These functions can be selected and calibrated to reproduce historical data where available, but can also be fitted to the results of a steady-state groundwater transport model like MODFLOW/MODPATH, allowing those results to directly inform the parameterization of an unsteady surface water model. The long tails of the transit time distribution predicted by the groundwater model can then be completely captured by the surface water model. Examples of this application in the Chesapeake Bay watersheds and elsewhere will be given.
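As an illustration of why the shape of the transit time distribution matters, the following minimal sketch convolves a hypothetical nitrate loading history with an exponential ('well-mixed') versus a gamma-tailed distribution. All values are assumed for illustration; this is not Harman's actual SAS formulation.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration: stream concentration as the convolution of a
# nitrate input history with a steady-state transit time distribution (TTD).
# An exponential TTD ("well-mixed" store) is compared with a gamma TTD with
# shape < 1 (broadly skewed, as groundwater models typically predict).
dt = 1.0                              # years
t = np.arange(0, 100, dt)             # simulation horizon
tau = np.arange(dt / 2, 100, dt)      # transit times (midpoints avoid pdf(0))

mean_tt = 15.0                        # assumed mean transit time (years)
p_exp = stats.expon(scale=mean_tt).pdf(tau)
p_gam = stats.gamma(a=0.5, scale=mean_tt / 0.5).pdf(tau)   # heavy-tailed
p_exp /= p_exp.sum() * dt             # normalize the truncated densities
p_gam /= p_gam.sum() * dt

# Step reduction in nitrate input at year 50 (a management action)
c_in = np.where(t < 50, 10.0, 2.0)    # mg/L, hypothetical loading

def stream_conc(c_in, p_tt, dt):
    """C_out(t) = sum over tau of C_in(t - tau) * p(tau) * dtau."""
    return np.convolve(c_in, p_tt)[:len(c_in)] * dt

c_exp = stream_conc(c_in, p_exp, dt)
c_gam = stream_conc(c_in, p_gam, dt)

# The gamma store responds quickly at first (large young-water fraction) but
# keeps releasing legacy nitrate for decades after the year-50 reduction,
# so the apparent benefit of the management action is delayed.
for yr in (55, 70, 90):
    i = int(yr / dt)
    print(f"year {yr}: exponential {c_exp[i]:.2f} mg/L, gamma {c_gam[i]:.2f} mg/L")
```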
Multiscale solute transport upscaling for a three-dimensional hierarchical porous medium
NASA Astrophysics Data System (ADS)
Zhang, Mingkan; Zhang, Ye
2015-03-01
A laboratory-generated hierarchical, fully heterogeneous aquifer model (FHM) provides a reference for developing and testing an upscaling approach that integrates large-scale connectivity mapping with flow and transport modeling. Based on the FHM, three hydrostratigraphic models (HSMs) that capture lithological (static) connectivity at different resolutions are created, each corresponding to a sedimentary hierarchy. Under increasing system lnK variances (0.1, 1.0, 4.5), flow upscaling is first conducted to calculate an equivalent hydraulic conductivity for each connectivity unit of the HSMs. Given the computed flow fields, an instantaneous, conservative tracer test is simulated by all models. For the HSMs, two upscaling formulations are tested based on the advection-dispersion equation (ADE), implementing space- versus time-dependent macrodispersivity. Comparing flow and transport predictions of the HSMs against those of the reference model, HSMs capturing connectivity at increasing resolutions are more accurate, although upscaling errors increase with system variance. Results suggest: (1) by explicitly modeling connectivity, an enhanced degree of freedom in representing dispersion can improve the ADE-based upscaled models by capturing non-Fickian transport of the FHM; (2) when connectivity is sufficiently resolved, the type of data conditioning used to model transport becomes less critical; data conditioning, however, is influenced by the prediction goal; (3) when the aquifer is weakly to moderately heterogeneous, the upscaled models adequately capture the transport simulation of the FHM, despite the existence of hierarchical heterogeneity at smaller scales; when the aquifer is strongly heterogeneous, the upscaled models become less accurate because lithological connectivity cannot adequately capture preferential flows; (4) three-dimensional transport connectivities of the hierarchical aquifer differ quantitatively from those analyzed for two-dimensional systems. This article was corrected on 7 MAY 2015. See the end of the full text for details.
A Multiperspectival Conceptual Model of Transformative Meaning Making
ERIC Educational Resources Information Center
Freed, Maxine
2009-01-01
Meaning making is central to transformative learning, but little work has explored how meaning is constructed in the process. Moreover, no meaning-making theory adequately captures its characteristics and operations during radical transformation. The purpose of this dissertation was to formulate and specify a multiperspectival conceptual model of…
Evaluation of XHVRB for Capturing Explosive Shock Desensitization
NASA Astrophysics Data System (ADS)
Tuttle, Leah; Schmitt, Robert; Kittell, Dave; Harstad, Eric
2017-06-01
Explosive shock desensitization phenomena have been recognized for some time. It has been demonstrated that pressure-based reactive flow models do not adequately capture the basic nature of the explosive behavior. Historically, replacing the local pressure with a shock-captured pressure has dramatically improved the numerical modeling approaches. Models based upon shock pressure or functions of entropy have recently been developed. A pseudo-entropy based formulation using the History Variable Reactive Burn model, as proposed by Starkenberg, was implemented into the Eulerian shock physics code CTH. Improvements in the shock capturing algorithm were made. The model is demonstrated to reproduce single shock behavior consistent with published pop-plot data. It is also demonstrated to capture a desensitization effect based on available literature data, and to qualitatively capture dead zones from desensitization in 2D corner turning experiments. This model shows promise for use in modeling and simulation problems that are relevant to the desensitization phenomena. Issues are identified with the current implementation and future work is proposed for improving and expanding model capabilities. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Verification of Orthogrid Finite Element Modeling Techniques
NASA Technical Reports Server (NTRS)
Steeve, B. E.
1996-01-01
The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, shell, and mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.
Abiotic/biotic coupling in the rhizosphere: a reactive transport modeling analysis
Lawrence, Corey R.; Steefel, Carl; Maher, Kate
2014-01-01
A new generation of models is needed to adequately simulate patterns of soil biogeochemical cycling in response to changing global environmental drivers. For example, predicting the influence of climate change on soil organic matter storage and stability requires models capable of addressing complex biotic/abiotic interactions of rhizosphere and weathering processes. Reactive transport modeling provides a powerful framework for simulating these interactions and the resulting influence on soil physical and chemical characteristics. Incorporation of organic reactions in an existing reactive transport model framework has yielded novel insights into soil weathering and development, but much more work is required to adequately capture root and microbial dynamics in the rhizosphere. This endeavor provides many advantages over traditional soil biogeochemical models but also many challenges.
Cell population modelling of yeast glycolytic oscillations.
Henson, Michael A; Müller, Dirk; Reuss, Matthias
2002-01-01
We investigated a cell-population modelling technique in which the population is constructed from an ensemble of individual cell models. The average value or the number distribution of any intracellular property captured by the individual cell model can be calculated by simulation of a sufficient number of individual cells. The proposed method is applied to a simple model of yeast glycolytic oscillations where synchronization of the cell population is mediated by the action of an excreted metabolite. We show that smooth one-dimensional distributions can be obtained with ensembles comprising 1000 individual cells. Random variations in the state and/or structure of individual cells are shown to produce complex dynamic behaviours which cannot be adequately captured by small ensembles. PMID:12206713
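A minimal sketch of the ensemble idea follows, using a generic phase-oscillator stand-in rather than the authors' glycolysis model: population properties are computed by averaging over many individually simulated cells whose state and structure vary randomly.

```python
import numpy as np

# A generic stand-in for the ensemble approach (not the actual glycolysis
# model): each "cell" is an oscillator x_i(t) = sin(w_i t + phi_i) whose
# frequency w_i (structure) and phase phi_i (state) vary randomly from cell
# to cell; population properties are averages over the ensemble.
rng = np.random.default_rng(1)

def population_mean(n_cells, t):
    w = rng.normal(1.0, 0.05, n_cells)        # cell-to-cell structural variation
    phi = rng.uniform(0, 2 * np.pi, n_cells)  # variation in initial state
    return np.sin(np.outer(t, w) + phi).mean(axis=1)

t = np.linspace(0, 50, 2000)
for n in (10, 100, 1000):
    m = population_mean(n, t)
    # With few cells the population average fluctuates wildly; with ~1000
    # cells it settles to the smooth ensemble behaviour, echoing the paper's
    # finding that small ensembles cannot capture the population dynamics.
    print(f"N = {n:5d}: late-time std of population mean = {m[1000:].std():.3f}")
```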
Water surface modeling from a single viewpoint video.
Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip
2013-07-01
We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between visual quality and production cost: it automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We provide qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.
Towards a Context-Aware Proactive Decision Support Framework
2013-11-15
initiative that has developed text analytic technology that crosses the semantic gap into the area of event recognition and representation. The...recognizing operational context, and techniques for recognizing context shift. Additional research areas include: • Adequately capturing users...Universal Interaction Context Ontology [12] might serve as a foundation • Instantiating formal models of decision making based on information seeking
Numerical experiments with model monophyletic and paraphyletic taxa
NASA Technical Reports Server (NTRS)
Sepkoski, J. J., Jr.; Kendrick, D. C.; Sepkoski, J. J., Jr. (Principal Investigator)
1993-01-01
The problem of how accurately paraphyletic taxa versus monophyletic (i.e., holophyletic) groups (clades) capture underlying species patterns of diversity and extinction is explored with Monte Carlo simulations. Phylogenies are modeled as stochastic trees. Paraphyletic taxa are defined in an arbitrary manner by randomly choosing progenitors and clustering all descendants not belonging to other taxa. These taxa are then examined to determine which are clades, and the remaining paraphyletic groups are dissected to discover monophyletic subgroups. Comparisons of diversity patterns and extinction rates between modeled taxa and lineages indicate that paraphyletic groups can adequately capture lineage information under a variety of conditions of diversification and mass extinction. This suggests that these groups constitute more than mere "taxonomic noise" in this context. But strictly monophyletic groups perform somewhat better, especially with regard to mass extinctions. However, when low levels of paleontologic sampling are simulated, the veracity of clades deteriorates, especially with respect to diversity, and modeled paraphyletic taxa often capture more information about underlying lineages. Thus, for studies of diversity and taxic evolution in the fossil record, traditional paleontologic genera and families need not be rejected in favor of cladistically defined taxa.
Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model
NASA Astrophysics Data System (ADS)
Vila, J.; Fernández-Sáez, J.; Zaera, R.
2018-04-01
In this paper we study the coupled axial-transverse nonlinear vibrations of a kind of one-dimensional structured solid by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, adequately reproducing the behavior of the 1D nonlinear discrete model.
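The linear part of the scale effect described here can be illustrated with the textbook dispersion relation of a chain of masses and springs; the gradient-type correction below comes from a Taylor continualization of the discrete equations. Parameters are illustrative, and this is not the authors' Inertia Gradient Nonlinear model.

```python
import numpy as np

# Linear-chain dispersion: a standard illustration of scale effects.
# Assumed parameters: masses m connected by springs k at spacing a.
m, k, a = 1.0, 1.0, 1.0
c = a * np.sqrt(k / m)                        # classical wave speed

q = np.linspace(1e-3, np.pi / a, 200)         # wavenumbers up to the zone edge
w_discrete = 2 * np.sqrt(k / m) * np.abs(np.sin(q * a / 2))
w_classical = c * q                           # scale-free, no dispersion
w_gradient = c * q * (1 - (q * a) ** 2 / 24)  # Taylor continualization, O(q^3)

for frac in (0.1, 0.5, 1.0):
    i = int(frac * (len(q) - 1))
    print(f"qa = {q[i]*a:4.2f}: discrete {w_discrete[i]:.3f}, "
          f"classical {w_classical[i]:.3f}, gradient {w_gradient[i]:.3f}")
# Near the zone edge (wavelength comparable to a) the classical continuum
# overshoots, while the gradient-corrected model tracks the discrete chain.
```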
Modelling and simulation techniques for membrane biology.
Burrage, Kevin; Hancock, John; Leier, André; Nicolau, Dan V
2007-07-01
One of the most important aspects of Computational Cell Biology is the understanding of the complicated dynamical processes that take place on plasma membranes. These processes are often so complicated that purely temporal models cannot always adequately capture the dynamics. On the other hand, spatial models can have large computational overheads. In this article, we review some of these issues with respect to chemistry, membrane microdomains and anomalous diffusion and discuss how to select appropriate modelling and simulation paradigms based on some or all of the following aspects: discrete, continuous, stochastic, delayed and complex spatial processes.
Using argument notation to engineer biological simulations with increased confidence
Alden, Kieran; Andrews, Paul S.; Polack, Fiona A. C.; Veiga-Fernandes, Henrique; Coles, Mark C.; Timmis, Jon
2015-01-01
The application of computational and mathematical modelling to explore the mechanics of biological systems is becoming prevalent. To significantly impact biological research, notably in developing novel therapeutics, it is critical that the model adequately represents the captured system. Confidence in adopting in silico approaches can be improved by applying a structured argumentation approach, alongside model development and results analysis. We propose an approach based on argumentation from safety-critical systems engineering, where a system is subjected to a stringent analysis of compliance against identified criteria. We show its use in examining the biological information upon which a model is based, identifying model strengths, highlighting areas requiring additional biological experimentation and providing documentation to support model publication. We demonstrate our use of structured argumentation in the development of a model of lymphoid tissue formation, specifically Peyer's Patches. The argumentation structure is captured using Artoo (www.york.ac.uk/ycil/software/artoo), our Web-based tool for constructing fitness-for-purpose arguments, using a notation based on the safety-critical goal structuring notation. We show how argumentation helps in making the design and structured analysis of a model transparent, capturing the reasoning behind the inclusion or exclusion of each biological feature and recording assumptions, as well as pointing to evidence supporting model-derived conclusions. PMID:25589574
A High-Resolution, Three-Dimensional Model of Jupiter's Great Red Spot
NASA Technical Reports Server (NTRS)
Cho, James Y.-K.; delaTorreJuarez, Manuel; Ingersoll, Andrew P.; Dritschel, David G.
2001-01-01
The turbulent flow at the periphery of the Great Red Spot (GRS) contains many fine-scale filamentary structures, while the more quiescent core, bounded by a narrow high-velocity ring, exhibits organized, possibly counterrotating, motion. Past studies have neither been able to capture this complexity nor adequately study the effect of vertical stratification L_R(z) on the GRS. We present results from a series of high-resolution, three-dimensional simulations that advect the dynamical tracer, potential vorticity. The detailed flow is successfully captured with a characteristic value of L_R ≈ 2000 km, independent of the precise vertical stratification profile.
NASA Astrophysics Data System (ADS)
Oaida, C. M.; Skiles, M.; Painter, T. H.; Xue, Y.
2015-12-01
The mountain snowpack is an essential resource for both the environment and society. Observational and energy balance modeling work has shown that dust on snow (DOS) in the western U.S. (WUS) is a major contributor to snow processes, including snowmelt timing and runoff amount, in regions like the Upper Colorado River Basin (UCRB). In order to accurately estimate the impact of DOS on the hydrologic cycle and water resources, now and under a changing climate, we need to be able to (1) adequately simulate the snowpack (accumulation), and (2) realistically represent DOS processes in models. Energy balance models do not capture the impact on a broader local or regional scale, nor the land-atmosphere feedbacks, while GCM studies cannot resolve orographic precipitation processes, and therefore snowpack accumulation, owing to coarse spatial resolution and smoother terrain. All this implies the impacts of dust on snow on the mountain snowpack and other hydrologic processes are likely not well captured in current modeling studies. Recent increases in computing power allow RCMs to be used at higher spatial resolutions, while recent in situ observations of dust-in-snow properties can help constrain modeling simulations. Therefore, in the work presented here, we take advantage of these latest resources to address some of the challenges outlined above. We employ the newly enhanced WRF/SSiB regional climate model at 4 km horizontal resolution. This scale has been shown by others to be adequate in capturing orographic processes over the WUS. We also constrain the magnitude of dust deposition provided by a global chemistry and transport model with in situ measurements taken at sites in the UCRB. Furthermore, we adjust the dust absorptive properties based on observed values at these sites, as opposed to generic global ones. This study aims to improve simulation of the impact of dust in snow on the hydrologic cycle and related water resources.
NASA Technical Reports Server (NTRS)
Hutto, Clayton; Briscoe, Erica; Trewhitt, Ethan
2012-01-01
Societal-level macro models of social behavior do not sufficiently capture the nuances needed to adequately represent the dynamics of person-to-person interactions. Likewise, individual-level micro models of agents have limited scalability - even minute parameter changes can drastically affect a model's response characteristics. This work presents an approach that uses agent-based modeling to represent detailed intra- and inter-personal interactions, as well as a system dynamics model to integrate societal-level influences via reciprocating functions. A Cognitive Network Model (CNM) is proposed as a method of quantitatively characterizing cognitive mechanisms at the intra-individual level. To capture the rich dynamics of interpersonal communication for the propagation of beliefs and attitudes, a Socio-Cognitive Network Model (SCNM) is presented. The SCNM uses socio-cognitive tie strength to regulate how agents influence--and are influenced by--one another's beliefs during social interactions. We then present experimental results which support the use of this network-analytical approach, and we discuss its applicability towards characterizing and understanding human information processing.
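The abstract does not give the SCNM update rule, but the role of socio-cognitive tie strength can be sketched with a hypothetical linear belief-influence step:

```python
import numpy as np

# A toy version of a tie-strength-weighted influence step (illustrative only;
# the actual SCNM update rule is not published in this abstract): each agent
# moves its belief toward its neighbours' beliefs in proportion to the
# socio-cognitive tie strength w_ij.
rng = np.random.default_rng(7)

n = 50
w = rng.random((n, n)) * (rng.random((n, n)) < 0.1)  # sparse tie strengths
np.fill_diagonal(w, 0.0)
belief = rng.uniform(-1, 1, n)                       # initial attitudes

eta = 0.1                                            # susceptibility to influence
for step in range(200):
    influence = w @ belief - w.sum(axis=1) * belief  # sum_j w_ij * (b_j - b_i)
    belief = np.clip(belief + eta * influence, -1, 1)

print(f"belief spread after interaction: std = {belief.std():.3f}")
```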
NASA Astrophysics Data System (ADS)
Choler, P.; Sea, W.; Briggs, P.; Raupach, M.; Leuning, R.
2009-09-01
Modelling leaf phenology in water-controlled ecosystems remains a difficult task because of high spatial and temporal variability in the interaction of plant growth and soil moisture. Here, we move beyond widely used linear models to examine the performance of low-dimensional, nonlinear ecohydrological models that couple the dynamics of plant cover and soil moisture. The study area encompasses 400 000 km2 of semi-arid perennial tropical grasslands, dominated by C4 grasses, in the Northern Territory and Queensland (Australia). We prepared 8 yr time series (2001-2008) of climatic variables and estimates of fractional vegetation cover derived from MODIS Normalized Difference Vegetation Index (NDVI) for 400 randomly chosen sites, of which 25% were used for model calibration and 75% for model validation. We found that the mean absolute error of linear and nonlinear models did not markedly differ. However, nonlinear models presented key advantages: (1) they exhibited far less systematic error than their linear counterparts; (2) their error magnitude was consistent throughout a precipitation gradient while the performance of linear models deteriorated at the driest sites, and (3) they better captured the sharp transitions in leaf cover that are observed under high seasonality of precipitation. Our results showed that low-dimensional models including feedbacks between soil water balance and plant growth adequately predict leaf dynamics in semi-arid perennial grasslands. Because these models attempt to capture fundamental ecohydrological processes, they should be the favoured approach for prognostic models of phenology.
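A minimal sketch of the kind of low-dimensional, nonlinear coupling the abstract describes follows: a toy bucket model of soil moisture with water-limited logistic growth of plant cover. All parameter values are assumptions, not the calibrated model.

```python
import numpy as np

# A minimal coupled soil-moisture / plant-cover model in the spirit of the
# abstract (illustrative only; parameter values are assumptions).
rng = np.random.default_rng(0)

dt = 1.0                      # days
days = 8 * 365
rain = rng.exponential(2.0, days) * (rng.random(days) < 0.15)  # mm/day, patchy

s, v = 0.3, 0.1               # soil moisture (0-1), fractional cover (0-1)
s_hist, v_hist = [], []
for p in rain:
    et = 5.0 * s * (0.2 + 0.8 * v)          # mm/day, cover-dependent ET
    drain = 20.0 * max(s - 0.8, 0.0)        # fast drainage near saturation
    s += (p - et - drain) / 200.0 * dt      # 200 mm plant-available storage
    s = min(max(s, 0.0), 1.0)
    growth = 0.05 * v * (1 - v) * max(s - 0.15, 0.0) / 0.85  # water-limited logistic
    senescence = 0.02 * v * (1.0 if s < 0.15 else 0.2)       # faster dieback when dry
    v += (growth - senescence) * dt
    v = min(max(v, 0.01), 1.0)
    s_hist.append(s); v_hist.append(v)

print("mean cover %.2f, mean soil moisture %.2f" % (np.mean(v_hist), np.mean(s_hist)))
# The nonlinearity (growth gated by a soil-moisture threshold) is what lets
# such models reproduce the sharp green-up transitions noted in the abstract.
```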
Harris, Keith M; Thandrayen, Joanne; Samphoas, Chien; Se, Pros; Lewchalermwongse, Boontriga; Ratanashevorn, Rattanakorn; Perry, Megan L; Britts, Choloe
2016-04-01
This study tested a low-cost method for estimating suicide rates in developing nations that lack adequate statistics. Data comprised reported suicides from Cambodia's 2 largest newspapers. Capture-recapture modeling estimated a suicide rate of 3.8/100 000 (95% CI = 2.5-6.7) for 2012. That compares to World Health Organization estimates of 1.3 to 9.4/100 000 and a Cambodian government estimate of 3.5/100 000. Suicide rates of males were twice that of females, and rates of those <40 years were twice that of those ≥40 years. Capture-recapture modeling with newspaper reports proved a reasonable method for estimating suicide rates for countries with inadequate official data. These methods are low-cost and can be applied to regions with at least 2 newspapers with overlapping reports. Means to further improve this approach are discussed. These methods are applicable to both recent and historical data, which can benefit epidemiological work, and may also be applicable to homicides and other statistics.
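With two overlapping sources, the method reduces to two-sample capture-recapture. A sketch using the Chapman bias-corrected estimator with hypothetical counts (not the study's data):

```python
import math

def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected two-sample capture-recapture estimator.

    n1, n2: suicides reported by newspaper 1 and 2; m: reported by both."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    return n_hat, math.sqrt(var)

# Hypothetical counts (not the study's data): paper A reports 180 suicides,
# paper B reports 150, and 90 cases appear in both.
n_hat, se = chapman_estimate(180, 150, 90)
print(f"estimated total suicides: {n_hat:.0f} (SE {se:.1f})")
print(f"95% CI: {n_hat - 1.96*se:.0f} to {n_hat + 1.96*se:.0f}")
# Dividing by the population size then gives a rate per 100 000.
```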
PACE and the Medicare+Choice risk-adjusted payment model.
Temkin-Greener, H; Meiners, M R; Gruenberg, L
2001-01-01
This paper investigates the impact of the Medicare principal inpatient diagnostic cost group (PIP-DCG) payment model on the Program of All-Inclusive Care for the Elderly (PACE). Currently, more than 6,000 Medicare beneficiaries who are nursing home certifiable receive care from PACE, a program poised for expansion under the Balanced Budget Act of 1997. Overall, our analysis suggests that the application of the PIP-DCG model to the PACE program would reduce Medicare payments to PACE, on average, by 38%. The PIP-DCG payment model bases its risk adjustment on inpatient diagnoses and does not adequately capture the risk of caring for a population with functional impairments.
Estimation of density of mongooses with capture-recapture and distance sampling
Corn, J.L.; Conroy, M.J.
1998-01-01
We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
PROCEDURE FOR ESTIMATING PERMANENT TOTAL ENCLOSURE COSTS
The paper discusses a procedure for estimating permanent total enclosure (PTE) costs. (NOTE: Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use PTEs, enclosures that mee...
NASA Astrophysics Data System (ADS)
Osterman, G. B.; Neu, J. L.; Eldering, A.; Pinder, R. W.; Tang, Y.; McQueen, J.
2014-12-01
Most regional scale models that are used for air quality forecasts and ozone source attribution do not adequately capture the distribution of ozone in the mid- and upper troposphere, but it is unclear how this shortcoming relates to their ability to simulate surface ozone. We combine ozone profile data from the NASA Earth Observing System (EOS) Tropospheric Emission Spectrometer (TES) and a new joint product from TES and the Ozone Monitoring Instrument along with ozonesonde measurements and EPA AirNow ground station ozone data to examine air quality events during August 2006 in the Community Multi-Scale Air Quality (CMAQ) and National Air Quality Forecast Capability (NAQFC) models. We present both aggregated statistics and case-study analyses with the goal of assessing the relationship between the models' ability to reproduce surface air quality events and their ability to capture the vertical distribution of ozone. We find that the models lack the mid-tropospheric ozone variability seen in TES and the ozonesonde data, and discuss the conditions under which this variability appears to be important for surface air quality.
A Study of Fan Stage/Casing Interaction Models
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Carney, Kelly; Gallardo, Vicente
2003-01-01
The purpose of the present study is to investigate the performance of several existing and new blade-case interaction modeling capabilities that are compatible with the large system simulations used to capture structural response during blade-out events. Three contact models are examined for simulating the interactions between a rotor bladed disk and a case: a radial gap element, a linear gap element, and a new element based on a hydrodynamic formulation. The first two models are currently available in commercial finite element codes such as NASTRAN and have been shown to perform adequately for simulating rotor-case interactions. The hydrodynamic model, although not readily available in commercial codes, may prove to be better able to characterize rotor-case interactions.
Simcharoen, S.; Pattanavibool, A.; Karanth, K.U.; Nichols, J.D.; Kumar, N.S.
2007-01-01
We used capture-recapture analyses to estimate the density of a tiger Panthera tigris population in the tropical forests of Huai Kha Khaeng Wildlife Sanctuary, Thailand, from photographic capture histories of 15 distinct individuals. The closure test results (z = 0.39, P = 0.65) provided some evidence in support of the demographic closure assumption. Fit of eight plausible closed models to the data indicated more support for model Mh, which incorporates individual heterogeneity in capture probabilities. This model generated an average capture probability p̂ = 0.42 and an abundance estimate N̂ (SE[N̂]) = 19 (9.65) tigers. The sampled area Â(W) (SE[Â(W)]) = 477.2 (58.24) km² yielded a density estimate D̂ (SE[D̂]) = 3.98 (0.51) tigers per 100 km². Huai Kha Khaeng Wildlife Sanctuary could therefore hold 113 tigers and the entire Western Forest Complex c. 720 tigers. Although based on field protocols that constrained us to use sub-optimal analyses, this estimated tiger density is comparable to tiger densities in Indian reserves that support moderate prey abundances. However, tiger densities in well-protected Indian reserves with high prey abundances are three times higher. If given adequate protection we believe that the Western Forest Complex of Thailand could potentially harbour >2,000 wild tigers, highlighting its importance for global tiger conservation. The monitoring approaches we recommend here would be useful for managing this tiger population.
NASA Astrophysics Data System (ADS)
Orhan, K.; Mayerle, R.
2016-12-01
A methodology comprising estimates of power yield, evaluation of the effects of power extraction on flow conditions, and near-field investigations to deliver wake characteristics, recovery and interactions is described and applied to several straits in Indonesia. Site selection is done with high-resolution, three-dimensional flow models providing sufficient spatiotemporal coverage. Much attention has been given to the meteorological forcing, and to conditions at the open sea boundaries, to adequately capture the density gradients and flow fields. Model verification using tidal records shows excellent agreement. Sites with adequate depth for energy conversion using horizontal-axis tidal turbines, average kinetic power density greater than 0.5 kW/m2, and surface area larger than 0.5 km2 are defined as energy hotspots. Spatial variation of the average extractable electric power is determined, and the annual tidal energy resource is estimated for the straits in question. The results showed that the potential for tidal power generation in Indonesia is likely to exceed previous predictions, reaching around 4,800 MW. To assess the impact of the devices, flexible mesh models with higher resolutions have been developed. Effects on flow conditions and near-field turbine wakes are resolved in greater detail with triangular horizontal grids. The energy is assumed to be removed uniformly by sub-grid-scale arrays of turbines, and calculations are made based on velocities at the hub heights of the devices. An additional drag force resulting in dissipation of 10% to 60% of the pre-existing kinetic power within a flow cross-section is introduced to capture the impacts. It was found that the effect of power extraction on water levels and flow speeds in adjacent areas is not significant. Results show the effectiveness of the method in capturing wake characteristics and recovery reasonably well with low computational cost.
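The hotspot screening criterion can be reproduced from modeled current speeds via the kinetic power density P = ½ρ|v|³. The sketch below uses a synthetic two-constituent tidal speed series rather than the paper's model output:

```python
import numpy as np

# Kinetic power density of a tidal stream: P = 0.5 * rho * |v|^3, in W per m^2
# of flow cross-section. The abstract's hotspot screening (mean P > 0.5 kW/m^2)
# is applied to a synthetic hub-height speed series (illustrative amplitudes).
rho = 1025.0                                  # sea water density, kg/m^3

t = np.arange(0, 30 * 24, 0.5)                # 30 days, half-hourly (hours)
M2, S2 = 12.42, 12.0                          # principal tidal periods (hours)
speed = np.abs(1.8 * np.sin(2*np.pi*t/M2) + 0.6 * np.sin(2*np.pi*t/S2))  # m/s

power = 0.5 * rho * speed ** 3                # W/m^2
print(f"mean kinetic power density: {power.mean()/1e3:.2f} kW/m^2")
print(f"hotspot criterion (> 0.5 kW/m^2): {power.mean()/1e3 > 0.5}")
```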
High-frequency health data and spline functions.
Martín-Rodríguez, Gloria; Murillo-Fort, Carlos
2005-03-30
Seasonal variations are highly relevant for health service organization. In general, short-run movements of medical magnitudes are important features for managers in this field to make adequate decisions. Thus, the analysis of the seasonal pattern in high-frequency health data is an appealing task. The aim of this paper is to propose procedures that allow the analysis of the seasonal component in this kind of data by means of spline functions embedded into a structural model. In the proposed method, useful adaptations of the traditional spline formulation are developed, and the resulting procedures are capable of capturing periodic variations, whether deterministic or stochastic, in a parsimonious way. Finally, these methodological tools are applied to a series of daily emergency service demand in order to capture simultaneous seasonal variations whose periods differ.
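The paper embeds periodic splines in a structural model; as a simpler stand-in, the sketch below fits a parsimonious periodic basis (truncated Fourier terms) to a synthetic daily emergency-demand series, to show how weekly and annual components with different periods can be captured simultaneously:

```python
import numpy as np

# A stand-in for the paper's spline-based seasonal component: weekly and
# annual patterns in a synthetic daily emergency-demand series are captured
# with a small periodic (Fourier) basis fitted by least squares.
rng = np.random.default_rng(2)
days = np.arange(3 * 365)
demand = (100 + 10 * np.sin(2*np.pi*days/365.25)   # annual cycle
          + 5 * np.cos(2*np.pi*days/7)             # weekly cycle
          + rng.normal(0, 4, days.size))           # noise

def periodic_basis(t, period, n_harm):
    cols = [np.ones_like(t, dtype=float)]
    for k in range(1, n_harm + 1):
        cols += [np.sin(2*np.pi*k*t/period), np.cos(2*np.pi*k*t/period)]
    return np.column_stack(cols)

X = np.column_stack([periodic_basis(days, 365.25, 2),
                     periodic_basis(days, 7, 2)[:, 1:]])  # drop duplicate intercept
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
fitted = X @ beta
print("residual std: %.2f (noise std was 4)" % (demand - fitted).std())
```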
NASA Astrophysics Data System (ADS)
Osterman, G. B.; Neu, J. L.; Eldering, A.; Pinder, R. W.; Tang, Y.; McQueen, J.
2012-12-01
At night, ozone can be transported long distances above the surface inversion layer without chemical destruction or deposition. As the boundary layer breaks up in the morning, this nocturnal ozone can be mixed down to the surface and rapidly increase ozone concentrations at a rate that can rival chemical ozone production. Most regional scale models that are used for air quality forecasts and ozone source attribution do not adequately capture nighttime ozone concentrations and transport. We combine ozone profile data from the NASA Earth Observing System (EOS) Tropospheric Emission Spectrometer (TES) and other sensors, ozonesonde data collected during the INTEX Ozonesonde Network Study (IONS), EPA AirNow ground station ozone data, the Community Multi-Scale Air Quality (CMAQ) model, and the National Air Quality Forecast Capability (NAQFC) model to examine air quality events during August 2006. We present both aggregated statistics and case-study analyses that assess the relationship between the models' ability to reproduce surface air quality events and their ability to capture the vertical distribution of ozone both during the day and at night. We perform the comparisons looking at the geospatial dependence in the differences between the measurements and models under different surface ozone conditions.
Legacy nutrient dynamics and patterns of catchment response under changing land use and management
NASA Astrophysics Data System (ADS)
Attinger, S.; Van, M. K.; Basu, N. B.
2017-12-01
Watersheds are complex heterogeneous systems that store, transform, and release water and nutrients under a broad distribution of both natural and anthropogenic controls. Many current watershed models, from complex numerical models to simpler reservoir-type models, are considered to be well-developed in their ability to predict fluxes of water and nutrients to streams and groundwater. They are generally less adept, however, at capturing watershed storage dynamics. In other words, many current models are run with an assumption of steady-state dynamics, and focus on nutrient flows rather than changes in nutrient stocks within watersheds. Although these commonly used modeling approaches may be able to adequately capture short-term watershed dynamics, they are unable to represent the clear nonlinearities or hysteresis responses observed in watersheds experiencing significant changes in nutrient inputs. To address this gap, we have developed a parsimonious modeling approach designed to capture long-term catchment responses to spatial and temporal changes in nutrient inputs. In this approach, we conceptualize the catchment as a biogeochemical reactor that is driven by nutrient inputs and characterized internally by both biogeochemical degradation and residence or travel time distributions, resulting in a specific nutrient output. For the model simulations, we define a range of scenarios to represent real-world changes in land use and management implemented to improve water quality. We then introduce the concept of state-space trajectories to describe system responses to these potential changes in anthropogenic forcings. We also increase model complexity, in a stepwise fashion, by dividing the catchment into multiple biogeochemical reactors, coupled in series or in parallel. Using this approach, we attempt to answer the following questions: (1) What level of model complexity is needed to capture observed system responses? (2) How can we explain different patterns of nonlinearity in watershed nutrient dynamics? (3) How does the accumulation of nutrient legacies within watersheds impact current and future water quality?
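A toy version of the "catchment as biogeochemical reactor" concept, with an assumed travel time distribution, decay rate, and loading history: the output is the input history routed through the travel time distribution with first-order degradation in transit, and the lagged response after a load reduction is the legacy effect.

```python
import numpy as np

# The "catchment as biogeochemical reactor" idea in a few lines (assumed toy
# parameterization, not the authors' model): nutrient output is the input
# history routed through a travel time distribution p(tau) while degrading
# at a first-order rate k in transit.
dt = 0.5                                   # years
tau = np.arange(dt/2, 60, dt)
mean_tt, k = 12.0, 0.02                    # mean travel time (yr), decay (1/yr)
p = (1/mean_tt) * np.exp(-tau/mean_tt)     # exponential TTD for simplicity
weights = p * np.exp(-k * tau)             # survival of nutrient en route

t = np.arange(0, 120, dt)
loading = np.interp(t, [0, 40, 50, 120], [5, 20, 8, 8])   # ramp up, then cut

out = np.convolve(loading, weights)[:len(t)] * dt

# Plotting output against input traces a loop in state space: the hysteresis
# the abstract attributes to legacy nutrient storage.
for yr in (40, 55, 80, 115):
    i = int(yr/dt)
    print(f"year {yr}: input {loading[i]:5.1f}, output {out[i]:5.2f}")
```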
Decision insight into stakeholder conflict for ERN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siirola, John; Tidwell, Vincent Carroll; Benz, Zachary O.
Participatory modeling has become an important tool in facilitating resource decision making and dispute resolution. Approaches to modeling that are commonly used in this context often do not adequately account for important human factors. Current techniques provide insights into how certain human activities and variables affect resource outcomes; however, they do not directly simulate the complex variables that shape how, why, and under what conditions different human agents behave in ways that affect resources and human interactions related to them. Current approaches also do not adequately reveal how the effects of individual decisions scale up to have systemic-level effects in complex resource systems. This lack of integration prevents the development of more robust models to support decision making and dispute resolution processes. Development of integrated tools is further hampered by the fact that collection of primary data for decision-making modeling is costly and time consuming. This project seeks to develop a new approach to resource modeling that incorporates both technical and behavioral modeling techniques into a single decision-making architecture. The modeling platform is enhanced by use of traditional and advanced processes and tools for expedited data capture. Specific objectives of the project are: (1) Develop a proof of concept for a new technical approach to resource modeling that combines the computational techniques of system dynamics and agent-based modeling, (2) Develop an iterative, participatory modeling process supported with traditional and advanced data capture techniques that may be utilized to facilitate decision making, dispute resolution, and collaborative learning processes, and (3) Examine potential applications of this technology and process. The development of this decision support architecture included both the engineering of the technology and the development of a participatory method to build and apply the technology. Stakeholder interaction with the model and associated data capture was facilitated through two very different modes of engagement: one a standard interface involving radio buttons, slider bars, graphs and plots, while the other utilized an immersive serious-gaming interface. The decision support architecture developed through this project was piloted in the Middle Rio Grande Basin to examine how these tools might be utilized to promote enhanced understanding and decision-making in the context of complex water resource management issues. Potential applications of this architecture and its capacity to lead to enhanced understanding and decision-making were assessed through qualitative interviews with study participants who represented key stakeholders in the basin.
Defining metrics of the Quasi-Biennial Oscillation in global climate models
NASA Astrophysics Data System (ADS)
Schenzinger, Verena; Osprey, Scott; Gray, Lesley; Butchart, Neal
2017-06-01
As the dominant mode of variability in the tropical stratosphere, the Quasi-Biennial Oscillation (QBO) has been subject to extensive research. Though there is a well-developed theory of this phenomenon being forced by wave-mean flow interaction, simulating the QBO adequately in global climate models still remains difficult. This paper presents a set of metrics to characterize the morphology of the QBO using a number of different reanalysis datasets and the FU Berlin radiosonde observation dataset. The same metrics are then calculated from Coupled Model Intercomparison Project 5 and Chemistry-Climate Model Validation Activity 2 simulations which included a representation of QBO-like behaviour to evaluate which aspects of the QBO are well captured by the models and which ones remain a challenge for future model development.
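Two of the simplest such metrics, period and amplitude, can be computed from an equatorial zonal-wind time series. The sketch below uses a synthetic series and is not the paper's metric set:

```python
import numpy as np

# Simple QBO metrics from a monthly equatorial zonal-wind series u(t)
# (e.g., at 30 hPa): period from zero crossings of the smoothed wind, and
# amplitude from its standard deviation. The series here is synthetic; the
# paper applies its metrics to reanalyses, radiosondes and model output.
rng = np.random.default_rng(3)
months = np.arange(480)                                   # 40 years
u = 20 * np.sin(2*np.pi*months/28) + rng.normal(0, 2, months.size)  # ~28-month QBO

u_smooth = np.convolve(u, np.ones(5)/5, mode="same")      # light smoothing
crossings = np.where(np.diff(np.sign(u_smooth)) != 0)[0]
period = 2 * np.diff(crossings).mean()                    # two crossings per cycle
amplitude = np.sqrt(2) * u.std()                          # sine-equivalent amplitude

print(f"estimated period: {period:.1f} months (true 28)")
print(f"estimated amplitude: {amplitude:.1f} m/s (true 20)")
```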
Effect of Multiple Scattering on the Compton Recoil Current Generated in an EMP, Revisited
Farmer, William A.; Friedman, Alex
2015-06-18
Multiple scattering has historically been treated in EMP modeling through the obliquity factor. The validity of this approach is examined here. A simplified model problem, which correctly captures cyclotron motion, Doppler shifting due to the electron motion, and multiple scattering, is first considered. The simplified problem is solved three ways: with the obliquity factor, Monte Carlo, and Fokker-Planck finite-difference approaches. Because of the Doppler effect, skewness occurs in the distribution. It is demonstrated that the obliquity factor does not correctly capture this skewness, but the Monte Carlo and Fokker-Planck finite-difference approaches do. The obliquity factor and Fokker-Planck finite-difference approaches are then compared in a fuller treatment, which includes the initial Klein-Nishina distribution of the electrons, and the momentum dependence of both drag and scattering. It is found that, in general, the obliquity factor is adequate for most situations. However, as the gamma energy increases and the Klein-Nishina distribution becomes more peaked in the forward direction, skewness in the distribution causes greater disagreement between the obliquity factor and a more accurate model of multiple scattering.
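The core of the comparison can be reduced to a few lines: for azimuthally symmetric small-angle scattering, the forward current damps as exp(−νt), which is what the obliquity factor assumes, and a direct Monte Carlo of the same process can be checked against it. Parameters are illustrative; the paper's fuller treatment adds drag, Doppler skewness and the Klein-Nishina source.

```python
import numpy as np

# A reduced check of the obliquity-factor assumption (not the EMP model
# itself): azimuthally symmetric small-angle scattering gives
# d<cos(theta)>/dt = -nu <cos(theta)>, i.e. exponential damping of the
# forward (Compton) current. The Monte Carlo simulates the same process.
rng = np.random.default_rng(4)

n, steps, dt = 20000, 400, 1.0
sigma = 0.05                                   # rms deflection per step (rad)
nu = sigma**2 / (2*dt)                         # predicted damping rate

cos_t = np.ones(n)                             # all electrons start forward
sin_t = np.zeros(n)
for step in range(steps):
    delta = rng.normal(0, sigma, n)            # polar deflection
    phi = rng.uniform(0, 2*np.pi, n)           # azimuth about current direction
    new_cos = cos_t*np.cos(delta) + sin_t*np.sin(delta)*np.cos(phi)
    cos_t = np.clip(new_cos, -1, 1)
    sin_t = np.sqrt(1 - cos_t**2)
    if step in (99, 199, 399):
        t = (step + 1) * dt
        print(f"t={t:5.0f}: MC <cos> = {cos_t.mean():.3f}, "
              f"obliquity exp(-nu t) = {np.exp(-nu*t):.3f}")
# Agreement is good in this reduced setting; the paper's point is that once
# Doppler skewness and the Klein-Nishina source enter, the factor can fail.
```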
NASA Astrophysics Data System (ADS)
Noacco, V.; Wagener, T.; Pianosi, F.; Philp, T.
2017-12-01
Insurance companies provide cover against a wide range of threats, such as natural catastrophes, nuclear incidents and terrorism. To quantify risk and support investment decisions, mathematical models are used, for example to set the premiums charged to clients that protect them from financial loss, should deleterious events occur. While these models are essential tools for adequately assessing the risk attached to an insurer's portfolio, their development is costly and their value for decision-making may be limited by an incomplete understanding of uncertainty and sensitivity. Aside from the business need to understand risk and uncertainty, the insurance sector also faces regulation which requires firms to test their models in such a way that uncertainties are appropriately captured and that plans are in place to assess the risks and their mitigation. The building and testing of models constitutes a high cost for insurance companies, and it is a time-intensive activity. This study uses an established global sensitivity analysis toolbox (SAFE) to more efficiently capture the uncertainties and sensitivities embedded in models used by a leading re/insurance firm, with structured approaches to validate these models and test the impact of assumptions on the model predictions. It is hoped that this in turn will lead to better-informed and more robust business decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Min; Zhuang, Qianlai; Cook, D.
2011-08-31
Satellite remote sensing provides continuous temporal and spatial information on terrestrial ecosystems. Using these remote sensing data together with eddy flux measurements and biogeochemical models, such as the Terrestrial Ecosystem Model (TEM), should provide a more adequate quantification of carbon dynamics of terrestrial ecosystems. Here we use Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI), Land Surface Water Index (LSWI) and carbon flux data of AmeriFlux to conduct such a study. We first modify the gross primary production (GPP) modeling in TEM by incorporating EVI and LSWI to account for the effects of changes in canopy photosynthetic capacity, phenology and water stress. Second, we parameterize and verify the new version of TEM with eddy flux data. We then apply the model to the conterminous United States over the period 2000-2005 at a 0.05° × 0.05° spatial resolution. We find that the new version of TEM improves on the previous version and generally captures the expected temporal and spatial patterns of regional carbon dynamics. We estimate that regional GPP is between 7.02 and 7.78 PgC yr⁻¹, net primary production (NPP) ranges from 3.81 to 4.38 PgC yr⁻¹, and net ecosystem production (NEP) varies within 0.08-0.73 PgC yr⁻¹ over the period 2000-2005 for the conterminous United States. The uncertainty due to parameterization is 0.34, 0.65 and 0.18 PgC yr⁻¹ for the regional estimates of GPP, NPP and NEP, respectively. The effects of extreme climate and disturbances, such as the severe drought in 2002 and the destructive Hurricane Katrina in 2005, were captured by the model. Our study provides a new, independent and more adequate measure of carbon fluxes for the conterminous United States, which will benefit studies of carbon-climate feedback and facilitate policy-making for carbon management and climate.
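The abstract does not reproduce the modified TEM equations, but EVI and LSWI typically enter satellite-driven GPP models through a light-use-efficiency form, sketched here with assumed parameter values:

```python
# A sketch of how EVI and LSWI typically enter a satellite-driven GPP model
# (in the spirit of light-use-efficiency formulations; the exact equations
# and parameters of the modified TEM are not reproduced here).
def gpp(par, evi, lswi, t_air, eps_max=0.5, lswi_max=0.6,
        t_min=0.0, t_opt=20.0, t_max=40.0):
    """GPP (gC/m2/day) = eps_max * Tscalar * Wscalar * EVI * PAR."""
    # Temperature limitation (ramp peaking at t_opt)
    t_scalar = ((t_air - t_min) * (t_air - t_max)) / (
        (t_air - t_min) * (t_air - t_max) - (t_air - t_opt) ** 2)
    t_scalar = max(0.0, t_scalar)
    # Water limitation from satellite LSWI (canopy water content proxy)
    w_scalar = (1.0 + lswi) / (1.0 + lswi_max)
    # Fraction of PAR absorbed by photosynthesizing vegetation ~ EVI
    return eps_max * t_scalar * w_scalar * evi * par

# Hypothetical mid-summer inputs: PAR in mol photons/m2/day
print(f"GPP = {gpp(par=45.0, evi=0.55, lswi=0.30, t_air=22.0):.1f} gC/m2/day")
```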
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collis, Scott; Protat, Alain; May, Peter T.
2013-08-01
Comparisons between direct measurements and modeled values of vertical air motions in precipitating systems are complicated by differences in temporal and spatial scales. On one hand, vertically profiling radars more directly measure the vertical air motion but do not adequately capture full storm dynamics. On the other hand, vertical air motions retrieved from two or more scanning Doppler radars capture the full storm dynamics but require model constraints that may not capture all updraft features because of inadequate sampling, resolution, numerical constraints, and the fact that the storm is evolving as it is scanned by the radars. To investigate the veracity of radar-based retrievals, which can be used to verify numerically modeled vertical air motions, this article presents several case studies from storm events around Darwin, Northern Territory, Australia, in which measurements from a dual-frequency radar profiler system and volumetric radar-based wind retrievals are compared. While a direct comparison was not possible because of instrumentation location, an indirect comparison shows promising results, with volume retrievals comparing well to those obtained from the profiling system. This prompted a statistical analysis of an extended active monsoon period during the Tropical Warm Pool International Cloud Experiment (TWP-ICE). Results show less vigorous deep convective cores, with maximum updraft velocities occurring at lower heights than some cloud-resolving modeling studies suggest.
Disruption of River Networks in Nature and Models
NASA Astrophysics Data System (ADS)
Perron, J. T.; Black, B. A.; Stokes, M.; McCoy, S. W.; Goldberg, S. L.
2017-12-01
Many natural systems display especially informative behavior as they respond to perturbations. Landscapes are no exception. For example, longitudinal elevation profiles of rivers responding to changes in uplift rate can reveal differences among erosional mechanisms that are obscured while the profiles are in equilibrium. The responses of erosional river networks to perturbations, including disruption of their network structure by diversion, truncation, resurfacing, or river capture, may be equally revealing. In this presentation, we draw attention to features of disrupted erosional river networks that a general model of landscape evolution should be able to reproduce, including the consequences of different styles of planetary tectonics and the response to heterogeneous bedrock structure and deformation. A comparison of global drainage directions with long-wavelength topography on Earth, Mars, and Saturn's moon Titan reveals the extent to which persistent and relatively rapid crustal deformation has disrupted river networks on Earth. Motivated by this example and others, we ask whether current models of river network evolution adequately capture the disruption of river networks by tectonic, lithologic, or climatic perturbations. In some cases the answer appears to be no, and we suggest some processes that models may be missing.
Polanyi, Michael; Tompa, Emile
2004-01-01
Technology change, rising international trade and investment, and increased competition are changing the organization, distribution and nature of work in industrialized countries. To enhance productivity, employers are striving to increase innovation while minimizing costs. This is leading to an intensification of work demands on core employees and the outsourcing or casualization of more marginal tasks, often to contingent workers. The two prevailing models of work and health - demand-control and effort-reward imbalance - may not capture the full range of experiences of workers in today's increasingly flexible and competitive economies. To explore this proposition, we conducted a secondary qualitative analysis of interviews with 120 American workers [6]. Our analysis identifies aspects of work affecting the quality of workers' experiences that are largely overlooked by popular work-health models: the nature of social interactions with customers and clients; workers' belief in, and perception of, the importance of the product of their work. We suggest that the quality of work experiences is partly determined by the objective characteristics of the work environment, but also by the fit of the work environment with the worker's needs, interests, desires and personality, something not adequately captured in current models.
NASA Astrophysics Data System (ADS)
Kusangaya, Samuel; Warburton Toucher, Michele L.; van Garderen, Emma Archer
2018-02-01
Output from downscaled General Circulation Models (GCMs) is used to forecast climate change and provides information used as input for hydrological modelling. Given that our understanding of climate change points towards an increasing frequency, timing and intensity of extreme hydrological events, there is a need to assess the ability of downscaled GCMs to capture these extreme hydrological events. Extreme hydrological events play a significant role in regulating the structure and function of rivers and associated ecosystems. In this study, the Indicators of Hydrologic Alteration (IHA) method was adapted to assess the ability of streamflow simulated using downscaled GCMs (dGCMs) to capture extreme river dynamics (high and low flows), as compared to streamflow simulated using historical climate data from 1960 to 2000. The ACRU hydrological model was used for simulating streamflow for the 13 water management units of the uMngeni Catchment, South Africa. Statistically downscaled climate models obtained from the Climate System Analysis Group at the University of Cape Town were used as input for the ACRU model. Results indicated that high flows and extreme high flows (one-in-ten-year high flows/large flood events) were poorly represented in terms of timing, frequency and magnitude. Simulated streamflow using dGCM data also captures more low flows and extreme low flows (one-in-ten-year lowest flows) than streamflow simulated using historical climate data. The overall conclusion was that although dGCM output can reasonably be used to simulate overall streamflow, it performs poorly when simulating extreme high and low flows. Streamflow simulation from dGCMs must thus be used with caution in hydrological applications, particularly for design hydrology, as extreme high and low flows are still poorly represented. This arguably calls for further improvement of downscaling techniques in order to generate climate data that are more relevant and useful for hydrological applications such as design hydrology. Nevertheless, the availability of downscaled climate output offers the potential to explore climate model uncertainties in different hydro-climatic regions at local scales, where forcing data are often less accessible but more accurate and adequately detailed at finer spatial scales.
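A simple empirical version of the one-in-ten-year extremes used above can be computed from annual maxima and minima with Weibull plotting positions. The sketch uses synthetic flows, not the uMngeni simulations:

```python
import numpy as np

# IHA-style extremes: one-in-ten-year high (low) flow estimated from annual
# maxima (minima) of a daily streamflow series via Weibull plotting positions.
rng = np.random.default_rng(5)
years = 40
daily = rng.gamma(2.0, 5.0, size=(years, 365))        # m3/s, synthetic flows

ann_max = daily.max(axis=1)
ann_min = daily.min(axis=1)

def return_level(annual, T, high=True):
    x = np.sort(annual)[::-1] if high else np.sort(annual)
    ranks = np.arange(1, len(x) + 1)
    exceed_p = ranks / (len(x) + 1.0)                 # Weibull plotting position
    return np.interp(1.0 / T, exceed_p, x)

print(f"1-in-10-yr high flow: {return_level(ann_max, 10):.1f} m3/s")
print(f"1-in-10-yr low flow:  {return_level(ann_min, 10, high=False):.2f} m3/s")
# Comparing these levels between dGCM-forced and historically forced runs
# quantifies the misrepresentation of extremes the abstract reports.
```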
Irvine, Michael A; Hollingsworth, T Déirdre
2018-05-26
Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open-access library, with examples to aid researchers in rapidly fitting models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community.
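A compact sketch of the adaptive-tolerance idea follows: rejection ABC with the tolerance tightened each generation to a quantile of the previous generation's distances. It omits the importance weights of a full ABC-SMC scheme and is not the released library.

```python
import numpy as np

# Adaptive-tolerance rejection ABC on a toy problem: infer the rate of a
# Poisson count model from summary statistics (mean, variance).
rng = np.random.default_rng(6)

data = rng.poisson(7.0, 200)                      # "observed" counts
obs_summary = np.array([data.mean(), data.var()])

def simulate(lam):
    x = rng.poisson(lam, 200)
    return np.array([x.mean(), x.var()])

def adaptive_abc(n_keep=200, n_gen=4, quantile=0.3):
    lam = rng.uniform(0.1, 20.0, n_keep * 10)     # generation 0: prior draws
    d = np.array([np.linalg.norm(simulate(l) - obs_summary) for l in lam])
    eps = np.quantile(d, quantile)
    for g in range(n_gen):
        keep = lam[d <= eps][:n_keep]
        # perturb accepted particles to propose the next generation
        lam = np.abs(rng.choice(keep, n_keep * 10)
                     + rng.normal(0, keep.std(), n_keep * 10))
        d = np.array([np.linalg.norm(simulate(l) - obs_summary) for l in lam])
        eps = np.quantile(d, quantile)            # adaptive tolerance shrinks
        print(f"generation {g+1}: eps = {eps:.2f}")
    return lam[d <= eps]

posterior = adaptive_abc()
print(f"posterior mean rate: {posterior.mean():.2f} (truth 7.0)")
```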
Archetypes for Organisational Safety
NASA Technical Reports Server (NTRS)
Marais, Karen; Leveson, Nancy G.
2003-01-01
We propose a framework using system dynamics to model the dynamic behavior of organizations in accident analysis. Most current accident analysis techniques are event-based and do not adequately capture the dynamic complexity and non-linear interactions that characterize accidents in complex systems. In this paper we propose a set of system safety archetypes that model common safety culture flaws in organizations, i.e., the dynamic behavior of organizations that often leads to accidents. As accident analysis and investigation tools, the archetypes can be used to develop dynamic models that describe the systemic and organizational factors contributing to the accident. The archetypes help clarify why safety-related decisions do not always result in the desired behavior, and how independent decisions in different parts of the organization can combine to impact safety.
76 FR 55804 - Dicamba; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-09
... Considerations A. Analytical Enforcement Methodology Adequate enforcement methodologies, Methods I and II--gas chromatography with electron capture detection (GC/ECD), are available to enforce the tolerance expression. The...
Usability evaluation of mobile applications; where do we stand?
NASA Astrophysics Data System (ADS)
Zahra, Fatima; Hussain, Azham; Mohd, Haslina
2017-10-01
The range and availability of mobile applications is expanding rapidly. With the increased processing power available on portable devices, developers are broadening the range of services they offer by embracing smartphones in their extensive and diverse practices. However, usability testing and evaluation of mobile applications has not yet reached the maturity of that for web-based applications, and the existing usability models do not adequately capture the complexities of interacting with applications on a mobile platform. Therefore, this study presents a review of existing usability models for mobile applications. These models are in their infancy, but with time and more research they may eventually be adopted. Moreover, different categories of mobile apps (medical, entertainment, education) possess different functional and non-functional requirements, so customized models are required for diverse mobile applications.
Nonlinear finite element simulation of non-local tension softening for high strength steel material
NASA Astrophysics Data System (ADS)
Tong, F. M.
The capability of current finite element software to simulate the stress-strain relation beyond the elastic-plastic region has been limited by the requirement that the computational finite elements' stiffness matrices remain positive. Although analysis up to the peak stress has proved adequate for analysis and design, it gives no indication of the failure behaviour that follows. An attempt was therefore made to develop a modelling technique capable of capturing the complete stress-deformation response in an analysis beyond the limit point. The proposed model characterizes a cyclic loading and unloading procedure, as observed in a typical laboratory uniaxial cyclic test, along with a series of material property updates. The Voce equation and a polynomial function were proposed to define the monotonic elastoplastic hardening and softening behaviour, respectively. A modified form of the Voce equation was used to capture the reloading response in the softening region. To accommodate the reduced load capacity of the material at each subsequent softening point, an optimization macro was written to determine the optimum load the material could withstand. This preliminary study has ignored geometrical effects and is thus incapable of capturing the localized necking phenomenon that accompanies many ductile materials; the current softening model is sufficient if a global measure is considered. Several validation cases were performed to investigate the feasibility of the modelling technique, and the results proved satisfactory. The ANSYS finite element software is used as the platform on which the modelling technique operates.
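A minimal sketch of the Voce saturation law named above, in its common form sigma = sigma_sat - (sigma_sat - sigma_y) * exp(-b * eps_p); the abstract does not spell out the exact parameterization used, so the symbols and values here are illustrative assumptions.

import numpy as np

def voce_stress(eps_p, sigma_y=350.0, sigma_sat=600.0, b=15.0):
    # monotonic hardening stress (MPa) as a function of plastic strain
    return sigma_sat - (sigma_sat - sigma_y) * np.exp(-b * eps_p)

print(voce_stress(np.linspace(0.0, 0.5, 6)))  # rises from sigma_y toward sigma_sat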
NASA Astrophysics Data System (ADS)
Chapman, Steven W.; Parker, Beth L.; Sale, Tom C.; Doner, Lee Ann
2012-08-01
It is now widely recognized that contaminant release from low permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to appropriately characterize sites and apply models for prediction incorporating these effects. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients and capturing mass distribution in low permeability zones requires much higher resolution than commonly practiced. This paper explores validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high resolution mode to simulate scenarios involving diffusion into and out of low permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean water flushing, and 2) the two-layer analytical solution of Sale et al. (2008) involving a relatively simple scenario with an aquifer and underlying low permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization are applied to represent problem geometry, resolve flow fields and capture advective transport in the sands and diffusive transfer with low permeability layers and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models, capturing the style of the low permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making.
Get Over It! A Multilevel Threshold Autoregressive Model for State-Dependent Affect Regulation.
De Haan-Rietdijk, Silvia; Gottman, John M; Bergeman, Cindy S; Hamaker, Ellen L
2016-03-01
Intensive longitudinal data provide rich information, which is best captured when specialized models are used in the analysis. One of these models is the multilevel autoregressive model, which psychologists have applied successfully to study affect regulation as well as alcohol use. A limitation of this model is that the autoregressive parameter is treated as a fixed, trait-like property of a person. We argue that the autoregressive parameter may be state-dependent, for example, if the strength of affect regulation depends on the intensity of affect experienced. To allow such intra-individual variation, we propose a multilevel threshold autoregressive model. Using simulations, we show that this model can be used to detect state-dependent regulation with adequate power and Type I error. The potential of the new modeling approach is illustrated with two empirical applications that extend the basic model to address additional substantive research questions.
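A minimal sketch of the state-dependent autoregression the authors propose: the AR(1) inertia parameter switches according to whether the previous affect score exceeds a threshold. Parameter values are illustrative assumptions, not estimates from the paper.

import numpy as np

def simulate_threshold_ar(n=200, mu=0.0, tau=1.0, phi_low=0.3,
                          phi_high=0.7, sigma=1.0, seed=1):
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        # regime-switching autoregressive parameter
        phi = phi_high if y[t - 1] > tau else phi_low
        y[t] = mu + phi * (y[t - 1] - mu) + rng.normal(0.0, sigma)
    return y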
NASA Astrophysics Data System (ADS)
Torregrosa, A.; Flint, L. E.; Flint, A. L.; Peters, J.; Combs, C.
2014-12-01
Coastal fog modifies the hydrodynamic and thermodynamic properties of California watersheds, with the greatest impact on ecosystem functioning during arid summer months. Lowered maximum temperatures resulting from inland penetration of marine fog are probably adequate to capture fog effects on thermal land-surface characteristics; however, the hydrologic impacts of lowered evapotranspiration rates due to shade, fog drip, increased relative humidity, and other factors associated with fog events are more difficult to gauge. Fog products, such as those derived from National Weather Service Geostationary Operational Environmental Satellite (GOES) imagery, provide high-frequency (up to 15 min) views of fog and low cloud cover and can potentially improve water balance models. Even slight improvements in water balance calculations can benefit urban water managers and agricultural irrigation. The high frequency of GOES output provides the opportunity to explore options for integrating fog frequency data into water balance models. This pilot project compares GOES-derived fog frequency intervals (6, 12 and 24 hour) to determine which is most useful for water balance models and to develop model-relevant relationships between climatic and water balance variables. Seasonal diurnal thermal differences, plant ecophysiological processes, and phenology suggest that a day/night differentiation on a monthly basis may be adequate. To explore this hypothesis, we examined discharge data from stream gages and outputs from the USGS Basin Characterization Model for runoff, recharge, potential evapotranspiration, and actual evapotranspiration for the Russian River Watershed under low, medium, and high fog event conditions derived from hourly GOES imagery (1999-2009). We also differentiated fog events into daytime and nighttime versus a 24-hour compilation on a daily, monthly, and seasonal basis. Our data suggest that a daily time-step is required to adequately incorporate the hydrologic effect of fog.
Tertiary instability of zonal flows within the Wigner-Moyal formulation of drift turbulence
NASA Astrophysics Data System (ADS)
Zhu, Hongxuan; Ruiz, D. E.; Dodin, I. Y.
2017-10-01
The stability of zonal flows (ZFs) is analyzed within the generalized-Hasegawa-Mima model. The necessary and sufficient condition for a ZF instability, which is also known as the tertiary instability, is identified. The qualitative physics behind the tertiary instability is explained using the recently developed Wigner-Moyal formulation and the corresponding wave kinetic equation (WKE) in the geometrical-optics (GO) limit. By analyzing the drifton phase-space trajectories, we find that the corrections to the WKE proposed in Ref. are critical for capturing the spatial scales characteristic of the tertiary instability. That said, we also find that this instability itself cannot be adequately described within a GO formulation in principle. Using the Wigner-Moyal equations, which capture diffraction, we analytically derive the tertiary-instability growth rate and compare it with numerical simulations. This research was sponsored by the U.S. Department of Energy.
Longitudinal train dynamics model for a rail transit simulation system
Wang, Jinghui; Rakha, Hesham A.
2018-01-01
The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model adequately captures instantaneous train dynamics and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
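A minimal sketch of the calibration idea: fit the parameters of a simple speed-dependent acceleration model to observed traces by constrained nonlinear least squares. The quadratic model form, starting values and bounds are assumptions for illustration; the paper's actual formulation may differ.

import numpy as np
from scipy.optimize import minimize

def predicted_accel(v, p):
    a_max, b1, b2 = p
    return a_max - b1 * v - b2 * v**2  # illustrative dynamics form

def calibrate(v_obs, a_obs):
    sse = lambda p: np.sum((a_obs - predicted_accel(v_obs, p)) ** 2)
    res = minimize(sse, x0=[1.0, 0.01, 0.001],
                   bounds=[(0.0, 3.0), (0.0, 1.0), (0.0, 0.1)])
    return res.x  # calibrated parameters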
Gottert, Ann; Barrington, Clare; Pettifor, Audrey; McNaughton-Reyes, Heath Luz; Maman, Suzanne; MacPhail, Catherine; Kahn, Kathleen; Selin, Amanda; Twine, Rhian; Lippman, Sheri A
2016-08-01
Gender norms and gender role conflict/stress may influence HIV risk behaviors among men; however, scales measuring these constructs need further development and evaluation in African settings. We conducted exploratory and confirmatory factor analyses to evaluate the Gender Equitable Men's Scale (GEMS) and the Gender Role Conflict/Stress (GRC/S) scale among 581 men in rural northeast South Africa. The final 17-item GEMS was unidimensional, with adequate model fit and reliability (alpha = 0.79). Factor loadings were low (0.2-0.3) for items related to violence and sexual relationships. The final 24-item GRC/S scale was multidimensional with four factors: Success, power, competition; Subordination to women; Restrictive emotionality; and Sexual prowess. The scale had adequate model fit and good reliability (alpha = 0.83). While GEMS is a good measure of inequitable gender norms, new or revised scale items may need to be explored in the South African context. Adding the GRC/S scale to capture men's strain related to gender roles could provide important insights into men's risk behaviors.
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever-increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
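A minimal sketch of one plausible subset-selection strategy of the kind discussed above: cluster the ensemble in a space of climate-change signals and keep the member closest to each centroid. This generic k-means stand-in is an assumption; it is not necessarily either of the two optimal methods used in the study.

import numpy as np
from sklearn.cluster import KMeans

def select_subset(signals, k, seed=0):
    # signals: (n_simulations, n_variables) array of change signals
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(signals)
    chosen = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(signals[members] - km.cluster_centers_[c], axis=1)
        chosen.append(int(members[np.argmin(d)]))  # representative member
    return sorted(chosen)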
NASA Astrophysics Data System (ADS)
Matthes, J. H.; Dietze, M.; Fox, A. M.; Goring, S. J.; McLachlan, J. S.; Moore, D. J.; Poulter, B.; Quaife, T. L.; Schaefer, K. M.; Steinkamp, J.; Williams, J. W.
2014-12-01
Interactions between ecological systems and the atmosphere are the result of dynamic processes with system memories that persist from seconds to centuries. Adequately capturing long-term biosphere-atmosphere exchange within earth system models (ESMs) requires an accurate representation of changes in plant functional types (PFTs) through time and space, particularly at timescales associated with ecological succession. However, most model parameterization and development has occurred using datasets that span less than a decade. We tested the ability of ESMs to capture the ecological dynamics observed in paleoecological and historical data spanning the last millennium. Focusing on an area from the Upper Midwest to New England, we examined differences in the magnitude and spatial pattern of PFT distributions and ecotones between historic datasets and the CMIP5 inter-comparison project's large-scale ESMs. We then conducted a 1000-year model inter-comparison using six state-of-the-art biosphere models at sites that bridged regional temperature and precipitation gradients. The distribution of ecosystem characteristics in modeled climate space reveals widely disparate relationships between modeled climate and vegetation that led to large differences in long-term biosphere-atmosphere fluxes for this region. Model simulations revealed that both the interaction between climate and vegetation and the representation of ecosystem dynamics within models were important controls on biosphere-atmosphere exchange.
Ni, Bing-Jie; Ruscalleda, Maël; Pellicer-Nàcher, Carles; Smets, Barth F
2011-09-15
Nitrous oxide (N2O) can be formed during biological nitrogen (N) removal processes. In this work, a mathematical model is developed that describes N2O production and consumption during activated sludge nitrification and denitrification. The well-known ASM process models are extended to capture N2O dynamics during both nitrification and denitrification in biological N removal. Six additional processes and three additional reactants, all involved in known biochemical reactions, have been added. The validity and applicability of the model is demonstrated by comparing simulations with experimental data on N2O production from four different mixed-culture nitrification and denitrification reactor studies. Modeling results confirm that hydroxylamine oxidation by ammonium oxidizers (AOB) occurs 10 times slower when NO2- participates as final electron acceptor compared to the oxic pathway. Among the four denitrification steps, the last one (N2O reduction to N2) seems to be inhibited first when O2 is present. Overall, N2O production can account for 0.1-25% of the consumed N in different nitrification and denitrification systems, which can be well simulated by the proposed model. In conclusion, we provide a modeling structure which adequately captures N2O dynamics in autotrophic nitrification and heterotrophic denitrification driven biological N removal processes and which can form the basis for ongoing refinements.
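A minimal sketch of the mechanism highlighted above: in a lumped two-step denitrification chain, giving N2O reduction an oxygen-inhibition term makes N2O accumulate whenever O2 is present. Rate constants and the inhibition coefficient are illustrative assumptions, not the calibrated values of the extended ASM model.

import numpy as np
from scipy.integrate import solve_ivp

def denitrification(t, y, k_no2=2.0, k_n2o=1.5, K_I_O2=0.1, o2=0.05):
    no2, n2o, n2 = y
    r1 = k_no2 * no2                            # NO2- -> N2O (lumped)
    r2 = k_n2o * n2o * K_I_O2 / (K_I_O2 + o2)   # N2O -> N2, O2-inhibited
    return [-r1, r1 - r2, r2]

sol = solve_ivp(denitrification, (0.0, 5.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])  # residual NO2-, accumulated N2O, N2 produced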
Francisca, Franco Matías; Montoro, Marcos Alexis; Glatstein, Daniel Alejandro
2017-05-01
Landfill gas (LFG) management is one of the most important tasks in landfill operation and closure because of its potential contribution to global warming. The aim of this work is to present a case history evaluating an LFG capture and treatment system for the current landfill facility in Córdoba, Argentina. The results may be relevant for many developing countries around the world where landfill gas is not being properly managed. LFG generation is evaluated by modeling gas production with the zero-order model, the Landfill Gas Emissions Model (LandGEM; U.S. Environmental Protection Agency [EPA]), the Scholl Canyon model, and a triangular model. Variability in waste properties, weather, and landfill management conditions is analyzed in order to evaluate the feasibility of implementing different treatment systems. The results show the advantages of capturing and treating LFG in order to reduce emissions of gases responsible for global warming, and they determine the revenue rate needed to meet the project's financial requirements. This particular project halves the emission of equivalent tons of carbon dioxide (CO2) compared with the situation with no gas treatment. In addition, the study highlights the need for a change in electricity prices if implementing the project is to be economically feasible in the current Argentinean electrical market. Methane has 21 times the greenhouse gas potential of carbon dioxide, so it is of great importance to adequately manage biogas emissions from landfills. It is also environmentally advantageous to use this product as an alternative energy source, since doing so prevents methane emissions while avoiding fossil fuel consumption and minimizing carbon dioxide emissions. The analysis performed indicated that biogas capture and energy generation imply 3 times lower equivalent carbon dioxide emissions; however, a change in Argentinean electrical market fees is required to guarantee the financial feasibility of the project.
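A minimal sketch of a first-order-decay LFG estimate in the spirit of LandGEM: each year's deposited waste contributes methane generation that decays exponentially with age. The defaults k = 0.05 1/yr and L0 = 100 m3 CH4/Mg are illustrative assumptions; the zero-order, Scholl Canyon and triangular formulations compared in the study are not shown.

import math

def methane_generation(masses_by_year, year, k=0.05, L0=100.0):
    # masses_by_year: {deposit_year: Mg of waste}; returns m3 CH4/yr
    q = 0.0
    for t_dep, mass in masses_by_year.items():
        age = year - t_dep
        if age >= 0:
            q += k * L0 * mass * math.exp(-k * age)
    return q

print(methane_generation({2000: 50000, 2001: 52000}, 2010))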
Bromaghin, Jeffrey F.; Gates, Kenneth S.; Palmer, Douglas E.
2010-01-01
Many fisheries for Pacific salmon Oncorhynchus spp. are actively managed to meet escapement goal objectives. In fisheries where the demand for surplus production is high, an extensive assessment program is needed to achieve the opposing objectives of allowing adequate escapement and fully exploiting the available surplus. Knowledge of abundance is a critical element of such assessment programs. Abundance estimation using mark-recapture experiments in combination with telemetry has become common in recent years, particularly within Alaskan river systems. Fish are typically captured and marked in the lower river while migrating in aggregations of individuals from multiple populations. Recapture data are obtained using telemetry receivers that are co-located with abundance assessment projects near spawning areas, which provide large sample sizes and information on population-specific mark rates. When recapture data are obtained from multiple populations, unequal mark rates may reflect a violation of the assumption of homogeneous capture probabilities. A common analytical strategy is to test the hypothesis that mark rates are homogeneous and combine all recapture data if the test is not significant. However, mark rates are often low, and a test of homogeneity may lack sufficient power to detect meaningful differences among populations. In addition, differences among mark rates may provide information that could be exploited during parameter estimation. We present a temporally stratified mark-recapture model that permits capture probabilities and migratory timing through the capture area to vary among strata. Abundance information obtained from a subset of populations after the populations have segregated for spawning is jointly modeled with telemetry distribution data by use of a likelihood function. Maximization of the likelihood produces estimates of the abundance and timing of individual populations migrating through the capture area, thus yielding substantially more information than the total abundance estimate provided by the conventional approach. The utility of the model is illustrated with data for coho salmon O. kisutch from the Kasilof River in south-central Alaska.
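A minimal sketch of the stratified bookkeeping behind such a model: within each temporal stratum, the fraction of telemetry-examined fish found marked estimates the capture probability, and a Petersen-type ratio gives stratum abundance. This moment estimator stands in for the paper's joint likelihood, which is not reproduced here.

import numpy as np

def stratified_abundance(marks_released, n_examined, n_marked_found):
    # per-stratum Petersen-type estimate: N_h ~ M_h / (m_h / n_h)
    mark_rate = np.asarray(n_marked_found, float) / np.asarray(n_examined, float)
    return np.asarray(marks_released, float) / mark_rate

print(stratified_abundance([400, 500], [200, 250], [20, 35]).sum())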
Source Update Capture in Information Agents
NASA Technical Reports Server (NTRS)
Ashish, Naveen; Kulkarni, Deepak; Wang, Yao
2003-01-01
In this paper we present strategies for successfully capturing updates at Web sources. Web-based information agents provide integrated access to autonomous Web sources that can get updated. For many information agent applications we are interested in knowing when a Web source to which the application provides access has been updated. We may also be interested in capturing all the updates at a Web source over a period of time, i.e., detecting the updates and, for each update, retrieving and storing the new version of the data. Previous work on update and change detection by polling does not adequately address this problem. We present strategies for intelligently polling a Web source to efficiently capture changes at the source.
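A minimal sketch of one intelligent-polling heuristic consistent with the goal described above: shrink the polling interval after an observed change and grow it otherwise, within fixed bounds. The multiplicative update rule and its constants are assumptions for illustration, not the paper's specific strategies.

def next_interval(current_s, changed, min_s=60.0, max_s=86400.0,
                  shrink=0.5, grow=1.5):
    # returns the next polling interval in seconds
    interval = current_s * (shrink if changed else grow)
    return max(min_s, min(max_s, interval))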
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moges, Edom; Demissie, Yonas; Li, Hong-Yi
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evident in the diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture the response.
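A minimal sketch of the gating idea at the heart of a mixture of experts: two expert model outputs are blended by a logistic weight driven by an indicator variable (for example, an antecedent wetness index). The gate form and indicator choice are illustrative assumptions, not the study's fitted HME.

import numpy as np

def hme_output(q_expert1, q_expert2, indicator, a=0.0, b=1.0):
    w = 1.0 / (1.0 + np.exp(-(a + b * indicator)))  # gate weight in (0, 1)
    return w * q_expert1 + (1.0 - w) * q_expert2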
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
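A minimal sketch of the Hessian-based standard-error computation compared above: invert the numerically approximated Hessian of the negative log-likelihood at the maximum likelihood estimate. negloglik is a placeholder for the LBA likelihood (implemented in the R package glba); it is not reproduced here.

import numpy as np
from scipy.optimize import minimize

def mle_with_se(negloglik, theta0):
    res = minimize(negloglik, theta0, method="BFGS")
    # BFGS returns an approximate inverse Hessian, which estimates the
    # parameter covariance at the MLE
    se = np.sqrt(np.diag(res.hess_inv))
    return res.x, se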
Simulation tools for particle-based reaction-diffusion dynamics in continuous space
2014-01-01
Particle-based reaction-diffusion algorithms facilitate the modeling of the diffusional motion of individual molecules and the reactions between them in cellular environments. A physically realistic model, depending on the system at hand and the questions asked, would require different levels of modeling detail such as particle diffusion, geometrical confinement, particle volume exclusion or particle-particle interaction potentials. Higher levels of detail usually correspond to a larger number of parameters and higher computational cost. Certain systems, however, require these investments to be modeled adequately. Here we present a review of the current field of particle-based reaction-diffusion software packages operating on continuous space. Four nested levels of modeling detail are identified that capture increasing amounts of detail. Their applicability to different biological questions is discussed, ranging from straight diffusion simulations to sophisticated and expensive models that bridge towards coarse-grained molecular dynamics.
Electron-phonon interaction within classical molecular dynamics
Tamm, A.; Samolyuk, G.; Correa, A. A.; ...
2016-07-14
Here, we present a model for nonadiabatic classical molecular dynamics simulations that captures with high accuracy the wave-vector q dependence of the phonon lifetimes, in agreement with quantum mechanics calculations. It is based on a local view of the e-ph interaction where individual atom dynamics couples to electrons via a damping term that is obtained as the low-velocity limit of the stopping power of a moving ion in a host. The model is parameter free, as its components are derived from ab initio-type calculations, is readily extended to the case of alloys, and is adequate for large-scale molecular dynamics computer simulations. We also show how this model removes some oversimplifications of the traditional ionic damped dynamics commonly used to describe situations beyond the Born-Oppenheimer approximation.
Harte, Philip T.
2017-01-01
A common assumption with groundwater sampling is that low (<0.5 L/min) pumping rates during well purging and sampling capture primarily lateral flow from the formation through the well-screened interval at a depth coincident with the pump intake. However, if the intake is adjacent to a low hydraulic conductivity part of the screened formation, this scenario will induce vertical groundwater flow to the pump intake from parts of the screened interval with high hydraulic conductivity. Because less formation water will initially be captured during pumping, a substantial volume of water already in the well (preexisting screen water or screen storage) will be captured during this initial time, until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called the optimal purge duration) is controlled by the in-well vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time-volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
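A minimal sketch of the in-well travel-time reasoning described above: the optimal purge duration scales with the screen-storage volume between the transmissive inflow zone and the pump intake, divided by the pumping rate. The geometry and rate below are illustrative assumptions.

import math

def purge_time_minutes(casing_diameter_m, separation_m, rate_L_min):
    # time to displace the in-well water column between the inflow zone
    # and the pump intake at the given low-flow pumping rate
    area_m2 = math.pi * (casing_diameter_m / 2.0) ** 2
    volume_L = area_m2 * separation_m * 1000.0
    return volume_L / rate_L_min

# e.g., a 5 cm well with the intake 2 m from the inflow zone, pumped at 0.3 L/min
print(purge_time_minutes(0.05, 2.0, 0.3))  # roughly 13 minutes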
Linking vegetation structure, function and physiology through spectroscopic remote sensing
NASA Astrophysics Data System (ADS)
Serbin, S.; Singh, A.; Couture, J. J.; Shiklomanov, A. N.; Rogers, A.; Desai, A. R.; Kruger, E. L.; Townsend, P. A.
2015-12-01
Terrestrial ecosystem process models require detailed information on ecosystem states and canopy properties to properly simulate the fluxes of carbon (C), water and energy from the land to the atmosphere and to assess the vulnerability of ecosystems to perturbations. Current models fail to adequately capture the magnitude, spatial variation, and seasonality of terrestrial C uptake and storage, leading to significant uncertainties in the size and fate of the terrestrial C sink. By and large, these parameter and process uncertainties arise from inadequate spatial and temporal representation of plant traits, vegetation structure, and functioning. With increases in computational power and changes to model architecture and approaches, it is now possible for models to leverage detailed, data-rich and spatially explicit descriptions of ecosystems to inform parameter distributions and trait tradeoffs. In this regard, spectroscopy and imaging spectroscopy data have been shown to be invaluable observational datasets for capturing broad-scale spatial and, eventually, temporal dynamics in important vegetation properties. We illustrate the linkage of plant traits and spectral observations to supply key data constraints for model parameterization. These constraints can come either in the form of the raw spectroscopic data (reflectance, absorptance) or physiological traits derived from spectroscopy. In this presentation we highlight our ongoing work to build ecological scaling relationships between critical vegetation characteristics and optical properties across diverse and complex canopies, including temperate broadleaf and conifer forests, Mediterranean vegetation, Arctic systems, and agriculture. We focus on work at the leaf, stand, and landscape scales, illustrating the importance of capturing the underlying variability in a range of parameters (including vertical variation within canopies) to enable more efficient scaling of traits related to the functional diversity of ecosystems.
Substratum interfacial energetic effects on the attachment of marine bacteria
NASA Astrophysics Data System (ADS)
Ista, Linnea Kathryn
Biofilms represent an ancient, ubiquitous and influential form of life on earth. Biofilm formation is initiated by attachment of bacterial cells from an aqueous suspension onto a suitable attachment substratum. While in certain well-studied cases initial attachment and subsequent biofilm formation are mediated by specific ligand-receptor pairs on the bacteria and attachment substratum, in the open environment, including the ocean, attachment is assumed to be non-specific and mediated by processes similar to those that drive adsorption of colloids at the water-solid interface. Colloidal principles are studied to determine the molecular and physicochemical interactions involved in the attachment of the model marine bacterium Cobetia marina to model self-assembled monolayer (SAM) surfaces. In the simplest application of colloidal principles, the wettability of attachment substrata, as measured by the advancing contact angle of water (θAW) on the surface, is frequently used as an approximation for the surface tension. We demonstrate the applicability of this approach for attachment of C. marina and algal zoospores and extend it to the development of a means to control attachment and release of microorganisms by altering and tuning surface θAW. In many cases, however, θAW does not capture all the information necessary to model attachment of bacteria to attachment substrata; SAMs with similar θAW attach different numbers of bacteria. More advanced colloidal models of initial bacterial attachment have evolved over the last several decades, with the model proposed by van Oss, Chaudhury and Good (VCG) emerging as preeminent. The VCG model enables calculation of interfacial tensions by dividing them into the two major interactions thought to be important at biointerfaces: apolar Lifshitz-van der Waals interactions and polar Lewis acid-base (including hydrogen bonding) interactions. These interfacial tensions are combined to yield ΔGadh, the free energy associated with attachment of bacteria to a substratum. We use VCG to model ΔGadh and interfacial tensions as they relate to model bacterial attachment on SAMs that accumulate cells to different degrees. Even with the more complex interactions measured by VCG, the surface energy of the attachment substratum alone was insufficient to predict attachment. VCG was then employed to model attachment of C. marina to a series of SAMs varying systematically in the number of ethylene glycol residues present in the molecule; an identical series has previously been shown to vary dramatically in the number of cells attached as a function of the ethylene glycols present. Our results indicate that while VCG adequately models the interfacial tension between water and ethylene glycol SAMs in a manner that predicts bacterial attachment, ΔGadh as calculated by VCG neither qualitatively nor quantitatively reflects the attachment data. The VCG model thus fails to capture specific information regarding the interactions between the attaching bacteria, water, and the SAM. We show that while hydrogen-bond accepting interactions are very well captured by this model, the ability of SAMs and bacteria to donate hydrogen bonds is not adequately described as the VCG model is currently applied. We also describe ways in which VCG fails to capture two specific biological aspects that may be important in bacterial attachment to surfaces: 1) specific interactions between molecules on the surface and bacteria, and 2) bacterial cell surface heterogeneities that may be important in differential attachment to different substrata.
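A minimal sketch of the VCG bookkeeping used in this work: interfacial tensions are assembled from Lifshitz-van der Waals (LW) and Lewis acid-base (+/-) surface tension components, and the free energy of adhesion for a bacterium (b) and substratum (s) interacting across water (w) follows as gamma_bs - gamma_bw - gamma_sw. Component values must come from contact-angle measurements and are not supplied here.

from math import sqrt

def gamma_interfacial(g1, g2):
    # g = (gamma_LW, gamma_plus, gamma_minus) for each phase, in mJ/m^2
    lw = (sqrt(g1[0]) - sqrt(g2[0])) ** 2
    ab = 2.0 * (sqrt(g1[1] * g1[2]) + sqrt(g2[1] * g2[2])
                - sqrt(g1[1] * g2[2]) - sqrt(g1[2] * g2[1]))
    return lw + ab

def delta_G_adh(g_b, g_s, g_w):
    return (gamma_interfacial(g_b, g_s)
            - gamma_interfacial(g_b, g_w)
            - gamma_interfacial(g_s, g_w))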
Chen, M.; Zhuang, Q.; Cook, D. R.; ...
2011-09-21
Satellite remote sensing provides continuous temporal and spatial information on terrestrial ecosystems. Using these remote sensing data together with eddy flux measurements and biogeochemical models, such as the Terrestrial Ecosystem Model (TEM), should provide a more adequate quantification of carbon dynamics of terrestrial ecosystems. Here we use Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI), Land Surface Water Index (LSWI) and carbon flux data from AmeriFlux to conduct such a study. First we modify the gross primary production (GPP) modeling in TEM by incorporating EVI and LSWI to account for the effects of changes in canopy photosynthetic capacity, phenology and water stress. Second, we parameterize and verify the new version of TEM with eddy flux data. We then apply the model to the conterminous United States over the period 2000-2005 at a 0.05° × 0.05° spatial resolution. We find that the new version of TEM improved over the previous version and generally captured the expected temporal and spatial patterns of regional carbon dynamics. We estimate that regional GPP is between 7.02 and 7.78 Pg C yr-1, net primary production (NPP) ranges from 3.81 to 4.38 Pg C yr-1, and net ecosystem production (NEP) varies within 0.08-0.73 Pg C yr-1 over the period 2000-2005 for the conterminous United States. The uncertainty due to parameterization is 0.34, 0.65 and 0.18 Pg C yr-1 for the regional estimates of GPP, NPP and NEP, respectively. The effects of extreme climate and disturbances, such as the severe drought in 2002 and the destructive Hurricane Katrina in 2005, were captured by the model. Lastly, our study provides a new, independent and more adequate measure of carbon fluxes for the conterminous United States, which will benefit studies of carbon-climate feedback and facilitate policy-making on carbon management and climate.
NASA Astrophysics Data System (ADS)
Orhan, Kadir; Mayerle, Roberto
2017-04-01
Climate change is an urgent and potentially irreversible threat to human societies and the planet and thus requires an effective and appropriate response, with a view to accelerating the reduction of global greenhouse gas emissions. A worldwide shift to renewable energy is therefore crucial. In this study, a methodology comprising estimates of power yield, evaluation of the effects of power extraction on flow conditions, and near-field investigations of wake characteristics, recovery and interactions is described and applied to several straits in Indonesia. Site selection is done with high-resolution, three-dimensional flow models providing sufficient spatiotemporal coverage. Much attention has been given to the meteorological forcing and the conditions at the open sea boundaries to adequately capture the density gradients and flow fields. Model verifications using tidal records show excellent agreement. Sites with adequate depth for energy conversion using horizontal-axis tidal turbines, average kinetic power density greater than 0.5 kW/m2, and surface area larger than 0.5 km2 are defined as energy hotspots. The spatial variation of the average extractable electric power is determined, and the annual tidal energy resource is estimated for the straits in question. The results show that the potential for tidal power generation in Indonesia is likely to exceed previous predictions, reaching around 4,800 MW. Models with higher resolutions have been developed to assess the impacts of devices on flow conditions and to resolve near-field turbine wakes in greater detail. The energy is assumed to be removed uniformly by sub-grid-scale arrays of turbines. An additional drag force resulting in dissipation of 10% to 60% of the pre-existing kinetic power within a flow cross-section is introduced to capture the impacts. The k-ε model, a second-order turbulence closure, is selected to include the effects of turbulent kinetic energy and its dissipation. Preliminary results show that the method captures the effects of power extraction, and wake characteristics and recovery, reasonably well at low computational cost. It was found that although there is no significant change in water levels, an impact on current velocities was observed as the velocity profile adjusts to the increased momentum transfer. It was also seen that, depending on the level of energy dissipation, currently recommended tidal farm configurations can be conservative regarding the spacing of the tidal turbines.
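A minimal sketch of the hotspot screening criterion quoted above: the mean kinetic power density of a tidal current is 0.5 * rho * |v|^3 per unit of flow cross-section, averaged over the speed record. The seawater density and sample speeds are illustrative assumptions.

import numpy as np

def mean_kinetic_power_density(speeds_m_s, rho=1025.0):
    # average kW per m^2 of flow cross-section from a speed time series
    return 0.5 * rho * np.mean(np.abs(speeds_m_s) ** 3) / 1000.0

print(mean_kinetic_power_density(np.array([0.8, 1.2, 1.5])))  # ~0.96 kW/m2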
NASA Astrophysics Data System (ADS)
VanderZaag, A. C.; MacDonald, J. D.; Evans, L.; Vergé, X. P. C.; Desjardins, R. L.
2013-09-01
Methane emissions from manure management represent an important mitigation opportunity, yet emission quantification methods remain crude and do not contain adequate detail to capture changes in agricultural practices that may influence emissions. Using the Canadian emission inventory methodology as an example, this letter explores three key aspects for improving emission quantification: (i) obtaining emission measurements to improve and validate emission model estimates, (ii) obtaining more useful activity data, and (iii) developing a methane emission model that uses the available farm management activity data. In Canada, national surveys to collect manure management data have been inconsistent and not designed to provide quantitative data. Thus, the inventory has not been able to accurately capture changes in management systems even between manure stored as solid versus liquid. To address this, we re-analyzed four farm management surveys from the past decade and quantified the significant change in manure management which can be linked to the annual agricultural survey to create a continuous time series. In the dairy industry of one province, for example, the percentage of manure stored as liquid increased by 300% between 1991 and 2006, which greatly affects the methane emission estimates. Methane emissions are greatest from liquid manure, but vary by an order of magnitude depending on how the liquid manure is managed. Even if more complete activity data are collected on manure storage systems, default Intergovernmental Panel on Climate Change (IPCC) guidance does not adequately capture the impacts of management decisions to reflect variation among farms and regions in inventory calculations. We propose a model that stays within the IPCC framework but would be more responsive to farm management by generating a matrix of methane conversion factors (MCFs) that account for key factors known to affect methane emissions: temperature, retention time and inoculum. This MCF matrix would be populated using a mechanistic emission model verified with on-farm emission measurements. Implementation of these MCF values will require re-analysis of farm surveys to quantify liquid manure emptying frequency and timing, and will rely on the continued collection of this activity data in the future. For model development and validation, emission measurement campaigns will be needed on representative farms over at least one full year, or manure management cycle (whichever is longer). The proposed approach described in this letter is long-term, but is required to establish baseline data for emissions from manure management systems. With these improvements, the manure management emission inventory will become more responsive to the changing practices on Canadian livestock farms.
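A minimal sketch of the proposed MCF-matrix structure: methane conversion factors looked up by temperature class and liquid-manure retention time (an inoculum axis could be added). The numbers below are placeholders showing the structure only; they are not IPCC values or outputs of the proposed mechanistic model.

MCF = {  # {temperature_class: {retention_months: methane conversion factor}}
    "cool":      {1: 0.05, 6: 0.15, 12: 0.25},
    "temperate": {1: 0.10, 6: 0.25, 12: 0.40},
    "warm":      {1: 0.20, 6: 0.45, 12: 0.65},
}

def mcf_lookup(temp_class, retention_months):
    return MCF[temp_class][retention_months]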
2012-06-08
... The process begins with gasification of feedstocks such as coal, natural gas, or biomass towards the production of alternative fuels. With adequate carbon ... sequestration. Captured carbon dioxide from coal-to-liquid (CTL) or coal and biomass-to-liquid (CBTL) production could be readily injected into...
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
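A minimal sketch of one practical double-seasonal setup consistent with the description above: a SARIMAX on log-transformed hourly demand with a daily (24 h) seasonal ARMA component, plus weekly (168 h) Fourier terms as exogenous regressors to carry the second cycle. The orders and harmonic count are assumptions for illustration; the paper's exact specification may differ.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_double_seasonal(log_demand: pd.Series, n_harmonics=3):
    t = np.arange(len(log_demand))
    fourier = np.column_stack(
        [f(2.0 * np.pi * (k + 1) * t / 168.0)
         for k in range(n_harmonics) for f in (np.sin, np.cos)])
    model = SARIMAX(log_demand, exog=fourier, order=(1, 0, 1),
                    seasonal_order=(1, 0, 1, 24))  # daily cycle
    return model.fit(disp=False)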
Epithermal neutron beams from the 7Li(p,n) reaction near the threshold for neutron capture therapy
NASA Astrophysics Data System (ADS)
Porras, I.; Praena, J.; Arias de Saavedra, F.; Pedrosa, M.; Esquinas, P.; L. Jiménez-Bonilla, P.
2016-11-01
Two applications for neutron capture therapy of epithermal neutron beams calculated from the 7Li(p,n) reaction are discussed. In particular, i) for a 1920 keV, 30 mA proton beam, a neutron beam with features adequate for BNCT is found at an angle of 80° from the forward direction; and ii) for a 1910 keV proton beam, a neutron beam suitable for radiobiology experiments to determine the biological weighting factors of the fast dose component in neutron capture therapy is obtained in the forward direction.
Capturing security requirements for software systems.
El-Hadary, Hassan; El-Kassas, Sherif
2014-07-01
Security is often an afterthought during software development. Realizing security early, especially in the requirements phase, is important so that security problems can be tackled early enough before going further in the process, avoiding rework. A more effective approach for security requirements engineering is needed to provide a more systematic way of eliciting adequate security requirements. This paper proposes a methodology for security requirements elicitation based on problem frames. The methodology aims at early integration of security with software development. Its main goal is to assist developers in eliciting adequate security requirements in a more systematic way during the requirements engineering process. A security catalog, based on the problem frames, is constructed to help identify security requirements with the aid of previous security knowledge. Abuse frames are used to model threats, while security problem frames are used to model security requirements. We have made use of evaluation criteria to evaluate the resulting security requirements, concentrating on the identification of conflicts among requirements. We have shown that such a methodology can elicit more complete security requirements, in addition to assisting developers in eliciting security requirements in a more systematic way.
Mirus, Benjamin B.
2015-01-01
Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested here. For cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.
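For context, texture-based pedotransfer functions typically feed a closed-form retention model such as van Genuchten's; the "steeper soil-water retention curves" suggested above correspond to a larger shape parameter n in that model. A minimal sketch (parameter values are illustrative, not those used in the study):

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content as a function of pressure head h (m, h <= 0)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

h = -np.logspace(-3, 2, 200)                # heads from -0.001 m to -100 m
texture_based = van_genuchten(h, 0.05, 0.45, alpha=3.0, n=1.5)  # illustrative
steeper_curve = van_genuchten(h, 0.05, 0.45, alpha=3.0, n=2.5)  # larger n -> steeper
```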
A Generalized Framework for Modeling Next Generation 911 Implementations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelic, Andjelka; Aamir, Munaf Syed
This document summarizes the current state of Sandia 911 modeling capabilities and then addresses key aspects of Next Generation 911 (NG911) architectures for expansion of existing models. Analysis of three NG911 implementations was used to inform heuristics, associated key data requirements, and assumptions needed to capture NG911 architectures in the existing models. Modeling of NG911 necessitates careful consideration of its complexity and the diversity of implementations. Draft heuristics for constructing NG911 models are presented based on the analysis, along with a summary of current challenges and ways to improve future NG911 modeling efforts. We found that NG911 relies on Enhanced 911 (E911) assets, such as 911 selective routers, to route calls originating from traditional telephony service, which are a majority of 911 calls. We also found that the diversity and transitional nature of NG911 implementations necessitates significant and frequent data collection to ensure that adequate models are available for crisis action support.
Pore-scale modeling of phase change in porous media
NASA Astrophysics Data System (ADS)
Juanes, Ruben; Cueto-Felgueroso, Luis; Fu, Xiaojing
2017-11-01
One of the main open challenges in pore-scale modeling is the direct simulation of flows involving multicomponent mixtures with complex phase behavior. Reservoir fluid mixtures are often described through cubic equations of state, which makes diffuse interface, or phase field theories, particularly appealing as a modeling framework. What is still unclear is whether equation-of-state-driven diffuse-interface models can adequately describe processes where surface tension and wetting phenomena play an important role. Here we present a diffuse interface model of single-component, two-phase flow (a van der Waals fluid) in a porous medium under different wetting conditions. We propose a simplified Darcy-Korteweg model that is appropriate to describe flow in a Hele-Shaw cell or a micromodel, with a gap-averaged velocity. We study the ability of the diffuse-interface model to capture capillary pressure and the dynamics of vaporization/condensation fronts, and show that the model reproduces pressure fluctuations that emerge from abrupt interface displacements (Haines jumps) and from the break-up of wetting films.
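For reference, the phase behavior in such a single-component model comes from a cubic equation of state. Below is a minimal sketch of the van der Waals pressure-volume relation in reduced units; its non-monotonic (spinodal) region on subcritical isotherms is what drives liquid-vapor separation in a diffuse-interface formulation. This is illustrative only, not the authors' code:

```python
import numpy as np

def vdw_pressure(v, T):
    """Reduced van der Waals EOS: p = 8T/(3v - 1) - 3/v**2, valid for v > 1/3."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v**2

v = np.linspace(0.4, 5.0, 500)
p_subcritical = vdw_pressure(v, T=0.9)  # T < 1: non-monotonic isotherm,
                                        # liquid/vapor coexistence possible
p_critical = vdw_pressure(v, T=1.0)     # critical isotherm: inflection at v = 1
```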
Mechanisms of kinetic trapping in self-assembly and phase transformation
Hagan, Michael F.; Elrad, Oren M.; Jack, Robert L.
2011-01-01
In self-assembly processes, kinetic trapping effects often hinder the formation of thermodynamically stable ordered states. In a model of viral capsid assembly and in the phase transformation of a lattice gas, we show how simulations in a self-assembling steady state can be used to identify two distinct mechanisms of kinetic trapping. We argue that one of these mechanisms can be adequately captured by kinetic rate equations, while the other involves a breakdown of theories that rely on cluster size as a reaction coordinate. We discuss how these observations might be useful in designing and optimising self-assembly reactions. PMID:21932884
Tansel, Berrin; Surita, Sharon C
2016-06-01
Siloxane levels in biogas can jeopardize the warranties of the engines used at biogas-to-energy facilities. The chemical structure of siloxanes consists of silicon and oxygen atoms, alternating in position, with hydrocarbon groups attached to the silicon side chain. Siloxanes can have either a cyclic (D) or linear (L) configuration and are referred to by a letter corresponding to their structure, followed by a number corresponding to the number of silicon atoms present. When siloxanes are burned, the hydrocarbon fraction is lost and silicon is converted to silicates. The purpose of this study was to evaluate the adequacy of activated carbon gas samplers for quantitative analysis of siloxanes in biogas samples. Biogas samples were collected from a landfill and an anaerobic digester using multiple carbon sorbent tubes assembled in series. One set of samples was collected for 30 min (sampling 6 L of gas), and the second set was collected for 60 min (sampling 12 L of gas). Carbon particles were thermally desorbed and analyzed by Gas Chromatography Mass Spectrometry (GC/MS). The results showed that biogas sampling using a single tube would not adequately capture octamethyltrisiloxane (L3), hexamethylcyclotrisiloxane (D3), octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6). Even when 4 tubes were used in series, D5 was not captured effectively. The single sorbent tube sampling method was adequate only for capturing trimethylsilanol (TMS) and hexamethyldisiloxane (L2). Affinity of siloxanes for activated carbon decreased with increasing molecular weight. Using multiple carbon sorbent tubes in series can be an appropriate method for developing a standard procedure for determining siloxane levels for low molecular weight siloxanes (up to D3). Appropriate quality assurance and quality control procedures should be developed for adequately quantifying the levels of the higher molecular weight siloxanes in biogas with sorbent tubes. Copyright © 2016 Elsevier Ltd. All rights reserved.
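The series-sampling logic can be illustrated with a simple first-order capture assumption: if each tube retains a fixed fraction f of the analyte reaching it, the fraction escaping n tubes is (1 - f)^n. This is a hypothetical back-of-the-envelope model, not the study's analysis, and the retention fractions below are invented for illustration:

```python
def breakthrough(f_single_tube: float, n_tubes: int) -> float:
    """Fraction of analyte escaping n identical sorbent tubes in series,
    assuming each tube captures the same fraction of what reaches it."""
    return (1.0 - f_single_tube) ** n_tubes

# Illustrative values only: a poorly retained heavy siloxane (D5-like)
# versus a well-retained light one (L2-like).
for f, label in [(0.35, "weakly retained"), (0.95, "strongly retained")]:
    print(label, [round(breakthrough(f, n), 3) for n in (1, 2, 4)])
```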
Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.
2007-01-01
Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size, with bias decreasing as effort increased. Based on the simulation results, we recommend the Chao estimator for model M_h be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
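For reference, the Chao estimator for model M_h recommended here needs only the counts of animals seen exactly once (f1) and exactly twice (f2). A minimal sketch of its common form (Chao 1987), with a bias-corrected fallback for f2 = 0; the example numbers are invented:

```python
def chao_mh(f1: int, f2: int, n_distinct: int) -> float:
    """Chao estimator of population size under model Mh.
    f1, f2: animals captured exactly once / exactly twice;
    n_distinct: total distinct animals observed."""
    if f2 > 0:
        return n_distinct + f1 * f1 / (2.0 * f2)
    return n_distinct + f1 * (f1 - 1) / 2.0   # bias-corrected variant

# e.g. 40 distinct females seen; 18 seen once, 9 seen twice
print(chao_mh(f1=18, f2=9, n_distinct=40))    # -> 58.0
```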
Automated camera-phone experience with the frequency of imaging necessary to capture diet.
Arab, Lenore; Winter, Ashley
2010-08-01
Camera-enabled cell phones provide an opportunity to strengthen dietary recall through automated imaging of foods eaten during a specified period. To explore the frequency of imaging needed to capture all foods eaten, we examined the number of images of individual foods consumed in a pilot study of automated imaging using camera phones set to an image-capture frequency of one snapshot every 10 seconds. Food images were tallied from 10 young adult subjects who wore the phone continuously during the work day and consented to share their images. Based on the number of images received for each eating experience, the pilot data suggest that automated capturing of images at a frequency of once every 10 seconds is adequate for recording foods consumed during regular meals, whereas a greater frequency of imaging is necessary to capture snacks and beverages eaten quickly. 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
On the modelling of gyroplane flight dynamics
NASA Astrophysics Data System (ADS)
Houston, Stewart; Thomson, Douglas
2017-01-01
The study of the gyroplane, with a few exceptions, is largely neglected in the literature which is indicative of a niche configuration limited to the sport and recreational market where resources are limited. However the contemporary needs of an informed population of owners and constructors, as well as the possibility of a wider application of such low-cost rotorcraft in other roles, suggests that an examination of the mathematical modelling requirements for the study of gyroplane flight mechanics is timely. Rotorcraft mathematical modelling has become stratified in three levels, each one defining the inclusion of various layers of complexity added to embrace specific modelling features as well as an attempt to improve fidelity. This paper examines the modelling of gyroplane flight mechanics in the context of this complexity, and shows that relatively simple formulations are adequate for capturing most aspects of gyroplane trim, stability and control characteristics. In particular the conventional 6 degree-of-freedom model structure is suitable for the synthesis of models from flight test data as well as being the framework for reducing the order of the higher levels of modelling. However, a high level of modelling can be required to mimic some aspects of behaviour observed in data gathered from flight experiments and even then can fail to capture other details. These limitations are addressed in the paper. It is concluded that the mathematical modelling of gyroplanes for the simulation and analysis of trim, stability and control presents no special difficulty and the conventional techniques, methods and formulations familiar to the rotary-wing community are directly applicable.
The underestimated potential of solar energy to mitigate climate change
NASA Astrophysics Data System (ADS)
Creutzig, Felix; Agoston, Peter; Goldschmidt, Jan Christoph; Luderer, Gunnar; Nemet, Gregory; Pietzcker, Robert C.
2017-09-01
The Intergovernmental Panel on Climate Change's fifth assessment report emphasizes the importance of bioenergy and carbon capture and storage for achieving climate goals, but it does not identify solar energy as a strategically important technology option. That is surprising given the strong growth, large resource, and low environmental footprint of photovoltaics (PV). Here we explore how models have consistently underestimated PV deployment and identify the reasons for underlying bias in models. Our analysis reveals that rapid technological learning and technology-specific policy support were crucial to PV deployment in the past, but that future success will depend on adequate financing instruments and the management of system integration. We propose that with coordinated advances in multiple components of the energy system, PV could supply 30-50% of electricity in competitive markets.
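The "rapid technological learning" central to this argument is usually parameterized by a one-factor experience curve: each doubling of cumulative capacity cuts unit cost by a fixed learning rate. A hedged sketch (the ~20% learning rate is a commonly cited PV figure, not a number taken from this paper):

```python
import math

def experience_curve(c0: float, q0: float, q: float, learning_rate: float) -> float:
    """Unit cost after cumulative production q, one-factor experience curve:
    cost falls by `learning_rate` with each doubling of cumulative capacity."""
    b = -math.log2(1.0 - learning_rate)    # experience exponent
    return c0 * (q / q0) ** (-b)

# Illustrative: unit cost after capacity grows 1000-fold at a 20% learning rate
print(experience_curve(c0=4.0, q0=1.0, q=1000.0, learning_rate=0.20))  # ~0.43
```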
Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments
Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...
2016-06-13
We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could do at capturing both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.
Characterizing a Model of Coronal Heating and Solar Wind Acceleration Based on Wave Turbulence.
NASA Astrophysics Data System (ADS)
Downs, C.; Lionello, R.; Mikic, Z.; Linker, J.; Velli, M.
2014-12-01
Understanding the nature of coronal heating and solar wind acceleration is a key goal in solar and heliospheric research. While there have been many theoretical advances in both topics, including suggestions that they may be intimately related, the inherent scale coupling and complexity of these phenomena limit our ability to construct models that test them on a fundamental level for realistic solar conditions. At the same time, there is an ever-increasing impetus to improve our space weather models, and incorporating treatments for these processes that capture their basic features while remaining tractable is an important goal. With this in mind, I will give an overview of our exploration of a wave-turbulence driven (WTD) model for coronal heating and solar wind acceleration based on low-frequency Alfvénic turbulence. Here we attempt to bridge the gap between theory and practical modeling by exploring this model in 1D HD and multi-dimensional MHD contexts. The key questions that we explore are: What properties must the model possess to be a viable model for coronal heating? What is the influence of the magnetic field topology (open, closed, rapidly expanding)? And can we simultaneously capture coronal heating and solar wind acceleration with such a quasi-steady formulation? Our initial results suggest that a WTD-based formulation performs adequately for a variety of solar and heliospheric conditions, while significantly reducing the number of free parameters when compared to empirical heating and solar wind models. The challenges, applications, and future prospects of this type of approach will also be discussed.
A comprehensive Network Security Risk Model for process control networks.
Henry, Matthew H; Haimes, Yacov Y
2009-02-01
The risk of cyber attacks on process control networks (PCN) is receiving significant attention due to the potentially catastrophic extent to which PCN failures can damage the infrastructures and commodity flows that they support. Risk management addresses the coupled problems of (1) reducing the likelihood that cyber attacks would succeed in disrupting PCN operation and (2) reducing the severity of consequences in the event of PCN failure or manipulation. The Network Security Risk Model (NSRM) developed in this article provides a means of evaluating the efficacy of candidate risk management policies by modeling the baseline risk and assessing expectations of risk after the implementation of candidate measures. Where existing risk models fall short of providing adequate insight into the efficacy of candidate risk management policies due to shortcomings in their structure or formulation, the NSRM provides model structure and an associated modeling methodology that captures the relevant dynamics of cyber attacks on PCN for risk analysis. This article develops the NSRM in detail in the context of an illustrative example.
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
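The weight constraint mentioned can be made concrete: if every weight on the path from the grain-size input to the output is forced non-negative (for example by squaring free parameters) and the activations are monotone, the network output is non-decreasing in grain size, so a sigmoid output behaves like a CDF. A minimal NumPy sketch of this construction; the architecture and parameterization are illustrative assumptions, not the paper's exact network:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=(8, 1))
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=(1, 1))

def monotone_cdf(x):
    """Feedforward net monotone in x: non-negative weights (via squaring)
    plus monotone activations (tanh, sigmoid) give a valid CDF shape."""
    h = np.tanh((W1**2) @ x + b1)      # W1**2 >= 0 keeps the layer monotone
    z = (W2**2) @ h + b2               # non-negative output weights too
    return 1.0 / (1.0 + np.exp(-z))    # sigmoid maps to [0, 1]

grain_sizes = np.linspace(-3, 3, 7).reshape(1, -1)  # e.g. phi-scale sizes
print(monotone_cdf(grain_sizes).round(3))           # non-decreasing values
```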
Evaluating Daily Load Stimulus Formulas in Relating Bone Response to Exercise
NASA Technical Reports Server (NTRS)
Pennline, James A.; Mulugeta, Lealem
2014-01-01
Six formulas representing what is commonly referred to as "daily load stimulus" are identified, compared, and tested in their ability to relate skeletal mechanical loading to bone maintenance and osteogenic response. Particular emphasis is placed on exercise-induced skeletal loading and whether or not the formulas can adequately capture the known experimental observations of saturation of continuous cyclic loading, rest insertion between repetitions (cycles), recovery of osteogenic potential following saturation, and multiple shorter bouts versus a single long bout of exercise. To evaluate the ability of the formulas to capture these characteristics, a set of exercise scenarios with type of exercise bout, specific duration, number of repetitions, and rest insertion between repetitions is defined. The daily load values obtained from the formulas for the loading conditions of the set of scenarios are illustrated. Not all of the formulas estimate daily load in units of stress, or in terms of strain, at a skeletal site due to the loading force from a specific exercise prescription. The comparative results show that none of the formulas are able to capture all of the experimentally observed characteristics of cyclic loading. However, the enhanced formula presented by Genc et al. does capture several characteristics of cyclic loading that the others do not, namely recovery of osteogenic potential and saturation. This could be a basis for further development of mathematical formulas that more adequately approximate the amount of daily stress at a skeletal site that contributes to bone adaptation.
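For orientation, most such formulas descend from the daily stress stimulus of Carter and colleagues, which aggregates cycle counts and load magnitudes nonlinearly. A commonly quoted baseline form (shown for reference; the six formulas compared in the paper add terms for rest insertion, saturation, and recovery, and the exponent value below is the conventional choice, not necessarily the one used here) is

\[ \psi = \Bigl( \sum_i n_i \, \bar{\sigma}_i^{\,m} \Bigr)^{1/m}, \]

where \(n_i\) is the number of daily loading cycles of type \(i\), \(\bar{\sigma}_i\) the corresponding effective stress, and \(m\) an empirical weighting exponent (often taken near 4), so load magnitude counts far more than cycle number.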
Penny, Phillip; Swords, Michael; Heisler, Jason; Cien, Adam; Sands, Andrew; Cole, Peter
2016-08-01
The purpose of this study was to examine the screw trajectory of ten commercially available distal tibia plates and compare them to common fracture patterns seen in OTA C type pilon fractures to determine their ability to stabilize the three most common fracture fragments while buttressing anterolateral zones of comminution. We hypothesized that a single plate for the distal tibia would fail to adequately stabilize all three main fracture fragments and zones of comminution in complex pilon fractures. Ten synthetic distal tibia sawbones models were used in conjunction with ten different locking distal tibia plate designs from three manufacturers (Depuy Synthes, J&J Co, Paoli, PA; Smith & Nephew, Memphis, TN; and Stryker, Mahwah, NJ). Both medial and anterolateral plates from each company were utilized and separately applied to an individual sawbone model. Three implants allowing variable angle screw placement were used. The location of the locking screws and buttress effect 1 cm above the articular surface was noted for each implant using axial computed tomography (CT). The images were then compared to a recently published "pilon fracture map" using an overlay technique to establish the relationship between screw location and known common fracture lines and areas of comminution. Each of the three main fragments was considered "captured" if it was purchased by at least two screws, thereby controlling rotational forces on each fragment. Three of four anterolateral plates lacked stable fixation in the medial fragment. Of the 4 anterolateral plates used, only the variable angle anterolateral plate by Depuy Synthes captured the medial fragment with two screws. All four anterolateral plates buttressed the area of highest comminution and had an average of 1.25 screws in the medial fragment and an average of 3 screws in the posterolateral fragment. All five direct medial plates had variable fixation within anterolateral and posterolateral fragments, with an average of 1.8 screws in the anterolateral fragment and an average of 1.3 screws in the posterolateral fragment. The Depuy Synthes variable angle anterolateral plate allowed for fixation of the medial fragment with two screws while simultaneously buttressing the zone of highest comminution and capturing both the anterolateral and posterolateral fragments with five and three screws, respectively. The variable angle anteromedial plate by Depuy Synthes captured all three main fracture fragments but did not buttress the anterolateral zone of comminution. In OTA 43C type pilon fractures, 8 out of 10 studied commercially available implants precontoured for the distal tibia do not adequately stabilize the three primary fracture fragments typically seen in these injuries. Anterolateral plates were superior in addressing the coronal primary fracture line across the apex of the plafond and buttressing the zone of comminution. None of the available plates can substitute for an understanding of the fracture planes and fragments typically seen in complex intra-articular tibia fractures, and the addition of a second plate is necessary for adequate stability. Level IV. Copyright © 2016 Elsevier Ltd. All rights reserved.
Proposing a Formalised Model for Mindful Information Systems Offshoring
NASA Astrophysics Data System (ADS)
Costello, Gabriel J.; Coughlan, Chris; Donnellan, Brian; Gadatsch, Andreas
The central thesis of this chapter is that mathematical economics can provide a novel approach to the examination of offshoring business decisions and provide an impetus for future research in the area. A growing body of research indicates that projected cost savings from IT offshoring projects are not being met. Furthermore, evidence suggests that decision-making processes have been more emotional than rational, and that many offshoring arrangements have been rushed into without adequate analysis of the true costs involved. Building on the concept of mindfulness and mindlessness introduced to the IS literature by Swanson and Ramiller, a cost equation is developed using “deductive reasoning rather than inductive study” in the tradition of mathematical economics. The model endeavours to capture a wide range of both the quantitative and qualitative parameters. Although the economic model is illustrated against the background of a European scenario, the theoretical framework is generic and applicable to organisations in any global location.
Horn, Kyle G; Solomon, Irene C
2014-01-01
Spike-frequency dynamics and spike shape can provide insight into the types of ion channels present in any given neuron and give a sense for the precise response any neuron may have to a given input stimulus. Motoneuron firing frequency over time is especially important due to its direct effect on motor output. Of particular interest is intracellular Ca(2+), which exerts a powerful influence on both firing properties over time and spike shape. In order to better understand the cellular mechanisms for the regulation of intracellular Ca(2+) and their effect on spiking behavior, we have modified a computational model of a hypoglossal motoneuron (HM) to include a variety of Ca(2+) handling processes. For the current study, a series of HM models that include Ca(2+) pumps, Na(+)/Ca(2+) exchangers, and a generic exponential decay of excess Ca(2+) was generated. Simulations from these models indicate that although each extrusion mechanism exerts a similar effect on voltage, the firing properties change distinctly with the inclusion of additional Ca(2+)-related mechanisms: BK channels, Ca(2+) buffering, and diffusion of [Ca(2+)]i modeled via a linear diffusion partial differential equation. While an exponential decay of Ca(2+) seems to adequately capture short-term changes in firing frequency seen in biological data, internal diffusion of Ca(2+) appears to be necessary for capturing longer term frequency changes. © 2014 Elsevier B.V. All rights reserved.
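The "generic exponential decay" extrusion model referred to is the simplest of these mechanisms: calcium relaxes back to its resting level with a single time constant, driven by spike-triggered influx. A sketch with illustrative parameters (not the study's values):

```python
import numpy as np

# Euler integration of d[Ca]/dt = -([Ca] - Ca_rest)/tau + influx(t):
# the "generic exponential decay" extrusion model (parameters illustrative).
dt, tau, ca_rest = 0.1e-3, 50e-3, 0.1e-3     # s, s, mM (resting Ca ~ 100 nM)
ca = ca_rest
spike_times = np.arange(0.01, 0.5, 0.02)     # 50 Hz spiking for 0.5 s
trace = []
for step in range(int(0.6 / dt)):
    t = step * dt
    # each spike injects a fixed amount of Ca in a single time step
    influx = 0.01e-3 / dt if np.any(np.abs(t - spike_times) < dt / 2) else 0.0
    ca += dt * (-(ca - ca_rest) / tau + influx)
    trace.append(ca)                         # accumulates, then decays to rest
```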
NASA Astrophysics Data System (ADS)
Setiyono, T. D.
2014-12-01
Accurate and timely information on rice crop growth and yield helps governments and other stakeholders adapt their economic policies and enables relief organizations to better anticipate and coordinate relief efforts in the wake of a natural catastrophe. Such delivery of rice growth and yield information is made possible by regular earth observation using space-borne Synthetic Aperture Radar (SAR) technology combined with a crop modeling approach to estimate yield. Radar-based remote sensing is capable of observing rice vegetation growth irrespective of cloud coverage, an important feature given that during flooding events the sky is often cloud-covered. The system allows rapid damage assessment over the area of interest. Rice yield monitoring is based on a crop growth simulation and SAR-derived key information, particularly start of season and leaf growth rate. Results from pilot study sites in South and Southeast Asian countries suggest that incorporation of SAR data into the crop model improves estimation of actual yields. Remote-sensing data assimilation into the crop model effectively captures responses of rice crops to environmental conditions over large spatial coverage, which otherwise is practically impossible to achieve. Such improvement of actual yield estimates offers practical applications, such as in a crop insurance program. A process-based crop simulation model is used in the system to ensure climate information is adequately captured and to enable mid-season yield forecasts.
Challenges of DNA-based mark-recapture studies of American black bears
Settlage, K.E.; Van Manen, F.T.; Clark, J.D.; King, T.L.
2008-01-01
We explored whether genetic sampling would be feasible to provide a region-wide population estimate for American black bears (Ursus americanus) in the southern Appalachians, USA. Specifically, we determined whether adequate capture probabilities (p >0.20) and population estimates with a low coefficient of variation (CV <20%) could be achieved given typical agency budget and personnel constraints. We extracted DNA from hair collected from baited barbed-wire enclosures sampled over a 10-week period on 2 study areas: a high-density black bear population in a portion of Great Smoky Mountains National Park and a lower density population on National Forest lands in North Carolina, South Carolina, and Georgia. We identified individual bears by their unique genotypes obtained from 9 microsatellite loci. We sampled 129 and 60 different bears in the National Park and National Forest study areas, respectively, and applied closed mark–recapture models to estimate population abundance. Capture probabilities and precision of the population estimates were acceptable only for sampling scenarios for which we pooled weekly sampling periods. We detected capture heterogeneity biases, probably because of inadequate spatial coverage by the hair-trapping grid. The logistical challenges of establishing and checking a sufficiently high density of hair traps make DNA-based estimates of black bears impractical for the southern Appalachian region. Alternatives are to estimate population size for smaller areas, estimate population growth rates or survival using mark–recapture methods, or use independent marking and recapturing techniques to reduce capture heterogeneity.
Polcicová, Gabriela; Tino, Peter
2004-01-01
We introduce topographic versions of two latent class models (LCM) for collaborative filtering. Latent classes are topologically organized on a square grid. Topographic organization of latent classes makes orientation in rating/preference patterns captured by the latent classes easier and more systematic. The variation in film rating patterns is modelled by multinomial and binomial distributions with varying independence assumptions. In the first stage of topographic LCM construction, self-organizing maps with neural field organized according to the LCM topology are employed. We apply our system to a large collection of user ratings for films. The system can provide useful visualization plots unveiling user preference patterns buried in the data, without losing potential to be a good recommender model. It appears that the multinomial distribution is most adequate if the model is regularized by tight grid topologies. Since we deal with probabilistic models of the data, we can readily use tools from probability and information theories to interpret and visualize information extracted by our system.
In vivo imaging of the neurovascular unit in CNS disease
Merlini, Mario; Davalos, Dimitrios; Akassoglou, Katerina
2014-01-01
The neurovascular unit—comprised of glia, pericytes, neurons and cerebrovasculature—is a dynamic interface that ensures physiological central nervous system (CNS) functioning. In disease dynamic remodeling of the neurovascular interface triggers a cascade of responses that determine the extent of CNS degeneration and repair. The dynamics of these processes can be adequately captured by imaging in vivo, which allows the study of cellular responses to environmental stimuli and cell-cell interactions in the living brain in real time. This perspective focuses on intravital imaging studies of the neurovascular unit in stroke, multiple sclerosis (MS) and Alzheimer disease (AD) models and discusses their potential for identifying novel therapeutic targets. PMID:25197615
A mechanism to account for well known peculiarities of low mass AGB star nucleosynthesis
NASA Astrophysics Data System (ADS)
Palmerini, Sara; Trippella, Oscar; Vescovi, Diego; Busso, Maurizio
2018-01-01
We present here the application of a model for a mass circulation mechanism induced by the stellar magnetic field to study peculiar aspects of AGB star nucleosynthesis. The mixing scheme is based on a previously suggested magnetic-buoyancy process [1, 2], here shown to account adequately for the formation of the 13C neutron source for s-processes. In particular, our analysis focuses on the constraints on AGB nucleosynthesis coming from the isotopic composition of presolar grains recovered in meteorites. It turns out that n-captures driven by the magnetically induced mixing can account for the recorded isotopic abundance ratios of s-process elements.
Creep Measurement Video Extensometer
NASA Technical Reports Server (NTRS)
Jaster, Mark; Vickerman, Mary; Padula, Santo, II; Juhas, John
2011-01-01
Understanding material behavior under load is critical to the efficient and accurate design of advanced aircraft and spacecraft. Technologies such as the one disclosed here allow accurate creep measurements to be taken automatically, reducing error. The goal was to develop a non-contact, automated system capable of capturing images that could subsequently be processed to obtain the strain characteristics of these materials during deformation, while maintaining adequate resolution to capture the true deformation response of the material. The measurement system comprises a high-resolution digital camera, computer, and software that work collectively to interpret the image.
Rajta, Istvan; Huszánk, Robert; Szabó, Atilla T T; Nagy, Gyula U L; Szilasi, Szabolcs; Fürjes, Peter; Holczer, Eszter; Fekete, Zoltan; Járvás, Gabor; Szigeti, Marton; Hajba, Laszlo; Bodnár, Judit; Guttman, Andras
2016-02-01
Design, fabrication, integration, and feasibility test results of a novel microfluidic cell capture device are presented, exploiting the advantages of proton beam writing to make lithographic irradiations under multiple target tilting angles and of UV lithography to easily reproduce large-area structures. A cell capture device is demonstrated with a unique doubly tilted micropillar array design for cell manipulation in microfluidic applications. Tilting the pillars increased their functional surface and therefore enhanced fluidic interaction when a special bioaffinity coating was used, and improved fluid dynamic behavior during cell culture injection. The proposed microstructures were capable of supporting adequate distribution of body fluids, such as blood, spinal fluid, etc., between the inlet and outlet of the microfluidic sample reservoirs, offering advanced cell capture capability on the functionalized surfaces. The hydrodynamic characteristics of the microfluidic systems were tested with yeast cells (similar in size to red blood cells) for efficient capture. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Statistical modeling of natural backgrounds in hyperspectral LWIR data
NASA Astrophysics Data System (ADS)
Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph
2016-09-01
Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.
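The forward model underlying such simulation is compact: surface-leaving radiance is emissivity times the Planck blackbody radiance at the surface temperature (plus reflected downwelling, ignored here). A minimal sketch of drawing (T, emissivity) pairs from a simple statistical model and computing LWIR spectra; the distributions are illustrative stand-ins for the paper's mixture model:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23    # Planck, light speed, Boltzmann

def planck(wl_m, T):
    """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    return 2 * H * C**2 / wl_m**5 / np.expm1(H * C / (wl_m * KB * T))

wl = np.linspace(8e-6, 13e-6, 50)                  # LWIR window, 8-13 microns
rng = np.random.default_rng(1)
T = rng.normal(300.0, 2.0, size=1000)              # surface temperatures (K)
eps = np.clip(rng.normal(0.97, 0.01, 1000), 0, 1)  # illustrative emissivities
radiance = eps[:, None] * planck(wl[None, :], T[:, None])  # (1000, 50) spectra
```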
DMSP observations of high latitude Poynting flux during magnetic storms
NASA Astrophysics Data System (ADS)
Huang, Cheryl Y.; Huang, Yanshi; Su, Yi-Jiun; Hairston, Marc R.; Sotirelis, Thomas
2017-11-01
Previous studies have demonstrated that energy can enter the high-latitude regions of the Ionosphere-Thermosphere (IT) system on open field lines. To assess the extent of high-latitude energy input, we have carried out a study of Poynting flux measured by the Defense Meteorological Satellite Program (DMSP) satellites during magnetic storms. We report sporadic intense Poynting fluxes measured by four DMSP satellites at polar latitudes during two moderate magnetic storms which occurred in August and September 2011. Comparisons with a widely used empirical model for energy input to the IT system show that the model does not adequately capture electromagnetic (EM) energy at very high latitudes during storms. We have extended this study to include more than 30 storm events and find that intense EM energy is frequently detected poleward of 75° magnetic latitude.
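Conceptually, the quantity reported here is the cross product of the perturbation electric and magnetic fields. A schematic sketch with synthetic numbers (real DMSP processing involves detrending against a background field model, which is not shown):

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

def poynting_flux(dE, dB):
    """S = (dE x dB) / mu0 in W/m^2, from perturbation fields:
    dE in V/m, dB in T; the field-aligned component is the one of interest."""
    return np.cross(dE, dB) / MU0

dE = np.array([0.05, 0.0, 0.0])     # 50 mV/m perturbation electric field
dB = np.array([0.0, 300e-9, 0.0])   # 300 nT magnetic perturbation
print(poynting_flux(dE, dB))        # ~12 mW/m^2 along the third axis
```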
The NASA Human Research Wiki - An Online Collaboration Tool
NASA Technical Reports Server (NTRS)
Barr, Yael; Rasbury, Jack; Johnson, Jordan; Barstend, Kristina; Saile, Lynn; Watkins, Sharmi
2012-01-01
The Exploration Medical Capability (ExMC) element is one of six elements of the Human Research Program (HRP). ExMC is charged with decreasing the risk of "inability to adequately recognize or treat an ill or injured crew member" for exploration-class missions. In preparation for such missions, ExMC has compiled a large evidence base, previously available only to persons within the NASA community. ExMC has developed the "NASA Human Research Wiki" in an effort to make the ExMC information available to the general public and increase collaboration within and outside of NASA. The ExMC evidence base comprises several types of data, including: (1) information on more than 80 medical conditions that could occur during space flight, derived from several sources and including data on incidence and potential outcomes, as captured in the Integrated Medical Model's (IMM) Clinical Finding Forms (CliFFs); and (2) approximately 25 gap reports that identify "gaps" in knowledge and/or technology that would need to be addressed in order to provide adequate medical support for these novel missions.
Is Low Health Literacy Associated with Increased Emergency Department Utilization and Recidivism?
Griffey, Richard T.; Kennedy, Sarah K.; McGownan, Lucy; Goodman, Melody; Kaphingst, Kimberly A.
2015-01-01
Objectives: To determine whether patients with low health literacy have higher ED utilization and higher ED recidivism than patients with adequate health literacy. Methods: The study was conducted at an urban academic ED with over 95,000 annual visits that is part of a 13-hospital health system, using electronic records that are captured in a central data repository. As part of a larger, cross-sectional, convenience sample study, health literacy testing was performed using the short test of functional health literacy in adults (S-TOFHLA) and standard test thresholds identifying those with inadequate, marginal, and adequate health literacy. The authors collected patients' demographic and clinical data, including items known to affect recidivism. This was a structured electronic record review directed at determining 1) the median number of total ED visits in this health system within a 2-year period, and 2) the proportion of patients with each level of health literacy who had return visits within 3, 7, and 14 days of index visits. Descriptive data for demographics and ED returns are reported, stratified by health literacy level. The Mantel-Haenszel chi-square was used to test whether there is an association between health literacy and ED recidivism. A negative binomial multivariable model was performed to examine whether health literacy affects ED use, including variables significant at the 0.1 alpha level on bivariate analysis and retaining those significant at an alpha of 0.05 in the final model. Results: Among 431 patients evaluated, 13.2% had inadequate, 10% had marginal, and 76.3% had adequate health literacy as identified by the S-TOFHLA. Patients with inadequate health literacy had higher ED utilization compared to those with adequate health literacy (p = 0.03). Variables retained in the final model included S-TOFHLA score, number of medications, having a personal doctor, being a property owner, race, insurance, age, and simple comorbidity score. During the study period, 118 unique patients each made at least one return ED visit within a 14-day period. The proportion of patients with inadequate health literacy making at least one return visit was higher than that of patients with adequate health literacy at 14 days, but was not significantly higher within 3 or 7 days. Conclusions: In this single-center study, higher utilization of the ED by patients with inadequate health literacy when compared to those with adequate health literacy was observed. Patients with inadequate health literacy made a higher number of return visits at 14 days but not at 3 or 7 days. PMID:25308133
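The multivariable model described is a standard negative binomial regression of visit counts on patient covariates. A schematic of how such a model might be fit with statsmodels; the data frame and column names are synthetic stand-ins for the study's variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 431                                   # matches the study's sample size
df = pd.DataFrame({
    "stofhla": rng.integers(0, 37, n),    # S-TOFHLA scores (0-36 range)
    "n_meds": rng.poisson(4, n),          # number of medications
    "age": rng.integers(18, 90, n),
    "ed_visits": rng.poisson(2, n),       # synthetic outcome counts
})
fit = smf.glm("ed_visits ~ stofhla + n_meds + age", data=df,
              family=sm.families.NegativeBinomial()).fit()
print(np.exp(fit.params))                 # rate ratios per unit covariate
```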
Why Compare? A Response to Stephen Lawton.
ERIC Educational Resources Information Center
Gordon, Liz; Pearce, Diane
1993-01-01
Aims to stimulate interest in comparative education policy analysis by critiquing a paper by Stephen Lawton. Such comparative analysis is important in understanding neoliberal education reforms, but more work is needed to provide adequate categories for analysis. Lawton's categories, reformulated here as efficiency, managing and provider capture,…
Disaggregating soil erosion processes within an evolving experimental landscape
USDA-ARS?s Scientific Manuscript database
Soil-mantled landscapes subjected to rainfall, runoff events, and downstream base level adjustments will erode and evolve in time and space. Yet the precise mechanisms for soil erosion also will vary, and such variations may not be adequately captured by soil erosion prediction technology. This st...
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
NASA Astrophysics Data System (ADS)
Tseng, Yolanda D.; Wootton, Landon; Nyflot, Matthew; Apisarnthanarax, Smith; Rengan, Ramesh; Bloch, Charles; Sandison, George; St. James, Sara
2018-01-01
Four dimensional computed tomography (4DCT) scans are routinely used in radiation therapy to determine the internal treatment volume for targets that are moving (e.g. lung tumors). The use of these studies has allowed clinicians to create target volumes based upon the motion of the tumor during the imaging study. The purpose of this work is to determine if a target volume based on a single 4DCT scan at simulation is sufficient to capture thoracic motion. Phantom studies were performed to determine expected differences between volumes contoured on 4DCT scans and those on the evaluation CT scans (slow scans). Evaluation CT scans acquired during treatment of 11 patients were compared to the 4DCT scans used for treatment planning. The images were assessed to determine if the target remained within the target volume determined during the first 4DCT scan. A total of 55 slow scans were compared to the 11 planning 4DCT scans. Small differences were observed in phantom between the 4DCT volumes and the slow scan volumes, with a maximum of 2.9%, that can be attributed to minor differences in contouring and the ability of the 4DCT scan to adequately capture motion at the apex and base of the motion trajectory. Larger differences were observed in the patients studied, up to a maximum volume difference of 33.4%. These results demonstrate that a single 4DCT scan is not adequate to capture all thoracic motion throughout treatment.
Modeling of the competition life cycle using the software complex of cellular automata PyCAlab
NASA Astrophysics Data System (ADS)
Berg, D. B.; Beklemishev, K. A.; Medvedev, A. N.; Medvedeva, M. A.
2015-11-01
The aim of this work is to develop a numerical model of the competition life cycle using the PyCAlab cellular automata software complex. The model is based on the general patterns of growth of various systems in resource-limited settings. Examples show that the period of transition from unlimited growth of the market agents to the stage of competitive growth takes quite a long time and may be characterized as monotonic. During this period, two main strategies of competitive selection coexist: 1) capture of maximum market space at any reasonable cost; 2) saving by reducing costs. The obtained results allow concluding that the competitive strategies of companies must combine the two mentioned types of behavior, and this issue needs to be given adequate attention in the academic literature on management. The created numerical model may be used for market research when developing strategies for the promotion of new goods and services.
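The "general patterns of growth under resource limits" that such models build on are logistic: near-exponential early growth bends into competitive saturation as agents exhaust free market space. A scalar sketch of that transition (the CA model distributes this dynamic over a grid; parameters here are illustrative):

```python
r, K, n, dt = 0.5, 1000.0, 10.0, 0.1   # growth rate, capacity, initial size
sizes = []
for _ in range(300):
    n += dt * r * n * (1.0 - n / K)    # logistic: early ~exponential growth,
    sizes.append(n)                    # then slow, monotonic approach to K

# the long "transition period" is the stretch where n/K climbs from ~0.1 to ~0.9
transition = [i for i, v in enumerate(sizes) if 0.1 < v / K < 0.9]
print(len(transition) * dt, "time units in the competitive-transition phase")
```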
Assessing the multidimensional and hierarchical structure of SERVQUAL.
Ma, Jun; Harvey, Milton E; Hu, Michael Y
2007-10-01
Parasuraman, Zeithaml, and Berry introduced SERVQUAL in 1998 as a scale to measure service quality. Since then, researchers have proposed several variations. This study examines the development of the tool. Marketing researchers have first challenged the conceptualization of a perceptions-expectations gap and have concluded that the performance-based measures are adequate to capture consumers' perception of service quality. Some researchers have argued that the five dimensions of the SERVQUAL scale only focus on the process of service delivery and have extended the SERVQUAL scale into six dimensions by including the service outcome dimension. Others have proposed that service quality is a multilevel construct and should be measured accordingly. From a sample of 467 undergraduate students data on service quality toward up-scale restaurants were collected. Using the structural equation approach, two measurement models of service quality were compared, the extended SERVQUAL model and the restructured multilevel SERVQUAL model. Analysis suggested that the latter model fits the data better than the extended one.
Cheng, Ryan R; Hawk, Alexander T; Makarov, Dmitrii E
2013-02-21
Recent experiments showed that the reconfiguration dynamics of unfolded proteins are often adequately described by simple polymer models. In particular, the Rouse model with internal friction (RIF) captures internal friction effects as observed in single-molecule fluorescence correlation spectroscopy (FCS) studies of a number of proteins. Here we use RIF, and its non-free draining analog, Zimm model with internal friction, to explore the effect of internal friction on the rate with which intramolecular contacts can be formed within the unfolded chain. Unlike the reconfiguration times inferred from FCS experiments, which depend linearly on the solvent viscosity, the first passage times to form intramolecular contacts are shown to display a more complex viscosity dependence. We further describe scaling relationships obeyed by contact formation times in the limits of high and low internal friction. Our findings provide experimentally testable predictions that can serve as a framework for the analysis of future studies of contact formation in proteins.
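The defining feature of the RIF picture is that each normal-mode relaxation time gains the same additive internal-friction timescale, so the solvent-dependent and solvent-independent contributions separate cleanly; this is what makes the linear-in-viscosity reconfiguration time, with an offset, a signature of internal friction. A schematic sketch of that mode spectrum under this assumption (illustrative parameters, not the paper's):

```python
import numpy as np

def rif_mode_times(tau_rouse, tau_int, n_modes, eta_ratio=1.0):
    """Mode relaxation times tau_p = (eta/eta0) * tau_R / p**2 + tau_i:
    the solvent part scales with viscosity, the internal-friction offset
    tau_i does not -- the separation probed by viscosity-variation studies."""
    p = np.arange(1, n_modes + 1)
    return eta_ratio * tau_rouse / p**2 + tau_int

print(rif_mode_times(tau_rouse=1.0, tau_int=0.2, n_modes=5))
print(rif_mode_times(tau_rouse=1.0, tau_int=0.2, n_modes=5, eta_ratio=0.5))
```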
Debates—Perspectives on socio-hydrology: Modeling flood risk as a public policy problem
NASA Astrophysics Data System (ADS)
Gober, Patricia; Wheater, Howard S.
2015-06-01
Socio-hydrology views human activities as endogenous to water system dynamics; it is the interaction between human and biophysical processes that threatens the viability of current water systems through positive feedbacks and unintended consequences. Di Baldassarre et al. implement socio-hydrology as a flood risk problem using the concept of social memory as a vehicle to link human perceptions to flood damage. Their mathematical model has heuristic value in comparing potential flood damages in green versus technological societies. It can also support communities in exploring the potential consequences of policy decisions and evaluating critical policy tradeoffs, for example, between flood protection and economic development. The concept of social memory does not, however, adequately capture the social processes whereby public perceptions are translated into policy action, including the pivotal role played by the media in intensifying or attenuating perceived flood risk, the success of policy entrepreneurs in keeping flood hazard on the public agenda during short windows of opportunity for policy action, and different societal approaches to managing flood risk that derive from cultural values and economic interests. We endorse the value of seeking to capture these dynamics in a simplified conceptual framework, but favor a broader conceptualization of socio-hydrology that includes a knowledge exchange component, including the way modeling insights and scientific results are communicated to floodplain managers. The social processes used to disseminate the products of socio-hydrological research are as important as the research results themselves in determining whether modeling is used for real-world decision making.
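A toy version of the social-memory feedback being debated can be written in a few lines: memory decays exponentially between floods, jumps when damage occurs, and damage in turn shrinks with memory through preparedness. This is a deliberately simplified caricature of the Di Baldassarre et al. structure, not their published equations, and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, dt, memory = 0.05, 1.0, 0.0             # memory decay rate (1/yr), years
for year in range(100):
    memory *= np.exp(-mu * dt)              # society forgets between events
    if rng.random() < 0.05:                 # a flood year (5% annual chance)
        damage = 1.0 - 0.8 * memory         # preparedness (memory) cuts losses
        memory = min(1.0, memory + damage)  # big losses are long remembered
        print(f"year {year:3d}: damage {damage:.2f}, memory -> {memory:.2f}")
```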
NASA Technical Reports Server (NTRS)
Reznick, Steve
1988-01-01
Transonic Euler/Navier-Stokes computations are accomplished for wing-body flow fields using a computer program called Transonic Navier-Stokes (TNS). The wing-body grids are generated using a program called ZONER, which subdivides a coarse grid about a fighter-like aircraft configuration into smaller zones, which are tailored to local grid requirements. These zones can be either finely clustered for capture of viscous effects, or coarsely clustered for inviscid portions of the flow field. Different equation sets may be solved in the different zone types. This modular approach also affords the opportunity to modify a local region of the grid without recomputing the global grid. This capability speeds up the design optimization process when quick modifications to the geometry definition are desired. The solution algorithm embodied in TNS is implicit, and is capable of capturing pressure gradients associated with shocks. The algebraic turbulence model employed has proven adequate for viscous interactions with moderate separation. Results confirm that the TNS program can successfully be used to simulate transonic viscous flows about complicated 3-D geometries.
NASA Technical Reports Server (NTRS)
Stoeger, W. R.; Pacholczyk, A. G.; Stepinski, T. F.
1992-01-01
The extent to which individual holes in a cluster of black holes with a mass spectrum can liberate and accrete the resulting material by tidally disrupting stars they encounter, or by capturing stars as binary companions, is studied. It is found that the smaller black holes in 'the halo' of such clusters can adequately supply themselves at rates Ṁ_h ≳ 0.0001 (Ṁ_h)_crit, and up to 0.05 (Ṁ_h)_crit for the smallest holes, by tidal disruption, as long as the cluster is embedded in a distribution of stars of relatively high density (not less than 0.1 M_cl/pc³), and as long as the entire cluster of stars is not too compact (not less than 0.5 pc in size). Consideration is given to modifications this 'internal' mode of supply introduces in the spectrum emitted by such black hole clusters, and to the current status of their viability as models for AGN and QSOs in light of dynamical studies by Quinlan and Shapiro (1987, 1989).
Stage-discharge relationship in tidal channels
NASA Astrophysics Data System (ADS)
Kearney, W. S.; Mariotti, G.; Deegan, L.; Fagherazzi, S.
2016-12-01
Long-term records of the flow of water through tidal channels are essential to constrain the budgets of sediments and biogeochemical compounds in salt marshes. Statistical models that relate discharge to water level allow the estimation of such records from more easily obtained records of water stage in the channel. While there is clearly structure in the stage-discharge relationship, its nonlinearity and nonstationarity complicate the construction of statistical stage-discharge models with adequate performance for discharge estimation and uncertainty quantification. Here we compare four different types of stage-discharge models, each of which is designed to capture different characteristics of the stage-discharge relationship. We estimate and validate each of these models on a two-month time series of stage and discharge obtained with an Acoustic Doppler Current Profiler in a salt marsh channel. We find that the best performance is obtained by models that account for the nonlinear and time-varying nature of the stage-discharge relationship. Good performance can also be obtained from a simplified version of these models, which approximates the fully nonlinear and time-varying models with a piecewise linear formulation.
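As an example of the simplest member of such a model family, a piecewise-linear rating with separate flood- and ebb-limb coefficients can be fit by ordinary least squares; time-varying models generalize this by letting the coefficients evolve. A sketch on synthetic tidal data (the rating coefficients and tide are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 60 * 24 * 3600, 600.0)            # two months, 10-min steps (s)
stage = 1.0 + 0.8 * np.sin(2 * np.pi * t / 44712)  # M2-like tide, stage in m
rising = np.gradient(stage) > 0                    # flood vs ebb limb
discharge = np.where(rising, 2.0, -1.7) * stage + rng.normal(0, 0.1, t.size)

# piecewise-linear rating: fit Q = a*h + b separately on each limb
for limb, mask in [("flood", rising), ("ebb", ~rising)]:
    A = np.column_stack([stage[mask], np.ones(mask.sum())])
    (a, b), *_ = np.linalg.lstsq(A, discharge[mask], rcond=None)
    print(f"{limb}: Q ~ {a:.2f}*h + {b:.2f}")
```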
An Analysis of Aims and the Educational "Event"
ERIC Educational Resources Information Center
den Heyer, Kent
2015-01-01
In this article, the author explores key distinctions relevant to aims talk in education. He argues that present formulations of aims fail to adequately capture or speak to several overlapping domains involved in schooling: qualification, socialization, and the educational in the form of subjectification (Biesta, 2010). Drawing off Egan and Biesta…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-03
... maximum of 15 points, based upon significant risk factors that are not adequately captured in the... severity score could be adjusted, up or down, by a maximum of 15 points, based on significant risk factors... Risk (VaR)/Tier 1 capital--and one additional factor to the ability to withstand funding-related stress...
The complexity of the components and their interactions that characterize children’s health and well-being are not adequately captured by current public health paradigms. Children are exposed to combinations of chemical and non-chemical stressors from their built, natural, ...
17 CFR 38.552 - Elements of an acceptable audit trail program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...
17 CFR 38.552 - Elements of an acceptable audit trail program.
Code of Federal Regulations, 2013 CFR
2013-04-01
... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...
Space and Place: Urban Parents' Geographical Preferences for Schools
ERIC Educational Resources Information Center
Bell, Courtney A.
2007-01-01
Prior research documents the almost universal preference for schools that are "convenient." Drawing on longitudinal interview data gathered from 36 urban parents, I argue parents' preference for "convenient" schools is more complex than previously understood. Conceptions of geography used by policy makers do not adequately capture the ways in…
Variability in Photos of the Same Face
ERIC Educational Resources Information Center
Jenkins, Rob; White, David; Van Montfort, Xandra; Burton, A. Mike
2011-01-01
Psychological studies of face recognition have typically ignored within-person variation in appearance, instead emphasising differences "between" individuals. Studies typically assume that a photograph adequately captures a person's appearance, and for that reason most studies use just one, or a small number of photos per person. Here we show that…
What Research Has To Say about Reading Instruction. Third Edition.
ERIC Educational Resources Information Center
Farstrup, Alan E., Ed.; Samuels, S. Jay, Ed.
Maintaining a balance among theory, research, and effective classroom practice without presenting a formulaic view of good instruction or overly theoretical discussions in which practical applications of research findings are not adequately explored, the 17 chapters in this book capture the best evidence-based thinking of experienced researchers…
Evaluating diagnosis-based case-mix measures: how well do they apply to the VA population?
Rosen, A K; Loveland, S; Anderson, J J; Rothendler, J A; Hankin, C S; Rakovski, C C; Moskowitz, M A; Berlowitz, D R
2001-07-01
Diagnosis-based case-mix measures are increasingly used for provider profiling, resource allocation, and capitation rate setting. Measures developed in one setting may not adequately capture the disease burden in other settings. To examine the feasibility of adapting two such measures, Adjusted Clinical Groups (ACGs) and Diagnostic Cost Groups (DCGs), to the Department of Veterans Affairs (VA) population. A 60% random sample of veterans who used health care services during FY 1997 was obtained from VA inpatient and outpatient administrative databases. A split-sample technique was used to obtain a 40% sample (n = 1,046,803) for development and a 20% sample (n = 524,461) for validation. Concurrent ACG and DCG risk adjustment models, using 1997 diagnoses and demographics to predict FY 1997 utilization (ambulatory provider encounters, and service days, i.e., the sum of a patient's inpatient and outpatient visit days), were fitted and cross-validated. Patients were classified into groupings that indicated a population with multiple psychiatric and medical diseases. Model R-squares explained between 6% and 32% of the variation in service utilization. Although reparameterized models did better in predicting utilization than models with external weights, none of the models was adequate in characterizing the entire population. For predicting service days, DCGs were superior to ACGs in most categories, whereas ACGs did better at discriminating among veterans who had the lowest utilization. Although "off-the-shelf" case-mix measures perform moderately well when applied to another setting, modifications may be required to accurately characterize a population's disease burden with respect to the resource needs of all patients.
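The split-sample design described above follows a standard develop-then-validate pattern. A minimal sketch on simulated data is shown below; the predictors and coefficients are invented stand-ins, not the VA variables or the ACG/DCG groupers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Invented case-mix predictors: an age column plus 0/1 diagnostic-group
# flags (illustrative stand-ins, not the actual grouper output)
X = rng.integers(0, 2, size=(10000, 30)).astype(float)
X[:, 0] = rng.normal(65, 10, 10000)
y = X @ rng.uniform(0.0, 3.0, 30) + rng.normal(0, 10, 10000)

# A 2:1 development/validation split mirrors the 40%/20% sample design
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=1/3,
                                              random_state=0)
model = LinearRegression().fit(X_dev, y_dev)

# Cross-validation step: fit on the development sample, score held-out data
print("validation R^2:", r2_score(y_val, model.predict(X_val)))
```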
The development of a survey instrument for community health improvement.
Bazos, D A; Weeks, W B; Fisher, E S; DeBlois, H A; Hamilton, E; Young, M J
2001-01-01
OBJECTIVE: To develop a survey instrument that could be used both to guide and evaluate community health improvement efforts. DATA SOURCES/STUDY SETTING: A randomized telephone survey was administered to a sample of about 250 residents in two communities in Lehigh Valley, Pennsylvania in the fall of 1997. METHODS: The survey instrument was developed by health professionals representing diverse health care organizations. This group worked collaboratively over a period of two years to (1) select a conceptual model of health as a foundation for the survey; (2) review relevant literature to identify indicators that adequately measured the health constructs within the chosen model; (3) develop new indicators where important constructs lacked specific measures; and (4) pilot test the final survey to assess the reliability and validity of the instrument. PRINCIPAL FINDINGS: The Evans and Stoddart Field Model of the Determinants of Health and Well-Being was chosen as the conceptual model within which to develop the survey. The Field Model depicts nine domains important to the origins and production of health and provides a comprehensive framework from which to launch community health improvement efforts. From more than 500 potential indicators we identified 118 survey questions that reflected the multiple determinants of health as conceptualized by this model. Sources from which indicators were selected include the Behavior Risk Factor Surveillance Survey, the National Health Interview Survey, the Consumer Assessment of Health Plans Survey, and the SF-12 Summary Scales. The work group developed 27 new survey questions for constructs for which we could not locate adequate indicators. Twenty-five questions in the final instrument can be compared to nationally published norms or benchmarks. The final instrument was pilot tested in 1997 in two communities. Administration time averaged 22 minutes with a response rate of 66 percent. Reliability of new survey questions was adequate. Face validity was supported by previous findings from qualitative and quantitative studies. CONCLUSIONS: We developed, pilot tested, and validated a survey instrument designed to provide more comprehensive and timely data to communities for community health assessments. This instrument allows communities to identify and measure critical domains of health that have previously not been captured in a single instrument. PMID:11508639
Gasdynamic Model of Turbulent Combustion in TNT Explosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, A L; Bell, J B; Beckner, V E
2010-01-08
A model is proposed to simulate turbulent combustion in confined TNT explosions. It is based on: (i) the multi-component gasdynamic conservation laws, (ii) a fast-chemistry model for TNT-air combustion, (iii) a thermodynamic model for frozen reactants and equilibrium products, (iv) a high-order Godunov scheme providing a non-diffusive solution of the governing equations, and (v) an ILES approach whereby adaptive mesh refinement is used to capture the energy-bearing scales of the turbulence on the grid. Three-dimensional numerical simulations of explosion fields from 1.5-g PETN/TNT charges were performed. Explosions in six different chambers were studied: three calorimeters (volumes of 6.6-l, 21.2-l and 40.5-l with L/D = 1), and three tunnels (L/D = 3.8, 4.65 and 12.5 with volumes of 6.3-l), to investigate the influence of chamber volume and geometry on the combustion process. Predicted pressure histories were quite similar to measured pressure histories for all cases studied. Experimentally, the mass fraction of products, Y_p^exp, reached a peak value of 88% at an excess air ratio of twice stoichiometric, and then decayed with increasing air dilution; mass fractions Y_p^calc computed from the numerical simulations followed similar trends. Based on this agreement, we conclude that the dominant effect controlling the rate of TNT combustion with air is the turbulent mixing rate; the ILES approach along with the fast-chemistry model used here adequately captures this effect.
Statistical Compression of Wind Speed Data
NASA Astrophysics Data System (ADS)
Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.
2017-12-01
In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of the seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into nine regions, and the model is fit independently to each of these. We further discuss a recent refinement of the model, which relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.
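As a rough stand-in for the paper's generator (which is non-Gaussian and more elaborate), the sketch below fits a Weibull distribution to each region's wind speeds so that only the fitted parameters need to be stored; the region names and data are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-in for daily wind speeds from 3 ensemble members in each of 9 regions
regions = {f"region_{i}": rng.weibull(2.0, 3 * 365) * 8.0 for i in range(9)}

# "Compression": keep only the fitted Weibull (shape, loc=0, scale) per region
compressed = {name: stats.weibull_min.fit(speeds, floc=0)
              for name, speeds in regions.items()}

# "Decompression": draw synthetic daily wind speeds from the fitted model
shape, _, scale = compressed["region_0"]
synthetic = stats.weibull_min.rvs(shape, loc=0, scale=scale, size=365)
print(compressed["region_0"], round(synthetic.mean(), 2))
```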
Assal, Timothy J.; Anderson, Patrick J.; Sibold, Jason
2015-01-01
The availability of land cover data at local scales is an important component in forest management and monitoring efforts. Regional land cover data seldom provide detailed information needed to support local management needs. Here we present a transferable framework to model forest cover by major plant functional type using aerial photos, multi-date Système Pour l’Observation de la Terre (SPOT) imagery, and topographic variables. We developed probability of occurrence models for deciduous broad-leaved forest and needle-leaved evergreen forest using logistic regression in the southern portion of the Wyoming Basin Ecoregion. The model outputs were combined into a synthesis map depicting deciduous and coniferous forest cover type. We evaluated the models and synthesis map using a field-validated, independent data source. Results showed strong relationships between forest cover and model variables, and the synthesis map was accurate with an overall correct classification rate of 0.87 and Cohen’s kappa value of 0.81. The results suggest our method adequately captures the functional type, size, and distribution pattern of forest cover in a spatially heterogeneous landscape.
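A minimal sketch of the probability-of-occurrence workflow follows, assuming simulated pixel predictors in place of the SPOT bands and topographic variables used by the authors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
# Simulated pixel predictors standing in for SPOT bands plus topography
X = rng.normal(size=(5000, 6))
y = (X @ np.array([1.5, -0.8, 0.6, 0.3, 1.0, -0.4])
     + rng.normal(0, 1, 5000)) > 0          # "conifer present" labels

# Probability-of-occurrence model for one plant functional type
clf = LogisticRegression().fit(X, y)
predicted = clf.predict_proba(X)[:, 1] > 0.5

# Accuracy assessment as in the abstract: overall accuracy and kappa
print("overall accuracy:", (predicted == y).mean())
print("Cohen's kappa:", cohen_kappa_score(y, predicted))
```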
Key Provenance of Earth Science Observational Data Products
NASA Astrophysics Data System (ADS)
Conover, H.; Plale, B.; Aktas, M.; Ramachandran, R.; Purohit, P.; Jensen, S.; Graves, S. J.
2011-12-01
As the sheer volume of data increases, particularly evidenced in the earth and environmental sciences, local arrangements for sharing data need to be replaced with reliable records about the what, who, how, and where of a data set or collection. This is frequently called the provenance of a data set. While observational data processing systems in the earth sciences have a long history of capturing metadata about the processing pipeline, current processes are limited in both what is captured and how it is disseminated to the science community. Provenance capture plays a role in scientific data preservation and stewardship precisely because it can automatically capture and represent a coherent picture of the what, how and who of a particular scientific collection. It reflects the transformations that a data collection underwent prior to its current form and the sequence of tasks that were executed and data products applied to generate a new product. In the NASA-funded Instant Karma project, we examine provenance capture in earth science applications, specifically the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) Science Investigator-led Processing System (SIPS). The project is integrating the Karma provenance collection and representation tool into the AMSR-E SIPS production environment, with an initial focus on Sea Ice. This presentation will describe capture and representation of provenance that is guided by the Open Provenance Model (OPM). Several things have become clear during the course of the project to date. One is that core OPM entities and relationships are not adequate for expressing the kinds of provenance that are of interest in the science domain. OPM supports name-value pair annotations that can be used to augment what is known about the provenance entities and relationships, but in Karma, annotations cannot be added during capture, only after the fact. This limits the capture system's ability to record something it learned about an entity after the event of its creation in the provenance record. We will discuss extensions to the Open Provenance Model (OPM) and modifications to the Karma tool suite to address this issue, more efficient representations of earth science provenance, and the definition of metadata structures for capturing related knowledge about the data products and science algorithms used to generate them. Use scenarios for provenance information are an active topic of investigation. It has additionally become clear through the project that not all provenance is created equal. In processing pipelines, some provenance is repetitive and uninteresting. Because of the volume of provenance, this obscures the interesting pieces of provenance. Methodologies to reveal science-relevant provenance will be presented, along with a preview of the AMSR-E Provenance Browser.
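To illustrate the kind of structure involved, the sketch below defines OPM-style nodes and edges that accept annotations at capture time (the capability the abstract finds lacking in Karma); the node identifiers and the pipeline step are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class OPMNode:
    """Artifact, process, or agent; annotations are attachable at capture
    time rather than only after the fact (the limitation noted above)."""
    node_id: str
    node_type: str                        # "artifact" | "process" | "agent"
    annotations: Dict[str, str] = field(default_factory=dict)

@dataclass
class OPMEdge:
    """OPM causal edge, pointing from an effect back to its cause."""
    cause: str
    effect: str
    relation: str                         # e.g. "used", "wasGeneratedBy"
    annotations: Dict[str, str] = field(default_factory=dict)

# Hypothetical pipeline step: an L2 sea-ice product derived from a granule
nodes = [
    OPMNode("amsre_l1_granule", "artifact", {"sensor": "AMSR-E"}),
    OPMNode("seaice_algorithm_v2", "process", {"version": "2.0"}),
    OPMNode("seaice_l2_product", "artifact"),
]
edges = [
    OPMEdge(cause="amsre_l1_granule", effect="seaice_algorithm_v2",
            relation="used"),
    OPMEdge(cause="seaice_algorithm_v2", effect="seaice_l2_product",
            relation="wasGeneratedBy"),
]
print(len(nodes), "nodes,", len(edges), "edges")
```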
Surprises from the field: Novel aspects of aeolian saltation observed under natural turbulence
NASA Astrophysics Data System (ADS)
Martin, R. L.; Kok, J. F.; Chamecki, M.
2015-12-01
The mass flux of aeolian (wind-blown) sediment transport - critical for understanding earth and planetary geomorphology, dust generation, and soil stability - is difficult to predict. Recent work suggests that competing models for saltation (the characteristic hopping of aeolian sediment) fail because they do not adequately account for wind turbulence. To address this issue, we performed field deployments measuring high-frequency co-variations of aeolian saltation and near-surface winds at multiple sites under a range of conditions. Our observations yield several novel findings not currently captured by saltation models: (1) Saltation flux displays no significant lag relative to horizontal wind velocity; (2) Characteristic height of the saltation layer remains constant with changes in shear velocity; and (3) During saltation, the vertical profile of mean horizontal wind velocity is steeper than expected from the Reynolds stress. We examine how the interactions between saltation and turbulence in field settings could explain some of these surprising observations.
Exploring the Factor Structure of Neurocognitive Measures in Older Individuals
Santos, Nadine Correia; Costa, Patrício Soares; Amorim, Liliana; Moreira, Pedro Silva; Cunha, Pedro; Cotter, Jorge; Sousa, Nuno
2015-01-01
Here we focus on factor analysis from a best-practices point of view, by investigating the factor structure of neuropsychological tests and using the results obtained to illustrate how to choose a reasonable solution. The sample (n=1051 individuals) was randomly divided into two groups: one for exploratory factor analysis (EFA) and principal component analysis (PCA), to investigate the number of factors underlying the neurocognitive variables; the second to test the “best fit” model via confirmatory factor analysis (CFA). For the exploratory step, three extraction (maximum likelihood, principal axis factoring and principal components) and two rotation (orthogonal and oblique) methods were used. The analysis methodology allowed us to explore how different cognitive/psychological tests correlated with and discriminated between dimensions, indicating that to capture latent structures in similar sample sizes and measures, with approximately normal data distribution, reflective models with oblimin rotation might prove the most adequate. PMID:25880732
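A minimal EFA sketch using the third-party factor_analyzer package is given below, with simulated scores standing in for the neurocognitive variables; the choice of three factors is illustrative.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package

rng = np.random.default_rng(3)
# Simulated test scores with a 3-factor latent structure (illustrative)
latent = rng.normal(size=(500, 3))
loadings = rng.uniform(0.4, 0.9, size=(3, 10))
scores = latent @ loadings + rng.normal(0, 0.5, size=(500, 10))
data = pd.DataFrame(scores, columns=[f"test_{i}" for i in range(10)])

# Maximum-likelihood extraction with an oblique (oblimin) rotation, the
# combination the abstract points toward for correlated dimensions
fa = FactorAnalyzer(n_factors=3, method="ml", rotation="oblimin")
fa.fit(data)
print(fa.loadings_)              # pattern matrix
print(fa.get_factor_variance())  # variance explained per factor
```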
Thinking through cancer risk: characterizing smokers' process of risk determination.
Hay, Jennifer; Shuk, Elyse; Cruz, Gustavo; Ostroff, Jamie
2005-10-01
The perception of cancer risk motivates cancer risk reduction behaviors. However, common measurement strategies for cancer risk perceptions, which involve numerical likelihood estimates, do not adequately capture individuals' thoughts and feelings about cancer risk. To guide the development of novel measurement strategies, the authors used semistructured interviews to examine the thought processes used by smokers (N = 15) as they considered their cancer risk. They used grounded theory to guide systematic data coding and develop a heuristic model describing smokers' risk perception process that includes a cognitive, primarily rational process whereby salient personal risk factors for cancer are considered and combined, and an affective/attitudinal process, which shifts risk perceptions either up or down. The model provides a tentative explanation concerning how people hold cancer risk perceptions that diverge from rational assessment of their risks and will be useful in guiding the development of non-numerical measurement strategies for cancer risk perceptions.
Status and Distribution of Wintering Waterfowl in Narragansett Bay, Rhode Island, 2005-2014
Estuaries on the east coast of the U.S. provide habitats used by a number of waterfowl species in winter. Aerial surveys of many of these wintering areas have been conducted over the past 50 plus years, but data from larger-scale surveys may not adequately capture local fluctuat...
Toward a Functional and Culturally Salient Definition of Literacy
ERIC Educational Resources Information Center
Ntiri, Daphne W.
2009-01-01
This paper sets out to show that the concept of literacy is evolving and its conventional definition, "reading and writing," is no longer adequate. Literacy is now subjected to constant redefinition to reflect criteria for social, political, religious, and economic relevance and expectations. The definition must be reframed to capture the…
Nursing Workload and the Changing Health Care Environment: A Review of the Literature
ERIC Educational Resources Information Center
Neill, Denise
2011-01-01
Changes in the health care environment have impacted nursing workload, quality of care, and patient safety. Traditional nursing workload measures do not guarantee efficiency, nor do they adequately capture the complexity of nursing workload. Review of the literature indicates nurses perceive the quality of their work has diminished. Research has…
ERIC Educational Resources Information Center
Smith, Leann V.; Cokley, Kevin
2016-01-01
The authors investigated the psychometric properties of the Social Identities and Attitudes Scale developed by Picho and Brown, which captures an individual's vulnerability to Stereotype Threat effects. Confirmatory factor analyses and group invariance tests conducted on a diverse sample of 516 college students revealed adequate reliability and…
Picturing Leisure: Using Photovoice to Understand the Experience of Leisure and Dementia
ERIC Educational Resources Information Center
Genoe, M. Rebecca; Dupuis, Sherry L.
2013-01-01
Interviews and participant observation are commonly used to explore the experience of dementia, yet may not adequately capture perspectives of persons with dementia as communication changes. We used photovoice (i.e., using cameras in qualitative research) along with interviews and participant observation to explore meanings of leisure for persons…
Applicable Testing and Associated Challenges
DOT National Transportation Integrated Search
2014-12-04
NovAtel Context to Set 1 GPS L1 Only - NovAtel receivers are wideband, at a minimum of 20MHz to adequately capture the full L1 CA main lobe - To achieve 4 cm code and 0.5 mm carrier phase measurements on GPS L1 - GPS L1 only users are typically a SW ...
Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.
2000-01-01
Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (±SE). Survival rates of banded and radio-marked individuals were not different (alpha-hat[S_radioed, S_banded] = log[S-hat_radioed / S-hat_banded] = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi-hat = 0.911 ± 0.020; alpha-hat[psi_11, psi_22] = 0.0161 ± 0.047), and recapture rates (p-hat = 0.097 ± 0.016) of banded and radio-marked individuals were not different (alpha-hat[p_radioed, p_banded] = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] < 2%) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval). To provide adequate data for useful inference from this model, study designs should seek a minimum of 25 animals of each marking type (banded or observed via telemetry) in each time period and geographic stratum.
Clearing margin system in the futures markets—Applying the value-at-risk model to Taiwanese data
NASA Astrophysics Data System (ADS)
Chiu, Chien-Liang; Chiang, Shu-Mei; Hung, Jui-Cheng; Chen, Yu-Lung
2006-07-01
This article sets out to investigate whether the TAIFEX has an adequate clearing margin adjustment system, using unconditional coverage and conditional coverage tests and the mean relative scaled bias to assess the performance of three value-at-risk (VaR) models (i.e., the TAIFEX, RiskMetrics and GARCH-t). For the same model, original and absolute returns are compared to explore which can accurately capture the true risk. For the same return, daily and tiered adjustment methods are examined to evaluate which corresponds best to risk. The results indicate that the clearing margin adjustment of the TAIFEX cannot reflect true risks. The adjustment rules, including the use of absolute returns and tiered adjustment of the clearing margin, have distorted VaR-based margin requirements. In addition, the results suggest that the TAIFEX should use original returns to compute VaR and a daily adjustment system to set the clearing margin. This approach would improve the efficiency of funds operation and the liquidity of the futures markets.
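Unconditional coverage is commonly assessed with Kupiec's proportion-of-failures test; a minimal implementation of that test (not the authors' code) might look like this:

```python
import numpy as np
from scipy import stats

def kupiec_pof(exceedances, T, p=0.01):
    """Kupiec proportion-of-failures (unconditional coverage) test.
    Returns the p-value for H0: the true VaR exceedance rate equals p."""
    x, pi = exceedances, exceedances / T
    # Likelihood ratio comparing the nominal rate p to the observed rate pi
    lr = -2 * ((T - x) * np.log(1 - p) + x * np.log(p)
               - (T - x) * np.log(1 - pi) - x * np.log(pi))
    return 1 - stats.chi2.cdf(lr, df=1)

# 18 VaR exceedances in 1000 days against a nominal 1% level: rejected
print(kupiec_pof(18, 1000, p=0.01))  # ~0.02, below a 5% significance level
```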
Large-Eddy Simulation of Waked Turbines in a Scaled Wind Farm Facility
NASA Astrophysics Data System (ADS)
Wang, J.; McLean, D.; Campagnolo, F.; Yu, T.; Bottasso, C. L.
2017-05-01
The aim of this paper is to present the numerical simulation of waked scaled wind turbines operating in a boundary layer wind tunnel. The simulation uses a LES-lifting-line numerical model. An immersed boundary method in conjunction with an adequate wall model is used to represent the effects of both the wind turbine nacelle and tower, which are shown to have a considerable effect on the wake behavior. Multi-airfoil data calibrated at different Reynolds numbers are used to account for the lift and drag characteristics at the low and varying Reynolds conditions encountered in the experiments. The present study focuses on low turbulence inflow conditions and inflow non-uniformity due to wind tunnel characteristics, while higher turbulence conditions are considered in a separate study. The numerical model is validated by using experimental data obtained during test campaigns conducted with the scaled wind farm facility. The simulation and experimental results are compared in terms of power capture, rotor thrust, downstream velocity profiles and turbulence intensity.
A sophisticated simulation for the fracture behavior of concrete material using XFEM
NASA Astrophysics Data System (ADS)
Zhai, Changhai; Wang, Xiaomin; Kong, Jingchang; Li, Shuang; Xie, Lili
2017-10-01
The development of a powerful numerical model to simulate the fracture behavior of concrete material has long been one of the dominant research areas in earthquake engineering. A reliable model should be able to adequately represent the discontinuous characteristics of cracks and simulate various failure behaviors under complicated loading conditions. In this paper, a numerical formulation, which incorporates a sophisticated rigid-plastic interface constitutive model coupling cohesion softening, contact, friction and shear dilatation into the XFEM, is proposed to describe various crack behaviors of concrete material. An effective numerical integration scheme for accurately assembling the contribution to the weak form on both sides of the discontinuity is introduced. The effectiveness of the proposed method has been assessed by simulating several well-known experimental tests. It is concluded that the numerical method can successfully capture the crack paths and accurately predict the fracture behavior of concrete structures. The influence of mode-II parameters on the mixed-mode fracture behavior is further investigated to better determine these parameters.
The Problem of Auto-Correlation in Parasitology
Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick
2012-01-01
Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
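A minimal sketch of the recommended approach follows, fitting a random-intercept mixed model to simulated repeated parasitaemia measurements with statsmodels; the variable names and effect sizes are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
# Simulated repeated measures: parasitaemia tracked over 10 days in 20
# hosts, with a random host effect inducing within-host correlation
hosts = np.repeat(np.arange(20), 10)
day = np.tile(np.arange(10), 20)
y = 2.0 + rng.normal(0, 1, 20)[hosts] + 0.3 * day + rng.normal(0, 0.5, 200)
df = pd.DataFrame({"host": hosts, "day": day, "parasitaemia": y})

# A random intercept per host absorbs the auto-correlation that a simple
# regression on the pooled data would leave in the residuals
fit = smf.mixedlm("parasitaemia ~ day", df, groups=df["host"]).fit()
print(fit.summary())
```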
A two-scale model of radio-frequency electrosurgical tissue ablation
NASA Astrophysics Data System (ADS)
Karaki, Wafaa; Rahul; Lopez, Carlos A.; Borca-Tasciuc, Diana-Andra; De, Suvranu
2017-12-01
Radio-frequency electrosurgical procedures are widely used to simultaneously dissect and coagulate tissue. Experiments suggest that evaporation of cellular and intra-cellular water plays a significant role in the evolution of the temperature field at the tissue level, which is not adequately captured in a single scale energy balance equation. Here, we propose a two-scale model to study the effects of microscale phase change and heat dissipation in response to radiofrequency heating on the tissue level in electrosurgical ablation procedures. At the microscale, the conservation of mass along with thermodynamic and mechanical equilibrium is applied to obtain an equation-of-state relating vapor mass fraction to temperature and pressure. The evaporation losses are incorporated in the macro-level energy conservation and results are validated with mean experimental temperature distributions measured from electrosurgical ablation testing on ex vivo porcine liver at different power settings of the electrosurgical instrument. Model prediction of water loss and its effect on the temperature along with the effect of the mechanical properties on results are evaluated and discussed.
Andrade, Carla Maria Araujo; Araujo Júnior, Edward; Torloni, Maria Regina; Moron, Antonio Fernandes; Guazzelli, Cristina Aparecida Falbo
2016-02-01
To compare the rates of success of two-dimensional (2D) and three-dimensional (3D) sonographic (US) examinations in locating and adequately visualizing levonorgestrel intrauterine devices (IUDs) and to explore factors associated with the unsuccessful viewing on 2D US. Transvaginal 2D and 3D US examinations were performed on all patients 1 month after insertion of levonorgestrel IUDs. The devices were considered adequately visualized on 2D US if both the vertical (shadow, upper and lower extremities) and the horizontal (two echogenic lines) shafts were identified. 3D volumes were also captured to assess the location of levonorgestrel IUDs on 3D US. Thirty women were included. The rates of adequate device visualization were 40% on 2D US (95% confidence interval [CI], 24.6; 57.7) and 100% on 3D US (95% CI, 88.6; 100.0). The device was not adequately visualized in all six women who had a retroflexed uterus, but it was adequately visualized in 12 of the 24 women (50%) who had a nonretroflexed uterus (95% CI, -68.6; -6.8). We found that 3D US is better than 2D US for locating and adequately visualizing levonorgestrel IUDs. Other well-designed studies with adequate power should be conducted to confirm this finding. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Carr, G.
2017-12-01
Real world problems rarely respect disciplinary boundaries. This is particularly apparent in catchments, where knowledge and understanding from many different research disciplines is essential to address the water resource challenges facing society. People are an integral part of any catchment. Therefore a comprehensive understanding of catchment evolution needs to include the social system. Socio-hydrological models that can simulate the co-evolution of human-water systems, for example, with regards to floods and droughts, show great promise in their capacity to capture and understand such systems. Yet, to develop socio-hydrological models into more comprehensive analysis tools that adequately capture the social components of the system, researchers need to embrace interdisciplinary working and multi-disciplinary research teams. By exploring the development of interdisciplinary research in a water programme, several key practices have been identified that support interdisciplinary collaboration. These include clarification, where researchers discuss and re-explain their research or position to expose all the assumptions being made until all involved understand them; harnessing differences, where different opinions and types of knowledge are treated respectfully to minimise tensions and disputes; and boundary setting, where defensible limits to the research enquiry are set with consideration for the restrictions (funds, skills, resources) through negotiation and discussion between the research team members. Focussing on these research practices while conducting interdisciplinary collaborative research into the human-water system is anticipated to support the development of more integrated approaches and models.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zimmermann, N. E.; Poulter, B.
2015-12-01
Simulations of the spatial-temporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate variability. Hydrologic inundation models, such as TOPMODEL, are based on a fundamental parameter known as the compound topographic index (CTI) and provide a computationally cost-efficient approach to simulate global wetland dynamics. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl DGVM, and quantifies uncertainties by comparing three digital elevation model products (HYDRO1k, GMTED, and HydroSHEDS) at different spatial resolution and accuracy on simulated inundation dynamics. We found that calibrating TOPMODEL with a benchmark dataset can help to successfully predict the seasonal and interannual variations of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatio-temporal dynamics of wetlands among the three DEM products. This study demonstrates the feasibility of capturing the spatial heterogeneity of inundation and of estimating seasonal and interannual variations in wetlands by coupling a hydrological module in LSMs with appropriate benchmark datasets. It additionally highlights the importance of an adequate understanding of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
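For reference, the CTI underlying TOPMODEL is usually written ln(a / tan(beta)), where a is the specific catchment area and beta the local slope; a toy computation on a synthetic grid (not the LPJ-wsl implementation) is sketched below.

```python
import numpy as np

def compound_topographic_index(flow_acc, slope_rad, cell_size):
    """CTI = ln(a / tan(beta)): a is the specific catchment area
    (upslope contributing area per unit contour width), beta the slope."""
    a = (flow_acc + 1) * cell_size                   # area per unit width
    tan_beta = np.maximum(np.tan(slope_rad), 1e-6)   # guard flat cells
    return np.log(a / tan_beta)

rng = np.random.default_rng(5)
# Toy grids standing in for DEM-derived flow accumulation and slope
flow_acc = rng.integers(0, 500, size=(100, 100))
slope = np.deg2rad(rng.uniform(0.1, 30.0, size=(100, 100)))

cti = compound_topographic_index(flow_acc, slope, cell_size=90.0)
print(cti.mean())  # flatter, more convergent cells score higher (wetter)
```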
Verhoef, Marja J; Lewith, George; Ritenbaugh, Cheryl; Boon, Heather; Fleishman, Susan; Leis, Anne
2005-09-01
Complementary and alternative medicine (CAM) often consists of whole systems of care (such as naturopathic medicine or traditional Chinese medicine (TCM)) that combine a wide range of modalities to provide individualised treatment. The complexity of these interventions and their potential synergistic effect requires innovative evaluative approaches. Model validity, which encompasses the need for research to adequately address the unique healing theory and therapeutic context of the intervention, is central to whole systems research (WSR). Classical randomised controlled trials (RCTs) are limited in their ability to address this need. Therefore, we propose a mixed methods approach that includes a range of relevant and holistic outcome measures. As the individual components of most whole systems are inseparable, complementary and synergistic, WSR must not focus only on the "active" ingredients of a system. An emerging WSR framework must be non-hierarchical, cyclical, flexible and adaptive, as knowledge creation is continuous, evolutionary and necessitates a continuous interplay between research methods and "phases" of knowledge. Finally, WSR must hold qualitative and quantitative research methods in equal esteem to realize their unique research contribution. Whole systems are complex and therefore no one method can adequately capture the meaning, process and outcomes of these interventions.
Disabled personnel emergency-heating system. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Dine, N.
1974-12-16
This progress report describes a receiving well for the operating fuel supply (two provided) designed within the unit to permit parasitic heat loss from the system to be captured by the fuel supply. The Zero can housing provides adequate volume to accommodate stowage of the fluid-flow umbilicals for connection of the heater system to the tubulated casualty bag liner.
Capture the Human Side of Learning: Data Makeover Puts Students Front and Center
ERIC Educational Resources Information Center
Sharratt, Lyn; Fullan, Michael
2013-01-01
Education is overloaded with programs and data. The growth of digital power has aided and abetted the spread of accountability-driven data--Adequate Yearly Progress, test results for every child in every grade, Common Core standards, formative and summative assessments. Technology accelerates the onslaught of data. All this information goes for…
40 CFR 86.1869-12 - CO2 credits for off-cycle CO2-reducing technologies.
Code of Federal Regulations, 2013 CFR
2013-07-01
... where the CO2 reduction benefit of the technology is not adequately captured on the Federal Test Procedure and/or the Highway Fuel Economy Test. These technologies must have a measurable, demonstrable, and verifiable real-world CO2 reduction that occurs outside the conditions of the Federal Test Procedure and the...
The Development and Psychometric Evaluation of the Brief Resilient Coping Scale
ERIC Educational Resources Information Center
Sinclair, Vaughn G.; Wallston, Kenneth A.
2004-01-01
This article introduces the Brief Resilient Coping Scale (BRCS), a 4-item measure designed to capture tendencies to cope with stress in a highly adaptive manner. Two samples of individuals with rheumatoid arthritis (ns = 90 and 140) provide evidence for the reliability and validity of the BRCS. The BRCS has adequate internal consistency and…
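Internal consistency for a short scale like the BRCS is typically summarized with Cronbach's alpha; a minimal sketch on simulated 4-item responses is given below (the data are invented, not the arthritis samples).

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(6)
# Simulated 1-5 responses to a 4-item scale driven by one latent trait
latent = rng.normal(size=(200, 1))
items = np.clip(np.round(3 + latent + rng.normal(0, 1, (200, 4))), 1, 5)
print(cronbach_alpha(items))  # values near 0.7+ suggest adequate consistency
```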
Polar Bear Population Status in the Southern Beaufort Sea
Regehr, Eric V.; Amstrup, Steven C.; Stirling, Ian
2006-01-01
Polar bears depend entirely on sea ice for survival. In recent years, a warming climate has caused major changes in the Arctic sea ice environment, leading to concerns regarding the status of polar bear populations. Here we present findings from long-term studies of polar bears in the southern Beaufort Sea (SBS) region of the U.S. and Canada, which are relevant to these concerns. We applied open population capture-recapture models to data collected from 2001 to 2006, and estimated there were 1,526 (95% CI = 1,211; 1,841) polar bears in the SBS region in 2006. The number of polar bears in this region was previously estimated to be approximately 1,800. Because precision of earlier estimates was low, our current estimate of population size and the earlier ones cannot be statistically differentiated. For the 2001-06 period, the best fitting capture-recapture model provided estimates of total apparent survival of 0.43 for cubs of the year (COYs), and 0.92 for all polar bears older than COYs. Because the survival rates for older polar bears included multiple sex and age strata, they could not be compared to previous estimates. Survival rates for COYs, however, were significantly lower than estimates derived in earlier studies (P = 0.03). The lower survival of COYs was corroborated by a comparison of the number of COYs per adult female for periods before (1967-89) and after (1990-2006) the winter of 1989-90, when warming temperatures and altered atmospheric circulation caused an abrupt change in sea ice conditions in the Arctic basin. In the latter period, there were significantly more COYs per adult female in the spring (P = 0.02), and significantly fewer COYs per adult female in the autumn (P < 0.001). Apparently, cub production was higher in the latter period, but fewer cubs survived beyond the first 6 months of life. Parallel with declining survival, skull measurements suggested that COYs captured from 1990 to 2006 were smaller than those captured before 1990. Similarly, both skull measurements and body weights suggested that adult males captured from 1990 to 2006 were smaller than those captured before 1990. The smaller stature of males was especially notable because it corresponded with a higher mean age of adult males. Male polar bears continue to grow into their teens, and if adequately nourished, the older males captured in the latter period should have been larger than those captured earlier. In western Hudson Bay, Canada, a significant decline in population size was preceded by observed declines in cub survival and physical stature. The evidence of declining recruitment and body size reported here, therefore, suggests vigilance regarding the future of polar bears in the SBS region.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zimmermann, N. E.; Poulter, B.
2015-11-01
Simulations of the spatial-temporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate variability. Hydrologic inundation models, such as TOPMODEL, are based on a fundamental parameter known as the compound topographic index (CTI) and provide a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl dynamic global vegetation model (DGVM), and quantifies uncertainties by comparing three digital elevation model products (HYDRO1k, GMTED, and HydroSHEDS) at different spatial resolution and accuracy on simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland dataset can help to successfully delineate the seasonal and interannual variations of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatio-temporal dynamics of wetlands among the three DEM products. The estimate of global wetland potential/maximum is ∼10.3 Mkm² (10⁶ km²), with a mean annual maximum of ∼5.17 Mkm² for 1980-2010. This study demonstrates the feasibility of capturing the spatial heterogeneity of inundation and of estimating seasonal and interannual variations in wetlands by coupling a hydrological module in LSMs with appropriate benchmark datasets. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
[Advances in the biomechanics and kinematics of ankle joint sprain].
Zhao, Yong; Wang, Gang
2015-04-01
Ankle sprains are a common condition in clinical orthopedics, ranking first among joint ligament sprains. If treatment is not timely or appropriate, joint pain and instability may develop, and even osteoarthritis may follow. The injury mechanism and anatomical basis of ankle sprains have been fully studied at present, and the diagnostic criteria are clear. With the development of science and technology, biological modeling, three-dimensional finite element analysis, three-dimensional motion capture systems, digital technology, and electromyographic signal studies have been applied to basic research on ankle sprains. Biomechanical and kinematic study of ankle sprain has received adequate attention; this review combines research on the injury mechanism with a survey of progress in the biomechanics and kinematics of ankle joint sprain.
Animal Models of Post-Traumatic Stress Disorder and Recent Neurobiological Insights
Whitaker, Annie M.; Gilpin, Nicholas W.; Edwards, Scott
2014-01-01
Post-traumatic stress disorder (PTSD) is a complex psychiatric disorder characterized by the intrusive re-experiencing of past trauma, avoidant behavior, enhanced fear, and hyperarousal following a traumatic event in vulnerable populations. Preclinical animal models do not replicate the human condition in its entirety, but seek to mimic symptoms or endophenotypes associated with PTSD. Although many models of traumatic stress exist, few adequately capture the complex nature of the disorder and the observed individual variability in susceptibility of humans to develop PTSD. In addition, various types of stressors may produce different molecular neuroadaptations that likely contribute to the various behavioral disruptions produced by each model, although certain consistent neurobiological themes related to PTSD have emerged. For example, animal models report traumatic stress- and trauma reminder-induced alterations in neuronal activity in the amygdala and prefrontal cortex, in agreement with the human PTSD literature. Models have also provided a conceptual framework for the often observed combination of PTSD and co-morbid conditions such as alcohol use disorder (AUD). Future studies will continue to refine preclinical PTSD models in hopes of capitalizing on their potential to deliver new and more efficacious treatments for PTSD and associated psychiatric disorders. PMID:25083568
Inspiration from drones, Lidar measurements and 3D models in undergraduate teaching
NASA Astrophysics Data System (ADS)
Blenkinsop, Thomas; Ellis, Jennifer
2017-04-01
Three-dimensional models, photogrammetry and remote sensing are increasingly common techniques used in structural analysis. We have found that using drones and Lidar on undergraduate field trips has piqued interest in fieldwork, provided data for follow-up laboratory exercises, and inspired undergraduates to attempt 3D modelling in independent mapping projects. The scale of structures visible in cliff and sea shore exposures in South Wales is ideal for using drones to capture images for 3D models. Fault scarps in the South Wales coalfield were scanned by Lidar and drone. Our experience suggests that the drone data were much easier to acquire and process than the Lidar data, and adequate for most teaching purposes. In the lab, we used the models to show the structure in 3D, and as the basis for an introduction to geological modelling software. Now that tools for photogrammetry, drones, and processing software are widely available and affordable, they can be readily integrated into teaching. An additional benefit from the images and models is that they may be used for exercises that can be substituted for fieldwork to achieve some (but not all) of the learning outcomes in the case that field access is prevented.
Combining Mechanistic Approaches for Studying Eco-Hydro-Geomorphic Coupling
NASA Astrophysics Data System (ADS)
Francipane, A.; Ivanov, V.; Akutina, Y.; Noto, V.; Istanbullouglu, E.
2008-12-01
Vegetation interacts with hydrology and the geomorphic form and processes of a river basin in profound ways. Despite recent advances in hydrological modeling, the dynamic coupling between these processes is yet to be adequately captured at the basin scale to elucidate key features of process interaction and their role in the organization of vegetation and landscape morphology. In this study, we present a blueprint for integrating a geomorphic component into the physically-based, spatially distributed ecohydrological model tRIBS-VEGGIE, which reproduces essential water and energy processes over the complex topography of a river basin and links them to the basic plant life regulatory processes. We present a preliminary design of the integrated modeling framework in which hillslope and channel erosion processes at the catchment scale will be coupled with vegetation-hydrology dynamics. We evaluate the developed framework by applying the integrated model to Lucky Hills basin, a sub-catchment of the Walnut Gulch Experimental Watershed (Arizona). The evaluation is carried out by comparing sediment yields at the basin outlet, following a detailed verification of simulated land-surface energy partition, biomass dynamics, and soil moisture states.
Capturing spatial heterogeneity of soil organic carbon under changing climate
NASA Astrophysics Data System (ADS)
Mishra, U.; Fan, Z.; Jastrow, J. D.; Matamala, R.; Vitharana, U.
2015-12-01
The spatial heterogeneity of the land surface affects water, energy, and greenhouse gas exchanges with the atmosphere. Designing observation networks that capture land surface spatial heterogeneity is a critical scientific challenge. Here, we present a geospatial approach to capture the existing spatial heterogeneity of soil organic carbon (SOC) stocks across Alaska, USA. We used the standard deviation of 556 georeferenced SOC profiles previously compiled in Mishra and Riley (2015, Biogeosciences, 12:3993-4004) to calculate the number of observations that would be needed to reliably estimate Alaskan SOC stocks. This analysis indicated that 906 randomly distributed observation sites would be needed to quantify the mean value of SOC stocks across Alaska at a confidence interval of ± 5 kg m⁻². We then used soil-forming factors (climate, topography, land cover types, surficial geology) to identify the locations of appropriately distributed observation sites by using the conditioned Latin hypercube sampling approach. Spatial correlation and variogram analyses demonstrated that the spatial structures of soil-forming factors were adequately represented by these 906 sites. Using the spatial correlation length of existing SOC observations, we identified that 484 new observation sites would be needed to provide the best estimate of the present status of SOC stocks in Alaska. We then used average decadal projections (2020-2099) of precipitation, temperature, and length of growing season for three representative concentration pathway (RCP 4.5, 6.0, and 8.5) scenarios of the Intergovernmental Panel on Climate Change to investigate whether the location of identified observation sites will shift/change under future climate. Our results showed 12-41 additional observation sites (depending on emission scenarios) will be required to capture the impact of projected climatic conditions by 2100 on the spatial heterogeneity of Alaskan SOC stocks. Our results represent an ideal distribution of observation sites across Alaska that captures the land surface spatial heterogeneity and can be used in efforts to quantify SOC stocks, monitor greenhouse gas emissions, and benchmark Earth System Model results.
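The sample-size calculation described above is the standard n = (z * sd / E)^2 formula for estimating a mean to within a margin E. A small sketch follows; the standard deviation is back-solved here to reproduce the abstract's figure and is illustrative only.

```python
import numpy as np
from scipy import stats

def required_sample_size(sd, margin, confidence=0.95):
    """n = (z * sd / margin)^2 for estimating a mean to within +/- margin."""
    z = stats.norm.ppf(0.5 + confidence / 2)
    return int(np.ceil((z * sd / margin) ** 2))

# A back-solved, illustrative SOC-stock standard deviation of about
# 76.75 kg m^-2 reproduces the abstract's ~906 sites at +/- 5 kg m^-2
print(required_sample_size(sd=76.75, margin=5.0))  # -> 906
```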
Discrete Element Modelling of Floating Debris
NASA Astrophysics Data System (ADS)
Mahaffey, Samantha; Liang, Qiuhua; Parkin, Geoff; Large, Andy; Rouainia, Mohamed
2016-04-01
Flash flooding is characterised by high velocity flows which impact vulnerable catchments with little warning time and, as such, result in complex flow dynamics which are difficult to replicate through modelling. The impacts of flash flooding can be made yet more severe by the transport of both natural and anthropogenic debris, ranging from tree trunks to vehicles, wheelie bins and even storage containers, the effects of which have been clearly evident during recent UK flooding. This cargo of debris can have wide reaching effects and result in actual flood impacts which diverge from those predicted. A build-up of debris may lead to partial channel blockage and potential flow rerouting through urban centres. Build-up at bridges and river structures also leads to increased hydraulic loading which may result in damage and possible structural failure. Predicting the impacts of debris transport, however, is difficult because conventional hydrodynamic modelling schemes do not intrinsically include floating debris within their calculations. Subsequently a new tool has been developed using an emerging approach, which incorporates debris transport through the coupling of two existing modelling techniques. A 1D hydrodynamic modelling scheme has here been coupled with a 2D discrete element scheme to form a new modelling tool which predicts the motion and flow-interaction of floating debris. Hydraulic forces arising from flow around the object are applied to instigate its motion. Likewise, an equivalent opposing force is applied to fluid cells, enabling backwater effects to be simulated. Shock capturing capabilities make the tool applicable to predicting the complex flow dynamics associated with flash flooding. The modelling scheme has been applied to experimental case studies where cylindrical wooden dowels are transported by a dam-break wave. These case studies enable validation of the tool's shock capturing capabilities and the coupling technique applied between the two numerical schemes. The results show that the tool is able to adequately replicate water depth and depth-averaged velocity of a dam-break wave, as well as velocity and displacement of floating cylindrical elements, thus validating its shock capturing capabilities and the coupling technique applied for this simple test case. Future development of the tool will incorporate a 2D hydrodynamic scheme and a 3D discrete element scheme in order to model the more complex processes associated with debris transport.
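As a loose illustration of the force coupling described above, the sketch below advances a single floating element under quadratic drag in one dimension; the geometry, coefficients, and time stepping are simplified stand-ins for the coupled 1D-2D scheme.

```python
import numpy as np

def debris_step(v_debris, v_flow, mass, area, dt, rho=1000.0, cd=1.0):
    """One explicit step under quadratic drag:
    F = 0.5 * rho * Cd * A * |u_rel| * u_rel, with u_rel the flow velocity
    relative to the debris. The same force, reversed, would be fed back to
    the fluid cells to reproduce the backwater effect described above."""
    u_rel = v_flow - v_debris
    force = 0.5 * rho * cd * area * np.abs(u_rel) * u_rel
    return v_debris + force / mass * dt

# A dowel initially at rest, picked up by a 2 m/s dam-break front
v = 0.0
for _ in range(100):
    v = debris_step(v, v_flow=2.0, mass=0.5, area=0.01, dt=0.01)
print(v)  # approaches the flow velocity as the relative drag vanishes
```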
Ziche, Paul
2008-03-01
Paul Hinneberg promises, in his multi-volume Kultur der Gegenwart (1906sqq.), to capture the 'culture' of his time in its entirety; only a veritable encyclopedia could be adequate to the task of synthesizing the manifold and disparate tendencies of 'Kultur'. Surprisingly, however, any attempt to make explicit the systematic principles governing his encyclopedic synthesis is missing from his project. It is argued that this--unusual--feature of Hinneberg's Kultur der Gegenwart can itself be understood as a result of typical analyses of 'Kultur' at the turn of the century; culture, as an open, multi-sided, and integrative concept may indeed best be captured in an open system that avoids strict and explicit demarcations. In these respects, the task of capturing Kultur turns out to be closely linked to another task prominent around 1900: that of providing a systematic ordering of the various 'Wissenschaften'.
Poverty, global health, and infectious disease: lessons from Haiti and Rwanda.
Alsan, Marcella M; Westerhaus, Michael; Herce, Michael; Nakashima, Koji; Farmer, Paul E
2011-09-01
Poverty and infectious diseases interact in complex ways. Casting destitution as intractable, or epidemics that afflict the poor as accidental, erroneously exonerates us from responsibility for caring for those most in need. Adequately addressing communicable diseases requires a biosocial appreciation of the structural forces that shape disease patterns. Most health interventions in resource-poor settings could garner support based on cost/benefit ratios with appropriately lengthy time horizons to capture the return on health investments and an adequate accounting of externalities; however, such a calculus masks the suffering of inaction and risks eroding the most powerful incentive to act: redressing inequality. Copyright © 2011 Elsevier Inc. All rights reserved.
FY07 LDRD Final Report Neutron Capture Cross-Section Measurements at DANCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, W; Agvaanluvsan, U; Wilk, P
2008-02-08
We have measured neutron capture cross sections intended to address defense science problems including mix and the Quantification of Margins and Uncertainties (QMU), and to provide details about statistical decay of excited nuclei. A major part of this project included developing the ability to produce radioactive targets. The cross-section measurements were made using the white neutron source at the Los Alamos Neutron Science Center, the detector array called DANCE (the Detector for Advanced Neutron Capture Experiments) and targets important for astrophysics and stockpile stewardship. DANCE is at the leading edge of neutron capture physics and represents a major leap forward in capability. The detector array was recently built with LDRD money. Our measurements are a significant part of the early results from the new experimental DANCE facility. Neutron capture reactions are important for basic nuclear science, including astrophysics and the statistics of the γ-ray cascades, and for applied science, including stockpile science and technology. We were most interested in neutron capture with neutron energies in the range between 1 eV and a few hundred keV, with targets important to basic science, and the s-process in particular. Of particular interest were neutron capture cross-section measurements of rare isotopes, especially radioactive isotopes. A strong collaboration between universities and Los Alamos due to the Academic Alliance was in place at the start of our project. Our project gave Livermore leverage in focusing on Livermore interests. The Lawrence Livermore Laboratory did not have a resident expert in cross-section measurements; this project allowed us to develop this expertise. For many radionuclides, the cross sections for destruction, especially (n,γ), are not well known, and there is no adequate model that describes neutron capture. The modeling problem is significant because, at low energies where capture reactions are important, the neutron reaction cross sections show resonance behavior or follow 1/v of the incident neutrons. In the case of odd-odd nuclei, the modeling problem is particularly difficult because degenerate states (rotational bands) present in even-even nuclei have separated in energy. Our work included interpretation of the γ-ray spectra to compare with the Statistical Model and provides information on level density and statistical decay. Neutron capture cross sections are of programmatic interest to defense sciences because many elements were added to nuclear devices in order to determine various details of the nuclear detonation, including fission yields, fusion yields, and mix. Both product nuclei created by (n,2n) reactions and reactant nuclei are transmuted by neutron capture during the explosion. Very few of the (n,γ) cross sections for reactions that create products measured by radiochemists have ever been experimentally determined; most are calculated by radiochemical equivalences. Our new experimentally measured capture cross sections directly impact our knowledge about the uncertainties in device performance, which enhances our capability of carrying out our stockpile stewardship program. Europium and gadolinium cross sections are important for both astrophysics and defense programs. Measurements made prior to this project on stable europium targets differ by 30-40%, which was considered to be significantly disparate.
Of the gadolinium isotopes, ¹⁵¹Gd is important for stockpile stewardship and ¹⁵³Gd is of high interest to astrophysics, and neither of these (radioactive) gadolinium (n,γ) cross sections had been measured. Additional stable gadolinium isotopes, including ¹⁵⁷,¹⁶⁰Gd, are of interest to astrophysics. Historical measurements of gadolinium isotopes, including ¹⁵²,¹⁵⁴Gd, had disagreements similar to the 30-40% disagreements found in the historical europium data. Actinide capture cross-section measurements are important for both stockpile stewardship and nuclear forensics. We focused on the ²⁴²ᵐAm(n,γ) measurement, as there was no existing capture measurement for this isotope. The cross-section measurements (cross section vs. E_n) were made at the Detector for Advanced Neutron Capture Experiments. DANCE comprises a highly segmented array of barium fluoride (BaF₂) crystals specifically designed for neutron capture-gamma measurements using small radioactive targets (less than one milligram). A picture of half the array, along with a photo of one crystal, is shown in Fig. 1. DANCE provides the world's leading capability for measurements of neutron capture cross sections with radioactive targets. DANCE is a 4π calorimeter and uses the intense spallation neutron source at the Lujan Center at Los Alamos National Laboratory. The detector array consists of 159 barium fluoride crystals arranged in a sphere around the target.
FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems
NASA Astrophysics Data System (ADS)
Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.
2016-12-01
Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g., skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models of anomalous transport. Currently, four fractional models are supported: 1) the space-fractional advection-dispersion equation (sFADE), 2) the time-fractional dispersion equation with drift (TFDE), 3) the fractional mobile-immobile equation (FMIE), and 4) the tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions for pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke River. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
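As a concrete illustration of the WNLS idea (a minimal sketch, not the FracFit code; the function and parameter names are invented for the example), one can fit an alpha-stable pulse solution of the sFADE to a noisy breakthrough profile with scipy, weighting residuals so the power-law tails are not swamped by the peak:

```python
# Minimal sketch (not FracFit): fit a space-fractional ADE pulse solution
# to a concentration profile by weighted nonlinear least squares. Assumes
# the sFADE Green's function is an alpha-stable PDF whose scale grows as
# (D*t)**(1/alpha); all values below are illustrative.
import numpy as np
from scipy.stats import levy_stable
from scipy.optimize import curve_fit

def sfade_pulse(x, alpha, beta, v, D, t=1.0):
    """Concentration profile of an instantaneous pulse under the sFADE."""
    scale = (D * t) ** (1.0 / alpha)
    return levy_stable.pdf((x - v * t) / scale, alpha, beta) / scale

x = np.linspace(-2, 8, 200)
true = (1.6, 1.0, 2.0, 0.5)  # alpha, beta (skewness), v, D
c_obs = sfade_pulse(x, *true) + 1e-3 * np.random.default_rng(0).normal(size=x.size)

# Weights ~ 1/sigma; up-weighting small concentrations helps fit power-law tails.
sigma = np.maximum(np.abs(c_obs), 1e-3)
popt, pcov = curve_fit(sfade_pulse, x, c_obs, p0=(1.8, 0.5, 1.5, 1.0),
                       sigma=sigma, bounds=([1.01, -1, 0, 1e-6], [2, 1, 10, 10]))
print("alpha, beta, v, D =", popt)
```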
Spatial and temporal variation in efficiency of the Moore egg collector
Worthington, Thomas A.; Brewer, Shannon K.; Farless, Nicole
2013-01-01
The Moore egg collector (MEC) was developed for quantitative and nondestructive capture of semibuoyant fish eggs. Previous studies have indicated that capture efficiency of the MEC was low and that a single device did not adequately represent the spatial distribution within the water column of egg surrogates (gellan beads) of pelagic broadcast-spawning cyprinids. The objective of this study was to assess whether multiple MECs differed in the spatial and temporal distribution of bead catches. Capture efficiency of three MECs was tested at four 500-m sites on the South Canadian River, a Great Plains river in Oklahoma. For each trial, approximately 100,000 beads were released, and mean capture efficiency ranged from 0.47% to 2.16%. Kolmogorov–Smirnov tests indicated that the spatial distributions of bead catches differed among multiple MECs at three of four sites. Temporal variability in the timing of peak catches of gellan beads was also evident between MECs. We concluded that the use of multiple MECs is necessary to properly sample eggs of pelagic broadcast-spawning cyprinids.
NASA Astrophysics Data System (ADS)
Keskinen, Johanna; Looms, Majken C.; Nielsen, Lars; Klotzsche, Anja; van der Kruk, Jan; Moreau, Julien; Stemmerik, Lars; Holliger, Klaus
2015-04-01
Chalk is an important reservoir rock for hydrocarbons and for the groundwater resources of many major cities. Therefore, this rock type has been extensively investigated using both geological and geophysical methods. Many applications of crosshole GPR tomography rely on the ray approximation and corresponding inversions of first-break traveltimes and/or maximum first-cycle amplitudes. Due to the inherent limitations of such approaches, the resulting models tend to be overly smooth and cannot adequately capture small-scale heterogeneities. In contrast, full-waveform inversion uses all the information contained in the data and is able to provide significantly improved images. Here, we apply full-waveform inversion to crosshole GPR data to image strong heterogeneity of the chalk related to changes in lithology and porosity. We have collected a crosshole tomography dataset in an old chalk quarry in eastern Denmark. Based on core data (including plug samples and televiewer logging data) collected in our four ~15-m-deep boreholes and on results from previous related studies, it is apparent that the studied chalk is strongly heterogeneous. The upper ~7 m consist of variable coarse-grained chalk layers with numerous flint nodules. The lower half of the studied section appears to be finer grained and contains less flint; however, significant porosity variations are still detected there. In general, the water-saturated (water table depth ~2 m) chalk is characterized by high porosities, and thus low velocities and high attenuation, while the flint is essentially non-porous and has correspondingly high velocities and low attenuation. Together these characteristics form a strongly heterogeneous medium, which is challenging for full-waveform inversion to recover. Here, we address the importance of (i) adequate starting models, both in terms of dielectric permittivity and electrical conductivity, (ii) the estimation of the source wavelet, and (iii) the effects of data sampling density when imaging this rock type. Moreover, we discuss the resolution of the bedding recovered by the full-waveform approach. Our results show that with proper estimates of the above-mentioned prior parameters, crosshole GPR full-waveform tomography provides high-resolution images capturing a degree of variability that standard methods cannot resolve in chalk. This in turn makes crosshole full-waveform inversion a promising tool to support time-lapse flow modelling.
Nonequilibrium Interfacial Tension in Simple and Complex Fluids
NASA Astrophysics Data System (ADS)
Truzzolillo, Domenico; Mora, Serge; Dupas, Christelle; Cipelletti, Luca
2016-10-01
Interfacial tension between immiscible phases is a well-known phenomenon, which manifests itself in everyday life, from the shape of droplets and foam bubbles to the capillary rise of sap in plants or the locomotion of insects on a water surface. More than a century ago, Korteweg generalized this notion by arguing that stresses at the interface between two miscible fluids act transiently as an effective, nonequilibrium interfacial tension, before homogenization is eventually reached. In spite of its relevance in fields as diverse as geosciences, polymer physics, multiphase flows, and fluid removal, experiments and theoretical works on the interfacial tension of miscible systems are still scarce, and mostly restricted to molecular fluids. This leaves crucial questions unanswered, concerning the very existence of the effective interfacial tension, its stabilizing or destabilizing character, and its dependence on the fluids' composition and concentration gradients. We present an extensive set of measurements on miscible complex fluids that demonstrate the existence and the stabilizing character of the effective interfacial tension, unveil new regimes beyond Korteweg's predictions, and quantify its dependence on the nature of the fluids and the composition gradient at the interface. We introduce a simple yet general model that generalizes nonequilibrium interfacial stresses to arbitrary mixtures, beyond Korteweg's small-gradient regime, and show that the model captures remarkably well both our new measurements and literature data on molecular and polymer fluids. Finally, we briefly discuss the relevance of our model to a variety of interface-driven problems, from phase separation to fracture, which are not adequately captured by current approaches based on the assumption of small gradients.
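For reference, the small-gradient (square-gradient) limit on which Korteweg's argument rests can be summarized as follows; this is the standard textbook form, stated here as background rather than taken from the paper:

```latex
% Korteweg square-gradient form of the effective interfacial tension:
% kappa = Korteweg constant, c(z) = composition profile across the
% diffuse interface, delta(t) = interface width, D = mutual diffusivity.
\Gamma_{\mathrm{eff}}(t) \;=\; \kappa \int_{-\infty}^{\infty}
  \left( \frac{\partial c}{\partial z} \right)^{2} \mathrm{d}z
  \;\sim\; \kappa\,\frac{(\Delta c)^{2}}{\delta(t)},
\qquad \delta(t) \propto \sqrt{D\,t}
```

Because the interface width grows diffusively, this effective tension decays in time roughly as t^(-1/2), which is the transient character the abstract refers to.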
Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models
Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin
2018-01-01
The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.
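A minimal sketch of the bookkeeping behind capture and capture-difference maps (not the authors' code; the numbers are hypothetical): the capture fraction at a well location is the change in simulated stream gain per unit pumping, and comparing fractions computed at two pumping rates exposes the nonlinearity that superposition-based maps miss:

```python
# Sketch of the capture-fraction idea behind capture (difference) maps.
# In a linear model the fraction is independent of pumping rate; a nonzero
# difference between rates flags potential bias in superposition-based maps.
import numpy as np

def capture_fraction(streamflow_base, streamflow_pumped, Q_well):
    """Fraction of the pumping rate supplied by streamflow depletion."""
    return (streamflow_base - streamflow_pumped) / Q_well

# Hypothetical model outputs (m^3/d) for one well location at two rates:
base = 1000.0
f_low  = capture_fraction(base, streamflow_pumped=995.0, Q_well=10.0)   # 0.50
f_high = capture_fraction(base, streamflow_pumped=930.0, Q_well=100.0)  # 0.70

# The capture *difference* is the map value the paper's method visualizes:
print("capture difference:", f_high - f_low)   # 0.20 here
```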
Lee, S A; Dunne, J; Mottram, T; Bedford, M R
2017-06-01
In this study, a novel capsule technique was used to capture real-time pH readings from the gizzard over several hours, in response to different dietary treatments. 1. The first experiment was a preliminary study of capsule administration and pH recording using 9 male Ross 308 broilers from 20 d. In the second experiment, broilers (576) were fed in two phases (0-21 and 21-42 d) across 4 treatment groups: low and adequate Ca and AvP diets, with and without Quantum Blue phytase (1500 FTU/kg). Capsules were administered to 8 birds from each treatment group, pre- and post-diet-phase change, with readings captured over a 2.5 h period. 2. Phytase addition improved body weight gain (BWG) and feed conversion ratio (FCR) of birds fed low dietary Ca, while having no significant effect on birds fed adequate Ca diets. Unexpectedly, diets with higher Ca levels gave a lower average gizzard pH than the low Ca diet. Phytase addition, irrespective of Ca level, increased average gizzard pH. Fluctuations in gizzard pH (0.6-3.8) were observed across all treatment groups. Higher frequencies of pH readings below pH 1.0 were seen in birds fed an adequate Ca diet and with phytase supplementation of a low Ca diet. 3. These results signify the potential of capsule techniques to monitor real-time pH changes. The implications of large fluctuations in pH for gastric protein and fibre hydrolysis should be considered.
Effect of Spatial Distribution and Connectivity of Urban Impervious Areas on Hydrologic Response
NASA Astrophysics Data System (ADS)
Khoshouei, F.; Basu, N. B.; Schnoor, J. L.
2012-12-01
Urbanization alters the hydrology of a watershed by increasing impervious areas, which results in decreased infiltration and increased runoff. Total Impervious Area (TIA) has been extensively used as a metric to describe this impact. It has recently been recognized, however, that TIA is a necessary but not sufficient attribute to describe the hydrologic response of a watershed. The connectivity and spatial placement of the impervious areas play a significant role in altering streamflow distributions. While the importance of spatial metrics is well recognized, the actual magnitude of their impact has not been quantified in a systematic manner. We assess the effect of the spatial distribution of impervious area on hydrologic response in six peri-urban watersheds, with areas on the order of 15 sq km, in the Midwest. We use the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model from the Army Corps of Engineers for our exploration. GSSHA is a grid-based two-dimensional hydrologic model with 2D overland flow and 1D streamflow and infiltration. The models for the watersheds were calibrated and validated using discharge data from the USGS streamflow database. The models were then used in a virtual experimentation mode to understand the variability in hydrologic response as a function of different patterns of urban expansion. A new metric, the "Impervious Area Width Function" (IAWF), was developed that captures the distribution of flow-path lengths from impervious areas. This metric captured the difference in hydrologic response between two watersheds with the same total impervious area but different distributions. The results suggest that urban development in areas with longer travel time (far from the outlet) results in higher peak flows.
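A toy sketch of what an impervious-area width function computes (my illustration, not the authors' implementation; the grid, flow lengths, and impervious mask are synthetic):

```python
# Illustrative impervious-area width function: the distribution of
# flow-path lengths from impervious cells to the watershed outlet.
import numpy as np

rng = np.random.default_rng(1)
flow_length = rng.uniform(0, 5000, size=(100, 100))   # distance to outlet (m)
impervious = rng.random((100, 100)) < 0.25            # 25% TIA, random placement

# IAWF: histogram of flow-path lengths over impervious cells only.
bins = np.arange(0, 5500, 500)
iawf, _ = np.histogram(flow_length[impervious], bins=bins)
print(dict(zip(bins[:-1], iawf)))

# Two watersheds with equal TIA but different placement give different
# IAWFs: development far from the outlet shifts mass to long path lengths,
# which the study links to higher peak flows.
```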
Generalized estimators of avian abundance from count survey data
Royle, J. Andrew
2004-01-01
I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling, and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
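One concrete instance of this hierarchical structure is the binomial-Poisson ("N-mixture") form for repeated point counts, where the metapopulation model is a Poisson prior on local abundance; the sketch below (illustrative data, with a truncated marginalization bound K) shows how the likelihood integrates out the latent abundances:

```python
# Minimal binomial-Poisson hierarchical model for repeated point counts:
# N_i ~ Poisson(lambda) (metapopulation model), y_ij | N_i ~ Binomial(N_i, p)
# (data-generating model). The marginal likelihood sums out N_i up to K.
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

y = np.array([[2, 3, 2], [0, 1, 0], [5, 4, 6]])  # sites x repeat counts (toy data)

def neg_loglik(theta, y, K=50):
    lam = np.exp(theta[0])                  # abundance intensity
    p = 1 / (1 + np.exp(-theta[1]))         # detection probability
    N = np.arange(K + 1)
    prior = poisson.pmf(N, lam)
    ll = 0.0
    for counts in y:                        # marginalize latent N per site
        lik_N = np.prod(binom.pmf(counts[:, None], N[None, :], p), axis=0)
        ll += np.log(np.sum(lik_N * prior) + 1e-300)
    return -ll

fit = minimize(neg_loglik, x0=[np.log(3.0), 0.0], args=(y,))
print("lambda =", np.exp(fit.x[0]), " p =", 1 / (1 + np.exp(-fit.x[1])))
```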
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Building on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies the design requirements defined in DREAMS and incorporates enabling computational technologies.
NASA Astrophysics Data System (ADS)
Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang
2016-09-01
Reduced-order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions in which the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and for a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.
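The following sketch conveys the flavor of a fuzzy-clustering-based adding-point step (the details and the error indicator differ from the paper; the residuals here are random stand-ins):

```python
# Sketch of a fuzzy-clustering adding-point step: cluster existing sample
# points with fuzzy c-means, score each cluster by a ROM error indicator,
# and place the next sample at the centroid of the worst cluster.
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # memberships (n x c)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

X = np.random.default_rng(1).uniform(0, 1, size=(40, 2))    # sampled inputs
residual = np.random.default_rng(2).uniform(0, 1, size=40)  # ROM error proxy

centers, U = fuzzy_cmeans(X)
cluster_error = (U * residual[:, None]).sum(axis=0) / U.sum(axis=0)
new_point = centers[np.argmax(cluster_error)]   # refine where the ROM is worst
print("next sampling point:", new_point)
```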
A Thematic Analysis of the Motivation Behind Sexual Homicide From the Perspective of the Killer.
Kerr, Kevin J; Beech, Anthony R
2016-12-01
Using thematic analysis, this study explores the motivation to commit sexual homicide from the perspective of the perpetrator. In the process, it revisits motivational models and offender typologies that have been put forward to explain such offenses. From the homicide narratives of eight sexual homicide offenders detained in a high security hospital in the United Kingdom, four themes were found which appeared significant in terms of understanding the offenses committed. These themes were labeled as follows: (a) avenging sexual abuse, (b) events leading to a catathymic reaction, (c) homicidal impulse, and (d) emotional loneliness. Although these findings are not inconsistent with previous research, we argue that the current literature fails to capture the complexity associated with these offenses. We also argue that the context or situation in which sexual homicide occurs is a crucial feature of the offense, and one which has not been adequately taken into account by motivational models. © The Author(s) 2015.
Non-invasive flow path characterization in a mining-impacted wetland
Bethune, James; Randell, Jackie; Runkel, Robert L.; Singha, Kamini
2015-01-01
Time-lapse electrical resistivity (ER) was used to capture the dilution of a seasonal pulse of acid mine drainage (AMD) contamination in the subsurface of a wetland downgradient of the abandoned Pennsylvania Mine workings in central Colorado. Data were collected monthly from mid-July to late October of 2013, with an additional dataset collected in June of 2014. Inversion of the ER data shows the development through time of multiple resistive anomalies in the subsurface, which corroborating data suggest are driven by changes in total dissolved solids (TDS) localized in preferential flow pathways. Sensitivity analyses on a synthetic model of the site suggest that the anomalies would need to be at least several meters in diameter to be adequately resolved by the inversions. The existence of preferential flow paths would have a critical impact on the extent of attenuation mechanisms at the site, and their further characterization could be used to parameterize reactive transport models for developing quantitative predictions of remediation strategies.
WHO WOULD EAT IN A WORLD WITHOUT PHOSPHORUS? A GLOBAL DYNAMIC MODEL
NASA Astrophysics Data System (ADS)
Dumas, M.
2009-12-01
Phosphorus is an indispensable and non-substitutable resource, as agriculture is impossible if soils do not hold adequate amounts of this nutrient. Phosphorus is also considered to be a non-renewable and increasingly scarce resource, as phosphate rock reserves - as one measure of availability amongst others - are estimated to last for 50 to 100 years at current rates of consumption. How would food production decline in different parts of the world in the scenario of a sudden shortage in phosphorus? To answer this question and explore management scenarios, I present a probabilistic model of the structure and dynamics of the global P cycle in the world’s agro-ecosystems. The model proposes an original solution to the challenge of capturing the large-scale aggregate dynamics of multiple micro-scale soil cycling processes. Furthermore, it integrates the essential natural processes with a model of human-managed flows, thereby bringing together several decades of research and measurements from soil science, plant nutrition and long-term agricultural experiments from around the globe. In this paper, I present the model, the first simulation results and the implications for long-term sustainable management of phosphorus and soil fertility.
Aerocapture Performance Analysis of A Venus Exploration Mission
NASA Technical Reports Server (NTRS)
Starr, Brett R.; Westhelle, Carlos H.
2005-01-01
A performance analysis of a Discovery-class Venus exploration mission, in which aerocapture is used to capture a spacecraft into a 300 km polar orbit for a two-year science mission, has been conducted to quantify its performance. A preliminary performance assessment determined that a high-heritage 70-deg sphere-cone rigid aeroshell with a 0.25 lift-to-drag ratio has adequate control authority to provide an entry flight path angle corridor large enough for the mission's aerocapture maneuver. A reference vehicle with a ballistic coefficient of 114 kilograms per square meter was developed from the science requirements and the preliminary assessment's heating indicators and deceleration loads. Performance analyses were conducted for the reference vehicle and for sensitivity studies on vehicle ballistic coefficient and maximum bank rate. The performance analyses used a high-fidelity flight simulation within a Monte Carlo executive to define the aerocapture heating environment and deceleration loads and to determine mission success statistics. The simulation utilized the Program to Optimize Simulated Trajectories (POST), modified to include Venus-specific atmospheric and planet models, aerodynamic characteristics, and interplanetary trajectory models. In addition to the Venus-specific models, an autonomous guidance system, HYPAS, and a pseudo flight controller were incorporated in the simulation. The Monte Carlo analyses incorporated a reference set of approach trajectory delivery errors, aerodynamic uncertainties, and atmospheric density variations. The reference performance analysis determined that the reference vehicle achieves 100% successful capture and has a 99.87% probability of attaining the science orbit with a 90 meters per second delta V budget for post-aerocapture orbital adjustments. A ballistic coefficient trade study conducted with reference uncertainties determined that the 0.25 L/D vehicle can achieve 100% successful capture with a ballistic coefficient of 228 kilograms per square meter, and that the increased ballistic coefficient increases the post-aerocapture delta V budget to 134 meters per second for a 99.87% probability of attaining the science orbit. A trade study on vehicle bank rate determined that the 0.25 L/D vehicle can achieve 100% successful capture when the maximum bank rate is decreased from 30 deg/s to 20 deg/s. The decreased bank rate increases the post-aerocapture delta V budget to 102 meters per second for a 99.87% probability of attaining the science orbit.
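To make the success statistics concrete, the sketch below (purely illustrative numbers, not POST output) shows the Monte Carlo bookkeeping: the capture success rate is the fraction of dispersed runs that capture, and the delta V budget is read off as the 99.87% (three-sigma-equivalent) quantile of the post-aerocapture correction needs:

```python
# Illustrative Monte Carlo bookkeeping for aerocapture performance
# statistics (hypothetical dispersions, not the mission's values).
import numpy as np

rng = np.random.default_rng(0)
# Post-aerocapture delta-V needed per dispersed run (m/s), hypothetical:
dv_needed = rng.lognormal(mean=np.log(55.0), sigma=0.18, size=3000)
captured = np.ones_like(dv_needed, dtype=bool)   # all runs captured here

p_capture = captured.mean()                      # capture success rate
dv_budget = np.quantile(dv_needed, 0.9987)       # three-sigma-equivalent budget
print(f"capture success: {p_capture:.2%}, delta-V budget: {dv_budget:.0f} m/s")
```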
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewy, Ann; Heim, Kenneth J.; McGonigal, Sean T.
A comparative groundwater hydrogeologic modeling analysis is presented herein to simulate potential contaminant migration pathways in a sole source aquifer in Nassau County, Long Island, New York. The source of contamination is related to historical operations at the Sylvania Corning Plant ('Site'), a 9.49-acre facility located at 70, 100 and 140 Cantiague Rock Road, Town of Oyster Bay, in the westernmost portion of Hicksville, Long Island. The Site had historically been utilized as a nuclear materials manufacturing facility (e.g., cores, slugs, and fuel elements) for reactors used in both research and electric power generation from the early 1950s until the late 1960s. The Site is contaminated with various volatile organic and inorganic compounds, as well as radionuclides. The major contaminants of concern at the Site are tetrachloroethene (PCE), trichloroethene (TCE), nickel, uranium, and thorium. These compounds are present in soil and groundwater underlying the Site and have migrated off-site. The Site is currently being investigated as part of the Formerly Utilized Sites Remedial Action Program (FUSRAP). The main objective of the current study is to simulate the complex hydrogeologic features in the region, such as numerous current and historic production well fields; large, localized recharge basins; and multiple aquifers, and to assess potential contaminant migration pathways originating from the Site. For this purpose, attention was focused on the underlying Magothy formation, which has been impacted by the contaminants of concern. This aquifer provides more than 90% of the potable water supply in the region. Nassau and Suffolk Counties jointly developed a three-dimensional regional groundwater flow model to help understand the factors affecting the groundwater flow regime in the region, to determine adequate water supply for public consumption, to investigate salt water intrusion in localized areas, to evaluate the impacts of regional pumping activity, and to better understand contaminant transport and fate mechanisms through the underlying aquifers. This regional model, developed for the N.Y. State Department of Environmental Conservation (NYSDEC) by Camp Dresser and McKee (CDM), uses the finite element model DYNFLOW developed by CDM, Cambridge, Massachusetts. The coarseness of the regional model, however, could not adequately capture the hydrogeologic heterogeneity of the aquifer. Specifically, the regional model did not adequately capture the interbedded nature of the Magothy aquifer and, as such, simulated particles tended to track down-gradient from the Site in relatively straight lines, while the movement of groundwater in such a heterogeneous aquifer is expected to proceed along a more tortuous path. This paper presents a qualitative comparison of site-specific groundwater flow modeling results with results obtained from the regional model. In order to assess the potential contaminant migration pathways, a particle tracking method was employed. Available site-specific and regional hydraulic conductivity data measured in situ with respect to depth and location were incorporated into the T-PROGS module of the GMS model to define statistical variation and better represent the actual stratigraphy and layer heterogeneity. The groundwater flow characteristics in the Magothy aquifer were simulated with stochastic hydraulic conductivity variation, as opposed to the constant values employed in the regional model.
Contaminant sources and their exact locations were fully delineated at the Site during the Remedial Investigation (RI) phase of the project. Contaminant migration pathways originating from these source locations are qualitatively traced within the sole source aquifer utilizing particles introduced at the source locations. The contaminant transport mechanism modeled in the current study is based on pure advection (i.e., plug flow) and mechanical dispersion, while molecular diffusion effects are neglected due to the relatively high groundwater velocities encountered in the aquifer. In addition, the fate of contaminants is ignored here to simulate the worst-case scenario, which treats the contaminants of concern as tracer-like compounds for modeling purposes. The results of the modeling analysis are qualitatively compared with the County's regional model, and patterns of contaminant migration in the region are presented. (authors)
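A minimal random-walk particle-tracking sketch of the advection-plus-mechanical-dispersion transport described above (the uniform velocity field and dispersivities are hypothetical stand-ins for the site model):

```python
# Random-walk particle tracking: advection plus mechanical dispersion,
# molecular diffusion neglected (as in the study). All values illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 500, 200, 1.0          # particles, time steps, days
v = np.array([0.5, 0.1])              # groundwater velocity (m/d)
aL, aT = 10.0, 1.0                    # longitudinal/transverse dispersivity (m)

speed = np.linalg.norm(v)
DL, DT = aL * speed, aT * speed       # mechanical dispersion coefficients
e1 = v / speed                        # unit vector along flow
e2 = np.array([-e1[1], e1[0]])        # transverse unit vector

x = np.zeros((n, 2))                  # particles start at the source
for _ in range(steps):
    xi = rng.standard_normal((n, 2))
    x += v * dt                       # advection
    x += np.sqrt(2 * DL * dt) * xi[:, :1] * e1 \
       + np.sqrt(2 * DT * dt) * xi[:, 1:] * e2   # dispersion random walk

print("mean plume position (m):", x.mean(axis=0))
```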
Defining Alcohol-Specific Rules Among Parents of Older Adolescents: Moving Beyond No Tolerance
Bourdeau, Beth; Miller, Brenda; Vanya, Magdalena; Duke, Michael; Ames, Genevieve
2012-01-01
Parental beliefs and rules regarding their teen’s use of alcohol influence teen decisions regarding alcohol use. However, measurement of parental rules regarding adolescent alcohol use has not been thoroughly studied. This study used qualitative interviews with 174 parents of older teens from 100 families. From open-ended questions, themes emerged that describe explicit rules tied to circumscribed use, no tolerance, and “call me.” There was some inconsistency in explicit rules with and between parents. Responses also generated themes relating to implicit rules such as expectations and preferences. Parents described their methods of communicating their position via conversational methods, role modeling their own behavior, teaching socially appropriate use of alcohol by offering their teen alcohol, and monitoring their teens’ social activities. Findings indicate that alcohol rules are not adequately captured by current assessment measures. PMID:23204931
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Reddy, D. R.; Kapoor, K.
1993-01-01
A three-dimensional implicit Full Navier-Stokes (FNS) analysis and a 3D Reduced Navier-Stokes (RNS) initial-value space-marching solution technique have been applied to a class of separated flow problems within a diffusing S-duct configuration characterized by vortex lift-off. Both the Full Navier-Stokes and Reduced Navier-Stokes solution techniques were able to capture the overall flow physics of vortex lift-off; however, more consideration must be given to the development of turbulence models for the prediction of the locations of separation and reattachment. This accounts for some of the discrepancies in the prediction of the relevant inlet distortion descriptors, particularly circumferential distortion. The 3D RNS solution technique adequately described the topological structure of flow separation associated with vortex lift-off.
Family Structure, Family Processes, and Adolescent Smoking and Drinking*
Brown, Susan L.; Rinelli, Lauren N.
2010-01-01
This study examined whether family structure was associated with adolescent risk behaviors, including smoking and drinking. Family living arrangements have become increasingly diverse, yet research on adolescent risk behaviors has typically relied on measures of family structure that do not adequately capture this diversity. Data from the 1994-95 National Longitudinal Study of Adolescent Health were used to conduct logistic regression analyses that revealed adolescents in two biological married parent families were least likely to smoke or drink, whereas adolescents in cohabiting stepfamilies were most likely. Those in single-mother families and married stepfamilies were in between. Maternal socialization was related to reduced odds of smoking and drinking. Maternal modeling was positively associated with smoking and drinking. Family structure is indicative of distinct family processes that are linked to risky behaviors among adolescents. PMID:20543893
Landmeyer, J.E.
1994-01-01
Ground-water capture zone boundaries for individual pumped wells in a confined aquifer were delineated by using ground-water models. Both analytical and numerical (semi-analytical) models that more accurately represent the ground-water-flow system were used. All models delineated 2-dimensional boundaries (capture zones) that represent the areal extent of ground-water contribution to a pumped well. The resultant capture zones were evaluated on the basis of the ability of each model to realistically represent the part of the ground-water-flow system that contributed water to the pumped wells. Analytical models used were based on a fixed-radius approach and included: an arbitrary radius model; a calculated fixed radius model based on the volumetric-flow equation with a time-of-travel criterion; and a calculated fixed radius model derived from modification of the Theis model with a drawdown criterion. Numerical models used included the 2-dimensional, finite-difference models RESSQC and MWCAP. The arbitrary radius and Theis analytical models delineated capture zone boundaries that compared least favorably with capture zones delineated using the volumetric-flow analytical model and both numerical models. The numerical models produced more hydrologically reasonable capture zones (oriented parallel to the regional flow direction) than the volumetric-flow equation. The RESSQC numerical model computed more hydrologically realistic capture zones than the MWCAP numerical model by accounting for changes in the shape of capture zones caused by multiple-well interference. The capture zone boundaries generated by using both analytical and numerical models indicated that the currently used 100-foot radius of protection around a wellhead in South Carolina is an underestimate of the extent of ground-water capture for pumped wells in this particular wellfield in the Upper Floridan aquifer. The arbitrary fixed radius of 100 feet was shown to underestimate the upgradient contribution of ground-water flow to a pumped well.
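For context, the volumetric-flow (calculated fixed radius) approach mentioned above is commonly written as follows, equating the volume pumped over the time-of-travel criterion with the pore volume of a cylinder around the well (a standard wellhead-protection form, stated here as background):

```latex
% Calculated fixed radius from the volumetric-flow equation:
% Q = pumping rate, t = time-of-travel criterion,
% n = effective porosity, b = open interval (aquifer) thickness.
Q\,t \;=\; \pi r^{2} n\, b
\quad\Longrightarrow\quad
r \;=\; \sqrt{\frac{Q\,t}{\pi\, n\, b}}
```

Because this radius ignores the regional gradient, it tends to overestimate downgradient capture and underestimate upgradient capture, which is consistent with the paper's finding that the numerical models yield capture zones elongated along the regional flow direction.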
The lattice Boltzmann method and the problem of turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djenidi, L.
2015-03-10
This paper reports a brief review of numerical simulations of homogeneous isotropic turbulence (HIT) using the lattice Boltzmann method (LBM). The LBM results show that the details of HIT are well captured and in agreement with existing data. This clearly indicates that the LBM is as good as current Navier-Stokes solvers and is well suited for investigating the problem of turbulence.
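The stream-and-collide structure underlying such simulations is compact enough to sketch; the following minimal D2Q9 BGK example (illustrative parameters and initial condition, not taken from the review) shows the essential loop:

```python
# Minimal D2Q9 lattice Boltzmann (BGK) sketch on a doubly periodic box.
import numpy as np

nx = ny = 64
tau = 0.6                                   # relaxation time; nu = (tau - 0.5)/3
w = np.array([4/9] + [1/9]*4 + [1/36]*4)    # D2Q9 weights
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def equilibrium(rho, u):
    eu = np.einsum('qd,xyd->xyq', e, u)
    u2 = np.sum(u**2, axis=-1)[..., None]
    return w * rho[..., None] * (1 + 3*eu + 4.5*eu**2 - 1.5*u2)

# Initialize with a small sinusoidal shear (Taylor-Green-like) velocity field.
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing='ij')
u = 0.05 * np.stack([np.sin(2*np.pi*y/ny), np.sin(2*np.pi*x/nx)], axis=-1)
rho = np.ones((nx, ny))
f = equilibrium(rho, u)

for step in range(500):
    for q in range(9):                      # streaming: shift along lattice links
        f[:, :, q] = np.roll(f[:, :, q], shift=e[q], axis=(0, 1))
    rho = f.sum(axis=-1)                    # macroscopic moments
    u = np.einsum('xyq,qd->xyd', f, e) / rho[..., None]
    f += (equilibrium(rho, u) - f) / tau    # BGK collision

print("kinetic energy:", 0.5 * np.sum(rho * np.sum(u**2, axis=-1)))
```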
NASA Astrophysics Data System (ADS)
Han, B.; Flores, A. N.; Benner, S. G.
2017-12-01
In semiarid and arid regions where water supply is intensively managed, future water scarcity is a product of complex interactions between climate change and human activities. Evaluating future water scarcity under alternative scenarios of climate change, therefore, necessitates modeling approaches that explicitly represent the coupled biophysical and social processes responsible for the redistribution of water in these regions. At regional scales, a particular challenge lies in adequately capturing not only the central tendencies of projected climate change but also the associated plausible range of variability in those projections. This study develops a framework that combines a stochastic weather generator, historical climate observations, and statistically downscaled General Circulation Model (GCM) projections. The method generates a large ensemble of daily climate realizations, avoiding the deficiencies of using a few individual GCM realizations or their mean values. Three climate change scenario groups reflecting the historical, RCP4.5, and RCP8.5 projections are developed. Importantly, the model explicitly captures the spatiotemporally varying irrigation activities as constrained by local water rights in a rapidly growing, semi-arid human-environment system in southwest Idaho. We use this modeling framework to project water use and scarcity patterns under the three climate change scenarios. The model is built using the Envision alternative futures modeling framework. Climate projections for the region show future increases in both precipitation and temperature, especially under the RCP8.5 scenario. The temperature increase directly increases irrigation water use and water scarcity, while the influence of increased precipitation on water use is less clear. The predicted changes are potentially useful in identifying areas in the watershed particularly sensitive to water scarcity, the relative importance of changes in precipitation versus temperature as a driver of scarcity, and potential shortcomings of the current water management framework in the region.
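A toy version of the stochastic-weather-generator component (illustrative parameters; a real application would fit the Markov-chain and gamma parameters to station records and perturb them toward each downscaled GCM scenario):

```python
# Toy daily precipitation generator: first-order Markov chain for wet/dry
# occurrence plus a gamma distribution for wet-day amounts. Parameters are
# hypothetical stand-ins for values fitted to observations.
import numpy as np

rng = np.random.default_rng(42)
p_wd, p_ww = 0.25, 0.60        # P(wet | dry), P(wet | wet)
shape, scale = 0.75, 8.0       # gamma parameters for wet-day precip (mm)

def generate_precip(ndays):
    wet = False
    out = np.zeros(ndays)
    for d in range(ndays):
        wet = rng.random() < (p_ww if wet else p_wd)
        if wet:
            out[d] = rng.gamma(shape, scale)
    return out

# A large ensemble of realizations samples the plausible range of variability.
ensemble = np.array([generate_precip(365) for _ in range(100)])
print("annual totals, mean/std (mm):", ensemble.sum(1).mean(), ensemble.sum(1).std())
```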
Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan
2016-08-15
This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting TP concentrations along a river simultaneously. Two different types of artificial neural network (the BPNN, a static neural network, and the NARX network, a dynamic neural network) are constructed to model the dynamic system. The Dahan River in Taiwan is used as a study case, where ten years of seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and that the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentrations at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system when data of interest are missing, hazardous, or costly to collect. Copyright © 2016 Elsevier B.V. All rights reserved.
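The NARX idea can be sketched as predicting TP at time t from its own lagged values and lagged exogenous drivers; the minimal example below uses a generic MLP regressor on synthetic data (not the study's network or data):

```python
# Sketch of a NARX-style model: nonlinear autoregression with exogenous
# inputs. A generic MLP stands in for the network; the flow driver and TP
# series are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T = 400
flow = rng.gamma(2.0, 1.0, T)                 # exogenous driver (hypothetical)
tp = np.zeros(T)
for t in range(1, T):                         # synthetic TP series
    tp[t] = 0.6*tp[t-1] + 0.3*flow[t-1] + 0.05*rng.standard_normal()

lags = 3                                      # features: tp and flow at t-3..t-1
X = np.column_stack([tp[i:T-lags+i] for i in range(lags)] +
                    [flow[i:T-lags+i] for i in range(lags)])
y = tp[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:300], y[:300])
print("validation R^2:", model.score(X[300:], y[300:]))
```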
Improved Endpoints for Cancer Immunotherapy Trials
Eggermont, Alexander M. M.; Janetzki, Sylvia; Hodi, F. Stephen; Ibrahim, Ramy; Anderson, Aparna; Humphrey, Rachel; Blumenstein, Brent; Wolchok, Jedd
2010-01-01
Unlike chemotherapy, which acts directly on the tumor, cancer immunotherapies exert their effects on the immune system and demonstrate new kinetics that involve building a cellular immune response, followed by changes in tumor burden or patient survival. Thus, adequate design and evaluation of some immunotherapy clinical trials require a new development paradigm that includes reconsideration of established endpoints. Between 2004 and 2009, several initiatives facilitated by the Cancer Immunotherapy Consortium of the Cancer Research Institute and partner organizations systematically evaluated an immunotherapy-focused clinical development paradigm and created the principles for redefining trial endpoints. On this basis, a body of clinical and laboratory data was generated that supports three novel endpoint recommendations. First, cellular immune response assays generate highly variable results. Assay harmonization in multicenter trials may minimize variability and help to establish cellular immune response as a reproducible biomarker, thus allowing investigation of its relationship with clinical outcomes. Second, immunotherapy may induce novel patterns of antitumor response not captured by Response Evaluation Criteria in Solid Tumors or World Health Organization criteria. New immune-related response criteria were defined to more comprehensively capture all response patterns. Third, delayed separation of Kaplan–Meier curves in randomized immunotherapy trials can affect results. Altered statistical models describing hazard ratios as a function of time and recognizing differences before and after separation of curves may allow improved planning of phase III trials. These recommendations may improve our tools for cancer immunotherapy trials and may offer a more realistic and useful model for clinical investigation. PMID:20826737
Measuring genetic knowledge: a brief survey instrument for adolescents and adults.
Fitzgerald-Butt, S M; Bodine, A; Fry, K M; Ash, J; Zaidi, A N; Garg, V; Gerhardt, C A; McBride, K L
2016-02-01
Basic knowledge of genetics is essential for understanding genetic testing and counseling. The lack of a written, English language, validated, published measure has limited our ability to evaluate genetic knowledge of patients and families. Here, we begin the psychometric analysis of a true/false genetic knowledge measure. The 18-item measure was completed by parents of children with congenital heart defects (CHD) (n = 465) and adolescents and young adults with CHD (age: 15-25, n = 196) with a mean total correct score of 12.6 [standard deviation (SD) = 3.5, range: 0-18]. Utilizing exploratory factor analysis, we determined that one to three correlated factors, or abilities, were captured by our measure. Through confirmatory factor analysis, we determined that the two factor model was the best fit. Although it was necessary to remove two items, the remaining items exhibited adequate psychometric properties in a multidimensional item response theory analysis. Scores for each factor were computed, and a sum-score conversion table was derived. We conclude that this genetic knowledge measure discriminates best at low knowledge levels and is therefore well suited to determine a minimum adequate amount of genetic knowledge. However, further reliability testing and validation in diverse research and clinical settings is needed. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
International casemix and funding models: lessons for rehabilitation.
Turner-Stokes, Lynne; Sutch, Stephen; Dredge, Robert; Eagar, Kathy
2012-03-01
This series of articles for rehabilitation in practice aims to cover a knowledge element of the rehabilitation medicine curriculum. Nevertheless they are intended to be of interest to a multidisciplinary audience. The competency addressed in this article is 'An understanding of the different international models for funding of health care services and casemix systems, as exemplified by those in the US, Australia and the UK.' Payment for treatment in healthcare systems around the world is increasingly based on fixed tariff models to drive up efficiency and contain costs. Casemix classifications, however, must account adequately for the resource implications of varying case complexity. Rehabilitation poses some particular challenges for casemix development. The objectives of this educational narrative review are (a) to provide an overview of the development of casemix in rehabilitation, (b) to describe key characteristics of some well-established casemix and payment models in operation around the world and (c) to explore opportunities for future development arising from the lessons learned. Diagnosis alone does not adequately describe cost variation in rehabilitation. Functional dependency is considered a better cost indicator, and casemix classifications for inpatient rehabilitation in the United States and Australia rely on the Functional Independence Measure (FIM). Fixed episode-based prospective payment systems are shown to contain costs, but at the expense of poorer functional outcomes. More sophisticated models incorporating a mixture of episode and weighted per diem rates may offer greater flexibility to optimize outcome, while still providing incentive for throughput. The development of casemix in rehabilitation poses similar challenges for healthcare systems all around the world. Well-established casemix systems in the United States and Australia have afforded valuable lessons for other countries to learn from, but have not provided all the answers. A range of casemix and payment models is required to cater for different healthcare cultures, and casemix tools must capture all the key cost-determinants of treatment for patients with complex needs.
Whittington, Jesse; Sawaya, Michael A
2015-01-01
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.
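The core spatial ingredient of such models can be stated in a few lines: detection probability declines with the distance between a trap and an animal's latent activity centre, commonly via a half-normal function; the sketch below uses hypothetical values:

```python
# Half-normal detection function commonly used in spatial capture-recapture:
# p(d) = p0 * exp(-d^2 / (2 * sigma^2)). All values are hypothetical.
import numpy as np

p0, sigma = 0.2, 2.0                        # baseline detection, spatial scale (km)
traps = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
centre = np.array([1.0, 1.0])               # latent activity centre

d = np.linalg.norm(traps - centre, axis=1)
p = p0 * np.exp(-d**2 / (2 * sigma**2))
print("per-occasion detection probability by trap:", p.round(3))

# Non-spatial models ignore d, so heterogeneity induced by trap-animal
# geometry leaks into survival and recruitment estimates, consistent with
# the biases reported in the study's simulations.
```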
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allan, M.E.; Wilson, M.L.; Wightman, J.
1996-12-31
The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity and permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data, as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic and petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.
Mathematical model of mass transfer at electron beam treatment
NASA Astrophysics Data System (ADS)
Konovalov, Sergey V.; Sarychev, Vladimir D.; Nevskii, Sergey A.; Kobzareva, Tatyana Yu.; Gromov, Victor E.; Semin, Alexander P.
2017-01-01
The paper proposes a model of convective mass transfer during electron beam treatment of titanium alloys subjected to electro-explosion alloying with titanium diboride powder. The proposed model is based on the concept that treatment with concentrated energy flows initiates vortices in the melted layer. The formation mechanism of these vortices is rooted in the idea that the temperature drop across the melt initiates thermocapillary convection. For the melted layer of metal, the equations of convective heat transfer and the boundary conditions in terms of the evaporated material are written. The finite element solution of these equations showed that electron-beam treatment results in the formation of a multi-vortex structure that, as it develops, captures ever new areas of the material. As a result, the strengthening particles are observed at depths many times greater than the penetration depth expected from the diffusion mechanism. The distribution of micro-hardness with depth and the thickness of the strengthened zone determined from these data support the view that the proposed model of convective mass transfer adequately describes the processes occurring during treatment with a low-energy high-current electron beam.
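As background (not quoted from the paper), the thermocapillary driving mechanism is conventionally expressed as a Marangoni shear-stress balance at the free surface, with the tangential viscous stress set by the temperature-induced surface tension gradient:

```latex
% Thermocapillary (Marangoni) boundary condition at the melt surface:
% mu = dynamic viscosity, u = tangential velocity, z = surface normal,
% sigma(T) = temperature-dependent surface tension.
\mu \left. \frac{\partial u}{\partial z} \right|_{\text{surface}}
  \;=\; \frac{\partial \sigma}{\partial x}
  \;=\; \frac{\mathrm{d}\sigma}{\mathrm{d}T}\,\frac{\partial T}{\partial x}
```

Since dσ/dT is negative for most liquid metals, the surface flow is driven from hot to cold regions, which is what seeds the vortices in the melted layer.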
Wind Farm LES Simulations Using an Overset Methodology
NASA Astrophysics Data System (ADS)
Ananthan, Shreyas; Yellapantula, Shashank
2017-11-01
Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10^-6) to adequately resolve the boundary layer on the blade surface. Overset mesh methodology offers an attractive option for addressing this disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset mesh for localized mesh refinement is used to analyze wind farm wakes and performance and compared with local mesh refinements using non-conformal (hanging-node) unstructured meshes. Turbine structures will be modeled using both actuator-line approaches and fully resolved geometries to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.
NASA Astrophysics Data System (ADS)
Penilla, E. H.; Hardin, C. L.; Kodera, Y.; Basun, S. A.; Evans, D. R.; Garay, J. E.
2016-01-01
Light scattering due to birefringence has prevented the use of polycrystalline ceramics with anisotropic optical properties in applications such as laser gain media. However, continued development of processing technology has allowed for very low porosity and fine grain sizes, significantly improving transparency and paving the way for polycrystalline ceramics to be used in demanding optical applications. We present a method for producing highly transparent Cr3+ doped Al2O3 (ruby) using current activated pressure assisted densification. The one-step doping/densification process produces fine-grained ceramics with well-integrated (doped) Cr, resulting in good absorption and emission. In order to explain the light transmission properties, we extend the analytical model based on the Rayleigh-Gans-Debye approximation, previously used for undoped alumina, to include absorption. The model presented captures reflection, scattering, and absorption phenomena in the ceramics. Comparison with measured transmission confirms that the model adequately describes the properties of polycrystalline ruby. In addition, the measured emission spectra and emission lifetime are found to be similar to single crystals, confirming the high optical quality of the ceramics.
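For orientation, a Rayleigh-Gans-Debye transmission model for birefringent polycrystals extended with an absorption term plausibly takes the following form (the Apetz-van Bruggen expression with an added Beer-Lambert factor; this is my reading of the model class, not the paper's exact equation):

```latex
% Real in-line transmission of a birefringent polycrystal of thickness t:
% R_s = total surface reflectance, r = mean grain radius,
% \Delta n = birefringence, \alpha(\lambda) = absorption coefficient
% (here from the Cr^{3+} dopant), \lambda_0 = vacuum wavelength.
T(\lambda) \;=\; (1 - R_s)\,
  \exp\!\left[ -\left( \frac{3\pi^{2} r\, \Delta n^{2}}{\lambda_{0}^{2}}
  + \alpha(\lambda) \right) t \right]
```

The quadratic dependence on birefringence and the inverse-square dependence on wavelength explain why fine grains are essential: halving the grain radius halves the scattering loss exponent at fixed thickness.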
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Lianhong; Pallardy, Stephen G.; Yang, Bai
Testing complex land surface models has often proceeded by asking the question: does the model prediction agree with the observation? This approach has yet to lead to high-performance terrestrial models that meet the challenges of climate and ecological studies. Here we test the Community Land Model (CLM) by asking the question: does the model behave like an ecosystem? We pursue an answer by testing CLM in the ecosystem functional space (EFS) at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S., focusing on carbon and water flux responses to precipitation regimes and associated stresses. In the observed EFS, precipitation regimes and associated water and heat stresses controlled seasonal and interannual variations of net ecosystem exchange (NEE) of CO2 and evapotranspiration in this deciduous forest ecosystem. Such controls were exerted more strongly by precipitation variability than by the total precipitation amount per se. A few simply constructed climate variability indices captured these controls, suggesting a high degree of potential predictability. While the interannual fluctuation in NEE was large, a net carbon sink was maintained even during an extreme drought year. Although CLM predicted seasonal and interannual variations in evapotranspiration reasonably well, its predictions of net carbon uptake were too small across the observed range of climate variability. Also, the model systematically underestimated the sensitivities of NEE and evapotranspiration to climate variability and overestimated the coupling strength between carbon and water fluxes. It is suspected that the modeled and observed trajectories of ecosystem fluxes did not overlap in the EFS and that the model did not behave like the ecosystem it attempted to simulate. A definitive conclusion will require comprehensive parameter and structural sensitivity tests in a rigorous mathematical framework. We also suggest that future model improvements should focus on better representation and parameterization of process responses to environmental stresses and on more complete and robust representations of carbon-specific processes, so that adequate responses to climate variability and a proper degree of coupling between carbon and water exchanges are captured.
Gu, Lianhong; Pallardy, Stephen G.; Yang, Bai; ...
2016-07-14
Testing complex land surface models has often proceeded by asking the question: does the model prediction agree with the observation? This approach has yet led to high-performance terrestrial models that meet the challenges of climate and ecological studies. Here we test the Community Land Model (CLM) by asking the question: does the model behave like an ecosystem? We pursue its answer by testing CLM in the ecosystem functional space (EFS) at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the Central U.S., focusing on carbon and water flux responses to precipitation regimes and associated stresses. In the observed EFS, precipitationmore » regimes and associated water and heat stresses controlled seasonal and interannual variations of net ecosystem exchange (NEE) of CO 2 and evapotranspiration in this deciduous forest ecosystem. Such controls were exerted more strongly by precipitation variability than by the total precipitation amount per se. A few simply constructed climate variability indices captured these controls, suggesting a high degree of potential predictability. While the interannual fluctuation in NEE was large, a net carbon sink was maintained even during an extreme drought year. Although CLM predicted seasonal and interanual variations in evapotranspiration reasonably well, its predictions of net carbon uptake were too small across the observed range of climate variability. Also, the model systematically underestimated the sensitivities of NEE and evapotranspiration to climate variability and overestimated the coupling strength between carbon and water fluxes. Its suspected that the modeled and observed trajectories of ecosystem fluxes did not overlap in the EFS and the model did not behave like the ecosystem it attempted to simulate. A definitive conclusion will require comprehensive parameter and structural sensitivity tests in a rigorous mathematical framework. We also suggest that future model improvements should focus on better representation and parameterization of process responses to environmental stresses and on more complete and robust representations of carbon-specific processes so that adequate responses to climate variability and a proper degree of coupling between carbon and water exchanges are captured.« less
Octopus: A Design Methodology for Motion Capture Wearables
Marin, Javier; Blanco, Teresa; Marin, Jose J
2017-08-15
Human motion capture (MoCap) is widely recognised for its usefulness and application in different fields, such as health, sports, and leisure; therefore, its inclusion in current wearables (MoCap-wearables) is increasing, and it may be very useful in a context of intelligent objects interconnected with each other and to the cloud in the Internet of Things (IoT). However, capturing human movement adequately requires addressing difficult-to-satisfy requirements, which means that the applications that are possible with this technology are held back by a series of accessibility barriers, some technological and some regarding usability. To overcome these barriers and generate products with greater wearability that are more efficient and accessible, factors are compiled through a review of publications and market research. The result of this analysis is a design methodology called Octopus, which ranks these factors and schematises them. Octopus provides a tool that can help define design requirements for multidisciplinary teams, generating a common framework and offering a new method of communication between them. PMID:28809786
Including pride and its group-based, relational, and contextual features in theories of contempt.
Sullivan, Gavin Brent
2017-01-01
Sentiment includes emotional and enduring attitudinal features of contempt, but explaining contempt as a mixture of basic emotion system affects does not adequately address the family resemblance structure of the concept. Adding forms of individual, group-based, and widely shared arrogance and contempt is necessary to capture the complex mixed feelings of proud superiority when "looking down upon" and acting harshly towards others.
I. Arismendi; S. L. Johnson; J. B. Dunham
2015-01-01
Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical...
2' and 3' Carboranyl uridines and their diethyl ether adducts
Soloway, A.H.; Barth, R.F.; Anisuzzaman, A.K.; Alam, F.; Tjarks, W.
1992-12-15
A process is described for preparing carboranyl uridine nucleoside compounds and their diethyl ether adducts, which exhibit a tenfold increase in boron content over prior art boron containing nucleoside compounds. The carboranyl uridine nucleoside compounds exhibit enhanced lipophilicity and hydrophilic properties adequate to enable solvation in aqueous media for subsequent incorporation of the compounds in methods for boron neutron capture therapy in mammalian tumor cells.
2' and 3' Carboranyl uridines and their diethyl ether adducts
Soloway, Albert H.; Barth, Rolf F.; Anisuzzaman, Abul K.; Alam, Fazlul; Tjarks, Werner
1992-01-01
There is disclosed a process for preparing carboranyl uridine nucleoside compounds and their diethyl ether adducts, which exhibit a tenfold increase in boron content over prior art boron containing nucleoside compounds. Said carboranyl uridine nucleoside compounds exhibit enhanced lipophilicity and hydrophilic properties adequate to enable solvation in aqueous media for subsequent incorporation of said compounds in methods for boron neutron capture therapy in mammalian tumor cells.
NASA Astrophysics Data System (ADS)
Rampidis, I.; Nikolopoulos, A.; Koukouzas, N.; Grammelis, P.; Kakaras, E.
2007-09-01
This work presents an accurate and efficient, fully 3-D CFD model for simulating pilot-scale CFB hydrodynamics. The accuracy of the model was investigated as a function of the numerical parameters in order to derive an optimum model setup with respect to computational cost. The need for an in-depth examination of hydrodynamics arises from the trend to scale up CFBCs. This scale-up brings forward numerous design problems and uncertainties, which can be successfully elucidated by CFD techniques. Deriving guidelines for setting up a computationally efficient model is important as the scale of CFBs grows fast while computational power remains limited. However, the question of optimum efficiency has not been investigated thoroughly in the literature, as authors have been more concerned with their models' accuracy and validity. The objective of this work is to investigate the parameters that influence the efficiency and accuracy of CFB computational fluid dynamics models, to find the optimum set of these parameters, and thus to establish this technique as a competitive method for the simulation and design of industrial, large-scale beds, where the computational cost is otherwise prohibitive. In the tests performed in this work, the influence of the turbulence modeling approach and of the temporal and spatial resolution and discretization schemes was investigated on a 1.2 MWth CFB test rig. Using Fourier analysis, dominant frequencies were extracted in order to estimate an adequate time period for averaging all instantaneous values. Agreement with the experimental measurements was very good. The basic differences between the predictions arising from the various model setups were pointed out and analyzed. The results showed that a model with high-order space discretization schemes applied on a coarse grid, with averaging of the instantaneous scalar values over a 20 s period, adequately described the transient hydrodynamic behaviour of a pilot CFB while keeping the computational cost low. Flow patterns inside the bed, such as the core-annulus flow and the transport of clusters, were at least qualitatively captured.
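The abstract does not spell out the Fourier procedure; as a hedged sketch, the dominant frequency of a fluctuating probe signal can be extracted with a discrete FFT and used to size the averaging window, along the following lines (the function name and the ten-period rule of thumb are assumptions, not the paper's):

    import numpy as np

    def dominant_frequency(signal, dt):
        """Dominant frequency (Hz) of a fluctuating probe signal,
        e.g. pressure or solids fraction at a point in the riser."""
        x = np.asarray(signal, dtype=float)
        x -= x.mean()
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(x.size, d=dt)
        return freqs[1:][np.argmax(power[1:])]   # skip the f = 0 bin

    # Averaging window covering ~10 periods of the slowest dominant mode:
    # t_avg = 10.0 / dominant_frequency(pressure_series, dt)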
NASA Astrophysics Data System (ADS)
Shope, C. L.; Maharjan, G. R.; Tenhunen, J.; Seo, B.; Kim, K.; Riley, J.; Arnhold, S.; Koellner, T.; Ok, Y. S.; Peiffer, S.; Kim, B.; Park, J.-H.; Huwe, B.
2014-02-01
Watershed-scale modeling can be a valuable tool to aid in quantification of water quality and yield; however, several challenges remain. In many watersheds, it is difficult to adequately quantify hydrologic partitioning. Data scarcity is prevalent, the accuracy of spatially distributed meteorology is difficult to quantify, forest encroachment and land use issues are common, and surface water and groundwater abstractions substantially modify watershed-based processes. Our objective is to assess the capability of the Soil and Water Assessment Tool (SWAT) model to capture event-based and long-term monsoonal rainfall-runoff processes in complex mountainous terrain. To accomplish this, we developed a unique quality-control, gap-filling algorithm for interpolation of high-frequency meteorological data. We used a novel multi-location, multi-optimization calibration technique to improve estimations of catchment-wide hydrologic partitioning. The interdisciplinary model was calibrated to a unique combination of statistical, hydrologic, and plant growth metrics. Our results indicate scale-dependent sensitivity of hydrologic partitioning and substantial influence of engineered features. The addition of hydrologic and plant growth objective functions identified the importance of culverts in catchment-wide flow distribution. While this study shows the challenges of applying the SWAT model to complex terrain and extreme environments, it also shows that incorporating anthropogenic features into modeling scenarios can enhance our understanding of the hydroecological impact.
Electrical Wave Propagation in a Minimally Realistic Fiber Architecture Model of the Left Ventricle
NASA Astrophysics Data System (ADS)
Song, Xianfeng; Setayeshgar, Sima
2006-03-01
Experimental results indicate a nested, layered geometry for the fiber surfaces of the left ventricle, where fiber directions are approximately aligned in each surface and gradually rotate through the thickness of the ventricle. Numerical and analytical results have highlighted the importance of this rotating anisotropy and its possible destabilizing role on the dynamics of scroll waves in excitable media with application to the heart. Based on the work of Peskin[1] and Peskin and McQueen[2], we present a minimally realistic model of the left ventricle that adequately captures the geometry and anisotropic properties of the heart as a conducting medium while being easily parallelizable, and computationally more tractable than fully realistic anatomical models. Complementary to fully realistic and anatomically-based computational approaches, studies using such a minimal model with the addition of successively realistic features, such as excitation-contraction coupling, should provide unique insight into the basic mechanisms of formation and obliteration of electrical wave instabilities. We describe our construction, implementation and validation of this model. [1] C. S. Peskin, Communications on Pure and Applied Mathematics 42, 79 (1989). [2] C. S. Peskin and D. M. McQueen, in Case Studies in Mathematical Modeling: Ecology, Physiology, and Cell Biology, 309(1996)
Engineering graphics data entry for space station data base
NASA Technical Reports Server (NTRS)
Lacovara, R. C.
1986-01-01
The entry of graphical engineering data into the Space Station Data Base was examined. Discussed were: representation of graphics objects; representation of connectivity data; graphics capture hardware; graphics display hardware; site-wide distribution of graphics, and consolidation of tools and hardware. A fundamental assumption was that existing equipment such as IBM based graphics capture software and VAX networked facilities would be exploited. Defensible conclusions reached after study and simulations of use of these systems at the engineering level are: (1) existing IBM based graphics capture software is an adequate and economical means of entry of schematic and block diagram data for present and anticipated electronic systems for Space Station; (2) connectivity data from the aforementioned system may be incorporated into the envisioned Space Station Data Base with modest effort; (3) graphics and connectivity data captured on the IBM based system may be exported to the VAX network in a simple and direct fashion; (4) graphics data may be displayed site-wide on VT-125 terminals and lookalikes; (5) graphics hard-copy may be produced site-wide on various dot-matrix printers; and (6) the system may provide integrated engineering services at both the engineering and engineering management level.
Whittington, Jesse; Sawaya, Michael A.
2015-01-01
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal’s home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786–1.071) for females, 0.844 (0.703–0.975) for males, and 0.882 (0.779–0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758–1.024) for females, 0.825 (0.700–0.948) for males, and 0.863 (0.771–0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park’s population of grizzly bears requires continued conservation-oriented management actions. PMID:26230262
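For readers unfamiliar with the spatial models referred to above, the most common detection model in spatial capture-recapture is a half-normal function of the distance between an animal's home-range centre and a trap; the sketch below shows only that core idea, not the authors' full open-population formulation, and the parameter values are illustrative:

    import numpy as np

    def detection_prob(g0, sigma, centre, trap):
        """Half-normal spatial capture-recapture detection function:
        baseline detection g0 decays with the distance between an
        animal's home-range centre and the trap (scale sigma, metres)."""
        d2 = np.sum((np.asarray(centre, float) - np.asarray(trap, float)) ** 2)
        return g0 * np.exp(-d2 / (2.0 * sigma ** 2))

    # Illustrative: a bear centred 2.5 km from a hair-snag station
    print(detection_prob(g0=0.05, sigma=3000.0, centre=(0.0, 0.0),
                         trap=(2000.0, 1500.0)))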
Development and validation of a multi-dimensional measure of intellectual humility
Alfano, Mark; Iurino, Kathryn; Stey, Paul; Robinson, Brian; Christen, Markus; Yu, Feng; Lapsley, Daniel
2017-01-01
This paper presents five studies on the development and validation of a scale of intellectual humility. This scale captures cognitive, affective, behavioral, and motivational components of the construct that have been identified by various philosophers in their conceptual analyses of intellectual humility. We find that intellectual humility has four core dimensions: Open-mindedness (versus Arrogance), Intellectual Modesty (versus Vanity), Corrigibility (versus Fragility), and Engagement (versus Boredom). These dimensions display adequate self-informant agreement, and adequate convergent, divergent, and discriminant validity. In particular, Open-mindedness adds predictive power beyond the Big Six for an objective behavioral measure of intellectual humility, and Intellectual Modesty is uniquely related to Narcissism. We find that a similar factor structure emerges in Germanophone participants, giving initial evidence for the model’s cross-cultural generalizability. PMID:28813478
A dilation-driven vortex flow in sheared granular materials explains a rheometric anomaly.
Krishnaraj, K P; Nott, Prabhu R
2016-02-11
Granular flows occur widely in nature and industry, yet a continuum description that captures their important features is not yet at hand. Recent experiments on granular materials sheared in a cylindrical Couette device revealed a puzzling anomaly, wherein all components of the stress rise nearly exponentially with depth. Here we show, using particle dynamics simulations and imaging experiments, that the stress anomaly arises from a remarkable vortex flow. For the entire range of fill heights explored, we observe a single toroidal vortex that spans the entire Couette cell and whose sense is opposite to that of the uppermost Taylor vortex in a fluid. We show that the vortex is driven by a combination of shear-induced dilation, a phenomenon that has no analogue in fluids, and gravity flow. Dilatancy is an important feature of granular mechanics, but it is not adequately incorporated in existing models.
Measurements of pore-scale flow through apertures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chojnicki, Kirsten
Pore-scale aperture effects on flow in pore networks were studied in the laboratory to provide a parameterization for use in transport models. Four cases were considered: regular and irregular pillar/pore alignment, with and without an aperture. The velocity field of each case was measured and simulated, providing quantitatively comparable results. Two aperture-effect parameterizations were considered: permeability and transmission. Permeability values varied by an order of magnitude between the cases with and without apertures. However, transmission did not correlate with permeability. Despite having much greater permeability, the regular aperture case permitted less transmission than the regular case. Moreover, both irregular cases had greater transmission than the regular cases, a difference not supported by the permeabilities. Overall, these findings suggest that pore-scale aperture effects on flow through a pore network may not be adequately captured by properties such as permeability for applications interested in determining particle transport volume and timing.
A deep mixing solution to the aluminum and oxygen isotope puzzles in pre-solar grains
NASA Astrophysics Data System (ADS)
Palmerini, S.; Trippella, O.; Busso, M.
2017-05-01
We present here the application of a model for a mass circulation mechanism in between the H-burning shell and the base of the convective envelope of low-mass asymptotic giant branch (AGB) stars, aimed at studying the isotopic composition of those pre-solar grains showing the most extreme levels of 18O depletion and high concentration of 26Mg from the decay of 26Al. The mixing scheme we present is based on a previously suggested magnetic-buoyancy process, already shown to account adequately for the formation of the main neutron source for slow neutron captures in AGB stars. We find that this scenario is also capable of reproducing for the first time the extreme values of the 17O/16O, 18O/16O, and 26Al/27Al isotopic ratios found in the mentioned oxide grains, including the highest amounts of 26Al measured there.
NASA Astrophysics Data System (ADS)
Yu, Karen; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Miller, Christopher C.; Travis, Katherine R.; Zhu, Lei; Yantosca, Robert M.; Sulprizio, Melissa P.; Cohen, Ron C.; Dibb, Jack E.; Fried, Alan; Mikoviny, Tomas; Ryerson, Thomas B.; Wennberg, Paul O.; Wisthaler, Armin
2016-04-01
Formation of ozone and organic aerosol in continental atmospheres depends on whether isoprene emitted by vegetation is oxidized by the high-NOx pathway (where peroxy radicals react with NO) or by low-NOx pathways (where peroxy radicals react by alternate channels, mostly with HO2). We used mixed layer observations from the SEAC4RS aircraft campaign over the Southeast US to test the ability of the GEOS-Chem chemical transport model at different grid resolutions (0.25° × 0.3125°, 2° × 2.5°, 4° × 5°) to simulate this chemistry under high-isoprene, variable-NOx conditions. Observations of isoprene and NOx over the Southeast US show a negative correlation, reflecting the spatial segregation of emissions; this negative correlation is captured in the model at 0.25° × 0.3125° resolution but not at coarser resolutions. As a result, less isoprene oxidation takes place by the high-NOx pathway in the model at 0.25° × 0.3125° resolution (54 %) than at coarser resolution (59 %). The cumulative probability distribution functions (CDFs) of NOx, isoprene, and ozone concentrations show little difference across model resolutions and good agreement with observations, while formaldehyde is overestimated at coarse resolution because excessive isoprene oxidation takes place by the high-NOx pathway with high formaldehyde yield. The good agreement of simulated and observed concentration variances implies that smaller-scale non-linearities (urban and power plant plumes) are not important on the regional scale. Correlations of simulated vs. observed concentrations do not improve with grid resolution because finer modes of variability are intrinsically more difficult to capture. Higher model resolution leads to decreased conversion of NOx to organic nitrates and increased conversion to nitric acid, with total reactive nitrogen oxides (NOy) changing little across model resolutions. Model concentrations in the lower free troposphere are also insensitive to grid resolution. The overall low sensitivity of modeled concentrations to grid resolution implies that coarse resolution is adequate when modeling continental boundary layer chemistry for global applications.
Link, William A; Barker, Richard J
2005-03-01
We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
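As a rough illustration of the CJS factor mentioned above (not the authors' hierarchical extension with correlated survival and birth rates), the conditional-on-first-capture likelihood of a single capture history with constant parameters can be coded directly; the recursion for chi, the probability of never being seen again, is standard:

    import numpy as np

    def cjs_loglik(history, phi, p):
        """Log-likelihood of one capture history under a minimal CJS
        model (constant survival phi and capture probability p),
        conditioning on first capture."""
        occ = np.nonzero(history)[0]
        first, last = occ[0], occ[-1]
        ll = 0.0
        for t in range(first + 1, last + 1):    # known alive up to `last`
            ll += np.log(phi)
            ll += np.log(p) if history[t] else np.log(1.0 - p)
        chi = 1.0                               # P(never seen after `last`)
        for _ in range(len(history) - 1 - last):
            chi = 1.0 - phi + phi * (1.0 - p) * chi
        return ll + np.log(chi)

    print(cjs_loglik([1, 0, 1, 0, 0], phi=0.8, p=0.4))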
Generation of Induced Pluripotent Stem Cells from Mammalian Endangered Species.
Ben-Nun, Inbar Friedrich; Montague, Susanne C; Houck, Marlys L; Ryder, Oliver; Loring, Jeanne F
2015-01-01
For some highly endangered species there are too few reproductively capable animals to maintain adequate genetic diversity, and extraordinary measures are necessary to prevent their extinction. Cellular reprogramming is a means to capture the genomes of individual animals as induced pluripotent stem cells (iPSCs), which may eventually facilitate reintroduction of genetic material into breeding populations. Here, we describe a method for generating iPSCs from fibroblasts of mammalian endangered species.
The health care Titanic: women and children first?
Holland, S; Peterson, K
1993-01-01
The plight of people who lack access to health care has captured national attention and led to a number of proposals to remedy the problem. The authors look at three types of proposals being advanced--"pro-competition" plans, "pay-or-play" plans, and a national health care system--and find that they fail to address adequately the pressing needs of two groups of the poor: women of childbearing age and elderly women.
Size of nesting female Broad-snouted Caimans (Caiman latirostris Daudin 1802).
Leiva, P M L; Simoncini, M S; Portelinha, T C G; Larriera, A; Piña, C I
2018-03-12
The southern distribution of the Broad-snouted Caiman (Caiman latirostris Daudin 1802) in Argentina occurs in Santa Fe Province, where its population has been under management by "Proyecto Yacaré" since 1990. From 1997 to 2016, we captured 77 nesting female Broad-snouted Caimans in Santa Fe Province. Our results suggest that previously defined size classes for the Broad-snouted Caiman do not adequately describe the reproductively mature female segment of the population. Here we propose new size ranges for the general size classes of the Broad-snouted Caiman. In addition, we observed that reproductive females reintroduced by Proyecto Yacaré represent about 32% of captured females. These results indicate that females reintroduced by the management program are surviving and reproducing in the wild up to at least 20 years after release.
Measuring Children’s Media Use in the Digital Age
Vandewater, Elizabeth A.; Lee, Sook-Jung
2009-01-01
In this new and rapidly changing era of digital technology, there is increasing consensus among media scholars that there is an urgent need to develop measurement approaches which more adequately capture media use. The overarching goal of this paper is to facilitate the development of measurement approaches appropriate for capturing children’s media use in the digital age. The paper outlines various approaches to measurement, focusing mainly on those which have figured prominently in major existing studies of children’s media use. We identify issues related to each technique, including advantages and disadvantages. We also include a review of existing empirical comparisons of various methodologies. The paper is intended to foster discussion of the best ways to further research and knowledge regarding the impact of media on children. PMID:19763246
Beyond the 'east-west' dichotomy: Global variation in cultural models of selfhood.
Vignoles, Vivian L; Owe, Ellinor; Becker, Maja; Smith, Peter B; Easterbrook, Matthew J; Brown, Rupert; González, Roberto; Didier, Nicolas; Carrasco, Diego; Cadena, Maria Paz; Lay, Siugmin; Schwartz, Seth J; Des Rosiers, Sabrina E; Villamar, Juan A; Gavreliuc, Alin; Zinkeng, Martina; Kreuzbauer, Robert; Baguma, Peter; Martin, Mariana; Tatarko, Alexander; Herman, Ginette; de Sauvage, Isabelle; Courtois, Marie; Garðarsdóttir, Ragna B; Harb, Charles; Schweiger Gallo, Inge; Prieto Gil, Paula; Lorente Clemares, Raquel; Campara, Gabriella; Nizharadze, George; Macapagal, Ma Elizabeth J; Jalal, Baland; Bourguignon, David; Zhang, Jianxin; Lv, Shaobo; Chybicka, Aneta; Yuki, Masaki; Zhang, Xiao; Espinosa, Agustín; Valk, Aune; Abuhamdeh, Sami; Amponsah, Benjamin; Özgen, Emre; Güner, E Ülkü; Yamakoğlu, Nil; Chobthamkit, Phatthanakit; Pyszczynski, Tom; Kesebir, Pelin; Vargas Trujillo, Elvia; Balanta, Paola; Cendales Ayala, Boris; Koller, Silvia H; Jaafar, Jas Laile; Gausel, Nicolay; Fischer, Ronald; Milfont, Taciano L; Kusdil, Ersin; Çağlar, Selinay; Aldhafri, Said; Ferreira, M Cristina; Mekonnen, Kassahun Habtamu; Wang, Qian; Fülöp, Márta; Torres, Ana; Camino, Leoncio; Lemos, Flávia Cristina Silveira; Fritsche, Immo; Möller, Bettina; Regalia, Camillo; Manzi, Claudia; Brambilla, Maria; Bond, Michael Harris
2016-08-01
Markus and Kitayama's (1991) theory of independent and interdependent self-construals had a major influence on social, personality, and developmental psychology by highlighting the role of culture in psychological processes. However, research has relied excessively on contrasts between North American and East Asian samples, and commonly used self-report measures of independence and interdependence frequently fail to show predicted cultural differences. We revisited the conceptualization and measurement of independent and interdependent self-construals in 2 large-scale multinational surveys, using improved methods for cross-cultural research. We developed (Study 1: N = 2924 students in 16 nations) and validated across cultures (Study 2: N = 7279 adults from 55 cultural groups in 33 nations) a new 7-dimensional model of self-reported ways of being independent or interdependent. Patterns of global variation support some of Markus and Kitayama's predictions, but a simple contrast between independence and interdependence does not adequately capture the diverse models of selfhood that prevail in different world regions. Cultural groups emphasize different ways of being both independent and interdependent, depending on individualism-collectivism, national socioeconomic development, and religious heritage. Our 7-dimensional model will allow future researchers to test more accurately the implications of cultural models of selfhood for psychological processes in diverse ecocultural contexts.
Ducrot, Virginie; Péry, Alexandre R. R.; Lagadic, Laurent
2010-11-12
Pesticide use leads to complex exposure and response patterns in non-target aquatic species, so that the analysis of data from standard toxicity tests may result in unrealistic risk forecasts. Developing models that are able to capture such complexity from toxicity test data is thus a crucial issue for pesticide risk assessment. In this study, freshwater snails from two genetically differentiated populations of Lymnaea stagnalis were exposed to repeated acute applications of environmentally realistic concentrations of the herbicide diquat, from the embryo to the adult stage. Hatching rate, embryonic development duration, juvenile mortality, feeding rate and age at first spawning were investigated during both exposure and recovery periods. Effects of diquat on mortality were analysed using a threshold hazard model accounting for time-varying herbicide concentrations. All endpoints were significantly impaired at diquat environmental concentrations in both populations. Snail evolutionary history had no significant impact on their sensitivity and responsiveness to diquat, whereas food acted as a modulating factor of toxicant-induced mortality. The time course of effects was adequately described by the model, which thus appears suitable to analyse long-term effects of complex exposure patterns based upon full life cycle experiment data. Obtained model outputs (e.g. no-effect concentrations) could be directly used for chemical risk assessment. PMID:20921047
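A minimal sketch of a threshold hazard model of the kind described, assuming the hazard is proportional to the exceedance of the time-varying exposure concentration over a no-effect threshold z, with killing rate kk and background mortality m; the toxicokinetic stage linking external concentration to internal damage is omitted, and all parameter values are illustrative:

    import numpy as np

    def survival(conc, dt, kk, z, m=0.0):
        """Survival probability over time under a threshold hazard
        model: hazard = kk * max(conc - z, 0) + m, integrated over a
        time-varying exposure series (all values illustrative)."""
        h = kk * np.maximum(np.asarray(conc, float) - z, 0.0) + m
        return np.exp(-np.cumsum(h) * dt)

    t = np.arange(0.0, 100.0, 0.1)        # days
    pulse = 2.0 * ((t % 20.0) < 1.0)      # brief herbicide pulse every 20 d
    S = survival(pulse, dt=0.1, kk=0.05, z=0.5)
    print(S[-1])                          # fraction surviving at day 100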
Fluorescence-based proxies for lignin in freshwater dissolved organic matter
Hernes, Peter J.; Bergamaschi, Brian A.; Eckard, Robert S.; Spencer, Robert G.M.
2009-01-01
Lignin phenols have proven to be powerful biomarkers in environmental studies; however, the complexity of lignin analysis limits the number of samples and thus spatial and temporal resolution in any given study. In contrast, spectrophotometric characterization of dissolved organic matter (DOM) is rapid, noninvasive, relatively inexpensive, requires small sample volumes, and can even be measured in situ to capture fine-scale temporal and spatial detail of DOM cycling. Here we present a series of cross-validated Partial Least Squares models that use fluorescence properties of DOM to explain up to 91% of lignin compositional and concentration variability in samples collected seasonally over 2 years in the Sacramento River/San Joaquin River Delta in California, United States. These models were subsequently used to predict lignin composition and concentration from fluorescence measurements collected during a diurnal study in the San Joaquin River. While modeled lignin composition remained largely unchanged over the diurnal cycle, changes in modeled lignin concentrations were much greater than expected and indicate that the sensitivity of fluorescence-based proxies for lignin may prove invaluable as a tool for selecting the most informative samples for detailed lignin characterization. With adequate calibration, similar models could be used to significantly expand our ability to study sources and processing of DOM in complex surface water systems.
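As a hedged sketch of this workflow (not the authors' calibration), a cross-validated PLS model mapping fluorescence intensities to a lignin measure takes only a few lines; the data below are random placeholders standing in for excitation-emission intensities and measured lignin phenol concentrations:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    # Placeholder data: rows = water samples, columns = fluorescence
    # intensities; y = lignin phenol concentration from wet chemistry.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)

    pls = PLSRegression(n_components=5)
    print(cross_val_score(pls, X, y, cv=10).mean())   # cross-validated R^2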
USING TIME VARIANT VOLTAGE TO CALCULATE ENERGY CONSUMPTION AND POWER USE OF BUILDING SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Augenbroe , Godfried
2015-12-09
Buildings are the main consumers of electricity across the world. However, research on building performance assessment has focused on evaluating the energy efficiency of buildings, whereas instantaneous power efficiency has been overlooked as an important aspect of total energy consumption. As a result, we have never developed adequate models that capture both the thermal and electrical characteristics (e.g., voltage) of building systems, with which to assess the impact of variations in the power system and emerging smart-grid technologies on buildings' energy and power performance, and vice versa. This paper argues that the power performance of buildings as a function of electrical parameters should be evaluated in addition to systems' mechanical and thermal behavior. The main advantage of capturing the electrical behavior of building load is to better understand instantaneous power consumption and, more importantly, to control it. Voltage is one of the electrical parameters that can be used to describe load. Hence, voltage-dependent power models are constructed in this work and coupled with existing thermal energy models. The lack of models that describe the electrical behavior of systems also adds to the uncertainty of energy consumption calculations carried out in building energy simulation tools such as EnergyPlus, a common building energy modeling and simulation tool. To integrate the voltage-dependent power models with thermal models, the thermal cycle (operation mode) of each system was fed into the voltage-based electrical model. Energy consumption of the systems used in this study was simulated using EnergyPlus. Simulated results were then compared with estimated and measured power data. The mean square error (MSE) between simulated, estimated, and measured values was calculated. Results indicate that estimated power has a lower MSE against measured data than simulated results. The results discussed in this paper illustrate the significance of enhancing building energy models with electrical characteristics. This would support studies related to modernization of the power system that require micro-scale building-grid interaction, evaluation of building energy efficiency with power efficiency considerations, and design and control decisions that rely on the accuracy of building energy simulation results.
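The paper does not state which voltage-dependent load form was used; one common assumption for coupling power draw to voltage is the ZIP model, sketched below with illustrative coefficients rather than fitted values:

    def zip_power(v, v0, p0, a_z, a_i, a_p):
        """ZIP load model: real power as a quadratic in normalised
        voltage, mixing constant-impedance (Z), constant-current (I)
        and constant-power (P) fractions; a_z + a_i + a_p = 1."""
        r = v / v0
        return p0 * (a_z * r ** 2 + a_i * r + a_p)

    # Illustrative coefficients, not the paper's fitted values:
    print(zip_power(232.8, 240.0, p0=1500.0, a_z=0.4, a_i=0.4, a_p=0.2))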
Petersen, James H.; DeAngelis, Donald L.
1992-01-01
The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
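For reference, the bare Holling Type 2 and Type 3 cores look as follows; the study's actual equations additionally fold in predator weight, prey weight, temperature, and diel feeding pattern, so this is only a skeleton with illustrative parameter values:

    def holling_type2(N, a, h):
        """Type 2 response: captures per predator per unit time at
        prey density N, with attack rate a and handling time h."""
        return a * N / (1.0 + a * h * N)

    def holling_type3(N, a, h):
        """Type 3 response: sigmoidal, with attack success rising
        with prey density."""
        return a * N ** 2 / (1.0 + a * h * N ** 2)

    print(holling_type2(10.0, a=0.2, h=0.5), holling_type3(10.0, a=0.02, h=0.5))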
NASA Astrophysics Data System (ADS)
Nicolas, A.; Fortin, J.; Guéguen, Y.
2017-10-01
Deformation and failure of rocks are important for a better understanding of many crustal geological phenomena such as faulting and compaction. In carbonate rocks among others, low-temperature deformation can occur with either dilatancy or compaction, with implications for porosity changes, failure, and petrophysical properties. Hence, a thorough understanding of all the micromechanisms responsible for deformation is of great interest. In this study, a constitutive model for the low-temperature deformation of low-porosity (<20 per cent) carbonate rocks is derived from the micromechanisms identified in previous studies. The micromechanical model is based on (1) brittle crack propagation, (2) a plasticity law (interpreted in terms of dislocation glide without the possibility of climb) for porous media with hardening, and (3) crack nucleation due to dislocation pile-ups. The model predicts stress-strain relations and the evolution of damage during deformation. It adequately predicts brittle behaviour at low confining pressures, switching to semi-brittle behaviour characterized by inelastic compaction followed by dilatancy at higher confining pressures. Model predictions are compared with experimental results from previous studies and are found to be in close agreement. This suggests that the microphysical phenomena responsible for the deformation are sufficiently well captured by the model, although twinning, recovery, and cataclasis are not considered. The porosity range of applicability and the limits of the model are discussed.
A new physically-based windblown dust emission ...
Dust has significant impacts on weather and climate, air quality and visibility, and human health; therefore, it is important to include a windblown dust emission module in atmospheric and air quality models. In this presentation, we summarize our efforts in the development of a physics-based windblown dust emission scheme and its implementation in the CMAQ modeling system. The new model incorporates the effects of surface wind speed, soil texture, soil moisture, and surface roughness in a physically sound manner. Specifically, a newly developed dynamic relation for the surface roughness length in this model is believed to adequately represent the physics of the surface processes involved in dust generation. Furthermore, careful attention is paid to integrating the new windblown dust module within CMAQ to ensure that the required input parameters are correctly configured. The new model is evaluated for case studies covering the continental United States and the Northern Hemisphere, and is shown to capture the occurrence of dust outbreaks and the level of soil concentration. We discuss the uncertainties and limitations of the model and briefly describe our path forward for further improvements.
Aguilar-Raab, Corina; Grevenstein, Dennis; Schweitzer, Jochen
2015-01-01
Social interactions have gained increasing importance, both as an outcome and as a possible mediator in psychotherapy research. Still, there is a lack of adequate measures capturing relational aspects in multi-person settings. We present a new measure to assess relevant dimensions of quality of relationships and collective efficacy regarding interpersonal interactions in diverse personal and professional social systems including couple partnerships, families, and working teams: the EVOS. Theoretical dimensions were derived from theories of systemic family therapy and organizational psychology. The study was divided in three parts: In Study 1 (N = 537), a short 9-item scale with two interrelated factors was constructed on the basis of exploratory factor analysis. Quality of relationship and collective efficacy emerged as the most relevant dimensions for the quality of social systems. Study 2 (N = 558) confirmed the measurement model using confirmatory factor analysis and established validity with measures of family functioning, life satisfaction, and working team efficacy. Measurement invariance was assessed to ensure that EVOS captures the same latent construct in all social contexts. In Study 3 (N = 317), an English language adaptation was developed, which again confirmed the original measurement model. The EVOS is a theory-based, economic, reliable, and valid measure that covers important aspects of social relationships, applicable for different social systems. It is the first instrument of its kind and an important addition to existing measures of social relationships and related outcome measures in therapeutic and other counseling settings involving multiple persons. PMID:26200357
Hurst, Zachary M.; McCleery, Robert A.; Collier, Bret A.; Fletcher, Robert J.; Silvy, Nova J.; Taylor, Peter J.; Monadjem, Ara
2013-01-01
Across the planet, high-intensity farming has transformed native vegetation into monocultures, decreasing biodiversity on a landscape scale. Yet landscape-scale changes to biodiversity and community structure often emerge from processes operating at local scales. One common process that can explain changes in biodiversity and community structure is the creation of abrupt habitat edges, which, in turn, generate edge effects. Such effects, while incredibly common, can be highly variable across space and time; however, we currently lack a general analytical framework that can adequately capture such spatio-temporal variability. We extend previous approaches for estimating edge effects to a non-linear mixed modeling framework that captures such spatio-temporal heterogeneity and apply it to understand how agricultural land-uses alter wildlife communities. We trapped small mammals along a conservation-agriculture land-use interface extending 375 m into sugarcane plantations and conservation land-uses at three sites during dry and wet seasons in Swaziland, Africa. Sugarcane plantations had significant reductions in species richness and heterogeneity, and showed an increase in community similarity, suggesting a more homogenized small mammal community. Furthermore, our modeling framework identified strong variation in edge effects on communities across sites and seasons. Using small mammals as an indicator, intensive agricultural practices appear to create high-density communities of generalist species while isolating interior species in less than 225 m. These results illustrate how agricultural land-use can reduce diversity across the landscape and that effects can be masked or magnified, depending on local conditions. Taken together, our results emphasize the need to create or retain natural habitat features in agricultural mosaics. PMID:24040269
NASA Astrophysics Data System (ADS)
Singh, Vipul
2011-12-01
The green building movement has been an effective catalyst in reducing the energy demands of buildings, and a large number of 'green' certified buildings have been in operation for several years. Determining whether these buildings are actually performing as intended and, if not, identifying the specific causes of the discrepancy falls into the general realm of post-occupancy evaluation (POE). POE involves evaluating building performance in terms of energy-use, indoor environmental quality, acoustics and water-use; the first aspect, i.e. energy-use, is addressed in this thesis. Normally, a full year or more of energy-use and weather data is required to determine the actual post-occupancy energy-use of buildings. In many cases, either measured building performance data is not available or the time and cost implications make it infeasible to invest in monitoring the building for a whole year. Knowledge of the minimum amount of measured data needed to accurately capture the behavior of the building over the entire year can therefore be immensely beneficial. This research identifies simple modeling techniques to determine the best time of the year to begin in-situ monitoring of building energy-use, and the least amount of data required for generating acceptable long-term predictions. Four analysis procedures are studied. The short-term monitoring for long-term prediction (SMLP) approach and the dry-bulb temperature analysis (DBTA) approach allow the best time and duration of in-situ monitoring to be determined based only on the ambient temperature data of the location. Multivariate change-point (MCP) modeling uses simulated/monitored data to determine the best monitoring period of the year, and is also used to validate the SMLP and DBTA approaches. Hybrid inverse modeling method-1 predicts energy-use by combining a short dataset of monitored internal loads with a year of utility bills, and hybrid inverse method-2 predicts long-term building performance using utility bills only. The results show that often less than three to four months of monitored data are adequate for estimating annual building energy use, provided that the monitoring is initiated at the right time and the seasonal as well as daily variations are adequately captured by the short dataset. The predictive accuracy of the short datasets is found to be strongly influenced by how close the dataset's mean temperature is to the annual average temperature. The analysis methods studied would be very useful for energy professionals involved in POE.
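As one hedged illustration of the change-point idea used in the MCP approach (simplified here to a single-variable, three-parameter cooling model rather than the multivariate form), a change point can be located by grid search with ordinary least squares at each candidate:

    import numpy as np

    def fit_changepoint(T, E):
        """Fit E = b0 + b1 * max(T - Tcp, 0) by grid search over the
        change point Tcp, with ordinary least squares for b0 and b1."""
        T, E = np.asarray(T, float), np.asarray(E, float)
        best = None
        for tcp in np.linspace(T.min() + 1.0, T.max() - 1.0, 50):
            X = np.column_stack([np.ones_like(T), np.maximum(T - tcp, 0.0)])
            beta = np.linalg.lstsq(X, E, rcond=None)[0]
            sse = float(np.sum((E - X @ beta) ** 2))
            if best is None or sse < best[0]:
                best = (sse, tcp, beta)
        return best   # (sse, change point, [baseload, cooling slope])

    # Usage: sse, tcp, beta = fit_changepoint(daily_mean_temp, daily_kwh)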
Capturing spiral radial growth of conifers using the superellipse to model tree-ring geometric shape
Shi, Pei-Jian; Huang, Jian-Guo; Hui, Cang; Grissino-Mayer, Henri D.; Tardif, Jacques C.; Zhai, Li-Hong; Wang, Fu-Sheng; Li, Bai-Lian
2015-01-01
Tree rings are often assumed to approximate a circular shape when estimating forest productivity and carbon dynamics. However, tree rings are rarely, if ever, circular, possibly resulting in under- or over-estimation of forest productivity and carbon sequestration. Given the crucial role played by tree-ring data in assessing forest productivity and carbon storage within a context of global change, it is particularly important that mathematical models adequately render the cross-sectional area increment derived from tree rings. We modeled the geometric shape of tree rings using the superellipse equation and validated it using theoretical simulations and six actual cross sections collected from three conifers. We found that the superellipse describes the geometric shape of tree rings better than the commonly used circle. We showed that a spiral growth trend exists on the radial section over time, which might be closely related to spiral grain along the longitudinal axis. The superellipse generally had higher accuracy than the circle in predicting the basal area increment, resulting in an improved estimate of the basal area. The superellipse may thus allow better assessment of forest productivity and carbon storage in terrestrial forest ecosystems. PMID:26528316
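A minimal sketch of the superellipse geometry: the closed-form area of |x/a|^n + |y/b|^n = 1 follows from Gamma functions, so ring-to-ring basal area increments can be computed directly (the radii and exponent below are illustrative, not the paper's fitted values):

    import numpy as np
    from scipy.special import gamma

    def superellipse_area(a, b, n):
        """Area enclosed by |x/a|**n + |y/b|**n = 1; n = 2 recovers
        the ellipse (pi*a*b), n > 2 squares the shape."""
        return 4.0 * a * b * gamma(1.0 + 1.0 / n) ** 2 / gamma(1.0 + 2.0 / n)

    # Basal area increment between two consecutive rings (illustrative
    # semi-axes in cm and a shared exponent n):
    print(superellipse_area(2.1, 1.9, 1.8) - superellipse_area(1.6, 1.5, 1.8))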
Li, Si; Wang, Chengyuan; Nithiarasu, Perumal
2018-04-01
Quasi-one-dimensional microtubules (MTs) in cells enjoy high axial rigidity but large transverse flexibility due to the inter-protofilament (PF) sliding. This study aims to explore the structure-property relation for MTs and examine the relevance of the beam theories to their unique features. A molecular structural mechanics (MSM) model was used to identify the origin of the inter-PF sliding and its role in bending and vibration of MTs. The beam models were then fitted to the MSM to reveal how they cope with the distinct mechanical responses induced by the inter-PF sliding. Clear evidence showed that the inter-PF sliding is due to the soft inter-PF bonds and leads to the length-dependent bending stiffness. The Euler beam theory is found to adequately describe MT deformation when the inter-PF sliding is largely prohibited. Nevertheless, neither shear deformation nor the nonlocal effect considered in the 'more accurate' beam theories can fully capture the effect of the inter-PF sliding. This reflects the distinct deformation mechanisms between an MT and its equivalent continuous body.
NASA Astrophysics Data System (ADS)
Samanta, Gaurab; Beris, Antony; Handler, Robert; Housiadas, Kostas
2009-03-01
Karhunen-Loeve (KL) analysis of DNS data of viscoelastic turbulent channel flows helps reveal more information on the time-dependent dynamics of the viscoelastic modification of turbulence [Samanta et al., J. Turbulence (in press), 2008]. A selected set of KL modes can be used for data-reduction modeling of these flows. However, it is pertinent that verification be done against established DNS results. For this purpose, we compared velocity and conformation statistics and probability density functions (PDFs) of relevant quantities obtained from DNS with those from fields reconstructed using selected KL modes and time-dependent coefficients. While the velocity statistics show good agreement between DNS and KL reconstructions even with just hundreds of KL modes, tens of thousands of KL modes are required to adequately capture the trace of the polymer conformation resulting from DNS. New modifications to the KL method have therefore been attempted to account for the differences in conformation statistics. The applicability and impact of these new modified KL methods will be discussed from the perspective of data-reduction modeling.
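For orientation, KL (POD) modes and their time-dependent coefficients can be obtained from a snapshot matrix via the SVD; this generic sketch is not the authors' implementation and assumes snapshots stored column-wise:

    import numpy as np

    def kl_modes(snapshots, k):
        """KL (POD) decomposition of a snapshot matrix with rows =
        spatial points and columns = time snapshots; returns the mean
        field, the first k spatial modes, and their time coefficients."""
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        return mean, U[:, :k], s[:k, None] * Vt[:k, :]

    # Rank-k reconstruction for comparison against the original DNS field:
    # mean, modes, coeffs = kl_modes(X, k)
    # X_k = mean + modes @ coeffs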
Computer simulations of liquid crystals: Defects, deformations and dynamics
NASA Astrophysics Data System (ADS)
Billeter, Jeffrey Lee
1999-11-01
Computer simulations play an increasingly important role in investigating fundamental issues in the physics of liquid crystals. Presented here are the results of three projects which utilize the unique power of simulations to probe questions which neither theory nor experiment can adequately answer. Throughout, we use the (generalized) Gay-Berne model, a widely-used phenomenological potential which captures the essential features of the anisotropic mesogen shapes and interactions. First, we used a Molecular Dynamics simulation with 65536 Gay-Berne particles to study the behaviors of topological defects in a quench from the isotropic to the nematic phase. Twist disclination loops were the dominant defects, and we saw evidence for dynamical scaling. We observed the loops separating, combining and collapsing, and we also observed numerous non-singular type-1 lines which appeared to be intimately involved with many of the loop processes. Second, we used a Molecular Dynamics simulation of a sphere embedded in a system of 2048 Gay-Berne particles to study the effects of radial anchoring of the molecules at the sphere's surface. A saturn ring defect configuration was observed, and the ring caused a driven sphere (modelling the falling ball experiment) to experience an increased resistance as it moved through the nematic. Deviations from a linear relationship between the driving force and the terminal speed are attributed to distortions of the saturn ring which we observed. The existence of the saturn ring confirms theoretical predictions for small spheres. Finally, we constructed a model for wedge-shaped molecules and used a linear response approach in a Monte Carlo simulation to investigate the flexoelectric behavior of a system of 256 such wedges. Novel potential models as well as novel analytical and visualization techniques were developed for these projects. Once again, the emphasis throughout was to investigate questions which simulations alone can adequately answer.
Modelling bidirectional fluxes of methanol and acetaldehyde with the FORCAsT canopy exchange model
Ashworth, Kirsti; Chung, Serena H.; McKinney, Karena A.; ...
2016-12-15
Here, the FORCAsT canopy exchange model was used to investigate the underlying mechanisms governing foliage emissions of methanol and acetaldehyde, two short-chain oxygenated volatile organic compounds ubiquitous in the troposphere and known to have strong biogenic sources, at a northern mid-latitude forest site. The explicit representation of the vegetation canopy within the model allowed us to test the hypothesis that stomatal conductance regulates emissions of these compounds to an extent that its influence is observable at the ecosystem scale, a process not currently considered in regional- or global-scale atmospheric chemistry models. We found that FORCAsT could only reproduce the magnitude and diurnal profiles of methanol and acetaldehyde fluxes measured at the top of the forest canopy at Harvard Forest if light-dependent emissions were introduced to the model. With the inclusion of such emissions, FORCAsT was able to successfully simulate the observed bidirectional exchange of methanol and acetaldehyde. Although we found evidence that stomatal conductance influences methanol fluxes and concentrations at scales beyond the leaf level, particularly at dawn and dusk, we were able to adequately capture ecosystem exchange without adding stomatal control to the standard parameterisations of foliage emissions, suggesting that ecosystem fluxes can be well enough represented by the emission models currently used.
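Light-dependent emission algorithms in canopy models are typically Guenther-type activity factors; as a hedged sketch (the exact formulation and coefficients used in FORCAsT are not given in the abstract, and the temperature form below is an assumption for illustration), emission can be written as a basal rate scaled by light and temperature responses:

    import numpy as np

    def gamma_light(ppfd, alpha=0.0027, cl1=1.066):
        """Guenther-type light activity factor; ppfd is the
        photosynthetic photon flux density in umol m-2 s-1."""
        return alpha * cl1 * ppfd / np.sqrt(1.0 + alpha ** 2 * ppfd ** 2)

    def gamma_temp(T, Ts=303.0, beta=0.09):
        """Simple exponential temperature activity factor about Ts (K);
        the functional form and beta are illustrative assumptions."""
        return np.exp(beta * (T - Ts))

    # Emission = basal rate * light response * temperature response:
    print(30.0 * gamma_light(1000.0) * gamma_temp(298.0))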
Eeftens, Marloes; Meier, Reto; Schindler, Christian; Aguilera, Inmaculada; Phuleria, Harish; Ineichen, Alex; Davey, Mark; Ducret-Stich, Regina; Keidel, Dirk; Probst-Hensch, Nicole; Künzli, Nino; Tsai, Ming-Yi
2016-04-18
Land Use Regression (LUR) is a popular method to explain and predict spatial contrasts in air pollution concentrations, but LUR models for ultrafine particles, such as particle number concentration (PNC), are especially scarce. Moreover, no models have been previously presented for the lung deposited surface area (LDSA) of ultrafine particles. The additional value of ultrafine particle metrics has not been well investigated due to lack of exposure measurements and models. Air pollution measurements were performed in 2011 and 2012 in the eight areas of the Swiss SAPALDIA study at up to 40 sites per area for NO2 and at 20 sites in four areas for markers of particulate air pollution. We developed multi-area LUR models for biannual average concentrations of PM2.5, PM2.5 absorbance, PM10, PMcoarse, PNC and LDSA, as well as alpine, non-alpine and study-area-specific models for NO2, using predictor variables which were available at a national level. Models were validated using leave-one-out cross-validation, as well as independent external validation with routine monitoring data. Model explained variance (R²) was moderate for the various PM mass fractions PM2.5 (0.57), PM10 (0.63) and PMcoarse (0.45), and was high for PM2.5 absorbance (0.81), PNC (0.87) and LDSA (0.91). Study-area-specific LUR models for NO2 (R² range 0.52-0.89) outperformed combined-area alpine (R² = 0.53) and non-alpine (R² = 0.65) models in terms of both cross-validation and independent external validation, and were better able to account for between-area variability. Predictor variables related to traffic and national dispersion model estimates were important predictors. LUR models for all pollutants captured spatial variability of long-term average concentrations, performed adequately in validation, and could be successfully applied to the SAPALDIA cohort. Dispersion model predictions or area indicators served well to capture the between-area variance. For NO2, applying study-area-specific models was preferable to applying combined-area alpine/non-alpine models. Correlations between pollutants were higher in the model predictions than in the measurements, so it will remain challenging to disentangle their health effects.
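As an illustration of the leave-one-out validation step, the hedged sketch below refits a linear LUR-style model with each monitoring site held out in turn and reports the cross-validated R². The predictor matrix and concentrations are synthetic placeholders, not SAPALDIA data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

# Minimal sketch of leave-one-out cross-validation for a LUR model:
# hold out each site, refit, and compare held-out predictions with
# measurements. X (site predictors, e.g. traffic and dispersion-model
# estimates) and y (biannual mean concentrations) are placeholder data.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))                 # 40 sites, 4 predictors
y = X @ np.array([1.0, 0.5, 0.0, -0.3]) + rng.normal(scale=0.5, size=40)

preds = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train], y[train])
    preds[test] = model.predict(X[test])

ss_res = np.sum((y - preds) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("LOOCV R^2:", 1.0 - ss_res / ss_tot)
```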
A Probabilistic Model for Sediment Entrainment: the Role of Bed Irregularity
NASA Astrophysics Data System (ADS)
Thanos Papanicolaou, A. N.
2017-04-01
A generalized probabilistic model is developed in this study to predict sediment entrainment under the incipient motion, rolling, and pickup modes. A novelty of the proposed model is that it incorporates in its formulation the probability density function of the bed shear stress, instead of the near-bed velocity fluctuations, to account for the effects of both flow turbulence and bed surface irregularity on sediment entrainment. The proposed model incorporates in its formulation the collective effects of three parameters describing bed surface irregularity, namely the relative roughness, the volumetric fraction, and the relative position of sediment particles within the active layer. Another key feature of the model is that it provides a criterion for estimating the lift and drag coefficients jointly, based on the recognition that lift and drag forces acting on sediment particles are interdependent and vary with particle protrusion and packing density. The model was validated using laboratory data for both fine and coarse sediment and was compared with previously published models. The study results show that for the fine sediment data, where the sediment particles have a more uniform gradation and relative roughness is not a factor, all the examined models perform adequately. The proposed model was particularly suited to the coarse sediment data, where the increased bed irregularity was captured by the new parameters introduced in the model formulation. As a result, the proposed model yielded smaller prediction errors and physically acceptable values for the lift coefficient compared with the other models in the case of the coarse sediment data.
Predicting Prostate Cancer Progression At Time of Diagnosis
2016-09-01
greater or less than 50% pattern 4—is again arbitrary and may not capture biology adequately [see an unrelated paper we published during the study period...predictor of major upgrading (p=0.02 and p=0.18, respectively). On multinomial analysis, which we feel best reflects the spectrum of biology we are...predictive of outcomes, and that unique insights into tumor biology may be gleaned by analysis of both types of biomarkers. Task 6 As noted
2017-01-01
Several reactions, known from other amine systems for CO2 capture, have been proposed for Lewatit R VP OC 1065. The aim of this molecular modeling study is to elucidate the CO2 capture process: the physisorption prior to CO2 capture and the subsequent reactions. Molecular modeling shows that the resin has a structure with benzylamine groups at alternating positions in close vicinity of each other. Based on this structure, the preferred adsorption mode of CO2 and H2O was established. Next, using standard Density Functional Theory, two catalytic reactions responsible for the actual CO2 capture were identified: direct amine and amine-H2O catalyzed formation of carbamic acid. The latter is a new type of catalysis. Other reactions are unlikely. Quantitative verification of the molecular modeling results against known experimental CO2 adsorption isotherms, applying a dual-site Langmuir adsorption isotherm model, further supports all results of this molecular modeling study. PMID:29142339
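Only the dual-site Langmuir form comes from the abstract; as a sketch of that verification step, the code below fits the standard two-site isotherm q(P) = q1·b1·P/(1 + b1·P) + q2·b2·P/(1 + b2·P) to synthetic loading data. All parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Dual-site Langmuir isotherm named in the abstract: two independent
# adsorption sites, each with its own capacity q_i and affinity b_i.
# The data below are synthetic placeholders, not the experimental
# CO2 isotherms referred to in the study.
def dual_site_langmuir(p, q1, b1, q2, b2):
    return q1 * b1 * p / (1 + b1 * p) + q2 * b2 * p / (1 + b2 * p)

p = np.linspace(0.01, 1.0, 20)                       # CO2 pressure (bar)
q_obs = dual_site_langmuir(p, 1.8, 30.0, 0.9, 2.0)   # synthetic loadings
q_obs += np.random.default_rng(1).normal(scale=0.02, size=p.size)

popt, _ = curve_fit(dual_site_langmuir, p, q_obs,
                    p0=[1.0, 10.0, 1.0, 1.0], maxfev=10000)
print("fitted q1, b1, q2, b2:", popt)
```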
Use of models to map potential capture of surface water
Leake, Stanley A.
2006-01-01
The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
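The mapping procedure lends itself to a short pseudocode-style outline. In the sketch below, run_model is a hypothetical wrapper around a groundwater model run (not a real MODFLOW/MODPATH API); the capture-fraction arithmetic follows the steps described above.

```python
import numpy as np

# Sketch of the capture-mapping procedure described above. run_model()
# is a hypothetical wrapper around a groundwater model that returns
# total boundary inflow and outflow rates; it is not a real API.
def run_model(well_cell=None, rate=0.0, transient=False, years=100):
    raise NotImplementedError("wrap your groundwater model here")

def capture_fraction_map(cells, q_well, years):
    # Step 1: steady-state baseline without withdrawals.
    in0, out0 = run_model()
    frac = {}
    for cell in cells:
        # Step 2: transient run with a single well in this cell.
        in1, out1 = run_model(well_cell=cell, rate=q_well,
                              transient=True, years=years)
        # Step 3: capture = increased inflow plus decreased outflow,
        # expressed as a fraction of the withdrawal rate.
        frac[cell] = ((in1 - in0) + (out0 - out1)) / q_well
    return frac  # contour these values to map capture at `years`
```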
A novel method to estimate safety factor of capture by a fetal micropacemaker.
Vest, Adriana Nicholson; Zhou, Li; Bar-Cohen, Yaniv; Loeb, Gerald Eli
2016-07-01
We have developed a rechargeable fetal micropacemaker in order to treat severe fetal bradycardia with comorbid hydrops fetalis, a life-threatening condition in pre-term non-viable fetuses for which there are no effective treatment options. The small size and minimally invasive form factor of our design limit the volume available for circuitry and a power source. The device employs a fixed-rate and fixed-amplitude relaxation oscillator and a tiny, rechargeable lithium ion power cell. For both research and clinical applications, it is valuable to monitor the electrode-myocardium interface in order to determine that adequate pacemaker output is being provided. This is typically accomplished by observing the minimal stimulus strength that achieves threshold for pacing capture. The output of our simple micropacemaker cannot be programmatically altered to determine this minimal capture threshold, but a safety factor can be inferred by determining the refractory period for ventricular capture at a given stimulus strength. This is done by measuring the minimal timing between naturally occurring QRS complexes and pacing stimuli that successfully generate a premature ventricular contraction. The method was tested in a pilot study in four fetal sheep and the data demonstrate that a relative measure of threshold is obtainable. This method provides valuable real-time information about the electrode-tissue interface.
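The refractory-period measurement can be illustrated with a small calculation: for each stimulus, compute the coupling interval since the preceding natural QRS, then take the shortest interval that still produced capture. The sketch below uses invented timings, not data from the sheep study.

```python
import numpy as np

# Sketch of the proposed measurement: for each pacing stimulus, compute
# its coupling interval (time since the preceding natural QRS) and flag
# whether it captured the ventricle (produced a premature contraction).
# The shortest capturing interval estimates the refractory period at the
# device's fixed output, from which a safety factor can be inferred.
# The arrays below are illustrative placeholders.
qrs_times = np.array([0.00, 0.82, 1.65, 2.50, 3.31])    # s
stim_times = np.array([0.25, 1.00, 1.90, 2.72, 3.60])   # s
captured = np.array([True, False, True, False, True])

intervals = np.array([t - qrs_times[qrs_times < t].max()
                      for t in stim_times])
refractory_est = intervals[captured].min()
print(f"estimated refractory period: {refractory_est*1000:.0f} ms")
```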
A novel method to estimate safety factor of capture by a fetal micropacemaker
Vest, Adriana Nicholson; Zhou, Li; Bar-Cohen, Yaniv; Loeb, Gerald Eli
2016-01-01
We have developed a rechargeable fetal micropacemaker in order to treat severe fetal bradycardia with comorbid hydrops fetalis, a life-threatening condition in pre-term non-viable fetuses for which there are no effective treatment options. The small size and minimally invasive form factor of our design limit the volume available for circuitry and a power source. The device employs a fixed-rate and fixed-amplitude relaxation oscillator and a tiny, rechargeable lithium ion power cell. For both research and clinical applications, it is valuable to monitor the electrode-myocardium interface in order to determine that adequate pacemaker output is being provided. This is typically accomplished by observing the minimal stimulus strength that achieves threshold for pacing capture. The output of our simple micropacemaker cannot be programmatically altered to determine this minimal capture threshold, but a safety factor can be inferred by determining the refractory period for ventricular capture at a given stimulus strength. This is done by measuring the minimal timing between naturally occurring QRS complexes and pacing stimuli that successfully generate a premature ventricular contraction. The method was tested in a pilot study in 4 fetal sheep and the data demonstrate that a relative measure of threshold is obtainable. This method provides valuable real-time information about the electrode-tissue interface. PMID:27340134
Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas
2012-01-01
1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
Image-based modelling of skeletal muscle oxygenation
Clough, G. F.
2017-01-01
The supply of oxygen in sufficient quantity is vital for the correct functioning of all organs in the human body, in particular for skeletal muscle during exercise. Disease is often associated with an inhibition of microvascular supply capability and is thought to relate to changes in the structure of blood vessel networks. Different methods exist to investigate the influence of microvascular structure on tissue oxygenation, spanning a range of application areas: biological in vivo and in vitro experiments, imaging, and mathematical modelling. Ideally, all of these methods should be combined within the same framework in order to fully understand the processes involved. This review discusses the mathematical models of skeletal muscle oxygenation currently available that are based upon images taken of the muscle microvasculature in vivo and ex vivo. Imaging systems suitable for capturing the blood vessel networks are discussed and the respective contrast methods presented. The review further examines the association between anatomical characteristics in health and disease. With this review we give the reader a tool for understanding and establishing the workflow of developing an image-based model of skeletal muscle oxygenation. Finally, we give an outlook on the improvements in measurement and imaging techniques needed to adequately investigate the microvascular capability for oxygen exchange. PMID:28202595
Effect of Turbulence Modeling on an Excited Jet
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Hixon, Ray
2010-01-01
The flow dynamics in a high-speed jet are dominated by unsteady turbulent flow structures in the plume. Jet excitation seeks to control these flow structures through the natural instabilities present in the initial shear layer of the jet. Understanding and optimizing the excitation input, for jet noise reduction or plume mixing enhancement, requires many trials that may be done experimentally or, at a significant cost savings, computationally. Numerical simulations, which model various parts of the unsteady dynamics to reduce the computational expense of the simulation, must adequately capture the unsteady flow dynamics in the excited jet if the results are to be used. Four CFD methods are considered for use in an excited jet problem, including two turbulence models with an Unsteady Reynolds-Averaged Navier-Stokes (URANS) solver, one Large Eddy Simulation (LES) solver, and one URANS/LES hybrid method. Each method is used to simulate a simplified excited jet and the results are evaluated based on the flow data, computation time, and numerical stability. The knowledge gained about the effect of turbulence modeling and CFD methods from these basic simulations will guide and assist future three-dimensional (3-D) simulations that will be used to understand and optimize a realistic excited jet for a particular application.
Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C
2013-01-01
Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
A homogenized localizing gradient damage model with micro inertia effect
NASA Astrophysics Data System (ADS)
Wang, Zhao; Poh, Leong Hien
2018-07-01
The conventional gradient enhancement regularizes structural responses during material failure. However, it induces a spurious damage growth phenomenon, which is shown here to persist in dynamics. Similar issues were reported with the integral averaging approach. Consequently, the conventional nonlocal enhancement cannot adequately describe the dynamic fracture of quasi-brittle materials, particularly in the high strain rate regime, where a diffused damage profile precludes the development of closely spaced macrocracks. To this end, a homogenization theory is proposed to translate the micro processes onto the macro scale. Starting with simple elementary models at the micro scale to describe the fracture mechanisms, an additional kinematic field is introduced to capture the variations in deformation and velocity within a unit cell. An energetic equivalence between micro and macro is next imposed to ensure consistency at the two scales. The ensuing homogenized microforce balance resembles closely the conventional gradient expression, albeit with an interaction domain that decreases with damage, complemented by a micro inertia effect. Considering a direct single pressure bar example, the homogenized model is shown to resolve the non-physical responses obtained with conventional nonlocal enhancement. The predictive capability of the homogenized model is furthermore demonstrated by considering the spall tests of concrete, with good predictions on failure characteristics such as fragmentation profiles and dynamic tensile strengths, at three different loading rates.
Modeling Household Water Consumption in a Hydro-Institutional System - The Case of Jordan
NASA Astrophysics Data System (ADS)
Klassert, C. J. A.; Gawel, E.; Klauer, B.; Sigel, K.
2014-12-01
Jordan faces an archetypal combination of high water scarcity, with a per capita water availability of around 150 CM per year, significantly below the absolute scarcity threshold of 500 CM, and strong population growth, especially due to the Syrian refugee crisis. This poses a severe challenge to the already strained institutions in the Jordanian water sector. The Stanford-led G8 Belmont Forum project "Integrated Analysis of Freshwater Resources Sustainability in Jordan" aims at analyzing the potential role of water sector institutions in the pursuit of a sustainable freshwater system performance. In order to do so, the project develops a coupled hydrological and agent-based model, allowing for the exploration of physical as well as socio-economic and institutional scenarios for Jordan's water sector. The part of this integrated model in focus here is the representation of household behavior in Jordan's densely populated capital Amman. Amman's piped water supply is highly intermittent, which also affects its potability. Therefore, Amman's citizens rely on various decentralized modes of supply, depending on their socio-economic characteristics. These include water storage in roof-top and basement tanks, private tanker supply, and the purchase of bottled water. Capturing this combination of centralized and decentralized supply modes is important for an adequate representation of water consumption behavior: Firstly, it will affect the impacts of supply-side and demand-side policies, such as reductions of non-revenue water (including illegal abstractions), the introduction of continuous supply, support for storage enhancements, and water tariff reforms. Secondly, it is also necessary to differentiate the impacts of any policy on the different socio-economic groups in Amman. In order to capture the above aspects of water supply, our model is based on the tiered supply curve approach, developed by Srinivasan et al. in 2011 to model a similar situation in Chennai, India. To tailor our model to the situation in Amman, we rely on sectoral data, existing literature analyses and expert discussions with Jordanian water sector representatives. Our modeling approach allows us to directly compare policies affecting both centralized and decentralized elements of the system within a common framework.
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zimmermann, Niklaus E.; Kaplan, Jed O.; Poulter, Benjamin
2016-03-01
Simulations of the spatiotemporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate. Hydrologic inundation models, such as the TOPography-based hydrological model (TOPMODEL), are based on a fundamental parameter known as the compound topographic index (CTI) and offer a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl (Lund-Potsdam-Jena Wald Schnee und Landschaft version) Dynamic Global Vegetation Model (DGVM), and quantifies uncertainties by comparing three digital elevation model (DEM) products (HYDRO1k, GMTED, and HydroSHEDS) of different spatial resolution and accuracy in terms of simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland data set can help to successfully delineate the seasonal and interannual variation of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatiotemporal dynamics of wetlands among the three DEM products. The estimate of global wetland potential/maximum is ~10.3 Mkm² (10⁶ km²), with a mean annual maximum of ~5.17 Mkm² for 1980-2010. When integrated with a wetland methane emission submodule, the uncertainty in global annual CH4 emissions arising from the topography inputs is estimated to be 29.0 Tg yr⁻¹. This study demonstrates the feasibility of TOPMODEL for capturing the spatial heterogeneity of inundation at large scales and highlights the significance of correcting maximum wetland extent to improve modeling of interannual variations in wetland area. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
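As a minimal illustration of the CTI mechanism in TOPMODEL-type schemes, the sketch below marks a cell as inundated when its CTI exceeds a threshold set by the grid-mean water deficit. The parameter values and the synthetic CTI field are assumptions for illustration, not the LPJ-wsl calibration described above.

```python
import numpy as np

# Minimal TOPMODEL-style inundation sketch. In the standard
# formulation, the local water deficit of a cell is
#   S_i = S_mean - (1/f) * (CTI_i - mean_CTI),
# and the cell is saturated (inundated) when S_i <= 0, i.e. when
# CTI_i >= mean_CTI + f * S_mean. Values below are illustrative only.
def inundated_fraction(cti, s_mean, f):
    """cti: sub-grid CTI values; s_mean: grid-mean water deficit (m);
    f: decay parameter of transmissivity with depth (1/m)."""
    threshold = cti.mean() + f * s_mean
    return np.mean(cti >= threshold)

cti = np.random.default_rng(2).gamma(shape=4.0, scale=2.0, size=10000)
for s in (0.5, 0.2, 0.05):   # wetter conditions -> larger wet fraction
    print(s, inundated_fraction(cti, s_mean=s, f=2.0))
```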
Proposed roadmap for overcoming legal and financial obstacles to carbon capture and sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, Wendy; Cohen, Leah; Kostakidis-Lianos, Leah
Many existing proposals either lack sufficient concreteness to make carbon capture and geological sequestration (CCGS) operational or fail to focus on a comprehensive, long-term framework for its regulation, thus failing to account adequately for the urgency of the issue, the need to develop immediate experience with large-scale demonstration projects, or the financial and other incentives required to launch early demonstration projects. We aim to help fill this void by proposing a roadmap to commercial deployment of CCGS in the United States. This roadmap focuses on the legal and financial incentives necessary for rapid demonstration of geological sequestration in the absence of national restrictions on CO2 emissions. It weaves together existing federal programs and financing opportunities into a set of recommendations for achieving commercial viability of geological sequestration.
Cowdrey, Felicity A; Park, Rebecca J
2011-12-01
A process account of eating disorders (EDs) (Park et al., in press-a) proposes that preoccupation with ruminative themes of eating, weight and shape may be important in ED maintenance. No self-report measure exists to capture disorder-specific rumination in EDs. 275 healthy participants rated rumination items and completed self-report measures of ED symptoms, depression and anxiety. Principal component analysis revealed two factors, reflection and brooding. The final nine-item Ruminative Response Scale for Eating Disorders (RRS-ED) demonstrated good convergent and discriminant validity and test-retest reliability. The psychometric properties were replicated in an anorexia nervosa sample. The findings support the notion that rumination in EDs is distinct from rumination in depression and is not adequately captured by existing measures.
77 FR 53059 - Risk-Based Capital Guidelines: Market Risk
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
The Office of the Comptroller of the Currency (OCC), Board of Governors of the Federal Reserve System (Board), and Federal Deposit Insurance Corporation (FDIC) are revising their market risk capital rules to better capture positions for which the market risk capital rules are appropriate; reduce procyclicality; enhance the rules' sensitivity to risks that are not adequately captured under current methodologies; and increase transparency through enhanced disclosures. The final rule does not include all of the methodologies adopted by the Basel Committee on Banking Supervision for calculating the standardized specific risk capital requirements for debt and securitization positions due to their reliance on credit ratings, which is impermissible under the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. Instead, the final rule includes alternative methodologies for calculating standardized specific risk capital requirements for debt and securitization positions.
On modeling weak sinks in MODPATH
Abrams, Daniel B.; Haitjema, Henk; Kauffman, Leon J.
2012-01-01
Regional groundwater flow systems often contain both strong sinks and weak sinks. A strong sink extracts water from the entire aquifer depth, while a weak sink lets some water pass underneath or over the actual sink. The numerical groundwater flow model MODFLOW may allow a sink cell to act as a strong or weak sink, hence extracting all water that enters the cell or allowing some of that water to pass. A physical strong sink can be modeled by either a strong sink cell or a weak sink cell, with the latter generally occurring in low resolution models. Likewise, a physical weak sink may also be represented by either type of sink cell. The representation of weak sinks in the particle tracing code MODPATH is more equivocal than in MODFLOW. With the appropriate parameterization of MODPATH, particle traces and their associated travel times to weak sink streams can be modeled with adequate accuracy, even in single layer models. Weak sink well cells, on the other hand, require special measures as proposed in the literature to generate correct particle traces and individual travel times and hence capture zones. We found that the transit time distributions for well water generally do not require special measures provided aquifer properties are locally homogeneous and the well draws water from the entire aquifer depth, an important observation for determining the response of a well to non-point contaminant inputs.
Investigating low flow process controls, through complex modelling, in a UK chalk catchment
NASA Astrophysics Data System (ADS)
Lubega Musuuza, Jude; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Woods, Ross; Howden, Nicholas
2017-04-01
The typical streamflow response of Chalk catchments is dominated by groundwater contributions due to the high degree of groundwater recharge through preferential flow pathways. The groundwater store attenuates the precipitation signal, which causes a delay between the corresponding high and low extremes in the precipitation and the stream flow signals. Streamflow responses can therefore be quite out of phase with the precipitation input to a Chalk catchment. Approaches to characterising such catchment systems, including modelling, therefore clearly need to reproduce these percolation and groundwater-dominated flow pathways. The simulation of low flow conditions for chalk catchments in numerical models is especially difficult due to the complex interactions between various processes that may not be adequately represented or resolved in the models. Periods of low stream flows are particularly important due to competing water uses in the summer, including agriculture and water supply. In this study we apply and evaluate the physically-based Penn State Integrated Hydrologic Model (PIHM) on the River Kennet, a sub-catchment of the Thames Basin, to demonstrate how the simulations of a chalk catchment are improved by a physically-based system representation. We also use an ensemble of simulations to investigate the sensitivity of various hydrologic signatures (relevant to low flows and droughts) to the different parameters in the model, thereby inferring the levels of control exerted by the processes that the parameters represent.
Towards a physically-based multi-scale ecohydrological simulator for semi-arid regions
NASA Astrophysics Data System (ADS)
Caviedes-Voullième, Daniel; Josefik, Zoltan; Hinz, Christoph
2017-04-01
The use of numerical models as tools for describing and understanding complex ecohydrological systems has made it possible to test hypotheses and propose fundamental, process-based explanations of the behaviour of the system as a whole as well as of its internal dynamics. Reaction-diffusion equations have been used to describe and generate organized patterns such as bands, spots, and labyrinths using simple feedback mechanisms and boundary conditions. Alternatively, pattern-matching cellular automaton models have been used to generate vegetation self-organization in arid and semi-arid regions, also using simple descriptions of surface hydrological processes. A key question is: how much physical realism is needed in order to adequately capture the pattern formation processes in semi-arid regions while reliably representing the water balance dynamics at the relevant time scales? In fact, redistribution of water by surface runoff at the hillslope scale occurs at a temporal resolution of minutes, while vegetation development requires much lower temporal resolution and longer time spans. This generates a fundamental spatio-temporal multi-scale problem to be solved, for which high-resolution rainfall and surface topography are required. Accordingly, the objective of this contribution is to provide proof-of-concept that the governing processes can be described numerically at those multiple scales. The requirements for simulating ecohydrological processes and pattern formation with increased physical realism are, amongst others: i. high-resolution rainfall that adequately captures the triggers of growth, as the vegetation dynamics of arid regions respond as pulsed systems; ii. complex, natural topography in order to accurately model drainage patterns, as surface water redistribution is highly sensitive to topographic features; iii. microtopography and hydraulic roughness, as small-scale variations do impact large-scale hillslope behaviour; iv. moisture-dependent infiltration, as the temporal dynamics of infiltration affect water storage under vegetation and in bare soil. Despite the volume of research in this field, fundamental limitations still exist in the models regarding the aforementioned issues. Topography and hydrodynamics have been strongly simplified. Infiltration has been modelled as dependent on depth but independent of soil moisture. Temporal rainfall variability has only been addressed for seasonal rain. Spatial heterogeneity of the topography, as well as of roughness and infiltration properties, has not been fully and explicitly represented. We hypothesize that physical processes must be robustly modelled and the drivers of complexity must be present at as high a resolution as possible in order to provide the necessary realism to improve transient simulations, perhaps leading the way to virtual laboratories and, arguably, predictive tools. This work provides a first approach to a model with explicit hydrological processes represented by physically-based hydrodynamic models, coupled with well-accepted vegetation models. The model aims to enable new possibilities relating to spatiotemporal variability, arbitrary topography and representation of spatial heterogeneity, including sub-daily (in fact, arbitrary) temporal variability of rain as the main forcing of the model, explicit representation of infiltration processes, and various feedback mechanisms between the hydrodynamics and the vegetation.
Preliminary testing strongly suggests that the model is viable, has the potential to produce new information on the internal dynamics of the system, and allows us to successfully aggregate many of the sources of complexity. Initial benchmarking of the model also reveals strengths to be exploited, thus providing an interesting research outlook, as well as weaknesses to be addressed in the immediate future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
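The confidence statement can be illustrated with a short ensemble calculation: for each candidate flow rate, take a one-sided 95% bound (the 5th percentile) of the simulated efficiency distribution and find the smallest rate at which it reaches 90%. Everything in the sketch below, including the toy efficiency model, is a synthetic placeholder, not the 1-MW system model.

```python
import numpy as np

# Sketch of the confidence calculation described above: for each
# candidate gas flow rate, an ensemble of model runs yields a
# distribution of capture efficiencies; the design requirement is met
# when the 5th percentile (one-sided 95% confidence) reaches 90%.
rng = np.random.default_rng(3)
flow_rates = np.linspace(0.5, 2.0, 16)          # kg/s, candidate rates

def ensemble_efficiency(q, n=500):
    # Placeholder model: efficiency rises with flow rate, with
    # parameter uncertainty represented as noise on the rate constant.
    return 1.0 - np.exp(-(2.2 + rng.normal(0, 0.25, n)) * q)

for q in flow_rates:
    eff_p5 = np.percentile(ensemble_efficiency(q), 5)
    if eff_p5 >= 0.90:
        print(f"minimum flow rate for 90% capture at 95% conf.: {q:.2f} kg/s")
        break
```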
Link, William A.; Barker, Richard J.
2005-01-01
We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
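A stripped-down version of the sampling idea is sketched below: a CJS likelihood with constant survival and capture probabilities, explored with random-walk Metropolis-Hastings on the logit scale. This is a didactic simplification with synthetic data, not the hierarchical bivariate model or the efficient candidate-generation scheme the paper describes.

```python
import numpy as np

# Minimal Metropolis-Hastings sketch for a CJS model with constant
# survival (phi) and capture (p) probabilities. Capture histories are
# synthetic placeholders; the proposal is a plain random walk on the
# logit scale, chosen for simplicity rather than efficiency.
def cjs_loglik(histories, phi, p):
    T = histories.shape[1]
    # chi[t] = Pr(never seen after occasion t | alive at t)
    chi = np.ones(T)
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in histories:
        seen = np.flatnonzero(h)
        if seen.size == 0:
            continue
        f, l = seen[0], seen[-1]
        for t in range(f + 1, l + 1):       # condition on first capture
            ll += np.log(phi) + (np.log(p) if h[t] else np.log(1 - p))
        ll += np.log(chi[l])                # never seen after occasion l
    return ll

def expit(x):
    return np.clip(1.0 / (1.0 + np.exp(-x)), 1e-9, 1 - 1e-9)

rng = np.random.default_rng(4)
histories = rng.integers(0, 2, size=(100, 6))   # placeholder histories

theta = np.array([0.0, 0.0])                    # logit(phi), logit(p)
ll = cjs_loglik(histories, *expit(theta))
samples = []
for it in range(5000):
    prop = theta + rng.normal(0, 0.2, 2)
    ll_prop = cjs_loglik(histories, *expit(prop))
    if np.log(rng.uniform()) < ll_prop - ll:    # flat prior on logit scale
        theta, ll = prop, ll_prop
    samples.append(expit(theta))
print("posterior means (phi, p):", np.mean(samples[1000:], axis=0))
```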
Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T
2005-08-01
The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.
Constant-parameter capture-recapture models
Brownie, C.; Hines, J.E.; Nichols, J.D.
1986-01-01
Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.
Urban Landscape Characterization Using Remote Sensing Data For Input into Air Quality Modeling
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William; Khan, Maudood
2005-01-01
The urban landscape is inherently complex and this complexity is not adequately captured in the air quality models that are used to assess whether urban areas are in attainment of EPA air quality standards, particularly for ground-level ozone. This inadequacy of air quality models to sufficiently respond to the heterogeneous nature of the urban landscape can impact how well these models predict ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to meteorological and air quality models, focusing on the Atlanta, Georgia metropolitan area as a case study. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the Community Multiscale Air Quality (CMAQ) modeling schemes. Use of these data has been found to better characterize low-density/suburban development as compared with the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission. This allows the State Environmental Protection agency to evaluate how these transportation plans will affect future air quality.
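The aggregation from 30 m land cover classes to coarse model cells can be sketched as per-class fractional coverage over pixel blocks. Since 4 km is not an exact multiple of 30 m, the block factor below is nominal, and the grid and class codes are synthetic stand-ins for NLCD data.

```python
import numpy as np

# Sketch: aggregate a fine-resolution categorical land cover grid
# (NLCD-style class codes) to coarse model cells as per-class
# fractional coverage. A real workflow would resample to an exact
# block size first; here the factor 133 (~4 km / 30 m) is nominal.
def class_fractions(landcover, block, classes):
    ny, nx = landcover.shape
    by, bx = ny // block, nx // block
    lc = landcover[:by * block, :bx * block]      # trim partial blocks
    blocks = lc.reshape(by, block, bx, block)
    out = np.empty((len(classes), by, bx))
    for k, c in enumerate(classes):
        out[k] = (blocks == c).mean(axis=(1, 3))  # fraction of class c
    return out   # one fractional-coverage layer per land cover class

lc = np.random.default_rng(5).choice([11, 21, 22, 41], size=(1200, 1200))
frac = class_fractions(lc, block=133, classes=[11, 21, 22, 41])
print(frac.shape, frac.sum(axis=0).max())   # fractions sum to 1 per cell
```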
Mihaescu, Mihai; Murugappan, Shanmugam; Kalra, Maninder; Khosla, Sid; Gutmark, Ephraim
2008-07-19
Computational fluid dynamics techniques employing primarily steady Reynolds-Averaged Navier-Stokes (RANS) methodology have been recently used to characterize the transitional/turbulent flow field in human airways. The use of RANS implies that flow phenomena are averaged over time, the flow dynamics not being captured. Further, RANS uses two-equation turbulence models that are not adequate for predicting anisotropic flows, flows with high streamline curvature, or flows where separation occurs. A more accurate approach for such flow situations that occur in the human airway is Large Eddy Simulation (LES). The paper considers flow modeling in a pharyngeal airway model reconstructed from cross-sectional magnetic resonance scans of a patient with obstructive sleep apnea. The airway model is characterized by a maximum narrowing at the site of retropalatal pharynx. Two flow-modeling strategies are employed: steady RANS and the LES approach. In the RANS modeling framework both k-epsilon and k-omega turbulence models are used. The paper discusses the differences between the airflow characteristics obtained from the RANS and LES calculations. The largest discrepancies were found in the axial velocity distributions downstream of the minimum cross-sectional area. This region is characterized by flow separation and large radial velocity gradients across the developed shear layers. The largest difference in static pressure distributions on the airway walls was found between the LES and the k-epsilon data at the site of maximum narrowing in the retropalatal pharynx.
Tool use for corpse cleaning in chimpanzees
NASA Astrophysics Data System (ADS)
van Leeuwen, Edwin J. C.; Cronin, Katherine A.; Haun, Daniel B. M.
2017-03-01
For the first time, chimpanzees have been observed using tools to clean the corpse of a deceased group member. A female chimpanzee sat down at the dead body of a young male, selected a firm stem of grass, and started to intently remove debris from his teeth. This report contributes novel behaviour to the chimpanzee’s ethogram, and highlights how crucial information for reconstructing the evolutionary origins of human mortuary practices may be missed by refraining from developing adequate observation techniques to capture non-human animals’ death responses.
1999-09-30
history. OBJECTIVES 1) Is the variability in a river’s sediment load, observed over the last 100 years or less, adequate to provide a proxy for longer-term...experiments, small basins are able to capture in terms of textural proxies , both the natural variability associated with precipitation and temperature...as well as realistic scenarios of abrupt climate change. Open ocean basins, like the Eel River, are less likely to record the proxy record of ambient
Putting semantics into the semantic web: how well can it capture biology?
Kazic, Toni
2006-01-01
Could the Semantic Web work for computations of biological interest in the way it's intended to work for movie reviews and commercial transactions? It would be wonderful if it could, so it's worth looking to see if its infrastructure is adequate to the job. The technologies of the Semantic Web make several crucial assumptions. I examine those assumptions; argue that they create significant problems; and suggest some alternative ways of achieving the Semantic Web's goals for biology.
1990-01-01
Population at risk 3579 NOTES. No one in the U.S. is effected ( health effects ) by a food shortage due to a drought 8. Land area at risk 1 3 5 Q 9 NOTES: Amount...adequately captures the characteristics most people are concerned with--human health , ecological effects , and welfare--for these problems. Because of...reasons as in India: Pure Food (10) is included because of the effects of pesticides, while Animal Habitat (9) and Stock of Wildlife (23) are included
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolova, Z. N., E-mail: Zina.Sokolova@mail.ioffe.ru; Pikhtin, N. A.; Tarasov, I. S.
The operating characteristics of a semiconductor quantum-well laser calculated using three models are compared. These models are (i) a model not taking into account differences between the electron and hole parameters and using the electron parameters for both types of charge carriers; (ii) a model not taking into account differences between the electron and hole parameters and using the hole parameters for both types of charge carriers; and (iii) a model taking into account the asymmetry between the electron and hole parameters. It is shown that, at the same velocity of electron and hole capture into an unoccupied quantum well, the laser characteristics obtained using the three models differ considerably. These differences are due to a difference between the filling of the electron and hole subbands in a quantum well. The electron subband is more occupied than the hole subband. As a result, at the same velocities of electron and hole capture into an empty quantum well, the effective electron-capture velocity is lower than the effective hole-capture velocity. Specifically, it is shown that for the laser structure studied a hole-capture velocity of 5 × 10⁵ cm/s into an empty quantum well and the corresponding electron-capture velocity of 3 × 10⁶ cm/s into an empty quantum well describe the rapid capture of these carriers, at which the light–current characteristic of the laser remains virtually linear up to high pump-current densities. However, an electron-capture velocity of 5 × 10⁵ cm/s and a corresponding hole-capture velocity of 8.4 × 10⁴ cm/s describe the slow capture of these carriers, causing significant sublinearity in the light–current characteristic.
Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble
2016-06-17
Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.
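As a point of reference for the depletion logic, the sketch below implements the classical two-pass removal estimator, in which capture probability is inferred from the decline in catch between passes under an assumed closed population. It is not the authors' open/closed N-mixture model, which relaxes exactly that closure assumption and adds environmental covariates; the counts are made up.

```python
# Two-pass removal (Zippin-type) estimator, shown only as a simple
# reference point for the depletion idea under a closure assumption.
def two_pass_removal(c1, c2):
    """c1, c2: counts captured on the first and second passes."""
    if c1 <= c2:
        raise ValueError("no depletion observed; estimator undefined")
    p_hat = (c1 - c2) / c1            # per-pass capture probability
    n_hat = c1 ** 2 / (c1 - c2)       # estimated abundance at the site
    return p_hat, n_hat

p, n = two_pass_removal(c1=46, c2=17)   # illustrative counts
print(f"capture probability ~ {p:.2f}, abundance ~ {n:.0f}")
```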
NASA Astrophysics Data System (ADS)
Matheny, A. M.; Bohrer, G.; Mirfenderesgi, G.; Schafer, K. V.; Ivanov, V. Y.
2014-12-01
Hydraulic limitations are known to control transpiration in forest ecosystems when the soil is drying or when the vapor pressure deficit between the air and stomata is very large, but they can also impact stomatal apertures under conditions of adequate soil moisture and lower evaporative demand. We use the NACP dataset of latent heat flux measurements and model results for multiple sites and models to demonstrate models' difficulties in capturing intra-daily hysteresis. We hypothesize that this is a result of unresolved afternoon stomatal closure due to hydrodynamic stresses. The current formulations for stomatal conductance and the empirical coupling between stomatal conductance and soil moisture used by these models do not resolve the hydrodynamic process of water movement from the soil to the leaves. This approach does not take advantage of advances in our understanding of water flow and storage in trees, or of tree and canopy structure. A more thorough representation of tree-hydrodynamic processes could potentially remedy this significant source of model error. In a forest plot at the University of Michigan Biological Station, we use measurements of sap flux and leaf water potential to demonstrate that trees of similar type - late-successional deciduous trees - have very different hydrodynamic strategies that lead to differences in their temporal patterns of stomatal conductance and thus hysteretic cycles of transpiration. These differences will lead to large differences in conductance and water use based on the species composition of the forest. We also demonstrate that the size and shape of the tree branching system lead to differences in the extent of hydrodynamic stress, which may change the forest respiration patterns as the forest grows and ages. We propose a framework to resolve tree hydrodynamics in global and regional models based on the Finite-Elements Tree-Crown Hydrodynamics model (FETCH), a hydrodynamic model that can resolve the fast dynamics of stomatal conductance. FETCH simulates water flow through a tree as a system of porous-media conduits and calculates the amount of hydraulic limitation to stomatal conductance, given the atmospheric and biological variables from the global model, and could replace the current empirical formulation for stomatal adjustment based on soil moisture.
Minimum required capture radius in a coplanar model of the aerial combat problem
NASA Technical Reports Server (NTRS)
Breakwell, J. V.; Merz, A. W.
1977-01-01
Coplanar aerial combat is modeled with constant speeds and specified turn rates. The minimum capture radius which will always permit capture, regardless of the initial conditions, is calculated. This 'critical' capture radius is also the maximum range which the evader can guarantee indefinitely if the initial range, for example, is large. A composite barrier is constructed which gives the boundary, at any heading, of relative positions for which the capture radius is less than critical.
Nichols, J.D.; Pollock, K.H.
1983-01-01
Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
Beyond positivist ecology: toward an integrated ecological ethics.
Norton, Bryan G
2008-12-01
A post-positivist understanding of ecological science and the call for an "ecological ethic" indicate the need for a radically new approach to evaluating environmental change. The positivist view of science cannot capture the essence of environmental sciences because the recent work of "reflexive" ecological modelers shows that this requires a reconceptualization of the way in which values and ecological models interact in scientific process. Reflexive modelers are ecological modelers who believe it is appropriate for ecologists to examine the motives for their choices in developing models; this self-reflexive approach opens the door to a new way of integrating values into public discourse and to a more comprehensive approach to evaluating ecological change. This reflexive building of ecological models is introduced through the transformative simile of Aldo Leopold, which shows that learning to "think like a mountain" involves a shift in both ecological modeling and in values and responsibility. An adequate, interdisciplinary approach to ecological valuation requires a re-framing of the evaluation questions in entirely new ways: a review of the current status of interdisciplinary value theory with respect to ecological values reveals that neither of the widely accepted theories of environmental value, economic utilitarianism or intrinsic value theory (environmental ethics), provides a foundation for an ecologically sensitive evaluation process. Thus, a new, ecologically sensitive, and more comprehensive approach to evaluating ecological change would include an examination of the metaphors that motivate the models used to describe environmental change.
Uniting statistical and individual-based approaches for animal movement modelling.
Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel
2014-01-01
The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to the practical impossibility of accessing internal states through field work and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method which combines statistical and individual-based modelling (IBM). Using a statistical technique for forward modelling of the IBM has the advantage of being faster for parameterization than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and emergent patterns validation and was tested for two different scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular, spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand, a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand, network simulations have to evolve over hours or even days to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators Brian, NEST, and Neuron, as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
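To make the speed comparison concrete, here is a minimal, simulator-agnostic timing harness that reports the real-time factor (simulated seconds per wall-clock second). The step function is a toy stand-in, not Brian/NEST/Neuron/Auryn code.

```python
import time
import numpy as np

def real_time_factor(step_fn, dt, sim_seconds):
    """Advance a simulation in fixed steps; return simulated seconds
    per wall-clock second (1.0 = real time)."""
    t0 = time.perf_counter()
    for _ in range(int(sim_seconds / dt)):
        step_fn(dt)
    return sim_seconds / (time.perf_counter() - t0)

# Toy stand-in for one integration step of a spiking network: a real
# simulator would update membrane potentials, synapses, and spike queues.
state = np.zeros(10_000)
def step(dt):
    global state
    state += dt * (-state + np.random.rand(state.size))

# 0.1 ms resolution, as required to resolve typical STDP windows.
print(f"real-time factor: {real_time_factor(step, dt=1e-4, sim_seconds=1.0):.2f}x")
```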
NASA Astrophysics Data System (ADS)
Mirabolghasemi, M.; Prodanovic, M.; Choens, R. C., II; Dewers, T. A.
2016-12-01
We present a workflow to study the alteration of flow and mechanical characteristics of sandstones after shear failure, specifically modeling weakening of the formation due to CO2 injection. We use the discrete element method (DEM) to represent each sand grain as a cluster of bonded sub-particles and to model their potential crushing. We also introduce bonds between sand grain clusters to enable the modeling of the mechanical behavior of consolidated sandstones. The model is tuned by comparing our numerical compression tests on single sand grains with the experimental results reported in the literature. Once the mechanical behavior of individual grains is adequately captured by the model, a packing of such grains is subjected to shear stress. Once the packing fails under the imposed shear stress, its mechanical properties, permeability, and porosity are calculated. This test is repeated for various conditions by varying parameters such as the brittleness of single grains (the relative quartz-feldspar content of the grains), normal stress, and cement strength (assuming (chemical) weakening of the inter- and intra-grain-cluster bonds due to CO2 injection). We specifically compare the effect of cement/bond strength weakening on mechanical properties to triaxial compression experimental measurements before and after hydrous scCO2 and CO2-saturated brine injection in Boise sandstone performed at Sandia National Laboratories.
Kunkel, Amber; McLay, Laura A
2013-03-01
Emergency medical services (EMS) provide life-saving care and hospital transport to patients with severe trauma or medical conditions. Severe weather events, such as snow events, may lead to adverse patient outcomes by increasing call volumes and service times. Adequate staffing levels during such weather events are critical for ensuring that patients receive timely care. To determine staffing levels that depend on weather, we propose a model that uses a discrete event simulation of a reliability model to identify minimum staffing levels that provide timely patient care, with regression used to provide the input parameters. The system is said to be reliable if there is a high degree of confidence that ambulances can immediately respond to a given proportion of patients (e.g., 99 %). Four weather scenarios capture varying levels of snow falling and snow on the ground. An innovative feature of our approach is that we evaluate the mitigating effects of different extrinsic response policies and intrinsic system adaptation. The models use data from Hanover County, Virginia to quantify how snow reduces EMS system reliability and necessitates increasing staffing levels. The model and its analysis can assist in EMS preparedness by providing a methodology to adjust staffing levels during weather events. A key observation is that when it is snowing, intrinsic system adaptation has similar effects on system reliability as one additional ambulance.
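A minimal Monte-Carlo version of the reliability idea: simulate Poisson call arrivals with exponential service times, then find the smallest fleet meeting a 99% immediate-response target. The arrival and service parameters below are illustrative, not the Hanover County estimates.

```python
import numpy as np

def immediate_response_rate(n_ambulances, rate_per_hr, mean_service_hr,
                            horizon_hr=10_000, seed=0):
    """Fraction of calls that find an ambulance free at the instant they arrive
    (calls arriving when all units are busy are counted as not-immediate)."""
    rng = np.random.default_rng(seed)
    busy_until = np.zeros(n_ambulances)     # completion time of each unit
    t, served, total = 0.0, 0, 0
    while t < horizon_hr:
        t += rng.exponential(1.0 / rate_per_hr)
        total += 1
        unit = busy_until.argmin()
        if busy_until[unit] <= t:           # a unit is free: immediate response
            served += 1
            busy_until[unit] = t + rng.exponential(mean_service_hr)
    return served / total

# Smallest staffing level meeting the 99% target under snow-inflated demand.
for n in range(1, 12):
    if immediate_response_rate(n, rate_per_hr=4.0, mean_service_hr=1.0) >= 0.99:
        print("minimum ambulances:", n)
        break
```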
The Husting dilemma: A methodological note
Nichols, J.D.; Hepp, G.R.; Pollock, K.H.; Hines, J.E.
1987-01-01
Recently, Gill (1985) discussed the interpretation of capture history data resulting from his own studies on the red-spotted newt, Notophthalmus viridescens , and work by Husting (1965) on spotted salamanders, Ambystoma maculatum. Gill (1985) noted that gaps in capture histories (years in which individuals were not captured, preceded and followed by years in which they were) could result from either of two very different possibilities: (1) failure of the animal to return to the fenced pond to breed (the alternative Husting (1965) favored), or (2) return of the animal to the breeding pond, but failure of the investigator to capture it and detect its presence. The authors agree entirely with Gill (1985) that capture history data such as his or those of Husting (1965) should be analyzed using models that recognize the possibility of 'census error,' and that it is important to try to distinguish between such 'error' and skipped breeding efforts. The purpose of this note is to point out the relationship between Gill's (1985:347) null model and certain capture-recapture models, and to use capture-recapture models and tests to analyze the original data of Husting (1965).
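A small sketch of the capture-history probability under Gill's null model (the animal returns to the pond every year after marking, so gaps arise only from detection failure), with survival phi and capture probability p; the history and parameter values here are hypothetical.

```python
def history_prob(h, phi, p):
    """Probability of capture history h (h[0]=1 marks first capture) given
    annual survival phi and per-occasion capture probability p, assuming the
    animal is present every year after marking (no skipped breeding)."""
    last = max(i for i, x in enumerate(h) if x == 1)
    prob = 1.0
    for i in range(1, last + 1):             # occasions up to the last sighting
        prob *= phi * (p if h[i] else (1 - p))
    chi = 1.0                                # P(never seen again after 'last'):
    for _ in range(len(h) - 1 - last):       # died, or survived but missed
        chi = (1 - phi) + phi * (1 - p) * chi
    return prob * chi

print(history_prob([1, 0, 1], phi=0.8, p=0.5))  # gap year: 0.8*0.5*0.8*0.5 = 0.16
```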
Gilmartin, Heather M; Sousa, Karen H; Battaglia, Catherine
2016-01-01
The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent-variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. A secondary data analysis study was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness Refined study. The sample was randomly split into exploration and validation subsets. The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and CLABSIs (reflective = -.28; composite = -.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. There were small statistical but large conceptual differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models fit the data well. Lessons learned are presented. The comparison of modeling approaches is recommended when initially modeling variables that have never been modeled or that have directional ambiguity, to increase transparency and bring confidence to study findings.
Norris, Darren; Fortin, Marie-Josée; Magnusson, William E.
2014-01-01
Background Ecological monitoring and sampling optima are context and location specific. Novel applications (e.g. biodiversity monitoring for environmental service payments) call for renewed efforts to establish reliable and robust monitoring in biodiversity rich areas. As there is little information on the distribution of biodiversity across the Amazon basin, we used altitude as a proxy for biological variables to test whether meso-scale variation can be adequately represented by different sample sizes in a standardized, regular-coverage sampling arrangement. Methodology/Principal Findings We used Shuttle-Radar-Topography-Mission digital elevation values to evaluate if the regular sampling arrangement in standard RAPELD (rapid assessments (“RAP”) over the long-term (LTER [“PELD” in Portuguese])) grids captured patterns in meso-scale spatial variation. The adequacy of different sample sizes (n = 4 to 120) was examined within 32,325 km2/3,232,500 ha (1293×25 km2 sample areas) distributed across the legal Brazilian Amazon. Kolmogorov-Smirnov tests, correlation and root-mean-square-error were used to measure sample representativeness, similarity and accuracy respectively. Trends and thresholds of these responses in relation to sample size and standard-deviation were modeled using Generalized-Additive-Models and conditional-inference-trees respectively. We found that a regular arrangement of 30 samples captured the distribution of altitude values within these areas. Sample size was more important than sample standard deviation for representativeness and similarity. In contrast, accuracy was more strongly influenced by sample standard deviation. Additionally, analysis of spatially interpolated data showed that spatial patterns in altitude were also recovered within areas using a regular arrangement of 30 samples. Conclusions/Significance Our findings show that the logistically feasible sample used in the RAPELD system successfully recovers meso-scale altitudinal patterns. This suggests that the sample size and regular arrangement may also be generally appropriate for quantifying spatial patterns in biodiversity at similar scales across at least 90% (≈5 million km2) of the Brazilian Amazon. PMID:25170894
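The representativeness check can be sketched in a few lines: draw a regular grid of samples from an elevation field and compare the sample to the full distribution with a two-sample Kolmogorov-Smirnov test. The synthetic field below merely stands in for the SRTM data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Synthetic stand-in for the elevations of one 25-km2 sample area (metres);
# a real analysis would read the SRTM DEM for each area instead.
area = rng.gamma(shape=4.0, scale=25.0, size=(500, 500))

def regular_sample(field, n):
    """~n samples on a regular grid spanning the field."""
    side = int(round(np.sqrt(n)))
    rows = np.linspace(0, field.shape[0] - 1, side).astype(int)
    cols = np.linspace(0, field.shape[1] - 1, side).astype(int)
    return field[np.ix_(rows, cols)].ravel()

for n in (4, 16, 36, 64, 100):
    s = regular_sample(area, n)
    d_stat, p_val = ks_2samp(s, area.ravel())   # representativeness measure
    print(f"n={s.size:3d}  KS D={d_stat:.3f}  p={p_val:.2f}")
```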
Pre-slip and Localized Strain Band - A Study Based on Large Sample Experiment and DIC
NASA Astrophysics Data System (ADS)
Ji, Y.; Zhuo, Y. Q.; Liu, L.; Ma, J.
2017-12-01
The meta-instability stage (MIS) is the stage that occurs between a fault reaching the peak differential stress and the onset of the final stress drop. It is the crucial stage during which a fault transitions from "stick" to "slip". Therefore, if one can quantitatively analyze the spatial and temporal characteristics of the deformation field of a fault at MIS, it will be of great significance both to fault mechanics and to earthquake prediction studies. To this end, a series of stick-slip experiments was conducted using a biaxial servo-controlled pressure machine. Digital images of the sample surfaces were captured by a high-speed camera and processed using a digital image correlation (DIC) method. If images of a rock sample are acquired before and after deformation, DIC can be used to infer the displacement and strain fields. In our study, sample images were captured at a rate of 1000 frames per second at a resolution of 2048 by 2048 pixels. The displacement field, strain field, and fault displacement were calculated from the captured images. Our data show that (1) pre-sliding can be a three-stage process: a relatively long and slow first stage at a slip rate of 7.9 nm/s, a relatively short and fast second stage at a rate of 3 µm/s, and a final stage lasting only 0.2 s during which the slip rate reached as high as 220 µm/s. (2) Localized strain bands were observed nearly perpendicular to the fault. A possible mechanism is that the pre-sliding is distributed heterogeneously along the fault: there are segments that slide adequately and segments that slide less, and the latter constrain the deformation of the adjacent subregions. Localized deformation bands tend to radiate from points where sliding is discontinuous, and as the adequately sliding segments compete with the less-sliding ones, the strain bands evolve accordingly.
Utilizing NASA DISCOVER-AQ Data to Examine Spatial Gradients in Complex Emission Environments
NASA Astrophysics Data System (ADS)
Buzanowicz, M. E.; Moore, W.; Crawford, J. H.; Schroeder, J.
2017-12-01
Although many regulations have been enacted with the goal of improving air quality, many parts of the US are still classified as 'non-attainment areas' because they frequently violate federal air quality standards. Adequately monitoring the spatial distribution of pollutants both within and outside of non-attainment areas has been an ongoing challenge for regulators. Observations of near-surface pollution from space-based platforms would provide an unprecedented view of the spatial distribution of pollution, but this goal has not yet been realized due to fundamental limitations of satellites, specifically because the footprint of satellite measurements may not be small enough to capture true gradients in pollution and instead represents an average over a large area. NASA's DISCOVER-AQ was a multi-year field campaign aimed at improving our understanding of the role that remote sensing, including satellite-based remote sensing, could play in air quality monitoring systems. DISCOVER-AQ data will be utilized to create a metric to examine spatial gradients and how satellites can capture those gradients in areas with complex emission environments. Examining horizontal variability within a vertical column is critical to understanding mixing within the atmosphere. Aircraft spirals conducted during DISCOVER-AQ were divided into octants, and averages of a given species were calculated, with certain points receiving a flag. These flags were determined by calculating gradients between subsequent octants. Initial calculations have shown that over areas with large point source emissions, such as Platteville and Denver-La Casa in Colorado, and Essex, Maryland, satellite retrievals may not adequately capture spatial variability in the atmosphere, thus complicating satellite inversion techniques and limiting our ability to understand human exposure on sub-grid scales. Further calculations at other locations and for other trace gases are necessary to determine the effects of vertical variability within the atmosphere.
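A minimal version of the octant metric described above: bin one spiral's measurements into eight altitude octants, average the species in each, and flag large octant-to-octant gradients. The data and threshold are illustrative, not DISCOVER-AQ values.

```python
import numpy as np

def octant_flags(altitude, conc, threshold):
    """Average a species in eight altitude bins of one aircraft spiral and
    flag gradients between subsequent octants that exceed a threshold."""
    edges = np.linspace(altitude.min(), altitude.max(), 9)
    idx = np.clip(np.digitize(altitude, edges) - 1, 0, 7)
    means = np.array([conc[idx == k].mean() for k in range(8)])
    flags = np.abs(np.diff(means)) > threshold
    return means, flags

rng = np.random.default_rng(2)
alt = rng.uniform(100, 3000, 2000)                            # spiral altitudes (m)
no2 = 5 + 3 * np.exp(-alt / 800) + rng.normal(0, 0.2, alt.size)  # ppbv-like profile
means, flags = octant_flags(alt, no2, threshold=0.3)
print(means.round(2), flags)
```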
Benchmarking carbon fluxes of the ISIMIP2a biome models
Chang, Jinfeng; Ciais, Philippe; Wang, Xuhui; ...
2017-03-28
The purpose of this study is to evaluate the eight ISIMIP2a biome models against independent estimates of long-term net carbon fluxes (i.e. Net Biome Productivity, NBP) over terrestrial ecosystems for the recent four decades (1971–2010). Here, we evaluate modeled global NBP against 1) the updated global residual land sink (RLS) plus land use emissions (E_LUC) from the Global Carbon Project (GCP), presented as R + L in this study by Le Quéré et al (2015), and 2) the land CO2 fluxes from two atmospheric inversion systems: Jena CarboScope s81_v3.8 and CAMS v15r2, referred to as F_Jena and F_CAMS respectively. The model ensemble-mean NBP (that includes seven models with land-use change) is higher than but within the uncertainty of R + L, while the simulated positive NBP trend over the last 30 yr is lower than that from R + L and from the two inversion systems. ISIMIP2a biome models well capture the interannual variation of global net terrestrial ecosystem carbon fluxes. Tropical NBP represents 31 ± 17% of global total NBP during the past decades, and the year-to-year variation of tropical NBP contributes most of the interannual variation of global NBP. According to the models, increasing Net Primary Productivity (NPP) was the main cause for the generally increasing NBP. Significant global NBP anomalies from the long-term mean between the two phases of El Niño Southern Oscillation (ENSO) events are simulated by all models (p < 0.05), which is consistent with the R + L estimate (p = 0.06), also mainly attributed to NPP anomalies, rather than to changes in heterotrophic respiration (Rh). The global NPP and NBP anomalies during ENSO events are dominated by their anomalies in tropical regions impacted by tropical climate variability. Multiple regressions between R + L, F_Jena and F_CAMS interannual variations and tropical climate variations reveal a significant negative response of global net terrestrial ecosystem carbon fluxes to tropical mean annual temperature variation, and a non-significant response to tropical annual precipitation variation. According to the models, tropical precipitation is a more important driver, suggesting that some models do not capture the roles of precipitation and temperature changes adequately.
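The closing regression can be sketched directly: regress detrended flux interannual variability on tropical temperature and precipitation anomalies. All series below are synthetic placeholders for R + L and the inversion products.

```python
import numpy as np

rng = np.random.default_rng(3)
years = 40
t_anom = rng.normal(0, 0.3, years)           # tropical MAT anomaly (K)
p_anom = rng.normal(0, 50.0, years)          # tropical precip anomaly (mm/yr)
nbp = -2.0 * t_anom + 0.002 * p_anom + rng.normal(0, 0.3, years)  # PgC/yr

X = np.column_stack([np.ones(years), t_anom, p_anom])
coef, *_ = np.linalg.lstsq(X, nbp, rcond=None)
resid = nbp - X @ coef
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=3))
for name, b, s in zip(["intercept", "temperature", "precipitation"], coef, se):
    print(f"{name:>13}: {b:+.4f} (se {s:.4f})")   # expect a negative T response
```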
Modeling and Measurements of Multiphase Flow and Bubble Entrapment in Steel Continuous Casting
NASA Astrophysics Data System (ADS)
Jin, Kai; Thomas, Brian G.; Ruan, Xiaoming
2016-02-01
In steel continuous casting, argon gas is usually injected to prevent clogging, but the bubbles also affect the flow pattern, and may become entrapped to form defects in the final product. To investigate this behavior, plant measurements were conducted, and a computational model was applied to simulate turbulent flow of the molten steel and the transport and capture of argon gas bubbles into the solidifying shell in a continuous slab caster. First, the flow field was solved with an Eulerian k-ε model of the steel, which was two-way coupled with a Lagrangian model of the large bubbles using a discrete random walk method to simulate their turbulent dispersion. The flow predicted on the top surface agreed well with nailboard measurements and indicated strong cross flow caused by biased flow of Ar gas due to the slide-gate orientation. Then, the trajectories and capture of over two million bubbles (25 μm to 5 mm diameter range) were simulated using two different capture criteria (simple and advanced). Results with the advanced capture criterion agreed well with measurements of the number, locations, and sizes of captured bubbles, especially for larger bubbles. The relative capture fraction of 0.3 pct was close to the measured 0.4 pct for 1 mm bubbles and occurred mainly near the top surface. About 85 pct of smaller bubbles were captured, mostly deeper down in the caster. Due to the biased flow, more bubbles were captured on the inner radius, especially near the nozzle. On the outer radius, more bubbles were captured near the narrow face. The model presented here is an efficient tool to study the capture of bubbles and inclusion particles in solidification processes.
Informed Consent for Web Paradata Use
Couper, Mick P.; Singer, Eleanor
2013-01-01
Survey researchers are making increasing use of paradata – such as keystrokes, clicks, and timestamps – to evaluate and improve survey instruments but also to understand respondents and how they answer surveys. Since the introduction of paradata, researchers have been asking whether and how respondents should be informed about the capture and use of their paradata while completing a survey. In a series of three vignette-based experiments, we examine alternative ways of informing respondents about capture of paradata and seeking consent for their use. In all three experiments, any mention of paradata lowers stated willingness to participate in the hypothetical surveys. Even the condition where respondents were asked to consent to the use of paradata at the end of an actual survey resulted in a significant proportion declining. Our research shows that requiring such explicit consent may reduce survey participation without adequately informing survey respondents about what paradata are and why they are being used. PMID:24098312
Pijl, Mirjam Kj; Rommelse, Nanda Nj; Hendriks, Monica; De Korte, Manon Wp; Buitelaar, Jan K; Oosterling, Iris J
2018-02-01
The field of early autism research is in dire need of outcome measures that adequately reflect subtle changes in core autistic behaviors. This article compares the ability of a newly developed measure, the Brief Observation of Social Communication Change (BOSCC), and the Autism Diagnostic Observation Schedule (ADOS) to detect changes in core symptoms of autism in 44 toddlers. The results provide encouraging evidence for the Brief Observation of Social Communication Change as a candidate outcome measure, as reflected in sufficient inter- and intra-rater reliability, independence from other child characteristics, and sensitivity to capture change. Although the Brief Observation of Social Communication Change did not evidently outperform the Autism Diagnostic Observation Schedule on any of these quality criteria, the instrument may be better able to capture subtle, individual changes in core autistic symptoms. The promising findings warrant further study of this new instrument.
NASA Astrophysics Data System (ADS)
Stefaneas, Petros; Vandoulakis, Ioannis M.
2015-12-01
This paper outlines a logical representation of certain aspects of the process of mathematical proving that are important from the point of view of Artificial Intelligence. Our starting-point is the concept of proof-event or proving, introduced by Goguen, instead of the traditional concept of mathematical proof. The reason behind this choice is that in contrast to the traditional static concept of mathematical proof, proof-events are understood as processes, which enables their use in Artificial Intelligence in contexts in which problem-solving procedures and strategies are studied. We represent proof-events as problem-centered spatio-temporal processes by means of the language of the calculus of events, which adequately captures certain temporal aspects of proof-events (i.e. that they have history and form sequences of proof-events evolving in time). Further, we suggest a "loose" semantics for the proof-events by means of Kolmogorov's calculus of problems. Finally, we present the intended interpretations for our logical model from the fields of automated theorem-proving and Web-based collective proving.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chojnicki, Kirsten; Cooper, Marcia A.; Guo, Shuyue
Pore-scale aperture effects on flow in pore networks were studied in the laboratory to provide a parameterization for use in transport models. Four cases were considered: regular and irregular pillar/pore alignment with and without an aperture. The velocity field of each case was measured and simulated, providing quantitatively comparable results. Two aperture effect parameterizations were considered: permeability and transmission. Permeability values varied by an order of magnitude between the cases with and without apertures. However, transmission did not correlate with permeability. Despite having much greater permeability, the regular aperture case permitted less transmission than the regular case. Moreover, both irregular cases had greater transmission than the regular cases, a difference not supported by the permeabilities. Overall, these findings suggest that pore-scale aperture effects on flow through a pore network may not be adequately captured by properties such as permeability for applications that are interested in determining particle transport volume and timing.
Experimental Plans for Subsystems of a Shock Wave Driven Gas Core Reactor
NASA Technical Reports Server (NTRS)
Kazeminezhad, F.; Anghai, S.
2008-01-01
This Contractor Report proposes a number of plans for experiments on subsystems of a shock wave driven pulsed magnetic induction gas core reactor (PMI-GCR, or PMD-GCR, pulsed magnet driven gas core reactor). Computer models of shock generation and collision in a large-scale PMI-GCR shock tube have been performed. Based upon the simulation results, a number of issues arose that can only be addressed adequately by capturing experimental data on high pressure (approx. 1 atmosphere or greater) partial plasma shock wave effects in large bore shock tubes (10 cm radius). There are three main subsystems that are of immediate interest (for appraisal of the concept viability). These are (1) the shock generation in a high pressure gas using either a plasma thruster or a pulsed high magnetic field, (2) collision of MHD or gas dynamic shocks, their interaction time, and collision pile-up region thickness, and (3) magnetic flux compression power generation (not included here).
Approximated adjusted fractional Bayes factors: A general method for testing informative hypotheses.
Gu, Xin; Mulder, Joris; Hoijtink, Herbert
2018-05-01
Informative hypotheses are increasingly being used in the psychological sciences because they adequately capture researchers' theories and expectations. In the Bayesian framework, the evaluation of informative hypotheses often makes use of default Bayes factors such as the fractional Bayes factor. This paper approximates and adjusts the fractional Bayes factor such that it can be used to evaluate informative hypotheses in general statistical models. In the fractional Bayes factor, a fraction parameter must be specified that controls the amount of information in the data used for specifying an implicit prior. The remaining fraction is used for testing the informative hypotheses. We discuss different choices of this parameter and present a scheme for setting it. Furthermore, a software package is described which computes the approximated adjusted fractional Bayes factor. Using this software package, psychological researchers can evaluate informative hypotheses by means of Bayes factors in an easy manner. Two empirical examples are used to illustrate the procedure. © 2017 The British Psychological Society.
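For reference, the fractional Bayes factor that the paper approximates and adjusts is O'Hagan's (1995) construction, in which a fraction b of each likelihood trains the implicit prior and the remainder tests the hypotheses:

```latex
\[
  B^{b}_{12}
  = \frac{q_1(b,\mathbf{x})}{q_2(b,\mathbf{x})},
  \qquad
  q_i(b,\mathbf{x})
  = \frac{\int \pi_i(\theta_i)\, f(\mathbf{x}\mid\theta_i)\, d\theta_i}
         {\int \pi_i(\theta_i)\, f(\mathbf{x}\mid\theta_i)^{\,b}\, d\theta_i},
\]
```

where \(\pi_i\) is the (possibly improper) prior under hypothesis \(H_i\) and \(f(\mathbf{x}\mid\theta_i)\) the likelihood; the paper's contribution is choosing and adjusting \(b\) so the criterion works for informative (inequality-constrained) hypotheses.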
Bond, Lyndal; Hilton, Shona
2014-01-01
Background: Novel policy interventions may lack evaluation-based evidence. Considerations to introduce minimum unit pricing (MUP) of alcohol in the UK were informed by econometric modelling (the ‘Sheffield model’). We aim to investigate policy stakeholders’ views of the utility of modelling studies for public health policy. Methods: In-depth qualitative interviews with 36 individuals involved in MUP policy debates (purposively sampled to include civil servants, politicians, academics, advocates and industry-related actors) were conducted and thematically analysed. Results: Interviewees felt familiar with modelling studies and often displayed detailed understandings of the Sheffield model. Despite this, many were uneasy about the extent to which the Sheffield model could be relied on for informing policymaking and preferred traditional evaluations. A tension was identified between this preference for post hoc evaluations and a desire for evidence derived from local data, with modelling seen to offer high external validity. MUP critics expressed concern that the Sheffield model did not adequately capture the ‘real life’ world of the alcohol market, which was conceptualized as a complex and, to some extent, inherently unpredictable system. Communication of modelling results was considered intrinsically difficult but presenting an appropriate picture of the uncertainties inherent in modelling was viewed as desirable. There was general enthusiasm for increased use of econometric modelling to inform future policymaking but an appreciation that such evidence should only form one input into the process. Conclusion: Modelling studies are valued by policymakers as they provide contextually relevant evidence for novel policies, but tensions exist with views of traditional evaluation-based evidence. PMID:24367068
Pohl, Steffi; Südkamp, Anna; Hardt, Katinka; Carstensen, Claus H.; Weinert, Sabine
2016-01-01
Assessing competencies of students with special educational needs in learning (SEN-L) poses a challenge for large-scale assessments (LSAs). For students with SEN-L, the available competence tests may fail to yield test scores of high psychometric quality, which are—at the same time—measurement invariant to test scores of general education students. We investigated whether we can identify a subgroup of students with SEN-L, for which measurement invariant competence measures of adequate psychometric quality may be obtained with tests available in LSAs. We furthermore investigated whether differences in test-taking behavior may explain dissatisfying psychometric properties and measurement non-invariance of test scores within LSAs. We relied on person fit indices and mixture distribution models to identify students with SEN-L for whom test scores with satisfactory psychometric properties and measurement invariance may be obtained. We also captured differences in test-taking behavior related to guessing and missing responses. As a result we identified a subgroup of students with SEN-L for whom competence scores of adequate psychometric quality that are measurement invariant to those of general education students were obtained. Concerning test taking behavior, there was a small number of students who unsystematically picked response options. Removing these students from the sample slightly improved item fit. Furthermore, two different patterns of missing responses were identified that explain to some extent problems in the assessments of students with SEN-L. PMID:26941665
An exploration of post-traumatic stress disorder in emergency nurses following Hurricane Katrina.
Battles, Elizabeth D
2007-08-01
As a result of Hurricane Katrina on August 29, 2005, ED nurses were faced with chaos during and after the storm. The purpose of this pilot study was to determine if emergency nurses experienced signs and symptoms of post-traumatic stress disorder (PTSD) as a result of working in an emergency department in the New Orleans metropolitan area during and immediately after Hurricane Katrina. The research also identifies whether the nurses were satisfied with the measures administrators took to provide Critical Incident Stress Management (CISM). To combat burnout, absenteeism, emotional difficulties, and health problems in nurses, administration must offer adequate crisis management for those affected by a traumatic event in the workplace. Data were captured through a cross-sectional research design using self-reporting questionnaires. A questionnaire captured demographic information as well as information regarding satisfaction with CISM offered by management. The Post Traumatic Checklist (PCL) was utilized to assess PTSD symptoms in the nurses. An emergency department located approximately 40 miles north of downtown New Orleans, Louisiana, served as the setting for this study. The sample included 21 registered nurses who worked in the emergency department. Twenty percent of the nurses had symptoms of PTSD. In addition, 100% of the nurses reported that administrators did not offer CISM. To combat the long-term consequences of PTSD, hospital administrators must offer adequate treatment to employees. Further research is needed to expand the sample and gain a wider perspective on PTSD symptoms in nurses who worked during the hurricane.
Li, Qi; Qin, Shaozheng; Rao, Li-Lin; Zhang, Wencai; Ying, Xiaoping; Guo, Xiuyan; Guo, Chunyan; Ding, Jinghong; Li, Shu; Luo, Jing
2011-01-01
The vast majority of decision-making research is performed under the assumption of the value maximizing principle. This principle implies that when making decisions, individuals try to optimize outcomes on the basis of cold mathematical equations. However, decisions are emotion-laden rather than cool and analytic when they tap into life-threatening considerations. Using functional magnetic resonance imaging (fMRI), this study investigated the neural mechanisms underlying vital loss decisions. Participants were asked to make a forced choice between two losses across three conditions: both losses are trivial (trivial-trivial), both losses are vital (vital-vital), or one loss is trivial and the other is vital (vital-trivial). Our results revealed that the amygdala was more active and correlated positively with self-reported negative emotion associated with choice during vital-vital loss decisions, when compared to trivial-trivial loss decisions. The rostral anterior cingulate cortex was also more active and correlated positively with self-reported difficulty of choice during vital-vital loss decisions. Compared to the activity observed during trivial-trivial loss decisions, the orbitofrontal cortex and ventral striatum were more active and correlated positively with self-reported positive emotion of choice during vital-trivial loss decisions. Our findings suggest that vital loss decisions involve emotions and cannot be adequately captured by cold computation of minimizing losses. This research will shed light on how people make vital loss decisions. PMID:21412428
A Stochastic Fractional Dynamics Model of Space-time Variability of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Travis, James E.
2013-01-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.
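The scale dependence at the heart of the model can be demonstrated in a few lines: the variance of the averaged rain rate falls predictably as the averaging window grows. The correlated synthetic series below merely stands in for radar or gauge data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4096
noise = rng.exponential(1.0, n)                    # intermittent, skewed "rain"
kernel = np.exp(-np.arange(60) / 15.0)             # impose temporal correlation
rain = np.convolve(noise, kernel, mode="same")

# Second-moment statistics as a function of the time-averaging window.
for window in (1, 4, 16, 64, 256):
    m = n // window
    averaged = rain[: m * window].reshape(m, window).mean(axis=1)
    print(f"window={window:4d}  var={averaged.var():8.3f}")
```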
NASA Astrophysics Data System (ADS)
Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin
2016-03-01
A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemistry states are tabulated by the combination of an auto-ignition and an extended auto-ignition model. To evaluate the predictive capability of the proposed tabulation method to represent the thermo-chemistry states under conditions of different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to describe the interaction of turbulence and flame, with a beta PDF modelling the distribution of the reaction progress variable. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, respectively, are applied for representing the interaction between the two mixture fractions that are associated with three inlet streams. Comparisons of statistical results show that the two presumed PDF models for the two mixture fractions are both capable of predicting temperature and major species profiles; however, they are shown to have a significant effect on the predictions for intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and change trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.
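The presumed-PDF step can be sketched as follows: filter a tabulated property over a beta PDF whose parameters come from the SGS mean and variance of the progress variable. The table here is a made-up placeholder, not the auto-ignition chemistry.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def presumed_beta_mean(c_grid, phi_table, c_mean, c_var):
    """Filtered phi under a presumed beta PDF of the progress variable c,
    with beta parameters from the SGS mean/variance (method of moments).
    Requires 0 < c_var < c_mean * (1 - c_mean)."""
    k = c_mean * (1.0 - c_mean) / c_var - 1.0
    pdf = beta_dist.pdf(c_grid, c_mean * k, (1.0 - c_mean) * k)
    w = pdf / pdf.sum()                      # discrete quadrature weights
    return float(np.sum(phi_table * w))

c = np.linspace(1e-6, 1.0 - 1e-6, 401)       # progress-variable grid
phi = 300.0 + 1500.0 * c**2                  # stand-in tabulated temperature (K)
print(presumed_beta_mean(c, phi, c_mean=0.5, c_var=0.05))
```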
Kumar Puttrevu, Santosh; Ramakrishna, Rachumallu; Bhateria, Manisha; Jain, Moon; Hanif, Kashif; Bhatta, Rabi Sankar
2017-05-01
A pharmacokinetic-pharmacodynamic (PK-PD) model was developed to describe the time course of blood pressure following oral administration of azilsartan medoxomil (AZM) and/or chlorthalidone (CLT) in spontaneously hypertensive (SH) rats. The drug concentrations and pharmacological effects, including systolic blood pressure (SBP) and diastolic blood pressure (DBP), were measured by liquid chromatography-tandem mass spectrometry (LC-MS/MS) and tail-cuff manometry, respectively. Sequential PK-PD analysis was performed, wherein the plasma concentration-time data were modeled by one-compartment analysis. Subsequently, PD parameters were calculated to describe the time-concentration-response relationship using an indirect response (IDR) PK-PD model. The combination of AZ and CLT had a greater BP-lowering effect than AZ or CLT alone, despite the absence of a pharmacokinetic interaction between the two drugs. These findings suggest a noncompetitive, synergistic antihypertensive pharmacodynamic interaction between AZ and CLT, which was simulated by an inhibitory function of AZ and a stimulatory function of CLT after concomitant administration of the two drugs. The present model was able to capture the turnover of blood pressure adequately at different time points at two different dose levels. The current PK-PD model was successfully utilized in the simulation of the PD effect at a dose combination of 0.5 and 2.5 mg/kg for AZ and CLT, respectively. The developed preclinical PK-PD model may provide guidance in optimizing the dose ratio of the individual drugs in combined pharmacotherapy of AZ and CLT in clinical situations.
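A minimal indirect-response PK-PD sketch in the spirit of the model described: one-compartment oral PK for each drug, with AZ inhibiting the production and CLT stimulating the loss of blood pressure. Every rate constant and dose below is an illustrative placeholder, not a fitted estimate.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka_az, ke_az, ka_clt, ke_clt = 1.2, 0.3, 0.9, 0.15   # absorption/elimination (1/h)
kin, kout, R0 = 30.0, 0.2, 150.0                     # turnover; baseline = kin/kout
imax, ic50, smax, sc50 = 0.4, 2.0, 0.5, 1.5          # Emax-type drug effects

def rhs(t, y):
    a_az, c_az, a_clt, c_clt, r = y
    inhib = 1.0 - imax * c_az / (ic50 + c_az)        # AZ inhibits production
    stim = 1.0 + smax * c_clt / (sc50 + c_clt)       # CLT stimulates loss
    return [-ka_az * a_az,
            ka_az * a_az - ke_az * c_az,
            -ka_clt * a_clt,
            ka_clt * a_clt - ke_clt * c_clt,
            kin * inhib - kout * stim * r]           # indirect response for BP

y0 = [10.0, 0.0, 5.0, 0.0, R0]                       # oral doses start in gut depots
sol = solve_ivp(rhs, (0, 48), y0, dense_output=True)
print("SBP at 24 h:", sol.sol(24.0)[-1].round(1))    # mmHg, combined treatment
```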
Developing a model for the adequate description of electronic communication in hospitals.
Saboor, Samrend; Ammenwerth, Elske
2011-01-01
Adequate information and communication technology (ICT) can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. In order to support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication, comprising 61 entities and 91 relationships. The data model comprises and organizes all details that are necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach.
Stochastic time series analysis of fetal heart-rate variability
NASA Astrophysics Data System (ADS)
Shariati, M. A.; Dripps, J. H.
1990-06-01
Fetal heart rate (FHR) is one of the important features of fetal biophysical activity, and its long-term monitoring is used for the antepartum (the period of pregnancy before labour) assessment of fetal well-being. As yet, however, no successful method has been proposed to quantitatively represent the variety of random non-white patterns seen in FHR. The objective of this paper is to address this issue. In this study, the Box-Jenkins method of model identification and diagnostic checking was used on phonocardiographically derived (averaged) FHR time series. Models remained exclusively autoregressive (AR). Kalman filtering in conjunction with a maximum likelihood estimation technique forms the parametric estimator. Diagnostics performed on the residuals indicated that a second-order model may be adequate in capturing the type of variability observed in 1 to 2 min data windows of FHR. The scheme may be viewed as a means of data reduction of a highly redundant information source. This allows much more efficient transmission of FHR information from remote locations to places with facilities and expertise for closer analysis. The extracted parameters are intended to reflect numerically the important FHR features that are normally picked up visually by experts for their assessments. As a result, long-term FHR recorded during the antepartum period could then be screened quantitatively for detection of patterns considered normal or abnormal.
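The AR modelling step can be illustrated with a least-squares AR(2) fit and a residual-whiteness check on a synthetic FHR-like series; all values and coefficients below are invented for illustration.

```python
import numpy as np

def fit_ar(y, order=2):
    """Least-squares AR fit: y[t] = a1*y[t-1] + ... + ap*y[t-p] + e[t]."""
    Y = y[order:]
    X = np.column_stack([y[order - k:-k] for k in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef, Y - X @ coef

# Synthetic stand-in for a 1-2 min window of averaged FHR (beats/min).
rng = np.random.default_rng(5)
fhr = np.zeros(120)
for t in range(2, 120):                      # true AR(2) dynamics plus noise
    fhr[t] = 1.2 * fhr[t - 1] - 0.4 * fhr[t - 2] + rng.normal(0, 1.0)
fhr += 140.0

coef, resid = fit_ar(fhr - fhr.mean(), order=2)
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # ~0 if residuals are white
print("AR coefficients:", coef.round(2), " residual lag-1 autocorr:", round(lag1, 2))
```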
Heritage, Brody; Quail, Michelle; Cocks, Naomi
2018-03-05
This study explored the predictors of the outcomes of turnover and occupation attrition intentions for speech-language pathologists. The researchers examined the mediating effects of job satisfaction and strain on the relationship between stress and the latter outcomes. Additionally, the researchers examined the importance of embeddedness in predicting turnover intentions after accounting for stress, strain and job satisfaction. An online questionnaire was used to explore turnover and attrition intentions in 293 Australian speech-language pathologists. Job satisfaction contributed to a significant indirect effect on the stress and turnover intention relationship, however strain did not. There was a significant direct effect between stress and turnover intention after accounting for covariates. Embeddedness and the perceived availability of alternative jobs were also found to be significant predictors of turnover intentions. The mediating model used to predict turnover intentions also predicted occupation attrition intentions. The effect of stress on occupation attrition intentions was indirect in nature, the direct effect negated by mediating variables. Qualitative data provided complementary evidence to the quantitative model. The findings indicate that the proposed parsimonious model adequately captures predictors of speech-language pathologists' turnover and occupation attrition intentions. Workplaces and the profession may wish to consider these retention factors.
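The mediation logic (stress -> job satisfaction -> turnover intention) can be sketched with a bootstrap of the a*b indirect effect; the simulated data below only mimic the direction of the reported relationships, not the study's estimates.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: x -> m (path a), then m -> y controlling for x (path b)."""
    a = np.polyfit(x, m, 1)[0]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(6)
n = 293                                           # matches the sample size above
stress = rng.normal(0, 1, n)
satisfaction = -0.5 * stress + rng.normal(0, 1, n)
turnover = 0.2 * stress - 0.6 * satisfaction + rng.normal(0, 1, n)

boot = [indirect_effect(*(arr[idx] for arr in (stress, satisfaction, turnover)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 => mediation
```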
The effect of particle size distribution on the design of urban stormwater control measures
Selbig, William R.; Fienen, Michael N.; Horwatich, Judy A.; Bannerman, Roger T.
2016-01-01
An urban pollutant loading model was used to demonstrate how incorrect assumptions on the particle size distribution (PSD) in urban runoff can alter the design characteristics of stormwater control measures (SCMs) used to remove solids in stormwater. Field-measured PSD, although highly variable, is generally coarser than the widely-accepted PSD characterized by the Nationwide Urban Runoff Program (NURP). PSDs can be predicted based on environmental surrogate data. There were no appreciable differences in predicted PSD when grouped by season. Model simulations of a wet detention pond and catch basin showed a much smaller surface area is needed to achieve the same level of solids removal using the median value of field-measured PSD as compared to NURP PSD. Therefore, SCMs that used the NURP PSD in the design process could be unnecessarily oversized. The median of measured PSDs, although more site-specific than NURP PSDs, could still misrepresent the efficiency of an SCM because it may not adequately capture the variability of individual runoff events. Future pollutant loading models may account for this variability through regression with environmental surrogates, but until then, without proper site characterization, the adoption of a single PSD to represent all runoff conditions may result in SCMs that are under- or over-sized, rendering them ineffective or unnecessarily costly.
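One way to see the design sensitivity: under ideal settling, a particle is captured when its Stokes velocity exceeds the pond overflow rate Q/A, so the required surface area depends directly on the assumed PSD. The two PSDs below are illustrative, not the measured or NURP distributions.

```python
import numpy as np

G, RHO_S, RHO_W, MU = 9.81, 2650.0, 998.0, 1.0e-3   # SI units; quartz in water

def stokes_velocity(d_m):
    """Settling velocity (m/s) of a particle of diameter d (m), Stokes' law."""
    return G * (RHO_S - RHO_W) * d_m**2 / (18.0 * MU)

def pond_area_for_removal(d_um, frac, q_m3s, target=0.8):
    """Smallest pond area whose overflow rate Q/A lets the target mass
    fraction of the PSD settle (ideal-settling assumption)."""
    v = stokes_velocity(np.asarray(d_um) * 1e-6)
    for area in np.arange(10.0, 20000.0, 10.0):
        captured = np.sum(np.asarray(frac) * np.minimum(v * area / q_m3s, 1.0))
        if captured >= target:
            return area
    return np.inf

d = [5, 20, 60, 150, 400]                  # particle diameters (micrometres)
coarse = [0.10, 0.15, 0.25, 0.30, 0.20]    # field-like, coarser PSD (mass fractions)
fine = [0.30, 0.30, 0.20, 0.15, 0.05]      # finer, NURP-like PSD (illustrative)
for name, psd in [("coarse", coarse), ("NURP-like", fine)]:
    print(name, pond_area_for_removal(d, psd, q_m3s=0.05), "m2")
```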
Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods
NASA Astrophysics Data System (ADS)
Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.
2012-03-01
In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and degrade CT perfusion maps greatly if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are then estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low-dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simple prior spatial constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.
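In its simplest Gaussian form, the smoothness prior reduces to a quadratic roughness penalty with a closed-form MAP solution, as this toy reconstruction shows (the system matrix, noise level, and prior strength are arbitrary illustrations, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
x_true = np.sin(np.linspace(0, 3, n))          # smooth residual-function stand-in
A = rng.normal(size=(n, n)) / np.sqrt(n)       # known system/convolution matrix
y = A @ x_true + rng.normal(0, 0.3, n)         # noisy low-mAs measurements

D = np.diff(np.eye(n), axis=0)                 # first-difference roughness operator
lam = 5.0                                      # prior strength (tunable)
x_map = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)   # ridge-type MAP

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print("error, plain LS:", round(float(np.linalg.norm(x_ls - x_true)), 2))
print("error, MAP     :", round(float(np.linalg.norm(x_map - x_true)), 2))
```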
A scoping review of nursing workforce planning and forecasting research.
Squires, Allison; Jylhä, Virpi; Jun, Jin; Ensio, Anneli; Kinnunen, Juha
2017-11-01
This study critically evaluates forecasting models and their content in workforce planning policies for nursing professionals and highlights the strengths and weaknesses of existing approaches. Although macro-level nursing workforce issues may not be the first thing that many nurse managers consider in daily operations, the current and impending nursing shortage in many countries makes nursing-specific models for workforce forecasting important. A scoping review was conducted using a directed and summative content analysis approach to capture supply and demand analytic methods of nurse workforce planning and forecasting. The literature on nurse workforce forecasting studies published in peer-reviewed journals as well as in grey literature was included in the scoping review. Thirty-six studies met the inclusion criteria, with the majority coming from the USA. Forecasting methods were biased towards service utilization analyses and were not consistent across studies. Current methods for nurse workforce forecasting are inconsistent and have not accounted sufficiently for socioeconomic and political factors that can influence workforce projections. Additional studies examining past trends are needed to improve future modelling. Accurate nursing workforce forecasting can help nurse managers, administrators and policy makers to understand the supply and demand of the workforce to prepare and maintain an adequate and competent current and future workforce. © 2017 John Wiley & Sons Ltd.
A strain-mediated corrosion model for bioabsorbable metallic stents.
Galvin, E; O'Brien, D; Cummins, C; Mac Donald, B J; Lally, C
2017-06-01
This paper presents a strain-mediated phenomenological corrosion model, based on the discrete finite element modelling method which was developed for use with the ANSYS Implicit finite element code. The corrosion model was calibrated from experimental data and used to simulate the corrosion performance of a WE43 magnesium alloy stent. The model was found to be capable of predicting the experimentally observed plastic strain-mediated mass loss profile. The non-linear plastic strain model, extrapolated from the experimental data, was also found to adequately capture the corrosion-induced reduction in the radial stiffness of the stent over time. The model developed will help direct future design efforts towards the minimisation of plastic strain during device manufacture, deployment and in-service, in order to reduce corrosion rates and prolong the mechanical integrity of magnesium devices. The need for corrosion models that explore the interaction of strain with corrosion damage has been recognised as one of the current challenges in degradable material modelling (Gastaldi et al., 2011). A finite element based plastic strain-mediated phenomenological corrosion model was developed in this work and was calibrated based on the results of the corrosion experiments. It was found to be capable of predicting the experimentally observed plastic strain-mediated mass loss profile and the corrosion-induced reduction in the radial stiffness of the stent over time. To the author's knowledge, the results presented here represent the first experimental calibration of a plastic strain-mediated corrosion model of a corroding magnesium stent. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
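A toy version of the strain-mediated scheme: each surface element accumulates corrosion damage at a rate scaled by its plastic strain and is deleted (mass loss) once damage reaches one. The rate constants are invented, and no finite-element coupling is included.

```python
import numpy as np

rng = np.random.default_rng(8)
n_elem = 1000
# Plastic strains as they might come from a prior FE crimp/deploy simulation.
plastic_strain = rng.uniform(0.0, 0.10, n_elem)
k0, alpha = 0.02, 25.0                         # base rate (1/day), strain sensitivity
damage = np.zeros(n_elem)
alive = np.ones(n_elem, dtype=bool)

for day in range(1, 61):
    rate = k0 * (1.0 + alpha * plastic_strain)  # strain accelerates corrosion
    damage[alive] += rate[alive]                # accumulate damage on intact elements
    alive &= damage < 1.0                       # remove fully corroded elements
    if day % 20 == 0:
        print(f"day {day:2d}: mass loss {100 * (1 - alive.mean()):.1f}%")
```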
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
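One possible implementation of the consistency check is a Poisson GLM of camera photo counts on telemetry utilization density (plus sex); the data below are simulated placeholders, not the fisher dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n_cams = 60
utilization = rng.gamma(2.0, 1.0, n_cams)        # telemetry UD value at each camera
sex_male = rng.integers(0, 2, n_cams)            # 1 = male
log_mu = -1.0 + 0.8 * np.log(utilization) + 0.5 * sex_male
counts = rng.poisson(np.exp(log_mu))             # photo counts per camera

X = sm.add_constant(np.column_stack([np.log(utilization), sex_male]))
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.params.round(2))                     # recovers ~[-1.0, 0.8, 0.5]
```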
NASA Astrophysics Data System (ADS)
Henson, W.; Baillie, M. N.; Martin, D.
2017-12-01
The lack of detailed and dynamic land-use data is one of the biggest data deficiencies facing food and water security issues. Better land-use data result in improved integrated hydrologic models that are needed to look at the feedback between land and water use, specifically for adequately representing changes and dynamics in rainfall-runoff, urban and agricultural water demands, and surface fluxes of water (e.g., evapotranspiration, runoff, and infiltration). Currently, land-use data are typically compiled from annual (e.g., CropScape) or multi-year composites, if mapped at all. While this approach provides information about interannual land-use practices, it does not capture the dynamic changes in highly developed agricultural lands prevalent in California agriculture, such as (1) dynamic land-use changes from high-frequency multi-crop rotations and (2) uncertainty in sub-annual crop distribution, planting times, and cropped areas. California has collected spatially distributed data for agricultural pesticide use since 1974 through the California Pesticide Information Portal (CalPIP). A method leveraging the CalPIP database has been developed to provide vital information about dynamic agricultural land use (e.g., crop distribution and planting times) and water demand issues in Salinas Valley, California, along the central coast. This 7-billion-dollar-per-year agricultural area produces up to 50% of U.S. lettuce and broccoli. Therefore, effective and sustainable water resource development in the area must balance the needs of this essential industry, other beneficial uses, and the environment. This new tool provides a way to supply more dynamic crop data to hydrologic models. While the current application focuses on the Salinas Valley, the methods are extensible to all of California and other states with similar pesticide reporting. The improvements in representing variability in crop patterns and associated water demands increase our understanding of land-use change and the precision of hydrologic decision models. Ultimately, further refinement to the parcel level will completely capture the changing topology of agricultural land use.
Using Multitheory Model of Health Behavior Change to Predict Adequate Sleep Behavior.
Knowlden, Adam P; Sharma, Manoj; Nahar, Vinayak K
The purpose of this article was to use the multitheory model of health behavior change in predicting adequate sleep behavior in college students. A valid and reliable survey was administered in a cross-sectional design (n = 151). For initiation of adequate sleep behavior, the construct of behavioral confidence (P < .001) was found to be significant and accounted for 24.4% of the variance. For sustenance of adequate sleep behavior, changes in social environment (P < .02), emotional transformation (P < .001), and practice for change (P < .001) were significant and accounted for 34.2% of the variance.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
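The retrieval side that such binary codes enable is simple to sketch: rank database items by Hamming distance to a query code. The codes below are random placeholders, not SSVH outputs.

```python
# Hamming-distance retrieval over binary codes (illustrative, random codes).
import numpy as np

rng = np.random.default_rng(0)
db_codes = rng.integers(0, 2, size=(10000, 64), dtype=np.uint8)  # database
query = rng.integers(0, 2, size=64, dtype=np.uint8)

hamming = np.count_nonzero(db_codes != query, axis=1)
top10 = np.argsort(hamming)[:10]
print("closest videos:", top10, "distances:", hamming[top10])
```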
Comparison of electromagnetic and nuclear dissociation of 17Ne
NASA Astrophysics Data System (ADS)
Wamers, F.; Marganiec, J.; Aksouh, F.; Aksyutina, Yu.; Alvarez-Pol, H.; Aumann, T.; Beceiro-Novo, S.; Bertulani, C. A.; Boretzky, K.; Borge, M. J. G.; Chartier, M.; Chatillon, A.; Chulkov, L. V.; Cortina-Gil, D.; Emling, H.; Ershova, O.; Fraile, L. M.; Fynbo, H. O. U.; Galaviz, D.; Geissel, H.; Heil, M.; Hoffmann, D. H. H.; Hoffman, J.; Johansson, H. T.; Jonson, B.; Karagiannis, C.; Kiselev, O. A.; Kratz, J. V.; Kulessa, R.; Kurz, N.; Langer, C.; Lantz, M.; Le Bleis, T.; Lehr, C.; Lemmon, R.; Litvinov, Yu. A.; Mahata, K.; Müntz, C.; Nilsson, T.; Nociforo, C.; Ott, W.; Panin, V.; Paschalis, S.; Perea, A.; Plag, R.; Reifarth, R.; Richter, A.; Riisager, K.; Rodriguez-Tajes, C.; Rossi, D.; Savran, D.; Schrieder, G.; Simon, H.; Stroth, J.; Sümmerer, K.; Tengblad, O.; Typel, S.; Weick, H.; Wiescher, M.; Wimmer, C.
2018-03-01
The Borromean drip-line nucleus 17Ne has been suggested to possess a two-proton halo structure in its ground state. In the astrophysical rp-process, where the two-proton capture reaction 15O(2p,γ)17Ne plays an important role, the calculated reaction rate differs by several orders of magnitude between different theoretical approaches. To add to the understanding of the 17Ne structure, we have studied nuclear and electromagnetic dissociation. A 500 MeV/u 17Ne beam was directed toward lead, carbon, and polyethylene targets. Oxygen isotopes in the final state were measured in coincidence with one or two protons. Different reaction branches in the dissociation of 17Ne were disentangled. The relative populations of s and d states in 16F were determined for light and heavy targets. The differential cross section for electromagnetic dissociation (EMD) shows a continuous internal energy spectrum in the three-body system 15O+2p. The 17Ne EMD data were compared to current theoretical models. None of them, however, yields satisfactory agreement with the experimental data presented here. These new data may facilitate future development of adequate models for description of the fragmentation process.
Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles
Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián
2016-01-01
In Robot-Assisted Rehabilitation (RAR), the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR. PMID:27403044
Spatial optimization for decentralized non-potable water reuse
NASA Astrophysics Data System (ADS)
Kavvada, Olga; Nelson, Kara L.; Horvath, Arpad
2018-06-01
Decentralization has the potential to reduce the scale of the piped distribution network needed to enable non-potable water reuse (NPR) in urban areas by producing recycled water closer to its point of use. However, tradeoffs exist between the economies of scale of treatment facilities and the size of the conveyance infrastructure, including energy for upgradient distribution of recycled water. To adequately capture the impacts from distribution pipes and pumping requirements, site-specific conditions must be accounted for. In this study, a generalized framework (a heuristic modeling approach using geospatial algorithms) is developed that estimates the financial cost, the energy use, and the greenhouse gas emissions associated with NPR (for toilet flushing) as a function of scale of treatment and conveyance networks with the goal of determining the optimal degree of decentralization. A decision-support platform is developed to assess and visualize NPR system designs considering topography, economies of scale, and building size. The platform can be used for scenario development to explore the optimal system size based on the layout of current or new buildings. The model also promotes technology innovation by facilitating the systems-level comparison of options to lower costs, improve energy efficiency, and lower greenhouse gas emissions.
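The core tradeoff this framework optimizes can be illustrated with a toy cost sweep: treatment cost grows sub-linearly with facility capacity (economies of scale), while conveyance cost shrinks as facilities move closer to demand. All coefficients and the exponent below are invented for illustration, not values from the study.

```python
# Toy tradeoff between treatment economies of scale and conveyance cost.
import numpy as np

demand = 1000.0                       # total non-potable demand served
n_plants = np.arange(1, 51)           # candidate degrees of decentralization
q = demand / n_plants                 # capacity per facility

treatment = n_plants * 10.0 * q ** 0.6     # sub-linear unit cost: scale economies
conveyance = 3300.0 / np.sqrt(n_plants)    # pipes and pumping shrink with density
total = treatment + conveyance

print("optimal number of facilities (toy):", n_plants[np.argmin(total)])
```

The study's geospatial algorithms replace these scalar cost curves with topography- and building-resolved estimates, but the optimization logic is the same.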
NASA Astrophysics Data System (ADS)
Felfelani, Farshid; Wada, Yoshihide; Longuevergne, Laurent; Pokhrel, Yadu N.
2017-10-01
Hydrological models and the data derived from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have been widely used to study the variations in terrestrial water storage (TWS) over large regions. However, both GRACE products and model results suffer from inherent uncertainties, calling for the need to make a combined use of GRACE and models to examine the variations in total TWS and its individual components, especially in relation to natural and human-induced changes in the terrestrial water cycle. In this study, we use the results from two state-of-the-art hydrological models and different GRACE spherical harmonic products to examine the variations in TWS and its individual components, and to attribute the changes to natural and human-induced factors over large global river basins. Analysis of the spatial patterns of the long-term trend in TWS from the two models and GRACE suggests that both models capture the GRACE-measured direction of change but differ from GRACE, as well as from each other, in the magnitude over different regions. A detailed analysis of the seasonal cycle of TWS variations over 30 river basins shows notable differences not only between models and GRACE but also among different GRACE products and between the two models. Further, it is found that while one model performs well in highly managed river basins, it fails to reproduce the GRACE-observed signal in snow-dominated regions, and vice versa. The isolation of natural and human-induced changes in TWS in some of the managed basins reveals a consistently declining TWS trend during 2002-2010; however, significant differences are again obvious both between GRACE and models and among different GRACE products and models. Results from the decomposition of the TWS signal into the general trend and seasonality indicate that neither model adequately captures both the trend and the seasonality in the managed or snow-dominated basins, implying that the TWS variations from a single model cannot be reliably used for all global regions. It is also found that the uncertainties arising from climate forcing datasets can introduce significant additional uncertainties, making direct comparison of model results and GRACE products even more difficult. Our results highlight the need to further improve the representation of human land-water management and snow processes in large-scale models to enable a reliable use of models and GRACE to study the changes in freshwater systems in all global regions.
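The trend-plus-seasonality decomposition mentioned above is typically a least-squares fit of a linear trend and an annual harmonic. A minimal sketch on a synthetic monthly TWS anomaly series (values and units invented):

```python
# Decompose a monthly TWS series into linear trend + annual cycle.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(120) / 12.0                               # 10 years, monthly
tws = -0.5 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

# Design matrix: intercept, trend, annual sine/cosine
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, tws, rcond=None)
print(f"trend = {coef[1]:+.2f} cm/yr, "
      f"seasonal amplitude = {np.hypot(coef[2], coef[3]):.2f} cm")
```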
Lee, Jia-Cheng; Chuang, Keh-Shih; Chen, Yi-Wei; Hsu, Fang-Yuh; Chou, Fong-In; Yen, Sang-Hue; Wu, Yuan-Hung
2017-01-01
Diffuse intrinsic pontine glioma is a very frustrating disease. Since the tumor infiltrates the brain stem, surgical removal is often impossible. For conventional radiotherapy, the dose constraint of the brain stem impedes attempts at further dose escalation. Boron neutron capture therapy (BNCT), a targeted radiotherapy, carries the potential to selectively irradiate tumors with an adequate dose while sparing adjacent normal tissue. In this study, 12 consecutive patients treated with conventional radiotherapy in our institute were reviewed to evaluate the feasibility of BNCT. NCTPlan Ver. 1.1.44 was used for dose calculations. Compared with two and three fields, the average maximal dose to the normal brain may be lowered to 7.35 ± 0.72 Gy-Eq by four-field irradiation. The mean ratio of minimal dose to clinical target volume and maximal dose to normal tissue was 2.41 ± 0.26 by four-field irradiation. A therapeutic benefit may be expected with multi-field boron neutron capture therapy to treat diffuse intrinsic pontine glioma without craniotomy, while the maximal dose to the normal brain would be minimized by using the four-field setting. PMID:28662135
DOE Office of Scientific and Technical Information (OSTI.GOV)
Firestone, Ryan; Marnay, Chris
The on-site generation of electricity can offer building owners and occupiers financial benefits as well as social benefits such as reduced grid congestion, improved energy efficiency, and reduced greenhouse gas emissions. Combined heat and power (CHP), or cogeneration, systems make use of the waste heat from the generator for site heating needs. Real-time optimal dispatch of CHP systems is difficult to determine because of complicated electricity tariffs and uncertainty in CHP equipment availability, energy prices, and system loads. Typically, CHP systems use simple heuristic control strategies. This paper describes a method of determining optimal control in real-time and applies it to a light industrial site in San Diego, California, to examine: 1) the added benefit of optimal over heuristic controls, 2) the price elasticity of the system, and 3) the site-attributable greenhouse gas emissions, all under three different tariff structures. Results suggest that heuristic controls are adequate under the current tariff structure and relatively high electricity prices, capturing 97 percent of the value of the distributed generation system. Even more value could be captured by simply not running the CHP system during times of unusually high natural gas prices. Under hypothetical real-time pricing of electricity, heuristic controls would capture only 70 percent of the value of distributed generation.
Numerical study of the direct pressure effect of acoustic waves in planar premixed flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, H.; Jimenez, C.
Recently, the unsteady response of 1-D premixed flames to acoustic pressure waves, for frequencies below and above the inverse of the flame transit time, was investigated experimentally using OH chemiluminescence by Wangher (2008). They compared the frequency dependence of the measured response to the prediction of an analytical model proposed by Clavin et al. (1990), derived from the standard flame model (one-step Arrhenius kinetics), and to a similar model proposed by McIntosh (1991). Discrepancies between the experimental results and the model led to the conclusion that the standard model does not provide an adequate description of the unsteady response of real flames and that it is necessary to investigate more realistic chemical models. Here we follow exactly this suggestion and perform numerical studies of the response of lean methane flames using different reaction mechanisms. We find that the global flame response obtained with both detailed chemistry (GRI3.0) and a reduced multi-step model by Peters (1996) lies slightly above the predictions of the analytical model, but is close to the experimental results. We additionally used an irreversible one-step Arrhenius reaction model and show the effect of the pressure dependence of the global reaction rate on the flame response. Our results suggest, first, that the current models have to be extended to capture the amplitude and phase results of the detailed mechanisms, and second, that the correlation between the heat release and the measured OH* chemiluminescence should be studied in more depth.
A goodness-of-fit test for capture-recapture model M(t) under closure
Stanley, T.R.; Burnham, K.P.
1999-01-01
A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and uses chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
Head Motion Modeling for Human Behavior Analysis in Dyadic Interaction
Xiao, Bo; Georgiou, Panayiotis; Baucom, Brian; Narayanan, Shrikanth S.
2015-01-01
This paper presents a computational study of head motion in human interaction, notably of its role in conveying interlocutors’ behavioral characteristics. Head motion is physically complex and carries rich information; current modeling approaches based on visual signals, however, are still limited in their ability to adequately capture these important properties. Guided by the methodology of kinesics, we propose a data driven approach to identify typical head motion patterns. The approach follows the steps of first segmenting motion events, then parametrically representing the motion by linear predictive features, and finally generalizing the motion types using Gaussian mixture models. The proposed approach is experimentally validated using video recordings of communication sessions from real couples involved in a couples therapy study. In particular we use the head motion model to classify binarized expert judgments of the interactants’ specific behavioral characteristics where entrainment in head motion is hypothesized to play a role: Acceptance, Blame, Positive, and Negative behavior. We achieve accuracies in the range of 60% to 70% for the various experimental settings and conditions. In addition, we describe a measure of motion similarity between the interaction partners based on the proposed model. We show that the relative change of head motion similarity during the interaction significantly correlates with the expert judgments of the interactants’ behavioral characteristics. These findings demonstrate the effectiveness of the proposed head motion model, and underscore the promise of analyzing human behavioral characteristics through signal processing methods. PMID:26557047
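A compressed sketch of the segment, parametrize, and cluster pipeline described above: fixed-length motion segments, linear predictive coefficients from the autocorrelation (Yule-Walker) equations, and a Gaussian mixture over the feature vectors. Window length, LPC order, and component count are arbitrary choices here, and the signal is synthetic rather than head-pose data.

```python
# Segment -> LPC features -> GMM motion-type clustering (illustrative).
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.mixture import GaussianMixture

def lpc(frame, order=4):
    """Linear predictive coefficients via the autocorrelation method."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Solve the Toeplitz system R a = r[1:order+1] (Yule-Walker)
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=4000))      # stand-in for a head-pose angle
frames = signal.reshape(-1, 100)               # fixed-length motion segments
features = np.array([lpc(f - f.mean()) for f in frames])

gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
print("motion-type labels:", gmm.predict(features)[:10])
```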
NASA Astrophysics Data System (ADS)
Akinsanola, A. A.; Ajayi, V. O.; Adejare, A. T.; Adeyeri, O. E.; Gbode, I. E.; Ogunjobi, K. O.; Nikulin, G.; Abolude, A. T.
2018-04-01
This study presents an evaluation of the ability of the Rossby Centre Regional Climate Model (RCA4), driven by nine global circulation models (GCMs), to skilfully reproduce the key features of rainfall climatology over West Africa for the period 1980-2005. The seasonal climatology and annual cycle of the RCA4 simulations were assessed over three homogeneous subregions of West Africa (Guinea coast, Savannah, and Sahel) and evaluated using observed precipitation data from the Global Precipitation Climatology Project (GPCP). Furthermore, the model output was evaluated using a wide range of statistical measures. The interseasonal and interannual variability of the RCA4 were further assessed over the subregions and the whole West Africa domain. Results indicate that the RCA4 captures the spatial and interseasonal rainfall patterns adequately but performs weakly over the Guinea coast. Findings on interannual rainfall variability indicate that the model performs better over the larger West Africa domain than over the subregions. The largest difference across the RCA4 simulated annual rainfall was found in the Sahel. Results from the Mann-Kendall test showed no significant trend in annual rainfall for the 1980-2005 period, either in the GPCP observation data or in the model simulations over West Africa. In many aspects, the RCA4 simulation driven by HadGEM2-ES performs best over the region. The use of the multimodel ensemble mean has resulted in an improved representation of rainfall characteristics over the study domain.
Modelling the effect of bednet coverage on malaria transmission in South Sudan.
Mukhtar, Abdulaziz Y A; Munyakazi, Justin B; Ouifki, Rachid; Clark, Allan E
2018-01-01
A campaign for malaria control using Long Lasting Insecticide Nets (LLINs) was launched in South Sudan in 2009. The success of such a campaign often depends upon adequate available resources and reliable surveillance data which help officials understand existing infections. An optimal allocation of resources for malaria control at a sub-national scale is therefore paramount to the success of efforts to reduce malaria prevalence. In this paper, we extend an existing SIR mathematical model to capture the effect of LLINs on malaria transmission. Available data on malaria are utilized to determine realistic parameter values of this model using a Bayesian approach via Markov Chain Monte Carlo (MCMC) methods. Then, we explore the parasite prevalence under a continued rollout of LLINs in three different settings in order to create a sub-national projection of malaria. Further, we calculate the model's basic reproduction number R0 and study its sensitivity to LLIN coverage and efficacy. From the numerical simulation results, we obtain a basic reproduction number greater than one, confirming a substantial increase of incidence cases if no form of intervention takes place in the community. This work indicates that an effective use of LLINs may reduce R0 and hence malaria transmission. We hope that this study will provide a basis for recommending a scaling-up of the entry point of LLIN distribution that targets households in areas at risk of malaria.
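A heavily simplified sketch of the model class involved: an SIR host compartment coupled to an infectious-vector fraction, with LLIN coverage c scaling down the biting rate. All parameter values are illustrative assumptions, not the calibrated South Sudan posteriors.

```python
# Toy SIR-plus-vector malaria model with LLIN coverage c reducing biting.
import numpy as np
from scipy.integrate import solve_ivp

def malaria(t, y, c):
    S, I, R, V = y                      # host fractions; V = infectious vectors
    a = 0.3 * (1.0 - 0.9 * c)           # biting rate cut by net coverage c
    b, gamma, mu_v, lam = 0.5, 1 / 100, 1 / 14, 0.25
    foi = a * b * V                     # force of infection on hosts
    return [-foi * S,
            foi * S - gamma * I,
            gamma * I,
            lam * a * I * (1 - V) - mu_v * V]

for c in (0.0, 0.4, 0.8):
    sol = solve_ivp(malaria, (0, 365), [0.95, 0.05, 0.0, 0.01], args=(c,))
    print(f"coverage {c:.0%}: infected fraction at day 365 = {sol.y[1, -1]:.3f}")
```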
NASA Technical Reports Server (NTRS)
Andrews, Arlyn E.; Kawa, S. Randolph; Einaudi, Franco (Technical Monitor)
2001-01-01
Mounting concern regarding the possibility that increasing carbon dioxide concentrations will initiate climate change has stimulated interest in the feasibility of measuring CO2 mixing ratios from satellites. Currently, the most comprehensive set of atmospheric CO2 data is from the NOAA CMDL cooperative air sampling network, consisting of more than 40 sites where flasks of air are collected approximately weekly. Sporadic observations in the troposphere and stratosphere from airborne in situ and flask samplers are also available. Although the surface network is extensive, there is a dearth of data in the Southern Hemisphere and most of the stations were intentionally placed in remote areas, far from major sources. Sufficiently precise satellite observations with adequate spatial and temporal resolution would substantially increase our knowledge of the atmospheric CO2 distribution and would undoubtedly lead to improved understanding of the global carbon budget. We use a 3-D chemical transport model to investigate the ability of potential satellite instruments with a variety of orbits, horizontal resolution and vertical weighting functions to capture the variation in the modeled CO2 fields. The model is driven by analyzed winds from the Goddard Data Assimilation Office. Simulated CO2 fields are compared with existing surface and aircraft data, and the effects of the model convection scheme and representation of the planetary boundary layer are considered.
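The vertical weighting functions mentioned above reduce a model CO2 profile to the single column value a satellite instrument would report. A minimal sketch, with a made-up pressure-proportional weighting rather than a real instrument kernel:

```python
# Column-weighted CO2 from a synthetic vertical profile (illustrative weights).
import numpy as np

p = np.linspace(1000, 100, 19)              # pressure levels, hPa
co2 = 370 + 8 * np.exp(-(1000 - p) / 300)   # synthetic profile, ppm
w = p / p.sum()                             # pressure-proportional weights

xco2 = np.sum(w * co2)
print(f"column-weighted CO2: {xco2:.1f} ppm")
```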
Mirus, Benjamin B.; Nimmo, J.R.
2013-01-01
The impact of preferential flow on recharge and contaminant transport poses a considerable challenge to water-resources management. Typical hydrologic models require extensive site characterization, but can underestimate fluxes when preferential flow is significant. A recently developed source-responsive model incorporates film-flow theory with conservation of mass to estimate unsaturated-zone preferential fluxes with readily available data. The term source-responsive describes the sensitivity of preferential flow in response to water availability at the source of input. We present the first rigorous tests of a parsimonious formulation for simulating water table fluctuations using two case studies, both in arid regions with thick unsaturated zones of fractured volcanic rock. Diffuse flow theory cannot adequately capture the observed water table responses at both sites; the source-responsive model is a viable alternative. We treat the active area fraction of preferential flow paths as a scaled function of water inputs at the land surface then calibrate the macropore density to fit observed water table rises. Unlike previous applications, we allow the characteristic film-flow velocity to vary, reflecting the lag time between source and deep water table responses. Analysis of model performance and parameter sensitivity for the two case studies underscores the importance of identifying thresholds for initiation of film flow in unsaturated rocks, and suggests that this parsimonious approach is potentially of great practical value.
A cellular automata approach for modeling surface water runoff
NASA Astrophysics Data System (ADS)
Jozefik, Zoltan; Nanu Frechen, Tobias; Hinz, Christoph; Schmidt, Heiko
2015-04-01
This abstract reports the development and application of a two-dimensional cellular automata based model, which couples the dynamics of overland flow, infiltration processes, and surface evolution through sediment transport. The natural hillslopes are represented by their topographic elevation and spatially varying soil properties: infiltration rates and surface roughness coefficients. This model allows simulation of Hortonian overland flow and infiltration during complex rainfall events. An advantage of the cellular automata approach over the kinematic wave equations is that wet/dry interfaces, which often appear in rainfall-driven overland flow, can be accurately captured and are not a source of numerical instabilities. An adaptive explicit time-stepping scheme allows rainfall events to be adequately resolved in time, while large time steps are taken during dry periods for run-time efficiency. The time step is constrained by the CFL condition and mass conservation considerations. The spatial discretization is shown to be first-order accurate. For validation purposes, hydrographs for non-infiltrating and infiltrating plates are compared to the kinematic wave analytic solutions and data taken from the literature [1,2]. Results show that our cellular automata model accurately reproduces hydrograph patterns. However, recent work has shown that even though the hydrograph is satisfactorily reproduced, the flow field within the plot might be inaccurate [3]. For a more stringent validation, we compare steady-state velocity, water flux, and water depth fields to rainfall simulation experiments conducted in Thies, Senegal [3]. Comparisons show that our model is able to accurately capture these flow properties. Currently, a sediment transport and deposition module is being implemented and tested. [1] M. Rousseau, O. Cerdan, O. Delestre, F. Dupros, F. James, S. Cordier. Overland flow modeling with the Shallow Water Equation using a well balanced numerical scheme: Adding efficiency or some more complexity? 2012.
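One possible form of a cellular-automata overland-flow update, sketched under strong simplifying assumptions (steepest-descent D4 routing, periodic boundaries, constant infiltration capacity); this is an illustration of the scheme class, not the authors' model.

```python
# Minimal CA routing step: rain, infiltration, then downhill water transfer.
import numpy as np

def ca_step(depth, elevation, rain, f_cap, frac=0.5):
    """One explicit update: add rain, subtract infiltration capacity, then
    move a fraction of each cell's water to its steepest D4 neighbour.
    Periodic boundaries are used here purely for brevity."""
    depth = np.maximum(depth + rain - f_cap, 0.0)
    head = elevation + depth
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    drops = np.stack([head - np.roll(head, s, axis=(0, 1)) for s in shifts])
    k = drops.argmax(axis=0)                       # steepest-descent direction
    best = drops.max(axis=0)
    q = np.where(best > 0.0, np.minimum(frac * best, depth), 0.0)
    new = depth - q
    for i, (di, dj) in enumerate(shifts):          # deposit q in receiving cells
        new += np.roll(np.where(k == i, q, 0.0), (-di, -dj), axis=(0, 1))
    return new

z = np.add.outer(np.linspace(5.0, 0.0, 50), np.zeros(50))   # planar hillslope
h = np.zeros((50, 50))
for _ in range(200):
    h = ca_step(h, z, rain=0.001, f_cap=0.0004)
print("max water depth after 200 steps:", h.max())
```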
Royle, J. Andrew; Chandler, Richard B.; Sollmann, Rahel; Gardner, Beth
2013-01-01
Spatial Capture-Recapture provides a revolutionary extension of traditional capture-recapture methods for studying animal populations using data from live trapping, camera trapping, DNA sampling, acoustic sampling, and related field methods. This book is a conceptual and methodological synthesis of spatial capture-recapture modeling. As a comprehensive how-to manual, this reference contains detailed examples of a wide range of relevant spatial capture-recapture models for inference about population size and spatial and temporal variation in demographic parameters. Practicing field biologists studying animal populations will find this book to be a useful resource, as will graduate students and professionals in ecology, conservation biology, and fisheries and wildlife management.
SPITFIRE within the MPI Earth system model: Model development and evaluation
NASA Astrophysics Data System (ADS)
Lasslop, Gitta; Thonicke, Kirsten; Kloster, Silvia
2014-09-01
Quantification of the role of fire within the Earth system requires an adequate representation of fire as a climate-controlled process within an Earth system model. To be able to address questions on the interaction between fire and the Earth system, we implemented the mechanistic fire model SPITFIRE in JSBACH, the land surface model of the MPI Earth system model. Here, we document the model implementation as well as model modifications. We evaluate our model results by comparing the simulation to the GFED version 3 satellite-based data set. In addition, we assess the sensitivity of the model to the meteorological forcing and to the spatial variability of a number of fire-relevant model parameters. A first comparison of model results with burned area observations showed a strong correlation of the residuals with wind speed. Further analysis revealed that the response of the fire spread to wind speed was too strong for the application on global scale. Therefore, we developed an improved parametrization to account for this effect. The evaluation of the improved model shows that the model is able to capture the global gradients and the seasonality of burned area. Some areas of model-data mismatch can be explained by differences in vegetation cover compared to observations. We achieve benchmarking scores comparable to other state-of-the-art fire models. The global total burned area is sensitive to the meteorological forcing. Adjustment of parameters leads to similar model results for both forcing data sets with respect to spatial and seasonal patterns. This article was corrected on 29 SEP 2014. See the end of the full text for details.
Analysis and Modeling of Ground Operations at Hub Airports
NASA Technical Reports Server (NTRS)
Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.
2000-01-01
Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
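The taxi-out queue is the easiest of these models to sketch: aircraft push back at random times and wait for a single departure runway. The rates below are invented for illustration, not estimates from any airport data.

```python
# Toy single-runway departure queue for the taxi-out process.
import random

random.seed(7)
pushback_rate = 0.8          # aircraft pushbacks per minute (assumed)
runway_time = 1.0            # mean runway occupancy, minutes (assumed)

t, free_at, taxi_out = 0.0, 0.0, []
for _ in range(500):
    t += random.expovariate(pushback_rate)          # next pushback time
    start = max(t, free_at)                         # wait until runway is free
    service = random.expovariate(1.0 / runway_time)
    free_at = start + service
    taxi_out.append(free_at - t)                    # queue wait + runway time

print(f"mean taxi-out: {sum(taxi_out) / len(taxi_out):.1f} min")
```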
Utility of Policy Capturing as an Approach to Graduate Admissions Decision Making.
ERIC Educational Resources Information Center
Schmidt, Frank L.; And Others
1978-01-01
The present study examined and evaluated the application of linear policy-capturing models to the real-world decision task of graduate admissions. Utility of the policy-capturing models was great enough to be of practical significance, and least-squares weights showed no predictive advantage over equal weights. (Author/CTM)
Mollenhauer, Robert; Mouser, Joshua B.; Brewer, Shannon K.
2018-01-01
Temporal and spatial variability in streams results in heterogeneous gear capture probability (i.e., the proportion of available individuals identified), which confounds interpretation of data used to monitor fish abundance. We modeled tow-barge electrofishing capture probability at multiple spatial scales for nine Ozark Highland stream fishes. In addition to fish size, we identified seven reach-scale environmental characteristics associated with variable capture probability: stream discharge, water depth, conductivity, water clarity, emergent vegetation, wetted width-depth ratio, and proportion of riffle habitat. The magnitude of the relationship between capture probability and both discharge and depth varied among stream fishes. We also identified lithological characteristics among stream segments as a coarse-scale source of variable capture probability. The resulting capture probability model can be used to adjust catch data and derive reach-scale absolute abundance estimates across a wide range of sampling conditions with similar effort as used in more traditional fisheries surveys (i.e., catch per unit effort). Adjusting catch data based on variable capture probability improves the comparability of data sets, thus promoting both well-informed conservation and management decisions and advances in stream-fish ecology.
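The adjustment step reduces to dividing raw catch by a modeled capture probability: if p is the probability of capturing an available fish, the abundance estimate is N = C / p. A minimal sketch with placeholder coefficients, not the fitted Ozark Highland values:

```python
# Adjust raw catch by a covariate-dependent capture probability (toy betas).
import numpy as np

def capture_prob(discharge, depth, beta=(0.5, -0.2, -0.8)):
    """Logit-linear capture probability in reach covariates (illustrative)."""
    eta = beta[0] + beta[1] * discharge + beta[2] * depth
    return 1.0 / (1.0 + np.exp(-eta))

catch = 42                                  # raw tow-barge catch in one reach
p_hat = capture_prob(discharge=1.2, depth=0.6)
print(f"p-hat = {p_hat:.2f}, adjusted abundance = {catch / p_hat:.0f}")
```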
Annual survival of Snail Kites in Florida: Radio telemetry versus capture-resighting data
Bennetts, R.E.; Dreitz, V.J.; Kitchens, W.M.; Hines, J.E.; Nichols, J.D.
1999-01-01
We estimated annual survival of Snail Kites (Rostrhamus sociabilis) in Florida using the Kaplan-Meier estimator with data from 271 radio-tagged birds over a three-year period and capture-recapture (resighting) models with data from 1,319 banded birds over a six-year period. We tested the hypothesis that survival differed among three age classes using both data sources. We tested additional hypotheses about spatial and temporal variation using a combination of data from radio telemetry and single- and multistrata capture-recapture models. Results from these data sets were similar in their indications of the sources of variation in survival, but they differed in some parameter estimates. Both data sources indicated that survival was higher for adults than for juveniles, but they did not support delineation of a subadult age class. Our data also indicated that survival differed among years and regions for juveniles but not for adults. Estimates of juvenile survival using radio telemetry data were higher than estimates using capture-recapture models for two of three years (1992 and 1993). Ancillary evidence based on censored birds indicated that some mortality of radio-tagged juveniles went undetected during those years, resulting in biased estimates. Thus, we have greater confidence in our estimates of juvenile survival using capture-recapture models. Precision of estimates reflected the number of parameters estimated and was surprisingly similar between radio telemetry and single-stratum capture-recapture models, given the substantial differences in sample sizes. Not having to estimate resighting probability likely offsets, to some degree, the smaller sample sizes from our radio telemetry data. Precision of capture-recapture models was lower using multistrata models where region-specific parameters were estimated than using single-stratum models, where spatial variation in parameters was not taken into account.
Quintiliani, Lisa M.; Yang, May H.; Ebbeling, Cara B.; Stoddard, Anne M.; Pereira, Lesley K.; Sorensen, Glorian
2009-01-01
Objectives. We assessed whether adequate sleep is linked to more healthful eating behaviors among motor freight workers and whether it mediates the effects of workplace experiences. Methods. Data were derived from a baseline survey and assessment of permanent employees at 8 trucking terminals. Bivariate and multivariate regression models were used to examine relationships between work environment, sleep adequacy, and dietary choices. Results. The sample (n = 542) was 83% White, with a mean age of 49 years and a mean body mass index of 30 kg/m2. Most of the participants were satisfied with their job (87.5%) and reported adequate sleep (51%); 30% reported job strain. In our first model, lack of job strain and greater supervisor support were significantly associated with adequate sleep. In our second model, educational level, age, and adequate sleep were significantly associated with at least 2 of the 3 healthful eating choices assessed (P < .05). However, work experiences were not significant predictors of healthful food choices when adequate sleep was included. Conclusions. Adequate sleep is associated with more healthful food choices and may mediate the effects of workplace experiences. Thus, workplace health programs should be responsive to workers' sleep patterns. PMID:19890169
Neutron Capture Gamma-Ray Libraries for Nuclear Applications
NASA Astrophysics Data System (ADS)
Sleaford, B. W.; Firestone, R. B.; Summers, N.; Escher, J.; Hurst, A.; Krticka, M.; Basunia, S.; Molnar, G.; Belgya, T.; Revay, Z.; Choi, H. D.
2011-06-01
The neutron capture reaction is useful in identifying and analyzing the gamma-ray spectrum from an unknown assembly as it gives unambiguous information on its composition. This can be done passively or actively, where an external neutron source is used to probe an unknown assembly. There are known capture gamma-ray data gaps in the ENDF libraries used by transport codes for various nuclear applications. The Evaluated Gamma-ray Activation File (EGAF) is a new thermal neutron capture database of discrete line spectra and cross sections for over 260 isotopes that was developed as part of an IAEA Coordinated Research Project. EGAF is being used to improve the capture gamma production in ENDF libraries. For medium to heavy nuclei, the quasi-continuum contribution to the gamma cascades is not experimentally resolved. The continuum contains up to 90% of all the decay energy and is modeled here with the statistical nuclear structure code DICEBOX. This code also provides a consistency check of the level scheme nuclear structure evaluation. The calculated continuum is of sufficient accuracy to include in the ENDF libraries. This analysis also determines new total thermal capture cross sections and provides an improved RIPL database. For higher-energy neutron capture, less experimental data are available, making benchmarking of the modeling codes more difficult. We are investigating the capture spectra from higher-energy neutrons experimentally using surrogate reactions and modeling them with Hauser-Feshbach codes. These results can then be used to benchmark CASINO, a version of DICEBOX modified for neutron capture at higher energies, which in turn can simulate spectra from neutron capture at incident neutron energies up to 20 MeV to improve the gamma-ray spectrum in neutron data libraries used for transport modeling of unknown assemblies.
Estimation of sex-specific survival from capture-recapture data when sex is not always known
Nichols, J.D.; Kendall, W.L.; Hines, J.E.; Spendelow, J.A.
2004-01-01
Many animals lack obvious sexual dimorphism, making assignment of sex difficult even for observed or captured animals. For many such species it is possible to assign sex with certainty only at some occasions; for example, when they exhibit certain types of behavior. A common approach to handling this situation in capture-recapture studies has been to group capture histories into those of animals eventually identified as male and female and those for which sex was never known. Because group membership is dependent on the number of occasions at which an animal was caught or observed (known sex animals, on average, will have been observed at more occasions than unknown-sex animals), survival estimates for known-sex animals will be positively biased, and those for unknown animals will be negatively biased. In this paper, we develop capture-recapture models that incorporate sex ratio and sex assignment parameters that permit unbiased estimation in the face of this sampling problem. We demonstrate the magnitude of bias in the traditional capture-recapture approach to this sampling problem, and we explore properties of estimators from other ad hoc approaches. The model is then applied to capture-recapture data for adult Roseate Terns (Sterna dougallii) at Falkner Island, Connecticut, 1993-2002. Sex ratio among adults in this population favors females, and we tested the hypothesis that this population showed sex-specific differences in adult survival. Evidence was provided for higher survival of adult females than males, as predicted. We recommend use of this modeling approach for future capture-recapture studies in which sex cannot always be assigned to captured or observed animals. We also place this problem in the more general context of uncertainty in state classification in multistate capture-recapture models.
Asymmetric capture of Dirac dark matter by the Sun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Clementz, Stefan
2015-08-18
Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei, as well as the case where the capture process is considered to be a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured particle numbers are competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.
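A toy version of the population dynamics behind this asymmetry: captured particle and anti-particle numbers grow at (different) capture rates and are depleted by pair annihilation. The rates below are arbitrary illustrative values in arbitrary units, not quantities from the paper.

```python
# Toy evolution of captured DM and anti-DM with asymmetric capture rates.
import numpy as np
from scipy.integrate import solve_ivp

C_dm, C_anti, A = 1.0, 0.6, 1e-3     # capture rates and annihilation strength

def rates(t, y):
    n, nbar = y
    ann = A * n * nbar               # pair annihilation removes one of each
    return [C_dm - ann, C_anti - ann]

sol = solve_ivp(rates, (0, 1e4), [0.0, 0.0], rtol=1e-8)
n, nbar = sol.y[:, -1]
print(f"asymmetry at t_end: {(n - nbar) / (n + nbar):.3f}")
```

Because annihilation removes particles and anti-particles in pairs, the difference n - nbar grows linearly with time, so even a modest capture-rate asymmetry eventually dominates the captured population.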
The use of auxiliary variables in capture-recapture and removal experiments
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1984-01-01
The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
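A hedged sketch of the core idea: capture probability modeled as a logit-linear function of a continuous covariate, fitted by maximum likelihood. To keep the example self-contained, it assumes the set of animals at risk is known over two occasions, which sidesteps the unobserved-animal problem the paper's closed-population estimators handle properly; all data are synthetic.

```python
# ML fit of p_i = expit(b0 + b1 * x_i) from per-animal capture counts.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
x = rng.normal(size=300)                   # e.g., scaled body length
p_true = expit(-0.5 + 1.0 * x)
caps = rng.binomial(2, p_true)             # captures over 2 occasions

def negloglik(beta):
    p = expit(beta[0] + beta[1] * x)
    # Binomial log-likelihood of each animal's capture count
    return -np.sum(caps * np.log(p) + (2 - caps) * np.log1p(-p))

fit = minimize(negloglik, x0=[0.0, 0.0])
print("beta-hat:", fit.x)
```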
Benchmarking of vertically-integrated CO2 flow simulations at the Sleipner Field, North Sea
NASA Astrophysics Data System (ADS)
Cowton, L. R.; Neufeld, J. A.; White, N. J.; Bickle, M. J.; Williams, G. A.; White, J. C.; Chadwick, R. A.
2018-06-01
Numerical modeling plays an essential role in both identifying and assessing sub-surface reservoirs that might be suitable for future carbon capture and storage projects. Accuracy of flow simulations is tested by benchmarking against historic observations from on-going CO2 injection sites. At the Sleipner project located in the North Sea, a suite of time-lapse seismic reflection surveys enables the three-dimensional distribution of CO2 at the top of the reservoir to be determined as a function of time. Previous attempts have used Darcy flow simulators to model CO2 migration throughout this layer, given the volume of injection with time and the location of the injection point. Due primarily to computational limitations preventing adequate exploration of model parameter space, these simulations usually fail to match the observed distribution of CO2 as a function of space and time. To circumvent these limitations, we develop a vertically-integrated fluid flow simulator that is based upon the theory of topographically controlled, porous gravity currents. This computationally efficient scheme can be used to invert for the spatial distribution of reservoir permeability required to minimize differences between the observed and calculated CO2 distributions. When a uniform reservoir permeability is assumed, inverse modeling is unable to adequately match the migration of CO2 at the top of the reservoir. If, however, the width and permeability of a mapped channel deposit are allowed to independently vary, a satisfactory match between the observed and calculated CO2 distributions is obtained. Finally, the ability of this algorithm to forecast the flow of CO2 at the top of the reservoir is assessed. By dividing the complete set of seismic reflection surveys into training and validation subsets, we find that the spatial pattern of permeability required to match the training subset can successfully predict CO2 migration for the validation subset. This ability suggests that it might be feasible to forecast migration patterns into the future with a degree of confidence. Nevertheless, our analysis highlights the difficulty in estimating reservoir parameters away from the region swept by CO2 without additional observational constraints.
NASA Astrophysics Data System (ADS)
Auluck, S. K. H.
2016-12-01
Recent work on the revised Gratton-Vargas model (Auluck, Phys. Plasmas 20, 112501 (2013); 22, 112509 (2015) and references therein) has demonstrated that there are some aspects of the Dense Plasma Focus (DPF) which are not sensitive to details of plasma dynamics and are well captured in an oversimplified model assumption that contains very little plasma physics. A hyperbolic conservation law formulation of DPF physics reveals the existence of a velocity threshold, related to the specific energy of dissociation and ionization, above which the work done during shock propagation is adequate to ensure dissociation and ionization of the gas being ingested. These developments are utilized to formulate an algorithmic definition of DPF optimization that is valid in a wide range of applications, not limited to neutron emission. This involves determination of a set of DPF parameters, without performing iterative model calculations, that lead to transfer of all the energy from the capacitor bank to the plasma at the time of current derivative singularity and conversion of a preset fraction of this energy into magnetic energy, while ensuring that electromagnetic work done during propagation of the plasma remains adequate for dissociation and ionization of the neutral gas being ingested. Such a universal optimization criterion is expected to facilitate progress in new areas of DPF research that include production of short-lived radioisotopes of possible use in medical diagnostics, generation of fusion energy from aneutronic fuels, and applications in nanotechnology, radiation biology, and materials science. These phenomena are expected to be optimized for fill gases of different kinds and in different ranges of mass density compared to the devices constructed for neutron production using empirical rules of thumb. A universal scaling theory of DPF design optimization is proposed and illustrated for designing devices working at one or two orders higher pressure of deuterium than the current practice of designs optimized at pressures less than 10 mbar of deuterium. These examples show that the upper limit for operating pressure is of technological (and not physical) origin.
Orthogonal-blendshape-based editing system for facial motion capture data.
Li, Qing; Deng, Zhigang
2008-01-01
The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and the blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed into a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the largest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible control.
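A minimal version of this idea, assuming synthetic marker data rather than real facial capture: PCA over a region's frames gives an orthogonal basis of shapes, and editing a frame amounts to changing its PCA coefficients and reconstructing.

```python
# PCA-as-blendshape editing sketch on synthetic motion-capture frames.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
frames = rng.normal(size=(500, 30))            # 500 frames x 10 3-D markers

pca = PCA(n_components=5).fit(frames)          # truncated orthogonal basis
weights = pca.transform(frames)                # per-frame blendshape weights

weights[:, 0] *= 1.5                           # exaggerate the dominant mode
edited = pca.inverse_transform(weights)        # back to marker space
print("edited sequence shape:", edited.shape)
```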
Dynamics of Postcombustion CO2 Capture Plants: Modeling, Validation, and Case Study
2017-01-01
The capture of CO2 from power plant flue gases provides an opportunity to mitigate emissions that are harmful to the global climate. While the process of CO2 capture using an aqueous amine solution is well-known from experience in other technical sectors (e.g., acid gas removal in the gas processing industry), its operation combined with a power plant still needs investigation because in this case, the interaction with power plants that are increasingly operated dynamically poses control challenges. This article presents the dynamic modeling of CO2 capture plants followed by a detailed validation using transient measurements recorded from the pilot plant operated at the Maasvlakte power station in the Netherlands. The model predictions are in good agreement with the experimental data related to the transient changes of the main process variables such as flow rate, CO2 concentrations, temperatures, and solvent loading. The validated model was used to study the effects of fast power plant transients on the capture plant operation. A relevant result of this work is that an integrated CO2 capture plant might enable more dynamic operation of retrofitted fossil fuel power plants because the large amount of steam needed by the capture process can be diverted rapidly to and from the power plant. PMID:28413256
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roederer, Ian U.; Karakas, Amanda I.; Pignatari, Marco
We present a detailed analysis of the composition and nucleosynthetic origins of the heavy elements in the metal-poor ([Fe/H] = −1.62 ± 0.09) star HD 94028. Previous studies revealed that this star is mildly enhanced in elements produced by the slow neutron-capture process (s process; e.g., [Pb/Fe] = +0.79 ± 0.32) and rapid neutron-capture process (r process; e.g., [Eu/Fe] = +0.22 ± 0.12), including unusually large molybdenum ([Mo/Fe] = +0.97 ± 0.16) and ruthenium ([Ru/Fe] = +0.69 ± 0.17) enhancements. However, this star is not enhanced in carbon ([C/Fe] = −0.06 ± 0.19). We analyze an archival near-ultraviolet spectrum of HD 94028, collected using the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope, and other archival optical spectra collected from ground-based telescopes. We report abundances or upper limits derived from 64 species of 56 elements. We compare these observations with s-process yields from low-metallicity AGB evolution and nucleosynthesis models. No combination of s- and r-process patterns can adequately reproduce the observed abundances, including the super-solar [As/Ge] ratio (+0.99 ± 0.23) and the enhanced [Mo/Fe] and [Ru/Fe] ratios. We can fit these features when including an additional contribution from the intermediate neutron-capture process (i process), which perhaps operated through the ingestion of H in He-burning convective regions in massive stars, super-AGB stars, or low-mass AGB stars. Currently, only the i process appears capable of consistently producing the super-solar [As/Ge] ratios and ratios among neighboring heavy elements found in HD 94028. Other metal-poor stars also show enhanced [As/Ge] ratios, hinting that operation of the i process may have been common in the early Galaxy.
NASA Astrophysics Data System (ADS)
Ito, A.
2017-12-01
Terrestrial ecosystems are an important sink of carbon dioxide (CO2) but a significant source of other greenhouse gases such as methane (CH4) and nitrous oxide (N2O). To resolve the role of the terrestrial biosphere in the climate system, we need to quantify the total greenhouse gas budget with adequate accuracy. In addition to top-down evaluation on the basis of atmospheric measurements, a model-based approach is required for integration and up-scaling of field data and for prediction under changing environment and different management practices. Since the early 2000s, we have developed a process-based model of terrestrial biogeochemical cycles focusing on atmosphere-ecosystem exchange of trace gases: the Vegetation Integrated SImulator for Trace gases (VISIT). The model includes simple and comprehensive schemes of carbon and nitrogen cycles in terrestrial ecosystems, allowing us to capture the dynamic nature of the greenhouse gas budget. Beginning from natural ecosystems such as temperate and tropical forests, the model is now applicable to croplands by including agricultural practices such as planting, harvest, and fertilizer input. Global simulation results have been published in several papers, but model validation and benchmarking using up-to-date observations remain as future work. The model is now applied to several practical issues such as evaluation of N2O emission from bio-fuel croplands, which are expected to contribute to the mitigation target of the Paris Agreement. We also show several topics in basic model development, such as revised CH4 emission affected by a dynamic water table and refined N2O emission from nitrification.
Continuum modeling of neuronal cell under blast loading
Jérusalem, Antoine; Dao, Ming
2012-01-01
Traumatic brain injuries have recently been put under the spotlight as one of the most important causes of accidental brain dysfunctions. Significant experimental and modeling efforts are thus ongoing to study the associated biological, mechanical and physical mechanisms. In the field of cell mechanics, progress is also being made at the experimental and modeling levels to better characterize many of the cell functions such as differentiation, growth, migration and death, among others. The work presented here aims at bridging both efforts by proposing a continuum model of a neuronal cell submitted to blast loading. In this approach, the cytoplasm, nucleus and membrane (plus cortex) are differentiated in a representative cell geometry, and different material constitutive models are adequately chosen for each one. The material parameters are calibrated against published experimental work on cell nanoindentation at multiple rates. The final cell model is ultimately subjected to blast loading within a complete fluid-structure interaction computational framework. The results are compared to the nanoindentation simulation, and the specific effects of the blast wave on the pressure and shear levels at the interfaces are identified. The presented model successfully captures some of the intrinsic intracellular phenomena occurring during deformation under blast loading and potentially leading to cell damage. In particular, it suggests that damage localizes at the nuclear membrane, similar to what has already been observed at the overall cell membrane, and predicts that this damage is worsened by a longer blast positive phase duration. In conclusion, the proposed model provides a new three-dimensional computational tool to evaluate intracellular damage during blast loading. PMID:22562014
NASA Astrophysics Data System (ADS)
Shilyaev, M. I.; Khromova, E. M.; Grigoriev, A. V.; Tumashova, A. V.
2011-09-01
A physical-mathematical model of the heat and mass exchange process and of the condensation capture of sub-micron dust particles on the droplets of dispersed liquid in a spray scrubber is proposed and analysed. Satisfactory agreement between computed results and experimental data on soot capture from cracking gases is obtained.
NASA Astrophysics Data System (ADS)
Clemo, T. M.; Ramarao, B.; Kelly, V. A.; Lavenue, M.
2011-12-01
Capture is a measure of the impact of groundwater pumping upon groundwater and surface water systems. The computation of capture through analytical or numerical methods has been the subject of articles in the literature for several decades (Bredehoeft et al., 1982). Most recently, Leake et al. (2010) described a systematic way to produce capture maps in three-dimensional systems using a numerical perturbation approach in which capture from streams was computed using unit-rate pumping at many locations within a MODFLOW model. The Leake et al. (2010) method advances the current state of computing capture. A limitation stems from the computational demand of the perturbation approach, wherein days or weeks of computational time might be required to obtain a robust measure of capture. In this paper, we present an efficient method to compute capture in three-dimensional systems based upon adjoint states. The efficiency of the adjoint method will enable uncertainty analysis to be conducted on capture calculations. The USGS and INTERA have collaborated to extend the MODFLOW Adjoint code (Clemo, 2007) to include stream-aquifer interaction and have applied it to one of the examples used in Leake et al. (2010), the San Pedro Basin MODFLOW model. With five layers and 140,800 grid blocks per layer, the San Pedro Basin model provided an ideal example data set with which to compare the capture computed from the perturbation and adjoint methods. The capture fraction map produced by the perturbation method for the San Pedro Basin model required significant computational time, and the pumping-well locations were therefore limited to 1530 locations in layer 4. The 1530 direct simulations of capture required approximately 76 CPU hours. Had capture been simulated in each grid block in each layer, as is done in the adjoint method, the CPU time would have been on the order of 4 years. MODFLOW-Adjoint produced the capture fraction map of the San Pedro Basin model at 704,000 grid blocks (140,800 grid blocks x 5 layers) in just 6 minutes. The capture fraction maps from the perturbation and adjoint methods agree closely. The results of this study indicate that the adjoint capture method and its associated computational efficiency will enable scientists and engineers facing water resource management decisions to evaluate the sensitivity and uncertainty of impacts to regional water resource systems as part of groundwater supply strategies. Bredehoeft, J.D., S.S. Papadopulos, and H.H. Cooper Jr, Groundwater: The water budget myth. In Scientific Basis of Water-Resources Management, ed. National Research Council (U.S.), Geophysical Study Committee, 51-57. Washington D.C.: National Academy Press, 1982. Clemo, Tom, MODFLOW-2005 Ground-Water Model-Users Guide to Adjoint State based Sensitivity Process (ADJ), BSU CGISS 07-01, Center for the Geophysical Investigation of the Shallow Subsurface, Boise State University, 2007. Leake, S.A., H.W. Reeves, and J.E. Dickinson, A New Capture Fraction Method to Map How Pumpage Affects Surface Water Flow, Ground Water, 48(5), 670-700, 2010.
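For orientation, the brute-force perturbation approach that the adjoint method replaces can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical helper run_model(extra_wells) that executes one forward groundwater model run and returns total stream leakage; the helper and its interface are illustrative, not part of MODFLOW or the MODFLOW-Adjoint code.

```python
import numpy as np

def capture_fraction_map(run_model, candidate_cells, q=1.0):
    """Perturbation (direct) method: one forward model run per candidate well.

    run_model(extra_wells) -> total stream leakage, where extra_wells maps a
    cell index to an added pumping rate. The capture fraction at a cell is
    the pumping-induced change in stream leakage divided by the pumping rate.
    """
    base_leakage = run_model({})               # unperturbed stream-aquifer flux
    capture = np.empty(len(candidate_cells))
    for i, cell in enumerate(candidate_cells):
        leakage = run_model({cell: q})         # add one unit-rate well
        capture[i] = (base_leakage - leakage) / q
    return capture

# Cost: one full forward simulation per candidate cell -- the 76 CPU hours for
# 1530 wells quoted above. The adjoint method obtains the sensitivity at every
# cell from a single adjoint solve, hence the minutes-versus-years difference.
```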
NASA Astrophysics Data System (ADS)
Kuppel, S.; Soulsby, C.; Maneta, M. P.; Tetzlaff, D.
2017-12-01
The utility of field measurements to help constrain the model solution space and identify feasible model configurations has become an increasingly central issue in hydrological model calibration. Sufficiently informative observations are necessary to ensure that the goodness of model-data fit attained effectively translates into more physically sound information on the internal model parameters, as a basis for model structure evaluation. Here we assess to what extent the diversity of information content can inform on the suitability of a complex, process-based ecohydrological model to simulate key water flux and storage dynamics at a long-term research catchment in the Scottish Highlands. We use the fully distributed ecohydrological model EcH2O, calibrated against long-term datasets that encompass hydrologic and energy exchanges and ecological measurements: stream discharge, soil moisture, net radiation above the canopy, and pine stand transpiration. Diverse combinations of these constraints were applied using a multi-objective cost function specifically designed to avoid compensatory effects between model-data metrics. Results revealed that calibration against virtually all datasets enabled the model to reproduce streamflow reasonably well. However, parameterizing the model to adequately capture local flux and storage dynamics, such as soil moisture or transpiration, required calibration with the corresponding observations. This indicates that the footprint of the information contained in observations varies for each type of dataset, and that a diverse database informing on the different compartments of the domain is critical to test hypotheses of catchment function and identify a consistent model parameterization. The results foster confidence in using EcH2O to help understand current and future ecohydrological couplings in northern catchments.
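As an illustration of a non-compensatory multi-objective cost function, one option is to score each dataset separately and let the worst fit dominate, so a good fit on one observation type cannot offset a poor fit on another. This is a sketch of the general technique, not necessarily the exact formulation used with EcH2O:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def non_compensatory_cost(obs_sets, sim_sets):
    """Worst-case (Chebyshev) aggregation of per-dataset misfits.

    Each dataset (discharge, soil moisture, net radiation, transpiration, ...)
    contributes a misfit 1 - NSE; taking the maximum prevents compensation
    between datasets, unlike a plain sum or average of the misfits.
    """
    misfits = [1.0 - nse(o, s) for o, s in zip(obs_sets, sim_sets)]
    return max(misfits)
```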
Boser, Quinn A; Valevicius, Aïda M; Lavoie, Ewen B; Chapman, Craig S; Pilarski, Patrick M; Hebert, Jacqueline S; Vette, Albert H
2018-04-27
Quantifying angular joint kinematics of the upper body is a useful method for assessing upper limb function. Joint angles are commonly obtained via motion capture by tracking markers placed on anatomical landmarks. This method is associated with limitations, including administrative burden, soft tissue artifacts, and intra- and inter-tester variability. An alternative method involves tracking rigid marker clusters affixed to body segments, calibrated relative to anatomical landmarks or known joint angles. The accuracy and reliability of applying this cluster method to the upper body have, however, not been comprehensively explored. Our objective was to compare three different upper body cluster models with an anatomical model with respect to joint angles and reliability. Non-disabled participants performed two standardized functional upper limb tasks with anatomical and cluster markers applied concurrently. Joint angle curves obtained via the marker clusters with three different calibration methods were compared to those from an anatomical model, and between-session reliability was assessed for all models. The cluster models produced joint angle curves that were comparable to, and highly correlated with, those from the anatomical model, but exhibited notable offsets and differences in sensitivity for some degrees of freedom. Between-session reliability was comparable between all models and good for most degrees of freedom. Overall, the cluster models produced reliable joint angles that, however, cannot be used interchangeably with anatomical model outputs to calculate kinematic metrics. Cluster models appear to be an adequate, and possibly advantageous, alternative to anatomical models when the objective is to assess trends in movement behavior. Copyright © 2018 Elsevier Ltd. All rights reserved.
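For context, the core computation behind either marker set is the relative orientation between adjacent segments. A minimal sketch of generic rigid-body joint-angle extraction (one common Euler sequence; it is not the specific models compared in the study, and sequences vary by joint and convention):

```python
import numpy as np

def joint_angles_zxy(R_proximal, R_distal):
    """Joint angles from two segment orientation matrices (3x3, world frame).

    The joint rotation is the distal segment expressed in the proximal frame;
    angles are extracted assuming an intrinsic Z-X'-Y'' Euler sequence,
    i.e. R = Rz(alpha) @ Rx(beta) @ Ry(gamma).
    """
    R = R_proximal.T @ R_distal               # distal relative to proximal
    beta = np.arcsin(R[2, 1])                 # rotation about X'
    alpha = np.arctan2(-R[0, 1], R[1, 1])     # rotation about Z
    gamma = np.arctan2(-R[2, 0], R[2, 2])     # rotation about Y''
    return np.degrees([alpha, beta, gamma])
```

Whether the orientation matrices come from anatomical markers or from calibrated clusters is exactly the methodological difference the study evaluates.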
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Clementz, Stefan, E-mail: emb@kth.se, E-mail: scl@kth.se
Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei, as well as the case where the capture process is treated as a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured numbers of particles are competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.
Matteson, Kristen A.; Clark, Melissa A.
2010-01-01
Objectives: (1) to explore the effects of heavy or irregular menstrual bleeding on women's lives; (2) to examine whether the aspects of women's lives most affected by heavy or irregular menstrual bleeding were adequately addressed by questions that are frequently used in clinical encounters and available questionnaires. Methods: We conducted four focus group sessions with a total of 25 English-speaking women who had reported abnormal uterine bleeding. Discussions included open-ended questions that pertained to bleeding, aspects of life affected by bleeding, and questions frequently used in clinical settings about bleeding and quality of life. Results: We identified five themes that reflected how women's lives were affected by heavy or irregular menstrual bleeding: irritation/inconvenience, bleeding-associated pain, self-consciousness about odor, social embarrassment, and ritual-like behavior. Although women responded that the frequently used questions about bleeding and quality of life were important, they felt that the questions failed to go into enough depth to adequately characterize their experiences. Conclusions: Based on the themes identified in our focus group sessions, clinicians and researchers may need to change the questions used to capture “patient experience” with abnormal uterine bleeding more accurately. PMID:20437305
An extensible framework for capturing solvent effects in computer generated kinetic models.
Jalan, Amrit; West, Richard H; Green, William H
2013-03-14
Detailed kinetic models provide useful mechanistic insight into a chemical system. Manual construction of such models is laborious and error-prone, which has led to the development of automated methods for exploring chemical pathways. These methods rely on fast, high-throughput estimation of species thermochemistry and kinetic parameters. In this paper, we present a methodology for extending automatic mechanism generation to solution-phase systems, which requires estimation of solvent effects on reaction rates and equilibria. The linear solvation energy relationship (LSER) method of Abraham and co-workers is combined with Mintz correlations to estimate ΔG°solv(T) in over 30 solvents using solute descriptors estimated from group additivity. Simple corrections are found to be adequate for the treatment of radical sites, as suggested by comparison with known experimental data. The performance of scaled particle theory expressions for enthalpic-entropic decomposition of ΔG°solv(T) is also presented, along with the associated computational issues. Similar high-throughput methods for solvent effects on free-radical kinetics are available for only a handful of reactions owing to a lack of reliable experimental data, and continuum dielectric calculations offer an alternative method for their estimation. For illustration, we model liquid-phase oxidation of tetralin in different solvents, computing the solvent dependence for ROO• + ROO• and ROO• + solvent reactions using polarizable continuum quantum chemistry methods. The resulting kinetic models show an increase in oxidation rate with solvent polarity, consistent with experiment. Further work needed to make this approach more generally useful is outlined.
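For reference, the Abraham LSER at the heart of this scheme has the generic gas-to-solvent form below; E, S, A, B and L are solute descriptors (here estimated by group additivity) and c, e, s, a, b, l are solvent-specific regression coefficients. This shows the general form of the correlation only; the paper's exact coefficient sets are not reproduced.

```latex
% Abraham linear solvation energy relationship (gas-to-solvent form):
\log K = c + eE + sS + aA + bB + lL
% K is the gas-solvent partition coefficient; the solvation free energy
% then follows, up to standard-state conventions, as
\Delta G^{\circ}_{\mathrm{solv}}(T) = -RT \ln K
```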
Lai, Canhai; Xu, Zhijie; Li, Tingwen; ...
2017-08-05
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered sub-grid models to capture the effect of unresolved details in the coarser mesh, giving simulations of reasonable accuracy at manageable computational effort. Previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configuration (i.e., horizontal or vertical tubes) on the adsorber's hydrodynamics and CO2 capture performance are then examined. A one-dimensional three-region process model is briefly introduced for comparison purposes. The CFD model matches the process model reasonably well while providing additional information about the flow field that is not available from the process model.
NASA Astrophysics Data System (ADS)
Kuster, E.; Fox, G.
2016-12-01
Climate change is happening; scientists have already observed changes in sea level, increases in atmospheric carbon dioxide, and declining polar ice. The students of today are the leaders of tomorrow, and it is our duty to make sure they are well equipped and understand the implications of climate change as part of their research and professional careers. Graduate students, in particular, are gaining valuable and necessary research, leadership, and critical thinking skills, but we need to ensure that they are receiving the appropriate climate education in their graduate training. Previous studies have primarily focused on capturing the knowledge of the climate system held by K-12 students, college students, and the general public, concluding with recommendations on how to improve climate literacy in the classroom. While this is extremely important to study, very few studies have captured the current perceptions that graduate students hold regarding the amount of climate education being offered to them. This information is important to capture, as it can inform future curriculum development. We developed and distributed a nationwide survey (495 respondents) for graduate students to capture their perceptions of the level of climate system education being offered and their views on the importance of having climate education. We also investigated differences in the responses based on geographic area and discipline, and compared how important graduate students felt it was to include climate education in their own discipline versus outside disciplines. The authors will discuss key findings from this ongoing research.
Modeling the Dynamic Water Resource Needs of California's Coastal Watersheds
NASA Astrophysics Data System (ADS)
Alford, C.
2009-12-01
Many watersheds face formidable water supply challenges when it comes to managing water availability to meet diverse water supply and ecosystem management objectives. California’s central coast watersheds are no exception, and both the scarcity of water resources during drier water years and mandates to establish minimum instream flows for salmon habitat have prompted interest in reassessing water management strategies for several of these watersheds. Conventional supply-oriented hydrologic models, however, are not adequate to fully investigate and describe the reciprocal implications of surface water demands for human use and the maintenance of instream flows for salmon habitat that vary both temporally and spatially within a watershed. In an effort to address this issue, I developed a coastal watershed management model based on the San Gregorio watershed utilizing the Water Evaluation and Planning (WEAP) system, which permits demand-side prioritization at a time step interval and spatial resolution that captures functional supply and demand relationships. Physiographic input data such as soil type, land cover, elevation, habitat, and water demand sites were extrapolated at a sub-basin level in a GIS. Time-series climate data were collected and processed utilizing the Berkeley Water Center Data Cube at daily time steps for the period 1952 through September 2009. Recent synoptic flow measurements taken at seven tributary sites during the 2009 water year, water depth measured by pressure transducers at six sites within the watershed from September 2005 through September 2009, and daily gauge records from temporary gauges installed in 1981 were used to assess the hydrologic patterns of sub-basins and supplement historic USGS gauge flow records. Empirical functions were used to describe evapotranspiration, surface runoff, sub-surface runoff, and deep percolation. Initial model simulations carried out under both dry and wet water year scenarios were able to capture representative hydrological conditions in both the sample watershed case and an initial test case that utilized base data from a watershed with minimal land disturbance. Results from this study provide valuable insight into the effects of water use under a variety of climatic conditions and provide potential strategies for policy makers, regulators, and stakeholders to strengthen adaptive capacity to achieve sustainable water use within coastal watersheds.
Integrating Infrastructure and Institutions for Water Security in Large Urban Areas
NASA Astrophysics Data System (ADS)
Padowski, J.; Jawitz, J. W.; Carrera, L.
2015-12-01
Urban growth has forced cities to procure more freshwater to meet demands; however, the relationship between urban water security, water availability, and water management is not well understood. This work quantifies the urban water security of 108 large cities in the United States (n=50) and Africa (n=58) based on their hydrologic, hydraulic, and institutional settings. Using publicly available data, urban water availability was estimated as the volume of water available from local water resources and those captured via hydraulic infrastructure (e.g., reservoirs, wellfields, aqueducts), while urban water institutions were assessed according to their ability to deliver, supply, and regulate water resources to cities. When assessing availability, cities relying on local water resources comprised a minority (37%) of those assessed. The majority of cities (55%) instead rely on captured water to meet urban demands, with African cities reaching farther and accessing a greater number and variety of sources for water supply than US cities. Cities using captured water generally had poorer access to local water resources and maintained significantly more complex strategies for water delivery, supply, and regulatory management. Eight cities, all African, are identified in this work as having water insecurity issues. These cities lack sufficient infrastructure and institutional complexity to capture and deliver adequate amounts of water for urban use. Together, these findings highlight the important interconnection between infrastructure investments and management techniques for urban areas with a limited or dwindling natural abundance of water. Addressing water security challenges in the future will require that more attention be placed not only on increasing water availability but on developing the institutional support to manage captured water supplies.
Contingent capture and inhibition of return: a comparison of mechanisms.
Prinzmetal, William; Taylor, Jordan A; Myers, Loretta Barry; Nguyen-Espino, Jacqueline
2011-09-01
We investigated the cause(s) of two effects associated with involuntary attention in the spatial cueing task: contingent capture and inhibition of return (IOR). Previously, we found that there were two mechanisms of involuntary attention in this task: (1) a (serial) search mechanism that predicts a larger cueing effect in reaction time with more display locations and (2) a decision (threshold) mechanism that predicts a smaller cueing effect with more display locations (Prinzmetal et al. 2010). In the present study, contingent capture and IOR had completely different patterns of results when we manipulated the number of display locations and the presence of distractors. Contingent capture was best described by a search model, whereas the inhibition of return was best described by a decision model. Furthermore, we fit a linear ballistic accumulator model to the results and IOR was accounted for by a change of threshold, whereas the results from contingent capture experiments could not be fit with a change of threshold and were better fit by a search model.
Systematics of capture and fusion dynamics in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui
2017-03-01
We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to effectively take into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections, together with experimental cross sections, of 220 reaction systems with 182 ⩽ Z_P Z_T ⩽ 1640 are tabulated. The results show that the ECC model, together with the empirical formulas for the parameters of the barrier distribution, works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
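To make the folding idea concrete, here is a minimal sketch. The asymmetric-Gaussian barrier distribution follows the description above, but the normalization, the classical sharp-cutoff cross section, and all parameter values stand in for the paper's full empirical expressions:

```python
import numpy as np

def barrier_distribution(B, Bm, w_left, w_right):
    """Asymmetric Gaussian: different widths below and above the centroid Bm."""
    w = np.where(B < Bm, w_left, w_right)
    f = np.exp(-((B - Bm) / w) ** 2)
    return f / np.trapz(f, B)                    # normalize to unit area

def capture_cross_section(E, B, f_B, R=12.0):
    """Fold a classical sharp-cutoff cross section with the barrier distribution.

    sigma(E, B) = pi R^2 (1 - B/E) for E > B, else 0  (R in fm -> sigma in fm^2).
    """
    sigma_B = np.pi * R**2 * np.clip(1.0 - B / E, 0.0, None)
    return np.trapz(f_B * sigma_B, B)            # fm^2; 1 fm^2 = 10 mb

# Example: a barrier centered at 85 MeV with 3 / 5 MeV left / right widths.
B = np.linspace(60.0, 110.0, 1001)
f_B = barrier_distribution(B, Bm=85.0, w_left=3.0, w_right=5.0)
sigma = [capture_cross_section(E, B, f_B) for E in (80.0, 85.0, 95.0)]
```

The asymmetry of the distribution is what lets the model reproduce both the sub-barrier enhancement and the above-barrier behavior of the excitation function.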
Sommer, Marni
2010-08-01
The global development community has focused in recent decades on closing the gender gap in education, but has given insufficient attention to the specific needs of pre- and post-pubescent girls as they transition to young womanhood within the educational institution. This study explored the social context of girls' experiences of menses and schooling in northern Tanzania, with data collection focused on capturing girls' voiced concerns and recommendations. Results indicated that pubescent girls are confronted with numerous challenges to managing menses within the school environment. Many are transitioning through puberty without adequate guidance on puberty and menses management, and pursuing education in environments that lack adequate facilities, supplies, and gender sensitivity. Girls have pragmatic and realistic recommendations for how to improve school environments, ideas that should be incorporated as effective methods for improving girls' academic experiences and their healthy transitions to womanhood. Copyright 2009 The Association for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
An Ontology-Based Archive Information Model for the Planetary Science Community
NASA Technical Reports Server (NTRS)
Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris
2008-01-01
The Planetary Data System (PDS) information model is a mature but complex model that has been used to capture over 30 years of planetary science data for the PDS archive. As the de facto information model for the planetary science data archive, it is being adopted by the International Planetary Data Alliance (IPDA) as their archive data standard. However, after seventeen years of evolutionary change the model needs refinement. First, a formal specification is needed to explicitly capture the model in a commonly accepted data engineering notation. Second, the core and essential elements of the model need to be identified to help simplify the overall archive process. A team of PDS technical staff members has captured the PDS information model in an ontology modeling tool. Using the resulting knowledge base, work continues to identify the core elements, identify problems and issues, and test proposed modifications to the model. The final deliverables of this work will include specifications for the next generation PDS information model and the initial set of IPDA archive data standards. Having the information model captured in an ontology modeling tool also makes the model suitable for use by Semantic Web applications.
Multi-phase CFD modeling of solid sorbent carbon capture system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, E. M.; DeCroix, D.; Breault, R.
2013-07-01
Computational fluid dynamics (CFD) simulations are used to investigate a low-temperature post-combustion carbon capture reactor. The CFD models are based on a small-scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state-of-the-art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.
A stochastic fractional dynamics model of space-time variability of rain
NASA Astrophysics Data System (ADS)
Kundu, Prasun K.; Travis, James E.
2013-09-01
Rain varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on the Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to fit the second-moment statistics of the radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well at these scales without any further adjustment.
Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J
2017-04-01
Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth-of-anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (autoregressive moving-average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA(2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
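A minimal sketch of the sliding-window ARMA feature extraction described above, using statsmodels; the window length, step size, and mean-centering are illustrative choices, not the paper's exact settings:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def sliding_arma_params(eeg, fs, order=(2, 0, 1), win_s=2.0, step_s=0.5):
    """Fit an ARMA(p, q) model to each window of a 1-D EEG signal.

    ARMA(2,1) corresponds to order=(2, 0, 1), i.e. no differencing.
    Returns one parameter vector per window (model coefficients plus
    the innovation variance, as reported by statsmodels).
    """
    win, step = int(win_s * fs), int(step_s * fs)
    params = []
    for start in range(0, len(eeg) - win + 1, step):
        seg = np.asarray(eeg[start:start + win], dtype=float)
        res = ARIMA(seg - seg.mean(), order=order).fit()
        params.append(res.params)
    return np.array(params)

# Brain-state tracking then compares the *distribution* of these vectors in a
# test window against reference distributions for each OAA/S level, rather
# than classifying on a single short-term average of the parameters.
```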
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David; Agarwal, Deborah A.; Sun, Xin
2011-09-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
Peng, Ying; Yu, Bin; Wang, Peng; Kong, De-Guang; Chen, Bang-Hua; Yang, Xiao-Bing
2017-12-01
Outbreaks of hand-foot-mouth disease (HFMD) have occurred many times and caused a serious health burden in China since 2008. Application of modern information technology to prediction and early response can be helpful for efficient HFMD prevention and control. A seasonal autoregressive integrated moving average (ARIMA) model for time series analysis was designed in this study. Eighty-four months (January 2009 to December 2015) of retrospective data obtained from the Chinese Information System for Disease Prevention and Control were subjected to ARIMA modeling. The coefficient of determination (R²), normalized Bayesian Information Criterion (BIC), and Q-test P value were used to evaluate the goodness of fit of the constructed models. Subsequently, the best-fitted ARIMA model was applied to predict the expected incidence of HFMD from January 2016 to December 2016. The best-fitted seasonal ARIMA model was identified as (1,0,1)(0,1,1)_12, with the largest coefficient of determination (R² = 0.743) and lowest normalized BIC (BIC = 3.645). The residuals of the model also showed non-significant autocorrelations (Box-Ljung Q test, P = 0.299). The predictions by the optimum ARIMA model adequately captured the pattern in the data and exhibited two peaks of activity over the forecast interval: a major peak from April to June and a lighter peak from September to November. The ARIMA model proposed in this study can forecast the HFMD incidence trend effectively, which could provide useful support for future HFMD prevention and control in the study area. In addition, further observations should be added continually to the modeling data set, and the parameters of the models should be adjusted accordingly.
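A minimal sketch of fitting the seasonal ARIMA above with statsmodels; the synthetic series is a placeholder standing in for the real 84-month surveillance counts:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder data: substitute the real monthly HFMD counts, Jan 2009 - Dec 2015.
idx = pd.date_range("2009-01", periods=84, freq="MS")
rng = np.random.default_rng(0)
monthly_cases = pd.Series(
    200 + 150 * np.sin(2 * np.pi * (idx.month - 4) / 12) + rng.normal(0, 20, 84),
    index=idx)

# (p,d,q) = (1,0,1) non-seasonal; (P,D,Q)_s = (0,1,1)_12 seasonal part.
model = SARIMAX(monthly_cases, order=(1, 0, 1), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
forecast = fit.get_forecast(steps=12)        # Jan - Dec 2016
print(forecast.predicted_mean.round(1))
print(forecast.conf_int().round(1))          # 95% interval by default
```

Model selection in the study then compares candidate orders by R², normalized BIC, and the Ljung-Box test on residuals, as described above.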
NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
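A minimal sketch of the contour-curvature step using scipy; the smoothing factor and the way curvature extrema are mapped to body parts are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_curvature(x, y, smooth=5.0, n=2000):
    """Signed curvature along a closed B-spline fit of a silhouette contour.

    kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2); strong curvature extrema
    mark candidate split points (head, hands, feet) for decomposing the body
    into its main parts.
    """
    tck, _ = splprep([x, y], s=smooth, per=True)   # periodic (closed) spline
    u = np.linspace(0.0, 1.0, n)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return u, kappa
```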
Capture mechanism in Palaeotropical pitcher plants (Nepenthaceae) is constrained by climate
Moran, Jonathan A.; Gray, Laura K.; Clarke, Charles; Chin, Lijin
2013-01-01
Background and Aims Nepenthes (Nepenthaceae, approx. 120 species) are carnivorous pitcher plants with a centre of diversity comprising the Philippines, Borneo, Sumatra and Sulawesi. Nepenthes pitchers use three main mechanisms for capturing prey: epicuticular waxes inside the pitcher; a wettable peristome (a collar-shaped structure around the opening); and viscoelastic fluid. Previous studies have provided evidence suggesting that the first mechanism may be more suited to seasonal climates, whereas the latter two might be more suited to perhumid environments. In this study, this idea was tested using climate envelope modelling. Methods A total of 94 species, comprising 1978 populations, were grouped by prey capture mechanism (large peristome, small peristome, waxy, waxless, viscoelastic, non-viscoelastic, ‘wet’ syndrome and ‘dry’ syndrome). Nineteen bioclimatic variables were used to model habitat suitability at approx. 1 km resolution for each group, using Maxent, a presence-only species distribution modelling program. Key Results Prey capture groups putatively associated with perhumid conditions (large peristome, waxless, viscoelastic and ‘wet’ syndrome) had more restricted areas of probable habitat suitability than those associated putatively with less humid conditions (small peristome, waxy, non-viscoelastic and ‘dry’ syndrome). Overall, the viscoelastic group showed the most restricted area of modelled suitable habitat. Conclusions The current study is the first to demonstrate that the prey capture mechanism in a carnivorous plant is constrained by climate. Nepenthes species employing peristome-based and viscoelastic fluid-based capture are largely restricted to perhumid regions; in contrast, the wax-based mechanism allows successful capture in both perhumid and more seasonal areas. Possible reasons for the maintenance of peristome-based and viscoelastic fluid-based capture mechanisms in Nepenthes are discussed in relation to the costs and benefits associated with a given prey capture strategy. PMID:23975653
Price, A.; Peterson, James T.
2010-01-01
Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing, following a dual-gear, mark–recapture approach in 61 block-netted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and of seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency for both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
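The adjustment the authors recommend amounts to dividing the catch by an estimated capture probability. In schematic form (the standard abundance correction for incomplete detection with a delta-method variance, not the paper's full conditional multinomial model):

```latex
% C = raw catch; \hat{p} = estimated (site- and species-specific)
% capture efficiency; C treated as binomial given N and p.
\hat{N} = \frac{C}{\hat{p}},
\qquad
\widehat{\operatorname{Var}}(\hat{N}) \approx
\frac{C\,(1-\hat{p})}{\hat{p}^{2}}
+ \frac{C^{2}\,\widehat{\operatorname{Var}}(\hat{p})}{\hat{p}^{4}}
```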
Kesavan, Sujatha; Kelay, Tanika; Collins, Ruth E; Cox, Benita; Bello, Fernando; Kneebone, Roger L; Sevdalis, Nick
2013-10-01
Acute myocardial infarctions (MIs), or heart attacks, are the result of a complete or incomplete occlusion of the lumen of the coronary artery by a thrombus. Prompt diagnosis and early coronary intervention result in maximum myocardial salvage, hence time to treatment is of the essence. Adequate, accurate, and complete information is vital during the early stages of admission of an MI patient and can impact significantly on the quality and safety of patient care. This study aimed to record how clinical information is captured and shared between different clinical teams during the journey of a patient in the MI care pathway, and to review the flow of information within this care pathway. A prospective, descriptive, structured observational study was carried out to assess (i) current clinical information systems (CIS) utilization and (ii) real-time information availability within an acute cardiac care setting. Completeness and availability of patient information capture across four key stages of the MI care pathway were assessed prospectively. Thirteen separate information systems were utilized during the four phases of the MI pathway. Observations revealed fragmented CIS utilization, with users accessing an average of six systems to gain a complete set of patient information. Data capture was found to vary between each pathway stage and between both patient cohort risk groupings. The highest level of information completeness (100%) was observed only in the discharge stage of the MI care pathway; the lowest level (58%) was observed in the admission stage. The study highlights fragmentation, CIS duplication, and discrepancies in current clinical information capture and data transfer across the MI care pathway in an acute cardiac care setting. The development of an integrated and user-friendly electronic data capture and transfer system would reduce duplication and facilitate efficient and complete information provision at the point of care. © 2012 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Rasmussen, K. L.; Prein, A. F.; Rasmussen, R. M.; Ikeda, K.; Liu, C.
2017-11-01
Novel high-resolution convection-permitting regional climate simulations over the US employing the pseudo-global warming approach are used to investigate changes in the convective population and thermodynamic environments in a future climate. Two continuous 13-year simulations were conducted using (1) ERA-Interim reanalysis and (2) ERA-Interim reanalysis plus a climate perturbation for the RCP8.5 scenario. The simulations adequately reproduce the observed precipitation diurnal cycle, indicating that they capture organized and propagating convection that most climate models cannot adequately represent. This study shows that weak to moderate convection will decrease and strong convection will increase in frequency in a future climate. Analysis of the thermodynamic environments supporting convection shows that both convective available potential energy (CAPE) and convective inhibition (CIN) increase downstream of the Rockies in a future climate. Previous studies suggest that CAPE will increase in a warming climate; however, a corresponding increase in CIN acts as a balancing force that shifts the convective population by suppressing weak to moderate convection and provides an environment where CAPE can build to extreme levels that may result in more frequent severe convection. An idealized investigation of fundamental changes in the thermodynamic environment was conducted by shifting a standard atmospheric profile by ± 5 °C. When temperature is increased, both CAPE and CIN increase in magnitude, while the opposite is true for decreased temperatures. Thus, even in the absence of synoptic and mesoscale variations, a warmer climate will provide more CAPE and CIN that will shift the convective population, likely impacting water and energy budgets on Earth.
Rao, Anand B; Rubin, Edward S
2002-10-15
Capture and sequestration of CO2 from fossil fuel power plants is gaining widespread interest as a potential method of controlling greenhouse gas emissions. Performance and cost models of an amine (MEA)-based CO2 absorption system for postcombustion flue gas applications have been developed and integrated with an existing power plant modeling framework that includes multipollutant control technologies for other regulated emissions. The integrated model has been applied to study the feasibility and cost of carbon capture and sequestration at both new and existing coal-burning power plants. The cost of carbon avoidance was shown to depend strongly on assumptions about the reference plant design, details of the CO2 capture system design, interactions with other pollution control systems, and method of CO2 storage. The CO2 avoidance cost for retrofit systems was found to be generally higher than for new plants, mainly because of the higher energy penalty resulting from less efficient heat integration as well as site-specific difficulties typically encountered in retrofit applications. For all cases, a small reduction in CO2 capture cost was afforded by the SO2 emission trading credits generated by amine-based capture systems. Efforts are underway to model a broader suite of carbon capture and sequestration technologies for more comprehensive assessments in the context of multipollutant environmental management.
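The headline metric in such studies is the cost of CO2 avoided, which compares the capture plant with a reference plant without capture. The standard definition, consistent with how the abstract uses "cost of carbon avoidance", is:

```latex
% COE = levelized cost of electricity [$/MWh];
% e   = CO2 emission rate to the atmosphere [t CO2/MWh].
\text{Cost of CO$_2$ avoided } [\$/\mathrm{t\,CO_2}]
= \frac{\mathrm{COE}_{\mathrm{capture}} - \mathrm{COE}_{\mathrm{ref}}}
       {e_{\mathrm{ref}} - e_{\mathrm{capture}}}
```

The denominator uses emission rates rather than captured tonnage, which is why the energy penalty of the capture system (more fuel burned per MWh delivered) raises the avoidance cost even when capture efficiency is high.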
Integrative modeling and novel particle swarm-based optimal design of wind farms
NASA Astrophysics Data System (ADS)
Chowdhury, Souma
To meet the energy needs of the future, while seeking to decrease our carbon footprint, a greater penetration of sustainable energy resources such as wind energy is necessary. However, a consistent growth of wind energy (especially in the wake of unfortunate policy changes and reported under-performance of existing projects) calls for a paradigm shift in wind power generation technologies. This dissertation develops a comprehensive methodology to explore, analyze and define the interactions between the key elements of wind farm development, and establish the foundation for designing high-performing wind farms. The primary contribution of this research is the effective quantification of the complex combined influence of wind turbine features, turbine placement, farm-land configuration, nameplate capacity, and wind resource variations on the energy output of the wind farm. A new Particle Swarm Optimization (PSO) algorithm, uniquely capable of preserving population diversity while addressing discrete variables, is also developed to provide powerful solutions towards optimizing wind farm configurations. In conventional wind farm design, the major elements that influence the farm performance are often addressed individually. The failure to fully capture the critical interactions among these factors introduces important inaccuracies in the projected farm performance and leads to suboptimal wind farm planning. In this dissertation, we develop the Unrestricted Wind Farm Layout Optimization (UWFLO) methodology to model and optimize the performance of wind farms. The UWFLO method obviates traditional assumptions regarding (i) turbine placement, (ii) turbine-wind flow interactions, (iii) variation of wind conditions, and (iv) types of turbines (single/multiple) to be installed. The allowance of multiple turbines, which demands complex modeling, is rare in the existing literature. The UWFLO method also significantly advances the state of the art in wind farm optimization by allowing simultaneous optimization of the type and the location of the turbines. Layout optimization (using UWFLO) of a hypothetical 25-turbine commercial-scale wind farm provides a remarkable 4.4% increase in capacity factor compared to a conventional array layout. A further 2% increase in capacity factor is accomplished when the types of turbines are also optimally selected. The scope of turbine selection and placement however depends on the land configuration and the nameplate capacity of the farm. Such dependencies are not clearly defined in the existing literature. We develop response surface-based models, which implicitly employ UWFLO, to quantify and analyze the roles of these other crucial design factors in optimal wind farm planning. The wind pattern at a site can vary significantly from year to year, which is not adequately captured by conventional wind distribution models. The resulting ill-predictability of the annual distribution of wind conditions introduces significant uncertainties in the estimated energy output of the wind farm. A new method is developed to characterize these wind resource uncertainties and model the propagation of these uncertainties into the estimated farm output. The overall wind pattern/regime also varies from one region to another, which demands turbines with capabilities uniquely suited for different wind regimes. Using the UWFLO method, we model the performance potential of currently available turbines for different wind regimes, and quantify their feature-based expected market suitability. 
Such models can initiate an understanding of the product variation that current turbine manufacturers should pursue to adequately satisfy the needs of the naturally diverse wind energy market. The wind farm design problems formulated in this dissertation involve highly multimodal objective and constraint functions and a large number of continuous and discrete variables. An effective modification of the PSO algorithm is developed to address such challenging problems. Continuous search, as in conventional PSO, is implemented as the primary search strategy; discrete variables are then updated using a nearest-allowed-discrete-point criterion. Premature stagnation of particles due to loss of population diversity is one of the primary drawbacks of the basic PSO dynamics. A new measure of population diversity is formulated which, unlike existing metrics, captures both the overall spread and the distribution of particles in the variable space. This diversity metric is then used to apply (i) an adaptive repulsion away from the globally best solution in the case of continuous variables, and (ii) a stochastic update of the discrete variables. The new PSO algorithm provides competitive performance compared to a popular genetic algorithm when applied to a comprehensive set of 98 mixed-integer nonlinear programming problems.
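A minimal sketch of the two mixed-variable ingredients described above, the nearest-allowed-discrete-point update and a diversity-triggered repulsion. The spread measure, threshold, and repulsion schedule are illustrative; the dissertation's exact metric is not reproduced here:

```python
import numpy as np

def nearest_allowed(value, allowed):
    """Project a continuous PSO update onto the nearest allowed discrete level."""
    allowed = np.asarray(allowed)
    return allowed[np.argmin(np.abs(allowed - value))]

def spread(positions):
    """Mean particle distance to the swarm centroid: a simple spread measure
    (illustrative; the dissertation's metric also captures the distribution)."""
    centroid = positions.mean(axis=0)
    return np.linalg.norm(positions - centroid, axis=1).mean()

def velocity_update(v, x, p_best, g_best, diversity, div_min=0.05,
                    w=0.7, c1=1.5, c2=1.5, c3=1.5):
    """Standard PSO velocity update; if diversity collapses below a threshold,
    add a repulsion term away from the globally best solution."""
    r1, r2 = np.random.rand(2)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    if diversity < div_min:                   # adaptive repulsion kicks in
        v_new -= c3 * np.random.rand() * (g_best - x)
    return v_new
```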
Influence of atrial substrate on local capture induced by rapid pacing of atrial fibrillation.
Rusu, Alexandru; Jacquemet, Vincent; Vesin, Jean-Marc; Virag, Nathalie
2014-05-01
Preliminary studies showed that the septum area was the only location allowing local capture of both atria during rapid pacing of atrial fibrillation (AF) from a single site. The present model-based study investigated the influence of the atrial substrate on the ability to capture AF when pacing the septum. Three biophysical models of AF with an identical anatomy from human atria but with different AF substrates were used: (i) AF based on multiple wavelets, (ii) AF based on heterogeneities in vagal activation, and (iii) AF based on heterogeneities in repolarization. A fourth anatomical model without Bachmann's bundle (BB) was also implemented. Rapid pacing was applied from the septum at pacing cycle lengths in the range of 50-100% of the AF cycle length. Local capture was automatically assessed with 24 pairs of electrodes evenly distributed over the atrial surface. The results were averaged over 16 AF simulations. In the homogeneous substrate, AF capture could reach 80% of the atrial surface. Heterogeneities degraded the ability to capture during AF. In the vagal substrate, capture tended to be more regular, and the degradation of capture was not directly related to the spatial extent of the heterogeneities. In the third substrate, heterogeneities induced wave anchorings and wavebreaks even in areas close to the pacing site, with a more dramatic effect on AF capture. Finally, BB did not significantly affect the ability to capture. The AF substrate had a significant effect on rapid pacing outcomes. The response to therapeutic pacing may therefore be specific to each patient.
76 FR 53162 - Acceptance of Public Submissions Regarding the Study of Stable Value Contracts
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-25
... the risk of a run on a SVF? To the extent that SVC providers use value-at-risk (``VaR'') models, do such VaR models adequately assess the risk of loss resulting from such events or other possible but extremely unlikely events? Do other loss models more adequately assess the risk of loss, such as the...
Statistical inference for capture-recapture experiments
Pollock, Kenneth H.; Nichols, James D.; Brownie, Cavell; Hines, James E.
1990-01-01
This monograph presents a detailed, practical exposition on the design, analysis, and interpretation of capture-recapture studies. The Lincoln-Petersen model (Chapter 2) and the closed population models (Chapter 3) are presented only briefly because these models have been covered in detail elsewhere. The Jolly-Seber open population model, which is central to the monograph, is covered in detail in Chapter 4. In Chapter 5 we consider the "enumeration" or "calendar of captures" approach, which is widely used by mammalogists and other vertebrate ecologists. We strongly recommend that it be abandoned in favor of analyses based on the Jolly-Seber model. We consider 2 restricted versions of the Jolly-Seber model. We believe the first of these, which allows losses (mortality or emigration) but not additions (births or immigration), is likely to be useful in practice. Another series of restrictive models requires the assumptions of a constant survival rate or a constant survival rate and a constant capture rate for the duration of the study. Detailed examples are given that illustrate the usefulness of these restrictions. There often can be a substantial gain in precision over Jolly-Seber estimates. In Chapter 5 we also consider 2 generalizations of the Jolly-Seber model. The temporary trap response model allows newly marked animals to have different survival and capture rates for 1 period. The other generalization is the cohort Jolly-Seber model. Ideally all animals would be marked as young, and age effects considered by using the Jolly-Seber model on each cohort separately. In Chapter 6 we present a detailed description of an age-dependent Jolly-Seber model, which can be used when 2 or more identifiable age classes are marked. In Chapter 7 we present a detailed description of the "robust" design. Under this design each primary period contains several secondary sampling periods. We propose an estimation procedure based on closed and open population models that allows for heterogeneity and trap response of capture rates (hence the name robust design). We begin by considering just 1 age class and then extend to 2 age classes. When there are 2 age classes it is possible to distinguish immigrants and births. In Chapter 8 we give a detailed discussion of the design of capture-recapture studies. First, capture-recapture is compared to other possible sampling procedures. Next, the design of capture-recapture studies to minimize assumption violations is considered. Finally, we consider the precision of parameter estimates and present figures on proportional standard errors for a variety of initial parameter values to aid the biologist about to plan a study. A new program, JOLLY, has been written to accompany the material on the Jolly-Seber model (Chapter 4) and its extensions (Chapter 5). Another new program, JOLLYAGE, has been written for a special case of the age-dependent model (Chapter 6) where there are only 2 age classes. In Chapter 9 a brief description of the different versions of the 2 programs is given. Chapter 10 gives a brief description of some alternative approaches that were not considered in this monograph. We believe that an excellent overall view of capture-recapture models may be obtained by reading the monograph by White et al. (1982) emphasizing closed models and then reading this monograph where we concentrate on open models. The important recent monograph by Burnham et al. (1987) could then be read if there were interest in the comparison of different populations.
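As a pointer to the simplest member of this model family, the Lincoln-Petersen estimator (Chapter 2) for a closed two-sample study, together with Chapman's bias-corrected form, is:

```latex
% n_1 = animals marked in sample 1; n_2 = animals caught in sample 2;
% m_2 = marked animals recaptured in sample 2.
\hat{N}_{\mathrm{LP}} = \frac{n_1 n_2}{m_2},
\qquad
\hat{N}_{\mathrm{Chapman}} = \frac{(n_1 + 1)(n_2 + 1)}{m_2 + 1} - 1
```

The Jolly-Seber and robust-design models discussed in the monograph generalize this idea to open populations with time-varying survival and capture rates.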
Capture of Hypervelocity Particles with Low-Density Aerogel
NASA Technical Reports Server (NTRS)
Hoerz, Friedrich; Cintala, Mark J.; Zolensky, Michael E.; Bernhard, Ronald B.; Haynes, Gerald; See, Thomas H.; Tsou, Peter; Brownlee, Donald E.
1998-01-01
Recent impact experiments conducted at Johnson Space Center supported a space-exposed flight instrument called the orbital debris collector (ODC), assessing whether SiO2 aerogel performed adequately as a collector for capturing cosmic dust particles and/or manmade debris, or whether additional development is needed. The first ODC was flown aboard the Mir space station for 18 months, while the second will be flown aboard a spacecraft (Stardust, to be launched in 1999) that will encounter the comet Wild 2 and return to Earth. Aerogels are highly porous materials that decelerate high-velocity particles without substantial melting or modification of the particles' components; in other, denser materials, these particles would melt or vaporize upon impact. The experimental data in this report must be considered somewhat qualitative because they are characterized by substantial, if not intolerable, scatter, possibly due to experimental difficulties in duplicating given sets of initial impact conditions. Therefore, this report is a chronological guide to the experimenters' attempts, difficulties, progress, and evaluations for future tests.
Boron Neutron Capture Therapy - A Literature Review
Nedunchezhian, Kavitaa; Thiruppathy, Manigandan; Thirugnanamurthy, Sarumathi
2016-01-01
Boron Neutron Capture Therapy (BNCT) is an emerging radiation treatment modality that shows promise in treating cancer by selectively concentrating boron compounds in tumour cells and then subjecting the tumour cells to epithermal neutron beam radiation. BNCT is based upon the nuclear reaction that occurs when boron-10 (10B), a stable isotope, is irradiated with low-energy thermal neutrons to yield α particles (helium-4) and recoiling lithium-7 nuclei. A large number of 10B atoms have to be localized on or within neoplastic cells for BNCT to be effective, and an adequate number of thermal neutrons have to be absorbed by the 10B atoms to sustain a lethal 10B(n,α)7Li reaction. The most distinctive property of BNCT is that it can deposit an immense dose gradient between the tumour cells and normal cells. BNCT integrates the cellular targeting principle of chemotherapy with the gross anatomical localization of traditional radiotherapy.
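For reference, the capture reaction described above can be written out explicitly; the branching ratios and Q-values below are standard literature values, not figures taken from this review:

```latex
% The 10B thermal neutron capture reaction underlying BNCT
% (standard literature branching ratios and Q-values):
{}^{10}\mathrm{B} + n_{\mathrm{th}} \rightarrow [{}^{11}\mathrm{B}]^{*} \rightarrow
\begin{cases}
{}^{4}\mathrm{He}\,(\alpha) + {}^{7}\mathrm{Li} + 2.79\ \mathrm{MeV} & (\approx 6\%)\\
{}^{4}\mathrm{He}\,(\alpha) + {}^{7}\mathrm{Li}^{*} + 2.31\ \mathrm{MeV},\quad
{}^{7}\mathrm{Li}^{*} \rightarrow {}^{7}\mathrm{Li} + \gamma\,(0.48\ \mathrm{MeV}) & (\approx 94\%)
\end{cases}
```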
Eid, Daniel; Guzman-Rivero, Miguel; Rojas, Ernesto; Goicolea, Isabel; Hurtig, Anna-Karin; Illanes, Daniel; San Sebastian, Miguel
2018-01-01
This study evaluates the level of underreporting of the National Program of Leishmaniasis Control (NPLC) in two communities of Cochabamba, Bolivia, during the period 2013-2014. Montenegro skin test-confirmed cases of cutaneous leishmaniasis (CL) were identified through active surveillance during medical campaigns. These cases were compared with those registered in the NPLC by passive surveillance. After matching and cleaning data from the two sources, the total number of cases and the level of underreporting of the National Program were calculated using capture-recapture analysis. This analysis estimated that 86 cases of CL (95% confidence interval [CI]: 62.1-110.8) occurred in the study period in both communities. The level of underreporting of the NPLC in these communities was very high: 73.4% (95% CI: 63.1-81.5%). These results can be explained by the inaccessibility of health services and the centralization of NPLC activities. This information is important for establishing priorities among policy-makers and funding organizations as well as for implementing adequate intervention plans.
Ruhe, Katharina Maria; Wangmo, Tenzin; De Clercq, Eva; Badarau, Domnita Oana; Ansari, Marc; Kühne, Thomas; Niggli, Felix; Elger, Bernice Simone
2016-09-01
Adequate participation of children and adolescents in their healthcare is a value underlined by several professional associations. However, little guidance exists as to how this principle can be successfully translated into practice. A total of 52 semi-structured interviews were carried out with 19 parents, 17 children, and 16 pediatric oncologists. Questions pertained to participants' experiences with patient participation in communication and decision-making. Applied thematic analysis was used to identify themes with regard to participation. Three main themes were identified: (a) modes of participation that captured the different ways in which children and adolescents were involved in their healthcare; (b) regulating participation, that is, regulatory mechanisms that allowed children, parents, and oncologists to adapt patient involvement in communication and decision-making; and (c) other factors that influenced patient participation. This last theme included aspects that had an overall impact on how children participated. Patient participation in pediatrics is a complex issue and physicians face considerable challenges in facilitating adequate involvement of children and adolescents in this setting. Nonetheless, they occupy a central role in creating room for choice and guiding parents in involving their child. Adequate training of professionals to successfully translate the principle of patient participation into practice is required. What is Known: •Adequate participation of pediatric patients in communication and decision-making is recommended by professional guidelines but little guidance exists as to how to translate it into practice. What is New: •The strategies used by physicians, parents, and patients to achieve participation are complex and serve to both enable and restrict children's and adolescents' involvement.
NASA Astrophysics Data System (ADS)
Istanbulluoglu, E.; Vivoni, E. R.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
Landscape morphology has an important control on the spatial and temporal organization of basin hydrologic response to climate forcing, affecting soil moisture redistribution as well as vegetation function. On the other hand, erosion, driven by hydrology and modulated by vegetation, produces landforms over geologic time scales that reflect characteristic signatures of the dominant land forming process. Responding to extreme climate events or anthropogenic disturbances of the land surface, infrequent but rapid forms of erosion (e.g., arroyo development, landsliding) can modify topography such that basin hydrology is significantly influenced. Despite significant advances in both hydrologic and geomorphic modeling over the past two decades, the dynamic interactions between basin hydrology, geomorphology and terrestrial ecology are not adequately captured in current model frameworks. In order to investigate hydrologic-geomorphic-ecologic interactions at the basin scale we present initial efforts in integrating the CHILD landscape evolution model (Tucker et al. 2001) with the tRIBS hydrology model (Ivanov et al. 2004), both developed in a common software environment. In this talk, we present preliminary results of the numerical modeling of the coupled evolution of basin hydro-geomorphic response and resulting landscape morphology in two sets of examples. First, we discuss the long-term evolution of both the hydrologic response and the resulting basin morphology from an initially uplifted plateau. In the second set of modeling experiments, we implement changes in climate and land-use to an existing topography and compare basin hydrologic response to the model results when landscape form is fixed (e.g. no coupling between hydrology and geomorphology). Model results stress the importance of internal basin dynamics, including runoff generation mechanisms and hydrologic states, in shaping hydrologic response as well as the importance of employing comprehensive conceptualizations of hydrology in modeling landscape evolution.
Modelling the growth of Populus species using Ecosystem Demography (ED) model
NASA Astrophysics Data System (ADS)
Wang, D.; Lebauer, D. S.; Feng, X.; Dietze, M. C.
2010-12-01
Hybrid poplar plantations are an important candidate source for biomass production. Effective management of such plantations requires adequate growth and yield models. The Ecosystem Demography model (ED) makes predictions about the large scales of interest in above- and belowground ecosystem structure and the fluxes of carbon and water from a description of fine-scale physiological processes. In this study, we used a workflow management tool, the Predictive Ecophysiological Carbon flux Analyzer (PECAn), to integrate literature data, field measurements and the ED model to provide predictions of ecosystem functioning. Parameters for the ED ensemble runs were sampled from the posterior distribution of ecophysiological traits of Populus species compiled from the literature using a Bayesian meta-analysis approach. Sensitivity analysis was performed to identify the parameters that contribute the most to the uncertainties of the ED model output. Model emulation techniques were used to update parameter posterior distributions using field-observed data from hybrid poplar plantations in northern Wisconsin. Model results were evaluated with 5-year field-observed data from a hybrid poplar plantation at New Franklin, MO. ED was then used to predict the spatial variability of poplar yield in the coterminous United States (the United States minus Alaska and Hawaii). Sensitivity analysis showed that root respiration, dark respiration, growth respiration, stomatal slope and specific leaf area contribute the most to the uncertainty, which suggests that our field measurements and data collection should focus on these parameters. The ED model successfully captured the inter-annual and spatial variability of poplar yield. Analyses in progress with the ED model focus on evaluating the ecosystem services of short-rotation woody plantations, such as impacts on soil carbon storage, water use, and nutrient retention.
Towards an integrated model of floodplain hydrology representing feedbacks and anthropogenic effects
NASA Astrophysics Data System (ADS)
Andreadis, K.; Schumann, G.; Voisin, N.; O'Loughlin, F.; Tesfa, T. K.; Bates, P.
2017-12-01
The exchange of water between hillslopes, river channels and floodplain can be quite complex and the difficulty in capturing the mechanisms behind it is exacerbated by the impact of human activities such as irrigation and reservoir operations. Although there has been a vast body of work on modeling hydrological processes, most of the resulting models have been limited with regards to aspects of the coupled human-natural system. For example, hydrologic models that represent processes such as evapotranspiration, infiltration, interception and groundwater dynamics often neglect anthropogenic effects or do not adequately represent the inherently two-dimensional floodplain flow. We present an integrated modeling framework that is comprised of the Variable Infiltration Capacity (VIC) hydrology model, the LISFLOOD-FP hydrodynamic model, and the Water resources Management (WM) model. The VIC model solves the energy and water balance over a gridded domain and simulates a number of hydrologic features such as snow, frozen soils, lakes and wetlands, while also representing irrigation demand from cropland areas. LISFLOOD-FP solves an approximation of the Saint-Venant equations to efficiently simulate flow in river channels and the floodplain. The implementation of WM accommodates a variety of operating rules in reservoirs and withdrawals due to consumptive demands, allowing the successful simulation of regulated flow. The models are coupled so as to allow feedbacks between their corresponding processes, therefore providing the ability to test different hypotheses about the floodplain hydrology of large-scale basins. We test this integrated framework over the Zambezi River basin by simulating its hydrology from 2000-2010, and evaluate the results against remotely sensed observations. Finally, we examine the sensitivity of streamflow and water inundation to changes in reservoir operations, precipitation and temperature.
NASA Astrophysics Data System (ADS)
González, S. J.; Pozzi, E. C. C.; Monti Hughes, A.; Provenzano, L.; Koivunoro, H.; Carando, D. G.; Thorp, S. I.; Casal, M. R.; Bortolussi, S.; Trivillin, V. A.; Garabalino, M. A.; Curotto, P.; Heber, E. M.; Santa Cruz, G. A.; Kankaanranta, L.; Joensuu, H.; Schwint, A. E.
2017-10-01
Boron neutron capture therapy (BNCT) is a treatment modality that combines different radiation qualities. Since the severity of biological damage following irradiation depends on the radiation type, a quantity different from absorbed dose is required to explain the effects observed in clinical BNCT in terms of outcome compared with conventional photon radiation therapy. A new approach for calculating photon iso-effective doses in BNCT was introduced previously. The present work extends this model to include information from dose-response assessments in animal models and humans. Parameters of the model were determined for tumour and precancerous tissue using dose-response curves obtained from BNCT and photon studies performed in the hamster cheek pouch in vivo models of oral cancer and/or pre-cancer, and from head and neck cancer radiotherapy data with photons. To this end, suitable expressions of the dose-limiting Normal Tissue Complication and Tumour Control Probabilities for the reference radiation and for the mixed-field BNCT radiation were developed. Pearson's correlation coefficients and p-values showed that the TCP and NTCP models agreed with experimental data (with r > 0.87 and p-values > 0.57). The photon iso-effective dose model was applied retrospectively to evaluate the dosimetry in tumours and mucosa for head and neck cancer patients treated with BNCT in Finland. Photon iso-effective doses in tumour were lower than those obtained with the standard RBE-weighted model (between 10% and 45%). The results also suggested that the probabilities of tumour control derived from photon iso-effective doses are more adequate to explain the clinical responses than those obtained with the RBE-weighted values. The dosimetry in the mucosa revealed that the photon iso-effective doses were about 30% to 50% higher than the corresponding RBE-weighted values. While the RBE-weighted doses are unable to predict mucosa toxicity, predictions based on the proposed model are compatible with the observed clinical outcome. The extension of the photon iso-effective dose model has allowed, for the first time, the determination of the photon iso-effective dose for unacceptable complications in the dose-limiting normal tissue. Finally, the formalism developed in this work to compute photon-equivalent doses can be applied to other therapies that combine mixed radiation fields, such as hadron therapy.
A Method to Capture Macroslip at Bolted Interfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, Ronald Neil; Heitman, Lili Anne Akin
2015-10-01
Relative motion at bolted connections can occur under large shock loads as the internal shear force in the bolted connection overcomes the frictional resistive force. This macroslip in a structure dissipates energy and reduces the response of the components above the bolted connection. There is a need to be able to capture macroslip behavior in a structural dynamics model. Linear models and many nonlinear models are not able to predict macroslip effectively. The proposed method to capture macroslip is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded, and the pinning effect that occurs when a bolt shank impacts the inside diameter of a through hole is captured. Substructure representations of the components are included to account for component flexibility and dynamics. This method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test the adequacy of the method.
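As background to what capturing macroslip entails, a one-degree-of-freedom elastic-Coulomb (Jenkins) element is the minimal construction that exhibits it: the joint responds elastically until the interface force reaches the friction limit, then slips at constant force. The sketch below is a generic illustration with hypothetical parameter values, not the authors' ADAMS joint model.

```python
# One-DOF elastic-Coulomb (Jenkins) element, the simplest model showing
# macroslip. Generic illustration with hypothetical parameters, not the
# authors' ADAMS model.
def jenkins_force(u_history, k=1e6, mu=0.4, N=5e3):
    """March through imposed joint displacements; return interface forces."""
    forces, w = [], 0.0              # w: accumulated slip displacement
    f_limit = mu * N                 # Coulomb friction limit
    for u in u_history:
        trial = k * (u - w)          # elastic trial force
        if abs(trial) > f_limit:     # macroslip: slider moves, force saturates
            w = u - (f_limit if trial > 0 else -f_limit) / k
            trial = k * (u - w)
        forces.append(trial)
    return forces
```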
A mechanistic model to predict the capture of gas phase mercury species using in-situ generated titania nanosize particles activated by UV irradiation is developed. The model is an extension of a recently reported model [1] for photochemical reactions that accounts for the rates of...
Physics of Intact Capture of Cometary Coma Dust Samples
NASA Astrophysics Data System (ADS)
Anderson, William
2011-06-01
In 1986, Tom Ahrens and I developed a simple model for hypervelocity capture in low density foams, aimed in particular at the suggestion that such techniques could be used to capture dust during flyby of an active comet nucleus. While the model was never published in printed form, it became known to many in the cometary dust sampling community. More sophisticated models have been developed since, but our original model still retains superiority for some applications and elucidates the physics of the capture process in a more intuitive way than the more recent models. The model makes use of the small value of the Hugoniot intercept typical of highly distended media to invoke analytic expressions with functional forms common to fluid dynamics. The model successfully describes the deceleration and ablation of a particle that is large enough to see the foam as a low density continuum. I will present that model, updated with improved calculations of the temperature in the shocked foam, and show its continued utility in elucidating the phenomena of hypervelocity penetration of low-density foams.
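Although the original model was never published, its continuum-drag core can be sketched. For a particle of mass m and cross-section A decelerating in foam of density ρf, a generic fluid-like drag law gives the following (a schematic reconstruction under stated assumptions, not necessarily the authors' exact formulation):

```latex
% Generic continuum drag deceleration in a low-density foam; a schematic
% reconstruction, not necessarily the authors' exact formulation.
m\,\frac{dv}{dt} = -\tfrac{1}{2}\,C_D\,\rho_f\,A\,v^{2}
\quad\Longrightarrow\quad
v(x) = v_0\,e^{-x/L},
\qquad
L = \frac{2m}{C_D\,\rho_f\,A} \propto \frac{\rho_p\,d}{\rho_f}
```

Here the stopping scale L grows with the particle's density ρp and diameter d and shrinks with the foam density, consistent with capture in highly distended media being comparatively gentle.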
Influence of dissolved organic matter on the complexation of mercury under sulfidic conditions.
Miller, Carrie L; Mason, Robert P; Gilmour, Cynthia C; Heyes, Andrew
2007-04-01
The complexation of Hg under sulfidic conditions influences its bioavailability for microbial methylation. Neutral dissolved Hg-sulfide complexes are readily available to Hg-methylating bacteria in culture, and thermodynamic models predict that inorganic Hg-sulfide complexes dominate dissolved Hg speciation under natural sulfidic conditions. However, these models have not been validated in the field. To examine the complexation of Hg in natural sulfidic waters, octanol/water partitioning methods were modified for use under environmentally relevant conditions, and a centrifuge ultrafiltration technique was developed. These techniques demonstrated much lower concentrations of dissolved Hg-sulfide complexes than predicted. Furthermore, the study revealed an interaction between Hg, dissolved organic matter (DOM), and sulfide that is not captured by current thermodynamic models. Whereas Hg forms strong complexes with DOM under oxic conditions, these complexes had not been expected to form in the presence of sulfide because of the stronger affinity of Hg for sulfide relative to its affinity for DOM. The observed interaction between Hg and DOM in the presence of sulfide likely involves the formation of a DOM-Hg-sulfide complex or results from the hydrophobic partitioning of neutral Hg-sulfide complexes into the higher-molecular-weight DOM. An understanding of the mechanism of this interaction and determination of complexation coefficients for the Hg-sulfide-DOM complex are needed to adequately assess how our new finding affects Hg bioavailability, sorption, and flux.
Ab initio phonon point defect scattering and thermal transport in graphene
NASA Astrophysics Data System (ADS)
Polanco, Carlos A.; Lindsay, Lucas
2018-01-01
We study the scattering of phonons from point defects and their effect on lattice thermal conductivity κ using a parameter-free ab initio Green's function methodology. Specifically, we focus on the scattering of phonons by boron (B), nitrogen (N), and phosphorus substitutions as well as single- and double-carbon vacancies in graphene. We show that changes of the atomic structure and harmonic interatomic force constants locally near defects govern the strength and frequency trends of the scattering of out-of-plane acoustic (ZA) phonons, the dominant heat carriers in graphene. ZA scattering rates due to N substitutions are nearly an order of magnitude smaller than those for B defects despite having similar mass perturbations. Furthermore, ZA phonon scattering rates from N defects decrease with increasing frequency in the lower-frequency spectrum, in stark contrast to expected trends from simple models. ZA phonon-vacancy scattering rates are found to have a significantly softer frequency dependence (~ω⁰) in graphene than typically employed in phenomenological models. The rigorous Green's function calculations demonstrate that typical mass-defect models do not adequately describe ZA phonon-defect scattering rates. Our ab initio calculations capture well the trend of κ vs vacancy density from experiments, though not the magnitudes. This work elucidates important insights into phonon-defect scattering and thermal transport in graphene, and demonstrates the applicability of first-principles methods toward describing these properties in imperfect materials.
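For comparison, the perturbative mass-defect rate typically employed in such phenomenological models has the standard Tamura form below (a textbook expression, not one taken from this paper). For the quadratic ZA dispersion of graphene the two-dimensional density of states D(ω) is roughly constant, so this form predicts a ~ω² rise, in contrast to the ~ω⁰ vacancy scattering found here:

```latex
% Standard perturbative (Tamura-type) mass-defect scattering rate, up to a
% numerical prefactor; f_i is the fraction of sites with mass m_i and
% \bar{m} the average mass. Textbook form, not from this paper.
\tau^{-1}(\omega) \propto \Gamma\,\omega^{2} D(\omega),
\qquad
\Gamma = \sum_i f_i \left(1 - \frac{m_i}{\bar{m}}\right)^{2}
```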
NASA Astrophysics Data System (ADS)
Vibhava, F.; Graham, W. D.; De Rooij, R.; Maxwell, R. M.; Martin, J. B.; Cohen, M. J.
2011-12-01
The Santa Fe River Basin (SFRB) consists of three linked hydrologic units: the upper confined region (UCR), the semi-confined transitional region (Cody Escarpment, CE) and the lower unconfined region (LUR). Contrasting geological characteristics among these units affect streamflow generation processes. In the UCR, surface runoff and surficial stores dominate, whereas in the LUR minimal surface runoff occurs and flow is dominated by groundwater sources and sinks. In the CE region the Santa Fe River (SFR) is captured entirely by a sinkhole into the Floridan aquifer, emerging as a first-magnitude spring 6 km to the south. In light of these contrasting hydrological settings, developing a predictive, basin-scale, physically-based hydrologic simulation model remains a research challenge. This ongoing study aims to assess the ability of a fully-coupled, physically-based three-dimensional hydrologic model (PARFLOW-CLM) to predict hydrologic conditions in the SFRB. The assessment will include testing the model's ability to adequately represent surface and subsurface flow sources, flow paths, and travel times within the basin as well as the surface-groundwater exchanges throughout the basin. In addition to simulating water fluxes, we are also collecting high-resolution specific conductivity data at 10 locations throughout the river. Our objective is to exploit hypothesized strong end-member separation between riverine source water geochemistry to further refine the PARFLOW-CLM representation of riverine mixing and delivery dynamics.
NASA Astrophysics Data System (ADS)
Nomaguch, Yutaka; Fujita, Kikuo
This paper proposes a design support framework, named DRIFT (Design Rationale Integration Framework of Three layers), which dynamically captures and manages hypothesis and verification in the design process. The core of DRIFT is a three-layered design process model of action, model operation and argumentation. This model integrates various design support tools and captures design operations performed on them. The action level captures the sequence of design operations. The model operation level captures the transition of design states, which records a design snapshot over design tools. The argumentation level captures the process of setting problems and alternatives. The linkage of the three levels enables iterative hypothesis-and-verification processes to be captured and managed automatically and efficiently through design operations over design tools. In DRIFT, such a linkage is extracted through templates of design operations, which are derived from the patterns embedded in design tools such as Design-For-X (DFX) approaches, and design tools are integrated through an ontology-based representation of design concepts. An argumentation model, gIBIS (graphical Issue-Based Information System), is used for representing dependencies among problems and alternatives. A mechanism of TMS (Truth Maintenance System) is used for managing multiple hypothetical design stages. This paper also demonstrates a prototype implementation of DRIFT and its application to a simple design problem. Further, it concludes with a discussion of some future issues.
Casula, P.; Nichols, J.D.
2003-01-01
When capturing and marking of individuals is possible, the application of newly developed capture-recapture models can remove several sources of bias in the estimation of population parameters such as local abundance and sex ratio. For example, observation of distorted sex ratios in counts or captures can reflect either different abundances of the sexes or different sex-specific capture probabilities, and capture-recapture models can help distinguish between these two possibilities. Robust design models and a model selection procedure based on information-theoretic methods were applied to study the local population structure of the endemic Sardinian chalk hill blue butterfly, Polyommatus coridon gennargenti. Seasonal variations of abundance, plus daily and weather-related variations of active populations of males and females were investigated. Evidence was found of protandry and male pioneering of the breeding space. Temporary emigration probability, which describes the proportion of the population not exposed to capture (e.g. absent from the study area) during the sampling process, was estimated, differed between sexes, and was related to temperature, a factor known to influence animal activity. The correlation between temporary emigration and average daily temperature suggested interpreting temporary emigration as inactivity of animals. Robust design models were used successfully to provide a detailed description of the population structure and activity in this butterfly and are recommended for studies of local abundance and animal activity in the field.
Attrition-enhanced sulfur capture by limestone particles in fluidized beds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saastamoinen, J.J.; Shimizu, T.
2007-02-14
Sulfur capture by limestone particles in fluidized beds is a well-established technology. The underlying chemical and physical phenomena of the process have been extensively studied and modeled. However, most of the studies have been focused on the relatively brief initial stage of the process, which extends from a few minutes to hours, yet the residence time of the particles in the boiler is much longer. Following the initial stage, a dense product layer will be formed on the particle surface, which decreases the rate of sulfur capture and the degree of utilization of the sorbent. Attrition can enhance sulfur capture by removing this layer. A particle model for sulfur capture has been incorporated with an attrition model. After the initial stage, the rate of sulfur capture stabilizes, so that attrition removes the surface at the same rate as diffusion and chemical reaction produce new product in a thin surface layer of a particle. An analytical solution for the conversion of particles for this regime is presented. The solution includes the effects of the attrition rate, diffusion, chemical kinetics, pressure, and SO2 concentration, relative to conversion-dependent diffusivity and the rate of chemical reaction. The particle model results in models that describe the conversion of limestone in both fly ash and bottom ash. These are incorporated with the residence time (or reactor) models to calculate the average conversion of the limestone in fly ash and bottom ash, as well as the efficiency of sulfur capture. Data from a large-scale pressurized fluidized bed are compared with the model results.
A new capture fraction method to map how pumpage affects surface water flow.
Leake, Stanley A; Reeves, Howard W; Dickinson, Jesse E
2010-01-01
All groundwater pumped is balanced by removal of water somewhere, initially from storage in the aquifer and later from capture in the form of increase in recharge and decrease in discharge. Capture that results in a loss of water in streams, rivers, and wetlands now is a concern in many parts of the United States. Hydrologists commonly use analytical and numerical approaches to study temporal variations in sources of water to wells for select points of interest. Much can be learned about coupled surface/groundwater systems, however, by looking at the spatial distribution of theoretical capture for select times of interest. Development of maps of capture requires (1) a reasonably well-constructed transient or steady state model of an aquifer with head-dependent flow boundaries representing surface water features or evapotranspiration and (2) an automated procedure to run the model repeatedly and extract results, each time with a well in a different location. This paper presents new methods for simulating and mapping capture using three-dimensional groundwater flow models and presents examples from Arizona, Oregon, and Michigan.
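The automated procedure described above can be summarized schematically. In the sketch below, run_model and boundary_outflow are hypothetical stand-ins for a MODFLOW-style simulation call and the post-processing of its head-dependent-boundary budget terms; this is a reconstruction of the idea, not the authors' code.

```python
# Schematic capture-map loop: re-run a groundwater model once per candidate
# well cell and map the fraction of pumpage captured from head-dependent
# boundaries. run_model and boundary_outflow are hypothetical stand-ins for
# a MODFLOW-style run and its budget post-processing.
import numpy as np

def capture_fraction_map(grid_cells, pumpage, run_model, boundary_outflow):
    """Capture fraction = (baseline boundary outflow - outflow with well) / pumpage."""
    q0 = boundary_outflow(run_model(wells=[]))        # baseline, no added well
    cmap = np.empty(len(grid_cells))
    for i, cell in enumerate(grid_cells):
        qi = boundary_outflow(run_model(wells=[(cell, pumpage)]))
        cmap[i] = (q0 - qi) / pumpage                 # share of pumpage captured
    return cmap
```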
Hopfe, Maren; Stucki, Gerold; Marshall, Ric; Twomey, Conal D; Üstün, T Bedirhan; Prodinger, Birgit
2016-02-03
Contemporary casemix systems for health services need to ensure that payment rates adequately account for actual resource consumption based on patients' needs for services. It has been argued that functioning information, as one important determinant of health service provision and resource use, should be taken into account when developing casemix systems. However, there has to date been little systematic collation of the evidence on the extent to which the addition of functioning information into existing casemix systems adds value to those systems with regard to the predictive power and resource variation explained by the groupings of these systems. Thus, the objective of this research was to examine the value of adding functioning information into casemix systems with respect to the prediction of resource use as measured by costs and length of stay. A systematic literature review was performed. Peer-reviewed studies, published before May 2014, were retrieved from CINAHL, EconLit, Embase, JSTOR, PubMed and Sociological Abstracts using keywords related to functioning ('Functioning', 'Functional status', 'Function*', 'ICF', 'International Classification of Functioning, Disability and Health', 'Activities of Daily Living' or 'ADL') and casemix systems ('Casemix', 'case mix', 'Diagnosis Related Groups', 'Function Related Groups', 'Resource Utilization Groups' or 'AN-SNAP'). In addition, a hand search of reference lists of included articles was conducted. Information about study aims, design, country, setting, methods, outcome variables, study results, and information regarding the authors' discussion of results, study limitations and implications was extracted. The ten included studies provided evidence demonstrating that adding functioning information into casemix systems improves predictive ability and fosters homogeneity in casemix groups with regard to costs and length of stay. Collection and integration of functioning information varied across studies. Results suggest that, in particular, DRG casemix systems can be improved in predicting resource use and capturing outcomes for frail elderly or severely functioning-impaired patients. Further exploration of the value of adding functioning information into casemix systems is one promising approach to improve casemix systems' ability to adequately capture the differences in patients' needs for services and to better predict resource use.
Analysis of rubber supply in Sri Lanka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartley, M.J.; Nerlove, M.; Peters, R.K. Jr.
1987-11-01
An analysis of the supply response for perennial crops is undertaken for rubber in Sri Lanka, focusing on the uprooting-replanting decision and disaggregating the typical reduced-form supply response equation into several structural relationships. This approach is compared and contrasted with Dowling's analysis of supply response for rubber in Thailand, which is based upon a sophisticated reduced-form supply function developed by Wickens and Greenfield for Brazilian coffee. Because the uprooting-replanting decision is central to understanding rubber supply response in Sri Lanka and for other perennial crops where replanting activities dominate new planting, the standard approaches do not adequately capture supply response.
Survivors in the Margins: The Invisibility of Violence Against Older Women.
Crockett, Cailin; Brandl, Bonnie; Dabby, Firoza Chic
2015-01-01
Violence against older women exists in the margins between domestic violence and elder abuse, with neither field adequately capturing the experiences of older women survivors of intimate partner violence (IPV). This commentary explores this oversight, identifying how the lack of gender analysis in the elder abuse field exacerbates older survivors' invisibility when the wider violence against women (VAW) field lacks a lifespan approach to abuse. Examining the impact of generational and aging factors on how older women experience IPV, we assert that the VAW field may be overlooking a wider population of survivors than previously thought.
Population-Attributable Risk Percentages for Racialized Risk Environments
Arriola, Kimberly Jacob; Haardörfer, Regine; McBride, Colleen M.
2016-01-01
Research about relationships between place characteristics and racial/ethnic inequities in health has largely ignored conceptual advances about race and place within the discipline of geography. Research has also almost exclusively quantified these relationships using effect estimates (e.g., odds ratios), statistics that fail to adequately capture the full impact of place characteristics on inequities and thus undermine our ability to translate research into action. We draw on geography to further develop the concept of “racialized risk environments,” and we argue for the routine calculation of race/ethnicity-specific population-attributable risk percentages.
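For context, the population-attributable risk percentage whose routine calculation the authors advocate is conventionally computed from the exposure prevalence p_e and the relative risk RR as follows (the standard epidemiologic formula, not one specific to this paper):

```latex
% Standard population-attributable risk percentage, with p_e the prevalence
% of the exposure and RR the relative risk:
\mathrm{PAR\%} = \frac{p_e\,(RR - 1)}{1 + p_e\,(RR - 1)} \times 100
```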
NASA Astrophysics Data System (ADS)
Taddele, Y. D.; Ayana, E.; Worqlul, A. W.; Srinivasan, R.; Gerik, T.; Clarke, N.
2017-12-01
The research presented in this paper was conducted in Ethiopia, located in the Horn of Africa. The Ethiopian economy largely depends on rainfed agriculture, which employs 80% of the labor force. Rainfed agriculture is frequently affected by droughts and dry spells. Small-scale irrigation is considered a lifeline for the livelihoods of smallholder farmers in Ethiopia. Biophysical models are widely used to determine the agricultural production, environmental sustainability, and socio-economic outcomes of small-scale irrigation in Ethiopia. However, detailed spatially explicit data are not adequately available to calibrate and validate simulations from biophysical models. The Soil and Water Assessment Tool (SWAT) model was set up using finer-resolution spatial and temporal data. The actual evapotranspiration (AET) estimates from the SWAT model were compared with two remotely sensed datasets, namely the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). The performance of the monthly satellite data was evaluated with the correlation coefficient (R²) over the different land use groups. The results indicated that, at both long-term and monthly scales, the AVHRR AET captures the pattern of SWAT-simulated AET reasonably well, especially on agriculture-dominated landscapes. A comparison between SWAT-simulated AET and AVHRR AET provided mixed results on grassland-dominated landscapes and poor agreement on forest-dominated landscapes. Results showed that the AVHRR AET products showed superior agreement with the SWAT-simulated AET compared with MODIS AET. This suggests that remotely sensed products can be used as a valuable tool in properly modeling small-scale irrigation.
Wave propagation in ordered, disordered, and nonlinear photonic band gap materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lidorikis, Elefterios
Photonic band gap materials are artificial dielectric structures that give the promise of molding and controlling the flow of optical light the same way semiconductors mold and control the electric current flow. In this dissertation the author studied two areas of photonic band gap materials. The first area is focused on the properties of one-dimensional PBG materials doped with Kerr-type nonlinear material, while the second area is focused on the mechanisms responsible for the gap formation as well as other properties of two-dimensional PBG materials. He first studied, in Chapter 2, the general adequacy of an approximate structure model in which the nonlinearity is assumed to be concentrated in equally-spaced very thin layers, or δ-functions, while the rest of the space is linear. This model had been used before, but its range of validity and the physical reasons for its limitations were not quite clear yet. He performed an extensive examination of many aspects of the model's nonlinear response and comparison against more realistic models with finite-width nonlinear layers, and found that the δ-function model is quite adequate, capturing the essential features in the transmission characteristics. The author found one exception, arising from the deficiency of possessing a rigid bottom band edge, i.e. the upper edge of the gaps is always independent of the refraction index contrast. This causes the model to mispredict that there are no soliton solutions for a positive Kerr coefficient, something known to be untrue.
Model Uncertainties for Valencia RPA Effect for MINERvA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gran, Richard
2017-05-08
This technical note describes the application of the Valencia RPA multi-nucleon effect and its uncertainty to QE reactions from the GENIE neutrino event generator. The analysis of MINERvA neutrino data in the Rodrigues et al., PRL 116, 071802 (2016) paper makes clear the need for an RPA suppression, especially at very low momentum and energy transfer. That published analysis does not constrain the magnitude of the effect; it only tests models with and without the effect against the data. Other MINERvA analyses need an expression of the model uncertainty in the RPA effect. A well-described uncertainty can be used for systematics for unfolding, for model errors in the analysis of non-QE samples, and as input for fitting exercises for model testing or constraining backgrounds. This prescription takes uncertainties on the parameters in the Valencia RPA model and adds a (not-as-tight) constraint from muon capture data. For MINERvA we apply it as a 2D ($q_0$, $q_3$) weight to GENIE events, in lieu of generating full beyond-Fermi-gas quasielastic events. Because it is a weight, it can be applied to the generated and fully Geant4-simulated events used in analysis without a special GENIE sample. For some limited uses, it could be cast as a 1D $Q^2$ weight without much trouble. This procedure is a suitable starting point for NOvA and DUNE, where the energy dependence is modest, but probably not adequate for T2K or MicroBooNE.
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
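The central idea, a linear sensitivity matrix mapping mechanical alignment parameters to wavefront coefficients fitted jointly over field points, can be sketched as follows. The sensitivity matrix, dimensions, and noise level are illustrative assumptions, not values from the WFIRST model.

```python
# Sketch of the linear-optical-model idea: wavefront coefficients at each
# field point are a linear function of mechanical alignment parameters, and
# one joint least-squares fit over all field points recovers the state.
# The sensitivity matrix, dimensions, and noise level are illustrative.
import numpy as np

n_field, n_zern, n_align = 25, 36, 10
rng = np.random.default_rng(0)

A = rng.normal(size=(n_field, n_zern, n_align))   # d(wavefront)/d(alignment) per field point
x_true = rng.normal(size=n_align)                 # true misalignment state

# "Measured" wavefront coefficients at every field point (plus noise),
# standing in for per-field phase retrieval output
W = A @ x_true + 1e-3 * rng.normal(size=(n_field, n_zern))

# Joint fit: stack all field points into one linear system
x_hat, *_ = np.linalg.lstsq(A.reshape(-1, n_align), W.ravel(), rcond=None)
print(np.allclose(x_hat, x_true, atol=1e-2))      # alignment state recovered
```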
Stress induced phase transitions in silicon
NASA Astrophysics Data System (ADS)
Budnitzki, M.; Kuna, M.
2016-10-01
Silicon has a tremendous importance as an electronic, structural and optical material. Modeling the interaction of a silicon surface with a pointed asperity at room temperature is a major step towards the understanding of various phenomena related to brittle as well as ductile regime machining of this semiconductor. If subjected to pressure or contact loading, silicon undergoes a series of stress-driven phase transitions accompanied by large volume changes. In order to understand the material's response for complex non-hydrostatic loading situations, dedicated constitutive models are required. While a significant body of literature exists for the dislocation dominated high-temperature deformation regime, the constitutive laws used for the technologically relevant rapid low-temperature loading have severe limitations, as they do not account for the relevant phase transitions. We developed a novel finite deformation constitutive model set within the framework of thermodynamics with internal variables that captures the stress induced semiconductor-to-metal (cd-Si → β-Si), metal-to-amorphous (β-Si → a-Si) as well as amorphous-to-amorphous (a-Si → hda-Si, hda-Si → a-Si) transitions. The model parameters were identified in part directly from diamond anvil cell data and in part from instrumented indentation by the solution of an inverse problem. The constitutive model was verified by successfully predicting the transformation stress under uniaxial compression and load-displacement curves for different indenters for single loading-unloading cycles as well as repeated indentation. To the authors' knowledge this is the first constitutive model that is able to adequately describe cyclic indentation in silicon.
Systems Analysis of Physical Absorption of CO2 in Ionic Liquids for Pre-Combustion Carbon Capture.
Zhai, Haibo; Rubin, Edward S
2018-04-17
This study develops an integrated technical and economic modeling framework to investigate the feasibility of ionic liquids (ILs) for precombustion carbon capture. The IL 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide is modeled as a potential physical solvent for CO2 capture at integrated gasification combined cycle (IGCC) power plants. The analysis reveals that the energy penalty of the IL-based capture system comes mainly from compression of the process and product streams and from solvent pumping, while the major capital cost components are the compressors and absorbers. On the basis of the plant-level analysis, the cost of CO2 avoided by the IL-based capture and storage system is estimated to be $63 per tonne of CO2. Technical and economic comparisons between IL- and Selexol-based capture systems at the plant level show that an IL-based system could be a feasible option for CO2 capture. Improving the CO2 solubility of ILs can simplify the capture process configuration and lower the process energy and cost penalties to further enhance the viability of this technology.
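The cost-of-CO2-avoided metric quoted above is conventionally defined by comparing the levelized cost of electricity (LCOE) and the CO2 emission rates of plants with and without capture; this is the standard definition used in the carbon-capture costing literature, not a formula unique to this study:

```latex
% Standard definition of the cost of CO2 avoided, comparing a plant with
% capture and storage (ccs) against a reference plant without (ref):
\mathrm{Cost\ of\ CO_2\ avoided}\ [\$/\mathrm{tCO_2}] =
\frac{(\mathrm{LCOE})_{ccs} - (\mathrm{LCOE})_{ref}}
     {(\mathrm{tCO_2/MWh})_{ref} - (\mathrm{tCO_2/MWh})_{ccs}}
```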
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately.
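The adjustment described above, dividing the number of individuals sampled by a modeled capture probability, can be sketched as follows. The logistic coefficients are hypothetical placeholders, not the values estimated in the study.

```python
# Sketch of abundance estimation via a modeled capture probability:
# N_hat = catch / p, with p from a logistic model. Coefficients are
# hypothetical placeholders, not the study's estimates.
import math

def capture_probability(length_mm, n_passes, depth_m):
    # Hypothetical model: p rises with fish length and number of passes,
    # falls with mean thalweg depth.
    eta = -1.5 + 0.008 * length_mm + 0.6 * n_passes - 1.2 * depth_m
    return 1.0 / (1.0 + math.exp(-eta))

catch = 42                                  # smallmouth bass sampled
p = capture_probability(length_mm=250, n_passes=3, depth_m=0.4)
n_hat = catch / p                           # abundance estimate N = C / p
print(f"p = {p:.2f}, N_hat = {n_hat:.0f}")
```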
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vahdat, Nader
2013-09-30
The project provided hands-on training and networking opportunities to undergraduate students in the area of carbon dioxide (CO2) capture and transport, through a fundamental research study focused on advanced separation methods that can be applied to the capture of CO2 resulting from the combustion of fossil fuels for power generation. The project team's approach to achieving its objectives was to leverage existing Carbon Capture and Storage (CCS) course materials and teaching methods to create and implement an annual CCS short course for the Tuskegee University community; conduct a survey of CO2 separation and capture methods; utilize data to verify and develop computer models for CO2 capture; and build CCS networks and hands-on training experiences. The objectives accomplished as a result of this project were: (1) a comprehensive survey of CO2 capture methods was conducted and mathematical models were developed to compare the potential economics of the different methods based on the total cost per year per unit of CO2 avoidance; and (2) training was provided to introduce the latest CO2 capture technologies and deployment issues to the university community.
Müller, Bárbara S F; Neves, Leandro G; de Almeida Filho, Janeo E; Resende, Márcio F R; Muñoz, Patricio R; Dos Santos, Paulo E T; Filho, Estefano Paludzyszyn; Kirst, Matias; Grattapaglia, Dario
2017-07-11
The advent of high-throughput genotyping technologies coupled to genomic prediction methods established a new paradigm to integrate genomics and breeding. We carried out whole-genome prediction and contrasted it to a genome-wide association study (GWAS) for growth traits in breeding populations of Eucalyptus benthamii (n = 505) and Eucalyptus pellita (n = 732). Both species are of increasing commercial interest for the development of germplasm adapted to environmental stresses. Predictive ability reached 0.16 in E. benthamii and 0.44 in E. pellita for diameter growth. Predictive abilities using either Genomic BLUP or different Bayesian methods were similar, suggesting that growth adequately fits the infinitesimal model. Genomic prediction models using ~5000-10,000 SNPs provided predictive abilities equivalent to using all 13,787 and 19,506 SNPs genotyped in the E. benthamii and E. pellita populations, respectively. No difference was detected in predictive ability when different sets of SNPs were utilized, based on position (equidistantly genome-wide, inside genes, linkage disequilibrium pruned or on single chromosomes), as long as the total number of SNPs used was above ~5000. Predictive abilities obtained by removing relatedness between training and validation sets fell near zero for E. benthamii and were halved for E. pellita. These results corroborate the current view that relatedness is the main driver of genomic prediction, although some short-range historical linkage disequilibrium (LD) was likely captured for E. pellita. A GWAS identified only one significant association for volume growth in E. pellita, illustrating the fact that while genome-wide regression is able to account for large proportions of the heritability, very little or none of it is captured into significant associations using GWAS in breeding populations of the size evaluated in this study. This study provides further experimental data supporting positive prospects of using genome-wide data to capture large proportions of trait heritability and predict growth traits in trees with accuracies equal or better than those attainable by phenotypic selection. Additionally, our results document the superiority of the whole-genome regression approach in accounting for large proportions of the heritability of complex traits such as growth in contrast to the limited value of the local GWAS approach toward breeding applications in forest trees.
Toolbox for Urban Mobility Simulation: High Resolution Population Dynamics for Global Cities
NASA Astrophysics Data System (ADS)
Bhaduri, B. L.; Lu, W.; Liu, C.; Thakur, G.; Karthik, R.
2015-12-01
In this rapidly urbanizing world, the unprecedented rate of population growth not only is mirrored by increasing demand for energy, food, water, and other natural resources, but also has detrimental impacts on environmental and human security. Transportation simulations are frequently used for mobility assessment in urban planning, traffic operation, and emergency management. Previous research, ranging from purely analytical techniques to simulations capturing behavior, has investigated questions and scenarios regarding the relationships among energy, emissions, air quality, and transportation. The primary limitations of past attempts have been the availability of input data, useful "energy and behavior focused" models, validation data, and adequate computational capability that allows an adequate understanding of the interdependencies of our transportation system. With the increasing availability and quality of traditional and crowdsourced data, we have utilized the OpenStreetMap road network and integrated high-resolution population data with traffic simulation to create a Toolbox for Urban Mobility Simulations (TUMS) at global scale. TUMS consists of three major components: data processing, traffic simulation models, and Internet-based visualizations. It integrates OpenStreetMap, LandScanTM population, and other open data (Census Transportation Planning Products, National Household Travel Survey, etc.) to generate both normal traffic operation and emergency evacuation scenarios. TUMS integrates TRANSIMS and MITSIM as traffic simulation engines, which are open-source and widely accepted for scalable traffic simulations. A consistent data and simulation platform allows quick adaptation to various geographic areas, as has been demonstrated for multiple cities across the world. We are combining the strengths of geospatial data sciences, high performance simulations, transportation planning, and emissions, vehicle and energy technology development to design and develop a simulation framework to assist decision makers at all levels - local, state, regional, and federal. Using Cleveland, Tennessee as an example, in this presentation we illustrate how emerging cities could easily assess future land-use-scenario-driven impacts on energy and environment utilizing such a capability.
NASA Astrophysics Data System (ADS)
Berntsen, Jarle; Alendal, Guttorm; Avlesen, Helge; Thiem, Øyvind
2018-05-01
The flow of dense water along continental slopes is considered. There is a large literature on the topic based on observations and laboratory experiments. In addition, there are many analytical and numerical studies of dense water flows. In particular, there is a sequence of numerical investigations using the dynamics of overflow mixing and entrainment (DOME) setup. In these papers, the sensitivity of the solutions to numerical parameters such as grid size and numerical viscosity coefficients and to the choices of methods and models is investigated. In earlier DOME studies, three different bottom boundary conditions and a range of vertical grid sizes are applied. In other parts of the literature on numerical studies of oceanic gravity currents, there are statements that appear to contradict choices made on bottom boundary conditions in some of the DOME papers. In the present study, we therefore address the effects of the bottom boundary condition and vertical resolution in numerical investigations of dense water cascading on a slope. The main finding of the present paper is that it is feasible to capture the bottom Ekman layer dynamics adequately and cost-efficiently by using a terrain-following model system with a quadratic drag law and a drag coefficient computed to give near-bottom velocity profiles in agreement with the logarithmic law of the wall. Many studies of dense water flows are performed with a quadratic bottom drag law and a constant drag coefficient. It is shown that when using this bottom boundary condition, Ekman drainage will not be adequately represented. In other studies of gravity flow, a no-slip bottom boundary condition is applied. With no-slip and a very fine resolution near the seabed, the solutions are essentially equal to the solutions obtained with a quadratic drag law and a drag coefficient computed to produce velocity profiles matching the logarithmic law of the wall. However, with coarser resolution near the seabed, there may be a substantial artificial blocking effect when using no-slip.
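The drag-coefficient choice described above follows standard law-of-the-wall matching. With κ ≈ 0.4 the von Kármán constant, z_b the height of the near-bottom velocity point above the bed, and z_0 the bottom roughness length, the quadratic drag law and matched coefficient read:

```latex
% Quadratic bottom drag with the coefficient matched to the logarithmic
% law of the wall (kappa: von Karman constant, z_b: height of the nearest
% bottom velocity point, z_0: roughness length):
\boldsymbol{\tau}_b = \rho\,C_D\,|\mathbf{u}_b|\,\mathbf{u}_b,
\qquad
C_D = \left[\frac{\kappa}{\ln(z_b/z_0)}\right]^{2}
```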
Results of the eruptive column model inter-comparison study
Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza
2016-01-01
This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.
Ridge, Lasso and Bayesian additive-dominance genomic models.
Azevedo, Camila Ferreira; de Resende, Marcos Deon Vilela; E Silva, Fabyano Fonseca; Viana, José Marcelo Soriano; Valente, Magno Sávio Ferreira; Resende, Márcio Fernando Ribeiro; Muñoz, Patricio
2015-08-25
A complete approach for genome-wide selection (GWS) involves reliable statistical genetics models and methods. Reports on this topic are common for additive genetic models but not for additive-dominance models. The objective of this paper was (i) to compare the performance of 10 additive-dominance predictive models (including current models and proposed modifications), fitted using Bayesian, Lasso and Ridge regression approaches; and (ii) to decompose genomic heritability and accuracy in terms of three quantitative genetic information sources, namely, linkage disequilibrium (LD), co-segregation (CS) and pedigree relationships or family structure (PR). The simulation study considered two broad-sense heritability levels (0.30 and 0.50, associated with narrow-sense heritabilities of 0.20 and 0.35, respectively) and two genetic architectures for traits (the first consisting of small gene effects and the second consisting of a mixed inheritance model with five major genes). G-REML/G-BLUP and a modified Bayesian/Lasso method (called BayesA*B* or t-BLASSO) performed best in the prediction of genomic breeding values as well as the total genotypic values of individuals in all four scenarios (two heritabilities × two genetic architectures). The BayesA*B*-type method showed a better ability to recover the dominance variance/additive variance ratio. Decomposition of genomic heritability and accuracy revealed the following descending order of importance of information: LD, CS and PR not captured by markers, the last two being very close. Amongst the 10 models/methods evaluated, the G-BLUP, BayesA*B*(-2,8) and BayesA*B*(4,6) methods presented the best results and were found to be adequate for accurately predicting genomic breeding values and total genotypic values as well as for estimating additive and dominance variances in additive-dominance genomic models.
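To make the additive-dominance decomposition concrete, here is a hedged sketch of joint ridge regression on additive and dominance marker design matrices (an RR-BLUP-style estimator, not any of the paper's 10 specific models); the simulated genotypes, effect sizes, and shrinkage parameters lam_a and lam_d are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 1000                      # individuals, SNP markers
G = rng.integers(0, 3, size=(n, m))   # genotypes coded 0/1/2

X = G - G.mean(axis=0)                # additive design: centered allele counts
W = (G == 1).astype(float)            # dominance design: heterozygote indicator
W -= W.mean(axis=0)

# simulate a trait with additive and dominance effects plus noise
beta_a = rng.normal(0, 0.05, m)
beta_d = rng.normal(0, 0.02, m)
y = X @ beta_a + W @ beta_d + rng.normal(0, 1.0, n)

def ridge_ad(X, W, y, lam_a=50.0, lam_d=200.0):
    """Joint ridge solution minimizing ||y - Xa - Wd||^2 + lam_a*||a||^2 + lam_d*||d||^2."""
    Z = np.hstack([X, W])
    penalty = np.diag(np.r_[np.full(X.shape[1], lam_a), np.full(W.shape[1], lam_d)])
    eff = np.linalg.solve(Z.T @ Z + penalty, Z.T @ (y - y.mean()))
    return eff[:X.shape[1]], eff[X.shape[1]:]

a_hat, d_hat = ridge_ad(X, W, y)
gebv = X @ a_hat              # genomic breeding values (additive part)
gtv = gebv + W @ d_hat        # total genotypic values
print("cor(GEBV, true additive):", round(float(np.corrcoef(gebv, X @ beta_a)[0, 1]), 2))
```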
Evaluation of a Mesoscale Convective System in Variable-Resolution CESM
NASA Astrophysics Data System (ADS)
Payne, A. E.; Jablonowski, C.
2017-12-01
Warm season precipitation over the Southern Great Plains (SGP) follows a well-observed diurnal pattern of variability, peaking at night-time, due to the eastward propagation of mesoscale convective systems that develop over the eastern slopes of the Rockies in the late afternoon. While most climate models are unable to adequately capture the organization of convection and the characteristic pattern of precipitation over this region, models with high enough resolution to explicitly resolve convection show improvement. However, high-resolution simulations are computationally expensive and, in the case of regional climate models, are subject to boundary conditions. Newly developed variable-resolution global climate models strike a balance between the benefits of high-resolution regional climate models and the large-scale dynamics and low computational cost of global climate models. Recently developed parameterizations that are insensitive to the model grid scale provide a further way to improve model performance. Here, we present an evaluation of the newly available Cloud Layers Unified by Binormals (CLUBB) parameterization scheme in a suite of variable-resolution CESM simulations with resolutions ranging from 110 km to 7 km within a refined region centered over the SGP Atmospheric Radiation Measurement (ARM) site. Simulations utilize the hindcast approach developed by the Department of Energy's Cloud-Associated Parameterizations Testbed (CAPT) for the assessment of climate models. We limit our evaluation to a single mesoscale convective system that passed over the region on May 24, 2008. The effects of grid resolution on the timing and intensity of precipitation, as well as on the transition from shallow to deep convection, are assessed against ground-based observations from the SGP ARM site, satellite observations, and ERA-Interim reanalysis.
Contributions of Microtubule Dynamic Instability and Rotational Diffusion to Kinetochore Capture.
Blackwell, Robert; Sweezy-Schindler, Oliver; Edelmaier, Christopher; Gergely, Zachary R; Flynn, Patrick J; Montes, Salvador; Crapo, Ammon; Doostan, Alireza; McIntosh, J Richard; Glaser, Matthew A; Betterton, Meredith D
2017-02-07
Microtubule dynamic instability allows search and capture of kinetochores during spindle formation, an important process for accurate chromosome segregation during cell division. Recent work has found that microtubule rotational diffusion about minus-end attachment points contributes to kinetochore capture in fission yeast, but the relative contributions of dynamic instability and rotational diffusion are not well understood. We have developed a biophysical model of kinetochore capture in small fission-yeast nuclei using hybrid Brownian dynamics/kinetic Monte Carlo simulation techniques. With this model, we have studied the importance of dynamic instability and microtubule rotational diffusion for kinetochore capture, both to the lateral surface of a microtubule and at or near its end. Over a range of biologically relevant parameters, microtubule rotational diffusion decreased capture time, but made a relatively small contribution compared to dynamic instability. At most, rotational diffusion reduced capture time by 25%. Our results suggest that while microtubule rotational diffusion can speed up kinetochore capture, it is unlikely to be the dominant physical mechanism for typical conditions in fission yeast. In addition, we found that when microtubules undergo dynamic instability, lateral captures predominate even in the absence of rotational diffusion. Counterintuitively, adding rotational diffusion to a dynamic microtubule increases the probability of end-on capture.
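For intuition, the following reduced one-dimensional sketch simulates search-and-capture by dynamic instability alone (it is not the authors' hybrid Brownian dynamics/kinetic Monte Carlo model, which includes rotational diffusion and full 3D geometry); the growth and shrinkage speeds, catastrophe rate, and target distance are placeholder values of roughly the right order for fission yeast:

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_time(L=1.0, vg=0.05, vs=0.1, f_cat=0.05, dt=0.1, t_max=1.2e4):
    """First time a dynamically unstable microtubule reaches a target at
    distance L (um): grow at vg (um/s), catastrophe at rate f_cat (1/s),
    shrink at vs (um/s), and renucleate after full depolymerization."""
    length, t, growing = 0.0, 0.0, True
    while t < t_max:
        if growing:
            length += vg * dt
            if length >= L:
                return t
            if rng.random() < f_cat * dt:   # stochastic catastrophe
                growing = False
        else:
            length -= vs * dt
            if length <= 0.0:               # shrunk back to the pole: regrow
                length, growing = 0.0, True
        t += dt
    return np.nan

times = np.array([capture_time() for _ in range(200)])
print(f"mean capture time ~ {np.nanmean(times):.0f} s over 200 runs")
```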
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
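The posterior-as-prior chaining can be illustrated with a toy conjugate normal update; the parameter, its true value, the observation noise, and the per-level sample sizes below are invented for illustration and are not values from the MFIX calibration:

```python
import numpy as np

def normal_update(mu0, var0, data, noise_var):
    """Conjugate normal-normal Bayesian update: posterior mean and variance
    of a parameter given observations with known noise variance."""
    n = len(data)
    var_post = 1.0 / (1.0 / var0 + n / noise_var)
    mu_post = var_post * (mu0 / var0 + np.sum(data) / noise_var)
    return mu_post, var_post

rng = np.random.default_rng(3)
mu, var = 1.0, 1.0        # broad prior at the simplest unit problem
true_k = 1.8              # hypothetical kinetic parameter
for level, n_obs in enumerate([20, 10, 5], start=1):
    data = rng.normal(true_k, 0.5, n_obs)         # stand-in calibration data
    mu, var = normal_update(mu, var, data, 0.25)  # posterior at this level...
    print(f"unit problem {level}: k = {mu:.2f} +/- {np.sqrt(var):.2f}")
    # ...serves as the prior for the next, more complex unit problem
```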
Neutron stars and millisecond pulsars from accretion-induced collapse in globular clusters
NASA Technical Reports Server (NTRS)
Bailyn, Charles D.; Grindlay, Jonathan E.
1990-01-01
This paper examines the limits on the number of millisecond pulsars which could be formed in globular clusters by the generally accepted scenario (in which a neutron star is created by the supernova of an initially massive star and subsequently captures a companion to form a low-mass X-ray binary which eventually becomes a millisecond pulsar). It is found that, while the number of observed low-mass X-ray binaries can be adequately explained in this way, the reasonable assumption that the pulsar luminosity function in clusters extends below the current observational limits down to the luminosity of the faintest millisecond pulsars in the field suggests a cluster population of millisecond pulsars which is substantially larger than the standard model can produce. Alleviating this problem by postulating much shorter lifetimes for the X-ray binaries requires massive star populations sufficiently large that the mass loss resulting from their evolution would be likely to unbind the cluster. It is argued that neutron star formation in globular clusters by accretion-induced collapse of white dwarfs may resolve the discrepancy in birthrates.
Parekh, Ruchi; Armañanzas, Rubén; Ascoli, Giorgio A
2015-04-01
Digital reconstructions of axonal and dendritic arbors provide a powerful representation of neuronal morphology in formats amenable to quantitative analysis, computational modeling, and data mining. Reconstructed files, however, require adequate metadata to identify the appropriate animal species, developmental stage, brain region, and neuron type. Moreover, experimental details about tissue processing, neurite visualization and microscopic imaging are essential to assess the information content of digital morphologies. Typical morphological reconstructions only partially capture the underlying biological reality. Tracings are often limited to certain domains (e.g., dendrites and not axons), may be incomplete due to tissue sectioning, imperfect staining, and limited imaging resolution, or can disregard aspects irrelevant to their specific scientific focus (such as branch thickness or depth). Gauging these factors is critical in subsequent data reuse and comparison. NeuroMorpho.Org is a central repository of reconstructions from many laboratories and experimental conditions. Here, we introduce substantial additions to the existing metadata annotation aimed to describe the completeness of the reconstructed neurons in NeuroMorpho.Org. These expanded metadata form a suitable basis for effective description of neuromorphological data.
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 - e^(-ΨI)). The parameter Ψ (= Ik^-1) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (α) which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters Ψ and α are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
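A hedged sketch of fitting the exponential equation P = Pmax(1 - e^(-ΨI)) by simultaneous curve fitting; the P vs. I observations below are hypothetical illustrative numbers, and scipy's curve_fit stands in for the paper's fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_vs_i(I, p_max, psi):
    """Exponential light-saturation model P = Pmax*(1 - exp(-psi*I)),
    with psi = 1/Ik and initial slope alpha = Pmax*psi."""
    return p_max * (1.0 - np.exp(-psi * I))

# hypothetical P vs. I observations (I in umol photons m-2 s-1)
I = np.array([0, 25, 50, 100, 200, 400, 800, 1600], dtype=float)
P = np.array([0.0, 1.9, 3.4, 5.6, 7.6, 8.8, 9.3, 9.4])

(p_max, psi), pcov = curve_fit(p_vs_i, I, P, p0=[10.0, 0.01])
perr = np.sqrt(np.diag(pcov))    # standard errors of the fitted parameters
alpha = p_max * psi              # photosynthetic efficiency at low light
print(f"Pmax = {p_max:.2f} +/- {perr[0]:.2f}, Ik = {1/psi:.0f}, alpha = {alpha:.3f}")
```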
Visualizing 3D Food Microstructure Using Tomographic Methods: Advantages and Disadvantages.
Wang, Zi; Herremans, Els; Janssen, Siem; Cantre, Dennis; Verboven, Pieter; Nicolaï, Bart
2018-03-25
X-ray micro-computed tomography (micro-CT) provides the unique ability to capture intact internal microstructure data without significant preparation of the sample. The fundamentals of micro-CT technology are briefly described along with a short introduction to basic image processing, quantitative analysis, and derivative computational modeling. The applications and limitations of micro-CT in industries such as meat, dairy, postharvest, and bread/confectionary are discussed to serve as a guideline to the plausibility of utilizing the technique for detecting features of interest. Component volume fractions, their respective size/shape distributions, and connectivity, for example, can be utilized for product development, manufacturing process tuning and/or troubleshooting. In addition to determining structure-function relations, micro-CT can be used for foreign material detection to further ensure product quality and safety. In most usage scenarios, micro-CT in its current form is perfectly adequate for determining microstructure in a wide variety of food products. However, in low-contrast and low-stability samples, emphasis is placed on the shortcomings of the current systems to set realistic expectations for the intended users.
Magnetic Capture of a Molecular Biomarker from Synovial Fluid in a Rat Model of Knee Osteoarthritis
Yarmola, Elena G.; Shah, Yash; Arnold, David P.; Dobson, Jon; Allen, Kyle D.
2015-01-01
Biomarker development for osteoarthritis (OA) often begins in rodent models, but can be limited by an inability to aspirate synovial fluid from a rodent stifle (similar to the human knee). To address this limitation, we have developed a magnetic nanoparticle-based technology to collect biomarkers from a rodent stifle, termed magnetic capture. Using a common OA biomarker - the c-terminus telopeptide of type II collagen (CTXII) - magnetic capture was optimized in vitro using bovine synovial fluid and then tested in a rat model of knee OA. Anti-CTXII antibodies were conjugated to the surface of superparamagnetic iron oxide-containing polymeric particles. Using these anti-CTXII particles, magnetic capture was able to estimate the level of CTXII in 25 µL aliquots of bovine synovial fluid; and under controlled conditions, this estimate was unaffected by synovial fluid viscosity. Following in vitro testing, anti-CTXII particles were tested in a rat monoiodoacetate model of knee OA. CTXII could be magnetically captured from a rodent stifle without the need to aspirate fluid and showed tenfold changes in CTXII levels from OA-affected joints relative to contralateral control joints. Combined, these data demonstrate the ability and sensitivity of magnetic capture for post-mortem analysis of OA biomarkers in the rat. PMID:26136062
FRAP Analysis: Accounting for Bleaching during Image Capture
Wu, Jun; Shekhar, Nandini; Lele, Pushkar P.; Lele, Tanmay P.
2012-01-01
The analysis of Fluorescence Recovery After Photobleaching (FRAP) experiments involves mathematical modeling of the fluorescence recovery process. An important feature of FRAP experiments that tends to be ignored in the modeling is that there can be a significant loss of fluorescence due to bleaching during image capture. In this paper, we explicitly include the effects of bleaching during image capture in the model for the recovery process, instead of correcting for the effects of bleaching using reference measurements. Using experimental examples, we demonstrate the usefulness of such an approach in FRAP analysis. PMID:22912750
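A minimal sketch of the idea, assuming single-exponential recovery attenuated by a constant bleaching factor per captured image rather than a reference-based correction; the functional form, parameter names, and synthetic data are illustrative assumptions, not the paper's exact model:

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_with_bleach(t, f0, f_inf, k, q, dt=1.0):
    """Single-exponential FRAP recovery f_inf - (f_inf - f0)*exp(-k*t),
    multiplied by q**n to model a bleaching factor q per captured image
    (images assumed acquired every dt seconds, so n = t/dt)."""
    n = t / dt
    recovery = f_inf - (f_inf - f0) * np.exp(-k * t)
    return recovery * q ** n

rng = np.random.default_rng(7)
t = np.arange(0, 60, 1.0)                         # one image per second
obs = frap_with_bleach(t, 0.2, 0.9, 0.15, 0.995)  # hypothetical ground truth
obs = obs + rng.normal(0, 0.01, t.size)           # add measurement noise

popt, _ = curve_fit(frap_with_bleach, t, obs,
                    p0=[0.1, 1.0, 0.1, 0.99], bounds=(0.0, [1.5, 1.5, 1.0, 1.0]))
print("f0, f_inf, k, q =", np.round(popt, 3))
```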
Base flow calibration in a global hydrological model
NASA Astrophysics Data System (ADS)
van Beek, L. P.; Bierkens, M. F.
2006-12-01
Base flow constitutes an important water resource in many parts of the world. Its provenance and yield over time are governed by the storage capacity of local aquifers and the internal drainage paths, which are difficult to capture at the global scale. To represent the spatial and temporal variability in base flow adequately in a distributed global model at 0.5 degree resolution, we resorted to the conceptual model of aquifer storage of Kraaijenhoff van de Leur (1958), which yields the reservoir coefficient for a linear groundwater store. This model was parameterised using global information on drainage density, climatology and lithology. Initial estimates of aquifer thickness, permeability and specific porosity from the literature were linked to the latter two categories and calibrated to low flow data by means of simulated annealing so as to conserve the ordinal information contained in them. The observations used stem from the RivDis dataset of monthly discharge. From this dataset 324 stations were selected with at least 10 years of observations in the period 1958-1991 and an areal coverage of at least 10 cells of 0.5 degree. The dataset was split between basins into a calibration and a validation set whilst preserving a representative distribution of lithology types and climate zones. Optimisation involved minimising the absolute differences between the simulated base flow and the lowest 10% of the observed monthly discharge. Subsequently, the reliability of the calibrated parameters was tested by reversing the calibration and validation sets.
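A minimal sketch of the linear groundwater store behind this approach, assuming the discrete solution Q_{t+1} = Q_t e^(-Δt/J) + R_t (1 - e^(-Δt/J)) for reservoir coefficient J; the recharge series, the J values, and the lowest-10% objective below are illustrative stand-ins for the paper's global-scale calibration:

```python
import numpy as np

def baseflow_linear_store(recharge, J, q0=0.0, dt=1.0):
    """Base flow from a linear groundwater store with reservoir coefficient J
    (in units of dt): Q_{t+1} = Q_t*exp(-dt/J) + R_t*(1 - exp(-dt/J))."""
    decay = np.exp(-dt / J)
    q = np.empty(len(recharge))
    q_prev = q0
    for i, r in enumerate(recharge):
        q_prev = q_prev * decay + r * (1.0 - decay)
        q[i] = q_prev
    return q

rng = np.random.default_rng(2)
recharge = np.clip(rng.normal(30, 25, 120), 0, None)   # monthly recharge, mm
q_obs = baseflow_linear_store(recharge, J=8.0)         # stand-in "observations"
q_sim = baseflow_linear_store(recharge, J=6.0)         # trial parameter value

# calibration-style objective: match only the lowest 10% of monthly flows
low = q_obs <= np.quantile(q_obs, 0.10)
print("objective:", round(float(np.sum(np.abs(q_sim[low] - q_obs[low]))), 2))
```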
NASA Astrophysics Data System (ADS)
Soriano, M., Jr.; Deziel, N. C.; Saiers, J. E.
2017-12-01
The rapid expansion of unconventional oil and gas (UO&G) production, made possible by advances in hydraulic fracturing (fracking), has triggered concerns over the risks this extraction poses to water resources and public health. Concerns are particularly acute within communities that host UO&G development and rely heavily on shallow aquifers as sources of drinking water. This research aims to develop a quantitative framework to evaluate the vulnerability of drinking water wells to contamination from UO&G activities. The concept of well vulnerability is explored through application of backward travel time probability modeling to estimate the likelihood that capture zones of drinking water wells circumscribe source locations of UO&G contamination. Sources of UO&G contamination considered in this analysis include gas well pads and documented sites of UO&G wastewater and chemical spills. The modeling approach is illustrated for a portion of Susquehanna County, Pennsylvania, where more than one thousand shale gas wells have been completed since 2005. Data from a network of eight multi-level groundwater monitoring wells installed in the study site in 2015 are used to evaluate the model. The well vulnerability concept is proposed as a physically based quantitative tool for policy-makers dealing with the management of contamination risks to drinking water wells. In particular, the model can be used to identify adequate setback distances of UO&G activities from drinking water wells and other critical receptors.
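To make the well-vulnerability concept concrete, here is a hedged Monte Carlo sketch of backward travel-time probability: particles released at the well are tracked backward through an assumed uniform flow field with random-walk dispersion, and the fraction passing near a source approximates the probability that the source lies in the well's capture zone. The velocity, dispersivities, horizon, and source location are all hypothetical; the actual analysis would use a calibrated groundwater model:

```python
import numpy as np

rng = np.random.default_rng(5)

def source_capture_probability(source_xy, v=(0.5, 0.0), aL=10.0, aT=1.0,
                               years=10, dt=1.0, n=5000, hit_radius=25.0):
    """Fraction of backward-tracked particles (released at the well, at the
    origin) that pass within hit_radius (m) of a candidate source location,
    given mean velocity v (m/d) and dispersivities aL, aT (m)."""
    vx, vy = v
    x = np.zeros(n)
    y = np.zeros(n)
    hit = np.zeros(n, dtype=bool)
    sx, sy = source_xy
    for _ in range(int(years * 365 / dt)):
        # backward advection plus random-walk dispersion (speed ~ |vx| here)
        x += -vx * dt + rng.normal(0, np.sqrt(2 * aL * abs(vx) * dt), n)
        y += -vy * dt + rng.normal(0, np.sqrt(2 * aT * abs(vx) * dt), n)
        hit |= (x - sx) ** 2 + (y - sy) ** 2 < hit_radius ** 2
    return hit.mean()

# hypothetical gas well pad 500 m up-gradient, 40 m off the flow axis
print("capture probability:", source_capture_probability((-500.0, 40.0)))
```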
Parameter-expanded data augmentation for Bayesian analysis of capture-recapture models
Royle, J. Andrew; Dorazio, Robert M.
2012-01-01
Data augmentation (DA) is a flexible tool for analyzing closed and open population models of capture-recapture data, especially models which include sources of heterogeneity among individuals. The essential concept underlying DA, as we use the term, is based on adding "observations" to create a dataset composed of a known number of individuals. This new (augmented) dataset, which includes the unknown number of individuals N in the population, is then analyzed using a new model that includes a reformulation of the parameter N in the conventional model of the observed (unaugmented) data. In the context of capture-recapture models, we add a set of "all zero" encounter histories which are not, in practice, observable. The model of the augmented dataset is a zero-inflated version of either a binomial or a multinomial base model. Thus, our use of DA provides a general approach for analyzing both closed and open population models of all types. In doing so, this approach provides a unified framework for the analysis of a huge range of models that are treated as unrelated "black boxes" and named procedures in the classical literature. As a practical matter, analysis of the augmented dataset by MCMC is greatly simplified compared to other methods that require specialized algorithms. For example, complex capture-recapture models of an augmented dataset can be fitted with popular MCMC software packages (WinBUGS or JAGS) by providing a concise statement of the model's assumptions that usually involves only a few lines of pseudocode. In this paper, we review the basic technical concepts of data augmentation, and we provide examples of analyses of closed-population models (M0, Mh, distance sampling, and spatial capture-recapture models) and open-population models (Jolly-Seber) with individual effects.
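A hedged sketch of the augmentation idea for the simplest closed-population model, M0: append all-zero encounter histories up to an augmented size M and fit a zero-inflated binomial (here by maximum likelihood for brevity, whereas the paper works with MCMC in WinBUGS/JAGS; all simulation values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(4)
N_true, J, p_true = 150, 5, 0.3
counts = rng.binomial(J, p_true, N_true)   # capture frequencies of all N
counts = counts[counts > 0]                # but only detected ones are observed

M = 500                                                   # augmented size
y = np.concatenate([counts, np.zeros(M - len(counts))])   # add all-zero histories

def negloglik(theta):
    """Zero-inflated binomial likelihood of the augmented data under M0:
    each of M pseudo-individuals is real with probability psi and, if real,
    detected on each of J occasions with probability p."""
    psi = 1.0 / (1.0 + np.exp(-theta[0]))   # logit-scale parameters
    p = 1.0 / (1.0 + np.exp(-theta[1]))
    lik = psi * binom.pmf(y, J, p) + (1.0 - psi) * (y == 0)
    return -np.sum(np.log(lik))

fit = minimize(negloglik, x0=[0.0, 0.0])
psi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print("estimated N ~", round(M * psi_hat), "(true value 150)")
```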
NASA Astrophysics Data System (ADS)
Luo, Ning; Illman, Walter A.
2016-09-01
Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
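The fingerprinting of water levels to rate changes rests on superposition of the Theis solution, s = Q/(4πT) W(u) with u = r²S/(4Tt) and W the exponential integral. Below is a minimal sketch of that superposition for a step-wise pumping history; the schedule and aquifer parameters are hypothetical, and the paper's actual fitting used the WELLS code coupled with PEST:

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(t, r, T, S, rate_times, rates):
    """Drawdown at radius r (m) and times t (d) for a step-wise pumping
    history: each rate change dQ at time tk adds dQ/(4*pi*T) * W(u) with
    u = r^2 * S / (4*T*(t - tk)), by superposition of Theis solutions."""
    t = np.atleast_1d(t).astype(float)
    s = np.zeros_like(t)
    prev_q = 0.0
    for tk, q in zip(rate_times, rates):
        dq = q - prev_q
        active = t > tk
        u = r ** 2 * S / (4.0 * T * (t[active] - tk))
        s[active] += dq / (4.0 * np.pi * T) * exp1(u)
        prev_q = q
    return s

# hypothetical record: pump on at 500 m3/d, doubled at day 30, off at day 60
t = np.linspace(1.0, 90.0, 90)
s = theis_drawdown(t, r=150.0, T=250.0, S=2e-4,
                   rate_times=[0.0, 30.0, 60.0], rates=[500.0, 1000.0, 0.0])
print("maximum drawdown (m):", round(float(s.max()), 3))
```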
Sampling for Global Epidemic Models and the Topology of an International Airport Network
Bobashev, Georgiy; Morris, Robert J.; Goedecke, D. Michael
2008-01-01
Mathematical models that describe the global spread of infectious diseases such as influenza, severe acute respiratory syndrome (SARS), and tuberculosis (TB) often consider a sample of international airports as a network supporting disease spread. However, there is no consensus on how many cities should be selected or on how to select those cities. Using airport flight data that commercial airlines reported to the Official Airline Guide (OAG) in 2000, we have examined the network characteristics of network samples obtained under different selection rules. In addition, we have examined different size samples based on largest flight volume and largest metropolitan populations. We have shown that although the bias in network characteristics increases with the reduction of the sample size, a relatively small number of areas that includes the largest airports, the largest cities, the most-connected cities, and the most central cities is enough to describe the dynamics of the global spread of influenza. The analysis suggests that a relatively small number of cities (around 200 or 300 out of almost 3000) can capture enough network information to adequately describe the global spread of a disease such as influenza. Weak traffic flows between small airports can contribute to noise and mask other means of spread such as the ground transportation. PMID:18776932
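A hedged sketch of the selection rule the study examines, keeping the top-k airports by total flight volume and comparing simple statistics of the induced subnetwork; the nine-edge toy network below is hypothetical and is no substitute for the OAG data:

```python
import networkx as nx

# hypothetical weighted flight network: (origin, destination, annual seats)
flights = [("JFK", "LHR", 2.9e6), ("LHR", "HKG", 1.8e6), ("JFK", "LAX", 3.4e6),
           ("LAX", "HKG", 1.2e6), ("ORD", "LHR", 1.5e6), ("JFK", "ORD", 2.2e6),
           ("SIN", "HKG", 2.0e6), ("LHR", "SIN", 1.4e6), ("ORD", "LAX", 2.5e6)]

G = nx.Graph()
G.add_weighted_edges_from(flights, weight="volume")

def top_volume_sample(G, k):
    """Subnetwork induced by the k airports with the largest total flight
    volume, one of the selection rules compared in the study."""
    volume = {a: sum(d["volume"] for _, _, d in G.edges(a, data=True))
              for a in G.nodes}
    keep = sorted(volume, key=volume.get, reverse=True)[:k]
    return G.subgraph(keep).copy()

for k in (4, 6):
    H = top_volume_sample(G, k)
    print(k, "airports:", sorted(H.nodes),
          "| mean degree:", round(2 * H.number_of_edges() / k, 2))
```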
[The relationship between mood disorders and temperament, character and personality].
Sayin, Aslihan; Aslan, Salçuk
2005-01-01
The terms temperament, character and personality have been used almost synonymously despite their different meanings. Hippocratic physicians conceptualized illness, including melancholia, in dimensional terms as an outgrowth of premorbid characteristics. In modern times, full-scale application of this dimensional concept to psychiatric disorders led Kraepelin, Schneider and Kretschmer to hypothesize that the 'endogenous psychoses are nothing other than marked accentuation of normal types of temperament'. Akiskal's 'soft-bipolarity' and 'affective temperaments' concepts and Cloninger's psychobiological model of temperament and character, which includes four temperament and three character dimensions, are examples of this dimensional approach from the last two decades. Hypotheses concerning the relationship between personality disorders and mood disorders have been described, but it is likely that a single unitary model would not adequately capture the complexity inherent in the relationship between mood and personality disorders. The DSM multiaxial approach to diagnosis encourages the clinician to distinguish state (Axis I) from trait (Axis II) features of mental disorders. Categorical systems like DSM have been criticised for their inability to accommodate temperament, character and personality features. In this review, examples of dimensional approaches to mood disorders are given and discussed in relation to temperament, character and personality disorders. For this purpose, literature from 1980 to 2004 was reviewed through PubMed using relevant key words.
Robust algebraic image enhancement for intelligent control systems
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morrelli, Michael
1993-01-01
Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image capturing and the object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.
Hammoudeh, Weeam; Hogan, Dennis; Giacaman, Rita
2013-11-01
This study investigates changes in the quality of life (QoL) of Gaza Palestinians before and after the Israeli winter 2008-2009 war using the World Health Organization's WHOQOL-Bref; the extent to which this instrument adequately measures changing situations; and its responsiveness to locally developed human insecurity and distress measures appropriate for the context. Ordinary least squares regression analysis was performed to detect how demographic and socioeconomic variables usually associated with QoL were associated with human insecurity and distress. We estimated the usual baseline model for the three QoL domains, and a second set of models including these standard variables and human insecurity and distress to assess how personal exposure to political violence affects QoL. No difference between the quality of life scores in 2005 and 2009 was found, with results suggesting a lack of sensitivity of the WHOQOL-Bref in capturing changes resulting from intensification of preexisting political violence. Results show that human insecurity and individual distress significantly increased in 2009 compared to 2005. Results indicate that a political domain may provide further understanding of and possibly increase the sensitivity of the instrument to detect changes in the QoL of Palestinians and possibly other populations experiencing intensified political violence.
Non-invasive genetic censusing and monitoring of primate populations.
Arandjelovic, Mimi; Vigilant, Linda
2018-03-01
Knowing the density or abundance of primate populations is essential for their conservation management and contextualizing socio-demographic and behavioral observations. When direct counts of animals are not possible, genetic analysis of non-invasive samples collected from wildlife populations allows estimates of population size with higher accuracy and precision than is possible using indirect signs. Furthermore, in contrast to traditional indirect survey methods, prolonged or periodic genetic sampling across months or years enables inference of group membership, movement, dynamics, and some kin relationships. Data may also be used to estimate sex ratios, sex differences in dispersal distances, and detect gene flow among locations. Recent advances in capture-recapture models have further improved the precision of population estimates derived from non-invasive samples. Simulations using these methods have shown that the confidence interval of point estimates includes the true population size when assumptions of the models are met, and therefore this range of population size minima and maxima should be emphasized in population monitoring studies. Innovations such as the use of sniffer dogs or anti-poaching patrols for sample collection are important to ensure adequate sampling, and the expected development of efficient and cost-effective genotyping by sequencing methods for DNAs derived from non-invasive samples will automate and speed analyses.
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
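A hedged sketch of the consistent-projection step using OpenCV's solvePnPRansac, which keeps the subset of 2D-3D correspondences that yields a consistent projection of the model onto the image; the camera intrinsics, synthetic points, and thresholds are hypothetical, and the patent does not prescribe this particular solver:

```python
import numpy as np
import cv2

def estimate_pose(model_pts, image_pts, K):
    """Camera pose from 2D-3D correspondences; RANSAC retains only the
    feature matches consistent with a single projection transformation."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        model_pts, image_pts, K, None,             # None: no lens distortion
        reprojectionError=3.0, confidence=0.999)
    if not ok:
        raise RuntimeError("no consistent pose found")
    R, _ = cv2.Rodrigues(rvec)                     # rotation vector -> matrix
    return R, tvec, inliers

rng = np.random.default_rng(9)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
model_pts = rng.random((8, 3)).astype(np.float32)          # 3D model points
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.0, 0.0, 4.0])
proj, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, None)
image_pts = proj.reshape(-1, 2).astype(np.float32)         # observed 2D points

R, t, inliers = estimate_pose(model_pts, image_pts, K)
print("recovered translation:", t.ravel().round(2))
```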
NASA Astrophysics Data System (ADS)
Sedigh Marvasti, S.; Gnanadesikan, A.; Bidokhti, A. A.; Dunne, J. P.; Ghader, S.
2016-02-01
Recent years have shown an increase in harmful algal blooms in the Northwest Arabian Sea and Gulf of Oman, raising the question of whether climate change will accelerate this trend. This has led us to examine whether the Earth System Models used to simulate phytoplankton productivity accurately capture bloom dynamics in this region - both in terms of the annual cycle and interannual variability. Satellite data (SeaWIFS ocean color) show two climatological blooms in this region, a wintertime bloom peaking in February and a summertime bloom peaking in September. On a regional scale, interannual variability of the wintertime bloom is dominated by cyclonic eddies which vary in location from one year to another. Two coarse (1°) models with the relatively complex biogeochemistry (TOPAZ) capture the annual cycle but neither eddies nor the interannual variability. An eddy-resolving model (GFDL CM2.6) with a simpler biogeochemistry (miniBLING) displays larger interannual variability, but overestimates the wintertime bloom and captures eddy-bloom coupling in the south but not in the north. The models fail to capture both the magnitude of the wintertime bloom and its modulation by eddies in part because of their failure to capture the observed sharp thermocline and/or nutricline in this region. When CM2.6 is able to capture such features in the Southern part of the basin, eddies modulate diffusive nutrient supply to the surface (a mechanism not previously emphasized in the literature). For the model to simulate the observed wintertime blooms within cyclones, it will be necessary to represent this relatively unusual nutrient structure as well as the cyclonic eddies. This is a challenge in the Northern Arabian Sea as it requires capturing the details of the outflow from the Persian Gulf - something that is poorly done in global models.
Coggins, L.G.; Pine, William E.; Walters, C.J.; Martell, S.J.D.
2006-01-01
We present a new model to estimate capture probabilities, survival, abundance, and recruitment using traditional Jolly-Seber capture-recapture methods within a standard fisheries virtual population analysis framework. This approach compares the numbers of marked and unmarked fish at age captured in each year of sampling with predictions based on estimated vulnerabilities and abundance in a likelihood function. Recruitment to the earliest age at which fish can be tagged is estimated by using a virtual population analysis method to back-calculate the expected numbers of unmarked fish at risk of capture. By using information from both marked and unmarked animals in a standard fisheries age structure framework, this approach is well suited to the sparse data situations common in long-term capture-recapture programs with variable sampling effort.
Contributions of microtubule rotation and dynamic instability to kinetochore capture
NASA Astrophysics Data System (ADS)
Sweezy-Schindler, Oliver; Edelmaier, Christopher; Blackwell, Robert; Glaser, Matt; Betterton, Meredith
2014-03-01
The capture of lost kinetochores (KCs) by microtubules (MTs) is a crucial part of prometaphase during mitosis. Microtubule dynamic instability has been considered the primary mechanism of KC capture, but recent work discovered that lateral KC attachment to pivoting MTs enabled rapid capture even with significantly reduced MT dynamics. We aim to understand the relative contributions of MT rotational diffusion and dynamic instability to KC capture, as well as KC capture through end-on and/or lateral attachment. Our model consists of rigid MTs and a spherical KC, which are allowed to diffuse inside a spherical nuclear envelope consistent with the geometry of fission yeast. For simplicity, we include a single spindle pole body, which is anchored to the nuclear membrane, and its associated polar MTs. Brownian dynamics treats the diffusion of the MTs and KC and kinetic Monte Carlo models stochastic processes such as dynamic instability. NSF 1546021.
Subtask 2.18 - Advancing CO2 Capture Technology: Partnership for CO2 Capture (PCO2C) Phase III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kay, John; Azenkeng, Alexander; Fiala, Nathan
2016-03-31
Industries and utilities continue to investigate ways to decrease their carbon footprint. Carbon capture and storage (CCS) can enable existing power generation facilities to meet the current national CO2 reduction goals. The Partnership for CO2 Capture Phase III focused on several important research areas in an effort to find ways to decrease the cost of capture across both precombustion and postcombustion platforms. Two flue gas pretreatment technologies for postcombustion capture, an SO2 reduction scrubbing technology from Cansolv Technologies Inc. and the Tri-Mer filtration technology that combines particulate, NOx, and SO2 control, were evaluated on the Energy & Environmental Research Center's (EERC's) pilot-scale test system. Pretreating the flue gas should enable more efficient, and therefore less expensive, CO2 capture. Both technologies were found to be effective in pretreating flue gas prior to CO2 capture. Two new postcombustion capture solvents were tested, one from the Korea Carbon Capture and Sequestration R&D Center (KCRC) and one from CO2 Solutions Incorporated. Both of these solvents showed the ability to capture CO2 while requiring less regeneration energy, which would reduce the cost of capture. Hydrogen separation membranes from Commonwealth Scientific and Industrial Research Organisation were evaluated through precombustion testing. They are composed of vanadium alloy, which is less expensive than the palladium alloys that are typically used. Their performance was comparable to that of other membranes that have been tested at the EERC. Aspen Plus® software was used to model the KCRC and CO2 Solutions solvents and found that they would result in significantly improved overall plant performance. The modeling effort also showed that the parasitic steam load at partial capture of 45% is less than half that of 90% overall capture, indicating savings that could be accrued if 90% capture is not required. Modeling of three regional power plants using the Carnegie Mellon Integrated Environmental Control Model showed that, among other things, the use of a bypass during partial capture may minimize the size of the capture tower(s) and result in a slight reduction in the revenue required to operate the capture facility. The results reinforced that a one-size-fits-all approach cannot be taken to adding capture to a power plant. Laboratory testing indicated that Fourier transform infrared spectroscopy could be used to continuously sample stack emissions at CO2 capture facilities to detect and quantify any residual amine or its degradation products, particularly nitrosamines. The information gathered during Phase III is important for utility stakeholders as they determine how to reduce their CO2 emissions in a carbon-constrained world. This subtask was funded through the EERC–U.S. Department of Energy (DOE) Joint Program on Research and Development for Fossil Energy-Related Resources Cooperative Agreement No. DE-FC26-08NT43291. Nonfederal funding was provided by the North Dakota Industrial Commission, PPL Montana, Nebraska Public Power District, Tri-Mer Corporation, Montana–Dakota Utilities Co., Basin Electric Power Cooperative, KCRC/Korean Institute of Energy Research, Cansolv Technologies, and CO2 Solutions, Inc.
1993-04-01
In spite of repeated warnings about laser safety practices, as well as the availability of laser safety eyewear (LSE), eye injuries continue to occur during use of surgical lasers, as discussed in the Clinical Perspective, "Laser Energy and Its Dangers to Eyes," preceding this Evaluation. We evaluated 48 models of LSE, including goggles, spectacles, and wraps, from 11 manufacturers. The evaluated models are designed with absorptive lenses that provide protection from CO2 (carbon dioxide), Nd:YAG (neodymium:yttrium-aluminum-garnet), and 532 (frequency-doubled Nd:YAG) surgical laser wavelengths; several models provide multiwavelength protection. (Refer to ECRI's Product Comparison System report on LSE for specifications of other models.) Although most of the evaluated models can adequately protect users from laser energy--provided that the eyewear is used--many models of LSE, especially goggles, are designed with little regard for the needs of actual use (e.g., adequate labeling, no alteration of color perception, sufficient field of vision [FOV], comfort). Because these factors can discourage people from using LSE, we encourage manufacturers to develop new and improved models that will be worn. We based our ratings primarily on the laser protection provided by the optical density (OD) of the lenses; we acknowledge the contribution of Montana Laser Optics Inc., of Bozeman, Montana, in performing our OD testing. We also considered actual-use factors, such as those mentioned above, to be significant. Among the models rated Acceptable is one whose labeled OD is lower than the level we determined to be adequate for use during most laser surgery; however, this model offers protection under specific conditions of use (e.g., for use by spectators some distance from the surgical site, for use during endoscopic procedures) that should be determined by the laser safety officer (LSO). LSE that would put the wearer at risk are rated Unacceptable (e.g., some models are not properly or clearly labeled, have measured ODs that are not adequate for protection, or significantly restrict the wearer's FOV); also, LSE with side shields that do not offer adequate protection from diffuse laser energy are rated Unacceptable. Those models that offer adequate protection for surgical applications, but whose measured OD is less than their labeled OD, are rated Acceptable--Not Recommended; if the discrepancy is great, they are rated Unacceptable. Those models whose labels were removed during cleaning are rated Conditionally Acceptable.(ABSTRACT TRUNCATED AT 400 WORDS)
Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A
2013-02-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture-recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
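A minimal sketch of the "ecological distance" idea: compute a least-cost path over a resistance surface and substitute it for Euclidean distance in a half-normal encounter model. The landscape, resistance values, and the parameters p0 and sigma are hypothetical, and the paper embeds this distance inside a full SCR likelihood rather than a single encounter probability:

```python
import numpy as np
import networkx as nx

def least_cost_distance(resistance, a, b):
    """Least-cost path length between grid cells a and b, where the cost of
    each move is the mean resistance of the two cells it connects."""
    G = nx.grid_2d_graph(*resistance.shape)
    for u, v in G.edges:
        G[u][v]["w"] = 0.5 * (resistance[u] + resistance[v])
    return nx.shortest_path_length(G, a, b, weight="w")

# hypothetical landscape: a high-resistance ridge between trap and activity center
res = np.ones((20, 20))
res[:, 10] = 8.0
trap, center = (10, 2), (10, 18)

d_eco = least_cost_distance(res, trap, center)
d_euc = float(np.hypot(trap[0] - center[0], trap[1] - center[1]))
p0, sigma = 0.3, 6.0
p = p0 * np.exp(-d_eco ** 2 / (2 * sigma ** 2))   # half-normal encounter model
print(f"Euclidean: {d_euc:.1f}, least-cost: {d_eco:.1f}, p(encounter): {p:.4f}")
```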
Individual heterogeneity and identifiability in capture-recapture models
Link, W.A.
2004-01-01
Individual heterogeneity in detection probabilities is a far more serious problem for capture-recapture modeling than has previously been recognized. In this note, I illustrate that population size is not an identifiable parameter under the general closed population mark-recapture model Mh. The problem of identifiability is obvious if the population includes individuals with p_i = 0, but persists even when it is assumed that individual detection probabilities are bounded away from zero. Identifiability may be attained within parametric families of distributions for p_i, but not among parametric families of distributions. Consequently, in the presence of individual heterogeneity in detection probability, capture-recapture analysis is strongly model dependent.
Malhotra, Karan; Buraimoh, Olatunbosun; Thornton, James; Cullen, Nicholas; Singh, Dishan; Goldberg, Andrew J
2016-06-20
To determine whether an entirely electronic system can be used to capture both patient-reported outcomes (electronic Patient-Reported Outcome Measures, ePROMs) as well as clinician-validated diagnostic and complexity data in an elective surgical orthopaedic outpatient setting. To examine patients' experience of this system and factors impacting their experience. Retrospective analysis of prospectively collected data. Single centre series. Outpatient clinics at an elective foot and ankle unit in the UK. All new adult patients attending elective orthopaedic outpatient clinics over a 32-month period. All patients were invited to complete ePROMs prior to attending their outpatient appointment. At their appointment, those patients who had not completed ePROMs were offered the opportunity to complete it on a tablet device with technical support. Matched diagnostic and complexity data were captured by the treating consultant during the appointment. Capture rates of patient-reported and clinician-reported data. All information and technology (IT) failures, language and disability barriers were captured. Patients were asked to rate their experience of using ePROMs. The scoring systems used included EQ-5D-5L, the Manchester-Oxford Foot Questionnaire (MOxFQ) and the Visual Analogue Scale (VAS) pain score. Out of 2534 new patients, 2176 (85.9%) completed ePROMs, of whom 1090 (50.09%) completed ePROMs at home/work prior to their appointment. 31.5% used a mobile (smartphone/tablet) device. Clinician-reported data were captured on 2491 patients (98.3%). The mean patient experience score of using Patient-Reported Outcome Measures (PROMs) was 8.55±1.85 out of 10 and 666 patients (30.61%) left comments. Of patients leaving comments, 214 (32.13%) felt ePROMs did not adequately capture their symptoms and these patients had significantly lower patient experience scores (p<0.001). This study demonstrates the successful implementation of technology into a service improvement programme. Excellent capture rates of ePROMs and clinician-validated diagnostic data can be achieved within a National Health Service setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modekurti, S.; Bhattacharyya, D.; Zitney, S.
2012-01-01
Solid-sorbent-based CO2 capture processes have strong potential for reducing the overall energy penalty for post-combustion capture from the flue gas of a conventional pulverized coal power plant. However, the commercial success of this technology is contingent upon it operating over a wide range of capture rates, transient events, malfunctions, and disturbances, as well as under uncertainties. To study these operational aspects, a dynamic model of a solid-sorbent-based CO2 capture process has been developed. In this work, a one-dimensional (1D), non-isothermal, dynamic model of a two-stage bubbling fluidized bed (BFB) adsorber-reactor system with overflow-type weir configuration has been developed in Aspen Custom Modeler (ACM). The physical and chemical properties of the sorbent used in this study are based on a sorbent (32D) developed at the National Energy Technology Laboratory (NETL). Each BFB is divided into bubble, emulsion, and cloud-wake regions with the assumptions that the bubble region is free of solids while both gas and solid phases coexist in the emulsion and cloud-wake regions. The BFB dynamic model includes 1D partial differential equations (PDEs) for mass and energy balances, along with comprehensive reaction kinetics. In addition to the two BFB models, the adsorber-reactor system includes 1D PDE-based dynamic models of the downcomer and outlet hopper, as well as models of distributors, control valves, and other pressure-drop devices. Consistent boundary and initial conditions are considered for simulating the dynamic model. Equipment items are sized and appropriate heat transfer options, wherever needed, are provided. Finally, a valid pressure-flow network is developed and a lower-level control system is designed. Using ACM, the transient responses of various process variables such as flue gas and sorbent temperatures, overall CO2 capture, and the level of solids in the downcomer and hopper have been studied by simulating typical disturbances such as changes in the temperature, flowrate, and composition of the flue gas. To maintain the overall CO2 capture at a desired level in the face of the typical disturbances, two control strategies were considered: a proportional-integral-derivative (PID)-based feedback control strategy and a feedforward-augmented feedback control strategy. Dynamic simulation results show that both strategies result in unacceptable overshoot/undershoot and a long settling time. To improve the control system performance, a linear model predictive controller (LMPC) is designed. In summary, the overall results illustrate how optimizing the operation and control of carbon capture systems can have a significant impact on the extent and the rate at which commercial-scale capture processes will be scaled up, deployed, and used in the years to come.
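For intuition about the feedback layer, here is a toy sketch of a discrete PID loop regulating capture fraction on a first-order stand-in plant; the gain, time constant, and tunings are invented for illustration and say nothing about the ACM/BFB model, where PID tuning proved inadequate and motivated the move to model predictive control:

```python
import numpy as np

class PID:
    """Textbook discrete PID controller (positional form)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0
    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# stand-in plant: capture fraction y responds to sorbent flow u as a
# first-order lag, dy/dt = (K*u - y)/tau  (all parameters illustrative)
K, tau, dt = 0.9, 50.0, 1.0
pid = PID(kp=1.2, ki=0.05, kd=0.0, dt=dt)
y, setpoint = 0.60, 0.90
for step in range(600):
    if step == 300:
        setpoint = 0.80          # e.g., a demanded partial-capture turndown
    u = float(np.clip(pid.step(setpoint, y), 0.0, 2.0))
    y += dt / tau * (K * u - y)
print(f"final capture fraction: {y:.3f} (setpoint {setpoint})")
```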
Catch as Catch Can: The History of the Theory of Gravitational Capture.
ERIC Educational Resources Information Center
Osipov, Y.
1992-01-01
Traces cosmogonic history of solar system from Laplace's hypothesis of revolving gas nebulae, to Newton's two-body problem with its mathematical impossibility of gravitational capture, to the isosceles three-body problem of Schmidt and Sitnikov with its notion of partial capture, and finally to the total capture model of Alexeyev verified by the…
Nichols, James D.; Pollock, Kenneth H.; Hines, James E.
1984-01-01
The robust design of Pollock (1982) was used to estimate parameters of a Maryland M. pennsylvanicus population. Closed model tests provided strong evidence of heterogeneity of capture probability, and model Mh (Otis et al., 1978) was selected as the most appropriate model for estimating population size. The Jolly-Seber model goodness-of-fit test indicated rejection of the model for this data set, and the Mh estimates of population size were all higher than the Jolly-Seber estimates. Both of these results are consistent with the evidence of heterogeneous capture probabilities. The authors thus used Mh estimates of population size, Jolly-Seber estimates of survival rate, and estimates of birth-immigration based on a combination of the population size and survival rate estimates. Advantages of the robust design estimates for certain inference procedures are discussed, and the design is recommended for future small mammal capture-recapture studies directed at estimation.
Study on the keV neutron capture reaction in 56Fe and 57Fe
NASA Astrophysics Data System (ADS)
Wang, Taofeng; Lee, Manwoo; Kim, Guinyun; Ro, Tae-Ik; Kang, Yeong-Rok; Igashira, Masayuki; Katabuchi, Tatsuya
2014-03-01
The neutron capture cross-sections and the radiative capture gamma-ray spectra from the broad resonances of 56Fe and 57Fe in the neutron energy range from 10 to 90 keV and at 550 keV have been measured with an anti-Compton NaI(Tl) detector. Pulsed keV neutrons were produced from the 7Li(p,n)7Be reaction by bombarding the lithium target with the 1.5 ns bunched proton beam from the 3 MV Pelletron accelerator. The incident neutron spectrum on a capture sample was measured by means of a time-of-flight (TOF) method with a 6Li-glass detector. The number of weighted capture counts of the iron or gold sample was obtained by applying a pulse-height weighting technique to the corresponding capture gamma-ray pulse-height spectrum. The neutron capture gamma-ray spectra were obtained by unfolding the observed capture gamma-ray pulse-height spectra. To further understand the mechanism of the neutron radiative capture reaction and to study physics models, theoretical calculations of the gamma-ray spectra for 56Fe and 57Fe with the POD program have been performed by applying the Hauser-Feshbach statistical model. The dominant ingredients of the statistical calculation were the Optical Model Potential (OMP), the level densities described by the Mengoni-Nakajima approach, and the gamma-ray transmission coefficients described by gamma-ray strength functions. The comparison of the theoretical calculations, performed only for the 550 keV point, shows good agreement with the present experimental results.
Salience-Based Selection: Attentional Capture by Distractors Less Salient Than the Target
Goschy, Harriet; Müller, Hermann Joseph
2013-01-01
Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using an empirical and modeling approach of the visual search distractor paradigm. For the empirical part, we manipulated salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. PMID:23382820
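The stochastic account sketched above reduces to a simple race: target and distractor each draw a selection time from a salience-dependent distribution, and capture occurs whenever the distractor's draw is smaller. A minimal simulation (normal selection-time distributions with invented parameters, not the fitted behavioral measures from the paper):

```python
# Race-model sketch: the item with the smaller sampled selection time is
# selected first; overlapping distributions let a less salient distractor
# (slower on average) still capture attention on some trials.
import numpy as np

rng = np.random.default_rng(0)

def capture_probability(mu_target, mu_distractor, sigma=20.0, n=100_000):
    """P(distractor selected first); smaller mu = more salient = faster."""
    t_target = rng.normal(mu_target, sigma, n)
    t_distractor = rng.normal(mu_distractor, sigma, n)
    return np.mean(t_distractor < t_target)

# distractor less salient than the target (mean selection time 15 ms slower)
print(capture_probability(mu_target=100, mu_distractor=115))  # ~0.30
```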
Cycle development and design for CO{sub 2} capture from flue gas by vacuum swing adsorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Zhang; Paul A. Webley
CO{sub 2} capture and storage is an important component in the development of clean power generation processes. One CO{sub 2} capture technology is gas-phase adsorption, specifically pressure (or vacuum) swing adsorption. The complexity of these processes makes evaluation and assessment of new adsorbents difficult and time-consuming. In this study, we have developed a simple model specifically targeted at CO{sub 2} capture by pressure swing adsorption and validated our model by comparison with data from a fully instrumented pilot-scale pressure swing adsorption process. The model captures non-isothermal effects as well as nonlinear adsorption and nitrogen coadsorption. Using the model and our apparatus, we have designed and studied a large number of cycles for CO{sub 2} capture. We demonstrate that by careful management of adsorption fronts and assembly of cycles based on understanding of the roles of individual steps, we are able to quickly assess the effect of adsorbents and process parameters on capture performance and identify optimal operating regimes and cycles. We recommend this approach in contrast to exhaustive parametric studies, which tend to depend on specifics of the chosen cycle and adsorbent. We show that appropriate combinations of process steps can yield excellent process performance and demonstrate how pressure drop, heat loss, etc. affect process performance through their effect on adsorption fronts and profiles. Finally, cyclic temperature profiles along the adsorption column can be readily used to infer concentration profiles; this has proved to be a very useful tool in cyclic function definition. Our research reveals excellent promise for the application of pressure/vacuum swing adsorption technology in the arena of CO{sub 2} capture from flue gases. 20 refs., 6 figs., 2 tabs.
Integrated approach for stress based lifing of aero gas turbine blades
NASA Astrophysics Data System (ADS)
Abu, Abdullahi Obonyegba
In order to analyse turbine blade life, the damage due to the combined thermal and mechanical loads should be adequately accounted for. This is more challenging when detailed component geometry is limited. Therefore, a compromise between the level of geometric detail and the complexity of the lifing method to be implemented is necessary. This research focuses on how the life assessment of aero engine turbine blades can be done, considering the balance between available design inputs and an adequate level of fidelity. Accordingly, the thesis contributes to developing a generic turbine blade lifing method that is based on the engine thermodynamic cycle, integrating the critical design/technological factors and operational parameters that influence aero engine blade life. To this end, thermo-mechanical fatigue was identified as the critical damage phenomenon driving the life of the turbine blade. The developed approach integrates software tools and numerical models created using the minimum design information typically available at the early design stages. Using finite element analysis of an idealised blade geometry, the approach captures the relevant impacts of thermal gradients and thermal stresses that contribute to thermo-mechanical fatigue damage on the gas turbine blade. The blade life is evaluated using the Neu/Sehitoglu thermo-mechanical fatigue model, which considers damage accumulation due to fatigue, oxidation, and creep. The leading edge is examined as a critical part of the blade to estimate the damage severity for different design factors and operational parameters. The outputs of the research can be used to better understand how the environment and the operating conditions of the aircraft affect blade life consumption, and therefore their impact on the maintenance cost and the availability of the propulsion system. This research also finds that the environmental (oxidation) effect drives blade life and that the blade coolant side is the critical location. Furthermore, a parametric and sensitivity study of the Neu/Sehitoglu model parameters suggests that, in addition to four previously reported parameters, the sensitivity of the phasing to oxidation damage is critical to overall blade life.
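In outline, the Neu/Sehitoglu model sums the per-cycle damage from the three mechanisms, so the predicted life $N_f$ follows a linear damage summation (the standard form of the model, stated here for orientation rather than taken from the thesis):

$$\frac{1}{N_f} = \frac{1}{N_f^{\mathrm{fat}}} + \frac{1}{N_f^{\mathrm{ox}}} + \frac{1}{N_f^{\mathrm{creep}}},$$

where each term is evaluated from the local strain-temperature history, and the oxidation and creep terms carry the thermo-mechanical phasing dependence whose sensitivity is discussed above.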
A Stochastic Fractional Dynamics Model of Rainfall Statistics
NASA Astrophysics Data System (ADS)
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic, and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of non-stationary behavior with occasional anomalously correlated rain events, present a challenge for the model.
Li, Cheryl; Shoji, Satoshi; Beebe, Jean
2018-05-18
The purpose of this study was to characterize the pharmacokinetics (PK) of PF-04236921, a novel anti-IL-6 monoclonal antibody, and its pharmacokinetic/pharmacodynamic (PK/PD) relationship on serum C-Reactive Protein (CRP) in healthy volunteers and patients with rheumatoid arthritis (RA), systemic lupus erythematosus (SLE), and Crohn's disease (CD). METHODS: Population modelling analyses were conducted using nonlinear mixed effects modelling. Data from 2 phase 1 healthy volunteer studies, a phase 1 RA study, a phase 2 CD study, and a phase 2 SLE study were included. A 2-compartment model with first order absorption and linear elimination and a mechanism-based indirect response model adequately described the PK and PK/PD relationships, respectively. Central compartment volume of distribution (Vc) positively correlated with body weight. Clearance (CL) negatively correlated with baseline albumin concentration, positively correlated with baseline CRP and creatinine clearance, and was slightly lower in females. After correcting for covariates, CL in CD subjects was approximately 60% higher than in the other populations. Maximum inhibition of PF-04236921 on CRP production (Imax) negatively correlated with baseline albumin. Imax positively correlated with baseline CRP, and the relationship was captured as a covariance structure in the PK/PD model. Integrated population PK and PK/PD models of PF-04236921 have been developed using pooled data from healthy subjects and autoimmune patients. The current model enables simulation of PF-04236921 PK and PD profiles under various dosing regimens and patient populations and should facilitate future clinical study of PF-04236921 and other anti-IL-6 monoclonal antibodies. This article is protected by copyright. All rights reserved.
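A minimal sketch of the model class described (2-compartment PK with first-order absorption, plus an indirect-response model in which drug inhibits CRP production) can be written as a small ODE system; every parameter value below is an invented placeholder, not a fitted PF-04236921 estimate:

```python
# 2-compartment PK with first-order absorption and linear elimination,
# coupled to an indirect-response PD model: drug concentration inhibits
# CRP production (kin) while first-order loss (kout) is unchanged.
import numpy as np
from scipy.integrate import solve_ivp

ka, CL, Vc, Q, Vp = 0.3, 0.5, 5.0, 0.8, 4.0    # 1/day, L/day, L, L/day, L
kin, kout, Imax, IC50 = 10.0, 1.0, 0.8, 2.0    # CRP turnover / inhibition

def rhs(t, y):
    depot, central, peripheral, crp = y
    conc = central / Vc
    inhibition = Imax * conc / (IC50 + conc)
    return [
        -ka * depot,                                          # absorption
        ka * depot - (CL / Vc) * central
            - (Q / Vc) * central + (Q / Vp) * peripheral,     # central
        (Q / Vc) * central - (Q / Vp) * peripheral,           # peripheral
        kin * (1.0 - inhibition) - kout * crp,                # CRP
    ]

y0 = [100.0, 0.0, 0.0, kin / kout]     # dose in depot; CRP at baseline
sol = solve_ivp(rhs, (0.0, 60.0), y0)
print(sol.y[3, -1])                    # suppressed CRP at day 60
```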
Forecasting the Performance of Agroforestry Systems
NASA Astrophysics Data System (ADS)
Luedeling, E.; Shepherd, K.
2014-12-01
Agroforestry has received considerable attention from scientists and development practitioners in recent years. It is recognized as a cornerstone of many traditional agricultural systems, as well as a new option for sustainable land management in currently treeless agricultural landscapes. Agroforestry systems are diverse, but most manifestations supply substantial ecosystem services, including marketable tree products, soil fertility, water cycle regulation, wildlife habitat and carbon sequestration. While these benefits have been well documented for many existing systems, projecting the outcomes of introducing new agroforestry systems, or forecasting system performance under changing environmental or climatic conditions, remains a substantial challenge. Due to the various interactions between system components, the multiple benefits produced by trees and crops, and the host of environmental, socioeconomic and cultural factors that shape agroforestry systems, mechanistic models of such systems quickly become very complex. They then require a lot of data for site-specific calibration, which presents a challenge for their use in new environmental and climatic domains, especially in data-scarce environments. For supporting decisions on the scaling up of agroforestry technologies, new projection methods are needed that can capture system complexity to an adequate degree, while taking full account of the fact that data on many system variables will virtually always be highly uncertain. This paper explores what projection methods are needed for supplying decision-makers with useful information on the performance of agroforestry in new places or new climates. Existing methods are discussed in light of these methodological needs. Finally, a participatory approach to performance projection is proposed that captures system dynamics in a holistic manner and makes probabilistic projections about expected system performance. This approach avoids the temptation to take spuriously precise model results at face value, and it is able to make predictions even where data is scarce. It thus provides a rapid and honest assessment option that can quickly supply decision-makers with system performance estimates, offering an opportunity to improve the targeting of agroforestry interventions.
NASA Astrophysics Data System (ADS)
Niang, C.
2015-12-01
Intraseasonal variability of rainfall over West Africa plays a significant role in the economy of the region and is highly linked to agriculture and water resources. This research study aims to investigate the relationship between the Madden Julian Oscillation (MJO) and rainfall over West Africa during the boreal summer in the state-of-the-art Atmospheric Model Intercomparison Project (AMIP) type simulations performed by Atmosphere General Circulation Models (GCMs) forced with prescribed Sea Surface Temperature (SST). It aims to determine the impact of the MJO on rainfall and convection over West Africa and identify the dynamical processes involved in the state-of-the-art climate simulations. The simulations show in general good skill in capturing the MJO's main characteristics as well as its influence on rainfall over West Africa. On the global scale, most models simulated an eastward spatio-temporal propagation of enhanced and suppressed convection similar to the observed. However, over West Africa the MJO signal is weak in a few of the models, although there is good coherence in the eastward propagation. The influence on rainfall is well captured in both the Sahel and Guinea regions, adequately reproducing the transition between positive and negative rainfall anomalies through the different phases as seen in the observations. Furthermore, the results show that the strong active convective phase is clearly associated with the African Easterly Jet (AEJ), but the weak convective phase is associated with a much weaker AEJ, particularly over coastal Ghana. In assessing the mechanisms involved in the above impacts, the convectively coupled equatorial waves (CCEW) are analysed separately. The analysis of the longitudinal propagation of zonal wind at 850 hPa and outgoing longwave radiation (OLR) shows that the CCEW are very weak and their extension is very limited beyond the West African region. It was found that the westward coupled equatorial Rossby waves are needed to bring out the MJO-convection link over the region, and this relationship is well reproduced by all the models. Results also confirmed that it may be possible to predict the anomalous convection over West Africa with a time lead of 15-20 days with regard to the Indian Ocean, and the AMIP simulations performed well in this regard.
Modeling the effect of toe clipping on treefrog survival: Beyond the return rate
Waddle, J.H.; Rice, K.G.; Mazzotti, F.J.; Percival, H.F.
2008-01-01
Some studies have described a negative effect of toe clipping on return rates of marked anurans, but the return rate is limited in that it does not account for heterogeneity of capture probabilities. We used open population mark-recapture models to estimate both apparent survival (ϕ) and the recapture probability (p) of two treefrog species individually marked by clipping 2–4 toes. We used information-theoretic model selection to examine the effect of toe clipping on survival while accounting for variation in capture probability. The model selection results indicate strong support for an effect of toe clipping on survival of Green Treefrogs (Hyla cinerea) and only limited support for an effect of toe clipping on capture probability. We estimate there was a mean absolute decrease in survival of 5.02% and 11.16% for Green Treefrogs with three and four toes removed, respectively, compared to individuals with just two toes removed. Results for Squirrel Treefrogs (Hyla squirella) indicate little support for an effect of toe clipping on survival but may indicate some support for a negative effect on capture probability. We believe that the return rate alone should not be used to examine survival of marked animals because constant capture probability must be assumed, and our examples demonstrate how capture probability may vary over time and among groups. Mark-recapture models provide a method for estimating the effect of toe clipping on anuran survival in situations where unique marks are applied.
Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni
2017-12-01
The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITVBI) and among populations (ITVPOP), relatively few studies have analyzed intraspecific variability within individuals (ITVWI). Here, we provide an analysis of ITVWI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITVWI level of variation between the two traits and provide the minimum and optimal sampling sizes needed to take ITVWI into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance in the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.
Hastings, K L
2001-02-02
Immune-based systemic hypersensitivities account for a significant number of adverse drug reactions. There appear to be no adequate nonclinical models to predict systemic hypersensitivity to small molecular weight drugs. Although there are very good methods for detecting drugs that can induce contact sensitization, these have not been successfully adapted for prediction of systemic hypersensitivity. Several factors have made the development of adequate models difficult. The term systemic hypersensitivity encompasses many discrete immunopathologies. Each type of immunopathology presumably is the result of a specific cluster of immunologic and biochemical phenomena. Certainly other factors, such as genetic predisposition, metabolic idiosyncrasies, and concomitant diseases, further complicate the problem. Therefore, it may be difficult to find common mechanisms upon which to construct adequate models to predict specific types of systemic hypersensitivity reactions. There is some reason to hope, however, that adequate methods could be developed for at least identifying drugs that have the potential to produce signs indicative of a general hazard for immune-based reactions.
NASA Astrophysics Data System (ADS)
Pellerin, B. A.; Bergamaschi, B. A.; Saraceno, J.; Downing, B. D.; Crawford, C.; Gilliom, R.; Frederick, P.
2013-12-01
Nitrogen flux from the Mississippi River to the Gulf of Mexico has received considerable attention because it fuels primary production on the continental shelf and can contribute to the summer hypoxia observed in the Gulf. Accurately quantifying the load of nitrogen - particularly as nitrate - to the Gulf is critical for both predicting the size of the oxygen-depleted dead zone and establishing targets for N load reduction from the basin. Fluxes have been historically calculated with load estimation models using 5-10 years of discrete nitrate data collected approximately 12-18 times per year. These traditional monthly to biweekly sampling intervals often fail to adequately capture hydrologic pulses ranging from early snowmelt periods to short-duration rainfall events in small streams, but the ability to adequately resolve patterns in water quality in large rivers has received much less attention. The recent commercial availability of in situ optical sensors for nitrate, together with new techniques for data collection and analysis, provides an opportunity to measure nitrate concentration on time scales over which environmental conditions actually change. Data have been collected and analyzed from a USGS optical nitrate sensor deployed in the Mississippi River at Baton Rouge, Louisiana, since November 2011. Our nitrate data, collected at three-hour intervals, show a strong relationship to depth- and width-integrated discrete nitrate concentrations measured on 20 dates (r2=0.99, slope=1) after correcting for a consistent, small positive bias (0.10 mg/L). The close relationship between the in situ data measured at the edge of the channel and the depth- and width-integrated samples suggests that the fixed sensor measurements provide a robust proxy for cross-sectionally averaged nitrate concentrations at Baton Rouge under a range of flow conditions. Nitrate concentrations ranged from a low of 0.19 mg/L as N on September 11, 2012 to a high of 3.09 mg/L as N on July 12, 2013. This covers nearly the entire range of nitrate concentrations measured at Baton Rouge (2005-2013) and 30 miles upriver at St. Francisville (1996-2013). Seasonality in nitrate concentrations and discharge was observed, but daily values of discharge and nitrate concentrations reveal a decoupling both between dry and wet years and within a given year. Results from our study also suggest an anomalously high flush of nitrate from the upper basin in the wet spring of 2013, with higher than expected daily nitrate loads based on the daily runoff. A comparison of calculated (e.g. sensor) versus modeled spring nitrate loads reveals differences of up to 30% during certain months, although the implications of those differences for predicting the size of the Gulf hypoxic zone are not yet known.
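The bias correction and load arithmetic implied above are straightforward; the sketch below subtracts the constant +0.10 mg/L sensor offset and computes a daily load from concentration and discharge (the numbers are illustrative, not the actual Baton Rouge record):

```python
# Daily nitrate load from sensor concentration and discharge.
# Unit logic: mg/L == g/m^3, so g/m^3 * m^3/s = g/s.
import numpy as np

sensor_no3 = np.array([1.52, 1.48, 1.55])          # mg/L as N, 3-hour values
discharge = np.array([15000.0, 15200.0, 14900.0])  # m^3/s

corrected = sensor_no3 - 0.10                      # remove the positive bias
# g/s * 86400 s/day = g/day; / 1e6 = tonnes/day
load_t_per_day = corrected * discharge * 86400 / 1e6
print(load_t_per_day)                              # ~1.8e3 tonnes N/day
```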
A Machine-Learning-Driven Sky Model.
Satylmys, Pynar; Bashford-Rogers, Thomas; Chalmers, Alan; Debattista, Kurt
2017-01-01
Sky illumination is responsible for much of the lighting in a virtual environment. A machine-learning-based approach can compactly represent sky illumination from both existing analytic sky models and from captured environment maps. The proposed approach can approximate the captured lighting at a significantly reduced memory cost and enable smooth transitions of sky lighting to be created from a small set of environment maps captured at discrete times of day. The authors' results demonstrate accuracy close to the ground truth for both analytical and capture-based methods. The approach has a low runtime overhead, so it can be used as a generic approach for both offline and real-time applications.
A new capture fraction method to map how pumpage affects surface water flow
Leake, S.A.; Reeves, H.W.; Dickinson, J.E.
2010-01-01
All groundwater pumped is balanced by removal of water somewhere, initially from storage in the aquifer and later from capture in the form of increase in recharge and decrease in discharge. Capture that results in a loss of water in streams, rivers, and wetlands is now a concern in many parts of the United States. Hydrologists commonly use analytical and numerical approaches to study temporal variations in sources of water to wells for select points of interest. Much can be learned about coupled surface/groundwater systems, however, by looking at the spatial distribution of theoretical capture for select times of interest. Development of maps of capture requires (1) a reasonably well-constructed transient or steady state model of an aquifer with head-dependent flow boundaries representing surface water features or evapotranspiration and (2) an automated procedure to run the model repeatedly and extract results, each time with a well in a different location. This paper presents new methods for simulating and mapping capture using three-dimensional groundwater flow models and presents examples from Arizona, Oregon, and Michigan. Journal compilation © 2010 National Ground Water Association. No claim to original US government works.
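The mapping recipe in (1)-(2) is essentially a loop over candidate well locations. The sketch below captures the bookkeeping; run_flow_model and boundary_outflow are hypothetical stand-ins for calls into a real MODFLOW-style simulation, not an actual USGS API:

```python
# Capture-fraction mapping sketch: place a well of rate q_well in each cell
# in turn, rerun the flow model, and record the fraction of the pumpage
# balanced by reduced outflow across head-dependent boundaries.
def capture_fraction_map(grid_cells, q_well, run_flow_model, boundary_outflow):
    base = boundary_outflow(run_flow_model(well=None))   # no-well baseline
    fractions = {}
    for cell in grid_cells:
        sim = run_flow_model(well=(cell, q_well))
        fractions[cell] = (base - boundary_outflow(sim)) / q_well
    return fractions

# toy demo with a linear stand-in model: pumping always captures 60%
demo = capture_fraction_map(
    grid_cells=["A", "B"], q_well=100.0,
    run_flow_model=lambda well: 0.0 if well is None else 60.0,
    boundary_outflow=lambda sim: 1000.0 - sim,
)
print(demo)   # {'A': 0.6, 'B': 0.6}
```

In a real application the remainder of the pumpage (1 minus the capture fraction) comes from aquifer storage, which is why a transient model is needed to map how capture grows with time.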
NASA Astrophysics Data System (ADS)
Tikhomirov, Georgy; Bahdanovich, Rynat; Pham, Phu
2017-09-01
Precise calculation of energy release in a nuclear reactor is necessary to obtain the correct spatial power distribution and predict the characteristics of burned nuclear fuel. In this work, a previously developed method for calculating the contribution of neutron-capture reactions (the capture component) to the effective energy release in the core of a nuclear reactor is discussed. The method was improved and applied to different models of the VVER-1000 reactor developed for the MCU 5 and MCNP 4 computer codes. Different models of the equivalent cell and fuel assembly at the beginning of the fuel cycle were calculated. These models differ in geometry, fuel enrichment, and the presence of burnable absorbers. It is shown that the capture component depends on fuel enrichment and the presence of burnable absorbers. Its value varies for different types of hot fuel assemblies from 3.35% to 3.85% of the effective energy release. The average capture component contribution to the effective energy release for typical serial fresh fuel of the VVER-1000 is 3.5%, which is 7 MeV/fission. The method will be used in the future to estimate the dependency of capture energy on fuel density, burn-up, etc.
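The quoted numbers are mutually consistent if the effective energy release per fission is taken to be roughly 200 MeV, the usual round value:

$$E_{\mathrm{capture}} \approx 0.035 \times 200\ \mathrm{MeV/fission} = 7\ \mathrm{MeV/fission}.$$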
Andalam, Sidharta; Ramanna, Harshavardhan; Malik, Avinash; Roop, Parthasarathi; Patel, Nitish; Trew, Mark L
2016-08-01
Virtual heart models have been proposed for closed loop validation of safety-critical embedded medical devices, such as pacemakers. These models must react in real-time to off-the-shelf medical devices. Real-time performance can be obtained by implementing models in computer hardware, and methods of compiling classes of Hybrid Automata (HA) onto FPGA have been developed. Models of ventricular cardiac cell electrophysiology have been described using HA which capture the complex nonlinear behavior of biological systems. However, many models that have been used for closed-loop validation of pacemakers are highly abstract and do not capture important characteristics of the dynamic rate response. We developed a new HA model of cardiac cells which captures dynamic behavior and we implemented the model in hardware. This potentially enables modeling the heart with over 1 million dynamic cells, making the approach ideal for closed loop testing of medical devices.
Gallien, Laure; Thuiller, Wilfried; Fort, Noémie; Boleda, Marti; Alberto, Florian J; Rioux, Delphine; Lainé, Juliette; Lavergne, Sébastien
2016-01-01
Climatic niche shifts have been documented in a number of invasive species by comparing the native and adventive climatic ranges in which they occur. However, these shifts likely represent changes in the realized climatic niches of invasive species and may not necessarily be driven by genetic changes in climatic affinities. Until now, the role of rapid niche evolution in the spread of invasive species has remained a challenging issue, with conflicting results. Here, we document a likely genetically based climatic niche expansion of an annual plant invader, the common ragweed (Ambrosia artemisiifolia L.), a highly allergenic invasive species causing substantial public health issues. To do so, we looked for recent evolutionary change at the upward migration front of its adventive range in the French Alps. Based on species climatic niche models estimated at both global and regional scales, we stratified our sampling design to adequately capture the species niche, and localized populations suspected of niche expansion. Using a combination of species niche modeling, landscape genetics models, and common garden measurements, we then related the species' genetic structure and its phenotypic architecture across the climatic niche. Our results strongly suggest that the common ragweed is rapidly adapting to local climatic conditions at its invasion front and that it currently expands its niche toward colder and formerly unsuitable climates in the French Alps (i.e. in sites where niche models would not predict its occurrence). Such results, showing that species climatic niches can evolve on very short time scales, have important implications for predictive models of biological invasions that do not account for evolutionary processes.
NASA Astrophysics Data System (ADS)
Solman, Silvina A.; Pessacg, Natalia L.
2012-01-01
In this study the capability of the MM5 model in simulating the main mode of intraseasonal variability during the warm season over South America is evaluated through a series of sensitivity experiments. Several 3-month simulations nested into ERA40 reanalysis were carried out using different cumulus schemes and planetary boundary layer schemes in an attempt to define the optimal combination of physical parameterizations for simulating alternating wet and dry conditions over La Plata Basin (LPB) and the South Atlantic Convergence Zone regions, respectively. The results were compared with different observational datasets, and model evaluation was performed taking into account the spatial distribution of monthly precipitation and daily statistics of precipitation over the target regions. Though every experiment was able to capture the contrasting behavior of the precipitation during the simulated period, precipitation was largely underestimated, particularly over the LPB region, mainly due to a misrepresentation of the moisture flux convergence. Experiments using grid nudging of the winds above the planetary boundary layer showed better performance compared with those in which no constraints were imposed on the regional circulation within the model domain. Overall, no single experiment was found to perform best over the entire domain and during the two contrasting months. Which experiment outperforms depends on the area of interest, with the simulation using the Grell (Kain-Fritsch) cumulus scheme in combination with the MRF planetary boundary layer scheme being more adequate for subtropical (tropical) latitudes. The ensemble of the sensitivity experiments showed better performance than any individual experiment.
Mobile Laser Scanning for Indoor Modelling
NASA Astrophysics Data System (ADS)
Thomson, C.; Apostolopoulos, G.; Backes, D.; Boehm, J.
2013-10-01
The process of capturing and modelling buildings has gained increased focus in recent years with the rise of Building Information Modelling (BIM). At the heart of BIM is a process change for the construction and facilities management industries, whereby a BIM aids more collaborative working through better information exchange, and as part of this process Geomatic/Land Surveyors are not immune from the changes. Terrestrial laser scanning has been prescribed as the preferred method for rapidly capturing buildings for BIM geometry. This is a process change from a traditional measured building survey with just a total station, and is aided by the increasing acceptance of point cloud data being integrated with parametric building models in BIM tools such as Autodesk Revit or Bentley Architecture. Pilot projects carried out previously by the authors to investigate geometry capture and modelling for BIM confirmed the view of others that the process of data capture with static laser scan setups is slow and very involved, requiring at least two people for efficiency. Indoor Mobile Mapping Systems (IMMS) present a possible solution to these issues, especially in time saved. Therefore this paper investigates their application as a capture device for BIM geometry creation over traditional static methods through a fit-for-purpose test.
Capture Versus Capture Zones: Clarifying Terminology Related to Sources of Water to Wells.
Barlow, Paul M; Leake, Stanley A; Fienen, Michael N
2018-03-15
The term capture, related to the source of water derived from wells, has been used in two distinct yet related contexts by the hydrologic community. The first is a water-budget context, in which capture refers to decreases in the rates of groundwater outflow and (or) increases in the rates of recharge along head-dependent boundaries of an aquifer in response to pumping. The second is a transport context, in which capture zone refers to the specific flowpaths that define the three-dimensional, volumetric portion of a groundwater flow field that discharges to a well. A closely related issue that has become associated with the source of water to wells is streamflow depletion, which refers to the reduction in streamflow caused by pumping, and is a type of capture. Rates of capture and streamflow depletion are calculated by use of water-budget analyses, most often with groundwater-flow models. Transport models, particularly particle-tracking methods, are used to determine capture zones to wells. In general, however, transport methods are not useful for quantifying actual or potential streamflow depletion or other types of capture along aquifer boundaries. To clarify the sometimes subtle differences among these terms, we describe the processes and relations among capture, capture zones, and streamflow depletion, and provide proposed terminology to distinguish among them. Published 2018. This article is a U.S. Government work and is in the public domain in the USA. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
Characterization and Remediation of Contaminated Sites:Modeling, Measurement and Assessment
NASA Astrophysics Data System (ADS)
Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.
2008-05-01
The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter at field sites contaminated with dense non-aqueous phase liquids or DNAPLs in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation reduced proportionally with the mean (i.e., coefficient of variation of flux distribution is constant with time). Similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distribution. The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.
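One common way to parameterize the mass-flux behavior described above is the power-law source strength function used in the DNAPL literature (e.g., Parker and Park 2004; Falta et al. 2005), given here as a general form rather than the specific model of this study:

$$\frac{J(t)}{J_0} = \left(\frac{M(t)}{M_0}\right)^{\Gamma},$$

where $J$ is the contaminant mass flux across the source control plane, $M$ is the remaining source mass, and $\Gamma$ is a depletion exponent; $\Gamma \approx 1$ recovers the approximately linear flux-mass relationship reported above.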
Influence of nonelectrostatic ion-ion interactions on double-layer capacitance
NASA Astrophysics Data System (ADS)
Zhao, Hui
2012-11-01
Recently a Poisson-Helmholtz-Boltzmann (PHB) model [Bohinc et al., Phys. Rev. E 85, 031130 (2012)] was developed by accounting for solvent-mediated nonelectrostatic ion-ion interactions. Nonelectrostatic interactions are described by a Yukawa-like pair potential. In the present work, we modify the PHB model by adding steric effects (finite ion size) into the free energy to derive the governing equations. The modified PHB model is capable of capturing both ion specificity and ion crowding. This modified model is then employed to study the capacitance of the double layer. More specifically, we focus on the influence of nonelectrostatic ion-ion interactions on charging a double layer near a flat surface in the presence of steric effects. We numerically compute the differential capacitance as a function of the voltage under various conditions. At small voltages and low salt concentrations (dilute solution), we find that the predictions from the modified PHB model are the same as those from the classical Poisson-Boltzmann theory, indicating that nonelectrostatic ion-ion interactions and steric effects are negligible. At moderate voltages, nonelectrostatic ion-ion interactions play an important role in determining the differential capacitance. Generally speaking, nonelectrostatic interactions decrease the capacitance because of additional nonelectrostatic repulsion among excess counterions inside the double layer. However, increasing the voltage gradually favors steric effects, which induce a condensed layer of crowded counterions near the electrode. Accordingly, the predictions from the modified PHB model collapse onto those computed by the modified Poisson-Boltzmann theory considering steric effects alone. Finally, theoretical predictions are compared with and agree favorably with experimental data, in particular in concentrated solutions, leading one to conclude that the modified PHB model adequately predicts the diffuse-charge dynamics of the double layer with ion specificity and steric effects.
Akhtar, Saeed; Rozi, Shafquat
2009-01-01
AIM: To identify the stochastic autoregressive integrated moving average (ARIMA) model for short term forecasting of hepatitis C virus (HCV) seropositivity among volunteer blood donors in Karachi, Pakistan. METHODS: Ninety-six months (1998-2005) of data on HCV seropositive cases (1000⁻¹ × month⁻¹) among male volunteer blood donors tested at four major blood banks in Karachi, Pakistan were subjected to ARIMA modeling. Subsequently, the fitted ARIMA model was used to forecast HCV seropositive donors for months 91-96 to contrast with the observed series for the same months. To assess the forecast accuracy, the mean absolute error rate (%) between the observed and predicted HCV seroprevalence was calculated. Finally, the fitted ARIMA model was used for short-term forecasts beyond the observed series. RESULTS: The goodness-of-fit test of the optimum ARIMA (2,1,7) model showed non-significant autocorrelations in the residuals of the model. The forecasts by ARIMA for months 91-96 closely followed the pattern of the observed series for the same months, with a mean monthly absolute forecast error over 6 mo of 6.5%. The short-term forecasts beyond the observed series adequately captured the pattern in the data and showed an increasing tendency of HCV seropositivity, with a mean ± SD HCV seroprevalence (1000⁻¹ × month⁻¹) of 24.3 ± 1.4 over the forecast interval. CONCLUSION: To curtail HCV spread, public health authorities need to educate communities and health care providers about HCV transmission routes based on known HCV epidemiology in Pakistan and its neighboring countries. Future research may focus on factors associated with hyperendemic levels of HCV infection. PMID:19340903
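The fit/forecast/holdout workflow described can be sketched with statsmodels using the reported (2,1,7) order; the series below is synthetic, not the Karachi blood-bank data:

```python
# Fit ARIMA(2,1,7) to the first 90 months, forecast months 91-96, and
# score the holdout with a mean absolute percentage error.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
series = 20 + 0.05 * np.arange(96) + rng.normal(0, 1.5, 96)  # 96 months

fit = ARIMA(series[:90], order=(2, 1, 7)).fit()
forecast = fit.forecast(steps=6)          # months 91-96

mape = 100 * np.mean(np.abs(forecast - series[90:]) / series[90:])
print(f"mean absolute forecast error: {mape:.1f}%")
```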
28.1 VARIETIES OF SELF DISORDER: A BIO-PHENO-SOCIAL MODEL OF SCHIZOPHRENIA
Nelson, Barnaby; Sass, Louis
2018-01-01
Abstract Background The self-disorder model offers a unifying way of conceptualizing schizophrenia’s highly diverse symptoms (positive, negative, disorganized), of capturing their distinctive bizarreness, and of conceiving their longitudinal development. These symptoms are viewed as differing manifestations of an underlying disorder of ‘core-self’: hyperreflexivity/diminished self-presence with accompanying disturbances of “grip” or “hold” on reality. Methods We have recently revised and tested this phenomenological model, in particular distinguishing primary versus secondary factors, in offering a bio-pheno-social model of schizophrenia spectrum disorders. Results The revised model is consistent with recent empirical findings and offers several advantages: 1) It helps account for the temporal variations of the symptoms or syndrome, including longitudinal progression, but also the shorter-term, situationally-reactive, sometimes defensive, and possibly quasi-agentive variability of symptom-expression that can occur in schizophrenia (consistent with understanding some aspects of self-disturbance as dynamic and mutable, involving shifting attitudes or experiential orientations). 2) It accommodates the overlapping of some key schizophrenic symptoms with certain non-schizophrenia spectrum conditions involving dissociation (depersonalization and derealization), including Depersonalization Disorder and Panic Disorder, thereby acknowledging both shared and distinguishing symptoms. 3) It integrates recent neurocognitive, neurobiological, and psychosocial (e.g., influence of trauma and culture) findings into a coherent but multi-factorial neuropsychological account. Discussion An adequate model of schizophrenia will postulate shared disturbances of core-self experiences that nevertheless can follow several distinct pathways and occur in various forms. Such a model is preferable to uni-dimensional alternatives, whether of schizophrenia or of core self disturbance, given its ability to account for distinctive yet varying experiential and neurocognitive abnormalities found in research on schizophrenia, and to integrate these with recent psychosocial as well as neurobiological findings.
Dynamics of basaltic glass dissolution - Capturing microscopic effects in continuum scale models
NASA Astrophysics Data System (ADS)
Aradóttir, E. S. P.; Sigfússon, B.; Sonnenthal, E. L.; Björnsson, G.; Jónsson, H.
2013-11-01
The method of 'multiple interacting continua' (MINC) was applied to include microscopic rate-limiting processes in continuum scale reactive transport models of basaltic glass dissolution. The MINC method involves dividing the system into ambient fluid and grains, using a specific surface area to describe the interface between the two. The various grains and regions within grains can then be described by dividing them into continua separated by dividing surfaces. Millions of grains can thus be considered within the method without the need to explicitly discretize them. Four continua were used to describe a dissolving basaltic glass grain; the first describes the ambient fluid around the grain, while the second, third, and fourth continua refer to a diffusive leached layer, the dissolving part of the grain, and the inert part of the grain, respectively. The model was validated using the TOUGHREACT simulator and data from column flow-through experiments of basaltic glass dissolution at low, neutral, and high pH values. Successful reactive transport simulations of the experiments and overall adequate agreement between measured and simulated values validate that the MINC approach can be applied to incorporate microscopic effects in continuum scale basaltic glass dissolution models. Equivalent models can be used when simulating dissolution and alteration of other minerals. The study provides an example of how numerical modeling and experimental work can be combined to enhance understanding of the mechanisms associated with basaltic glass dissolution. Column outlet concentrations indicated that basaltic glass dissolves stoichiometrically at pH 3. Predictive simulations with the developed MINC model indicated significant precipitation of secondary minerals within the column at neutral and high pH, explaining the observed non-stoichiometric outlet concentrations at these pH levels. Clay, zeolite, and hydroxide precipitation was predicted to be most abundant within the column.
NASA Astrophysics Data System (ADS)
Verma, Manish K.
Terrestrial gross primary productivity (GPP) is the largest and most variable component of the carbon cycle and is strongly influenced by phenology. Realistic characterization of spatio-temporal variation in GPP and phenology is therefore crucial for understanding dynamics in the global carbon cycle. In the last two decades, remote sensing has become a widely-used tool for this purpose. However, no study has comprehensively examined how well remote sensing models capture spatiotemporal patterns in GPP, and validation of remote sensing-based phenology models is limited. Using in-situ data from 144 eddy covariance towers located in all major biomes, I assessed the ability of 10 remote sensing-based methods to capture spatio-temporal variation in GPP at annual and seasonal scales. The models are based on different hypotheses regarding ecophysiological controls on GPP and span a range of structural and computational complexity. The results lead to four main conclusions: (i) at the annual time scale, models were more successful capturing spatial variability than temporal variability; (ii) at the seasonal scale, models were more successful in capturing average seasonal variability than interannual variability; (iii) simpler models performed as well or better than complex models; and (iv) the models that were best at explaining seasonal variability in GPP were different from those best able to explain variability in annual-scale GPP. Seasonal phenology of vegetation follows bounded growth and decay, and is widely modeled using growth functions. However, the specific form of the growth function affects how phenological dynamics are represented in ecosystem and remote sensing-based models. To examine this, four different growth functions (the logistic, Gompertz, Mirror-Gompertz and Richards functions) were assessed using remotely sensed and in-situ data collected at several deciduous forest sites. All of the growth functions provided good statistical representations of in-situ and remote sensing time series. However, the Richards function captured observed asymmetric dynamics that were not captured by the other functions. The timing of key phenophase transitions derived using the Richards function therefore agreed best with observations. This suggests that ecosystem models and remote-sensing algorithms would benefit from using the Richards function to represent phenological dynamics.
EPA MODELING TOOLS FOR CAPTURE ZONE DELINEATION
The EPA Office of Research and Development supports a step-wise modeling approach for design of wellhead protection areas for water supply wells. A web-based WellHEDSS (wellhead decision support system) is under development for determining when simple capture zones (e.g., centri...
NASA Astrophysics Data System (ADS)
Mfumu Kihumba, Antoine; Ndembo Longo, Jean; Vanclooster, Marnik
2016-03-01
A multivariate statistical modelling approach was applied to explain the anthropogenic pressure of nitrate pollution on the Kinshasa groundwater body (Democratic Republic of Congo). Multiple regression and regression tree models were compared and used to identify the major environmental factors that control the groundwater nitrate concentration in this region. The analyses were made in terms of physical attributes related to the topography, land use, geology and hydrogeology in the capture zone of different groundwater sampling stations. For the nitrate data, groundwater datasets from two different surveys were used. The statistical models identified the topography, the residential area, the service land (cemetery), and the surface-water land-use classes as major factors explaining nitrate occurrence in the groundwater. Also, groundwater nitrate pollution depends not on a single factor but on the combined influence of factors representing nitrogen loading sources and aquifer susceptibility characteristics. The groundwater nitrate pressure was better predicted with the regression tree model than with the multiple regression model. Furthermore, the results elucidated the sensitivity of the model performance to the method of delineation of the capture zones. For pollution modelling at the monitoring points, therefore, it is better to identify capture-zone shapes based on a conceptual hydrogeological model rather than to adopt arbitrary circular capture zones.
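The model comparison described (multiple linear regression versus a regression tree) can be sketched with scikit-learn; the four features and the response below are synthetic placeholders for the capture-zone attributes, chosen so that nitrate depends on a threshold interaction a linear model cannot represent:

```python
# Linear regression vs. regression tree on synthetic 'capture zone' data;
# the tree can pick up the threshold interaction, the linear model cannot.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 4))   # e.g. slope, %residential, %cemetery, %water
nitrate = 10 * X[:, 1] + 5 * (X[:, 0] < 0.3) * X[:, 2] + rng.normal(0, 1, 200)

for model in (LinearRegression(), DecisionTreeRegressor(max_depth=4)):
    r2 = cross_val_score(model, X, nitrate, cv=5, scoring="r2").mean()
    print(type(model).__name__, round(r2, 2))
```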
Hierarchical modeling of bycatch rates of sea turtles in the western North Atlantic
Gardner, B.; Sullivan, P.J.; Epperly, S.; Morreale, S.J.
2008-01-01
Previous studies indicate that the locations of the endangered loggerhead Caretta caretta and critically endangered leatherback Dermochelys coriacea sea turtles are influenced by water temperatures, and that incidental catch rates in the pelagic longline fishery vary by region. We present a Bayesian hierarchical model to examine the effects of environmental variables, including water temperature, on the number of sea turtles captured in the US pelagic longline fishery in the western North Atlantic. The modeling structure is highly flexible, utilizes a Bayesian model selection technique, and is fully implemented in the software program WinBUGS. The number of sea turtles captured is modeled as a zero-inflated Poisson distribution and the model incorporates fixed effects to examine region-specific differences in the parameter estimates. Results indicate that water temperature, region, bottom depth, and target species are all significant predictors of the number of loggerhead sea turtles captured. For leatherback sea turtles, the model with only target species had the most posterior model weight, though a re-parameterization of the model indicates that temperature influences the zero-inflation parameter. The relationship between the number of sea turtles captured and the variables of interest all varied by region. This suggests that management decisions aimed at reducing sea turtle bycatch may be more effective if they are spatially explicit. © Inter-Research 2008.
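The zero-inflated Poisson structure described can be sketched in a frequentist setting (the original model is Bayesian and hierarchical, fit in WinBUGS); the data and single temperature covariate below are synthetic, and the statsmodels class is used on the assumption that ZeroInflatedPoisson is available in the installed version:

```python
# Zero-inflated Poisson sketch: bycatch counts are Poisson with a
# temperature-dependent mean, mixed with structural zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 500
sst = rng.uniform(15, 30, n)                 # water temperature, deg C
lam = np.exp(-4 + 0.15 * sst)                # Poisson mean per longline set
structural_zero = rng.random(n) < 0.4
y = np.where(structural_zero, 0, rng.poisson(lam))

X = sm.add_constant(sst)
fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False)
print(fit.params)   # inflation logit, count intercept, temperature effect
```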
A model for long-distance dispersal of boll weevils (Coleoptera: Curculionidae)
NASA Astrophysics Data System (ADS)
Westbrook, John K.; Eyster, Ritchie S.; Allen, Charles T.
2011-07-01
The boll weevil, Anthonomus grandis (Boheman), has been a major insect pest of cotton production in the US, accounting for yield losses and control costs on the order of several billion US dollars since the introduction of the pest in 1892. Boll weevil eradication programs have eliminated reproducing populations in nearly 94%, and progressed toward eradication within the remaining 6%, of cotton production areas. However, the ability of weevils to disperse and reinfest eradicated zones threatens to undermine the previous investment toward eradication of this pest. In this study, the HYSPLIT atmospheric dispersion model was used to simulate daily wind-aided dispersal of weevils from the Lower Rio Grande Valley (LRGV) of southern Texas and northeastern Mexico. Simulated weevil dispersal was compared with weekly capture of weevils in pheromone traps along highway trap lines between the LRGV and the South Texas / Winter Garden zone of the Texas Boll Weevil Eradication Program. A logistic regression model was fit to the probability of capturing at least one weevil in individual pheromone traps relative to specific values of simulated weevil dispersal, which resulted in 60.4% concordance, 21.3% discordance, and 18.3% ties in estimating captures and non-captures. During the first full year of active eradication with widespread insecticide applications in 2006, the dispersal model accurately estimated 71.8%, erroneously estimated 12.5%, and tied 15.7% of capture and non-capture events. Model simulations provide a temporal risk assessment over large areas of weevil reinfestation resulting from dispersal by prevailing winds. Eradication program managers can use the model risk assessment information to effectively schedule and target enhanced trapping, crop scouting, and insecticide applications.
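The logistic fit and concordance summary reported above can be sketched as follows; the data are synthetic placeholders, and percent concordant/discordant/tied is computed over all (capture, non-capture) pairs in the manner of SAS PROC LOGISTIC:

```python
# Logistic regression of capture (any weevil in a trap-week) on simulated
# dispersal, plus a pairwise concordance summary of the fitted scores.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
dispersal = rng.exponential(1.0, 400)           # simulated weevil density
p_true = 1 / (1 + np.exp(-(-1.5 + 1.2 * dispersal)))
y = (rng.random(400) < p_true).astype(int)

X = sm.add_constant(dispersal)
fit = sm.Logit(y, X).fit(disp=False)
score = fit.predict(X)

pos, neg = score[y == 1], score[y == 0]
diff = pos[:, None] - neg[None, :]              # all capture/non-capture pairs
conc, disc = (diff > 0).mean(), (diff < 0).mean()
print(f"concordant {conc:.1%}, discordant {disc:.1%}, tied {1-conc-disc:.1%}")
```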
NASA Technical Reports Server (NTRS)
Malcuit, Robert J.; Winters, Ronald R.
1993-01-01
Regardless of one's favorite model for the origin of the Earth-Moon system (fission, coformation, tidal capture, giant impact), the early history of lunar orbital evolution would have produced significant thermal, earth-tide, and ocean-tide effects on the primitive earth. Three of the above lunar origin models (fission, coformation, giant impact) feature a circular orbit which undergoes a progressive increase in orbital radius from the time of origin to the present time. In contrast, a tidal capture model places the moon in an elliptical orbit undergoing progressive circularization from the time of capture (for model purposes, about 3.9 billion years ago) for at least a few 10^8 years following the capture event. Once the orbit is circularized, the subsequent tidal history for a tidal capture scenario is similar to that for other models of lunar origin and features a progressive increase in orbital radius to the current state of the lunar orbit. This elliptical orbit phase, if it occurred, should have left a distinctive signature in the terrestrial and lunar rock records. Depositional events would be associated with terrestrial shorelines characterized by abnormally high, but progressively decreasing, ocean tidal amplitudes and ranges accompanying such an orbital evolution. Several rock units in the age range 3.6-2.5 billion years before present are reported to have a major tidal component. Examples are the Warrawoona, Fortescue, and Hamersley Groups of Western Australia and the Pongola and Witwatersrand Supergroups of South Africa. Detailed study of the features of these tidal sequences may be helpful in deciphering the style of lunar orbital evolution during the Archean Eon.
Developments in capture-γ libraries for nonproliferation applications
NASA Astrophysics Data System (ADS)
Hurst, A. M.; Firestone, R. B.; Sleaford, B. W.; Bleuel, D. L.; Basunia, M. S.; Bečvář, F.; Belgya, T.; Bernstein, L. A.; Carroll, J. J.; Detwiler, B.; Escher, J. E.; Genreith, C.; Goldblum, B. L.; Krtička, M.; Lerch, A. G.; Matters, D. A.; McClory, J. W.; McHale, S. R.; Révay, Zs.; Szentmiklosi, L.; Turkoglu, D.; Ureche, A.; Vujic, J.
2017-09-01
The neutron-capture reaction is fundamental for identifying and analyzing the γ-ray spectrum from an unknown assembly because it provides unambiguous information on the neutron-absorbing isotopes. Nondestructive-assay applications may exploit this phenomenon passively, for example, in the presence of spontaneous-fission neutrons, or actively, where an external neutron source is used as a probe. There are known gaps in the neutron-capture γ-ray data of the Evaluated Nuclear Data File libraries that limit transport-modeling applications. In this work, we describe how new thermal neutron-capture data are being used to improve information in the neutron-data libraries for isotopes relevant to nonproliferation applications. We address this problem by providing new experimentally deduced partial and total neutron-capture reaction cross sections and then evaluating these data by comparison with statistical-model calculations.
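One common consistency check behind such evaluations is that the partial γ-ray production cross sections for transitions feeding the ground state should sum to the total thermal capture cross section. The snippet below illustrates only that bookkeeping step, with invented numbers rather than measured data.

```python
# Illustration only (invented values): summing partial capture gamma-ray
# cross sections for ground-state-feeding transitions to deduce sigma_0,
# the total thermal neutron-capture cross section, for comparison with an
# evaluated-library value.
partials_to_gs = {          # hypothetical E_gamma (keV) -> sigma_gamma (barns)
    6395.0: 0.92,
    4210.0: 0.31,
    1120.0: 0.45,
}
sigma_0 = sum(partials_to_gs.values())
print(f"deduced sigma_0 = {sigma_0:.2f} b")
```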
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
ABBREVIATIONS: AICc, Akaike's Information Criterion with small-sample-size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater…; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic… Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities, and low capture biases. For NGS-CR, sample…
NASA Technical Reports Server (NTRS)
2004-01-01
This image of a model capture magnet was taken after an experiment in a Mars simulation chamber at the University of Aarhus, Denmark. It has some dust on it, but not as much as that on the Mars Exploration Rover Spirit's capture magnet. The capture and filter magnets on both Mars Exploration Rovers were delivered by the magnetic properties team at the Center for Planetary Science, Copenhagen, Denmark.
Gabbert, Silke; Hilber, Isabel
2016-12-01
A core aim of the European chemicals legislation REACH is to ensure that the risks caused by substances of very high concern (SVHC) are adequately controlled. Authorisation - i.e. the formal approval of certain uses of SVHC for a limited time - is a key regulatory instrument for achieving this goal. For SVHC which are, in addition to their toxicity, (very) persistent and/or (very) bioaccumulative (PBT/vPvB chemicals), decision-making on the authorisation is conditional on a socio-economic analysis (SEA). In a SEA, companies must demonstrate that the gains from keeping a chemical in use outweigh the expected damage costs for society. The current setup of the REACH authorisation process, including the existing guidance on performing a SEA, ignores that PBT/vPvB chemicals are stock pollutants. This paper explores the implications of incorporating stock pollution effects of these chemicals into a SEA for authorisation decision-making. We develop a cost-benefit approach that includes the stock dynamics of PBT/vPvB chemicals, which allows the decision rules for granting or refusing an authorisation to be identified. Furthermore, we generalize the model to an entire set of damage functions. We show that ignoring stock pollution effects in a SEA may lead to erroneous decisions on the use of PBT/vPvB chemicals because long-term impacts are not adequately captured. Using a historic case of DDT soil contamination as an illustrative example, we discuss information requirements and challenges for authorisation decisions on the use of PBT/vPvB chemicals under REACH. Copyright © 2016 Elsevier Ltd. All rights reserved.
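A minimal sketch of the kind of cost-benefit comparison at issue (not the authors' model; every parameter below is hypothetical): emissions accumulate in a slowly decaying stock, and damages depend on the stock rather than the annual flow, so a flow-only SEA can flip the sign of the net benefit.

```python
# Toy SEA with stock dynamics S_{t+1} = (1 - decay) * S_t + e_t and linear
# stock damages; decay=1 recovers the flow-pollutant (no persistence) view.
def npv_of_use(benefit, emission, use_years, decay, damage_cost,
               horizon=200, discount=0.03):
    stock, npv = 0.0, 0.0
    for t in range(horizon):
        e = emission if t < use_years else 0.0
        stock = (1.0 - decay) * stock + e              # pollutant stock
        cash = (benefit if t < use_years else 0.0) - damage_cost * stock
        npv += cash / (1.0 + discount) ** t
    return npv

print("flow-pollutant view:", round(npv_of_use(10.0, 1.0, 10, 1.00, 2.0), 1))
print("stock view (vP)    :", round(npv_of_use(10.0, 1.0, 10, 0.02, 2.0), 1))
```

The second number is far lower because damages keep accruing for decades after use ends, which is exactly the long-term impact the paper argues a conventional SEA fails to capture.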
Two-proton capture on the 68Se nucleus with a new self-consistent cluster model
NASA Astrophysics Data System (ADS)
Hove, D.; Garrido, E.; Jensen, A. S.; Sarriguren, P.; Fynbo, H. O. U.; Fedorov, D. V.; Zinner, N. T.
2018-07-01
We investigate the two-proton capture reaction on the prominent rapid proton capture waiting-point nucleus 68Se, which produces the borromean nucleus 70Kr (68Se + p + p). We apply a recently formulated general model in which the core nucleus, 68Se, is treated in the mean-field approximation while the three-body problem of the two valence protons and the core is solved exactly. We compare results obtained with two popular Skyrme interactions, SLy4 and SkM*. We calculate E2 electromagnetic two-proton dissociation and capture cross sections, and derive the temperature-dependent capture rates. We vary the unknown 2+ resonance energy without changing any of the structures computed self-consistently for both core and valence particles. We find rates that increase quickly with temperature below 2-4 GK, after which they vary by about a factor of two, independent of the 2+ resonance energy. The capture mechanism is sequential through the f5/2 proton-core resonance, but the continuum background contributes significantly.
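The paper's rates come from a full three-body calculation, but the qualitative temperature dependence can be illustrated with the textbook narrow-resonance rate formula, N_A<σv> ∝ (μT)^{-3/2} ωγ exp(-11.605 E_r/T9); the resonance parameters below are invented for illustration.

```python
# Hedged sketch: narrow-resonance thermonuclear capture rate vs. temperature.
# Illustrative parameters only, not the paper's self-consistent results.
import math

def narrow_resonance_rate(T9, E_r_MeV, omega_gamma_MeV, mu_amu):
    """N_A<sigma v> (cm^3 s^-1 mol^-1), standard narrow-resonance formula."""
    return (1.5399e11 / (mu_amu * T9) ** 1.5) * omega_gamma_MeV \
        * math.exp(-11.605 * E_r_MeV / T9)

mu = 68.0 / 69.0                        # reduced mass (amu) for p + 68Se
for T9 in (1.0, 2.0, 3.0, 4.0):         # temperature in GK
    rate = narrow_resonance_rate(T9, E_r_MeV=0.5, omega_gamma_MeV=1e-9, mu_amu=mu)
    print(f"T = {T9:.0f} GK: rate ~ {rate:.3e}")
```

The exponential factor dominates at low temperature, reproducing the steep rise below a few GK noted above.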
Just health care (II): Is equality too much?
Fleck, L M
1989-12-01
In a previous essay I criticized Engelhardt's libertarian conception of justice, which grounds the view that society's obligation to assure access to adequate health care for all is a matter of beneficence. Beneficence fails to capture the moral stringency associated with many claims for access to health care. In the present paper I argue that these claims are really matters of justice proper, where justice is conceived along moderate egalitarian lines, such as those suggested by Rawls and Daniels, rather than strong egalitarian lines. Further, given the empirical complexity associated with the distribution of contemporary health care, I argue that what we really need to address the relevant policy issues adequately is a theory of health care justice, as opposed to an all-purpose conception of justice. Daniels has made an important start toward that goal, though there are some large policy areas which I discuss that his account of health care justice does not really speak to. Finally, practical matters of health care justice really need to be addressed in a 'non-ideal' mode, a framework in which philosophers have done little.
Should we reconsider the routine use of placebo controls in clinical research?
Avins, Andrew L; Cherkin, Daniel C; Sherman, Karen J; Goldberg, Harley; Pressman, Alice
2012-04-27
Modern clinical-research practice favors placebo controls over usual-care controls whenever a credible placebo exists. An unrecognized consequence of this preference is that clinicians are more limited in their ability to provide the benefits of the non-specific healing effects of placebos in clinical practice. We examined the issues in choosing between placebo and usual-care controls. We considered why placebo controls place constraints on clinicians and the trade-offs involved in the choice of control groups. We find that, for certain studies, investigators should consider usual-care controls, even if an adequate placebo is available. Employing usual-care controls would be of greatest value for pragmatic trials evaluating treatments to improve clinical care and for which threats to internal validity can be adequately managed without a placebo-control condition. Intentionally choosing usual-care controls, even when a satisfactory placebo exists, would allow clinicians to capture the value of non-specific therapeutic benefits that are common to all interventions. The result could be more effective, patient-centered care that makes the best use of both specific and non-specific benefits of medical interventions.
Research on Capturing of Customer Requirements Based on Innovation Theory
NASA Astrophysics Data System (ADS)
Ding, Junwu; Yang, Dongtao; Bao, Zhenqiang
To capture customer requirements information exactly and effectively, a new customer-requirements capturing and modeling method is proposed. Based on an analysis of the functional requirement models of previous products and the application of the technology-system evolution laws of the Theory of Inventive Problem Solving (TRIZ), customer requirements can be evolved from existing product designs by modifying the functional requirement units and confirming the direction of evolutionary design. Finally, a case study is provided to illustrate the feasibility of the proposed approach.
Delineating baseflow contribution areas for streams - A model and methods comparison
NASA Astrophysics Data System (ADS)
Chow, Reynold; Frind, Michael E.; Frind, Emil O.; Jones, Jon P.; Sousa, Marcelo R.; Rudolph, David L.; Molson, John W.; Nowak, Wolfgang
2016-12-01
This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to model characteristics, discretization schemes, delineation methods, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternative and more reliable approach that provides probability intervals for the baseflow contribution areas, taking uncertainty into account. The two approaches can be used together to enhance the confidence in the final outcome.
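As a concrete (and highly idealized) illustration of the reverse particle tracking approach discussed above, and not the setup of any of the four codes, the sketch below integrates particle paths backward in time from points on a stream through a toy steady velocity field; the envelope of endpoints outlines the baseflow contribution area.

```python
# Minimal reverse-particle-tracking sketch: a negative time step integrates
# trajectories backward (upgradient) from the stream. Toy velocity field.
import numpy as np

def velocity(p):
    """Steady 2-D velocity (m/d): uniform flow toward a stream at x = 0."""
    x, y = p
    return np.array([-0.5, 0.05 * np.sin(0.01 * x)])

stream_points = [np.array([0.0, y]) for y in np.linspace(-200.0, 200.0, 9)]
dt, n_steps = -1.0, 2000          # dt < 0 => backward tracking, 2000 days

endpoints = []
for p in stream_points:
    for _ in range(n_steps):
        p = p + dt * velocity(p)  # explicit Euler; higher-order schemes
                                  # (e.g., RK4) reduce the tracking error
    endpoints.append(p)

xs = [pt[0] for pt in endpoints]
print(f"capture zone reaches ~{max(xs):.0f} m upgradient of the stream")
```

The sensitivity to the integration scheme, visible even in this toy setting, hints at why the study found stream capture zones highly sensitive to the particle tracking algorithm.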
Review of the WECC EDT phase 2 EIM benefits analysis and results report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veselka, T.D.; Poch, L.A.; Botterud, A.
A region-wide Energy Imbalance Market (EIM) was recently proposed by the Western Electricity Coordinating Council (WECC). In order for the Western Area Power Administration (Western) to make more informed decisions regarding its involvement in the EIM, Western asked Argonne National Laboratory (Argonne) to review the EIM benefits study (the October 2011 revision) performed by Energy and Environmental Economics, Inc. (E3). Key components of the E3 analysis made use of results from a study conducted by the National Renewable Energy Laboratory (NREL); therefore, we also reviewed the NREL work. This report examines E3 and NREL methods and models used in the EIM study. Estimating EIM benefits is very challenging because of the complex nature of the Western Interconnection (WI), the variability and uncertainty of renewable energy resources, and the complex decisions and potentially strategic bidding of market participants. Furthermore, methodologies used for some of the more challenging aspects of the EIM have not yet matured. This review is complimentary toward several components of the EIM study. Analysts and modelers clearly took great care when conducting detailed simulations of the WI using well-established industry tools under stringent time and budget constraints. However, it is our opinion that the following aspects of the study and the interpretation of model results could be improved upon in future analyses. The hurdle rate methodology used to estimate current market inefficiencies does not directly model the underlying causes of sub-optimal dispatch and power flows. It assumes that differences between historical flows and modeled flows can be attributed solely to market inefficiencies. However, flow differences between model results and historical data can be attributed to numerous simplifying assumptions used in the model and in the input data. We suggest that alternative approaches be explored in order to better estimate the benefits of introducing market structures like the EIM. In addition to more efficient energy transactions in the WI, the EIM would reduce the amount of flexibility reserves needed to accommodate forecast errors associated with variable production from wind and solar energy resources. The modeling approach takes full advantage of variable resource diversity over the entire market footprint, but the projected reduction in flexibility reserves may be overly optimistic. While some reduction would undoubtedly occur, the EIM is only an energy market and would therefore not realize the same reduction in reserves as an ancillary services market. In our opinion, the methodology does not adequately capture the impact of transmission constraints on the deployment of flexibility reserves. Estimates of flexibility reserves assume that forecast errors follow a normal distribution. Improved estimates could be obtained by using other probability distributions to estimate up and down reserves, capturing the underlying uncertainty of these resources under specific operating conditions. Also, the use of a persistence forecast method for solar is questionable, because solar insolation follows a deterministic pattern dictated by the sun's path through the sky. We suggest a more rigorous method for forecasting solar insolation using the sun's relatively predictable daily pattern at specific locations. The EIM study considered only one scenario for hydropower resources.
While this scenario is within the normal range over the WI footprint, it represents a severe drought condition in the Colorado River Basin from which Western schedules power. Given hydropower's prominent role in the WI, we recommend simulating a range of hydropower conditions, since the relationship between water availability and WI dispatch costs is nonlinear. Also, the representation of specific operational constraints faced by hydropower operators in the WI needs improvement. The model used in the study cannot fully capture all of the EIM impacts and complexities of power system operations. In particular, a primary benefit of the EIM is a shorter dispatch interval, namely 5 minutes; however, the model simulates the dispatch hourly and therefore cannot adequately measure the benefits of a more frequent dispatch. A tool with a finer time resolution would significantly improve simulation accuracy. When the study was conducted, the rules for the EIM were not clearly defined, and it was appropriate to estimate the societal benefits of the EIM assuming a perfect market without a detailed specification of the market design. However, incorporating a more complete description of market rules will allow for better estimates of EIM benefits. Furthermore, performing analyses using specific market rules can identify potential design flaws that may be difficult and expensive to correct after the market is established. Estimated cost savings from a more efficient dispatch are less than one percent of the total cost of electricity production.
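The reviewers' point about the normality assumption can be made concrete with a small numerical example (synthetic forecast errors, invented numbers): with fat-tailed errors, a Gaussian reserve rule undersizes the up-reserve relative to the empirical distribution.

```python
# Toy reserve sizing: normal assumption vs. empirical percentile of
# synthetic, fat-tailed wind/solar forecast errors (MW).
import numpy as np

rng = np.random.default_rng(1)
errors = np.concatenate([rng.normal(0, 50, 9000),    # typical conditions
                         rng.normal(0, 200, 1000)])  # occasional large misses

z = 2.748                                  # one-sided 99.7% normal quantile
normal_up = errors.mean() + z * errors.std()
empirical_up = np.percentile(errors, 99.7)

print(f"up-reserve, normal assumption : {normal_up:6.1f} MW")
print(f"up-reserve, empirical 99.7th  : {empirical_up:6.1f} MW")
```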
Murray, Kim; Johnston, Kate; Cunnane, Helen; Kerr, Charlotte; Spain, Debbie; Gillan, Nicola; Hammond, Neil; Murphy, Declan; Happé, Francesca
2017-06-01
Real-life social processing abilities of adults with autism spectrum disorders (ASD) can be hard to capture in lab-based experimental tasks. A novel measure of social cognition, the "Strange Stories Film task" (SSFt), was designed to overcome limitations of available measures in the field. Brief films were made based on the scenarios from the Strange Stories task (Happé) and designed to capture the subtle social-cognitive difficulties observed in ASD adults. Twenty neurotypical adults were recruited to pilot the new measure. A final test set was produced and administered to a group of 20 adults with ASD and 20 matched controls, alongside established social cognition tasks and questionnaire measures of empathy, alexithymia and ASD traits. The SSFt was more effective than existing measures at differentiating the ASD group from the control group. In the ASD group, the SSFt was associated with the Strange Stories task. The SSFt is a potentially useful tool to identify social cognitive dis/abilities in ASD, with preliminary evidence of adequate convergent validity. Future research directions are discussed. Autism Res 2017, 10: 1120-1132. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Representing Carbon Capture and Storage in MARKAL EPAUS9r16a
Energy system models are used to evaluate the energy and environmental implications of alternative pathways for producing and using energy. Many such models include representations of the costs and capacities of carbon capture and sequestration (CCS). In this presentation, Dan Lo...
Origin of the main r-process elements
NASA Astrophysics Data System (ADS)
Otsuki, K.; Truran, J.; Wiescher, M.; Gorres, J.; Mathews, G.; Frekers, D.; Mengoni, A.; Bartlett, A.; Tostevin, J.
2006-07-01
The r-process is thought to be a primary process that assembles heavy nuclei from a photo-dissociated nucleon gas. Hence, the reaction flow through light elements can be important as a constraint on the conditions for the r-process. We have studied the impact of di-neutron capture and the neutron-capture of light (Z<10) elements on r-process nucleosynthesis in three different environments: neutrino-driven winds in Type II supernovae; the prompt explosion of low mass supernovae; and neutron star mergers. Although the effect of di-neutron capture is not significant for the neutrino-driven wind model or low-mass supernovae, it becomes significant in the neutron-star merger model. The neutron-capture of light elements, which has been studied extensively for neutrino-driven wind models, also impacts the other two models. We show that it may be possible to identify the astrophysical site for the main r-process if the nuclear physics uncertainties in current r-process calculations could be reduced.
ERIC Educational Resources Information Center
Scheiter, Katharina; Schubert, Carina; Schüler, Anne
2018-01-01
Background: When learning with text and pictures, learners often fail to adequately process the materials, which can be explained as a failure to self-regulate one's learning by choosing adequate cognitive learning processes. Eye movement modelling examples (EMME) showing how to process multimedia instruction have improved elementary school…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farías, R. O.; Trivillin, V. A.; Portu, A. M.
Purpose: Many types of lung tumors have a very poor prognosis due to their spread in the whole organ volume. The fact that boron neutron capture therapy (BNCT) would allow for selective targeting of all the nodules regardless of their position, prompted a preclinical feasibility study of ex situ BNCT at the thermal neutron facility of RA-3 reactor in the province of Buenos Aires, Argentina. (L)-4p-dihydroxy-borylphenylalanine fructose complex (BPA-F) biodistribution studies in an adult sheep model and computational dosimetry for a human explanted lung were performed to evaluate the feasibility and the therapeutic potential of ex situ BNCT. Methods: Two kinds of boron biodistribution studies were carried out in the healthy sheep: a set of pharmacokinetic studies without lung excision, and a set that consisted of evaluation of boron concentration in the explanted and perfused lung. In order to assess the feasibility of the clinical application of ex situ BNCT at RA-3, a case of multiple lung metastases was analyzed. A detailed computational representation of the geometry of the lung was built based on a real collapsed human lung. Dosimetric calculations and dose limiting considerations were based on the experimental results from the adult sheep, and on the most suitable information published in the literature. In addition, a workable treatment plan was considered to assess the clinical application in a realistic scenario. Results: Concentration-time profiles for the normal sheep showed that the boron kinetics in blood, lung, and skin would adequately represent the boron behavior and absolute uptake expected in human tissues. Results strongly suggest that the distribution of the boron compound is spatially homogeneous in the lung. A constant lung-to-blood ratio of 1.3 ± 0.1 was observed from 80 min after the end of BPA-F infusion. The fact that this ratio remains constant during time would allow the blood boron concentration to be used as a surrogate and indirect quantification of the estimated value in the explanted healthy lung. The proposed preclinical animal model allowed for the study of the explanted lung. As expected, the boron concentration values fell as a result of the application of the preservation protocol required to preserve the lung function. The distribution of the boron concentration retention factor was obtained for healthy lung, with a mean value of 0.46 ± 0.14 consistent with that reported for metastatic colon carcinoma model in rat perfused lung. Considering the human lung model and suitable tumor control probability for lung cancer, a promising average fraction of controlled lesions higher than 85% was obtained even for a low tumor-to-normal boron concentration ratio of 2. Conclusions: This work reports for the first time data supporting the validity of the ovine model as an adequate human surrogate in terms of boron kinetics and uptake in clinically relevant tissues. Collectively, the results and analysis presented would strongly suggest that ex situ whole lung BNCT irradiation is a feasible and highly promising technique that could greatly contribute to the treatment of metastatic lung disease in those patients without extrapulmonary spread, increasing not only the expected overall survival but also the resulting quality of life.
Farías, R O; Garabalino, M A; Ferraris, S; Santa María, J; Rovati, O; Lange, F; Trivillin, V A; Monti Hughes, A; Pozzi, E C C; Thorp, S I; Curotto, P; Miller, M E; Santa Cruz, G A; Bortolussi, S; Altieri, S; Portu, A M; Saint Martin, G; Schwint, A E; González, S J
2015-07-01
Many types of lung tumors have a very poor prognosis due to their spread in the whole organ volume. The fact that boron neutron capture therapy (BNCT) would allow for selective targeting of all the nodules regardless of their position, prompted a preclinical feasibility study of ex situ BNCT at the thermal neutron facility of RA-3 reactor in the province of Buenos Aires, Argentina. (l)-4p-dihydroxy-borylphenylalanine fructose complex (BPA-F) biodistribution studies in an adult sheep model and computational dosimetry for a human explanted lung were performed to evaluate the feasibility and the therapeutic potential of ex situ BNCT. Two kinds of boron biodistribution studies were carried out in the healthy sheep: a set of pharmacokinetic studies without lung excision, and a set that consisted of evaluation of boron concentration in the explanted and perfused lung. In order to assess the feasibility of the clinical application of ex situ BNCT at RA-3, a case of multiple lung metastases was analyzed. A detailed computational representation of the geometry of the lung was built based on a real collapsed human lung. Dosimetric calculations and dose limiting considerations were based on the experimental results from the adult sheep, and on the most suitable information published in the literature. In addition, a workable treatment plan was considered to assess the clinical application in a realistic scenario. Concentration-time profiles for the normal sheep showed that the boron kinetics in blood, lung, and skin would adequately represent the boron behavior and absolute uptake expected in human tissues. Results strongly suggest that the distribution of the boron compound is spatially homogeneous in the lung. A constant lung-to-blood ratio of 1.3 ± 0.1 was observed from 80 min after the end of BPA-F infusion. The fact that this ratio remains constant during time would allow the blood boron concentration to be used as a surrogate and indirect quantification of the estimated value in the explanted healthy lung. The proposed preclinical animal model allowed for the study of the explanted lung. As expected, the boron concentration values fell as a result of the application of the preservation protocol required to preserve the lung function. The distribution of the boron concentration retention factor was obtained for healthy lung, with a mean value of 0.46 ± 0.14 consistent with that reported for metastatic colon carcinoma model in rat perfused lung. Considering the human lung model and suitable tumor control probability for lung cancer, a promising average fraction of controlled lesions higher than 85% was obtained even for a low tumor-to-normal boron concentration ratio of 2. This work reports for the first time data supporting the validity of the ovine model as an adequate human surrogate in terms of boron kinetics and uptake in clinically relevant tissues. Collectively, the results and analysis presented would strongly suggest that ex situ whole lung BNCT irradiation is a feasible and highly promising technique that could greatly contribute to the treatment of metastatic lung disease in those patients without extrapulmonary spread, increasing not only the expected overall survival but also the resulting quality of life.
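Using the two ratios reported above, the surrogate calculation is simple enough to write down; the blood concentration in this sketch is a hypothetical input, not a measured value.

```python
# Sketch: estimate boron in the explanted, perfused lung from blood boron
# via the reported constant lung-to-blood ratio (1.3) and the mean
# preservation-protocol retention factor (0.46).
def explanted_lung_boron(blood_ppm, lung_to_blood=1.3, retention=0.46):
    """Estimated boron concentration (ppm) in the explanted healthy lung."""
    return blood_ppm * lung_to_blood * retention

blood_boron_ppm = 20.0    # hypothetical blood boron at end of BPA-F infusion
print(f"estimated explanted-lung boron ~ {explanted_lung_boron(blood_boron_ppm):.1f} ppm")
```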
Software Review: A program for testing capture-recapture data for closure
Stanley, Thomas R.; Richards, Jon D.
2005-01-01
Capture-recapture methods are widely used to estimate population parameters of free-ranging animals. Closed-population capture-recapture models, which assume there are no additions to or losses from the population over the period of study (i.e., the closure assumption), are preferred for population estimation over the open-population models, which do not assume closure, because heterogeneity in detection probabilities can be accounted for and this improves estimates. In this paper we introduce CloseTest, a new Microsoft® Windows-based program that computes the Otis et al. (1978) and Stanley and Burnham (1999) closure tests for capture-recapture data sets. Information on CloseTest features and where to obtain the program are provided.
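CloseTest implements the Otis et al. and Stanley-Burnham closure tests themselves; as a much simpler illustration of why the closure assumption matters for estimation, here is the classic two-occasion Lincoln-Petersen abundance estimator (Chapman's bias-corrected form), which is valid only for a closed population.

```python
# Not CloseTest: a minimal closed-population estimator. Births, deaths,
# immigration, or emigration between occasions bias this estimate, which
# is why testing closure before applying closed-population models matters.
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate.

    n1: animals captured and marked on occasion 1
    n2: animals captured on occasion 2
    m2: marked animals among the occasion-2 captures
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

print(f"N-hat = {chapman_estimate(n1=120, n2=100, m2=30):.0f}")  # ~393
```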
Heckman, James J; Kautz, Tim
2012-08-01
This paper summarizes recent evidence on what achievement tests measure; how achievement tests relate to other measures of "cognitive ability" like IQ and grades; the important skills that achievement tests miss or mismeasure; and how much these skills matter in life. Achievement tests miss, or perhaps more accurately, do not adequately capture, soft skills: personality traits, goals, motivations, and preferences that are valued in the labor market, in school, and in many other domains. The larger message of this paper is that soft skills predict success in life, that they causally produce that success, and that programs that enhance soft skills have an important place in an effective portfolio of public policies.
Girman, Cynthia J; Faries, Douglas; Ryan, Patrick; Rotelli, Matt; Belger, Mark; Binkowitz, Bruce; O'Neill, Robert
2014-05-01
The use of healthcare databases for comparative effectiveness research (CER) is increasing exponentially despite its challenges. Researchers must understand their data source and whether outcomes, exposures and confounding factors are captured sufficiently to address the research question. They must also assess whether bias and confounding can be adequately minimized. Many study design characteristics may impact the results; however, minimal, if any, sensitivity analyses are typically conducted, and those that are performed are post hoc. We propose pre-study steps for CER feasibility assessment and for identifying the sensitivity analyses that might be most important to pre-specify, to help ensure that CER produces valid, interpretable results.
Simulation of mercury capture by sorbent injection using a simplified model.
Zhao, Bingtao; Zhang, Zhongxiao; Jin, Jing; Pan, Wei-Ping
2009-10-30
Mercury pollution from fossil fuel combustion and solid waste incineration is becoming a worldwide environmental concern. As an effective control technology, powdered sorbent injection (PSI) has been successfully used for mercury capture from flue gas, with the advantages of low cost and easy operation. In order to predict the mercury capture efficiency for PSI more conveniently, a simplified model based on the theory of mass transfer, isothermal adsorption and mass balance is developed in this paper. Comparisons between the theoretical results of this model and the experimental results of Meserole et al. [F.B. Meserole, R. Chang, T.R. Carrey, J. Machac, C.F.J. Richardson, Modeling mercury removal by sorbent injection, J. Air Waste Manage. Assoc. 49 (1999) 694-704] demonstrate that the simplified model provides good predictive accuracy. Moreover, the effects of key parameters, including the mass transfer coefficient, sorbent concentration, sorbent physical properties and sorbent adsorption capacity, on mercury adsorption efficiency are compared and evaluated. Finally, a sensitivity analysis of the impact factors indicates that the injected sorbent concentration plays the most important role in determining mercury capture efficiency.
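The structure of such a simplified model can be sketched in a few lines. Treating adsorption as fast relative to gas-film mass transfer gives first-order decay of gas-phase mercury along the duct; this is a hedged toy version of that idea, not the paper's exact equations, and all parameter values are illustrative.

```python
# Toy mass-transfer-limited in-flight capture model: eta = 1 - exp(-k*A*C*t).
import math

def hg_removal_efficiency(k_m, area, sorbent_conc, residence_time):
    """k_m: gas-film mass-transfer coefficient (m/s)
    area: external sorbent surface per unit mass (m^2/g)
    sorbent_conc: injected sorbent concentration in flue gas (g/m^3)
    residence_time: in-duct residence time (s)"""
    return 1.0 - math.exp(-k_m * area * sorbent_conc * residence_time)

eff = hg_removal_efficiency(k_m=0.05, area=0.5, sorbent_conc=2.0,
                            residence_time=3.0)
print(f"predicted Hg capture efficiency ~ {100 * eff:.0f}%")
```

The exponent is linear in sorbent concentration, consistent with the sensitivity result above that injected sorbent concentration dominates capture efficiency.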
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Li, Tingwen
In the virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered models to capture the effect of unresolved details in the coarser mesh, allowing simulations with reasonable accuracy and manageable computational effort. Two previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical) on the adsorber's hydrodynamics and CO2 capture performance are then examined. The simulation results are subsequently compared and contrasted with those predicted by a one-dimensional three-region process model.
Hydrodynamic Capture of Particles by Micro-swimmers under Hele-Shaw Flows
NASA Astrophysics Data System (ADS)
Mishler, Grant; Tsang, Alan Cheng Hou; Pak, On Shun
2017-11-01
We explore a hydrodynamic capture mechanism of a driven particle by a micro-swimmer in confined microfluidic environments using an idealized model. The capture is mediated by the hydrodynamic interactions between the micro-swimmer, the driven particle, and the background flow. The mechanism relies on the existence of attractive stable equilibrium configurations between the driven particle and the micro-swimmer, which occur when the background flow exceeds a certain critical threshold. The dynamics and stability of capture and non-capture events will be discussed. This study may find applications in the capture and delivery of therapeutic payloads by micro-swimmers, as well as in particle self-assembly under confinement.
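The threshold behavior described above can be caricatured with a saddle-node normal form: near the critical background flow U_c, the relative swimmer-particle dynamics admit a stable equilibrium separation only when U exceeds U_c. This is a generic-bifurcation sketch, not the authors' model.

```python
# Cartoon of capture onset: dx/dt = (U - U_c) - x**2. A stable equilibrium
# (capture configuration) exists only for U > U_c.
import math

def equilibria(U, U_c=1.0):
    mu = U - U_c
    if mu < 0:
        return None                      # no equilibrium: particle escapes
    x_eq = math.sqrt(mu)
    # f'(x) = -2x: negative at +x_eq (stable), positive at -x_eq (unstable)
    return {"stable": x_eq, "unstable": -x_eq}

for U in (0.5, 1.0, 1.5, 2.0):
    print(f"U = {U:.1f}:", equilibria(U) or "no capture")
```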