Science.gov

Sample records for accounting sac-sma model

  1. SAC-SMA a priori parameter differences and their impact on distributed hydrologic model simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Ziya; Koren, Victor; Reed, Seann; Smith, Michael; Zhang, Yu; Moreda, Fekadu; Cosgrove, Brian

    2012-02-01

    Deriving a priori gridded parameters is an important step in the development and deployment of an operational distributed hydrologic model. Accurate a priori parameters can reduce the manual calibration effort and/or speed up the automatic calibration process, reduce calibration uncertainty, and provide valuable information at ungauged locations. Underpinned by reasonable parameter data sets, distributed hydrologic modeling can help improve water resource and flood and flash flood forecasting capabilities. Initial efforts at the National Weather Service Office of Hydrologic Development (NWS OHD) to derive a priori gridded Sacramento Soil Moisture Accounting (SAC-SMA) model parameters for the conterminous United States (CONUS) were based on a relatively coarse-resolution soils property database, the State Soil Geographic Database (STATSGO) (Soil Survey Staff, 2011), and on the assumption of uniform land use and land cover. In an effort to improve the parameters, subsequent work was performed to fully incorporate spatially variable land cover information into the parameter derivation process. Following that, finer-scale soils data, the county-level Soil Survey Geographic Database (SSURGO) (Soil Survey Staff, 2011a,b), together with the use of variable land cover data, were used to derive a third set of CONUS a priori gridded parameters. It is anticipated that the second and third parameter sets, which incorporate more physical data, will be more realistic and consistent. Here, we evaluate whether this is actually the case by intercomparing these three sets of a priori parameters along with their associated hydrologic simulations, which were generated by applying the National Weather Service Hydrology Laboratory's Research Distributed Hydrologic Model (HL-RDHM) (Koren et al., 2004) in a continuous fashion with an hourly time step. This model adopts a well-tested conceptual water balance model, SAC-SMA, applied on a regular spatial grid, and links to physically

  2. Evaluation of the Sacramento Soil Moisture Accounting Model for Flood Forecasting in a Hawaiian Watershed

    NASA Astrophysics Data System (ADS)

    Awal, R.; Fares, A.; Michaud, J.; Chu, P.; Fares, S.; Rosener, M.; Kevin, K.

    2012-12-01

    The focus of this study was to assess the performance of the U.S. National Weather Service Sacramento Soil Moisture Accounting model (SAC-SMA) on the flash-flood-prone Hanalei watershed, Kauai, Hawaii, using site-specific hydrologic data. The model was calibrated and validated using six years of observed field hydrological data, e.g., streamflow, and spatially distributed rainfall. The ordinary kriging method was used to calculate mean watershed-wide hourly precipitation for the six years using data from twenty rain gauges on the north shore of Kauai, including five rain gauges within the watershed. Ranges of the values of a priori SAC-SMA parameters were also estimated based on the site-specific soil hydrological properties; these calculated values were well within those reported in the literature for different watersheds. SAC-SMA was run in one-year runs using the calibration and validation data. The performance of the model in predicting streamflow using average watershed-wide values of the a priori parameters was very poor: SAC-SMA overpredicted streamflow throughout the year compared to observed streamflow data. The upper limit of the lower-zone tension water capacity (LZTWM) parameter was higher than those reported in the literature; this might be due to the wetter conditions, i.e., higher precipitation, in the Hanalei watershed (>6400 mm) than in the other previously studied watersheds (<1600 mm). When the upper bound of LZTWM varied between 2500 and 3000 during calibration, SAC-SMA's performance improved to satisfactory, and even to good, for almost all years based on PBIAS and Nash-Sutcliffe coefficients of efficiency. When the parameters optimized for one year were applied to other years for validation, the parameter set of year 2005 was satisfactory for most of the years when the upper bound of LZTWM = 2500, and the parameter set of year 2004 was satisfactory for most of the years when the upper bound of LZTWM = 3000. The annual precipitation of 2004 was the highest
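
    For reference, the two verification measures used above have compact definitions; a minimal Python sketch (NumPy assumed), using the common conventions NSE = 1 - SSE/SST and PBIAS as the percent volume error (sign conventions for PBIAS vary between authors):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this convention, positive values indicate that the
    model underestimates total volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Synthetic hourly observed vs. simulated streamflow (m^3/s)
obs = np.array([1.2, 3.4, 8.9, 15.2, 9.1, 4.3, 2.1])
sim = np.array([1.0, 3.0, 10.2, 14.1, 8.0, 4.0, 2.5])
print(f"NSE = {nse(obs, sim):.3f}, PBIAS = {pbias(obs, sim):+.1f}%")
```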

  3. A physical interpretation of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Moayeri, MohamadMehdi; Pande, Saket

    2015-04-01

    It is intuitive that the instability of a hydrological system representation, in the sense of how perturbations in input forcings translate into perturbations in the hydrologic response, may depend on its hydrological characteristics. Responses of unstable systems are thus complex to model. We interpret complexity in this context and define complexity as a measure of instability in hydrological system representation, and we provide algorithms to quantify model complexity in this context. We use the Sacramento Soil Moisture Accounting model (SAC-SMA), parameterized for MOPEX basins, and quantify the complexities of the corresponding models. Relationships between hydrologic characteristics of MOPEX basins, such as location, precipitation seasonality index, slope, hydrologic ratios, saturated hydraulic conductivity, and NDVI, and the respective model complexities are then investigated. We hypothesize that the complexities of basin-specific SAC-SMA models correspond to the aforementioned hydrologic characteristics, thereby suggesting that model complexity, in the context presented here, may have a physical interpretation.

  4. MODIS-derived Potential Evapotranspiration Estimates for Operational Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Kim, J.; Hogue, T.

    2005-12-01

    The SACramento Soil Moisture Accounting model (SAC-SMA), used by the National Weather Service, is currently the primary model for hydrologic forecasting across the United States. The treatment of potential evapotranspiration (PET), one of the required inputs, remains rather simplistic: the model traditionally uses a regional pan evaporation estimate due to the difficulty of acquiring more sophisticated measurements. This study explores an alternative methodology using only remote sensing information to capture the monthly mean distribution of PET for the SAC-SMA model. We apply a simple scheme proposed by Jiang and Islam (2005) to estimate the net radiation and estimate PET within the context of the Priestley-Taylor equation using data gathered from the MODIS Terra platform. PET estimates from the MODIS data are compared with those derived from Oklahoma Mesonet ground-based measurements and traditional pan evaporation estimates. Preliminary results will be presented for the Illinois River basin at Watts (OK), identified as part of the National Weather Service's Distributed Model Intercomparison Project (DMIP). The resultant streamflow simulations will illustrate the sensitivity of the SAC-SMA model to potential evaporation inputs from different sources and the possibility of applying a stand-alone PET method for ungauged basins.
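
    The Priestley-Taylor estimate named above has a compact closed form, PET = alpha * Delta/(Delta + gamma) * (Rn - G)/lambda; a minimal Python sketch in daily units (the alpha = 1.26 coefficient and the Tetens-type slope formula are standard textbook values, not taken from the study):

```python
import numpy as np

ALPHA = 1.26    # Priestley-Taylor coefficient (dimensionless)
GAMMA = 0.066   # psychrometric constant (kPa/degC), near sea level
LAMBDA = 2.45   # latent heat of vaporization (MJ/kg)

def sat_vp_slope(t_c):
    """Slope of the saturation vapour pressure curve (kPa/degC) at air
    temperature t_c (degC), via the Tetens-type formula."""
    es = 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def priestley_taylor_pet(rnet_mj, t_c, ground_flux_mj=0.0):
    """Daily PET (mm/day) from net radiation (MJ m-2 day-1) and air
    temperature; dividing the energy by LAMBDA converts it to mm of water."""
    delta = sat_vp_slope(t_c)
    return ALPHA * (delta / (delta + GAMMA)) * (rnet_mj - ground_flux_mj) / LAMBDA

# A clear summer day: Rn = 14 MJ m-2 day-1 at 25 degC
print(f"PET = {priestley_taylor_pet(14.0, 25.0):.2f} mm/day")
```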

  5. Hydrologic Modeling on a 4km Grid over the Conterminous United States (CONUS)

    NASA Astrophysics Data System (ADS)

    Moreda, F.; Koren, V.; Cui, Z.; Reed, S.; Zhang, Z.; Smith, M.

    2005-12-01

    The Hydrology Laboratory (HL) of the NOAA/National Weather Service's Office of Hydrologic Development (OHD) is developing advanced water resources products to meet the expanding demands of the public. Recently, the HL distributed modeling research program embarked on an exciting new development for large-scale, fine-resolution soil moisture modeling. We expect this work to provide important contributions toward meeting the Nation's need for water resources information, such as drought conditions. In the present work, the Sacramento Soil Moisture Accounting (SAC-SMA) model is reformulated to compute physically based estimates of soil moisture using the equations of heat transfer and a soil column representation of SAC water storage and movement. This modified SAC-SMA now runs within the HL distributed model, with the capability to generate estimates of physically based soil moisture content at the 4 km grid scale over CONUS. In addition, we have added the NWS operational temperature-index-based snow model (SNOW-17) to run on a 4 km grid. We expect that these new capabilities will help produce detailed soil moisture fields which will benefit flash flood forecasting as well as water resource management applications. In this paper, we present the first results of the new distributed SAC-SMA and SNOW-17 models over the CONUS area. Distinctive characteristics of our CONUS runs are: (a) distributed SNOW-17 model parameters are estimated from physical factors, (b) NWS gridded monthly potential evaporation products are used, and (c) the precipitation forcings are the Stage IV mosaicked multi-sensor NEXRAD products. To evaluate the results, we compare our CONUS soil moisture estimates to the 12 km products from the North American Land Data Assimilation System (NLDAS) and to the Oklahoma Mesonet soil moisture data for regional verification.

  6. Parameterization of soil moisture-heat transfer processes for conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Koren, V.

    2003-04-01

    Recent developments in land surface modeling have significantly improved the representation of cold season processes in atmospheric models. However, the conceptual representation of the soil profile in commonly used watershed models complicates the implementation of physically based heat/moisture transfer models that require numerical integration over the soil profile. This study focuses on developing and testing a procedure to transform layer-structured heat-moisture states into the reservoir-type states used by watershed models, and vice versa. The Sacramento Soil Moisture Accounting model (SAC-SMA) is used to estimate soil moisture states and runoff components, and a layer-integrated form of the heat transfer equation is used to estimate soil temperature and unfrozen water states. The SAC-SMA storages, represented as totals of tension and free water plus moisture below the wilting point, are recalculated into a number of soil layers using soil texture data. Three or four layers are usually used, with much higher resolution in the upper zone. At each time step, SAC-SMA liquid water storage changes due to rainfall/snowmelt are estimated and transformed into the soil profile moisture states of the heat transfer numerical scheme. The heat transfer component then splits the total water content into frozen and liquid water portions, which are then converted back into SAC-SMA states. A conceptual parameterization is used to account for changes in percolation and runoff components due to frozen ground effects. The parameterization was tested using a number of sites in the northwestern US where soil temperature measurements were available at a few soil layers. Only daily precipitation and air temperature data were used in simulations. Solid and liquid soil moisture contents and soil temperature at five layers were simulated for 3-5 years. Test results suggested that a conceptual representation of soil moisture fluxes combined with a physically-based heat transfer model provided

  7. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.

    2015-04-01

    This paper shows that instability of hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
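
    The instability notion can be illustrated with a toy experiment: perturb the forcing and measure how strongly the simulated response diverges. The sketch below is a heavily simplified stand-in for the paper's resampling-based algorithms; the one-bucket model, noise level, and norm are all illustrative assumptions. Consistent with the abstract, the divergence ratio grows with the recession coefficient in this toy setting.

```python
import numpy as np

def toy_bucket(precip, k=0.1, smax=100.0):
    """One-bucket toy runoff model standing in for SIXPAR/SAC-SMA: storage
    spills above smax and drains with recession coefficient k."""
    s, q = 0.0, []
    for p in precip:
        s += p
        spill = max(0.0, s - smax)
        s = min(s, smax)
        q.append(k * s + spill)
        s *= 1.0 - k
    return np.array(q)

def instability_complexity(model, precip, n_real=50, noise=0.2, seed=0):
    """Complexity proxy: mean output divergence per unit of input divergence
    across perturbed forcing realizations."""
    rng = np.random.default_rng(seed)
    base_q = model(precip)
    ratios = []
    for _ in range(n_real):
        p_pert = precip * (1.0 + noise * rng.standard_normal(precip.size))
        ratios.append(np.linalg.norm(model(p_pert) - base_q)
                      / np.linalg.norm(p_pert - precip))
    return float(np.mean(ratios))

precip = np.random.default_rng(1).gamma(0.8, 5.0, size=365)  # daily rain (mm)
for k in (0.05, 0.3, 0.8):
    c = instability_complexity(lambda p: toy_bucket(p, k=k), precip)
    print(f"recession k = {k:.2f}: complexity proxy = {c:.3f}")
```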

  8. Parameter estimation of hydrologic models using data assimilation

    NASA Astrophysics Data System (ADS)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. It introduces a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation on two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined; the SVM model has three. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space, imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with that of the previously used Bayesian recursive estimation (BaRE) algorithm.
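
    The bound-narrowing idea at the heart of LoBaRE can be conveyed in a few lines; the sketch below is a simplified illustration (uniform sampling, an elite fraction, box bounds), not the authors' algorithm, and the quadratic loss merely stands in for an SVM or SAC-SMA calibration objective:

```python
import numpy as np

def lobare_like_search(loss, lo, hi, n_iter=8, n_samp=200, keep=0.2, seed=0):
    """Sketch of localized recursive estimation: sample uniformly within the
    current bounds, keep the best fraction, and shrink the bounds to the box
    spanned by those samples (a simplified take on the 'parent' bounds)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        x = rng.uniform(lo, hi, size=(n_samp, len(lo)))
        f = np.apply_along_axis(loss, 1, x)
        if f.min() < best_f:
            best_x, best_f = x[np.argmin(f)], f.min()
        elite = x[np.argsort(f)[: int(keep * n_samp)]]
        lo, hi = elite.min(axis=0), elite.max(axis=0)   # narrow the region
    return (lo, hi), best_x, best_f

# Toy 3-parameter objective standing in for an SVM or SAC-SMA calibration
truth = np.array([0.3, 1.5, -0.7])
loss = lambda x: float(np.sum((x - truth) ** 2))
bounds, x_best, f_best = lobare_like_search(loss, [-2, -2, -2], [2, 2, 2])
print("best:", np.round(x_best, 3), " loss:", round(f_best, 6))
```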

  9. Comparison of a Neural Network and a Conceptual Model for Rainfall-Runoff Modelling with Monthly Input

    NASA Astrophysics Data System (ADS)

    Chochlidakis, Chronis; Daliakopoulos, Ioannis; Tsanis, Ioannis

    2014-05-01

    Rainfall-runoff (RR) models contain parameters that can seldom be directly measured or estimated by expert judgment, but are rather inferred by calibration against a historical record of input-output datasets. Here, a comparison is made between a conceptual model and an Artificial Neural Network (ANN) for efficient modeling of complex hydrological processes. Monthly rainfall, streamflow, and evapotranspiration data from 15 catchments in Crete, Greece, are used to compare the proposed methodologies. Genetic Algorithms (GA) are applied for the stochastic calibration of the parameters of the Sacramento Soil Moisture Accounting (SAC-SMA) model, yielding R2 values between 0.65 and 0.90. A Feedforward NN (FNN) is trained using a time-delay approach, optimized through trial and error for each catchment, yielding R2 values between 0.70 and 0.91. The results show that ANN models can be superior to conventional conceptual models due to their ability to handle the non-linearity and dynamic nature of the natural physical processes in a more efficient manner. On the other hand, SAC-SMA captures high flows with greater accuracy, and the results suggest that conceptual models can be more robust in extrapolating beyond the limits of the historical record.
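
    A minimal real-coded genetic algorithm of the kind used for such stochastic calibration might look like the sketch below; the two-parameter monthly water-balance model, the operator choices (tournament selection, blend crossover, Gaussian mutation), and all settings are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def toy_monthly_model(params, rain, pet):
    """Two-parameter toy water-balance model standing in for SAC-SMA:
    a soil store of capacity c, releasing a fraction r of storage as runoff."""
    c, r = params
    s, q = 0.5 * c, []
    for p, e in zip(rain, pet):
        s = max(0.0, min(c, s + p - e))   # update store with rain minus PET
        qt = r * s                        # runoff as a fraction of storage
        s -= qt
        q.append(qt)
    return np.array(q)

def ga_calibrate(loss, lo, hi, pop=40, gens=60, mut=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        f = np.array([loss(xi) for xi in x])
        if f.min() < best_f:
            best_x, best_f = x[np.argmin(f)].copy(), f.min()
        a, b = rng.integers(pop, size=pop), rng.integers(pop, size=pop)
        parents = x[np.where(f[a] < f[b], a, b)]                      # tournament
        w = rng.random((pop, 1))
        kids = w * parents + (1 - w) * parents[rng.permutation(pop)]  # crossover
        kids += mut * (hi - lo) * rng.standard_normal(kids.shape)     # mutation
        x = np.clip(kids, lo, hi)
        x[0] = best_x                                                 # elitism
    return best_x, best_f

rng = np.random.default_rng(3)
rain, pet = rng.gamma(2.0, 40.0, 120), np.full(120, 60.0)   # 10 years, monthly mm
q_obs = toy_monthly_model((150.0, 0.3), rain, pet) + rng.normal(0, 2, 120)
sse = lambda p: float(np.sum((toy_monthly_model(p, rain, pet) - q_obs) ** 2))
p_best, _ = ga_calibrate(sse, lo=[50, 0.01], hi=[400, 0.9])
print("calibrated (capacity, runoff fraction):", np.round(p_best, 2))
```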

  10. Analysis of Streamflow Predictive Uncertainty using Multiple Hydrologic Models in Climate Change Impact Study

    NASA Astrophysics Data System (ADS)

    Yang, P.; Najafi, M.; Moradkhani, H.

    2010-12-01

    Based on the statistically downscaled outputs from 8 global climate model projections and 2 emission scenarios, we assess the uncertainties associated with GCMs and hydrologic models by means of multi-modeling. As there is no conceivable reason that any one hydrologic model performs better under all circumstances, four hydrologic models are selected for the hydrologic impact study: the Sacramento Soil Moisture Accounting (SAC-SMA) model, the Conceptual HYdrologic MODel (HYMOD), the Thornthwaite-Mather model (TM), and the Precipitation Runoff Modeling System (PRMS). Three objective functions are adopted to calibrate each model. The hydrologic model simulations are combined using the Bayesian Model Averaging (BMA) method. This study shows that applying BMA to the model ensemble is useful in reducing the uncertainty associated with hydrologic model selection. It is also concluded that the hydrologic model uncertainty is considerably smaller than the GCM uncertainty, except during the dry season.
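
    The BMA combination step can be sketched with the standard Gaussian-kernel EM algorithm (after Raftery et al., 2005); the synthetic four-member ensemble below merely stands in for the four calibrated hydrologic models:

```python
import numpy as np
from scipy.stats import norm

def bma_em(preds, obs, n_iter=200):
    """EM for Bayesian Model Averaging with Gaussian kernels and a shared
    error variance (after Raftery et al., 2005).
    preds: (n_models, n_times) simulations; obs: (n_times,) observations."""
    K, n = preds.shape
    w = np.full(K, 1.0 / K)
    var = np.var(obs - preds.mean(axis=0))
    for _ in range(n_iter):
        like = norm.pdf(obs, loc=preds, scale=np.sqrt(var))  # (K, n)
        z = w[:, None] * like                                # E-step
        z /= z.sum(axis=0, keepdims=True)
        w = z.mean(axis=1)                                   # M-step: weights
        var = np.sum(z * (obs - preds) ** 2) / n             # shared variance
    return w, var

# Synthetic stand-in for the SAC-SMA / HYMOD / TM / PRMS ensemble
rng = np.random.default_rng(0)
truth = 10.0 + 5.0 * np.sin(np.linspace(0.0, 6.0, 200))
preds = np.stack([truth + rng.normal(0.0, s, 200) for s in (0.5, 1.0, 2.0, 3.0)])
obs = truth + rng.normal(0.0, 0.3, 200)
w, var = bma_em(preds, obs)
print("BMA weights:", np.round(w, 3), " sigma:", round(float(np.sqrt(var)), 3))
```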

  11. Evaporative partitioning in a unified land model

    NASA Astrophysics Data System (ADS)

    Livneh, B.; Lettenmaier, D. P.; Restrepo, P. J.

    2009-12-01

    Accurate partitioning of precipitation into evapotranspiration and runoff, and more generally estimation of the surface water balance, is crucial both for hydrologic forecasting and numerical weather and climate prediction. One important aspect of this issue is the partitioning of evapotranspiration into soil evaporation, canopy evaporation, and plant transpiration, which in turn has implications for other terms in the surface water balance. In the first part of the study, we tested several well known land surface models in multi-year simulations over the continental U.S. Among the models, which included the Variable Infiltration Capacity (VIC) model, the Community Land Model (CLM), the Noah Land Surface Model (Noah LSM), and the NASA Catchment model, there were substantial variations in the partitioning. These results motivated a more detailed evaluation, using data for two catchments that were a part of the second phase of the Distributed Model Intercomparison Project (DMIP-2), the East Fork Carson River Basin and the Illinois River Basin. In this portion of the study, we evaluated a unified land model (ULM) which is a merger of the NWS Sacramento Soil Moisture Accounting model (SAC-SMA), which is used operationally for flood and seasonal streamflow prediction, and the Noah LSM, which is the land scheme used in NOAA’s suite of weather and climate prediction models. Our overall objective is to leverage the operational strengths of each model, specifically to improve streamflow prediction and soil moisture states within the Noah LSM framework, and to add a vegetation component to the SAC-SMA model. Partitioning of evapotranspiration into its three components is a key part of the ULM performance, and controls our ability to use calibrated SAC-SMA parameters within the ULM framework. In our evaluations at the DMIP-2 sites, we examined sensitivities of soil moisture and evaporative components in ULM to changes in vegetation cover, root zone depth, canopy

  12. UQLab - A Software Platform for Uncertainty Quantification of Complex System Models

    NASA Astrophysics Data System (ADS)

    Wang, C.; Duan, Q.; Gong, W.

    2014-12-01

    UQLab (Uncertainty Quantification Laboratory) is a flexible, user-friendly software platform that integrates different kinds of UQ methods, including experimental design, sensitivity analysis, uncertainty analysis, surrogate modeling, and optimization methods, to characterize the uncertainty of complex system models. It is written in Python and runs on all common operating systems. UQLab has a graphical user interface (GUI) that allows users to enter commands and output analysis results via pull-down menus. It is equipped with a model driver generator that allows any system model to be linked with the software; the only requirement is that the executable code, control file, and output file of interest of a model be accessible by the software. Through two geophysics models, the Sacramento Soil Moisture Accounting model (SAC-SMA) and the Common Land Model (CoLM), this presentation intends to demonstrate that UQLab is an effective and easy-to-use UQ tool that can be applied to a wide range of applications.

  13. Using Time-Varying Sensitivity Analysis to Understand the Effects of Model Formulation on Model Behavior

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Reed, P. M.; Wagener, T.

    2012-12-01

    Lumped rainfall-runoff models are widely used for flow prediction, but a long-recognized need exists for diagnostic tools to determine whether the process-level behavior of a model aligns with the expectations inherent in its formulation. To this end, we develop a comprehensive exploration of dominant processes in the Hymod, HBV, and Sacramento Soil Moisture Accounting (SAC-SMA) model structures. Model controls are isolated using time-varying Sobol sensitivity analysis for twelve MOPEX watersheds in the eastern United States over a ten-year period. Sensitivity indices are visualized along gradients of observed precipitation and flow characteristics to identify key behavioral differences between the three models and to connect these back to the models' underlying assumptions. Results indicate that dominant processes strongly depend on time-varying hydroclimatic conditions. Parameters associated with surface processes generally dominate under dry conditions, while parameters associated with routing processes dominate under high-flow conditions. The results highlight significant inter-model differences in dominant processes, even in models sharing the same process formulation (e.g., the soil moisture formulation in the Hymod and HBV models). The dominant processes identified are often counterintuitive; even these simple models represent complex, nonlinear systems, and the links between formulation and behavior are very difficult to discern a priori as complexity increases. Scrutinizing the links between model formulation and behavior becomes an important diagnostic approach, particularly in applications such as predictions under change, where it is critical to identify how a model's dominant processes shift under hydrologic extremes. [Figure: sensitive parameters in the (a) Hymod, (b) SAC-SMA, and (c) HBV watershed models as conditions change from dry to wet; a qualitative summary of the time-varying sensitivity indices from the twelve watersheds.]
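
    Time-varying sensitivity analysis of this kind can be sketched with the SALib package: run the model once per Saltelli sample, then compute Sobol indices window by window over the simulated hydrographs. The toy two-parameter model, window length, and sample size below are illustrative assumptions, not the study's setup:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

def toy_model(params, rain):
    """Two-parameter toy rainfall-runoff model standing in for
    Hymod/HBV/SAC-SMA: storage capacity c, recession coefficient k."""
    c, k = params
    s, q = 0.0, np.empty_like(rain)
    for t, p in enumerate(rain):
        s = min(s + p, c)
        q[t] = k * s
        s -= q[t]
    return q

problem = {"num_vars": 2,
           "names": ["capacity", "recession"],
           "bounds": [[10.0, 300.0], [0.01, 0.9]]}
rain = np.random.default_rng(0).gamma(0.7, 8.0, size=365)

X = saltelli.sample(problem, 512)                 # N*(2D+2) parameter sets
runs = np.array([toy_model(x, rain) for x in X])  # one hydrograph per set

# Time-varying first-order indices: analyze 30-day windows separately
for start in range(0, 360, 90):
    Y = runs[:, start:start + 30].mean(axis=1)
    S1 = sobol.analyze(problem, Y, print_to_console=False)["S1"]
    print(f"days {start:3d}-{start + 29:3d}: S1 =", np.round(S1, 2))
```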

  14. Hydrologic evaluation of a Generalized Statistical Uncertainty Model for Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Sarachi, S.; Hsu, K. L.; Sorooshian, S.

    2014-12-01

    The development of satellite-based precipitation retrieval algorithms and their use in hydroclimatic studies have been of great interest to hydrologists. It is important to understand the uncertainty associated with precipitation products and how it further contributes to the variability of streamflow simulation. In this study a mixture model of a Generalized Normal Distribution and a Gamma distribution (GND-G) is used to model the joint probability distribution of satellite-based (PERSIANN) and Stage IV radar rainfall. The study area for constructing the uncertainty model covers a 15° × 15° box of 0.25° × 0.25° cells over the eastern United States for the summers of 2004 to 2009. Cells are aggregated in space and time to obtain data with different resolutions for the construction of the model's parameter space. The uncertainty model is evaluated using data from the National Weather Service (NWS) Distributed Hydrologic Model Intercomparison Project - Phase 2 (DMIP 2) Illinois River basin south of Siloam, OK, covering the time period 2006 to 2008. The uncertainty range of precipitation is estimated, and the impact of precipitation uncertainty on streamflow estimation is demonstrated by Monte Carlo simulation of the precipitation forcing in the Sacramento Soil Moisture Accounting (SAC-SMA) model. The results show that using precipitation along with its uncertainty distribution as forcing to SAC-SMA makes it possible to estimate the uncertainty associated with the streamflow simulation (in this case study a 90% confidence interval is used). The mean of this streamflow confidence interval is compared to the reference streamflow for evaluation of the model, and the results show that this method helps to better estimate the variability of the streamflow simulation along with its statistics, e.g., percent bias and root mean squared error.
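
    The Monte Carlo propagation step can be sketched as follows; a lognormal multiplicative error is used here as a simplified stand-in for the paper's GND-Gamma mixture, and a toy bucket model stands in for SAC-SMA:

```python
import numpy as np

def toy_runoff(rain, c=120.0, k=0.25):
    """Toy bucket model standing in for SAC-SMA."""
    s, q = 0.0, np.empty_like(rain)
    for t, p in enumerate(rain):
        s = min(s + p, c)
        q[t] = k * s
        s -= q[t]
    return q

rng = np.random.default_rng(42)
rain_obs = rng.gamma(0.6, 10.0, size=180)  # satellite rainfall (mm/day)

# Monte Carlo forcing: multiplicative lognormal error as a simplified
# stand-in for the GND-Gamma mixture error model of the study
n_ens = 500
flows = np.array([toy_runoff(rain_obs * rng.lognormal(0.0, 0.3, rain_obs.size))
                  for _ in range(n_ens)])

lo, hi = np.percentile(flows, [5, 95], axis=0)  # 90% confidence band
ref = toy_runoff(rain_obs)
coverage = np.mean((ref >= lo) & (ref <= hi))
print(f"90% band contains the reference simulation {100 * coverage:.0f}% of days")
```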

  15. Performance and Probabilistic Verification of Regional Parameter Estimates for Conceptual Rainfall-runoff Models

    NASA Astrophysics Data System (ADS)

    Franz, K.; Hogue, T.; Barco, J.

    2007-12-01

    Identification of appropriate parameter sets for simulation of streamflow in ungauged basins has become a significant challenge for both operational and research hydrologists. This is especially difficult in the case of conceptual models, where model parameters typically must be "calibrated" or adjusted to match streamflow conditions in specific systems (i.e., some of the parameters are not directly observable). This paper addresses the performance and uncertainty associated with transferring conceptual rainfall-runoff model parameters between basins within large-scale ecoregions. We use the National Weather Service's (NWS) operational hydrologic model, the SACramento Soil Moisture Accounting (SAC-SMA) model. A Multi-Step Automatic Calibration Scheme (MACS), using Shuffled Complex Evolution (SCE), is used to optimize SAC-SMA parameters for a group of watersheds with extensive hydrologic records from the Model Parameter Estimation Experiment (MOPEX) database. We then explore "hydroclimatic" relationships between basins to facilitate regionalization of parameters for an established ecoregion in the southeastern United States. The impact of regionalized parameters is evaluated via standard model performance statistics as well as through generation of hindcasts and probabilistic verification procedures to evaluate streamflow forecast skill. Preliminary results show climatology ("climate neighbor") to be a better indicator of transferability than physical similarity or proximity ("nearest neighbor"). The mean and median of all the parameters within the ecoregion are the poorest choice for the ungauged basin. The choice of regionalized parameter set affected the skill of the ensemble streamflow hindcasts; however, all parameter sets show little skill in forecasts after five weeks (i.e., climatology is as good an indicator of future streamflows). In addition, the optimum parameter set changed seasonally, with the "nearest neighbor" showing the highest skill in the

  16. Uncertainty Analysis of Flash Flood Guidance: Topographic Data and Model Parameter Errors

    NASA Astrophysics Data System (ADS)

    Georgakakos, K. P.; Ntelekos, A. A.; Krajewski, W. F.

    2004-05-01

    Flash Flood Guidance (FFG) is the volume of rainfall required to generate bankfull flow at the outlet of a basin over a specified time interval, given the initial soil moisture conditions. Operationally, the soil moisture conditions are generated every 6 hours by the execution of the Sacramento Soil Moisture Accounting (SAC-SMA) model at the River Forecast Centers (RFCs). This guidance is used with actual radar rainfall data over the basin to assist with the production of flash flood warnings. The backbone of the FFG system is the threshold runoff (Thresh-R), the calculation of which is done offline as a one-time task. Thresh-R is the volume of effective rainfall of a given duration needed to cause bankfull flow at the basin outlet. In this study, bankfull conditions from uniform steady flow and the geomorphologic unit hydrograph theory are used for the calculation of Thresh-R for a basin of the Illinois River in Oklahoma. The uncertainty associated with the GIS and channel data used in the calculation of Thresh-R is introduced, and an ensemble of threshold runoff values is produced. The FFG is then modeled with the use of a time-continuous approximation of the upper zone of the SAC-SMA hydrologic model and quadratic function approximations. The Thresh-R ensemble is fed into the FFG model to study the uncertainty in the FFG values due to the uncertainty in the GIS and channel data. The numerical experiments are then repeated with additional uncertainty in the key parameters of the analytical Sacramento model solution, to study the synergistic effect of both uncertainties. The results of the analysis are presented and the parameters that most affect the FFG uncertainty are identified. The need to transform the currently deterministic operational FFG system into a probabilistic or ensemble one is also discussed.
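
    Because FFG inverts a monotone rainfall-to-runoff-depth relationship, it can be computed by bisection; the sketch below uses an SCS-style curve as a stand-in for the time-continuous SAC-SMA upper-zone approximation and propagates a threshold-runoff ensemble through to an FFG ensemble:

```python
import numpy as np

def effective_rainfall(rain_mm, deficit_mm):
    """SCS-style rainfall-to-runoff-depth curve, used as a stand-in for the
    time-continuous SAC-SMA upper-zone approximation: wetter soils (smaller
    deficit) convert more of the rain into runoff."""
    return rain_mm ** 2 / (rain_mm + deficit_mm)

def flash_flood_guidance(thresh_r_mm, deficit_mm, tol=0.01):
    """FFG = the rainfall depth whose effective rainfall just equals the
    threshold runoff, found by bisection (the curve is monotone in rain)."""
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if effective_rainfall(mid, deficit_mm) < thresh_r_mm:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Propagate an ensemble of threshold-runoff values (GIS/channel uncertainty)
rng = np.random.default_rng(7)
thresh_r = rng.normal(15.0, 3.0, 500).clip(min=5.0)  # mm of effective rainfall
ffg = np.array([flash_flood_guidance(tr, deficit_mm=40.0) for tr in thresh_r])
print(f"FFG: median {np.median(ffg):.1f} mm, 90% range "
      f"[{np.percentile(ffg, 5):.1f}, {np.percentile(ffg, 95):.1f}] mm")
```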

  17. Model Accounting Program. Adopters Guide.

    ERIC Educational Resources Information Center

    Beaverton School District 48, OR.

    The accounting cluster demonstration project conducted at Aloha High School in the Beaverton, Oregon, school district developed a model curriculum for high school accounting. The curriculum is based on interviews with professionals in the accounting field and emphasizes the use of computers. It is suitable for use with special needs students as…

  18. Fuzzy conceptual rainfall runoff models

    NASA Astrophysics Data System (ADS)

    Özelkan, Ertunga C.; Duckstein, Lucien

    2001-11-01

    A fuzzy conceptual rainfall-runoff (CRR) framework is proposed herein to deal with those parameter uncertainties of conceptual rainfall-runoff models that are related to data and/or model structure; every element of the rainfall-runoff model is assumed to be possibly uncertain, taken here as being fuzzy. First, the conceptual rainfall-runoff system is fuzzified and different operational modes are formulated using fuzzy rules; second, the parameter identification aspect is examined using fuzzy regression techniques. In particular, bi-objective and tri-objective fuzzy regression models are applied in the case of linear conceptual rainfall-runoff models so that the decision maker may trade off prediction vagueness (uncertainty) and the embedding of outliers. For the non-linear models, a fuzzy least-squares regression framework is applied to derive the model parameters. The methodology is illustrated using: (1) a linear conceptual rainfall-runoff model; (2) an experimental two-parameter model; and (3) a simplified version of the Sacramento soil moisture accounting model of the US National Weather Service river forecast system (SAC-SMA), known as the six-parameter model. It is shown that the fuzzy logic framework enables the decision maker to gain insight into the model sensitivity and the uncertainty stemming from the elements of the CRR model.

  19. Building Cyberinfrastructure to Support a Real-time National Flood Model

    NASA Astrophysics Data System (ADS)

    Salas, F. R.; Maidment, D. R.; Tolle, K.; Navarro, C.; David, C. H.; Corby, R.

    2014-12-01

    The National Weather Service (NWS) is divided into 13 regional forecast centers across the country, where the Sacramento Soil Moisture Accounting (SAC-SMA) model is run on average over a 10-day period: 5 days in the past and 5 days in the future. Model inputs and outputs such as precipitation and surface runoff are spatially aggregated over approximately 6,600 forecast basins with an average area of 1,200 square kilometers. In contrast, the NHDPlus dataset, which represents the geospatial fabric of the country, defines over 3 million catchments with an average area of 3 square kilometers. Downscaling the NWS land surface model outputs to the NHDPlus catchment scale in real time requires the development of cyberinfrastructure to manage, share, compute, and visualize large quantities of hydrologic data: streamflow computations through time for over 3 million river reaches. Between September 2014 and May 2015, the National Flood Interoperability Experiment (NFIE), coordinated through the Integrated Water Resource Science and Services (IWRSS) partners, will focus on building a national flood model for the country. This experiment will work to seamlessly integrate data and model services available on local and cloud servers (e.g., Azure) through disparate data sources operating at various spatial and temporal scales. As such, this paper will present a scalable information model that leverages the Routing Application for Parallel Computation of Discharge (RAPID) model to produce real-time flow estimates for approximately 67,000 NHDPlus river reaches in the NWS West Gulf River Forecast Center region.

  20. Combined assimilation of streamflow and satellite soil moisture with the particle filter and geostatistical modeling

    NASA Astrophysics Data System (ADS)

    Yan, Hongxiang; Moradkhani, Hamid

    2016-08-01

    Assimilation of satellite soil moisture and streamflow data into a distributed hydrologic model has received increasing attention over the past few years. This study provides a detailed analysis of the joint and separate assimilation of streamflow and Advanced Scatterometer (ASCAT) surface soil moisture into a distributed Sacramento Soil Moisture Accounting (SAC-SMA) model, with the use of the recently developed particle filter-Markov chain Monte Carlo (PF-MCMC) method. Performance is assessed over the Salt River Watershed in Arizona, one of the watersheds without anthropogenic effects in the Model Parameter Estimation Experiment (MOPEX). A total of five data assimilation (DA) scenarios are designed, and the effects of the locations of streamflow gauges and of the ASCAT soil moisture on the predictions of soil moisture and streamflow are assessed. In addition, a geostatistical model is introduced to overcome the significant bias and spatial discontinuity of the satellite soil moisture. The results indicate that: (1) solely assimilating outlet streamflow can lead to biased soil moisture estimation; (2) when the study area is only partially covered by the satellite data, the geostatistical approach can estimate the soil moisture for the uncovered grid cells; (3) joint assimilation of streamflow and soil moisture from geostatistical modeling can further improve the surface soil moisture prediction. This study recommends the geostatistical model as a helpful tool to aid the remote sensing technique and the hydrologic DA study.
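
    A bare-bones sequential importance resampling (SIR) particle filter conveys the mechanics of the state-updating step; the study's PF-MCMC additionally applies a Markov chain Monte Carlo move after resampling (omitted here), and the toy bucket model stands in for the distributed SAC-SMA:

```python
import numpy as np

def step(s, p, k=0.2, c=150.0):
    """One daily step of a toy bucket model (stand-in for SAC-SMA states);
    works on scalars and on ensembles (NumPy arrays) alike."""
    s = np.minimum(s + p, c)
    q = k * s
    return s - q, q

rng = np.random.default_rng(1)
n_p, n_t = 1000, 120
rain = rng.gamma(0.7, 8.0, n_t)

# Synthetic truth and noisy streamflow observations
s_true, q_obs = 50.0, []
for p in rain:
    s_true, q = step(s_true, p)
    q_obs.append(q + rng.normal(0.0, 0.5))

# SIR particle filter: propagate with process noise, weight by the
# observation likelihood, then resample (multinomial, for simplicity)
s = rng.uniform(0.0, 150.0, n_p)
for t, p in enumerate(rain):
    s = np.clip(s + rng.normal(0.0, 2.0, n_p), 0.0, 150.0)  # process noise
    s, q = step(s, p)
    w = np.exp(-0.5 * ((q_obs[t] - q) / 0.5) ** 2)          # Gaussian likelihood
    w /= w.sum()
    s = s[rng.choice(n_p, size=n_p, p=w)]                   # resampling

print(f"true final storage {s_true:.1f} mm vs PF ensemble mean {s.mean():.1f} mm")
```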

  1. Needed: An Updated Accountability Model

    ERIC Educational Resources Information Center

    Tucker, Marc

    2015-01-01

    It must have seemed simple to the framers of No Child Left Behind. For years, they had poured more and more money into federal programs for schools, yet reading performance had not improved. It appeared that the money had gone down a rat hole, and Congress was ready to hold schools accountable. It was time to get tough. Unfortunately, the…

  2. An Institutional Accountability Model for Community Colleges.

    ERIC Educational Resources Information Center

    Harbour, Clifford P.

    2003-01-01

    Proposes a model for managing a community college's accountability environment and shows how it can be applied. Reports that the model is premised on the pluralistic perspective of accountability (Kearns), and uses Christensen's value network for building the community college model. (Contains 37 references.) (AUTH/NB)

  3. Implementing a trustworthy cost-accounting model.

    PubMed

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders. PMID:26492763

  4. Evaluation and Sensitivity Analysis of An Ensemble-based Coupled Flash Flood and Landslide Modelling System Using Remote Sensing Forcing

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Hong, Y.; Gourley, J. J.; Xue, X.; He, X.

    2015-12-01

    Heavy rainfall-triggered landslides are often associated with flood events and cause additional loss of life and property. It is pertinent to build a robust coupled flash flood and landslide disaster early warning system for disaster preparedness and hazard management. In this study, we built an ensemble-based coupled flash flood and landslide disaster early warning system, aimed at operational use by the US National Weather Service, by integrating the Coupled Routing and Excess STorage (CREST) model and the Sacramento Soil Moisture Accounting model (SAC-SMA) with the physically based SLope-Infiltration-Distributed Equilibrium (SLIDE) landslide prediction model. We further evaluated this ensemble-based prototype warning system by conducting multi-year simulations driven by the Multi-Radar Multi-Sensor (MRMS) rainfall estimates in North Carolina and Oregon. We comprehensively evaluated the predictive capabilities of this system against observed and reported flood and landslide events, and then evaluated the sensitivity of the coupled system to the simulated hydrological processes. Our results show that the system is generally capable of making accurate predictions of flash flood and landslide events in terms of their locations and times of occurrence. The occurrence of predicted landslides shows high sensitivity to total infiltration and soil water content, highlighting the importance of accurately simulating the hydrological processes for the accurate forecasting of rainfall-triggered landslide events.

  5. Parameterization of distributed hydrological models: learning from the experiences of lumped modeling

    NASA Astrophysics Data System (ADS)

    Moreda, Fekadu; Koren, Victor; Zhang, Ziya; Reed, Seann; Smith, Michael

    2006-03-01

    The Hydrology Lab (HL) of the National Oceanic and Atmospheric Administration's National Weather Service (NOAA/NWS), Office of Hydrologic Development (OHD), is currently developing and testing the HL Research Modeling System (HL-RMS). Currently, the system has one snow model (SNOW-17) and two runoff models: the Sacramento Soil Moisture Accounting (SAC-SMA) and the Continuous Antecedent Precipitation Index (CONT-API). The lumped CONT-API model is operational at one of the NWS River Forecast Centers (RFCs), the Middle Atlantic RFC (MARFC), in the United States. This study deals with the derivation of a priori distributed parameters for the CONT-API model. In our strategy, initial distributed parameters are derived from the calibrated lumped CONT-API parameters of 67 basins in the Susquehanna River Basin. This study shows that the six-hourly calibrated CONT-API parameters can be used in a one-hourly lumped model with only minor changes in total runoff volume (less than 5%). However, to obtain the correct timing of simulated hydrographs, appropriate one-hourly unit hydrographs need to be derived. A priori distributed model parameters were based on relationships between soil properties and calibrated lumped CONT-API parameters. Multiple linear regressions with coefficients of determination ranging from 0.39 to 0.63 were obtained for 10 lumped model parameters. Using these predicted parameters, the lumped model produced simulations having Nash-Sutcliffe efficiency (Neff) statistics ranging from 0.69 to 0.78 for five of the 67 basins, commensurate with goodness-of-fit statistics from lumped model calibrations. Furthermore, application of the method to deriving a priori parameters gave promising results in distributed model simulations.

  6. Time-varying sensitivity analysis clarifies the effects of watershed model formulation on model behavior

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Reed, P. M.; Wagener, T.

    2013-03-01

    Lumped rainfall-runoff models are widely used for flow prediction, but a long-recognized need exists for diagnostic tools to determine whether the process-level behavior of a model aligns with the expectations inherent in its formulation. To this end, we develop a comprehensive exploration of dominant parameters in the Hymod, HBV, and Sacramento Soil Moisture Accounting (SAC-SMA) model structures. Model controls are isolated using time-varying Sobol' sensitivity analysis for twelve MOPEX watersheds in the eastern United States over a 10-year period. Sensitivity indices are visualized along gradients of observed precipitation and streamflow to identify key behavioral differences between the three models and to connect these back to the models' underlying assumptions. Results indicate that the models' dominant parameters strongly depend on time-varying hydroclimatic conditions. Parameters associated with surface processes such as evapotranspiration and runoff generally dominate under dry conditions, when high evaporative fluxes are required for accurate simulation. Parameters associated with routing processes typically dominate under high-flow conditions, when performance depends on the timing of flow events. The results highlight significant inter-model differences in performance controls, even in cases where the models share similar process formulations. The dominant parameters identified can be counterintuitive; even these simple models represent complex, nonlinear systems, and the links between formulation and behavior are difficult to discern a priori as complexity increases. Scrutinizing the links between model formulation and behavior becomes an important diagnostic approach, particularly in applications such as predictions under change where dominant model controls will shift under hydrologic extremes.

  7. Satellite-derived potential evapotranspiration for distributed hydrologic runoff modeling

    NASA Astrophysics Data System (ADS)

    Spies, R. R.; Franz, K. J.; Bowman, A.; Hogue, T. S.; Kim, J.

    2012-12-01

    Distributed models have the ability to incorporate spatially variable data, especially high-resolution forcing inputs such as precipitation, temperature, and evapotranspiration, in hydrologic modeling. Use of distributed hydrologic models for operational streamflow prediction has been partially hindered by a lack of readily available, spatially explicit input observations. Potential evapotranspiration (PET), for example, is currently accounted for through PET input grids that are based on monthly climatological values. The goal of this study is to assess the use of satellite-based PET estimates that represent the temporal and spatial variability, as input to the National Weather Service (NWS) Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM). Daily PET grids are generated for six watersheds in the upper Mississippi River basin using a method that applies only MODIS satellite-based observations and the Priestley-Taylor formula (MODIS-PET). The use of MODIS-PET grids will be tested against the use of the current climatological PET grids for simulating basin discharge. Gridded surface temperature forcing data are derived by applying the inverse distance weighting spatial interpolation method to point-based station observations from the Automated Surface Observing System (ASOS) and Automated Weather Observing System (AWOS). Precipitation data are obtained from the Climate Prediction Center's (CPC) Climatology-Calibrated Precipitation Analysis (CCPA). A priori gridded parameters for the Sacramento Soil Moisture Accounting model (SAC-SMA), SNOW-17 model, and routing model are initially obtained from the Office of Hydrologic Development and further calibrated using an automated approach. The potential of the MODIS-PET to be used in an operational distributed modeling system will be assessed, with the long-term goal of promoting research-to-operations transfers and advancing the science of hydrologic forecasting.
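
    The inverse distance weighting step is compact in NumPy; a minimal sketch with made-up station coordinates and temperatures (power-2 weighting is the common default, not necessarily the study's choice):

```python
import numpy as np

def idw(xy_stations, values, xy_grid, power=2.0):
    """Inverse distance weighting: each grid cell receives a weighted average
    of station values with weights 1/d**power (exact at station locations)."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)            # guard against division by zero
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Three ASOS/AWOS-style stations (km coordinates) reporting temperature (degC)
stations = np.array([[0.0, 0.0], [40.0, 10.0], [15.0, 35.0]])
temps = np.array([21.5, 19.2, 18.4])

# Interpolate onto a coarse grid (4 km spacing)
gx, gy = np.meshgrid(np.arange(0.0, 48.0, 4.0), np.arange(0.0, 48.0, 4.0))
grid = np.column_stack([gx.ravel(), gy.ravel()])
t_grid = idw(stations, temps, grid).reshape(gx.shape)
print(np.round(t_grid, 1))
```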

  8. Modeling habitat dynamics accounting for possible misclassification

    USGS Publications Warehouse

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology, as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging, because error in classification can be confused with true change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel), and when true and observed states are obtained at one level of resolution but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics, and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.

  9. Distributed models for operational river forecasting: research, development, and implementation

    NASA Astrophysics Data System (ADS)

    Smith, M.

    2003-04-01

    The National Weather Service (NWS) is uniquely mandated amongst federal agencies to provide river forecasts for the United States. To accomplish this mission, the NWS uses the NWS River Forecast System (NWSRFS). The NWSRFS is a collection of hydrologic, hydraulic, data collection, and forecast display algorithms employed at 13 River Forecast Centers (RFCs) throughout the US. Within the NWS, the Hydrology Lab (HL) of the Office of Hydrologic Development conducts research and development to improve the NWS models and products. Areas of current research include snow, frozen ground, dynamic channel routing, radar and satellite precipitation estimation, uncertainty, and new approaches to rainfall-runoff modeling. A prominent recent area of research has been the utility of distributed models to improve the accuracy of NWS forecasts and to provide meaningful hydrologic simulations at ungaged interior nodes. Current river forecast procedures center on lumped applications of the conceptual Sacramento Soil Moisture Accounting (SAC-SMA) model to transform rainfall to runoff. Unit hydrographs are used to convert runoff to discharge hydrographs at gaged locations. Hydrologic and hydraulic routing methods are used to route hydrographs to downstream computational points. Precipitation inputs to the models have traditionally been defined from rain gage observations. With the nationwide implementation of the Next Generation Radar platforms (NEXRAD), the NWS has precipitation estimates of unprecedented spatial and temporal resolution. In order to most effectively use these high-resolution data, recent research has been devoted to the development of distributed hydrologic models to improve the accuracy of NWS forecasts. The development of distributed models in HL follows specific scientific research and implementation strategies, each consisting of several elements. In its science strategy, HL has conducted a highly successful comparison of distributed models (Distributed

  10. To Trust or Not to Trust: Assessing the consistency of controls across hydrologic models

    NASA Astrophysics Data System (ADS)

    Herman, J. D.

    2011-12-01

    Watershed models can vary significantly in their formulation and complexity. Conceptual lumped models are the most widely used type, but they have received criticism for their limited physical interpretability. A key challenge in drawing process-level inferences from these models lies in the systematic assessment of how their controlling parameters and processes change over time. The extensive use of these models and the increasing popularity of multi-model frameworks highlight the need for diagnostic approaches that can rigorously evaluate conceptual model structures, with a particular focus on the consistency of their implied process controls. In this study, we develop a diagnostic method to explore the consistency of dominant process controls across the HBV, HyMod, and Sacramento Soil Moisture Accounting (SAC-SMA) model structures. The parametric controls for several signature metrics are determined using Sobol Sensitivity Analysis for twelve watersheds selected across a hydro-climatic gradient in the eastern United States. These controls are evaluated to determine whether, and under what conditions, the models' behavior is consistent with our perception of the underlying system. Controls are also compared across models to explore the impact of model structure choice on process-level inferences. Results indicate that each of the three model structures offers a functionally different simplification of the physical system. Strong seasonal variation in parametric sensitivities allows for comparisons between real-world dominant processes and those implied by the models. These dynamic sensitivities often behave differently across models, which emphasizes the danger of inferring process-level information from individual model structures.

  11. GOES Solar Radiation for Evapotranspiration Estimation and Streamflow Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The National Weather Service River Forecast System uses the Sacramento Soil Moisture Accounting (SAC-SMA) rainfall-runoff model to produce daily river and flood forecasts and issue flood warnings. The manual observations of total sky cover used to estimate solar radiation and potential evapotranspir...

  12. Evaluation of climate anomalies impacts on the Upper Blue Nile Basin in Ethiopia using a distributed and a lumped hydrologic model

    NASA Astrophysics Data System (ADS)

    Elsanabary, Mohamed Helmy; Gan, Thian Yew

    2015-11-01

    Evaluating the impacts of climate anomalies on the Upper Blue Nile Basin (UBNB), Ethiopia, a large basin with scarce hydroclimatic data, through hydrologic modeling is a challenge. A fully distributed, physically based model, a modified version of the Interactions Soil-Biosphere-Atmosphere model of Météo-France (MISBA), and a lumped, conceptual rainfall-runoff Sacramento model, the SAC-SMA of the US National Weather Service, were used to simulate the streamflow of the UBNB. To study the potential hydrologic effect of climate anomalies on the UBNB, rainfall and temperature data observed when climate anomalies were active were resampled and used to drive MISBA and SAC-SMA. To obtain representative, distributed precipitation data in mountainous basins, it was found that a 3% adjustment factor for every 25 m rise in elevation was needed to orographically correct the rainfall over the UBNB. The performance of MISBA applied to the UBNB improved after MISBA was modified so that it could simulate evaporation loss from the canopy, yielding a coefficient of determination (R2) of 0.58 and a root mean square error (RMSE) of 0.34 m3/s in comparison with the observed streamflow. In contrast, the performance of SAC-SMA in both the calibration and validation runs is better than that of MISBA, with R2 of 0.79 for calibration and 0.82 for validation, even though it models the hydrology of the UBNB in a lumped, conceptual framework as against the physically based, fully distributed framework of MISBA. El Niño tends to decrease the June-September rainfall but increase the February-May rainfall, while La Niña has the opposite effect on the rainfall of the UBNB. Based on the simulations of MISBA and SAC-SMA for the UBNB, La Niña and the Indian Ocean Dipole (IOD) tend to have a wetting effect, while El Niño has a drying effect on the streamflow of the UBNB. In addition, the El Niño Southern Oscillation (ENSO) and IOD increase the streamflow variability more than they change the magnitude of streamflow. The results provide
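
    The quoted orographic adjustment (3% per 25 m of elevation rise) is straightforward to apply; the sketch below assumes a linear, non-compounding factor relative to a reference gauge elevation, which is one plausible reading of the abstract:

```python
def orographic_correction(rain_mm, elev_m, ref_elev_m, pct_per_25m=3.0):
    """Scale gauge rainfall by pct_per_25m percent for every 25 m of
    elevation difference from the reference gauge (linear form assumed)."""
    factor = 1.0 + (pct_per_25m / 100.0) * (elev_m - ref_elev_m) / 25.0
    return rain_mm * factor

# A 10 mm observation at an 1800 m gauge, extended to higher grid cells
for z in (1800, 2300, 2800):
    print(f"{z} m: {orographic_correction(10.0, z, 1800):.1f} mm")
```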

  13. Assimilation of AMSR-E snow water equivalent data in a spatially-lumped snow model

    NASA Astrophysics Data System (ADS)

    Dziubanski, David J.; Franz, Kristie J.

    2016-09-01

    Accurately initializing snow model states in hydrologic prediction models is important for estimating future snowmelt, water supplies, and flooding potential. While ground-based snow observations give the most reliable information about snowpack conditions, they are spatially limited. In the north-central USA, there are no continuous observations of hydrologically critical snow variables. Satellites offer the most likely source of spatial snow data, such as the snow water equivalent (SWE), for this region. In this study, we test the impact of assimilating SWE data from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) instrument into the US National Weather Service (NWS) SNOW17 model for seven watersheds in the Upper Mississippi River basin. The SNOW17 is coupled with the NWS Sacramento Soil Moisture Accounting (SAC-SMA) model, and both simulated SWE and discharge are evaluated. The ensemble Kalman filter (EnKF) assimilation framework is applied, and updating occurs on a daily cycle for water years 2006-2011. Prior to assimilation, the AMSR-E data are bias-corrected using data from the National Operational Hydrologic Remote Sensing Center (NOHRSC) airborne snow survey program; an average AMSR-E SWE bias of -17.91 mm was found for the study basins. SNOW17 and SAC-SMA model parameters from the North Central River Forecast Center (NCRFC) are used. Compared to a baseline run without assimilation, the SWE assimilation improved discharge for five of the seven study sites, in particular for high discharge magnitudes associated with snowmelt runoff. SWE and discharge simulations suggest that the SNOW17 is underestimating SWE and snowmelt rates in the study basins. Deep snow conditions and periods of snowmelt may have introduced error into the assimilation due to the difficulty of obtaining accurate brightness temperatures under these conditions. Overall results indicate that the AMSR-E data and EnKF are viable and effective solutions for improving simulations
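
    For a single grid cell and a scalar SWE state, the EnKF update with perturbed observations reduces to a few lines; the numbers below, including the reuse of the reported -17.91 mm mean bias, are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n_ens = 100

# Forecast ensemble of model SWE (mm), e.g. from perturbed SNOW17 runs
swe_fcst = rng.normal(120.0, 15.0, n_ens)

# AMSR-E retrieval, corrected with the reported mean bias of -17.91 mm
# (AMSR-E reads low on average, so subtracting the bias raises the value)
obs_raw, obs_bias, obs_err = 95.0, -17.91, 10.0
obs = obs_raw - obs_bias

# Scalar EnKF update with perturbed observations: K = P / (P + R)
P = np.var(swe_fcst, ddof=1)                    # forecast error variance
K = P / (P + obs_err ** 2)                      # Kalman gain
obs_pert = obs + rng.normal(0.0, obs_err, n_ens)
swe_anal = swe_fcst + K * (obs_pert - swe_fcst)

print(f"prior mean {swe_fcst.mean():.1f} mm -> "
      f"posterior mean {swe_anal.mean():.1f} mm (K = {K:.2f})")
```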

  14. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    SciTech Connect

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT), and Sum-Of-Trees (SOT) screening methods need about 400-600 samples for the same purpose; Monte Carlo (MC), Orthogonal Array (OA), and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them. (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect; OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient
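
    Morris One-At-a-Time screening of the kind ranked most efficient above is available in the SALib package; the sketch below substitutes a five-parameter toy function for the 13-parameter SAC-SMA:

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

def toy_model(x):
    """Five-parameter toy response standing in for the 13-parameter SAC-SMA:
    two strong main effects, one interacting pair, and one inert parameter."""
    return 5.0 * x[0] + 2.0 * x[1] ** 2 + 3.0 * x[2] * x[3]

problem = {"num_vars": 5,
           "names": ["p0", "p1", "p2", "p3", "p4"],
           "bounds": [[0.0, 1.0]] * 5}

# MOAT needs only r*(D+1) runs: r=40 trajectories -> 240 model evaluations
X = morris_sample.sample(problem, N=40, num_levels=4)
Y = np.apply_along_axis(toy_model, 1, X)
res = morris_analyze.analyze(problem, X, Y, num_levels=4, print_to_console=False)
for name, mu, sg in zip(problem["names"], res["mu_star"], res["sigma"]):
    print(f"{name}: mu* = {mu:.2f}, sigma = {sg:.2f}")
```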

  15. Accountability.

    ERIC Educational Resources Information Center

    The Newsletter of the Comprehensive Center-Region VI, 1999

    1999-01-01

    Controversy surrounding the accountability movement is related to how the movement began in response to dissatisfaction with public schools. Opponents see it as one-sided, somewhat mean-spirited, and a threat to the professional status of teachers. Supporters argue that all other spheres of the workplace have accountability systems and that the…

  16. Accountability.

    ERIC Educational Resources Information Center

    Lashway, Larry

    1999-01-01

    This issue reviews publications that provide a starting point for principals looking for a way through the accountability maze. Each publication views accountability differently, but collectively these readings argue that even in an era of state-mandated assessment, principals can pursue proactive strategies that serve students' needs. James A.…

  17. Accountability Study of the Program for Trainable Mentally Retarded Children and Youth. Accountability Model.

    ERIC Educational Resources Information Center

    Duval County Schools, Jacksonville, FL.

    Reported in summary form is an accountability model developed in Duval County, Florida (funded through Title VI) for planning, developing, and operating an educational program for trainable mentally handicapped children. Background information consists of such aspects of accountability as systematic planning, assessing, and refining educational…

  18. AB 1725 Model Accountability System. California Community Colleges. Revised.

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Board of Governors.

    This report proposes a model accountability system for the California community colleges to comply with the directives of Assembly Bill 1725 (AB 1725). The purpose of the accountability system is to provide colleges and districts, the board of governors, and the California legislature with information that will allow for the continued improvement…

  19. Accountability: An Action Model for the Public Schools.

    ERIC Educational Resources Information Center

    DeMont, Billie; DeMont, Roger

    The model proposed in this book specifies that there are four types of interrelated practices that determine the extent to which accountability is realized. These are (1) the identification of primary accountability agents and their respective program responsibilities, (2) the execution of internal program reviews by those program officers, (3)…

  1. The Accounting Class as Accounting Firm: A Model Program for Developing Technical and Managerial Skills

    ERIC Educational Resources Information Center

    Docherty, Gary

    1976-01-01

    One way to bring the accounting office into the classroom is to conduct the class as a "company." Such a class is aimed at developing students' technical and managerial skills, as well as their career awareness and career goals. Performance goals, a course description, and overall objectives of the course are given and might serve as a model.…

  2. Program Evaluation: The Accountability Bridge Model for Counselors

    ERIC Educational Resources Information Center

    Astramovich, Randall L.; Coker, J. Kelly

    2007-01-01

    The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning…

  3. An application of model-based reasoning to accounting systems

    SciTech Connect

    Nado, R.; Chams, M.; Delisio, J.; Hamscher, W.

    1996-12-31

    An important problem faced by auditors is gauging how much reliance can be placed on the accounting systems that process millions of transactions to produce the numbers summarized in a company's financial statements. Accounting systems contain internal controls, procedures designed to detect and correct errors and irregularities that may occur in the processing of transactions. In a complex accounting system, it can be an extremely difficult task for the auditor to anticipate the possible errors that can occur and to evaluate the effectiveness of the controls at detecting them. An accurate analysis must take into account the unique features of each company's business processes. To cope with this complexity and variability, the Comet system applies a model-based reasoning approach to the analysis of accounting systems and their controls. An auditor uses Comet to create a hierarchical flowchart model that describes the intended processing of business transactions by an accounting system and the operation of its controls. Comet uses the constructed model to automatically analyze the effectiveness of the controls in detecting potential errors. Price Waterhouse auditors have used Comet on a variety of real audits in several countries around the world.

  4. The CARE model of social accountability: promoting cultural change.

    PubMed

    Meili, Ryan; Ganem-Cuenca, Alejandra; Leung, Jannie Wing-sea; Zaleschuk, Donna

    2011-09-01

    On the 10th anniversary of Health Canada and the Association of Faculties of Medicine of Canada's publication in 2001 of Social Accountability: A Vision for Canadian Medical Schools, the authors review the progress at one Canadian medical school, the College of Medicine at the University of Saskatchewan, in developing a culture of social accountability. They review the changes that have made the medical school more socially accountable and the steps taken to make those changes possible. In response to calls for socially accountable medical schools, the College of Medicine created a Social Accountability Committee to oversee the integration of these principles into the college. The committee developed the CARE model (Clinical activity, Advocacy, Research, Education and training) as a guiding tool for social accountability initiatives toward priority health concerns and as a means of evaluation. Diverse faculty and student committees have emerged as a result and have had far-reaching impacts on the college and communities: from changes in curricula and admissions to community programming and international educational experiences. Although a systematic assessment of the CARE model is needed, early evidence shows that the most significant effects can be found in the cultural shift in the college, most notably among students. The CARE model may serve as an important example for other educational institutions in the development of health practitioners and research that is responsive to the needs of their communities. PMID:21785308

  5. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (i.e. SNOW17) and rainfall-runoff model (i.e. SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system from input data, model structure, model parameters, and initial states. The goal of the current study is to undertake verification of potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e. RFC, DREAM (Differential Evolution Adaptive Metropolis)) as well as for a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB) located on the western side of the Sierra Nevada Mountains in northern California. Hindcasts are verified using both deterministic (i.e. Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (i.e. reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes a comparison of the performance of the different optimized parameters and the DA framework, as well as an assessment of the impact associated with the initial conditions used for streamflow forecasts for the NFARB.

  6. Liquid drop model of spherical nuclei with account of viscosity

    NASA Astrophysics Data System (ADS)

    Khokonov, A. Kh.

    2016-01-01

    In the frame of the nuclear liquid drop model, an analytical solution for the frequency of capillary oscillations is obtained, taking into account the damping due to viscosity and the polarizability of the surrounding medium. The model has been applied to the estimation of the surface tension and viscosity of even-even spherical nuclei. It has been shown that the energy shift of capillary oscillations of even-even spherical nuclei due to viscous dissipation gives viscosities in the interval 4.2–7.6 MeV fm⁻²c⁻¹ for nuclei from ¹⁰⁶₄₆Pd to ¹⁹⁸₈₀Hg.

  7. Advancing Ensemble Streamflow Prediction with Stochastic Meteorological Forcings for Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Caraway, N.; Wood, A. W.; Rajagopalan, B.; Zagona, E. A.; Daugherty, L.

    2012-12-01

    River Forecast Centers of the National Weather Service (NWS) produce seasonal streamflow forecasts via a method called Ensemble Streamflow Prediction (ESP). NWS ESP forces the temperature-index SNOW17 and Sacramento Soil Moisture Accounting (SAC-SMA) models with historical weather sequences for the forecasting period, starting from the models' current watershed initial conditions, to produce ensemble streamflow forecasts. There are two major drawbacks of this method: (i) the ensembles are limited to the length of the historical record, limiting ensemble variability, and (ii) incorporating seasonal climate forecasts (e.g., El Nino Southern Oscillation) relies on adjustment or weighting of ESP streamflow sequences. These drawbacks motivate the research presented here, which has two components: (i) a multi-site stochastic weather generator and (ii) generation of ensemble weather forecast inputs to the NWS models to produce ensemble streamflow forecasts. We enhanced the K-nearest neighbor bootstrap based stochastic generator to include: (i) clustering the forecast locations into climatologically homogeneous regions to better capture the spatial heterogeneity and (ii) conditioning the weather forecasts on a probabilistic seasonal climate forecast. This multi-site stochastic weather generator runs in R and the NWS models run within the new Community Hydrologic Prediction System, a forecasting sequence we label WG-ESP. The WG-ESP framework was applied to generate ensemble forecasts of spring season (April-July) streamflow in the San Juan River Basin, one of the major tributaries of the Colorado River, for the period 1981-2010. The hydrologic model requires daily weather sequences at 66 locations in the basin. The enhanced daily weather generator sequences captured the distributional properties and spatial dependence of the climatological ESP, and also generated weather sequences consistent with conditioning on seasonal climate forecasts. Spring season ensemble forecast lead times from
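
    A K-nearest-neighbor bootstrap of the kind the authors enhance can be sketched compactly. The code below shows the core resampling step only; the feature set, neighbor count, kernel weights, and toy history are assumptions, and the clustering and climate-forecast conditioning described above are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_weather_step(state, history, k=5):
    """One step of a K-nearest-neighbor bootstrap weather generator.

    state   : today's (precip, tmax, tmin) feature vector
    history : (n_days, 3) array of historical daily weather
    Returns tomorrow's simulated weather, resampled from the successors
    of the k historical days most similar to `state`.
    """
    d = np.linalg.norm(history[:-1] - state, axis=1)   # distance to analogs
    nn = np.argsort(d)[:k]                             # k nearest neighbors
    w = 1.0 / np.arange(1, k + 1)                      # classic 1/j kernel
    pick = rng.choice(nn, p=w / w.sum())
    return history[pick + 1]                           # that day's successor

# Toy 30-year "history" of daily (precip mm, tmax C, tmin C).
history = np.column_stack([
    rng.gamma(0.5, 4.0, 10950),
    rng.normal(12, 8, 10950),
    rng.normal(2, 7, 10950),
])
sim = [history[0]]
for _ in range(10):
    sim.append(knn_weather_step(sim[-1], history))
print(np.array(sim).round(1))
```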

  8. Accommodating environmental variation in population models: metaphysiological biomass loss accounting.

    PubMed

    Owen-Smith, Norman

    2011-07-01

    1. There is a pressing need for population models that can reliably predict responses to changing environmental conditions and diagnose the causes of variation in abundance in space as well as through time. In this 'how to' article, it is outlined how standard population models can be modified to accommodate environmental variation in a heuristically conducive way. This approach is based on metaphysiological modelling concepts linking populations within food web contexts and underlying behaviour governing resource selection. Using population biomass as the currency, population changes can be considered at fine temporal scales taking into account seasonal variation. Density feedbacks are generated through the seasonal depression of resources even in the absence of interference competition. 2. Examples described include (i) metaphysiological modifications of Lotka-Volterra equations for coupled consumer-resource dynamics, accommodating seasonal variation in resource quality as well as availability, resource-dependent mortality and additive predation, (ii) spatial variation in habitat suitability evident from the population abundance attained, taking into account resource heterogeneity and consumer choice using empirical data, (iii) accommodating population structure through the variable sensitivity of life-history stages to resource deficiencies, affecting susceptibility to oscillatory dynamics and (iv) expansion of density-dependent equations to accommodate various biomass losses reducing population growth rate below its potential, including reductions in reproductive outputs. Supporting computational code and parameter values are provided. 3. The essential features of metaphysiological population models include (i) the biomass currency enabling within-year dynamics to be represented appropriately, (ii) distinguishing various processes reducing population growth below its potential, (iii) structural consistency in the representation of interacting populations and

  9. Short communication: Accounting for new mutations in genomic prediction models.

    PubMed

    Casellas, Joaquim; Esquivelzeta, Cecilia; Legarra, Andrés

    2013-08-01

    Genomic evaluation models so far do not allow for accounting of newly generated genetic variation due to mutation. The main target of this research was to extend current genomic BLUP models with mutational relationships (model AM), and compare them against standard genomic BLUP models (model A) by analyzing simulated data. Model performance and precision of the predicted breeding values were evaluated under different population structures and heritabilities. The deviance information criterion (DIC) clearly favored the mutational relationship model under large heritabilities or populations with moderate-to-deep pedigrees contributing phenotypic data (i.e., differences equal to or larger than 10 DIC units); this model provided slightly higher correlation coefficients between simulated and predicted genomic breeding values. On the other hand, null DIC differences, or even relevant advantages for the standard genomic BLUP model, were reported under small heritabilities and shallow pedigrees, although the precision of the genomic breeding values did not differ across models at a significant level. This method allows for slightly more accurate genomic predictions and handling of newly created variation; moreover, this approach does not require additional genotyping or phenotyping efforts, but a more accurate handling of available data. PMID:23746579
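
    For orientation, the baseline genomic BLUP ("model A") can be written as Henderson's mixed-model equations. The sketch below solves them for a toy data set; the simulated G matrix, heritability, and data sizes are assumptions, and the paper's mutational-relationship extension (model AM) is only noted in a comment, not implemented.

```python
import numpy as np

rng = np.random.default_rng(2)

def gblup(y, Z, G, h2):
    """Solve Henderson's mixed-model equations for a mean plus a genomic
    random effect: y = 1*mu + Z u + e, with u ~ N(0, G * sigma_u^2).

    This is the standard "model A" baseline; the paper's model AM would
    replace G with a relationship matrix augmented by mutational
    covariances, which is not reproduced here.
    """
    n, q = Z.shape
    lam = (1.0 - h2) / h2                      # sigma_e^2 / sigma_u^2
    X = np.ones((n, 1))
    # Coefficient matrix of the MME: [[X'X, X'Z], [Z'X, Z'Z + lam*G^-1]]
    C = np.block([
        [X.T @ X, X.T @ Z],
        [Z.T @ X, Z.T @ Z + lam * np.linalg.inv(G)],
    ])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(C, rhs)
    return sol[0], sol[1:]                     # mu_hat, u_hat (GEBVs)

# Toy data: 100 animals, a positive-definite genomic relationship matrix.
q = 100
M = rng.normal(size=(q, 40))
G = M @ M.T / 40 + 0.01 * np.eye(q)            # crude G, kept invertible
u_true = rng.multivariate_normal(np.zeros(q), 0.4 * G)
y = 1.0 + u_true + rng.normal(0, np.sqrt(0.6), q)
mu_hat, u_hat = gblup(y, np.eye(q), G, h2=0.4)
print(np.corrcoef(u_true, u_hat)[0, 1].round(2))
```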

  10. Accounting for Errors in Model Analysis Theory: A Numerical Approach

    NASA Astrophysics Data System (ADS)

    Sommer, Steven R.; Lindell, Rebecca S.

    2004-09-01

    By studying the patterns of a group of individuals' responses to a series of multiple-choice questions, researchers can utilize Model Analysis Theory to create a probability distribution of mental models for a student population. The eigenanalysis of this distribution yields information about what mental models the students possess, as well as how consistently they utilize said mental models. Although the theory considers the probabilistic distribution to be fundamental, there exist opportunities for random errors to occur. In this paper we discuss a numerical approach for mathematically accounting for these random errors. As an example of this methodology, an analysis of data obtained from the Lunar Phases Concept Inventory will be presented. Limitations and applicability of this numerical approach will be discussed.
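
    The eigenanalysis step, and one way to account numerically for random response errors, can be illustrated in a few lines. The toy counts, the three-model classification, and the multinomial resampling scheme below are assumptions; the authors' exact numerical procedure may differ.

```python
import numpy as np

# Each row: counts of how often a student's answers matched model 1,
# model 2, or a "null/other" model across m questions (toy data).
counts = np.array([
    [8, 2, 0],
    [6, 3, 1],
    [2, 7, 1],
    [5, 5, 0],
], dtype=float)

m = counts.sum(axis=1, keepdims=True)
U = np.sqrt(counts / m)                 # per-student model-state vectors
D = U.T @ U / len(U)                    # class "density matrix"
evals, evecs = np.linalg.eigh(D)        # eigenanalysis of mental models
order = np.argsort(evals)[::-1]
print(evals[order].round(3))            # dominance of each model pattern
print(evecs[:, order].round(3))

# Numerical error estimate: resample each student's counts multinomially
# and track the spread this induces in the leading eigenvalue.
rng = np.random.default_rng(3)
boot = []
for _ in range(200):
    c = np.array([rng.multinomial(int(mi), ci / mi)
                  for ci, mi in zip(counts, m.ravel())], dtype=float)
    Ub = np.sqrt(c / m)
    boot.append(np.linalg.eigvalsh(Ub.T @ Ub / len(Ub)).max())
print(np.std(boot).round(3))
```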

  11. Uncertainties in Surface Runoff Forecasts Driven by Probabilistic Quantitative Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    Ntelekos, A. A.; Ciach, G. J.; Georgakakos, K. P.; Krajewski, W. F.

    2004-05-01

    This work focuses on several aspects of ensemble flood forecasting with embedded input and model uncertainties. In most short-term forecasting, hydrologic models apply the input rainfall estimates assuming that they are error-free. For example, this is the case with the operational use of the Sacramento Soil Moisture Accounting (SAC-SMA) model at the US National Weather Service (NWS) River Forecast Centers (RFCs). We apply an analytical approximation of the upper soil zone equations in the SAC-SMA model to study the propagation of uncertainties in rainfall estimates into runoff generation. The ensembles of rainfall estimates are produced by randomizing the radar-rainfall arrays constructed from the WSR-88D data. These ensembles are specific outcomes of a general probabilistic quantitative precipitation estimation (PQPE) procedure currently being developed by the University of Iowa for the NWS. The parameters of the rainfall uncertainty generator describe the conditional distributions of the error process and its spatiotemporal dependences. This investigation is performed using two different uncertainty schemes. In the first scenario, only errors in the rainfall estimates are assumed. Here, the parameter values of the SAC-SMA model are fixed and based on data from a watershed located within the Illinois River basin in Oklahoma. In the second scenario, nominal uncertainties in the SAC-SMA model parameters are added. Our study aims to identify those characteristics of the radar-rainfall error process that are most responsible for the uncertainty in surface runoff production by operational hydrologic models.
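
    The propagation experiment can be mimicked with a mean-one multiplicative rainfall error model driving a crude single-reservoir stand-in for the upper soil zone. Everything below (the AR(1) log-error structure, error magnitude, and reservoir parameters) is an illustrative assumption rather than the PQPE procedure or the SAC-SMA equations themselves.

```python
import numpy as np

rng = np.random.default_rng(4)

def rainfall_ensembles(radar_rain, n_members=100, mult_std=0.5, rho=0.6):
    """Generate PQPE-style rainfall ensembles by randomizing a radar-rain
    series with temporally correlated, mean-one multiplicative lognormal
    errors (a generic scheme, not the Iowa generator itself)."""
    t = len(radar_rain)
    eps = np.zeros((n_members, t))              # AR(1) log-error process
    eps[:, 0] = rng.normal(0, 1, n_members)
    for i in range(1, t):
        eps[:, i] = (rho * eps[:, i - 1]
                     + np.sqrt(1 - rho**2) * rng.normal(0, 1, n_members))
    sigma = np.sqrt(np.log(1 + mult_std**2))
    mult = np.exp(sigma * eps - 0.5 * sigma**2)  # mean-one lognormal
    return radar_rain * mult

def upper_zone_runoff(rain, capacity=50.0, k=0.3):
    """Crude single-reservoir stand-in for the SAC-SMA upper soil zone:
    storage fills with rain, spills above capacity, drains linearly."""
    s, q = 0.0, []
    for p in rain:
        s += p
        spill = max(0.0, s - capacity)
        s = min(s, capacity)
        q.append(spill + k * s)
        s -= k * s
    return np.array(q)

radar = rng.gamma(0.6, 5.0, 48)                  # 48 h of radar rain (mm)
ens = rainfall_ensembles(radar)
runoff = np.array([upper_zone_runoff(member) for member in ens])
print(runoff.sum(axis=1).mean().round(1), runoff.sum(axis=1).std().round(1))
```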

  12. A Meta-modeling Framework to Support Accountability in Business Process Modeling

    NASA Astrophysics Data System (ADS)

    Zou, Joe; de Vaney, Christopher; Wang, Yan

    Accountability is becoming a central theme in business today in the midst of the global financial crisis, as corporate scandals and fallouts dominate the front pages of the press. Businesses are demanding more accountability measures built in at the business process modeling level. Currently, business process modeling standards and methods mainly focus on the sequential-flow aspect of business processes and leave the business aspect of accountability largely untouched. In this paper, we extend the OMG’s business modeling specifications to define a business accountability meta-model. The meta-model is complementary to the OMG’s Model-Driven Architecture (MDA) vision, laying out the foundation for future model generation and transformation for creating accountable business process solutions.

  13. Carbosoil, a land evaluation model for soil carbon accounting

    NASA Astrophysics Data System (ADS)

    Anaya-Romero, M.; Muñoz-Rojas, M.; Pino, R.; Jordan, A.; Zavala, L. M.; De la Rosa, D.

    2012-04-01

    The belowground carbon content is particularly difficult to quantify and most of the time is assumed to be a fixed fraction or ignored for lack of better information. In this respect, this research presents a land evaluation tool, Carbosoil, for predicting soil carbon where such data are scarce or unavailable, as a new component of MicroLEIS DSS. The pilot study area was a Mediterranean region (Andalusia, Southern Spain) during 1956-2007. Input data were obtained from different data sources and include 1689 soil profiles from Andalusia (S Spain). Previously, detailed studies of changes in LU and vegetation carbon stocks, and of soil organic carbon (SOC) dynamics, were carried out. Previous results showed the influence of LU, climate (mean temperature and rainfall) and soil variables related to SOC dynamics. For instance, soil carbon stocks decreased in Cambisols and Regosols by 80% when LU changed from forest to heterogeneous agricultural areas. Taking this into account, the input variables considered were LU, site (elevation, slope, erosion, type-of-drainage, and soil-depth), climate (mean winter/summer temperature and annual precipitation), and soil (pH, nitrates, CEC, sand/clay content, bulk density and field capacity). The available data set was randomly split into two parts: a training set (75%) and a validation set (25%). The model was built by using multiple linear regression. The regression coefficient (R2) obtained in the calibration and validation of Carbosoil was >0.9 for the considered soil sections (0-25, 25-50, and 50-75 cm). The validation showed the high accuracy of the model and its capacity to discriminate carbon distribution across different climate, LU and soil management scenarios. The Carbosoil model, together with the methodologies and information generated in this work, will be a useful basis for accurately quantifying and understanding the distribution of soil carbon, helpful for decision makers.
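
    The Carbosoil workflow (random 75/25 split, multiple linear regression, validation R2) is easy to emulate. The predictor list below echoes the abstract, but all values, coefficients, and units are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical predictor columns, echoing the abstract's inputs:
# land use (coded), elevation, slope, mean temperature, precipitation,
# pH, clay content. The response is an SOC stock for one soil section.
n = 400
X = np.column_stack([
    rng.integers(0, 4, n),             # land-use class (coded)
    rng.uniform(0, 1500, n),           # elevation (m)
    rng.uniform(0, 30, n),             # slope (%)
    rng.normal(16, 3, n),              # mean temperature (deg C)
    rng.gamma(4, 150, n),              # annual precipitation (mm)
    rng.normal(7.2, 0.8, n),           # pH
    rng.uniform(5, 60, n),             # clay (%)
])
beta_true = np.array([4.0, 0.002, -0.05, -0.8, 0.004, -1.2, 0.15])
y = 40 + X @ beta_true + rng.normal(0, 3, n)   # synthetic SOC (t/ha)

# Random 75/25 split, as in the Carbosoil calibration/validation design.
idx = rng.permutation(n)
train, test = idx[:300], idx[300:]
A = np.column_stack([np.ones(n), X])           # intercept + predictors
coef, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
pred = A[test] @ coef
ss_res = np.sum((y[test] - pred) ** 2)
ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
print("validation R^2:", round(1 - ss_res / ss_tot, 3))
```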

  14. Reconstruction of Danio rerio Metabolic Model Accounting for Subcellular Compartmentalisation

    PubMed Central

    Bekaert, Michaël

    2012-01-01

    Plant and microbial metabolic engineering is commonly used in the production of functional foods and quality trait improvement. Computational model-based approaches have been used in this important endeavour. However, to date, fish metabolic models have only been scarcely and partially developed, in marked contrast to their prominent success in metabolic engineering. In this study we present the reconstruction of a fully compartmentalised model of Danio rerio (zebrafish) on a global scale. This reconstruction involves extraction of known biochemical reactions in D. rerio for both primary and secondary metabolism and the implementation of methods for determining subcellular localisation and assignment of enzymes. The reconstructed model (ZebraGEM) is amenable to constraint-based modelling analysis, and accounts for 4,988 genes coding for 2,406 gene-associated reactions and only 418 non-gene-associated reactions. A set of computational validations (i.e., simulations of known metabolic functionalities and experimental data) strongly testifies to the predictive ability of the model. Overall, the reconstructed model is expected to lay down the foundations for computational-based rational design of fish metabolic engineering in aquaculture. PMID:23166792
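
    Constraint-based analysis of a reconstruction like ZebraGEM typically means flux balance analysis: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch with a four-reaction toy network (not drawn from ZebraGEM) follows.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 takes up A, R2: A -> B, R3: B -> biomass, R4 drains A.
# Rows of S are metabolites (A, B), columns are reactions; steady state
# requires S v = 0 within the flux bounds.
S = np.array([
    [1, -1,  0, -1],    # metabolite A
    [0,  1, -1,  0],    # metabolite B
])
bounds = [(0, 10), (0, 10), (0, 10), (0, 2)]
c = np.array([0, 0, -1, 0])            # maximize v3 (biomass) = minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x, -res.fun)                 # optimal flux vector, biomass flux
```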

  15. Accounting for uncertainty in distributed flood forecasting models

    NASA Astrophysics Data System (ADS)

    Cole, Steven J.; Robson, Alice J.; Bell, Victoria A.; Moore, Robert J.; Pierce, Clive E.; Roberts, Nigel

    2010-05-01

    Recent research investigating the uncertainty of distributed hydrological flood forecasting models will be presented. These findings utilise the latest advances in rainfall estimation, ensemble nowcasting and Numerical Weather Prediction (NWP). The hydrological flood model that forms the central focus of the study is the Grid-to-Grid Model or G2G: this is a distributed grid-based model that produces area-wide flood forecasts across the modelled domain. Results from applying the G2G Model across the whole of England and Wales on a 1 km grid will be shown along with detailed regional case studies of major floods, such as those of summer 2007. Accounting for uncertainty will be illustrated using ensemble rainfall forecasts from both the Met Office's STEPS nowcasting and high-resolution (~1.5 km) NWP systems. When these rainfall forecasts are used as input to the G2G Model, risk maps of flood exceedance can be produced in animated form that allow the evolving flood risk to be visualised in space and time. Risk maps for a given forecast horizon (e.g. the next 6 hours) concisely summarise a wealth of spatio-temporal flood forecast information and provide an efficient means to identify 'hot spots' of flood risk. These novel risk maps can be used to support flood warning in real-time and are being trialled operationally across England and Wales by the new joint Environment Agency and Met Office Flood Forecasting Centre.
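
    The exceedance risk maps described above amount to counting, cell by cell, the fraction of ensemble members above a flood threshold. A sketch with synthetic ensemble output (domain size, member count, and threshold are all assumed) follows.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy ensemble of distributed peak-flow forecasts: 24 rainfall ensemble
# members, each driving a gridded model over a 50 x 50 domain of 1 km
# cells (a stand-in for G2G output over the next 6 hours).
members, ny, nx = 24, 50, 50
peak_flow = rng.gamma(2.0, 15.0, size=(members, ny, nx))

# Cell-by-cell flood threshold, e.g. the flow for a 2-year return period.
threshold = np.full((ny, nx), 45.0)

# Risk map: fraction of members exceeding the threshold in each cell.
risk = (peak_flow > threshold).mean(axis=0)
hot_spots = np.argwhere(risk > 0.5)            # cells flagged for warning
print(risk.max().round(2), len(hot_spots))
```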

  16. Hydrologic Analysis for Kankakee River Watershed Streamflow Accounting Model

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Knapp, H. V.

    2014-12-01

    Streamflow frequency is used for in-stream flow needs evaluation, water supply planning, water quality analysis, and stream classification, among other purposes. The Illinois Streamflow Accounting Model (ILSAM) was developed to predict the streamflow frequency for Illinois streams and has the capacity to explore anthropogenic impacts on streamflow frequency. Over the past two decades, ILSAM has been applied to ten major watersheds in Illinois. This study updates the hydrologic analysis for the Kankakee River watershed. The hydrologic analyses used to develop the model involved evaluating streamflow records from gaging stations and developing regional equations to estimate flows at ungaged sites throughout the watersheds. Impacts to flow quantity from dams, water supplies, and treated wastewater effluents are examined. The baseline flow condition is the flow record at gaged sites which includes historic anthropogenic effects. The unaltered flow condition, influenced primarily by climate, topography, hydrogeology, and land use, is determined by separating the effects of historic human impact. The effects of the various human modifications to flow in the basin have changed substantially over the history of the available streamflow records. The present flow condition is determined by assuming that current human impact extends back throughout the history of available streamflow records, and statistical estimates are computed accordingly. Flow frequency estimates for each gaging record are adjusted to account for differences in the period of record and other factors such as the hydrologic persistence of low flow. For ungaged sites, a regional regression based on unaltered flow conditions is developed to estimate flow frequency, and adjustments are made to account for human impacts to represent the present flow condition for all sites.

  17. Learning models of PTSD: Theoretical accounts and psychobiological evidence.

    PubMed

    Lissek, Shmuel; van Meurs, Brian

    2015-12-01

    Learning abnormalities have long been centrally implicated in posttraumatic psychopathology. Indeed, of all anxiety disorders, PTSD may be most clearly attributable to discrete, aversive learning events. In PTSD, such learning is acquired during the traumatic encounter and is expressed as both conditioned fear to stimuli associated with the event and more general over-reactivity, or failure to adapt, to intense, novel, or fear-related stimuli. The relatively straightforward link between PTSD and these basic, evolutionarily old, learning processes of conditioning, sensitization, and habituation affords models of PTSD comprised of fundamental, experimentally tractable mechanisms of learning that have been well characterized across a variety of mammalian species including humans. Though such learning mechanisms have featured prominently in explanatory models of psychological maladjustment to trauma for at least 90 years, much of the empirical testing of these models has occurred only in the past two decades. The current review delineates the variety of theories forming this longstanding tradition of learning-based models of PTSD, details empirical evidence for such models, attempts an integrative account of results from this literature, and specifies limitations of, and future directions for, studies testing learning models of PTSD. PMID:25462219

  18. Descriptive accounts of thermodynamic and colloidal models of asphaltene flocculation

    SciTech Connect

    Leontaritis, K.J.; Kawanaka, S.; Mansoori, G.A.

    1987-01-01

    At present, the oil industry is combating the problem of asphaltene deposition mainly through remedial rather than preventive techniques. Mechanical and chemical cleaning methods are being improvised to maintain production, transportation, and processing of petroleum at economical levels, as a number of recent reports indicate. The research community is currently rather unfamiliar with the causes and extent of the asphaltene deposition problem. This paper reviews the experiences of the oil industry with asphaltene precipitation and presents justifications and a descriptive account for the development of two different models for asphaltene flocculation. In one of the models the authors consider the asphaltenes to be dissolved in the oil in a true liquid state and dwell upon statistical thermodynamic techniques of multicomponent mixtures to predict their phase behavior. In the other model, they consider asphaltenes to exist in oil in a colloidal state, as minute suspended particles, and utilize colloidal science techniques to predict their colloidal behavior. Experimental work over the last 40 years suggests that asphaltenes possess a wide molecular weight distribution and exist in both colloidal and dissolved states in the crude oil. Further pursuit of the subject in this direction by both the industrial and research communities is warranted.

  19. Accounting for Water Insecurity in Modeling Domestic Water Demand

    NASA Astrophysics Data System (ADS)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

    Water demand management uses price elasticity estimates to predict consumer demand in relation to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies which can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of household water consumption. The model indicates that, with all other variables held equal, a household will buy more water when the users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.

  20. Capture-recapture survival models taking account of transients

    USGS Publications Warehouse

    Pradel, R.; Hines, J.E.; Lebreton, J.D.; Nichols, J.D.

    1997-01-01

    The presence of transient animals, common enough in natural populations, invalidates the estimation of survival by traditional capture-recapture (CR) models designed for the study of residents only. Also, the study of transience is interesting in itself. We thus develop here a class of CR models to describe the presence of transients. In order to assess the merits of this approach we examine the bias of the traditional survival estimators in the presence of transients in relation to the power of different tests for detecting transients. We also compare the relative efficiency of an ad hoc approach to dealing with transients that leaves out the first observation of each animal. We then study a real example using lazuli bunting (Passerina amoena) and, in conclusion, discuss the design of an experiment aiming at the estimation of transience. In practice, the presence of transients is easily detected whenever the risk of bias is high. The ad hoc approach, which yields unbiased estimates for residents only, is satisfactory in a time-dependent context but poorly efficient when parameters are constant. The example shows that intermediate situations between strict 'residence' and strict 'transience' may exist in certain studies. Yet, most of the time, if the study design takes into account the expected length of stay of a transient, it should be possible to efficiently separate the two categories of animals.
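
    The ad hoc approach mentioned above, dropping each animal's first observation, is straightforward to express in code. The capture histories below are toy data, and the subsequent CJS survival fit is left out.

```python
import numpy as np

# Capture histories: rows = animals, columns = occasions (1 = captured).
ch = np.array([
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 0],    # never seen after first capture: possible transient
    [1, 1, 0, 1, 1],
    [0, 0, 1, 1, 0],
])

# Ad hoc approach from the abstract: drop each animal's first capture so
# the remaining history conditions on residency; survival can then be
# estimated from the trimmed histories with a standard CJS model.
trimmed = ch.copy()
first = ch.argmax(axis=1)              # index of first capture per animal
trimmed[np.arange(len(ch)), first] = 0
keep = trimmed.sum(axis=1) > 0         # animals never seen again drop out
print(trimmed[keep])
```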

  1. Compassionate Accountability in Residential Care: A Trauma Informed Model

    ERIC Educational Resources Information Center

    Cimmarusti, Rocco A.; Gamero, Soe L.

    2009-01-01

    This article examines techniques for holding youth in residential care accountable for their behavior. Based on the use of trauma treatment theory, the authors believe that holding one accountable can actually be conceptualized and put into practice as a nurturing operation. For traumatized individuals, more traditional approaches to…

  2. Conception of a cost accounting model for doctors' offices.

    PubMed

    Britzelmaier, Bernd; Eller, Brigitte

    2004-01-01

    Physicians are required, due to economic, financial, competitive, demographic and market-induced framework conditions, to pay increasing attention to the entrepreneurial administration of their offices. Because of restructuring policies throughout the public health system--on the grounds of increasing financing problems--more and better transparency of costs will be indispensable in all fields of medical activity in the future. The more cost-conscious public health insurance institutions and other public health funds will need professional cost accounting systems, which will provide, for minimum maintenance expense, standardised basic cost information as a basis for decisions. The conception of cost accounting for doctors' offices presented in this paper shows an integrated cost accounting approach based on activity and marginal costing philosophy. The conception presented provides a suitable basis for the development of standard software for cost accounting systems for doctors' offices. PMID:18048220

  3. The Impact of Growth Models on AYP Subgroup Accountability

    ERIC Educational Resources Information Center

    Radmer, Elaine Marie

    2012-01-01

    The No Child Left Behind Act (2001) increased the federal presence in the test-based accountability movement with its goal of all children meeting standard by 2014. To measure progress toward this goal, each state created a series of intermediate goals. Schools or districts that attained the goal for a given year in 9 different subgroups made…

  4. Modeling Trajectories in Social Program Outcomes for Performance Accountability

    ERIC Educational Resources Information Center

    Gordon, Rachel A.; Heinrich, Carolyn J.

    2004-01-01

    Government and public focus on accountability for program outcomes, combined with practical and ethical constraints on experimental designs, make nonexperimental studies of social programs an increasingly common approach to producing information on program performance. In this paper, we compare the effectiveness of alternative nonexperimental…

  5. Accounting for Recoil Effects in Geochronometers: A New Model Approach

    NASA Astrophysics Data System (ADS)

    Lee, V. E.; Huber, C.

    2012-12-01

    dated grain is a major control on the magnitude of recoil loss, the first feature is the ability to calculate recoil effects on isotopic compositions for realistic, complex grain shapes and surface roughnesses. This is useful because natural grains may have irregular shapes that do not conform to simple geometric descriptions. Perhaps more importantly, the surface area over which recoiled nuclides are lost can be significantly underestimated when grain surface roughness is not accounted for, since the recoil distances can be of similar characteristic lengthscales to surface roughness features. The second key feature is the ability to incorporate dynamical geologic processes affecting grain surfaces in natural settings, such as dissolution and crystallization. We describe the model and its main components, and point out implications for the geologically-relevant chronometers mentioned above.

  6. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329
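
    The two weighting schemes contrasted above follow the same recipe: convert an information criterion into normalized weights and average model outputs. A minimal sketch (the function name and all numbers are illustrative) follows.

```python
import numpy as np

def information_weights(log_liks, n_params, n_obs, criterion="AIC"):
    """Model-averaging weights from an information criterion.

    AIC weighting reflects the predictive emphasis the authors favor;
    BIC gives the asymptotically consistent alternative they contrast.
    """
    log_liks, n_params = np.asarray(log_liks), np.asarray(n_params)
    if criterion == "AIC":
        ic = -2 * log_liks + 2 * n_params
    else:  # BIC
        ic = -2 * log_liks + n_params * np.log(n_obs)
    d = ic - ic.min()
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Three candidate cost-effectiveness model structures (toy numbers).
w = information_weights(log_liks=[-520.3, -518.9, -519.6],
                        n_params=[4, 6, 5], n_obs=300)
# Averaged incremental-cost estimate across the competing structures:
print(np.dot(w, [1200.0, 950.0, 1100.0]).round(0), w.round(2))
```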

  7. 31 CFR Appendix A to Part 212 - Model Notice to Account Holder

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false Model Notice to Account Holder A... CONTAINING FEDERAL BENEFIT PAYMENTS Pt. 212, App. A Appendix A to Part 212—Model Notice to Account Holder A financial institution may use the following model notice to meet the requirements of § 212.7. Although...

  8. 31 CFR Appendix A to Part 212 - Model Notice to Account Holder

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false Model Notice to Account Holder A... CONTAINING FEDERAL BENEFIT PAYMENTS Pt. 212, App. A Appendix A to Part 212—Model Notice to Account Holder A financial institution may use the following model notice to meet the requirements of § 212.7. Although...

  9. 31 CFR Appendix A to Part 212 - Model Notice to Account Holder

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false Model Notice to Account Holder A... CONTAINING FEDERAL BENEFIT PAYMENTS Pt. 212, App. A Appendix A to Part 212—Model Notice to Account Holder A financial institution may use the following model notice to meet the requirements of § 212.7. Although...

  10. Refining metabolic models and accounting for regulatory effects.

    PubMed

    Kim, Joonhoon; Reed, Jennifer L

    2014-10-01

    Advances in genome-scale metabolic modeling allow us to investigate and engineer metabolism at a systems level. Metabolic network reconstructions have been made for many organisms and computational approaches have been developed to convert these reconstructions into predictive models. However, due to incomplete knowledge these reconstructions often have missing or extraneous components and interactions, which can be identified by reconciling model predictions with experimental data. Recent studies have provided methods to further improve metabolic model predictions by incorporating transcriptional regulatory interactions and high-throughput omics data to yield context-specific metabolic models. Here we discuss recent approaches for resolving model-data discrepancies and building context-specific metabolic models. Once developed highly accurate metabolic models can be used in a variety of biotechnology applications. PMID:24632483

  11. Teacher Effects, Value-Added Models, and Accountability

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2014-01-01

    Background: In the last decade, the effects of teachers on student performance (typically manifested as state-wide standardized tests) have been re-examined using statistical models that are known as value-added models. These statistical models aim to compute the unique contribution of the teachers in promoting student achievement gains from grade…

  12. Resource Allocation Models and Accountability: A Jamaican Case Study

    ERIC Educational Resources Information Center

    Nkrumah-Young, Kofi K.; Powell, Philip

    2008-01-01

    Higher education institutions (HEIs) may be funded privately, by the state or by a mixture of the two. Nevertheless, any state financing of HE necessitates a mechanism to determine the level of support and the channels through which it is to be directed; that is, a resource allocation model. Public funding, through resource allocation models,…

  13. Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems

    EPA Science Inventory

    Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...

  14. The Development of Conditional Reasoning: A Mental Model Account.

    ERIC Educational Resources Information Center

    Markovits, Henry; Barrouillet, Pierre

    2002-01-01

    Proposes a variant of mental model theory which suggests that the development of conditional reasoning (if--then) can be explained by such factors as the capacity of working memory, range of knowledge available to a reasoner, and his/her ability to access this knowledge "on-line." Finds much empirical data explained by this model. (Author/SD)

  15. A PDS Governance Model: Building Collaboration and Accountability.

    ERIC Educational Resources Information Center

    MacIsaac, Douglas; Tichenor, Mercedes; Heins, Elizabeth

    This paper describes the evolution of a Professional Development School (PDS) governance model that began with a one-school partnership and moved to a multiple site partnership network. The governance model is based on a union formed between a private university and a local school district. It emphasizes shared decision making and collaborative…

  16. 31 CFR Appendix A to Part 212 - Model Notice to Account Holder

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false Model Notice to Account Holder A Appendix A to Part 212 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... CONTAINING FEDERAL BENEFIT PAYMENTS Pt. 212, App. A Appendix A to Part 212—Model Notice to Account Holder...

  17. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    ERIC Educational Resources Information Center

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  18. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-20

    ... HUMAN SERVICES Centers for Medicare & Medicaid Services Medicare Program; Pioneer Accountable Care... participate in the Pioneer Accountable Care Organization Model for a period beginning in 2011 and ending...://innovations.cms.gov/areas-of-focus/seamless-and-coordinated-care-models/pioneer-aco . Application...

  19. A Historical Account of the Hypodermic Model in Mass Communication.

    ERIC Educational Resources Information Center

    Bineham, Jeffery L.

    1988-01-01

    Critiques different historical conceptions of mass communication research. Argues that the different conceptions of the history of mass communication research, and of the hypodermic model (viewing the media as an all-powerful and direct influence on society), influence the theoretical and methodological choices made by mass media scholars. (MM)

  20. Mental Models and the Suppositional Account of Conditionals

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Gauffroy, Caroline; Lecas, Jean-Francois

    2008-01-01

    The mental model theory of conditional reasoning presented by P. N. Johnson-Laird and R. M. J. Byrne (2002) has recently been the subject of criticisms (e.g., J. St. B. T. Evans, D. E. Over, & S. J. Handley, 2005). The authors argue that the theoretical conflict can be resolved by differentiating 2 kinds of reasoning, reasoning about possibilities…

  1. Facilitative Orthographic Neighborhood Effects: The SERIOL Model Account

    ERIC Educational Resources Information Center

    Whitney, Carol; Lavidor, Michal

    2005-01-01

    A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the…

  2. Leadership Accountability Models: Issues of Policy and Practice.

    ERIC Educational Resources Information Center

    Wallace, Stephen O.; Sweatt, Owen; Acker-Hocevar, Michele

    This paper explores two questions: "What philosophical views of educational leadership will adequately allow us to meet the demands of a rapidly changing world?" and "How should such leadership be assessed?" The article asserts that evaluation of educational leadership needs to break away from the limitations of restrictive models to become…

  3. Modeling tools to Account for Ethanol Impacts on BTEX Plumes

    EPA Science Inventory

    Widespread usage of ethanol in gasoline leads to impacts at leak sites which differ from those of non-ethanol gasolines. The presentation reviews current research results on the distribution of gasoline and ethanol, biodegradation, phase separation and cosolvancy. Model results f...

  4. Modeling of Explosion Gas Dynamics with Account of Detonation

    NASA Astrophysics Data System (ADS)

    Morozov, D. O.

    2013-11-01

    The physical and hydrodynamic processes in the initial phase of explosion of condensed explosives in the air have been considered. The role of the processes of energy release connected with the explosive detonation has been analyzed. The equations of formal kinetics for modeling the processes of transformation of the original substance into detonation products have been described. The results obtained with the use of the equation of state of an ideal gas with a constant adiabatic index have been compared with calculations, where for the equation of state wide-range tables of properties of the air and explosion products were used. The stage of detonation of an explosive is included in the self-consistent hydrodynamic model used for describing the explosion processes from the moment of initiation of the detonation wave to the moment the air shock wave is formed, as well as in describing its propagation and attenuation.

  5. A Mathematical Model of Sentimental Dynamics Accounting for Marital Dissolution

    PubMed Central

    Rey, José-Manuel

    2010-01-01

    Background Marital dissolution is ubiquitous in western societies. It poses major scientific and sociological problems both in theoretical and therapeutic terms. Scholars and therapists agree on the existence of a sort of second law of thermodynamics for sentimental relationships. Effort is required to sustain them. Love is not enough. Methodology/Principal Findings Building on a simple version of the second law we use optimal control theory as a novel approach to model sentimental dynamics. Our analysis is consistent with sociological data. We show that, when both partners have similar emotional attributes, there is an optimal effort policy yielding a durable happy union. This policy is prey to structural destabilization resulting from a combination of two factors: there is an effort gap because the optimal policy always entails discomfort and there is a tendency to lower effort to non-sustaining levels due to the instability of the dynamics. Conclusions/Significance These mathematical facts implied by the model unveil an underlying mechanism that may explain couple disruption in real scenarios. Within this framework the apparent paradox that a union consistently planned to last forever will probably break up is explained as a mechanistic consequence of the second law. PMID:20360987

  6. Dynamic model of production enterprises based on accounting registers and its identification

    NASA Astrophysics Data System (ADS)

    Sirazetdinov, R. T.; Samodurov, A. V.; Yenikeev, I. A.; Markov, D. S.

    2016-06-01

    The report focuses on the mathematical modeling of economic entities based on accounting registers. A dynamic model of the financial and economic activity of an enterprise is developed as a system of differential equations, algorithms are created for identifying the parameters of the dynamic model, and the model is constructed and identified for Russian machine-building enterprises.

  7. Key Elements for Educational Accountability Models in Transition: A Guide for Policymakers

    ERIC Educational Resources Information Center

    Klau, Kenneth

    2010-01-01

    State educational accountability models are in transition. Whether modifying the present accountability system to comply with existing state and federal requirements or anticipating new ones--such as the U.S. Department of Education's (ED) Race to the Top competition--recording the experiences of state education agencies (SEAs) that are currently…

  8. A Dynamic Simulation Model of the Management Accounting Information Systems (MAIS)

    NASA Astrophysics Data System (ADS)

    Konstantopoulos, Nikolaos; Bekiaris, Michail G.; Zounta, Stella

    2007-12-01

    The aim of this paper is to examine the factors which determine the problems and the advantages on the design of management accounting information systems (MAIS). A simulation is carried out with a dynamic model of the MAIS design.

  9. A Teacher Accountability Model for Overcoming Self-Exclusion of Pupils

    ERIC Educational Resources Information Center

    Jamal, Abu-Hussain; Tilchin, Oleg; Essawi, Mohammad

    2015-01-01

    Self-exclusion of pupils is one of the prominent challenges of education. In this paper we propose the TERA model, which shapes the process of creating formative accountability of teachers to overcome the self-exclusion of pupils. Development of the model includes elaboration and integration of interconnected model components. The TERA model…

  10. Modeling Floods under Climate Change Condition in Otava River, Czech Republic: A Time Scale Issue

    NASA Astrophysics Data System (ADS)

    Danhelka, J.; Krejci, J.; Vlasak, T.

    2009-04-01

    While modeling of climate change (CC) impacts on low flow and water balance is commonly done using daily time series of Global Circulation Model (GCM) outputs, assessing CC impacts on rare events such as floods demands a special methodology. This paper demonstrates the methodology, its results, and their sensitivity to simulation length in a meso-scale basin. Multiple regional projections of temperature and precipitation under the A2, A1B and B1 scenarios for 2040-2069 were evaluated in a study of the Czech Hydrometeorological Institute and Charles University (Pretel et al. 2008) for the Czech Republic. Daily time series (precipitation, Tmax, Tmin) of 30-year and 100-year length were generated using LARS-WG (Semenov, 2008) based on the expected monthly change of temperature and precipitation amount and variability for the upper Otava river basin, a mountainous region in SW Bohemia. Daily precipitation was distributed into a 6 h time step using a three-step random generator. Spatial distribution of precipitation was based on random sampling of relevant historical analogues, while temperature was distributed using a simple vertical gradient rule. The derived time series of the A2, A1B, B1 and recent climate (RC) scenarios were input to the calibrated hydrological modeling system AquaLog (using SAC-SMA for rainfall-runoff modeling). A correction of the SAC-SMA parameter defining potential evapotranspiration under a changed climate was applied. Evaluation was made for the Susice profile (534.5 km2), representing the mountainous part of the basin, and the downstream Katovice profile (1133.4 km2). Results confirmed the expected decrease of annual flow by 5-10 % (10-15 % in summer, 0-5 % in winter) for all modeled CC scenarios (for the period 2040-2069) compared to the recent climate. Design flows were computed based on yearly peaks using standard methodology. A decrease in design flow curves was observed for Katovice, while no change (A1B, B1) or an increase (A2) was found for Susice in the 100-year time series. Estimates of 100y floods based on 30 or 100 years
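
    The daily-to-6h disaggregation can be sketched as an analog-plus-perturbation generator. The reading of the "three-step random generator" below is a guess made for illustration only; the study's actual steps are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

def disaggregate_daily(p_day, analogs):
    """Split one daily precipitation total into four 6-h values.

    A minimal reading of the paper's three-step random generator:
    (1) pick a historical analog day's 6-h pattern at random,
    (2) perturb its fractions, (3) renormalize so the total is kept.
    The actual generator in the study may differ in each step.
    """
    frac = analogs[rng.integers(len(analogs))]              # step 1
    frac = np.maximum(frac * rng.uniform(0.7, 1.3, 4), 0)   # step 2
    frac = frac / frac.sum() if frac.sum() > 0 else np.full(4, 0.25)
    return p_day * frac                                     # step 3

# Hypothetical library of observed 6-h fractions for wet days.
analogs = np.array([[0.1, 0.5, 0.3, 0.1],
                    [0.4, 0.4, 0.1, 0.1],
                    [0.0, 0.2, 0.6, 0.2]])
print(disaggregate_daily(12.0, analogs).round(2))           # mm per 6 h
```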

  11. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  12. School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

    ERIC Educational Resources Information Center

    van Barneveld, Christina; Stienstra, Wendy; Stewart, Sandra

    2006-01-01

    For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement plans,…

  13. Testing the limits of the 'joint account' model of genetic information: a legal thought experiment.

    PubMed

    Foster, Charles; Herring, Jonathan; Boyd, Magnus

    2015-05-01

    We examine the likely reception in the courtroom of the 'joint account' model of genetic confidentiality. We conclude that the model, as modified by Gilbar and others, is workable and reflects, better than more conventional legal approaches, both the biological and psychological realities and the obligations owed under Articles 8 and 10 of the European Convention on Human Rights (ECHR). PMID:24965717

  14. Articulated Instruction Objectives Guide for Accounting (Module 5.0--Accounting I) (Module 6.0--Accounting II). Project Period, March 1981-February 1982 (Pilot Model). Edition I.

    ERIC Educational Resources Information Center

    Chandler, Wylda; And Others

    Developed during a project designed to provide a continuous, competency-based line of vocational training in business and office education programs at the secondary and postsecondary levels, this package consists of an instructor's guide and learning modules for use in Accounting I and II. Various aspects of implementing and articulating secondary…

  15. A Distributed Hydrologic Model, HL-RDHM, for Flash Flood Forecasting in Hawaiian Watersheds

    NASA Astrophysics Data System (ADS)

    Fares, A.; Awal, R.; Michaud, J.; Chu, P.; Fares, S.; Kevin, K.; Rosener, M.

    2012-12-01

    Hawai'i's watersheds are flash flood prone due to their small contributing areas and frequent, intense, spatially variable precipitation. Accurate simulation of the hydrology of these watersheds should incorporate the spatial variability of at least the major input data, e.g., precipitation. The goal of this study is to evaluate the performance of the U.S. National Weather Service Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) in flash flood forecasting at Hanalei watershed, Kauai, Hawai'i. Major limitations of using HL-RDHM in Hawaii are: i) Hawaii lies outside the Hydrologic Rainfall Analysis Project (HRAP) coordinate system of the continental US (CONUS); ii) a priori SAC-SMA parameter grids are unavailable; and iii) hourly multi-sensor NEXRAD-based precipitation grids are absent. The specific objectives of this study were to i) run HL-RDHM outside the CONUS domain, and ii) evaluate its performance for flash flood forecasting in the flood-prone Hanalei watershed. We i) modified the HRAP coordinate system; ii) generated precipitation grids at different resolutions using data from 20 precipitation gauges, five of which were within the Hanalei watershed; and iii) generated SAC-SMA and routing parameter grids for the modified HRAP coordinate system. The one-HRAP grid (4 km x 4 km) was not sufficiently accurate: its basin-averaged annual hourly precipitation was comparatively lower than that of the ½ and ¼ HRAP grids. The performance of HL-RDHM using basin-averaged and distributed a priori grids was reasonable even with non-optimized a priori parameter values for the 2008 data. HL-RDHM reasonably matched the observed streamflow peak magnitudes and times to peak during the calibration and validation periods. Overall, HL-RDHM performance is "good" to "very good" if we use input data of finer resolution grids (½ HRAP or ¼ HRAP) and precipitation grids interpolated from sufficient data of
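
    As a hedged illustration of the gridding step, the sketch below interpolates gauge precipitation onto a grid with inverse-distance weighting; the study's actual interpolation scheme is not stated in this record, and all coordinates and values are made up.

      import numpy as np

      def idw_grid(gauge_xy, gauge_p, grid_x, grid_y, power=2.0):
          """Interpolate gauge precipitation onto a regular grid by IDW."""
          gx, gy = np.meshgrid(grid_x, grid_y)
          pts = np.column_stack([gx.ravel(), gy.ravel()])
          d = np.linalg.norm(pts[:, None, :] - gauge_xy[None, :, :], axis=2)
          d = np.maximum(d, 1e-6)            # avoid division by zero at gauges
          w = 1.0 / d**power
          z = (w @ gauge_p) / w.sum(axis=1)
          return z.reshape(gx.shape)

      gauge_xy = np.array([[2.0, 3.0], [5.5, 1.0], [7.0, 6.5],
                           [1.0, 8.0], [4.0, 5.0]])      # km, made up
      gauge_p = np.array([12.0, 4.0, 8.5, 15.0, 10.0])   # hourly totals, mm
      grid = idw_grid(gauge_xy, gauge_p,
                      np.linspace(0.0, 8.0, 5), np.linspace(0.0, 9.0, 5))
      print(grid.round(1))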

  16. Accounting for downscaling and model uncertainties in examining the impacts of climate change on hydrological systems

    NASA Astrophysics Data System (ADS)

    Franklin, M.; Yan, E.; Demissie, Y.

    2010-12-01

    Statistical downscaling is a widely used method of transforming global climate model output to a regional or local scale for impact assessment studies. Uncertainties, both in the predictions generated through statistical downscaling and in the climate model simulations themselves, are rarely accounted for in the resultant downscaled climate parameters. Using observational meteorological data from 130 weather stations located in the upper midwest region of the U.S. and the 30-member ensemble of Community Climate System Model forecasts under the A1B SRES scenario, probability distribution functions (PDF) accounting for the aforementioned downscaling and model uncertainties were generated for daily precipitation, maximum and minimum temperature. Two-stage downscaling was performed for each model ensemble member resulting in 30 daily estimates of temperature and precipitation for each weather station. As temperature is a much smoother spatial and temporal process than precipitation, separate downscaling methods were developed for these two parameters. The standard errors from the downscaling stages were retained to quantify uncertainty in the estimates. Combined with the 30 realizations for each day, PDFs were generated that characterize both sources of uncertainty. Repeated samples drawn from the resultant PDFs served as inputs to the Soil and Water Assessment Tool (SWAT) hydrological model. The impact of climate change, accounting for uncertainty in downscaling and the climate model, on the hydrological cycle of the upper Mississippi river basin was assessed. Sensitivity in the SWAT model to uncertainty in the input parameters was also examined.
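
    A minimal sketch of the sampling idea, under stated assumptions: each draw picks one of the 30 downscaled ensemble members (climate-model uncertainty) and perturbs it with the retained downscaling standard error (downscaling uncertainty). The series, the standard error, and the normal error model are all illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n_members, n_days = 30, 365
      # Placeholder downscaled daily Tmax for each ensemble member (deg C).
      member_tmax = (25.0 + 8.0 * np.sin(2 * np.pi * np.arange(n_days) / 365)
                     + rng.normal(0.0, 1.5, size=(n_members, n_days)))
      downscale_se = 0.8        # retained regression standard error (deg C)

      def draw_realization(rng):
          m = rng.integers(n_members)                    # climate-model spread
          noise = rng.normal(0.0, downscale_se, n_days)  # downscaling error
          return member_tmax[m] + noise

      samples = np.array([draw_realization(rng) for _ in range(1000)])
      print("day-180 Tmax 5th/50th/95th percentiles:",
            np.percentile(samples[:, 180], [5, 50, 95]).round(2))

    Repeated draws like these can then be fed to a hydrological model such as SWAT, so the spread of simulated flows reflects both uncertainty sources.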

  17. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions.

    PubMed

    Mohr, David C; Cuijpers, Pim; Lehman, Kenneth

    2011-01-01

    The effectiveness of and adherence to eHealth interventions is enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical literature, that can guide research into human support components of eHealth interventions. A review of the literature revealed little relevant information from clinical sciences. Applicable literature was drawn primarily from organizational psychology, motivation theory, and computer-mediated communication (CMC) research. We have developed a model, referred to as "Supportive Accountability." We argue that human support increases adherence through accountability to a coach who is seen as trustworthy, benevolent, and having expertise. Accountability should involve clear, process-oriented expectations that the patient is involved in determining. Reciprocity in the relationship, through which the patient derives clear benefits, should be explicit. The effect of accountability may be moderated by patient motivation. The more intrinsically motivated patients are, the less support they likely require. The process of support is also mediated by the communications medium (eg, telephone, instant messaging, email). Different communications media each have their own potential benefits and disadvantages. We discuss the specific components of accountability, motivation, and CMC medium in detail. The proposed model is a first step toward understanding how human support enhances adherence to eHealth interventions. Each component of the proposed model is a testable hypothesis. As we develop viable human support models, these should be manualized to facilitate dissemination. PMID:21393123

  19. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the updating structural parameter considered, with its mean and variance modeled as functions of temperature and excitation amplitude. The modal parameters identified over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting only for the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  20. The Politics and Statistics of Value-Added Modeling for Accountability of Teacher Preparation Programs

    ERIC Educational Resources Information Center

    Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas

    2014-01-01

    Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…

  1. A Process Assessment Model for Evaluation, Improvement & Accountability in Effectively Meeting Organizational Purpose and Goals.

    ERIC Educational Resources Information Center

    Packard, Richard D.; Dereshiwsky, Mary I.

    A process for evaluating the effectiveness of educational organizations, with a focus on accountability, is described. An evaluation of 15 pilot-test school districts in the Arizona Career Ladder Project reveals the existence of a major discrepancy between meeting program requirements and achieving program success. A theoretical model of…

  2. Students' Use of the Energy Model to Account for Changes in Physical Systems

    ERIC Educational Resources Information Center

    Papadouris, Nico; Constantinou, Constantinos P.; Kyratsi, Theodora

    2008-01-01

    The aim of this study is to explore the ways in which students, aged 11-14 years, account for certain changes in physical systems and the extent to which they draw on an energy model as a common framework for explaining changes observed in diverse systems. Data were combined from two sources: interviews with 20 individuals and an open-ended…

  3. An Expansion of the Trait-State-Occasion Model: Accounting for Shared Method Variance

    ERIC Educational Resources Information Center

    LaGrange, Beth; Cole, David A.

    2008-01-01

    This article examines 4 approaches for explaining shared method variance, each applied to a longitudinal trait-state-occasion (TSO) model. Many approaches have been developed to account for shared method variance in multitrait-multimethod (MTMM) data. Some of these MTMM approaches (correlated method, orthogonal method, correlated method minus one,…

  4. Developing a Model for Identifying Students at Risk of Failure in a First Year Accounting Unit

    ERIC Educational Resources Information Center

    Smith, Malcolm; Therry, Len; Whale, Jacqui

    2012-01-01

    This paper reports on the process involved in attempting to build a predictive model capable of identifying students at risk of failure in a first year accounting unit in an Australian university. Identifying attributes that contribute to students being at risk can lead to the development of appropriate intervention strategies and support…

  5. Accounting for Accountability.

    ERIC Educational Resources Information Center

    Colorado State Dept. of Education, Denver. Cooperative Accountability Project.

    This publication reports on two Regional Educational Accountability Conferences on Techniques sponsored by the Cooperative Accountability Project. Accountability is described as an "emotionally-charged issue" and an "operationally demanding concept." Overviewing accountability, major speakers emphasized that accountability is a means toward…

  6. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robust calibrated parameters are necessary for hydrologic models to provide reliable predictions of watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, the normalized root mean square error, or the Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
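
    The following sketch illustrates the role of the objective function in calibration: a toy one-parameter linear reservoir is calibrated by maximizing Nash-Sutcliffe efficiency (NSE). SciPy's differential_evolution stands in for SCE-UA, Micro-GA, and DREAM, which are not SciPy functions; the rainfall and "observed" flows are synthetic.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(1)
      rain = rng.gamma(shape=0.5, scale=6.0, size=400)       # synthetic mm/day

      def linear_reservoir(k, rain):
          s, q = 0.0, np.empty_like(rain)
          for t, p in enumerate(rain):
              s += p                    # add the day's rainfall to storage
              q[t] = k * s              # outflow proportional to storage
              s -= q[t]
          return q

      q_obs = linear_reservoir(0.3, rain) + rng.normal(0.0, 0.2, rain.size)

      def neg_nse(params):              # negative Nash-Sutcliffe efficiency
          q_sim = linear_reservoir(params[0], rain)
          return -(1.0 - np.sum((q_obs - q_sim) ** 2)
                         / np.sum((q_obs - q_obs.mean()) ** 2))

      res = differential_evolution(neg_nse, bounds=[(0.01, 0.99)], seed=1)
      print(f"calibrated k = {res.x[0]:.3f}, NSE = {-res.fun:.3f}")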

  7. Aviation security cargo inspection queuing simulation model for material flow and accountability

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Rose, Terri A; Brumback, Daryl L

    2009-01-01

    Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we develop an aviation security cargo inspection queuing simulation model for material flow and accountability that will allow cargo managers to conduct impact studies of current and proposed business practices as they relate to inspection procedures, material flow, and accountability.
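
    A toy sketch of the kind of queuing computation involved (the paper's simulation is far more detailed): Poisson cargo arrivals served by parallel inspection stations with exponential service times, reporting mean wait and station utilization. All rates and counts are illustrative assumptions, so impact studies would amount to re-running it with different staffing or throughput parameters.

      import heapq
      import numpy as np

      def simulate(lmbda=8.0, mu=1.5, c=6, n_items=20000, seed=7):
          """Cargo arrives at rate lmbda/h; c stations inspect at mu/h each."""
          rng = np.random.default_rng(seed)
          arrivals = np.cumsum(rng.exponential(1.0 / lmbda, n_items))
          free_at = [0.0] * c                 # when each station next frees up
          heapq.heapify(free_at)
          waits, busy = [], 0.0
          for t in arrivals:
              earliest = heapq.heappop(free_at)   # first station to free up
              start = max(t, earliest)            # wait if all stations busy
              service = rng.exponential(1.0 / mu)
              heapq.heappush(free_at, start + service)
              waits.append(start - t)
              busy += service
          makespan = max(max(free_at), arrivals[-1])
          return np.mean(waits), busy / (c * makespan)

      mean_wait, utilization = simulate()
      print(f"mean wait {mean_wait:.2f} h, station utilization {utilization:.0%}")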

  8. Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad

    2006-01-01

    This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'propagate-burn-propagate' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons between it and three methods using modified linearized covariance propagation are made. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. In the second method, the maneuver is not modeled but the uncertainty in its magnitude is accounted for by the inclusion of a process noise matrix. In the third method, which is essentially a hybrid of the first two, the nominal portion of the maneuver is included via the state transition matrix while a process noise matrix is used to account for the magnitude uncertainty. Since this method also correctly accounts for the presence of the maneuver in the nominal orbit, it is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, Pc. However, applications for the two other methods exist and are briefly discussed. Despite the fact that the process model ('propagate-burn-propagate') that was studied was very simple - point-mass gravitational effects due to the Earth combined with an impulsive delta-V in the velocity direction for the maneuver - generalizations to more complex scenarios, including high fidelity force models, finite duration maneuvers, and maneuver pointing errors, are straightforward and are discussed in the conclusion.

  9. Cost accounting models used for price-setting of health services: an international review.

    PubMed

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals. PMID:25082465

  10. Predicting NonInertial Effects with Algebraic Stress Models which Account for Dissipation Rate Anisotropies

    NASA Technical Reports Server (NTRS)

    Jongen, T.; Machiels, L.; Gatski, T. B.

    1997-01-01

    Three types of turbulence models which account for rotational effects in noninertial frames of reference are evaluated for the case of incompressible, fully developed rotating turbulent channel flow. The three types are a Coriolis-modified eddy-viscosity model, a realizable algebraic stress model, and an algebraic stress model which accounts for dissipation rate anisotropies. A direct numerical simulation of a rotating channel flow is used for turbulence model validation. This simulation differs from previous studies in that significantly higher rotation numbers are investigated. Flows at these higher rotation numbers are characterized by relaminarization on the cyclonic (suction) side of the channel and a linear velocity profile on the anticyclonic (pressure) side. The predictive performance of the three types of models is examined in detail, and formulation deficiencies are identified which cause poor predictive performance for some of the models. Criteria are identified which allow accurate prediction of such flows by algebraic stress models and their corresponding Reynolds stress formulations.

  11. A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks

    PubMed Central

    Wang, Ping; Zhang, Lin; Li, Victor O. K.

    2013-01-01

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708

  12. Accounting for cross-sectoral linkages of climate change impacts based on multi-model projections

    NASA Astrophysics Data System (ADS)

    Frieler, Katja

    2013-04-01

    Understanding how natural and human systems will be affected by climate change is not possible without accounting for cascading effects across different sectors. However, cross-sectoral inter-linkages remain strongly underrepresented in model-based assessments of climate change impacts. Based on the currently unique cross-sectoral multi-model data set generated for ISI-MIP (the first Inter-Sectoral Impact Model Intercomparison Project), we investigate climate-induced adaptation pressures on the global food production system, taking into account cross-sectoral co-limitations and response options, and quantifying uncertainties due to the different model categories involved (climate, crop, hydrology and ecosystem models). Results from 7 global crop models are synthesised to analyse changes in global wheat, maize, rice, and soy production as a function of global mean warming, on current agricultural land. To integrate constraints on the availability of water, we propose a simple approach to estimate the maximum possible increase in global production given the limitations of renewable irrigation water projected by 11 global hydrological models. The effect is compared to the production increase due to land-use changes suggested by the demand-fulfilling agro-economic model MAgPIE. While providing production increases, the extension of farmland exerts strong pressure on natural vegetation systems. This pressure is in turn compared to the pressure on natural vegetation induced by climate change itself. The analysis provides a cross-sectoral synthesis of the ISI-MIP results.

  13. Meta-analysis of diagnostic tests accounting for disease prevalence: a new model using trivariate copulas.

    PubMed

    Hoyer, A; Kuss, O

    2015-05-20

    In real life, and somewhat contrary to biostatistical textbook knowledge, the sensitivity and specificity (and not only the predictive values) of diagnostic tests can vary with the underlying prevalence of disease. In meta-analysis of diagnostic studies, accounting for this fact naturally leads to a trivariate expansion of the traditional bivariate logistic regression model with random study effects. In this paper, a new model is proposed using trivariate copulas and beta-binomial marginal distributions for sensitivity, specificity, and prevalence as an expansion of the bivariate model. Two different copulas are used: the trivariate Gaussian copula and a trivariate vine copula based on the bivariate Plackett copula. The model has a closed-form likelihood, so standard software (e.g., SAS PROC NLMIXED) can be used. The results of a simulation study show that the copula models perform at least as well as, and frequently better than, the standard model. The methods are illustrated by two examples. PMID:25712874
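
    A sketch of the data-generating idea behind the Gaussian-copula variant: draw correlated uniforms through a Gaussian copula, then map them to beta marginals for sensitivity, specificity, and prevalence. The correlation matrix and beta parameters are illustrative, not the paper's estimates, and the beta-binomial observation layer is omitted.

      import numpy as np
      from scipy.stats import norm, beta

      rng = np.random.default_rng(3)
      R = np.array([[1.0,  0.4,  0.3],    # copula correlations among
                    [0.4,  1.0, -0.2],    # (sensitivity, specificity, prevalence)
                    [0.3, -0.2,  1.0]])
      z = rng.multivariate_normal(np.zeros(3), R, size=5000)
      u = norm.cdf(z)                               # correlated uniforms
      sens = beta.ppf(u[:, 0], 18, 4)               # study-level sensitivity
      spec = beta.ppf(u[:, 1], 30, 3)               # study-level specificity
      prev = beta.ppf(u[:, 2], 5, 20)               # study-level prevalence
      print(np.corrcoef([sens, spec, prev]).round(2))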

  14. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection

    USGS Publications Warehouse

    Zipkin, Elise F.; Grant, Evan H. Campbell; Fagan, William F.

    2012-01-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multi-species hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions of species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation dataset. We found that wetland hydroperiod (the length of time that a wetland holds water) as well as the occurrence state in the prior year were generally the most important factors in determining occupancy. The model with only habitat covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multi-species models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for

  15. Solitary Waves in the Model of Active Media, Taking into Account Effects of Relaxation

    NASA Astrophysics Data System (ADS)

    Likus, W.; Vladimirov, V. A.

    2015-04-01

    We study a system of differential equations simulating transport phenomena in active structured media. The model is a generalization of McKean's modification of the celebrated FitzHugh-Nagumo system, describing the nerve impulse propagation in axon. It takes into account the effects of memory, connected with the presence of internal structure. We construct explicitly the localized traveling wave solutions and analyze their stability.

  16. Contributions of Land Inventory and Biometrics for Characterizing Disturbance in Ecosystem and Carbon Accounting Models

    NASA Astrophysics Data System (ADS)

    Birdsey, R.; Pan, Y.

    2004-12-01

    Nearly all forest lands of the U.S. are disturbed or affected by disturbance. Each decade about half of the total forest land area is disturbed by harvesting, grazing, wildfire, pests, and other natural causes. Because of the patchy nature of forest disturbances and the amount of edge, a large proportion of undisturbed interior forest area is affected by disturbances. Some ecosystem and carbon accounting models require spatially explicit input data about the frequency or effects of disturbance. Current and historical information about forest disturbances includes both geospatial and statistical data. Examples of geospatial data sets from land inventories include maps of insect defoliation and N deposition. Examples of primarily statistical data sets that can be made into geospatial data sets include county-level statistics from forest inventories and other census data about land use or cover. A variety of techniques are available to convert statistical data sets into geospatial data sets. Many ecosystem and carbon accounting models lack the ability to simulate the dynamics of disturbance and instead represent only potential forest vegetation. Several techniques are being developed in large-scale ecosystem models to address this issue. One approach is illustrated by Production Efficiency Models (PEMs), which use satellite-derived information to estimate vegetation productivity and C changes affected by landscape changes and climatic variability. Models that rely primarily on remote-sensing information lack the ability to separate kinds of disturbances that may have similar canopy impacts, such as forest management and land use change. Also, PEMs lack the capacity to detect impacts of global change stressors such as CO2, N deposition and ozone. Another approach is to combine disturbance models with ecosystem models. Such model combinations use different integration approaches. Whether using ecosystem models to provide growth information to parameterize disturbance models, or

  17. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    NASA Astrophysics Data System (ADS)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt-rate enhancement and reduction due to thin and thick debris layers, respectively. However, such models require a large amount of input data that are often unavailable, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean-glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor, which does not account for the influence of varying debris thickness on melt and prescribes a constant reduction across the entire glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model's empirical parameters are optimized at the point scale, for varying debris thicknesses, against melt rates simulated by a physically-based debris energy-balance model; the latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to that of the physically-based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the
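
    A hedged sketch of a debris-aware temperature-index formulation in the spirit of DETI: empirical melt factors that decay with debris thickness h. The exponential forms, constants, and threshold below are placeholders, not the paper's optimized parameterization.

      import numpy as np

      def deti_melt(T, I, h, albedo=0.3, T_thresh=1.0):
          """Hourly melt (mm w.e.) at the debris-ice interface (illustrative)."""
          TF  = 0.04   * np.exp(-2.0 * h)   # temperature factor, decays with h (m)
          SRF = 0.0009 * np.exp(-2.0 * h)   # shortwave radiation factor
          melt = TF * T + SRF * (1.0 - albedo) * I
          return np.where(T > T_thresh, np.maximum(melt, 0.0), 0.0)

      T = np.array([5.0, 5.0, 5.0])         # air temperature (deg C)
      I = np.array([600.0, 600.0, 600.0])   # incoming shortwave (W m-2)
      h = np.array([0.02, 0.10, 0.50])      # debris thickness (m)
      print(deti_melt(T, I, h).round(3))    # melt declines with thicker debris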

  18. A database model for evaluating material accountability safeguards effectiveness against protracted theft

    SciTech Connect

    Sicherman, A.; Fortney, D.S.; Patenaude, C.J.

    1993-07-01

    DOE Material Control and Accountability Order 5633.3A requires that facilities handling special nuclear material evaluate their effectiveness against protracted theft (repeated thefts of small quantities of material, typically occurring over an extended time frame, to accumulate a goal quantity). Because a protracted theft attempt can extend over time, material accountability-like (MA) safeguards may help detect a protracted theft attempt in progress. Inventory anomalies, and material not being in its authorized location when requested for processing, are examples of MA detection mechanisms. Crediting such detection in evaluations, however, requires taking into account potential insider subversion of MA safeguards. In this paper, the authors describe a database model for evaluating MA safeguards effectiveness against protracted theft that addresses potential subversion. The model includes a detailed yet practical structure for characterizing various types of MA activities, lists of potential insider MA defeat methods and of access/authority related to MA activities, and an initial implementation of built-in MA detection probabilities. This database model, implemented in the new Protracted Insider module of ASSESS (Analytic System and Software for Evaluating Safeguards and Security), helps facilitate the systematic collection of relevant information about MA activity steps and standardize MA safeguards evaluations.

  19. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture, and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff and has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set, using minimal training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with that of the previously used BARE algorithm.
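
    A generic sketch of the bound-narrowing idea: sample the current parameter box, keep the fittest draws, and shrink the box to enclose them before resampling. The quadratic test function and all settings are illustrative; the actual LOBARE update is Bayesian rather than this simple truncation.

      import numpy as np

      def fitness(theta):                   # toy objective, maximum at (2, -1)
          return -np.sum((theta - np.array([2.0, -1.0])) ** 2, axis=1)

      rng = np.random.default_rng(5)
      lo = np.array([-10.0, -10.0])
      hi = np.array([ 10.0,  10.0])
      for it in range(8):
          theta = rng.uniform(lo, hi, size=(200, 2))      # sample current box
          best = theta[np.argsort(fitness(theta))[-20:]]  # keep top 10%
          lo, hi = best.min(axis=0), best.max(axis=0)     # shrink "parent" bounds
          print(f"iter {it}: bounds {np.round(lo, 2)} .. {np.round(hi, 2)}")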

  20. Generation of SEEAW asset accounts based on water resources management models

    NASA Astrophysics Data System (ADS)

    Pedro-Monzonís, María; Solera, Abel; Andreu, Joaquín

    2015-04-01

    One of the main challenges of the 21st century relates to the sustainable use of water, since water is an essential element for the life of all who inhabit our planet. In many cases, the lack of economic valuation of water resources leads to inefficient water use. In this regard, society expects policymakers and stakeholders to maximise the profit produced per unit of natural resource. Water planning and Integrated Water Resources Management (IWRM) represent the best way to achieve this goal. The System of Environmental-Economic Accounting for Water (SEEAW) is presented as a tool for water allocation which enables the building of water balances in a river basin. The main concern of the SEEAW is to provide a standard approach which allows policymakers to compare results between different territories. But building water accounts is a complex task due to the difficulty of collecting the required data. Because gauging the components of the hydrological cycle is difficult, simulation models have become an essential tool, extensively employed in recent decades. The target of this paper is to present the building of a database that enables the combined use of hydrological models and water resources models developed with the AQUATOOL DSS to fill in the SEEAW tables. This research is framed within the Water Accounting in a Multi-Catchment District (WAMCD) project, financed by the European Union. Its main goal is the development of water accounts in the Mediterranean Andalusian River Basin District, in Spain. The research aims to contribute to the objectives of the "Blueprint to safeguard Europe's water resources". It is noteworthy that, in Spain, a large part of these methodological decisions are included in the Spanish Guideline of Water Planning, with normative status, guaranteeing consistency and comparability of the results.

  1. Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad

    2006-01-01

    This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'prop-burn-prop' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons are made with three methods using modified linearized covariance propagation. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. The second method does not model the maneuver but includes a process noise matrix to account for the uncertainty in its magnitude. The third method, which is essentially a hybrid of the first two, includes the nominal portion of the maneuver via the state transition matrix and uses a process noise matrix to account for the magnitude uncertainty. The first method is unable to produce the final orbit covariance except in the case of zero maneuver uncertainty. The second method yields good accuracy for the final covariance matrix but fails to model the final orbital state accurately. Agreement between the simulated covariance data produced by this method and the Monte Carlo truth data fell within 0.5-2.5 percent over a range of maneuver sizes that span two orders of magnitude (0.1-20 m/s). The third method, which yields a combination of good accuracy in the computation of the final covariance matrix and correct accounting for the presence of the maneuver in the nominal orbit, is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, PC. However, applications for the two other methods exist and are briefly discussed. Although
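
    A minimal numerical sketch of the comparison, using a trivial constant-velocity state (position, velocity) instead of orbital dynamics: method 1 propagates through the burn with no process noise, the hybrid method adds a process-noise matrix Q at the burn epoch, and a Monte Carlo run of the propagate-burn-propagate process provides the reference. All numbers are illustrative.

      import numpy as np

      dt = 100.0
      Phi = np.array([[1.0, dt],            # state transition for one leg
                      [0.0, 1.0]])
      P0 = np.diag([100.0, 0.01])           # initial covariance (m^2, m^2/s^2)
      dv, sigma_dv = 5.0, 0.05              # burn size and execution 1-sigma (m/s)
      Q = np.array([[0.0, 0.0],             # process noise at the burn epoch:
                    [0.0, sigma_dv**2]])    # magnitude uncertainty only

      P1 = Phi @ (Phi @ P0 @ Phi.T) @ Phi.T        # method 1: burn modeled, no Q
      P3 = Phi @ (Phi @ P0 @ Phi.T + Q) @ Phi.T    # method 3: hybrid with Q

      rng = np.random.default_rng(11)              # Monte Carlo "truth"
      x = rng.multivariate_normal([0.0, 0.0], P0, size=200000)
      x = x @ Phi.T                                         # propagate
      x[:, 1] += dv + rng.normal(0.0, sigma_dv, len(x))     # burn with error
      x = x @ Phi.T                                         # propagate
      print("MC truth:\n", np.cov(x.T).round(2))
      print("method 1 (no Q):\n", P1.round(2))
      print("hybrid:\n", P3.round(2))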

  2. On the Value of Climate Elasticity Indices to Assess the Impact of Climate Change on Streamflow Projection using an ensemble of bias corrected CMIP5 dataset

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet; Moradkhani, Hamid

    2015-04-01

    Changes in two climate elasticity indices, i.e. the temperature and precipitation elasticity of streamflow, were investigated using an ensemble of bias-corrected CMIP5 datasets as forcing to two hydrologic models. The Variable Infiltration Capacity (VIC) and Sacramento Soil Moisture Accounting (SAC-SMA) hydrologic models were calibrated at 1/16 degree resolution and the simulated streamflow was routed to the basin outlet of interest. We estimated the precipitation and temperature elasticity of streamflow from: (1) observed streamflow; (2) streamflow simulated by the VIC and SAC-SMA models using observed climate for the current climate (1963-2003); and (3) streamflow simulated using climate from the 10-GCM CMIP5 dataset for the future climate (2010-2099), including two concentration pathways (RCP4.5 and RCP8.5) and two downscaled climate products (BCSD and MACA). The streamflow sensitivity to long-term (e.g., 30-year) average annual changes in temperature and precipitation is estimated for three periods, i.e. 2010-40, 2040-70 and 2070-99. We compared the results of the three cases to reflect on the value of precipitation and temperature indices for assessing climate change impacts on Columbia River streamflow. Moreover, these three cases for two models are used to assess the effects of different uncertainty sources (model forcing, model structure and different pathways) on the two climate elasticity indices.
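
    For concreteness, a sketch of one common nonparametric precipitation-elasticity estimator (Sankarasubramanian et al., 2001); the record does not state which estimator the study used, and the annual series below are synthetic.

      import numpy as np

      def precip_elasticity(Q, P):
          """Median of ((Q_t - Qbar)/(P_t - Pbar)) * (Pbar/Qbar)."""
          Qbar, Pbar = Q.mean(), P.mean()
          keep = P != Pbar                  # guard against division by zero
          return np.median((Q[keep] - Qbar) / (P[keep] - Pbar) * Pbar / Qbar)

      rng = np.random.default_rng(8)
      P = rng.normal(1000.0, 150.0, 40)                 # annual precipitation (mm)
      Q = 0.6 * P - 250.0 + rng.normal(0.0, 30.0, 40)   # synthetic annual runoff
      print(f"precipitation elasticity ~ {precip_elasticity(Q, P):.2f}")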

  3. Spectral Neugebauer-based color halftone prediction model accounting for paper fluorescence.

    PubMed

    Hersch, Roger David

    2014-08-20

    We present a spectral model for predicting the fluorescent emission and the total reflectance of color halftones printed on optically brightened paper. By relying on extended Neugebauer models, the proposed model accounts for the attenuation by the ink halftones of both the incident exciting light in the UV wavelength range and the emerging fluorescent emission in the visible wavelength range. The total reflectance is predicted by adding the predicted fluorescent emission relative to the incident light and the pure reflectance predicted with an ink-spreading enhanced Yule-Nielsen modified Neugebauer reflectance prediction model. The predicted fluorescent emission spectrum as a function of the amounts of cyan, magenta, and yellow inks is very accurate. It can be useful to paper and ink manufacturers who would like to study in detail the contribution of the fluorescent brighteners and the attenuation of the fluorescent emission by ink halftones. PMID:25321109
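
    For background, a sketch of the classical Yule-Nielsen modified spectral Neugebauer (YNSN) predictor that the proposed model extends; fluorescence and ink spreading are omitted here, only two inks are used for brevity, and the primary spectra and n-value are placeholders.

      import numpy as np

      def ynsn_reflectance(coverages, R_prim, n=2.0):
          """Demichel weights from ink coverages, then Yule-Nielsen mixing."""
          c, m = coverages                         # cyan, magenta area coverage
          w = np.array([(1-c)*(1-m), c*(1-m), (1-c)*m, c*m])
          return (w @ R_prim ** (1.0 / n)) ** n

      # Rows: reflectance spectra of paper, cyan, magenta, cyan+magenta
      # at 7 wavelengths from 400 to 700 nm (placeholder values).
      R_prim = np.array([
          [0.90, 0.91, 0.92, 0.92, 0.92, 0.91, 0.90],
          [0.75, 0.80, 0.70, 0.40, 0.15, 0.10, 0.10],
          [0.60, 0.30, 0.15, 0.20, 0.60, 0.80, 0.85],
          [0.50, 0.28, 0.12, 0.10, 0.12, 0.10, 0.09],
      ])
      print(ynsn_reflectance((0.5, 0.3), R_prim).round(3))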

  4. Can the Accountable Care Organization model facilitate integrated care in England?

    PubMed

    Ahmed, Faheem; Mays, Nicholas; Ahmed, Naeem; Bisognano, Maureen; Gottlieb, Gary

    2015-10-01

    Following the global economic recession, health care systems have experienced intense political pressure to contain costs without compromising quality. One response is to focus on improving the continuity and coordination of care, which is seen as beneficial for both patients and providers. However, cultural and structural barriers have proved difficult to overcome in the quest to provide integrated care for entire populations. By holding groups of providers responsible for the health outcomes of a designated population, in the United States, Accountable Care Organizations are regarded as having the potential to foster collaboration across the continuum of care. They could have a similar role in England's National Health Service. However, it is important to consider the difference in context before implementing a similar model, adapted to suit the system's strengths. Working together, general practice federations and the Academic Health Science Networks could form the basis of accountable care in England. PMID:26079144

  5. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    EPA Science Inventory

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled Individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  6. A novel VLES model accounting for near-wall turbulence: physical rationale and applications

    NASA Astrophysics Data System (ADS)

    Jakirlic, Suad; Chang, Chi-Yao; Kutej, Lukas; Tropea, Cameron

    2014-11-01

    A novel VLES (Very Large Eddy Simulation) model is proposed in which the non-resolved residual turbulence is modelled by an advanced near-wall eddy-viscosity model; it accounts for the influence of near-wall Reynolds stress anisotropy on the turbulence viscosity by appropriately modelling the velocity scale in the relevant formulation (Hanjalic et al., 2004). It represents a variable-resolution hybrid LES/RANS (Reynolds-Averaged Navier-Stokes) computational scheme enabling a seamless transition from RANS to LES, depending on the ratio of the turbulent viscosities associated with the unresolved scales corresponding to the LES cut-off and the 'unsteady' scales pertinent to the turbulent properties of the VLES residual motion, which varies within the flow domain. The VLES method is validated interactively in the process of the model derivation by computing fully-developed flow in a plane channel (an important representative of wall-bounded flows, underlying the log-law for the velocity field, for studying near-wall Reynolds stress anisotropy) and a separating flow over a periodic arrangement of smoothly-contoured 2-D hills. The model's performance is also assessed in capturing the natural decay of homogeneous isotropic turbulence. The model is finally applied to swirling flow in a vortex tube, flow in an IC-engine configuration and flow past a realistic car model.

  7. Operational hydrological ensemble forecasts in France, taking into account rainfall and hydrological model uncertainties.

    NASA Astrophysics Data System (ADS)

    Mathevet, T.; Garavaglia, F.; Gailhard, J.; Garçon, R.; Dubus, L.

    2009-09-01

    In operational conditions, the actual quality of meteorological and hydrological forecasts does not allow decision-making under certainty. In this context, meteorological and hydrological ensemble forecasts allow a better representation of forecast uncertainties. Compared to classical deterministic forecasts, ensemble forecasts improve the human expertise of hydrological forecasting, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. In this paper, we present a hydrological ensemble forecasting system under development at EDF (the French hydropower company). Our results were updated, taking into account a longer archive of rainfall forecasts. Our forecasting system takes into account both rainfall forecast uncertainties and hydrological model forecast uncertainties. Hydrological forecasts were generated using the MORDOR model (Andreassian et al., 2006), developed at EDF and used on a daily basis in operational conditions on a hundred watersheds. Two sources of rainfall forecasts were used: one based on ECMWF forecasts, the other based on an analogues approach (Obled et al., 2002). Two methods of estimating hydrological model forecast uncertainty were used: one based on the use of equifinal parameter sets (Beven & Binley, 1992), the other based on statistical modelling of the empirical uncertainty of the hydrological forecasts (Montanari et al., 2004; Schaefli et al., 2007). Daily operational 7-day hydrological ensemble forecasts over 4 years (2005 to 2008) in a few alpine watersheds were evaluated. Finally, we present a way to combine rainfall and hydrological model forecast uncertainties to achieve good probabilistic calibration. Our results show that the combination of ECMWF and analogues-based rainfall forecasts allows good probabilistic calibration of rainfall forecasts. They also show that the statistical modeling of the hydrological forecast empirical

  8. Micromechanical modeling of elastic properties of cortical bone accounting for anisotropy of dense tissue.

    PubMed

    Salguero, Laura; Saadat, Fatemeh; Sevostianov, Igor

    2014-10-17

    The paper analyzes the connection between the microstructure of osteonal cortical bone and its overall elastic properties. Existing models either neglect the anisotropy of the dense tissue or oversimplify the cortical bone microstructure (accounting for Haversian canals only). These simplifications (related mostly to an insufficient mathematical apparatus) complicate quantitative analysis of the effect of microstructural changes - produced by age, microgravity, or some diseases - on the overall mechanical performance of cortical bone. The present analysis fills this gap; it accounts for the anisotropy of the dense tissue and uses a realistic model of the porous microstructure. The approach is based on recent results of Sevostianov et al. (2005) and Saadat et al. (2012) on inhomogeneities in a transversely-isotropic material. The bone's microstructure is modeled following Martin and Burr (1989), Currey (2002), and Fung (1993) and includes four main families of pores. The calculated elastic constants for porous cortical bone are in agreement with available experimental data. The influence of each pore type on the overall moduli is examined. PMID:25234350

  9. Accounting for Epistemic Uncertainty in PSHA: Logic Tree and Ensemble Model

    NASA Astrophysics Data System (ADS)

    Taroni, M.; Marzocchi, W.; Selva, J.

    2014-12-01

    The logic tree scheme is the probabilistic framework that has been widely used in recent decades to take into account epistemic uncertainties in probabilistic seismic hazard analysis (PSHA). Notwithstanding the vital importance for PSHA of properly incorporating epistemic uncertainties, we argue that the use of the logic tree in a PSHA context has conceptual and practical drawbacks. Although some of these drawbacks have been reported in the past, a careful evaluation of their impact on PSHA is still lacking; this is the goal of the present work. In brief, we show that i) PSHA practice does not meet the assumptions that stand behind the logic tree scheme; ii) the output of a logic tree is often misinterpreted and/or misleading, e.g., the use of percentiles (median included) in a logic tree scheme raises theoretical difficulties from a probabilistic point of view; and iii) even when the assumptions that stand behind a logic tree are actually met, several problems arise in testing any PSHA model. We suggest a different strategy - based on ensemble modeling - to account for epistemic uncertainties in a more proper probabilistic framework. Finally, we show that in many practical PSHA applications, the logic tree is de facto loosely applied to build sound ensemble models.

  10. Accounting for Epistemic Uncertainty in PSHA: Logic Tree and Ensemble Model

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Marzocchi, Warner; Selva, Jacopo

    2014-05-01

    The logic tree scheme is the probabilistic framework that has been widely used in recent decades to take into account epistemic uncertainties in probabilistic seismic hazard analysis (PSHA). Notwithstanding the vital importance for PSHA of properly incorporating epistemic uncertainties, we argue that the use of the logic tree in a PSHA context has conceptual and practical drawbacks. Although some of these drawbacks have been reported in the past, a careful evaluation of their impact on PSHA is still lacking; this is the goal of the present work. In brief, we show that i) PSHA practice does not meet the assumptions that stand behind the logic tree scheme; ii) the output of a logic tree is often misinterpreted and/or misleading, e.g., the use of percentiles (median included) in a logic tree scheme raises theoretical difficulties from a probabilistic point of view; and iii) even when the assumptions that stand behind a logic tree are actually met, several problems arise in testing any PSHA model. We suggest a different strategy - based on ensemble modeling - to account for epistemic uncertainties in a more proper probabilistic framework. Finally, we show that in many practical PSHA applications, the logic tree is improperly applied to build sound ensemble models.

  11. The Transformative Individual School Counseling Model: An Accountability Model for Urban School Counselors

    ERIC Educational Resources Information Center

    Eschenauer, Robert; Chen-Hayes, Stuart F.

    2005-01-01

    The realities and needs of urban students, families, and educators have outgrown traditional individual counseling models. The American School Counselor Association's National Model and National Standards and the Education Trust's Transforming School Counseling Initiative encourage professional school counselors to shift roles toward implementing…

  12. Climate projections of future extreme events accounting for modelling uncertainties and historical simulation biases

    NASA Astrophysics Data System (ADS)

    Brown, Simon J.; Murphy, James M.; Sexton, David M. H.; Harris, Glen R.

    2014-11-01

    A methodology is presented for providing projections of absolute future values of extreme weather events that takes into account key uncertainties in predicting future climate. This is achieved by characterising both observed and modelled extremes with a single form of non-stationary extreme value (EV) distribution that depends on global mean temperature and includes terms that account for model bias. Such a distribution allows the prediction of future "observed" extremes for any period in the twenty-first century. Uncertainty in modelling future climate, arising from a wide range of atmospheric, oceanic, sulphur-cycle and carbon-cycle processes, is accounted for by using probabilistic distributions of future global temperature and EV parameters. These distributions are generated by Bayesian sampling of emulators, with samples weighted by their likelihood with respect to a set of observational constraints. The emulators are trained on a large perturbed-parameter ensemble of global simulations of the recent past and of the equilibrium response to doubled CO2. Emulated global EV parameters are converted to the relevant regional scale through downscaling relationships derived from a smaller perturbed-parameter regional climate model ensemble. The simultaneous fitting of the EV model to regional model data and observations allows the characterisation of how observed extremes may change in the future, irrespective of biases that may be present in the regional models' simulation of the recent past climate. The clearest impact of a parameter perturbation in this ensemble was found to be the depth to which plants can access water: members with shallow soils tend to be biased hot and dry in summer for the observational period. These biases also appear to affect the potential future response of summer temperatures, with some shallow-soil members showing increases in extremes that diminish with extreme severity. We apply this methodology for London, using the
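
    A sketch of the core statistical ingredient, under stated assumptions: annual maxima whose generalized extreme value (GEV) location parameter is linear in a global-mean-temperature covariate, fitted by maximum likelihood. The data are synthetic, and the paper's full model additionally includes bias terms and Bayesian sampling of emulated parameters.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import genextreme

      rng = np.random.default_rng(13)
      T = np.linspace(0.0, 1.2, 60)          # global-mean warming covariate
      # Synthetic annual maxima; note SciPy's shape c equals -xi.
      maxima = genextreme.rvs(c=-0.1, loc=30.0 + 2.5 * T, scale=2.0,
                              random_state=rng)

      def nll(params):                       # negative log-likelihood
          mu0, mu1, sigma, xi = params
          if sigma <= 0.0:
              return np.inf
          return -genextreme.logpdf(maxima, c=-xi, loc=mu0 + mu1 * T,
                                    scale=sigma).sum()

      res = minimize(nll, x0=[30.0, 0.0, 2.0, 0.1], method="Nelder-Mead")
      mu0, mu1, sigma, xi = res.x
      print(f"fitted location trend: {mu1:.2f} per degree of warming")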

  13. FPLUME-1.0: An integrated volcanic plume model accounting for ash aggregation

    NASA Astrophysics Data System (ADS)

    Folch, A.; Costa, A.; Macedonio, G.

    2015-09-01

    Eruption Source Parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1-D cross-section-averaged eruption column model based on the Buoyant Plume Theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients, and a model for wet aggregation of ash particles in the presence of liquid water or ice. When wet aggregation occurs, the model predicts an "effective" grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the "effective" particle grain size distribution resulting from eventual wet aggregation within the plume. As test cases we apply the model to the eruptive phase-B of the 4 April 1982 El Chichón volcano eruption (México) and the 6 May 2010 Eyjafjallajökull eruption phase (Iceland).
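
    For orientation, a bare-bones buoyant-plume column of the Morton-Taylor-Turner type that the BPT family builds on (dry, windless, constant entrainment coefficient; FPLUME itself adds wind bending, moisture, fallout, re-entrainment and aggregation, so this is only a structural sketch with made-up source values; MER = mass eruption rate):

      import numpy as np
      from scipy.integrate import solve_ivp

      g, cp, Ra, alpha = 9.81, 1005.0, 287.0, 0.1      # alpha: entrainment coeff.

      def T_a(z):                                      # ambient temperature (K)
          return 288.0 - 6.5e-3 * z

      def rho_a(z):                                    # ambient density (kg/m^3)
          return 101325.0 * np.exp(-z / 8000.0) / (Ra * T_a(z))

      def rhs(z, y):
          Q, M, E = y                                  # mass/momentum/enthalpy fluxes
          u, T = M / Q, E / (cp * Q)
          rho = rho_a(z) * T_a(z) / T                  # ideal gas at ambient pressure
          r = np.sqrt(Q / (np.pi * rho * u))           # plume radius
          dQ = 2.0 * np.pi * r * alpha * u * rho_a(z)  # entrainment of ambient air
          dM = np.pi * r**2 * (rho_a(z) - rho) * g     # buoyancy
          dE = dQ * cp * T_a(z) - Q * g                # mixing + work against gravity
          return [dQ, dM, dE]

      def rising(z, y):                                # stop when u drops to 1 m/s
          return y[1] / y[0] - 1.0
      rising.terminal = True

      Q0, u0, T0 = 1.5e7, 100.0, 1200.0                # MER (kg/s), exit speed, temperature
      sol = solve_ivp(rhs, (0.0, 5.0e4), [Q0, Q0 * u0, Q0 * cp * T0],
                      events=rising, max_step=100.0)
      print(f"column height ~ {sol.t[-1] / 1e3:.1f} km")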

  14. A three-dimensional model of mammalian tyrosinase active site accounting for loss of function mutations.

    PubMed

    Schweikardt, Thorsten; Olivares, Concepción; Solano, Francisco; Jaenicke, Elmar; García-Borrón, José Carlos; Decker, Heinz

    2007-10-01

    Tyrosinases are the first and rate-limiting enzymes in the synthesis of the melanin pigments responsible for colouring hair, skin and eyes. Mutation of tyrosinases often decreases melanin production, resulting in albinism, but the effects are not always understood at the molecular level. Homology modelling of mouse tyrosinase based on recently published crystal structures of non-mammalian tyrosinases provides an active site model accounting for loss-of-function mutations. According to the model, the copper-binding histidines are located in a helix bundle comprising four densely packed helices. A loop containing residues M374, S375 and V377 connects the CuA and CuB centres, with the peptide oxygens of M374 and V377 serving as hydrogen acceptors for the NH-groups of the imidazole rings of the copper-binding His367 and His180. This loop is therefore essential for the stability of the active site architecture. A double M374G/S375G substitution or a single M374G mutation leads to a local perturbation of the protein matrix at the active site, affecting the orientation of the H367 side chain, which may then be unable to bind CuB reliably, resulting in loss of activity. The model also accounts for loss of function in two naturally occurring albino mutations, S380P and V393F. The hydroxyl group of S380 contributes to the correct orientation of M374, and the substitution of V393 by a bulkier phenylalanine sterically impedes correct side-chain packing at the active site. Our model therefore explains the mechanistic necessity for conservation not only of the active site histidines but also of adjacent amino acids in tyrosinase. PMID:17850513

  15. Accounting for "hot spots" and "hot moments" in soil carbon models for water-limited ecosystems

    NASA Astrophysics Data System (ADS)

    O'Donnell, Frances; Caylor, Kelly

    2010-05-01

    Soil organic carbon (SOC) dynamics in water-limited ecosystems are complicated by the stochastic nature of rainfall and the patchy structure of vegetation, which can lead to "hot spots" and "hot moments" of high biological activity. Non-linear models that use spatial and temporal averages of forcing variables are unable to account for these phenomena and are likely to produce biased results. In this study we present a model of SOC abundance that accounts for spatial heterogeneity at the plant scale and temporal variability in soil moisture content at the daily scale. We approximated an existing simulation-based model of SOC dynamics as a stochastic differential equation driven by multiplicative noise that can be solved numerically for steady-state sizes of three SOC pools. We coupled this to a model of water balance and SOC input rate at a point for a given cover type, defined by the number of shrub and perennial grass root systems and canopies overlapping the point. Using a probabilistic description of vegetation structure based on a two-dimensional Poisson process, we derived analytical expressions for the distribution of cover types across a landscape and produced weighted averages of SOC stocks. An application of the model to a shortgrass steppe ecosystem in Colorado, USA, replicated empirical data on spatial patterns and average abundance of SOC, whereas a version of the model using spatially averaged forcing variables overestimated SOC stocks by 12%. The model also successfully replicated data from paired desert grassland sites in New Mexico, USA, that had and had not been affected by woody plant encroachment, indicating that the model could be a useful tool for understanding and predicting the effect of woody plant encroachment on regional carbon budgets. We performed a theoretical analysis of a simplified version of the model to estimate the bias introduced by using spatial averages of forcing variables to model SOC stocks across a range of climatic conditions
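
    The cover-type bookkeeping can be sketched directly from the Poisson description: under a two-dimensional Poisson process, the number of canopies (or root systems) overlapping a point is Poisson-distributed with mean equal to plant density times canopy area. All numbers below are hypothetical:

      import numpy as np
      from scipy.stats import poisson

      # Hypothetical plant densities (m^-2) and canopy radii (m).
      lam_shrub, r_shrub = 0.02, 1.5       # shrubs
      lam_grass, r_grass = 0.50, 0.30      # perennial grasses

      # Mean number of canopies overlapping a point = density * canopy area.
      mu_s = lam_shrub * np.pi * r_shrub**2
      mu_g = lam_grass * np.pi * r_grass**2

      def soc(ns, ng):
          """Hypothetical steady-state SOC stock (kg C m^-2) for a point
          covered by ns shrub and ng grass canopies; illustrative only."""
          return 0.8 + 0.9 * ns + 0.3 * ng

      # Landscape SOC = average over cover types weighted by their probability.
      landscape_soc = sum(
          poisson.pmf(ns, mu_s) * poisson.pmf(ng, mu_g) * soc(ns, ng)
          for ns in range(6) for ng in range(6)
      )
      print(f"area-weighted SOC ~ {landscape_soc:.2f} kg C m^-2")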

  16. Accounting for damage to model the influence of a pinning point on the grounding line dynamics

    NASA Astrophysics Data System (ADS)

    Brondex, Julien; Gagliardini, Olivier; Gillet-Chaulet, Fabien; Krug, Jean; Durand, Gaël

    2015-04-01

    The ice released from the Greenland and Antarctic ice sheets into the ocean is the main cause of the currently observed sea-level rise. Using the open-source finite-element code Elmer/Ice, a previous study (Favier et al., 2012) investigated the impact of a localised contact point between a floating ice shelf and the bedrock and showed its stabilizing effect on ice discharge. The large amount of friction introduced locally by a pinning point induces a rapid decrease of ice velocities upstream of the contact, leading to a seaward advance of the grounding line until the grounded ice sheet and the ice rise merge. This causes a slowdown of ice discharge, which is consistent with observations on real ice shelves. However, highly crevassed zones surrounding such pinning points are commonly observed, highlighting strong damage patterns that were not taken into account by Favier et al. (2012). Damage has a strong influence on ice rheology, as ice becomes softer when damaged, thereby accelerating the ice flow. Recently, a damage model has been implemented within Elmer/Ice (Krug et al., 2014). In this model, damage is created in areas where the maximum principal Cauchy stress is higher than a stress threshold. Damage is then advected with the ice flow, and its impact on viscosity is taken into account by modifying the enhancement factor of Glen's flow law. Since high shear stresses predominate in the vicinity of pinning points, damage is likely to appear in those areas, making the ice more fluid and thus lessening the stabilizing effect previously observed. To check the validity of this hypothesis, the pinning-point experiment is repeated taking damage into account. The impact of basal crevasses filled with sea water, which tend to counteract the compressive stresses due to the cryostatic pressure and thus to promote damage formation under the shelf, is investigated as well.
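
    In scalar damage mechanics of ice, the damage variable D typically enters Glen's flow law through an effective stress, and is advected with the flow after being sourced where the stress criterion is met. A sketch of the ingredients described above, in our notation rather than that of Krug et al. (2014):

      \[
      \dot{\varepsilon} = A \left( \frac{\tau}{1 - D} \right)^{n},
      \qquad
      \frac{\partial D}{\partial t} + \mathbf{u} \cdot \nabla D = f\!\left( \sigma_{I} - \sigma_{th} \right),
      \]

    where σ_I is the maximum principal Cauchy stress, σ_th the damage threshold, and D → 1 makes the ice arbitrarily soft.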

  17. Accounting for sex differences in PTSD: A multi-variable mediation model

    PubMed Central

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

    Background Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. Objective The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). Mediation was examined using an analysis designed specifically to test a multiple mediator model. Results Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusions The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders
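
    In the linear case, the multiple-mediator decomposition behind such an analysis can be written compactly (our notation, a sketch rather than the authors' exact estimator):

      \[
      c = c' + \sum_{j=1}^{k} a_j b_j,
      \qquad
      \text{proportion mediated} = \frac{\sum_{j} a_j b_j}{c},
      \]

    where a_j is the effect of sex on mediator j, b_j the effect of mediator j on PTSD severity given sex, c the total effect and c' the direct effect; the reported 83% corresponds to a proportion mediated of 0.83.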

  18. Network meta-analysis models to account for variability in treatment definitions: application to dose effects.

    PubMed

    Del Giovane, Cinzia; Vacchi, Laura; Mavridis, Dimitris; Filippini, Graziella; Salanti, Georgia

    2013-01-15

    For a network meta-analysis, an interlinked network of nodes representing competing treatments is needed. It is often challenging to define the nodes as these typically refer to similar but rarely identical interventions. The objectives of this paper are as follows: (i) to present a series of network meta-analysis models that account for variation in the definition of the nodes and (ii) to exemplify the models where variation in the treatment definitions relates to the dose. Starting from the model that assumes each node has a 'fixed' definition, we gradually introduce terms to explain variability by assuming that each node has several subnodes that relate to different doses. The effects of subnodes are considered monotonic, linked with a 'random walk', random but exchangeable, or have a linear pattern around the treatment mean effect. Each model can be combined with different assumptions for the consistency of effects and might impact on the ranking of the treatments. Goodness of fit, heterogeneity and inconsistency were assessed. The models are illustrated in a star network for the effectiveness of fluoride toothpaste and in a full network comparing agents for multiple sclerosis. The fit and parsimony measures indicate that in the fluoride network the impact of the dose subnodes is important whereas in the multiple sclerosis network the model without subnodes is the most appropriate. The proposed approach can be a useful exploratory tool to explain sources of heterogeneity and inconsistency when there is doubt whether similar interventions should be grouped under the same node. PMID:22815277
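
    One way to write the random-walk variant for the dose subnodes of a treatment X (a sketch in our own notation, not necessarily the authors' exact parameterization): ordering the doses k = 1, ..., K_X,

      \[
      d_{X,k} = d_{X,k-1} + \epsilon_{X,k},
      \qquad
      \epsilon_{X,k} \sim N(0, \tau_X^2),
      \]

    so adjacent doses get similar but not identical effects relative to the network reference, and τ_X → 0 collapses the subnodes back to a single 'fixed-definition' node.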

  19. Advances in stream shade modelling. Accounting for canopy overhang and off-centre view

    NASA Astrophysics Data System (ADS)

    Davies-Colley, R.; Meleason, M. A.; Rutherford, K.

    2005-05-01

    Riparian shade controls the stream thermal regime and light for photosynthesis of stream plants. The quantity difn (diffuse non-interceptance), defined as the proportion of incident light received under a sky of uniform brightness, is useful for general specification of stream light exposure, having the virtue that it can be measured directly with common light sensors of appropriate spatial and spectral character. A simple model (implemented in EXCEL-VBA) (Davies-Colley & Rutherford, Ecol. Engrg, in press) successfully reproduces the broad empirical trend of decreasing difn at the channel centre with increasing ratio of canopy height to stream width. We have now refined this model to account for (a) foliage overhanging the channel (for trees of different canopy form), and (b) an off-centre view of the shade (rather than just the channel-centre view). We use two extreme geometries bounding real (meandering) streams: the 'canyon' model simulates an infinite straight canal, whereas the 'cylinder' model simulates a stream meandering so tightly that its geometry collapses into an isolated pool in the forest. The model has been validated using a physical 'rooftop' model of the cylinder case, with which it is possible to measure shade under different geometries.
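
    The 'canyon' geometry lends itself to a compact numeric check. A minimal sketch (our own integration, not the authors' EXCEL-VBA implementation) computes difn at the channel centre as the cosine-weighted fraction of a uniform sky left visible by canopy walls of height h on both banks of a channel of width w:

      import numpy as np

      def difn_canyon(h, w, n=400):
          """Diffuse non-interceptance at the centre of an infinite 'canyon':
          vertical canopy walls of height h on either side of a channel of
          width w, under a uniformly bright sky."""
          theta = (np.arange(n) + 0.5) * (np.pi / 2) / n   # zenith angle
          phi = (np.arange(n) + 0.5) * (2 * np.pi) / n     # azimuth (0 = cross-channel)
          th, ph = np.meshgrid(theta, phi)
          # A ray from the channel centre clears the canopy if it is above the
          # wall top where it crosses the bank plane at x = +/- w/2.
          visible = np.tan(th) * np.abs(np.cos(ph)) <= w / (2.0 * h)
          weight = np.cos(th) * np.sin(th)                 # cosine-weighted solid angle
          return (weight * visible).sum() / weight.sum()

      print(difn_canyon(h=10.0, w=10.0))   # canopy height equal to channel width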

  20. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    PubMed Central

    2011-01-01

    Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent of these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make reference to something like the weak version of the similarity position, the pediatric physician-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers need better training regarding the nature of the accountability relationships that flow from their dual roles, or that the models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment. PMID:21974866

  1. Toward a formalized account of attitudes: The Causal Attitude Network (CAN) model.

    PubMed

    Dalege, Jonas; Borsboom, Denny; van Harreveld, Frenk; van den Berg, Helma; Conner, Mark; van der Maas, Han L J

    2016-01-01

    This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions between these reactions arise through direct causal influences (e.g., the belief that snakes are dangerous causes fear of snakes) and through mechanisms that support evaluative consistency between related contents of evaluative reactions (e.g., people tend to align their belief that snakes are useful with their belief that snakes help maintain ecological balance). In the CAN model, the structure of attitude networks conforms to a small-world structure: evaluative reactions that are similar to each other form tight clusters, which are connected by a sparser set of "shortcuts" between them. We argue that the CAN model provides a realistic formalized measurement model of attitudes and therefore fills a crucial gap in the attitude literature. Furthermore, the CAN model provides testable predictions for the structure of attitudes and for how they develop, remain stable, and change over time. Attitude strength is conceptualized in terms of the connectivity of attitude networks, and we show that this provides a parsimonious account of the differences between strong and weak attitudes. We discuss the CAN model in relation to possible extensions, implications for the assessment of attitudes, and possibilities for further study. PMID:26479706
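
    A toy version of the posited structure is easy to generate; a Watts-Strogatz graph (hypothetical sizes and probabilities) exhibits exactly the small-world combination of high clustering and short paths that the CAN model ascribes to attitude networks:

      import networkx as nx

      # Hypothetical attitude network: 20 evaluative reactions (beliefs,
      # feelings, behaviors), each initially linked to 4 neighbours, with
      # 10% of edges rewired to create small-world "shortcuts".
      G = nx.connected_watts_strogatz_graph(n=20, k=4, p=0.1, seed=1)

      # High clustering plus short average paths = small-world structure;
      # connectivity serves as a proxy for attitude strength in this account.
      print(nx.average_clustering(G), nx.average_shortest_path_length(G))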

  2. Hybrid modelling of open glow discharge with account of nonlocal ionization by fast electrons

    NASA Astrophysics Data System (ADS)

    Eliseev, Stepan; Eremin, Denis; Kudryavtsev, Anatoly

    2015-11-01

    Cage and open discharges, as well as hollow cathode devices, are used for creating negative glow plasma. In order to perform numerical simulations of such plasma objects properly, it is necessary to account for nonlocal excitation and ionization induced by fast electrons emitted from the cathode and accelerated up to energies of 10²-10³ eV in the cathode voltage drop. In this work a numerical study of an open discharge in argon is presented. Simulations were performed using a simple hybrid model that incorporates nonlocal ionization by fast electrons into an "extended" fluid framework. The electron energy balance is written accounting for electron heating due to Coulomb interaction between "bulk" electrons (with energies less than 1 eV) and "intermediate" electrons (with energies up to the inelastic collision energy threshold). Distributions of the main discharge parameters, such as charged particle densities, electron temperature and electric potential, as well as the current-voltage characteristic of the discharge, were obtained. Comparison with experimental results showed good agreement and suggests good applicability of the model. This work was supported by the Russian Science Foundation (project #14-19-00311).

  3. Implementation of a cost-accounting model in a biobank: practical implications.

    PubMed

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use them more efficiently are making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences of the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real, active biobank, including a comprehensive explanation of how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. The global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the level of governmental research and development policies. PMID:25792217
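
    As a flavour of what such a production-cost model computes (all figures hypothetical; the paper's model distinguishes more cost pools and activities):

      # Hypothetical unit production cost for one biobank activity
      # (e.g. DNA extraction), per processed sample.
      direct_consumables = 6.00                    # EUR per sample
      staff_cost = 32.00 * 0.25                    # EUR/h x hours per sample
      equipment_depreciation = 12000.0 / 20000.0   # EUR/yr over samples/yr
      overhead_rate = 0.30                         # overhead allocated on direct cost

      direct = direct_consumables + staff_cost + equipment_depreciation
      unit_production_cost = direct * (1.0 + overhead_rate)
      print(f"unit production cost ~ {unit_production_cost:.2f} EUR/sample")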

  4. Pathology service line: a model for accountable care organizations at an academic medical center.

    PubMed

    Sussman, Ira; Prystowsky, Michael B

    2012-05-01

    Accountable care is designed to manage the health of patients using a capitated cost model rather than fee for service. Pay for performance is an attempt to use quality and not service reduction as the way to decrease costs. Pathologists will have to demonstrate value to the system. This value will include (1) working with clinical colleagues to optimize testing protocols, (2) reducing unnecessary testing in both clinical and anatomic pathology, (3) guiding treatment by helping to personalize therapy, (4) designing laboratory information technology solutions that will promote and facilitate accurate, complete data mining, and (5) administering efficient cost-effective laboratories. The pathology service line was established to improve the efficiency of delivering pathology services and to provide more effective support of medical center programs. We have used this model effectively at the Montefiore Medical Center for the past 14 years. PMID:22333926

  5. Water accounting for stressed river basins based on water resources management models.

    PubMed

    Pedro-Monzonís, María; Solera, Abel; Ferrer, Javier; Andreu, Joaquín; Estrela, Teodoro

    2016-09-15

    Water planning and Integrated Water Resources Management (IWRM) represent the best way to help decision makers identify and choose the most adequate alternatives among the possible ones. The System of Environmental-Economic Accounting for Water (SEEA-W) is presented as a tool for building water balances in a river basin, providing a standard approach to achieve comparability of results between different territories. The aim of this paper is to present the development of a tool that enables the combined use of hydrological models and water resources models to fill in the SEEA-W tables. At every step of the modelling chain we are able to build the asset accounts and the physical water supply and use tables according to the SEEA-W approach, along with an estimation of the water service costs. The case study is the Jucar River Basin District (RBD), located in the eastern part of the Iberian Peninsula in Spain, which, as many other Mediterranean basins, is currently water-stressed. To guide this work we have used the PATRICAL model in combination with the AQUATOOL Decision Support System (DSS). The results indicate that for the average year the total use of water in the district amounts to 15,143 hm³/year, with total renewable water resources of 3909 hm³/year. On the other hand, the water service costs in the Jucar RBD amount to 1634 million € per year at constant 2012 prices. It is noteworthy that 9% of these costs correspond to non-conventional resources, such as desalinated water, reused water and water transferred from other regions. PMID:27161139

  6. Modeling energy expenditure and oxygen consumption in human exposure models: accounting for fatigue and EPOC.

    PubMed

    Isaacs, Kristin; Glen, Graham; Mccurdy, Thomas; Smith, Luther

    2008-05-01

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled individual's physical activity level as described in an activity diary. Activity level is quantified via standardized values of metabolic equivalents of work (METS) for the activity being performed and converted into activity-specific oxygen consumption estimates. However, oxygen consumption remains elevated after a moderate- or high-intensity activity is completed. This effect, which is termed excess post-exercise oxygen consumption (EPOC), requires upward adjustment of the METS estimates that follow high-energy expenditure events, to model subsequent increased ventilation and intake dose rates. In addition, since an individual's capacity for work decreases during extended activity, methods are also required to adjust downward those METS estimates that exceed physiologically realistic limits over time. A unified method for simultaneously performing these adjustments is developed. The method simulates a cumulative oxygen deficit for each individual and uses it to impose appropriate time-dependent reductions in the METS time series and additions for EPOC. The relationships between the oxygen deficit and METS limits are nonlinear and are derived from published data on work capacity and oxygen consumption. These modifications result in improved modeling of ventilation patterns, and should improve intake dose estimates associated with exposure to airborne environmental contaminants. PMID:17805234
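
    A toy version of the described adjustment illustrates the bookkeeping (constants and functional forms are invented for illustration; the published relationships are nonlinear fits to work-capacity and oxygen-consumption data):

      import numpy as np

      def adjust_mets(mets, cap=8.0, k_fatigue=0.1, k_epoc=0.05):
          """Toy adjustment of a METS time series (one value per minute):
          accumulate an oxygen deficit during hard work, limit sustainable
          METS as the deficit grows, and add an EPOC tail while the deficit
          is repaid. Constants are illustrative only."""
          deficit, out = 0.0, []
          for m in mets:
              limit = cap / (1.0 + k_fatigue * deficit)   # fatigue lowers the ceiling
              m_adj = min(m, limit)
              if m_adj > 3.0:                             # moderate/high intensity
                  deficit += m_adj - 3.0
              else:
                  repay = min(deficit, k_epoc * deficit + 0.1)
                  m_adj += repay                          # EPOC: extra O2 uptake at rest
                  deficit -= repay
              out.append(m_adj)
          return np.array(out)

      print(adjust_mets(np.array([1.5, 9.0, 9.0, 9.0, 1.5, 1.5, 1.5])))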

  7. Accounting for model uncertainty in EC-Earth3: impact of SPPT on seasonal forecast quality

    NASA Astrophysics Data System (ADS)

    Batté, Lauriane; Doblas-Reyes, Francisco J.

    2014-05-01

    In recent years, stochastic parameterization has emerged as a new method to take into account atmospheric model uncertainty in monthly to decadal climate predictions. One straightforward method consists in applying in-run multiplicative noise to the physical tendencies computed by the atmospheric model. The stochastically perturbed parameterization tendency scheme (SPPT; Palmer et al. 2009) designed at ECMWF introduces univariate Gaussian perturbations to the wind, humidity and temperature tendencies. This method along with a stochastic backscatter algorithm showed promising results at a monthly-to-seasonal scale with the ECMWF coupled model when compared to multi-model ensemble techniques (Weisheimer et al., 2011). SPPT was implemented in the IFS atmospheric component of the EC-Earth3 Earth system model. Two sets of space and time scales for the perturbation patterns were tested in EC-Earth3 to represent uncertainties at the seasonal time scale. The impact of these perturbations is analyzed in terms of systematic error, spread-to-skill ratio, anomaly correlation of the ensemble mean as well as probabilistic skill. Results depend on the variable and region studied. Over the Tropical Pacific, both of the SPPT settings tested result in a reduction of the systematic error and an increase in ensemble spread for sea-surface temperature, and these improvements translate into enhanced probabilistic skill.
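
    The core of SPPT is a one-line operation on the physics tendencies. A minimal sketch (white noise per grid point, whereas the real scheme uses spatio-temporally correlated perturbation patterns):

      import numpy as np

      rng = np.random.default_rng(42)

      def sppt_perturb(tendency, pattern_std=0.5, clip=1.0):
          """Minimal SPPT-like step: multiply the parameterized physics
          tendency by (1 + r), with r drawn from a Gaussian field (here
          uncorrelated noise; the operational scheme uses patterns with
          prescribed space and time scales) and clipped to keep 1 + r > 0."""
          r = np.clip(rng.normal(0.0, pattern_std, tendency.shape), -clip, clip)
          return (1.0 + r) * tendency

      dT_phys = np.full((4, 4), 2.0e-5)   # K/s temperature tendency from physics
      print(sppt_perturb(dT_phys))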

  8. A simple lumped rainfall runoff model accounting for the spatial variability of drainage density

    NASA Astrophysics Data System (ADS)

    Di Lazzaro, M.; Zarlenga, A.; Volpi, E.

    2013-12-01

    Defining drainage density (dd) as the inverse of twice the hillslope-to-channel length makes it possible to create maps, based on digital terrain analysis, that clearly reveal the sharp contrast between neighbouring geologic provinces. This contrast is deeply correlated with the patterns of landscape dissection. Although this definition can be applied relatively easily with Geographic Information Systems (GIS), its applications have so far been surprisingly limited. Among them, we consider in this work the representation of spatial heterogeneity in the dd field coupled with the within-catchment organization of the basin itself. Previous works highlight how dd affects key hydrological variables such as residence times, runoff coefficients, hydraulic conductivities, sediment yield and recession curves. Effects of drainage density can be classified as direct and indirect, as the sketch after this paragraph illustrates for the direct case. Direct effects include the short hillslope lengths where dd is high, which result in shorter concentration times; such effects are intrinsically included in geomorphological rainfall-runoff models. Among indirect effects, it has been shown that higher dd is related to impervious rocky hillslopes and steeper slopes, which enhances flood peaks. In zones with high dd, shallow soils and low permeability prevent rainfall infiltration, so that runoff volumes are large. In areas of low drainage density, hydraulic conductivities are expected to be higher and hydrological paths develop mostly in the groundwater, where water is 'stored' for longer times. Despite the evidence of within-catchment variations of drainage density, a reliable schematization that accounts in a simplified model for both direct and indirect effects, such as the strong correlation of dd with permeability, has not yet been formulated. Traditional approaches in rainfall runoff modelling, based on the geomorphological derivation of the distribution of contributing areas
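
    The definition used above ties dd directly to hillslope length, which is the quantity a lumped model needs. As a quick check:

      # Drainage density defined as the inverse of twice the
      # hillslope-to-channel length: dd = 1 / (2 * Lh).
      def hillslope_length(dd):
          """Mean hillslope-to-channel length (km) for dd in km per km^2."""
          return 1.0 / (2.0 * dd)

      for dd in (1.0, 2.5, 5.0):
          print(f"dd = {dd} km^-1  ->  Lh = {hillslope_length(dd):.2f} km")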

  9. An agent-based simulation model to study accountable care organizations.

    PubMed

    Liu, Pai; Wu, Shinyi

    2016-03-01

    Creating accountable care organizations (ACOs) has been widely discussed as a strategy to control rapidly rising healthcare costs and improve quality of care; however, building an effective ACO is a complex process involving multiple stakeholders (payers, providers, patients) with their own interests. Also, implementation of an ACO is costly in terms of time and money. Immature design could cause safety hazards. Therefore, there is a need for analytical model-based decision-support tools that can predict the outcomes of different strategies to facilitate ACO design and implementation. In this study, an agent-based simulation model was developed to study ACOs that considers payers, healthcare providers, and patients as agents under the shared saving payment model of care for congestive heart failure (CHF), one of the most expensive causes of sometimes preventable hospitalizations. The agent-based simulation model has identified the critical determinants for the payment model design that can motivate provider behavior changes to achieve maximum financial and quality outcomes of an ACO. The results show nonlinear provider behavior change patterns corresponding to changes in payment model designs. The outcomes vary by providers with different quality or financial priorities, and are most sensitive to the cost-effectiveness of CHF interventions that an ACO implements. This study demonstrates an increasingly important method to construct a healthcare system analytics model that can help inform health policy and healthcare management decisions. The study also points out that the likely success of an ACO is interdependent with payment model design, provider characteristics, and cost and effectiveness of healthcare interventions. PMID:24715674
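
    The shared-savings rule at the heart of such a payment model is simple to state (numbers hypothetical; the simulation layers agent behavior and quality priorities on top of it):

      # Toy shared-savings rule of the kind an ACO payment model encodes.
      benchmark = 10_000.0      # expected cost per CHF patient (USD/yr)
      actual = 9_200.0          # realized cost after care-management intervention
      quality_ok = True         # quality gate must be met to share savings
      share = 0.5               # provider share of savings

      savings = max(benchmark - actual, 0.0)
      provider_bonus = share * savings if quality_ok else 0.0
      print(provider_bonus)     # 400.0 USD per patient-year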

  10. On the Use of Factor-Analytic Multinomial Logit Item Response Models to Account for Individual Differences in Response Style

    ERIC Educational Resources Information Center

    Johnson, Timothy R.; Bolt, Daniel M.

    2010-01-01

    Multidimensional item response models are usually implemented to model the relationship between item responses and two or more traits of interest. We show how multidimensional multinomial logit item response models can also be used to account for individual differences in response style. This is done by specifying a factor-analytic model for…

  11. Comparison of Two Curriculum/Instructional Design Models: Ralph W. Tyler and Siena College Accounting Class, ACCT205.

    ERIC Educational Resources Information Center

    Denham, Thomas J.

    This paper compares the curriculum design model developed by R. W. Tyler (1902-1994) with a model of instructional design at Siena College, Albany, New York, as exemplified in a course taught by L. Stokes, professor of accounting. Tyler's model, which became the basis for many other models of instruction, consisted of four parts: (1) defining…

  12. Regional collaboration as a model for fostering accountability and transforming health care.

    PubMed

    Speir, Alan M; Rich, Jeffrey B; Crosby, Ivan; Fonner, Edwin

    2009-01-01

    An era of increasing budgetary constraints, misaligned payers and providers, and a competitive system where United States health outcomes are outpaced by less well-funded nations is motivating policy-makers to seek more effective means for promoting cost-effective delivery and accountability. This article illustrates an effective working model of regional collaboration focused on improving health outcomes, containing costs, and making efficient use of resources in cardiovascular surgical care. The Virginia Cardiac Surgery Quality Initiative is a decade-old collaboration of cardiac surgeons and hospital providers in Virginia working to improve outcomes and contain costs by analyzing comparative data, identifying top performers, and replicating best clinical practices on a statewide basis. The group's goals and objectives, along with 2 generations of performance improvement initiatives, are examined. These involve attempts to improve postoperative outcomes and use of tools for decision support and modeling. This work has led the group to espouse a more integrated approach to performance improvement and to formulate principles of a quality-focused payment system. This is one in which collaboration promotes regional accountability to deliver quality care on a cost-effective basis. The Virginia Cardiac Surgery Quality Initiative has attempted to test a global pricing model and has implemented a pay-for-performance program where physicians and hospitals are aligned with common objectives. Although this collaborative approach is a work in progress, the authors point out preconditions applicable to other regions and medical specialties. A road map of short-term next steps is needed to create an adaptive payment system tied to the national agenda for reforming the delivery system. PMID:19632558

  13. A margin model to account for respiration-induced tumour motion and its variability

    NASA Astrophysics Data System (ADS)

    Coolens, Catherine; Webb, Steve; Shirato, H.; Nishioka, K.; Evans, Phil M.

    2008-08-01

    In order to reduce the sensitivity of radiotherapy treatments to organ motion, compensation methods are being investigated, such as gating of treatment delivery, tracking of the tumour position, and 4D scanning and planning of the treatment. An outstanding problem with all these methods is the assumption that breathing motion is reproducible throughout the planning and delivery process of treatment. This is obviously not a realistic assumption and one that will introduce errors. A dynamic internal margin model (DIM) is presented that is designed to follow the tumour trajectory and account for the variability in respiratory motion. The model statistically describes the variation of the breathing cycle over time, i.e. the uncertainty in motion amplitude and phase reproducibility, in a polar coordinate system from which margins can be derived. This allows accounting for an additional gating-window parameter for gated treatment delivery, as well as minimizing the area of normal tissue irradiated. The model was illustrated with abdominal motion for a patient with liver cancer and tested with internal 3D lung tumour trajectories. The results confirm that the respiratory phases around exhale are the most reproducible and have the smallest variation in motion amplitude and phase (approximately 2 mm). More importantly, the margin area covering normal tissue is significantly reduced by using trajectory-specific margins (as opposed to conventional margins), as the angular component is by far the largest contributor to the margin area. The statistical approach to margin calculation, in addition, offers the possibility of advanced online verification and updating of breathing variation as more data become available.
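
    A schematic of the trajectory-specific idea (synthetic breathing data; the published DIM derives its margins from a full statistical model of amplitude and phase variability):

      import numpy as np

      rng = np.random.default_rng(3)
      phase = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)    # breathing phase
      r_mean = 10.0 + 5.0 * np.cos(phase)           # mean radial tumour excursion (mm)
      r = r_mean + rng.normal(0.0, 1.0, (50, 36))   # cycle-to-cycle variability

      # Margin that follows the trajectory: per-phase mean + 1.96 SD, instead
      # of one isotropic margin enclosing the whole excursion.
      margin = r.mean(axis=0) + 1.96 * r.std(axis=0)
      print(margin.round(1))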

  14. FPLUME-1.0: An integral volcanic plume model accounting for ash aggregation

    NASA Astrophysics Data System (ADS)

    Folch, A.; Costa, A.; Macedonio, G.

    2016-02-01

    Eruption source parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1-D (one-dimensional) cross-section-averaged eruption column model based on the buoyant plume theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients and a model for wet aggregation of ash particles in the presence of liquid water or ice. When wet aggregation occurs, the model predicts an effective grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the effective particle grain size distribution resulting from eventual wet aggregation within the plume. As test cases we apply the model to the eruptive phase-B of the 4 April 1982 El Chichón volcano eruption (México) and the 6 May 2010 Eyjafjallajökull eruption phase (Iceland). The modular structure of the code facilitates the implementation, in future code versions, of more quantitative ash aggregation parameterizations as further observations and experimental data become available for better constraining ash aggregation processes.

  15. FPLUME-1.0: An integral volcanic plume model accounting for ash aggregation

    NASA Astrophysics Data System (ADS)

    Folch, Arnau; Costa, Antonio; Macedonio, Giovanni

    2016-04-01

    Eruption Source Parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1D cross-section-averaged eruption column model based on the Buoyant Plume Theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients and a model for wet aggregation of ash particles in the presence of liquid water or ice. When wet aggregation occurs, the model predicts an "effective" grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the "effective" particle grain size distribution resulting from eventual wet aggregation within the plume. As test cases we apply the model to the eruptive phase-B of the 4 April 1982 El Chichón volcano eruption (México) and the 6 May 2010 Eyjafjallajökull eruption phase (Iceland). The modular structure of the code facilitates the implementation, in future code versions, of more quantitative ash aggregation parameterizations as further observations and experimental data become available for better constraining ash aggregation processes.

  16. A unifying modeling of plant shoot gravitropism with an explicit account of the effects of growth.

    PubMed

    Bastien, Renaud; Douady, Stéphane; Moulia, Bruno

    2014-01-01

    Gravitropism, the slow reorientation of plant growth in response to gravity, is a major determinant of the form and posture of land plants. Recently a universal model of shoot gravitropism, the AC model, was presented, in which the dynamics of the tropic movement is determined only by the conflicting controls of (1) graviception, which tends to curve the plants toward the vertical, and (2) proprioception, which tends to keep the stem straight. This model was found to be valid for many species and over two orders of magnitude of organ size. However, the motor of the movement, the elongation, was purposely neglected in the AC model. If growth effects are to be taken into account, it is necessary to consider the material derivative, i.e., the rate of change of curvature bound to expanding and convected organ elements. Here we show that it is possible to rewrite the material equation of curvature in a compact simplified form that directly expresses the curvature variation as a function of the median elongation and of the distribution of the differential growth. By using this extended model, called the ACĖ model, growth is found to have two main destabilizing effects on the tropic movement: (1) passive orientation drift, which occurs when a curved element elongates without differential growth, and (2) fixed curvature, when an element leaves the elongation zone and is no longer able to actively change its curvature. By comparing the AC and ACĖ models to experiments, these two effects are found to be negligible. Our results show that the simplified AC model can be used to analyze gravitropism and posture control in actively elongating plant organs without significant information loss. PMID:24782876
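
    In the notation commonly used for the AC model, the curvature C(s, t) along the organ evolves under the two competing controls (a sketch; the ACĖ form replaces the time derivative with the material derivative of curvature in the elongating frame):

      \[
      \frac{\partial C(s,t)}{\partial t} = -\beta \, A(s,t) - \gamma \, C(s,t),
      \]

    where A(s, t) is the local inclination from the vertical (graviception, weight β) and the second term straightens the organ (proprioception, weight γ); the balance of the two sets the final posture.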

  17. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    NASA Astrophysics Data System (ADS)

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test this variability, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ~2.3 and 2.7 φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine (< 0.063 mm) ash (3-59 %), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
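
    The ad hoc adjustment amounts to moving the fine-ash mass into an aggregate class. A minimal sketch in φ units (fine_cut = 4 φ corresponds to 0.063 mm; a Gaussian-in-φ class stands in for the log-normal-in-diameter one, and ρagg is not needed for the mass bookkeeping):

      import numpy as np

      def aggregated_gsd(phi, mass_frac, mu_agg=2.5, sigma_agg=0.5, fine_cut=4.0):
          """Move the mass finer than fine_cut (phi units) into an aggregate
          class centred on mu_agg with spread sigma_agg; total mass is
          conserved. Illustrative version of the adjustment described above."""
          fine = phi >= fine_cut
          m_fine = mass_frac[fine].sum()
          out = np.where(fine, 0.0, mass_frac)
          w = np.exp(-0.5 * ((phi - mu_agg) / sigma_agg) ** 2)
          return out + m_fine * w / w.sum()       # redistribute fines as aggregates

      phi = np.arange(-2, 8).astype(float)        # phi = -log2(diameter in mm)
      gsd = np.full(phi.shape, 0.1)               # uniform 10-bin starting GSD
      print(aggregated_gsd(phi, gsd).round(3))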

  18. A model of attractive interactions to account for fluid-fluid phase separation of protein solutions

    NASA Astrophysics Data System (ADS)

    Malfois, Marc; Bonneté, Françoise; Belloni, Luc; Tardieu, Annette

    1996-08-01

    Concentrated γ-crystallin and lysozyme solutions have been reported to undergo a fluid-fluid phase separation when cooled below a critical temperature. This behavior is under the control of the weak forces acting in solution between macromolecules. We have used small-angle x-ray scattering at the synchrotron radiation facility LURE (Orsay, France) to analyze the interaction potentials. A model of attractive interactions that depends upon three parameters (protein diameter, potential depth, and range) is able to account both for the x-ray structure factors measured at high temperature and for the low-temperature phase separation. Although van der Waals forces could be at the origin of the attractive interaction potentials, other more specific effects also contribute to the protein phase diagrams.
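
    A common three-parameter form for such a potential (hard core of diameter σ, attraction of depth ε and range δ; shown only to make the parameterization concrete, as the paper's exact functional form is not given here):

      \[
      U(r) =
      \begin{cases}
        \infty, & r < \sigma, \\
        -\epsilon \, e^{-(r - \sigma)/\delta}, & r \ge \sigma,
      \end{cases}
      \]

    with the structure factor computed from U(r) and fitted to the measured SAXS data.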

  19. Causal Inference in Occupational Epidemiology: Accounting for the Healthy Worker Effect by Using Structural Nested Models

    PubMed Central

    Naimi, Ashley I.; Richardson, David B.; Cole, Stephen R.

    2013-01-01

    In a recent issue of the Journal, Kirkeleit et al. (Am J Epidemiol. 2013;177(11):1218–1224) provided empirical evidence for the potential of the healthy worker effect in a large cohort of Norwegian workers across a range of occupations. In this commentary, we provide some historical context, define the healthy worker effect by using causal diagrams, and use simulated data to illustrate how structural nested models can be used to estimate exposure effects while accounting for the healthy worker survivor effect in 4 simple steps. We provide technical details and annotated SAS software (SAS Institute, Inc., Cary, North Carolina) code corresponding to the example analysis in the Web Appendices, available at http://aje.oxfordjournals.org/. PMID:24077092

  20. Can the Enceladus plume account for 16 GW? A Boiling Liquid Model

    NASA Astrophysics Data System (ADS)

    Nakajima, M.; Ingersoll, A. P.

    2012-12-01

    Since the detection of water vapor plumes from the tiger stripes at the south pole of Enceladus (Porco et al., 2006), several models have been suggested to explain the plume mechanism (e.g., Schmidt et al., 2008; Kieffer et al., 2009). The so-called "Icy Chamber Model" suggests that ice sublimation under the stripes causes the plumes. One problem with this model is that it cannot explain the high salinity of the plumes (Postberg et al., 2009), because ice particles condensing from a vapor are relatively salt free. Secondly, the model has difficulty explaining the observed high heat flux (15.8 GW; Howett et al., 2011) with heat conduction through the ice as the only heat source. According to previous models (Nimmo et al., 2007; Abramov and Spencer, 2009), the conductive heat flux is only 1-4 GW. Nimmo et al. (2007) suggested that the latent heat released by the condensation of a large amount of water vapor (90% by mass) to ice particles under the crack could account for the heat flux. However, Ingersoll & Pankine (2010) found that such condensation of the vapor occurs only up to 1% by mass in their parameter range. To address these problems, we investigate the "Boiling Liquid Model", which assumes that liquid water under the stripes causes the plumes. This model is favored because ice particles derived from a salty liquid can have high salinity. Enforcing conservation of mass, momentum, and energy, we construct a simple atmospheric model that includes controlled boiling and interaction between the gas and the icy wall. We first assume that the heat radiated to space comes entirely from the heat generated by condensation of the gas onto the ice wall. Varying the crack width and height as parameters, we find that the conductive heat flux is ~1 GW, just as in the Icy Chamber Model. We then investigate additional heating processes, such as radiation from the particles after they leave the crack and from those formed under the surface due to variations of

  1. A simple method to account for drift orbit effects when modeling radio frequency heating in tokamaks

    NASA Astrophysics Data System (ADS)

    Van Eester, D.

    2005-09-01

    In recent years tremendous progress has been made in the modeling of radio frequency heating in tokamaks. Not only have the adopted models gradually become more realistic, but the present generation of computers has also made it possible to study wave-particle interaction effects in previously unattainable detail. In the present paper a semi-analytical method is adopted to evaluate the dielectric response of a plasma to electromagnetic waves in the ion cyclotron domain of frequencies, accounting for drift orbit effects in an axisymmetric tokamak. The method relies on subdividing the orbit into elementary segments in which the integrations can be performed analytically or by tabulation, and it hinges on the local bookkeeping of the relation between the variables defining an orbit and those describing the magnetic geometry. Although the method allows computation of elementary building blocks for either the wave or the Fokker-Planck equation, the focus here is on the latter. Using coefficients evaluated with the proposed semi-analytical method, a 3-D Fokker-Planck code was developed that accounts for the radial width of the guiding-center orbits and thus describes not only RF-induced velocity space diffusion but also the RF-induced radial drift. Preliminary results of this new 3-D Fokker-Planck code are presented. The adopted numerical resolution relies on a subdivision of the integration domain into tetrahedra. This specific shape of the elementary volumes allows the boundary conditions (in particular the nonlocal conditions across the curved trapped/passing boundary connecting one trapped to two passing orbits) to be imposed elegantly. It also readily permits zooming in on regions where more detail is required. Casting the equation in its weak Galerkin form, it is solved relying on the finite element technique. Unless special attention is devoted to the optimization of the inversion of the system of linear equations resulting from projecting the

  2. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Duputel, Z.; Jiang, J.; Jolivet, R.; Simons, M.; Rivera, L.; Ampuero, J.-P.; Riel, B.; Owen, S. E.; Moore, A. W.; Samsonov, S. V.; Ortega Culaciati, F.; Minson, S. E.

    2015-10-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench; most of the seismic gap remains unbroken, leaving the possibility of a future large earthquake in the region.
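
    Schematically, accounting for uncertain Green's functions amounts to adding a model-prediction covariance C_p to the observational covariance C_d in the likelihood (our sketch of the general approach, for slip model m, data d and forward operator g):

      \[
      p(\mathbf{m} \mid \mathbf{d}) \propto
      \exp\!\left[ -\tfrac{1}{2} \, \big(\mathbf{d} - g(\mathbf{m})\big)^{\mathsf{T}}
      \big(\mathbf{C}_d + \mathbf{C}_p\big)^{-1} \big(\mathbf{d} - g(\mathbf{m})\big) \right] p(\mathbf{m}),
      \]

    so imperfect knowledge of Earth structure inflates the effective noise rather than being hidden by smoothing regularization.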

  3. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    PubMed

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-01-01

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range color filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance Gaussian signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased by 10-15 times the estimated internal noise level for a 3-fold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model thus accounts for both the appearance and the discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions. PMID:22438468

  4. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    PubMed

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both. PMID:27198876
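
    The resampling idea can be sketched in a few lines (categories and prevalences hypothetical; the published scheme conditions on more covariates and weights the draws accordingly):

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical smoking prevalence (never/former/current) by age stratum,
      # estimated from the case-control subset.
      p_smoke = {"<50": [0.30, 0.25, 0.45], "50+": [0.35, 0.35, 0.30]}

      def impute_smoking(strata, n_projections=200):
          """Draw one smoking category per miner and projection by resampling
          from the stratum-specific distribution."""
          cats = np.array([0, 1, 2])            # never / former / current
          return np.stack([
              np.array([rng.choice(cats, p=p_smoke[s]) for s in strata])
              for _ in range(n_projections)
          ])

      print(impute_smoking(["<50", "50+", "50+"], n_projections=3))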

  5. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    USGS Publications Warehouse

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench; most of the seismic gap remains unbroken, leaving the possibility of a future large earthquake in the region.

  6. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    NASA Astrophysics Data System (ADS)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forest trees over France, using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree
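
    Schematically, the structural model pairs a land-use (selection) equation with the species-response equation and lets their errors correlate (our sketch, not the authors' exact specification):

      \[
      L_i^{*} = \mathbf{x}_i^{\mathsf{T}} \beta + u_i, \qquad
      Y_i^{*} = f(\mathrm{clim}_i; \gamma) + v_i, \qquad
      (u_i, v_i) \sim N\!\left( \mathbf{0},
      \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right),
      \]

    where presence/absence is observed only on plots whose land-use choice permits forest (L_i^{*} > 0); a nonzero ρ is precisely the selection bias a classical SDM ignores.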

  7. Carbon accounting of forest bioenergy: from model calibrations to policy options (Invited)

    NASA Astrophysics Data System (ADS)

    Lamers, P.

    2013-12-01

    knowledge in the field by comparing different state-of-the-art temporal forest carbon modeling efforts, and discusses whether, or to what extent, a deterministic 'carbon debt' accounting is possible and appropriate. It concludes on the possible scientific and, eventually, political choices in temporal carbon accounting for regulatory frameworks, including alternative options to address unintentional carbon losses within forest ecosystems/bioenergy systems.

  8. Modelling of gas-metal arc welding taking into account metal vapour

    NASA Astrophysics Data System (ADS)

    Schnick, M.; Fuessel, U.; Hertel, M.; Haessler, M.; Spille-Kohoff, A.; Murphy, A. B.

    2010-11-01

    The most advanced numerical models of gas-metal arc welding (GMAW) neglect vaporization of metal, and assume an argon atmosphere for the arc region, as is also common practice for models of gas-tungsten arc welding (GTAW). These models predict temperatures above 20 000 K and a temperature distribution similar to GTAW arcs. However, spectroscopic temperature measurements in GMAW arcs demonstrate much lower arc temperatures. In contrast to measurements of GTAW arcs, they have shown the presence of a central local minimum of the radial temperature distribution. This paper presents a GMAW model that takes into account metal vapour and that is able to predict the local central minimum in the radial distributions of temperature and electric current density. The influence of different values for the net radiative emission coefficient of iron vapour, which vary by up to a factor of one hundred, is examined. It is shown that these net emission coefficients cause differences in the magnitudes, but not in the overall trends, of the radial distributions of temperature and current density. Further, the influence of the metal vaporization rate is investigated. We present evidence that, for higher vaporization rates, the central flow velocity inside the arc is decreased and can even change direction so that it is directed from the workpiece towards the wire, although the outer plasma flow is still directed towards the workpiece. In support of this thesis, we have attempted to reproduce the measurements of Zielińska et al. for spray-transfer mode GMAW numerically, and have obtained reasonable agreement.

  9. Towards an improvement of carbon accounting for wildfires: incorporation of charcoal production into carbon emission models

    NASA Astrophysics Data System (ADS)

    Doerr, Stefan H.; Santin, Cristina; de Groot, Bill

    2015-04-01

    Every year fires release to the atmosphere the equivalent of 20-30% of the carbon (C) emissions from fossil fuel consumption, with future emissions from wildfires expected to increase under a warming climate. Critically, however, part of the biomass C affected by fire is not emitted during burning, but converted into charcoal, which is very resistant to environmental degradation and, thus, contributes to long-term C sequestration. The magnitude of charcoal production from wildfires as a long-term C sink remains essentially unknown and, to date, charcoal production has not been included in wildfire emission and C budget models. Here we present complete inventories of charcoal production in two fuel-rich, but otherwise very different ecosystems: i) a boreal conifer forest (experimental stand-replacing crown fire; Canada, 2012) and ii) a dry eucalyptus forest (high-intensity fuel reduction burn; Australia, 2014). Our data show that, when considering all the fuel components and quantifying all the charcoal produced from each (i.e. bark, dead wood debris, fine fuels), the overall amount of charcoal produced is significant: up to a third of the biomass C affected by fire. These findings indicate that charcoal production from wildfires could represent a major and currently unaccounted-for error in the estimation of the effects of wildfires on the global C balance. We suggest an initial approach to including charcoal production in C emission models, using our case study of a boreal forest fire and the Canadian Fire Effects Model (CanFIRE). We also provide recommendations on how a 'conversion factor' for charcoal production could be relatively easily estimated once emission factors for different types of fuels and fire conditions are experimentally obtained. Ultimately, this presentation is a call for integrative collaboration between the fire emission modelling community and the charcoal community to work together towards the improvement of C accounting for wildfires.
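
    As a minimal arithmetic sketch of such a 'conversion factor' (component names follow the fuels listed above; all numbers are invented for illustration): each fuel component contributes affected carbon, of which a component-specific fraction is retained as charcoal rather than emitted, ignoring unburned residue for brevity.

      # Charcoal 'conversion factor' bookkeeping per fuel component (toy values).
      components = {                     # t C/ha affected, charcoal fraction
          "bark":       {"c_affected": 2.0, "charcoal_frac": 0.30},
          "dead_wood":  {"c_affected": 5.0, "charcoal_frac": 0.25},
          "fine_fuels": {"c_affected": 3.0, "charcoal_frac": 0.10},
      }
      emitted = sum(v["c_affected"] * (1 - v["charcoal_frac"])
                    for v in components.values())
      charcoal = sum(v["c_affected"] * v["charcoal_frac"]
                     for v in components.values())
      print(f"C emitted: {emitted:.1f} t C/ha; "
            f"C retained as charcoal: {charcoal:.1f} t C/ha")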

  10. Accounting for the kinetics in order parameter analysis: Lessons from theoretical models and a disordered peptide

    NASA Astrophysics Data System (ADS)

    Berezovska, Ganna; Prada-Gracia, Diego; Mostarda, Stefano; Rao, Francesco

    2012-11-01

    Molecular simulations as well as single molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such descriptions are inaccurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clustering of which into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy order parameter, as well as to the dynamics of a disordered peptide, illustrates the possibility of building accurate descriptions of molecular processes on the sole basis of order parameter time series, without using any supplementary information.
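
    A minimal sketch of the idea (ours; window size, cluster count and the toy signal are assumptions): describe each snapshot by the order parameter value plus its local fluctuation, cluster in that two-dimensional space, and count transitions between clusters to obtain a small Markov state model. Two states with identical means but different noise levels are indistinguishable by value alone yet separate cleanly once fluctuations are included.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      rng = np.random.default_rng(3)

      # Toy time series: two states with the same mean but different noise.
      n = 4000
      state = (np.arange(n) // 500) % 2
      q = rng.normal(0.0, np.where(state == 0, 0.05, 0.3))

      w = 25                                   # window for local fluctuations
      local_std = np.array([q[max(0, i - w):i + w].std() for i in range(n)])
      features = np.column_stack([q, local_std])

      centroids, labels = kmeans2(features, 2, minit="++", seed=4)

      # Count transitions between cluster labels to get a Markov state model.
      T = np.zeros((2, 2))
      for a, b in zip(labels[:-1], labels[1:]):
          T[a, b] += 1
      T /= T.sum(axis=1, keepdims=True)
      print("transition matrix:\n", np.round(T, 3))
      print("agreement with true states:",
            round(max((labels == state).mean(), (labels != state).mean()), 3))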

  11. A performance weighting procedure for GCMs based on explicit probabilistic models and accounting for observation uncertainty

    NASA Astrophysics Data System (ADS)

    Renard, Benjamin; Vidal, Jean-Philippe

    2016-04-01

    In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards well-performing models and discards poorly performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations; 2. It can use several members whenever available; 3. It accounts for the uncertainty in observations; 4. It assigns a weight to each GCM (all weights summing up to one); 5. It can also assign a weight to the "H0 hypothesis" that none of the GCMs in the multimodel ensemble is compatible with the observations. The application of the weighting procedure is illustrated with several case studies, including synthetic experiments, simple cases where the target GCM output is a simple univariate variable, and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure, which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure.
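
    The sketch below (ours) illustrates properties 1-4 in a deliberately stripped-down form: each GCM's members define a Gaussian predictive distribution, the observation is scored under each distribution with the observation error variance added, and the scores are normalized into weights; the H0 alternative and the multivariate/spatial cases from the presentation are omitted.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)

      obs_true = 14.0                 # "true" climate index
      obs_sigma = 0.3                 # observation uncertainty
      y_obs = obs_true + rng.normal(0, obs_sigma)

      # Each GCM: several members drawn around a model-specific bias.
      gcm_members = {
          "GCM-A": obs_true + 0.2 + rng.normal(0, 0.4, size=5),
          "GCM-B": obs_true - 1.5 + rng.normal(0, 0.4, size=5),
          "GCM-C": obs_true + 0.1 + rng.normal(0, 0.4, size=5),
      }

      weights = {}
      for name, members in gcm_members.items():
          mu, sig = members.mean(), members.std(ddof=1)
          # Likelihood of the observation under this GCM's distribution,
          # inflating the variance by the observation error.
          weights[name] = norm.pdf(y_obs, loc=mu, scale=np.hypot(sig, obs_sigma))

      total = sum(weights.values())
      for name, w in weights.items():
          print(f"{name}: weight = {w / total:.3f}")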

  12. A componential investigation of the relation between structural modelling and cognitive accounts of human judgement.

    PubMed

    Maule, A J

    1994-11-01

    Structural modelling and cognitive process approaches have developed rather different accounts of human judgement and decision making. Two hypotheses to explain these differences were evaluated in the context of a judgement task, and were formulated in terms of predictions concerning the measurement of attribute importance. First, following suggestions made by Billings and Marcus (1983), it was argued that measures of judgement behaviour based on structural modelling reflect cognitive activity late in the judgement process, whereas measures derived from cognitive process approaches reflect cognitive activity early in the process. A new componential judgement task was developed which provided not only estimates of attribute importance based on structural modelling, but also two sets of cognitive process measures based on cognitive components assumed to occur early and late in the judgement process. A greater degree of convergence between approaches was predicted when the cognitive approach was based on activity in the component occurring later in the judgement process. Second, it was argued that in previous research subjects have had unlimited time to make their judgements, reducing the need for attribute importance to provide the dominant basis for determining processing strategy. The present experiment introduced a time pressure condition and, on the basis of previous research, predicted that this would increase the amount of information processing based on attribute importance, thereby increasing the convergence between estimates of attribute importance derived from the two approaches. The first, but not the second, hypothesis was supported, and the results were discussed in terms of their implications for understanding previous differences between the two approaches to human judgement. PMID:7810352

  13. Taking the Missing Propensity into Account When Estimating Competence Scores: Evaluation of Item Response Theory Models for Nonignorable Omissions

    ERIC Educational Resources Information Center

    Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H.

    2015-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…

  14. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation. [PUCSF code

    SciTech Connect

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications.

  15. Toward an Human Resource Accounting (HRA)-Based Model for Designing an Organizational Effectiveness Audit in Education.

    ERIC Educational Resources Information Center

    Myroon, John L.

    The major purpose of this paper was to develop a Human Resource Accounting (HRA) macro-model that could be used for designing a school organizational effectiveness audit. Initially, the paper reviewed the advent and definition of HRA. In order to develop the proposed model, the different approaches to measuring effectiveness were reviewed,…

  16. Modeling Occupancy of Hosts by Mistletoe Seeds after Accounting for Imperfect Detectability

    PubMed Central

    Fadini, Rodrigo F.; Cintra, Renato

    2015-01-01

    The detection of an organism in a given site is widely used as a state variable in many metapopulation and epidemiological studies. However, failure to detect the species does not necessarily mean that it is absent. Assessing detectability is important for occupancy (presence-absence) surveys, and identifying the factors reducing detectability may help improve survey precision and efficiency. A method was used to estimate the occupancy status of host trees colonized by mistletoe seeds of Psittacanthus plagiophyllus as a function of host covariates: host size and the presence of mistletoe infections on the same or on the nearest neighboring host (the cashew tree Anacardium occidentale). The technique also evaluated the effect of taking detectability into account when estimating host occupancy by mistletoe seeds. Individual host trees were surveyed for the presence of mistletoe seeds with the aid of two or three observers to estimate detectability and occupancy. Detectability was, on average, 17% higher in focal-host trees with infected neighbors, and decreased by about 23 to 50% from the smallest to the largest hosts. The presence of mistletoe plants on the sample tree had a negligible effect on detectability. Failure to detect hosts as occupied decreased occupancy by 2.5% on average, with a maximum of 10% for large and isolated hosts. The method presented in this study has potential for use in metapopulation studies of mistletoes, especially those focusing on the seed stage, and for improving the accuracy of the occupancy estimates often used in metapopulation studies of tree-dwelling plants in general. PMID:25973754
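
    A minimal sketch of the underlying occupancy-with-detection likelihood for two observers (a generic MacKenzie-style formulation with constant psi and p; covariates such as host size are omitted and all values are toy ones):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(6)
      n = 800
      psi_true, p_true = 0.4, 0.6          # occupancy, per-observer detection
      occupied = rng.random(n) < psi_true
      det = (rng.random((n, 2)) < p_true) & occupied[:, None]  # detection histories

      def negloglik(theta):
          psi, p = expit(theta)            # keep probabilities in (0, 1)
          k = det.sum(axis=1)              # number of observers who detected
          # Probability of each site's history given occupancy, times psi;
          # all-zero histories may also come from genuinely unoccupied trees.
          lik = psi * p ** k * (1 - p) ** (2 - k)
          lik[k == 0] += (1 - psi)
          return -np.log(lik).sum()

      fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
      psi_hat, p_hat = expit(fit.x)
      print(f"psi-hat = {psi_hat:.3f} (true {psi_true}), "
            f"p-hat = {p_hat:.3f} (true {p_true})")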

  17. Design of a Competency-Based Assessment Model in the Field of Accounting

    ERIC Educational Resources Information Center

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  18. Improving Accountability Models by Using Technology-Enabled Knowledge Systems (TEKS). CSE Report 656

    ERIC Educational Resources Information Center

    Baker, Eva L.

    2005-01-01

    This paper addresses accountability and the role emerging technologies can play in improving education. The connection between accountability and technology can be best summarized in one term: feedback. Technological supports provide ways of designing, collecting and sharing information in order to provide the basis for the improvement of systems…

  19. Striving for Student Success: A Model of Shared Accountability. Education Sector Reports

    ERIC Educational Resources Information Center

    Bathgate, Kelly; Colvin, Richard Lee; Silva, Elena

    2011-01-01

    Instead of putting the entire achievement burden on schools, what would it look like to hold a whole community responsible for long-range student outcomes? This report explores the concept of "shared accountability" in education. The No Child Left Behind (NCLB) Act ushered in a new era of accountability in American education: for the first time,…

  20. Sediment erodability in sediment transport modelling: Can we account for biota effects?

    NASA Astrophysics Data System (ADS)

    Le Hir, P.; Monbet, Y.; Orvain, F.

    2007-05-01

    Sediment erosion results from hydrodynamic forcing, represented by the bottom shear stress (BSS), and from the erodability of the sediment, defined by the critical erosion shear stress and the erosion rate. Abundant literature has dealt with the effects of biological components on sediment erodability and concluded that sediment processes are highly sensitive to the biota. However, very few sediment transport models account for these effects. We provide some background on the computation of BSS, and on the classical erosion laws for fine sand and mud, followed by a brief review of biota effects with the aim of quantifying the latter into generic formulations, where applicable. The effects of macrophytes, microphytobenthos, and macrofauna are considered in succession. Marine vegetation enhances the bottom dissipation of current energy, but also reduces shear stress at the sediment-water interface, which can be significant when the shoot density is high. The microphytobenthos and secreted extracellular polymeric substances (EPS) stabilise the sediment, and an increase of up to a factor of 5 can be assigned to the erosion threshold on muddy beds. However, the consequences with respect to the erosion rate are debatable since, once the protective biofilm is eroded, the underlying sediment probably has the same erosion behaviour as bare sediment. In addition, the development of benthic diatoms tends to be seasonal, so that stabilising effects are likely to be minimal in winter. Macrofaunal effects are characterised by extreme variability. For muddy sediments, destabilisation seems to be the general trend; this can become critical when benthic communities settle on consolidated sediments that would not be eroded if they remained bare. Biodeposition and bioresuspension fluxes are mentioned, for comparison with hydrodynamically induced erosion rates. Unlike the microphytobenthos, epifaunal benthic organisms create local roughness and are likely to change the BSS generated
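
    A minimal sketch of how such biota effects can enter a classical excess-shear erosion law (the formulation is standard, the numbers illustrative): a biofilm factor of 1-5, echoing the stabilisation range quoted above, simply scales the critical erosion shear stress.

      import numpy as np

      def erosion_rate(tau, tau_ce=0.2, M=1e-4, biofilm_factor=1.0):
          """Erosion flux (kg m-2 s-1) for bottom shear stress tau (Pa).

          tau_ce: critical erosion shear stress of bare sediment (Pa)
          M: erosion-rate constant (kg m-2 s-1)
          biofilm_factor: multiplier (1-5) on tau_ce for biofilm-stabilised beds
          """
          tau_c = biofilm_factor * tau_ce
          return M * np.maximum(tau / tau_c - 1.0, 0.0)

      tau = 0.5  # Pa
      for f in (1.0, 3.0, 5.0):
          print(f"biofilm factor {f:.0f}: "
                f"E = {erosion_rate(tau, biofilm_factor=f):.2e} kg/m2/s")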

  1. Taking into account hydrological modelling uncertainty in Mediterranean flash-floods forecasting

    NASA Astrophysics Data System (ADS)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2015-04-01

    Mediterranean intense weather events often lead to devastating flash-floods (FF). Increasing the lead time of FF forecasts would permit better anticipation of their catastrophic consequences. These events are one part of the Mediterranean hydrological cycle. HyMeX (HYdrological cycle in the Mediterranean EXperiment) aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean. Extensive measurement campaigns were conducted to gather data. The first special observing period (SOP1) of these campaigns served as a test-bed for a real-time hydrological ensemble prediction system (HEPS) dedicated to FF forecasting. It produced an ensemble of quantitative discharge forecasts (QDF) using the ISBA-TOP system. ISBA-TOP is a coupling between the surface scheme ISBA and a version of TOPMODEL dedicated to Mediterranean fast-responding rivers. ISBA-TOP was driven with several quantitative precipitation forecast (QPF) ensembles based on the AROME atmospheric convection-permitting model. This made it possible to take into account the uncertainty that affects QPF and that propagates up to the QDF. This uncertainty is a major one for discharge forecasting, especially in the case of Mediterranean flash-floods. But other sources of uncertainty need to be sampled in HEPS systems. One of them is inherent to the hydrological modelling. The ISBA-TOP coupled system has been improved since the initial version that was used, for instance, during HyMeX SOP1. The initial ISBA-TOP consisted of coupling a TOPMODEL approach with ISBA-3L, which represented the soil stratification with 3 layers. The new version consists of coupling the same TOPMODEL approach with a version of ISBA where more than ten layers describe the soil vertical

  2. A Pluralistic Account of Homology: Adapting the Models to the Data

    PubMed Central

    Haggerty, Leanne S.; Jachiet, Pierre-Alain; Hanage, William P.; Fitzpatrick, David A.; Lopez, Philippe; O’Connell, Mary J.; Pisani, Davide; Wilkinson, Mark; Bapteste, Eric; McInerney, James O.

    2014-01-01

    Defining homologous genes is important in many evolutionary studies but raises obvious issues. Some of these issues are conceptual and stem from our assumptions of how a gene evolves, others are practical, and depend on the algorithmic decisions implemented in existing software. Therefore, to make progress in the study of homology, both ontological and epistemological questions must be considered. In particular, defining homologous genes cannot be solely addressed under the classic assumptions of strong tree thinking, according to which genes evolve in a strictly tree-like fashion of vertical descent and divergence and the problems of homology detection are primarily methodological. Gene homology could also be considered under a different perspective where genes evolve as “public goods,” subjected to various introgressive processes. In this latter case, defining homologous genes becomes a matter of designing models suited to the actual complexity of the data and how such complexity arises, rather than trying to fit genetic data to some a priori tree-like evolutionary model, a practice that inevitably results in the loss of much information. Here we show how important aspects of the problems raised by homology detection methods can be overcome when even more fundamental roots of these problems are addressed by analyzing public goods thinking evolutionary processes through which genes have frequently originated. This kind of thinking acknowledges distinct types of homologs, characterized by distinct patterns, in phylogenetic and nonphylogenetic unrooted or multirooted networks. In addition, we define “family resemblances” to include genes that are related through intermediate relatives, thereby placing notions of homology in the broader context of evolutionary relationships. We conclude by presenting some payoffs of adopting such a pluralistic account of homology and family relationship, which expands the scope of evolutionary analyses beyond the traditional

  3. Pharmacokinetic Modeling of Manganese III. Physiological Approaches Accounting for Background and Tracer Kinetics

    SciTech Connect

    Teeguarden, Justin G.; Gearhart, Jeffrey; Clewell, III, H. J.; Covington, Tammie R.; Nong, Andy; Anderson, Melvin E.

    2007-01-01

    assessments (Dixit et al., 2003). With most exogenous compounds, there is often no background exposure, and body concentrations are not under active control from homeostatic processes as occurs with essential nutrients. Any complete Mn PBPK model would include homeostatic regulation as an essential nutritional element and the additional exposure routes by inhalation. Two companion papers discuss the kinetic complexities of the quantitative dose-dependent alterations in hepatic and intestinal processes that control uptake and elimination of Mn (Teeguarden et al., 2006a, b). Radioactive 54Mn has been used to investigate the behavior of the more common 55Mn isotope in the body because the distribution and elimination of tracer doses reflect the overall distributional characteristics of Mn. In this paper, we take the first steps in developing a multi-route PBPK model for Mn. Here we develop a PBPK model to account for tissue concentrations and tracer kinetics of Mn under normal dietary intake. This model for normal levels of Mn will serve as the starting point for more complete model descriptions that include dose-dependencies in both oral uptake and biliary excretion. Material and Methods: Experimental Data. Two studies using 54Mn tracer were employed in model development (Furchner et al. 1966; Wieczorek and Oberdorster 1989). In Furchner et al. (1966), male Sprague-Dawley rats received an ip injection of carrier-free 54MnCl2 while maintained on standard rodent feed containing ~45 ppm Mn. Tissue radioactivity of 54Mn was measured by liquid scintillation counting between post-injection days 1 and 89 and reported as percent of administered dose per kg tissue. 54Mn time courses were reported for liver, kidney, bone, brain, muscle, blood, lung and whole body. Because ip uptake is via the portal circulation to the liver, this data set had information on the distribution and clearance behaviors of Mn entering the systemic circulation from the liver.
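
    A minimal sketch of the kind of compartmental tracer kinetics involved (the structure and rate constants are ours, purely illustrative, with the ip dose routed to the liver first as described above):

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy 54Mn tracer model: blood, liver and "rest of body" compartments,
      # with biliary elimination from the liver. Rates are per day.
      k = {"blood_liver": 2.0, "liver_blood": 0.5,
           "blood_body": 1.0, "body_blood": 0.2, "bile": 0.3}

      def rhs(t, y):
          blood, liver, body = y
          return [
              -(k["blood_liver"] + k["blood_body"]) * blood
                  + k["liver_blood"] * liver + k["body_blood"] * body,
              k["blood_liver"] * blood - (k["liver_blood"] + k["bile"]) * liver,
              k["blood_body"] * blood - k["body_blood"] * body,
          ]

      # ip injection reaches the liver via the portal circulation first:
      # start all tracer in the liver pool.
      sol = solve_ivp(rhs, (0, 89), [0.0, 1.0, 0.0], t_eval=[1, 7, 30, 89])
      for t, (bl, li, bo) in zip(sol.t, sol.y.T):
          print(f"day {t:4.0f}: blood {bl:.3f}, liver {li:.3f}, "
                f"body {bo:.3f} (fraction of dose)")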

  4. Accounting for water management issues within hydrological simulation: Alternative modelling options and a network optimization approach

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Nalbantis, Ioannis; Rozos, Evangelos; Koutsoyiannis, Demetris

    2010-05-01

    In mixed natural and artificialized river basins, many complexities arise due to anthropogenic interventions in the hydrological cycle, including abstractions from surface water bodies, groundwater pumping or recharge, and water returns through drainage systems. Typical engineering approaches adopt a multi-stage modelling procedure, with the aim of handling the complexity of process interactions and the lack of measured abstractions. In this context, the entire hydrosystem is separated into natural and artificial sub-systems or components; the natural ones are modelled individually, and their predictions (i.e. hydrological fluxes) are transferred to the artificial components as inputs to a water management scheme. To account for the interactions between the various components, an iterative procedure is essential, whereby the outputs of the artificial sub-systems (i.e. abstractions) become inputs to the natural ones. However, this strategy suffers from multiple shortcomings, since it presupposes that purely natural sub-systems can be located and that sufficient information is available for each sub-system modelled, including suitable, i.e. "unmodified", data for calibrating the hydrological component. In addition, implementing such a strategy is ineffective when the entire scheme runs in stochastic simulation mode. To cope with the above drawbacks, we developed a generalized modelling framework, following a network optimization approach. This originates from graph theory, which has been successfully implemented within some advanced computer packages for water resource systems analysis. The user formulates a unified system comprising the hydrographical network and the typical components of a water management network (aqueducts, pumps, junctions, demand nodes, etc.). Input data for the latter include hydraulic properties, constraints, targets, priorities and operation costs. The real-world system is described through a conceptual graph, whose dummy properties
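
    A minimal sketch of the graph-based view (ours; the topology, capacities and costs are invented): the combined hydrographical/management system becomes a min-cost network flow problem in which edge costs encode priorities.

      import networkx as nx

      G = nx.DiGraph()

      # demand < 0 means supply, demand > 0 means consumption (networkx convention)
      G.add_node("reservoir", demand=-100)
      G.add_node("city", demand=60)
      G.add_node("irrigation", demand=40)
      G.add_node("junction")

      # capacity in hm3/month; weight as an operation-cost / priority surrogate
      G.add_edge("reservoir", "junction", capacity=120, weight=1)
      G.add_edge("junction", "city", capacity=80, weight=1)
      G.add_edge("junction", "irrigation", capacity=60, weight=3)

      flow = nx.min_cost_flow(G)
      for u, targets in flow.items():
          for v, q in targets.items():
              print(f"{u} -> {v}: {q} hm3/month")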

  5. Crash Simulation of Roll Formed Parts by Damage Modelling Taking Into Account Preforming Effects

    NASA Astrophysics Data System (ADS)

    Till, Edwin T.; Hackl, Benjamin; Schauer, Hermann

    2011-08-01

    Complex phase steels of strength levels up to 1200 MPa are suitable for roll forming. These may be applied in automotive structures to enhance crashworthiness, e.g. as stiffeners in doors. Even though the strain hardening of the material is low, there is considerable bending formability. However, ductility decreases with the strength level. Higher strength requires more attention to the structural integrity of the part during the process planning stage and with respect to the crash behavior. Nowadays, numerical simulation is used as a process design tool for roll forming in a production environment. The assessment of the stability of a roll forming process is quite challenging for AHSS grades. There are two objectives of the present work: first, to provide a reliable assessment tool to the roll forming analyst for failure prediction; second, to establish simulation procedures in order to predict the part's behavior in crash applications, taking into account damage and failure. Today, adequate ductile fracture models are available which can be used in forming and crash applications. These continuum models are based on failure strain curves or surfaces which depend on the stress triaxiality (e.g. Crach or GISSMO) and may additionally include the Lode angle (extended Mohr-Coulomb or extended GISSMO model). A challenging task is to obtain the respective failure strain curves. The paper describes in detail how these failure strain curves are obtained using small-scale tests within voestalpine Stahl: notch tensile, bulge and shear tests. It is shown that capturing the surface strains is not sufficient for obtaining reliable material failure parameters. The simulation tool for roll forming at the site of voestalpine Krems is Copra® FEA RF, which is a 3D continuum finite element solver based on MSC.Marc. The simulation environment for crash applications is LS-DYNA. Shell elements are used for this type of analysis. A major task is to provide results of

  6. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    NASA Astrophysics Data System (ADS)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term, which accounts for interactions between resolved scales and unresolved scales, and the Reynolds term, which accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models are thus desirable that better conform to the physics of the true SGS stress while remaining stable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term: in terms of dissipation spectra, and in terms of the probability density function (pdf) of dissipation in physical space, both positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross-term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also does backscatter. It also has an eddy-viscosity multiscale model part that
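
    A minimal 1D sketch (ours; the paper works in 3D with de-aliased pseudo-spectral simulations) of the decomposition the text describes: with a sharp spectral cutoff filter, the SGS stress splits exactly into Leonard, cross and Reynolds parts.

      import numpy as np

      rng = np.random.default_rng(7)
      n, kc = 256, 16                       # grid size, cutoff wavenumber

      def spectral_filter(u, kc):
          """Sharp low-pass filter: zero all Fourier modes with |k| > kc."""
          uh = np.fft.fft(u)
          k = np.fft.fftfreq(n, d=1.0 / n)
          uh[np.abs(k) > kc] = 0.0
          return np.real(np.fft.ifft(uh))

      # Synthetic velocity field with a power-law-like spectrum.
      phases = rng.uniform(0, 2 * np.pi, n)
      k = np.fft.fftfreq(n, d=1.0 / n)
      amp = np.where(k != 0, np.abs(k) ** -0.8, 0.0)
      u = np.real(np.fft.ifft(amp * np.exp(1j * phases) * n))

      ubar = spectral_filter(u, kc)         # resolved field
      uprime = u - ubar                     # subgrid field

      # tau = bar(u u) - bar(u) bar(u) = Leonard + cross + Reynolds
      leonard = spectral_filter(ubar * ubar, kc) - ubar * ubar
      cross = 2.0 * spectral_filter(ubar * uprime, kc)
      reynolds = spectral_filter(uprime * uprime, kc)
      tau = spectral_filter(u * u, kc) - ubar * ubar

      print("max |tau - (L + C + R)| =",
            np.abs(tau - (leonard + cross + reynolds)).max())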

  7. Multiple-breed reaction norm animal model accounting for robustness and heteroskedastic in a Nelore-Angus crossed population.

    PubMed

    Oliveira, M M; Santana, M L; Cardoso, F F

    2016-07-01

    Our objective was to genetically characterize post-weaning weight gain (PWG), over a 345-day period after weaning, of Brangus-Ibagé (Nelore×Angus) cattle. Records (n=4016) were from the foundation herd of the Embrapa South Livestock Center. A Bayesian approach was used to assess genotype by environment (G×E) interaction and to identify a suitable model for the estimation of genetic parameters and use in genetic evaluation. A robust and heteroscedastic reaction norm multiple-breed animal model was proposed. The model accounted for heterogeneity of residual variance associated with effects of breed, heterozygosity, sex and contemporary group, and was robust with respect to outliers. Additive genetic effects were modeled for the intercept and slope of a reaction norm to changes in the environmental gradient. Inference was based on a Markov chain Monte Carlo run of 110 000 cycles, after 10 000 cycles of burn-in. Bayesian model choice criteria indicated that the proposed model was superior to simpler sub-models that did not account for G×E interaction, multiple-breed structure, robustness and heteroscedasticity. We conclude that, for the Brangus-Ibagé population, these factors should be jointly accounted for in the genetic evaluation of PWG. Heritability estimates increased proportionally with improvement in the environmental conditions gradient. Therefore, an increasing proportion of the differences in performance among animals was explained by genetic rather than environmental factors as rearing conditions improved. As a consequence, response to selection may be increased in favorable environments. PMID:26754914

  8. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology

    PubMed Central

    2013-01-01

    Background: The relative roles of climate variability and population-related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. Methods: A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles, which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Results: Model simulations of entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies from a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine spatial resolution regional integrations for Eastern Africa reproduce the Malaria Atlas Project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model broadly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities, probably due to the neglect of population migration. Conclusions: A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions. PMID:23419192

  9. Measuring What Matters: A Stronger Accountability Model for Teacher Education [Executive Summary

    ERIC Educational Resources Information Center

    Crowe, Edward

    2010-01-01

    Our current system for holding U.S. teacher education programs accountable doesn't guarantee program quality or serve the needs of schools and students. State oversight for teacher preparation programs mostly ignores the impact of graduates on the K-12 students they teach, and it gives little attention to where graduates teach or how long they…

  10. Local Accountability in Vocational Education: A Theoretical Model and Its Limitations in Practice.

    ERIC Educational Resources Information Center

    Stecher, Brian M.; Hanser, Lawrence M.

    A study sought to determine the extent to which local accountability systems exist in vocational education and to describe the nature of the underlying relationships between such programs and their constituents. Data were collected through interviews from two local vocational education programs in each of five states (California, Florida,…

  11. Alternative Schools Accountability Model: 2001-2002 Indicator Selection and Reporting Guide.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    California has developed an alternative accountability system for schools with fewer than 100 students, alternative schools of various kinds, community day schools, and other schools under the jurisdiction of a county board of education or a county superintendent of schools. This document is a guide to assist local administrators in completing the…

  12. Towards a Model of Stewardship and Accountability in Support of Innovation and "Good" Failure.

    PubMed

    Denny, Keith; Veillard, Jeremy

    2015-01-01

    From an evolutionary perspective, failures of imagination and missed opportunities to learn from experimentation are as potentially harmful for the health system as failures of practice. The conundrum is encapsulated in the fact that while commentators are steadfast about the need on the part of the stewards of the health system to avoid any waste of public dollars, they are also insistent about the need for innovation. There is tension between these two imperatives that is often unrecognized: the pursuit of efficiency, narrowly defined, can crowd out the goal of innovation by insisting on the elimination of "good waste" (the costs of experimentation) as well as "bad waste" (the costs of inefficiency) (Potts 2009). This tension is mirrored in the two broad drivers of performance reporting in health systems: public accountability and quality improvement. Health organizations, predominantly funded by public funds, are necessarily accountable for the ways in which those funds are used and outcomes achieved. This paper reviews how accountability relationships should be re-examined to create room for "good failure" and to ensure that system accountability does not become a barrier to performance improvement. PMID:26853610

  13. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    ERIC Educational Resources Information Center

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  14. Accountability to Whom? For What? Teacher Identity and the Force Field Model of Teacher Development

    ERIC Educational Resources Information Center

    Samuel, Michael

    2008-01-01

    The rise of fundamentalism in the sphere of teacher education points to a swing back towards teachers as service workers for State agendas. Increasingly, teachers are expected to account for the outcomes of their practices. This article traces the trajectory of trends in teacher education over the past five decades arguing that this "new…

  15. Growth Models and Accountability: A Recipe for Remaking ESEA. Education Sector Reports

    ERIC Educational Resources Information Center

    Carey, Kevin; Manwaring, Robert

    2011-01-01

    Under the federal No Child Left Behind Act, schools were held almost exclusively accountable for absolute levels of student performance. But that meant that even schools that were making great strides with students were still labeled as "failing," just because the students had not yet made it all the way to a "proficient" level of achievement. As…

  16. Modeling Task Switching without Switching Tasks: A Short-Term Priming Account of Explicitly Cued Performance

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Logan, Gordon D.

    2005-01-01

    Switch costs in task switching are commonly attributed to an executive control process of task-set reconfiguration, particularly in studies involving the explicit task-cuing procedure. The authors propose an alternative account of explicitly cued performance that is based on 2 mechanisms: priming of cue encoding from residual activation of cues in…

  17. Accounting for Global Climate Model Projection Uncertainty in Modern Statistical Downscaling

    SciTech Connect

    Johannesson, G

    2010-03-17

    configuring and running a regional climate model (RCM) nested within a given GCM projection (i.e., the GCM provides boundary conditions for the RCM). On the other hand, statistical downscaling aims at establishing a statistical relationship between observed local/regional climate variables of interest and synoptic (GCM-scale) climate predictors. The resulting empirical relationship is then applied to future GCM projections. A comparison of the pros and cons of dynamical versus statistical downscaling is outside the scope of this effort, but has been extensively studied, and the reader is referred to Wilby et al. (1998); Murphy (1999); Wood et al. (2004); Benestad et al. (2007); Fowler et al. (2007), and references therein. The scope of this effort is to study a methodology, a statistical framework, to propagate and account for GCM uncertainty in a regional statistical downscaling (SDS) assessment. In particular, we explore how to leverage an ensemble of GCM projections to quantify the impact of the GCM uncertainty in such an assessment. There are three main components to this effort: (1) gather the necessary climate-related data for a regional SDS study, including multiple GCM projections, (2) carry out SDS, and (3) assess the uncertainty. The first step is carried out using tools written in the Python programming language, while analysis tools were developed in the statistical programming language R; see Figure 1.
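
    A minimal sketch of the idea (ours, with synthetic data; the study's actual tooling is more elaborate): a downscaling regression is trained on the historical period, then applied to each member of a GCM ensemble so the across-member spread quantifies the GCM contribution to the downscaled uncertainty.

      import numpy as np

      rng = np.random.default_rng(8)

      # Historical period: synoptic predictor x and local observed variable y.
      x_hist = rng.normal(0, 1, 200)
      y_hist = 2.0 + 1.5 * x_hist + rng.normal(0, 0.5, 200)
      slope, intercept = np.polyfit(x_hist, y_hist, 1)

      # Future period: an ensemble of GCM projections of the same predictor.
      gcm_ensemble = [rng.normal(mu, 1.0, 100) for mu in (0.8, 1.0, 1.4, 0.6)]

      # Downscale each member and summarise the spread across the ensemble.
      local_means = [np.mean(intercept + slope * x_fut) for x_fut in gcm_ensemble]
      print("downscaled local mean per GCM:", np.round(local_means, 2))
      print(f"ensemble mean +/- spread: "
            f"{np.mean(local_means):.2f} +/- {np.std(local_means):.2f}")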

  18. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    NASA Astrophysics Data System (ADS)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

    In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae connecting the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
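
    A minimal SIR sketch (ours) of the mechanism: births continually replenish the susceptible pool of the vaccine-refusing community, so outbreaks recur once susceptibles rebuild past the epidemic threshold. The parameter values below are illustrative and give multi-year cycles; reproducing the observed roughly twelve-year spacing requires the paper's calibrated parameters.

      import numpy as np
      from scipy.integrate import solve_ivp

      # SIR with per-capita birth/death rate mu; all rates are per year.
      beta, gamma, mu = 400.0, 365.0 / 10.0, 0.02   # contact, recovery (10 d), births

      def sir(t, y):
          s, i, r = y
          return [mu - beta * s * i - mu * s,
                  beta * s * i - gamma * i - mu * i,
                  gamma * i - mu * r]

      y0 = [0.15, 1e-5, 0.85 - 1e-5]                # fractions of the community
      sol = solve_ivp(sir, (0, 60), y0, max_step=0.01, rtol=1e-8, atol=1e-12)

      # Spacing between successive epidemic peaks.
      i = sol.y[1]
      peaks = [sol.t[k] for k in range(1, len(i) - 1)
               if i[k - 1] < i[k] > i[k + 1] and i[k] > 1e-5]
      print("outbreak years:", np.round(peaks, 1))
      print("inter-outbreak spacing (yr):", np.round(np.diff(peaks), 1))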

  19. Using satellite and multi-modeling for improving soil moisture and streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Toll, David; Li, Bailing; Zhan, Xiwu; Cosgrove, Brian

    2010-05-01

    Work on this project aims to improve the streamflow forecasts of the NOAA River Forecast Centers (RFCs) throughout the U.S. using the multi-model capability of the NASA Land Information System and remote sensing data provided by AMSR-E for soil moisture. The RFCs address a range of issues, including peak and low flow predictions as well as river floods and flash floods. The NASA Land Information System (LIS) provides a data integration framework for combining a range of ancillary and satellite data with state-of-the-art data assimilation capabilities. We currently include: 1) the Noah land surface model (LSM), which simulates soil moisture (both liquid and frozen), soil temperature, skin temperature, snowpack water equivalent, snowpack density, canopy water content, and the traditional energy flux and water flux terms of the surface energy and surface water balance; 2) the Sacramento Distributed model, which is based on the lumped 'SAC-SMA' model used for hydrological simulations; and 3) the Catchment land surface model, which is distinctive in the way land surface elements are depicted as hydrological catchments. Results are presented from assimilating AMSR-E (Advanced Microwave Scanning Radiometer for EOS) soil moisture into the Noah LSM using ensemble Kalman filter data assimilation. Results for a test site in Oklahoma, US, show significant improvement in soil moisture estimation when assimilating AMSR-E data. We used a conservation-of-mass procedure within a soil column to provide a more physically based approach to transferring the observed soil moisture state to the lower soil moisture profiles. Overall, the AMSR-E results show improvement in estimating the true spatial mean of soil moisture. Noah LSM comparisons to determine whether AMSR-E contributed to improved streamflow were inconclusive. More accurate hydrologic results are expected from the new SMOS (Soil Moisture and Ocean Salinity) and the future SMAP (Soil Moisture Active Passive) missions. Future work will compare
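
    A minimal sketch of the ensemble Kalman filter update at the heart of such assimilation (ours; scalar state, identity observation operator, toy values, and no retrieval bias correction):

      import numpy as np

      rng = np.random.default_rng(9)

      n_ens = 50
      truth = 0.30                              # volumetric soil moisture (m3/m3)
      prior = rng.normal(0.24, 0.03, n_ens)     # biased-dry model ensemble
      obs_err = 0.02
      y = truth + rng.normal(0, obs_err)        # satellite retrieval

      # Kalman gain from ensemble statistics (observation operator H = identity).
      P = prior.var(ddof=1)
      K = P / (P + obs_err ** 2)

      # Perturbed-observation EnKF update.
      y_pert = y + rng.normal(0, obs_err, n_ens)
      posterior = prior + K * (y_pert - prior)

      print(f"prior mean {prior.mean():.3f} -> posterior mean "
            f"{posterior.mean():.3f} (truth {truth}, obs {y:.3f})")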

  20. On the treatment of evapotranspiration, soil moisture accounting, and aquifer recharge in monthly water balance models.

    USGS Publications Warehouse

    Alley, W.M.

    1984-01-01

    Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation.-from Author
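
    For reference, the Thomas abcd model mentioned above is compact enough to sketch in full (standard textbook formulation; the parameter values and forcings below are illustrative, not fitted to the New Jersey sites):

      import numpy as np

      def abcd(P, PET, a=0.98, b=150.0, c=0.3, d=0.1, S0=50.0, G0=10.0):
          """Return simulated monthly streamflow (same units as P, e.g. mm)."""
          S, G, Q = S0, G0, []
          for p, pet in zip(P, PET):
              W = p + S                                  # available water
              opp = (W + b) / (2 * a)                    # ET opportunity Y(W)
              Y = opp - np.sqrt(opp ** 2 - W * b / a)
              S = Y * np.exp(-pet / b)                   # soil moisture carryover
              avail = W - Y                              # runoff + recharge
              G = (G + c * avail) / (1 + d)              # groundwater storage
              Q.append((1 - c) * avail + d * G)          # direct runoff + baseflow
          return np.array(Q)

      P = np.array([90, 80, 70, 60, 50, 30, 20, 25, 40, 60, 80, 95], dtype=float)
      PET = np.array([10, 15, 30, 50, 80, 110, 120, 100, 70, 40, 20, 10], dtype=float)
      print(np.round(abcd(P, PET), 1))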

  1. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    ERIC Educational Resources Information Center

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…
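
    A minimal sketch of the AFM prediction equation as it is usually written (toy parameter values; PFA differs mainly in splitting the practice count into prior successes and failures): the log-odds of a correct response combine student proficiency with an easiness term and a practice-scaled learning-rate term for each knowledge component the item exercises.

      import numpy as np
      from scipy.special import expit

      theta = 0.2                     # student proficiency
      beta = np.array([-0.5, 0.3])    # knowledge component (KC) easiness
      gamma = np.array([0.15, 0.10])  # KC learning rates (per opportunity)
      q = np.array([1, 0])            # Q-matrix row: this item uses KC 1 only
      t = np.array([3, 7])            # prior practice opportunities per KC

      logit_p = theta + np.sum(q * (beta + gamma * t))
      print(f"P(correct) = {expit(logit_p):.3f}")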

  2. Recommended Method To Account For Daughter Ingrowth For The Portsmouth On-Site Waste Disposal Facility Performance Assessment Modeling

    SciTech Connect

    Phifer, Mark A.; Smith, Frank G. III

    2013-06-21

    A 3-D STOMP model has been developed for the Portsmouth On-Site Waste Disposal Facility (OSWDF) at Site D, as outlined in Appendix K of FBP 2013. This model projects the flow and transport of the following radionuclides to various points of assessment: Tc-99, U-234, U-235, U-236, U-238, Am-241, Np-237, Pu-238, Pu-239, Pu-240, Th-228, and Th-230. The model includes the radioactive decay of these parents but not the associated daughter ingrowth, a capability that STOMP lacks. The Savannah River National Laboratory (SRNL) provides herein a recommended method to account for daughter ingrowth in association with the Portsmouth OSWDF Performance Assessment (PA) modeling.
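
    Daughter ingrowth for a two-member chain follows the classical Bateman expression; a minimal sketch (ours, not SRNL's recommended method, using approximate published half-lives for the U-234/Th-230 pair from the list above and zero initial daughter activity):

      import numpy as np

      LN2 = np.log(2.0)
      lam_parent = LN2 / 2.455e5     # U-234 half-life, ~245,500 yr
      lam_daughter = LN2 / 7.54e4    # Th-230 half-life, ~75,400 yr

      def daughter_activity(t, A_parent0=1.0):
          """Daughter activity at time t (years) from a unit-activity parent."""
          return (A_parent0 * lam_daughter / (lam_daughter - lam_parent)
                  * (np.exp(-lam_parent * t) - np.exp(-lam_daughter * t)))

      for t in (1e3, 1e4, 1e5):
          print(f"t = {t:8.0f} yr: Th-230 ingrowth activity = "
                f"{daughter_activity(t):.4f}")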

  3. An Instructional Model for Preparing Accounting/Computing Clerks in Michigan Secondary School Office Education Programs, Part I and Part II.

    ERIC Educational Resources Information Center

    Moskovis, L. Michael; McKitrick, Max O.

    Outlined in this two-part document is a model for the implementation of a business-industry oriented program designed to provide high school seniors with updated training in the skills and concepts necessary for developing competencies in entry-level and second-level accounting jobs that involve accounts receivable, accounts payable, and payroll…

  4. An Individual-Based Model of Zebrafish Population Dynamics Accounting for Energy Dynamics

    PubMed Central

    Beaudouin, Rémy; Goussen, Benoit; Piccini, Benjamin; Augustine, Starrlight; Devillers, James; Brion, François; Péry, Alexandre R. R.

    2015-01-01

    Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to supporting and enhancing ecological risk assessment. To achieve this, a dynamic energy budget model for individual zebrafish (DEB model) was coupled to an individual-based model of zebrafish population dynamics (IBM). Next, we fitted the DEB model to new experimental data on zebrafish growth and reproduction, thus improving existing models. We further analysed the DEB model and DEB-IBM using a sensitivity analysis. Finally, the predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations, and the predicted population dynamics are realistic. While our zebrafish DEB-IBM can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can already serve to predict the impact of compounds at the population level. PMID:25938409

  5. Comparative evaluation of ensemble Kalman filter, particle filter and variational techniques for river discharge forecast

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Gebremichael, M.; Lee, H.; Hopson, T. M.

    2012-12-01

    Hydrologic data assimilation techniques provide a means to improve river discharge forecasts through updating hydrologic model states and correcting the atmospheric forcing data by optimally combining model outputs with observations. The performance of the assimilation procedure, however, depends on the data assimilation techniques used and the amount of uncertainty in the data sets. To investigate these effects, we comparatively evaluate three data assimilation techniques, the ensemble Kalman filter (EnKF), the particle filter (PF), and a variational (VAR) technique, which assimilate discharge and synthetic soil moisture data at various uncertainty levels into the Sacramento Soil Moisture Accounting (SAC-SMA) model used by the National Weather Service (NWS) for river forecasting in the United States. The study basin is the Greens Bayou watershed, with an area of 178 km2, in eastern Texas. In the presentation, we summarize the results of the comparisons and discuss the challenges of applying each technique in hydrologic applications.
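
    A minimal bootstrap particle filter sketch (ours; a toy scalar storage stands in for the SAC-SMA states, and all noise levels are invented) showing the weight-and-resample cycle common to the PF experiments described above:

      import numpy as np

      rng = np.random.default_rng(10)
      n_particles, n_steps, obs_err = 500, 30, 0.5

      def outflow(s):
          return 0.1 * s ** 1.2          # toy nonlinear storage-discharge relation

      truth = 15.0
      particles = np.clip(rng.normal(10.0, 3.0, n_particles), 0.0, None)

      for t in range(n_steps):
          forcing = 2.0 + np.sin(t / 3.0)
          truth = truth + forcing - outflow(truth)              # synthetic truth
          particles = particles + forcing - outflow(particles) \
              + rng.normal(0, 0.5, n_particles)                 # model + noise
          particles = np.clip(particles, 0.0, None)
          y = outflow(truth) + rng.normal(0, obs_err)           # observed discharge
          w = np.exp(-0.5 * ((outflow(particles) - y) / obs_err) ** 2)
          w /= w.sum()
          # Multinomial resampling via inverse-CDF lookup of the weights.
          idx = np.minimum(np.searchsorted(np.cumsum(w), rng.random(n_particles)),
                           n_particles - 1)
          particles = particles[idx]

      print(f"filtered storage estimate {particles.mean():.2f} vs truth {truth:.2f}")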

  6. Accounting for surface roughness in a physically-based urban wash-off model

    NASA Astrophysics Data System (ADS)

    Shaw, Stephen B.; Parlange, Jean-Yves; Lebowitz, Molly; Todd Walter, M.

    2009-03-01

    To date, urban wash-off models have largely ignored the role of surface roughness in controlling particulate mass loss. We propose a mechanistic model in which particles are ejected by raindrops from surface cavities and travel laterally at the velocity of the overland flow until they are recaptured. In the model, cavities of differing depth and diameter have different ejection rates. The model has a form similar to the multiple-rate mass transfer models more commonly used to simulate solute transport in groundwater. An analytical solution for a model consisting of two possible cavity geometries is fit to breakthrough curves from sediment wash-off experiments. The experiments are conducted on a 0.8-m flume under artificial rainfall, with a surface constructed of casts of asphalt. The experiments use fine sand (~250 μm) and rainfall rates equivalent to that of a 2-year, 5-min storm in non-coastal regions of the Northeastern United States. Model parameters can be attributed to specific physical features of the surface cavities, particles, or rainfall rate and can be determined with limited calibration. At the plot scale, the model replicates an initial first flush and then settles into a more gradual loss rate, which is noticeably different from the more rapid mass exhaustion implied by use of the common exponential wash-off model. Insights from this model could lead to improved design and placement of water quality management structures in urban landscapes.
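
    A minimal sketch contrasting the common exponential wash-off model with a two-rate release of the kind proposed above (our illustrative parameters; the paper fits an analytical solution to flume breakthrough curves): the fast cavity class produces the first flush, while the slow class sustains the gradual tail.

      import numpy as np

      t = np.linspace(0.0, 30.0, 7)       # minutes
      k_exp = 0.15                        # 1/min, single exponential rate

      # Two cavity classes: a fast-releasing fraction and a slow-releasing one.
      f_fast, k_fast, k_slow = 0.6, 0.50, 0.03

      m_exp = np.exp(-k_exp * t)                                  # mass remaining
      m_two = f_fast * np.exp(-k_fast * t) + (1 - f_fast) * np.exp(-k_slow * t)

      for ti, a, b in zip(t, m_exp, m_two):
          print(f"t = {ti:4.0f} min: exponential {a:.2f}, two-rate {b:.2f}")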

  7. Accounting for spatial effects in land use regression for urban air pollution modeling.

    PubMed

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects, e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models, may be difficult to apply simultaneously. We used an alternative approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R(2) values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. PMID:26530819
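
    A minimal GWR sketch (ours; toy data with a single predictor, Gaussian distance-decay weights and a fixed bandwidth) showing how locally weighted least squares lets the regression coefficient vary over space:

      import numpy as np

      rng = np.random.default_rng(11)
      n = 200
      xy = rng.uniform(0, 10, (n, 2))                    # site coordinates
      traffic = rng.uniform(0, 1, n)                     # a single LUR predictor
      beta_local = 2.0 + 0.3 * xy[:, 0]                  # spatially varying effect
      no2 = 5.0 + beta_local * traffic + rng.normal(0, 0.3, n)

      def gwr_at(site, bandwidth=2.0):
          d = np.linalg.norm(xy - site, axis=1)
          w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian kernel weights
          X = np.column_stack([np.ones(n), traffic])
          WX = X * w[:, None]
          coef, *_ = np.linalg.lstsq(WX.T @ X, WX.T @ no2, rcond=None)
          return coef

      for site in ([1.0, 5.0], [9.0, 5.0]):
          b0, b1 = gwr_at(np.array(site))
          print(f"site {site}: local traffic slope = {b1:.2f}")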

  8. An intuitive Bayesian spatial model for disease mapping that accounts for scaling.

    PubMed

    Riebler, Andrea; Sørbye, Sigrunn H; Simpson, Daniel; Rue, Håvard

    2016-08-01

    In recent years, disease mapping studies have become a routine application within geographical epidemiology and are typically analysed within a Bayesian hierarchical model formulation. A variety of model formulations for the latent level have been proposed, but all come with inherent issues. In the classical BYM (Besag, York and Mollié) model, the spatially structured component cannot be seen independently from the unstructured component. This makes prior definitions for the hyperparameters of the two random effects challenging. There are alternative model formulations that address this confounding; however, the issue of how to choose interpretable hyperpriors is still unsolved. Here, we discuss a recently proposed parameterisation of the BYM model that leads to improved parameter control, as the hyperparameters can be seen independently from each other. Furthermore, the need for a scaled spatial component is addressed, which facilitates assignment of interpretable hyperpriors and makes them transferable between spatial applications with different graph structures. The hyperparameters themselves are used to define flexible extensions of simple base models. Consequently, penalised complexity priors for these parameters can be derived based on the information-theoretic distance from the flexible model to the base model, giving priors with clear interpretation. We provide implementation details for the new model formulation which preserve sparsity properties, and we investigate systematically the model performance and compare it to existing parameterisations. Through a simulation study, we show that the new model performs well, showing both good learning abilities and good shrinkage behaviour. In terms of model choice criteria, the proposed model performs at least as well as existing parameterisations, but only the new formulation offers parameters that are interpretable and hyperpriors that have a clear meaning. PMID:27566770
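
    In the notation commonly used for this kind of reparameterisation (ours; see the paper for the precise formulation), the latent field combines an unstructured and a scaled structured effect through a mixing parameter:

      \[
        b_i \;=\; \sigma \left( \sqrt{1-\phi}\, v_i \;+\; \sqrt{\phi}\, u_i^{\star} \right),
        \qquad v_i \sim \mathcal{N}(0, 1),
      \]

    where u* is the spatially structured (ICAR) effect scaled so that its generalized variance equals one, sigma > 0 is the overall standard deviation, and phi in [0, 1] is the proportion of the marginal variance explained by the structured component; penalised complexity priors then shrink towards the base model phi = 0.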

  9. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    NASA Astrophysics Data System (ADS)

    Branlard, E.; Gaunaa, M.; Machefaux, E.

    2014-06-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum (BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw models. The model and its implementation are presented. Data from the MEXICO experiment are used as a basis for validation. Three tools using the same 2D airfoil coefficient data are compared: a BEM code, an actuator-line code and a vortex code. The vortex code is further used to validate the results from the newly implemented BEM yaw model. Significant improvements are obtained for the prediction of loads and induced velocities. Further relaxations of the main assumptions of the model are briefly presented and discussed.

  10. Modelling reverse characteristics of power LEDs with thermal phenomena taken into account

    NASA Astrophysics Data System (ADS)

    Ptak, Przemysław; Górecki, Krzysztof

    2016-01-01

    This paper concerns the modelling of power LED characteristics, with particular reference to thermal phenomena. Special attention is paid to modelling the characteristics of the circuit protecting the considered device against excessive reverse voltage, and to describing the influence of temperature on optical power. The network form of the worked-out model is presented, and results of experimental verification of this model for selected diodes operating under different cooling conditions are described. Very good agreement between the calculated and measured characteristics is obtained.
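
    A minimal compact electrothermal relation of the kind underlying such network models (our illustration, not the authors' network): the junction temperature tracks the dissipated power, i.e. electrical input minus the power emitted as light:

```latex
% T_j: junction temperature, T_a: ambient temperature,
% R_th: junction-to-ambient thermal resistance
T_j = T_a + R_{th} \left( P_{el} - P_{opt} \right)
```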

  11. Beyond Socks, Signs, and Alarms: A Reflective Accountability Model for Fall Prevention.

    PubMed

    Hoke, Linda M; Guarracino, Dana

    2016-01-01

    Despite standard fall precautions, including nonskid socks, signs, alarms, and patient instructions, our 48-bed cardiac intermediate care unit (CICU) had a 41% increase in the rate of falls (from 2.2 to 3.1 per 1,000 patient days) and a 65% increase in the rate of falls with injury (from 0.75 to 1.24 per 1,000 patient days) between fiscal years (FY) 2012 and 2013. An evaluation of the falls data conducted by a cohort of four clinical nurses found that the majority of falls occurred when patients were unassisted by nurses, most often during toileting. Supported by the leadership team, the clinical nurses developed an accountability care program that required nurses to use reflective practice to evaluate each fall, including sending an e-mail to all staff members with both the nurse's and the patient's perspective on the fall, as well as the nurse's reflection on what could have been done to prevent the fall. Other program components were a postfall huddle and guidelines for assisting and remaining with fall risk patients for the duration of their toileting. Placing the accountability for falls with the nurse resulted in decreases in the unit's rates of falls and falls with injury of 55% (from 3.1 to 1.39 per 1,000 patient days) and 72% (from 1.24 to 0.35 per 1,000 patient days), respectively, between FY2013 and FY2014. Prompt call bell response (less than 60 seconds) also contributed to the goal of fall prevention. PMID:26710147

  12. Randomly Accountable

    ERIC Educational Resources Information Center

    Kane, Thomas J.; Staiger, Douglas O.; Geppert, Jeffrey

    2002-01-01

    The accountability debate tends to devolve into a battle between the pro-testing and anti-testing crowds. When it comes to the design of a school accountability system, the devil is truly in the details. A well-designed accountability plan may go a long way toward giving school personnel the kinds of signals they need to improve performance.…

  13. School Accountability.

    ERIC Educational Resources Information Center

    Evers, Williamson M., Ed.; Walberg, Herbert J., Ed.

    This book presents the perspectives of experts from the fields of history, economics, political science, and psychology on what is known about accountability, what still needs to be learned, what should be done right now, and what should be avoided in devising accountability systems. The common myths about accountability are dispelled and how it…

  14. Colorful Accounting

    ERIC Educational Resources Information Center

    Warrick, C. Shane

    2006-01-01

    As instructors of accounting, we should take an abstract topic (at least to most students) and connect it to content known by students to help increase the effectiveness of our instruction. In a recent semester, ordinary items such as colors, a basketball, and baseball were used to relate the subject of accounting. The accounting topics of account…

  15. Subject Matter Relevance in Interpersonal Communication, Skills, and Instructional Accountability: A Consensus Model.

    ERIC Educational Resources Information Center

    Schuelke, L. David

    The author supports the position that change in the basic speech course is needed and proposes a consensus model to achieve this change. A consensus model approach to the basic course provides for a reduction in entropy regarding objectives, activities, and progress in the classroom. Applying the theories of interpersonal communication taught in…

  16. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    EPA Science Inventory

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization there is increased mobility, leading to higher amounts of traffic-related activity on a global scale. ...

  17. Accounting for Individual Differences in Bradley-Terry Models by Means of Recursive Partitioning

    ERIC Educational Resources Information Center

    Strobl, Carolin; Wickelmaier, Florian; Zeileis, Achim

    2011-01-01

    The preference scaling of a group of subjects may not be homogeneous, but different groups of subjects with certain characteristics may show different preference scalings, each of which can be derived from paired comparisons by means of the Bradley-Terry model. Usually, either different models are fit in predefined subsets of the sample or the…

  18. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers a dual-system account of the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost-conflict. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. PMID:26339919
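
    A minimal sketch of the dual-controller idea in Python (names, constants, and the fixed mixing weight are our assumptions for illustration, not the paper's implementation):

```python
import numpy as np

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA = 0.1, 0.9

q_mf = np.zeros((N_STATES, N_ACTIONS))  # habitual (model-free) values
T = np.full((N_STATES, N_ACTIONS, N_STATES), 1.0 / N_STATES)  # learned transitions
R = np.zeros((N_STATES, N_ACTIONS))     # learned rewards

def q_mb(s):
    """Goal-directed values: expected reward plus discounted successor value."""
    v_next = q_mf.max(axis=1)           # bootstrap from current estimates
    return R[s] + GAMMA * T[s] @ v_next

def choose_action(s, w=0.5, beta=3.0):
    """Softmax over a blend of goal-directed and habitual values; a
    monitoring system could raise w when it detects conflict."""
    q = w * q_mb(s) + (1.0 - w) * q_mf[s]
    p = np.exp(beta * (q - q.max()))
    return np.random.choice(N_ACTIONS, p=p / p.sum())

def learn(s, a, r, s_next):
    """Model-free TD update plus incremental world-model learning."""
    q_mf[s, a] += ALPHA * (r + GAMMA * q_mf[s_next].max() - q_mf[s, a])
    R[s, a] += ALPHA * (r - R[s, a])
    T[s, a] *= 1.0 - ALPHA
    T[s, a, s_next] += ALPHA
```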

  19. Comparison of Preisach and Jiles-Atherton models to take into account hysteresis phenomenon for finite element analysis

    NASA Astrophysics Data System (ADS)

    Benabou, A.; Clénet, S.; Piriou, F.

    2003-04-01

    In electrical engineering, the study and design of electromagnetic systems require increasingly accurate models. To improve the accuracy of field calculation codes, the hysteresis phenomenon has to be taken into account when modelling ferromagnetic materials. The material model has to be both accurate and fast. In that context, two macroscopic models are often used: the Preisach and the Jiles-Atherton (J-A) models. In this paper, both models are presented. Field calculation requires a model giving the magnetization M versus either the magnetic field H or the magnetic flux density B. Consequently, from the classical Preisach and J-A models, two sub-models, M(H) and M(B), are deduced. We then compare these models in terms of ease of the identification procedure, accuracy, numerical implementation and computational effort. This study is carried out for three kinds of materials with different magnetic features: ferrites, FeSi sheets and a soft magnetic composite material. The implementation of these models in a finite element code is then presented. As an example application, a high-frequency transformer supplied by a rectangular voltage is studied.
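
    For reference, the Jiles-Atherton model compared above is usually written in the following standard form (symbols follow common usage rather than this particular paper):

```latex
% Anhysteretic magnetization (Langevin function of the effective field)
M_{an}(H_e) = M_s \left[ \coth\!\left( \tfrac{H_e}{a} \right) - \tfrac{a}{H_e} \right],
\qquad H_e = H + \alpha M
% Irreversible component (delta = sign of dH/dt) and total magnetization
\frac{dM_{irr}}{dH} = \frac{M_{an} - M_{irr}}{k \delta - \alpha (M_{an} - M_{irr})},
\qquad M = M_{irr} + c \left( M_{an} - M_{irr} \right)
```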

  20. The Accountability Controversy

    ERIC Educational Resources Information Center

    Glisson, Charles

    1975-01-01

    Author discusses accountability controversy concerning effectiveness of social services. Turem's mechanistic and Gruber's organic models of accountability are compared and an alternate open system model of organization is offered which combines positive aspects of Turem's and Gruber's models as well as adds other constructive elements to them. (SE)

  1. Modeling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    NASA Astrophysics Data System (ADS)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2014-01-01

    Coral reefs are diverse ecosystems threatened by rising CO2 levels that are driving the observed increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking the rates of growth, recovery and calcification to the rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The different characteristics of this model are also assessed against independent data to show that the model captures the observed response of corals. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles for understanding the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can reproduce much of the observed response of corals to changes in temperature and ocean acidification.
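
    The first of the four principles named above, the Arrhenius equation, gives the temperature dependence of a rate constant (standard form, not specific to this paper):

```latex
% A: pre-exponential factor, E_a: activation energy,
% R: gas constant, T: absolute temperature
k(T) = A \, e^{-E_a / (R T)}
```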

  2. A spatially filtered multilevel model to account for spatial dependency: application to self-rated health status in South Korea

    PubMed Central

    2014-01-01

    Background This study aims to suggest an approach that integrates multilevel models and eigenvector spatial filtering methods and apply it to a case study of self-rated health status in South Korea. In many previous health-related studies, multilevel models and single-level spatial regression are used separately. However, the two methods should be used in conjunction because the objectives of both approaches are important in health-related analyses. The multilevel model enables the simultaneous analysis of both individual and neighborhood factors influencing health outcomes. However, the results of conventional multilevel models are potentially misleading when spatial dependency across neighborhoods exists. Spatial dependency in health-related data indicates that health outcomes in nearby neighborhoods are more similar to each other than those in distant neighborhoods. Spatial regression models can address this problem by modeling spatial dependency. This study explores the possibility of integrating a multilevel model and eigenvector spatial filtering, an advanced spatial regression for addressing spatial dependency in datasets. Methods In this spatially filtered multilevel model, eigenvectors function as additional explanatory variables accounting for unexplained spatial dependency within the neighborhood-level error. The specification addresses the inability of conventional multilevel models to account for spatial dependency, and thereby, generates more robust outputs. Results The findings show that sex, employment status, monthly household income, and perceived levels of stress are significantly associated with self-rated health status. Residents living in neighborhoods with low deprivation and a high doctor-to-resident ratio tend to report higher health status. The spatially filtered multilevel model provides unbiased estimations and improves the explanatory power of the model compared to conventional multilevel models although there are no changes in the
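
    Schematically, the spatially filtered multilevel model adds selected eigenvectors as neighborhood-level covariates (a sketch in our notation, assuming a standard two-level specification):

```latex
% Individual i in neighborhood j; the E_kj are eigenvectors of the doubly
% centred spatial weights matrix and absorb spatial dependency that would
% otherwise remain in the neighborhood-level error u_j
y_{ij} = \beta_0 + x_{ij}^{\top} \beta + z_{j}^{\top} \gamma
       + \sum_{k} \delta_k E_{kj} + u_j + \varepsilon_{ij}
```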

  3. A macro traffic flow model accounting for real-time traffic state

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Chen, Liang; Wu, Yong-Hong; Caccetta, Lou

    2015-11-01

    In this paper, we propose a traffic flow model to study the effects of the real-time traffic state on traffic flow. The numerical results show that the proposed model can describe oscillations in traffic and stop-and-go traffic, and that the speed-density relationship is qualitatively consistent with the empirical data of the Weizikeng segment of the Badaling freeway in Beijing, which means that the proposed model can qualitatively reproduce some complex traffic phenomena associated with the real-time traffic state.
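
    Macro models of this family start from conservation of vehicles plus a dynamic speed equation that relaxes toward an equilibrium speed-density relation (a generic skeleton, not this paper's exact formulation):

```latex
% rho: density, v: speed, V_e(rho): equilibrium speed, tau: relaxation time
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho v)}{\partial x} = 0,
\qquad
\frac{\partial v}{\partial t} + v \frac{\partial v}{\partial x}
  = \frac{V_e(\rho) - v}{\tau} + \text{model-specific terms}
```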

  4. A vortex model for Richtmyer-Meshkov instability accounting for finite Atwood number

    NASA Astrophysics Data System (ADS)

    Likhachev, Oleg A.; Jacobs, Jeffrey W.

    2005-03-01

    The vortex model developed by Jacobs and Sheeley ["Experimental study of incompressible Richtmyer-Meshkov instability," Phys. Fluids 8, 405 (1996)] is essentially a solution to the governing equations for the case of a uniform density fluid. Thus, strictly speaking, this model only applies to the case of vanishingly small Atwood number. A modification to this model for small to finite Atwood number is proposed in which the vortex row utilized is perturbed such that the vortex spacing is smaller across the spikes and larger across the bubbles, a fact readily observed in experimental images. It is shown that this modification more effectively captures the behavior of experimental amplitude measurements, especially when compared with separate bubble and spike data. In addition, it is shown that this modification will cause the amplitude to deviate from the logarithmic result given by the heuristic models at late time.

  5. A thermomechanical model accounting for the behavior of shape memory alloys in finite deformations

    NASA Astrophysics Data System (ADS)

    Haller, Laviniu; Nedjar, Boumedienne; Moumni, Ziad; Vedinaş, Ioan; Trană, Eugen

    2016-07-01

    Shape memory alloys (SMAs) exhibit interesting behavior. They can undergo large strains and then recover their undeformed shape upon heating. In this context, one of the aspects that has challenged many researchers is the development of a mathematical model to predict the behavior of a given SMA under real-life conditions, i.e., finite strain. This paper works out a finite-strain mathematical model for a Ni-Ti SMA under superelastic experimental conditions and uniaxial mechanical loading, based on the Zaki-Moumni 3D mathematical model developed under the small-perturbations assumption. A comparison between experimental findings and calculated results is also presented. The proposed finite-strain mathematical model shows good agreement with the experimental data.

  6. Behavioral Health and Health Care Reform Models: Patient-Centered Medical Home, Health Home, and Accountable Care Organization

    PubMed Central

    Bao, Yuhua; Casalino, Lawrence P.; Pincus, Harold Alan

    2012-01-01

    Discussions of health care delivery and payment reforms have largely been silent about how behavioral health could be incorporated into reform initiatives. This paper draws attention to four patient populations defined by the severity of their behavioral health conditions and insurance status. It discusses the potentials and limitations of three prominent models promoted by the Affordable Care Act to serve populations with behavioral health conditions: the Patient Centered Medical Home, the Health Home initiative within Medicaid, and the Accountable Care Organization. To incorporate behavioral health into health reform, policymakers and practitioners may consider embedding in the reform efforts explicit tools – accountability measures and payment designs – to improve access to and quality of care for patients with behavioral health needs. PMID:23188486

  7. Modelling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    NASA Astrophysics Data System (ADS)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2015-05-01

    Coral reefs are diverse ecosystems that are threatened by rising CO2 levels through increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking rates of growth, recovery and calcification to rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, correlated up- and down-regulation of traits that are consistent with resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The performance of the model is assessed against independent data to demonstrate how it can capture the observed response of corals to stress. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles to help understand the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can give insights into how corals respond to changes in temperature and ocean acidification.

  8. Analysis of homogeneous/non-homogeneous nanofluid models accounting for nanofluid-surface interactions

    NASA Astrophysics Data System (ADS)

    Ahmad, R.

    2016-07-01

    This article reports an unbiased analysis of water-based, rod-shaped alumina nanoparticles, considering both homogeneous and non-homogeneous nanofluid models over the coupled nanofluid-surface interface. The mechanics of the surface are determined for both the homogeneous and non-homogeneous models, which were ignored in previous studies. The viscosity and thermal conductivity data are taken from the international nanofluid property benchmark exercise, and all simulations use experimentally verified results. By considering the homogeneous and non-homogeneous models, the precise movement of the alumina nanoparticles over the surface has been observed by solving the corresponding system of differential equations. For the non-homogeneous model, a uniform temperature and nanofluid volume fraction are assumed at the surface, and the flux of the alumina nanoparticles is taken as zero. The assumption of zero nanoparticle flux at the surface makes the non-homogeneous model physically more realistic. The differences between all profiles for the homogeneous and non-homogeneous models are insignificant, owing to small deviations in the values of the Brownian motion and thermophoresis parameters.
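
    The zero-nanoparticle-flux wall condition mentioned above is typically expressed by balancing Brownian and thermophoretic fluxes (standard Buongiorno-type form; symbols are conventional, not taken from this paper):

```latex
% D_B: Brownian diffusion coefficient, D_T: thermophoretic diffusion
% coefficient, phi: nanoparticle volume fraction, y: wall-normal coordinate
D_B \left. \frac{\partial \phi}{\partial y} \right|_{y=0}
  + \frac{D_T}{T_{\infty}} \left. \frac{\partial T}{\partial y} \right|_{y=0} = 0
```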

  9. Accounting for uncertainty in the analysis of overlap layer mean velocity models

    NASA Astrophysics Data System (ADS)

    Oliver, Todd A.; Moser, Robert D.

    2012-07-01

    When assessing the veracity of mathematical models, it is important to consider the uncertainties in the data used for the assessment. In this paper, we study the impact of data uncertainties on the analysis of overlap layer models for the mean velocity in wall-bounded turbulent flows. Specifically, the tools of Bayesian statistics are used to calibrate and compare six competing models of the mean velocity profile, including multiple logarithmic and power law forms, using velocity profile measurements from a zero-pressure-gradient turbulent boundary layer and fully developed turbulent pipe flow. The calibration problem is formulated as a Bayesian update of the joint probability density function for the calibration parameters, which are treated as random variables to characterize incomplete knowledge about their values. This probabilistic formulation provides a natural treatment of uncertainty and gives insight into the quality of the fit, features that are not easily obtained in deterministic calibration procedures. The model comparison also relies on a Bayesian update. In particular, the relative probabilities of the competing models are updated using the calibration data. The resulting posterior probabilities quantify the relative plausibility of the competing models given the data. For the boundary layer, results are shown for five subsets of the turbulent boundary layer data due to Österlund, including different Reynolds number and wall distance ranges, and multiple assumptions regarding the magnitude of the uncertainty in the velocity measurements. For most choices, multiple models have relatively high posterior probability, indicating that it is difficult to distinguish between the models. For the most inclusive data sets—i.e., the largest ranges of Reynolds number and wall distance—the first-order logarithmic law due to Buschmann and Gad-el-Hak is significantly more probable, given the data, than the other models evaluated. For the pipe flow, data from
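
    In generic notation, the Bayesian model comparison sketched above updates the relative probabilities of the competing models through their marginal likelihoods (evidences):

```latex
% Posterior probability of model M_k given data d
P(M_k \mid d) = \frac{P(d \mid M_k)\, P(M_k)}{\sum_{j} P(d \mid M_j)\, P(M_j)},
\qquad
P(d \mid M_k) = \int P(d \mid \theta, M_k)\, P(\theta \mid M_k)\, \mathrm{d}\theta
```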

  10. Retrieval-Based Model Accounts for Striking Profile of Episodic Memory and Generalization.

    PubMed

    Banino, Andrea; Koster, Raphael; Hassabis, Demis; Kumaran, Dharshan

    2016-01-01

    A fundamental theoretical tension exists between the role of the hippocampus in generalizing across a set of related episodes, and in supporting memory for individual episodes. Whilst the former requires an appreciation of the commonalities across episodes, the latter emphasizes the representation of the specifics of individual experiences. We developed a novel version of the hippocampal-dependent paired associate inference (PAI) paradigm, which afforded us the unique opportunity to investigate the relationship between episodic memory and generalization in parallel. Across four experiments, we provide surprising evidence that the overlap between object pairs in the PAI paradigm results in a marked loss of episodic memory. Critically, however, we demonstrate that superior generalization ability was associated with stronger episodic memory. Through computational simulations we show that this striking profile of behavioral findings is best accounted for by a mechanism by which generalization occurs at the point of retrieval, through the recombination of related episodes on the fly. Taken together, our study offers new insights into the intricate relationship between episodic memory and generalization, and constrains theories of the mechanisms by which the hippocampus supports generalization. PMID:27510579

  12. A contaminant transport model for wetlands accounting for distinct residence time bimodality

    NASA Astrophysics Data System (ADS)

    Musner, T.; Bottacin-Busolin, A.; Zaramella, M.; Marion, A.

    2014-07-01

    Vegetation plays a major role in controlling the fate of contaminants in natural and constructed wetlands. Estimating the efficiency of contaminant removal of a wetland requires separate knowledge of the residence time statistics in the main flow channels, where the flow velocity is relatively higher, and in the more densely vegetated zones, where the velocity is smaller and most of the biochemical transformations occur. A conceptual wetland characterized by a main flow channel (MFC) and lateral vegetated zones (LVZs) is modeled here using a two-dimensional depth-averaged hydrodynamic and advection-dispersion model. The effect of vegetation is described as a flow resistance represented in the hydrodynamic model as a function of the stem density. Simulations are performed for a given flow discharge and for increasing values of the ratio between the vegetation density in the LVZs and in the MFC. Residence time distributions (RTDs) of a nonreactive tracer are derived from numerical simulations of the solute breakthrough curves (BTCs) resulting from a continuous concentration input. Results show that increasing vegetation densities produce an increasingly pronounced bimodality of the RTDs. At longer times, the RTDs decrease exponentially, with different timescales depending on the stem density ratio and other system parameters. The overall residence time distribution can be decomposed into a first component associated with the relatively fast transport in the MFC, and a second component associated with the slower transport in the LVZs. The weight of each temporal component is related to the exchange flux at the MFC-LVZ interface. A one-dimensional transport model is proposed that is capable of reproducing the RTDs predicted by the depth-averaged model, and the relationship between model and system parameters is investigated using a combination of direct and inverse modeling approaches.
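
    The decomposition described above can be written as a two-component mixture whose weights reflect the exchange flux at the MFC-LVZ interface (our schematic, with the late-time exponential tails the abstract reports):

```latex
% w_MFC + w_LVZ = 1; tau_LVZ >> tau_MFC produces the observed bimodality
f(t) = w_{\mathrm{MFC}} f_{\mathrm{MFC}}(t) + w_{\mathrm{LVZ}} f_{\mathrm{LVZ}}(t),
\qquad
f_i(t) \propto e^{-t/\tau_i} \;\; \text{at late times}
```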

  13. Variable parameter McCarthy-Muskingum flow transport model for compound channels accounting for distributed non-uniform lateral flow

    NASA Astrophysics Data System (ADS)

    Swain, Ratnakar; Sahoo, Bhabagrahi

    2015-11-01

    In this study, the fully volume-conservative simplified hydrodynamic-based variable parameter McCarthy-Muskingum (VPMM) flow transport model advocated by Perumal and Price in 2013 is extended to incorporate distributed non-uniform lateral flow in the routing scheme, accounting for compound river channel flows. The revised VPMM formulation is derived from the combined form of the de Saint-Venant continuity and momentum equations with spatiotemporally distributed lateral flow, which is solved using the finite difference box scheme. This revised model addresses the earlier model's limitations of: (i) not accounting for non-uniformly distributed lateral flow, (ii) ignoring floodplain flow, and (iii) not considering the catchment dynamics of lateral flow generation, which restricted its real-time application. The efficacy of the revised formulation is tested by simulating 16 years (1980-1995) of river runoff from real-time storm events under scarce morpho-hydrological data conditions in a tropical monsoon-type 48 km Bolani-Gomlai reach of the Brahmani River in eastern India. The spatiotemporally distributed lateral flows generated in real time are computed by a water balance approach accounting for the catchment characteristics of normalized network area function, land use land cover classes, and soil textural classes, and the hydro-meteorological variables of precipitation, soil moisture, minimum and maximum temperatures, wind speed, relative humidity, and solar radiation. The multiple error measures used in this study and the simulation results reveal that the revised VPMM model has great practical utility in estimating event-based and long-term meso-scale river runoff (both discharge and stage) at any ungauged site, enhancing its application for real-time flood estimation.
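
    The continuity statement underlying the revised routing scheme, with spatiotemporally distributed lateral inflow, takes the familiar de Saint-Venant form (generic notation, not the discretized VPMM equations themselves):

```latex
% A: flow cross-sectional area, Q: discharge,
% q_L: lateral inflow per unit channel length
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q_L(x, t)
```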

  14. Educational Quality Is Measured by Individual Student Achievement Over Time. Mt. San Antonio College AB 1725 Model Accountability System Pilot Proposal.

    ERIC Educational Resources Information Center

    Mount San Antonio Coll., Walnut, CA.

    In December 1990, a project was begun at Mt. San Antonio College (MSAC) in Walnut, California, to develop a model accountability system based on the belief that educational quality is measured by individual achievement over time. This proposal for the Accountability Model (AM) presents information on project methodology and organization in four…

  15. Accounting for anthropogenic actions in modeling of stream flow at the regional scale

    NASA Astrophysics Data System (ADS)

    David, C. H.; Famiglietti, J. S.

    2013-12-01

    The modeling of the horizontal movement of water from land to coasts at scales ranging from 10^5 km^2 to 10^6 km^2 has benefited from extensive research within the past two decades. In parallel, community technology for gathering/sharing surface water observations and datasets for describing the geography of terrestrial water bodies have recently seen groundbreaking advancements. Yet, the fields of computational hydrology and hydroinformatics have barely started to work hand-in-hand, and much research remains to be performed before we can better understand the anthropogenic impact on surface water through combined observations and models. Here, we build on our existing river modeling approach that leverages community state-of-the-art tools such as atmospheric data from the second phase of the North American Land Data Assimilation System (NLDAS2), river networks from the enhanced National Hydrography Dataset (NHDPlus), and observations from the U.S. Geological Survey National Water Information System (NWIS) obtained through CUAHSI web services. Modifications are made to our integrated observational/modeling system to include treatment of anthropogenic actions such as dams, pumping, and diversions in river networks. Initial results of a study focusing on the entire State of California suggest that the availability of data describing human alterations of natural river networks, together with proper representation of such actions in our models, could help advance hydrology further. Snapshot from an animation of flow in California river networks. The full animation is available at: http://www.ucchm.org/david/rapid.htm.

  16. A dissolution model that accounts for coverage of mineral surfaces by precipitation in core floods

    NASA Astrophysics Data System (ADS)

    Pedersen, Janne; Jettestuen, Espen; Madland, Merete V.; Hildebrand-Habel, Tania; Korsnes, Reidar I.; Vinningland, Jan Ludvig; Hiorth, Aksel

    2016-01-01

    In this paper, we propose a model for the evolution of the reactive surface area of minerals due to surface coverage by precipitating minerals. The model is used to interpret results from an experiment in which a chalk core was flooded with MgCl2 for 1072 days, giving rise to calcite dissolution and magnesite precipitation. The model successfully describes both the long-term behavior of the measured effluent concentrations and the more or less homogeneous distribution of magnesite found in the core after 1072 days. The model also predicts that precipitating magnesite minerals form as larger crystals or aggregates of smaller crystals, and not as thin flakes or as a monomolecular layer. Using rate constants obtained from the literature gave numerical effluent concentrations that diverged from observed values after only a few days of flooding. To match the simulations to the experimental data after approximately 1 year of flooding, a rate constant four orders of magnitude lower than that reported by powder experiments had to be used. We argue that a static rate constant is not sufficient to describe a chalk core flooding experiment lasting nearly 3 years. The model is a necessary extension of standard rate equations for describing long-term core flooding experiments in which there is a large degree of textural alteration.
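
    A minimal way to express the surface-coverage idea above (our schematic, not the paper's exact rate law) is to let the reactive area shrink with the fraction of surface covered by the precipitate:

```latex
% r: dissolution rate, k: rate constant, Omega: saturation state of the
% dissolving mineral, theta_cov: covered fraction of the initial area A_0
r = k \, A_r \, (1 - \Omega), \qquad A_r = A_0 \, (1 - \theta_{\mathrm{cov}})
```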

  17. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    SciTech Connect

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axisymmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  18. Does Don Fisher's high-pressure manifold model account for phloem transport and resource partitioning?

    PubMed Central

    Patrick, John W.

    2013-01-01

    The pressure flow model of phloem transport envisaged by Münch (1930) has gained wide acceptance. Recently, however, the model has been questioned on structural and physiological grounds. For instance, sub-structures of sieve elements may reduce their hydraulic conductances to levels that impede flow rates of phloem sap and observed magnitudes of pressure gradients to drive flow along sieve tubes could be inadequate in tall trees. A variant of the Münch pressure flow model, the high-pressure manifold model of phloem transport introduced by Donald Fisher may serve to reconcile at least some of these questions. To this end, key predicted features of the high-pressure manifold model of phloem transport are evaluated against current knowledge of the physiology of phloem transport. These features include: (1) An absence of significant gradients in axial hydrostatic pressure in sieve elements from collection to release phloem accompanied by transport properties of sieve elements that underpin this outcome; (2) Symplasmic pathways of phloem unloading into sink organs impose a major constraint over bulk flow rates of resources translocated through the source-path-sink system; (3) Hydraulic conductances of plasmodesmata, linking sieve elements with surrounding phloem parenchyma cells, are sufficient to support and also regulate bulk flow rates exiting from sieve elements of release phloem. The review identifies strong circumstantial evidence that resource transport through the source-path-sink system is consistent with the high-pressure manifold model of phloem transport. The analysis then moves to exploring mechanisms that may link demand for resources, by cells of meristematic and expansion/storage sinks, with plasmodesmal conductances of release phloem. The review concludes with a brief discussion of how these mechanisms may offer novel opportunities to enhance crop biomass yields. PMID:23802003

  19. Assessing and accounting for the effects of model error in Bayesian solutions to hydrogeophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Koepke, C.; Irving, J.; Roubinet, D.

    2014-12-01

    Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models such that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data, collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independently identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through
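
    Assumption (i) above corresponds to the one-dimensional Richards equation, with the VGM relations supplying the retention curve θ(h) and conductivity K(h):

```latex
% h: pressure head, theta: volumetric water content,
% K: hydraulic conductivity, z: vertical coordinate (positive upward)
\frac{\partial \theta(h)}{\partial t}
  = \frac{\partial}{\partial z} \left[ K(h) \left( \frac{\partial h}{\partial z} + 1 \right) \right]
```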

  20. A macro traffic flow model accounting for road capacity and reliability analysis

    NASA Astrophysics Data System (ADS)

    Tang, T. Q.; Shi, W. F.; Yang, X. B.; Wang, Y. P.; Lu, G. Q.

    2013-12-01

    Based on existing traffic flow models, in this paper we develop a macro traffic flow model taking into consideration road capacity to study the impact of the road capacity on traffic flow. The numerical results show that the road capacity destroys the stability of uniform flow and produces stop-and-go traffic under a moderate density and that the road capacity enhances the traffic risk coefficient and reduces the traffic system’s reliability. In addition, the numerical results show that properly improving the road condition can enhance the road capacity, reduce the traffic risk coefficient and enhance the traffic system’s reliability.

  1. Magnetic models of crystalline terrane: accounting for the effect of topography.

    USGS Publications Warehouse

    Blakely, R.J.; Grauch, V.J.S.

    1983-01-01

    Facilitates geologic interpretation of an aeromagnetic survey of the Oregon Cascade Range by calculating the magnetic field caused by a 3-D topographic model. Maps of the calculated field are compared with observed aeromagnetic data both visually and with a numerical technique that produces a contour map of correlation coefficients for the model. These comparisons allow quick recognition of anomalies caused by normally or reversely magnetized topographic features and, more importantly, identification of anomalies caused by geologic features not obviously caused by the topography. -from Authors

  2. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    PubMed

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6 μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction. PMID:26154937

  3. Redesigning Urban Districts in the USA: Mayoral Accountability and the Diverse Provider Model

    ERIC Educational Resources Information Center

    Wong, Kenneth K.

    2011-01-01

    In response to public pressure, urban districts in the USA have initiated reforms that aim at redrawing the boundaries between the school system and other major local institutions. More specifically, this article focuses on two emerging reform strategies. We will examine an emerging model of governance that enables big-city mayors to establish…

  4. Assessing and accounting for time heterogeneity in stochastic actor oriented models

    PubMed Central

    Schweinberger, Michael; Snijders, Tom A. B.; Ripley, Ruth M.

    2011-01-01

    This paper explores time heterogeneity in stochastic actor oriented models (SAOM) proposed by Snijders (Sociological Methodology. Blackwell, Boston, pp 361–395, 2001) which are meant to study the evolution of networks. SAOMs model social networks as directed graphs with nodes representing people, organizations, etc., and dichotomous relations representing underlying relationships of friendship, advice, etc. We illustrate several reasons why heterogeneity should be statistically tested and provide a fast, convenient method for assessment and model correction. SAOMs provide a flexible framework for network dynamics which allow a researcher to test selection, influence, behavioral, and structural properties in network data over time. We show how the forward-selecting, score type test proposed by Schweinberger (Chapter 4: Statistical modeling of network panel data: goodness of fit. PhD thesis, University of Groningen 2007) can be employed to quickly assess heterogeneity at almost no additional computational cost. One step estimates are used to assess the magnitude of the heterogeneity. Simulation studies are conducted to support the validity of this approach. The ASSIST dataset (Campbell et al. Lancet 371(9624):1595–1602, 2008) is reanalyzed with the score type test, one step estimators, and a full estimation for illustration. These tools are implemented in the RSiena package, and a brief walkthrough is provided. PMID:22003370

  5. Delay differential models in multimode laser dynamics: taking chromatic dispersion into account

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. G.; Huyet, G.; Pimenov, A.

    2016-04-01

    A set of differential equations with distributed delay is derived for modeling of multimode ring lasers with intracavity chromatic dispersion. Analytical stability analysis of continuous wave regimes is performed and it is demonstrated that sufficiently strong anomalous dispersion can destabilize these regimes.

  6. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  7. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing antecedent soil moisture condition (SCSI) in field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...
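
    For orientation, the curve-number procedure referred to above computes event runoff from rainfall and a retention parameter derived from CN (an illustrative sketch, not SWAT source code):

```python
def scs_runoff_mm(precip_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """SCS curve-number runoff depth in mm (illustrative sketch).

    precip_mm: event rainfall depth (mm)
    cn:        curve number (0 < CN <= 100), reflecting soil, land use,
               and antecedent moisture condition
    ia_ratio:  initial abstraction as a fraction of the retention S
    """
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = ia_ratio * s          # initial abstraction (mm)
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Example: a 50 mm storm on a CN = 80 surface yields about 13.8 mm of runoff
print(round(scs_runoff_mm(50.0, 80.0), 1))
```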

  8. Accountability in Training Transfer: Adapting Schlenker's Model of Responsibility to a Persistent but Solvable Problem

    ERIC Educational Resources Information Center

    Burke, Lisa A.; Saks, Alan M.

    2009-01-01

    Decades have been spent studying training transfer in organizational environments in recognition of a transfer problem in organizations. Theoretical models of various antecedents, empirical studies of transfer interventions, and studies of best practices have all been advanced to address this continued problem. Yet a solution may not be so…

  9. Trust, Accountability, Autonomy: Building a Teacher-Driven Professional Growth Model

    ERIC Educational Resources Information Center

    Jebson, Hugh; DiNota, Carlo

    2011-01-01

    Faculty evaluation--arguably no other topic in independent education evokes as much passionate discourse--mostly negative, or at least freighted with anxiety. But, in the authors' experience, it does not have to be this way. At their school, Berkeley Preparatory School (Florida), they have recently developed a teacher evaluation model that is…

  10. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  11. Modeling Site-Based Decision Making: School Practices in the Age of Accountability

    ERIC Educational Resources Information Center

    Bauer, Scott C.; Bogotch, Ira E.

    2006-01-01

    Purpose: The primary purpose is to present empirical measures of variables relating to practices engaged in by site-based teams, and then to use these variables to test a model predicting significant outcomes of site-based decision making. The practice variables of site-based management (SBM) teams are essential in promoting research within a…

  12. Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper

    ERIC Educational Resources Information Center

    Hock, Heinrich; Isenberg, Eric

    2012-01-01

    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…

  13. Accounting for Model Uncertainty in the Prediction of University Graduation Rates

    ERIC Educational Resources Information Center

    Goenner, Cullen F.; Snaith, Sean M.

    2004-01-01

    Empirical analysis requires researchers to choose which variables to use as controls in their models. Theory should dictate this choice, yet often in social science there are several theories that may suggest the inclusion or exclusion of certain variables as controls. The result of this is that researchers may use different variables in their…

  14. Assessing and accounting for time heterogeneity in stochastic actor oriented models.

    PubMed

    Lospinoso, Joshua A; Schweinberger, Michael; Snijders, Tom A B; Ripley, Ruth M

    2011-07-01

    This paper explores time heterogeneity in stochastic actor oriented models (SAOM) proposed by Snijders (Sociological Methodology. Blackwell, Boston, pp 361-395, 2001) which are meant to study the evolution of networks. SAOMs model social networks as directed graphs with nodes representing people, organizations, etc., and dichotomous relations representing underlying relationships of friendship, advice, etc. We illustrate several reasons why heterogeneity should be statistically tested and provide a fast, convenient method for assessment and model correction. SAOMs provide a flexible framework for network dynamics which allow a researcher to test selection, influence, behavioral, and structural properties in network data over time. We show how the forward-selecting, score type test proposed by Schweinberger (Chapter 4: Statistical modeling of network panel data: goodness of fit. PhD thesis, University of Groningen 2007) can be employed to quickly assess heterogeneity at almost no additional computational cost. One step estimates are used to assess the magnitude of the heterogeneity. Simulation studies are conducted to support the validity of this approach. The ASSIST dataset (Campbell et al. Lancet 371(9624):1595-1602, 2008) is reanalyzed with the score type test, one step estimators, and a full estimation for illustration. These tools are implemented in the RSiena package, and a brief walkthrough is provided. PMID:22003370

  15. An Exemplar-Model Account of Feature Inference from Uncertain Categorizations

    ERIC Educational Resources Information Center

    Nosofsky, Robert M.

    2015-01-01

    In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…

  16. A Semi-Empirical Model for Tilted-Gun Planar Magnetron Sputtering Accounting for Chimney Shadowing

    NASA Astrophysics Data System (ADS)

    Bunn, J. K.; Metting, C. J.; Hattrick-Simpers, J.

    2015-01-01

    Integrated computational materials engineering (ICME) approaches to the composition and thickness profiles of sputtered thin-film samples are key to expediting the exploration of these materials. Here, an ICME-based semi-empirical approach to modeling the thickness of thin-film samples deposited via magnetron sputtering is developed. Using Yamamura's dimensionless differential angular sputtering yield and a measured deposition rate at a point in space for a single experimental condition, the model predicts the deposition profile from planar DC sputtering sources. The model includes corrections for off-center, tilted gun geometries as well as shadowing effects from gun chimneys used in most state-of-the-art sputtering systems. The modeling algorithm was validated by comparing its results with experimental deposition rates obtained from a sputtering system utilizing sources with a multi-piece chimney assembly that consists of a lower ground shield and a removable gas chimney. Simulations were performed for gun-tilts ranging from 0° to 31.3° from the vertical with and without the gas chimney installed. The predicted and experimental angular dependences of the sputtering deposition rate were found to agree, with the average magnitude of relative error evaluated for a 0°-31.3° gun-tilt range without the gas chimney and for a 17.7°-31.3° gun-tilt range with the gas chimney. The continuum nature of the model renders this approach reverse-optimizable, providing a rapid tool for assisting in the understanding of the synthesis-composition-property space of novel materials.

  17. Using state-and-transition modeling to account for imperfect detection in invasive species management

    USGS Publications Warehouse

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.

  18. The effects of drugs on human models of emotional processing: an account of antidepressant drug treatment

    PubMed Central

    Pringle, Abbie; Harmer, Catherine J.

    2015-01-01

    Human models of emotional processing suggest that the direct effect of successful antidepressant drug treatment may be to modify biases in the processing of emotional information. Negative biases in emotional processing are documented in depression, and single or short-term dosing with conventional antidepressant drugs reverses these biases in depressed patients prior to any subjective change in mood. Antidepressant drug treatments also modulate emotional processing in healthy volunteers, which allows the consideration of the psychological effects of these drugs without the confound of changes in mood. As such, human models of emotional processing may prove to be useful for testing the efficacy of novel treatments and for matching treatments to individual patients or subgroups of patients. PMID:26869848

  19. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    NASA Astrophysics Data System (ADS)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0°-55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, even determining whether the Sun appears to rise or set at all. While the different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from the temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
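
    A common starting point for the refraction term in such models is Bennett's empirical formula, scaled for local pressure and temperature (standard formula, not the refined model the abstract proposes):

```latex
% R: refraction in arcminutes at apparent altitude h_a in degrees
% (cot argument in degrees); the second factor adjusts the standard
% value (P = 101.0 kPa, T = 10 C) to local conditions
R \approx \cot\!\left( h_a + \frac{7.31}{h_a + 4.4} \right),
\qquad
R' = R \cdot \frac{P}{101.0\ \mathrm{kPa}} \cdot \frac{283}{273 + T_C}
```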

  20. A communication model of shared decision making: accounting for cancer treatment decisions.

    PubMed

    Siminoff, Laura A; Step, Mary M

    2005-07-01

    The authors present a communication model of shared decision making (CMSDM) that explicitly identifies the communication process as the vehicle for decision making in cancer treatment. In this view, decision making is necessarily a sociocommunicative process whereby people enter into a relationship, exchange information, establish preferences, and choose a course of action. The model derives from contemporary notions of behavioral decision making and ethical conceptions of the doctor-patient relationship. This article briefly reviews the theoretical approaches to decision making, notes deficiencies, and embeds a more socially based process into the dynamics of the physician-patient relationship, focusing on cancer treatment decisions. In the CMSDM, decisions depend on (a) antecedent factors that have potential to influence communication, (b) jointly constructed communication climate, and (c) treatment preferences established by the physician and the patient. PMID:16045427

  1. An Energy Approach to a Micromechanics Model Accounting for Nonlinear Interface Debonding.

    SciTech Connect

    Tan, H.; Huang, Y.; Geubelle, P. H.; Liu, C.; Breitenfeld, M. S.

    2005-01-01

    We developed a micromechanics model to study the effect of nonlinear interface debonding on the constitutive behavior of composite materials. While implementing this micromechanics model in a large-scale simulation code for solid rockets, we were challenged by problems such as tension/shear coupling and the nonuniform distribution of the displacement jump at the particle/matrix interfaces. We therefore propose an energy approach to solve these problems. This energy approach calculates the potential energy of the representative volume element, including the contribution from interface debonding. By minimizing the potential energy with respect to variations of the interface displacement jump, the traction-balanced interface debonding can be found and the macroscopic constitutive relations established. This energy approach can treat different load conditions in a unified way, and the interface cohesive law can take any arbitrary form. In this paper, the energy approach is verified to give the same constitutive behaviors as reported before.
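
    A one-dimensional sketch of the energy idea, assuming an exponential cohesive law and a single elastic element in series with the interface; the stationarity condition of the minimization recovers the traction balance k*(u - delta) = T(delta). The stiffness and cohesive parameters are arbitrary:

        import numpy as np
        from scipy.optimize import minimize_scalar

        k, sig_max, d_c = 100.0, 5.0, 0.1      # assumed stiffness and cohesive parameters

        def potential(delta, u):
            # Bulk elastic energy plus cohesive energy of the interface, where
            # Phi(delta) integrates the traction T(delta) = sig_max*(delta/d_c)*e^(1-delta/d_c).
            elastic = 0.5 * k * (u - delta) ** 2
            x = delta / d_c
            cohesive = sig_max * d_c * np.e * (1.0 - (1.0 + x) * np.exp(-x))
            return elastic + cohesive

        for u in (0.05, 0.15, 0.30):           # prescribed macroscopic stretch
            res = minimize_scalar(potential, bounds=(0.0, u), args=(u,), method="bounded")
            d = res.x
            print(u, round(d, 4), round(k * (u - d), 3))   # jump and balanced traction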

  2. Structure-Based Statistical Mechanical Model Accounts for the Causality and Energetics of Allosteric Communication

    PubMed Central

    Guarnera, Enrico; Berezovsky, Igor N.

    2016-01-01

    Allostery is one of the pervasive mechanisms through which proteins in living systems carry out enzymatic activity, cell signaling, and metabolism control. Effective modeling of the protein function regulation requires a synthesis of the thermodynamic and structural views of allostery. We present here a structure-based statistical mechanical model of allostery, allowing one to observe causality of communication between regulatory and functional sites, and to estimate per residue free energy changes. Based on the consideration of ligand free and ligand bound systems in the context of a harmonic model, corresponding sets of characteristic normal modes are obtained and used as inputs for an allosteric potential. This potential quantifies the mean work exerted on a residue due to the local motion of its neighbors. Subsequently, in a statistical mechanical framework the entropic contribution to allosteric free energy of a residue is directly calculated from the comparison of conformational ensembles in the ligand free and ligand bound systems. As a result, this method provides a systematic approach for analyzing the energetics of allosteric communication based on a single structure. The feasibility of the approach was tested on a variety of allosteric proteins, heterogeneous in terms of size, topology and degree of oligomerization. The allosteric free energy calculations show the diversity of ways and complexity of scenarios existing in the phenomenology of allosteric causality and communication. The presented model is a step forward in developing the computational techniques aimed at detecting allosteric sites and obtaining the discriminative power between agonistic and antagonistic effectors, which are among the major goals in allosteric drug design. PMID:26939022
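
    A toy version of these ingredients, assuming a Gaussian-network-style harmonic model rather than the authors' exact formulation: normal modes come from the eigendecomposition of a contact (Kirchhoff) matrix, and a per-residue entropic score is taken from the change in mean-square fluctuations between the ligand-free and ligand-bound networks, the bound state being mimicked by extra contacts at a hypothetical binding site:

        import numpy as np

        def kirchhoff(coords, cutoff=7.0):
            # Contact (Kirchhoff) matrix of a toy Gaussian network model.
            d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
            K = -(d < cutoff).astype(float)
            np.fill_diagonal(K, 0.0)
            np.fill_diagonal(K, -K.sum(axis=1))
            return K

        def msf(K):
            # Mean-square fluctuations from the nonzero normal modes (pseudo-inverse).
            w, V = np.linalg.eigh(K)
            nz = w > 1e-8
            return (V[:, nz] ** 2 / w[nz]).sum(axis=1)

        rng = np.random.default_rng(1)
        xyz = rng.normal(scale=4.0, size=(60, 3))     # toy 60-residue structure
        K_free = kirchhoff(xyz)
        K_bound = K_free.copy()
        site = [0, 1, 2]                              # hypothetical ligand-binding residues
        for i in site:                                # binding rigidifies the site by
            for j in site:                            # adding springs among its residues
                if i != j:
                    K_bound[i, j] -= 1.0
                    K_bound[i, i] += 1.0

        # Entropic score per residue, in units of kT up to a constant factor;
        # large values flag residues whose flexibility drops upon binding.
        dG = np.log(msf(K_free) / msf(K_bound))
        print(np.argsort(dG)[-5:])                    # most affected residues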

  3. Structure-Based Statistical Mechanical Model Accounts for the Causality and Energetics of Allosteric Communication.

    PubMed

    Guarnera, Enrico; Berezovsky, Igor N

    2016-03-01

    Allostery is one of the pervasive mechanisms through which proteins in living systems carry out enzymatic activity, cell signaling, and metabolism control. Effective modeling of the protein function regulation requires a synthesis of the thermodynamic and structural views of allostery. We present here a structure-based statistical mechanical model of allostery, allowing one to observe causality of communication between regulatory and functional sites, and to estimate per residue free energy changes. Based on the consideration of ligand free and ligand bound systems in the context of a harmonic model, corresponding sets of characteristic normal modes are obtained and used as inputs for an allosteric potential. This potential quantifies the mean work exerted on a residue due to the local motion of its neighbors. Subsequently, in a statistical mechanical framework the entropic contribution to allosteric free energy of a residue is directly calculated from the comparison of conformational ensembles in the ligand free and ligand bound systems. As a result, this method provides a systematic approach for analyzing the energetics of allosteric communication based on a single structure. The feasibility of the approach was tested on a variety of allosteric proteins, heterogeneous in terms of size, topology and degree of oligomerization. The allosteric free energy calculations show the diversity of ways and complexity of scenarios existing in the phenomenology of allosteric causality and communication. The presented model is a step forward in developing the computational techniques aimed at detecting allosteric sites and obtaining the discriminative power between agonistic and antagonistic effectors, which are among the major goals in allosteric drug design. PMID:26939022

  4. On the influence of debris in glacier melt modelling: a new temperature-index model accounting for the debris thickness feedback

    NASA Astrophysics Data System (ADS)

    Carenzo, Marco; Mabillard, Johan; Pellicciotti, Francesca; Reid, Tim; Brock, Ben; Burlando, Paolo

    2013-04-01

    The increase of rockfalls from the surrounding slopes and of englacial melt-out material has led to an increase in the debris-cover extent on Alpine glaciers. In recent years, distributed debris energy-balance models have been developed to account for the melt enhancement caused by a thin debris layer and the melt reduction caused by a thick one. However, such models require a large amount of input data that are often unavailable, especially in remote mountain areas such as the Himalaya. Some of the input data, such as wind or temperature, are also difficult to extrapolate from station measurements. Because of their lower data requirements, empirical models have been used in glacier melt modelling. However, they generally simplify the debris effect by using a single melt-reduction factor, which does not account for the influence of debris thickness on melt. In this paper, we present a new temperature-index model accounting for the debris thickness feedback in the computation of melt rates at the debris-ice interface. The empirical parameters (temperature factor, shortwave radiation factor, and lag factor accounting for the energy transfer through the debris layer) are optimized at the point scale for several debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter has been validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. The new model is developed on Miage Glacier, Italy, a debris-covered glacier whose ablation area is mantled in a near-continuous layer of rock. Subsequently, its transferability is tested on Haut Glacier d'Arolla, Switzerland, where the debris is thinner and its extent has been seen to expand in recent decades. The results show that the performance of the new debris temperature-index model (DETI) in simulating the glacier melt rate at the point scale
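
    A sketch of a debris-aware temperature-index melt computation in the spirit of the model described above, with invented parameter values; the published parameterization instead ties the temperature factor, shortwave radiation factor, and lag to debris thickness through calibration against an energy-balance model:

        import numpy as np

        def deti_melt(air_temp, swr, albedo, debris_m):
            # Illustrative thickness-dependent parameters (not the published values):
            TF = 0.08 * np.exp(-debris_m / 0.10)     # temperature factor, mm w.e. h-1 K-1
            SRF = 0.009 * np.exp(-debris_m / 0.15)   # shortwave radiation factor
            lag = int(round(24.0 * debris_m))        # hours for heat to cross the debris
            T = np.concatenate([np.zeros(lag), air_temp])[:air_temp.size]
            I = np.concatenate([np.zeros(lag), swr])[:swr.size]
            return np.clip(TF * T + SRF * (1.0 - albedo) * I, 0.0, None)

        hours = np.arange(48)
        temp = 5.0 + 4.0 * np.sin(2 * np.pi * hours / 24)             # synthetic forcing
        swr = np.maximum(0.0, 800.0 * np.sin(2 * np.pi * hours / 24))
        print(deti_melt(temp, swr, 0.15, 0.05).sum(),   # thin debris: high melt
              deti_melt(temp, swr, 0.15, 0.40).sum())   # thick debris: strongly damped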

  5. Accounting for natural organic matter in aqueous chemical equilibrium models: a review of the theories and applications

    NASA Astrophysics Data System (ADS)

    Dudal, Yves; Gérard, Frédéric

    2004-08-01

    Soil organic matter consists of a highly complex and diversified blend of organic molecules, ranging from low molecular weight organic acids (LMWOAs), sugars, amines, alcohols, etc., to high apparent molecular weight fulvic and humic acids. The presence of a wide range of functional groups on these molecules makes them very reactive and influential in soil chemistry, with regard to acid-base chemistry, metal complexation, precipitation and dissolution of minerals, and microbial reactions. Of these functional groups, the carboxylic and phenolic ones are the most abundant and the most influential in metal complexation. Therefore, chemical equilibrium models have progressively dealt with organic matter in their calculations. This paper presents a review of six chemical equilibrium models, namely NICA-Donnan, EQ3/6, GEOCHEM, MINTEQA2, PHREEQC and WHAM, in light of the account they make of natural organic matter (NOM), with the objective of helping potential users in choosing a modelling approach. The account has taken various forms, mainly by adding specific molecules to the existing model databases (EQ3/6, GEOCHEM, and PHREEQC) or by using either a discrete (WHAM) or a continuous (NICA-Donnan and MINTEQA2) distribution of the deprotonated carboxylic and phenolic groups. The different ways in which soil organic matter has been integrated into these models are discussed in light of the model-experiment comparisons found in the literature, concerning applications to either laboratory or natural systems. Much of the attention has been focused on the two most advanced models, WHAM and NICA-Donnan, which are able to reasonably describe most of the experimental results. Nevertheless, a better knowledge of the metal-binding properties of humic substances is needed to better constrain model inputs with site-specific parameter values. This represents the main axis of research that needs to be carried out to improve the models. In addition to

  6. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections.

    PubMed

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect precise absolute positioning applications that use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Unlike the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, was given together with the correction values in the improved model, whereas the traditional model provides correction values only. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform those with the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which is used for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model
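
    The correction step itself is straightforward; the contribution of the improved model is that each correction carries a precision, which can inflate the variance of the corrected code observation. A sketch with placeholder correction and precision tables (not the published values):

        import numpy as np

        # Elevation-dependent code-bias corrections (m) and 1-sigma precisions (m)
        # for one satellite type and frequency; all numbers are placeholders.
        elev_nodes = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
        corr_nodes = np.array([-0.45, -0.30, -0.12, 0.00, 0.08, 0.12, 0.13])
        sig_nodes = np.array([0.08, 0.06, 0.05, 0.04, 0.04, 0.05, 0.06])

        def correct_code(pseudorange, elev_deg, sigma_code=0.30):
            corr = np.interp(elev_deg, elev_nodes, corr_nodes)
            sig_corr = np.interp(elev_deg, elev_nodes, sig_nodes)
            # Stochastic model: the corrected observation's variance includes the
            # correction uncertainty instead of treating the correction as exact.
            return pseudorange - corr, sigma_code**2 + sig_corr**2

        P_corr, var = correct_code(21345678.90, 37.5)
        print(P_corr, 1.0 / var)    # corrected observation and its least-squares weight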

  7. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    PubMed Central

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect precise absolute positioning applications that use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Unlike the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, was given together with the correction values in the improved model, whereas the traditional model provides correction values only. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform those with the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which is used for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model

  8. Accounting for Long Term Sediment Storage in a Watershed Scale Numerical Model for Suspended Sediment Routing

    NASA Astrophysics Data System (ADS)

    Keeler, J. J.; Pizzuto, J. E.; Skalak, K.; Karwan, D. L.; Benthem, A.; Ackerman, T. R.

    2015-12-01

    Quantifying the delivery of suspended sediment from upland sources to downstream receiving waters is important for watershed management, but current routing models fail to accurately represent lag times in delivery resulting from sediment storage. In this study, we route suspended sediment tagged by a characteristic tracer using a 1-dimensional model that implicitly includes storage and remobilization processes and timescales. From an input location where tagged sediment is added, the model advects suspended sediment downstream at the velocity of the stream (adjusted for the intermittency of transport events). Deposition rates are specified by the fraction of the suspended load stored per kilometer of downstream transport (presumably available from a sediment budget). Tagged sediment leaving storage is evaluated from a convolution equation based on the probability distribution function (pdf) of sediment storage waiting times; this approach avoids the difficulty of accurately representing complex processes of sediment remobilization from floodplain and other deposits. To illustrate the role of storage on sediment delivery, we compare exponential and bounded power-law waiting time pdfs with identical means of 94 years. In both cases, the median travel time for sediment to reach the depocenter in fluvial systems less than 40 km long is governed by in-channel transport and is unaffected by sediment storage. As the channel length increases, however, the median sediment travel time reflects storage rather than in-channel transport; travel times do not vary significantly between the two different waiting time functions. At distances of 50, 100, and 200 km, the median travel time for suspended sediment is 36, 136, and 325 years, orders of magnitude slower than travel times associated with in-channel transport. These computations demonstrate that storage can be neglected for short rivers, but for longer systems, storage controls the delivery of suspended sediment.
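
    A Monte Carlo sketch of the storage idea, assuming an exponential waiting-time pdf with a 94-year mean and an assumed per-kilometre probability of entering storage; the numbers are illustrative, but they reproduce the qualitative result that in-channel transport controls short rivers while storage controls long ones:

        import numpy as np

        rng = np.random.default_rng(0)

        def median_travel_time(L_km, p_store_per_km=0.02, mean_wait_yr=94.0,
                               v_eff_km_per_yr=500.0, n=50000):
            # Storage episodes along the reach; the sum of k exponential waits
            # is Gamma(k, mean_wait_yr), and k = 0 contributes no waiting time.
            k = rng.binomial(L_km, p_store_per_km, size=n)
            wait = np.where(k > 0, rng.gamma(np.maximum(k, 1), mean_wait_yr), 0.0)
            return np.median(L_km / v_eff_km_per_yr + wait)

        for L in (20, 50, 100, 200):
            print(L, "km:", round(median_travel_time(L), 1), "yr")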

  9. Accounting for management costs in sensitivity analyses of matrix population models.

    PubMed

    Baxter, Peter W J; McCarthy, Michael A; Possingham, Hugh P; Menkhorst, Peter W; McLean, Natasha

    2006-06-01

    Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. PMID
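
    The underlying computation is standard: sensitivities of the growth rate lambda come from the leading left and right eigenvectors of the projection matrix, and dividing each sensitivity by a marginal cost ranks options per dollar. A sketch with a hypothetical two-stage matrix and invented costs:

        import numpy as np

        A = np.array([[0.0, 1.2],    # fecundities (hypothetical)
                      [0.6, 0.8]])   # survival/transition rates (hypothetical)

        vals, W = np.linalg.eig(A)
        i = np.argmax(vals.real)
        lam, w = vals.real[i], W[:, i].real      # growth rate, stable stage structure
        valsT, V = np.linalg.eig(A.T)
        v = V[:, np.argmax(valsT.real)].real     # reproductive values

        S = np.outer(v, w) / (v @ w)             # sensitivities d(lambda)/d(a_ij)
        E = S * A / lam                          # elasticities, for comparison

        cost = np.array([[np.inf, 200.0],        # $ per unit increase in each rate
                         [500.0, 900.0]])        # (illustrative management costs)
        print("elasticity:\n", E.round(3))
        print("return per dollar:\n", (S / cost).round(5))   # cost-aware ranking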

  10. Consistent Treatment of Hydrophobicity in Protein Lattice Models Accounts for Cold Denaturation

    NASA Astrophysics Data System (ADS)

    van Dijk, Erik; Varilly, Patrick; Knowles, Tuomas P. J.; Frenkel, Daan; Abeln, Sanne

    2016-02-01

    The hydrophobic effect stabilizes the native structure of proteins by minimizing the unfavorable interactions between hydrophobic residues and water through the formation of a hydrophobic core. Here, we include the entropic and enthalpic contributions of the hydrophobic effect explicitly in an implicit solvent model. This allows us to capture two important effects: a length-scale dependence and a temperature dependence for the solvation of a hydrophobic particle. This consistent treatment of the hydrophobic effect explains cold denaturation and heat capacity measurements of solvated proteins.

  11. Consistent Treatment of Hydrophobicity in Protein Lattice Models Accounts for Cold Denaturation.

    PubMed

    van Dijk, Erik; Varilly, Patrick; Knowles, Tuomas P J; Frenkel, Daan; Abeln, Sanne

    2016-02-19

    The hydrophobic effect stabilizes the native structure of proteins by minimizing the unfavorable interactions between hydrophobic residues and water through the formation of a hydrophobic core. Here, we include the entropic and enthalpic contributions of the hydrophobic effect explicitly in an implicit solvent model. This allows us to capture two important effects: a length-scale dependence and a temperature dependence for the solvation of a hydrophobic particle. This consistent treatment of the hydrophobic effect explains cold denaturation and heat capacity measurements of solvated proteins. PMID:26943560

  12. Accounting for Heaping in Retrospectively Reported Event Data – A Mixture-Model Approach

    PubMed Central

    Bar, Haim Y.; Lillard, Dean R.

    2012-01-01

    When event data are retrospectively reported, more temporally distal events tend to get “heaped” on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand side variables. We develop a model-based approach to estimate the extent of heaping in the data, and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that collect and potentially could collect event data. PMID:22733577
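
    A simulation sketch of the heaping mechanism, assuming reports are a mixture of exact values and values rounded to the nearest multiple of 12 months, with a heaping probability that grows with recall distance; a likelihood-based treatment would estimate the mixture parameters rather than fix them:

        import numpy as np

        rng = np.random.default_rng(42)

        def report(true_months, p0=0.1, p1=0.01):
            # Heaping probability grows with how long ago the event occurred
            # (p0, p1 are illustrative, not estimates from the paper).
            p_heap = np.clip(p0 + p1 * true_months, 0.0, 1.0)
            heaped = rng.random(true_months.size) < p_heap
            rounded = 12.0 * np.round(true_months / 12.0)
            return np.where(heaped, rounded, true_months)

        truth = rng.integers(1, 240, size=10000).astype(float)   # months since event
        obs = report(truth)
        print(f"{(obs % 12 == 0).mean():.0%} of reports sit on 12-month multiples "
              f"(vs {(truth % 12 == 0).mean():.0%} in the truth)")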

  13. Account of near-cathode sheath in numerical models of high-pressure arc discharges

    NASA Astrophysics Data System (ADS)

    Benilov, M. S.; Almeida, N. A.; Baeva, M.; Cunha, M. D.; Benilova, L. G.; Uhrlandt, D.

    2016-06-01

    Three approaches to describing the separation of charges in near-cathode regions of high-pressure arc discharges are compared. The first approach employs a single set of equations, including the Poisson equation, in the whole interelectrode gap. The second approach employs a fully non-equilibrium description of the quasi-neutral bulk plasma, complemented with a newly developed description of the space-charge sheaths. The third, and the simplest, approach exploits the fact that significant power is deposited by the arc power supply into the near-cathode plasma layer, which allows one to simulate the plasma–cathode interaction to the first approximation independently of processes in the bulk plasma. It is found that results given by the different models are generally in good agreement, and in some cases the agreement is even surprisingly good. It follows that the predicted integral characteristics of the plasma–cathode interaction are not strongly affected by details of the model provided that the basic physics is right.

  14. Singing with yourself: evidence for an inverse modeling account of poor-pitch singing.

    PubMed

    Pfordresher, Peter Q; Mantell, James T

    2014-05-01

    Singing is a ubiquitous and culturally significant activity that humans engage in from an early age. Nevertheless, some individuals - termed poor-pitch singers - are unable to match target pitches within a musical semitone while singing. In the experiments reported here, we tested whether poor-pitch singing deficits would be reduced when individuals imitate recordings of themselves as opposed to recordings of other individuals. This prediction was based on the hypothesis that poor-pitch singers have not developed an abstract "inverse model" of the auditory-vocal system and instead must rely on sensorimotor associations that they have experienced directly, which is true for sequences an individual has already produced. In three experiments, participants, both accurate and poor-pitch singers, were better able to imitate sung recordings of themselves than sung recordings of other singers. However, this self-advantage was enhanced for poor-pitch singers. These effects were not a byproduct of self-recognition (Experiment 1), vocal timbre (Experiment 2), or the absolute pitch of target recordings (i.e., the advantage remains when recordings are transposed, Experiment 3). Results support the conceptualization of poor-pitch singing as an imitative deficit resulting from a deficient inverse model of the auditory-vocal system with respect to pitch. PMID:24480454

  15. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    USGS Publications Warehouse

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.

  16. Model Fitting Versus Curve Fitting: A Model of Renormalization Provides a Better Account of Age Aftereffects Than a Model of Local Repulsion

    PubMed Central

    Mac, Amy; Rhodes, Gillian; Webster, Michael A.

    2015-01-01

    Recently, we proposed that the aftereffects of adapting to facial age are consistent with a renormalization of the perceived age (e.g., so that after adapting to a younger or older age, all ages appear slightly older or younger, respectively). This conclusion has been challenged by arguing that the aftereffects can also be accounted for by an alternative model based on repulsion (in which facial ages above or below the adapting age are biased away from the adaptor). However, we show here that this challenge was based on allowing the fitted functions to take on values which are implausible and incompatible across the different adapting conditions. When the fits are constrained or interpreted in terms of standard assumptions about normalization and repulsion, then the two analyses both agree in pointing to a pattern of renormalization in age aftereffects. PMID:27551353

  17. Modelling of trace metal uptake by roots taking into account complexation by exogenous organic ligands

    NASA Astrophysics Data System (ADS)

    Jean-Marc, Custos; Christian, Moyne; Sterckeman, Thibault

    2010-05-01

    The context of this study is the phytoextraction of soil trace metals such as Cd, Pb or Zn. Trace metal transfer from soil to plant depends on physical and chemical processes such as mineral alteration, transport, adsorption/desorption, reactions in solution, and biological processes including the action of plant roots and of the associated micro-flora. Complexation of metal ions by organic ligands is considered to play a role in the availability of trace metals to roots, in particular when synthetic ligands (EDTA, NTA, etc.) are added to the soil to increase the solubility of the contaminants. As this role is not clearly understood, we wanted to simulate it in order to quantify the effect of organic ligands on root uptake of trace metals and to produce a tool which could help in optimizing the conditions of phytoextraction. We studied the effect of an aminocarboxylate ligand on the absorption of the metal ion by roots, both in hydroponic solution and in soil solution, for which we had to formalize the buffer power for the metal. We assumed that the hydrated metal ion is the only form which can be absorbed by the plants. Transport and reaction processes were modelled for a system made up of the metal M, a ligand L and the metal complex ML. The Tinker-Nye-Barber model was adapted to describe the transport of the solutes M, L and ML in the soil and the absorption of M by the roots. This allowed us to represent the interactions between transport, chelating reactions, absorption of the solutes at the root surface, and root growth with time, in order to simulate metal uptake by a whole root system. Several assumptions were tested, such as (i) absorption of the metal by an infinite sink or according to Michaelis-Menten kinetics, (ii) solute transport by diffusion with and without mass flow, and (iii) soil buffer power for the ligand L. In hydroponic solution (without soil buffer power), ligands decreased the trace metal flux towards roots, as they reduced the concentration of hydrated

  18. A multi-level model accounting for the effects of JAK2-STAT5 signal modulation in erythropoiesis.

    PubMed

    Lai, Xin; Nikolov, Svetoslav; Wolkenhauer, Olaf; Vera, Julio

    2009-08-01

    We develop a multi-level model, using ordinary differential equations, based on quantitative experimental data, accounting for murine erythropoiesis. At the sub-cellular level, the model includes a description of the regulation of red blood cell differentiation through Epo-stimulated JAK2-STAT5 signalling activation, while at the cell population level the model describes the dynamics of (STAT5-mediated) red blood cell differentiation from their progenitors. Furthermore, the model includes equations depicting the hypoxia-mediated regulation of blood levels of the hormone erythropoietin. Taken together, the model constitutes a multi-level, feedback loop-regulated biological system, involving processes in different organs and at different organisational levels. We use our model to investigate the effect of deregulation of the proteins involved in the JAK2-STAT5 signalling pathway in red blood cells. Our analysis suggests that down-regulation of any of the three signalling system components considerably affects the hematocrit level in an individual. In addition, our analysis predicts that exogenous Epo injection (an already existing treatment for several blood diseases) may compensate for the effects of a single down-regulation of the Epo hormone level, STAT5 or EpoR/JAK2 expression level, but may be insufficient to counteract a combined down-regulation of all the elements in the JAK2-STAT5 signalling cascade. PMID:19660986

  19. Accounting for false-positive acoustic detections of bats using occupancy models

    USGS Publications Warehouse

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    4. Synthesis and applications. Our results suggest that false positives sufficient to affect inferences may be common in acoustic surveys for bats. We demonstrate an approach that can estimate occupancy, regardless of the false-positive rate, when acoustic surveys are paired with capture surveys. Applications of this approach include monitoring the spread of White-Nose Syndrome, estimating the impact of climate change and informing conservation listing decisions. We calculate a site-specific probability of occupancy, conditional on survey results, which could inform local permitting decisions, such as for wind energy projects. More generally, the magnitude of false positives suggests that false-positive occupancy models can improve accuracy in research and monitoring of bats and provide wildlife managers with more reliable information.

  20. Accounting for subordinate perceptions of supervisor power: an identity-dependence model.

    PubMed

    Farmer, Steven M; Aguinis, Herman

    2005-11-01

    The authors present a model that explains how subordinates perceive the power of their supervisors and the causal mechanisms by which these perceptions translate into subordinate outcomes. Drawing on identity and resource-dependence theories, the authors propose that supervisors have power over their subordinates when they control resources needed for the subordinates' enactment and maintenance of current and desired identities. The joint effect of perceptions of supervisor power and supervisor intentions to provide such resources leads to 4 conditions ranging from highly functional to highly dysfunctional: confirmation, hope, apathy, and progressive withdrawal. Each of these conditions is associated with specific outcomes such as the quality of the supervisor-subordinate relationship, turnover, and changes in the type and centrality of various subordinate identities. PMID:16316266

  1. Accounting for crustal magnetization in models of the core magnetic field

    NASA Technical Reports Server (NTRS)

    Jackson, Andrew

    1990-01-01

    The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process, using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that in the horizontal components.

  2. Simulation of a vector hysteresis measurement system taking hysteresis into account by the vector Preisach model

    NASA Astrophysics Data System (ADS)

    Kuczmann, Miklós

    2008-02-01

    The paper deals with the numerical analysis of a rotational single sheet tester with a round-shaped specimen (RRSST), which is now under construction. The measurement setup consists of an induction motor whose rotor has been removed and whose windings have been replaced by a special two-phase winding that can generate a homogeneous magnetic field inside the motor. The two orthogonal components of the magnetic field intensity and of the magnetic flux density vectors can be measured by H-coils and B-coils, respectively. The Finite Element Method (FEM) with the T, Φ-Φ potential formulation has been applied in the simulations. The vector hysteresis property of the specimen has been approximated by the vector Preisach model. Finally, the nonlinear problem has been solved by the fixed-point technique. The aim of the present work is to focus on the design aspects of this kind of measurement system.

  3. The gamesmanship of sex: a model based on African American adolescent accounts.

    PubMed

    Eyre, S L; Hoffman, V; Millstein, S G

    1998-12-01

    This article examines adolescent understanding of the social context of sexual behavior. Using grounded theory to interpret interviews with 39 African American male and female adolescents, the article builds a model of sex-related behavior as a set of interrelated games. A courtship game involves communication of sexual or romantic interest and, over time, formation of a romantic relationship. A duplicity game draws on conventions of a courtship game to trick a partner into having sex. A disclosure game spreads stories about one's own and other's sex-related activities to peers in a gossip network. Finally, a prestige game builds social reputation in the eyes of peers, typically based on gender-specific standards. The article concludes by examining the meanings that sex-related behavior may have for adolescents and the potential use of social knowledge for facilitating adolescent health. PMID:9884994

  4. Rotating Stellar Models Can Account for the Extended Main-sequence Turnoffs in Intermediate-age Clusters

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; Huang, Chelsea X.

    2015-07-01

    We show that the extended main-sequence turnoffs seen in intermediate-age Large Magellanic Cloud (LMC) clusters, often attributed to age spreads of several 100 Myr, may be easily accounted for by variable stellar rotation in a coeval population. We compute synthetic photometry for grids of rotating stellar evolution models and interpolate them to produce isochrones at a variety of rotation rates and orientations. An extended main-sequence turnoff naturally appears in color-magnitude diagrams at ages just under 1 Gyr, peaks in extent between ˜1 and 1.5 Gyr, and gradually disappears by around 2 Gyr in age. We then fit our interpolated isochrones by eye to four LMC clusters with very extended main-sequence turnoffs: NGC 1783, 1806, 1846, and 1987. In each case, stellar populations with a single age and metallicity can comfortably account for the observed extent of the turnoff region. The new stellar models predict almost no correlation of turnoff color with rotational v sin i. The red part of the turnoff is populated by a combination of slow rotators and edge-on rapid rotators, while the blue part contains rapid rotators at lower inclinations.

  5. Accounting for heterogeneity of nutrient dynamics in riverscapes through spatially distributed models

    NASA Astrophysics Data System (ADS)

    Wollheim, W. M.; Stewart, R. J.

    2011-12-01

    Numerous types of heterogeneity exist within river systems, leading to hotspots of nutrient sources, sinks, and impacts embedded within an underlying gradient defined by river size. This heterogeneity influences the downstream propagation of anthropogenic impacts across flow conditions. We applied a river network model to explore how nitrogen saturation at river network scales is influenced by the abundance and distribution of potential nutrient processing hotspots (lakes, beaver ponds, tributary junctions, hyporheic zones) under different flow conditions. We determined that under low flow conditions, whole network nutrient removal is relatively insensitive to the number of hotspots because the underlying river network structure has sufficient nutrient processing capacity. However, hotspots become more important at higher flows and greatly influence the spatial distribution of removal within the network at all flows, suggesting that identification of heterogeneity is critical to develop predictive understanding of nutrient removal processes under changing loading and climate conditions. New temporally intensive data from in situ sensors can potentially help to better understand and constrain these dynamics.

  6. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili

    PubMed Central

    Xiao, Ke; Malvankar, Nikhil S.; Shu, Chuanjun; Martz, Eric; Lovley, Derek R.; Sun, Xiao

    2016-01-01

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic-like conductivity of the pili, which has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models capable of explaining the observed conductive properties of Geobacter pili are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics. PMID:27001169

  7. Covariance-based synaptic plasticity in an attractor network model accounts for fast adaptation in free operant learning.

    PubMed

    Neiman, Tal; Loewenstein, Yonatan

    2013-01-23

    In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats. PMID:23345226
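
    A behavioral-level sketch of a covariance rule on a concurrent variable-interval schedule: each target is baited at its own rate, bait persists until collected, and preferences are updated in proportion to the covariance between action and reward. All parameters are arbitrary; the point is that the expected update vanishes exactly when the local reward probabilities of the two targets are equal, which is the matching equilibrium:

        import numpy as np

        rng = np.random.default_rng(3)
        rates = np.array([0.10, 0.05])     # baiting probabilities per step (VI schedules)
        w = np.zeros(2)                    # preferences
        baited = np.zeros(2, dtype=bool)
        eta = 0.05
        choices, rewards = np.zeros(2), np.zeros(2)

        for t in range(20000):
            baited |= rng.random(2) < rates        # bait persists until collected
            p = np.exp(w) / np.exp(w).sum()        # softmax choice probabilities
            a = rng.choice(2, p=p)
            r = float(baited[a]); baited[a] = False
            w += eta * r * (np.eye(2)[a] - p)      # covariance rule: (action - mean) * reward
            choices[a] += 1.0; rewards[a] += r

        # Matching law: the fraction of choices approximates the fraction of rewards.
        print("choice fractions:", (choices / choices.sum()).round(3))
        print("reward fractions:", (rewards / rewards.sum()).round(3))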

  8. Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    PubMed Central

    Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.

    2011-01-01

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562

  9. A New Model Incorporating Variably Saturated Flow That Accounts for Capillary-Fringe Elongation in Unconfined-Aquifer Tests

    NASA Astrophysics Data System (ADS)

    Moench, A. F.

    2006-12-01

    A seven-day, constant-rate aquifer test conducted by University of Waterloo researchers at Canadian Forces Base Borden in Ontario, Canada is useful for advancing understanding of fluid flow processes in response to pumping from an unconfined aquifer. Measured data included detailed water content in the unsaturated zone through time and space and drawdown in the saturated zone. The water content data reveal downward translation of the soil-moisture profiles and simultaneous elongation of the capillary fringe. Estimates of capillary-fringe thicknesses made use of model-calculated water-table elevations. Using drawdown data only, parameter estimation with a numerical model that solves Richards' equation for fluid flow and uses Brooks and Corey functional relations to represent unsaturated-zone characteristics yielded simulated drawdowns in the saturated zone that compared favorably with measured drawdowns. However, the modeled soil-moisture profile bore no resemblance to the measured soil-moisture profiles, and the model did not accurately simulate capillary-fringe elongation. I propose a modified model that largely decouples the Brooks and Corey soil-moisture and relative hydraulic conductivity functions by using two pore-size distribution functions, one for each functional relation. With the proposed model, the general shape of the measured soil-moisture profiles was reproduced, there were time-varying vertical extensions of the capillary fringe consistent with observations, and there was satisfactory agreement between simulated and measured drawdowns in the saturated zone. The model does not account for appreciable radial variations in the thickness of the capillary fringe. For example, in seven days of pumping the capillary fringe grew from 35 to 58 cm at a distance of 1 m and from 41 to 50 cm at a distance of 20 m. The analysis shows that drawdown measurements in the saturated zone by themselves, without supporting soil-moisture measurements, are not sufficient to
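
    The proposed decoupling can be written down compactly: standard Brooks and Corey relations tie water retention and relative conductivity to a single pore-size distribution index, while the modified model gives each relation its own index. A sketch with assumed parameter values:

        import numpy as np

        def brooks_corey_decoupled(psi, psi_b=0.25, lam_ret=0.6, lam_kr=2.5,
                                   theta_r=0.05, theta_s=0.35):
            # Retention curve uses lam_ret; relative conductivity uses lam_kr,
            # decoupling the two relations in the spirit of the modified model above.
            psi = np.asarray(psi, dtype=float)
            Se = np.where(psi > psi_b, (psi_b / psi) ** lam_ret, 1.0)
            theta = theta_r + (theta_s - theta_r) * Se
            Se_k = np.where(psi > psi_b, (psi_b / psi) ** lam_kr, 1.0)
            kr = Se_k ** (3.0 + 2.0 / lam_kr)        # Burdine-type exponent
            return theta, kr

        suction = np.array([0.1, 0.3, 0.5, 1.0, 2.0])   # suction heads (m), assumed
        theta, kr = brooks_corey_decoupled(suction)
        print(theta.round(3), kr.round(6))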

  10. Accounting for geochemical alterations of caprock fracture permeability in basin-scale models of leakage from geologic CO2 reservoirs

    NASA Astrophysics Data System (ADS)

    Guo, B.; Fitts, J. P.; Dobossy, M.; Bielicki, J. M.; Peters, C. A.

    2012-12-01

    Climate mitigation, public acceptance, and energy markets demand that potential CO2 leakage rates from geologic storage reservoirs be predicted to be low and be known to a high level of certainty. Current approaches to predicting CO2 leakage rates assume constant permeability of leakage pathways (e.g., wellbores, faults, fractures). A reactive transport model was developed to account for geochemical alterations that result in permeability evolution of leakage pathways. The one-dimensional reactive transport model was coupled with the basin-scale Estimating Leakage Semi-Analytical (ELSA) model to simulate CO2 and brine leakage through vertical caprock pathways for different CO2 storage reservoir sites and injection scenarios within the Mt. Simon and St. Peter sandstone formations of the Michigan basin. In the numerical reactive transport model, calcite dissolution by reactions driven by CO2-acidified brine expands leakage pathways and increases their permeability. A geochemical model compared kinetic and equilibrium treatments of calcite dissolution within each grid block for each time step. For a single fracture, we investigated the effect of the reactions on leakage by performing sensitivity analyses of fracture geometry, CO2 concentration, calcite abundance, initial permeability, and pressure gradient. Assuming that calcite dissolution reaches equilibrium at each time step produces unrealistic scenarios of buffering and permeability evolution within fractures. Therefore, the reactive transport model with a kinetic treatment of calcite dissolution was coupled to the ELSA model and used to compare brine and CO2 leakage rates at a variety of potential geologic storage sites within the Michigan basin. The results are used to construct maps based on the susceptibility to geochemically driven increases in leakage rates. These maps should provide useful and easily communicated inputs into decision-making processes for siting geologic CO2
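
    A sketch of the kinetic treatment coupled to a cubic-law permeability update for a single fracture segment; the rate constant, geometry, and brine chemistry below are placeholders, not the study's calibrated inputs:

        # Kinetic calcite dissolution widening a fracture, with permeability from
        # the cubic law. All parameter values are illustrative placeholders.
        k_rate = 1.0e-8      # effective dissolution rate constant, mol m-2 s-1
        V_m = 3.69e-5        # molar volume of calcite, m3 mol-1
        undersat = 0.8       # (1 - Omega), degree of undersaturation of the brine
        b = 1.0e-4           # initial fracture aperture, m
        dt, years = 86400.0, 50

        for _ in range(int(years * 365)):
            # Dissolution retreats both fracture walls; an equilibrium treatment
            # would instead equilibrate the brine instantly within each step.
            b += 2.0 * k_rate * undersat * V_m * dt

        print(f"aperture {b * 1e3:.2f} mm, permeability {b**2 / 12.0:.2e} m^2")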

  11. Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing

    PubMed Central

    Tan, Cheston; Poggio, Tomaso

    2016-01-01

    Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled “holistic processing”, while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, “neural tuning size”, is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology. PMID:26985989

  12. A sampling design and model for estimating abundance of Nile crocodiles while accounting for heterogeneity of detectability of multiple observers

    USGS Publications Warehouse

    Shirley, Matthew H.; Dorazio, Robert M.; Abassery, Ekramy; Elhady, Amr A.; Mekki, Mohammed S.; Asran, Hosni H.

    2012-01-01

    As part of the development of a management program for Nile crocodiles in Lake Nasser, Egypt, we used a dependent double-observer sampling protocol with multiple observers to compute estimates of population size. To analyze the data, we developed a hierarchical model that allowed us to assess variation in detection probabilities among observers and survey dates, as well as account for variation in crocodile abundance among sites and habitats. We conducted surveys from July 2008 to June 2009 in 15 areas of Lake Nasser that were representative of 3 main habitat categories. During these surveys, we sampled 1,086 km of lake shore wherein we detected 386 crocodiles. Analysis of the data revealed significant variability in both inter- and intra-observer detection probabilities. Our raw encounter rate was 0.355 crocodiles/km. When we accounted for observer effects and habitat, we estimated a surface population abundance of 2,581 (2,239-2,987, 95% credible intervals) crocodiles in Lake Nasser. Our results underscore the importance of well-trained, experienced monitoring personnel in order to decrease heterogeneity in intra-observer detection probability and to better detect changes in the population based on survey indices. This study will assist the Egyptian government in establishing a monitoring program as an integral part of future crocodile harvest activities in Lake Nasser.
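
    The study used a dependent double-observer protocol analyzed with a hierarchical model; the simpler independent-observer sketch below shows the core idea that the overlap between observers identifies detection probability, and hence abundance. Counts and probabilities are invented:

        import numpy as np

        rng = np.random.default_rng(11)
        N, p1, p2 = 500, 0.55, 0.40        # true abundance and per-observer
                                           # detection probabilities (assumed)
        seen1 = rng.random(N) < p1
        seen2 = rng.random(N) < p2         # independence is a simplification here
        n1, n2, m = seen1.sum(), seen2.sum(), (seen1 & seen2).sum()
        N_hat = n1 * n2 / m                # Lincoln-Petersen abundance estimator
        print(n1, n2, m, round(N_hat))     # overlap corrects for imperfect detection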

  13. Accounting for intracell flow in models with emphasis on water table recharge and stream-aquifer interaction. 2. A procedure

    USGS Publications Warehouse

    Jorgensen, D.G.; Signor, D.C.; Imes, J.L.

    1989-01-01

    Intercepted intracell flow, especially if the cell includes water table recharge and a stream (sink), can result in significant model error if not accounted for. A procedure utilizing net flow per cell (Fn) that accounts for intercepted intracell flow can be used for both steady-state and transient simulations. Germane to the procedure is the determination of the ratio of the area of influence of the interior sink to the area of the cell (Ai/Ac). Ai is the area in which water table recharge has the potential to be intercepted by the sink. Determining Ai/Ac requires either a detailed water table map or observation of stream conditions within the cell. A proportioning parameter M, which is equal to 1 or slightly less and is a function of cell geometry, is used to determine how much of the water that has the potential for interception is actually intercepted by the sink within the cell. Also germane to the procedure is the determination of the flow across the streambed (Fs), which arises from the difference in head between the water level in the stream and the potentiometric surface of the aquifer underlying the streambed and is not directly a function of cell size. -from Authors

  14. Accountability, responsiveness and quality for clients model of home support: a model for improved home support services to promote aging at home.

    PubMed

    Kelly, Judy; Orr, Alison

    2009-01-01

    As the proportion of older adults increases within the Canadian population, healthcare systems across the country are facing increased demands for home-based services, including home care nursing, rehabilitation, case management, adult day programs, respite, meal programs and home support. Home support is one of the core care services required in the community to enable older adults to remain at home as long as possible. In 2006, Vancouver Community introduced a new home support delivery and performance management model: the Accountability, Responsiveness and Quality for Clients Model of Home Support (ARQ Model) (VCH 2006). The main components of the ARQ Model are an expanded use of "cluster care" along with stable monthly funding for high-density buildings and neighbourhoods; the introduction of specific monthly and quarterly quality performance reporting; and the implementation of performance-based funding for home support. This article discusses the setup of the ARQ model, its ongoing evaluation and results achieved thus far. PMID:20057219

  15. Impact of accounting for coloured noise in radar altimetry data on a regional quasi-geoid model

    NASA Astrophysics Data System (ADS)

    Farahani, H. H.; Slobbe, D. C.; Klees, R.; Seitz, Kurt

    2016-07-01

    We study the impact of an accurate computation and incorporation of coloured noise in radar altimeter data when computing a regional quasi-geoid model using least-squares techniques. Our test area comprises the Southern North Sea, including the Netherlands, Belgium, and parts of France, Germany, and the UK. We perform the study by modelling the disturbing potential with spherical radial base functions. To that end, we use the traditional remove-compute-restore procedure with a recent GRACE/GOCE static gravity field model. Apart from radar altimeter data, we use terrestrial, airborne, and shipboard gravity data. Radar altimeter sea surface heights are corrected for the instantaneous dynamic topography and used in the form of along-track quasi-geoid height differences. Noise in these data is estimated using repeat-track and post-fit residual analysis techniques and then modelled as an autoregressive moving-average process. Quasi-geoid models are computed with and without taking the modelled coloured noise into account. The difference between them is used as a measure of the impact of coloured noise in radar altimeter along-track quasi-geoid height differences on the estimated quasi-geoid model. The impact strongly depends on the availability of shipboard gravity data. If no such data are available, the impact may attain values exceeding 10 centimetres in particular areas. In case shipboard gravity data are used, the impact is reduced, though it still attains values of several centimetres. We use geometric quasi-geoid heights from GPS/levelling data at height markers as control data to analyse the quality of the quasi-geoid models. The quasi-geoid model computed using a model of the coloured noise in radar altimeter along-track quasi-geoid height differences shows in some areas a significant improvement over a model that assumes white noise in these data. However, the interpretation in other areas remains a challenge due to the limited quality of the control data.
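
    In least-squares terms, taking coloured noise into account means replacing a diagonal weight matrix with the inverse covariance implied by the fitted noise model. A small sketch assuming AR(1) noise in along-track quasi-geoid height differences (the study fits a richer ARMA model); the innovation variance is taken as known for brevity:

        import numpy as np

        rng = np.random.default_rng(7)
        n = 200
        t = np.linspace(0.0, 1.0, n)
        A = np.column_stack([np.ones(n), t])        # toy design matrix (bias + trend)

        phi, sigma = 0.8, 0.02                      # AR(1) noise parameters (assumed)
        e = np.zeros(n)
        for i in range(1, n):
            e[i] = phi * e[i - 1] + rng.normal(scale=sigma)
        y = A @ np.array([0.10, 0.05]) + e          # synthetic observations

        # Estimate phi from the post-fit residuals of an ordinary least-squares run.
        r = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        phi_hat = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])

        # AR(1) covariance matrix and generalized least squares.
        lagmat = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        C = sigma**2 / (1.0 - phi_hat**2) * phi_hat**lagmat
        Ci = np.linalg.inv(C)
        x_gls = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ y)
        print("GLS estimate:", x_gls.round(4))      # compare with the true [0.10, 0.05]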

  16. Thermal creep model for CWSR zircaloy-4 cladding taking into account the annealing of the irradiation hardening

    SciTech Connect

    Cappelaere, Chantal; Limon, Roger; Duguay, Christelle; Pinte, Gerard; Le Breton, Michel; Bouffioux, Pol; Chabretou, Valerie; Miquet, Alain

    2012-02-15

    After irradiation and cooling in a pool, spent nuclear fuel assemblies are either transported for wet storage to a devoted site or loaded in casks for dry storage. During dry transportation or at the beginning of dry storage, the cladding is expected to be subjected to creep deformation under the hoop stress induced by the internal pressure of the fuel rod. Thermal creep is a potential mechanism that might lead to cladding failure. A new creep model was developed, based on a database of creep tests on as-received and irradiated cold-worked stress-relieved Zircaloy-4 cladding over a wide range of temperatures (310 degrees C to 470 degrees C) and hoop stresses (80 to 260 MPa). Based on three laws (a flow law, a strain-hardening recovery law, and an annealing of irradiation hardening law), this model allows the simulation not only of the transient creep and the steady-state creep, but also of the early creep acceleration observed on irradiated samples tested in severe conditions, which was not taken into account in previous models. The extrapolation of the creep model to the conditions of very long-term creep tests is reassuring and demonstrates the robustness of the chosen formalism. The creep model has been assessed under progressively decreasing stress conditions, more representative of a transport. Set up to predict the cladding creep behavior under variable temperature and stress conditions, this model can easily be implemented in codes in order to simulate the thermomechanical behavior of spent fuel rods in various scenarios of postirradiation phases. (authors)
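
    The three laws themselves are not reproduced in the abstract. Below is a generic strain-hardening creep integration under time-varying temperature and stress, a minimal sketch of the kind of formalism that can be stepped through a transport scenario; the Norton-type law and every constant are placeholders, not the authors' fitted model.

    ```python
    import numpy as np

    # Generic strain-hardening creep update (placeholder law, not the paper's):
    # eps_dot = A * exp(-Q/(R*T)) * sigma**n * eps**(-m), hardening via eps**-m
    A, Q, R, n, m = 1e-10, 250e3, 8.314, 5.0, 0.5

    def creep_history(T_K, sigma_MPa, dt_s, eps0=1e-6):
        """Integrate creep strain over paired temperature/stress histories."""
        eps, out = eps0, []
        for T, s in zip(T_K, sigma_MPa):
            rate = A * np.exp(-Q / (R * T)) * s**n * eps**(-m)
            eps += rate * dt_s
            out.append(eps)
        return np.array(out)

    # Decreasing temperature and stress, loosely representative of transport
    hours = 2000
    T = np.linspace(470 + 273.15, 350 + 273.15, hours)
    sigma = np.linspace(150.0, 100.0, hours)
    print(creep_history(T, sigma, dt_s=3600.0)[-1])
    ```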

  17. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, among others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, choices that can result in a variety of estimated pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as how pollution exposure is estimated and which spatial autocorrelation model is used. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. PMID:27494960
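
    A minimal sketch of the Bayesian model averaging idea: fit several candidate covariate sets, weight each model by an approximate posterior probability (here a BIC approximation), and average the pollution effect across models. The Poisson regressions and data below are toy stand-ins for the paper's spatial models.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    df = pd.DataFrame({
        "no2": rng.normal(20, 5, n),
        "income": rng.normal(0, 1, n),
        "education": rng.normal(0, 1, n),
    })
    df["deaths"] = rng.poisson(np.exp(1 + 0.02 * df.no2))

    candidates = [["no2"], ["no2", "income"], ["no2", "income", "education"]]
    fits = [sm.GLM(df.deaths, sm.add_constant(df[c]),
                   family=sm.families.Poisson()).fit() for c in candidates]

    bic = np.array([f.bic for f in fits])
    w = np.exp(-0.5 * (bic - bic.min()))
    w /= w.sum()                                 # approximate posterior weights
    beta_no2 = np.array([f.params["no2"] for f in fits])
    print("BMA estimate of NO2 effect:", np.dot(w, beta_no2))
    ```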

  18. A general model for likelihood computations of genetic marker data accounting for linkage, linkage disequilibrium, and mutations.

    PubMed

    Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter

    2015-09-01

    Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, but using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association, or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for the computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software can perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers. PMID:25425094
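
    The likelihood machinery described here is, at its core, a forward pass of a hidden Markov model over ordered loci: hidden states are inheritance (or founder allele) patterns, transitions encode recombination, and emissions are per-locus genotype probabilities. A generic sketch with toy numbers, not FamLinkX's actual matrices:

    ```python
    import numpy as np

    # Generic HMM forward algorithm over ordered loci. Toy numbers throughout.
    def likelihood(init, trans, emit):
        """init: (S,), trans: list of (S,S) per locus interval, emit: (L,S)."""
        alpha = init * emit[0]
        for l in range(1, emit.shape[0]):
            alpha = (alpha @ trans[l - 1]) * emit[l]
        return alpha.sum()

    theta = 0.1                                  # recombination fraction
    T = np.array([[1 - theta, theta], [theta, 1 - theta]])
    init = np.array([0.5, 0.5])
    emit = np.array([[0.9, 0.2], [0.3, 0.8], [0.7, 0.4]])  # P(data_l | state)
    print(likelihood(init, [T, T], emit))
    ```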

  19. Modelling of stamping of DP steel automotive part accounting for the effect of hard components in the microstructure

    NASA Astrophysics Data System (ADS)

    Ambrozinski, Mateusz; Bzowski, Krzysztof; Mirek, Michal; Rauch, Lukasz; Pietrzyk, Maciej

    2013-05-01

    The paper presents simulations of the manufacturing of an automotive part that has a strong influence on passenger safety. Two approaches to the Finite Element (FE) modelling of stamping of a part that provides extra stiffening of construction subassemblies in the back of a car were considered. The first is a conventional simulation, which assumes that the material is a continuum with a flow stress model and anisotropy coefficients determined from tensile tests. In the second approach, the two-phase microstructure of the DP steel is accounted for in the simulations. The FE2 method, which belongs to the upscaling techniques, is used. A Representative Volume Element (RVE), which is the basis of the upscaling approach and reflects the real microstructure, was obtained by image analysis of a micrograph of the DP steel. However, since FE2 simulations with the real picture of the microstructure in the micro scale are extremely time consuming, the idea of the Statistically Similar Representative Volume Element (SSRVE) was applied. The SSRVE obtained for the DP steel used for production of the automotive part is presented in the paper in the form of a 3D inclusion. The macro scale model of the simulated part is described in detail, as well as the results obtained for macro and micro-macro simulations.

  20. An energy-based model accounting for snow accumulation and snowmelt in a coniferous forest and in an open area

    NASA Astrophysics Data System (ADS)

    Matějka, Ondřej; Jeníček, Michal

    2016-04-01

    An energy balance approach was used to simulate snow water equivalent (SWE) evolution in an open area, a forest clearing and a coniferous forest during the winter seasons 2011/12 and 2012/13 in the Bystřice River basin (Krušné Mountains, Czech Republic). The aim was to describe the impact of vegetation on snow accumulation and snowmelt under different forest canopy structures and tree densities. Hemispherical photographs were used to describe the forest canopy structure. An energy balance model of snow accumulation and melt was set up and adjusted to account for the effects of the forest canopy on the driving meteorological variables. Leaf area index derived from 32 hemispherical photographs of vegetation and sky was used to implement the forest influence in the snow model. The model was evaluated using snow depth and SWE data measured at 16 localities in the winter seasons from 2011 to 2013. The model was able to reproduce the SWE evolution in both winter seasons beneath the forest canopy, in the forest clearing and in the open area. The SWE maximum at forest sites was 18% lower than in open areas and forest clearings. The contribution of shortwave radiation to the snowmelt rate was 50% lower in forest areas than in open areas due to the shading effect. The contribution of turbulent fluxes was 30% lower at forest sites compared to openings because wind speed was reduced to as little as 10% of the values at corresponding open areas. An indirect estimate of interception rates was derived: between 14 and 60% of snowfall was intercepted and sublimated in the forest canopy in both winter seasons. Based on the model results, an underestimation of solid precipitation (a heated precipitation gauge was used for measurement) at the Hřebečná weather station was revealed: snowfall was underestimated by 40% in winter season 2011/12 and by 13% in winter season 2012/13. Although the model formulation appeared sufficient for both analysed winter seasons, canopy effects on the longwave radiation and ground heat flux were not
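
    A common way to implement such canopy adjustments is Beer's-law attenuation of shortwave radiation by leaf area index together with a wind speed reduction factor; the sketch below uses this standard form with illustrative coefficients, not the study's calibrated values.

    ```python
    import numpy as np

    # Beer's-law-type canopy adjustments used to drive a snow energy balance
    # model beneath forest (coefficients are illustrative placeholders).
    def below_canopy_forcing(sw_open, wind_open, lai, k_ext=0.5, w_red=0.1):
        sw_forest = sw_open * np.exp(-k_ext * lai)   # shortwave attenuation
        wind_forest = wind_open * w_red              # strong wind reduction
        return sw_forest, wind_forest

    sw, u = below_canopy_forcing(sw_open=450.0, wind_open=3.0, lai=3.2)
    print(f"shortwave: {sw:.0f} W/m2, wind: {u:.1f} m/s")
    ```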

  1. Mixture models of nucleotide sequence evolution that account for heterogeneity in the substitution process across sites and across lineages.

    PubMed

    Jayaswal, Vivek; Wong, Thomas K F; Robinson, John; Poladian, Leon; Jermiin, Lars S

    2014-09-01

    Molecular phylogenetic studies of homologous sequences of nucleotides often assume that the underlying evolutionary process was globally stationary, reversible, and homogeneous (SRH), and that a model of evolution with one or more site-specific and time-reversible rate matrices (e.g., the GTR rate matrix) is enough to accurately model the evolution of data over the whole tree. However, an increasing body of data suggests that evolution under these conditions is an exception, rather than the norm. To address this issue, several non-SRH models of molecular evolution have been proposed, but they either ignore heterogeneity in the substitution process across sites (HAS) or assume it can be modeled accurately using the Γ distribution. As an alternative to these models of evolution, we introduce a family of mixture models that approximate HAS without the assumption of an underlying predefined statistical distribution. This family of mixture models is combined with non-SRH models of evolution that account for heterogeneity in the substitution process across lineages (HAL). We also present two algorithms for searching model space and identifying an optimal model of evolution that is less likely to over- or underparameterize the data. The performance of the two new algorithms was evaluated using alignments of nucleotides with 10 000 sites simulated under complex non-SRH conditions on a 25-tipped tree. The algorithms were found to be very successful, identifying the correct HAL model with a 75% success rate (the average success rate for assigning rate matrices to the tree's 48 edges was 99.25%) and, for the correct HAL model, identifying the correct HAS model with a 98% success rate. Finally, parameter estimates obtained under the correct HAL-HAS model were found to be accurate and precise. The merits of our new algorithms were illustrated with an analysis of 42 337 second codon sites extracted from a concatenation of 106 alignments of orthologous genes encoded by the nuclear
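
    The core of the mixture approach is that the likelihood of each site becomes a weighted sum over rate classes with freely estimated weights, rather than an integral over a predefined distribution. A toy numerical sketch:

    ```python
    import numpy as np

    # A mixture model replaces a predefined rate distribution with free
    # weights: the likelihood of site i is L_i = sum_k w_k * L(x_i | class k).
    w = np.array([0.2, 0.5, 0.3])                  # estimated class weights
    L_per_class = np.array([                       # L(x_i | class k), 4 sites
        [1e-3, 5e-4, 2e-3],
        [2e-4, 1e-3, 8e-4],
        [5e-3, 4e-3, 1e-3],
        [1e-4, 2e-4, 3e-4],
    ])
    log_lik = np.log(L_per_class @ w).sum()        # independent sites
    print(log_lik)
    ```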

  2. Accountability Overboard

    ERIC Educational Resources Information Center

    Chieppo, Charles D.; Gass, James T.

    2009-01-01

    This article reports that special interest groups opposed to charter schools and high-stakes testing have hijacked Massachusetts's once-independent board of education and stand poised to water down the Massachusetts Comprehensive Assessment System (MCAS) tests and the accountability system they support. President Barack Obama and Massachusetts…

  3. Accounting Specialist.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This publication identifies 20 subjects appropriate for use in a competency list for the occupation of accounting specialist, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 20 units are as follows:…

  4. Modelling of the physico-chemical behaviour of clay minerals with a thermo-kinetic model taking into account particles morphology in compacted material.

    NASA Astrophysics Data System (ADS)

    Sali, D.; Fritz, B.; Clément, C.; Michau, N.

    2003-04-01

    Modelling of fluid-mineral interactions is widely used in Earth Sciences studies to better understand the physicochemical processes involved and their long-term effect on the behaviour of materials. Numerical models simplify the processes but try to preserve their main characteristics. The modelling results therefore strongly depend on the quality of the data describing the initial physicochemical conditions of the rock materials, fluids and gases, and on how realistically the processes are represented. Current geochemical models do not adequately take into account rock porosity and permeability or the particle morphology of clay minerals. In compacted materials like those considered as barriers in waste repositories, low permeability rocks like mudstones or compacted powders will be used: they contain mainly fine particles, and the geochemical models used for predicting their interactions with fluids tend to misjudge their surface areas, which are fundamental parameters in kinetic modelling. The purpose of this study was to improve how particle morphology is taken into account in the thermo-kinetic code KINDIS and the reactive transport code KIRMAT. A new function was integrated into these codes, treating the reaction surface area as a volume-dependent parameter, and the calculated evolution of the mass balance in the system was coupled with the evolution of reactive surface areas. We carried out application exercises for numerical validation of these new versions of the codes and compared the results with those of the pre-existing thermo-kinetic code KINDIS. Several points are highlighted. Taking reactive surface area evolution into account during simulation modifies the predicted mass transfers related to fluid-mineral interactions. Different secondary mineral phases are also observed during modelling. The evolution of the reactive surface parameter helps to resolve the competition effects between the different phases present in the system which are all able to fix the chemical
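
    One way to picture the added coupling: at each kinetic step the reactive surface area is recomputed from the current mineral volume before the rate law is applied. The sketch below uses a generic TST-style rate law and an A ~ V^(2/3) scaling as placeholders; the codes' actual formulations are not reproduced in the abstract.

    ```python
    import numpy as np

    # Kinetic dissolution step in which the reactive surface area evolves
    # with mineral volume (A ~ V**(2/3)); rate law and constants are generic.
    k = 1e-9          # rate constant [mol/m2/s]
    K_eq = 10**-3.5   # equilibrium constant

    def step(n_mol, V_molar, Q, dt, A0, V0):
        V = n_mol * V_molar
        A = A0 * (V / V0) ** (2.0 / 3.0)          # volume-dependent surface
        rate = k * A * (1.0 - Q / K_eq)           # TST-style affinity term
        return n_mol - rate * dt, A

    n, A = 10.0, None
    for _ in range(100):
        n, A = step(n, V_molar=1e-4, Q=1e-4, dt=3.6e3, A0=100.0, V0=1e-3)
    print(n, A)
    ```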

  5. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    SciTech Connect

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
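
    Of the two approaches, the cause-specific hazard is the easier to reproduce with standard tools: competing events are simply treated as censoring in an ordinary Cox model. A sketch with the lifelines package and made-up data (the subdistribution/Fine-Gray analysis needs dedicated tooling, e.g., the R package cmprsk):

    ```python
    # Cause-specific hazard for event A estimated with an ordinary Cox model
    # in which competing events (B, C) are treated as censoring.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "time":       [3.1, 5.2, 1.4, 7.8, 2.2, 6.0],
        "event_type": ["A", "B", "A", "C", "none", "A"],
        "ependymoma": [1, 0, 1, 0, 1, 0],
    })
    df["event_A"] = (df.event_type == "A").astype(int)  # B/C/none -> censored

    cph = CoxPHFitter()
    cph.fit(df[["time", "event_A", "ependymoma"]],
            duration_col="time", event_col="event_A")
    cph.print_summary()
    # The subdistribution (Fine-Gray) hazard can give a different direction
    # of effect, as the ependymoma example above (0.76 vs 1.35) illustrates.
    ```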

  6. Integrated water resources management of the Ichkeul basin taking into account the durability of its wetland ecosystem using WEAP model

    NASA Astrophysics Data System (ADS)

    Shabou, M.; Lili-Chabaane, Z.; Gastli, W.; Chakroun, H.; Ben Abdallah, S.; Oueslati, I.; Lasram, F.; Laajimi, R.; Shaiek, M.; Romdhane, M. S.; Mnajja, A.

    2012-04-01

    The conservation of coastal wetlands in the Mediterranean area is generally confronted with development issues. This is the case in Tunisia, where precipitation is irregular in time and space. For equity of water use (drinking, irrigation), national-level planning allows water to be transferred from regions rich in water resources to poorer ones. This planning was initially done in Tunisia without taking into account wetland ecosystems and their specificities. The main purpose of this study is to find a model able to integrate simultaneously the available resources and the various water demands within a watershed while taking into account the durability of the related wetland ecosystems. This is the case for the Ichkeul basin, situated in northern Tunisia, with an area of 2080 km2 and rainfall of about 600 mm/year. Downstream of this basin, the Ichkeul Lake is characterized by a double seasonal alternation: high water levels and low salinity in winter and spring, and low water levels and high salinity in summer and autumn, which makes the Ichkeul an exceptional ecosystem. The originality of this hydrological lake-marsh system is related to the presence of aquatic vegetation in the lake and of especially rich and varied hygrophilic vegetation in the marshes, which constitutes the main source of food for large migrating water birds. After the construction of three dams on the principal rivers feeding the Ichkeul Lake, aiming particularly to supply local irrigation and the drinking water demand of cities in the north and east of Tunisia, freshwater inflow to the lake was greatly reduced, causing a hydrological disequilibrium that influences the ecological conditions of the different species. Therefore, to ensure the sustainability of water resources management, it is important to find a trade-off between the existing hydrological and ecological systems, taking into account the water demands of various users (drinking, irrigation, fishing, and

  7. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system.

    PubMed

    Beckon, William N

    2016-07-01

    For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment). PMID:27149556
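
    The lag-scanning procedure is straightforward to implement: shift the exposure series by each candidate lag, regress tissue concentration on the shifted series, and keep the lag with the best fit. A sketch on synthetic data:

    ```python
    import numpy as np

    # Scan candidate lags and keep the one maximizing the regression fit
    # between lagged water concentration and tissue concentration.
    rng = np.random.default_rng(2)
    water = rng.lognormal(0, 0.4, 400)                     # daily exposure
    true_lag = 120
    tissue = 2.0 * np.roll(water, true_lag) + rng.normal(0, 0.2, 400)
    tissue[:true_lag] = np.nan                             # no history yet

    def r2(x, y):
        ok = ~np.isnan(x) & ~np.isnan(y)
        return np.corrcoef(x[ok], y[ok])[0, 1] ** 2

    lags = range(0, 200)
    scores = [r2(np.roll(water, L), tissue) for L in lags]
    print("best lag:", max(zip(scores, lags))[1], "days")
    ```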

  8. Can rational models be good accounts of developmental change? The case of language development at two time scales.

    PubMed

    Dawson, Colin R; Gerken, LouAnn

    2012-01-01

    Rational models of human perception and cognition have allowed researchers new ways to look at learning and the ability to make inferences from data. But how good are such models at accounting for developmental change? In this chapter, we address this question in the domain of language development, focusing on the speed with which developmental change takes place, and classifying different types of language development as either fast or slow. From the pattern of fast and slow development observed, we hypothesize that rational learning processes are generally well suited for handling fast processes over small amounts of input data. In contrast, we suggest that associative learning processes are generally better suited to slow development, in which learners accumulate information about what is typical of their language over time. Finally, although one system may be dominant for a particular component of language learning, we speculate that both systems frequently interact, with the associative system providing a source of emergent hypotheses to be evaluated by the rational system and the rational system serving to highlight which aspects of the learner's input need to be processed in greater depth by the associative system. PMID:23205409

  9. A simple arc column model that accounts for the relationship between voltage, current and electrode gap during VAR

    SciTech Connect

    Williamson, R.L.

    1997-02-01

    Mean arc voltage is a process parameter commonly used in vacuum arc remelting (VAR) control schemes. The response of this parameter to changes in melting current (I) and electrode gap (g_e) at constant pressure may be accurately described by an equation of the form V = V_0 + c_1*g_e*I + c_2*g_e^2 + c_3*I^2, where c_1, c_2 and c_3 are constants and the nonlinear terms generally constitute a relatively small correction. If the nonlinear terms are ignored, the equation has the form of Ohm's law with a constant offset (V_0), with c_1*g_e playing the role of resistance. This implies that the arc column may be treated approximately as a simple resistor during constant current VAR, the resistance changing linearly with g_e. The VAR furnace arc is known to originate from multiple cathode spot clusters situated randomly on the electrode tip surface. Each cluster marks a point of exit for conduction electrons leaving the cathode surface and entering the electrode gap. Because the spot clusters are highly localized on the cathode surface, each gives rise to an arc column that may be considered to operate independently of the other local arc columns. This approximation is used to develop a model that accounts for the observed arc voltage dependence on electrode gap at constant current. Local arc column resistivity is estimated from elementary plasma physics and used to test the model for consistency by using it to predict the local column heavy particle density. Furthermore, it is shown that the local arc column resistance increases as particle density increases. This is used to account for the common observation that the arc stiffens with increasing current, i.e., the arc voltage becomes more sensitive to changes in electrode gap as the melting current is increased. This explains why arc voltage is an accurate electrode gap indicator for high current VAR processes but not for low current VAR processes.
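
    Because the stated response surface is linear in its coefficients, V_0, c_1, c_2 and c_3 can be recovered from (I, g_e, V) measurements by ordinary least squares; a sketch on synthetic data (all numerical values are illustrative):

    ```python
    import numpy as np

    # Fit V = V0 + c1*ge*I + c2*ge**2 + c3*I**2 by ordinary least squares.
    rng = np.random.default_rng(3)
    I = rng.uniform(4e3, 8e3, 200)          # melting current [A]
    ge = rng.uniform(5e-3, 2e-2, 200)       # electrode gap [m]
    V = (18 + 1.2e-2 * ge * I + 40 * ge**2 + 1e-7 * I**2
         + rng.normal(0, 0.1, 200))

    X = np.column_stack([np.ones_like(I), ge * I, ge**2, I**2])
    coef, *_ = np.linalg.lstsq(X, V, rcond=None)
    print("V0, c1, c2, c3 =", coef)
    # Dropping the nonlinear terms leaves V ~ V0 + (c1*ge)*I: Ohm's law with
    # a constant offset, with c1*ge playing the role of arc resistance.
    ```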

  10. Response function theories that account for size distribution effects - A review. [mathematical models concerning composite propellant heterogeneity effects on combustion instability

    NASA Technical Reports Server (NTRS)

    Cohen, N. S.

    1980-01-01

    The paper presents theoretical models developed to account for the heterogeneity of composite propellants in expressing the pressure-coupled combustion response function. It is noted that the model of Lengelle and Williams (1968) furnishes a viable basis to explain the effects of heterogeneity.

  11. Developing a Global Model of Accounting Education and Examining IES Compliance in Australia, Japan, and Sri Lanka

    ERIC Educational Resources Information Center

    Watty, Kim; Sugahara, Satoshi; Abayadeera, Nadana; Perera, Luckmika

    2013-01-01

    The introduction of International Education Standards (IES) signals a clear move by the International Accounting Education Standards Board (IAESB) to ensure high quality standards in professional accounting education at a global level. This study investigated how IES are perceived and valued by member bodies and academics in three countries:…

  12. Self-consistent modeling of induced magnetic field in Titan's atmosphere accounting for the generation of Schumann resonance

    NASA Astrophysics Data System (ADS)

    Béghin, Christian

    2015-02-01

    This model is worked out in the framework of physical mechanisms proposed in previous studies to account for the generation and observation of an atypical Schumann Resonance (SR) during the descent of the Huygens Probe in Titan's atmosphere on 14 January 2005. While Titan stays inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface region between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within arc-like field-aligned structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere, from the ionopause down to the moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be allowed by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. However, a first cause of diffusion of the ELF magnetic components is probably the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced within the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with the measurements of the Huygens wave-field strength. Pending the expected future in-situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.

  13. Accountability and primary healthcare.

    PubMed

    Mukhi, Shaheena; Barnsley, Jan; Deber, Raisa B

    2014-09-01

    This paper examines the accountability structures within primary healthcare (PHC) in Ontario; in particular, who is accountable for what and to whom, and the policy tools being used. Ontario has implemented a series of incremental reforms, using expenditure policy instruments, enforced through contractual agreements to provide a defined set of publicly financed services that are privately delivered, most often by family physicians. The findings indicate that reporting, funding, evaluation and governance accountability requirements vary across service provider models. Accountability to the funder and patients is most common. Agreements, incentives and compensation tools have been used but may be insufficient to ensure parties are being held responsible for their activities related to stated goals. Clear definitions of various governance structures, a cohesive approach to monitoring critical performance indicators and associated improvement strategies are important elements in operationalizing accountability and determining whether goals are being met. PMID:25305392

  14. Accountability and Primary Healthcare

    PubMed Central

    Mukhi, Shaheena; Barnsley, Jan; Deber, Raisa B.

    2014-01-01

    This paper examines the accountability structures within primary healthcare (PHC) in Ontario; in particular, who is accountable for what and to whom, and the policy tools being used. Ontario has implemented a series of incremental reforms, using expenditure policy instruments, enforced through contractual agreements to provide a defined set of publicly financed services that are privately delivered, most often by family physicians. The findings indicate that reporting, funding, evaluation and governance accountability requirements vary across service provider models. Accountability to the funder and patients is most common. Agreements, incentives and compensation tools have been used but may be insufficient to ensure parties are being held responsible for their activities related to stated goals. Clear definitions of various governance structures, a cohesive approach to monitoring critical performance indicators and associated improvement strategies are important elements in operationalizing accountability and determining whether goals are being met. PMID:25305392

  15. Modes of Dynamic Rupture Propagation and Rupture Front Speeds in Earthquake Models That Account for Dynamic Weakening Mechanisms

    NASA Astrophysics Data System (ADS)

    Lapusta, N.

    2005-12-01

    Laboratory experiments and theories of how fault materials respond suggest that the constitutive response of faults is far from simple. For slow slip rates, laboratory-derived rate and state friction formulations incorporate small, less than 10%, variations in frictional strength about a representative value which is the product of a typical slow-rate friction coefficient (0.6-0.7 for most rock surfaces and fault-like gouge) times the effective normal stress (which is comparable to overburden minus hydrostatic pore pressure, about 150 MPa at the representative seismic depth of 8 km). One could refer to this slow-rate frictional strength as (high) static fault strength. For fast sliding velocities and large slips, additional weakening mechanisms are activated that result in much lower frictional resistance during dynamic sliding. Hence we need to build earthquake models that would account for both high static strength and low dynamic strength of faults. At first, it seems that the combination of high static strength and low, near-zero, dynamic strength should create static stress drops that are large compared to 1-10 MPa static stress drops typically observed. However, Rice (AGU, 1994) and Lapusta and Rice (AGU, 2003, 2004) proposed a model that avoids that pitfall by incorporating small defect regions that nucleate ruptures while the average stress on the fault is still low compared to its static strength. By simulating earthquake sequences in the framework of a 2D depth-averaged elastic model of a faulted crustal plate, they showed that the fault would then operate with reasonable static stress drops, low shear stress, and low heat generation as follows: Earthquakes nucleate under low shear stress in a defect (weak) and then propagate into strong regions due to significant dynamic weakening. The simulations incorporated truly slow, tectonic-type loading of 35 mm/year and resolved all stages of the simulated earthquakes, including the nucleation process and

  16. Can the Five Factor Model of Personality Account for the Variability of Autism Symptom Expression? Multivariate Approaches to Behavioral Phenotyping in Adult Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Schwartzman, Benjamin C.; Wood, Jeffrey J.; Kapp, Steven K.

    2016-01-01

    The present study aimed to: determine the extent to which the five factor model of personality (FFM) accounts for variability in autism spectrum disorder (ASD) symptomatology in adults, examine differences in average FFM personality traits of adults with and without ASD and identify distinct behavioral phenotypes within ASD. Adults (N = 828;…

  17. A mathematical model of the global processes of plastic degradation in the World Ocean with account for the surface temperature distribution

    NASA Astrophysics Data System (ADS)

    Bartsev, S. I.; Gitelson, J. I.

    2016-02-01

    The suggested model of plastic garbage degradation allows us to obtain an estimate of the stationary density of its distribution over the surface of the World Ocean, taking into account the temperature dependence of the degradation rate. The model also allows us to estimate the characteristic time periods of degradation of plastic garbage and the dynamics of the mean density variation as the mean rate of plastic garbage entry into the ocean varies.
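
    The stationary-density logic can be illustrated with a first-order balance: with input flux F and temperature-dependent degradation rate k(T), the surface density m obeys dm/dt = F - k(T)m, giving a stationary density m* = F/k(T) and a characteristic time 1/k(T). The rate law and constants below are placeholders, not the paper's parameterization.

    ```python
    # First-order stock balance: dm/dt = F - k(T)*m, so m* = F / k(T).
    def k_of_T(T_C, k25=1.0 / 50.0, Q10=2.0):
        """Degradation rate [1/yr] at surface temperature T_C [deg C]."""
        return k25 * Q10 ** ((T_C - 25.0) / 10.0)

    F = 0.5  # plastic input flux [g/m2/yr], placeholder
    for T in (5.0, 15.0, 25.0):
        print(f"T={T:4.1f} C  stationary density {F / k_of_T(T):6.1f} g/m2, "
              f"characteristic time {1.0 / k_of_T(T):5.1f} yr")
    ```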

  18. A Model and Method of Evaluative Accounts: Development Impact of the National Literacy Mission (NLM of India).

    ERIC Educational Resources Information Center

    Bhola, H. S.

    2002-01-01

    Studied the developmental impact of the National Literacy Mission of India, providing an evaluative account based on 97 evaluation studies. Compared findings with those from a 27-study synthesis of studies of effects of adult literacy efforts in Africa. Findings show the impact of literacy on the development of nations. (SLD)

  19. A physically meaningful equivalent circuit network model of a lithium-ion battery accounting for local electrochemical and thermal behaviour, variable double layer capacitance and degradation

    NASA Astrophysics Data System (ADS)

    von Srbik, Marie-Therese; Marinescu, Monica; Martinez-Botas, Ricardo F.; Offer, Gregory J.

    2016-09-01

    A novel electrical circuit analogy is proposed for modelling electrochemical systems under realistic automotive operation conditions. The model is developed for a lithium-ion battery and is based on a pseudo-2D electrochemical model. Although cast in a framework familiar to application engineers, the model is essentially an electrochemical battery model: all variables have a direct physical interpretation, and there is direct access to all states of the cell via the model variables (concentrations, potentials) for monitoring and control systems design. This is the first Equivalent Circuit Network-type model that directly tracks the evolution of species inside the cell. It accounts for complex electrochemical phenomena that are usually omitted in online battery performance predictors, such as variable double layer capacitance, the full current-overpotential relation and overpotentials due to mass transport limitations. The coupled electrochemical and thermal model accounts for capacity fade via a loss of active species and for power fade via an increase in resistive solid electrolyte passivation layers at both electrodes. The model's capability to simulate cell behaviour under dynamic events is validated against test procedures, such as standard battery testing load cycles for current rates up to 20 C, as well as realistic automotive drive cycle loads.
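
    For orientation, the familiar framework the paper recasts is the Thevenin-style equivalent circuit network: an OCV source, a series resistance and one or more RC pairs. A generic first-order sketch (parameters are illustrative placeholders, not the paper's physically derived elements):

    ```python
    import numpy as np

    # Generic first-order Thevenin equivalent-circuit cell model.
    R0, R1, C1, Q = 2e-3, 1e-3, 5e3, 10 * 3600   # ohm, ohm, farad, coulomb

    def ocv(soc):
        return 3.0 + 1.2 * soc                    # crude linear OCV [V]

    def simulate(current, dt=1.0, soc0=0.9):
        soc, v1, out = soc0, 0.0, []
        for i in current:                         # i > 0 = discharge
            soc -= i * dt / Q
            v1 += dt * (i / C1 - v1 / (R1 * C1))  # RC-pair overpotential
            out.append(ocv(soc) - i * R0 - v1)
        return np.array(out)

    v = simulate(np.full(600, 20.0))              # 10 min at 2C (20 A)
    print(v[0], v[-1])
    ```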

  20. Integrating a distributed hydrological model and SEEA-Water for improving water account and water allocation management under a climate change context.

    NASA Astrophysics Data System (ADS)

    Jauch, Eduardo; Almeida, Carina; Simionesei, Lucian; Ramos, Tiago; Neves, Ramiro

    2015-04-01

    The growing demand for water and situations of water scarcity and drought pose a difficult problem for water managers, with major repercussions for society as a whole. The complexity of this question is increased by trans-boundary river issues and by the environmental impacts of the solutions usually adopted to store water, such as reservoirs. To answer society's requirements regarding water allocation in a sustainable way, managers must have a complete and clear picture of the present situation, and must be able to understand changes in water dynamics over both the short and the long term. One of the tools available to managers is the System of Environmental-Economic Accounts for Water (SEEA-Water), a subsystem of SEEA focused on water accounts, developed by the United Nations Statistical Division (UNSD) in collaboration with the London Group on Environmental Accounting. This system provides, among other things, a set of tables and accounts for water and water-related emissions, organizing statistical data and making possible the derivation of indicators that can be used to assess the relations between the economy and the environment. One of the main issues with the SEEA-Water framework seems to be its requirement for large amounts of data, including field measurements of water availability in rivers, lakes, reservoirs, soil and groundwater, as well as precipitation, irrigation and other water sources and uses. While this is an incentive to collect and use data, it diminishes the usefulness of the system in countries where such data are not yet available or are incomplete, as this can lead to a poor understanding of water availability and uses. Distributed hydrological models can be used to fill in missing data required by the SEEA-Water framework. They also make it easier to assess different scenarios (usually land use, water demand and climate changes) for better planning of water allocation. In the context of the DURERO project (www

  1. A Comparison of Seven Cox Regression-Based Models to Account for Heterogeneity Across Multiple HIV Treatment Cohorts in Latin America and the Caribbean

    PubMed Central

    Giganti, Mark J.; Luz, Paula M.; Caro-Vega, Yanink; Cesar, Carina; Padgett, Denis; Koenig, Serena; Echevarria, Juan; McGowan, Catherine C.; Shepherd, Bryan E.

    2015-01-01

    Many studies of HIV/AIDS aggregate data from multiple cohorts to improve power and generalizability. There are several analysis approaches to account for cross-cohort heterogeneity; we assessed how different approaches can impact results from an HIV/AIDS study investigating predictors of mortality. Using data from 13,658 HIV-infected patients starting antiretroviral therapy from seven Latin American and Caribbean cohorts, we illustrate the assumptions of seven readily implementable approaches to account for across cohort heterogeneity with Cox proportional hazards models, and we compare hazard ratio estimates across approaches. As a sensitivity analysis, we modify cohort membership to generate specific heterogeneity conditions. Hazard ratio estimates varied slightly between the seven analysis approaches, but differences were not clinically meaningful. Adjusted hazard ratio estimates for the association between AIDS at treatment initiation and death varied from 2.00 to 2.20 across approaches that accounted for heterogeneity; the adjusted hazard ratio was estimated as 1.73 in analyses that ignored across cohort heterogeneity. In sensitivity analyses with more extreme heterogeneity, we noted a slightly greater distinction between approaches. Despite substantial heterogeneity between cohorts, the impact of the specific approach to account for heterogeneity was minimal in our case study. Our results suggest that it is important to account for across cohort heterogeneity in analyses, but that the specific technique for addressing heterogeneity may be less important. Because of their flexibility in accounting for cohort heterogeneity, we prefer stratification or meta-analysis methods, but we encourage investigators to consider their specific study conditions and objectives. PMID:25647087

  2. A Comparison of Seven Cox Regression-Based Models to Account for Heterogeneity Across Multiple HIV Treatment Cohorts in Latin America and the Caribbean.

    PubMed

    Giganti, Mark J; Luz, Paula M; Caro-Vega, Yanink; Cesar, Carina; Padgett, Denis; Koenig, Serena; Echevarria, Juan; McGowan, Catherine C; Shepherd, Bryan E

    2015-05-01

    Many studies of HIV/AIDS aggregate data from multiple cohorts to improve power and generalizability. There are several analysis approaches to account for cross-cohort heterogeneity; we assessed how different approaches can impact results from an HIV/AIDS study investigating predictors of mortality. Using data from 13,658 HIV-infected patients starting antiretroviral therapy from seven Latin American and Caribbean cohorts, we illustrate the assumptions of seven readily implementable approaches to account for across cohort heterogeneity with Cox proportional hazards models, and we compare hazard ratio estimates across approaches. As a sensitivity analysis, we modify cohort membership to generate specific heterogeneity conditions. Hazard ratio estimates varied slightly between the seven analysis approaches, but differences were not clinically meaningful. Adjusted hazard ratio estimates for the association between AIDS at treatment initiation and death varied from 2.00 to 2.20 across approaches that accounted for heterogeneity; the adjusted hazard ratio was estimated as 1.73 in analyses that ignored across cohort heterogeneity. In sensitivity analyses with more extreme heterogeneity, we noted a slightly greater distinction between approaches. Despite substantial heterogeneity between cohorts, the impact of the specific approach to account for heterogeneity was minimal in our case study. Our results suggest that it is important to account for across cohort heterogeneity in analyses, but that the specific technique for addressing heterogeneity may be less important. Because of their flexibility in accounting for cohort heterogeneity, we prefer stratification or meta-analysis methods, but we encourage investigators to consider their specific study conditions and objectives. PMID:25647087
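
    One of the readily implementable approaches, stratification by cohort, gives each cohort its own baseline hazard while sharing covariate effects; a sketch with the lifelines package and made-up columns:

    ```python
    # Stratified Cox model: one baseline hazard per cohort, shared effects.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "time":   [12, 30, 7, 45, 22, 9, 31, 16],
        "death":  [1, 0, 1, 0, 1, 1, 0, 1],
        "aids":   [1, 0, 1, 0, 0, 1, 1, 0],
        "cohort": ["BR", "BR", "MX", "MX", "HN", "HN", "PE", "PE"],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="death", strata=["cohort"])
    print(cph.hazard_ratios_)
    # Dropping "cohort" entirely would pool all patients under one baseline
    # hazard, the approach the paper found to shift the AIDS hazard ratio
    # from roughly 2.0-2.2 down to 1.73.
    ```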

  3. Model-Based Assessments to Support Learning and Accountability: The Evolution of CRESST's Research on Multiple-Purpose Measures

    ERIC Educational Resources Information Center

    Baker, Eva L.

    2007-01-01

    This article describes the history, evidence warrants, and evolution of the Center for Research on Evaluation, Standards, and Student Testing's (CRESST) model-based assessments. It considers alternative interpretations of scientific or practical models and illustrates how model-based assessment addresses both definitions. The components of the…

  4. Account of nonlocal ionization by fast electrons in the fluid models of a direct current glow discharge

    SciTech Connect

    Rafatov, I.; Bogdanov, E. A.; Kudryavtsev, A. A.

    2012-09-15

    We developed and tested a simple hybrid model for a glow discharge, which incorporates nonlocal ionization by fast electrons into the 'simple' and 'extended' fluid frameworks. Calculations were performed for argon gas. Comparison with experimental data, as well as with hybrid (particle) and fluid modelling results, demonstrated the good applicability of the proposed model.

  5. A constitutive model for air-NAPL-water flow in the vadose zone accounting for immobile, non-occluded (residual) NAPL in strongly water-wet porous media

    SciTech Connect

    Lenhard, Robert J.; Oostrom, Mart; Dane, J H.

    2004-07-01

    A hysteretic constitutive model describing relations among relative permeabilities, saturations, and pressures in fluid systems consisting of air, nonaqueous-phase liquid (NAPL), and water is modified to account for NAPL that is postulated to be immobile in small pores and pore wedges and as films or lenses on water surfaces. A direct outcome of the model is prediction of the NAPL saturation that remains in the vadose zone after long drainage periods (residual NAPL). Using the modified model, water and NAPL (free, trapped by water, and residual) saturations can be predicted from the capillary pressures and the water and total-liquid saturation-path histories. Relations between relative permeabilities and saturations are modified to account for the residual NAPL by adjusting the limits of integration in the integral expression used for predicting the NAPL relative permeability. When all of the NAPL is either residual or trapped (i.e., no free NAPL), then the NAPL relative permeability will be zero. We model residual NAPL using concepts similar to those used to model residual water. As an initial test of the constitutive model, we compare predictions to published measurements of residual NAPL. Furthermore, we present results using the modified constitutive theory for a scenario involving NAPL imbibition and drainage.

  6. A Constitutive Model for Air-NAPL-Water Flow in the Vadose Zone Accounting for Immobile, Non-Occluded (Residual) NAPL in Strongly Water-Wet Porous Media

    SciTech Connect

    R. J. Lenhard; M. Oostrom; J. H. Dane

    2004-07-01

    A hysteretic constitutive model describing relations among relative permeabilities, saturations, and pressures in fluid systems consisting of air, nonaqueous-phase liquid (NAPL), and water is modified to account for NAPL that is postulated to be immobile in small pores and pore wedges and as films or lenses on water surfaces. A direct outcome of the model is prediction of the NAPL saturation that remains in the vadose zone after long drainage periods (residual NAPL). Using the modified model, water and NAPL (free, entrapped by water, and residual) saturations can be predicted from the capillary pressures and the water and total-liquid saturation-path histories. Relations between relative permeabilities and saturations are modified to account for the residual NAPL by adjusting the limits of integration in the integral expression used for predicting the NAPL relative permeability. When all of the NAPL is either residual or entrapped (i.e., no free NAPL), then the NAPL relative permeability will be zero. We model residual NAPL using concepts similar to those used to model residual water. As an initial test of the constitutive model, we compare predictions to published measurements of residual NAPL. Furthermore, we present results using the modified constitutive theory for a scenario involving NAPL imbibition and drainage.

  7. A constitutive model for air-NAPL-water flow in the vadose zone accounting for immobile, non-occluded (residual) NAPL in strongly water-wet porous media

    SciTech Connect

    Lenhard, Robert J.; Oostrom, Mart; Dane, J H.

    2004-09-01

    A hysteretic constitutive model describing relations among relative permeabilities, saturations, and pressures in fluid systems consisting of air, nonaqueous-phase liquid (NAPL), and water is modified to account for NAPL that is postulated to be immobile in small pores and pore wedges and as films or lenses on water surfaces. A direct outcome of the model is prediction of the NAPL saturation that remains in the vadose zone after long drainage periods (residual NAPL). Using the modified model, water and NAPL (free, entrapped by water, and residual) saturations can be predicted from the capillary pressures and the water and total-liquid saturation-path histories. Relations between relative permeabilities and saturations are modified to account for the residual NAPL by adjusting the limits of integration in the integral expression used for predicting the NAPL relative permeability. When all of the NAPL is either residual or entrapped (i.e., no free NAPL), then the NAPL relative permeability will be zero. We model residual NAPL using concepts similar to those used to model residual water. As an initial test of the constitutive model, we compare predictions to published measurements of residual NAPL. Furthermore, we present results using the modified constitutive theory for a scenario involving NAPL imbibition and drainage.
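
    The key device named in the abstract, adjusting the integration limits so that residual and entrapped NAPL do not contribute, can be illustrated with a generic Burdine-type relative-permeability integral (Brooks-Corey capillary pressure assumed; this is not the authors' exact expression):

    ```python
    import numpy as np

    def k_rn(Sw, Sn_free, Sn_resid, Sn_trap, lam=2.0, n_pts=200):
        """NAPL relative permeability; residual/entrapped NAPL excluded."""
        if Sn_free <= 0.0:
            return 0.0                          # no free NAPL -> no NAPL flow
        St_low = Sw + Sn_resid + Sn_trap        # lower limit skips the
        S = np.linspace(St_low, St_low + Sn_free, n_pts)  # immobile fractions
        # Burdine integrand 1/Pc(S)**2 with Brooks-Corey Pc ~ S**(-1/lam)
        num = np.mean(S ** (2.0 / lam)) * Sn_free
        den = np.mean(np.linspace(1e-6, 1.0, n_pts) ** (2.0 / lam))
        return Sn_free**2 * num / den

    print(k_rn(Sw=0.4, Sn_free=0.2, Sn_resid=0.05, Sn_trap=0.05))
    print(k_rn(Sw=0.4, Sn_free=0.0, Sn_resid=0.10, Sn_trap=0.05))  # -> 0.0
    ```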

  8. Demonstrating marketing accountability.

    PubMed

    Gombeski, William R; Britt, Jason; Taylor, Jan; Riggs, Karen; Wray, Tanya; Adkins, Wanda; Springate, Suzanne

    2008-01-01

    Pressure on health care marketers to demonstrate effectiveness of their strategies and show their contribution to organizational goals is growing. A seven-tiered model based on the concepts of structure (having the right people, systems), process (doing the right things in the right way), and outcomes (results) is discussed. Examples of measures for each tier are provided and the benefits of using the model as a tool for measuring, organizing, tracking, and communicating appropriate information are provided. The model also provides a framework for helping management understand marketing's value and can serve as a vehicle for demonstrating marketing accountability. PMID:19064476

  9. An Analytical Approach to Model Heterogeneous Recrystallization Kinetics Taking into Account the Natural Spatial Inhomogeneity of Deformation

    NASA Astrophysics Data System (ADS)

    Luo, Haiwen; van der Zwaag, Sybrand

    2016-01-01

    The classical Johnson-Mehl-Avrami-Kolmogorov equation was modified to take into account the normal distribution of local strain in deformed samples. This new approach is not only able to describe the influence of the local heterogeneity of recrystallization but also to produce an average apparent Avrami exponent that characterizes the entire recrystallization process. In particular, it predicts that the apparent Avrami exponent should lie within a narrow range of 1 to 2 and converge to 1 when the local strain varies greatly. Moreover, the apparent Avrami exponent is predicted to be insensitive to temperature and deformation conditions. These predictions are in excellent agreement with the experimental observations on static recrystallization after hot deformation in different steels and other metallic alloys.
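
    The modification can be mimicked numerically: average the classical JMAK transformation curve over a normal distribution of local strain and read an apparent Avrami exponent off the averaged curve. All constants below are placeholders:

    ```python
    import numpy as np

    # Average JMAK kinetics X(t) = 1 - exp(-(k(eps)*t)**n) over a normal
    # local-strain distribution, then extract an apparent Avrami exponent.
    rng = np.random.default_rng(4)
    n_local = 2.0                                  # local Avrami exponent
    eps = rng.normal(1.0, 0.4, 20000).clip(0.05)   # local strain distribution
    k = 0.5 * eps**1.5                             # strain-dependent rate

    t = np.logspace(-1, 1.5, 60)
    X = 1.0 - np.exp(-np.outer(t, k) ** n_local).mean(axis=1)

    # Apparent exponent: slope of ln(-ln(1-X)) vs ln(t) in mid-transformation
    mask = (X > 0.05) & (X < 0.95)
    slope = np.polyfit(np.log(t[mask]), np.log(-np.log(1.0 - X[mask])), 1)[0]
    print(f"apparent Avrami exponent ~ {slope:.2f}")  # pulled below n_local
    ```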

  10. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    PubMed

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447
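
    Two of the random-effects summaries discussed (the random-effects mean and the predictive distribution) are easy to compute from study-level estimates; a sketch with DerSimonian-Laird tau^2 on toy log-hazard-ratio data:

    ```python
    import numpy as np

    # Random-effects mean and predictive distribution (DerSimonian-Laird).
    y = np.array([-0.35, -0.10, -0.55, 0.05, -0.25])   # study effects
    se = np.array([0.15, 0.20, 0.25, 0.18, 0.22])

    w_fixed = 1.0 / se**2
    mu_f = np.sum(w_fixed * y) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (y - mu_f) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) /
               (w_fixed.sum() - (w_fixed**2).sum() / w_fixed.sum()))

    w = 1.0 / (se**2 + tau2)
    mu = np.sum(w * y) / np.sum(w)                     # RE mean
    se_mu = np.sqrt(1.0 / np.sum(w))
    pred_sd = np.sqrt(tau2 + se_mu**2)                 # predictive spread
    print(f"RE mean {mu:.3f} (SE {se_mu:.3f}); predictive SD {pred_sd:.3f}")
    # Feeding the predictive distribution (rather than the RE mean) into the
    # CEA propagates between-study heterogeneity into the decision model.
    ```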

  11. Construction of a mathematical model of the human body, taking the nonlinear rigidity of the spine into account

    NASA Technical Reports Server (NTRS)

    Glukharev, K. K.; Morozova, N. I.; Potemkin, B. A.; Solovyev, V. S.; Frolov, K. V.

    1973-01-01

    A mathematical model of the human body under the action of harmonic vibrations in the 2.5-7 Hz frequency range was constructed. In this frequency range, the human body is treated as a vibrating system with concentrated (lumped) parameters. Vertical movements of the seat and the vertical components of the vibrations of the human body are investigated.

  12. Production model in the conditions of unstable demand taking into account the influence of trading infrastructure: Ergodicity and its application

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2015-04-01

    A production model allowing for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by an attempt to analyze the functioning problems of macroeconomic structures with low competitiveness. The model is formalized as a Bellman equation, for which a closed-form solution is found. The stochastic process of product stock variations is proved to be ergodic, and its final probability distribution is found. Expressions for the average production load and the average product stock are obtained by analyzing the stochastic process. A system of model equations relating the model variables to official statistical parameters is derived. The model is identified using data from the Fiat and KAMAZ companies. The influence of the credit interest rate on the assessment of the firm's market value and on the production load level is analyzed using comparative statics methods.

  13. The Asian clam Corbicula fluminea as a biomonitor of trace element contamination: accounting for different sources of variation using an hierarchical linear model.

    PubMed

    Shoults-Wilson, W Aaron; Peterson, James T; Unrine, Jason M; Rickard, James; Black, Marsha C

    2009-10-01

    In the present study, specimens of the invasive clam, Corbicula fluminea, were collected above and below possible sources of potentially toxic trace elements (As, Cd, Cr, Cu, Hg, Pb, and Zn) in the Altamaha River system (Georgia, U.S.A.). Bioaccumulation of these elements was quantified, along with environmental (water and sediment) concentrations. Hierarchical linear models were used to account for variability in tissue concentrations related to environmental (site water chemistry and sediment characteristics) and individual (growth metrics) variables while identifying the strongest relations between these variables and trace element accumulation. The present study found significantly elevated concentrations of Cd, Cu, and Hg downstream of the outfall of kaolin-processing facilities, Zn downstream of a tire cording facility, and Cr downstream of both a nuclear power plant and a paper pulp mill. Models of the present study indicated that variation in trace element accumulation was linked to distance upstream from the estuary, dissolved oxygen, percentage of silt and clay in the sediment, elemental concentrations in sediment, shell length, and bivalve condition index. By explicitly modeling environmental variability, the hierarchical linear modeling procedure allowed the identification of sites showing increased accumulation of trace elements that may have been caused by human activity. Hierarchical linear modeling is a useful tool for accounting for environmental and individual sources of variation in bioaccumulation studies. PMID:19463028

  14. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    SciTech Connect

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.

    2012-04-24

    We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it, and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.

  15. The Asian clam Corbicula fluminea as a biomonitor of trace element contamination: Accounting for different sources of variation using an hierarchical linear model

    USGS Publications Warehouse

    Shoults-Wilson, W. A.; Peterson, J.T.; Unrine, J.M.; Rickard, J.; Black, M.C.

    2009-01-01

    In the present study, specimens of the invasive clam, Corbicula fluminea, were collected above and below possible sources of potentially toxic trace elements (As, Cd, Cr, Cu, Hg, Pb, and Zn) in the Altamaha River system (Georgia, USA). Bioaccumulation of these elements was quantified, along with environmental (water and sediment) concentrations. Hierarchical linear models were used to account for variability in tissue concentrations related to environmental (site water chemistry and sediment characteristics) and individual (growth metrics) variables while identifying the strongest relations between these variables and trace element accumulation. The present study found significantly elevated concentrations of Cd, Cu, and Hg downstream of the outfall of kaolin-processing facilities, Zn downstream of a tire cording facility, and Cr downstream of both a nuclear power plant and a paper pulp mill. Models of the present study indicated that variation in trace element accumulation was linked to distance upstream from the estuary, dissolved oxygen, percentage of silt and clay in the sediment, elemental concentrations in sediment, shell length, and bivalve condition index. By explicitly modeling environmental variability, the hierarchical linear modeling procedure allowed the identification of sites showing increased accumulation of trace elements that may have been caused by human activity. Hierarchical linear modeling is a useful tool for accounting for environmental and individual sources of variation in bioaccumulation studies. © 2009 SETAC.

  16. Accounting for Age Uncertainty in Growth Modeling, the Case Study of Yellowfin Tuna (Thunnus albacares) of the Indian Ocean

    PubMed Central

    Dortel, Emmanuelle; Massiot-Granier, Félix; Rivot, Etienne; Million, Julien; Hallier, Jean-Pierre; Morize, Eric; Munaron, Jean-Marie; Bousquet, Nicolas; Chassot, Emmanuel

    2013-01-01

    Age estimates, typically determined by counting periodic growth increments in calcified structures of vertebrates, are the basis of population dynamics models used for managing exploited or threatened species. In fisheries research, the use of otolith growth rings as an indicator of fish age has increased considerably in recent decades. However, otolith readings include various sources of uncertainty. Current ageing methods, which convert an average count of rings into age, only provide periodic age estimates in which the range of uncertainty is fully ignored. In this study, we describe a hierarchical model for estimating individual ages from repeated otolith readings. The model was developed within a Bayesian framework to explicitly represent the sources of uncertainty associated with age estimation, to allow for individual variations and to include expert knowledge on parameters. The performance of the proposed model was examined through simulations, and then it was coupled to a two-stanza somatic growth model to evaluate the impact of the age estimation method on the age composition of commercial fisheries catches. We illustrate our approach using the sagittal otoliths of yellowfin tuna of the Indian Ocean collected through large-scale mark-recapture experiments. The simulation performance suggested that the ageing error model was able to estimate the ageing biases and provide accurate age estimates, regardless of the age of the fish. Coupled with the growth model, this approach appeared suitable for modeling the growth of Indian Ocean yellowfin and is consistent with findings of previous studies. The simulations showed that the choice of the ageing method can strongly affect growth estimates with subsequent implications for age-structured data used as inputs for population models. Finally, our modeling approach proved particularly useful for reflecting uncertainty around age estimates in the process of growth estimation and it can be applied to any

  17. Accounting for age uncertainty in growth modeling, the case study of yellowfin tuna (Thunnus albacares) of the Indian Ocean.

    PubMed

    Dortel, Emmanuelle; Massiot-Granier, Félix; Rivot, Etienne; Million, Julien; Hallier, Jean-Pierre; Morize, Eric; Munaron, Jean-Marie; Bousquet, Nicolas; Chassot, Emmanuel

    2013-01-01

    Age estimates, typically determined by counting periodic growth increments in calcified structures of vertebrates, are the basis of population dynamics models used for managing exploited or threatened species. In fisheries research, the use of otolith growth rings as an indicator of fish age has increased considerably in recent decades. However, otolith readings include various sources of uncertainty. Current ageing methods, which convert an average count of rings into age, only provide periodic age estimates in which the range of uncertainty is fully ignored. In this study, we describe a hierarchical model for estimating individual ages from repeated otolith readings. The model was developed within a Bayesian framework to explicitly represent the sources of uncertainty associated with age estimation, to allow for individual variations and to include expert knowledge on parameters. The performance of the proposed model was examined through simulations, and then it was coupled to a two-stanza somatic growth model to evaluate the impact of the age estimation method on the age composition of commercial fisheries catches. We illustrate our approach using the sagittal otoliths of yellowfin tuna of the Indian Ocean collected through large-scale mark-recapture experiments. The simulation performance suggested that the ageing error model was able to estimate the ageing biases and provide accurate age estimates, regardless of the age of the fish. Coupled with the growth model, this approach appeared suitable for modeling the growth of Indian Ocean yellowfin and is consistent with findings of previous studies. The simulations showed that the choice of the ageing method can strongly affect growth estimates with subsequent implications for age-structured data used as inputs for population models. Finally, our modeling approach proved particularly useful for reflecting uncertainty around age estimates in the process of growth estimation and it can be applied to any
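
    The essence of the ageing-error idea in the two records above can be shown with a toy simulation: repeated readings of the same otolith by several readers identify the readers' relative biases, which can then be removed before growth modeling. This sketch is purely illustrative (the paper uses a Bayesian hierarchical model, and absolute bias is only identifiable with known-age fish such as the mark-recapture individuals mentioned above):

      import numpy as np

      # Toy ageing-error simulation: three readers count rings with reader-specific
      # bias plus noise. Differences between readers identify *relative* biases;
      # absolute bias would require known-age (e.g., mark-recapture) fish.
      rng = np.random.default_rng(1)
      n_fish, n_readers = 200, 3
      true_age = rng.uniform(0.5, 4.0, n_fish)            # years (invented)
      bias = np.array([0.2, -0.1, 0.3])                   # per-reader bias (invented)
      reads = (true_age[:, None] + bias[None, :]
               + rng.normal(0, 0.15, (n_fish, n_readers)))

      rel_bias = reads.mean(axis=0) - reads.mean()        # bias relative to mean reader
      age_hat = (reads - rel_bias).mean(axis=1)           # bias-adjusted mean reading
      print("estimated relative biases:", np.round(rel_bias, 3))
      print("RMSE of adjusted ages:",
            round(float(np.sqrt(np.mean((age_hat - true_age) ** 2))), 3))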

  18. Elastic consequences of a single plastic event: Towards a realistic account of structural disorder and shear wave propagation in models of flowing amorphous solids

    NASA Astrophysics Data System (ADS)

    Nicolas, Alexandre; Puosi, Francesco; Mizuno, Hideyuki; Barrat, Jean-Louis

    2015-05-01

    Shear transformations (i.e., localized rearrangements of particles resulting in the shear deformation of a small region of the sample) are the building blocks of mesoscale models for the flow of disordered solids. In order to compute the time-dependent response of the solid material to such a shear transformation, with a proper account of elastic heterogeneity and shear wave propagation, we propose and implement a very simple Finite-Element (FE)-based method. Molecular Dynamics (MD) simulations of a binary Lennard-Jones glass are used as a benchmark for comparison, and information about the microscopic viscosity and the local elastic constants is directly extracted from the MD system and used as input in FE. We find very good agreement between FE and MD regarding the temporal evolution of the disorder-averaged displacement field induced by a shear transformation, which turns out to coincide with the response of a uniform elastic medium. However, fluctuations are relatively large, and their magnitude is satisfactorily captured by the FE simulations of an elastically heterogeneous system. Besides, accounting for elastic anisotropy on the mesoscale is not crucial in this respect. The proposed method thus paves the way for models of the rheology of amorphous solids which are both computationally efficient and realistic, in that structural disorder and inertial effects are accounted for.

  19. Accounting for the Impact of Impermeable Soil Layers on Pesticide Runoff and Leaching in a Landscape Vulnerability Model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A regional-scale model that estimates landscape vulnerability of pesticide leaching and runoff (solution and particle adsorbed) underestimated runoff vulnerability and overestimated leaching vulnerability compared to measured data when applied to a gently rolling landscape in northeast Missouri. Man...

  20. Modeling scale-dependent runoff generation in a small semi-arid watershed accounting for rainfall intensity and water depth

    NASA Astrophysics Data System (ADS)

    Langhans, Christoph; Govers, Gerard; Diels, Jan; Stone, Jeffry J.; Nearing, Mark A.

    2014-07-01

    Observed scale effects of runoff on hillslopes and small watersheds derive from complex interactions of time-varying rainfall rates with runoff, infiltration and macro- and microtopographic structures. A little-studied aspect of scale effects is the concept of water depth-dependent infiltration. For semi-arid rangeland it has been demonstrated that mounds underneath shrubs have a high infiltrability and lower lying compacted or stony inter-shrub areas have a lower infiltrability. It is hypothesized that runoff accumulation further downslope leads to increased water depth, inundating high infiltrability areas, which increases the area-averaged infiltration rate. A model was developed that combines the concepts of water depth-dependent infiltration, partial contributing area under variable rainfall intensity, and the Green-Ampt theory for point-scale infiltration. The model was applied to rainfall simulation data and natural rainfall-runoff data from a small sub-watershed (0.4 ha) of the Walnut Gulch Experimental Watershed in the semi-arid US Southwest. Its ability to reproduce observed hydrographs was compared to that of a conventional Green-Ampt model assuming complete inundation sheet flow, with runon infiltration, which is infiltration of runoff onto pervious downstream areas. Parameters were derived from rainfall simulations and from watershed-scale calibration directly from the rainfall-runoff events. The performance of the water depth-dependent model was better than that of the conventional model on the scale of a rainfall simulator plot, but on the scale of a small watershed the performance of both model types was similar. We believe that the proposed model contributes to a less scale-dependent way of modeling runoff and erosion at the hillslope scale.
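
    The paper's depth-dependent extension is not reproduced here, but the point-scale Green-Ampt component it builds on is standard. A minimal sketch with assumed parameter values, partitioning a constant rainfall rate into infiltration and runoff:

      # Minimal point-scale Green-Ampt sketch (not the paper's depth-dependent
      # model): infiltration capacity f = Ks * (1 + psi * dtheta / F), where F is
      # cumulative infiltration; rainfall in excess of f becomes runoff.
      Ks = 10.0      # saturated hydraulic conductivity, mm/h (assumed)
      psi = 110.0    # wetting-front suction head, mm (assumed)
      dtheta = 0.3   # soil moisture deficit, dimensionless (assumed)
      rain = 30.0    # rainfall intensity, mm/h (assumed)

      dt, T = 0.01, 2.0            # time step and storm duration, h
      F, t, runoff = 1e-6, 0.0, 0.0
      while t < T:
          f_cap = Ks * (1.0 + psi * dtheta / F)   # infiltration capacity, mm/h
          f = min(rain, f_cap)                    # supply-limited before ponding
          F += f * dt
          runoff += max(rain - f, 0.0) * dt
          t += dt
      print(f"infiltrated {F:.1f} mm, runoff {runoff:.1f} mm over {T:.0f} h")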

  1. Can the Five Factor Model of Personality Account for the Variability of Autism Symptom Expression? Multivariate Approaches to Behavioral Phenotyping in Adult Autism Spectrum Disorder.

    PubMed

    Schwartzman, Benjamin C; Wood, Jeffrey J; Kapp, Steven K

    2016-01-01

    The present study aimed to: determine the extent to which the five factor model of personality (FFM) accounts for variability in autism spectrum disorder (ASD) symptomatology in adults; examine differences in average FFM personality traits of adults with and without ASD; and identify distinct behavioral phenotypes within ASD. Adults (N = 828; nASD = 364) completed an online survey with an autism trait questionnaire and an FFM personality questionnaire. FFM facets accounted for 70% of the variance in autism trait scores. Neuroticism positively correlated with autism symptom severity, while extraversion, openness to experience, agreeableness, and conscientiousness negatively correlated with autism symptom severity. Four FFM subtypes emerged within adults with ASD, with three subtypes characterized by high neuroticism and none characterized by lower-than-average neuroticism. PMID:26319256
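
    The headline statistic (FFM facets accounting for 70% of autism-trait variance) is an ordinary R^2 from a linear model. A synthetic sketch of that computation, with invented data and coefficients whose signs follow the reported correlations:

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Synthetic sketch: regress an autism-trait score on five personality
      # factors and report R^2. Data and coefficients are invented; only the
      # coefficient signs follow the record.
      rng = np.random.default_rng(2)
      n = 828
      X = rng.normal(size=(n, 5))                     # columns: N, E, O, A, C
      beta = np.array([0.8, -0.4, -0.3, -0.35, -0.3])
      y = X @ beta + rng.normal(0, 0.6, n)

      r2 = LinearRegression().fit(X, y).score(X, y)
      print(f"share of autism-trait variance explained: {r2:.0%}")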

  2. An analytical model for the celestial distribution of polarized light, accounting for polarization singularities, wavelength and atmospheric turbidity

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gao, Jun; Fan, Zhiguo; Roberts, Nicholas W.

    2016-06-01

    We present a computationally inexpensive analytical model for simulating celestial polarization patterns in variable conditions. We combine both the singularity theory of Berry et al (2004 New J. Phys. 6 162) and the intensity model of Perez et al (1993 Sol. Energy 50 235–245) such that our single model describes three key sets of data: (1) the overhead distribution of the degree of polarization as well as the existence of neutral points in the sky; (2) the change in sky polarization as a function of the turbidity of the atmosphere; and (3) sky polarization patterns as a function of wavelength, calculated in this work from the ultra-violet to the near infra-red. To verify the performance of our model we generate accurate reference data using a numerical radiative transfer model and statistical comparisons between these two methods demonstrate no significant difference in almost all situations. The development of our analytical model provides a novel method for efficiently calculating the overhead skylight polarization pattern. This provides a new tool of particular relevance for our understanding of animals that use the celestial polarization pattern as a source of visual information.

  3. Using a new high resolution regional model for malaria that accounts for population density and surface hydrology to determine sensitivity of malaria risk to climate drivers

    NASA Astrophysics Data System (ADS)

    Tompkins, Adrian; Ermert, Volker; Di Giuseppe, Francesca

    2013-04-01

    In order to better address the role of population dynamics and surface hydrology in the assessment of malaria risk, a new dynamical disease model has been developed at ICTP: the VECtor borne disease community model of ICTP, TRIeste (VECTRI). The model accounts for the temperature impact on the larvae, parasite and adult vector populations. Local host population density affects the transmission intensity, and the model thus reproduces the differences between peri-urban and rural transmission noted in Africa. A new simple pond model framework represents surface hydrology. The model can be used with spatial resolutions finer than 10 km to resolve individual health districts and thus can be used as a planning tool. Results of the model's representation of interannual variability and longer term projections of malaria transmission will be shown for Africa. These will show that the model represents the seasonality and spatial variations of malaria transmission well, matching a wide range of survey data of parasite rate and entomological inoculation rate (EIR) from across West and East Africa taken in the period prior to large-scale interventions. The model is used to determine the sensitivity of malaria risk to climate variations, both in rainfall and temperature, and then its use in a prototype forecasting system coupled with ECMWF forecasts will be demonstrated.

  4. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data

    USGS Publications Warehouse

    Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.

    2014-01-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
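
    The moment-matching logic of the joint model can be sketched in a few lines: telemetry estimates tagging-to-season survival s, reward-tag returns estimate the product s x h, and dividing recovers the harvest rate h. The full model is likelihood-based; this toy version (all numbers assumed) only shows why a tag-recovery-only estimator would be biased low by pre-season mortality:

      import numpy as np

      # Toy moment version of the joint model: telemetry estimates survival s
      # from tagging to the season; reward-tag recoveries estimate s * h;
      # dividing recovers the harvest rate h. Estimating h from recoveries
      # alone would actually estimate s * h, i.e. be biased low.
      rng = np.random.default_rng(3)
      s_true, h_true = 0.85, 0.30
      n_radio, n_tags = 25, 100

      alive = rng.binomial(n_radio, s_true)               # known-fate telemetry
      recovered = rng.binomial(n_tags, s_true * h_true)   # reward-tag returns

      s_hat = alive / n_radio
      h_hat = (recovered / n_tags) / s_hat
      print(f"s_hat = {s_hat:.2f}, h_hat = {h_hat:.2f} (true h = {h_true})")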

  5. Improvements of land-surface models to account for fire-climate feedbacks in the Amazon region

    NASA Astrophysics Data System (ADS)

    Cardoso, M.; Sampaio, G.; Shimizu, M. H.; Sanches, M.; Nobre, C. A.

    2013-05-01

    Improved dynamic global vegetation models are needed to evaluate the synergistic effects of changes in climate and fire activity in Amazonia. Coupled land-surface and climate models, for example, are being developed to evaluate the vulnerability of Amazon rainforest to more frequent and severe droughts, either through a direct effect on tree mortality or indirectly via increased occurrence of vegetation fires. In this context, we have been working on improving land-surface models that can be applied globally with enhanced performance for representing biosphere-atmosphere interactions over South American biomes, including fires. To this end, fire dynamics sub-models are being developed for evaluating fire occurrence and impacts considering major natural and anthropogenic factors. Current fire equations are based on methods already tested in global dynamic vegetation models, and their initial re-parametrization led to a reasonable representation of the major spatial and temporal features of global fire occurrence. At large scales, the model correctly represents the time and location of most of the burned area reported in remote sensing datasets. However, important under- and over-estimation of model results occur at smaller scales, in part explained by the simplicity of the equations and the current parametrization based on spatial and temporal averages of fire observations. We are now working to enhance the precision of the fire models and to improve the representation of the effects of fire on vegetation and atmospheric composition, including better representation of sources of ignition, changes in mortality rates and carbon emissions. These improvements will then contribute to evaluating the sign and strength of potential fire-climate feedbacks in the region.

  6. Two-way FSI modelling of blood flow through CCA accounting on-line medical diagnostics in hypertension

    NASA Astrophysics Data System (ADS)

    Czechowicz, K.; Badur, J.; Narkiewicz, K.

    2014-08-01

    Flow parameters can induce pathological changes in the arteries. We propose a method to assess those parameters using a 3D computer model of the flow in the Common Carotid Artery. Input data were acquired using an automatic 2D ultrasound wall tracking system. These data were used to generate a 3D geometry of the artery. The diameter and wall thickness were assessed individually for every patient, but the artery was taken as a 75 mm straight tube. The Young's modulus for the arterial walls was calculated using the pulse pressure, diastolic (minimal) diameter and wall thickness (IMT). Blood flow was derived from the pressure waveform using a 2-parameter Windkessel model. The blood is assumed to be non-Newtonian. The computational models were generated and calculated using commercial code. The coupling method required the use of the Arbitrary Lagrangian-Eulerian formulation to solve the Navier-Stokes and Navier-Lamé equations in a moving domain. The calculations showed that the distention of the walls in the model is not significantly different from the measurements. Results from the model have been used to locate additional risk factors, such as wall shear stress or circumferential stress, that may predict adverse hypertension complications.
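
    For the flow boundary condition, the record names a 2-parameter Windkessel model; in its standard two-element form, flow follows from a measured pressure waveform as Q(t) = C dP/dt + P/R. A minimal sketch with assumed parameter values and a synthetic pressure trace (not the patients' data):

      import numpy as np

      # Two-element Windkessel sketch: given a pressure waveform P(t), flow is
      # Q(t) = C * dP/dt + P / R. All values below are assumed.
      R = 1.0e8    # peripheral resistance, Pa*s/m^3
      C = 1.0e-8   # arterial compliance, m^3/Pa

      t = np.linspace(0.0, 1.0, 1000)                   # one cardiac cycle, s
      P = 1.0e4 + 2.5e3 * np.sin(2 * np.pi * t) ** 2    # synthetic pressure, Pa
      Q = C * np.gradient(P, t) + P / R                 # derived flow, m^3/s
      print(f"mean flow: {Q.mean() * 6e7:.0f} mL/min")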

  7. A second gradient continuum model accounting for some effects of micro-structure on reconstructed bone remodelling

    NASA Astrophysics Data System (ADS)

    Madeo, Angela; George, D.; Lekszycki, T.; Nierenberger, Mathieu; Rémond, Yves

    2012-08-01

    We propose a second gradient, two-solids, continuum mixture model with variable masses to describe the effect of micro-structure on mechanically-driven remodelling of bones grafted with bio-resorbable materials. A one-dimensional numerical simulation is addressed showing the potentialities of the proposed generalized continuum model. In particular, we show that the used second gradient model allows for the description of some micro-structure-related size effects which are known to be important in hierarchically heterogeneous materials like reconstructed bones. Moreover, the influence of the introduced second gradient parameters on the final percentages of replacement of artificial bio-material with natural bone tissue is presented and discussed.

  8. Accounting for Diffusion in Agent Based Models of Reaction-Diffusion Systems with Application to Cytoskeletal Diffusion

    PubMed Central

    Azimi, Mohammad; Jamali, Yousef; Mofrad, Mohammad R. K.

    2011-01-01

    Diffusion plays a key role in many biochemical reaction systems seen in nature. Scenarios where diffusion behavior is critical can be seen in the cell and subcellular compartments where molecular crowding limits the interaction between particles. We investigate the application of a computational method for modeling the diffusion of molecules and macromolecules in three-dimensional solutions using agent based modeling. This method allows for realistic modeling of a system of particles with different properties such as size, diffusion coefficients, and affinity as well as the environment properties such as viscosity and geometry. Simulations using these movement probabilities yield behavior that mimics natural diffusion. Using this modeling framework, we simulate the effects of molecular crowding on effective diffusion and have validated the results of our model using Langevin dynamics simulations and note that they are in good agreement with previous experimental data. Furthermore, we investigate an extension of this framework where single discrete cells can contain multiple particles of varying size in an effort to highlight errors that can arise from discretization that lead to the unnatural behavior of particles undergoing diffusion. Subsequently, we explore various algorithms that differ in how they handle the movement of multiple particles per cell and suggest an algorithm that properly accommodates multiple particles of various sizes per cell that can replicate the natural behavior of these particles diffusing. Finally, we use the present modeling framework to investigate the effect of structural geometry on the directionality of diffusion in the cell cytoskeleton with the observation that parallel orientation in the structural geometry of actin filaments of filopodia and the branched structure of lamellipodia can give directionality to diffusion at the filopodia-lamellipodia interface. PMID:21966493
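
    A minimal version of the kind of agent-based diffusion model described here is a lattice random walk whose mean squared displacement can be checked against free diffusion (MSD = 6Dt); crowding and multi-particle occupancy rules would be layered on top of this skeleton. A sketch in Python (units arbitrary):

      import numpy as np

      # Skeleton of a lattice agent-based diffusion model: particles hop to one
      # of six neighbouring cells per step; the mean squared displacement is
      # checked against free diffusion, MSD = 6*D*t.
      rng = np.random.default_rng(4)
      n, steps = 2000, 500
      dx, dt = 1.0, 1.0
      D = dx ** 2 / (6 * dt)          # diffusion coefficient implied by the walk

      pos = np.zeros((n, 3))
      moves = np.vstack([np.eye(3), -np.eye(3)])   # the six lattice directions
      for _ in range(steps):
          pos += moves[rng.integers(0, 6, n)] * dx

      msd = np.mean(np.sum(pos ** 2, axis=1))
      print(f"simulated MSD = {msd:.1f}, theory 6*D*t = {6 * D * steps * dt:.1f}")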

  9. A Tree Diagram: Compilation of Methods for Evaluating Host Rock Suitability Taking Account of Uncertainties in Hydrogeological Modeling

    NASA Astrophysics Data System (ADS)

    Sawada, A.; Hayano, A.; Goto, J.; Inagaki, M.

    2014-12-01

    In Japan, the siting process of geological repositories for vitrified high-level radioactive waste and low-level radioactive waste containing long-lived nuclides shall comprise step-wise site investigations and evaluations. The Detailed Investigation Areas will be selected focusing on the suitability of the host rock where the underground facility will be constructed, after a series of surface-based investigations at Preliminary Investigation Areas. The suitability shall be judged by considering the multi-disciplinary performance of the rock mass, such as thermal, hydrologic, mechanical and geochemical conditions and the volume of rock mass, based on the site models. However, the limited geoscientific information yields relatively large uncertainties in the site models, especially the hydrogeological models, due to the wide variability of hydraulic properties. The uncertainties make it difficult to clarify the relationship among the site investigation, repository design (Design) and safety assessment (SA). In this study, groundwater travel time is identified as one of the important evaluation factors relevant for SA in terms of hydrology. In addition, the various options for evaluating the groundwater travel time are put together into a tree diagram. The highest level of the tree diagram is defined by the evaluation factor (groundwater travel time), and evaluation methods are systematically classified into multiple levels that comprise analytical methods/models in one dimension and three dimensions, parameters, datasets, data and investigation methods. Multiple options, such as alternative cases and/or models caused by uncertainties in data, analytical methods and models, are incorporated at each level of the tree diagram. The feasibility of the tree diagram was examined by tracing both analytical options. Through this examination, the importance of interaction among the site investigation, SA and Design was also demonstrated.

  10. Modelling runoff at the plot scale taking into account rainfall partitioning by vegetation: application to stemflow of banana (Musa spp.) plant

    NASA Astrophysics Data System (ADS)

    Charlier, J.-B.; Moussa, R.; Cattan, P.; Cabidoche, Y.-M.; Voltz, M.

    2009-06-01

    Rainfall partitioning by vegetation modifies the intensity of rainwater reaching the ground, which affects runoff generation. Incident rainfall is intercepted by the plant canopy and then redistributed into throughfall and stemflow. Rainfall intensities at the soil surface are therefore not spatially uniform, generating local variations of runoff production that are disregarded in runoff models. The aim of this paper was to model runoff at the plot scale, accounting for rainfall partitioning by vegetation in the case of plants concentrating rainwater at the plant foot and promoting stemflow. We developed a lumped modelling approach, including a stemflow function that divided the plot into two compartments: one compartment including stemflow and the associated water pathways and one compartment for the rest of the plot. This stemflow function was coupled with a production function and a transfer function to simulate a flood hydrograph using the MHYDAS model. Calibrated parameters were a "stemflow coefficient", which compartmented the plot; the saturated hydraulic conductivity (Ks), which controls infiltration and runoff; and the two parameters of the diffusive wave equation. We tested our model on a banana plot of 3000 m2 on permeable Andosol (mean Ks = 75 mm h-1) under tropical rainfall, in Guadeloupe (FWI). Runoff simulations without and with the stemflow function were performed and compared to 18 flood events from 10 to 130 mm rainfall depth. Modelling results showed that the stemflow function improved the calibration of hydrographs according to the error criteria on volume and peakflow and to the Nash-Sutcliffe coefficient. This was particularly the case for low flows observed during residual rainfall, for which the stemflow function allowed runoff to be simulated for rainfall intensities lower than the Ks measured at the soil surface. This approach also allowed us to take into account the experimental data, without needing to calibrate the runoff volume on

  11. Accounting for particle non-sphericity in modeling of mineral dust radiative properties in the thermal infrared

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Dubovik, O.; Lapyonok, T.; Derimian, Y.

    2014-12-01

    Spectral radiative parameters (extinction optical depth, single scattering albedo, asymmetry factor) of spheroids of mineral dust composed of quartz and clays have been simulated at wavelengths between 7.0 and 10.2 μm using a T-matrix code. In spectral intervals with high values of the complex index of refraction, and for large particles, the parameters cannot be fully calculated with the code. In practice, the calculations stop at a truncation radius, above which the contribution of the particles cannot be taken into account. To deal with this issue, we have developed and applied an accurate corrective technique of T-matrix Size Truncation Compensation (TSTC). For a mineral dust described by its AERONET standard aspect ratio (AR) distribution, the full error margin when applying the TSTC is within 0.3% (or ±0.15%), whatever the radiative parameter and the wavelength considered, for quartz (the most difficult case). Large AR values also limit what the code can calculate. The TSTC has been able to complete the calculations of the T-matrix code for a modified AERONET AR distribution with a maximum AR of 4.7 instead of 3 for the standard distribution. Comparison between the simulated properties of spheroids and of spheres of the same volume confirms, in agreement with the literature, that significant differences are observed in the vicinity of the mineral resonant peaks (λ ca. 8.3-8.7 μm for quartz, ca. 9.3-9.5 μm for clays) and that they are due to absorption by the small particles. This is a favorable circumstance for the TSTC, which is concerned with the contribution of the largest particles. This technique of numerical calculation improves the accuracy of the simulated radiative parameters of mineral dust, which should benefit applications such as remote sensing or the determination of the energy balance of dust in the thermal infrared (TIR), an area that has been incompletely investigated so far.

  12. Modeling complicated rheological behaviors in encapsulating shells of lipid-coated microbubbles accounting for nonlinear changes of both shell viscosity and elasticity.

    PubMed

    Li, Qian; Matula, Thomas J; Tu, Juan; Guo, Xiasheng; Zhang, Dong

    2013-02-21

    It has been accepted that the dynamic responses of ultrasound contrast agent (UCA) microbubbles will be significantly affected by the encapsulating shell properties (e.g., shell elasticity and viscosity). In this work, a new model is proposed to describe the complicated rheological behaviors in an encapsulating shell of UCA microbubbles by applying the nonlinear 'Cross law' to the shell viscous term in the Marmottant model. The proposed new model was verified by fitting the dynamic responses of UCAs measured with either a high-speed optical imaging system or a light scattering system. The comparison results between the measured radius-time curves and the numerical simulations demonstrate that the 'compression-only' behavior of UCAs can be successfully simulated with the new model. Then, the shell elastic and viscous coefficients of SonoVue microbubbles were evaluated based on the new model simulations, and compared to the results obtained from some existing UCA models. The results confirm the capability of the current model for reducing the dependence of bubble shell parameters on the initial bubble radius, which indicates that the current model may describe the complex rheological nature (e.g., 'shear-thinning' and 'strain-softening') of the encapsulating shells of UCA microbubbles more comprehensively by taking into account the nonlinear changes of both shell elasticity and shell viscosity. PMID:23339902
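
    The 'Cross law' named in the record is the standard shear-thinning form with two plateau values; applied to a shell surface viscosity it reads kappa_s(gdot) = kappa_inf + (kappa_0 - kappa_inf) / (1 + (lambda*gdot)^m). A small sketch with assumed parameter values:

      import numpy as np

      # The Cross law in its usual form, applied here to a shell surface
      # viscosity (parameter values assumed, not SonoVue fits).
      def cross_viscosity(gdot, kappa_0=1e-8, kappa_inf=1e-9, lam=1e-6, m=1.0):
          return kappa_inf + (kappa_0 - kappa_inf) / (1.0 + (lam * gdot) ** m)

      for gdot in np.logspace(3, 9, 7):        # shear-rate sweep, 1/s
          print(f"gdot = {gdot:9.1e} 1/s -> shell viscosity "
                f"{cross_viscosity(gdot):.2e} kg/s")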

  13. Modelling scale-dependent runoff generation in a small semi-arid watershed accounting for rainfall intensity and water depth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Observed scale effects of runoff and erosion on hillslopes and small watersheds pose one of the most intriguing challenges to modellers, because it results from complex interactions of time-dependent rainfall input with runoff, infiltration and macro- and microtopographic structures. A little studie...

  14. Voxelized Model of Brain Infusion That Accounts for Small Feature Fissures: Comparison With Magnetic Resonance Tracer Studies.

    PubMed

    Dai, Wei; Astary, Garrett W; Kasinadhuni, Aditya K; Carney, Paul R; Mareci, Thomas H; Sarntinoranont, Malisa

    2016-05-01

    Convection enhanced delivery (CED) is a promising novel technology to treat neural diseases, as it can transport macromolecular therapeutic agents greater distances through tissue by direct infusion. To minimize off-target delivery, our group has developed 3D computational transport models to predict infusion flow fields and tracer distributions based on magnetic resonance (MR) diffusion tensor imaging data sets. To improve the accuracy of our voxelized models, generalized anisotropy (GA), a scalar measure of a higher order diffusion tensor obtained from high angular resolution diffusion imaging (HARDI) was used to improve tissue segmentation within complex tissue regions of the hippocampus by capturing small feature fissures. Simulations were conducted to reveal the effect of these fissures and cerebrospinal fluid (CSF) boundaries on CED tracer diversion and mistargeting. Sensitivity analysis was also conducted to determine the effect of dorsal and ventral hippocampal infusion sites and tissue transport properties on drug delivery. Predicted CED tissue concentrations from this model are then compared with experimentally measured MR concentration profiles. This allowed for more quantitative comparison between model predictions and MR measurement. Simulations were able to capture infusate diversion into fissures and other CSF spaces which is a major source of CED mistargeting. Such knowledge is important for proper surgical planning. PMID:26833078

  15. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    PubMed

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. PMID:25908267
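
    How censoring enters such a model can be shown in a reduced form: observations below the assay's detection limit contribute a normal log-CDF term to the likelihood instead of a log-density (Tobit regression). The sketch below is univariate and linear, unlike the paper's multivariate non-linear model, and all values are synthetic:

      import numpy as np
      from scipy import stats, optimize

      # Reduced Tobit sketch: left-censored log-titres with linear decay in age.
      rng = np.random.default_rng(5)
      t = rng.uniform(0, 12, 300)                       # infant age, months
      y_lat = 4.0 - 0.4 * t + rng.normal(0, 0.5, 300)   # latent log-titre
      L = 1.0                                           # detection limit (assumed)
      cens = y_lat <= L
      y = np.where(cens, L, y_lat)                      # observed, left-censored

      def negll(theta):
          a, b, log_s = theta
          s = np.exp(log_s)                             # keep sigma positive
          mu = a + b * t
          ll = stats.norm.logpdf(y[~cens], mu[~cens], s).sum()
          ll += stats.norm.logcdf((L - mu[cens]) / s).sum()   # censored part
          return -ll

      fit = optimize.minimize(negll, x0=[3.0, -0.3, 0.0], method="Nelder-Mead")
      a, b, s = fit.x[0], fit.x[1], np.exp(fit.x[2])
      print(f"intercept {a:.2f}, slope {b:.2f}, sigma {s:.2f}")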

  16. Diagnostic Competence of Teachers: A Process Model That Accounts for Diagnosing Learning Behavior Tested by Means of a Case Scenario

    ERIC Educational Resources Information Center

    Klug, Julia; Bruder, Simone; Kelava, Augustin; Spiel, Christiane; Schmitz, Bernhard

    2013-01-01

    Diagnosing learning behavior is one of teachers' most central tasks. So far, accuracy in teachers' judgments on students' achievement has been investigated. In this study, a new perspective is taken by developing and testing a three-dimensional model that describes the process of diagnosing learning behavior within a sample of N = 293…

  17. Accounting for propagation outside of the model boundaries in regional full waveform inversion based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Masson, Y.; Pierre, C.; Romanowicz, B. A.; French, S. W.; Yuan, H.

    2014-12-01

    Yuan et al. (2013) developed a 3D radially anisotropic shear wave model of the North American (NA) upper mantle based on full waveform tomography, combining teleseismic and regional distance data sampling NA. In this model, synthetic seismograms associated with regional events (i.e., events located inside the imaged region) were computed exactly using the Spectral Element method (Cupillard et al., 2012), while synthetic seismograms associated with teleseismic events were computed approximately using non-linear asymptotic coupling theory (NACT, Li and Romanowicz, 1995). Both the regional and the teleseismic datasets were inverted using approximate sensitivity kernels based upon normal mode theory. Our objective is to improve our current model and to build the next-generation model of NA by introducing new methodological developments (Masson et al., 2014) that allow us to compute exact synthetic seismograms as well as adjoint sensitivity kernels associated with teleseismic events, using mostly regional computations of wave propagation. The principle of the method is to substitute a teleseismic source (i.e., an earthquake) with an "equivalent" set of seismic sources acting on the boundaries of the region to be imaged that produces exactly the same wavefield. Computing the equivalent set of sources associated with each teleseismic event requires a few global simulations of the seismic wavefield that can be done once and for all, prior to the regional inversion. The regional full waveform inversion can then be performed using regional simulations only. We present a 3D model of NA demonstrating the advantages of the proposed method.

  18. Alternative methods to predict actual evapotranspiration illustrate the importance of accounting for phenology - Part 2: The event driven phenology model

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.

    2011-05-01

    Evapotranspiration (ET) flux constitutes a major component of both the water and energy balances at the land surface. Among the many factors that control evapotranspiration, phenology poses a major source of uncertainty in attempts to predict ET. Contemporary approaches to ET modeling and monitoring frequently summarize the complexity of the seasonal development of vegetation cover into static phenological trajectories (or climatologies) that lack sensitivity to changing environmental conditions. The Event Driven Phenology Model (EDPM) offers an alternative, interactive approach to representing phenology. This study presents the results of an experiment designed to illustrate the differences in ET arising from various techniques used to mimic phenology in models of land surface processes. The experiment compares and contrasts two realizations of static phenologies derived from long-term satellite observations of the Normalized Difference Vegetation Index (NDVI) against canopy trajectories produced by the interactive EDPM trained on flux tower observations. The assessment was carried out through validation of predicted ET against records collected by flux tower instruments. The VegET model (Senay, 2008) was used as a framework to estimate daily actual evapotranspiration and supplied with seasonal canopy trajectories produced by the EDPM and traditional techniques. The interactive approach presented the following advantages over phenology modeled with static climatologies: (a) lower prediction bias in crops; (b) smaller root mean square error in daily ET - 0.5 mm per day on average; (c) stable level of errors throughout the season similar among different land cover types and locations; and (d) better estimation of season duration and total seasonal ET.

  19. Alternative methods to predict actual evapotranspiration illustrate the importance of accounting for phenology - Part 2: The event driven phenology model

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.

    2012-01-01

    Evapotranspiration (ET) flux constitutes a major component of both the water and energy balances at the land surface. Among the many factors that control evapotranspiration, phenology poses a major source of uncertainty in attempts to predict ET. Contemporary approaches to ET modeling and monitoring frequently summarize the complexity of the seasonal development of vegetation cover into static phenological trajectories (or climatologies) that lack sensitivity to changing environmental conditions. The Event Driven Phenology Model (EDPM) offers an alternative, interactive approach to representing phenology. This study presents the results of an experiment designed to illustrate the differences in ET arising from various techniques used to mimic phenology in models of land surface processes. The experiment compares and contrasts two realizations of static phenologies derived from long-term satellite observations of the Normalized Difference Vegetation Index (NDVI) against canopy trajectories produced by the interactive EDPM trained on flux tower observations. The assessment was carried out through validation of predicted ET against records collected by flux tower instruments. The VegET model (Senay, 2008) was used as a framework to estimate daily actual evapotranspiration and supplied with seasonal canopy trajectories produced by the EDPM and traditional techniques. The interactive approach presented the following advantages over phenology modeled with static climatologies: (a) lower prediction bias in crops; (b) smaller root mean square error in daily ET - 0.5 mm per day on average; (c) stable level of errors throughout the season similar among different land cover types and locations; and (d) better estimation of season duration and total seasonal ET.

  20. Accounting Occupations Cluster Assessment Guide.

    ERIC Educational Resources Information Center

    Beaverton School District 48, OR.

    This assessment guide, developed by the Model Accounting Project at Aloha High School in the Beaverton, Oregon, school district, contains criteria statements that reflect factors deemed essential for quality instruction and overall effectiveness of the accounting program. The guide can be used by an instructor as a self-assessment instrument or by…

  1. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in
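
    The downward bias that motivates the paper is easy to reproduce: adding independent sampling noise to two correlated population series attenuates their observed zero-lag correlation by the factor sd_proc^2 / (sd_proc^2 + sd_obs^2). A short simulation (all values assumed):

      import numpy as np

      # Independent sampling noise attenuates the observed synchrony correlation.
      rng = np.random.default_rng(6)
      T, rho, sd_proc, sd_obs = 60, 0.7, 1.0, 0.8

      cov = [[sd_proc ** 2, rho * sd_proc ** 2],
             [rho * sd_proc ** 2, sd_proc ** 2]]
      z = rng.multivariate_normal([0.0, 0.0], cov, size=T)   # true abundances
      obs = z + rng.normal(0, sd_obs, size=(T, 2))           # noisy surveys

      r_true = np.corrcoef(z.T)[0, 1]
      r_obs = np.corrcoef(obs.T)[0, 1]
      atten = rho * sd_proc ** 2 / (sd_proc ** 2 + sd_obs ** 2)
      print(f"true r = {r_true:.2f}, observed r = {r_obs:.2f}, "
            f"expected ~ {atten:.2f}")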

  2. Spatially Explicit Full Carbon and Greenhouse Gas Accounting for the Midwestern and Continental US: Modeling and Decision Support for Carbon Management

    NASA Astrophysics Data System (ADS)

    West, T. O.; Brandt, C. C.; Wilson, B. S.; Hellwinckel, C. M.; Mueller, M.; Tyler, D. D.; de La Torre Ugarte, D. G.; Larson, J. A.; Nelson, R. G.; Marland, G.

    2006-12-01

    Full carbon accounting for terrestrial ecosystems is intended to quantify changes in net carbon emissions caused by changes in land management. On agricultural lands, changes in land management can cause changes in CO2 emissions from fossil fuel use, agricultural lime, and decomposition of soil carbon. Changes in off-site emissions can occur from the manufacturing of fertilizers, pesticides, and agricultural lime. We are developing a full carbon accounting framework that can be used for estimates of on-site net carbon flux or for full greenhouse gas accounting at a high spatial resolution. Estimates are based on the assimilation of national inventory data, soil carbon dynamics based on empirical analyses of field data, and Landsat-derived remote sensing products with 30 x 30 m resolution. We applied this framework to a mid-western region of the US that consists of 679 counties approximately centered around Iowa. We estimate the 1990 baseline soil carbon for this region to be 4,099 Tg C to a 3 m maximum depth. Soil carbon accumulation of 57.3 Tg C is estimated to have occurred in this region between 1991-2000. Without accounting for soil carbon loss associated with changes to more intense tillage practices, our estimate increases to 66.3 Tg C. This indicates that on-site permanence of soil carbon is approximately 86% with no additional economic incentives provided for soil carbon sequestration practices. Total net carbon flux from the agricultural activities in the Midwestern US in 2000 is estimated at about -5 Tg C. This estimate includes carbon uptake, decomposition, harvested products, and on-site fossil fuel emissions. Therefore, soil carbon accumulation offset on-site emissions in 2000. Our carbon accounting framework offers a method to integrate new inventory and remote sensing data on an annual basis, account for alternating annual trends in land management without the need for model equilibration, and provide a transparent means to monitor changes in soil carbon

  3. Modeling Water Resource Systems Accounting for Water-Related Energy Use, GHG Emissions and Water-Dependent Energy Generation in California

    NASA Astrophysics Data System (ADS)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Medellin-Azuara, J.

    2015-12-01

    Most individual processes relating to water-energy interdependence have been assessed in many different ways over the last decade. It is time to step up and include the results of these studies in management by providing a tool that integrates these processes into decision-making, so that the tradeoffs between water and energy across management options and scenarios can be effectively understood. A simple but powerful decision support system (DSS) for water management is described that includes water-related energy use and GHG emissions not solely from water operations, but also from final water end uses, including demands from cities, agriculture, the environment and the energy sector. Because one of the main drivers of energy use and GHG emissions is water pumping from aquifers, the DSS combines a surface water management model with a simple groundwater model, accounting for their interrelationships. The model also explicitly includes economic data to optimize water use across sectors during shortages and to calculate return flows from different uses. Capabilities of the DSS are demonstrated in a case study of California's intertied water system. Results show that urban end uses account for most GHG emissions of the entire water cycle, but large water conveyance produces significant peaks over the summer season. Also, the development of more efficient water application in the agricultural sector has increased total energy consumption and net water use in the basins.

  4. Accounting for anthropic energy flux of traffic in winter urban road surface temperature simulations with the TEB model

    NASA Astrophysics Data System (ADS)

    Khalifa, A.; Marchetti, M.; Bouilloud, L.; Martin, E.; Bues, M.; Chancibaut, K.

    2016-02-01

    Snowfall forecasts help winter maintenance of road networks, ensuring better coordination between services, cost control, and a reduction in environmental impacts caused by an inappropriate use of de-icers. In order to determine the possible accumulation of snow on pavements, forecasting the road surface temperature (RST) is mandatory. Weather outstations are used along these networks to identify changes in pavement status and to make forecasts by analyzing the data they provide. Physical numerical models provide such forecasts and require an accurate description of the infrastructure along with meteorological parameters. The objective of this study was to build a reliable urban RST forecast with a detailed integration of traffic in the Town Energy Balance (TEB) numerical model for winter maintenance. The study first consisted of generating a physical and consistent description of traffic in the model, with two approaches to evaluate the influence of traffic on RST. Experiments were then conducted to measure the effect of traffic on RST increase with respect to non-circulated areas. These field data were then used for comparison with the forecast provided by this traffic-implemented TEB version.

  5. Accounting for anthropic energy flux of traffic in winter urban road surface temperature simulations with TEB model

    NASA Astrophysics Data System (ADS)

    Khalifa, A.; Marchetti, M.; Bouilloud, L.; Martin, E.; Bues, M.; Chancibaut, K.

    2015-06-01

    A snowfall forecast helps coordinate winter maintenance operating services, reducing the cost of maintenance actions and the environmental impacts caused by an inappropriate use of de-icers. In order to determine the possible accumulation of snow on pavement, a forecast of the road surface temperature (RST) is mandatory. Physical numerical models provide such forecasts and need an accurate description of the infrastructure along with meteorological parameters. The objective of this study was to build a reliable urban RST forecast with a detailed integration of traffic in the Town Energy Balance (TEB) numerical model for winter maintenance. The study first consisted of generating a physical and consistent description of traffic in the model, with all the energy interactions, with two approaches to evaluate the influence of traffic on RST. Experiments were then conducted to measure the effect of traffic on RST increase with respect to non-circulated areas. These field data were then used for comparison with the forecast provided by this traffic-implemented TEB version.

  6. Accounting for Uncertainty in Confounder and Effect Modifier Selection when Estimating Average Causal Effects in Generalized Linear Models

    PubMed Central

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-01-01

    Summary Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012) and Lefebvre et al. (2014), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to non-collapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100 to 150 observations and 50 covariates. The method is applied to data on 15060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within thirty days of diagnosis. PMID:25899155

  7. Two-dimensional global Rayleigh wave attenuation model by accounting for finite-frequency focusing and defocusing effect

    NASA Astrophysics Data System (ADS)

    Ma, Zhitu; Masters, Guy; Mancinelli, Nicholas

    2016-01-01

    In this study, we obtain a set of 2-D global phase velocity and attenuation maps for Rayleigh waves between 5 and 25 mHz. Correcting the effect of focusing-defocusing is crucial in order to obtain reliable attenuation structure. Great circle linearized ray theory, which has been used to date, can give useful predictions of this effect if careful attention is paid to how the phase velocity model is smoothed. In contrast, predictions based on the 2-D finite-frequency kernels are quite robust in this frequency range and suggest that they are better suited as a basis for inversion. We use a large data set of Rayleigh wave phase and amplitude measurements to invert for the phase velocity, attenuation, source and receiver terms simultaneously. Our models provide 60-70 per cent variance reduction to the raw data though the source terms are the biggest contribution to the fit of the data. The attenuation maps show structures that correlate well with surface tectonics and the age progression trend of the attenuation is clearly seen in the ocean basins. We have also identified problematic stations and earthquake sources as a by-product of our data selection process.

  8. A Unified Model of Time Perception Accounts for Duration-Based and Beat-Based Timing Mechanisms

    PubMed Central

    Teki, Sundeep; Grube, Manon; Griffiths, Timothy D.

    2011-01-01

    Accurate timing is an integral aspect of sensory and motor processes such as the perception of speech and music and the execution of skilled movement. Neuropsychological studies of time perception in patient groups and functional neuroimaging studies of timing in normal participants suggest common neural substrates for perceptual and motor timing. A timing system is implicated in core regions of the motor network such as the cerebellum, inferior olive, basal ganglia, pre-supplementary, and supplementary motor area, pre-motor cortex as well as higher-level areas such as the prefrontal cortex. In this article, we assess how distinct parts of the timing system subserve different aspects of perceptual timing. We previously established brain bases for absolute, duration-based timing and relative, beat-based timing in the olivocerebellar and striato-thalamo-cortical circuits respectively (Teki et al., 2011). However, neurophysiological and neuroanatomical studies provide a basis to suggest that timing functions of these circuits may not be independent. Here, we propose a unified model of time perception based on coordinated activity in the core striatal and olivocerebellar networks that are interconnected with each other and the cerebral cortex through multiple synaptic pathways. Timing in this unified model is proposed to involve serial beat-based striatal activation followed by absolute olivocerebellar timing mechanisms. PMID:22319477

  9. Competency-Based Accounting Instruction

    ERIC Educational Resources Information Center

    Graham, John E.

    1977-01-01

    Shows how the proposed model (an individualized competency based learning system) can be used effectively to produce a course in accounting principles which adapts to different entering competencies and to different rates and styles of learning. (TA)

  10. A multiscale modelling methodology applicable for regulatory purposes taking into account effects of complex terrain and buildings on pollutant dispersion: a case study for an inner Alpine basin.

    PubMed

    Oettl, D

    2015-11-01

    Dispersion modelling in complex terrain has always been challenging for modellers. Although a large number of publications are dedicated to that field, candidate methods and models for use in regulatory applications are scarce. This is all the more true when the combined effect of topography and obstacles on pollutant dispersion has to be taken into account. In Austria, largely situated in Alpine regions, such complex situations are quite frequent. This work deals with an approach that is in principle capable of considering both buildings and topography in simulations by combining state-of-the-art wind field models at the micro- (<1 km) and mesoscale γ (2-20 km) with a Lagrangian particle model. In order to make such complex numerical models applicable for regulatory purposes, meteorological input data for the models need to be readily derived from routine observations. Here, use was made of the traditional way of binning meteorological data by wind direction, speed, and stability class, formerly used mainly in conjunction with Gaussian-type models. It is demonstrated that this approach leads to reasonable agreement (fractional bias < 0.1) between observed and modelled annual average concentrations in an Alpine basin with frequent low-wind-speed conditions, temperature inversions, and quite complex flow patterns, while keeping simulation times within practicable limits for applications in licensing procedures. However, due to the simplifications in the derivation of meteorological input data as well as several ad hoc assumptions regarding the boundary conditions of the mesoscale wind field model, the methodology is not suited for computing detailed time and space variations of pollutant concentrations. PMID:26162440
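
    The acceptance metric quoted above, fractional bias, is straightforward to compute; FB = 0 means the means match exactly, and as a common rule of thumb in dispersion model evaluation, |FB| below roughly 0.3 indicates good performance. A sketch with assumed array inputs:

        import numpy as np

        def fractional_bias(observed, modelled):
            """FB of annual average concentrations; observed, modelled: arrays."""
            co, cm = np.mean(observed), np.mean(modelled)
            return (co - cm) / (0.5 * (co + cm))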

  11. Computer-program documentation of an interactive-accounting model to simulate streamflow, water quality, and water-supply operations in a river basin

    USGS Publications Warehouse

    Burns, A.W.

    1988-01-01

    This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water-quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water-law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic-output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and a listing of the FORTRAN source code are included as attachments to the report. (USGS)
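
    The accounting logic, reduced to its core: incremental flows from regression equations enter at each internode area, and flow and conservative-constituent load accumulate downstream. A hypothetical single-branch sketch (not the documented FORTRAN code), ignoring diversions, water use, and reservoirs:

        def route_downstream(nodes):
            """nodes: list ordered upstream to downstream; each node carries
            'incremental_flow' and 'concentration' from regression equations."""
            flow, load = 0.0, 0.0
            for node in nodes:
                flow += node["incremental_flow"]
                load += node["incremental_flow"] * node["concentration"]
                node["flow"], node["load"] = flow, load  # accumulated values
            return nodes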

  12. Computing options for multiple-trait test-day random regression models while accounting for heat tolerance.

    PubMed

    Aguilar, I; Tsuruta, S; Misztal, I

    2010-06-01

    Data included 90,242,799 test day records from first, second and third parities of 5,402,484 Holstein cows and 9,326,754 animals in the pedigree. Additionally, daily temperature humidity indexes (THI) from 202 weather stations were available. The fixed effects included herd test day, age at calving, milking frequency and days in milk classes (DIM). Random effects were additive genetic, permanent environment and herd-year and were fit as random regressions. Covariates included linear splines with four knots at 5, 50, 200 and 305 DIM and a function of THI. Mixed model equations were solved using an iteration on data program with a preconditioned conjugate gradient algorithm. Preconditioners used were diagonal (D), block diagonal due to traits (BT) and block diagonal due to traits and correlated effects (BTCORR). One run included BT with a 'diagonalized' model in which the random effects were reparameterized for diagonal (co)variance matrices among traits (BTDIAG). Memory requirements were 8.7 Gb for D, 10.4 Gb for BT and BTDIAG, and 24.3 Gb for BTCORR. Computing times (rounds) were 14 days (952) for D, 10.7 days (706) for BT, 7.7 days (494) for BTDIAG and 4.6 days (289) for BTCORR. The convergence pattern was strongly influenced by the choice of fixed effects. When sufficient memory is available, the option BTCORR is the fastest and simplest to implement; the next efficient method, BTDIAG, requires additional steps for diagonalization and back-diagonalization. PMID:20536641
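
    A minimal dense-matrix preconditioned conjugate gradient, the solver family compared in this record, shown with the simplest diagonal ("D") preconditioner; the block-diagonal options replace M with small per-trait blocks, and the production code iterates on data rather than forming the coefficient matrix. A generic sketch:

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
            """Solve A x = b with preconditioner inverse M_inv (both matrices)."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv @ r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv @ r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

    Here M_inv = np.diag(1.0 / np.diag(A)) reproduces the diagonal option; the block variants substitute small dense blocks along the diagonal.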

  13. An efficient model to predict guided wave radiation by finite-sized sources in multilayered anisotropic plates with account of caustics

    NASA Astrophysics Data System (ADS)

    Stévenin, M.; Lhémery, A.; Grondel, S.

    2016-01-01

    Elastic guided waves (GW), generated by finite-sized transducers, are used in various non-destructive testing (NDT) methods to inspect plate-like structures. Thanks to the long-range propagation of GW, using a few transducers at permanent positions can provide full coverage of the plate. Transducer diffraction effects take place, leading to complex radiated fields. Optimizing transducer positioning makes it necessary to accurately predict the GW field radiated by a transducer. Fraunhofer-like approximations applied to GW in isotropic homogeneous plates lead to fast and accurate field computation but can fail when applied to multi-layered anisotropic composite plates, as shown by the examples given. Here, a model is proposed for composite plates, based on the computation of the approximate Green's tensor describing modal propagation from a source point, accounting for the caustics typically seen when strong anisotropy is concerned. Modal solutions are otherwise obtained by the Semi-Analytic Finite Element method. Transducer diffraction effects are accounted for by means of an angular integration over the transducer surface as seen from the calculation point, that is, over the energy paths involved, which are mode-dependent. The model is validated by comparing its predictions with those computed by means of a full convolution integration of the Green's tensor with the source over the transducer surface. The examples given concern disk- and rectangular-shaped transducers commonly used in NDT.

  14. Long-term pollution by chlordecone of tropical volcanic soils in the French West Indies: a simple leaching model accounts for current residue.

    PubMed

    Cabidoche, Y-M; Achard, R; Cattan, P; Clermont-Dauphin, C; Massat, F; Sansoulet, J

    2009-05-01

    Chlordecone was applied between 1972 and 1993 in banana fields of the French West Indies. This resulted in long-term pollution of soils and contamination of waters, aquatic biota, and crops. To assess pollution level and duration according to soil type, WISORCH, a leaching model based on first-order desorption kinetics, was developed and run. Its input parameters are soil organic carbon content (SOC) and the SOC/water partitioning coefficient (K(oc)). It accounts for current chlordecone soil contents and drainage water concentrations. The model was valid for andosol, which indicates that neither physico-chemical nor microbial degradation occurred. Dilution by previous deep tillages makes soil scraping unrealistic. Lixiviation appeared to be the main way to reduce pollution. Along with the increases in SOC and rainfall, K(oc) increased from nitisol to ferralsol and then andosol, while lixiviation efficiency decreased. Consequently, pollution is bound to last for several decades for nitisol, centuries for ferralsol, and half a millennium for andosol. PMID:19167793
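
    The structure of such a first-order leaching calculation can be sketched with standard soil-leaching theory: sorption through K(oc) times SOC retards transport, annual drainage sets the flushing rate, and the time to reach a target soil content follows from the exponential decline. The parameter names and the retardation form below are textbook assumptions, not the exact WISORCH equations.

        import numpy as np

        def years_to_threshold(c0, c_target, drainage, theta, rho, koc, soc, depth):
            """c0, c_target: soil contents (mg/kg); drainage: m/yr; theta: water
            content (-); rho: bulk density (kg/m3); koc: L/kg; soc: organic
            carbon fraction (-); depth: contaminated depth (m)."""
            kd = koc * soc                           # sorption coefficient (L/kg)
            R = 1.0 + rho * kd / (1000.0 * theta)    # retardation factor (-)
            k = drainage / (theta * R * depth)       # first-order rate (1/yr)
            return np.log(c0 / c_target) / k         # years of exponential decline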

  15. Mathematical Modeling of the Thermal State of an Isothermal Element with Account of the Radiant Heat Transfer Between Parts of a Spacecraft

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Paleshkin, A. V.; Terent'ev, V. V.; Firsyuk, S. O.

    2016-01-01

    A methodological approach to determination of the thermal state at a point on the surface of an isothermal element of a small spacecraft has been developed. A mathematical model of heat transfer between surfaces of intricate geometric configuration has been described. In this model, account was taken of the external field of radiant fluxes and of the differentiated mutual influence of the surfaces. An algorithm for calculation of the distribution of the density of the radiation absorbed by surface elements of the object under study has been proposed. The temperature field on the lateral surface of the spacecraft exposed to sunlight and on its shady side has been calculated. By determining the thermal state of magnetic controls of the orientation system as an example, the authors have assessed the contribution of the radiation coming from the solar-cell panels and from the spacecraft surface.

  16. Combining a polarizable force-field and a coarse-grained polarizable solvent model. II. Accounting for hydrophobic effects.

    PubMed

    Masella, Michel; Borgis, Daniel; Cuniasse, Philippe

    2011-09-01

    A revised and improved version of our efficient polarizable force-field/coarse grained solvent combined approach (Masella, Borgis, and Cuniasse, J. Comput. Chem. 2008, 29, 1707) is described. The polarizable pseudo-particle solvent model represents the macroscopic solvent polarization by induced dipoles placed on mobile pseudo-particles. In this study, we propose a new formulation of the energy term handling the nonelectrostatic interactions among the pseudo-particles. This term is now able to reproduce the energetic and structural response of liquid water due to the presence of a hydrophobic spherical cavity. Accordingly, the parameters of the energy term handling the nonpolar solute/solvent interactions have been refined to reproduce the solvation free energies of small solutes, based on a standard thermodynamic integration scheme. The reliability of this new approach has been checked for the properties of solvated methane and of the solvated methane dimer, as well as by performing 10 × 20 ns molecular dynamics (MD) trajectories for three solvated proteins. Long-time stability of the protein structures along the trajectories is observed. Moreover, our method still provides a measure of the protein solvation thermodynamics at the same accuracy as standard Poisson-Boltzmann continuum methods. These results show the relevance of our approach and its applicability to massively coupled MD schemes to accurately and intensively explore the potential energy surfaces of solvated macromolecules. PMID:21647929

  17. Comparisons between satellite-derived datasets of stratospheric NOy species: using a photochemical model to account for diurnal variations

    NASA Astrophysics Data System (ADS)

    Sheese, Patrick; Walker, Kaley; McLinden, Chris; Boone, Chris; Bernath, Peter; Burrows, John; Funke, Bernd; Fussen, Didier; Manney, Gloria; Murtagh, Donal; Randall, Cora; Raspollini, Piera; Rozanov, Alexei; Russell, James; Urban, Jo; von Clarmann, Thomas; Zawodny, Joseph

    2014-05-01

    The ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) instrument on the Canadian satellite SCISAT, which has been in operation now for over 10 years, has the capability of deriving stratospheric profiles of many of the NOy (NO + NO2+ NO3+ 2×N2O5+ HNO3+ HNO4+ ClONO2+ BrONO2) species. However, because ACE-FTS is a solar occultation instrument, opportunities for it and another given satellite instrument to observe a common air mass can be rather limited. In the case of comparing species that exhibit significant diurnal variation, finding 'coincident' measurements can be even more difficult. In order for the measurements to be considered common-volume, the required difference between measurement times can be prohibitively small. In this study, for each ACE-FTS measurement, we use a photochemical box model to simulate the diurnal variations of the different NOy species over that day. The ACE-FTS NOy profiles are then scaled to the local times of coincident measurements from different satellite instruments: GOMOS, MIPAS, MLS, OSIRIS, POAM III, SAGE III, SCIAMACHY, and SMR. This allows a much larger number of coincidences to be utilized. This study will discuss the advantages and limitations of this technique, as well as the results from comparing NO, NO2, N2O5, HNO3, and ClONO2 between ACE-FTS and other atmospheric limb sounders.
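
    The scaling step itself is simple once the box model has produced a diurnal cycle for the day and location of the occultation: the measured profile is multiplied by the ratio of the modelled values at the two local times. A sketch with a hypothetical interface:

        def scale_to_local_time(ace_profile, modelled_cycle, t_ace, t_other):
            """ace_profile: measured mixing-ratio profile; modelled_cycle(t):
            box-model mixing ratio at local solar time t (same levels)."""
            return ace_profile * modelled_cycle(t_other) / modelled_cycle(t_ace)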

  18. What is accountability in health care?

    PubMed

    Emanuel, E J; Emanuel, L L

    1996-01-15

    Accountability has become a major issue in health care. Accountability entails the procedures and processes by which one party justifies and takes responsibility for its activities. The concept of accountability contains three essential components: 1) the loci of accountability--health care consists of at least 11 different parties that can be held accountable or hold others accountable; 2) the domains of accountability--in health care, parties can be held accountable for as many as six activities: professional competence, legal and ethical conduct, financial performance, adequacy of access, public health promotion, and community benefit; and 3) the procedures of accountability, including formal and informal procedures for evaluating compliance with domains and for disseminating the evaluation and responses by the accountable parties. Different models of accountability stress different domains, evaluative criteria, loci, and procedures. We characterize and compare three dominant models of accountability: 1) the professional model, in which the individual physician and patient participate in shared decision making and physicians are held accountable to professional colleagues and to patients; 2) the economic model, in which the market is brought to bear in health care and accountability is mediated through consumer choice of providers; and 3) the political model, in which physicians and patients interact as citizen-members within a community and in which physicians are accountable to a governing board elected from the members of the community, such as the board of a managed care plan. We argue that no single model of accountability is appropriate to health care. Instead, we advocate a stratified model of accountability in which the professional model guides the physician-patient relationship, the political model operates within managed care plans and other integrated health delivery networks, and the economic and political models operate in the relations between

  19. Accounting for observational uncertainties in the evaluation of low latitude turbulent air-sea fluxes simulated in a suite of IPSL model versions

    NASA Astrophysics Data System (ADS)

    Servonnat, Jerome; Braconnot, Pascale; Gainusa-Bogdan, Alina

    2015-04-01

    Turbulent momentum and heat (sensible and latent) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate, and their good representation in climate models is of prime importance. In this work, we use the methodology developed by Braconnot & Frankignoul (1993) to perform a Hotelling T2 test on spatio-temporal fields (annual cycles). This statistic provides a quantitative measure, incorporating an estimate of the observational uncertainty, for the evaluation of low-latitude turbulent air-sea fluxes in a suite of IPSL model versions. The spread within the observational ensemble of turbulent flux data products assembled by Gainusa-Bogdan et al. (submitted) is used as an estimate of the observational uncertainty for the different turbulent fluxes. The methodology relies on selecting a small number of dominant variability patterns (EOFs) that are common to both the model and the observations for the comparison. Consequently it focuses on the large-scale variability patterns and avoids the possibly noisy smaller scales. The results show that different versions of the IPSL coupled model share common large-scale biases, but also that skill on sea surface temperature is not necessarily directly related to skill in the representation of the different turbulent fluxes. Despite the large error bars on the observations, the test clearly distinguishes the merits of the different model versions. The analyses of the common EOF patterns and related time series provide guidance on the major differences with the observations. This work is a first attempt to use such a statistic in the evaluation of the spatio-temporal variability of the turbulent fluxes while accounting for observational uncertainty, and it represents an efficient tool for systematic evaluation of simulated air-sea fluxes, considering both the fluxes and the related atmospheric variables. References Braconnot, P., and C. Frankignoul (1993), Testing Model
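
    For reference, the two-sample Hotelling T2 statistic on fields projected onto a few retained EOFs, the quantity at the heart of the methodology described above; a generic numpy sketch rather than the IPSL evaluation code.

        import numpy as np

        def hotelling_t2(X, Y):
            """X: (n1, k) model principal components; Y: (n2, k) observed ones."""
            n1, n2 = len(X), len(Y)
            d = X.mean(axis=0) - Y.mean(axis=0)
            # Pooled covariance over both samples
            S = ((n1 - 1) * np.cov(X, rowvar=False) +
                 (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
            return (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)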

  20. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease in Taiwan, especially in the southern area, where incidence is high every year. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process, and the composite space-time effects of dengue fever have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 weeks on the variation of dengue cases under conditions of uncertainty. Subsequently, the combination of the nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system provides useful spatio-temporal predictions of potential outbreaks of dengue fever. In conclusion, the proposed approach can provide a practical disease-control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  1. Assessing the performance of dispersionless and dispersion-accounting methods: helium interaction with cluster models of the TiO2(110) surface.

    PubMed

    de Lara-Castells, María Pilar; Stoll, Hermann; Mitrushchenkov, Alexander O

    2014-08-21

    As a prototypical dispersion-dominated physisorption problem, we analyze here the performance of dispersionless and dispersion-accounting methodologies on the helium interaction with cluster models of the TiO2(110) surface. A special focus has been given to the dispersionless density functional dlDF and the dlDF+Das construction for the total interaction energy (K. Pernal, R. Podeszwa, K. Patkowski, and K. Szalewicz, Phys. Rev. Lett. 2009, 103, 263201), where Das is an effective interatomic pairwise functional form for the dispersion. Likewise, the performance of the symmetry-adapted perturbation theory (SAPT) method is evaluated, where the interacting monomers are described by density functional theory (DFT) with the dlDF, PBE, and PBE0 functionals. Our benchmarks include CCSD(T)-F12b calculations and a comparative analysis of the nuclear bound states supported by the He-cluster potentials. Moreover, intra- and intermonomer correlation contributions to the physisorption interaction are analyzed through the method of increments (H. Stoll, J. Chem. Phys. 1992, 97, 8449) at the CCSD(T) level of theory. This method is further applied in conjunction with a partitioning of the Hartree-Fock interaction energy to estimate individual interaction energy components, comparing them with those obtained using the different SAPT(DFT) approaches. The cluster size evolution of dispersionless and dispersion-accounting energy components is then discussed, revealing the reduced role of the dispersionless interaction and intramonomer correlation when the extended nature of the surface is better accounted for. On the contrary, both post-Hartree-Fock and SAPT(DFT) results clearly demonstrate the highly transferable character of the effective pairwise dispersion interaction whatever the cluster model is. Our contribution also illustrates how the method of increments can be used as a valuable tool not only to achieve the accuracy of CCSD(T) calculations using large cluster models but also to

  2. Modelling runoff at the plot scale taking into account rainfall partitioning by vegetation: application to stemflow of banana (Musa spp.) plant

    NASA Astrophysics Data System (ADS)

    Charlier, J.-B.; Moussa, R.; Cattan, P.; Cabidoche, Y.-M.; Voltz, M.

    2009-11-01

    Rainfall partitioning by vegetation modifies the intensity of rainwater reaching the ground, which affects runoff generation. Incident rainfall is intercepted by the plant canopy and then redistributed into throughfall and stemflow. Rainfall intensities at the soil surface are therefore not spatially uniform, generating local variations of runoff production that are disregarded in runoff models. The aim of this paper was to model runoff at the plot scale, accounting for rainfall partitioning by vegetation in the case of plants that concentrate rainwater at the plant foot and promote stemflow. We developed a lumped modelling approach, including a stemflow function that divided the plot into two compartments: one compartment including stemflow and the related water pathways and one compartment for the rest of the plot. This stemflow function was coupled with a production function and a transfer function to simulate a flood hydrograph using the MHYDAS model. Calibrated parameters were a "stemflow coefficient", which compartmented the plot; the saturated hydraulic conductivity (Ks), which controls infiltration and runoff; and the two parameters of the diffusive wave equation. We tested our model on a banana plot of 3000 m2 on permeable Andosol (mean Ks=75 mm h-1) under tropical rainfall, in Guadeloupe (FWI). Runoff simulations without and with the stemflow function were performed and compared for 18 flood events of 10 to 140 mm rainfall depth. Modelling results showed that the stemflow function improved the calibration of hydrographs according to the error criteria on volume and on peakflow, to the Nash and Sutcliffe coefficient, and to the root mean square error. This was particularly the case for low flows observed during residual rainfall, for which the stemflow function allowed runoff to be simulated for rainfall intensities lower than the Ks measured at the soil surface. This approach also allowed us to take into account the experimental data, without needing to
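
    The mechanism that lets the model produce runoff below the measured Ks can be sketched in a few lines: the stemflow compartment receives rainfall concentrated by a funnelling factor, so its effective intensity can exceed the soil's infiltrability even when the plot-average intensity does not. Hypothetical parameter names; a conceptual sketch, not the MHYDAS implementation.

        def runoff_two_compartments(rain, f_stem, funnel, ks):
            """rain: intensity (mm/h); f_stem: stemflow-affected area fraction;
            funnel: concentration factor at the plant foot (>1); ks: mm/h."""
            r_stem = max(rain * funnel - ks, 0.0)  # concentrated compartment
            r_rest = max(rain - ks, 0.0)           # rest of the plot
            return f_stem * r_stem + (1.0 - f_stem) * r_rest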

  3. A new approach to account for the medium-dependent effect in model-based dose calculations for kilovoltage x-rays

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jason M.; Ding, George X.

    2011-07-01

    This study presents a new approach to accurately account for the medium-dependent effect in model-based dose calculations for kilovoltage (kV) x-rays. This approach is based on the hypothesis that the correction factors needed to convert dose from model-based dose calculations to absorbed dose-to-medium depend on both the attenuation characteristics of the absorbing media and the changes to the energy spectrum of the incident x-rays as they traverse media with an effective atomic number different than that of water. Using Monte Carlo simulation techniques, we obtained empirical medium-dependent correction factors that take both effects into account. We found that the correction factors can be expressed as a function of a single quantity, called the effective bone depth, which is a measure of the amount of bone that an x-ray beam must penetrate to reach a voxel. Since the effective bone depth can be calculated from volumetric patient CT images, the medium-dependent correction factors can be obtained for model-based dose calculations based on patient CT images. We tested the accuracy of this new approach on 14 patients for the case of calculating imaging dose from kilovoltage cone-beam computed tomography used for patient setup in radiotherapy, and compared it with the Monte Carlo method, which is regarded as the 'gold standard'. For all patients studied, the new approach resulted in mean dose errors of less than 3%. This is in contrast to current available inhomogeneity corrected methods, which have been shown to result in mean errors of up to -103% for bone and 8% for soft tissue. Since there is a huge gain in the calculation speed relative to the Monte Carlo method (~two orders of magnitude) with an acceptable loss of accuracy, this approach provides an alternative accurate dose calculation method for kV x-rays.
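
    The key quantity is easy to state algorithmically: march along the ray from the beam entry point to the voxel of interest and sum the path length spent in voxels classified as bone from the CT numbers. The HU threshold and sampling scheme below are illustrative assumptions; the published correction curve then maps this depth to a dose correction factor.

        import numpy as np

        def effective_bone_depth(ct_hu, entry, voxel, step=1.0, bone_hu=300.0):
            """ct_hu(p): Hounsfield units at 3-D point p (mm); entry, voxel:
            3-D points (mm); step: sampling step along the ray (mm)."""
            direction = voxel - entry
            length = np.linalg.norm(direction)
            direction = direction / length
            n = int(length / step)
            return sum(step for i in range(n)
                       if ct_hu(entry + (i + 0.5) * step * direction) > bone_hu)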

  4. Holding Accountability to Account. Research Brief

    ERIC Educational Resources Information Center

    National Center on Performance Incentives, 2008

    2008-01-01

    In "Holding Accountability to Account: How Scholarship and Experience in Other Fields Inform Exploration of Performance Incentives in Education"--a paper presented at the National Center on Performance Incentives research to policy conference in February--Richard Rothstein, a research associate at the Economic Policy Institute, argues educational…

  5. Accountability, California Style: Counting or Accounting?

    ERIC Educational Resources Information Center

    Russell, Michael; Higgins, Jennifer; Raczek, Anastasia

    2004-01-01

    Across the nation and at nearly all levels of our educational system, efforts to hold schools accountable for student learning dominate strategies for improving the quality of education. At both the national and state level, student testing stands at the center of educational accountability programs, such that schools are effectively held…

  6. International Accounting and the Accounting Educator.

    ERIC Educational Resources Information Center

    Laribee, Stephen F.

    The American Assembly of Collegiate Schools of Business (AACSB) has been instrumental in internationalizing the accounting curriculum by means of accreditation requirements and standards. Colleges and universities have met the AACSB requirements either by providing separate international accounting courses or by integrating international topics…

  7. Calibration and use of an interactive-accounting model to simulate dissolved solids, streamflow, and water-supply operations in the Arkansas River basin, Colorado

    USGS Publications Warehouse

    Burns, A.W.

    1989-01-01

    An interactive-accounting model was used to simulate dissolved solids, streamflow, and water-supply operations in the Arkansas River basin, Colorado. Model calibration of specific conductance to streamflow relations at three sites enabled computation of dissolved-solids loads throughout the basin. To simulate streamflow only, all water-supply operations were incorporated in the regression relations for streamflow. Calibration for 1940-85 resulted in coefficients of determination that ranged from 0.89 to 0.58, and values in excess of 0.80 were determined for 16 of 20 nodes. The model then incorporated 74 water users and 11 reservoirs to simulate the water-supply operations for two periods, 1943-74 and 1975-85. For the 1943-74 calibration, coefficients of determination for streamflow ranged from 0.87 to 0.02. Calibration of the water-supply operations resulted in coefficients of determination that ranged from 0.87 to negative values for the simulated irrigation diversions of 37 selected water users. Calibration for 1975-85 was not evaluated statistically, but average values and plots of reservoir contents indicated that the simulation was reasonable. To demonstrate the utility of the model, six specific alternatives were simulated to consider effects of potential enlargement of Pueblo Reservoir. Three major alternatives were simulated: the 1975-85 calibrated model data, the calibrated model data with an addition of 30 cu ft/sec in Fountain Creek flows, and the calibrated model data plus additional municipal water in storage. These three major alternatives considered the options of reservoir enlargement or no enlargement. A 40,000-acre-foot reservoir enlargement resulted in average increases of 2,500 acre-ft in transmountain diversions, of 800 acre-ft in storage diversions, and of 100 acre-ft in winter-water storage. (USGS)

  8. Operational hydrological ensemble forecasts in France. Recent development of the French Hydropower Company (EDF), taking into account rainfall and hydrological model uncertainties.

    NASA Astrophysics Data System (ADS)

    Mathevet, T.; Garavaglia, F.; Garçon, R.; Gailhard, J.; Paquet, E.

    2009-04-01

    In operational conditions, the actual quality of meteorological and hydrological forecasts does not permit decision-making with certainty about the future. In this context, meteorological and hydrological ensemble forecasts allow a better representation of forecast uncertainties. Compared to classical deterministic forecasts, ensemble forecasts improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. In this paper, we present a hydrological ensemble forecasting system under development at EDF (French Hydropower Company). This forecasting system takes into account both rainfall forecast uncertainties and hydrological model uncertainties. Hydrological forecasts were generated using the MORDOR model (Andreassian et al., 2006), developed at EDF and used on a daily basis in operational conditions on about a hundred watersheds. Two sources of rainfall forecasts were used: one based on ECMWF forecasts, the other based on an analogues approach (Obled et al., 2002). Two methods of hydrological model uncertainty estimation were used: one based on equifinal parameter sets (Beven & Binley, 1992), the other based on statistical modeling of the empirical uncertainty of the hydrological forecasts (Montanari et al., 2004; Schaefli et al., 2007). Daily operational 7-day hydrological ensemble forecasts over two years in three alpine watersheds were evaluated. Finally, we present a way to combine rainfall and hydrological model forecast uncertainties to achieve good probabilistic calibration. Our results show that the combination of ECMWF and analogues-based rainfall forecasts allows a good probabilistic calibration of the rainfall forecasts. They also show that statistical modeling of the empirical forecast uncertainty achieves better probabilistic calibration than the equifinal parameter set approach

  9. A method for the stochastic modeling of karstic systems accounting for geophysical data: an example of application in the region of Tulum, Yucatan Peninsula (Mexico)

    NASA Astrophysics Data System (ADS)

    Vuilleumier, C.; Borghi, A.; Renard, P.; Ottowitz, D.; Schiller, A.; Supper, R.; Cornaton, F.

    2013-05-01

    The eastern coast of the Yucatan Peninsula, Mexico, contains one of the most developed karst systems in the world. This natural wonder is under increasing pollution threat due to rapid economic development in the region of Tulum, combined with a lack of wastewater treatment facilities. A preliminary numerical model has been developed to assess the vulnerability of the resource. Maps of explored caves have been completed using data from two airborne geophysical campaigns. These electromagnetic measurements allow for the mapping of unexplored karstic conduits. The completion of the network map is achieved through a stochastic pseudo-genetic karst simulator, previously developed but adapted as part of this study to account for the geophysical data. Together with the cave mapping by speleologists, the simulated networks are integrated into the finite-element flow-model mesh as pipe networks in which turbulent flow is modeled. The calibration of the karstic network parameters (density, radius of the conduits) is conducted through a comparison with measured piezometric levels. Although the proposed model shows great uncertainty, it realistically reproduces the heterogeneous flow of the aquifer. Simulated velocities in conduits are greater than 1 cm s-1, suggesting that the reinjection of Tulum wastewater constitutes a pollution risk for the nearby ecosystems.

  10. Accounting for age Structure in Ponderosa Pine Ecosystem Analyses: Integrating Management, Disturbance Histories and Observations with the BIOME-BGC Model

    NASA Astrophysics Data System (ADS)

    Hibbard, K. A.; Law, B.; Thornton, P.

    2003-12-01

    61% for sites averaging 9, 16, and 23 years, respectively. It was assumed that changes in long-term pools (e.g. soil C) were negligible within these timeframes. In Law et al. (2003), the model performed well for old and mature sites; however, model simulations of the younger sites (9-50Y) were weak compared to NEP estimates from observations. Error for the young plots in Law et al. (2003) ranged from 150% to more than 400% of observed NEP. By accounting for the observed age structure through harvest removal, model error in this study ranged from 20-90% in young plots. This study is one of a few that have sought to account for age structure in simulating ecosystem dynamics and processes.

  11. S-R Associations, Their Extinction, and Recovery in an Animal Model of Anxiety: A New Associative Account of Phobias Without Recall of Original Trauma

    PubMed Central

    Laborda, Mario A.; Miller, Ralph R.

    2012-01-01

    Associative accounts of the etiology of phobias have been criticized because of numerous cases of phobias in which the client does not remember a relevant traumatic event (i.e., Pavlovian conditioning trial), instructions, or vicarious experience with the phobic object. In three lick suppression experiments with rats as subjects, we modeled an associative account of such fears. Experiment 1 assessed stimulus-response (S-R) associations in first-order fear conditioning. After behaviorally complete devaluation of the unconditioned stimulus, the target stimulus still produced strong conditioned responses, suggesting that an S-R association had been formed and that this association was not significantly affected when the outcome was devalued through unsignaled presentations of the unconditioned stimulus. Experiments 2 and 3 examined extinction and recovery of S-R associations. Experiment 2 showed that extinguished S-R associations returned when testing occurred outside of the extinction context (i.e., renewal) and Experiment 3 found that a long delay between extinction and testing also produced a return of the extinguished S-R associations (i.e., spontaneous recovery). These experiments suggest that fears for which people cannot recall a cause are explicable in an associative framework, and indicate that those fears are susceptible to relapse after extinction treatment just like stimulus-outcome (S-O) associations. PMID:21496503

  12. S-R associations, their extinction, and recovery in an animal model of anxiety: a new associative account of phobias without recall of original trauma.

    PubMed

    Laborda, Mario A; Miller, Ralph R

    2011-06-01

    Associative accounts of the etiology of phobias have been criticized because of numerous cases of phobias in which the client does not remember a relevant traumatic event (i.e., Pavlovian conditioning trial), instructions, or vicarious experience with the phobic object. In three lick suppression experiments with rats as subjects, we modeled an associative account of such fears. Experiment 1 assessed stimulus-response (S-R) associations in first-order fear conditioning. After behaviorally complete devaluation of the unconditioned stimulus, the target stimulus still produced strong conditioned responses, suggesting that an S-R association had been formed and that this association was not significantly affected when the outcome was devalued through unsignaled presentations of the unconditioned stimulus. Experiments 2 and 3 examined extinction and recovery of S-R associations. Experiment 2 showed that extinguished S-R associations returned when testing occurred outside of the extinction context (i.e., renewal) and Experiment 3 found that a long delay between extinction and testing also produced a return of the extinguished S-R associations (i.e., spontaneous recovery). These experiments suggest that fears for which people cannot recall a cause are explicable in an associative framework, and indicate that those fears are susceptible to relapse after extinction treatment just like stimulus-outcome (S-O) associations. PMID:21496503

  13. A Harmonious Accounting Duo?

    ERIC Educational Resources Information Center

    Schapperle, Robert F.; Hardiman, Patrick F.

    1992-01-01

    Accountants have urged "harmonization" of standards between the Governmental Accounting Standards Board and the Financial Accounting Standards Board, recommending similar reporting of like transactions. However, varying display of similar accounting events does not necessarily indicate disharmony. The potential for problems because of differing…

  14. Do delivery routes of intranasally administered oxytocin account for observed effects on social cognition and behavior? A two-level model.

    PubMed

    Quintana, Daniel S; Alvares, Gail A; Hickie, Ian B; Guastella, Adam J

    2015-02-01

    Accumulating evidence demonstrates the important role of oxytocin (OT) in the modulation of social cognition and behavior. This has led many to suggest that the intranasal administration of OT may benefit psychiatric disorders characterized by social dysfunction, such as autism spectrum disorders and schizophrenia. Here, we review nasal anatomy and OT pathways to central and peripheral destinations, along with the impact of OT delivery to these destinations on social behavior and cognition. The primary goal of this review is to describe how these identified pathways may contribute to mechanisms of OT action on social cognition and behavior (that is, modulation of social information processing, anxiolytic effects, increases in approach-behaviors). We propose a two-level model involving three pathways to account for responses observed in both social cognition and behavior after intranasal OT administration and suggest avenues for future research to advance this research field. PMID:25526824

  15. Estimating the evolution of atrazine concentration in a fractured sandstone aquifer using lumped-parameter models and taking land-use into account

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Gallé, T.; Bayerle, M.; Pittois, D.; Braun, C.; El Khabbaz, H.; Maloszewski, P.; Elsner, M.

    2012-04-01

    The European water framework directive and the groundwater directive require member states to identify water bodies at risk and assess the significance of increasing trends in pollutant concentrations. For groundwater bodies, estimating the time to trend reversal or the pollution potential of the different sources present in the catchment requires a sound understanding of the hydraulic behaviour of the aquifer. Although numerical groundwater models can theoretically be used for such forecasts, their calibration remains problematic in many real-world cases. A more parsimonious lumped-parameter model was applied to predict the evolution of atrazine concentration in springs draining a fractured sandstone aquifer in Luxembourg. Despite a nationwide ban in 2005, spring water concentrations of both atrazine and its metabolite desethylatrazine still had not begun to decrease four years later. The transfer function of the model was calibrated using tritium measurements and modified to take into account the fact that whereas tritium is applied uniformly over the entire catchment, atrazine was only used in areas where cereals are grown. We could also show that sorption processes in the aquifer can be neglected and that including pesticide degradation does not modify the shape of the atrazine breakthrough, but only affects the magnitude of the predicted spring water concentration. Results indicate that, due to the large hydraulic inertia of the aquifer, trend reversal should not be expected before 2018.
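
    The core of a lumped-parameter model is a convolution of the input history with a transit-time distribution g(tau); the land-use modification described above amounts to weighting the input by the fraction of the catchment where atrazine was actually applied. A sketch using an exponential-model g for illustration only, not the calibrated Luxembourg model:

        import numpy as np

        def spring_concentration(c_in, mean_tt, dt, applied_fraction=1.0):
            """c_in: input concentration series (spacing dt); mean_tt: mean
            transit time (same units as dt); applied_fraction: area weighting."""
            tau = np.arange(len(c_in)) * dt
            g = np.exp(-tau / mean_tt)
            g /= g.sum()                       # discrete transit-time weights
            return applied_fraction * np.convolve(c_in, g)[:len(c_in)]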

  16. How to conduct a proper sensitivity analysis in life cycle assessment: taking into account correlations within LCI data and interactions within the LCA calculation model.

    PubMed

    Wei, Wei; Larrey-Lassalle, Pyrene; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2015-01-01

    Sensitivity analysis (SA) is a significant tool for studying the robustness of results and their sensitivity to uncertainty factors in life cycle assessment (LCA). It highlights the most important set of model parameters to determine whether data quality needs to be improved, and to enhance interpretation of results. Interactions within the LCA calculation model and correlations within Life Cycle Inventory (LCI) input parameters are two main issues in the LCA calculation process. Here we propose a methodology for conducting a proper SA which takes into account the effects of these two issues. This study first presents the SA in an uncorrelated case, comparing local and independent global sensitivity analysis. Independent global sensitivity analysis aims to analyze the variability of results due to the variation of input parameters over the whole domain of uncertainty, together with interactions among input parameters. We then apply a dependent global sensitivity approach that makes minor modifications to traditional Sobol indices to address the correlation issue. Finally, we propose some guidelines for choosing the appropriate SA method depending on the characteristics of the model and the goals of the study. Our results clearly show that the choice of sensitivity methods should be made according to the magnitude of uncertainty and the degree of correlation. PMID:25436503
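
    For the uncorrelated baseline case, first-order Sobol indices can be estimated with the classic pick-and-freeze scheme; the paper's contribution modifies such indices for correlated inputs, which this generic sketch does not attempt.

        import numpy as np

        def sobol_first_order(f, k, n=10000, seed=0):
            """f: model taking an (n, k) array of inputs in [0, 1), returning (n,)."""
            rng = np.random.default_rng(seed)
            A, B = rng.random((n, k)), rng.random((n, k))
            yA, yB = f(A), f(B)
            var = np.concatenate([yA, yB]).var()
            S1 = np.empty(k)
            for i in range(k):
                ABi = A.copy()
                ABi[:, i] = B[:, i]            # resample only input x_i
                S1[i] = np.mean(yB * (f(ABi) - yA)) / var
            return S1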

  17. Towards a Best Practice Approach in PBPK Modeling: Case Example of Developing a Unified Efavirenz Model Accounting for Induction of CYPs 3A4 and 2B6.

    PubMed

    Ke, A; Barter, Z; Rowland-Yeo, K; Almond, L

    2016-07-01

    In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug-drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0-1.7-fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz. PMID:27435752

  18. Towards a Best Practice Approach in PBPK Modeling: Case Example of Developing a Unified Efavirenz Model Accounting for Induction of CYPs 3A4 and 2B6

    PubMed Central

    Ke, A; Barter, Z; Rowland‐Yeo, K

    2016-01-01

    In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug‐drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0–1.7‐fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz. PMID:27435752

  19. JSC interactive basic accounting system

    NASA Technical Reports Server (NTRS)

    Spitzer, J. F.

    1978-01-01

    Design concepts for an interactive basic accounting system (IBAS) are considered in terms of selecting the design option which provides the best response at the lowest cost. Modeling the IBAS workload and applying this workload to a U1108 EXEC 8 based system using both a simulation model and the real system is discussed.

  20. Information-Theoretic Benchmarking of Land Surface Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and the boundary conditions that describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
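
    The information measures involved can be estimated with simple histogram estimators; the benchmark then compares the information a model's output shares with the observations against the information available in the boundary conditions themselves. A generic mutual-information sketch (in nats), not the NLDAS analysis code:

        import numpy as np

        def mutual_information(x, y, bins=20):
            """Histogram estimate of I(x; y) for 1-D samples x, y."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))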

  1. Dynamical analysis of fluid lines coupled to mechanical systems taking into account fluid frequency-dependent damping and non-conventional constitutive models: part 1 - Modeling fluid lines

    NASA Astrophysics Data System (ADS)

    Catania, Giuseppe; Sorrentino, Silvio

    2015-01-01

    The design of hydraulic transmission systems for control and actuation requires accurate knowledge of their dynamic response: some standard techniques are known to obtain a consistent dynamic model of a fluid line, including the contributions of inertia, compressibility and friction. In this paper an efficient procedure is developed for simulating the dynamic response of a fluid line in both the frequency and time domains, focusing attention on the modal analysis of a discretized model, in view of coupling with mechanical systems. A two-dimensional approach is adopted, and the laminar-flow frequency-dependent friction is modeled using non-integer (fractional) order differential laws, which may improve the accuracy of the simulated responses in comparison with more traditional Newtonian models.
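
    Non-integer order differential laws of this kind are typically discretized with Grünwald-Letnikov weights, which generalize the binomial coefficients of an integer-order finite difference. A generic numerical sketch, not the authors' fluid-line model:

        import numpy as np

        def gl_fractional_derivative(x, alpha, dt):
            """Approximate d^alpha x/dt^alpha (0 < alpha < 1) on a uniform grid."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            w = np.empty(n)
            w[0] = 1.0
            for k in range(1, n):
                w[k] = w[k - 1] * (k - 1 - alpha) / k   # recursive GL weights
            return np.array([np.dot(w[:j + 1], x[j::-1])
                             for j in range(n)]) / dt**alpha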

  2. A Single-Level Tunnel Model to Account for Electrical Transport through Single Molecule- and Self-Assembled Monolayer-based Junctions

    PubMed Central

    Garrigues, Alvar R.; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R.; Thompon, Damien; del Barco, Enrique; Nijhuis, Christian A.

    2016-01-01

    We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction. PMID:27216489

  3. A Single-Level Tunnel Model to Account for Electrical Transport through Single Molecule- and Self-Assembled Monolayer-based Junctions.

    PubMed

    Garrigues, Alvar R; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R; Thompson, Damien; Del Barco, Enrique; Nijhuis, Christian A

    2016-01-01

    We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction. PMID:27216489

  4. A Single-Level Tunnel Model to Account for Electrical Transport through Single Molecule- and Self-Assembled Monolayer-based Junctions

    NASA Astrophysics Data System (ADS)

    Garrigues, Alvar R.; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R.; Thompson, Damien; Del Barco, Enrique; Nijhuis, Christian A.

    2016-05-01

    We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction.
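
    The single-level model described in the three records above reduces to a Landauer integral: a Lorentzian transmission of width Gamma centered at the level energy, weighted by the difference of thermally broadened Fermi functions over the bias window. An illustrative numerical sketch of that standard form (current returned in units of 2e/h times eV); the grid bounds and parameter values are assumptions:

        import numpy as np

        def single_level_current(V, eps0, gamma_L, gamma_R, T):
            """V: bias (V); eps0: level energy (eV); gamma_L/R: couplings (eV);
            T: temperature (K)."""
            kB = 8.617e-5                          # Boltzmann constant (eV/K)
            E = np.linspace(-2.0, 2.0, 4001)       # energy grid (eV)
            gamma = gamma_L + gamma_R
            trans = gamma_L * gamma_R / ((E - eps0) ** 2 + (gamma / 2.0) ** 2)
            fermi = lambda mu: 1.0 / (1.0 + np.exp((E - mu) / (kB * T)))
            # The thermal broadening of the Fermi functions produces the
            # temperature dependence of the current discussed in the abstract.
            integrand = trans * (fermi(V / 2.0) - fermi(-V / 2.0))
            return np.sum(integrand) * (E[1] - E[0])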

  5. User's guide for RIV2; a package for routing and accounting of river discharge for a modular, three-dimensional, finite-difference, ground- water flow model

    USGS Publications Warehouse

    Miller, Roger S.

    1988-01-01

    RIV2 is a package for the U.S. Geological Survey's modular, three-dimensional, finite-difference, groundwater flow model developed by M. G. McDonald and A. W. Harbaugh that simulates river-discharge routing. RIV2 replaces RIV1, the original river package used in the model. RIV2 preserves the basic logic of RIV1, but better represents river-discharge routing. The main features of RIV2 are: (1) The river system is divided into reaches and simulated river discharge is routed from one node to the next. (2) Inflow (river discharge) entering the upstream end of a reach can be specified. (3) More than one river can be represented at one node and rivers can cross, as when representing a siphon. (4) The quantity of leakage to or from the aquifer at a given node is proportional to the hydraulic-head difference between that specified for the river and that calculated for the aquifer. Also, the quantity of leakage to the aquifer at any node can be limited by the user and, within this limit, the maximum leakage to the aquifer is the discharge available in the river. This feature allows for the simulation of intermittent rivers and drains that have no discharge routed to their upstream reaches. (5) An accounting of river discharge is maintained. Neither stage-discharge relations nor storage in the river or river banks is simulated. (USGS)
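
    Feature (4), the leakage rule, compresses to a few lines: leakage is proportional to the head difference, and flow out of the river is capped both by a user-specified limit and by the discharge actually available in the reach, which is what permits intermittent rivers. Hypothetical names; a sketch of the documented logic:

        def reach_leakage(conductance, h_river, h_aquifer, available, user_limit):
            """Positive q means the river loses water to the aquifer."""
            q = conductance * (h_river - h_aquifer)
            if q > 0.0:
                q = min(q, user_limit, available)  # cannot leak more than flows
            return q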

  6. Accountability: A Mosaic Image

    ERIC Educational Resources Information Center

    Turner, Teri

    1977-01-01

    The problems involved in definition, implementation and control of accountability processes are discussed. It is stated that "...emotional involvement in accountability is one of the most difficult aspects to deal with, the chief emotion being fear". (Author/RW)

  7. LMAL Accounting Office 1936

    NASA Technical Reports Server (NTRS)

    1936-01-01

    Accounting Office: The Langley Memorial Aeronautical Laboratory's accounting office, 1936, with photographs of the Wright brothers on the wall. Although the Lab was named after Samuel P. Langley, most of the NACA staff held the Wrights as their heroes.

  8. Managerial Accounting. Study Guide.

    ERIC Educational Resources Information Center

    Plachta, Leonard E.

    This self-instructional study guide is part of the materials for a college-level programmed course in managerial accounting. The study guide is intended for use by students in conjunction with a separate textbook, Horngren's "Accounting for Management Control: An Introduction," and a workbook, Curry's "Student Guide to Accounting for Management…

  9. The Accounting Capstone Problem

    ERIC Educational Resources Information Center

    Elrod, Henry; Norris, J. T.

    2012-01-01

    Capstone courses in accounting programs bring students experiences integrating across the curriculum (University of Washington, 2005) and offer unique (Sanyal, 2003) and transformative experiences (Sill, Harward, & Cooper, 2009). Students take many accounting courses without preparing complete sets of financial statements. Accountants not only…

  10. Accounting & Computing Curriculum Guide.

    ERIC Educational Resources Information Center

    Avani, Nathan T.; And Others

    This curriculum guide consists of materials for use in teaching a competency-based accounting and computing course that is designed to prepare students for employability in the following occupational areas: inventory control clerk, invoice clerk, payroll clerk, traffic clerk, general ledger bookkeeper, accounting clerk, account information clerk,…

  11. Intelligent Accountability in Education

    ERIC Educational Resources Information Center

    O'Neill, Onora

    2013-01-01

    Systems of accountability are "second order" ways of using evidence of the standard to which "first order" tasks are carried out for a great variety of purposes. However, more accountability is not always better, and processes of holding to account can impose high costs without securing substantial benefits. At their worst,…

  12. Accounting Education in Crisis

    ERIC Educational Resources Information Center

    Turner, Karen F.; Reed, Ronald O.; Greiman, Janel

    2011-01-01

    Almost on a daily basis new accounting rules and laws are put into use, creating information that must be known and learned by the accounting faculty and then introduced to and understood by the accounting student. Even with the 150 hours of education now required for CPA licensure, it is impossible to teach and learn all there is to learn. Over…

  13. Viscoplastic Model Development to Account for Strength Differential: Application to Aged Inconel 718 at Elevated Temperature. Degree awarded by Pennsylvania State Univ., 2000

    NASA Technical Reports Server (NTRS)

    Iyer, Saiganesh; Lerch, Brad (Technical Monitor)

    2001-01-01

    The magnitudes of the yield and flow stresses in aged Inconel 718 are observed to be different in tension and compression. This phenomenon, called the strength differential (SD), contradicts the metal plasticity axiom that the second deviatoric stress invariant alone is sufficient for representing yield and flow. Apparently, at least one of the other two stress invariants is also significant. A unified viscoplastic model was developed that is able to account for the SD effect in aged Inconel 718. Building this model involved both theory and experiments. First, a general threshold function was proposed that depends on all three stress invariants, and then the flow and evolution laws were developed using a potential-based thermodynamic framework. Judiciously chosen shear and axial tests were conducted to characterize the material. Shear tests involved monotonic loading, relaxation, and creep tests with different loading rates and load levels. The axial tests were tension and compression tests that resulted in sufficiently large inelastic strains. All tests were performed at 650 C. The viscoplastic material parameters were determined by optimizing the fit to the shear tests, during which the first and the third stress invariants remained zero. The threshold surface parameters were then fit to the tension and compression test data. An experimental procedure was established to quantify the effect of each stress invariant on inelastic deformation. This requires conducting tests with nonproportional three-dimensional load paths. Validation of the model was done using biaxial tests on tubular specimens of aged Inconel 718 using proportional and nonproportional axial-torsion loading. These biaxial tests also helped to determine the most appropriate form of the threshold function; that is, how to combine the stress invariants. Of the set of trial threshold functions, the ones that incorporated the third stress invariant gave the best predictions. However, inclusion of the first
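
    For context, the three stress invariants referred to above have the standard definitions (this is textbook background, not the paper's specific threshold function):

        \[
        I_1 = \operatorname{tr}(\boldsymbol{\sigma}), \qquad
        J_2 = \tfrac{1}{2}\,\mathbf{s}:\mathbf{s}, \qquad
        J_3 = \det(\mathbf{s}), \qquad
        \mathbf{s} = \boldsymbol{\sigma} - \tfrac{1}{3} I_1\,\mathbf{I}
        \]

    Classical J2 plasticity makes yield depend on \(J_2\) alone, which forces equal yield stresses in tension and compression; a strength differential therefore requires the threshold function to depend on \(I_1\) and/or \(J_3\) as well.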

  14. Combining Statistical and Ensemble Streamflow Predictions to Cope with Consensus Forecast

    NASA Astrophysics Data System (ADS)

    Mirfenderesgi, G.; Najafi, M.; Moradkhani, H.

    2012-12-01

    Monthly and seasonal water supply outlooks are used for water resource planning and management, including industrial and agricultural water allocation as well as reservoir operations. Currently, consensus forecasts are jointly issued by the operational agencies in the Western US based on statistical regression equations and ensemble streamflow predictions. However, an objective method is needed to combine the forecasts from these methods. In this study, monthly and seasonal streamflow predictions are generated from various hydrologic and statistical simulations, including the Variable Infiltration Capacity (VIC) model, the Sacramento Soil Moisture Accounting model (SAC-SMA), the Precipitation Runoff Modeling System (PRMS), the Conceptual Hydrologic MODel (HYMOD), and Principal and Independent Component Regression (PCR and ICR). The results are optimally combined by several objective multi-modeling methods. The increase in forecast accuracy is assessed in comparison with the best and worst available predictions. The precision of each multi-model method is also estimated. The study is performed over Lake Granby, located in the headwaters of the Colorado River Basin. Overall, the results show improvements in both monthly and seasonal forecasts as compared with single-model simulations.
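
    As an illustrative sketch only (the study's specific multi-modeling methods are not reproduced here), one objective way to combine several model forecasts is to fit least-squares weights against observations over a training period:

        import numpy as np

        def combination_weights(forecasts, observed):
            """Least-squares combination weights; columns of `forecasts` are
            individual models (e.g., VIC, SAC-SMA, PRMS, ...), `observed` is streamflow."""
            w, *_ = np.linalg.lstsq(forecasts, observed, rcond=None)
            return w

        F = np.array([[1.0, 1.2], [2.0, 1.8], [3.0, 3.3]])  # two models, three periods (toy data)
        y = np.array([1.1, 1.9, 3.1])
        w = combination_weights(F, y)
        combined = F @ w  # multi-model forecast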

  15. MREdictor: a two-step dynamic interaction model that accounts for mRNA accessibility and Pumilio binding accurately predicts microRNA targets

    PubMed Central

    Incarnato, Danny; Neri, Francesco; Diamanti, Daniela; Oliviero, Salvatore

    2013-01-01

    The prediction of pairing between microRNAs (miRNAs) and the miRNA recognition elements (MREs) on mRNAs is expected to be an important tool for understanding gene regulation. Here, we show that mRNAs that contain Pumilio recognition elements (PRE) in the proximity of predicted miRNA-binding sites are more likely to form stable secondary structures within their 3′-UTR, and we demonstrated using a PUM1 and PUM2 double knockdown that Pumilio proteins are general regulators of miRNA accessibility. On the basis of these findings, we developed a computational method for predicting miRNA targets that accounts for the presence of PRE in the proximity of seed-match sequences within poorly accessible structures. Moreover, we implement the miRNA-MRE duplex pairing as a two-step model, which better fits the available structural data. This algorithm, called MREdictor, allows for the identification of miRNA targets in poorly accessible regions and is not restricted to a perfect seed-match; these features are not present in other computational prediction methods. PMID:23863844

  16. Learning by Doing: Concepts and Models for Service-Learning in Accounting. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Rama, D. V., Ed.

    This volume is part of a series of 18 monographs on service learning and the academic disciplines. It is designed to (1) develop a theoretical framework for service learning in accounting consistent with the goals identified by accounting educators and the recent efforts toward curriculum reform, and (2) describe specific active learning…

  17. Random regression models to account for the effect of genotype by environment interaction due to heat stress on the milk yield of Holstein cows under tropical conditions.

    PubMed

    Santana, Mário L; Bignardi, Annaiza Braga; Pereira, Rodrigo Junqueira; Menéndez-Buxadera, Alberto; El Faro, Lenira

    2016-02-01

    The present study had the following objectives: to compare random regression models (RRM) considering the time-dependent (days in milk, DIM) and/or temperature × humidity-dependent (THI) covariate for genetic evaluation; to identify the effect of genotype by environment interaction (G×E) due to heat stress on milk yield; and to quantify the loss of milk yield due to heat stress across lactation of cows under tropical conditions. A total of 937,771 test-day records from 3603 first lactations of Brazilian Holstein cows obtained between 2007 and 2013 were analyzed. An important reduction in milk yield due to heat stress was observed for THI values above 66 (-0.23 kg/day/THI). Three phases of milk yield loss were identified during lactation, the most damaging one at the end of lactation (-0.27 kg/day/THI). Using the most complex RRM, the additive genetic variance could be altered simultaneously as a function of both DIM and THI values. This model could be recommended for the genetic evaluation taking into account the effect of G×E. The response to selection in the comfort zone (THI ≤ 66) is expected to be higher than that obtained in the heat stress zone (THI > 66) of the animals. The genetic correlations between milk yield in the comfort and heat stress zones were less than unity at opposite extremes of the environmental gradient. Thus, the best animals for milk yield in the comfort zone are not necessarily the best in the zone of heat stress and, therefore, G×E due to heat stress should not be neglected in the genetic evaluation. PMID:26155774
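
    The loss rates reported above translate directly into a simple piecewise-linear decline; a small sketch using only the numbers quoted in the abstract:

        def heat_stress_loss(thi, rate=0.23, threshold=66):
            """Expected milk-yield loss (kg/day) above the THI threshold; the
            abstract reports the rate reaching about 0.27 kg/day/THI late in lactation."""
            return max(0.0, thi - threshold) * rate

        heat_stress_loss(72)  # -> 1.38 kg/day lost at THI 72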

  18. The Ensemble Framework for Flash Flood Forecasting: Global and CONUS Applications

    NASA Astrophysics Data System (ADS)

    Flamig, Z.; Vergara, H. J.; Clark, R. A.; Gourley, J. J.; Kirstetter, P. E.; Hong, Y.

    2015-12-01

    The Ensemble Framework for Flash Flood Forecasting (EF5) is a distributed hydrologic modeling framework combining water balance components such as the Variable Infiltration Capacity (VIC) and Sacramento Soil Moisture Accounting (SAC-SMA) models with kinematic wave channel routing. The Snow-17 snowpack model is included as an optional component in EF5 for basins where snow impacts are important. EF5 also contains the Differential Evolution Adaptive Metropolis (DREAM) parameter estimation scheme for model calibration. EF5 is designed to be user friendly, and as such its training has been developed into a weeklong course. This course has been tested in modeling workshops held in Namibia and Mexico. EF5 has also been applied to specialized applications, including the Flooded Locations and Simulated Hydrographs (FLASH) project. FLASH aims to provide flash flood monitoring and forecasting over the CONUS using Multi-Radar Multi-Sensor precipitation forcing. Using the extensive field measurements database from the 10,000 USGS measurement locations across the CONUS, parameters were developed for the kinematic wave routing in FLASH. This presentation will highlight FLASH performance over the CONUS on basins less than 1,000 km2 and discuss the development of a simulated streamflow climatology over the CONUS for data mining applications. A global application of EF5 has also been developed using satellite-based precipitation measurements combined with numerical weather prediction forecasts to produce flood and impact forecasts. The performance of this global system will be assessed and future plans detailed.

  19. Do we need to account for scenarios of land use/land cover changes in regional climate modeling and impact studies?

    NASA Astrophysics Data System (ADS)

    Strada, Susanna; de Noblet-Ducoudré, Nathalie; Perrin, Mathieu; Stefanon, Marc

    2016-04-01

    By modifying the Earth's natural landscapes, humans have introduced an imbalance in the Earth System's energy, water and emission fluxes via land-use and land-cover changes (LULCCs). Through land-atmosphere interactions, LULCCs influence weather, air quality and climate at different scales, from regional/local (a few tens of kilometres) (Pielke et al., 2011) to global (a few hundred kilometres) (Mahmood et al., 2014). Therefore, in the context of climate change, LULCCs will play a role locally/regionally in altering weather/atmospheric conditions. In addition to the global climate change impacts, LULCCs will possibly induce further changes in the functioning of terrestrial ecosystems and thereby affect adaptation strategies. If LULCCs influence weather/atmospheric conditions, could land use planning alter climate conditions and ease the impact of climate change by wisely shaping urban and rural landscapes? Nowadays, numerical land-atmosphere modelling makes it possible to assess LULCC impacts at different scales (e.g., Marshall et al., 2003; de Noblet-Ducoudré et al., 2011). However, most climate change scenarios used to force impact models result from downscaling procedures that do not account for LULCCs (e.g., Jacob et al., 2014). Therefore, if numerical modelling may help in tackling the discussion about LULCCs, do existing LULCC scenarios encompass realistic changes in terms of land use planning? In the present study, we apply a surface model to compare projected LULCC scenarios over France and to assess their impacts on surface fluxes (i.e., water, heat and carbon dioxide fluxes) and on water and carbon storage in soils. To depict future LULCCs in France, we use RCP scenarios from the IPCC AR5 report (Moss et al., 2011). LULCCs encompassed in RCPs are discussed in terms of: (a) their impacts on water and energy balance over France, and (b) their feasibility in the framework of land use planning in France. This study is the first step to quantify the sensitivity of land

  1. Improving Hydrologic Data Assimilation by a Multivariate Particle Filter-Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Yan, H.; DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Data assimilation (DA) is a popular method for merging information from multiple sources (i.e., models and remote sensing), leading to improved hydrologic prediction. With the increasing availability of satellite observations (such as soil moisture) in recent years, DA is emerging in operational forecast systems. Although these techniques have seen widespread application, developmental research has continued to further refine their effectiveness. This presentation will examine potential improvements to the Particle Filter (PF) through the inclusion of multivariate correlation structures. Applications of the PF typically rely on univariate DA schemes (such as assimilating the observed discharge at the outlet), and multivariate schemes generally ignore the spatial correlation of the observations. In this study, a multivariate DA scheme is proposed by introducing geostatistics into the newly developed particle filter with Markov chain Monte Carlo (PF-MCMC) method. This new method is assessed in a case study over one of the basins with natural hydrologic processes in the Model Parameter Estimation Experiment (MOPEX), located in Arizona. The multivariate PF-MCMC method is used to assimilate the Advanced Scatterometer (ASCAT) grid (12.5 km) soil moisture retrievals and the observed streamflow at five gages (four inlet and one outlet) into the Sacramento Soil Moisture Accounting (SAC-SMA) model at the same scale (12.5 km), leading to greater skill in hydrologic predictions.
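
    To make the multivariate idea concrete, a minimal sketch of a particle-filter weight update with correlated observations (illustrative only; the actual PF-MCMC adds an MCMC move step, and the geostatistical model would supply the observation covariance):

        import numpy as np

        def update_weights(weights, predicted_obs, obs, obs_cov):
            """Gaussian likelihood update; obs_cov encodes the spatial
            correlation between assimilated observations (soil moisture + gages)."""
            innov = obs - predicted_obs                      # (n_particles, n_obs)
            inv_cov = np.linalg.inv(obs_cov)
            loglik = -0.5 * np.einsum('ij,jk,ik->i', innov, inv_cov, innov)
            w = weights * np.exp(loglik - loglik.max())      # subtract max to avoid underflow
            return w / w.sum()

        w0 = np.full(3, 1 / 3)                               # three particles (toy example)
        pred = np.array([[0.9, 1.1], [1.2, 0.8], [2.0, 2.1]])
        obs = np.array([1.0, 1.0])
        cov = np.array([[0.10, 0.05], [0.05, 0.10]])         # correlated observation errors
        w = update_weights(w0, pred, obs, cov)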

  2. Runoff changes in Czech headwater regions after deforestation induced by acid rains

    NASA Astrophysics Data System (ADS)

    Buchtele, J.; Buchtelova, M.; Hrkal, Z.; Koskova, R.

    2003-04-01

    Tendencies in the water regime resulting from land-use change represent an important subject for research and, in the region of the so-called Black Triangle at the borders of the Czech Republic, Germany and Poland, an urgent practical problem. In particular, the extensive deforestation of Czech hilly basins induced by acid rains, which appeared in the seventies and eighties, requires attention. Discussions among professionals and the public, sometimes emotional in character, took place after the large floods on the rivers Odra and Morava in 1997 and in the Vltava and Elbe river basins in August 2002. The deforestation induced by acid rains in Central Europe has been considered an important contribution to the disastrous character of these floods. Simulations of the rainfall-runoff process in several catchments and experimental basins in two distinct headwater regions along the German border, with different extents of deforestation, have been carried out using daily time series up to 40 years long. The outputs of two hydrological models of different structure have been compared in these investigations: the conceptual model SAC-SMA (Sacramento Soil Moisture Accounting) and the physically based 1-D model BROOK90. The differences between observed and simulated discharge, which could show tendencies in the runoff, have been followed. They indicate an increase in runoff after deforestation.

  3. Managerial accounting applications in radiology.

    PubMed

    Lexa, Frank James; Mehta, Tushar; Seidmann, Abraham

    2005-03-01

    We review the core issues in managerial accounting for radiologists. We introduce the topic and then explore its application to diagnostic imaging. We define key terms such as fixed cost, variable cost, marginal cost, and marginal revenue and discuss their role in understanding the operational and financial implications for a radiology facility by using a cost-volume-profit model. Our work places particular emphasis on the role of managerial accounting in understanding service costs, as well as how it assists executive decision making. PMID:17411809
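
    A cost-volume-profit calculation reduces to simple arithmetic; a toy sketch with made-up numbers (the article itself presents no data):

        fixed_cost = 500_000.0            # e.g., annual equipment lease (hypothetical)
        price_per_exam = 450.0            # marginal revenue per study (hypothetical)
        variable_cost_per_exam = 150.0    # marginal cost per study (hypothetical)

        # Break-even volume: fixed costs divided by the contribution margin per exam
        breakeven = fixed_cost / (price_per_exam - variable_cost_per_exam)  # ~1667 studies/year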

  4. Zebrafish Seizure Model Identifies p,p′-DDE as the Dominant Contaminant of Fetal California Sea Lions That Accounts for Synergistic Activity with Domoic Acid

    PubMed Central

    Tiedeken, Jessica A.; Ramsdell, John S.

    2010-01-01

    Background Fetal poisoning of California sea lions (CSLs; Zalophus californianus) has been associated with exposure to the algal toxin domoic acid. These same sea lions accumulate a mixture of persistent environmental contaminants including pesticides and industrial products such as polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs). Developmental exposure to the pesticide dichlorodiphenyltrichloroethane (DDT) and its stable metabolite 1,1-bis-(4-chlorophenyl)-2,2-dichloroethene (p,p′-DDE) has been shown to enhance domoic acid–induced seizures in zebrafish; however, the contribution of other co-occurring contaminants is unknown. Objective We formulated a mixture of contaminants to include PCBs, PBDEs, hexachlorocyclohexane (HCH), and chlordane at levels matching those reported for fetal CSL blubber to determine the impact of co-occurring persistent contaminants with p,p′-DDE on chemically induced seizures in zebrafish as a model for the CSLs. Methods Embryos were exposed (6–30 hr postfertilization) to p,p′-DDE in the presence or absence of a defined contaminant mixture prior to neurodevelopment via either bath exposure or embryo yolk sac microinjection. After brain maturation (7 days postfertilization), fish were exposed to a chemical convulsant, either pentylenetetrazole or domoic acid; resulting seizure behavior was then monitored and analyzed for changes, using cameras and behavioral tracking software. Results Induced seizure behavior did not differ significantly between subjects with embryonic exposure to a contaminant mixture and those exposed to p,p′-DDE only. Conclusion These studies demonstrate that p,p′-DDE—in the absence of PCBs, HCH, chlordane, and PBDEs that co-occur in fetal sea lions—accounts for the synergistic activity that leads to greater sensitivity to domoic acid seizures. PMID:20368122

  5. Computerizing the Accounting Curriculum.

    ERIC Educational Resources Information Center

    Nash, John F.; England, Thomas G.

    1986-01-01

    Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)

  6. PLATO IV Accountancy Index.

    ERIC Educational Resources Information Center

    Pondy, Dorothy, Comp.

    The catalog was compiled to assist instructors in planning community college and university curricula using the 48 computer-assisted accountancy lessons available on PLATO IV (Programmed Logic for Automatic Teaching Operation) for first semester accounting courses. It contains information on lesson access, lists of acceptable abbreviations for…

  7. The Choreography of Accountability

    ERIC Educational Resources Information Center

    Webb, P. Taylor

    2006-01-01

    The prevailing performance discourse in education claims school improvements can be achieved through transparent accountability procedures. The article identifies how teachers generate performances of their work in order to satisfy accountability demands. By identifying sources of teachers' knowledge that produce choreographed performances, I…

  8. Teaching Accounting with Computers.

    ERIC Educational Resources Information Center

    Shaoul, Jean

    This paper addresses the numerous ways that computers may be used to enhance the teaching of accounting and business topics. It focuses on the pedagogical use of spreadsheet software to improve the conceptual coverage of accounting principles and practice, increase student understanding by involvement in the solution process, and reduce the amount…

  9. Leadership for Accountability.

    ERIC Educational Resources Information Center

    Lashway, Larry

    2001-01-01

    This document explores issues of leadership for accountability and reviews five resources on the subject. These include: (1) "Accountability by Carrots and Sticks: Will Incentives and Sanctions Motivate Students, Teachers, and Administrators for Peak Performance?" (Larry Lashway); (2) "Organizing Schools for Teacher Learning" (Judith Warren…

  10. Accountability in Education.

    ERIC Educational Resources Information Center

    Chippendale, P. R., Ed.; Wilkes, Paula V., Ed.

    This collection of papers delivered at a conference on accountability held at Darling Downs Institute of Advanced Education in Australia examines the meaning of accountability in education for teachers, lecturers, government, parents, administrators, education authorities, and the society at large. In Part 1, W. G. Walker attempts to answer the…

  11. The Accountability Illusion: Georgia

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  12. The Accountability Illusion: Kansas

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  13. The Evolution of Accountability

    ERIC Educational Resources Information Center

    Webb, P. Taylor

    2011-01-01

    Campus 2020: Thinking ahead is a policy in British Columbia (BC), Canada, that attempted to hold universities accountable to performance. Within, I demonstrate how this Canadian articulation of educational accountability intended to develop "governmentality constellations" to control the university and regulate its knowledge output. This research…

  14. The Accountability Illusion: Texas

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  15. Responsible and accountable.

    PubMed

    Woodrow, Philip

    2006-03-01

    Healthcare assistants are valuable members of the multi-disciplinary team, using many skills outlined in previous articles in this series. But anyone exceeding the limits of their skills can cause harm and may be called to account. This article explains how everyone is accountable. PMID:16538993

  16. The Accountability Illusion: Minnesota

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  17. Accountability and values in radically collaborative research.

    PubMed

    Winsberg, Eric; Huebner, Bryce; Kukla, Rebecca

    2014-06-01

    This paper discusses a crisis of accountability that arises when scientific collaborations are massively epistemically distributed. We argue that social models of epistemic collaboration, which are social analogs to what Patrick Suppes called a "model of the experiment," must play a role in creating accountability in these contexts. We also argue that these social models must accommodate the fact that the various agents in a collaborative project often have ineliminable, messy, and conflicting interests and values; any story about accountability in a massively distributed collaboration must therefore involve models of such interests and values and their methodological and epistemic effects. PMID:25051867

  18. Human Resource Accounting System

    ERIC Educational Resources Information Center

    Cerullo, Michael J.

    1974-01-01

    Main objectives of human resource accounting systems are to satisfy the informational demands made by investors and by operating managers. The paper's main concern is with the internal uses of a human asset system. (Author)

  19. A Structural Equation Modeling Investigation of the Theory of Planned Behavior Applied to Accounting Professors' Enforcement of Cheating Rules

    ERIC Educational Resources Information Center

    Brigham, Stephen Scott

    2010-01-01

    This dissertation concerns factors that influence accounting professors' formal enforcement of academic misconduct rules, using the theory of planned behavior ("TPB") as a theoretical framework. The theory posits that intentional behavior, such as enforcement, can be predicted by people's perceived behavioral control and…

  20. EPA’s ALPHA Model Fuel Economy Simulation and Refinements to Account for Fuel Economy Effects of a Vehicle’s Transient Operation and Overhead Needs

    EPA Science Inventory

    This paper will describe how ALPHA accounts for each type of fuel use overhead, using a variety of data from general vehicle and engine benchmarking, as well as data from special test procedures to characterize engine operation during the overhead conditions.

  1. Uncertainty calculation in the RIO air quality interpolation model and aggregation to yearly average and exceedance probability taking into account the temporal auto-correlation.

    NASA Astrophysics Data System (ADS)

    Maiheu, Bino; Nele, Veldeman; Janssen, Stijn; Fierens, Frans; Trimpeneers, Elke

    2010-05-01

    RIO is an operational air quality interpolation model developed by VITO and IRCEL-CELINE that produces hourly maps of different pollutant concentrations, such as O3, PM10 and NO2, measured in Belgium [1]. The RIO methodology consists of residual interpolation by Ordinary Kriging of the residuals of the measured concentrations and pre-determined trend functions which express the relation between land cover information derived from the CORINE dataset and measured time-averaged concentrations [2]. RIO is an important tool for the Flemish administration and is used, among other things, to report on the air quality status in Flanders to the European Union, as is required of each member state. We feel that good estimates of the uncertainty of the yearly average concentration maps and of the probability of norm exceedance are both as important as the values themselves. In this contribution we will discuss the uncertainties specific to the RIO methodology, with contributions both from the Ordinary Kriging technique and from the trend functions. Especially the parameterisation of the uncertainty w.r.t. the trend functions will be the key indicator for the degree of confidence the model puts into using land cover information for spatial interpolation of pollutant concentrations. Next, we will propose a method which enables us to calculate the uncertainty on the yearly average concentrations as well as on the number of exceedance days, taking into account the temporal auto-correlation of the concentration fields. It is clear that the autocorrelation will have a strong impact on the uncertainty estimation [3] of yearly averages. The method we propose is based on a Monte Carlo technique that generates an ensemble of interpolation maps with the correct temporal auto-correlation structure. From a generated ensemble, the calculation of the norm-exceedance probability at each interpolation location becomes quite straightforward. A comparison with the ad-hoc method proposed in [3], where
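
    A minimal sketch of the Monte Carlo idea described above, assuming an AR(1) temporal auto-correlation for the interpolation error (the operational RIO parameterisation may differ):

        import numpy as np

        rng = np.random.default_rng(0)
        n_members, n_hours, rho, sigma = 200, 365 * 24, 0.8, 5.0
        eps = rng.normal(0.0, sigma * np.sqrt(1 - rho**2), (n_members, n_hours))
        err = np.zeros_like(eps)
        for t in range(1, n_hours):                      # AR(1) errors with the desired autocorrelation
            err[:, t] = rho * err[:, t - 1] + eps[:, t]

        pm10 = 30.0 + err                                # hypothetical hourly PM10 ensemble at one location
        daily = pm10.reshape(n_members, 365, 24).mean(axis=2)
        exceed_days = (daily > 50.0).sum(axis=1)         # EU daily PM10 limit value
        p_norm_exceedance = (exceed_days > 35).mean()    # at most 35 exceedance days allowed per year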

  2. Water Accounting from Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Bastiaanssen, W. G.; Savenije, H.

    2014-12-01

    Water scarcity is increasing globally. This requires a more accurate management of the water resources at river basin scale and understanding of withdrawals and return flows; both naturally and man-induced. Many basins and their tributaries are, however, ungauged or poorly gauged. This hampers sound planning and monitoring processes. While certain countries have developed clear guidelines and policies on data observatories and data sharing, other countries and their basin organization still have to start on developing data democracies. Water accounting quantifies flows, fluxes, stocks and consumptive use pertaining to every land use class in a river basin. The objective is to derive a knowledge base with certain minimum information that facilitates decision making. Water Accounting Plus (WA+) is a new method for water resources assessment reporting (www.wateraccounting.org). While the PUB framework has yielded several deterministic models for flow prediction, WA+ utilizes remote sensing data of rainfall, evaporation (including soil, water, vegetation and interception evaporation), soil moisture, water levels, land use and biomass production. Examples will be demonstrated that show how remote sensing and hydrological models can be smartly integrated for generating all the required input data into WA+. A standard water accounting system for all basins in the world - with a special emphasis on data scarce regions - is under development. First results of using remote sensing measurements and hydrological modeling as an alternative to expensive field data sets, will be presented and discussed.

  3. Accounting for the environment.

    PubMed

    Lutz, E; Munasinghe, M

    1991-03-01

    Environmental awareness in the 1980s has led to efforts to improve the current UN System of National Accounts (SNA) for better measurement of the value of environmental resources when estimating income. National governments, the UN, the International Monetary Fund, and the World Bank are interested in solving this issue. The World Bank relies heavily on national aggregates in income accounts compiled by means of the SNA, which was published in 1968 and stressed gross domestic product (GDP). GDP measures mainly market activity, but it does not consider the consumption of natural capital, and indirectly inhibits sustained development. The deficiencies of the current method of accounting are the inconsistent treatment of manmade and natural capital, and the omission of natural resources and their depletion from balance sheets and of pollution cleanup costs from national income. In the calculation of GDP, pollution is overlooked and beneficial environmental inputs are valued at zero. The calculation of environmentally adjusted net domestic product (EDP) and environmentally adjusted net income (ENI) would lower income and growth rates, as the World Resources Institute found with respect to Indonesia for 1971-84. When depreciation of oil, timber, and topsoil was included, the net domestic product (NDP) growth rate was only 4%, compared with 7.1% for GDP. The World Bank has advocated environmental accounting since 1983 in SNA revisions. The 1989 revised Blue Book of the SNA takes environmental concerns into account. Relevant research is under way in Mexico and Papua New Guinea using the UN Statistical Office framework as a system for environmentally adjusted economic accounts that computes EDP and ENI and integrates environmental data with national accounts while preserving SNA concepts. PMID:12285741
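
    The adjustment itself is simple arithmetic; a toy sketch with illustrative numbers (not the Indonesian data cited above):

        gdp = 1000.0                        # conventional gross domestic product (toy value)
        manmade_depreciation = 100.0        # depreciation of produced capital (toy value)
        natural_depletion = 55.0            # oil, timber and topsoil drawn down (toy value)

        ndp = gdp - manmade_depreciation    # conventional net domestic product
        edp = ndp - natural_depletion       # environmentally adjusted net domestic product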

  4. Thinking about Accountability

    PubMed Central

    Deber, Raisa B.

    2014-01-01

    Accountability is a key component of healthcare reforms, in Canada and internationally, but there is increasing recognition that one size does not fit all. A more nuanced understanding begins with clarifying what is meant by accountability, including specifying for what, by whom, to whom and how. These papers arise from a Partnership for Health System Improvement (PHSI), funded by the Canadian Institutes of Health Research (CIHR), on approaches to accountability that examined accountability across multiple healthcare subsectors in Ontario. The partnership features collaboration among an interdisciplinary team, working with senior policy makers, to clarify what is known about best practices to achieve accountability under various circumstances. This paper presents our conceptual framework. It examines potential approaches (policy instruments) and postulates that their outcomes may vary by subsector depending upon (a) the policy goals being pursued, (b) governance/ownership structures and relationships and (c) the types of goods and services being delivered, and their production characteristics (e.g., contestability, measurability and complexity). PMID:25305385

  5. Reclaiming "Sense" from "Cents" in Accounting Education

    ERIC Educational Resources Information Center

    Dellaportas, Steven

    2015-01-01

    This essay adopts an interpretive methodology of relevant literature to explore the limitations of accounting education when it is taught purely as a technical practice. The essay proceeds from the assumption that conventional accounting education is captured by a positivistic neo-classical model of decision-making that draws on economic rationale…

  6. Integrated Approach to User Account Management

    NASA Technical Reports Server (NTRS)

    Kesselman, Glenn; Smith, William

    2007-01-01

    IT environments consist of both Windows and other platforms. Providing user account management for this model has become increasingly difficult. If Microsoft's Active Directory could be enhanced to extend a Windows identity for authentication services for Unix, Linux, Java and Macintosh systems, then an integrated approach to user account management could be realized.

  7. An Existentialist Account of Identity Formation.

    ERIC Educational Resources Information Center

    Bilsker, Dan

    1992-01-01

    Gives account of Marcia's identity formation model in language of existentialist philosophy. Examines parallels between ego-identity and existentialist approaches. Describes identity in terms of existentialist concepts of Heidegger and Sartre. Argues that existentialist account of identity formation has benefits of clarification of difficult…

  8. Educational Accountability in a Regulated Market

    ERIC Educational Resources Information Center

    Adams, Jacob E., Jr.; Hill, Paul T.

    2006-01-01

    Public debate about school choice is often polarized between those who favor and oppose total free markets in education. However, the serious intellectual work on choice focuses on more moderate alternatives that involve a mixture of public and private accountability. A regulated market model of educational accountability would mix government…

  9. Materials for Training Specialized Accounting Clerks

    ERIC Educational Resources Information Center

    McKitrick, Max O.

    1974-01-01

    To prepare instructional materials for training specialized accounting clerks, teachers must visit offices and make task analyses of these jobs utilizing the systems approach. Described are models developed for training these types of accounting clerks: computer control clerks, coupon clerks, internal auditing clerks, and statement clerks. (SC)

  10. 17 CFR 17.01 - Identification of special accounts, volume threshold accounts, and omnibus accounts.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... accounts, volume threshold accounts, and omnibus accounts. 17.01 Section 17.01 Commodity and Securities..., CLEARING MEMBERS, AND FOREIGN BROKERS § 17.01 Identification of special accounts, volume threshold accounts... in § 17.02(b). (b) Identification of volume threshold accounts. Each clearing member shall...

  11. 18 CFR 367.9040 - Account 904, Uncollectible accounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Account 904... GAS ACT Operation and Maintenance Expense Chart of Accounts § 367.9040 Account 904, Uncollectible accounts. This account must be charged with amounts sufficient to provide for losses from...

  12. Excel in the Accounting Curriculum: Perceptions from Accounting Professors

    ERIC Educational Resources Information Center

    Ramachandran Rackliffe, Usha; Ragland, Linda

    2016-01-01

    Public accounting firms emphasize the importance of accounting graduates being proficient in Excel. Since many accounting graduates often aspire to work in public accounting, a question arises as to whether there should be an emphasis on Excel in accounting education. The purpose of this paper is to specifically look at this issue by examining…

  13. A Pariah Profession? Some Student Perceptions of Accounting and Accountancy.

    ERIC Educational Resources Information Center

    Fisher, Roy; Murphy, Vivienne

    1995-01-01

    Existing literature and a survey of 106 undergraduate accounting students in the United Kingdom were analyzed for perceptions of the accounting profession and the academic discipline of accounting. Results suggest that among accounting and nonaccounting students alike, there exist coexisting perceptions of accounting as having high status and low…

  14. Mathematical modeling of liquid/liquid hollow fiber membrane contactor accounting for interfacial transport phenomena: Extraction of lanthanides as a surrogate for actinides

    SciTech Connect

    Rogers, J.D.

    1994-08-04

    This report is divided into two parts. The second part is divided into the following sections: experimental protocol; modeling the hollow fiber extractor using film theory; Graetz model of the hollow fiber membrane process; fundamental diffusive-kinetic model; and diffusive liquid membrane device-a rigorous model. The first part is divided into: membrane and membrane process-a concept; metal extraction; kinetics of metal extraction; modeling the membrane contactor; and interfacial phenomenon-boundary conditions-applied to membrane transport.

  15. Risk-Informed Monitoring, Verification and Accounting (RI-MVA). An NRAP White Paper Documenting Methods and a Demonstration Model for Risk-Informed MVA System Design and Operations in Geologic Carbon Sequestration

    SciTech Connect

    Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C.; Anderson, Richard M.

    2011-09-30

    This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.

  16. Planning for Accountability.

    ERIC Educational Resources Information Center

    Cuneo, Tim; Bell, Shareen; Welsh-Gray, Carol

    1999-01-01

    Through its Challenge 2000 program, Joint Venture: Silicon Valley Network's 21st Century Education Initiative has been working with K-12 schools to improve student performance in literature, math, and science. Clearly stated standards, appropriate assessments, formal monitoring, critical friends, and systemwide accountability are keys to success.…

  17. Accountability Measures Report, 2007

    ERIC Educational Resources Information Center

    North Dakota University System, 2007

    2007-01-01

    This document is a tool for demonstrating that the University System is meeting the "flexibility with accountability" expectations of SB 2003 passed by the 2001 Legislative Assembly. The 2007 report reflects some of the many ways North Dakota University System (NDUS) colleges and universities are developing the human capital needed to create a…

  18. Accountability Measures Report, 2006

    ERIC Educational Resources Information Center

    North Dakota University System, 2006

    2006-01-01

    This document is a valuable tool for demonstrating that the University System is meeting the "flexibility with accountability" expectations of SB 2003 passed by the 2001 Legislative Assembly. The 2006 report reflects some of the many ways North Dakota University System (NDUS) colleges and universities are developing the human capital needed to…

  19. Accounting Forms. Instructor's Handbook.

    ERIC Educational Resources Information Center

    Itter, Pat

    Supporting performance objective of the 16 V-TECS (Vocational-Technical Education Consortium of States) Bookkeeper Catalog, this instructor's manual contains copies of accounting forms which can be used to make spirit masters or transparencies. (This module is the first in a set of ten on bookkeeping [CE 019 480-489].) Twenty forms grouped under…

  20. Viewpoints on Accountability.

    ERIC Educational Resources Information Center

    Educational Innovators Press, Tucson, AZ.

    This booklet contains five papers which examine the activities, successes, and pitfalls encountered by educators who are introducing accountability techniques into instructional programs where they did not exist in the past. The papers are based on actual programs and offer possible solutions in the areas considered, which are 1) performance…

  1. Accounting 202, 302.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg.

    This teaching guide consists of guidelines for conducting two secondary-level introductory accounting courses. Intended for vocational business education students, the courses are designed to introduce financial principles and practices important to personal and business life, to promote development of clerical and bookkeeping skills sufficient…

  2. Accountability: A Rationale.

    ERIC Educational Resources Information Center

    Brademas, John

    1974-01-01

    The idea of accountability has by now been interpreted in ways which are different enough from one another to have permitted a certain ambiguity to creep into the notion in its present use within the educational community. The principal purpose of this report is, therefore, to try to set forth some clearer statement of what the idea of…

  3. Democracy, Accountability, and Education

    ERIC Educational Resources Information Center

    Levinson, Meira

    2011-01-01

    Educational standards, assessments, and accountability systems are of immense political moment around the world. But there is no developed theory exploring the role that these systems should play within a democratic polity in particular. On the one hand, well-designed standards are public goods, supported by assessment and accountability…

  4. Student Attendance Accounting Manual.

    ERIC Educational Resources Information Center

    Freitas, Joseph M.

    In response to state legislation authorizing procedures for changes in academic calendars and measurement of student workload in California community colleges, this manual from the Chancellor's Office provides guidelines for student attendance accounting. Chapter 1 explains general items such as the academic calendar, admissions policies, student…

  5. CEBAF beam loss accounting

    SciTech Connect

    Ursic, R.; Mahoney, K.; Hovater, C.; Hutton, A.; Sinclair, C.

    1995-12-31

    This paper describes the design and implementation of a beam loss accounting system for the CEBAF electron accelerator. This system samples the beam current throughout the beam path and measures it accurately. The Personnel Safety and Machine Protection systems use this system to turn off the beam when hazardous beam losses occur.

  6. Accounting for What Counts

    ERIC Educational Resources Information Center

    Milner, Joseph O.; Ferran, Joan E.; Martin, Katharine Y.

    2003-01-01

    No Child Left Behind legislation makes it clear that outside evaluators determine what gets taught in the classroom. It is important to ensure they measure what truly counts in school. This fact is poignantly and sadly true for the underfunded, poorly resourced, "low performing" schools that may be hammered by administration accountants in the…

  7. Educational Accounting Procedures.

    ERIC Educational Resources Information Center

    Tidwell, Sam B.

    This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing, annual…

  8. Assessment and Accountability.

    ERIC Educational Resources Information Center

    Au, Kathryn

    2001-01-01

    Discusses six books that give a range of perspectives on the issues of assessment and accountability, from the use of standardized reading tests to creating student and professional portfolios. States that these books will provide teachers with the knowledge to make sound assessment decisions and with practical suggestions to document student and…

  9. The Accountability Illusion

    ERIC Educational Resources Information Center

    Cronin, John; Dahlin, Michael; Xiang, Yun; McCahon, Donna

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states have leeway to: (1) Craft their own academic standards, select their own tests, and define…

  10. Accountability in Action.

    ERIC Educational Resources Information Center

    Dumke, Glenn S.

    Since education has become big business, the reactions of the academic community to social change are of immense political and social effect. Therefore, before higher education can deal with the question of accountability, it has to define the role of the college or university in relation to society. One alternative is that the campus operate as…

  11. Fiscal Accounting Manual.

    ERIC Educational Resources Information Center

    California State Dept. of Housing and Community Development, Sacramento. Indian Assistance Program.

    Written in simple, easy to understand form, the manual provides a vehicle for the untrained person in bookkeeping to control funds received from grants for Indian Tribal Councils and Indian organizations. The method used to control grants (federal, state, or private) is fund accounting, designed to organize rendering services on a non-profit…

  12. Measurements and material accounting

    SciTech Connect

    Hammond, G.A. )

    1989-11-01

    The DOE role for the NBL in safeguarding nuclear material into the 21st century is discussed. Development of measurement technology and reference materials supporting requirements of SDI, SIS, AVLIS, pyrochemical reprocessing, fusion, waste storage, plant modernization program, and improved tritium accounting are some of the suggested examples.

  13. Professional Capital as Accountability

    ERIC Educational Resources Information Center

    Fullan, Michael; Rincón-Gallardo, Santiago; Hargreaves, Andy

    2015-01-01

    This paper seeks to clarify and spell out the responsibilities of policy makers to create the conditions for an effective accountability system that produces substantial improvements in student learning, strengthens the teaching profession, and provides transparency of results to the public. The authors point out that U.S. policy makers will need…

  14. Accountability for Productivity

    ERIC Educational Resources Information Center

    Wellman, Jane

    2010-01-01

    Productivity gains in higher education won't be made just by improving cost effectiveness or even performance. They need to be documented, communicated, and integrated into a strategic agenda to increase attainment. This requires special attention to "accountability" for productivity, meaning public presentation and communication of evidence about…

  15. Legal responsibility and accountability.

    PubMed

    Cox, Chris

    2010-06-01

    Shifting boundaries in healthcare roles have led to anxiety among some nurses about their legal responsibilities and accountabilities. This is partly because of a lack of education about legal principles that underpin healthcare delivery. This article explains the law in terms of standards of care, duty of care, vicarious liability and indemnity insurance. PMID:20583648

  16. MATERIAL CONTROL ACCOUNTING INMM

    SciTech Connect

    Hasty, T.

    2009-06-14

    Since 1996, the Mining and Chemical Combine (MCC, formerly known as K-26) and the United States Department of Energy (DOE) have been cooperating under the cooperative Nuclear Material Protection, Control and Accounting (MPC&A) Program between the Russian Federation and the U.S. Governments. Since MCC continues to operate a reactor for steam and electricity production for the site and the city of Zheleznogorsk, which results in the production of weapons-grade plutonium, one of the goals of the MPC&A program is to support implementation of an expanded, comprehensive nuclear material control and accounting (MC&A) program. To date, MCC has completed upgrades identified in the initial gap analysis and documented in the site MC&A Plan and is implementing additional upgrades identified during an update to the gap analysis. The scope of these upgrades includes implementation of an MCC organization structure relating to MC&A, establishing a material balance area structure for special nuclear material (SNM) storage and bulk processing areas, and material control functions including SNM portal monitors at target locations. Material accounting function upgrades include enhancements in the conduct of physical inventories, limit-of-error inventory difference procedure enhancements, implementation of a basic computerized accounting system for four SNM storage areas, implementation of measurement equipment for improved accountability reporting, and both new and revised site-level MC&A procedures. This paper will discuss the implementation of MC&A upgrades at MCC based on the requirements established in the comprehensive MC&A plan developed by the Mining and Chemical Combine as part of the MPC&A Program.

  17. A constitutive model for air-NAPL-water flow in the vadose zone accounting for immobile, non-occluded (residual) NAPL in strongly water-wet porous media

    SciTech Connect

    Lenhard, Robert J.; Oostrom, Mart; Dane, Jacob H.

    2004-07-01

    A major shortcoming of multifluid flow simulators is the inability to predict the retention of nonaqueous phase liquid (NAPL) in the vadose zone after long drainage periods. Recently, three theoretical models, Wipfler and Van Der Zee [J. Contam. Hydrol. 50 (2001); WVDZ model], Van Geel and Roy [J. Contam. Hydrol. 58 (2002); VGR model], and Lenhard et al. [J. Contam. Hydrol. (2004) In Press; LOD model], have been proposed for describing residual NAPL formation. The WVDZ model assumes a critical total liquid saturation below which all NAPL becomes residual. The VGR and LOD models are extensions of an existing hysteretic relative permeability – saturation – capillary pressure model and assume formation of residual NAPL during NAPL drainage and imbibition, respectively. In this paper, we compare model predictions against results of a series of static pressure cell experiments. We found no experimental evidence supporting the WVDZ concept of a critical total liquid saturation. The other two models yielded reasonable predictions. The VGR and LOD models were then incorporated into a multifluid flow simulator and simulations of two transient column experiments were conducted. Both models performed considerably better than simulations without considering the formation of residual NAPL, underscoring the importance of incorporating this process in simulators. Although the VGR and LOD models are based on different conceptual models, no clear performance differences could be observed when simulation results were compared against the transient experimental data.

  18. Iowa Community Colleges Accounting Manual.

    ERIC Educational Resources Information Center

    Iowa State Dept. of Education, Des Moines. Div. of Community Colleges and Workforce Preparation.

    This document describes account classifications and definitions for the accounting system of the Iowa community colleges. In view of the objectives of the accounting system, it is necessary to segregate the assets of the community college according to their source and intended use. Additionally, the accounting system should provide for accounting by…

  19. Public Accountability in the Age of Neo-Liberal Governance.

    ERIC Educational Resources Information Center

    Ranson, Stewart

    2003-01-01

    Analyzes the impact of neo-liberal corporate accountability on educational governance since the demise of professional accountability in the mid-1970s. Argues that corporate accountability is inappropriate for educational governance. Proposes an alternative model: democratic accountability. (Contains 1 figure and 125 references.) (PKP)

  20. Numerical modeling of 1D heterogeneous combustion in porous media under free convection taking into account dependence of permeability on porosity

    NASA Astrophysics Data System (ADS)

    Lutsenko, N. A.

    2016-06-01

    Using numerical experiments, the one-dimensional unsteady process of heterogeneous combustion in a porous object under free convection is considered when the dependence of permeability on porosity is taken into account. Combustion is driven by an exothermic reaction between the fuel in the solid porous medium and the oxidizer contained in the gas flowing through the porous object. The process is considered under natural convection, i.e., when the flow rate and velocity of the gas at the inlet to the porous object are unknown but the gas pressure at the object boundaries is known. The influence on the solution of permeability changes caused by porosity changes is investigated using an original numerical method based on a combination of explicit and implicit finite-difference schemes. It is shown that accounting for the dependence of permeability on porosity, described by some known equations, can significantly change the one-dimensional solution. The change in permeability due to the change in porosity increases the speed of both the cocurrent and countercurrent combustion waves and raises the temperature in the combustion zone of the countercurrent combustion wave.
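
    The abstract notes that the permeability-porosity dependence is described by "some known equations" without naming them; a common choice in porous-media models is the Kozeny-Carman scaling, sketched below as an assumption, not as the paper's actual closure.

```python
import numpy as np

def kozeny_carman(phi, k0=1e-12, phi0=0.3):
    """Kozeny-Carman permeability-porosity scaling:
    k/k0 = (phi/phi0)**3 * ((1 - phi0) / (1 - phi))**2,
    with k0 the permeability at reference porosity phi0 (values illustrative).
    """
    phi = np.asarray(phi, dtype=float)
    return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

# As solid fuel burns out, porosity rises and permeability grows steeply,
# consistent with the faster combustion waves reported above.
print(kozeny_carman([0.3, 0.4, 0.5]))
```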

  1. Evaluation of accountability measurements

    SciTech Connect

    Cacic, C.G.

    1988-01-01

    The New Brunswick Laboratory (NBL) is programmatically responsible to the U.S. Department of Energy (DOE) Office of Safeguards and Security (OSS) for providing independent review and evaluation of accountability measurement technology in DOE nuclear facilities. This function is addressed in part through the NBL Safeguards Measurement Evaluation (SME) Program. The SME Program utilizes on-site reviews of measurement methods along with material-specific measurement evaluation studies to provide information concerning the adequacy of subject accountability measurements. This paper reviews SME Program activities for the 1986-87 time period, with emphasis on noted improvements in measurement capabilities. Continued evolution of the SME Program to respond to changing safeguards concerns is discussed.

  2. Managing global accounts.

    PubMed

    Yip, George S; Bink, Audrey J M

    2007-09-01

    Global account management--which treats a multinational customer's operations as one integrated account, with coherent terms for pricing, product specifications, and service--has proliferated over the past decade. Yet according to the authors' research, only about a third of the suppliers that have offered GAM are pleased with the results. The unhappy majority may be suffering from confusion about when, how, and to whom to provide it. Yip, the director of research and innovation at Capgemini, and Bink, the head of marketing communications at Uxbridge College, have found that GAM can improve customer satisfaction by 20% or more and can raise both profits and revenues by at least 15% within just a few years of its introduction. They provide guidelines to help companies achieve similar results. The first steps are determining whether your products or services are appropriate for GAM, whether your customers want such a program, whether those customers are crucial to your strategy, and how GAM might affect your competitive advantage. If moving forward makes sense, the authors' exhibit, "A Scorecard for Selecting Global Accounts," can help you target the right customers. The final step is deciding which of three basic forms to offer: coordination GAM (in which national operations remain relatively strong), control GAM (in which the global operation and the national operations are fairly balanced), and separate GAM (in which a new business unit has total responsibility for global accounts). Given the difficulty and expense of providing multiple varieties, the vast majority of companies should initially customize just one--and they should be careful not to start with a choice that is too ambitious for either themselves or their customers to handle. PMID:17886487

  3. First-Person Accounts.

    ERIC Educational Resources Information Center

    Gribs, H.; And Others

    1995-01-01

    Personal accounts describe the lives of two individuals with deaf-blindness: one an 87-year-old woman who was deaf from birth and became totally blind over a 50-year period, and the other a woman who became deaf-blind as a result of a fever at the age of 7. Managing activities of daily life and experiencing sensory hallucinations are among the topics…

  4. Hospitals' Internal Accountability

    PubMed Central

    Kraetschmer, Nancy; Jass, Janak; Woodman, Cheryl; Koo, Irene; Kromm, Seija K.; Deber, Raisa B.

    2014-01-01

    This study aimed to enhance understanding of the dimensions of accountability captured and not captured in acute care hospitals in Ontario, Canada. Based on an Ontario-wide survey and follow-up interviews with three acute care hospitals in the Greater Toronto Area, we found that the two dominant dimensions of hospital accountability being reported are financial and quality performance. These two dimensions drove both internal and external reporting. Hospitals' internal reports typically included performance measures that were required or mandated in external reports. Although respondents saw reporting as a valuable mechanism for hospitals and the health system to monitor and track progress against desired outcomes, multiple challenges with current reporting requirements were communicated, including the following: 58% of survey respondents indicated that performance-reporting resources were insufficient; manual data capture and performance reporting were prevalent, with the majority of hospitals lacking sophisticated tools or technology to effectively capture, analyze and report performance data; hospitals tended to focus on those processes and outcomes with high measurability; and 53% of respondents indicated that valuable cross-system accountability, performance measures or both were not captured by current reporting requirements. PMID:25305387

  5. Hospitals' internal accountability.

    PubMed

    Kraetschmer, Nancy; Jass, Janak; Woodman, Cheryl; Koo, Irene; Kromm, Seija K; Deber, Raisa B

    2014-09-01

    This study aimed to enhance understanding of the dimensions of accountability captured and not captured in acute care hospitals in Ontario, Canada. Based on an Ontario-wide survey and follow-up interviews with three acute care hospitals in the Greater Toronto Area, we found that the two dominant dimensions of hospital accountability being reported are financial and quality performance. These two dimensions drove both internal and external reporting. Hospitals' internal reports typically included performance measures that were required or mandated in external reports. Although respondents saw reporting as a valuable mechanism for hospitals and the health system to monitor and track progress against desired outcomes, multiple challenges with current reporting requirements were communicated, including the following: 58% of survey respondents indicated that performance-reporting resources were insufficient; manual data capture and performance reporting were prevalent, with the majority of hospitals lacking sophisticated tools or technology to effectively capture, analyze and report performance data; hospitals tended to focus on those processes and outcomes with high measurability; and 53% of respondents indicated that valuable cross-system accountability, performance measures or both were not captured by current reporting requirements. PMID:25305387

  6. Is the solvation parameter model or its adaptations adequate to account for ionic interactions when characterizing stationary phases for drug impurity profiling with supercritical fluid chromatography?

    PubMed

    Galea, Charlene; West, Caroline; Mangelings, Debby; Vander Heyden, Yvan

    2016-06-14

    Nine commercially available polar and aromatic stationary phases were characterized under supercritical fluid chromatography (SFC) conditions. Retention data for 64 pharmaceutical compounds were acquired to generate models based on the linear solvation energy relationship (LSER) approach. The LSER model had previously been adapted in liquid chromatography by adding two solute descriptors to describe the influence of positive (D(+)) and negative (D(-)) charges on the retention of ionized compounds. In this study, LSER models with and without the ionization terms for acidic and basic solutes were compared. The improved fits obtained for the modified models support inclusion of the D(+) and D(-) terms for pharmaceutical compounds. Moreover, the statistical significance of the new terms in the models indicates the importance of ionic interactions in the retention of pharmaceutical compounds in SFC. However, unlike characterization through retention profiles, characterization of the stationary phases by modelling never explains the retention variance completely and thus seems less appropriate. PMID:27181639
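
    A minimal sketch of fitting such an extended LSER by ordinary least squares, with the D(+) and D(-) ionization descriptors appended to the usual Abraham terms. All descriptor values and coefficients here are synthetic placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # the study acquired retention data for 64 pharmaceutical compounds

# Hypothetical Abraham descriptors [E, S, A, B, V] plus the ionization
# terms D+ and D- (fraction of solute positively/negatively charged).
abraham = rng.uniform(0.0, 2.0, size=(n, 5))
charges = rng.uniform(0.0, 1.0, size=(n, 2))
descriptors = np.hstack([abraham, charges])

# Synthetic "measured" log retention factors, for illustration only.
true_coef = np.array([0.3, -0.5, 0.4, 1.1, 0.8, -0.9, 0.6])
logk = 0.2 + descriptors @ true_coef + rng.normal(0.0, 0.05, size=n)

# Extended LSER: log k = c + eE + sS + aA + bB + vV + d(+)D(+) + d(-)D(-)
design = np.hstack([np.ones((n, 1)), descriptors])  # prepend intercept c
coef, *_ = np.linalg.lstsq(design, logk, rcond=None)
print("fitted c, e, s, a, b, v, d+, d-:", np.round(coef, 2))
```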

  7. 18 CFR 367.1840 - Account 184, Clearing accounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... accounts. 367.1840 Section 367.1840 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE PUBLIC UTILITY HOLDING COMPANY ACT OF 2005, FEDERAL... ACT Balance Sheet Chart of Accounts Deferred Debits § 367.1840 Account 184, Clearing accounts....

  8. 18 CFR 367.1840 - Account 184, Clearing accounts.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... accounts. 367.1840 Section 367.1840 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE PUBLIC UTILITY HOLDING COMPANY ACT OF 2005, FEDERAL... ACT Balance Sheet Chart of Accounts Deferred Debits § 367.1840 Account 184, Clearing accounts....

  9. 18 CFR 367.1840 - Account 184, Clearing accounts.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... accounts. 367.1840 Section 367.1840 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE PUBLIC UTILITY HOLDING COMPANY ACT OF 2005, FEDERAL... ACT Balance Sheet Chart of Accounts Deferred Debits § 367.1840 Account 184, Clearing accounts....

  10. 18 CFR 367.1840 - Account 184, Clearing accounts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... accounts. 367.1840 Section 367.1840 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE PUBLIC UTILITY HOLDING COMPANY ACT OF 2005, FEDERAL... ACT Balance Sheet Chart of Accounts Deferred Debits § 367.1840 Account 184, Clearing accounts....

  11. 18 CFR 367.1840 - Account 184, Clearing accounts.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... accounts. 367.1840 Section 367.1840 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE PUBLIC UTILITY HOLDING COMPANY ACT OF 2005, FEDERAL... ACT Balance Sheet Chart of Accounts Deferred Debits § 367.1840 Account 184, Clearing accounts....

  12. New Frontiers: Training Forensic Accountants within the Accounting Program

    ERIC Educational Resources Information Center

    Ramaswamy, Vinita

    2007-01-01

    Accountants have recently been subject to very unpleasant publicity following the collapse of Enron and other major companies. There has been a plethora of accounting failures and restatements of falsified earnings, with litigation and prosecutions taking place every day. As the FASB struggles to tighten the loopholes in accounting,…

  13. Teaching Elementary Accounting to Non-Accounting Majors

    ERIC Educational Resources Information Center

    Lloyd, Cynthia B.; Abbey, Augustus

    2009-01-01

    A central recurring theme in business education is the optimal strategy for improving introductory accounting, its gateway subject. For many students, especially non-accounting majors required to take introductory accounting as part of the curriculum, the course has become a major obstacle for…

  14. Models accounting for intention-behavior discordance in the physical activity domain: a user's guide, content overview, and review of current evidence.

    PubMed

    Rhodes, Ryan E; Yao, Christopher A

    2015-01-01

    There is growing concern among researchers about the limited effectiveness, and subsequent stagnation, of theories applied to physical activity (PA). One of the most highlighted areas of concern is the established gap between intention and PA alongside the continued, considerable use of models that assume intention is the proximal antecedent of PA. The objectives of this review were to 1) provide a guide to, and thematic analysis of, the available models that include constructs addressing intention-behavior discordance and 2) highlight the evidence for these structures in the PA domain. A literature search was conducted across 13 major databases to locate relevant models and PA studies published before August 2014. Sixteen models were identified, and nine overall themes for post-intentional constructs were created. Of the 16 models, eight were applied in 36 PA studies. Early evidence supported maintenance self-efficacy, behavioral regulation strategies, affective judgments, perceived control/opportunity, habit, and extraversion as reliable predictors of post-intention PA. Several intention-behavior discordance models exist within the literature but are not used frequently. Further efforts are needed to test these models, preferably with experimental designs. PMID:25890238

  15. Estimating indoor semi-volatile organic compounds (SVOCs) associated with settled dust by an integrated kinetic model accounting for aerosol dynamics

    NASA Astrophysics Data System (ADS)

    Shi, Shanshan; Zhao, Bin

    2015-04-01

    Due to their low vapor pressure, semi-volatile organic compounds (SVOCs) can sorb to other compartments in indoor environments, including settled dust. Incidental ingestion of settled dust-bound SVOCs accounts for the majority of daily non-dietary human exposure to some SVOCs. With this pathway in mind, an integrated kinetic model for indoor SVOCs was developed to better predict the mass fraction of SVOC associated with settled dust, which is important for accurately assessing non-dietary ingestion exposure to SVOCs. The integrated kinetic model accounts for aerosol dynamics, including particle penetration, deposition and resuspension. The newly developed model was evaluated by comparing the predicted mass fraction of SVOC associated with settled dust (Xdust) with measured Xdust values from previous studies. Sixty Xdust values for thirty-eight different SVOCs, measured in residences located in seven countries on four continents, were used in the model evaluation. The Xdust value predicted by the integrated kinetic model correlated linearly with the measured Xdust: y = 0.93x + 0.09 (R² = 0.73), indicating that the predictions agree well with the measured data. This model may be used to predict SVOC concentrations in different indoor compartments, including dust-bound SVOCs.
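
    The evaluation step reduces to regressing predicted Xdust on measured Xdust. A minimal sketch with placeholder values (the study's 60 real pairs are not reproduced here) of the calculation that yields a fit of the form y = 0.93x + 0.09 (R² = 0.73):

```python
import numpy as np

# Placeholder (measured, predicted) Xdust pairs; the study used 60 values
# for 38 SVOCs measured in residences in seven countries.
measured = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
predicted = np.array([0.18, 0.31, 0.47, 0.58, 0.75, 0.88])

# Least-squares line predicted = slope * measured + intercept, the same
# form as the paper's reported fit.
slope, intercept = np.polyfit(measured, predicted, 1)
r_squared = np.corrcoef(measured, predicted)[0, 1] ** 2
print(f"y = {slope:.2f}x + {intercept:.2f}, R^2 = {r_squared:.2f}")
```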

  16. Performance and Accountability Report

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The NASA Fiscal Year 2002 Performance and Accountability Report is presented. Over the past year, significant changes have been implemented to greatly improve NASA's management while continuing to break new ground in science and technology. Excellent progress has been made in implementing the President's Management Agenda. NASA is leading the government in its implementation of the five government-wide initiatives. NASA received an unqualified audit opinion on FY 2002 financial statements. The vast majority of performance goals have been achieved, furthering each area of NASA's mission. The contents include: 1) NASA Vision and Mission; 2) Management's Discussion and Analysis; 3) Performance; and 4) Financial.

  17. A modelling exercise to examine variations of NOx concentrations on adjacent footpaths in a street canyon: The importance of accounting for wind conditions and fleet composition.

    PubMed

    Gallagher, J

    2016-04-15

    Personal measurement studies and modelling investigations are used to examine pollutant exposure for pedestrians in the urban environment; each presents various strengths and weaknesses relating to labour and equipment costs, sufficient sampling periods and the accuracy of results. This modelling exercise considers the potential benefits of modelling over personal measurement studies and aims to demonstrate how variations in fleet composition affect exposure results (presented as mean concentrations along the centre of both footpaths) in different traffic scenarios. A model of Pearse Street in Dublin, Ireland was developed by combining a computational fluid dynamics (CFD) model and a semi-empirical equation to simulate pollutant dispersion in the street. Using local NOx concentrations, traffic and meteorological data from a two-week period in 2011, the model was validated and showed a good fit. To explore long-term variations in personal exposure due to variations in fleet composition, synthesised traffic data were used to compare short-term personal exposure data (over a two-week period) with results for an extended one-year period. Personal exposure during the two-week period underestimated the one-year results by between 8% and 65% on adjacent footpaths. The findings demonstrate the potential for relative differences in pedestrian exposure to exist between the north and south footpaths due to changing wind conditions in both peak and off-peak traffic scenarios. This modelling approach may help overcome potential under- or over-estimation of concentrations in personal measurement studies on the footpaths. Further research aims to measure pollutant concentrations on adjacent footpaths in different traffic and wind conditions and to develop a simpler modelling system to identify pollutant hotspots on city footpaths so that urban planners can implement strategies to improve urban air quality. PMID:26859699
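
    The short-campaign bias the study quantifies can be illustrated with a toy calculation: compare a two-week mean against the annual mean of an hourly concentration series. The series below is synthetic; a real application would take hourly footpath concentrations from the validated CFD/semi-empirical model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly NOx concentrations (ug/m3) along one footpath for a year.
annual = rng.lognormal(mean=3.5, sigma=0.5, size=365 * 24)

two_weeks = annual[: 14 * 24]  # a two-week measurement campaign
bias = (two_weeks.mean() - annual.mean()) / annual.mean()
print(f"two-week mean deviates from the annual mean by {bias:+.1%}")
```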

  18. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    SciTech Connect

    Robert Pincus

    2011-05-17

    This project focused on the variability of clouds, which is present across a wide range of scales, from the synoptic down to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of the models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.
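
    As a sketch of how a statistical cloud parameterization uses subgrid variability (here with an assumed Gaussian PDF and a prescribed, rather than prognosed, width; a scheme with prognostic higher-order moments would carry the PDF's moments as model variables instead):

```python
import math

def gaussian_cloud_fraction(qt_mean: float, qsat: float, sigma: float) -> float:
    """Cloud fraction as the probability that subgrid total water qt
    exceeds saturation qsat, assuming a Gaussian PDF with standard
    deviation sigma. A higher-order-moment scheme would prognose sigma
    (and skewness) rather than prescribe it, as done here."""
    s = (qt_mean - qsat) / sigma
    return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

# A grid box 5% drier than saturation on average, with subgrid spread of
# 10% of qsat, is partly cloudy (~0.31) rather than all-or-nothing.
print(gaussian_cloud_fraction(qt_mean=0.95, qsat=1.0, sigma=0.1))
```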

  19. Performance testing accountability measurements

    SciTech Connect

    Oldham, R.D.; Mitchell, W.G.; Spaletto, M.I.

    1993-12-31

    The New Brunswick Laboratory (NBL) provides assessment support to the DOE Operations Offices in the area of Material Control and Accountability (MC and A). During surveys of facilities, the Operations Offices have begun to request from NBL either materials for performance testing of accountability measurements or both materials and personnel to conduct the performance testing. To meet these needs, NBL has developed measurement and measurement-control performance test procedures and materials. The present NBL repertoire of performance tests includes the following: (1) mass measurement performance tests using calibrated and traceable test weights, (2) uranium elemental concentration (assay) measurement performance tests using ampulated solutions of normal uranyl nitrate containing approximately 7 milligrams of uranium per gram of solution, and (3) uranium isotopic measurement performance tests using ampulated uranyl nitrate solutions with enrichments ranging from 4% to 90% U-235. The preparation, characterization, and packaging of the uranium isotopic and assay performance test materials were done in cooperation with the NBL Safeguards Measurements Evaluation Program, since these materials can be used for both purposes.

  20. Where Are the Accounting Professors?

    ERIC Educational Resources Information Center

    Chang, Jui-Chin; Sun, Huey-Lian

    2008-01-01

    Accounting education is facing a crisis: a shortage of accounting faculty. This study discusses the reasons behind the shortage and offers suggestions to increase the supply of accounting faculty. Our suggestions are as follows. First, educators should begin promoting accounting academia as a career choice to undergraduate and…