Science.gov

Sample records for accounting sac-sma model

  1. SAC-SMA a priori parameter differences and their impact on distributed hydrologic model simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Ziya; Koren, Victor; Reed, Seann; Smith, Michael; Zhang, Yu; Moreda, Fekadu; Cosgrove, Brian

    2012-02-01

    Summary: Deriving a priori gridded parameters is an important step in the development and deployment of an operational distributed hydrologic model. Accurate a priori parameters can reduce the manual calibration effort and/or speed up the automatic calibration process, reduce calibration uncertainty, and provide valuable information at ungauged locations. Underpinned by reasonable parameter data sets, distributed hydrologic modeling can help improve water resource and flood and flash flood forecasting capabilities. Initial efforts at the National Weather Service Office of Hydrologic Development (NWS OHD) to derive a priori gridded Sacramento Soil Moisture Accounting (SAC-SMA) model parameters for the conterminous United States (CONUS) were based on a relatively coarse-resolution soil property database, the State Soil Geographic Database (STATSGO) (Soil Survey Staff, 2011), and on the assumption of uniform land use and land cover. In an effort to improve the parameters, subsequent work was performed to fully incorporate spatially variable land cover information into the parameter derivation process. Following that, finer-scale soils data (the county-level Soil Survey Geographic Database, SSURGO; Soil Survey Staff, 2011a,b), together with the variable land cover data, were used to derive a third set of CONUS a priori gridded parameters. It is anticipated that the second and third parameter sets, which incorporate more physical data, will be more realistic and consistent. Here, we evaluate whether this is actually the case by intercomparing these three sets of a priori parameters along with their associated hydrologic simulations, which were generated by applying the National Weather Service Hydrology Laboratory's Research Distributed Hydrologic Model (HL-RDHM) (Koren et al., 2004) in a continuous fashion with an hourly time step. This model adopts a well-tested conceptual water balance model, SAC-SMA, applied on a regular spatial grid, and links to physically

  2. Application of stochastic parameter optimization to the Sacramento Soil Moisture Accounting model

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Gupta, Hoshin V.; Dekker, Stefan C.; Sorooshian, Soroosh; Wagener, Thorsten; Bouten, Willem

    2006-06-01

    Hydrological models generally contain parameters that cannot be measured directly, but can only be meaningfully inferred by calibration against a historical record of input-output data. While considerable progress has been made in the development and application of automatic procedures for model calibration, such methods have received criticism for their lack of rigor in treating uncertainty in the parameter estimates. In this paper, we apply the recently developed Shuffled Complex Evolution Metropolis algorithm (SCEM-UA) to stochastic calibration of the parameters in the Sacramento Soil Moisture Accounting (SAC-SMA) model using historical data from the Leaf River in Mississippi. The SCEM-UA algorithm is a Markov Chain Monte Carlo sampler that provides an estimate of the most likely parameter set and underlying posterior distribution within a single optimization run. In particular, we explore the relationship between the length and variability of the streamflow data and the Bayesian uncertainty associated with the SAC-SMA model parameters and compare SCEM-UA-derived parameter values with those obtained using deterministic SCE-UA calibrations. Most significantly, for the Leaf River catchments under study our results demonstrate that most of the 13 SAC-SMA parameters are well identified by calibration to daily streamflow data, suggesting that these data contain more information than has previously been reported in the literature.
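
    (Illustrative note: the SCEM-UA sampler itself is not reproduced here. The sketch below shows the underlying Bayesian idea with a plain random-walk Metropolis sampler; sacsma_simulate is a hypothetical stand-in for a SAC-SMA run and the Gaussian error model is an assumption.)

```python
# Minimal random-walk Metropolis sketch of Bayesian parameter calibration.
# This is NOT the SCEM-UA algorithm used in the paper, only an illustration
# of sampling a posterior over hydrologic model parameters.
import numpy as np

def log_likelihood(params, forcing, q_obs, simulate):
    q_sim = simulate(params, forcing)            # one model run
    resid = q_obs - q_sim
    sigma = resid.std() + 1e-9
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian error assumption

def metropolis(simulate, forcing, q_obs, p0, bounds, n_iter=5000, step=0.05):
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, float).T
    p = np.array(p0, float)
    ll = log_likelihood(p, forcing, q_obs, simulate)
    chain = []
    for _ in range(n_iter):
        cand = np.clip(p + step * (hi - lo) * rng.standard_normal(p.size), lo, hi)
        ll_cand = log_likelihood(cand, forcing, q_obs, simulate)
        if np.log(rng.uniform()) < ll_cand - ll:  # accept/reject step
            p, ll = cand, ll_cand
        chain.append(p.copy())
    return np.array(chain)                        # posterior samples
```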

  3. Use of the Sacramento Soil Moisture Accounting Model in Areas with Insufficient Forcing Data

    NASA Astrophysics Data System (ADS)

    Kuzmin, V.

    2009-04-01

    The Sacramento Soil Moisture Accounting model (SAC-SMA) is known as a very reliable and effective hydrological model. It is widely used by the U.S. National Weather Service (NWS) and many organizations in other countries for operational forecasting of flash floods. As a purely conceptual model, the SAC-SMA requires periodic re-calibration. However, this procedure is not trivial in watersheds with little or no historical data, in areas with changing watershed properties, in a changing climate, and in regions with low-quality, low-spatial-resolution forcing data. In such cases, so-called physically based models with measurable parameters may not be an alternative either, because they usually require high-quality forcing data and, hence, are quite expensive. Therefore, this type of model cannot be implemented in countries with scarce surface observation data. To resolve this problem, we propose using a very fast and efficient automatic calibration algorithm, the Stepwise Line Search (SLS), which has been implemented at the NWS since 2005, along with its modifications developed especially for automated operational forecasting of flash floods in regions where high-resolution, high-quality forcing data are not available. The SLS family includes several simple yet efficient calibration algorithms: 1) SLS-F, which applies simultaneous natural smoothing of the response surface by quasi-local estimation of F-indices, allowing it to find the most stable and reliable parameters, which can differ from "global" optima in the usual sense (thus, this method slightly transforms the original objective function); 2) SLS-2L (Two-Loop SLS), which is suitable for basins where the hydraulic properties of soil are unknown; 3) SLS-2LF, which combines the SLS-F and SLS-2L algorithms and yields SAC-SMA parameters that can be transferred to ungauged catchments; 4) SLS-E, which also applies stochastic filtering of the model input through
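
    (Illustrative note: the abstract does not give the internals of SLS, so the sketch below is a generic one-parameter-at-a-time line search in that spirit, not the NWS implementation; the objective function would wrap a SAC-SMA run and an error measure.)

```python
# Generic coordinate-wise line search, loosely in the spirit of a stepwise
# line search calibration; this is NOT the NWS SLS code, only a sketch.
import numpy as np

def stepwise_line_search(objective, p0, bounds, n_points=11, n_sweeps=20):
    """Minimize objective(params) by scanning one parameter at a time."""
    p = np.array(p0, float)
    best = objective(p)
    for _ in range(n_sweeps):
        improved = False
        for i, (lo, hi) in enumerate(bounds):
            trial = p.copy()
            for v in np.linspace(lo, hi, n_points):   # scan this coordinate
                trial[i] = v
                f = objective(trial)
                if f < best:
                    best, p = f, trial.copy()
                    improved = True
        if not improved:                               # no gain in a full sweep
            break
    return p, best
```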

  4. Physically-based modifications to the Sacramento Soil Moisture Accounting model. Part A: Modeling the effects of frozen ground on the runoff generation process

    NASA Astrophysics Data System (ADS)

    Koren, Victor; Smith, Michael; Cui, Zhengtao

    2014-11-01

    This paper presents the first of two physically-based modifications to a widely-used and well-validated hydrologic precipitation-runoff model. Here, we modify the Sacramento Soil Moisture Accounting (SAC-SMA) model to include a physically-based representation of the effects of freezing and thawing soil on the runoff generation process. This model is called the SAC-SMA Heat Transfer model (SAC-HT). The frozen ground physics are taken from the Noah land surface model, which serves as the land surface component of several National Centers for Environmental Prediction (NCEP) numerical weather prediction models. SAC-HT requires a boundary condition of the soil temperature at the bottom of the soil column (a climatic annual air temperature is typically used) and parameters derived from readily available soil texture data. A noteworthy feature of SAC-HT is that the frozen ground component needs no parameter calibration. SAC-HT was tested at 11 sites in the U.S. for soil temperature, one site in Russia for soil temperature and soil moisture, eight basins in the upper Midwest for the effects of frozen ground on streamflow, and one location for frost depth. High correlation coefficients for simulated soil temperature at three depths at the 11 stations were achieved. Multi-year simulations of soil moisture and soil temperature agreed very well at the Valdai, Russia test location. In eight basins affected by seasonally frozen soil in the upper Midwest, SAC-HT provided improved streamflow simulations compared to SAC-SMA when both models used a priori parameters. Further improvement was gained through calibration of the non-frozen-ground a priori parameters. Frost depth computed by SAC-HT compared well with observed values in the Root River basin in Minnesota.

  5. Evaluation of Optimization Methods for Hydrologic Model Calibration in Ontario Basins

    NASA Astrophysics Data System (ADS)

    Razavi, T.; Coulibaly, P. D.

    2013-12-01

    The Particle Swarm Optimization algorithm (PSO), the Shuffled Complex Evolution algorithm (SCE), the Non-Dominated Sorting Genetic Algorithm II (NSGA-II), and a Monte Carlo procedure are applied to optimize the calibration of two conceptual hydrologic models, namely the Sacramento Soil Moisture Accounting (SAC-SMA) and McMaster University-Hydrologiska Byråns Vattenbalansavdelning (MAC-HBV) models. PSO, SCE, and NSGA-II are evolutionary computational methods with the potential of reaching the global optimum, in contrast to stochastic search procedures such as the Monte Carlo method. Spatial analysis maps of Nash-Sutcliffe efficiency (NSE) for daily streamflow and volume error (VE) for peak and low flows demonstrate that, for both MAC-HBV and SAC-SMA, PSO and SCE are equally superior to NSGA-II and Monte Carlo across all 90 selected basins in Ontario (Canada), using 20 years (1976-1994) of hydrologic records. For peak flows, MAC-HBV with PSO generally performs better than with SCE, whereas SAC-SMA with SCE and PSO shows similar performance. For low flows, MAC-HBV with PSO performs better for most of the large northern watersheds, while SCE performs better for the small southern watersheds. Temporal variability of NSE values for daily streamflow shows that all the optimization methods perform better in the winter season than in the summer.
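
    (Illustrative note: the sketch below shows common forms of the two skill scores used in this comparison; the exact volume-error convention in the study is not stated in the abstract, so a common definition is assumed.)

```python
# Common skill metrics for comparing calibrated hydrologic models.
import numpy as np

def nse(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def volume_error(q_obs, q_sim):
    """Relative volume error in percent (one common convention among several)."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 100.0 * (q_sim.sum() - q_obs.sum()) / q_obs.sum()
```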

  6. Comparison of a Neural Network and a Conceptual Model for Rainfall-Runoff Modelling with Monthly Input

    NASA Astrophysics Data System (ADS)

    Chochlidakis, Chronis; Daliakopoulos, Ioannis; Tsanis, Ioannis

    2014-05-01

    Rainfall-runoff (RR) models contain parameters that can seldom be directly measured or estimated by expert judgment, but are rather inferred by calibration against a historical record of input-output datasets. Here, a comparison is made between a conceptual model and an Artificial Neural Network (ANN) for efficient modeling of complex hydrological processes. The monthly rainfall, streamflow, and evapotranspiration data from 15 catchments in Crete, Greece are used to compare the proposed methodologies. Genetic Algorithms (GA) are applied for the stochastic calibration of the parameters in the Sacramento Soil Moisture Accounting (SAC-SMA) model yielding R2 values between 0.65 and 0.90. A Feedforward NN (FNN) is trained using a time delay approach, optimized through trial and error for each catchment, yielding R2 values between 0.70 and 0.91. The results obtained show that the ANN models can be superior to the conventional conceptual models due to their ability to handle the non-linearity and dynamic nature of the natural physical processes in a more efficient manner. On the other hand, SAC-SMA depicts high flows with greater accuracy and results suggest that conceptual models can be more robust in extrapolating beyond historical record limits.
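
    (Illustrative note: a minimal sketch of a feed-forward network trained on time-delayed monthly inputs, using scikit-learn's MLPRegressor as a generic stand-in; the lag count, network size and the random placeholder data are assumptions, not the authors' setup.)

```python
# Feed-forward network on lagged monthly inputs (time-delay approach), sketched
# with scikit-learn; the data below are random placeholders, not the Crete records.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

def make_lagged(rain, pet, flow, n_lags=3):
    """Build [rain(t-n..t-1), pet(t-n..t-1)] -> flow(t) samples."""
    X, y = [], []
    for t in range(n_lags, len(flow)):
        X.append(np.concatenate([rain[t - n_lags:t], pet[t - n_lags:t]]))
        y.append(flow[t])
    return np.array(X), np.array(y)

rain, pet, flow = np.random.rand(3, 240)          # 20 years of monthly placeholders
X, y = make_lagged(rain, pet, flow)
split = int(0.75 * len(y))                        # simple train/validation split
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])
print("validation R2:", r2_score(y[split:], model.predict(X[split:])))
```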

  7. UQLab - A Software Platform for Uncertainty Quantification of Complex System Models

    NASA Astrophysics Data System (ADS)

    Wang, C.; Duan, Q.; Gong, W.

    2014-12-01

    UQLab (Uncertainty Quantification Laboratory) is a flexible, user-friendly software platform that integrates different kinds of UQ methods, including experimental design, sensitivity analysis, uncertainty analysis, surrogate modeling and optimization methods, to characterize the uncertainty of complex system models. It is written in Python and can run on all common operating systems. UQLab has a graphical user interface (GUI) that allows users to enter commands and output analysis results via pull-down menus. It is equipped with a model driver generator that allows any system model to be linked with the software; the only requirement is that the executable code, control file and output file of interest of a model be accessible to the software. Through two geophysical models, the Sacramento Soil Moisture Accounting Model (SAC-SMA) and the Common Land Model (CoLM), this presentation intends to demonstrate that UQLab is an effective and easy-to-use UQ tool that can be applied to a wide range of applications.
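
    (Illustrative note: the sketch below shows the executable / control-file / output-file wrapping pattern the abstract describes, in generic Python; file names and the control-file format are placeholders, not UQLab's actual driver interface.)

```python
# Generic model-driver pattern: write a control file with sampled parameter
# values, run the executable, read the output of interest back.
import subprocess
from pathlib import Path

def run_model(exe_path, control_template, param_values, output_file):
    """control_template is a text template with {} slots for parameter values."""
    control = Path("run.ctl")
    control.write_text(control_template.format(*param_values))   # fill template
    subprocess.run([exe_path, str(control)], check=True)          # execute model
    return [float(v) for v in Path(output_file).read_text().split()]
```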

  8. Hydrologic evaluation of a Generalized Statistical Uncertainty Model for Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Sarachi, S.; Hsu, K. L.; Sorooshian, S.

    2014-12-01

    The development of satellite-based precipitation retrieval algorithms and their use in hydroclimatic studies have been of great interest to hydrologists. It is important to understand the uncertainty associated with precipitation products and how it further contributes to the variability of streamflow simulations. In this study a mixture model of the Generalized Normal Distribution and the Gamma distribution (GND-G) is used to model the joint probability distribution of satellite-based (PERSIANN) and Stage IV radar rainfall. The study area for constructing the uncertainty model covers a 15° × 15° box of 0.25° × 0.25° cells over the eastern United States for the summers of 2004 to 2009. Cells are aggregated in space and time to obtain data with different resolutions for the construction of the model's parameter space. The uncertainty model is evaluated using data from the National Weather Service (NWS) Distributed Hydrologic Model Intercomparison Project - Phase 2 (DMIP 2) Illinois River basin south of Siloam, OK, covering the period 2006 to 2008. The uncertainty range of precipitation is estimated, and the impact of precipitation uncertainty on streamflow estimation is demonstrated by Monte Carlo simulation of the precipitation forcing in the Sacramento Soil Moisture Accounting (SAC-SMA) model. The results show that using precipitation along with its uncertainty distribution as forcing to SAC-SMA makes it possible to estimate the uncertainty associated with the streamflow simulation (in this case study a 90% confidence interval is used). The mean of this streamflow confidence interval is compared to the reference streamflow for evaluation of the model, and the results show that this method helps to better estimate the variability of the streamflow simulation along with its statistics, e.g. percent bias and root mean squared error.
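
    (Illustrative note: the GND-G mixture itself is not reproduced here. The sketch below shows only the Monte Carlo step of propagating precipitation uncertainty to a 90% streamflow band, with a lognormal perturbation standing in for the fitted distribution and run_model standing in for SAC-SMA.)

```python
# Monte Carlo propagation of precipitation uncertainty to a streamflow band.
import numpy as np

def streamflow_uncertainty(precip, run_model, n_ens=500, cv=0.3, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + cv ** 2))           # mean-preserving lognormal
    flows = []
    for _ in range(n_ens):
        factor = rng.lognormal(mean=-0.5 * sigma ** 2, sigma=sigma, size=precip.size)
        flows.append(run_model(precip * factor))      # one ensemble member
    flows = np.array(flows)
    lo, hi = np.percentile(flows, [5, 95], axis=0)    # 90% confidence band
    return flows.mean(axis=0), lo, hi
```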

  9. Exploring the effect of spatial disaggregation of conceptual hydrologic models for improved flow forecasting

    NASA Astrophysics Data System (ADS)

    Wi, S.; Brown, C. M.

    2013-12-01

    The availability of gridded climatic data, high-resolution Digital Elevation Models (DEMs), and soil, land-use and land-cover data has motivated researchers to exploit these data for more accurate distributed hydrologic modeling. However, with increased disaggregation comes the introduction of numerous parameters and conceptualized processes that are unobservable. In this study we explore the advantage of employing spatially distributed climatic and geographic information in the context of a disaggregated conceptual hydrologic modeling framework by developing distributed versions of three hydrologic models: HYMOD (Hydrologic Model), HBV (Hydrologiska Byråns Vattenbalansavdelning), and SAC-SMA (Sacramento Soil Moisture Accounting). This study proposes a general framework for building a distributed conceptual hydrological model by coupling a rainfall-runoff model to a routing model based on the formulated sub-basin unit hydrograph and the linearized Saint-Venant equation. To deal with the very large number of model parameters resulting from the distributed modeling approach, hydrological similarity and landscape classification derived from the geospatial database are used to reduce the complexity of model parameter estimation. Tests for the Iowa River basin show that the three distributed models outperform their lumped counterparts in reproducing observed streamflow for both calibration and validation periods. Model calibration strategies informed by geospatial information yield flow predictions comparable to the fully distributed model simulations. Results from this study are encouraging and indicate that the proposed framework holds promise for improved predictions of hydrologic system response.
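
    (Illustrative note: the sketch below shows only the unit-hydrograph convolution part of the routing described above; the linearized Saint-Venant channel routing step is omitted and the triangular unit hydrograph is purely illustrative.)

```python
# Route sub-basin runoff with a unit hydrograph by discrete convolution.
import numpy as np

def route_runoff(runoff, unit_hydrograph):
    """Convolve a runoff series (depth per step) with a sub-basin unit hydrograph."""
    return np.convolve(runoff, unit_hydrograph)[: len(runoff)]

# Example: a simple 6-step triangular unit hydrograph (illustrative shape only)
uh = np.array([0.1, 0.3, 0.3, 0.15, 0.1, 0.05])
print(route_runoff(np.array([0, 5, 12, 3, 0, 0, 0, 0], float), uh))
```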

  10. Building Cyberinfrastructure to Support a Real-time National Flood Model

    NASA Astrophysics Data System (ADS)

    Salas, F. R.; Maidment, D. R.; Tolle, K.; Navarro, C.; David, C. H.; Corby, R.

    2014-12-01

    The National Weather Service (NWS) is divided into 13 regional River Forecast Centers across the country, where the Sacramento Soil Moisture Accounting (SAC-SMA) model is typically run over a 10-day window, 5 days in the past and 5 days in the future. Model inputs and outputs such as precipitation and surface runoff are spatially aggregated over approximately 6,600 forecast basins with an average area of 1,200 square kilometers. In contrast, the NHDPlus dataset, which represents the geospatial fabric of the country, defines over 3 million catchments with an average area of 3 square kilometers. Downscaling the NWS land surface model outputs to the NHDPlus catchment scale in real time requires the development of cyberinfrastructure to manage, share, compute and visualize large quantities of hydrologic data: streamflow computations through time for over 3 million river reaches. Between September 2014 and May 2015, the National Flood Interoperability Experiment (NFIE), coordinated through the Integrated Water Resource Science and Services (IWRSS) partners, will focus on building a national flood model for the country. This experiment will work to seamlessly integrate data and model services, hosted on local and cloud servers (e.g. Azure), from disparate data sources operating at various spatial and temporal scales. As such, this paper will present a scalable information model that leverages the Routing Application for Parallel Computation of Discharge (RAPID) model to produce real-time flow estimates for approximately 67,000 NHDPlus river reaches in the NWS West Gulf River Forecast Center region.

  11. Combined assimilation of streamflow and satellite soil moisture with the particle filter and geostatistical modeling

    NASA Astrophysics Data System (ADS)

    Yan, Hongxiang; Moradkhani, Hamid

    2016-08-01

    Assimilation of satellite soil moisture and streamflow data into distributed hydrologic models has received increasing attention over the past few years. This study provides a detailed analysis of the joint and separate assimilation of streamflow and Advanced Scatterometer (ASCAT) surface soil moisture into a distributed Sacramento Soil Moisture Accounting (SAC-SMA) model, using the recently developed particle filter-Markov chain Monte Carlo (PF-MCMC) method. Performance is assessed over the Salt River Watershed in Arizona, one of the Model Parameter Estimation Experiment (MOPEX) watersheds without anthropogenic effects. A total of five data assimilation (DA) scenarios are designed, and the effects of the locations of streamflow gauges and of the ASCAT soil moisture on the predictions of soil moisture and streamflow are assessed. In addition, a geostatistical model is introduced to overcome the significant bias and the spatial discontinuity of the satellite soil moisture. The results indicate that: (1) solely assimilating outlet streamflow can lead to biased soil moisture estimation; (2) when the study area is only partially covered by the satellite data, the geostatistical approach can estimate the soil moisture for the uncovered grid cells; (3) joint assimilation of streamflow and soil moisture from geostatistical modeling can further improve the surface soil moisture prediction. This study suggests that the geostatistical model is a helpful tool for complementing remote sensing data in hydrologic DA studies.
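
    (Illustrative note: a plain sequential importance resampling update for one assimilation cycle; the PF-MCMC used in the study adds a Markov chain move step that is omitted here, and the Gaussian observation likelihood is an assumption.)

```python
# One sequential importance resampling (SIR) particle filter update.
import numpy as np

def particle_filter_update(particles, h, y_obs, obs_std, rng=None):
    """particles: (N, n_state) ensemble; h maps a state vector to a predicted observation."""
    rng = rng or np.random.default_rng(0)
    y_pred = np.array([h(p) for p in particles])
    w = np.exp(-0.5 * ((y_obs - y_pred) / obs_std) ** 2)          # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)    # resample by weight
    return particles[idx]
```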

  12. Needed: An Updated Accountability Model

    ERIC Educational Resources Information Center

    Tucker, Marc

    2015-01-01

    It must have seemed simple to the framers of No Child Left Behind. For years, they had poured more and more money into federal programs for schools, yet reading performance had not improved. It appeared that the money had gone down a rat hole, and Congress was ready to hold schools accountable. It was time to get tough. Unfortunately, the…

  13. An Accountability Model for Counselors

    ERIC Educational Resources Information Center

    Krumboltz, John D.

    1974-01-01

    A sound counselor accountability system would collate counselor accomplishments with costs. It would define the domain of counselor responsibilities, use student behavior changes as evidence of counselor accomplishments, state counselor activities as costs, promote self-improvement, permit reports of failures and unknown outcomes, be designed by…

  14. An Institutional Accountability Model for Community Colleges.

    ERIC Educational Resources Information Center

    Harbour, Clifford P.

    2003-01-01

    Proposes a model for managing a community college's accountability environment and shows how it can be applied. Reports that the model is premised on the pluralistic perspective of accountability (Kearns), and uses Christensen's value network for building the community college model. (Contains 37 references.) (AUTH/NB)

  15. Implementing a trustworthy cost-accounting model.

    PubMed

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  16. The Utility of Two Shape Matching Error Functions in the Evaluation and Verification of SAC-HTET model Soil Moisture Variables

    NASA Astrophysics Data System (ADS)

    KIM, J.; Smith, M. B.; Koren, V.

    2013-12-01

    The National Oceanic and Atmospheric Administration's (NOAA) National Weather Service (NWS) has modified the Sacramento Soil Moisture Accounting (SAC-SMA) model to include an advanced treatment of evapotranspiration (SAC-HTET). In this study, we run SAC-HTET within the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) to simulate soil moisture grids over the Oklahoma Mesonet domain in real time. The main purpose of this study is to use novel shape-matching error functions to compare simulated soil moisture products with observed soil moisture. We compare the computed soil moisture products at the 4-km grid scale with in-situ observations using both traditional measures and two different shape-matching or similarity functions: the Hausdorff distance (HAUS) and the Earth Mover's Distance (EMD). Soil moisture is closely related to soil characteristics, which vary greatly in space and with depth; HAUS and EMD have the potential to account for these heterogeneities. The HAUS function allows for the incorporation of factors such as location and depth due to its intrinsically multi-dimensional nature. The EMD function requires mapping the set of soil moisture variables into a two-dimensional matrix to keep its computational cost manageable. In this study, we examine the utility of these novel shape-matching functions for evaluating the SAC-HTET model.
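
    (Illustrative note: the sketch below shows how the two measures can be computed with SciPy on placeholder series; the study's mapping of multi-depth soil moisture into a two-dimensional matrix for EMD is not reproduced.)

```python
# Earth Mover's Distance and Hausdorff distance between two soil moisture series.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.stats import wasserstein_distance

sim = np.random.rand(100)          # simulated soil moisture series (placeholder)
obs = np.random.rand(100)          # observed series (placeholder)

# EMD between the two value distributions
emd = wasserstein_distance(sim, obs)

# Hausdorff distance between the (time, value) curves treated as 2-D point sets
sim_pts = np.column_stack([np.arange(sim.size), sim])
obs_pts = np.column_stack([np.arange(obs.size), obs])
haus = max(directed_hausdorff(sim_pts, obs_pts)[0],
           directed_hausdorff(obs_pts, sim_pts)[0])
print(emd, haus)
```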

  17. Modeling habitat dynamics accounting for possible misclassification

    USGS Publications Warehouse

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology because land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging because classification error can be confused with true change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case in which true and observed habitat states are available for the same geographic unit (pixel), and the case in which true and observed states are obtained at one level of resolution but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially inflated and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics, and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.

  18. Integrating Soft Data into Hydrologic Modeling to Improve Post-fire Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Jung, H. Y.; Hogue, T. S.

    2008-12-01

    A significant problem with post-fire streamflow prediction is the limited availability of data for parameter estimation and for independent validation of model performance. The goal of the current study is to evaluate a range of optimization techniques that allow integration of alternative soft data into a hydrologic modeling and prediction framework and improve post-fire simulations. This project utilizes the Sacramento Soil Moisture Accounting model (SAC-SMA), the National Weather Service operational conceptual rainfall-runoff model, and incorporates both discharge and geochemical data to estimate model parameters. The analysis is undertaken in a watershed that has undergone an extensive land cover change (fire) and for which both pre- and post-fire geochemical and streamflow data are available. We utilize the Shuffled Complex Evolution Metropolis (SCEM) and the Generalized Likelihood Uncertainty Estimation (GLUE) algorithms coupled to SAC-SMA and integrate estimates of geochemically-derived flow components. Success is determined not only by the accurate prediction of total discharge, but also by the prediction of flow from contributing sources (i.e. overland, lateral and baseflow components). The coupled SCEM-SAC-SMA, using only discharge as a criterion, shows reasonable simulation of total runoff and the various flow components under pre-fire conditions. Post-fire model simulations show less accurate simulation of total discharge and unrealistic representation of watershed behavior. Pre-fire model runs using the coupled GLUE-SAC-SMA show reasonable performance when integrating total discharge as the threshold criterion, whereas the post-fire model run returned empty parameter sets (no sets met the threshold criterion). Predictions using GLUE-SAC-SMA and the derived flow components showed significant improvement, narrowing the uncertainty bounds in total discharge as well as in all observed flow components.
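
    (Illustrative note: a minimal GLUE-style sketch with Monte Carlo sampling, an NSE behavioural threshold and percentile bounds; run_sacsma, the threshold value and the 5-95% band are assumptions, not the study's configuration.)

```python
# GLUE-style behavioural sampling: keep parameter sets whose NSE exceeds a
# threshold and summarise the retained simulations with percentile bounds.
import numpy as np

def glue(run_sacsma, forcing, q_obs, bounds, n_samples=2000, threshold=0.6, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    behavioural, sims = [], []
    for _ in range(n_samples):
        p = lo + (hi - lo) * rng.uniform(size=lo.size)      # uniform prior sample
        q_sim = run_sacsma(p, forcing)
        nse = 1 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
        if nse >= threshold:                                # behavioural set
            behavioural.append(p)
            sims.append(q_sim)
    sims = np.array(sims)                                   # may be empty if threshold is too strict
    return np.array(behavioural), np.percentile(sims, [5, 95], axis=0)
```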

  19. Assimilation of AMSR-E snow water equivalent data in a spatially-lumped snow model

    NASA Astrophysics Data System (ADS)

    Dziubanski, David J.; Franz, Kristie J.

    2016-09-01

    Accurately initializing snow model states in hydrologic prediction models is important for estimating future snowmelt, water supplies, and flooding potential. While ground-based snow observations give the most reliable information about snowpack conditions, they are spatially limited. In the north-central USA, there are no continual observations of hydrologically critical snow variables. Satellites offer the most likely source of spatial snow data, such as the snow water equivalent (SWE), for this region. In this study, we test the impact of assimilating SWE data from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) instrument into the US National Weather Service (NWS) SNOW17 model for seven watersheds in the Upper Mississippi River basin. The SNOW17 is coupled with the NWS Sacramento Soil Moisture Accounting (SAC-SMA) model, and both simulated SWE and discharge are evaluated. The ensemble Kalman filter (EnKF) assimilation framework is applied and updating occurs on a daily cycle for water years 2006-2011. Prior to assimilation, the AMSR-E data are bias-corrected using data from the National Operational Hydrologic Remote Sensing Center (NOHRSC) airborne snow survey program. An average AMSR-E SWE bias of -17.91 mm was found for the study basins. SNOW17 and SAC-SMA model parameters from the North Central River Forecast Center (NCRFC) are used. Compared to a baseline run without assimilation, the SWE assimilation improved discharge for five of the seven study sites, in particular for high discharge magnitudes associated with snowmelt runoff. SWE and discharge simulations suggest that the SNOW17 is underestimating SWE and snowmelt rates in the study basins. Deep snow conditions and periods of snowmelt may have introduced error into the assimilation due to difficulty obtaining accurate brightness temperatures under these conditions. Overall results indicate that the AMSR-E data and EnKF are viable and effective solutions for improving simulations
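
    (Illustrative note: a textbook ensemble Kalman filter update of a scalar SWE state with a bias-corrected satellite observation; this is only the update step, not the full SNOW17/SAC-SMA assimilation framework of the study.)

```python
# One EnKF update of an ensemble of SWE states against an observed SWE value.
import numpy as np

def enkf_update_swe(swe_ens, swe_obs, obs_err_std, rng=None):
    """swe_ens: (N,) ensemble of modeled SWE; returns the updated ensemble."""
    rng = rng or np.random.default_rng(0)
    ens_var = np.var(swe_ens, ddof=1)
    gain = ens_var / (ens_var + obs_err_std ** 2)             # Kalman gain (scalar state)
    perturbed_obs = swe_obs + obs_err_std * rng.standard_normal(swe_ens.size)
    return swe_ens + gain * (perturbed_obs - swe_ens)
```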

  20. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess two-way interaction effects. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more
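
    (Illustrative note: PSUADE is not reproduced here. The sketch below is a bare-bones one-at-a-time elementary-effects screening in the spirit of MOAT, using a radial design rather than the exact Morris trajectory scheme.)

```python
# Elementary-effects (Morris-style) screening: perturb one factor at a time from
# random base points and rank factors by the mean absolute effect.
import numpy as np

def elementary_effects(model, bounds, n_traj=20, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    k = lo.size
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = lo + (hi - lo) * rng.uniform(size=k)          # random base point
        y0 = model(x)
        for i in rng.permutation(k):                      # one factor at a time
            x2 = x.copy()
            x2[i] = np.clip(x2[i] + delta * (hi[i] - lo[i]), lo[i], hi[i])
            effects[i].append((model(x2) - y0) / delta)
    # mean |EE| ranks importance; the std flags interactions / nonlinearity
    return [(np.mean(np.abs(e)), np.std(e)) for e in effects]
```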

  2. Accountability.

    ERIC Educational Resources Information Center

    Mullen, David J., Ed.

    This monograph, prepared to assist Georgia elementary principals to better understand accountability and its implications for educational improvement, sets forth many of the theoretical and philosophical bases from which accountability is being considered. Leon M. Lessinger begins this 5-paper presentation by describing the need for accountability…

  3. Accountability.

    ERIC Educational Resources Information Center

    Lashway, Larry

    1999-01-01

    This issue reviews publications that provide a starting point for principals looking for a way through the accountability maze. Each publication views accountability differently, but collectively these readings argue that even in an era of state-mandated assessment, principals can pursue proactive strategies that serve students' needs. James A.…

  4. Accountability.

    ERIC Educational Resources Information Center

    The Newsletter of the Comprehensive Center-Region VI, 1999

    1999-01-01

    Controversy surrounding the accountability movement is related to how the movement began in response to dissatisfaction with public schools. Opponents see it as one-sided, somewhat mean-spirited, and a threat to the professional status of teachers. Supporters argue that all other spheres of the workplace have accountability systems and that the…

  5. Student Accountability Model: Procedures Manual. Vocational Education, Part C.

    ERIC Educational Resources Information Center

    Morris, William; Gold, Ben K.

    The Student Accountability Model (SAM) was developed by a consortium of 12 members, to provide a system of procedures for identifying and describing California community college occupational students and for obtaining information about them after they leave college. The two components of the model are the Student Accounting Component…

  6. AB 1725 Model Accountability System. California Community Colleges. Revised.

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Board of Governors.

    This report proposes a model accountability system for the California community colleges to comply with the directives of Assembly Bill 1725 (AB 1725). The purpose of the accountability system is to provide colleges and districts, the board of governors, and the California legislature with information that will allow for the continued improvement…

  7. A memristor SPICE model accounting for synaptic activity dependence.

    PubMed

    Li, Qingjiang; Serb, Alexander; Prodromakis, Themistoklis; Xu, Hui

    2015-01-01

    In this work, we propose a new memristor SPICE model that accounts for the typical synaptic characteristics that have been previously demonstrated with practical memristive devices. We show that this model could account for both volatile and non-volatile memristance changes under distinct stimuli. We then demonstrate that our model is capable of supporting typical STDP with simple non-overlapping digital pulse pairs. Finally, we investigate the capability of our model to simulate the activity dependence dynamics of synaptic modification and present simulated results that are in excellent agreement with biological results. PMID:25785597

  10. A Diffusion Model Account of the Lexical Decision Task

    ERIC Educational Resources Information Center

    Ratcliff, Roger; Gomez, Pablo; McKoon, Gail

    2004-01-01

    The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables--accuracy, correct and error response times, and their…

  11. Program Evaluation: The Accountability Bridge Model for Counselors

    ERIC Educational Resources Information Center

    Astramovich, Randall L.; Coker, J. Kelly

    2007-01-01

    The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning…

  12. Rainfall-runoff modeling in a flashy tropical watershed using the distributed HL-RDHM model

    NASA Astrophysics Data System (ADS)

    Fares, Ali; Awal, Ripendra; Michaud, Jene; Chu, Pao-Shin; Fares, Samira; Kodama, Kevin; Rosener, Matt

    2014-11-01

    Many watersheds in Hawai'i are flash-flood prone due to their small contributing areas and frequent intense rainfall. Motivated by the possibility of developing an operational flood forecasting system, this study evaluated the performance of the National Weather Service (NWS) Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) in simulating the hydrology of the flood-prone Hanalei watershed on Kaua'i, Hawai'i. This rural watershed is very wet and has strong spatial rainfall gradients. Application of HL-RDHM to the Hanalei watershed required (i) modifying the Hydrologic Rainfall Analysis Project (HRAP) coordinate system; (ii) generating precipitation grids from rain gauge data; and (iii) generating Sacramento Soil Moisture Accounting Model (SAC-SMA) parameter and routing-parameter grids for the modified HRAP coordinate system. Results were obtained for several spatial resolutions. Hourly basin-average rainfall calculated from the full HRAP resolution grid (4 km × 4 km) was too low and inaccurate. More realistic rainfall and more accurate streamflow predictions were obtained with the ½ and ¼ HRAP grids. For a one-year period with the best precipitation data, the performance of HL-RDHM was satisfactory even without calibration, for both basin-averaged and distributed a priori parameter grids. Calibration and validation of HL-RDHM were conducted using a four-year data set each. The model reasonably matched the observed peak discharges and times to peak during the calibration and validation periods. The performance of the model was assessed using three statistical measures: root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS). Overall, HL-RDHM's performance was "very good" (NSE > 0.75, PBIAS < ±10) for the finer-resolution grids (½ or ¼ HRAP). The flood forecasting capability of the model was assessed using four accuracy measures (probability of false detection, false alarm ratio, critical
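
    (Illustrative note: standard definitions of the continuous measures named above, plus two of the categorical flood-detection scores, are sketched below; sign and threshold conventions are assumptions where the abstract does not state them.)

```python
# RMSE, NSE, PBIAS and two contingency-table flood-detection scores.
import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs, float) - np.asarray(sim, float)) ** 2)))

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def detection_scores(obs, sim, flood_threshold):
    """Probability of detection and false alarm ratio from a 2x2 contingency table."""
    obs_evt = np.asarray(obs) >= flood_threshold
    sim_evt = np.asarray(sim) >= flood_threshold
    hits = np.sum(obs_evt & sim_evt)
    misses = np.sum(obs_evt & ~sim_evt)
    false_alarms = np.sum(~obs_evt & sim_evt)
    pod = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    return pod, far
```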

  13. A model for integrating independent physicians into accountable care organizations.

    PubMed

    Shields, Mark C; Patel, Pankaj H; Manning, Martin; Sacks, Lee

    2011-01-01

    The Affordable Care Act encourages the formation of accountable care organizations as a new part of Medicare. Pending forthcoming federal regulations, though, it is unclear precisely how these ACOs will be structured. Although large integrated care systems that directly employ physicians may be most likely to evolve into ACOs, few such integrated systems exist in the United States. This paper demonstrates how Advocate Physician Partners in Illinois could serve as a model for a new kind of accountable care organization, by demonstrating how to organize physicians into partnerships with hospitals to improve care, cut costs, and be held accountable for the results. The partnership has signed its first commercial ACO contract effective January 1, 2011, with the largest insurer in Illinois, Blue Cross Blue Shield. Other commercial contracts are expected to follow. In a health care system still dominated by small, independent physician practices, this may constitute a more viable way to push the broader health care system toward accountable care. PMID:21163804

  14. An Evaluation-Accountability Model for Regional Education Centers.

    ERIC Educational Resources Information Center

    Barber, R. Jerry; Benson, Charles W.

    This paper presents the rationale, techniques, and structure used to develop and implement an evaluation-accountability program for a new regional Education Service Center in Texas. Needs assessment, a critical element in this model, consists of objectively identifying the educational needs of clients and establishing an initial list of…

  15. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (SNOW17) and rainfall-runoff model (SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system from input data, model structure, model parameters, and initial states. The goal of the current study is to undertake verification of potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e. RFC, DREAM (Differential Evolution Adaptive Metropolis)) as well as for a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB), located on the western side of the Sierra Nevada in northern California. Hindcasts are verified using both deterministic (Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes a comparison of the performance of the different optimized parameter sets and the DA framework, as well as an assessment of the impact of the initial conditions used for streamflow forecasts in the NFARB.

  16. Liquid drop model of spherical nuclei with account of viscosity

    NASA Astrophysics Data System (ADS)

    Khokonov, A. Kh.

    2016-01-01

    Within the nuclear liquid drop model, an analytical solution for the frequency of capillary oscillations is obtained taking into account the damping due to viscosity and the polarizability of the surrounding medium. The model has been applied to estimate the surface tension and viscosity of even-even spherical nuclei. It is shown that the energy shift of capillary oscillations of even-even spherical nuclei due to viscous dissipation yields viscosities in the interval 4.2-7.6 MeV fm⁻² c⁻¹ for nuclei from ¹⁰⁶Pd to ¹⁹⁸Hg.
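
    (Illustrative note: the paper's analytical result is not given in the abstract; for orientation only, the classical hydrodynamic expressions for the mode-l capillary oscillation frequency of an inviscid drop and the Lamb viscous damping rate are, under standard assumptions,)

```latex
\omega_l^{2} = \frac{l(l-1)(l+2)\,\sigma}{\rho R^{3}},
\qquad
\gamma_l = \frac{(l-1)(2l+1)\,\nu}{R^{2}}
```

    where σ is the surface tension, ρ the density, ν the kinematic viscosity and R the drop radius; the nuclear result discussed above additionally accounts for the polarizability of the surrounding medium.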

  17. Optimal control design that accounts for model mismatch errors

    SciTech Connect

    Kim, T.J.; Hull, D.G.

    1995-02-01

    A new technique is presented in this paper that reduces the complexity of state differential equations while accounting for modeling assumptions. The mismatch controls are defined as the differences between the model equations and the true state equations. The performance index of the optimal control problem is formulated with a set of tuning parameters that are user-selected to tune the control solution in order to achieve the best results. Computer simulations demonstrate that the tuned control law outperforms the untuned controller and produces results that are comparable to a numerically-determined, piecewise-linear optimal controller.

  18. Accommodating environmental variation in population models: metaphysiological biomass loss accounting.

    PubMed

    Owen-Smith, Norman

    2011-07-01

    1. There is a pressing need for population models that can reliably predict responses to changing environmental conditions and diagnose the causes of variation in abundance in space as well as through time. In this 'how to' article, it is outlined how standard population models can be modified to accommodate environmental variation in a heuristically conducive way. This approach is based on metaphysiological modelling concepts linking populations within food web contexts and underlying behaviour governing resource selection. Using population biomass as the currency, population changes can be considered at fine temporal scales taking into account seasonal variation. Density feedbacks are generated through the seasonal depression of resources even in the absence of interference competition. 2. Examples described include (i) metaphysiological modifications of Lotka-Volterra equations for coupled consumer-resource dynamics, accommodating seasonal variation in resource quality as well as availability, resource-dependent mortality and additive predation, (ii) spatial variation in habitat suitability evident from the population abundance attained, taking into account resource heterogeneity and consumer choice using empirical data, (iii) accommodating population structure through the variable sensitivity of life-history stages to resource deficiencies, affecting susceptibility to oscillatory dynamics and (iv) expansion of density-dependent equations to accommodate various biomass losses reducing population growth rate below its potential, including reductions in reproductive outputs. Supporting computational code and parameter values are provided. 3. The essential features of metaphysiological population models include (i) the biomass currency enabling within-year dynamics to be represented appropriately, (ii) distinguishing various processes reducing population growth below its potential, (iii) structural consistency in the representation of interacting populations and

  20. Short communication: Accounting for new mutations in genomic prediction models.

    PubMed

    Casellas, Joaquim; Esquivelzeta, Cecilia; Legarra, Andrés

    2013-08-01

    Genomic evaluation models so far do not allow for accounting of newly generated genetic variation due to mutation. The main target of this research was to extend current genomic BLUP models with mutational relationships (model AM), and compare them against standard genomic BLUP models (model A) by analyzing simulated data. Model performance and precision of the predicted breeding values were evaluated under different population structures and heritabilities. The deviance information criterion (DIC) clearly favored the mutational relationship model under large heritabilities or populations with moderate-to-deep pedigrees contributing phenotypic data (i.e., differences equal to or larger than 10 DIC units); this model provided slightly higher correlation coefficients between simulated and predicted genomic breeding values. On the other hand, null DIC differences, or even relevant advantages for the standard genomic BLUP model, were reported under small heritabilities and shallow pedigrees, although precision of the genomic breeding values did not differ across models at a significant level. This method allows for slightly more accurate genomic predictions and handling of newly created variation; moreover, this approach does not require additional genotyping or phenotyping efforts, but only a more accurate handling of available data. PMID:23746579

  2. A Diffusion Model Account of the Lexical Decision Task

    PubMed Central

    Ratcliff, Roger; Gomez, Pablo; McKoon, Gail

    2005-01-01

    The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables—accuracy, correct and error response times, and their distributions—and provided a description of how the component processes involved in the lexical decision task were affected by experimental variables. All of the variables investigated affected the rate at which information was accumulated from the stimuli—called drift rate in the model. The different drift rates observed for the various classes of stimuli can all be explained by a 2-dimensional signal-detection representation of stimulus information. The authors discuss how this representation and the diffusion model’s decision process might be integrated with current models of lexical access. PMID:14756592
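
    (Illustrative note: a minimal random-walk simulation of the two-boundary diffusion process described above; the drift rate, boundary separation and starting point values are illustrative, not the fitted estimates from the paper.)

```python
# Simulate the two-boundary drift-diffusion decision process by a random walk.
import numpy as np

def simulate_ddm(v=0.2, a=1.0, z=0.5, sigma=1.0, dt=0.001, n_trials=500, seed=0):
    """v: drift rate, a: boundary separation, z: relative starting point."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = z * a, 0.0
        while 0.0 < x < a:                               # walk until a boundary is hit
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        choices.append(1 if x >= a else 0)               # upper boundary = "word" response
    return np.array(rts), np.array(choices)

rt, ch = simulate_ddm()
print("mean RT:", rt.mean(), "P(upper boundary):", ch.mean())
```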

  3. A Meta-modeling Framework to Support Accountability in Business Process Modeling

    NASA Astrophysics Data System (ADS)

    Zou, Joe; de Vaney, Christopher; Wang, Yan

    Accountability is becoming a central theme in business today in the midst of the global financial crisis, as corporate scandals and fallouts dominate the front pages of the press. Businesses are demanding more accountability measures built in at the business process modeling level. Currently the business process modeling standards and methods mainly focus on the sequential flow aspect of business processes and leave the business aspect of accountability largely untouched. In this paper, we extend the OMG’s business modeling specifications to define a business accountability meta-model. The meta-model is complementary to the OMG’s Model-Driven Architecture (MDA) vision, laying out the foundation for future model generation and transformation for creating accountable business process solutions.

  4. Carbosoil, a land evaluation model for soil carbon accounting

    NASA Astrophysics Data System (ADS)

    Anaya-Romero, M.; Muñoz-Rojas, M.; Pino, R.; Jordan, A.; Zavala, L. M.; De la Rosa, D.

    2012-04-01

    The belowground carbon content is particularly difficult to quantify and most of the time is assumed to be a fixed fraction or ignored for lack of better information. In this respect, this research presents a land evaluation tool, Carbosoil, for predicting soil carbon accounting where such data are scarce or not available, as a new component of MicroLEIS DSS. The pilot study area was a Mediterranean region (Andalusia, Southern Spain) during 1956-2007. Input data were obtained from different data sources and include 1689 soil profiles from Andalusia (S Spain). Previously, detailed studies of changes in LU and vegetation carbon stocks, and of soil organic carbon (SOC) dynamics, were carried out. Previous results showed the influence of LU, climate (mean temperature and rainfall) and soil variables related to SOC dynamics. For instance, SCS decreased in Cambisols and Regosols by 80% when LU changed from forest to heterogeneous agricultural areas. Taking this into account, the input variables considered were LU, site (elevation, slope, erosion, type-of-drainage, and soil-depth), climate (mean winter/summer temperature and annual precipitation), and soil (pH, nitrates, CEC, sand/clay content, bulk density and field capacity). The available data set was randomly split into two parts: a training set (75%) and a validation set (25%). The model was built by using multiple linear regression. The regression coefficient (R2) obtained in the calibration and validation of Carbosoil was >0.9 for the considered soil sections (0-25, 25-50, and 50-75 cm). The validation showed the high accuracy of the model and its capacity to discriminate carbon distribution regarding different climate, LU and soil management scenarios. The Carbosoil model, together with the methodologies and information generated in this work, will be a useful basis for accurately quantifying and understanding the distribution of soil carbon, which is helpful for decision makers.
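
    A hedged sketch of the calibration/validation workflow described above (multiple linear regression with a random 75/25 split and R2 evaluation). The predictor matrix and response below are synthetic placeholders, not the MicroLEIS/Carbosoil variables or data.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        # Placeholder data standing in for the soil-profile records: each row is a
        # profile, each column a predictor (land use, elevation, slope, temperatures,
        # precipitation, pH, sand/clay content, ...); y is the SOC value for one
        # soil section (e.g. 0-25 cm).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1689, 10))
        y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=1689)

        # 75% training / 25% validation split, as described in the abstract.
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=42)
        model = LinearRegression().fit(X_tr, y_tr)
        print("calibration R2:", r2_score(y_tr, model.predict(X_tr)))
        print("validation R2: ", r2_score(y_val, model.predict(X_val)))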

  5. Accounting for uncertainty in distributed flood forecasting models

    NASA Astrophysics Data System (ADS)

    Cole, Steven J.; Robson, Alice J.; Bell, Victoria A.; Moore, Robert J.; Pierce, Clive E.; Roberts, Nigel

    2010-05-01

    Recent research investigating the uncertainty of distributed hydrological flood forecasting models will be presented. These findings utilise the latest advances in rainfall estimation, ensemble nowcasting and Numerical Weather Prediction (NWP). The hydrological flood model that forms the central focus of the study is the Grid-to-Grid Model or G2G: this is a distributed grid-based model that produces area-wide flood forecasts across the modelled domain. Results from applying the G2G Model across the whole of England and Wales on a 1 km grid will be shown along with detailed regional case studies of major floods, such as those of summer 2007. Accounting for uncertainty will be illustrated using ensemble rainfall forecasts from both the Met Office's STEPS nowcasting and high-resolution (~1.5 km) NWP systems. When these rainfall forecasts are used as input to the G2G Model, risk maps of flood exceedance can be produced in animated form that allow the evolving flood risk to be visualised in space and time. Risk maps for a given forecast horizon (e.g. the next 6 hours) concisely summarise a wealth of spatio-temporal flood forecast information and provide an efficient means to identify 'hot spots' of flood risk. These novel risk maps can be used to support flood warning in real-time and are being trialled operationally across England and Wales by the new joint Environment Agency and Met Office Flood Forecasting Centre.

  6. Reconstruction of Danio rerio Metabolic Model Accounting for Subcellular Compartmentalisation

    PubMed Central

    Bekaert, Michaël

    2012-01-01

    Plant and microbial metabolic engineering is commonly used in the production of functional foods and quality trait improvement. Computational model-based approaches have been used in this important endeavour. However, to date, fish metabolic models have only been scarcely and partially developed, in marked contrast to their prominent success in metabolic engineering. In this study we present the reconstruction of fully compartmentalised models of the Danio rerio (zebrafish) on a global scale. This reconstruction involves extraction of known biochemical reactions in D. rerio for both primary and secondary metabolism and the implementation of methods for determining subcellular localisation and assignment of enzymes. The reconstructed model (ZebraGEM) is amenable for constraint-based modelling analysis, and accounts for 4,988 genes coding for 2,406 gene-associated reactions and only 418 non-gene-associated reactions. A set of computational validations (i.e., simulations of known metabolic functionalities and experimental data) strongly testifies to the predictive ability of the model. Overall, the reconstructed model is expected to lay down the foundations for computational-based rational design of fish metabolic engineering in aquaculture. PMID:23166792

  7. Meander migration modeling accounting for the effect of riparian vegetation

    NASA Astrophysics Data System (ADS)

    Eke, E.; Parker, G.

    2010-12-01

    A numerical model is proposed to study the development of meandering rivers so as to reproduce patterns of both migration and spatial/temporal width variation observed in nature. The model comprises: a) a depth-averaged channel hydrodynamic/morphodynamic model developed using a two-parameter perturbation expansion technique that considers perturbations induced by curvature and spatial channel width variation, and b) a bank migration model which separately considers bank erosional and depositional processes. Unlike most previous meandering river models, where channel migration is characterized only in terms of bank erosion, channel dynamics are here defined at the channel banks, which are allowed to migrate independently via deposition/erosion based on the local flow field and bank characteristics. A bank erodes (deposits) if the near-bank Shields stress computed from the flow field is greater (less) than a specified threshold. This threshold Shields number is equivalent to the formative Shields stress characterizing bankfull flow. Excessive bank erosion is controlled by means of natural armoring provided by cohesive/rooted slump blocks produced when a stream erodes into the lower non-cohesive part of a composite bank. Bank deposition is largely due to sediment trapping by vegetation; the resulting channel narrowing is related to both a natural rate of vegetal encroachment and flow characteristics. This new model allows the channel the freedom to vary in width both spatially and in time as it migrates, thus accounting for the bi-directional coupling between vegetation and flow dynamics and reproducing more realistic planform geometries. Preliminary results based on the model are presented.
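
    A toy illustration of the threshold-based bank rule described above; the coefficient values and the function name are assumptions for illustration, not the paper's closure relations.

        def bank_migration_rate(shields_near_bank, shields_threshold,
                                erosion_coeff=1e-7, deposition_coeff=5e-8):
            """Toy bank rule: erode when the near-bank Shields stress exceeds the
            (bankfull) threshold, deposit (vegetation-aided narrowing) when it is
            below. Positive return values mean outward erosion, negative values mean
            inward deposition (m/s); the coefficients are purely illustrative."""
            excess = shields_near_bank - shields_threshold
            if excess > 0:
                return erosion_coeff * excess      # erosion proportional to excess stress
            return deposition_coeff * excess       # deposition proportional to the deficit

        # Each bank is moved independently every time step using its own local stress value.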

  8. Accounting for Water Insecurity in Modeling Domestic Water Demand

    NASA Astrophysics Data System (ADS)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

    Water demand management uses price elasticity estimates to predict consumer demand in relation to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies which can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of the quantity a household consumes. The model indicates that, with all other variables held equal, a household will buy more water when the users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.

  9. Capture-recapture survival models taking account of transients

    USGS Publications Warehouse

    Pradel, R.; Hines, J.E.; Lebreton, J.D.; Nichols, J.D.

    1997-01-01

    The presence of transient animals, common enough in natural populations, invalidates the estimation of survival by traditional capture-recapture (CR) models designed for the study of residents only. Also, the study of transience is interesting in itself. We thus develop here a class of CR models to describe the presence of transients. In order to assess the merits of this approach we examine the bias of the traditional survival estimators in the presence of transients in relation to the power of different tests for detecting transients. We also compare the relative efficiency of an ad hoc approach to dealing with transients that leaves out the first observation of each animal. We then study a real example using lazuli bunting (Passerina amoena) and, in conclusion, discuss the design of an experiment aiming at the estimation of transience. In practice, the presence of transients is easily detected whenever the risk of bias is high. The ad hoc approach, which yields unbiased estimates for residents only, is satisfactory in a time-dependent context but poorly efficient when parameters are constant. The example shows that intermediate situations between strict 'residence' and strict 'transience' may exist in certain studies. Yet, most of the time, if the study design takes into account the expected length of stay of a transient, it should be possible to efficiently separate the two categories of animals.

  10. Testing Neuronal Accounts of Anisotropic Motion Perception with Computational Modelling

    PubMed Central

    Wong, William; Chiang Price, Nicholas Seow

    2014-01-01

    There is an over-representation of neurons in early visual cortical areas that respond most strongly to cardinal (horizontal and vertical) orientations and directions of visual stimuli, and cardinal- and oblique-preferring neurons are reported to have different tuning curves. Collectively, these neuronal anisotropies can explain two commonly-reported phenomena of motion perception – the oblique effect and reference repulsion – but it remains unclear whether neuronal anisotropies can simultaneously account for both perceptual effects. We show in psychophysical experiments that reference repulsion and the oblique effect do not depend on the duration of a moving stimulus, and that brief adaptation to a single direction simultaneously causes a reference repulsion in the orientation domain, and the inverse of the oblique effect in the direction domain. We attempted to link these results to underlying neuronal anisotropies by implementing a large family of neuronal decoding models with parametrically varied levels of anisotropy in neuronal direction-tuning preferences, tuning bandwidths and spiking rates. Surprisingly, no model instantiation was able to satisfactorily explain our perceptual data. We argue that the oblique effect arises from the anisotropic distribution of preferred directions evident in V1 and MT, but that reference repulsion occurs separately, perhaps reflecting a process of categorisation occurring in higher-order cortical areas. PMID:25409518

  11. Conception of a cost accounting model for doctors' offices.

    PubMed

    Britzelmaier, Bernd; Eller, Brigitte

    2004-01-01

    Physicians are required, due to economic, financial, competitive, demographic and market-induced framework conditions, to pay increasing attention to the entrepreneurial administration of their offices. Because of restructuring policies throughout the public health system--on the grounds of increasing financing problems--more and better transparency of costs will be indispensable in all fields of medical activity in the future. The more cost-conscious public health insurance institutions and other public health funds will need professional cost accounting systems, which provide, at minimal maintenance expense, standardised basic cost information as a basis for decisions. The conception of cost accounting for doctors' offices presented in this paper shows an integrated cost accounting approach based on activity-based and marginal costing philosophies. The conception presented provides a suitable basis for the development of standard software for cost accounting systems for doctors' offices.

  12. Accounting for Recoil Effects in Geochronometers: A New Model Approach

    NASA Astrophysics Data System (ADS)

    Lee, V. E.; Huber, C.

    2012-12-01

    dated grain is a major control on the magnitude of recoil loss, the first feature is the ability to calculate recoil effects on isotopic compositions for realistic, complex grain shapes and surface roughnesses. This is useful because natural grains may have irregular shapes that do not conform to simple geometric descriptions. Perhaps more importantly, the surface area over which recoiled nuclides are lost can be significantly underestimated when grain surface roughness is not accounted for, since the recoil distances can be of similar characteristic lengthscales to surface roughness features. The second key feature is the ability to incorporate dynamical geologic processes affecting grain surfaces in natural settings, such as dissolution and crystallization. We describe the model and its main components, and point out implications for the geologically-relevant chronometers mentioned above.

  13. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
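
    For illustration, a short sketch of how competing model structures can be weighted with an information criterion such as AIC or BIC before averaging their outputs; the numerical values are hypothetical.

        import numpy as np

        def information_criterion_weights(ic_values):
            """Convert AIC or BIC values of competing model structures into
            normalised model-averaging weights: w_i is proportional to
            exp(-0.5 * (IC_i - IC_min))."""
            ic = np.asarray(ic_values, dtype=float)
            delta = ic - ic.min()
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Hypothetical criterion values for three plausible model structures:
        weights = information_criterion_weights([1250.3, 1251.1, 1257.8])
        # The averaged decision-model output is then sum(w_i * output_i).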

  14. Resource Allocation Models and Accountability: A Jamaican Case Study

    ERIC Educational Resources Information Center

    Nkrumah-Young, Kofi K.; Powell, Philip

    2008-01-01

    Higher education institutions (HEIs) may be funded privately, by the state or by a mixture of the two. Nevertheless, any state financing of HE necessitates a mechanism to determine the level of support and the channels through which it is to be directed; that is, a resource allocation model. Public funding, through resource allocation models,…

  15. Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems

    EPA Science Inventory

    Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...

  16. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-20

    ... HUMAN SERVICES Centers for Medicare & Medicaid Services Medicare Program; Pioneer Accountable Care... participate in the Pioneer Accountable Care Organization Model for a period beginning in 2011 and ending...://innovations.cms.gov/areas-of-focus/seamless-and-coordinated-care-models/pioneer-aco . Application...

  17. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    ERIC Educational Resources Information Center

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  18. Mental Models and the Suppositional Account of Conditionals

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Gauffroy, Caroline; Lecas, Jean-Francois

    2008-01-01

    The mental model theory of conditional reasoning presented by P. N. Johnson-Laird and R. M. J. Byrne (2002) has recently been the subject of criticisms (e.g., J. St. B. T. Evans, D. E. Over, & S. J. Handley, 2005). The authors argue that the theoretical conflict can be resolved by differentiating 2 kinds of reasoning, reasoning about possibilities…

  19. Facilitative Orthographic Neighborhood Effects: The SERIOL Model Account

    ERIC Educational Resources Information Center

    Whitney, Carol; Lavidor, Michal

    2005-01-01

    A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the…

  20. Modeling tools to Account for Ethanol Impacts on BTEX Plumes

    EPA Science Inventory

    Widespread usage of ethanol in gasoline leads to impacts at leak sites which differ from those of non-ethanol gasolines. The presentation reviews current research results on the distribution of gasoline and ethanol, biodegradation, phase separation and cosolvancy. Model results f...

  1. Leadership Accountability Models: Issues of Policy and Practice.

    ERIC Educational Resources Information Center

    Wallace, Stephen O.; Sweatt, Owen; Acker-Hocevar, Michele

    This paper explores two questions: "What philosophical views of educational leadership will adequately allow us to meet the demands of a rapidly changing world?" and "How should such leadership be assessed?" The article asserts that evaluation of educational leadership needs to break away from the limitations of restrictive models to become…

  2. A Historical Account of the Hypodermic Model in Mass Communication.

    ERIC Educational Resources Information Center

    Bineham, Jeffery L.

    1988-01-01

    Critiques different historical conceptions of mass communication research. Argues that the different conceptions of the history of mass communication research, and of the hypodermic model (viewing the media as an all-powerful and direct influence on society), influence the theoretical and methodological choices made by mass media scholars. (MM)

  3. An evacuation model accounting for elementary students' individual properties

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Chen, Liang; Guo, Ren-Yong; Shang, Hua-Yan

    2015-12-01

    In this paper, we propose a cellular automata model for pedestrian flow to investigate the effects of elementary students' individual properties on the evacuation process in a classroom with two exits. In this model, each student's route choice behavior is determined by the capacity of his current route to each exit, the distance between his current position and the corresponding exit, the repulsive interactions between him and his adjacent students, and the degree of congestion near each exit; the elementary students are sorted into rational and irrational students. The simulation results show that the proportion of irrational students has a significant impact on the evacuation process and efficiency, and that having all students evacuate simultaneously may be inefficient.
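
    A rough sketch of the kind of route-choice scoring the abstract describes; the weights and the functional form are illustrative assumptions, not the published cellular-automata rules.

        def route_score(route_capacity, dist_to_exit, repulsion, congestion,
                        w_cap=1.0, w_dist=1.0, w_rep=0.5, w_cong=1.0):
            """Toy utility of one candidate exit for one student: larger capacity
            raises the score, while distance, repulsive interactions with adjacent
            students and congestion near the exit lower it. The weights are
            illustrative assumptions, not the published parameters."""
            return (w_cap * route_capacity
                    - w_dist * dist_to_exit
                    - w_rep * repulsion
                    - w_cong * congestion)

        # A "rational" student picks the exit with the highest score each time step;
        # an "irrational" student might, for example, ignore congestion or choose randomly.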

  4. A Mathematical Model of Sentimental Dynamics Accounting for Marital Dissolution

    PubMed Central

    Rey, José-Manuel

    2010-01-01

    Background Marital dissolution is ubiquitous in western societies. It poses major scientific and sociological problems both in theoretical and therapeutic terms. Scholars and therapists agree on the existence of a sort of second law of thermodynamics for sentimental relationships. Effort is required to sustain them. Love is not enough. Methodology/Principal Findings Building on a simple version of the second law we use optimal control theory as a novel approach to model sentimental dynamics. Our analysis is consistent with sociological data. We show that, when both partners have similar emotional attributes, there is an optimal effort policy yielding a durable happy union. This policy is prey to structural destabilization resulting from a combination of two factors: there is an effort gap because the optimal policy always entails discomfort and there is a tendency to lower effort to non-sustaining levels due to the instability of the dynamics. Conclusions/Significance These mathematical facts implied by the model unveil an underlying mechanism that may explain couple disruption in real scenarios. Within this framework the apparent paradox that a union consistently planned to last forever will probably break up is explained as a mechanistic consequence of the second law. PMID:20360987

  5. Dynamic model of production enterprises based on accounting registers and its identification

    NASA Astrophysics Data System (ADS)

    Sirazetdinov, R. T.; Samodurov, A. V.; Yenikeev, I. A.; Markov, D. S.

    2016-06-01

    The report focuses on the mathematical modeling of economic entities based on accounting registers. A dynamic model of the financial and economic activity of the enterprise is developed as a system of differential equations. Algorithms are created for identifying the parameters of the dynamic model, and the model is constructed and identified for Russian machine-building enterprises.

  6. Key Elements for Educational Accountability Models in Transition: A Guide for Policymakers

    ERIC Educational Resources Information Center

    Klau, Kenneth

    2010-01-01

    State educational accountability models are in transition. Whether modifying the present accountability system to comply with existing state and federal requirements or anticipating new ones--such as the U.S. Department of Education's (ED) Race to the Top competition--recording the experiences of state education agencies (SEAs) that are currently…

  7. A Dynamic Simulation Model of the Management Accounting Information Systems (MAIS)

    NASA Astrophysics Data System (ADS)

    Konstantopoulos, Nikolaos; Bekiaris, Michail G.; Zounta, Stella

    2007-12-01

    The aim of this paper is to examine the factors which determine the problems and the advantages on the design of management accounting information systems (MAIS). A simulation is carried out with a dynamic model of the MAIS design.

  8. A Teacher Accountability Model for Overcoming Self-Exclusion of Pupils

    ERIC Educational Resources Information Center

    Jamal, Abu-Hussain; Tilchin, Oleg; Essawi, Mohammad

    2015-01-01

    Self-exclusion of pupils is one of the prominent challenges of education. In this paper we propose the TERA model, which shapes the process of creating formative accountability of teachers to overcome the self-exclusion of pupils. Development of the model includes elaboration and integration of interconnected model components. The TERA model…

  9. State Growth Models for School Accountability: Progress on Development and Reporting Measures of Student Growth

    ERIC Educational Resources Information Center

    Blank, Rolf K.

    2010-01-01

    The Council of Chief State School Officers (CCSSO) is working to respond to increased interest in the use of growth models for school accountability. Growth models are based on tracking change in individual student achievement scores over multiple years. While growth models have been used for decades in academic research and program evaluation, a…

  10. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    ERIC Educational Resources Information Center

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating the technique of teaching to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…

  11. Modeling Floods under Climate Change Condition in Otava River, Czech Republic: A Time Scale Issue

    NASA Astrophysics Data System (ADS)

    Danhelka, J.; Krejci, J.; Vlasak, T.

    2009-04-01

    While modeling of climate change (CC) impacts on low flow and water balance is commonly done using daily time series of Global Circulation Model (GCM) outputs, assessing CC impacts on rare events such as floods demands a special methodology. This paper demonstrates the methodology, its results, and their sensitivity to the length of simulation in a meso-scale basin. Multiple regional projections of temperature and precipitation under the A2, A1B and B1 scenarios for 2040-2069 were evaluated in a study by the Czech Hydrometeorological Institute and Charles University (Pretel et al. 2008) for the Czech Republic. Daily time series 30 years and 100 years in length (precipitation, Tmax, Tmin) were generated using LARS-WG (Semenov, 2008) based on the expected monthly change of temperature and precipitation amount and variability for the upper Otava river basin, a mountainous region in SW Bohemia. Daily precipitation data were disaggregated to a 6 h time step using a three-step random generator. The spatial distribution of precipitation was based on random sampling of relevant historical analogues, while temperature was distributed using a simple vertical gradient rule. The derived time series for the A2, A1B, B1 and recent climate (RC) scenarios were input to the calibrated hydrological modeling system AquaLog (using SAC-SMA for rainfall-runoff modeling). A correction of the SAC-SMA parameter defining potential evapotranspiration was applied for the changed climate. Evaluation was made for the Susice profile (534.5 km2), representing the mountainous part of the basin, and the downstream Katovice profile (1133.4 km2). Results confirmed the expected decrease of annual flow by 5-10 % (10-15 % in summer, 0-5 % in winter) for all modeled CC scenarios (for the period 2040-2069) compared to the recent climate. Design flows were computed from yearly peaks using standard methodology. A decrease in design flow curves was observed for Katovice, while no change (A1B, B1) or an increase (A2) was found for Susice in the 100-year time series. Estimates of 100y floods based on 30 or 100 years
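
    As an illustration of estimating a design flow from yearly peaks, a hedged sketch using a Gumbel (EV1) fit; this is a generic stand-in for the "standard methodology" mentioned above, and the actual Czech procedure may differ.

        import numpy as np
        from scipy.stats import gumbel_r

        def design_flow(annual_peaks, return_period=100.0):
            """Estimate the T-year design flow from annual peak discharges by fitting
            a Gumbel (EV1) distribution to the series of yearly maxima."""
            loc, scale = gumbel_r.fit(np.asarray(annual_peaks, dtype=float))
            return gumbel_r.isf(1.0 / return_period, loc=loc, scale=scale)

        # Synthetic 30-year series of annual peaks (m3/s), purely for demonstration:
        peaks = gumbel_r.rvs(loc=50.0, scale=15.0, size=30, random_state=0)
        q100 = design_flow(peaks)          # compare, e.g., recent climate vs. an A2 series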

  12. A simulation model of hospital management based on cost accounting analysis according to disease.

    PubMed

    Tanaka, Koji; Sato, Junzo; Guo, Jinqiu; Takada, Akira; Yoshihara, Hiroyuki

    2004-12-01

    Since a little before 2000, hospital cost accounting has been increasingly performed at Japanese national university hospitals. At Kumamoto University Hospital, for instance, departmental costs have been analyzed since 2000. Since 2003, the cost balance has also been obtained for certain diseases in preparation for the Diagnosis-Related Group and Prospective Payment System. On the basis of these experiences, we have constructed a simulation model of hospital management. This program has worked correctly in repeated trials and with satisfactory speed. Although there is room for improvement in the detailed accounts and the cost accounting engine, the basic model has proved satisfactory. We have constructed a hospital management model based on the financial data of an existing hospital. We will later improve this program in terms of its construction and by using a wider variety of hospital management data. A prospective outlook may be obtained for the practical application of this hospital management model.

  13. Hazardous gas dispersion: a CFD model accounting for atmospheric stability classes.

    PubMed

    Pontiggia, M; Derudi, M; Busini, V; Rota, R

    2009-11-15

    Nowadays, thanks to increasing CPU power, the use of Computational Fluid Dynamics (CFD) is rapidly gaining ground in the industrial risk assessment area as well, replacing integral models in particular situations, such as those involving complex terrains or large obstacles. Nevertheless, commercial CFD codes usually do not provide a specific turbulence model for simulating atmospheric stratification effects, which are accounted for by integral models through the well-known stability-class approach. In this work, a new approach able to take account of atmospheric features in CFD simulations has been developed and validated by comparison with available experimental data. PMID:19619939

  14. Hazardous gas dispersion: a CFD model accounting for atmospheric stability classes.

    PubMed

    Pontiggia, M; Derudi, M; Busini, V; Rota, R

    2009-11-15

    Nowadays, thanks to increasing CPU power, the use of Computational Fluid Dynamics (CFD) is rapidly gaining ground in the industrial risk assessment area as well, replacing integral models in particular situations, such as those involving complex terrains or large obstacles. Nevertheless, commercial CFD codes usually do not provide a specific turbulence model for simulating atmospheric stratification effects, which are accounted for by integral models through the well-known stability-class approach. In this work, a new approach able to take account of atmospheric features in CFD simulations has been developed and validated by comparison with available experimental data.

  15. Testing the limits of the 'joint account' model of genetic information: a legal thought experiment.

    PubMed

    Foster, Charles; Herring, Jonathan; Boyd, Magnus

    2015-05-01

    We examine the likely reception in the courtroom of the 'joint account' model of genetic confidentiality. We conclude that the model, as modified by Gilbar and others, is workable and reflects, better than more conventional legal approaches, both the biological and psychological realities and the obligations owed under Articles 8 and 10 of the European Convention on Human Rights (ECHR).

  16. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  17. Early lessons from accountable care models in the private sector: partnerships between health plans and providers.

    PubMed

    Higgins, Aparna; Stewart, Kristin; Dawson, Kirstin; Bocchino, Carmella

    2011-09-01

    New health care delivery and payment models in the private sector are being shaped by active collaboration between health insurance plans and providers. We examine key characteristics of several of these private accountable care models, including their overall efforts to improve the quality, efficiency, and accountability of care; their criteria for selecting providers; the payment methods and performance measures they are using; and the technical assistance they are supplying to participating providers. Our findings show that not all providers are equally ready to enter into these arrangements with health plans and therefore flexibility in design of these arrangements is critical. These findings also hold lessons for the emerging public accountable care models, such as the Medicare Shared Savings Program, underscoring providers' need for comprehensive and timely data and analytic reports; payment tailored to providers' readiness for these contracts; and measurement of quality across multiple years and care settings. PMID:21900663

  18. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  19. Supportive Accountability: A Model for Providing Human Support to Enhance Adherence to eHealth Interventions

    PubMed Central

    2011-01-01

    The effectiveness of and adherence to eHealth interventions is enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical literature, that can guide research into human support components of eHealth interventions. A review of the literature revealed little relevant information from clinical sciences. Applicable literature was drawn primarily from organizational psychology, motivation theory, and computer-mediated communication (CMC) research. We have developed a model, referred to as “Supportive Accountability.” We argue that human support increases adherence through accountability to a coach who is seen as trustworthy, benevolent, and having expertise. Accountability should involve clear, process-oriented expectations that the patient is involved in determining. Reciprocity in the relationship, through which the patient derives clear benefits, should be explicit. The effect of accountability may be moderated by patient motivation. The more intrinsically motivated patients are, the less support they likely require. The process of support is also mediated by the communications medium (eg, telephone, instant messaging, email). Different communications media each have their own potential benefits and disadvantages. We discuss the specific components of accountability, motivation, and CMC medium in detail. The proposed model is a first step toward understanding how human support enhances adherence to eHealth interventions. Each component of the proposed model is a testable hypothesis. As we develop viable human support models, these should be manualized to facilitate dissemination. PMID:21393123

  20. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies and those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  1. An Expansion of the Trait-State-Occasion Model: Accounting for Shared Method Variance

    ERIC Educational Resources Information Center

    LaGrange, Beth; Cole, David A.

    2008-01-01

    This article examines 4 approaches for explaining shared method variance, each applied to a longitudinal trait-state-occasion (TSO) model. Many approaches have been developed to account for shared method variance in multitrait-multimethod (MTMM) data. Some of these MTMM approaches (correlated method, orthogonal method, correlated method minus one,…

  2. The Politics and Statistics of Value-Added Modeling for Accountability of Teacher Preparation Programs

    ERIC Educational Resources Information Center

    Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas

    2014-01-01

    Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…

  3. Students' Use of the Energy Model to Account for Changes in Physical Systems

    ERIC Educational Resources Information Center

    Papadouris, Nico; Constantinou, Constantinos P.; Kyratsi, Theodora

    2008-01-01

    The aim of this study is to explore the ways in which students, aged 11-14 years, account for certain changes in physical systems and the extent to which they draw on an energy model as a common framework for explaining changes observed in diverse systems. Data were combined from two sources: interviews with 20 individuals and an open-ended…

  4. Developing a Model for Identifying Students at Risk of Failure in a First Year Accounting Unit

    ERIC Educational Resources Information Center

    Smith, Malcolm; Therry, Len; Whale, Jacqui

    2012-01-01

    This paper reports on the process involved in attempting to build a predictive model capable of identifying students at risk of failure in a first year accounting unit in an Australian university. Identifying attributes that contribute to students being at risk can lead to the development of appropriate intervention strategies and support…

  5. Increasing Accountability in Student Affairs through a New Comprehensive Assessment Model

    ERIC Educational Resources Information Center

    Barham, Janice Davis; Scott, Joel H.

    2006-01-01

    This article gives an overview of a new model for assessment practice within student affairs divisions. With the current increase of accountability and greater demands from higher education stakeholders, student affairs practitioners need to understand how to demonstrate the effectiveness and value of their work as it relates to the mission of…

  6. Aviation security cargo inspection queuing simulation model for material flow and accountability

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Rose, Terri A; Brumback, Daryl L

    2009-01-01

    Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we develop an aviation security cargo inspection queuing simulation model for material flow and accountability that will allow cargo managers to conduct impact studies of current and proposed business practices as they relate to inspection procedures, material flow, and accountability.

  7. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, objective functions, hydrologic model structures and optimization methods. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided close calibration performances under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performances for different calibration periods from one year to seven years were hard to generalize because the four hydrologic models have different levels of complexity and different years contain different amounts of information in the hydrological observations. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
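
    For reference, minimal implementations of three of the objective functions mentioned above (Nash-Sutcliffe efficiency, index of agreement, normalised RMSE); these are the standard textbook formulas, not code from the study.

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency (1 = perfect, < 0 = worse than the mean)."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def index_of_agreement(obs, sim):
            """Willmott's index of agreement d, bounded between 0 and 1."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
            return 1.0 - np.sum((obs - sim) ** 2) / denom

        def nrmse(obs, sim):
            """Root mean square error normalised by the mean of the observations."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return np.sqrt(np.mean((obs - sim) ** 2)) / obs.mean()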

  8. Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad

    2006-01-01

    This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'propagate-burn-propagate' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons between it and three methods using modified linearized covariance propagation are made. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. In the second method, the maneuver is not modeled but the uncertainty in its magnitude is accounted for by the inclusion of a process noise matrix. In the third method, which is essentially a hybrid of the first two, the nominal portion of the maneuver is included via the state transition matrix while a process noise matrix is used to account for the magnitude uncertainty. Since this method also correctly accounts for the presence of the maneuver in the nominal orbit, it is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, Pc. However, applications for the two other methods exist and are briefly discussed. Despite the fact that the process model ('propagate-burn-propagate') that was studied was very simple - point-mass gravitational effects due to the Earth combined with an impulsive delta-V in the velocity direction for the maneuver - generalizations to more complex scenarios, including high fidelity force models, finite duration maneuvers, and maneuver pointing errors, are straightforward and are discussed in the conclusion.
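
    A compact sketch of the linearised 'propagate-burn-propagate' covariance update in the spirit of the hybrid (third) method described above; the matrix names and the placement of the process-noise term are illustrative.

        import numpy as np

        def propagate_burn_propagate(P0, Phi1, Phi2, Q_burn=None):
            """Linearised covariance for a propagate-burn-propagate sequence:
            P0 is the initial state covariance, Phi1 the state transition matrix up
            to the maneuver, Phi2 the one after it, and Q_burn an optional
            process-noise matrix representing maneuver execution uncertainty."""
            P = Phi1 @ P0 @ Phi1.T          # propagate to the burn epoch
            if Q_burn is not None:
                P = P + Q_burn              # inject execution uncertainty at the burn
            return Phi2 @ P @ Phi2.T        # propagate to the final epoch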

  9. Cost accounting models used for price-setting of health services: an international review.

    PubMed

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals.

  10. A selection model for accounting for publication bias in a full network meta-analysis.

    PubMed

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency.

  11. A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks

    PubMed Central

    Wang, Ping; Zhang, Lin; Li, Victor O. K.

    2013-01-01

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708

  12. Predicting NonInertial Effects with Algebraic Stress Models which Account for Dissipation Rate Anisotropies

    NASA Technical Reports Server (NTRS)

    Jongen, T.; Machiels, L.; Gatski, T. B.

    1997-01-01

    Three types of turbulence models which account for rotational effects in noninertial frames of reference are evaluated for the case of incompressible, fully developed rotating turbulent channel flow. The different types of models are a Coriolis-modified eddy-viscosity model, a realizable algebraic stress model, and an algebraic stress model which accounts for dissipation rate anisotropies. A direct numerical simulation of a rotating channel flow is used for the turbulent model validation. This simulation differs from previous studies in that significantly higher rotation numbers are investigated. Flows at these higher rotation numbers are characterized by a relaminarization on the cyclonic or suction side of the channel, and a linear velocity profile on the anticyclonic or pressure side of the channel. The predictive performance of the three types of models are examined in detail, and formulation deficiencies are identified which cause poor predictive performance for some of the models. Criteria are identified which allow for accurate prediction of such flows by algebraic stress models and their corresponding Reynolds stress formulations.

  13. Meta-analysis of diagnostic tests accounting for disease prevalence: a new model using trivariate copulas.

    PubMed

    Hoyer, A; Kuss, O

    2015-05-20

    In real life and somewhat contrary to biostatistical textbook knowledge, sensitivity and specificity (and not only predictive values) of diagnostic tests can vary with the underlying prevalence of disease. In meta-analysis of diagnostic studies, accounting for this fact naturally leads to a trivariate expansion of the traditional bivariate logistic regression model with random study effects. In this paper, a new model is proposed using trivariate copulas and beta-binomial marginal distributions for sensitivity, specificity, and prevalence as an expansion of the bivariate model. Two different copulas are used, the trivariate Gaussian copula and a trivariate vine copula based on the bivariate Plackett copula. This model has a closed-form likelihood, so standard software (e.g., SAS PROC NLMIXED) can be used. The results of a simulation study have shown that the copula models perform at least as good but frequently better than the standard model. The methods are illustrated by two examples.

  14. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection

    USGS Publications Warehouse

    Zipkin, Elise F.; Grant, Evan H. Campbell; Fagan, William F.

    2012-01-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multi-species hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions of species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation dataset. We found that wetland hydroperiod (the length of time that a wetland holds water) as well as the occurrence state in the prior year were generally the most important factors in determining occupancy. The model with only habitat covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multi-species models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for
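
    A hedged sketch of turning posterior draws of predicted occupancy into a posterior distribution of ROC AUC; the variable names are assumptions, and the authors' correction for false negatives in the evaluation data is not shown.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def posterior_auc(psi_samples, detected):
            """Posterior distribution of ROC AUC for predicted wetland occupancy.
            psi_samples: (n_draws, n_sites) posterior draws of occupancy probability;
            detected: 0/1 observations at the evaluation sites. One AUC per draw, so
            predictive quality is reported together with its uncertainty."""
            return np.array([roc_auc_score(detected, psi) for psi in psi_samples])

        # auc_draws = posterior_auc(psi_samples, detected)
        # np.percentile(auc_draws, [2.5, 50, 97.5])  -> credible interval for AUC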

  15. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection.

    PubMed

    Zipkin, Elise F; Grant, Evan H Campbell; Fagan, William F

    2012-10-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multispecies hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions about species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation data set. We found that wetland hydroperiod (the length of time that a wetland holds water), as well as the occurrence state in the prior year, were generally the most important factors in determining occupancy. The model with habitat-only covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multispecies models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for

  16. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    NASA Astrophysics Data System (ADS)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt, prescribing instead a constant melt reduction across the entire glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model's empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to that of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the
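
    To make the structure of such a scheme concrete, the following is a minimal sketch of a debris-thickness-dependent temperature-index melt computation. The function name, the exponential damping of the clean-ice factors with thickness, and all numerical values are illustrative assumptions, not the parameterization optimized in the paper.

      import numpy as np

      def debris_ti_melt(temp_c, sw_in, albedo, debris_m, tf0=0.2, srf0=0.01):
          """Hourly melt (mm w.e.) from a thickness-dependent temperature-index scheme.

          temp_c   : air temperature (deg C)
          sw_in    : incoming shortwave radiation (W m-2)
          albedo   : debris surface albedo
          debris_m : debris thickness (m)
          tf0/srf0 : clean-ice temperature and shortwave radiation factors
                     (hypothetical values, for illustration only)
          """
          damping = np.exp(-debris_m / 0.10)   # assumed e-folding thickness of 0.10 m
          tf, srf = tf0 * damping, srf0 * damping
          melt = tf * np.maximum(temp_c, 0.0) + srf * (1.0 - albedo) * sw_in
          return np.maximum(melt, 0.0)

      print(debris_ti_melt(temp_c=5.0, sw_in=600.0, albedo=0.2, debris_m=0.15))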

  17. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE)

    PubMed Central

    Vêncio, Ricardo ZN; Brentani, Helena; Patrão, Diogo FC; Pereira, Carlos AB

    2004-01-01

    Background An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS), is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results We introduce a Bayesian model that accounts for the within-class variability by means of mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model, are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression to a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user friendly web-based on-line tool or as R language scripts at supplemental web-site. PMID:15339345
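
    As a small illustration of the overdispersion issue the mixture model addresses, the following sketch contrasts a plain binomial likelihood (technical sampling error only) with a beta-binomial likelihood (the special case of the mixture model mentioned above) for one tag counted in replicate libraries; all counts and the overdispersion value are hypothetical:

      import numpy as np
      from scipy import stats

      # counts of one SAGE tag and total tag counts in replicate libraries
      # of the same class (hypothetical numbers)
      x = np.array([12, 3, 25, 7])                  # tag counts per library
      n = np.array([50000, 52000, 48000, 51000])    # library sizes

      p_hat = x.sum() / n.sum()

      # plain binomial log-likelihood: technical sampling error only
      ll_binom = stats.binom.logpmf(x, n, p_hat).sum()

      # beta-binomial log-likelihood: adds within-class (biological) variability
      # through an overdispersion parameter rho (assumed value for illustration)
      rho = 0.002
      a = p_hat * (1 - rho) / rho
      b = (1 - p_hat) * (1 - rho) / rho
      ll_bb = stats.betabinom.logpmf(x, n, a, b).sum()

      print(ll_binom, ll_bb)   # the beta-binomial should fit overdispersed counts better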

  18. Generation of SEEAW asset accounts based on water resources management models

    NASA Astrophysics Data System (ADS)

    Pedro-Monzonís, María; Solera, Abel; Andreu, Joaquín

    2015-04-01

    One of the main challenges of the twenty-first century is the sustainable use of water, an essential element for life on our planet. In many cases, the lack of economic valuation of water resources leads to inefficient water use. Society therefore expects policymakers and stakeholders to maximise the benefit obtained per unit of natural resource. Water planning and Integrated Water Resources Management (IWRM) represent the best way to achieve this goal. The System of Environmental-Economic Accounting for Water (SEEAW) is presented as a tool for water allocation which enables the building of water balances in a river basin. The main purpose of the SEEAW is to provide a standard approach that allows policymakers to compare results between different territories. Building water accounts, however, is a complex task because the required data are difficult to collect. Since the components of the hydrological cycle are difficult to gauge directly, simulation models have become an essential tool, used extensively in recent decades. The aim of this paper is to present the construction of a database that enables the combined use of hydrological models and water resources models developed with AQUATOOL DSSS to fill in the SEEAW tables. This research is framed within the Water Accounting in a Multi-Catchment District (WAMCD) project, financed by the European Union. Its main goal is the development of water accounts in the Mediterranean Andalusian River Basin District, in Spain, and it contributes to the objectives of the "Blueprint to safeguard Europe's water resources". It is noteworthy that, in Spain, a large part of these methodological decisions are included in the Spanish Guideline of Water Planning with normative status, guaranteeing consistency and comparability of the results.

  19. Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad

    2006-01-01

    This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'prop-burn-prop' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons are made with three methods using modified linearized covariance propagation. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. The second method does not model the maneuver but includes a process noise matrix to account for the uncertainty in its magnitude. The third method, which is essentially a hybrid of the first two, includes the nominal portion of the maneuver via the state transition matrix and uses a process noise matrix to account for the magnitude uncertainty. The first method is unable to produce the final orbit covariance except in the case of zero maneuver uncertainty. The second method yields good accuracy for the final covariance matrix but fails to model the final orbital state accurately. Agreement between the simulated covariance data produced by this method and the Monte Carlo truth data fell within 0.5-2.5 percent over a range of maneuver sizes that span two orders of magnitude (0.1-20 m/s). The third method, which yields a combination of good accuracy in the computation of the final covariance matrix and correct accounting for the presence of the maneuver in the nominal orbit, is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, PC. However, applications for the two other methods exist and are briefly discussed. Although
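
    A minimal numerical sketch of the third (hybrid) strategy, assuming the state vector is ordered as position then velocity and that the transition matrices are supplied; the function name, the diagonal form of the process noise, and all numbers are illustrative assumptions, not values from the study:

      import numpy as np

      def prop_burn_prop(P0, phi1, phi_burn, phi2, sigma_dv):
          """Hybrid linearized covariance propagation through a prop-burn-prop arc.

          P0       : 6x6 initial state covariance
          phi1     : state transition matrix, epoch -> burn
          phi_burn : state transition matrix across the (modeled) burn
          phi2     : state transition matrix, burn -> final epoch
          sigma_dv : 1-sigma execution uncertainty per velocity component (m/s)

          The burn's nominal effect is carried by phi_burn; its execution
          uncertainty enters as process noise Q on the velocity states.
          """
          Q = np.zeros((6, 6))
          Q[3:, 3:] = np.eye(3) * sigma_dv**2
          P_burn = phi_burn @ (phi1 @ P0 @ phi1.T) @ phi_burn.T + Q
          return phi2 @ P_burn @ phi2.T

      # toy usage with identity transition matrices (hypothetical)
      P0 = np.diag([100.0, 100.0, 100.0, 0.01, 0.01, 0.01])
      I6 = np.eye(6)
      print(np.diag(prop_burn_prop(P0, I6, I6, I6, sigma_dv=0.05)))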

  20. Accounting for anatomical noise in search-capable model observers for planar nuclear imaging.

    PubMed

    Sen, Anando; Gifford, Howard C

    2016-01-01

    Model observers intended to predict the diagnostic performance of human observers should account for the effects of both quantum and anatomical noise. We compared the abilities of several visual-search (VS) and scanning Hotelling-type models to account for anatomical noise in a localization receiver operating characteristic (LROC) study involving simulated nuclear medicine images. Our VS observer invoked a two-stage process of search and analysis. The images featured lesions in the prostate and pelvic lymph nodes. Lesion contrast and the geometric resolution and sensitivity of the imaging collimator were the study variables. A set of anthropomorphic mathematical phantoms was imaged with an analytic projector based on eight parallel-hole collimators with different sensitivity and resolution properties. The LROC study was conducted with human observers and the channelized nonprewhitening, channelized Hotelling (CH) and VS model observers. The CH observer was applied in a "background-known-statistically" protocol while the VS observer performed a quasi-background-known-exactly task. Both of these models were applied with and without internal noise in the decision variables. A perceptual search threshold was also tested with the VS observer. The model observers without inefficiencies failed to mimic the average performance trend for the humans. The CH and VS observers with internal noise matched the humans primarily at low collimator sensitivities. With both internal noise and the search threshold, the VS observer attained quantitative agreement with the human observers. Computational efficiency is an important advantage of the VS observer.

  1. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    EPA Science Inventory

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  2. On the Value of Climate Elasticity Indices to Assess the Impact of Climate Change on Streamflow Projection using an ensemble of bias corrected CMIP5 dataset

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet; Moradkhani, Hamid

    2015-04-01

    Changes in two climate elasticity indices, i.e. temperature and precipitation elasticity of streamflow, were investigated using an ensemble of bias-corrected CMIP5 datasets as forcing to two hydrologic models. The Variable Infiltration Capacity (VIC) and the Sacramento Soil Moisture Accounting (SAC-SMA) hydrologic models were calibrated at 1/16 degree resolution, and the simulated streamflow was routed to the basin outlet of interest. We estimated precipitation and temperature elasticity of streamflow from: (1) observed streamflow; (2) streamflow simulated by the VIC and SAC-SMA models using observed climate for the current climate (1963-2003); and (3) streamflow simulated using climate from the 10-GCM CMIP5 dataset for the future climate (2010-2099), including two concentration pathways (RCP4.5 and RCP8.5) and two downscaled climate products (BCSD and MACA). The streamflow sensitivity to long-term (e.g., 30-year) average annual changes in temperature and precipitation is estimated for three periods, i.e. 2010-2040, 2040-2070 and 2070-2099. We compared the results of the three cases to reflect on the value of precipitation and temperature indices for assessing the climate change impacts on Columbia River streamflow. Moreover, these three cases for two models are used to assess the effects of different uncertainty sources (model forcing, model structure and different pathways) on the two climate elasticity indices.
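
    For concreteness, one common nonparametric way to estimate the precipitation elasticity of streamflow from annual series (not necessarily the estimator used in this study) is the median-based form sketched below; the series are hypothetical:

      import numpy as np

      def precip_elasticity(q, p):
          """Median-based nonparametric precipitation elasticity of streamflow:
          the median of (dQ/dP) * (P_mean / Q_mean) over the annual anomalies."""
          q, p = np.asarray(q, float), np.asarray(p, float)
          dq, dp = q - q.mean(), p - p.mean()
          mask = dp != 0
          return np.median(dq[mask] / dp[mask] * p.mean() / q.mean())

      # hypothetical 30-year annual precipitation and streamflow series (mm)
      rng = np.random.default_rng(0)
      p = rng.normal(800, 100, 30)
      q = 0.5 * p + rng.normal(0, 30, 30)
      print(precip_elasticity(q, p))   # ~1 means ~1% flow change per 1% precip change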

  3. Can the Accountable Care Organization model facilitate integrated care in England?

    PubMed

    Ahmed, Faheem; Mays, Nicholas; Ahmed, Naeem; Bisognano, Maureen; Gottlieb, Gary

    2015-10-01

    Following the global economic recession, health care systems have experienced intense political pressure to contain costs without compromising quality. One response is to focus on improving the continuity and coordination of care, which is seen as beneficial for both patients and providers. However, cultural and structural barriers have proved difficult to overcome in the quest to provide integrated care for entire populations. By holding groups of providers responsible for the health outcomes of a designated population, in the United States, Accountable Care Organizations are regarded as having the potential to foster collaboration across the continuum of care. They could have a similar role in England's National Health Service. However, it is important to consider the difference in context before implementing a similar model, adapted to suit the system's strengths. Working together, general practice federations and the Academic Health Science Networks could form the basis of accountable care in England. PMID:26079144

  4. The Transformative Individual School Counseling Model: An Accountability Model for Urban School Counselors

    ERIC Educational Resources Information Center

    Eschenauer, Robert; Chen-Hayes, Stuart F.

    2005-01-01

    The realities and needs of urban students, families, and educators have outgrown traditional individual counseling models. The American School Counselor Association's National Model and National Standards and the Education Trust's Transforming School Counseling Initiative encourage professional school counselors to shift roles toward implementing…

  5. Accounting for Epistemic Uncertainty in PSHA: Logic Tree and Ensemble Model

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Marzocchi, Warner; Selva, Jacopo

    2014-05-01

    The logic tree scheme is the probabilistic framework that has been widely used in recent decades to take into account epistemic uncertainties in probabilistic seismic hazard analysis (PSHA). Notwithstanding the vital importance of properly incorporating epistemic uncertainties in PSHA, we argue that the use of the logic tree in a PSHA context has conceptual and practical drawbacks. Although some of these drawbacks have been reported in the past, a careful evaluation of their impact on PSHA is still lacking. This is the goal of the present work. In brief, we show that i) PSHA practice does not meet the assumptions that stand behind the logic tree scheme; ii) the output of a logic tree is often misinterpreted and/or misleading, e.g., the use of percentiles (median included) in a logic tree scheme raises theoretical difficulties from a probabilistic point of view; iii) even when the assumptions that stand behind a logic tree are actually met, several problems arise in testing any PSHA model. We suggest a different strategy - based on ensemble modeling - to account for epistemic uncertainties in a more proper probabilistic framework. Finally, we show that in many PSHA practical applications, the logic tree is improperly applied to build sound ensemble models.
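
    A toy sketch of the ensemble idea, assuming three alternative hazard curves with epistemic weights (all numbers hypothetical): the ensemble hazard is the weighted mixture of the curves, and epistemic percentiles are obtained by sampling models according to their weights rather than by reading percentiles off logic-tree branch tips:

      import numpy as np

      # hazard curves (annual exceedance probability vs ground-motion level)
      # from three alternative models, with epistemic weights (hypothetical)
      gm = np.array([0.1, 0.2, 0.4, 0.8])                 # PGA levels (g)
      curves = np.array([[2e-2, 8e-3, 2e-3, 3e-4],
                         [3e-2, 1e-2, 3e-3, 5e-4],
                         [1e-2, 5e-3, 1e-3, 1e-4]])
      w = np.array([0.5, 0.3, 0.2])

      # ensemble (mixture) hazard: weighted mean of exceedance probabilities
      ensemble_mean = w @ curves

      # epistemic spread: sample models with their weights, then take percentiles
      rng = np.random.default_rng(2)
      draws = curves[rng.choice(3, size=10000, p=w)]
      p16, p84 = np.percentile(draws, [16, 84], axis=0)
      print(ensemble_mean, p16, p84)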

  6. Accounting for Epistemic Uncertainty in PSHA: Logic Tree and Ensemble Model

    NASA Astrophysics Data System (ADS)

    Taroni, M.; Marzocchi, W.; Selva, J.

    2014-12-01

    The logic tree scheme is the probabilistic framework that has been widely used in recent decades to take into account epistemic uncertainties in probabilistic seismic hazard analysis (PSHA). Notwithstanding the vital importance of properly incorporating epistemic uncertainties in PSHA, we argue that the use of the logic tree in a PSHA context has conceptual and practical drawbacks. Although some of these drawbacks have been reported in the past, a careful evaluation of their impact on PSHA is still lacking. This is the goal of the present work. In brief, we show that i) PSHA practice does not meet the assumptions that stand behind the logic tree scheme; ii) the output of a logic tree is often misinterpreted and/or misleading, e.g., the use of percentiles (median included) in a logic tree scheme raises theoretical difficulties from a probabilistic point of view; iii) even when the assumptions that stand behind a logic tree are actually met, several problems arise in testing any PSHA model. We suggest a different strategy - based on ensemble modeling - to account for epistemic uncertainties in a more proper probabilistic framework. Finally, we show that in many PSHA practical applications, the logic tree is de facto loosely applied to build sound ensemble models.

  7. Micromechanical modeling of elastic properties of cortical bone accounting for anisotropy of dense tissue.

    PubMed

    Salguero, Laura; Saadat, Fatemeh; Sevostianov, Igor

    2014-10-17

    The paper analyzes the connection between the microstructure of osteonal cortical bone and its overall elastic properties. The existing models either neglect anisotropy of the dense tissue or simplify cortical bone microstructure (accounting for Haversian canals only). These simplifications (related mostly to insufficient mathematical apparatus) complicate quantitative analysis of the effect of microstructural changes - produced by age, microgravity, or some diseases - on the overall mechanical performance of cortical bone. The present analysis fills this gap; it accounts for anisotropy of the dense tissue and uses a realistic model of the porous microstructure. The approach is based on recent results of Sevostianov et al. (2005) and Saadat et al. (2012) on inhomogeneities in a transversely-isotropic material. Bone microstructure is modeled following Martin and Burr (1989), Currey (2002), and Fung (1993) and includes four main families of pores. The calculated elastic constants for porous cortical bone are in agreement with available experimental data. The influence of each of the pore types on the overall moduli is examined.

  8. Nonlinear Poisson-Boltzmann model of charged lipid membranes: Accounting for the presence of zwitterionic lipids

    NASA Astrophysics Data System (ADS)

    Mengistu, Demmelash H.; May, Sylvio

    2008-09-01

    The nonlinear Poisson-Boltzmann model is used to derive analytical expressions for the free energies of both mixed anionic-zwitterionic and mixed cationic-zwitterionic lipid membranes as a function of the mole fraction of charged lipids. Accounting explicitly for the electrostatic properties of the zwitterionic lipid species affects the free energy of anionic and cationic membranes in a qualitatively different way: that of an anionic membrane changes monotonically as a function of the mole fraction of charged lipids, whereas it passes through a pronounced minimum for a cationic membrane.

  9. A Monte Carlo-based model of gold nanoparticle radiosensitization accounting for increased radiobiological effectiveness.

    PubMed

    Lechtman, E; Mashouf, S; Chattopadhyay, N; Keller, B M; Lai, P; Cai, Z; Reilly, R M; Pignol, J-P

    2013-05-21

    Radiosensitization using gold nanoparticles (AuNPs) has been shown to vary widely with cell line, irradiation energy, AuNP size, concentration and intracellular localization. We developed a Monte Carlo-based AuNP radiosensitization predictive model (ARP), which takes into account the detailed energy deposition at the nano-scale. This model was compared to experimental cell survival and macroscopic dose enhancement predictions. PC-3 prostate cancer cell survival was characterized after irradiation using a 300 kVp photon source with and without AuNPs present in the cell culture media. Detailed Monte Carlo simulations were conducted, producing individual tracks of photoelectric products escaping AuNPs and energy deposition was scored in nano-scale voxels in a model cell nucleus. Cell survival in our predictive model was calculated by integrating the radiation induced lethal event density over the nucleus volume. Experimental AuNP radiosensitization was observed with a sensitizer enhancement ratio (SER) of 1.21 ± 0.13. SERs estimated using the ARP model and the macroscopic enhancement model were 1.20 ± 0.12 and 1.07 ± 0.10 respectively. In the hypothetical case of AuNPs localized within the nucleus, the ARP model predicted a SER of 1.29 ± 0.13, demonstrating the influence of AuNP intracellular localization on radiosensitization.

  10. FPLUME-1.0: An integrated volcanic plume model accounting for ash aggregation

    NASA Astrophysics Data System (ADS)

    Folch, A.; Costa, A.; Macedonio, G.

    2015-09-01

    Eruption Source Parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1-D cross-section averaged eruption column model based on the Buoyant Plume Theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients and a model for wet aggregation of ash particles in the presence of liquid water or ice. In the occurrence of wet aggregation, the model predicts an "effective" grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the "effective" particle grain size distribution resulting from eventual wet aggregation within the plume. As test cases we apply the model to the eruptive phase-B of the 4 April 1982 El Chichón volcano eruption (México) and the 6 May 2010 Eyjafjallajökull eruption phase (Iceland).

  11. A three-dimensional model of mammalian tyrosinase active site accounting for loss of function mutations.

    PubMed

    Schweikardt, Thorsten; Olivares, Concepción; Solano, Francisco; Jaenicke, Elmar; García-Borrón, José Carlos; Decker, Heinz

    2007-10-01

    Tyrosinases are the first and rate-limiting enzymes in the synthesis of melanin pigments responsible for colouring hair, skin and eyes. Mutation of tyrosinases often decreases melanin production resulting in albinism, but the effects are not always understood at the molecular level. Homology modelling of mouse tyrosinase based on recently published crystal structures of non-mammalian tyrosinases provides an active site model accounting for loss-of-function mutations. According to the model, the copper-binding histidines are located in a helix bundle comprising four densely packed helices. A loop containing residues M374, S375 and V377 connects the CuA and CuB centres, with the peptide oxygens of M374 and V377 serving as hydrogen acceptors for the NH-groups of the imidazole rings of the copper-binding His367 and His180. Therefore, this loop is essential for the stability of the active site architecture. A double substitution (374)MS(375) → (374)GG(375) or a single M374G mutation leads to a local perturbation of the protein matrix at the active site affecting the orientation of the H367 side chain, which may then be unable to bind CuB reliably, resulting in loss of activity. The model also accounts for loss of function in two naturally occurring albino mutations, S380P and V393F. The hydroxyl group in S380 contributes to the correct orientation of M374, and the substitution of V393 by a bulkier phenylalanine sterically impedes correct side chain packing at the active site. Therefore, our model explains the mechanistic necessity for conservation of not only the active site histidines but also adjacent amino acids in tyrosinase. PMID:17850513

  12. Accounting for "hot spots" and "hot moments" in soil carbon models for water-limited ecosystems

    NASA Astrophysics Data System (ADS)

    O'Donnell, Frances; Caylor, Kelly

    2010-05-01

    Soil organic carbon (SOC) dynamics in water-limited ecosystems are complicated by the stochastic nature of rainfall and patchy structure of vegetation, which can lead to "hot spots" and "hot moments" of high biological activity. Non-linear models that use spatial and temporal averages of forcing variables are unable to account for these phenomena and are likely to produce biased results. In this study we present a model of SOC abundance that accounts for spatial heterogeneity at the plant scale and temporal variability in soil moisture content at the daily scale. We approximated an existing simulation-based model of SOC dynamics as a stochastic differential equation driven by multiplicative noise that can be solved numerically for steady-state sizes of three SOC pools. We coupled this to a model of water balance and SOC input rate at a point for a given cover type, defined by the number of shrub and perennial grass root systems and canopies overlapping the point. Using a probabilistic description of vegetation structure based on a two dimensional Poisson process, we derived analytical expressions for the distribution of cover types across a landscape and produced weighted averages of SOC stocks. An application of the model to a shortgrass steppe ecosystem in Colorado, USA, replicated empirical data on spatial patterns and average abundance of SOC, whereas a version of the model using spatially averaged forcing variables overestimated SOC stocks by 12%. The model also successfully replicated data from paired desert grassland sites in New Mexico, USA, that had and had not been affected by woody plant encroachment, indicating that the model could be a useful tool for understanding and predicting the effect of woody plant encroachment on regional carbon budgets. We performed a theoretical analysis of a simplified version of the model to estimate the bias introduced by using spatial averages of forcing variables to model SOC stocks across a range of climatic conditions
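
    The cover-type bookkeeping described above can be illustrated with a short sketch: for a two-dimensional Poisson process of plant centres, the number of canopies overlapping a point is Poisson-distributed with mean equal to density times canopy area, and the landscape-average SOC is the probability-weighted average over cover types. Densities, areas and the SOC table below are hypothetical placeholders:

      import numpy as np
      from scipy import stats

      # densities (plants per m2) and canopy areas (m2) -- hypothetical values
      lam_shrub, area_shrub = 0.02, 4.0
      lam_grass, area_grass = 0.50, 0.25

      # number of canopies overlapping a point ~ Poisson(density * canopy area)
      mean_shrub = lam_shrub * area_shrub
      mean_grass = lam_grass * area_grass

      k = np.arange(4)
      p_shrub = stats.poisson.pmf(k, mean_shrub)
      p_grass = stats.poisson.pmf(k, mean_grass)
      p_cover = np.outer(p_shrub, p_grass)   # joint probability of (i shrubs, j grasses)

      # steady-state SOC per cover type would come from the stochastic SOC model;
      # here a placeholder table (kg C m-2) is used purely for illustration
      soc = 1.0 + 0.8 * k[:, None] + 0.3 * k[None, :]

      landscape_mean_soc = (p_cover * soc).sum()
      print(landscape_mean_soc)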

  13. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    PubMed Central

    2011-01-01

    Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment. PMID:21974866

  14. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    PubMed

    Czoli, Christine; Da Silva, Michael; Shaul, Randi Zlotnik; d'Agincourt-Canning, Lori; Simpson, Christy; Boydell, Katherine; Rashkovan, Natalie; Vanin, Sharon

    2011-01-01

    Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment. PMID:21974866

  15. Accounting for sex differences in PTSD: A multi-variable mediation model

    PubMed Central

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

    Background Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. Objective The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). Mediation was examined using an analysis designed specifically to test a multiple mediator model. Results Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusions The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders

  16. A macro-micro model of sedimentary delta growth that accounts for biological processes

    NASA Astrophysics Data System (ADS)

    Lorenzo-Trueba, J.; Holm, G. O.; Bevington, A.; Twilley, R.; Voller, V. R.; Paola, C.

    2009-12-01

    Sediment mass balance as expressed by the Exner equation has proved to be a useful approach for modeling the average dynamics during the formation of a sedimentary delta. Whole-delta versions of such models consist of a balance between mineral sediment supply, sea-level rise, and subsidence. A simple model calculates the net sediment supplied in each time step and then positions the resulting sediment prism, with assumed constant geometric topset and foreset slopes, onto a subsiding basement. In many deltas, buried organic carbon in the form of peat makes up a significant fraction of the sediment column, but peat accumulation and degradation are typically not included in purely physical delta models. Here we extend current Exner models of delta growth, in particular the simple geometric prism model, and develop exploratory models that explicitly account for peat accumulation and degradation via plant growth, burial, and microbial processes influenced by salinity. We propose a set of ‘lumped’ models aimed at averaging the critical small-scale biological processes as source and sink terms that are inserted into a larger-scale delta growth model. By analogy to previous models of solidification in the metals literature, we refer to this modeling style as Macro-Micro modeling. The basic lumped biological model first splits the delta into two regions: a near-shore region with high salt concentration and an inland region with a higher fresh water presence. The boundary between the regions is geometric and our model is constructed in such a way that the relative lengths of the near-shore and inland regions are controlled by subsidence and sea level changes. Our starting assumption is that the net rate at which soil organic carbon (SOC) is produced is constant throughout the delta. In the near-shore region, however, where the higher salt concentration stresses the system, we assume an additional SOC loss rate term proportional to the carbon content in the delta
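
    A minimal sketch of the 'lumped' organic-carbon bookkeeping described above: SOC production is uniform across the delta, and the saline near-shore region carries an extra first-order loss proportional to its carbon content. All rates, the split between regions and the function name are illustrative assumptions:

      def delta_mean_soc(years, prod=0.2, k_loss=0.02, saline_frac=0.4):
          """Delta-averaged stored organic carbon (kg C m-2) after `years` years,
          for a delta split into a saline near-shore region and a fresher
          inland region.

          prod        : net SOC production rate everywhere (kg C m-2 yr-1)
          k_loss      : extra first-order SOC loss rate in the saline region (yr-1)
          saline_frac : fraction of the delta plain in the near-shore region
          All values are hypothetical.
          """
          c_inland, c_saline = 0.0, 0.0
          for _ in range(years):
              c_inland += prod                         # production only
              c_saline += prod - k_loss * c_saline     # production minus salinity-driven loss
          return (1.0 - saline_frac) * c_inland + saline_frac * c_saline

      print(delta_mean_soc(200))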

  17. Implementation of a cost-accounting model in a biobank: practical implications.

    PubMed

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use these resources more efficiently are making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences on the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real/active biobank, including a comprehensive explanation on how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. Global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the research and development governmental policies level.

  18. Implementation of a cost-accounting model in a biobank: practical implications.

    PubMed

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use these resources more efficiently are making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences on the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real/active biobank, including a comprehensive explanation on how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. Global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the research and development governmental policies level. PMID:25792217

  19. Pathology service line: a model for accountable care organizations at an academic medical center.

    PubMed

    Sussman, Ira; Prystowsky, Michael B

    2012-05-01

    Accountable care is designed to manage the health of patients using a capitated cost model rather than fee for service. Pay for performance is an attempt to use quality and not service reduction as the way to decrease costs. Pathologists will have to demonstrate value to the system. This value will include (1) working with clinical colleagues to optimize testing protocols, (2) reducing unnecessary testing in both clinical and anatomic pathology, (3) guiding treatment by helping to personalize therapy, (4) designing laboratory information technology solutions that will promote and facilitate accurate, complete data mining, and (5) administering efficient cost-effective laboratories. The pathology service line was established to improve the efficiency of delivering pathology services and to provide more effective support of medical center programs. We have used this model effectively at the Montefiore Medical Center for the past 14 years. PMID:22333926

  20. Water accounting for stressed river basins based on water resources management models.

    PubMed

    Pedro-Monzonís, María; Solera, Abel; Ferrer, Javier; Andreu, Joaquín; Estrela, Teodoro

    2016-09-15

    Water planning and Integrated Water Resources Management (IWRM) represent the best way to help decision makers identify and choose the most adequate alternatives among the possible ones. The System of Environmental-Economic Accounting for Water (SEEA-W) is displayed as a tool for the building of water balances in a river basin, providing a standard approach to achieve comparability of the results between different territories. The target of this paper is to present the building of a tool that enables the combined use of hydrological models and water resources models to fill in the SEEA-W tables. At every step of the modelling chain, we are able to build the asset accounts and the physical water supply and use tables according to the SEEA-W approach, along with an estimation of the water services costs. The case study is the Jucar River Basin District (RBD), located in the eastern part of the Iberian Peninsula in Spain, which, like many other Mediterranean basins, is currently water-stressed. To guide this work we have used the PATRICAL model in combination with the AQUATOOL Decision Support System (DSS). The results indicate that for the average year the total use of water in the district amounts to 15,143 hm³/year, while the Total Renewable Water Resources are 3909 hm³/year. On the other hand, the water service costs in the Jucar RBD amount to 1634 million € per year at constant 2012 prices. It is noteworthy that 9% of these costs correspond to non-conventional resources, such as desalinated water, reused water and water transferred from other regions.

  1. Water accounting for stressed river basins based on water resources management models.

    PubMed

    Pedro-Monzonís, María; Solera, Abel; Ferrer, Javier; Andreu, Joaquín; Estrela, Teodoro

    2016-09-15

    Water planning and Integrated Water Resources Management (IWRM) represent the best way to help decision makers identify and choose the most adequate alternatives among the possible ones. The System of Environmental-Economic Accounting for Water (SEEA-W) is displayed as a tool for the building of water balances in a river basin, providing a standard approach to achieve comparability of the results between different territories. The target of this paper is to present the building of a tool that enables the combined use of hydrological models and water resources models to fill in the SEEA-W tables. At every step of the modelling chain, we are able to build the asset accounts and the physical water supply and use tables according to the SEEA-W approach, along with an estimation of the water services costs. The case study is the Jucar River Basin District (RBD), located in the eastern part of the Iberian Peninsula in Spain, which, like many other Mediterranean basins, is currently water-stressed. To guide this work we have used the PATRICAL model in combination with the AQUATOOL Decision Support System (DSS). The results indicate that for the average year the total use of water in the district amounts to 15,143 hm³/year, while the Total Renewable Water Resources are 3909 hm³/year. On the other hand, the water service costs in the Jucar RBD amount to 1634 million € per year at constant 2012 prices. It is noteworthy that 9% of these costs correspond to non-conventional resources, such as desalinated water, reused water and water transferred from other regions. PMID:27161139

  2. Pointing, looking at, and pressing keys: A diffusion model account of response modality.

    PubMed

    Gomez, Pablo; Ratcliff, Roger; Childers, Russ

    2015-12-01

    Accumulation of evidence models of perceptual decision making have been able to account for data from a wide range of domains at an impressive level of precision. In particular, Ratcliff's (1978) diffusion model has been used across many different 2-choice tasks in which the response is executed via a key-press. In this article, we present 2 experiments in which we used a letter-discrimination task exploring 3 central aspects of a 2-choice task: the discriminability of the stimulus, the modality of the response execution (eye movement, key pressing, and pointing on a touchscreen), and the mapping of the response areas for the eye movement and the touchscreen conditions (consistent vs. inconsistent). We fitted the diffusion model to the data from these experiments and examined the behavior of the model's parameters. Fits of the model were consistent with the hypothesis that the same decision mechanism is used in the task with 3 different response methods. Drift rates are affected by the duration of the presentation of the stimulus while the response execution time changed as a function of the response modality.
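
    For readers unfamiliar with the model class, a bare-bones simulation of a two-boundary diffusion process is sketched below (Euler scheme); the parameter values are illustrative, not the fits reported in the paper. The non-decision time is the component one would expect to vary with response modality:

      import numpy as np

      def simulate_diffusion(drift, boundary=0.1, start=0.05, ndt=0.3,
                             noise=0.1, dt=0.001, rng=None):
          """One trial of a two-boundary diffusion process.

          drift    : mean evidence accumulation rate
          boundary : separation between the two response boundaries
          start    : starting point of accumulation (bias if != boundary/2)
          ndt      : non-decision time (s), e.g. encoding plus response execution
          """
          if rng is None:
              rng = np.random.default_rng()
          x, t = start, 0.0
          while 0.0 < x < boundary:
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return ("upper" if x >= boundary else "lower"), t + ndt

      rng = np.random.default_rng(3)
      trials = [simulate_diffusion(drift=0.2, rng=rng) for _ in range(1000)]
      accuracy = np.mean([resp == "upper" for resp, _ in trials])
      mean_rt = np.mean([rt for _, rt in trials])
      print(accuracy, mean_rt)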

  3. A gene network model accounting for development and evolution of mammalian teeth.

    PubMed

    Salazar-Ciudad, Isaac; Jernvall, Jukka

    2002-06-11

    Generation of morphological diversity remains a challenge for evolutionary biologists because it is unclear how an ultimately finite number of genes involved in initial pattern formation integrates with morphogenesis. Ideally, models used to search for the simplest developmental principles on how genes produce form should account for both developmental process and evolutionary change. Here we present a model reproducing the morphology of mammalian teeth by integrating experimental data on gene interactions and growth into a morphodynamic mechanism in which developing morphology has a causal role in patterning. The model predicts the course of tooth-shape development in different mammalian species and also reproduces key transitions in evolution. Furthermore, we reproduce the known expression patterns of several genes involved in tooth development and their dynamics over developmental time. Large morphological effects frequently can be achieved by small changes, according to this model, and similar morphologies can be produced by different changes. This finding may be consistent with why predicting the morphological outcomes of molecular experiments is challenging. Nevertheless, models incorporating morphology and gene activity show promise for linking genotypes to phenotypes.

  4. RADIATIVE OPACITY OF IRON STUDIED USING A DETAILED LEVEL ACCOUNTING MODEL

    SciTech Connect

    Jin Fengtao; Zeng Jiaolong; Yuan Jianmin; Huang Tianxuan; Ding Yongkun; Zheng Zhijian

    2009-03-01

    The opacity of iron plasma in local thermodynamic equilibrium is studied using an independently developed detailed level accounting model. Atomic data are generated by solving the full relativistic Dirac-Fock equations. State mixing within one electronic configuration is considered, which captures part of the correlations between electrons without involving prohibitively large configuration interaction matrices. Simulations are carried out and compared with several recent experimental transmission spectra in the M- and L-shell absorption regions to reveal the high accuracy of the model. The present model is also compared with the OPAL, LEDCOP and OP models for two isothermal series at T = 20 eV and T = 19.3 eV. It is found that our model is in good agreement with OPAL and LEDCOP while it has discrepancies with OP at high densities. Systematic Rosseland and Planck mean opacities in the range 10-1000 eV for temperature and 10⁻⁵-10⁻¹ g cm⁻³ for density are also presented and compared with LEDCOP results, finding good agreement at lower temperatures but apparent differences at high temperatures where the L- and K-shell absorptions are dominant.
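
    For reference, the Rosseland and Planck means quoted above are the standard harmonic and linear frequency averages of the monochromatic opacity, weighted respectively by the temperature derivative of the Planck function B_ν(T) and by B_ν(T) itself:

      \[
      \frac{1}{\kappa_R} \;=\; \frac{\int_0^\infty \kappa_\nu^{-1}\,(\partial B_\nu/\partial T)\,d\nu}{\int_0^\infty (\partial B_\nu/\partial T)\,d\nu},
      \qquad
      \kappa_P \;=\; \frac{\int_0^\infty \kappa_\nu\,B_\nu\,d\nu}{\int_0^\infty B_\nu\,d\nu}.
      \]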

  5. A simple lumped rainfall runoff model accounting for the spatial variability of drainage density

    NASA Astrophysics Data System (ADS)

    Di Lazzaro, M.; Zarlenga, A.; Volpi, E.

    2013-12-01

    Defining drainage density (dd) as the inverse of twice the hillslope-to-channel length makes it possible to create maps, based on digital terrain analysis, that clearly reveal the sharp contrast between neighbouring geologic provinces. This contrast is strongly correlated with the patterns of landscape dissection. Although this definition can be applied relatively easily with Geographic Information Systems (GIS), its applications have so far been surprisingly limited. Among them, in this work we consider representing the spatial heterogeneity of the dd field coupled with the within-catchment organization of the basin itself. Previous works highlight how dd affects key hydrological variables such as residence times, runoff coefficients, hydraulic conductivities, sediment yield and recession curves. The effects of drainage density can be classified as direct and indirect. Direct effects include the short hillslope lengths where dd is high, which result in shorter corrivation (travel) times; direct effects are intrinsically included in geomorphological rainfall-runoff models. Among the indirect effects, higher dd has been shown to be related to impervious rocky hillslopes and to steeper slopes, which enhances the generation of higher flood peaks. In zones with high dd, shallow soils and low permeability prevent rainfall infiltration, so that runoff volumes are large. In areas of low drainage density, hydraulic conductivities are expected to be higher and hydrological paths develop mostly in the groundwater, where water is 'stored' for longer times. Despite the evidence of within-catchment variations of drainage density, a reliable schematization that accounts in a simplified model for both direct and indirect effects, such as its strong correlation with permeability, has not yet been formulated. Traditional approaches in rainfall runoff modelling, based on the geomorphological derivation of the distribution of contributing areas
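
    The definition used above translates directly into code; the grid of hillslope-to-channel lengths and the hillslope velocity below are hypothetical, and the travel-time calculation only illustrates the direct (timing) effect of drainage density:

      import numpy as np

      def drainage_density(hillslope_length_m):
          """Drainage density from the definition used above:
          dd = 1 / (2 * hillslope-to-channel flow length)."""
          length = np.asarray(hillslope_length_m, dtype=float)
          return 1.0 / (2.0 * length)

      # hypothetical grid of hillslope-to-channel lengths (m)
      lengths = np.array([[40.0, 60.0, 150.0],
                          [35.0, 80.0, 220.0]])
      dd = drainage_density(lengths)            # m-1; high dd = short hillslopes

      # direct (timing) effect: hillslope travel time for an assumed velocity
      v_hillslope = 0.05                        # m/s, illustrative
      travel_time_h = lengths / v_hillslope / 3600.0
      print(dd)
      print(travel_time_h)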

  6. A comparison of two diffusion process models in accounting for payoff and stimulus frequency manipulations.

    PubMed

    Leite, Fábio P

    2012-08-01

    I analyzed response time and accuracy data from a numerosity discrimination experiment in which both stimulus frequency and payoff structure were manipulated. The numerosity discrimination encompassed responding either "low" or "high" to the number of asterisks in a 10 × 10 grid, on the basis of an experimenter-determined decision cutoff (fixed at 50). In the stimulus frequency condition, there were more low than high stimuli in some blocks and more high than low stimuli in other blocks. In the payoff condition, responses were rewarded such that the relative value of a stimulus mimicked the relative frequency of that stimulus in the previous manipulation. I modeled the data using two sequential-sampling models in which evidence was accumulated until either a "low" or a "high" decision criterion was reached and a response was initiated: a single-stage diffusion model framework and a two-stage diffusion model framework. In using these two frameworks, the goal was to examine their relative merits across stimulus frequency and payoff structure manipulations. I found that shifts in starting point in a single-stage diffusion framework and shifts in the initial drift rate in the two-stage model were able to account for the data. I also found, however, that these two shifts across the two models produced similar changes in the random walk that described the decision process. In conclusion, I found that the similarities in the descriptions of the decision processes make it difficult to choose between the two models and suggested that such a choice should consider model assumptions and parameter estimate interpretations.

  7. Numerical model of the high-latitude ionosphere taking into account the solar wind parameters

    NASA Astrophysics Data System (ADS)

    Uvarov, Viacheslav Mikhailovich; Lukianova, Renata

    The high-latitude ionosphere is driven by both magnetospheric and solar UV inputs and is, therefore, quite variable and poorly predictable. Empirical models are unable to describe these complex dependencies, while physics-based mathematical models are more successful in reproducing the plasma density features. In this contribution we present a numerical model of the polar ionosphere. The three-dimensional model covers the E-region, F-region and topside poleward of 40° latitude in the altitude range from 90 km to 650 km. The main output is the electron density distribution at specified times and under specified solar wind conditions. The model consists of two blocks: calculation of the convection electric field and calculation of the ionospheric plasma parameters. The convection block includes an analytical representation of the large-scale convection patterns and their dependence on the interplanetary magnetic field (IMF) orientation. The density distributions of O+ and generalized ion species are obtained from a numerical solution of the appropriate continuity equations as a function of height. An empirical model is used to calculate the thermospheric temperature, composition, and density. Flux tubes of plasma are followed as they convect and corotate through a moving neutral atmosphere during many hours. The offset between the geomagnetic and geographic poles is also taken into account. Model simulations for different IMF conditions, solar cycle, season, universal time (UT) and magnetic activity were used to elucidate the evolution of the high-latitude ionospheric morphological features. The global features such as the tongue of ionization, plasma cavity, polar and auroral peaks are well reproduced and quantified. Model electron density profiles are compared with observations using the EISCAT incoherent scatter radar and ionosonde.

  8. Regional collaboration as a model for fostering accountability and transforming health care.

    PubMed

    Speir, Alan M; Rich, Jeffrey B; Crosby, Ivan; Fonner, Edwin

    2009-01-01

    An era of increasing budgetary constraints, misaligned payers and providers, and a competitive system where United States health outcomes are outpaced by less well-funded nations is motivating policy-makers to seek more effective means for promoting cost-effective delivery and accountability. This article illustrates an effective working model of regional collaboration focused on improving health outcomes, containing costs, and making efficient use of resources in cardiovascular surgical care. The Virginia Cardiac Surgery Quality Initiative is a decade-old collaboration of cardiac surgeons and hospital providers in Virginia working to improve outcomes and contain costs by analyzing comparative data, identifying top performers, and replicating best clinical practices on a statewide basis. The group's goals and objectives, along with 2 generations of performance improvement initiatives, are examined. These involve attempts to improve postoperative outcomes and use of tools for decision support and modeling. This work has led the group to espouse a more integrated approach to performance improvement and to formulate principles of a quality-focused payment system. This is one in which collaboration promotes regional accountability to deliver quality care on a cost-effective basis. The Virginia Cardiac Surgery Quality Initiative has attempted to test a global pricing model and has implemented a pay-for-performance program where physicians and hospitals are aligned with common objectives. Although this collaborative approach is a work in progress, authors point out preconditions applicable to other regions and medical specialties. A road map of short-term next steps is needed to create an adaptive payment system tied to the national agenda for reforming the delivery system. PMID:19632558

  9. A unifying modeling of plant shoot gravitropism with an explicit account of the effects of growth

    PubMed Central

    Bastien, Renaud; Douady, Stéphane; Moulia, Bruno

    2014-01-01

    Gravitropism, the slow reorientation of plant growth in response to gravity, is a major determinant of the form and posture of land plants. Recently a universal model of shoot gravitropism, the AC model, was presented, in which the dynamics of the tropic movement is only determined by the conflicting controls of (1) graviception that tends to curve the plants toward the vertical, and (2) proprioception that tends to keep the stem straight. This model was found to be valid for many species and over two orders of magnitude of organ size. However, the motor of the movement, the elongation, was purposely neglected in the AC model. If growth effects are to be taken into account, it is necessary to consider the material derivative, i.e., the rate of change of curvature bound to expanding and convected organ elements. Here we show that it is possible to rewrite the material equation of curvature in a compact simplified form that directly expresses the curvature variation as a function of the median elongation and of the distribution of the differential growth. By using this extended model, called the ACĖ model, growth is found to have two main destabilizing effects on the tropic movement: (1) passive orientation drift, which occurs when a curved element elongates without differential growth, and (2) fixed curvature, when an element leaves the elongation zone and is no longer able to actively change its curvature. By comparing the AC and ACĖ models to experiments, these two effects are found to be negligible. Our results show that the simplified AC model can be used to analyze gravitropism and posture control in actively elongating plant organs without significant information loss. PMID:24782876
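
    As a hedged reminder of the structure described above (notation assumed here, not taken from the paper), the AC model balances a graviceptive term that drives the organ back toward the vertical against a proprioceptive term that straightens it. Writing A(s,t) for the local inclination from the vertical and C(s,t) for the local curvature at arc length s, one common way to express this balance, up to sign conventions, is

      \[
      \frac{\partial C(s,t)}{\partial t} \;=\; -\beta\, A(s,t) \;-\; \gamma\, C(s,t),
      \]

    where β sets the graviceptive sensitivity and γ the proprioceptive one; the ACĖ extension discussed in the abstract replaces the partial time derivative with the material derivative of curvature, so that elongation and differential growth of convected elements are accounted for.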

  10. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    NASA Astrophysics Data System (ADS)

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine-ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ˜ 2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine ( < 0.063 mm) ash (3-59 %), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
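
    The ad hoc aggregate description referred to above (a log-normal size distribution in diameter, i.e. Gaussian in phi units, with median μagg and standard deviation σagg, at fixed aggregate density) can be sketched as follows; the bin edges and the default σagg are arbitrary choices for illustration:

      import numpy as np
      from scipy import stats

      def aggregate_size_fractions(mu_phi=2.5, sigma_phi=0.5, phi_edges=None):
          """Mass fractions of an aggregate size distribution that is Gaussian
          in phi units (phi = -log2 of the diameter in mm), with median mu_phi
          and standard deviation sigma_phi; the aggregate density would be
          held fixed separately (600 kg m-3 in the study)."""
          if phi_edges is None:
              phi_edges = np.arange(0.0, 6.5, 0.5)
          cdf = stats.norm.cdf(phi_edges, loc=mu_phi, scale=sigma_phi)
          fractions = np.diff(cdf)
          return phi_edges, fractions / fractions.sum()

      edges, frac = aggregate_size_fractions(mu_phi=2.5)   # median ~0.18 mm
      d_mm = 2.0 ** -(0.5 * (edges[:-1] + edges[1:]))      # bin-centre diameters
      print(np.round(d_mm, 3), np.round(frac, 3))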

  11. FPLUME-1.0: An integral volcanic plume model accounting for ash aggregation

    NASA Astrophysics Data System (ADS)

    Folch, A.; Costa, A.; Macedonio, G.

    2016-02-01

    Eruption source parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1-D (one-dimensional) cross-section-averaged eruption column model based on the buoyant plume theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients and a model for wet aggregation of ash particles in the presence of liquid water or ice. When wet aggregation occurs, the model predicts an effective grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the effective particle grain size distribution resulting from eventual wet aggregation within the plume. As test cases we apply the model to the eruptive phase-B of the 4 April 1982 El Chichón volcano eruption (México) and the 6 May 2010 Eyjafjallajökull eruption phase (Iceland). The modular structure of the code facilitates the implementation of more quantitative ash aggregation parameterizations in future code versions as further observations and experimental data become available to better constrain ash aggregation processes.
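
    The buoyant plume theory at the core of FPLUME can be illustrated in heavily reduced form by the classical Morton-Taylor-Turner (MTT) plume equations, integrated upward until the momentum flux vanishes to estimate a column height. The Python/SciPy sketch below omits wind bending, moisture, particle fallout and aggregation, and its entrainment coefficient, stratification and vent conditions are assumed values rather than FPLUME inputs.

        import numpy as np
        from scipy.integrate import solve_ivp

        ALPHA = 0.09      # entrainment coefficient (assumed)
        N2 = 1.0e-4       # ambient stratification N^2 (s^-2), assumed constant

        def mtt_rhs(z, y):
            """Morton-Taylor-Turner fluxes: y = [Q, M, F] = [b^2*w, b^2*w^2, b^2*w*g']."""
            Q, M, F = y
            dQ = 2.0 * ALPHA * np.sqrt(max(M, 0.0))   # entrainment of ambient air
            dM = Q * F / max(M, 1e-12)                # buoyancy drives the momentum flux
            dF = -Q * N2                              # stratification erodes the buoyancy flux
            return [dQ, dM, dF]

        def momentum_vanishes(z, y):
            # Stop when the momentum flux has essentially vanished.
            return y[1] - 1.0
        momentum_vanishes.terminal = True

        # Source conditions at the vent (assumed): radius 50 m, velocity 100 m/s, g' = 1 m/s^2.
        b0, w0, gp0 = 50.0, 100.0, 1.0
        y0 = [b0**2 * w0, b0**2 * w0**2, b0**2 * w0 * gp0]

        sol = solve_ivp(mtt_rhs, (0.0, 40000.0), y0, events=momentum_vanishes, max_step=50.0)
        print("approximate column height: %.1f km" % (sol.t[-1] / 1000.0))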

  12. FPLUME-1.0: An integral volcanic plume model accounting for ash aggregation

    NASA Astrophysics Data System (ADS)

    Folch, Arnau; Costa, Antonio; Macedonio, Giovanni

    2016-04-01

    Eruption Source Parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1D cross-section averaged eruption column model based on the Buoyant Plume Theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients and a model for wet aggregation of ash particles in the presence of liquid water or ice. When wet aggregation occurs, the model predicts an "effective" grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the "effective" particle grain size distribution resulting from eventual wet aggregation within the plume. As test cases we apply the model to the eruptive phase-B of the 4 April 1982 El Chichón volcano eruption (México) and the 6 May 2010 Eyjafjallajökull eruption phase (Iceland). The modular structure of the code facilitates the implementation of more quantitative ash aggregation parameterizations in future code versions as further observations and experimental data become available to better constrain ash aggregation processes.

  13. Accounting for the effects of volcanoes and ENSO in comparisons of modeled and observed temperature trends

    NASA Astrophysics Data System (ADS)

    Santer, B. D.; Wigley, T. M. L.; Doutriaux, C.; Boyle, J. S.; Hansen, J. E.; Jones, P. D.; Meehl, G. A.; Roeckner, E.; Sengupta, S.; Taylor, K. E.

    2001-11-01

    Several previous studies have attempted to remove the effects of explosive volcanic eruptions and El Niño-Southern Oscillation (ENSO) variability from time series of globally averaged surface and tropospheric temperatures. Such work has largely ignored the nonzero correlation between volcanic signals and ENSO. Here we account for this collinearity using an iterative procedure. We remove estimated volcano and ENSO signals from the observed global mean temperature data, and then calculate trends over 1979-1999 in the residuals. Residual trends are sensitive to the choice of index used for removing ENSO effects and to uncertainties in key volcanic parameters. Despite these sensitivities, residual surface and lower tropospheric (2LT) trends are almost always larger than trends in the raw observational data. After removal of volcano and ENSO effects, the differential warming between the surface and lower troposphere is generally reduced. These results suggest that the net effect of volcanoes and ENSO over 1979-1999 was to reduce globally averaged surface and tropospheric temperatures and cool the troposphere by more than the surface. ENSO and incomplete volcanic forcing effects can hamper reliable assessment of the true correspondence between modeled and observed trends. In the second part of our study, we remove these effects from model data and compare simulated and observed residual trends. Residual temperature trends are not significantly different at the surface. In the lower troposphere the statistical significance of trend differences depends on the experiment considered, the choice of ENSO index, and the volcanic signal decay time. The simulated difference between surface and tropospheric warming rates is significantly smaller than observed in 51 out of 54 cases considered. We also examine multiple realizations of model experiments with relatively complete estimates of natural and anthropogenic forcing. ENSO and volcanic effects are not removed from these
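
    A stripped-down version of the iterative signal-removal procedure can be written in a few lines: regress temperature on an ENSO index, remove that fit, regress the residual on a volcanic proxy, and iterate until the coefficients stabilize, which handles the nonzero ENSO-volcano correlation. The sketch below uses synthetic monthly series and plain least squares; lagged responses, the specific indices and the volcanic decay times examined in the study are not represented.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 252                                   # monthly values, 1979-1999
        enso = rng.standard_normal(n)             # stand-in ENSO index (e.g., a Nino-type index)
        volc = np.exp(-np.arange(n) / 30.0)       # stand-in volcanic aerosol proxy
        temp = 0.002 * np.arange(n) + 0.1 * enso - 0.3 * volc + 0.05 * rng.standard_normal(n)

        def fit_coef(x, y):
            """Least-squares slope of y on x (both centered)."""
            xc, yc = x - x.mean(), y - y.mean()
            return float(xc @ yc / (xc @ xc))

        b_enso, b_volc = 0.0, 0.0
        for _ in range(20):                       # iterate to handle ENSO-volcano collinearity
            b_enso_new = fit_coef(enso, temp - b_volc * volc)
            b_volc_new = fit_coef(volc, temp - b_enso_new * enso)
            if abs(b_enso_new - b_enso) < 1e-8 and abs(b_volc_new - b_volc) < 1e-8:
                break
            b_enso, b_volc = b_enso_new, b_volc_new

        residual = temp - b_enso * enso - b_volc * volc
        trend = np.polyfit(np.arange(n) / 12.0, residual, 1)[0]
        print("residual trend: %.3f K per year" % trend)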

  14. Accounting for network effects in neuronal responses using L1 regularized point process models.

    PubMed

    Kelly, Ryan C; Kass, Robert E; Smith, Matthew A; Lee, Tai Sing

    2010-01-01

    Activity of a neuron, even in the early sensory areas, is not simply a function of its local receptive field or tuning properties, but depends on the global context of the stimulus as well as the neural context. This suggests that the activity of the surrounding neurons and global brain states can exert considerable influence on the activity of a neuron. In this paper we implemented an L1 regularized point process model to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode "Utah" array. We found that the spikes of surrounding neurons indeed provide strong predictions of a neuron's response, in addition to the neuron's receptive field transfer function. We also found that the same spikes could be accounted for with the local field potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate and stimulus-response transfer functions. PMID:22162918
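
    The core of such an L1-regularized point process fit is a penalized Poisson regression of one neuron's binned spike counts on covariates such as the stimulus and the simultaneously recorded spikes of other units. The sketch below implements this with a simple proximal-gradient (ISTA) loop on synthetic data; it is a minimal stand-in for the authors' model, and the covariates, penalty strength and step size are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n_bins, n_cov = 2000, 40
        X = rng.standard_normal((n_bins, n_cov))        # stimulus + other-neuron spike covariates
        w_true = np.zeros(n_cov); w_true[:5] = 0.4      # only a few covariates matter
        y = rng.poisson(np.exp(X @ w_true - 1.0))       # spike counts per time bin

        def fit_l1_poisson(X, y, lam=0.02, lr=0.01, n_iter=2000):
            """ISTA for the Poisson log-likelihood with an L1 penalty on the weights."""
            w = np.zeros(X.shape[1]); b = 0.0
            for _ in range(n_iter):
                rate = np.exp(X @ w + b)
                grad_w = X.T @ (rate - y) / len(y)       # gradient of the negative log-likelihood
                grad_b = np.mean(rate - y)
                w = w - lr * grad_w
                w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)   # soft-threshold (L1 step)
                b -= lr * grad_b
            return w, b

        w_hat, b_hat = fit_l1_poisson(X, y)
        print("nonzero weights recovered:", np.flatnonzero(np.abs(w_hat) > 1e-3))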

  15. Identifying Opportunities to Reduce Uncertainty in a National-Scale Forest Carbon Accounting Model

    NASA Astrophysics Data System (ADS)

    Shaw, C. H.; Metsaranta, J. M.; Kurz, W.; Hilger, A.

    2013-12-01

    Assessing the quality of forest carbon budget models used for national and international reporting of greenhouse gas emissions is essential, but model evaluations are rarely conducted, mainly because of a lack of appropriate, independent ground plot data sets. Ecosystem carbon stocks for all major pools estimated from data collected for 696 ground plots from Canada's new National Forest Inventory (NFI) were used to assess plot-level carbon stocks predicted by the Carbon Budget Model of the Canadian Forest Sector 3 (CBM-CFS3) -- a model compliant with the most complex (Tier-3) approach in the reporting guidelines of the Intergovernmental Panel on Climate Change. The model is the core of Canada's National Forest Carbon Monitoring, Accounting, and Reporting System. At the landscape scale, a major portion of total uncertainty in both C stock and flux estimation is associated with biomass productivity, turnover, and soil and dead organic matter modelling parameters, which can best be further evaluated using plot-level data. Because the data collected for the ground plots were comprehensive, we were able to compare carbon stock estimates for 13 pools also estimated by the CBM-CFS3 (all modelled pools except coarse and fine root biomass) using the classical comparison statistics of mean difference and correlation. Using a Monte Carlo approach we were able to determine the contribution of aboveground biomass, deadwood and soil pool error to modeled ecosystem total error, as well as the contribution of pools that are summed to estimate aboveground biomass, deadwood and soil, to the error of these three subtotal pools. We were also able to assess potential sources of error propagation in the computational sequence of the CBM-CFS3. Analysis of the data grouped by the 16 dominant tree species allowed us to isolate the leading species where further research would lead to the greatest reductions in uncertainty for modeling of carbon stocks using the CBM-CFS3. This analysis
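
    The Monte Carlo error-attribution idea can be illustrated as follows: perturb each pool's plot-level error independently, propagate the perturbations through the sum that gives the ecosystem total, and read off each pool's share of the total variance. The pool names, error magnitudes and additive structure in the sketch below are illustrative assumptions, not CBM-CFS3 quantities.

        import numpy as np

        rng = np.random.default_rng(2)
        n_draws = 10_000

        # Assumed standard deviations of plot-level model-minus-observation error (t C/ha).
        pool_error_sd = {"aboveground_biomass": 12.0, "deadwood": 6.0, "soil": 20.0}

        # Draw independent errors per pool and sum them into an ecosystem-total error.
        draws = {pool: rng.normal(0.0, sd, n_draws) for pool, sd in pool_error_sd.items()}
        total_error = sum(draws.values())

        total_var = total_error.var()
        for pool, err in draws.items():
            # With independent pools, each pool's variance share approximates its contribution.
            share = err.var() / total_var
            print(f"{pool:22s} contributes ~{100 * share:4.1f}% of total error variance")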

  16. Causal Inference in Occupational Epidemiology: Accounting for the Healthy Worker Effect by Using Structural Nested Models

    PubMed Central

    Naimi, Ashley I.; Richardson, David B.; Cole, Stephen R.

    2013-01-01

    In a recent issue of the Journal, Kirkeleit et al. (Am J Epidemiol. 2013;177(11):1218–1224) provided empirical evidence for the potential of the healthy worker effect in a large cohort of Norwegian workers across a range of occupations. In this commentary, we provide some historical context, define the healthy worker effect by using causal diagrams, and use simulated data to illustrate how structural nested models can be used to estimate exposure effects while accounting for the healthy worker survivor effect in 4 simple steps. We provide technical details and annotated SAS software (SAS Institute, Inc., Cary, North Carolina) code corresponding to the example analysis in the Web Appendices, available at http://aje.oxfordjournals.org/. PMID:24077092

  17. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    PubMed

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both.
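
    The projection of case-control smoking information onto a full cohort can be mimicked with a simple Monte Carlo resampling loop: for each cohort member, draw a smoking category from the distribution observed in the matching stratum of the case-control sample, and repeat the whole assignment many times to gauge the stability of downstream estimates. The strata, categories and probabilities below are invented for illustration and are not the WISMUT data.

        import numpy as np

        rng = np.random.default_rng(3)

        # Assumed smoking-category probabilities by birth-cohort stratum, as they might be
        # estimated from a nested case-control sample (never / former / current smoker).
        strata_probs = {
            "born<1930": [0.25, 0.30, 0.45],
            "born>=1930": [0.35, 0.30, 0.35],
        }
        categories = np.array(["never", "former", "current"])

        # Hypothetical cohort: each member carries only a stratum label.
        cohort_strata = rng.choice(list(strata_probs), size=35_084)

        def one_projection(cohort_strata):
            """Impute a smoking category for every cohort member in one Monte Carlo draw."""
            out = np.empty(len(cohort_strata), dtype=object)
            for stratum, probs in strata_probs.items():
                mask = cohort_strata == stratum
                out[mask] = rng.choice(categories, size=mask.sum(), p=probs)
            return out

        # 200 independent projections; track e.g. the imputed fraction of current smokers.
        fractions = [np.mean(one_projection(cohort_strata) == "current") for _ in range(200)]
        print("current-smoker fraction: %.3f +/- %.3f" % (np.mean(fractions), np.std(fractions)))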

  18. Biological Parametric Mapping Accounting for Random Regressors with Regression Calibration and Model II Regression

    PubMed Central

    Yang, Xue; Lauzon, Carolyn B.; Crainiceanu, Ciprian; Caffo, Brian; Resnick, Susan M.; Landman, Bennett A.

    2012-01-01

    Massively univariate regression and inference in the form of statistical parametric mapping have transformed the way in which multi-dimensional imaging data are studied. In functional and structural neuroimaging, the de facto standard “design matrix”-based general linear regression model and its multi-level cousins have enabled investigation of the biological basis of the human brain. With modern study designs, it is possible to acquire multi-modal three-dimensional assessments of the same individuals — e.g., structural, functional and quantitative magnetic resonance imaging, alongside functional and ligand binding maps with positron emission tomography. Largely, current statistical methods in the imaging community assume that the regressors are non-random. For more realistic multi-parametric assessment (e.g., voxel-wise modeling), distributional consideration of all observations is appropriate. Herein, we discuss two unified regression and inference approaches, model II regression and regression calibration, for use in massively univariate inference with imaging data. These methods use the design matrix paradigm and account for both random and non-random imaging regressors. We characterize these methods in simulation and illustrate their use on an empirical dataset. Both methods have been made readily available as a toolbox plug-in for the SPM software. PMID:22609453
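
    Model II regression differs from ordinary least squares in treating the regressor as noisy too. A compact way to see the difference is Deming regression, which has a closed-form slope when the ratio of the two error variances is known; the sketch below compares it with OLS on synthetic data. The error-variance ratio and the data are assumptions made for illustration, and the SPM toolbox mentioned in the abstract is not involved.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 500
        true_x = rng.standard_normal(n)
        x = true_x + 0.5 * rng.standard_normal(n)        # regressor observed with error
        y = 2.0 * true_x + 0.5 * rng.standard_normal(n)  # response observed with error

        def deming_slope(x, y, delta=1.0):
            """Deming regression slope; delta = var(error in y) / var(error in x)."""
            sxx = np.var(x, ddof=1); syy = np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            term = syy - delta * sxx
            return (term + np.sqrt(term**2 + 4.0 * delta * sxy**2)) / (2.0 * sxy)

        ols_slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
        print("OLS slope (attenuated): %.2f" % ols_slope)      # biased toward zero
        print("Deming slope          : %.2f" % deming_slope(x, y, delta=1.0))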

  19. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    PubMed

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both. PMID:27198876

  20. On Accounting for the Interplay of Kinetic and Non-Kinetic Aspects in Population Mobility Models

    SciTech Connect

    Perumalla, Kalyan S; Bhaduri, Budhendra L

    2006-01-01

    Several important applications are placing demands on satisfactory characterization of the bi-directional interaction between kinetic and non-kinetic aspects in the mobility of people and commodities. Example applications include: emergency planning which needs to account for strong interplay of vehicular transport with inventory levels of critical supplies and/or people's psychologies; energy planning for normal day-to-day activities which considers the relation between travel patterns and energy usage; and, policy making for futuristic scenarios which examines the correlation between transportation behaviors and environmental/economic concerns. All these require new and holistic approaches for capturing the interplay of kinetic and non-kinetic aspects of mobility, as those aspects cannot be treated separately. Accurate characterization of such interplay requires proper integration of three distinct components, namely, data, models and computation. The availability of new sources of high-resolution data, and of detailed models together with recent advances in scalable computational methods now permits accurate capture of such an important interplay. This paper serves to highlight and argue that the interplay can in fact be captured in a high level of detail in simulations, enabled by the availability of new data, models and computational capabilities. Some of the challenges that are encountered in incorporating the interplay are outlined and plausible solution approaches are described in the context of large-scale scenarios involving mobility of people and commodities.

  1. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    USGS Publications Warehouse

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench; both left most of the seismic gap unbroken, leaving open the possibility of a future large earthquake in the region.
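
    One way to picture "accounting for uncertainty in the Green's functions" is to add a prediction-error covariance to the observational covariance in an otherwise standard Gaussian linear inversion d = G m + noise. The sketch below shows the effect on a toy two-parameter inversion; the matrices, noise levels and the crude prediction-error proxy are invented for illustration and are unrelated to the actual Bayesian machinery used in the study.

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy linear forward problem: d = G m + noise, with imperfect knowledge of G.
        m_true = np.array([1.5, -0.7])
        G_true = rng.standard_normal((30, 2))
        G_used = G_true + 0.1 * rng.standard_normal(G_true.shape)   # mismodeled Green's functions
        d = G_true @ m_true + 0.05 * rng.standard_normal(30)

        def gaussian_posterior(G, d, C_d, prior_sd=10.0):
            """Posterior mean/covariance of m for a Gaussian prior N(0, prior_sd^2 I)."""
            C_m_inv = np.eye(G.shape[1]) / prior_sd**2 + G.T @ np.linalg.inv(C_d) @ G
            C_m = np.linalg.inv(C_m_inv)
            return C_m @ G.T @ np.linalg.inv(C_d) @ d, C_m

        # (a) observational errors only vs (b) observational + prediction (Green's function) errors.
        C_obs = 0.05**2 * np.eye(30)
        m_prelim = np.linalg.lstsq(G_used, d, rcond=None)[0]        # reference model for the proxy
        C_pred = 0.1**2 * np.diag((G_used @ m_prelim)**2 + 1e-6)    # crude prediction-error proxy
        for label, C_d in [("obs error only", C_obs), ("obs + prediction error", C_obs + C_pred)]:
            m_hat, C_m = gaussian_posterior(G_used, d, C_d)
            print(label, "-> m_hat:", np.round(m_hat, 2),
                  "posterior sd:", np.round(np.sqrt(np.diag(C_m)), 2))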

  2. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    PubMed

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color, filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased by 10-15 times the estimate of the internal noise level for a 3-fold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model accounts for both the appearance and discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions.

  3. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    NASA Astrophysics Data System (ADS)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human dominated ecosystems the presence of a given species is the result of both its ecological suitability and human footprint on nature such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forested trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under current climate. According to the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the modelled actual and future probability of presence. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  4. Towards an improvement of carbon accounting for wildfires: incorporation of charcoal production into carbon emission models

    NASA Astrophysics Data System (ADS)

    Doerr, Stefan H.; Santin, Cristina; de Groot, Bill

    2015-04-01

    Every year fires release to the atmosphere the equivalent of 20-30% of the carbon (C) emissions from fossil fuel consumption, with future emissions from wildfires expected to increase under a warming climate. Critically, however, part of the biomass C affected by fire is not emitted during burning, but converted into charcoal, which is very resistant to environmental degradation and, thus, contributes to long-term C sequestration. The magnitude of charcoal production from wildfires as a long-term C sink remains essentially unknown and, to date, charcoal production has not been included in wildfire emission and C budget models. Here we present complete inventories of charcoal production in two fuel-rich, but otherwise very different ecosystems: i) a boreal conifer forest (experimental stand-replacing crown fire; Canada, 2012) and ii) a dry eucalyptus forest (high-intensity fuel reduction burn; Australia, 2014). Our data show that, when considering all the fuel components and quantifying all the charcoal produced from each (i.e. bark, dead wood debris, fine fuels), the overall amount of charcoal produced is significant: up to a third of the biomass C affected by fire. These findings indicate that charcoal production from wildfires could represent a major and currently unaccounted-for error in the estimation of the effects of wildfires on the global C balance. We suggest an initial approach to include charcoal production in C emission models, by using our case study of a boreal forest fire and the Canadian Fire Effects Model (CanFIRE). We also provide recommendations on how a 'conversion factor' for charcoal production could be relatively easily estimated when emission factors for different types of fuels and fire conditions are experimentally obtained. Ultimately, this presentation is a call for integrative collaboration between the fire emission modelling community and the charcoal community to work together towards the improvement of C accounting for wildfires.

  5. Carbon accounting of forest bioenergy: from model calibrations to policy options (Invited)

    NASA Astrophysics Data System (ADS)

    Lamers, P.

    2013-12-01

    knowledge in the field by comparing different state-of-the-art temporal forest carbon modeling efforts, and discusses whether or to what extent a deterministic 'carbon debt' accounting is possible and appropriate. It concludes on the possible scientific and, eventually, political choices in temporal carbon accounting for regulatory frameworks, including alternative options to address unintentional carbon losses within forest ecosystems/bioenergy systems.

  6. Accounting for the kinetics in order parameter analysis: Lessons from theoretical models and a disordered peptide

    NASA Astrophysics Data System (ADS)

    Berezovska, Ganna; Prada-Gracia, Diego; Mostarda, Stefano; Rao, Francesco

    2012-11-01

    Molecular simulations as well as single molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such descriptions are inaccurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clustering of which into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy order parameter as well as the dynamics of a disordered peptide illustrates the possibility to build accurate descriptions of molecular processes on the sole basis of order parameter time series without using any supplementary information.
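
    A minimal version of the fluctuation-aware state construction goes as follows: describe each time point by the order-parameter value and its local standard deviation in a short window, group similar (value, fluctuation) pairs into microstates, and count transitions between consecutive microstates to obtain a Markov-state transition matrix. The sketch below does this with simple quantile binning on a synthetic two-state trace; the window length, binning and trace are assumptions, not the network clustering used by the authors.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic order-parameter trace: two states with equal means but different noise,
        # which a value-only analysis could not separate.
        state = np.repeat([0, 1, 0, 1], 2500)
        q = 1.0 + np.where(state == 0, 0.05, 0.30) * rng.standard_normal(state.size)

        win = 25                                           # half-window for the local fluctuation
        local_sd = np.array([q[max(0, i - win):i + win].std() for i in range(q.size)])

        # Microstates = joint bins of (value, local fluctuation).
        v_bin = np.digitize(q, np.quantile(q, [0.25, 0.5, 0.75]))
        f_bin = np.digitize(local_sd, np.quantile(local_sd, [0.5]))
        micro = v_bin * 2 + f_bin
        n_states = micro.max() + 1

        # Transition count matrix between consecutive snapshots -> row-normalized MSM.
        counts = np.zeros((n_states, n_states))
        np.add.at(counts, (micro[:-1], micro[1:]), 1.0)
        T = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
        print("transition matrix:\n", np.round(T, 2))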

  7. Accounting for the kinetics in order parameter analysis: lessons from theoretical models and a disordered peptide.

    PubMed

    Berezovska, Ganna; Prada-Gracia, Diego; Mostarda, Stefano; Rao, Francesco

    2012-11-21

    Molecular simulations as well as single molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such descriptions are inaccurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clustering of which into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy order parameter as well as the dynamics of a disordered peptide illustrates the possibility to build accurate descriptions of molecular processes on the sole basis of order parameter time series without using any supplementary information. PMID:23181288

  8. Binomial Mixture Model Based Association Testing to Account for Genetic Heterogeneity for GWAS.

    PubMed

    Xu, Zhiyuan; Pan, Wei

    2016-04-01

    Genome-wide association studies (GWAS) have confirmed the ubiquitous existence of genetic heterogeneity for common disease: multiple common genetic variants have been identified to be associated, while many more are yet expected to be uncovered. However, the single SNP (single-nucleotide polymorphism) based trend test (or its variants) that has been dominantly used in GWAS is based on contrasting the allele frequency difference between the case and control groups, completely ignoring possible genetic heterogeneity. In spite of the widely accepted notion of genetic heterogeneity, we are not aware of any previous attempt to apply genetic heterogeneity motivated methods in GWAS. Here, to explicitly account for unknown genetic heterogeneity, we applied a mixture model based single-SNP test to the Wellcome Trust Case Control Consortium (WTCCC) GWAS data with traits of Crohn's disease, bipolar disease, coronary artery disease, and type 2 diabetes, identifying much larger numbers of significant SNPs and risk loci for each trait than those of the popular trend test, demonstrating potential power gain of the mixture model based test.

  9. A performance weighting procedure for GCMs based on explicit probabilistic models and accounting for observation uncertainty

    NASA Astrophysics Data System (ADS)

    Renard, Benjamin; Vidal, Jean-Philippe

    2016-04-01

    In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards good-performing models and discards poorly-performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations, 2. It can use several members whenever available, 3. It accounts for the uncertainty in observations, 4. It assigns a weight to each GCM (all weights summing up to one), 5. It can also assign a weight to the "H0 hypothesis" that all GCMs in the multimodel ensemble are not compatible with observations. The application of the weighting procedure is illustrated with several case studies including synthetic experiments, simple cases where the target GCM output is a simple univariate variable and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure. In
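
    In its simplest form, a likelihood-based GCM weighting of the kind outlined here assigns each model a weight proportional to how well an explicit probabilistic model of its output explains the observation, with observation error folded into the variance. The sketch below does this for a single scalar climate index; the Gaussian assumption, the member values and the observation uncertainty are illustrative choices, not the authors' full procedure.

        import numpy as np
        from scipy.stats import norm

        obs, obs_sd = 14.2, 0.15          # observed index and its measurement uncertainty (assumed)

        # Per-GCM ensemble members for the same index (assumed values).
        gcm_members = {
            "GCM-A": np.array([14.1, 14.3, 14.2]),
            "GCM-B": np.array([13.2, 13.4, 13.1]),
            "GCM-C": np.array([14.6, 14.9, 14.4]),
        }

        weights = {}
        for name, members in gcm_members.items():
            mu = members.mean()
            # Total variance: inter-member spread plus observation error variance.
            var = members.var(ddof=1) + obs_sd**2
            weights[name] = norm.pdf(obs, loc=mu, scale=np.sqrt(var))

        total = sum(weights.values())
        for name, w in weights.items():
            print(f"{name}: weight = {w / total:.2f}")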

  10. Historical Account to the State of the Art in Debris Flow Modeling

    NASA Astrophysics Data System (ADS)

    Pudasaini, Shiva P.

    2013-04-01

    In this contribution, I present a historical account of debris flow modelling leading to the state of the art in simulations and applications. A generalized two-phase model is presented that unifies existing avalanche and debris flow theories. The new model (Pudasaini, 2012) covers both the single-phase and two-phase scenarios and includes many essential and observable physical phenomena. In this model, the solid-phase stress is closed by Mohr-Coulomb plasticity, while the fluid stress is modeled as a non-Newtonian viscous stress that is enhanced by the solid-volume-fraction gradient. A generalized interfacial momentum transfer includes viscous drag, buoyancy and virtual mass forces, and a new generalized drag force is introduced to cover both solid-like and fluid-like drags. Strong couplings between solid and fluid momentum transfer are observed. The two-phase model is further extended to describe the dynamics of rock-ice avalanches with new mechanical models. This model explains dynamic strength weakening and includes internal fluidization, basal lubrication, and exchanges of mass and momentum. The advantages of the two-phase model over classical (effectively single-phase) models are discussed. Advection and diffusion of the fluid through the solid are associated with non-linear fluxes. Several exact solutions are constructed, including the non-linear advection-diffusion of fluid, kinematic waves of debris flow front and deposition, phase-wave speeds, and velocity distribution through the flow depth and through the channel length. The new model is employed to study two-phase subaerial and submarine debris flows, the tsunami generated by the debris impact at lakes/oceans, and rock-ice avalanches. Simulation results show that buoyancy enhances flow mobility. The virtual mass force alters flow dynamics by increasing the kinetic energy of the fluid. Newtonian viscous stress substantially reduces flow deformation, whereas non-Newtonian viscous stress may change the

  11. Taking the Missing Propensity into Account When Estimating Competence Scores: Evaluation of Item Response Theory Models for Nonignorable Omissions

    ERIC Educational Resources Information Center

    Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H.

    2015-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…

  12. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation. [PUCSF code

    SciTech Connect

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications.

  13. Toward an Human Resource Accounting (HRA)-Based Model for Designing an Organizational Effectiveness Audit in Education.

    ERIC Educational Resources Information Center

    Myroon, John L.

    The major purpose of this paper was to develop a Human Resource Accounting (HRA) macro-model that could be used for designing a school organizational effectiveness audit. Initially, the paper reviewed the advent and definition of HRA. In order to develop the proposed model, the different approaches to measuring effectiveness were reviewed,…

  14. Associative Account of Self-Cognition: Extended Forward Model and Multi-Layer Structure

    PubMed Central

    Sugiura, Motoaki

    2013-01-01

    The neural correlates of “self” identified by neuroimaging studies differ depending on which aspects of self are addressed. Here, three categories of self are proposed based on neuroimaging findings and an evaluation of the likely underlying cognitive processes. The physical self, representing self-agency of action, body-ownership, and bodily self-recognition, is supported by the sensory and motor association cortices located primarily in the right hemisphere. The interpersonal self, representing the attention or intentions of others directed at the self, is supported by several amodal association cortices in the dorsomedial frontal and lateral posterior cortices. The social self, representing the self as a collection of context-dependent social-values, is supported by the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex. Despite differences in the underlying cognitive processes and neural substrates, all three categories of self are likely to share the computational characteristics of the forward model, which is underpinned by internal schema or learned associations between one’s behavioral output and the consequential input. Additionally, these three categories exist within a hierarchical layer structure based on developmental processes that updates the schema through the attribution of prediction error. In this account, most of the association cortices critically contribute to some aspect of the self through associative learning while the primary regions involved shift from the lateral to the medial cortices in a sequence from the physical to the interpersonal to the social self. PMID:24009578

  15. Modeling Occupancy of Hosts by Mistletoe Seeds after Accounting for Imperfect Detectability

    PubMed Central

    Fadini, Rodrigo F.; Cintra, Renato

    2015-01-01

    The detection of an organism in a given site is widely used as a state variable in many metapopulation and epidemiological studies. However, failure to detect the species does not necessarily mean that it is absent. Assessing detectability is important for occupancy (presence-absence) surveys, and identifying the factors reducing detectability may help improve survey precision and efficiency. A method was used to estimate the occupancy status of host trees colonized by mistletoe seeds of Psittacanthus plagiophyllus as a function of host covariates: host size and presence of mistletoe infections on the same or on the nearest neighboring host (the cashew tree Anacardium occidentale). The technique also evaluated the effect of taking detectability into account for estimating host occupancy by mistletoe seeds. Individual host trees were surveyed for presence of mistletoe seeds with the aid of two or three observers to estimate detectability and occupancy. Detectability was, on average, 17% higher in focal-host trees with infected neighbors, while it decreased by about 23-50% from the smallest to the largest hosts. The presence of mistletoe plants in the sample tree had a negligible effect on detectability. Failure to detect hosts as occupied decreased occupancy by 2.5% on average, with a maximum of 10% for large and isolated hosts. The method presented in this study has potential for use with metapopulation studies of mistletoes, especially those focusing on the seed stage, but also as an improvement of accuracy in occupancy model estimates often used for metapopulation dynamics of tree-dwelling plants in general. PMID:25973754
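
    The occupancy-plus-detectability estimation used here can be illustrated with the standard two-parameter likelihood: each tree is occupied with probability psi, an occupied tree is detected by any single observer with probability p, and the probability of k detections out of J observers mixes the occupied and unoccupied cases. The sketch below fits psi and p by maximum likelihood on synthetic survey data and ignores the host-size and neighbor covariates of the actual model.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import comb

        rng = np.random.default_rng(6)
        n_trees, n_obs = 300, 3
        psi_true, p_true = 0.4, 0.6

        occupied = rng.random(n_trees) < psi_true
        detections = rng.binomial(n_obs, p_true * occupied)   # k detections per tree

        def neg_log_lik(params):
            # Logit parameterization keeps psi and p inside (0, 1).
            psi, p = 1 / (1 + np.exp(-params))
            k = detections
            lik_occ = psi * comb(n_obs, k) * p**k * (1 - p)**(n_obs - k)
            lik_unocc = (1 - psi) * (k == 0)                  # unoccupied trees yield no detections
            return -np.sum(np.log(lik_occ + lik_unocc))

        fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
        psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
        print("psi_hat = %.2f, p_hat = %.2f" % (psi_hat, p_hat))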

  16. Modeling occupancy of hosts by mistletoe seeds after accounting for imperfect detectability.

    PubMed

    Fadini, Rodrigo F; Cintra, Renato

    2015-01-01

    The detection of an organism in a given site is widely used as a state variable in many metapopulation and epidemiological studies. However, failure to detect the species does not necessarily mean that it is absent. Assessing detectability is important for occupancy (presence-absence) surveys, and identifying the factors reducing detectability may help improve survey precision and efficiency. A method was used to estimate the occupancy status of host trees colonized by mistletoe seeds of Psittacanthus plagiophyllus as a function of host covariates: host size and presence of mistletoe infections on the same or on the nearest neighboring host (the cashew tree Anacardium occidentale). The technique also evaluated the effect of taking detectability into account for estimating host occupancy by mistletoe seeds. Individual host trees were surveyed for presence of mistletoe seeds with the aid of two or three observers to estimate detectability and occupancy. Detectability was, on average, 17% higher in focal-host trees with infected neighbors, while it decreased by about 23-50% from the smallest to the largest hosts. The presence of mistletoe plants in the sample tree had a negligible effect on detectability. Failure to detect hosts as occupied decreased occupancy by 2.5% on average, with a maximum of 10% for large and isolated hosts. The method presented in this study has potential for use with metapopulation studies of mistletoes, especially those focusing on the seed stage, but also as an improvement of accuracy in occupancy model estimates often used for metapopulation dynamics of tree-dwelling plants in general. PMID:25973754

  17. Modeling occupancy of hosts by mistletoe seeds after accounting for imperfect detectability.

    PubMed

    Fadini, Rodrigo F; Cintra, Renato

    2015-01-01

    The detection of an organism in a given site is widely used as a state variable in many metapopulation and epidemiological studies. However, failure to detect the species does not necessarily mean that it is absent. Assessing detectability is important for occupancy (presence-absence) surveys, and identifying the factors reducing detectability may help improve survey precision and efficiency. A method was used to estimate the occupancy status of host trees colonized by mistletoe seeds of Psittacanthus plagiophyllus as a function of host covariates: host size and presence of mistletoe infections on the same or on the nearest neighboring host (the cashew tree Anacardium occidentale). The technique also evaluated the effect of taking detectability into account for estimating host occupancy by mistletoe seeds. Individual host trees were surveyed for presence of mistletoe seeds with the aid of two or three observers to estimate detectability and occupancy. Detectability was, on average, 17% higher in focal-host trees with infected neighbors, while it decreased by about 23-50% from the smallest to the largest hosts. The presence of mistletoe plants in the sample tree had a negligible effect on detectability. Failure to detect hosts as occupied decreased occupancy by 2.5% on average, with a maximum of 10% for large and isolated hosts. The method presented in this study has potential for use with metapopulation studies of mistletoes, especially those focusing on the seed stage, but also as an improvement of accuracy in occupancy model estimates often used for metapopulation dynamics of tree-dwelling plants in general.

  18. Underwriting information-theoretic accounts of quantum mechanics with a realist, psi-epistemic model

    NASA Astrophysics Data System (ADS)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    2016-05-01

    We propose an adynamical interpretation of quantum theory called Relational Blockworld (RBW) where the fundamental ontological element is a 4D graphical amalgam of space, time and sources called a “spacetimesource element.” These are fundamental elements of space, time and sources, not source elements in space and time. The transition amplitude for a spacetimesource element is computed using a path integral with discrete graphical action. The action for a spacetimesource element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint between sources, the spacetime metric and the energy-momentum content of the spacetimesource element, rather than a dynamical law for time-evolved entities. To illustrate this interpretation, we explain the simple EPR-Bell and twin-slit experiments. This interpretation of quantum mechanics constitutes a realist, psi-epistemic model that might underwrite certain information-theoretic accounts of the quantum.

  19. Sediment erodability in sediment transport modelling: Can we account for biota effects?

    NASA Astrophysics Data System (ADS)

    Le Hir, P.; Monbet, Y.; Orvain, F.

    2007-05-01

    Sediment erosion results from hydrodynamic forcing, represented by the bottom shear stress (BSS), and from the erodability of the sediment, defined by the critical erosion shear stress and the erosion rate. Abundant literature has dealt with the effects of biological components on sediment erodability and concluded that sediment processes are highly sensitive to the biota. However, very few sediment transport models account for these effects. We provide some background on the computation of BSS, and on the classical erosion laws for fine sand and mud, followed by a brief review of biota effects with the aim of quantifying the latter into generic formulations, where applicable. The effects of macrophytes, microphytobenthos, and macrofauna are considered in succession. Marine vegetation enhances the bottom dissipation of current energy, but also reduces shear stress at the sediment-water interface, which can be significant when the shoot density is high. The microphytobenthos and secreted extracellular polymeric substances (EPS) stabilise the sediment, and an increase of up to a factor of 5 can be assigned to the erosion threshold on muddy beds. However, the consequences with respect to the erosion rate are debatable since, once the protective biofilm is eroded, the underlying sediment probably has the same erosion behaviour as bare sediment. In addition, the development of benthic diatoms tends to be seasonal, so that stabilising effects are likely to be minimal in winter. Macrofaunal effects are characterised by extreme variability. For muddy sediments, destabilisation seems to be the general trend; this can become critical when benthic communities settle on consolidated sediments that would not be eroded if they remained bare. Biodeposition and bioresuspension fluxes are mentioned, for comparison with hydrodynamically induced erosion rates. Unlike the microphytobenthos, epifaunal benthic organisms create local roughness and are likely to change the BSS generated
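
    The erosion-law ingredients named here (a critical shear stress and an erosion rate, with biota modulating the threshold) can be condensed into a one-line excess-shear-stress law; the sketch below compares a bare mud bed with a biofilm-stabilized bed whose threshold is multiplied by a stabilization factor, using the factor-of-5 upper bound quoted above. The Partheniades-type form and the parameter values are common modelling choices assumed purely for illustration.

        import numpy as np

        def erosion_flux(tau, tau_ce, m_rate=1e-4):
            """Excess-shear-stress law: E = M * (tau/tau_ce - 1) for tau > tau_ce (kg m-2 s-1)."""
            return np.where(tau > tau_ce, m_rate * (tau / tau_ce - 1.0), 0.0)

        tau = np.linspace(0.0, 2.0, 5)       # bottom shear stress (Pa)
        tau_ce_bare = 0.2                    # bare-mud erosion threshold (Pa), assumed
        biofilm_factor = 5.0                 # stabilization by microphytobenthos/EPS (upper bound)

        for label, tau_ce in [("bare bed", tau_ce_bare), ("biofilm bed", biofilm_factor * tau_ce_bare)]:
            print(label, np.round(erosion_flux(tau, tau_ce), 6))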

  20. Taking into account hydrological modelling uncertainty in Mediterranean flash-floods forecasting

    NASA Astrophysics Data System (ADS)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2015-04-01

    Mediterranean intense weather events often lead to devastating flash-floods (FF). Increasing the lead time of FF forecasts would permit better anticipation of their catastrophic consequences. These events are one part of the Mediterranean hydrological cycle. HyMeX (HYdrological cycle in the Mediterranean EXperiment) aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean. In order to gather extensive data, measurement campaigns were conducted. The first special observing period (SOP1) of these campaigns served as a test-bed for a real-time hydrological ensemble prediction system (HEPS) dedicated to FF forecasting. It produced an ensemble of quantitative discharge forecasts (QDF) using the ISBA-TOP system. ISBA-TOP is a coupling between the surface scheme ISBA and a version of TOPMODEL dedicated to Mediterranean fast-responding rivers. ISBA-TOP was driven with several quantitative precipitation forecast (QPF) ensembles based on the AROME atmospheric convection-permitting model. This made it possible to take into account the uncertainty that affects the QPF and propagates into the QDF. This uncertainty is a major factor in discharge forecasting, especially in the case of Mediterranean flash-floods. Other sources of uncertainty also need to be sampled in HEPS systems; one of them is inherent to the hydrological modelling itself. The ISBA-TOP coupled system has been improved since the initial version that was used, for instance, during HyMeX SOP1. The initial ISBA-TOP consisted of coupling a TOPMODEL approach with ISBA-3L, which represented the soil stratification with 3 layers. The new version consists of coupling the same TOPMODEL approach with a version of ISBA where more than ten layers describe the soil vertical

  1. Design of a Competency-Based Assessment Model in the Field of Accounting

    ERIC Educational Resources Information Center

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  2. Improving Accountability Models by Using Technology-Enabled Knowledge Systems (TEKS). CSE Report 656

    ERIC Educational Resources Information Center

    Baker, Eva L.

    2005-01-01

    This paper addresses accountability and the role emerging technologies can play in improving education. The connection between accountability and technology can be best summarized in one term: feedback. Technological supports provide ways of designing, collecting and sharing information in order to provide the basis for the improvement of systems…

  3. Striving for Student Success: A Model of Shared Accountability. Education Sector Reports

    ERIC Educational Resources Information Center

    Bathgate, Kelly; Colvin, Richard Lee; Silva, Elena

    2011-01-01

    Instead of putting the entire achievement burden on schools, what would it look like to hold a whole community responsible for long-range student outcomes? This report explores the concept of "shared accountability" in education. The No Child Left Behind (NCLB) Act ushered in a new era of accountability in American education: for the first time,…

  4. A Pluralistic Account of Homology: Adapting the Models to the Data

    PubMed Central

    Haggerty, Leanne S.; Jachiet, Pierre-Alain; Hanage, William P.; Fitzpatrick, David A.; Lopez, Philippe; O’Connell, Mary J.; Pisani, Davide; Wilkinson, Mark; Bapteste, Eric; McInerney, James O.

    2014-01-01

    Defining homologous genes is important in many evolutionary studies but raises obvious issues. Some of these issues are conceptual and stem from our assumptions of how a gene evolves, others are practical, and depend on the algorithmic decisions implemented in existing software. Therefore, to make progress in the study of homology, both ontological and epistemological questions must be considered. In particular, defining homologous genes cannot be solely addressed under the classic assumptions of strong tree thinking, according to which genes evolve in a strictly tree-like fashion of vertical descent and divergence and the problems of homology detection are primarily methodological. Gene homology could also be considered under a different perspective where genes evolve as “public goods,” subjected to various introgressive processes. In this latter case, defining homologous genes becomes a matter of designing models suited to the actual complexity of the data and how such complexity arises, rather than trying to fit genetic data to some a priori tree-like evolutionary model, a practice that inevitably results in the loss of much information. Here we show how important aspects of the problems raised by homology detection methods can be overcome when even more fundamental roots of these problems are addressed by analyzing public goods thinking evolutionary processes through which genes have frequently originated. This kind of thinking acknowledges distinct types of homologs, characterized by distinct patterns, in phylogenetic and nonphylogenetic unrooted or multirooted networks. In addition, we define “family resemblances” to include genes that are related through intermediate relatives, thereby placing notions of homology in the broader context of evolutionary relationships. We conclude by presenting some payoffs of adopting such a pluralistic account of homology and family relationship, which expands the scope of evolutionary analyses beyond the traditional

  5. Situated sentence processing: the coordinated interplay account and a neurobehavioral model.

    PubMed

    Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R

    2010-03-01

    Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). Finally, we argue that the mechanisms

  6. Pharmacokinetic Modeling of Manganese III. Physiological Approaches Accounting for Background and Tracer Kinetics

    SciTech Connect

    Teeguarden, Justin G.; Gearhart, Jeffrey; Clewell, III, H. J.; Covington, Tammie R.; Nong, Andy; Anderson, Melvin E.

    2007-01-01

    assessments (Dixit et al., 2003). With most exogenous compounds, there is often no background exposure and body concentrations are not under active control from homeostatic processes as occurs with essential nutrients. Any complete Mn PBPK model would include the homeostatic regulation as an essential nutritional element and the additional exposure routes by inhalation. Two companion papers discuss the kinetic complexities of the quantitative dose-dependent alterations in hepatic and intestinal processes that control uptake and elimination of Mn (Teeguarden et al., 2006a, b). Radioactive 54Mn has been used to investigate the behavior of the more common 55Mn isotope in the body because the distribution and elimination of tracer doses reflect the overall distributional characteristics of Mn. In this paper, we take the first steps in developing a multi-route PBPK model for Mn. Here we develop a PBPK model to account for tissue concentrations and tracer kinetics of Mn under normal dietary intake. This model for normal levels of Mn will serve as the starting point for more complete model descriptions that include dose-dependencies in both oral uptake and biliary excretion. Material and Methods - Experimental Data: Two studies using 54Mn tracer were employed in model development (Furchner et al. 1966; Wieczorek and Oberdorster 1989). In Furchner et al. (1966) male Sprague-Dawley rats received an ip injection of carrier-free 54MnCl2 while maintained on standard rodent feed containing ~ 45 ppm Mn. Tissue radioactivity of 54Mn was measured by liquid scintillation counting between post-injection days 1 and 89 and reported as percent of administered dose per kg tissue. 54Mn time courses were reported for liver, kidney, bone, brain, muscle, blood, lung and whole body. Because ip uptake is via the portal circulation to the liver, this data set had information on distribution and clearance behaviors of Mn entering the systemic circulation from the liver.
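
    A tracer-kinetic building block of the kind used in such PBPK work can be sketched as a small linear compartment model: the injected 54Mn dose enters the liver via the portal route, exchanges with a lumped "rest of body" compartment, and is cleared by biliary excretion. The compartments, rate constants and dose in the sketch below are illustrative assumptions, not the calibrated model parameters.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Assumed first-order rate constants (per day).
        K_LIVER_TO_BODY = 0.8    # transfer from liver to the rest of the body
        K_BODY_TO_LIVER = 0.1    # return to the liver
        K_BILIARY = 0.3          # biliary excretion from the liver

        def mn_tracer(t, y):
            """y = [liver, rest_of_body] fraction of the injected 54Mn dose."""
            liver, body = y
            d_liver = K_BODY_TO_LIVER * body - (K_LIVER_TO_BODY + K_BILIARY) * liver
            d_body = K_LIVER_TO_BODY * liver - K_BODY_TO_LIVER * body
            return [d_liver, d_body]

        # ip injection routed through the portal circulation: the whole dose starts in the liver.
        sol = solve_ivp(mn_tracer, (0.0, 89.0), [1.0, 0.0], t_eval=[1, 7, 30, 89])
        for t, liver, body in zip(sol.t, sol.y[0], sol.y[1]):
            print("day %2d: liver %.3f, rest of body %.3f, whole body %.3f"
                  % (t, liver, body, liver + body))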

  7. Accounting for water management issues within hydrological simulation: Alternative modelling options and a network optimization approach

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Nalbantis, Ioannis; Rozos, Evangelos; Koutsoyiannis, Demetris

    2010-05-01

    In mixed natural and artificialized river basins, many complexities arise due to anthropogenic interventions in the hydrological cycle, including abstractions from surface water bodies, groundwater pumping or recharge and water returns through drainage systems. Typical engineering approaches adopt a multi-stage modelling procedure, with the aim to handle the complexity of process interactions and the lack of measured abstractions. In such a context, the entire hydrosystem is separated into natural and artificial sub-systems or components; the natural ones are modelled individually, and their predictions (i.e. hydrological fluxes) are transferred to the artificial components as inputs to a water management scheme. To account for the interactions between the various components, an iterative procedure is essential, whereby the outputs of the artificial sub-systems (i.e. abstractions) become inputs to the natural ones. However, this strategy suffers from multiple shortcomings, since it presupposes that pure natural sub-systems can be located and that sufficient information is available for each sub-system modelled, including suitable, i.e. "unmodified", data for calibrating the hydrological component. In addition, implementing such a strategy is ineffective when the entire scheme runs in stochastic simulation mode. To cope with the above drawbacks, we developed a generalized modelling framework, following a network optimization approach. This originates from graph theory, which has been successfully implemented within some advanced computer packages for water resource systems analysis. The user formulates a unified system which comprises the hydrographical network and the typical components of a water management network (aqueducts, pumps, junctions, demand nodes etc.). Input data for the latter include hydraulic properties, constraints, targets, priorities and operation costs. The real-world system is described through a conceptual graph, whose dummy properties
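    As a toy illustration of the network optimization idea, the sketch below casts a combined river and water-management network as a minimum-cost flow problem with networkx. The node names, demands, capacities and costs are invented for illustration and do not correspond to the authors' software or data.

```python
import networkx as nx

# Toy unified network: a reservoir can serve a demand node through a river
# reach and a junction, or through a costlier aqueduct of limited capacity.
G = nx.DiGraph()
G.add_node("reservoir", demand=-10)   # supplies 10 units
G.add_node("junction", demand=0)
G.add_node("city", demand=10)         # requires 10 units
G.add_edge("reservoir", "junction", capacity=8, weight=1)   # river reach
G.add_edge("junction", "city", capacity=8, weight=1)
G.add_edge("reservoir", "city", capacity=5, weight=4)       # aqueduct

flow = nx.min_cost_flow(G)   # allocates water at minimum total cost
print(flow)
```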

  8. Crash Simulation of Roll Formed Parts by Damage Modelling Taking Into Account Preforming Effects

    NASA Astrophysics Data System (ADS)

    Till, Edwin T.; Hackl, Benjamin; Schauer, Hermann

    2011-08-01

    Complex phase steels of strength levels up to 1200 MPa are suitable for roll forming. These may be applied in automotive structures for enhancing the crashworthiness, e.g. as stiffeners in doors. Even though the strain hardening of the material is low, there is considerable bending formability. However, ductility decreases with the strength level. Higher strength requires more focus on the structural integrity of the part during the process planning stage and with respect to the crash behavior. Nowadays numerical simulation is used as a process design tool for roll-forming in a production environment. The assessment of the stability of a roll forming process is quite challenging for AHSS grades. There are two objectives of the present work: first, to provide a reliable assessment tool to the roll forming analyst for failure prediction; second, to establish simulation procedures in order to predict the part's behavior in crash applications taking into account damage and failure. Today adequate ductile fracture models are available which can be used in forming and crash applications. These continuum models are based on failure strain curves or surfaces which depend on the stress triaxiality (e.g. Crach or GISSMO) and may additionally include the Lode angle (extended Mohr-Coulomb or extended GISSMO model). A challenging task is to obtain the respective failure strain curves. In the paper, the procedure for obtaining these failure strain curves using small-scale tests within voestalpine Stahl (notch tensile, bulge and shear tests) is described in detail. It is shown that capturing the surface strains is not sufficient for obtaining reliable material failure parameters. The simulation tool for roll-forming at the site of voestalpine Krems is Copra® FEA RF, which is a 3D continuum finite element solver based on MSC.Marc. The simulation environment for crash applications is LS-DYNA. Shell elements are used for this type of analysis. A major task is to provide results of

  9. Do prevailing societal models influence reports of near-death experiences?: a comparison of accounts reported before and after 1975.

    PubMed

    Athappilly, Geena K; Greyson, Bruce; Stevenson, Ian

    2006-03-01

    Transcendental near-death experiences show some cross-cultural variation that suggests they may be influenced by societal beliefs. The prevailing Western model of near-death experiences was defined by Moody's description of the phenomenon in 1975. To explore the influence of this cultural model, we compared near-death experience accounts collected before and after 1975. We compared the frequency of 15 phenomenological features Moody defined as characteristic of near-death experiences in 24 accounts collected before 1975 and in 24 more recent accounts matched on relevant demographic and situational variables. Near-death experience accounts collected after 1975 differed from those collected earlier only in increased frequency of tunnel phenomena, which other research has suggested may not be integral to the experience, and not in any of the remaining 14 features defined by Moody as characteristic of near-death experiences. These data challenge the hypothesis that near-death experience accounts are substantially influenced by prevailing cultural models.

  10. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    NASA Astrophysics Data System (ADS)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term that accounts for interactions between resolved scales and unresolved scales, and the Reynolds term that accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type, SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models, which better conform to the physics of the true SGS stress while remaining stable, are thus desirable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term, in terms of dissipation spectra and in terms of the probability density function (pdf) of dissipation in physical space, both positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also does backscatter. It also has an eddy-viscosity multiscale model part that

  11. Multiple-breed reaction norm animal model accounting for robustness and heteroskedasticity in a Nelore-Angus crossed population.

    PubMed

    Oliveira, M M; Santana, M L; Cardoso, F F

    2016-07-01

    Our objective was to genetically characterize post-weaning weight gain (PWG), over a 345-day period after weaning, of Brangus-Ibagé (Nelore×Angus) cattle. Records (n=4016) were from the foundation herd of the Embrapa South Livestock Center. A Bayesian approach was used to assess genotype by environment (G×E) interaction and to identify a suitable model for the estimation of genetic parameters and use in genetic evaluation. A robust and heteroscedastic reaction norm multiple-breed animal model was proposed. The model accounted for heterogeneity of residual variance associated with effects of breed, heterozygosity, sex and contemporary group; and was robust with respect to outliers. Additive genetic effects were modeled for the intercept and slope of a reaction norm to changes in the environmental gradient. Inference was based on a Markov chain Monte Carlo run of 110 000 cycles, after 10 000 cycles of burn-in. Bayesian model choice criteria indicated the proposed model was superior to simpler sub-models that did not account for G×E interaction, multiple-breed structure, robustness and heteroscedasticity. We conclude that, for the Brangus-Ibagé population, these factors should be jointly accounted for in genetic evaluation of PWG. Heritability estimates increased proportionally with improvement in the environmental conditions gradient. Therefore, an increased proportion of differences in performance among animals was explained by genetic factors rather than environmental factors as rearing conditions improved. As a consequence, response to selection may be increased in favorable environments.

  12. Multiple-breed reaction norm animal model accounting for robustness and heteroskedasticity in a Nelore-Angus crossed population.

    PubMed

    Oliveira, M M; Santana, M L; Cardoso, F F

    2016-07-01

    Our objective was to genetically characterize post-weaning weight gain (PWG), over a 345-day period after weaning, of Brangus-Ibagé (Nelore×Angus) cattle. Records (n=4016) were from the foundation herd of the Embrapa South Livestock Center. A Bayesian approach was used to assess genotype by environment (G×E) interaction and to identify a suitable model for the estimation of genetic parameters and use in genetic evaluation. A robust and heteroscedastic reaction norm multiple-breed animal model was proposed. The model accounted for heterogeneity of residual variance associated with effects of breed, heterozygosity, sex and contemporary group; and was robust with respect to outliers. Additive genetic effects were modeled for the intercept and slope of a reaction norm to changes in the environmental gradient. Inference was based on a Markov chain Monte Carlo run of 110 000 cycles, after 10 000 cycles of burn-in. Bayesian model choice criteria indicated the proposed model was superior to simpler sub-models that did not account for G×E interaction, multiple-breed structure, robustness and heteroscedasticity. We conclude that, for the Brangus-Ibagé population, these factors should be jointly accounted for in genetic evaluation of PWG. Heritability estimates increased proportionally with improvement in the environmental conditions gradient. Therefore, an increased proportion of differences in performance among animals was explained by genetic factors rather than environmental factors as rearing conditions improved. As a consequence, response to selection may be increased in favorable environments. PMID:26754914

  13. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology

    PubMed Central

    2013-01-01

    Background The relative roles of climate variability and population related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. Methods A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Results Model simulations of entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies from a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine spatial resolution regional integrations for Eastern Africa reproduce the malaria atlas project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model grossly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities probably due to the neglect of population migration. Conclusions A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions. PMID:23419192

  14. Accounting for Global Climate Model Projection Uncertainty in Modern Statistical Downscaling

    SciTech Connect

    Johannesson, G

    2010-03-17

    configuring and running a regional climate model (RCM) nested within a given GCM projection (i.e., the GCM provides boundary conditions for the RCM). On the other hand, statistical downscaling aims at establishing a statistical relationship between observed local/regional climate variables of interest and synoptic (GCM-scale) climate predictors. The resulting empirical relationship is then applied to future GCM projections. A comparison of the pros and cons of dynamical versus statistical downscaling is outside the scope of this effort, but has been extensively studied and the reader is referred to Wilby et al. (1998); Murphy (1999); Wood et al. (2004); Benestad et al. (2007); Fowler et al. (2007), and references within those. The scope of this effort is to study a methodology, a statistical framework, to propagate and account for GCM uncertainty in regional statistical downscaling assessment. In particular, we will explore how to leverage an ensemble of GCM projections to quantify the impact of the GCM uncertainty in such an assessment. There are three main components to this effort: (1) gather the necessary climate-related data for a regional SDS study, including multiple GCM projections, (2) carry out SDS, and (3) assess the uncertainty. The first step is carried out using tools written in the Python programming language, while analysis tools were developed in the statistical programming language R; see Figure 1.
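    A schematic sketch of the regression-based SDS step is given below: a linear relation is fitted between a synthetic observed local variable and a GCM-scale predictor over a "historical" period, then applied to each member of a synthetic future GCM ensemble so that the spread of the downscaled results reflects GCM uncertainty. All data, coefficients and ensemble shifts are invented for illustration and are not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic historical record: GCM-scale predictor and observed local variable.
gcm_hist = rng.normal(15.0, 2.0, size=200)
obs_local = 0.8 * gcm_hist + 3.0 + rng.normal(0.0, 0.5, size=200)

# Fit the empirical downscaling relation (simple linear regression).
slope, intercept = np.polyfit(gcm_hist, obs_local, deg=1)

# Apply it to each member of a synthetic future GCM ensemble; the spread of
# the downscaled means is one measure of GCM projection uncertainty.
ensemble_future = [rng.normal(17.0 + shift, 2.0, size=200) for shift in (0.0, 0.5, 1.0)]
downscaled_means = [float(np.mean(slope * member + intercept)) for member in ensemble_future]
print(downscaled_means)
```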

  15. Growth Models and Accountability: A Recipe for Remaking ESEA. Education Sector Reports

    ERIC Educational Resources Information Center

    Carey, Kevin; Manwaring, Robert

    2011-01-01

    Under the federal No Child Left Behind Act, schools were held almost exclusively accountable for absolute levels of student performance. But that meant that even schools that were making great strides with students were still labeled as "failing," just because the students had not yet made it all the way to a "proficient" level of achievement. As…

  16. Measuring What Matters: A Stronger Accountability Model for Teacher Education [Executive Summary

    ERIC Educational Resources Information Center

    Crowe, Edward

    2010-01-01

    Our current system for holding U.S. teacher education programs accountable doesn't guarantee program quality or serve the needs of schools and students. State oversight for teacher preparation programs mostly ignores the impact of graduates on the K-12 students they teach, and it gives little attention to where graduates teach or how long they…

  17. Towards a Model of Stewardship and Accountability in Support of Innovation and "Good" Failure.

    PubMed

    Denny, Keith; Veillard, Jeremy

    2015-01-01

    From an evolutionary perspective, failures of imagination and missed opportunities to learn from experimentation are as potentially harmful for the health system as failures of practice. The conundrum is encapsulated in the fact that while commentators are steadfast about the need on the part of the stewards of the health system to avoid any waste of public dollars, they are also insistent about the need for innovation. There is tension between these two imperatives that is often unrecognized: the pursuit of efficiency, narrowly defined, can crowd out the goal of innovation by insisting on the elimination of "good waste" (the costs of experimentation) as well as "bad waste" (the costs of inefficiency) (Potts 2009). This tension is mirrored in the two broad drivers of performance reporting in health systems: public accountability and quality improvement. Health organizations, predominantly funded by public funds, are necessarily accountable for the ways in which those funds are used and outcomes achieved. This paper reviews how accountability relationships should be re-examined to create room for "good failure" and to ensure that system accountability does not become a barrier to performance improvement. PMID:26853610

  18. Accountability to Whom? For What? Teacher Identity and the Force Field Model of Teacher Development

    ERIC Educational Resources Information Center

    Samuel, Michael

    2008-01-01

    The rise of fundamentalism in the sphere of teacher education points to a swing back towards teachers as service workers for State agendas. Increasingly, teachers are expected to account for the outcomes of their practices. This article traces the trajectory of trends in teacher education over the past five decades arguing that this "new…

  19. Alternative Schools Accountability Model: 2001-2002 Indicator Selection and Reporting Guide.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    California has developed an alternative accountability system for schools with fewer than 100 students, alternative schools of various kinds, community day schools, and other schools under the jurisdiction of a county board of education or a county superintendent of schools. This document is a guide to assist local administrators in completing the…

  20. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    ERIC Educational Resources Information Center

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  1. Modeling Task Switching without Switching Tasks: A Short-Term Priming Account of Explicitly Cued Performance

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Logan, Gordon D.

    2005-01-01

    Switch costs in task switching are commonly attributed to an executive control process of task-set reconfiguration, particularly in studies involving the explicit task-cuing procedure. The authors propose an alternative account of explicitly cued performance that is based on 2 mechanisms: priming of cue encoding from residual activation of cues in…

  2. A collaborative accountable care model in three practices showed promising early results on costs and quality of care.

    PubMed

    Salmon, Richard B; Sanderson, Mark I; Walters, Barbara A; Kennedy, Karen; Flores, Robert C; Muney, Alan M

    2012-11-01

    Cigna's Collaborative Accountable Care initiative provides financial incentives to physician groups and integrated delivery systems to improve the quality and efficiency of care for patients in commercial open-access benefit plans. Registered nurses who serve as care coordinators employed by participating practices are a central feature of the initiative. They use patient-specific reports and practice performance reports provided by Cigna to improve care coordination, identify and close care gaps, and address other opportunities for quality improvement. We report interim quality and cost results for three geographically and structurally diverse provider practices in Arizona, New Hampshire, and Texas. Although not statistically significant, these early results revealed favorable trends in total medical costs and quality of care, suggesting that a shared-savings accountable care model and collaborative support from the payer can enable practices to take meaningful steps toward full accountability for care quality and efficiency.

  3. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    NASA Astrophysics Data System (ADS)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

    In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae to connect the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
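    A minimal Kermack-McKendrick (SIR) sketch of the kind of model referred to above is shown below; the transmission and recovery rates and the size of the susceptible pool are illustrative placeholders, not the values fitted to the Dutch outbreak data.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 1.5, 0.5        # transmission and recovery rates per week (hypothetical)
N = 250_000                   # susceptible community size (hypothetical)

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

sol = solve_ivp(sir, (0, 52), [N - 10, 10, 0], t_eval=np.linspace(0, 52, 200))
print(f"peak infected: {sol.y[1].max():.0f}")
```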

  4. On the treatment of evapotranspiration, soil moisture accounting, and aquifer recharge in monthly water balance models.

    USGS Publications Warehouse

    Alley, W.M.

    1984-01-01

    Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation.-from Author
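    As an illustration of one of the monthly water balance models compared (the Thomas abcd model), a single monthly time step in its common textbook form is sketched below; the parameter values and forcings are illustrative, not the New Jersey calibrations from the study.

```python
import math

def abcd_step(P, PET, S_prev, G_prev, a=0.98, b=250.0, c=0.6, d=0.1):
    """One monthly step of the Thomas abcd water balance model (textbook form).

    P and PET in mm; S is the soil moisture store, G the groundwater store.
    Parameter values are illustrative, not calibrated.
    """
    W = P + S_prev                                # available water
    opp = (W + b) / (2.0 * a)
    Y = opp - math.sqrt(opp ** 2 - W * b / a)     # evapotranspiration opportunity
    S = Y * math.exp(-PET / b)                    # end-of-month soil moisture
    ET = Y - S                                    # actual evapotranspiration
    recharge = c * (W - Y)
    direct_runoff = (1.0 - c) * (W - Y)
    G = (G_prev + recharge) / (1.0 + d)           # groundwater store
    baseflow = d * G
    return direct_runoff + baseflow, ET, S, G

print(abcd_step(P=120.0, PET=60.0, S_prev=150.0, G_prev=80.0))
```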

  5. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    ERIC Educational Resources Information Center

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…

  6. Recommended Method To Account For Daughter Ingrowth For The Portsmouth On-Site Waste Disposal Facility Performance Assessment Modeling

    SciTech Connect

    Phifer, Mark A.; Smith, Frank G. III

    2013-06-21

    A 3-D STOMP model has been developed for the Portsmouth On-Site Waste Disposal Facility (OSWDF) at Site D as outlined in Appendix K of FBP 2013. This model projects the flow and transport of the following radionuclides to various points of assessments: Tc-99, U-234, U-235, U-236, U-238, Am-241, Np-237, Pu-238, Pu-239, Pu-240, Th-228, and Th-230. The model includes the radioactive decay of these parents, but does not include the associated daughter ingrowth because the STOMP model does not have the capability to model daughter ingrowth. The Savannah River National Laboratory (SRNL) provides herein a recommended method to account for daughter ingrowth in association with the Portsmouth OSWDF Performance Assessment (PA) modeling.
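    A minimal sketch of the kind of parent-daughter calculation such a post-processing correction performs is given below, using the two-member Bateman solution; the half-lives and initial inventory are illustrative placeholders, not the Portsmouth OSWDF values.

```python
import numpy as np

def daughter_ingrowth(n_parent0, lam_p, lam_d, t):
    """Two-member Bateman solution: daughter atoms grown in from a pure parent.

    Assumes no daughter is present at t = 0; lam_p and lam_d are decay constants.
    """
    return n_parent0 * lam_p / (lam_d - lam_p) * (np.exp(-lam_p * t) - np.exp(-lam_d * t))

# Illustrative values only (hypothetical parent/daughter pair, not the OSWDF inventory).
lam_p = np.log(2) / 1600.0     # parent half-life 1600 yr
lam_d = np.log(2) / 5.75       # daughter half-life 5.75 yr
t = np.linspace(0.0, 100.0, 11)
print(daughter_ingrowth(1.0e6, lam_p, lam_d, t))
```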

  7. An Individual-Based Model of Zebrafish Population Dynamics Accounting for Energy Dynamics

    PubMed Central

    Beaudouin, Rémy; Goussen, Benoit; Piccini, Benjamin; Augustine, Starrlight; Devillers, James; Brion, François; Péry, Alexandre R. R.

    2015-01-01

    Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget for individual zebrafish (DEB model) was coupled to an individual based model of zebrafish population dynamics (IBM model). Next, we fitted the DEB model to new experimental data on zebrafish growth and reproduction thus improving existing models. We further analysed the DEB-model and DEB-IBM using a sensitivity analysis. Finally, the predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations and the predicted population dynamics are realistic. While our zebrafish DEB-IBM model can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can already serve to predict the impact of compounds at the population level. PMID:25938409

  8. Accounting for spatial effects in land use regression for urban air pollution modeling.

    PubMed

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects--e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models--may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R² values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models.

  9. Accounting for spatial effects in land use regression for urban air pollution modeling.

    PubMed

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects--e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models--may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R² values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. PMID:26530819
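    The local fitting idea behind the GWR models above can be sketched as a distance-weighted least-squares fit at each target location; the synthetic data, Gaussian kernel and bandwidth below are illustrative assumptions rather than the published model specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monitoring sites: coordinates, design matrix (intercept + 2 covariates),
# and an NO2-like response.
coords = rng.uniform(0, 10, size=(50, 2))
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([20.0, 3.0, -1.5]) + rng.normal(0.0, 1.0, 50)

def gwr_coefficients(target_xy, coords, X, y, bandwidth=2.0):
    """Weighted least squares at one location with a Gaussian distance kernel."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(gwr_coefficients(np.array([5.0, 5.0]), coords, X, y))
```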

  10. An intuitive Bayesian spatial model for disease mapping that accounts for scaling.

    PubMed

    Riebler, Andrea; Sørbye, Sigrunn H; Simpson, Daniel; Rue, Håvard

    2016-08-01

    In recent years, disease mapping studies have become a routine application within geographical epidemiology and are typically analysed within a Bayesian hierarchical model formulation. A variety of model formulations for the latent level have been proposed but all come with inherent issues. In the classical BYM (Besag, York and Mollié) model, the spatially structured component cannot be seen independently from the unstructured component. This makes prior definitions for the hyperparameters of the two random effects challenging. There are alternative model formulations that address this confounding; however, the issue of how to choose interpretable hyperpriors is still unsolved. Here, we discuss a recently proposed parameterisation of the BYM model that leads to improved parameter control as the hyperparameters can be seen independently from each other. Furthermore, the need for a scaled spatial component is addressed, which facilitates assignment of interpretable hyperpriors and makes these transferable between spatial applications with different graph structures. The hyperparameters themselves are used to define flexible extensions of simple base models. Consequently, penalised complexity priors for these parameters can be derived based on the information-theoretic distance from the flexible model to the base model, giving priors with clear interpretation. We provide implementation details for the new model formulation which preserve sparsity properties, and we investigate systematically the model performance and compare it to existing parameterisations. Through a simulation study, we show that the new model performs well, both showing good learning abilities and good shrinkage behaviour. In terms of model choice criteria, the proposed model performs at least equally well as existing parameterisations, but only the new formulation offers parameters that are interpretable and hyperpriors that have a clear meaning.

  11. An intuitive Bayesian spatial model for disease mapping that accounts for scaling.

    PubMed

    Riebler, Andrea; Sørbye, Sigrunn H; Simpson, Daniel; Rue, Håvard

    2016-08-01

    In recent years, disease mapping studies have become a routine application within geographical epidemiology and are typically analysed within a Bayesian hierarchical model formulation. A variety of model formulations for the latent level have been proposed but all come with inherent issues. In the classical BYM (Besag, York and Mollié) model, the spatially structured component cannot be seen independently from the unstructured component. This makes prior definitions for the hyperparameters of the two random effects challenging. There are alternative model formulations that address this confounding; however, the issue of how to choose interpretable hyperpriors is still unsolved. Here, we discuss a recently proposed parameterisation of the BYM model that leads to improved parameter control as the hyperparameters can be seen independently from each other. Furthermore, the need for a scaled spatial component is addressed, which facilitates assignment of interpretable hyperpriors and makes these transferable between spatial applications with different graph structures. The hyperparameters themselves are used to define flexible extensions of simple base models. Consequently, penalised complexity priors for these parameters can be derived based on the information-theoretic distance from the flexible model to the base model, giving priors with clear interpretation. We provide implementation details for the new model formulation which preserve sparsity properties, and we investigate systematically the model performance and compare it to existing parameterisations. Through a simulation study, we show that the new model performs well, both showing good learning abilities and good shrinkage behaviour. In terms of model choice criteria, the proposed model performs at least equally well as existing parameterisations, but only the new formulation offers parameters that are interpretable and hyperpriors that have a clear meaning. PMID:27566770

  12. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    NASA Astrophysics Data System (ADS)

    Branlard, E.; Gaunaa, M.; Machefaux, E.

    2014-06-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum (BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw-models. The model and its implementation are presented. Data from the MEXICO experiment are used as a basis for validation. Three tools using the same 2D airfoil coefficient data are compared: a BEM code, an Actuator-Line and a vortex code. The vortex code is further used to validate the results from the newly implemented BEM yaw-model. Significant improvements are obtained for the prediction of loads and induced velocities. Further relaxations of the main assumptions of the model are briefly presented and discussed.

  13. Comparative evaluation of ensemble Kalman filter, particle filter and variational techniques for river discharge forecast

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Gebremichael, M.; LEE, H.; Hopson, T. M.

    2012-12-01

    Hydrologic data assimilation techniques provide a means to improve river discharge forecasts through updating hydrologic model states and correcting the atmospheric forcing data via optimally combining model outputs with observations. The performance of the assimilation procedure, however, depends on the data assimilation techniques used and the amount of uncertainty in the data sets. To investigate the effects of these, we comparatively evaluate three data assimilation techniques, including the ensemble Kalman filter (EnKF), the particle filter (PF) and a variational (VAR) technique, which assimilate discharge and synthetic soil moisture data at various uncertainty levels into the Sacramento Soil Moisture Accounting (SAC-SMA) model used by the National Weather Service (NWS) for river forecasting in the United States. The study basin is the Greens Bayou watershed, with an area of 178 km² in eastern Texas. In the presentation, we summarize the results of the comparisons, and discuss the challenges of applying each technique for hydrologic applications.
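    A minimal sketch of the stochastic EnKF analysis step, of the kind used to update ensemble model states (for example SAC-SMA storages) with a discharge observation, is shown below; the state dimension, observation operator and noise levels are illustrative assumptions, not the study configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, H, obs, obs_var):
    """Stochastic EnKF analysis step.

    ensemble: (n_state, n_members); H: (n_obs, n_state) observation operator.
    """
    n_members = ensemble.shape[1]
    perturbed_obs = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n_members))
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    P_xy = X @ HXp.T / (n_members - 1)
    P_yy = HXp @ HXp.T / (n_members - 1) + obs_var * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    return ensemble + K @ (perturbed_obs - HX)

# Illustrative: 3 model storages, 20 members, one discharge-like observation.
ens = rng.normal([50.0, 120.0, 30.0], 10.0, size=(20, 3)).T
H = np.array([[0.2, 0.1, 0.5]])                    # hypothetical state-to-discharge map
print(enkf_update(ens, H, obs=np.array([40.0]), obs_var=4.0).mean(axis=1))
```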

  14. Renormalizing Sznajd model on complex networks taking into account the effects of growth mechanisms

    NASA Astrophysics Data System (ADS)

    González, M. C.; Sousa, A. O.; Herrmann, H. J.

    2006-01-01

    We present a renormalization approach to solve the Sznajd opinion formation model on complex networks. For the case of two opinions, we present an expression for the probability of reaching consensus for a given opinion as a function of the initial fraction of agents with that opinion. The calculations reproduce the sharp transition of the model on a fixed network, as well as the recently observed smooth function for the model when simulated on growing complex networks.
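    For reference, the standard one-dimensional Sznajd update rule (a pair of agreeing neighbors imposes its opinion on the adjacent sites) can be simulated directly as sketched below; the lattice size, initial fraction and number of updates are illustrative, and the renormalization scheme of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def sznajd_1d(n=200, p_up=0.6, steps=200_000):
    """1D Sznajd dynamics with periodic boundaries: if sites i and i+1 agree,
    they impose their opinion on sites i-1 and i+2. Returns final fraction of +1."""
    s = np.where(rng.random(n) < p_up, 1, -1)
    for _ in range(steps):
        i = rng.integers(n)
        j = (i + 1) % n
        if s[i] == s[j]:
            s[(i - 1) % n] = s[i]
            s[(j + 1) % n] = s[i]
    return (s == 1).mean()

print(sznajd_1d())
```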

  15. Medical savings accounts: microsimulation results from a model with adverse selection.

    PubMed

    Zabinski, D; Selden, T M; Moeller, J F; Banthin, J S

    1999-04-01

    This paper examines medical savings accounts combined with high-deductible catastrophic health plans (MSA/CHPs), exploring the possible consequences of making tax-preferred MSA/CHPs available to the entire employment-related health insurance market. The paper uses microsimulation methods to examine the equilibrium effects of MSA/CHPs on health care and non-health care expenditures, tax revenues, insurance premiums, and exposure to risk. If MSA/CHPs are offered alongside comprehensive plans, biased MSA/CHP enrollment can lead to premium spirals that drive out comprehensive coverage. Our estimates also raise concerns about equity, insofar as those who stand to lose the most tend to be poorer and in families with infant children.

  16. Beyond Socks, Signs, and Alarms: A Reflective Accountability Model for Fall Prevention.

    PubMed

    Hoke, Linda M; Guarracino, Dana

    2016-01-01

    Despite standard fall precautions, including nonskid socks, signs, alarms, and patient instructions, our 48-bed cardiac intermediate care unit (CICU) had a 41% increase in the rate of falls (from 2.2 to 3.1 per 1,000 patient days) and a 65% increase in the rate of falls with injury (from 0.75 to 1.24 per 1,000 patient days) between fiscal years (FY) 2012 and 2013. An evaluation of the falls data conducted by a cohort of four clinical nurses found that the majority of falls occurred when patients were unassisted by nurses, most often during toileting. Supported by the leadership team, the clinical nurses developed an accountability care program that required nurses to use reflective practice to evaluate each fall, including sending an e-mail to all staff members with both the nurse's and the patient's perspective on the fall, as well as the nurse's reflection on what could have been done to prevent the fall. Other program components were a postfall huddle and guidelines for assisting and remaining with fall risk patients for the duration of their toileting. Placing the accountability for falls with the nurse resulted in decreases in the unit's rates of falls and falls with injury of 55% (from 3.1 to 1.39 per 1,000 patient days) and 72% (from 1.24 to 0.35 per 1,000 patient days), respectively, between FY2013 and FY2014. Prompt call bell response (less than 60 seconds) also contributed to the goal of fall prevention. PMID:26710147

  17. Beyond Socks, Signs, and Alarms: A Reflective Accountability Model for Fall Prevention.

    PubMed

    Hoke, Linda M; Guarracino, Dana

    2016-01-01

    Despite standard fall precautions, including nonskid socks, signs, alarms, and patient instructions, our 48-bed cardiac intermediate care unit (CICU) had a 41% increase in the rate of falls (from 2.2 to 3.1 per 1,000 patient days) and a 65% increase in the rate of falls with injury (from 0.75 to 1.24 per 1,000 patient days) between fiscal years (FY) 2012 and 2013. An evaluation of the falls data conducted by a cohort of four clinical nurses found that the majority of falls occurred when patients were unassisted by nurses, most often during toileting. Supported by the leadership team, the clinical nurses developed an accountability care program that required nurses to use reflective practice to evaluate each fall, including sending an e-mail to all staff members with both the nurse's and the patient's perspective on the fall, as well as the nurse's reflection on what could have been done to prevent the fall. Other program components were a postfall huddle and guidelines for assisting and remaining with fall risk patients for the duration of their toileting. Placing the accountability for falls with the nurse resulted in decreases in the unit's rates of falls and falls with injury of 55% (from 3.1 to 1.39 per 1,000 patient days) and 72% (from 1.24 to 0.35 per 1,000 patient days), respectively, between FY2013 and FY2014. Prompt call bell response (less than 60 seconds) also contributed to the goal of fall prevention.

  18. Educational Accountability

    ERIC Educational Resources Information Center

    Pincoffs, Edmund L.

    1973-01-01

    Discusses educational accountability as the paradigm of performance contracting, presents some arguments for and against accountability, and discusses the goals of education and the responsibility of the teacher. (Author/PG)

  19. Accounting for Individual Differences in Bradley-Terry Models by Means of Recursive Partitioning

    ERIC Educational Resources Information Center

    Strobl, Carolin; Wickelmaier, Florian; Zeileis, Achim

    2011-01-01

    The preference scaling of a group of subjects may not be homogeneous, but different groups of subjects with certain characteristics may show different preference scalings, each of which can be derived from paired comparisons by means of the Bradley-Terry model. Usually, either different models are fit in predefined subsets of the sample or the…

  20. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    EPA Science Inventory

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to a higher amount of traffic-related activity on a global scale. ...

  1. Value-Added Models of Assessment: Implications for Motivation and Accountability

    ERIC Educational Resources Information Center

    Anderman, Eric M.; Anderman, Lynley H.; Yough, Michael S.; Gimbert, Belinda G.

    2010-01-01

    In this article, we examine the relations of value-added models of measuring academic achievement to student motivation. Using an achievement goal orientation theory perspective, we argue that value-added models, which focus on the progress of individual students over time, are more closely aligned with research on student motivation than are more…

  2. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, the habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. PMID:26339919

  3. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, the habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented.
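    The model-free ("habitual") half of the dual-system picture can be illustrated with a simple Q-learning rule on a two-choice probabilistic task, as sketched below; the reward probabilities, learning rate and softmax temperature are illustrative, and the paper's full dual-controller and conflict-monitoring model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-armed probabilistic task: action 0 rewarded 80% of the time, action 1 20%.
reward_prob = np.array([0.8, 0.2])
Q = np.zeros(2)                  # model-free ("habitual") action values
alpha, beta = 0.1, 3.0           # learning rate and softmax inverse temperature

for trial in range(500):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax choice rule
    a = rng.choice(2, p=p)
    r = float(rng.random() < reward_prob[a])
    Q[a] += alpha * (r - Q[a])                      # prediction-error update

print(Q)                         # should favour the 80%-rewarded action
```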

  4. A spatially filtered multilevel model to account for spatial dependency: application to self-rated health status in South Korea

    PubMed Central

    2014-01-01

    Background This study aims to suggest an approach that integrates multilevel models and eigenvector spatial filtering methods and apply it to a case study of self-rated health status in South Korea. In many previous health-related studies, multilevel models and single-level spatial regression are used separately. However, the two methods should be used in conjunction because the objectives of both approaches are important in health-related analyses. The multilevel model enables the simultaneous analysis of both individual and neighborhood factors influencing health outcomes. However, the results of conventional multilevel models are potentially misleading when spatial dependency across neighborhoods exists. Spatial dependency in health-related data indicates that health outcomes in nearby neighborhoods are more similar to each other than those in distant neighborhoods. Spatial regression models can address this problem by modeling spatial dependency. This study explores the possibility of integrating a multilevel model and eigenvector spatial filtering, an advanced spatial regression for addressing spatial dependency in datasets. Methods In this spatially filtered multilevel model, eigenvectors function as additional explanatory variables accounting for unexplained spatial dependency within the neighborhood-level error. The specification addresses the inability of conventional multilevel models to account for spatial dependency, and thereby, generates more robust outputs. Results The findings show that sex, employment status, monthly household income, and perceived levels of stress are significantly associated with self-rated health status. Residents living in neighborhoods with low deprivation and a high doctor-to-resident ratio tend to report higher health status. The spatially filtered multilevel model provides unbiased estimations and improves the explanatory power of the model compared to conventional multilevel models although there are no changes in the
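    The eigenvector spatial filtering step can be sketched as an eigen-decomposition of the doubly centred neighbourhood matrix; the leading (Moran) eigenvectors would then enter the multilevel model as additional neighbourhood-level covariates that absorb unexplained spatial dependency. The toy adjacency structure below is an illustrative assumption, not the South Korean neighbourhood data.

```python
import numpy as np

# Toy neighbourhood (adjacency) matrix for five areal units arranged in a row.
C = np.zeros((5, 5))
for i in range(4):
    C[i, i + 1] = C[i + 1, i] = 1.0

n = C.shape[0]
M = np.eye(n) - np.ones((n, n)) / n          # centring matrix
eigval, eigvec = np.linalg.eigh(M @ C @ M)   # Moran eigenvectors

# Eigenvectors with large positive eigenvalues capture positive spatial
# autocorrelation; they can be added to the regression as spatial filters.
order = np.argsort(eigval)[::-1]
spatial_filters = eigvec[:, order[:2]]
print(spatial_filters)
```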

  5. Modeling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    NASA Astrophysics Data System (ADS)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2014-01-01

    Coral reefs are diverse ecosystems threatened by rising CO2 levels that are driving the observed increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking the rates of growth, recovery and calcification to the rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, resource allocation trade-offs, and adaption to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The different characteristics of this model are also assessed against independent data to show that the model captures the observed response of corals. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles for understanding the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can reproduce much of the observed response of corals to changes in temperature and ocean acidification.

  6. A Critical Examination of the Models Proposed to Account for Baryon-Antibaryon Segregation Following the Quark-Hadron Transition

    NASA Astrophysics Data System (ADS)

    Garfinkle, Moishe

    2015-04-01

    The major concern of the Standard Cosmological Model (SCM) is to account for the continuing existence of the universe in spite of the Standard Particle Model (SPM). According to the SPM, below the quark-hadron temperature (~150 +/- 50 MeV) the rate of baryon-antibaryon pair creation from γ radiation is in equilibrium with the rate of pair annihilation. At freeze-out (~20 +/- 10 MeV) the rate of pair creation ceases. Henceforth only annihilation occurs below this temperature, resulting in a terminal pair ratio B+/γ = B-/γ ~ 10⁻¹⁸, insufficient to account for the present universe, which would require a pair ratio of at least B+/γ = B-/γ ~ 10⁻¹⁰. The present universe could not exist according to the SPM unless a mechanism was devised to segregate baryons from antibaryons before freeze-out. The SPM can be tweaked to accommodate the first two conditions, but all of the mechanisms proposed over the past sixty years for the third condition failed. All baryon-number excursions devised were found to be reversible. The major concern of the SCM is to account for the continuing existence of the universe in spite of the SPM. The present universe could not exist according to the SPM unless a mechanism was devised to segregate baryons from antibaryons before freeze-out. It is the examination of these possible mechanisms that is the subject of this work.

  7. A macro traffic flow model accounting for real-time traffic state

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Chen, Liang; Wu, Yong-Hong; Caccetta, Lou

    2015-11-01

    In this paper, we propose a traffic flow model to study the effects of the real-time traffic state on traffic flow. The numerical results show that the proposed model can describe oscillations in traffic and stop-and-go traffic, where the speed-density relationship is qualitatively consistent with the empirical data from the Weizikeng segment of the Badaling freeway in Beijing, which means that the proposed model can qualitatively reproduce some complex traffic phenomena associated with the real-time traffic state.

  8. Impact of radar-rainfall error structure on estimated flood magnitude across scales: An investigation based on a parsimonious distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Cunha, Luciana K.; Mandapaka, Pradeep V.; Krajewski, Witold F.; Mantilla, Ricardo; Bradley, Allen A.

    2012-10-01

    The goal of this study is to diagnose the manner in which radar-rainfall input affects peak flow simulation uncertainties across scales. We used the distributed physically based hydrological model CUENCAS with parameters that are estimated from available data and without fitting the model output to discharge observations. We evaluated the model's performance using (1) observed streamflow at the outlet of nested basins ranging in scale from 20 to 16,000 km² and (2) streamflow simulated by a well-established and extensively calibrated hydrological model used by the US National Weather Service (SAC-SMA). To mimic radar-rainfall uncertainty, we applied a recently proposed statistical model of radar-rainfall error to produce rainfall ensembles based on different expected error scenarios. We used the generated ensembles as input for the hydrological model and summarized the effects on flow sensitivities using a relative measure of the ensemble peak flow dispersion for every link in the river network. Results show that peak flow simulation uncertainty is strongly dependent on the catchment scale. Uncertainty decreases with increasing catchment drainage area due to the aggregation effect of the river network that filters out small-scale uncertainties. The rate at which uncertainty changes depends on the error structure of the input rainfall fields. We found that random errors that are uncorrelated in space produce high peak flow variability for small-scale basins, but uncertainties decrease rapidly as scale increases. In contrast, spatially correlated errors produce less scatter in peak flows for small scales, but uncertainty decreases slowly with increasing catchment size. This study demonstrates the large impact of scale on uncertainty in hydrological simulations and highlights the need for a more robust characterization of the uncertainty structure in radar-rainfall. Our results are diagnostic and illustrate the benefits of using the calibration-free, multiscale

  9. A vortex model for Richtmyer-Meshkov instability accounting for finite Atwood number

    NASA Astrophysics Data System (ADS)

    Likhachev, Oleg A.; Jacobs, Jeffrey W.

    2005-03-01

    The vortex model developed by Jacobs and Sheeley ["Experimental study of incompressible Richtmyer-Meshkov instability," Phys. Fluids 8, 405 (1996)] is essentially a solution to the governing equations for the case of a uniform density fluid. Thus, this model, strictly speaking, only applies to the case of vanishingly small Atwood number. A modification to this model for small to finite Atwood number is proposed in which the vortex row utilized is perturbed such that the vortex spacing is smaller across the spikes and larger across the bubbles, a fact readily observed in experimental images. It is shown that this modification more effectively captures the behavior of experimental amplitude measurements, especially when compared with separate bubble and spike data. In addition, it is shown that this modification will cause the amplitude to deviate from the logarithmic result given by the heuristic models at late time.

  10. A thermomechanical model accounting for the behavior of shape memory alloys in finite deformations

    NASA Astrophysics Data System (ADS)

    Haller, Laviniu; Nedjar, Boumedienne; Moumni, Ziad; Vedinaş, Ioan; Trană, Eugen

    2016-07-01

    Shape memory alloys (SMAs) exhibit an interesting behavior: they can undergo large strains and then recover their undeformed shape by heating. In this context, one of the aspects that has challenged many researchers is the development of a mathematical model to predict the behavior of a known SMA under real-life conditions, that is, at finite strain. This paper is aimed at working out a finite strain mathematical model for a Ni-Ti SMA, under superelastic experimental conditions and under uniaxial mechanical loading, based on the Zaki-Moumni 3D mathematical model developed under the small perturbations assumption. A comparison between experimental findings and calculated results is also presented. The proposed finite strain mathematical model shows good agreement with experimental data.

  11. Taking into Account the Ion-induced Dipole Interaction in the Nonbonded Model of Ions

    PubMed Central

    Li, Pengfei; Merz, Kenneth M.

    2013-01-01

    Metal ions exist in almost half of the proteins in the protein databank and they serve as structural, electron-transfer and catalytic elements in the metabolic processes of organisms. Molecular Dynamics (MD) simulation is a powerful tool that provides information about biomolecular systems at the atomic level. Coupled with the growth in computing power, algorithms like the Particle Mesh Ewald (PME) method have become the accepted standard when dealing with long-range interactions in MD simulations. The nonbonded model of metal ions consists of an electrostatic plus 12-6 Lennard-Jones (LJ) potential and is used largely because of its speed relative to more accurate models. In previous work we found that ideal parameters do not exist that reproduce several experimental properties for M(II) ions simultaneously using the nonbonded model coupled with the PME method, due to the underestimation of metal ion-ligand interactions. Via a consideration of the nature of the nonbonded model, we proposed that the observed error largely arises from overlooking charge-induced dipole interactions. The electrostatic plus 12-6 LJ potential model works reasonably well for neutral systems but does struggle with more highly charged systems. In the present work we designed and parameterized a new nonbonded model for metal ions by adding a 1/r⁴ term to the 12-6 model. We call it the 12-6-4 LJ-type nonbonded model due to its mathematical construction. Parameters were determined for 16 divalent (+2) metal ions for the TIP3P, SPC/E and TIP4PEW water models. The final parameters reproduce the experimental hydration free energies (HFE), ion-oxygen distances (IOD) in the first solvation shell and coordination numbers (CN) accurately for the metal ions investigated. Preliminary tests on MgCl2 at different concentrations in aqueous solution and Mg2+-nucleic acid systems show reasonable results, suggesting that the present parameters can work in mixed systems. The 12-6-4 LJ-type nonbonded model is readily
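    The functional form of the 12-6-4 LJ-type pair potential described above is simple to write down: the usual 12-6 Lennard-Jones terms plus an added attractive C4/r⁴ term standing in for the ion-induced dipole interaction. The coefficients in the sketch below are illustrative, not the published parameters.

```python
def lj_12_6_4(r, c12, c6, c4):
    """12-6-4 LJ-type pair potential: U(r) = C12/r**12 - C6/r**6 - C4/r**4.

    The extra -C4/r**4 term approximates the charge-induced dipole interaction
    missing from the plain 12-6 nonbonded model. Coefficients are illustrative.
    """
    return c12 / r ** 12 - c6 / r ** 6 - c4 / r ** 4

# Illustrative coefficients only (not fitted values for any specific ion).
for r in (2.0, 2.5, 3.0, 4.0):
    print(r, lj_12_6_4(r, c12=1.0e5, c6=6.0e2, c4=5.0e1))
```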

  12. Behavioral Health and Health Care Reform Models: Patient-Centered Medical Home, Health Home, and Accountable Care Organization

    PubMed Central

    Bao, Yuhua; Casalino, Lawrence P.; Pincus, Harold Alan

    2012-01-01

    Discussions of health care delivery and payment reforms have largely been silent about how behavioral health could be incorporated into reform initiatives. This paper draws attention to four patient populations defined by the severity of their behavioral health conditions and insurance status. It discusses the potentials and limitations of three prominent models promoted by the Affordable Care Act to serve populations with behavioral health conditions: the Patient Centered Medical Home, the Health Home initiative within Medicaid, and the Accountable Care Organization. To incorporate behavioral health into health reform, policymakers and practitioners may consider embedding in the reform efforts explicit tools – accountability measures and payment designs – to improve access to and quality of care for patients with behavioral health needs. PMID:23188486

  13. Modelling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    NASA Astrophysics Data System (ADS)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2015-05-01

    Coral reefs are diverse ecosystems that are threatened by rising CO2 levels through increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking rates of growth, recovery and calcification to rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, correlated up- and down-regulation of traits that are consistent with resource allocation trade-offs, and adaption to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The performance of the model is assessed against independent data to demonstrate how it can capture the observed response of corals to stress. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles to help understand the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can give insights into how corals respond to changes in temperature and ocean acidification.
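
    Two of the principles named above, the Arrhenius equation and thermal specialisation, can be illustrated with a short sketch. The Python below scales a reference rate with temperature via Arrhenius and applies a Gaussian penalty for deviation from a thermal optimum; the activation energy, optimum and width are illustrative assumptions, not the calibrated values of the coral model.

    ```python
    import math

    R_GAS = 8.314  # J mol^-1 K^-1

    def arrhenius(rate_ref, e_a, t, t_ref=300.15):
        """Arrhenius scaling of a reference rate from t_ref to temperature t (K)."""
        return rate_ref * math.exp(-e_a / R_GAS * (1.0 / t - 1.0 / t_ref))

    def thermal_specialisation(t, t_opt, width):
        """Illustrative Gaussian penalty for departures from a thermal optimum."""
        return math.exp(-((t - t_opt) / width) ** 2)

    # Hypothetical relative calcification of a coral adapted to ~27 degC
    for t_c in (25.0, 27.0, 29.0, 31.0):
        t_k = t_c + 273.15
        g = arrhenius(1.0, 6.0e4, t_k) * thermal_specialisation(t_k, 300.15, 3.0)
        print(f"{t_c:.0f} degC -> relative rate {g:.2f}")
    ```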

  14. Analysis of homogeneous/non-homogeneous nanofluid models accounting for nanofluid-surface interactions

    NASA Astrophysics Data System (ADS)

    Ahmad, R.

    2016-07-01

    This article reports an unbiased analysis for water-based rod-shaped alumina nanoparticles, considering both the homogeneous and non-homogeneous nanofluid models over the coupled nanofluid-surface interface. The mechanics of the surface are found for both the homogeneous and non-homogeneous models, which were ignored in previous studies. The viscosity and thermal conductivity data are taken from the international nanofluid property benchmark exercise. All simulations are performed using experimentally verified results. By considering the homogeneous and non-homogeneous models, the precise movement of the alumina nanoparticles over the surface has been observed by solving the corresponding system of differential equations. For the non-homogeneous model, a uniform temperature and nanofluid volume fraction are assumed at the surface, and the flux of the alumina nanoparticles is taken as zero. The assumption of zero nanoparticle flux at the surface makes the non-homogeneous model physically more realistic. The differences between all profiles for the homogeneous and non-homogeneous models are insignificant, which is due to small deviations in the values of the Brownian motion and thermophoresis parameters.

  15. A retinal circuit model accounting for wide-field amacrine cells

    PubMed Central

    Sağlam, Murat; Murayama, Nobuki

    2008-01-01

    In previous experimental studies of visual processing in vertebrates, higher-order visual functions such as object segregation from the background were found even at the retinal stage. Previously, “linear–nonlinear” (LN) cascade models have been applied to the retinal circuit and have succeeded in describing the input-output dynamics of certain parts of the circuit, e.g., the receptive field of the outer retinal neurons. Recently, some abstract models composed of LN cascades as circuit elements have been able to explain higher-order retinal functions. However, in such models, most classes of retinal neurons are omitted, and thus how those neurons contribute to the visual computations cannot be explored. Here, we present a spatio-temporal computational model of the vertebrate retina, based on the response function for each class of retinal neurons and on the anatomical inter-cellular connections. This model was capable of not only reproducing the spatio-temporal filtering properties of the outer retinal neurons, but also realizing the object segregation mechanism in the inner retinal circuit involving the “wide-field” amacrine cells. Moreover, the first-order Wiener kernels calculated for the neurons in our model showed a reasonable fit to the kernels previously measured in real retinal neurons in situ. PMID:19003460

  16. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    PubMed

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-01

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined--US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel--95% of the results occurred within ±20 g CO2e MJ^-1 of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
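
    The propagation of parametric uncertainty described above can be sketched generically with Monte Carlo sampling. In the Python below, a toy emissions-intensity function stands in for the paired economic/emissions modeling system, and two assumed parameter distributions (a crop-yield response and a land-conversion emission factor) are sampled to produce a mean, a coefficient of variation and a 95% interval; every number and the toy function itself are illustrative assumptions, not the study's models.

    ```python
    import random
    import statistics

    def iluc_intensity(yield_response, conversion_ef):
        """Toy stand-in for the paired modeling system: ILUC emissions intensity
        (g CO2e/MJ) as a function of two uncertain parameters (illustrative)."""
        land_converted = 1.0 / max(yield_response, 0.05)
        return land_converted * conversion_ef

    random.seed(1)
    samples = sorted(
        iluc_intensity(random.gauss(0.25, 0.05), random.gauss(8.0, 2.0))
        for _ in range(10_000)
    )
    mean = statistics.mean(samples)
    cv = statistics.stdev(samples) / mean
    lo, hi = samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))]
    print(f"mean = {mean:.1f} g CO2e/MJ, CV = {cv:.0%}, 95% interval = ({lo:.1f}, {hi:.1f})")
    ```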

  17. Accounting for nitrogen fixation in simple models of lake nitrogen loading/export.

    PubMed

    Ruan, Xiaodan; Schellenger, Frank; Hellweger, Ferdi L

    2014-05-20

    Coastal eutrophication, an important global environmental problem, is primarily caused by excess nitrogen and management efforts consequently focus on lowering watershed N export (e.g., by reducing fertilizer use). Simple quantitative models are needed to evaluate alternative scenarios at the watershed scale. Existing models generally assume that, for a specific lake/reservoir, a constant fraction of N loading is exported downstream. However, N fixation by cyanobacteria may increase when the N loading is reduced, which may change the (effective) fraction of N exported. Here we present a model that incorporates this process. The model (Fixation and Export of Nitrogen from Lakes, FENL) is based on a steady-state mass balance with loading, output, loss/retention, and N fixation, where the amount fixed is a function of the N/P ratio of the loading (i.e., when N/P is less than a threshold value, N is fixed). Three approaches are used to parametrize and evaluate the model, including microcosm lab experiments, lake field observations/budgets and lake ecosystem model applications. Our results suggest that N export will not be reduced proportionally with N loading, which needs to be considered when evaluating management scenarios.
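
    The core mass-balance idea of FENL, that fixation compensates when the loading N/P ratio drops below a threshold, can be sketched in a few lines. The threshold, the retention fraction and the assumption that fixation tops the loading up to the threshold are illustrative choices for demonstration, not the published parameterization.

    ```python
    def n_export(n_load, p_load, retention=0.4, np_threshold=16.0):
        """Hedged sketch of a FENL-style steady-state balance: if the loading
        N/P ratio falls below a threshold, cyanobacteria fix enough N to bring
        the effective loading up to the threshold; a fixed fraction of the total
        N is then retained/lost in the lake and the remainder is exported.
        The functional form and parameter values are illustrative assumptions."""
        np_ratio = n_load / p_load
        n_fixed = max(0.0, np_threshold * p_load - n_load) if np_ratio < np_threshold else 0.0
        return (n_load + n_fixed) * (1.0 - retention)

    # Halving the N loading does not halve the export if fixation compensates
    print(n_export(n_load=1000.0, p_load=80.0))  # N/P = 12.5 < 16, fixation active
    print(n_export(n_load=500.0,  p_load=80.0))  # fixation compensates further
    ```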

  18. Accounting for Misclassified Outcomes in Binary Regression Models Using Multiple Imputation With Internal Validation Data

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Troester, Melissa A.; Richardson, David B.

    2013-01-01

    Outcome misclassification is widespread in epidemiology, but methods to account for it are rarely used. We describe the use of multiple imputation to reduce bias when validation data are available for a subgroup of study participants. This approach is illustrated using data from 308 participants in the multicenter Herpetic Eye Disease Study between 1992 and 1998 (48% female; 85% white; median age, 49 years). The odds ratio comparing the acyclovir group with the placebo group on the gold-standard outcome (physician-diagnosed herpes simplex virus recurrence) was 0.62 (95% confidence interval (CI): 0.35, 1.09). We masked ourselves to physician diagnosis except for a 30% validation subgroup used to compare methods. Multiple imputation (odds ratio (OR) = 0.60; 95% CI: 0.24, 1.51) was compared with naive analysis using self-reported outcomes (OR = 0.90; 95% CI: 0.47, 1.73), analysis restricted to the validation subgroup (OR = 0.57; 95% CI: 0.20, 1.59), and direct maximum likelihood (OR = 0.62; 95% CI: 0.26, 1.53). In simulations, multiple imputation and direct maximum likelihood had greater statistical power than did analysis restricted to the validation subgroup, yet all 3 provided unbiased estimates of the odds ratio. The multiple-imputation approach was extended to estimate risk ratios using log-binomial regression. Multiple imputation has advantages regarding flexibility and ease of implementation for epidemiologists familiar with missing data methods. PMID:24627573

  19. Retrieval-Based Model Accounts for Striking Profile of Episodic Memory and Generalization

    PubMed Central

    Banino, Andrea; Koster, Raphael; Hassabis, Demis; Kumaran, Dharshan

    2016-01-01

    A fundamental theoretical tension exists between the role of the hippocampus in generalizing across a set of related episodes, and in supporting memory for individual episodes. Whilst the former requires an appreciation of the commonalities across episodes, the latter emphasizes the representation of the specifics of individual experiences. We developed a novel version of the hippocampal-dependent paired associate inference (PAI) paradigm, which afforded us the unique opportunity to investigate the relationship between episodic memory and generalization in parallel. Across four experiments, we provide surprising evidence that the overlap between object pairs in the PAI paradigm results in a marked loss of episodic memory. Critically, however, we demonstrate that superior generalization ability was associated with stronger episodic memory. Through computational simulations we show that this striking profile of behavioral findings is best accounted for by a mechanism by which generalization occurs at the point of retrieval, through the recombination of related episodes on the fly. Taken together, our study offers new insights into the intricate relationship between episodic memory and generalization, and constrains theories of the mechanisms by which the hippocampus supports generalization. PMID:27510579

  20. Educational Quality Is Measured by Individual Student Achievement Over Time. Mt. San Antonio College AB 1725 Model Accountability System Pilot Proposal.

    ERIC Educational Resources Information Center

    Mount San Antonio Coll., Walnut, CA.

    In December 1990, a project was begun at Mt. San Antonio College (MSAC) in Walnut, California, to develop a model accountability system based on the belief that educational quality is measured by individual achievement over time. This proposal for the Accountability Model (AM) presents information on project methodology and organization in four…

  1. A dissolution model that accounts for coverage of mineral surfaces by precipitation in core floods

    NASA Astrophysics Data System (ADS)

    Pedersen, Janne; Jettestuen, Espen; Madland, Merete V.; Hildebrand-Habel, Tania; Korsnes, Reidar I.; Vinningland, Jan Ludvig; Hiorth, Aksel

    2016-01-01

    In this paper, we propose a model for evolution of reactive surface area of minerals due to surface coverage by precipitating minerals. The model is used to interpret results from an experiment where a chalk core was flooded with MgCl2 for 1072 days, giving rise to calcite dissolution and magnesite precipitation. The model successfully describes both the long-term behavior of the measured effluent concentrations and the more or less homogeneous distribution of magnesite found in the core after 1072 days. The model also predicts that precipitating magnesite minerals form as larger crystals or aggregates of smaller size crystals, and not as thin flakes or as a monomolecular layer. Using rate constants obtained from literature gave numerical effluent concentrations that diverged from observed values only after a few days of flooding. To match the simulations to the experimental data after approximately 1 year of flooding, a rate constant that is four orders of magnitude lower than reported by powder experiments had to be used. We argue that a static rate constant is not sufficient to describe a chalk core flooding experiment lasting for nearly 3 years. The model is a necessary extension of standard rate equations in order to describe long term core flooding experiments where there is a large degree of textural alteration.
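
    The surface-coverage mechanism described above can be sketched as a dissolution rate proportional to a reactive area that shrinks as precipitate accumulates. The Python below spreads the precipitated magnesite volume over crystals of an assumed thickness to estimate the covered area; the molar volume, crystal thickness and rate constant are illustrative assumptions, not the paper's calibration.

    ```python
    def reactive_area(a0, precip_moles, molar_volume=28.0e-6, crystal_thickness=1.0e-6):
        """Hedged sketch of surface coverage: the reactive (dissolving) calcite
        area shrinks as precipitating magnesite covers it. Covered area is the
        precipitated volume spread over crystals of an assumed thickness (m);
        all numbers are illustrative, not the paper's calibration."""
        covered = precip_moles * molar_volume / crystal_thickness  # m^2
        return max(a0 - covered, 0.0)

    def dissolution_rate(k, a0, precip_moles):
        """Rate = k * A_reactive, so dissolution slows as coverage grows."""
        return k * reactive_area(a0, precip_moles)

    for n in (0.0, 0.5, 1.0):  # mol of magnesite precipitated (illustrative)
        print(n, dissolution_rate(k=1.0e-6, a0=50.0, precip_moles=n))
    ```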

  2. Accounting for anatomical noise in SPECT with a visual-search human-model observer

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; King, M. A.; Smyczynski, M. S.

    2011-03-01

    Reliable human-model observers for clinically realistic detection studies are of considerable interest in medical imaging research, but current model observers require frequent revalidation with human data. A visual-search (VS) observer framework may improve reliability by better simulating realistic detection-localization tasks. Under this framework, model observers execute a holistic search to identify tumor-like candidates and then perform careful analysis of these candidates. With emission tomography, anatomical noise in the form of elevated uptake in neighboring tissue often complicates the task. Some scanning model observers simulate the human ability to read around such noise by presubtracting the mean normal background from the test image, but this background-known-exactly (BKE) assumption has several drawbacks. The extent to which the VS observer can overcome these drawbacks was investigated by comparing it against humans and a scanning observer for detection of solitary pulmonary nodules in a simulated SPECT lung study. Our results indicate that the VS observer offers a robust alternative to the scanning observer for modeling humans.

  3. Accounting for anthropogenic actions in modeling of stream flow at the regional scale

    NASA Astrophysics Data System (ADS)

    David, C. H.; Famiglietti, J. S.

    2013-12-01

    The modeling of the horizontal movement of water from land to coasts at scales ranging from 10^5 km^2 to 10^6 km^2 has benefited from extensive research within the past two decades. In parallel, community technology for gathering/sharing surface water observations and datasets for describing the geography of terrestrial water bodies have recently had groundbreaking advancements. Yet, the fields of computational hydrology and hydroinformatics have barely started to work hand-in-hand, and much research remains to be performed before we can better understand the anthropogenic impact on surface water through combined observations and models. Here, we build on our existing river modeling approach that leverages community state-of-the-art tools such as atmospheric data from the second phase of the North American Land Data Assimilation System (NLDAS2), river networks from the enhanced National Hydrography Dataset (NHDPlus), and observations from the U.S. Geological Survey National Water Information System (NWIS) obtained through CUAHSI webservices. Modifications are made to our integrated observational/modeling system to include treatment for anthropogenic actions such as dams, pumping and divergences in river networks. Initial results of a study focusing on the entire State of California suggest that availability of data describing human alterations on natural river networks associated with proper representation of such actions in our models could help advance hydrology further. Snapshot from an animation of flow in California river networks. The full animation is available at: http://www.ucchm.org/david/rapid.htm.

  4. Modelling the redistribution of hospital supply to achieve equity taking account of patient's behaviour.

    PubMed

    Oliveira, Mónica Duarte; Bevan, Gwyn

    2006-02-01

    Policies that seek to achieve geographic equity in countries with a National Health Service (NHS) require information on how to change the distribution of supply to achieve greater equity in access and utilisation. Previous methods for analysing the impact of hospital changes have relied on crude assumptions about patients' behaviour in using hospitals. The approach developed in this study is a multi-modelling one, based on two mathematical programming location-allocation models that redistribute hospital supply using different objective functions and assumptions about the utilisation behaviour of patients. These models show how different policy objectives seeking equity of geographic access or utilisation produce different results and imply trade-offs in terms of reduction in total utilisation.

  5. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    SciTech Connect

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axi-symmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  6. Does Don Fisher's high-pressure manifold model account for phloem transport and resource partitioning?

    PubMed Central

    Patrick, John W.

    2013-01-01

    The pressure flow model of phloem transport envisaged by Münch (1930) has gained wide acceptance. Recently, however, the model has been questioned on structural and physiological grounds. For instance, sub-structures of sieve elements may reduce their hydraulic conductances to levels that impede flow rates of phloem sap and observed magnitudes of pressure gradients to drive flow along sieve tubes could be inadequate in tall trees. A variant of the Münch pressure flow model, the high-pressure manifold model of phloem transport introduced by Donald Fisher may serve to reconcile at least some of these questions. To this end, key predicted features of the high-pressure manifold model of phloem transport are evaluated against current knowledge of the physiology of phloem transport. These features include: (1) An absence of significant gradients in axial hydrostatic pressure in sieve elements from collection to release phloem accompanied by transport properties of sieve elements that underpin this outcome; (2) Symplasmic pathways of phloem unloading into sink organs impose a major constraint over bulk flow rates of resources translocated through the source-path-sink system; (3) Hydraulic conductances of plasmodesmata, linking sieve elements with surrounding phloem parenchyma cells, are sufficient to support and also regulate bulk flow rates exiting from sieve elements of release phloem. The review identifies strong circumstantial evidence that resource transport through the source-path-sink system is consistent with the high-pressure manifold model of phloem transport. The analysis then moves to exploring mechanisms that may link demand for resources, by cells of meristematic and expansion/storage sinks, with plasmodesmal conductances of release phloem. The review concludes with a brief discussion of how these mechanisms may offer novel opportunities to enhance crop biomass yields. PMID:23802003

  7. Improved signal model for confocal sensors accounting for object depending artifacts.

    PubMed

    Mauch, Florian; Lyda, Wolfram; Gronle, Marc; Osten, Wolfgang

    2012-08-27

    The conventional signal model of confocal sensors is well established and has proven to be exceptionally robust especially when measuring rough surfaces. Its physical derivation however is explicitly based on plane surfaces or point like objects, respectively. Here we show experimental results of a confocal point sensor measurement of a surface standard. The results illustrate the rise of severe artifacts when measuring curved surfaces. On this basis, we present a systematic extension of the conventional signal model that is proven to be capable of qualitatively explaining these artifacts.

  8. Magnetic models of crystalline terrane: accounting for the effect of topography.

    USGS Publications Warehouse

    Blakely, R.J.; Grauch, V.J.S.

    1983-01-01

    Facilitates geologic interpretation of an aeromagnetic survey of the Oregon Cascade Range by calculating the magnetic field caused by a 3-D topographic model. Maps of the calculated field are compared with observed aeromagnetic data both visually and with a numerical technique that produces a contour map of correlation coefficients for the model. These comparisons allow quick recognition of anomalies caused by normally or reversely magnetized topographic features and, more importantly, identification of anomalies caused by geologic features not obviously caused by the topography. -from Authors

  9. When do sexual partnerships need to be accounted for in transmission models of human papillomavirus?

    PubMed

    Muller, Heidi; Bauch, Chris

    2010-02-01

    Human papillomavirus (HPV) is often transmitted through sexual partnerships. However, many previous HPV transmission models ignore the existence of partnerships by implicitly assuming that each new sexual contact is made with a different person. Here, we develop a simplified pair model--based on the example of HPV--that explicitly includes sexual partnership formation and dissolution. We show that not including partnerships can potentially result in biased projections of HPV prevalence. However, if transmission rates are calibrated to match empirical pre-vaccine HPV prevalence, the projected prevalence under a vaccination program does not vary significantly, regardless of whether partnerships are included. PMID:20616995

  10. Codon-substitution models to detect adaptive evolution that account for heterogeneous selective pressures among site classes.

    PubMed

    Yang, Ziheng; Swanson, Willie J

    2002-01-01

    The nonsynonymous to synonymous substitution rate ratio (omega = d(N)/d(S)) provides a sensitive measure of selective pressure at the protein level, with omega values <1, =1, and >1 indicating purifying selection, neutral evolution, and diversifying selection, respectively. Maximum likelihood models of codon substitution developed recently account for variable selective pressures among amino acid sites by employing a statistical distribution for the omega ratio among sites. Those models, called random-sites models, are suitable when we do not know a priori which sites are under what kind of selective pressure. Sometimes prior information (such as the tertiary structure of the protein) might be available to partition sites in the protein into different classes, which are expected to be under different selective pressures. It is then sensible to use such information in the model. In this paper, we implement maximum likelihood models for prepartitioned data sets, which account for the heterogeneity among site partitions by using different omega parameters for the partitions. The models, referred to as fixed-sites models, are also useful for combined analysis of multiple genes from the same set of species. We apply the models to data sets of the major histocompatibility complex (MHC) class I alleles from human populations and of the abalone sperm lysin genes. Structural information is used to partition sites in MHC into two classes: those in the antigen recognition site (ARS) and those outside. Positive selection is detected in the ARS by the fixed-sites models. Similarly, sites in lysin are classified into the buried and solvent-exposed classes according to the tertiary structure, and positive selection was detected at the solvent-exposed sites. The random-sites models identified a number of sites under positive selection in each data set, confirming and elaborating the results of the fixed-sites models. The analysis demonstrates the utility of the fixed-sites models
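
    The interpretation of omega used above, and the fixed-sites idea of estimating a separate omega for each predefined partition, can be illustrated with a short sketch. The dN and dS values below are hypothetical, not estimates from the MHC or lysin data.

    ```python
    def classify_selection(d_n, d_s):
        """Interpret omega = dN/dS: <1 purifying, =1 neutral, >1 diversifying."""
        omega = d_n / d_s
        if omega > 1.0:
            return omega, "diversifying (positive) selection"
        if omega < 1.0:
            return omega, "purifying selection"
        return omega, "neutral evolution"

    # Hypothetical fixed-sites partitions of an MHC-like alignment
    partitions = {
        "antigen recognition site": (0.42, 0.15),
        "sites outside the ARS": (0.08, 0.21),
    }
    for name, (dn, ds) in partitions.items():
        omega, verdict = classify_selection(dn, ds)
        print(f"{name}: omega = {omega:.2f} -> {verdict}")
    ```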

  11. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    PubMed

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6  μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction. PMID:26154937

  12. An Exemplar-Model Account of Feature Inference from Uncertain Categorizations

    ERIC Educational Resources Information Center

    Nosofsky, Robert M.

    2015-01-01

    In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…

  13. Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper

    ERIC Educational Resources Information Center

    Hock, Heinrich; Isenberg, Eric

    2012-01-01

    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…

  14. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  15. Accounting for Model Uncertainty in the Prediction of University Graduation Rates

    ERIC Educational Resources Information Center

    Goenner, Cullen F.; Snaith, Sean M.

    2004-01-01

    Empirical analysis requires researchers to choose which variables to use as controls in their models. Theory should dictate this choice, yet often in social science there are several theories that may suggest the inclusion or exclusion of certain variables as controls. The result of this is that researchers may use different variables in their…

  16. Accountability in Training Transfer: Adapting Schlenker's Model of Responsibility to a Persistent but Solvable Problem

    ERIC Educational Resources Information Center

    Burke, Lisa A.; Saks, Alan M.

    2009-01-01

    Decades have been spent studying training transfer in organizational environments in recognition of a transfer problem in organizations. Theoretical models of various antecedents, empirical studies of transfer interventions, and studies of best practices have all been advanced to address this continued problem. Yet a solution may not be so…

  17. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  18. Delay differential models in multimode laser dynamics: taking chromatic dispersion into account

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. G.; Huyet, G.; Pimenov, A.

    2016-04-01

    A set of differential equations with distributed delay is derived for modeling of multimode ring lasers with intracavity chromatic dispersion. Analytical stability analysis of continuous wave regimes is performed and it is demonstrated that sufficiently strong anomalous dispersion can destabilize these regimes.

  19. Redesigning Urban Districts in the USA: Mayoral Accountability and the Diverse Provider Model

    ERIC Educational Resources Information Center

    Wong, Kenneth K.

    2011-01-01

    In response to public pressure, urban districts in the USA have initiated reforms that aim at redrawing the boundaries between the school system and other major local institutions. More specifically, this article focuses on two emerging reform strategies. We will examine an emerging model of governance that enables big-city mayors to establish…

  20. Assessing and accounting for time heterogeneity in stochastic actor oriented models.

    PubMed

    Lospinoso, Joshua A; Schweinberger, Michael; Snijders, Tom A B; Ripley, Ruth M

    2011-07-01

    This paper explores time heterogeneity in stochastic actor oriented models (SAOM) proposed by Snijders (Sociological Methodology. Blackwell, Boston, pp 361-395, 2001) which are meant to study the evolution of networks. SAOMs model social networks as directed graphs with nodes representing people, organizations, etc., and dichotomous relations representing underlying relationships of friendship, advice, etc. We illustrate several reasons why heterogeneity should be statistically tested and provide a fast, convenient method for assessment and model correction. SAOMs provide a flexible framework for network dynamics which allow a researcher to test selection, influence, behavioral, and structural properties in network data over time. We show how the forward-selecting, score type test proposed by Schweinberger (Chapter 4: Statistical modeling of network panel data: goodness of fit. PhD thesis, University of Groningen 2007) can be employed to quickly assess heterogeneity at almost no additional computational cost. One step estimates are used to assess the magnitude of the heterogeneity. Simulation studies are conducted to support the validity of this approach. The ASSIST dataset (Campbell et al. Lancet 371(9624):1595-1602, 2008) is reanalyzed with the score type test, one step estimators, and a full estimation for illustration. These tools are implemented in the RSiena package, and a brief walkthrough is provided. PMID:22003370

  1. Using state-and-transition modeling to account for imperfect detection in invasive species management

    USGS Publications Warehouse

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.

  2. A Semi-Empirical Model for Tilted-Gun Planar Magnetron Sputtering Accounting for Chimney Shadowing

    NASA Astrophysics Data System (ADS)

    Bunn, J. K.; Metting, C. J.; Hattrick-Simpers, J.

    2015-01-01

    Integrated computational materials engineering (ICME) approaches to composition and thickness profiles of sputtered thin-film samples are the key to expediting materials exploration for these materials. Here, an ICME-based semi-empirical approach to modeling the thickness of thin-film samples deposited via magnetron sputtering is developed. Using Yamamura's dimensionless differential angular sputtering yield and a measured deposition rate at a point in space for a single experimental condition, the model predicts the deposition profile from planar DC sputtering sources. The model includes corrections for off-center, tilted gun geometries as well as shadowing effects from gun chimneys used in most state-of-the-art sputtering systems. The modeling algorithm was validated by comparing its results with experimental deposition rates obtained from a sputtering system utilizing sources with a multi-piece chimney assembly that consists of a lower ground shield and a removable gas chimney. Simulations were performed for gun-tilts ranging from 0° to 31.3° from the vertical with and without the gas chimney installed. The results for the predicted and experimental angular dependence of the sputtering deposition rate were found to have an average magnitude of relative error of for a 0°-31.3° gun-tilt range without the gas chimney, and for a 17.7°-31.3° gun-tilt range with the gas chimney. The continuum nature of the model renders this approach reverse-optimizable, providing a rapid tool for assisting in the understanding of the synthesis-composition-property space of novel materials.

  3. Thermodynamic Modeling of Developed Structural Turbulence Taking into Account Fluctuations of Energy Dissipation

    NASA Astrophysics Data System (ADS)

    Kolesnichenko, A. V.

    2004-03-01

    A thermodynamic approach to the construction of a phenomenological macroscopic model of developed turbulence in a compressible fluid is considered with regard for the formation of space-time dissipative structures. A set of random variables were introduced into the model as internal parameters of the turbulent-chaos subsystem. This allowed us to obtain, by methods of nonequilibrium thermodynamics, the kinetic Fokker-Planck equation in the configuration space. This equation serves to determine the temporal evolution of the conditional probability distribution function of structural parameters pertaining to the cascade process of fragmentation of large-scale eddies and temperature inhomogeneities, and to analyze Markovian stochastic processes of transition from one nonequilibrium stationary turbulent-motion state to another as a result of successive losses of stability caused by changes in the governing parameters. An alternative method for investigating the mechanisms of such transitions, based on a stochastic Langevin-type equation intimately related to the derived kinetic equation, is also considered. Some postulates and physical and mathematical assumptions used in the thermodynamic model of structurized turbulence are discussed in detail. In particular, we considered, using the deterministic transport equation for conditional means, the cardinal problem of the developed approach: the possibility of the existence of asymptotically stable stationary states of the turbulent-chaos subsystem. Also proposed is a nonequilibrium thermodynamic potential for internal coordinates, which extends the well-known Boltzmann-Planck relationship for equilibrium states to the nonequilibrium stationary states of the representing ensemble. This potential is shown to be the Lyapunov function for such states. The relation is also explored between the internal intermittence in the inertial interval of scales and the fluctuations of the energy dissipation. This study is aimed at
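
    The Langevin-type description mentioned above can be illustrated with a generic Euler-Maruyama integration of an overdamped stochastic equation dx = f(x) dt + sqrt(2D) dW for an internal coordinate. The quadratic potential, drift and noise amplitude below are purely illustrative and are not the specific equations of the turbulence model.

    ```python
    import math
    import random

    def euler_maruyama(drift, diffusion, x0, dt, n_steps, seed=0):
        """Integrate a generic Langevin-type SDE dx = drift(x) dt + sqrt(2*D) dW
        with the Euler-Maruyama scheme (illustrative, not the paper's equations)."""
        random.seed(seed)
        x = x0
        path = [x]
        for _ in range(n_steps):
            x += drift(x) * dt + math.sqrt(2.0 * diffusion * dt) * random.gauss(0.0, 1.0)
            path.append(x)
        return path

    # Relaxation of an internal coordinate in a quadratic potential (illustrative)
    path = euler_maruyama(drift=lambda x: -x, diffusion=0.1, x0=2.0, dt=0.01, n_steps=1000)
    print(f"final value after relaxation: {path[-1]:.3f}")
    ```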

  4. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    NASA Astrophysics Data System (ADS)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization.

  5. The effects of drugs on human models of emotional processing: an account of antidepressant drug treatment

    PubMed Central

    Pringle, Abbie; Harmer, Catherine J.

    2015-01-01

    Human models of emotional processing suggest that the direct effect of successful antidepressant drug treatment may be to modify biases in the processing of emotional information. Negative biases in emotional processing are documented in depression, and single or short-term dosing with conventional antidepressant drugs reverses these biases in depressed patients prior to any subjective change in mood. Antidepressant drug treatments also modulate emotional processing in healthy volunteers, which allows the consideration of the psychological effects of these drugs without the confound of changes in mood. As such, human models of emotional processing may prove to be useful for testing the efficacy of novel treatments and for matching treatments to individual patients or subgroups of patients. PMID:26869848

  6. Why does placing the question before an arithmetic word problem improve performance? A situation model account.

    PubMed

    Thevenot, Catherine; Devidal, Michel; Barrouillet, Pierre; Fayol, Michel

    2007-01-01

    The aim of this paper is to investigate the controversial issue of the nature of the representation constructed by individuals to solve arithmetic word problems. More precisely, we consider the relevance of two different theories: the situation or mental model theory (Johnson-Laird, 1983; Reusser, 1989) and the schema theory (Kintsch & Greeno, 1985; Riley, Greeno, & Heller, 1983). Fourth-graders who differed in their mathematical skills were presented with problems that varied in difficulty and with the question either before or after the text. We obtained the classic effect of the position of the question, with better performance when the question was presented prior to the text. In addition, this effect was more marked in the case of children who had poorer mathematical skills and in the case of more difficult problems. We argue that this pattern of results is compatible only with the situation or mental model theory, and not with the schema theory. PMID:17162507

  7. An Energy Approach to a Micromechanics Model Accounting for Nonlinear Interface Debonding.

    SciTech Connect

    Tan, H.; Huang, Y.; Geubelle, P. H.; Liu, C.; Breitenfeld, M. S.

    2005-01-01

    We developed a micromechanics model to study the effect of nonlinear interface debonding on the constitutive behavior of composite materials. While implementing this micromechanics model into a large simulation code for solid rockets, we are challenged by problems such as tension/shear coupling and the nonuniform distribution of the displacement jump at the particle/matrix interfaces. We therefore propose an energy approach to solve these problems. This energy approach calculates the potential energy of the representative volume element, including the contribution from the interface debonding. By minimizing the potential energy with respect to the variation of the interface displacement jump, the traction-balanced interface debonding can be found and the macroscopic constitutive relations established. This energy approach has the ability to treat different load conditions in a unified way, and the interface cohesive law can take any arbitrary form. In this paper, the energy approach is verified to give the same constitutive behaviors as reported before.

  8. The effects of drugs on human models of emotional processing: an account of antidepressant drug treatment.

    PubMed

    Pringle, Abbie; Harmer, Catherine J

    2015-12-01

    Human models of emotional processing suggest that the direct effect of successful antidepressant drug treatment may be to modify biases in the processing of emotional information. Negative biases in emotional processing are documented in depression, and single or short-term dosing with conventional antidepressant drugs reverses these biases in depressed patients prior to any subjective change in mood. Antidepressant drug treatments also modulate emotional processing in healthy volunteers, which allows the consideration of the psychological effects of these drugs without the confound of changes in mood. As such, human models of emotional processing may prove to be useful for testing the efficacy of novel treatments and for matching treatments to individual patients or subgroups of patients.

  9. Structure-Based Statistical Mechanical Model Accounts for the Causality and Energetics of Allosteric Communication.

    PubMed

    Guarnera, Enrico; Berezovsky, Igor N

    2016-03-01

    Allostery is one of the pervasive mechanisms through which proteins in living systems carry out enzymatic activity, cell signaling, and metabolism control. Effective modeling of the protein function regulation requires a synthesis of the thermodynamic and structural views of allostery. We present here a structure-based statistical mechanical model of allostery, allowing one to observe causality of communication between regulatory and functional sites, and to estimate per residue free energy changes. Based on the consideration of ligand free and ligand bound systems in the context of a harmonic model, corresponding sets of characteristic normal modes are obtained and used as inputs for an allosteric potential. This potential quantifies the mean work exerted on a residue due to the local motion of its neighbors. Subsequently, in a statistical mechanical framework the entropic contribution to allosteric free energy of a residue is directly calculated from the comparison of conformational ensembles in the ligand free and ligand bound systems. As a result, this method provides a systematic approach for analyzing the energetics of allosteric communication based on a single structure. The feasibility of the approach was tested on a variety of allosteric proteins, heterogeneous in terms of size, topology and degree of oligomerization. The allosteric free energy calculations show the diversity of ways and complexity of scenarios existing in the phenomenology of allosteric causality and communication. The presented model is a step forward in developing the computational techniques aimed at detecting allosteric sites and obtaining the discriminative power between agonistic and antagonistic effectors, which are among the major goals in allosteric drug design. PMID:26939022

  10. Structure-Based Statistical Mechanical Model Accounts for the Causality and Energetics of Allosteric Communication.

    PubMed

    Guarnera, Enrico; Berezovsky, Igor N

    2016-03-01

    Allostery is one of the pervasive mechanisms through which proteins in living systems carry out enzymatic activity, cell signaling, and metabolism control. Effective modeling of the protein function regulation requires a synthesis of the thermodynamic and structural views of allostery. We present here a structure-based statistical mechanical model of allostery, allowing one to observe causality of communication between regulatory and functional sites, and to estimate per residue free energy changes. Based on the consideration of ligand free and ligand bound systems in the context of a harmonic model, corresponding sets of characteristic normal modes are obtained and used as inputs for an allosteric potential. This potential quantifies the mean work exerted on a residue due to the local motion of its neighbors. Subsequently, in a statistical mechanical framework the entropic contribution to allosteric free energy of a residue is directly calculated from the comparison of conformational ensembles in the ligand free and ligand bound systems. As a result, this method provides a systematic approach for analyzing the energetics of allosteric communication based on a single structure. The feasibility of the approach was tested on a variety of allosteric proteins, heterogeneous in terms of size, topology and degree of oligomerization. The allosteric free energy calculations show the diversity of ways and complexity of scenarios existing in the phenomenology of allosteric causality and communication. The presented model is a step forward in developing the computational techniques aimed at detecting allosteric sites and obtaining the discriminative power between agonistic and antagonistic effectors, which are among the major goals in allosteric drug design.

  11. Structure-Based Statistical Mechanical Model Accounts for the Causality and Energetics of Allosteric Communication

    PubMed Central

    Guarnera, Enrico; Berezovsky, Igor N.

    2016-01-01

    Allostery is one of the pervasive mechanisms through which proteins in living systems carry out enzymatic activity, cell signaling, and metabolism control. Effective modeling of the protein function regulation requires a synthesis of the thermodynamic and structural views of allostery. We present here a structure-based statistical mechanical model of allostery, allowing one to observe causality of communication between regulatory and functional sites, and to estimate per residue free energy changes. Based on the consideration of ligand free and ligand bound systems in the context of a harmonic model, corresponding sets of characteristic normal modes are obtained and used as inputs for an allosteric potential. This potential quantifies the mean work exerted on a residue due to the local motion of its neighbors. Subsequently, in a statistical mechanical framework the entropic contribution to allosteric free energy of a residue is directly calculated from the comparison of conformational ensembles in the ligand free and ligand bound systems. As a result, this method provides a systematic approach for analyzing the energetics of allosteric communication based on a single structure. The feasibility of the approach was tested on a variety of allosteric proteins, heterogeneous in terms of size, topology and degree of oligomerization. The allosteric free energy calculations show the diversity of ways and complexity of scenarios existing in the phenomenology of allosteric causality and communication. The presented model is a step forward in developing the computational techniques aimed at detecting allosteric sites and obtaining the discriminative power between agonistic and antagonistic effectors, which are among the major goals in allosteric drug design. PMID:26939022

  12. A car-following model accounting for the driver’s attribution

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; He, Jia; Yang, Shi-Chun; Shang, Hua-Yan

    2014-11-01

    In this paper, we use the FVD (full velocity difference) model to develop a car-following model with consideration of the driver’s attribution. The numerical results show that the proposed model can qualitatively reproduce the effects of the driver’s attribution on each vehicle’s speed, acceleration, fuel consumption and exhaust emissions under two typical traffic states, i.e., the aggressive driver enhances his speed, acceleration, fuel consumption and exhaust emissions while the conservative driver reduces his speed, acceleration, fuel consumption and exhaust emissions, so the aggressive driver’s speed, acceleration, fuel consumption and exhaust emissions are greater than those of the neutral driver and the neutral driver’s speed, acceleration, fuel consumption and exhaust emissions are greater than those of the conservative driver. In addition, we further study each vehicle’s speed, acceleration, fuel consumption and exhaust emissions when aggressive drivers, neutral drivers and conservative drivers are uniformly mixed. The numerical results show that there are no prominent differences between the mixed traffic flow and the homogeneous traffic flow consisting of neutral driver, i.e., each vehicle’s speed, acceleration, fuel consumption and exhaust emissions are similar to those of the neutral driver.
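
    The baseline FVD acceleration used as the starting point above can be sketched as a = kappa [V(Δx) - v] + lambda Δv with a Bando-type optimal velocity function V. In the Python below, a simple multiplier on the acceleration stands in for the driver-attribution effect (aggressive vs. conservative); this multiplier, the optimal-velocity constants and all parameter values are illustrative assumptions, not the authors' exact formulation.

    ```python
    import math

    def optimal_velocity(headway, v_max=33.0, h_c=25.0):
        """Generic optimal-velocity function V(headway); constants illustrative."""
        return 0.5 * v_max * (math.tanh(headway - h_c) + math.tanh(h_c))

    def fvd_acceleration(v, headway, dv, kappa=0.41, lam=0.5, aggressiveness=1.0):
        """Full velocity difference (FVD) acceleration:
            a = kappa * [V(headway) - v] + lambda * dv
        The 'aggressiveness' multiplier is an illustrative stand-in for the
        driver-attribution effect described above, not the authors' exact form."""
        return aggressiveness * (kappa * (optimal_velocity(headway) - v) + lam * dv)

    # Same traffic state seen by conservative, neutral and aggressive drivers
    for label, k in (("conservative", 0.8), ("neutral", 1.0), ("aggressive", 1.2)):
        a = fvd_acceleration(v=10.0, headway=20.0, dv=-1.0, aggressiveness=k)
        print(f"{label}: acceleration = {a:.3f} m/s^2")
    ```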

  13. On the influence of debris in glacier melt modelling: a new temperature-index model accounting for the debris thickness feedback

    NASA Astrophysics Data System (ADS)

    Carenzo, Marco; Mabillard, Johan; Pellicciotti, Francesca; Reid, Tim; Brock, Ben; Burlando, Paolo

    2013-04-01

    The increase of rockfalls from the surrounding slopes and of englacial melt-out material has led to an increase of the debris-cover extent on Alpine glaciers. In recent years, distributed debris energy-balance models have been developed to account for the melt-rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are often unavailable, especially in remote mountain areas such as the Himalaya. Some of the input data, such as wind or temperature, are also difficult to extrapolate from station measurements. Because of their lower data requirements, empirical models have been used in glacier melt modelling. However, they generally simplify the debris effect by using a single melt-reduction factor, which does not account for the influence of debris thickness on melt. In this paper, we present a new temperature-index model accounting for the debris-thickness feedback in the computation of melt rates at the debris-ice interface. The empirical parameters (temperature factor, shortwave radiation factor, and lag factor accounting for the energy transfer through the debris layer) are optimized at the point scale for several debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter has been validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris-thickness values to provide a general and transferable parameterization. The new model is developed on Miage Glacier, Italy, a debris-covered glacier whose ablation area is mantled in a near-continuous layer of rock debris. Subsequently, its transferability is tested on Haut Glacier d'Arolla, Switzerland, where debris is thinner and its extent has been seen to expand in recent decades. The results show that the performance of the new debris temperature-index model (DETI) in simulating the glacier melt rate at the point scale
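
    A debris-aware temperature-index melt computation of the kind described above can be sketched as M = TF(d) * T_lagged + SRF(d) * I, with the empirical factors reduced as the debris thickness d grows and the air temperature lagged to mimic energy transfer through the debris. The decay form, the units (mm w.e. per hour) and all constants below are illustrative assumptions, not the calibrated DETI parameters.

    ```python
    def deti_melt(temp_lagged_c, sw_in_wm2, debris_m, tf0=0.05, srf0=0.0094):
        """Hedged sketch of a debris temperature-index melt model:
            M = TF(d) * T_lagged + SRF(d) * I   [mm w.e. per hour, illustrative]
        The empirical factors are damped with debris thickness d (assumed form)."""
        damping = 1.0 / (1.0 + 10.0 * debris_m)  # assumed thickness feedback
        tf, srf = tf0 * damping, srf0 * damping
        return max(0.0, tf * temp_lagged_c + srf * sw_in_wm2)

    # Thicker debris suppresses melt for the same (lagged) forcing
    for d in (0.02, 0.10, 0.30):
        print(f"debris = {d:.2f} m -> melt = {deti_melt(5.0, 600.0, d):.2f} mm w.e./h")
    ```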

  14. Accounting for natural organic matter in aqueous chemical equilibrium models: a review of the theories and applications

    NASA Astrophysics Data System (ADS)

    Dudal, Yves; Gérard, Frédéric

    2004-08-01

    Soil organic matter consists of a highly complex and diversified blend of organic molecules, ranging from low molecular weight organic acids (LMWOAs), sugars, amines, alcohols, etc., to high apparent molecular weight fulvic and humic acids. The presence of a wide range of functional groups on these molecules makes them very reactive and influential in soil chemistry, in regards to acid-base chemistry, metal complexation, precipitation and dissolution of minerals and microbial reactions. Of these functional groups, the carboxylic and phenolic ones are the most abundant and most influential in regards to metal complexation. Therefore, chemical equilibrium models have progressively dealt with organic matter in their calculations. This paper presents a review of six chemical equilibrium models, namely NICA-Donnan, EQ3/6, GEOCHEM, MINTEQA2, PHREEQC and WHAM, in light of the account they make of natural organic matter (NOM), with the objective of helping potential users in choosing a modelling approach. The account has taken various forms, mainly by adding specific molecules within the existing model databases (EQ3/6, GEOCHEM, and PHREEQC) or by using either a discrete (WHAM) or a continuous (NICA-Donnan and MINTEQA2) distribution of the deprotonated carboxylic and phenolic groups. The different ways in which soil organic matter has been integrated into these models are discussed in regards to the model-experiment comparisons found in the literature, concerning applications to either laboratory or natural systems. Much of the attention has been focused on the two most advanced models, WHAM and NICA-Donnan, which are able to reasonably describe most of the experimental results. Nevertheless, a better knowledge of the metal-binding properties of humic substances is needed to better constrain model inputs with site-specific parameter values. This represents the main axis of research that needs to be carried out to improve the models. In addition to

  15. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections.

    PubMed

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Compared with the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, was given together with the correction values in the improved model, whereas the traditional model provided only correction values and the precision indexes were completely missing. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which is used for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations are largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.
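
    A minimal sketch of how such a correction and its precision index could be applied, with the correction uncertainty propagated into the observation variance used for weighting; the elevation nodes, correction values, and sigmas below are hypothetical and are not the published model.

    ```python
    import numpy as np

    # Hypothetical correction table for one BeiDou satellite/frequency:
    # elevation-node corrections (m) and their 1-sigma precisions (m).
    elev_nodes = np.array([0., 10., 20., 30., 40., 50., 60., 70., 80., 90.])
    corr_vals  = np.array([-0.45, -0.38, -0.28, -0.15, -0.05,
                           0.05, 0.15, 0.22, 0.28, 0.30])
    corr_sig   = np.array([0.10, 0.08, 0.06, 0.05, 0.04,
                           0.04, 0.04, 0.05, 0.06, 0.08])

    def correct_code(pseudorange_m, elev_deg, sigma_code_m=0.3):
        """Return the bias-corrected pseudorange and its variance.

        The correction is linearly interpolated in elevation, and its
        precision index is added to the raw code variance so the corrected
        observation can be weighted accordingly in the PPP filter.
        """
        corr = np.interp(elev_deg, elev_nodes, corr_vals)
        sig = np.interp(elev_deg, elev_nodes, corr_sig)
        var = sigma_code_m ** 2 + sig ** 2
        return pseudorange_m - corr, var

    p_corr, p_var = correct_code(22_134_567.89, elev_deg=35.0)
    print(f"corrected code: {p_corr:.3f} m, weight: {1.0 / p_var:.2f}")
    ```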

  16. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    PubMed Central

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Compared with the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, was given together with the correction values in the improved model, whereas the traditional model provided only correction values and the precision indexes were completely missing. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which is used for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations are largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.

  17. Accounting for Long Term Sediment Storage in a Watershed Scale Numerical Model for Suspended Sediment Routing

    NASA Astrophysics Data System (ADS)

    Keeler, J. J.; Pizzuto, J. E.; Skalak, K.; Karwan, D. L.; Benthem, A.; Ackerman, T. R.

    2015-12-01

    Quantifying the delivery of suspended sediment from upland sources to downstream receiving waters is important for watershed management, but current routing models fail to accurately represent lag times in delivery resulting from sediment storage. In this study, we route suspended sediment tagged by a characteristic tracer using a 1-dimensional model that implicitly includes storage and remobilization processes and timescales. From an input location where tagged sediment is added, the model advects suspended sediment downstream at the velocity of the stream (adjusted for the intermittency of transport events). Deposition rates are specified by the fraction of the suspended load stored per kilometer of downstream transport (presumably available from a sediment budget). Tagged sediment leaving storage is evaluated from a convolution equation based on the probability distribution function (pdf) of sediment storage waiting times; this approach avoids the difficulty of accurately representing complex processes of sediment remobilization from floodplain and other deposits. To illustrate the role of storage on sediment delivery, we compare exponential and bounded power-law waiting time pdfs with identical means of 94 years. In both cases, the median travel time for sediment to reach the depocenter in fluvial systems less than 40km long is governed by in-channel transport and is unaffected by sediment storage. As the channel length increases, however, the median sediment travel time reflects storage rather than in-channel transport; travel times do not vary significantly between the two different waiting time functions. At distances of 50, 100, and 200 km, the median travel time for suspended sediment is 36, 136, and 325 years, orders of magnitude slower than travel times associated with in-channel transport. These computations demonstrate that storage can be neglected for short rivers, but for longer systems, storage controls the delivery of suspended sediment.
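
    A Monte Carlo sketch of the travel-time idea, assuming a hypothetical effective in-channel velocity, a fixed per-kilometre deposition probability, and waiting-time distributions with a 94-year mean; the "power-law-like" sampler is only a rescaled Pareto stand-in for the bounded power-law pdf used in the study.

    ```python
    import numpy as np

    def median_travel_time(distance_km, velocity_km_yr, p_store_per_km,
                           wait_sampler, n=20_000, seed=0):
        """Median travel time (years) of tagged suspended sediment with storage.

        Each particle is advected at the (intermittency-adjusted) channel
        velocity; over each kilometre it may be deposited, and every storage
        episode adds a waiting time drawn from the supplied distribution.
        """
        rng = np.random.default_rng(seed)
        in_channel = distance_km / velocity_km_yr
        episodes = rng.binomial(int(distance_km), p_store_per_km, size=n)
        waits = np.array([wait_sampler(rng, k).sum() if k else 0.0
                          for k in episodes])
        return float(np.median(in_channel + waits))

    exponential = lambda rng, k: rng.exponential(94.0, size=k)         # mean 94 yr
    powerlaw_like = lambda rng, k: (rng.pareto(1.5, size=k) + 1.0) * (94.0 / 3.0)

    for name, sampler in [("exponential", exponential),
                          ("power-law-like", powerlaw_like)]:
        for km in (40, 100, 200):
            t_med = median_travel_time(km, 400.0, 0.02, sampler)
            print(f"{name:>15s} pdf, {km:3d} km: median {t_med:7.1f} yr")
    ```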

  18. A theoretical plate model accounting for slow kinetics in chromatographic elution.

    PubMed

    Baeza-Baeza, J J; García-Álvarez-Coque, M C

    2011-08-01

    The chromatographic elution has been studied from different perspectives. However, in spite of the simplicity and evident deficiencies of the plate model proposed by Martin and Synge, it has served as a basis for the characterization of columns to date. This approach envisions the chromatographic column as an arbitrary number of theoretical plates, each of them consisting of identical repeating portions of mobile phase and stationary phase. Solutes partition between both phases, reaching equilibrium. Mobile phase transfer between the theoretical plates is assumed to be infinitesimally stepwise (or continuous), giving rise to the mixing of the solutions in adjacent plates. This yields an additional peak broadening, which is added to the dispersion associated with the equilibrium conditions. It is commonly assumed that when the solute concentration is sufficiently small, chromatographic elution is carried out under linear conditions, which is the case in almost all analytical applications. When the solute concentration increases above a value where the stationary phase approaches saturation (i.e., becomes overloaded), non-linear elution is obtained. In addition to overloading, another source of non-linearity can be a slow mass transfer. An extended Martin and Synge model is proposed here to include slow mass-transfer kinetics (with respect to flow rate) between the mobile phase and stationary phase. We show that there is a linear relationship between the variance and the ratio of the kinetic constants for the mass transfer in the flow direction (τ) and the mass transfer between the mobile phase and stationary phase (ν), which has been called the kinetic ratio (κ=τ/ν). The proposed model was validated with data obtained according to an approach that simulates the solute migration through the theoretical plates. An experimental approach to measure the deviation from the equilibrium conditions using the experimental peak variances and retention times at
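
    A stepwise simulation in the spirit of such an extended plate model: solute in each theoretical plate exchanges with the stationary phase at a finite rate ν while the mobile phase transfers to the next plate at rate τ. All rate constants and the discretisation are illustrative, not the authors' formulation.

    ```python
    import numpy as np

    def elute(n_plates=50, k_eq=2.0, nu=0.5, tau=1.0, n_steps=8000, dt=0.05):
        """Plate-by-plate elution with slow mobile/stationary mass transfer.

        nu: mobile<->stationary exchange rate; tau: plate-to-plate transfer
        rate in the flow direction; k_eq: equilibrium distribution ratio.
        Returns the amount of solute leaving the last plate at each step.
        """
        mob, sta = np.zeros(n_plates), np.zeros(n_plates)
        mob[0] = 1.0                                  # unit injection into plate 1
        signal = np.zeros(n_steps)
        for t in range(n_steps):
            exchange = nu * (k_eq * mob - sta) * dt   # relax toward equilibrium
            mob, sta = mob - exchange, sta + exchange
            moved = tau * mob * dt                    # transfer to the next plate
            mob = mob - moved
            mob[1:] += moved[:-1]
            signal[t] = moved[-1]
        return signal

    sig = elute()
    t = np.arange(sig.size)
    t_mean = (t * sig).sum() / sig.sum()
    variance = ((t - t_mean) ** 2 * sig).sum() / sig.sum()
    print(f"retention: {t_mean:.0f} steps, peak variance: {variance:.0f} steps^2")
    ```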

  19. Accounting for management costs in sensitivity analyses of matrix population models.

    PubMed

    Baxter, Peter W J; McCarthy, Michael A; Possingham, Hugh P; Menkhorst, Peter W; McLean, Natasha

    2006-06-01

    Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
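
    A small numerical illustration of the core point, assuming a hypothetical two-stage matrix and hypothetical marginal costs: ranking vital rates by the change in the growth rate λ per dollar, rather than by raw sensitivity, can reverse which rate a manager should target.

    ```python
    import numpy as np

    # Hypothetical 2-stage matrix: [juvenile, adult], fecundity f, survivals s.
    f, s_j, s_a = 0.6, 0.3, 0.85
    A = np.array([[0.0, f],
                  [s_j, s_a]])

    def growth_rate(M):
        return np.max(np.real(np.linalg.eigvals(M)))

    def sensitivity(M, i, j, eps=1e-6):
        """Numerical sensitivity of lambda to matrix element (i, j)."""
        B = M.copy()
        B[i, j] += eps
        return (growth_rate(B) - growth_rate(M)) / eps

    # Hypothetical marginal costs ($ per unit increase in the vital rate).
    cost_per_unit = {"fecundity": 5_000.0, "adult survival": 40_000.0}
    sens = {"fecundity": sensitivity(A, 0, 1),
            "adult survival": sensitivity(A, 1, 1)}

    for rate, s in sens.items():
        per_k = 1000.0 * s / cost_per_unit[rate]
        print(f"{rate:>15s}: sensitivity {s:.3f}, d(lambda) per $1000 = {per_k:.5f}")
    ```

    With these invented numbers, adult survival has the larger raw sensitivity but fecundity gives the larger gain in λ per dollar, which is the kind of reversal the cost-aware analysis is designed to expose.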

  20. Accounting for management costs in sensitivity analyses of matrix population models.

    PubMed

    Baxter, Peter W J; McCarthy, Michael A; Possingham, Hugh P; Menkhorst, Peter W; McLean, Natasha

    2006-06-01

    Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. PMID

  1. Consistent Treatment of Hydrophobicity in Protein Lattice Models Accounts for Cold Denaturation

    NASA Astrophysics Data System (ADS)

    van Dijk, Erik; Varilly, Patrick; Knowles, Tuomas P. J.; Frenkel, Daan; Abeln, Sanne

    2016-02-01

    The hydrophobic effect stabilizes the native structure of proteins by minimizing the unfavorable interactions between hydrophobic residues and water through the formation of a hydrophobic core. Here, we include the entropic and enthalpic contributions of the hydrophobic effect explicitly in an implicit solvent model. This allows us to capture two important effects: a length-scale dependence and a temperature dependence for the solvation of a hydrophobic particle. This consistent treatment of the hydrophobic effect explains cold denaturation and heat capacity measurements of solvated proteins.

  2. Accounting for Heaping in Retrospectively Reported Event Data – A Mixture-Model Approach

    PubMed Central

    Bar, Haim Y.; Lillard, Dean R.

    2012-01-01

    When event data are retrospectively reported, more temporally distal events tend to get “heaped” on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand side variables. We develop a model-based approach to estimate the extent of heaping in the data, and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that collect and potentially could collect event data. PMID:22733577

  3. Accounting for heaping in retrospectively reported event data - a mixture-model approach.

    PubMed

    Bar, Haim Y; Lillard, Dean R

    2012-11-30

    When event data are retrospectively reported, more temporally distal events tend to get 'heaped' on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand side variables. We develop a model-based approach to estimate the extent of heaping in the data and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that collect and potentially could collect event data.

  4. EVAPORISATION: a new vapor pressure model taking into account neighbour effects

    NASA Astrophysics Data System (ADS)

    Compernolle, Steven; Ceulemans, Karl; Muller, Jean-Francois

    2010-05-01

    Secondary organic aerosol (SOA) is a complex mixture of water and organic molecules. The vapor pressure of an organic molecule is one of the most important properties regulating its partitioning to the particulate phase, but as it is unknown for most (typically polyfunctional) organic molecules in biogenic SOA, it has to be estimated by a vapor pressure model fitted to experimental data. While a lot of vapor pressure data is generally available for hydrocarbons and monofunctional compounds, much less data are available for bifunctional compounds. For compounds with more functional groups, data is sparse and relatively inaccurate. We have developed a vapor pressure model, EVAPORISATION (Estimation of VApor Pressure of ORganics, Including effects Such As The Interaction of Neighbours), starting from the group-contribution principle: each functional group gives a contribution to the logarithm of the vapor pressure. On top of that, chemically rationalized second-order effects are added to account for the carbon skeleton, the non-additivity of functional groups and, for neighbouring functional groups, intramolecular interactions. These effects can be very significant: e.g., when two carbonyl groups are neighbouring, the vapor pressure is about one order of magnitude higher than when they are non-neighbouring. Due to the lack of data, some of these effects must be estimated by analogy. The method is compared to several other vapor pressure estimation techniques, such as SIMPOL (Pankow et al. 2008), SPARC (Hilal et al. 2003), and the methods of Myrdal and Yalkowsky (1997), Capouet and Muller (2006), Nannoolal et al. (2008), and Moller et al. (2008). We extended some of these methods as they are not able to treat hydroperoxides, peracids, and peroxyacyl nitrates. With our model BOREAM, outlined in a previous publication (Capouet et al. 2008), several smog chamber experiments are simulated, and the impact of the vapor pressure method choice is elucidated. It turns out that the choice of the vapor
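
    A toy illustration of the group-contribution idea with a neighbour-interaction correction; the group increments, the intercept, and the one-decade correction for adjacent carbonyls are invented for illustration and are not the fitted EVAPORISATION parameters.

    ```python
    # Contribution of each functional group to log10(p / atm) at 298 K
    # (all values hypothetical).
    GROUP_INCREMENT = {
        "CH3": -0.40, "CH2": -0.45, "carbonyl": -1.35, "hydroxyl": -2.20,
    }
    # Correction applied once per adjacent pair of groups (hypothetical).
    NEIGHBOUR_CORRECTION = {
        ("carbonyl", "carbonyl"): +1.0,   # adjacent carbonyls raise p by ~1 decade
    }

    def log10_vapor_pressure(groups, adjacent_pairs, intercept=1.0):
        logp = intercept + sum(GROUP_INCREMENT[g] for g in groups)
        for pair in adjacent_pairs:
            logp += NEIGHBOUR_CORRECTION.get(tuple(sorted(pair)), 0.0)
        return logp

    # Butanedione-like molecule CH3-CO-CO-CH3, with and without the two
    # carbonyls treated as neighbours.
    groups = ["CH3", "carbonyl", "carbonyl", "CH3"]
    diff = (log10_vapor_pressure(groups, [("carbonyl", "carbonyl")])
            - log10_vapor_pressure(groups, []))
    print(f"neighbouring carbonyls raise log10(p) by {diff:.1f}")
    ```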

  5. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    USGS Publications Warehouse

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.

  6. Account of near-cathode sheath in numerical models of high-pressure arc discharges

    NASA Astrophysics Data System (ADS)

    Benilov, M. S.; Almeida, N. A.; Baeva, M.; Cunha, M. D.; Benilova, L. G.; Uhrlandt, D.

    2016-06-01

    Three approaches to describing the separation of charges in near-cathode regions of high-pressure arc discharges are compared. The first approach employs a single set of equations, including the Poisson equation, in the whole interelectrode gap. The second approach employs a fully non-equilibrium description of the quasi-neutral bulk plasma, complemented with a newly developed description of the space-charge sheaths. The third, and the simplest, approach exploits the fact that significant power is deposited by the arc power supply into the near-cathode plasma layer, which allows one to simulate the plasma-cathode interaction to the first approximation independently of processes in the bulk plasma. It is found that results given by the different models are generally in good agreement, and in some cases the agreement is even surprisingly good. It follows that the predicted integral characteristics of the plasma-cathode interaction are not strongly affected by details of the model provided that the basic physics is right.

  7. Modelling of trace metal uptake by roots taking into account complexation by exogenous organic ligands

    NASA Astrophysics Data System (ADS)

    Jean-Marc, Custos; Christian, Moyne; Sterckeman, Thibault

    2010-05-01

    The context of this study is phytoextraction of soil trace metals such as Cd, Pb or Zn. Trace metal transfer from soil to plant depends on physical and chemical processes such as mineral alteration, transport, adsorption/desorption, reactions in solution and biological processes including the action of plant roots and of the associated micro-flora. Complexation of metal ions by organic ligands is considered to play a role in the availability of trace metals to roots, in particular when synthetic ligands (EDTA, NTA, etc.) are added to the soil to increase the solubility of the contaminants. As this role is not clearly understood, we wanted to simulate it in order to quantify the effect of organic ligands on root uptake of trace metals and to produce a tool which could help in optimizing the conditions of phytoextraction. We studied the effect of an aminocarboxylate ligand on the absorption of the metal ion by roots, both in hydroponic solution and in soil solution, for which we had to formalize the buffer power for the metal. We assumed that the hydrated metal ion is the only form which can be absorbed by the plants. Transport and reaction processes were modelled for a system made up of the metal M, a ligand L and the metal complex ML. The Tinker-Nye-Barber model was adapted to describe the transport of the solutes M, L and ML in the soil and the absorption of M by the roots. This allowed us to represent the interactions between transport, chelating reactions, absorption of the solutes at the root surface and root growth with time, in order to simulate metal uptake by a whole root system. Several assumptions were tested, such as i) absorption of the metal by an infinite sink or according to Michaelis-Menten kinetics, and solute transport by diffusion with and without ii) mass flow and iii) a soil buffer power for the ligand L. In hydroponic solution (without soil buffer power), ligands decreased the trace metal flux towards roots, as they reduced the concentration of hydrated
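
    A minimal sketch of the root-surface boundary condition contrasted above: Michaelis-Menten uptake of the hydrated (free) metal ion versus the infinite-sink limit, with ligand complexation entering through the free-ion fraction; all parameter values are hypothetical.

    ```python
    def uptake_flux(c_free_mol_m3, j_max=2e-7, k_m=5e-3, infinite_sink=False):
        """Uptake flux of the hydrated metal ion at the root surface (mol m-2 s-1).

        Michaelis-Menten: J = J_max * C / (K_m + C). In the infinite-sink limit
        the surface concentration is driven to zero, so transport rather than
        uptake kinetics limits the flux. Parameter values are illustrative only.
        """
        if infinite_sink:
            return float("inf")
        return j_max * c_free_mol_m3 / (k_m + c_free_mol_m3)

    # Complexation by a ligand lowers the free-ion fraction and hence the flux.
    total_metal = 1e-3                              # mol m-3 in solution
    for free_fraction in (1.0, 0.1, 0.01):          # no ligand -> strong chelation
        j = uptake_flux(free_fraction * total_metal)
        print(f"free fraction {free_fraction:5.2f}: flux {j:.2e} mol m-2 s-1")
    ```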

  8. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
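
    A simplified sketch of fitting the broken-line linear (BLL) ascending model and a quadratic polynomial (QP) to hypothetical treatment means and comparing them with a BIC-type criterion; it deliberately ignores the random block effects and heterogeneous error variances handled by the mixed-model analysis, and SciPy is assumed to be available.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical treatment means: G:F of nursery pigs vs SID Trp:Lys ratio (%).
    x = np.array([14.0, 15.0, 16.0, 16.5, 17.0, 18.0, 19.0])
    y = np.array([0.58, 0.61, 0.64, 0.65, 0.655, 0.657, 0.656])

    def bll(x, plateau, slope, breakpoint):
        """Broken-line linear ascending: rises until the breakpoint, then flat."""
        return plateau - slope * np.maximum(breakpoint - x, 0.0)

    def qp(x, a, b, c):
        return a + b * x + c * x ** 2

    p_bll, _ = curve_fit(bll, x, y, p0=[0.65, 0.02, 16.5])
    p_qp, _ = curve_fit(qp, x, y)

    def bic(resid, k, n):
        return n * np.log(np.sum(resid ** 2) / n) + k * np.log(n)

    n = len(x)
    print("BLL breakpoint:", round(p_bll[2], 2), "% SID Trp:Lys")
    print("BIC  BLL:", round(bic(y - bll(x, *p_bll), 3, n), 1),
          " QP:", round(bic(y - qp(x, *p_qp), 3, n), 1))
    ```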

  9. A multi-level model accounting for the effects of JAK2-STAT5 signal modulation in erythropoiesis.

    PubMed

    Lai, Xin; Nikolov, Svetoslav; Wolkenhauer, Olaf; Vera, Julio

    2009-08-01

    We develop a multi-level model, using ordinary differential equations and based on quantitative experimental data, accounting for murine erythropoiesis. At the sub-cellular level, the model includes a description of the regulation of red blood cell differentiation through Epo-stimulated JAK2-STAT5 signalling activation, while at the cell population level the model describes the dynamics of (STAT5-mediated) red blood cell differentiation from their progenitors. Furthermore, the model includes equations depicting the hypoxia-mediated regulation of blood levels of the hormone erythropoietin. Taken all together, the model constitutes a multi-level, feedback loop-regulated biological system, involving processes in different organs and at different organisational levels. We use our model to investigate the effect of deregulation of the proteins involved in the JAK2-STAT5 signalling pathway in red blood cells. Our analysis suggests that down-regulation of any of the three signalling system components considerably affects an individual's hematocrit level. In addition, our analysis predicts that exogenous Epo injection (an already existing treatment for several blood diseases) may compensate for the effects of a single down-regulation of the Epo hormone level, STAT5 or EpoR/JAK2 expression level, and that it may be insufficient to counteract a combined down-regulation of all the elements in the JAK2-STAT5 signalling cascade. PMID:19660986

  10. The application of multilevel modelling to account for the influence of walking speed in gait analysis.

    PubMed

    Keene, David J; Moe-Nilssen, Rolf; Lamb, Sarah E

    2016-01-01

    Differences in gait performance can be explained by variations in walking speed, which is a major analytical problem. Some investigators have standardised speed during testing, but this can result in an unnatural control of gait characteristics. Other investigators have developed test procedures in which participants walk at their self-selected slow, preferred and fast speeds, with gait characteristics then computed at a standardised speed. However, this analysis is dependent upon an overlap in the ranges of gait speed observed within and between participants, and this is difficult to achieve under self-selected conditions. In this report a statistical analysis procedure is introduced that utilises multilevel modelling to analyse data from walking tests at self-selected speeds, without requiring an overlap in the range of speeds observed or the routine use of data transformations.

  11. The gamesmanship of sex: a model based on African American adolescent accounts.

    PubMed

    Eyre, S L; Hoffman, V; Millstein, S G

    1998-12-01

    This article examines adolescent understanding of the social context of sexual behavior. Using grounded theory to interpret interviews with 39 African American male and female adolescents, the article builds a model of sex-related behavior as a set of interrelated games. A courtship game involves communication of sexual or romantic interest and, over time, formation of a romantic relationship. A duplicity game draws on conventions of a courtship game to trick a partner into having sex. A disclosure game spreads stories about one's own and other's sex-related activities to peers in a gossip network. Finally, a prestige game builds social reputation in the eyes of peers, typically based on gender-specific standards. The article concludes by examining the meanings that sex-related behavior may have for adolescents and the potential use of social knowledge for facilitating adolescent health.

  12. Accounting for false-positive acoustic detections of bats using occupancy models

    USGS Publications Warehouse

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    4. Synthesis and applications. Our results suggest that false positives sufficient to affect inferences may be common in acoustic surveys for bats. We demonstrate an approach that can estimate occupancy, regardless of the false-positive rate, when acoustic surveys are paired with capture surveys. Applications of this approach include monitoring the spread of White-Nose Syndrome, estimating the impact of climate change and informing conservation listing decisions. We calculate a site-specific probability of occupancy, conditional on survey results, which could inform local permitting decisions, such as for wind energy projects. More generally, the magnitude of false positives suggests that false-positive occupancy models can improve accuracy in research and monitoring of bats and provide wildlife managers with more reliable information.

  13. Accounting for subordinate perceptions of supervisor power: an identity-dependence model.

    PubMed

    Farmer, Steven M; Aguinis, Herman

    2005-11-01

    The authors present a model that explains how subordinates perceive the power of their supervisors and the causal mechanisms by which these perceptions translate into subordinate outcomes. Drawing on identity and resource-dependence theories, the authors propose that supervisors have power over their subordinates when they control resources needed for the subordinates' enactment and maintenance of current and desired identities. The joint effect of perceptions of supervisor power and supervisor intentions to provide such resources leads to 4 conditions ranging from highly functional to highly dysfunctional: confirmation, hope, apathy, and progressive withdrawal. Each of these conditions is associated with specific outcomes such as the quality of the supervisor-subordinate relationship, turnover, and changes in the type and centrality of various subordinate identities.

  14. Accounting for crustal magnetization in models of the core magnetic field

    NASA Technical Reports Server (NTRS)

    Jackson, Andrew

    1990-01-01

    The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process, using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that in the horizontal components.

  15. Rotating Stellar Models Can Account for the Extended Main-sequence Turnoffs in Intermediate-age Clusters

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; Huang, Chelsea X.

    2015-07-01

    We show that the extended main-sequence turnoffs seen in intermediate-age Large Magellanic Cloud (LMC) clusters, often attributed to age spreads of several hundred Myr, may be easily accounted for by variable stellar rotation in a coeval population. We compute synthetic photometry for grids of rotating stellar evolution models and interpolate them to produce isochrones at a variety of rotation rates and orientations. An extended main-sequence turnoff naturally appears in color-magnitude diagrams at ages just under 1 Gyr, peaks in extent between ~1 and 1.5 Gyr, and gradually disappears by around 2 Gyr in age. We then fit our interpolated isochrones by eye to four LMC clusters with very extended main-sequence turnoffs: NGC 1783, 1806, 1846, and 1987. In each case, stellar populations with a single age and metallicity can comfortably account for the observed extent of the turnoff region. The new stellar models predict almost no correlation of turnoff color with rotational v sin i. The red part of the turnoff is populated by a combination of slow rotators and edge-on rapid rotators, while the blue part contains rapid rotators at lower inclinations.

  16. The Situated Inference Model: An Integrative Account of the Effects of Primes on Perception, Behavior, and Motivation.

    PubMed

    Loersch, Chris; Payne, B Keith

    2011-05-01

    The downstream consequences of a priming induction range from changes in the perception of objects in the environment to the initiation of prime-related behavior and goal striving. Although each of these outcomes has been accounted for by separate mechanisms, we argue that a single process could produce all three priming effects. In this article, we introduce the situated inference model of priming, discuss its potential to account for these divergent outcomes with one mechanism, and demonstrate its ability to organize the priming literatures surrounding these effects. According to the model, primes often do not cause direct effects, instead altering only the accessibility of prime-related mental content. This information produces downstream effects on judgment, behavior, or motivation when it is mistakenly viewed as originating from one's own internal thought processes. When this misattribution occurs, the prime-related mental content becomes a possible source of information for solving whatever problems are afforded by the current situation. Because different situations afford very different questions and concerns, the inferred meaning of this prime-related content can vary greatly. The use of this information to answer qualitatively different questions can lead a single prime to produce varied effects on judgment, behavior, and motivation.

  17. A general model for likelihood computations of genetic marker data accounting for linkage, linkage disequilibrium, and mutations.

    PubMed

    Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter

    2015-09-01

    Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, where using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se ). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software has the possibility to perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.

  18. Hysteresis modelling of GO laminations for arbitrary in-plane directions taking into account the dynamics of orthogonal domain walls

    NASA Astrophysics Data System (ADS)

    Baghel, A. P. S.; Sai Ram, B.; Chwastek, K.; Daniel, L.; Kulkarni, S. V.

    2016-11-01

    The anisotropy of magnetic properties in grain-oriented steels is related to their microstructure. It results from the anisotropy of the single-crystal properties combined with crystallographic texture. The magnetization process along arbitrary directions can be explained using phase equilibrium for domain patterns, which can be described using Néel's phase theory. According to the theory, the fractions of 180° and 90° domain walls depend on the direction of magnetization. This paper presents an approach to model hysteresis loops of grain-oriented steels along arbitrary in-plane directions. The considered description is based on a modification of the Jiles-Atherton model. It includes a modified expression for the anhysteretic magnetization which takes into account the contributions of the two types of domain walls. The computed hysteresis curves for different directions are in good agreement with experimental results.
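
    A sketch of the kind of modification described: an anhysteretic magnetization built from two Langevin-type contributions whose weights stand in for the direction-dependent fractions of 180° and 90° domain walls. The cosine-squared phase split and the shape parameters are assumptions for illustration, not the authors' fitted expression.

    ```python
    import numpy as np

    def anhysteretic(h, theta_deg, ms=1.7e6, a180=40.0, a90=400.0):
        """Two-phase anhysteretic magnetization (sketch), A/m.

        The fraction of 180-degree walls is assumed to vary with the angle
        theta between the applied field and the rolling direction; each
        phase contributes a Langevin term with its own shape parameter.
        """
        def langevin(x):
            x = np.asarray(x, dtype=float)
            return np.where(np.abs(x) < 1e-6, x / 3.0,
                            1.0 / np.tanh(x) - 1.0 / x)

        frac180 = np.cos(np.radians(theta_deg)) ** 2   # illustrative phase split
        frac90 = 1.0 - frac180
        return ms * (frac180 * langevin(h / a180) + frac90 * langevin(h / a90))

    for angle in (0.0, 55.0, 90.0):    # rolling, hard, and transverse directions
        print(f"theta = {angle:4.0f} deg: M_an = {anhysteretic(200.0, angle):.3e} A/m")
    ```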

  19. A liquid film model of tetrakaidecahedral packing to account for the establishment of epidermal cell columns.

    PubMed

    Menton, D N

    1976-05-01

    The possibility that the organization of cells into columns in the mammalian epidermis may be a result of the close packing of these cells has been investigated in a model system involving the association of randomly produced soap bubbles into a stable froth. Upon floating to the surface of a liquid, soap bubbles have been found to spontaneously assemble into precise columns of interdigitating bubbles. The tetrakaidecahedral shape and the spatial configuration of these bubbles closely resemble those of stacked epidermal cells, although the columns of a froth were oriented at a 60° angle to their substratum rather than at right angles as occurs in the epidermal cell columns. These observations lend support to the theory that the organization of the cells in the epidermis into columns is due to the keratocytes assuming a minimum-surface, close-packing array. Such an organizing mechanism would be independent of both positional control of the underlying mitoses and active guidance of the cells as they become superficially displaced within the epidermis. The observation that a high rate of cell turnover is incompatible with the epidermal column structure may be related to the finding that rapidly produced soap bubbles do not, at least initially, assemble into a columnar array. PMID:1270835

  20. Accounting for uncertainty due to 'last observation carried forward' outcome imputation in a meta-analysis model.

    PubMed

    Dimitrakopoulou, Vasiliki; Efthimiou, Orestis; Leucht, Stefan; Salanti, Georgia

    2015-02-28

    Missing outcome data are a problem commonly observed in randomized control trials that occurs as a result of participants leaving the study before its end. Missing such important information can bias the study estimates of the relative treatment effect and consequently affect the meta-analytic results. Therefore, methods on manipulating data sets with missing participants, with regard to incorporating the missing information in the analysis so as to avoid the loss of power and minimize the bias, are of interest. We propose a meta-analytic model that accounts for possible error in the effect sizes estimated in studies with last observation carried forward (LOCF) imputed patients. Assuming a dichotomous outcome, we decompose the probability of a successful unobserved outcome taking into account the sensitivity and specificity of the LOCF imputation process for the missing participants. We fit the proposed model within a Bayesian framework, exploring different prior formulations for sensitivity and specificity. We illustrate our methods by performing a meta-analysis of five studies comparing the efficacy of amisulpride versus conventional drugs (flupenthixol and haloperidol) on patients diagnosed with schizophrenia. Our meta-analytic models yield estimates similar to meta-analysis with LOCF-imputed patients. Allowing for uncertainty in the imputation process, precision is decreased depending on the priors used for sensitivity and specificity. Results on the significance of amisulpride versus conventional drugs differ between the standard LOCF approach and our model depending on prior beliefs on the imputation process. Our method can be regarded as a useful sensitivity analysis that can be used in the presence of concerns about the LOCF process.
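
    The arithmetic at the heart of the decomposition, sketched for a single arm with hypothetical numbers: among LOCF-imputed patients an observed "success" mixes true successes captured with some sensitivity and true failures misclassified with one minus the specificity of the imputation process.

    ```python
    def observed_success_prob(p_true, p_missing, sensitivity, specificity):
        """Probability of an observed 'success' when dropouts are LOCF-imputed.

        Completers contribute their true outcome; for imputed patients the
        LOCF value matches the true outcome only imperfectly, which is
        characterised here by a sensitivity and specificity (illustrative).
        """
        p_imputed_success = (sensitivity * p_true
                             + (1.0 - specificity) * (1.0 - p_true))
        return (1.0 - p_missing) * p_true + p_missing * p_imputed_success

    # True response rate 0.40, 30% dropout, LOCF 'correct' 80% / 90% of the time.
    print(observed_success_prob(0.40, 0.30, sensitivity=0.8, specificity=0.9))
    ```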

  1. A field-scale infiltration model accounting for spatial heterogeneity of rainfall and soil saturated hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Morbidelli, Renato; Corradini, Corrado; Govindaraju, Rao S.

    2006-04-01

    This study first explores the role of spatial heterogeneity, in both the saturated hydraulic conductivity Ks and rainfall intensity r, on the integrated hydrological response of a natural slope. On this basis, a mathematical model for estimating the expected areal-average infiltration is then formulated. Both Ks and r are considered as random variables with assessed probability density functions. The model relies upon a semi-analytical component, which describes the directly infiltrated rainfall, and an empirical component, which accounts further for the infiltration of surface water running downslope into pervious soils (the run-on effect). Monte Carlo simulations over a clay loam soil and a sandy loam soil were performed for constructing the ensemble averages of field-scale infiltration used for model validation. The model produced very accurate estimates of the expected field-scale infiltration rate, as well as of the outflow generated by significant rainfall events. Furthermore, the two model components were found to interact appropriately for different weights of the two infiltration mechanisms involved.
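
    A Monte Carlo stand-in for the ensemble-average calculation, assuming lognormal Ks and rainfall intensity and a crude point-infiltration rule of min(r, Ks); it ignores the run-on component and the semi-analytical infiltration equations of the actual model, and all distribution parameters are hypothetical.

    ```python
    import numpy as np

    def expected_field_infiltration(mean_ln_ks=-13.0, sd_ln_ks=1.0,
                                    rain_mean=8.0, rain_cv=0.4,
                                    n=100_000, seed=1):
        """Ensemble-average infiltration rate (mm/h) over a heterogeneous field.

        Ks is lognormal (parameterised here in ln m/s), rainfall intensity is
        lognormal with the given mean and CV, and each point infiltrates at
        min(r, Ks). All values are illustrative.
        """
        rng = np.random.default_rng(seed)
        ks_mm_h = np.exp(rng.normal(mean_ln_ks, sd_ln_ks, n)) * 3.6e6  # m/s -> mm/h
        sigma = np.sqrt(np.log(1.0 + rain_cv ** 2))
        mu = np.log(rain_mean) - 0.5 * sigma ** 2
        r = np.exp(rng.normal(mu, sigma, n))                           # mm/h
        return float(np.mean(np.minimum(r, ks_mm_h)))

    print(f"expected areal-average infiltration: "
          f"{expected_field_infiltration():.2f} mm/h")
    ```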

  2. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili.

    PubMed

    Xiao, Ke; Malvankar, Nikhil S; Shu, Chuanjun; Martz, Eric; Lovley, Derek R; Sun, Xiao

    2016-03-22

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic like conductivity of the pili that has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models capable of explaining the observed conductive properties of Geobacter pili are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics.

  3. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili

    PubMed Central

    Xiao, Ke; Malvankar, Nikhil S.; Shu, Chuanjun; Martz, Eric; Lovley, Derek R.; Sun, Xiao

    2016-01-01

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic like conductivity of the pili that has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models capable of explaining the observed conductive properties of Geobacter pili are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics. PMID:27001169

  4. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili.

    PubMed

    Xiao, Ke; Malvankar, Nikhil S; Shu, Chuanjun; Martz, Eric; Lovley, Derek R; Sun, Xiao

    2016-01-01

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic like conductivity of the pili that has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models capable of explaining the observed conductive properties of Geobacter pili are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics. PMID:27001169

  5. Forest summer albedo is sensitive to species and thinning: how should we account for this in Earth system models?

    NASA Astrophysics Data System (ADS)

    Otto, J.; Berveiller, D.; Bréon, F.-M.; Delpierre, N.; Geppert, G.; Granier, A.; Jans, W.; Knohl, A.; Kuusk, A.; Longdoz, B.; Moors, E.; Mund, M.; Pinty, B.; Schelhaas, M.-J.; Luyssaert, S.

    2014-04-01

    Although forest management is one of the instruments proposed to mitigate climate change, the relationship between forest management and canopy albedo has been ignored so far by climate models. Here we develop an approach that could be implemented in Earth system models. A stand-level forest gap model is combined with a canopy radiation transfer model and satellite-derived model parameters to quantify the effects of forest thinning on summertime canopy albedo. This approach reveals which parameter has the largest effect on summer canopy albedo: we examined the effects of three forest species (pine, beech, oak) and four thinning strategies (light to intense thinning regimes) with a constant forest floor albedo, and five different solar zenith angles at five different sites (40° N 9° E to 60° N 9° E). During stand establishment, summertime canopy albedo is driven by tree species. In the later stages of stand development, the effect of tree species on summertime canopy albedo decreases in favour of an increasing influence of forest thinning. These trends continue until the end of the rotation, where thinning explains up to 50% of the variance in near-infrared albedo and up to 70% of the variance in visible canopy albedo. The absolute summertime canopy albedo of all species ranges from 0.03 to 0.06 (visible) and 0.20 to 0.28 (near-infrared); thus the albedo needs to be parameterised at the species level. In addition, Earth system models need to account for forest management in such a way that structural changes in the canopy are described by changes in leaf area index and crown volume (maximum change of 0.02 visible and 0.05 near-infrared albedo) and that the expression of albedo depends on the solar zenith angle (maximum change of 0.02 visible and 0.05 near-infrared albedo). Earth system models taking into account these parameters would not only be able to examine the spatial effects of forest management but also the total effects of forest management on climate.

  6. Accounting for geochemical alterations of caprock fracture permeability in basin-scale models of leakage from geologic CO2 reservoirs

    NASA Astrophysics Data System (ADS)

    Guo, B.; Fitts, J. P.; Dobossy, M.; Bielicki, J. M.; Peters, C. A.

    2012-12-01

    Climate mitigation, public acceptance, and energy markets demand that potential CO2 leakage rates from geologic storage reservoirs be predicted to be low and be known to a high level of certainty. Current approaches to predicting CO2 leakage rates assume constant permeability of leakage pathways (e.g., wellbores, faults, fractures). A reactive transport model was developed to account for geochemical alterations that result in permeability evolution of leakage pathways. The one-dimensional reactive transport model was coupled with the basin-scale Estimating Leakage Semi-Analytical (ELSA) model to simulate CO2 and brine leakage through vertical caprock pathways for different CO2 storage reservoir sites and injection scenarios within the Mt. Simon and St. Peter sandstone formations of the Michigan basin. In the numerical reactive transport model, reactions driven by CO2-acidified brine dissolve calcite, expanding leakage pathways and increasing their permeability. A geochemical model compared kinetic and equilibrium treatments of calcite dissolution within each grid block for each time step. For a single fracture, we investigated the effect of the reactions on leakage by performing sensitivity analyses of fracture geometry, CO2 concentration, calcite abundance, initial permeability, and pressure gradient. Assuming that calcite dissolution reaches equilibrium at each time step produces unrealistic scenarios of buffering and permeability evolution within fractures. Therefore, the reactive transport model with a kinetic treatment of calcite dissolution was coupled to the ELSA model and used to compare brine and CO2 leakage rates at a variety of potential geologic storage sites within the Michigan basin. The results are used to construct maps based on the susceptibility to geochemically driven increases in leakage rates. These maps should provide useful and easily communicated inputs into decision-making processes for siting geologic CO2
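
    A sketch of the kinetic treatment contrasted with the equilibrium one: a simple first-order surface rate law for calcite dissolution in a single parallel-plate fracture cell, with the dissolved volume fed back into the aperture and permeability via the cubic law. The rate constant, saturation state, and geometry are hypothetical.

    ```python
    def dissolve_step(aperture_m, omega, dt_s, k_rate=1e-8, v_molar=3.69e-5):
        """One kinetic calcite-dissolution step for a parallel-plate fracture.

        Surface rate law r = k * (1 - omega), with omega the calcite saturation
        state of the CO2-acidified brine (mol m-2 s-1). Each fracture wall
        retreats by r * Vm * dt, so the aperture b grows by twice that, and
        the cubic law gives permeability b^2 / 12. Values are illustrative.
        """
        rate = k_rate * max(0.0, 1.0 - omega)      # no precipitation branch
        retreat = rate * v_molar * dt_s            # metres of wall retreat
        b = aperture_m + 2.0 * retreat
        return b, b ** 2 / 12.0

    b = 1.0e-4                                     # initial 0.1 mm aperture
    for year in range(10):                         # ten yearly steps, omega = 0.2
        b, k_perm = dissolve_step(b, omega=0.2, dt_s=3.15e7)
    print(f"aperture after 10 yr: {b * 1e3:.3f} mm, permeability: {k_perm:.2e} m^2")
    ```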

  7. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models.

    PubMed

    Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-03-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for population structure. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied and nor have there been methods designed to correct for population structure on GEI statistics. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs and use both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure on GEI statistics. We find that our approach effectively controls population structure on statistics for GEIs as well as for genetic variants. PMID:26943367
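
    A small simulation illustrating the mechanism, under assumptions chosen for clarity: two subpopulations differ in allele frequency, environment, and mean phenotype, but no gene-by-environment interaction is simulated. A naive OLS interaction test can nonetheless return an inflated statistic, while adding an explicit structure term (standing in here for the mixed-model random effect) removes the inflation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 20_000
    pop = rng.integers(0, 2, n)                        # two subpopulations
    g = rng.binomial(2, np.where(pop == 0, 0.1, 0.6))  # allele frequency differs
    e = rng.normal(loc=pop * 1.0, scale=1.0)           # environment differs too
    y = 2.0 * pop + rng.normal(size=n)                 # ancestry effect only,
                                                       # no true GxE simulated

    def t_of_last_column(design, y):
        """OLS t-statistic of the last column (here the g*e interaction term)."""
        beta = np.linalg.lstsq(design, y, rcond=None)[0]
        resid = y - design @ beta
        sigma2 = resid @ resid / (len(y) - design.shape[1])
        cov = sigma2 * np.linalg.inv(design.T @ design)
        return beta[-1] / np.sqrt(cov[-1, -1])

    ones = np.ones(n)
    naive = np.column_stack([ones, g, e, g * e])
    adjusted = np.column_stack([ones, pop, g, e, g * e])   # structure covariate
    print("naive GxE t-statistic:    ", round(t_of_last_column(naive, y), 2))
    print("structure-adjusted GxE t: ", round(t_of_last_column(adjusted, y), 2))
    ```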

  8. Accounting for intracell flow in models with emphasis on water table recharge and stream-aquifer interaction. 2. A procedure

    USGS Publications Warehouse

    Jorgensen, D.G.; Signor, D.C.; Imes, J.L.

    1989-01-01

    Intercepted intracell flow, especially if the cell includes water table recharge and a stream (sink), can result in significant model error if not accounted for. A procedure utilizing the net flow per cell (Fn) that accounts for intercepted intracell flow can be used for both steady-state and transient simulations. Germane to the procedure is the determination of the ratio of the area of influence of the interior sink to the area of the cell (Ai/Ac). Ai is the area in which water table recharge has the potential to be intercepted by the sink. Determining Ai/Ac requires either a detailed water table map or observation of stream conditions within the cell. A proportioning parameter M, which is equal to 1 or slightly less and is a function of cell geometry, is used to determine how much of the water that has the potential for interception is actually intercepted by the sink within the cell. Also germane to the procedure is the determination of the flow across the streambed (Fs), which arises from the difference in head between the water level in the stream and the potentiometric surface of the aquifer underlying the streambed and is not directly a function of cell size. -from Authors
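
    A sketch of how a net-flow-per-cell term might be assembled from the quantities named above (cell recharge, the interception ratio Ai/Ac, the proportioning parameter M, and the head-dependent streambed flow Fs). The exact published expression is not reproduced here, so this combination is an assumption made only to show the bookkeeping.

    ```python
    def net_flow_per_cell(recharge_rate, cell_area, ai_over_ac, m_factor,
                          streambed_conductance, head_aquifer, head_stream):
        """Hypothetical net-flow-per-cell (Fn) term for a cell with a stream.

        Recharge falling inside the sink's area of influence (fraction Ai/Ac)
        is assumed intercepted in proportion m_factor (<= 1); the remainder
        reaches the regional aquifer. Fs is the head-dependent flow across
        the streambed. Not the authors' published formula.
        """
        total_recharge = recharge_rate * cell_area                 # L3/T
        intercepted = m_factor * ai_over_ac * total_recharge
        fs = streambed_conductance * (head_aquifer - head_stream)  # L3/T
        return (total_recharge - intercepted) - fs                 # net to aquifer

    print(net_flow_per_cell(recharge_rate=2e-4, cell_area=1.0e6, ai_over_ac=0.35,
                            m_factor=0.9, streambed_conductance=50.0,
                            head_aquifer=101.2, head_stream=100.7))
    ```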

  9. Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing.

    PubMed

    Tan, Cheston; Poggio, Tomaso

    2016-01-01

    Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled "holistic processing", while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, "neural tuning size", is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology.

  10. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models.

    PubMed

    Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-03-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for population structure. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied and nor have there been methods designed to correct for population structure on GEI statistics. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs and use both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure on GEI statistics. We find that our approach effectively controls population structure on statistics for GEIs as well as for genetic variants.
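
    A minimal sketch of the mixed-model idea described above, assuming the kinship matrix K and the two variance components have already been estimated (for example by REML on the null model); the function name and the simple Wald test are illustrative, not the authors' implementation.

        import numpy as np

        def gxe_mixed_model_test(y, snp, env, K, sigma_g2, sigma_e2):
            """Wald test of a SNP-by-environment interaction under a mixed model.

            Population structure is absorbed by a random effect with covariance
            sigma_g2 * K; sigma_g2 and sigma_e2 are assumed to be pre-estimated.
            """
            n = len(y)
            V = sigma_g2 * K + sigma_e2 * np.eye(n)   # marginal covariance of y
            L = np.linalg.cholesky(V)
            X = np.column_stack([np.ones(n), snp, env, snp * env])
            Xw = np.linalg.solve(L, X)                # whiten so errors become iid
            yw = np.linalg.solve(L, y)
            beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
            resid = yw - Xw @ beta
            s2 = resid @ resid / (n - X.shape[1])
            cov_beta = s2 * np.linalg.inv(Xw.T @ Xw)
            z = beta[3] / np.sqrt(cov_beta[3, 3])     # GEI coefficient / its SE
            return beta[3], z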

  11. A sampling design and model for estimating abundance of Nile crocodiles while accounting for heterogeneity of detectability of multiple observers

    USGS Publications Warehouse

    Shirley, Matthew H.; Dorazio, Robert M.; Abassery, Ekramy; Elhady, Amr A.; Mekki, Mohammed S.; Asran, Hosni H.

    2012-01-01

    As part of the development of a management program for Nile crocodiles in Lake Nasser, Egypt, we used a dependent double-observer sampling protocol with multiple observers to compute estimates of population size. To analyze the data, we developed a hierarchical model that allowed us to assess variation in detection probabilities among observers and survey dates, as well as account for variation in crocodile abundance among sites and habitats. We conducted surveys from July 2008-June 2009 in 15 areas of Lake Nasser that were representative of 3 main habitat categories. During these surveys, we sampled 1,086 km of lake shore wherein we detected 386 crocodiles. Analysis of the data revealed significant variability in both inter- and intra-observer detection probabilities. Our raw encounter rate was 0.355 crocodiles/km. When we accounted for observer effects and habitat, we estimated a surface population abundance of 2,581 (2,239-2,987, 95% credible intervals) crocodiles in Lake Nasser. Our results underscore the importance of well-trained, experienced monitoring personnel in order to decrease heterogeneity in intra-observer detection probability and to better detect changes in the population based on survey indices. This study will assist the Egyptian government establish a monitoring program as an integral part of future crocodile harvest activities in Lake Nasser
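
    The hierarchical model itself is beyond a short snippet, but the core dependent double-observer logic can be sketched as below, assuming (purely for illustration) that both observers share one detection probability; the example counts are an invented split, not the survey's.

        def double_observer_estimate(n1, n2):
            """Removal-type estimate from a dependent double-observer count.

            n1: animals detected by the primary observer.
            n2: animals the secondary observer saw that the primary missed.
            With a common detection probability p, E[n2] / E[n1] = 1 - p.
            """
            p_hat = 1.0 - n2 / n1                    # detection probability
            p_any = 1.0 - (1.0 - p_hat) ** 2         # detected by at least one observer
            n_hat = (n1 + n2) / p_any                # abundance in the surveyed strip
            return p_hat, n_hat

        # Illustrative split of detections between the two observers.
        print(double_observer_estimate(300, 86))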

  12. Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing.

    PubMed

    Tan, Cheston; Poggio, Tomaso

    2016-01-01

    Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled "holistic processing", while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, "neural tuning size", is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology. PMID:26985989

  13. Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing

    PubMed Central

    Tan, Cheston; Poggio, Tomaso

    2016-01-01

    Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled “holistic processing”, while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, “neural tuning size”, is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology. PMID:26985989

  14. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models

    PubMed Central

    Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-01-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for population structure. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied and nor have there been methods designed to correct for population structure on GEI statistics. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs and use both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure on GEI statistics. We find that our approach effectively controls population structure on statistics for GEIs as well as for genetic variants. PMID:26943367

  15. Impact of accounting for coloured noise in radar altimetry data on a regional quasi-geoid model

    NASA Astrophysics Data System (ADS)

    Farahani, H. H.; Slobbe, D. C.; Klees, R.; Seitz, Kurt

    2016-07-01

    We study the impact of an accurate computation and incorporation of coloured noise in radar altimeter data when computing a regional quasi-geoid model using least-squares techniques. Our test area comprises the Southern North Sea including the Netherlands, Belgium, and parts of France, Germany, and the UK. We perform the study by modelling the disturbing potential with spherical radial base functions. To that end, we use the traditional remove-compute-restore procedure with a recent GRACE/GOCE static gravity field model. Apart from radar altimeter data, we use terrestrial, airborne, and shipboard gravity data. Radar altimeter sea surface heights are corrected for the instantaneous dynamic topography and used in the form of along-track quasi-geoid height differences. Noise in these data is estimated using repeat-track and post-fit residual analysis techniques and then modelled as an autoregressive moving-average process. Quasi-geoid models are computed with and without taking the modelled coloured noise into account. The difference between them is used as a measure of the impact of coloured noise in radar altimeter along-track quasi-geoid height differences on the estimated quasi-geoid model. The impact strongly depends on the availability of shipboard gravity data. If no such data are available, the impact may attain values exceeding 10 centimetres in particular areas. In case shipboard gravity data are used, the impact is reduced, though it still attains values of several centimetres. We use geometric quasi-geoid heights from GPS/levelling data at height markers as control data to analyse the quality of the quasi-geoid models. The quasi-geoid model computed using a model of the coloured noise in radar altimeter along-track quasi-geoid height differences shows in some areas a significant improvement over a model that assumes white noise in these data. However, the interpretation in other areas remains a challenge due to the limited quality of the control data.
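
    As a rough sketch of how coloured noise enters the estimation, the snippet below builds an AR(1) covariance (a simple stand-in for the ARMA model estimated in the paper) and uses it to weight a least-squares fit; matrix names and parameters are illustrative.

        import numpy as np

        def ar1_covariance(n, sigma, phi):
            """Covariance of an AR(1) noise process: gamma(k) = sigma^2 * phi^|k| / (1 - phi^2)."""
            lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
            return sigma ** 2 / (1.0 - phi ** 2) * phi ** lags

        def coloured_noise_lsq(A, y, C):
            """Least-squares coefficients with full noise covariance C;
            C = sigma^2 * I recovers the white-noise solution."""
            L = np.linalg.cholesky(C)
            Aw = np.linalg.solve(L, A)
            yw = np.linalg.solve(L, y)
            x, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
            return x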

  16. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as how pollution concentrations are estimated and how spatial autocorrelation is modelled. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
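
    A compact sketch of the averaging step, assuming (for illustration only) Gaussian linear models and BIC-based weights rather than the spatial regression models for small-area mortality described above; variable names are placeholders.

        import numpy as np
        from itertools import combinations

        def bma_pollution_effect(y, no2, deprivation):
            """Average the NO2 coefficient over all subsets of deprivation
            covariates, weighting each candidate model by exp(-BIC/2)."""
            n = len(y)
            cols = range(deprivation.shape[1])
            bics, effects = [], []
            for r in range(deprivation.shape[1] + 1):
                for subset in combinations(cols, r):
                    X = np.column_stack([np.ones(n), no2] +
                                        [deprivation[:, j] for j in subset])
                    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                    rss = np.sum((y - X @ beta) ** 2)
                    bics.append(n * np.log(rss / n) + X.shape[1] * np.log(n))
                    effects.append(beta[1])          # NO2 coefficient in this model
            bics, effects = np.array(bics), np.array(effects)
            w = np.exp(-0.5 * (bics - bics.min()))
            return np.sum(w * effects) / w.sum()     # model-averaged NO2 effect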

  17. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as how pollution concentrations are estimated and how spatial autocorrelation is modelled. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. PMID:27494960

  18. Thermal creep model for CWSR zircaloy-4 cladding taking into account the annealing of the irradiation hardening

    SciTech Connect

    Cappelaere, Chantal; Limon, Roger; Duguay, Chrstelle; Pinte, Gerard; Le Breton, Michel; Bouffioux, Pol; Chabretou, Valerie; Miquet, Alain

    2012-02-15

    After irradiation and cooling in a pool, spent nuclear fuel assemblies are either transported for wet storage to a devoted site or loaded in casks for dry storage. During dry transportation or at the beginning of dry storage, the cladding is expected to be submitted to creep deformation under the hoop stress induced by the internal pressure of the fuel rod. Thermal creep is a potential mechanism that might lead to cladding failure. A new creep model was developed, based on a database of creep tests on as-received and irradiated cold-worked stress-relieved Zircaloy-4 cladding over a wide range of temperatures (310 degrees C to 470 degrees C) and hoop stresses (80 to 260 MPa). Based on three laws (a flow law, a strain-hardening recovery law, and a law for the annealing of irradiation hardening), this model allows the simulation of not only the transient creep and the steady-state creep, but also the early creep acceleration observed on irradiated samples tested in severe conditions, which was not taken into account in the previous models. The extrapolation of the creep model to the conditions of very long-term creep tests is reassuring, proving the robustness of the chosen formalism. The creep model has been assessed under progressively decreasing stress conditions, more representative of a transport. Set up to predict the cladding creep behavior under variable temperature and stress conditions, this model can easily be implemented into codes in order to simulate the thermomechanical behavior of spent fuel rods in various scenarios of postirradiation phases. (authors)

  19. Crystal Plasticity Constitutive Model for Multiphase Advanced High Strength Steels to Account for Phase Transformation and Yield Point Elongation

    NASA Astrophysics Data System (ADS)

    Park, Taejoon; Pourboghrat, Farhang

    2016-08-01

    A constitutive law was developed based on rate-independent crystal plasticity to account for the mechanical behavior of multiphase advanced high strength steels. Martensitic phase transformation induced by the plastic deformation of the retained austenite was represented by considering the lattice-invariant shear deformation and the orientation relationship between the parent austenite and the transformed martensite. The stress-dependent transformation kinetics were represented by adopting a stress-state-dependent volume fraction evolution law. The plastic deformation of the austenite was determined so as to minimize the energy associated with the work during the phase transformation. In addition to the martensitic phase transformation, yield point elongation and subsequent hardening along with inhomogeneous plastic deformation were also represented by developing a hardening stagnation model induced by the delayed dislocation density evolution.

  20. An energy-based model accounting for snow accumulation and snowmelt in a coniferous forest and in an open area

    NASA Astrophysics Data System (ADS)

    Matějka, Ondřej; Jeníček, Michal

    2016-04-01

    An energy balance approach was used to simulate snow water equivalent (SWE) evolution in an open area, a forest clearing and a coniferous forest during the winter seasons 2011/12 and 2012/13 in the Bystřice River basin (Krušné Mountains, Czech Republic). The aim was to describe the impact of vegetation on snow accumulation and snowmelt under different forest canopy structures and tree densities. Hemispherical photographs were used to describe the forest canopy structure. An energy balance model of snow accumulation and melt was set up and adjusted to account for the effects of the forest canopy on the driving meteorological variables. Leaf area index derived from 32 hemispherical photographs of vegetation and sky was used to implement the forest influence in the snow model. The model was evaluated using snow depth and SWE data measured at 16 localities in the winter seasons from 2011 to 2013. The model was able to reproduce the SWE evolution in both winter seasons beneath the forest canopy, in the forest clearing and in the open area. The SWE maximum at forest sites was 18% lower than in open areas and forest clearings. The contribution of shortwave radiation to the snowmelt rate was 50% lower in forest areas than in open areas due to the shading effect. The importance of turbulent fluxes was 30% lower at forest sites compared to openings because the wind speed was reduced to as little as 10% of the values at corresponding open areas. An indirect estimate of interception rates was derived: between 14 and 60% of snowfall was intercepted and sublimated in the forest canopy in both winter seasons. Based on the model results, an underestimation of solid precipitation (measured with a heated precipitation gauge) at the weather station Hřebečná was revealed. Snowfall was underestimated by 40% in winter season 2011/12 and by 13% in winter season 2012/13. Although the model formulation appeared sufficient for both analysed winter seasons, canopy effects on the longwave radiation and ground heat flux were not
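
    A minimal sketch of how open-area forcing can be adjusted to below-canopy values using leaf area index, assuming a Beer-law type extinction for shortwave radiation and a fixed wind reduction factor; the coefficients are illustrative, not the calibrated values of the study.

        import numpy as np

        def below_canopy_forcing(sw_open, wind_open, lai, k_ext=0.5, wind_factor=0.1):
            """Reduce open-area shortwave radiation and wind speed to
            below-canopy values as a function of leaf area index (LAI)."""
            tau = np.exp(-k_ext * lai)        # assumed canopy transmissivity for shortwave
            return tau * sw_open, wind_factor * wind_open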

  1. Mixture models of nucleotide sequence evolution that account for heterogeneity in the substitution process across sites and across lineages.

    PubMed

    Jayaswal, Vivek; Wong, Thomas K F; Robinson, John; Poladian, Leon; Jermiin, Lars S

    2014-09-01

    Molecular phylogenetic studies of homologous sequences of nucleotides often assume that the underlying evolutionary process was globally stationary, reversible, and homogeneous (SRH), and that a model of evolution with one or more site-specific and time-reversible rate matrices (e.g., the GTR rate matrix) is enough to accurately model the evolution of data over the whole tree. However, an increasing body of data suggests that evolution under these conditions is an exception, rather than the norm. To address this issue, several non-SRH models of molecular evolution have been proposed, but they either ignore heterogeneity in the substitution process across sites (HAS) or assume it can be modeled accurately using a predefined statistical distribution. As an alternative to these models of evolution, we introduce a family of mixture models that approximate HAS without the assumption of an underlying predefined statistical distribution. This family of mixture models is combined with non-SRH models of evolution that account for heterogeneity in the substitution process across lineages (HAL). We also present two algorithms for searching model space and identifying an optimal model of evolution that is less likely to over- or underparameterize the data. The performance of the two new algorithms was evaluated using alignments of nucleotides with 10 000 sites simulated under complex non-SRH conditions on a 25-tipped tree. The algorithms were found to be very successful, identifying the correct HAL model with a 75% success rate (the average success rate for assigning rate matrices to the tree's 48 edges was 99.25%) and, for the correct HAL model, identifying the correct HAS model with a 98% success rate. Finally, parameter estimates obtained under the correct HAL-HAS model were found to be accurate and precise. The merits of our new algorithms were illustrated with an analysis of 42 337 second codon sites extracted from a concatenation of 106 alignments of orthologous genes encoded by the nuclear

  2. Modelling of the physico-chemical behaviour of clay minerals with a thermo-kinetic model taking into account particles morphology in compacted material.

    NASA Astrophysics Data System (ADS)

    Sali, D.; Fritz, B.; Clément, C.; Michau, N.

    2003-04-01

    Modelling of fluid-mineral interactions is widely used in Earth Sciences studies to better understand the physicochemical processes involved and their long-term effect on the behaviour of materials. Numerical models simplify the processes but try to preserve their main characteristics. Therefore, the modelling results strongly depend on the quality of the data describing the initial physicochemical conditions for rock materials, fluids and gases, and on how realistically the processes are represented. Current geochemical models do not properly take into account rock porosity and permeability or the particle morphology of clay minerals. In compacted materials like those considered as barriers in waste repositories, low-permeability rocks like mudstones or compacted powders will be used: they contain mainly fine particles, and the geochemical models used for predicting their interactions with fluids tend to misjudge their surface areas, which are fundamental parameters in kinetic modelling. The purpose of this study was to improve how the particle morphology is taken into account in the thermo-kinetic code KINDIS and the reactive transport code KIRMAT. A new function was integrated in these codes, treating the reaction surface area as a volume-dependent parameter, and the calculated evolution of the mass balance in the system was coupled with the evolution of reactive surface areas. We carried out application exercises for numerical validation of these new versions of the codes and compared the results with those of the pre-existing thermo-kinetic code KINDIS. Several points are highlighted. Taking into account the evolution of the reactive surface area during simulation modifies the predicted mass transfers related to fluid-mineral interactions. Different secondary mineral phases are also observed during modelling. The evolution of the reactive surface parameter helps to solve the competition effects between different phases present in the system which are all able to fix the chemical

  3. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    SciTech Connect

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
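
    A small illustration of the data recoding behind a cause-specific hazard analysis (competing events treated as censoring), using the lifelines package; the toy data and column names are invented for the example, and the subdistribution-hazard analysis is not shown.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Toy data: time to first event, event type (A, B or C), one covariate.
        df = pd.DataFrame({
            "time": [3.1, 5.4, 2.2, 7.8, 1.4, 6.0, 4.3, 2.9, 8.1, 5.0, 3.6, 7.2],
            "type": ["A", "B", "A", "C", "A", "B", "A", "C", "A", "A", "B", "C"],
            "ependymoma": [1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0],
        })

        # Cause-specific hazard for event A: competing events B and C are censored.
        df["event_A"] = (df["type"] == "A").astype(int)

        cph = CoxPHFitter()
        cph.fit(df[["time", "event_A", "ependymoma"]],
                duration_col="time", event_col="event_A")
        cph.print_summary()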

  4. Integrated water resources management of the Ichkeul basin taking into account the durability of its wetland ecosystem using WEAP model

    NASA Astrophysics Data System (ADS)

    Shabou, M.; Lili-Chabaane, Z.; Gastli, W.; Chakroun, H.; Ben Abdallah, S.; Oueslati, I.; Lasram, F.; Laajimi, R.; Shaiek, M.; Romdhane, M. S.; Mnajja, A.

    2012-04-01

    The conservation of coastal wetlands in the Mediterranean area generally faces development issues. This is the case in Tunisia, where precipitation is irregular in time and space. For the equity of water use (drinking, irrigation), national-level planning allows water to be transferred from regions rich in water resources to poorer ones. This plan was initially drawn up in Tunisia without taking into account wetland ecosystems and their specificities. The main purpose of this study is to find a model able to integrate simultaneously the available resources and the various water demands within a watershed while taking into account the durability of the related wetland ecosystems. This is the case of the Ichkeul basin, situated in northern Tunisia, with an area of 2080 km2 and rainfall of about 600 mm/year. Downstream of this basin, the Ichkeul Lake is characterized by a double alternation of seasonal high water and low salinity in winter and spring and low water levels and high salinity in summer and autumn, which makes the Ichkeul an exceptional ecosystem. The originality of this lake-marsh hydrological system is related to the presence of aquatic vegetation in the lake and an especially rich and varied hygrophilic vegetation in the marshes, which constitutes the main source of food for large migrating water birds. After the construction of three dams on the principal rivers feeding the Ichkeul Lake, aiming particularly to supply local irrigation and the drinking water demand of cities in the north and east of Tunisia, freshwater inflow to the lake was greatly reduced, causing a hydrological disequilibrium that influences the ecological conditions of the different species. Therefore, to ensure the sustainability of water resources management, it is important to find a trade-off between the existing hydrological and ecological systems, taking into account the water demands of various users (drinking, irrigation, fishing, and

  5. Accountability Overboard

    ERIC Educational Resources Information Center

    Chieppo, Charles D.; Gass, James T.

    2009-01-01

    This article reports that special interest groups opposed to charter schools and high-stakes testing have hijacked Massachusetts's once-independent board of education and stand poised to water down the Massachusetts Comprehensive Assessment System (MCAS) tests and the accountability system they support. President Barack Obama and Massachusetts…

  6. Painless Accountability.

    ERIC Educational Resources Information Center

    Brown, R. W.; And Others

    The computerized Painless Accountability System is a performance objective system from which instructional programs are developed. Three main simplified behavioral response levels characterize this system: (1) cognitive, (2) psychomotor, and (3) affective domains. Each of these objectives are classified by one of 16 descriptors. The second major…

  7. Accounting Specialist.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This publication identifies 20 subjects appropriate for use in a competency list for the occupation of accounting specialist, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 20 units are as follows:…

  8. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system.

    PubMed

    Beckon, William N

    2016-07-01

    For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
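
    The lag-screening idea reduces to a simple loop: shift the exposure series by each candidate lag, regress the tissue concentration on the lagged series, and keep the lag giving the best fit. A sketch with made-up function and variable names follows.

        import numpy as np

        def best_lag(water_conc, tissue_conc, max_lag):
            """Return the lag (in samples) that maximizes r^2 between lagged water
            concentration and tissue concentration (simple linear screening)."""
            best = (0, -np.inf)
            for lag in range(max_lag + 1):
                y = tissue_conc[lag:]
                x = water_conc[:len(water_conc) - lag] if lag else water_conc
                r = np.corrcoef(x[:len(y)], y)[0, 1]
                if r ** 2 > best[1]:
                    best = (lag, r ** 2)
            return best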

  9. Nonlinear analysis of a new car-following model accounting for the optimal velocity changes with memory

    NASA Astrophysics Data System (ADS)

    Peng, Guanghan; Lu, Weizhen; He, Hongdi; Gu, Zhenghua

    2016-11-01

    In this study, we construct a new car-following model that accounts for the effect of the optimal velocity changes with memory in terms of the full velocity difference (FVD) model. The stability condition and the mKdV equation concerning the optimal velocity changes with memory are derived through linear stability and nonlinear analyses, respectively. The space concerned can then be divided into three regions classified as stable, metastable and unstable. Moreover, it is shown that the effect of the optimal velocity changes with memory can enhance the stability of traffic flow. Furthermore, the numerical results verify that not only the sensitivity parameter of the driver's memory of optimal velocity changes but also the memory step can effectively stabilize the traffic flow. In addition, the stability of traffic flow is strengthened by increasing the memory step-size of the optimal velocity changes and the intensity of the driver's memory of such changes. Most importantly, the effect of the optimal velocity changes with memory may avoid the disadvantage of historical information that decreases the stability of traffic flow on the road.
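
    A rough numerical sketch of an FVD-type model extended with a memory term, where the extra acceleration is taken proportional to the change of the optimal velocity over the last few time steps; this specific functional form, the optimal-velocity function and all parameter values are assumptions for illustration and may differ from the paper's formulation.

        import numpy as np

        def optimal_velocity(dx, v_max=2.0, h_c=4.0):
            # A commonly used optimal-velocity function.
            return 0.5 * v_max * (np.tanh(dx - h_c) + np.tanh(h_c))

        def simulate_ring(n_cars=50, road=200.0, steps=5000, dt=0.1,
                          a=0.8, lam=0.3, gamma=0.2, mem=10):
            """Euler integration of dv/dt = a*(V(dx) - v) + lam*dv
            + gamma*(V(dx, t) - V(dx, t - mem*dt)) on a circular road."""
            rng = np.random.default_rng(0)
            x = np.linspace(0.0, road, n_cars, endpoint=False) + 0.1 * rng.random(n_cars)
            v = optimal_velocity(np.full(n_cars, road / n_cars))
            history = []
            for _ in range(steps):
                dx = (np.roll(x, -1) - x) % road        # headways on the ring
                dv = np.roll(v, -1) - v                 # velocity differences
                ov = optimal_velocity(dx)
                history.append(ov)
                ov_past = history[-mem] if len(history) >= mem else history[0]
                acc = a * (ov - v) + lam * dv + gamma * (ov - ov_past)
                v = np.clip(v + acc * dt, 0.0, None)
                x = (x + v * dt) % road
            return x, v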

  10. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system.

    PubMed

    Beckon, William N

    2016-07-01

    For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment). PMID:27149556

  11. A simple arc column model that accounts for the relationship between voltage, current and electrode gap during VAR

    SciTech Connect

    Williamson, R.L.

    1997-02-01

    Mean arc voltage is a process parameter commonly used in vacuum arc remelting (VAR) control schemes. The response of this parameter to changes in melting current (I) and electrode gap (ge) at constant pressure may be accurately described by an equation of the form V = V0 + c1*ge*I + c2*ge^2 + c3*I^2, where c1, c2 and c3 are constants, and where the non-linear terms generally constitute a relatively small correction. If the non-linear terms are ignored, the equation has the form of Ohm's law with a constant offset (V0), c1*ge playing the role of resistance. This implies that the arc column may be treated approximately as a simple resistor during constant-current VAR, the resistance changing linearly with ge. The VAR furnace arc is known to originate from multiple cathode spot clusters situated randomly on the electrode tip surface. Each cluster marks a point of exit for conduction electrons leaving the cathode surface and entering the electrode gap. Because the spot clusters are highly localized on the cathode surface, each gives rise to an arc column that may be considered to operate independently of the other local arc columns. This approximation is used to develop a model that accounts for the observed arc voltage dependence on electrode gap at constant current. Local arc column resistivity is estimated from elementary plasma physics and used to test the model for consistency by using it to predict the local column heavy particle density. Furthermore, it is shown that the local arc column resistance increases as the particle density increases. This is used to account for the common observation that the arc stiffens with increasing current, i.e. the arc voltage becomes more sensitive to changes in electrode gap as the melting current is increased. This explains why arc voltage is an accurate electrode gap indicator for high-current VAR processes but not low-current VAR processes.
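
    Since the quoted relation V = V0 + c1*ge*I + c2*ge^2 + c3*I^2 is linear in its coefficients, it can be fitted to furnace data by ordinary least squares; the sketch below assumes arrays of measured voltage, current and electrode gap are available under illustrative names.

        import numpy as np

        def fit_arc_voltage(V, I, ge):
            """Fit V = V0 + c1*ge*I + c2*ge^2 + c3*I^2 by least squares."""
            A = np.column_stack([np.ones_like(V), ge * I, ge ** 2, I ** 2])
            (V0, c1, c2, c3), *_ = np.linalg.lstsq(A, V, rcond=None)
            return V0, c1, c2, c3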

  12. Emerging accounting trends accounting for leases.

    PubMed

    Valletta, Robert; Huggins, Brian

    2010-12-01

    A new model for lease accounting can have a significant impact on hospitals and healthcare organizations. The new approach proposes a "right-of-use" model that involves complex estimates and significant administrative burden. Hospitals and health systems that draw heavily on lease arrangements should start preparing for the new approach now even though guidance and a final rule are not expected until mid-2011. This article highlights a number of considerations from the lessee point of view.

  13. Response function theories that account for size distribution effects - A review. [mathematical models concerning composite propellant heterogeneity effects on combustion instability

    NASA Technical Reports Server (NTRS)

    Cohen, N. S.

    1980-01-01

    The paper presents theoretical models developed to account for the heterogeneity of composite propellants in expressing the pressure-coupled combustion response function. It is noted that the model of Lengelle and Williams (1968) furnishes a viable basis to explain the effects of heterogeneity.

  14. Developing a Global Model of Accounting Education and Examining IES Compliance in Australia, Japan, and Sri Lanka

    ERIC Educational Resources Information Center

    Watty, Kim; Sugahara, Satoshi; Abayadeera, Nadana; Perera, Luckmika

    2013-01-01

    The introduction of International Education Standards (IES) signals a clear move by the International Accounting Education Standards Board (IAESB) to ensure high quality standards in professional accounting education at a global level. This study investigated how IES are perceived and valued by member bodies and academics in three countries:…

  15. Self-consistent modeling of induced magnetic field in Titan's atmosphere accounting for the generation of Schumann resonance

    NASA Astrophysics Data System (ADS)

    Béghin, Christian

    2015-02-01

    This model is worked out in the framework of physical mechanisms proposed in previous studies accounting for the generation and observation of an atypical Schumann Resonance (SR) during the descent of the Huygens Probe in Titan's atmosphere on 14 January 2005. While Titan stays inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface region between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere from the ionopause down to the Moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be allowed by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. However, a first cause of diffusion of the ELF magnetic components is probably the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with the measurements of the Huygens wave-field strength. Pending the expected future in-situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.

  16. Accountability and Primary Healthcare

    PubMed Central

    Mukhi, Shaheena; Barnsley, Jan; Deber, Raisa B.

    2014-01-01

    This paper examines the accountability structures within primary healthcare (PHC) in Ontario; in particular, who is accountable for what and to whom, and the policy tools being used. Ontario has implemented a series of incremental reforms, using expenditure policy instruments, enforced through contractual agreements to provide a defined set of publicly financed services that are privately delivered, most often by family physicians. The findings indicate that reporting, funding, evaluation and governance accountability requirements vary across service provider models. Accountability to the funder and patients is most common. Agreements, incentives and compensation tools have been used but may be insufficient to ensure parties are being held responsible for their activities related to stated goals. Clear definitions of various governance structures, a cohesive approach to monitoring critical performance indicators and associated improvement strategies are important elements in operationalizing accountability and determining whether goals are being met. PMID:25305392

  17. Accountability and primary healthcare.

    PubMed

    Mukhi, Shaheena; Barnsley, Jan; Deber, Raisa B

    2014-09-01

    This paper examines the accountability structures within primary healthcare (PHC) in Ontario; in particular, who is accountable for what and to whom, and the policy tools being used. Ontario has implemented a series of incremental reforms, using expenditure policy instruments, enforced through contractual agreements to provide a defined set of publicly financed services that are privately delivered, most often by family physicians. The findings indicate that reporting, funding, evaluation and governance accountability requirements vary across service provider models. Accountability to the funder and patients is most common. Agreements, incentives and compensation tools have been used but may be insufficient to ensure parties are being held responsible for their activities related to stated goals. Clear definitions of various governance structures, a cohesive approach to monitoring critical performance indicators and associated improvement strategies are important elements in operationalizing accountability and determining whether goals are being met. PMID:25305392

  18. Low-level and NARM radioactive wastes. Model documentation: accounting model for PRESTO-EPA-POP, PRESTO-EPA-DEEP, and PRESTO-EPA-BRC. Methodology and users manual. Final report

    SciTech Connect

    Rogers, V.; Hung, C.

    1987-12-01

    The accounting model was used as a utility model for assessing the cumulative health effects to the general population residing in the downstream regional water basin as a result of the disposal of LLW when a unit response analysis method is used. The utility model is specifically designed to assess the cumulative population health effects in conjunction with the PRESTO-EPA-POP, PRESTO-EPA-BRC, or PRESTO-EPA-DEEP model simply for the purpose of reducing the cost of analysis. Therefore, the assessment of the cumulative population health effects may also be conducted with one of the above appropriate models without the aid of this accounting model.

  19. Can the Five Factor Model of Personality Account for the Variability of Autism Symptom Expression? Multivariate Approaches to Behavioral Phenotyping in Adult Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Schwartzman, Benjamin C.; Wood, Jeffrey J.; Kapp, Steven K.

    2016-01-01

    The present study aimed to: determine the extent to which the five factor model of personality (FFM) accounts for variability in autism spectrum disorder (ASD) symptomatology in adults, examine differences in average FFM personality traits of adults with and without ASD and identify distinct behavioral phenotypes within ASD. Adults (N = 828;…

  20. Accounting for Unmeasured Population Substructure in Case-Control Studies of Genetic Association Using a Novel Latent-Class Model

    PubMed Central

    Satten, Glen A.; Flanders, W. Dana; Yang, Quanhe

    2001-01-01

    We propose a novel latent-class approach to detect and account for population stratification in a case-control study of association between a candidate gene and a disease. In our approach, population substructure is detected and accounted for using data on additional loci that are in linkage equilibrium within subpopulations but have alleles that vary in frequency between subpopulations. We have tested our approach using simulated data based on allele frequencies in 12 short tandem repeat (STR) loci in four populations in Argentina. PMID:11170894

  1. A mathematical model of the global processes of plastic degradation in the World Ocean with account for the surface temperature distribution

    NASA Astrophysics Data System (ADS)

    Bartsev, S. I.; Gitelson, J. I.

    2016-02-01

    The suggested model of plastic garbage degradation allows us to obtain an estimate of the stationary density of its distribution over the surface of the World Ocean, taking into account the temperature dependence of the degradation rate. The model also allows us to estimate the characteristic time periods of degradation of plastic garbage and the dynamics of the mean density variation as the mean rate of plastic garbage entry into the ocean varies.
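
    The abstract does not give the functional forms, so the sketch below simply assumes an Arrhenius-type degradation rate and a first-order balance between input flux and degradation to produce a stationary surface density; all constants are placeholders.

        import numpy as np

        R_GAS = 8.314  # J / (mol K)

        def degradation_rate(T_kelvin, A=1.0e6, Ea=70.0e3):
            """Assumed Arrhenius-type degradation rate (per year)."""
            return A * np.exp(-Ea / (R_GAS * T_kelvin))

        def stationary_density(input_flux, T_kelvin):
            """Steady state of d(rho)/dt = F - k(T) * rho, i.e. rho = F / k(T)."""
            return input_flux / degradation_rate(T_kelvin)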

  2. Accountability in Communication and Learning.

    ERIC Educational Resources Information Center

    Findley, Charles A.

    The purpose of this paper is to present a general overview of the nature of and the need for accountability in educational communication. To clarify the nature of a model that will facilitate accountability, a comparative analysis is constructed between a model for instructional design and a model for speech preparation. Detailed attention is…

  3. A Model and Method of Evaluative Accounts: Development Impact of the National Literacy Mission (NLM of India).

    ERIC Educational Resources Information Center

    Bhola, H. S.

    2002-01-01

    Studied the developmental impact of the National Literacy Mission of India, providing an evaluative account based on 97 evaluation studies. Compared findings with those from a 27-study synthesis of studies of effects of adult literacy efforts in Africa. Findings show the impact of literacy on the development of nations. (SLD)

  4. A physically meaningful equivalent circuit network model of a lithium-ion battery accounting for local electrochemical and thermal behaviour, variable double layer capacitance and degradation

    NASA Astrophysics Data System (ADS)

    von Srbik, Marie-Therese; Marinescu, Monica; Martinez-Botas, Ricardo F.; Offer, Gregory J.

    2016-09-01

    A novel electrical circuit analogy is proposed modelling electrochemical systems under realistic automotive operation conditions. The model is developed for a lithium-ion battery and is based on a pseudo-2D electrochemical model. Although cast in the framework familiar to application engineers, the model is essentially an electrochemical battery model: all variables have a direct physical interpretation and there is direct access to all states of the cell via the model variables (concentrations, potentials) for monitoring and control systems design. This is the first Equivalent Circuit Network-type model that directly tracks the evolution of species inside the cell. It accounts for complex electrochemical phenomena that are usually omitted in online battery performance predictors, such as variable double layer capacitance, the full current-overpotential relation and overpotentials due to mass transport limitations. The coupled electrochemical and thermal model accounts for capacity fade via a loss in active species and for power fade via an increase in resistive solid electrolyte passivation layers at both electrodes. The model's capability to simulate cell behaviour under dynamic events is validated against test procedures, such as standard battery testing load cycles for current rates up to 20 C, as well as realistic automotive drive cycle loads.
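
    For contrast with the physically based formulation described above, a bare-bones equivalent-circuit skeleton (one series resistance plus a single RC branch) is sketched below; its parameters are fixed illustrative constants, whereas in the model described above the corresponding quantities follow from electrochemical states.

        import numpy as np

        def simulate_thevenin(current, dt, ocv=3.7, r0=0.01, r1=0.015, c1=2000.0):
            """Terminal voltage of a one-RC-branch equivalent circuit:
            V = OCV - I*R0 - V1,  dV1/dt = -V1/(R1*C1) + I/C1."""
            v1 = 0.0
            v_out = np.empty(len(current), dtype=float)
            for k, i_k in enumerate(current):
                v1 += dt * (-v1 / (r1 * c1) + i_k / c1)   # RC branch state update
                v_out[k] = ocv - i_k * r0 - v1
            return v_out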

  5. Integrating a distributed hydrological model and SEEA-Water for improving water account and water allocation management under a climate change context.

    NASA Astrophysics Data System (ADS)

    Jauch, Eduardo; Almeida, Carina; Simionesei, Lucian; Ramos, Tiago; Neves, Ramiro

    2015-04-01

    The growing demand for water and situations of water scarcity and drought are a difficult problem for water managers to solve, with large repercussions for society as a whole. The complexity of this question is increased by trans-boundary river issues and by the environmental impacts of the solutions usually adopted to store water, such as reservoirs. To be able to answer society's requirements regarding water allocation in a sustainable way, managers must have a complete and clear picture of the present situation, as well as being able to understand changes in water dynamics over both the short and the long term. One of the tools available to managers is the System of Environmental-Economic Accounts for Water (SEEA-Water), a subsystem of SEEA focused on water accounts, developed by the United Nations Statistical Division (UNSD) in collaboration with the London Group on Environmental Accounting. This system provides, among other things, a set of tables and accounts for water and water-related emissions, organizing statistical data and making possible the derivation of indicators that can be used to assess the relations between the economy and the environment. One of the main issues with the SEEA-Water framework seems to be its requirement for large amounts of data, including field measurements of water availability in rivers, lakes, reservoirs, soil and groundwater, as well as precipitation, irrigation and other water sources and uses. While this is an incentive to collect and use data, it diminishes the usefulness of the system in countries where such data are not yet available or are incomplete, as it can lead to a poor understanding of water availability and uses. Distributed hydrological models can be used to fill missing data required by the SEEA-Water framework. They also make it easier to assess different scenarios (usually land use, water demand and climate changes) for better planning of water allocation. In the context of the DURERO project (www

  6. Modeling slug tests in unconfined aquifers taking into account water table kinematics, wellbore skin and inertial effects

    NASA Astrophysics Data System (ADS)

    Malama, Bwalya; Kuhlman, Kristopher L.; Barrash, Warren; Cardiff, Michael; Thoma, Michael

    2011-09-01

    Two models for slug tests conducted in unconfined aquifers are developed by (a) extending the unconfined KGS solution to oscillatory responses, yielding a model referred to herein as the unified model, and (b) replacing the constant head condition with the linearized kinematic condition at the water table. The models can be used to analyze the full range of responses from highly oscillatory to overdamped. The second model, referred to as the moving water table (MWT) model, is only applicable when effects of wellbore skin are negligible. The models are validated by comparison with published solutions, and by application to a published case study of field tests conducted in wells without skin in an unconfined aquifer at the MSEA site in Nebraska. In this regard (a) the MWT model essentially yields the same results as the confined KGS model, except very close to the water table, and (b) the unified model yields slightly smaller aquifer K-values relative to the MWT model at all positions in the well. All model solutions yield comparable results when fitted to published field data obtained in an unconfined fluvial aquifer at the MSEA site in Nebraska. The unified model is fitted to field data collected in wells known to exhibit positive skin effects at the Boise Hydrogeophysical Research Site (BHRS) in Boise, Idaho. It is shown to yield hydraulic conductivity estimates of comparable magnitude to those obtained with the KGS model for overdamped responses, and the Springer-Gelhar model for oscillatory responses. Sensitivity of the MWT model to specific yield, Sy, and hydraulic anisotropy, κ, is evaluated and the results, when plotted in log-log space and with consideration of log-scale time derivatives of the response, indicate that these two parameters should be estimable from slug test data, though challenges still remain.

  7. A Comparison of Seven Cox Regression-Based Models to Account for Heterogeneity Across Multiple HIV Treatment Cohorts in Latin America and the Caribbean

    PubMed Central

    Giganti, Mark J.; Luz, Paula M.; Caro-Vega, Yanink; Cesar, Carina; Padgett, Denis; Koenig, Serena; Echevarria, Juan; McGowan, Catherine C.; Shepherd, Bryan E.

    2015-01-01

    Many studies of HIV/AIDS aggregate data from multiple cohorts to improve power and generalizability. There are several analysis approaches to account for cross-cohort heterogeneity; we assessed how different approaches can impact results from an HIV/AIDS study investigating predictors of mortality. Using data from 13,658 HIV-infected patients starting antiretroviral therapy from seven Latin American and Caribbean cohorts, we illustrate the assumptions of seven readily implementable approaches to account for across cohort heterogeneity with Cox proportional hazards models, and we compare hazard ratio estimates across approaches. As a sensitivity analysis, we modify cohort membership to generate specific heterogeneity conditions. Hazard ratio estimates varied slightly between the seven analysis approaches, but differences were not clinically meaningful. Adjusted hazard ratio estimates for the association between AIDS at treatment initiation and death varied from 2.00 to 2.20 across approaches that accounted for heterogeneity; the adjusted hazard ratio was estimated as 1.73 in analyses that ignored across cohort heterogeneity. In sensitivity analyses with more extreme heterogeneity, we noted a slightly greater distinction between approaches. Despite substantial heterogeneity between cohorts, the impact of the specific approach to account for heterogeneity was minimal in our case study. Our results suggest that it is important to account for across cohort heterogeneity in analyses, but that the specific technique for addressing heterogeneity may be less important. Because of their flexibility in accounting for cohort heterogeneity, we prefer stratification or meta-analysis methods, but we encourage investigators to consider their specific study conditions and objectives. PMID:25647087

  8. A Comparison of Seven Cox Regression-Based Models to Account for Heterogeneity Across Multiple HIV Treatment Cohorts in Latin America and the Caribbean.

    PubMed

    Giganti, Mark J; Luz, Paula M; Caro-Vega, Yanink; Cesar, Carina; Padgett, Denis; Koenig, Serena; Echevarria, Juan; McGowan, Catherine C; Shepherd, Bryan E

    2015-05-01

    Many studies of HIV/AIDS aggregate data from multiple cohorts to improve power and generalizability. There are several analysis approaches to account for cross-cohort heterogeneity; we assessed how different approaches can impact results from an HIV/AIDS study investigating predictors of mortality. Using data from 13,658 HIV-infected patients starting antiretroviral therapy from seven Latin American and Caribbean cohorts, we illustrate the assumptions of seven readily implementable approaches to account for across cohort heterogeneity with Cox proportional hazards models, and we compare hazard ratio estimates across approaches. As a sensitivity analysis, we modify cohort membership to generate specific heterogeneity conditions. Hazard ratio estimates varied slightly between the seven analysis approaches, but differences were not clinically meaningful. Adjusted hazard ratio estimates for the association between AIDS at treatment initiation and death varied from 2.00 to 2.20 across approaches that accounted for heterogeneity; the adjusted hazard ratio was estimated as 1.73 in analyses that ignored across cohort heterogeneity. In sensitivity analyses with more extreme heterogeneity, we noted a slightly greater distinction between approaches. Despite substantial heterogeneity between cohorts, the impact of the specific approach to account for heterogeneity was minimal in our case study. Our results suggest that it is important to account for across cohort heterogeneity in analyses, but that the specific technique for addressing heterogeneity may be less important. Because of their flexibility in accounting for cohort heterogeneity, we prefer stratification or meta-analysis methods, but we encourage investigators to consider their specific study conditions and objectives.
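
    One of the readily implementable approaches referred to above is to stratify the baseline hazard by cohort; a toy sketch with the lifelines package follows, in which the data, column names and cohort labels are invented for illustration.

        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "time":  [12, 30, 7, 24, 18, 5, 40, 15, 9, 33, 21, 11],
            "death": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
            "aids":  [1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1],
            "cohort": ["BR"] * 4 + ["MX"] * 4 + ["HT"] * 4,
        })

        # Each cohort keeps its own baseline hazard; the AIDS effect is shared.
        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="death", strata=["cohort"])
        print(cph.params_)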

  9. Peer-to-peer accountability.

    PubMed

    Guidi, M A

    1995-10-01

    Peer-to-peer accountability is an essential component of empowerment-based management models. To foster this environment, skills such as conflict resolution, team building, communication and group dynamics need to be identified and supported. The lack of peer-to-peer accountability can seriously hinder the development of management models. PMID:7566813

  10. Modelling the behaviour of uranium-series radionuclides in soils and plants taking into account seasonal variations in soil hydrology.

    PubMed

    Pérez-Sánchez, D; Thorne, M C

    2014-05-01

    In a previous paper, a mathematical model for the behaviour of (79)Se in soils and plants was described. Subsequently, a review has been published relating to the behaviour of (238)U-series radionuclides in soils and plants. Here, we bring together those two strands of work to describe a new mathematical model of the behaviour of (238)U-series radionuclides entering soils in solution and their uptake by plants. Initial studies with the model that are reported here demonstrate that it is a powerful tool for exploring the behaviour of this decay chain or subcomponents of it in soil-plant systems under different hydrological regimes. In particular, it permits studies of the degree to which secular equilibrium assumptions are appropriate when modelling this decay chain. Further studies will be undertaken and reported separately examining sensitivities of model results to input parameter values and also applying the model to sites contaminated with (238)U-series radionuclides.

  11. An Analytical Approach to Model Heterogeneous Recrystallization Kinetics Taking into Account the Natural Spatial Inhomogeneity of Deformation

    NASA Astrophysics Data System (ADS)

    Luo, Haiwen; van der Zwaag, Sybrand

    2016-01-01

    The classical Johnson-Mehl-Avrami-Kolmogorov equation was modified to take into account the normal local strain distribution in deformed samples. This new approach is not only able to describe the influence of the local heterogeneity of recrystallization but also to produce an average apparent Avrami exponent to characterize the entire recrystallization process. In particular, it predicts that the apparent Avrami exponent should be within a narrow range of 1 to 2 and converges to 1 when the local strain varies greatly. Moreover, the apparent Avrami exponent is predicted to be insensitive to temperature and deformation conditions. These predictions are in excellent agreement with the experimental observations on static recrystallization after hot deformation in different steels and other metallic alloys.
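
    The averaging idea can be sketched numerically as follows. This is an illustration only: the local Avrami exponent of 2 and the assumed strain dependence of the rate constant are placeholders, not the authors' derivation.

    ```python
    # Strain-averaged JMAK kinetics and the resulting apparent Avrami exponent.
    import numpy as np

    rng = np.random.default_rng(0)
    n_local = 2.0                      # assumed local Avrami exponent
    eps = rng.normal(0.5, 0.2, 20000)  # normally distributed local strain
    eps = np.clip(eps, 0.05, None)     # avoid non-physical negative strains
    k = 5.0 * eps**3                   # assumed strain dependence of the rate constant

    t = np.logspace(-1, 1.5, 50)
    X_local = 1.0 - np.exp(-np.outer(k, t**n_local))   # local JMAK, shape (samples, times)
    X_avg = X_local.mean(axis=0)                       # strain-averaged fraction

    # Apparent exponent = slope of ln(-ln(1-X)) versus ln(t) in mid-transformation
    mask = (X_avg > 0.05) & (X_avg < 0.95)
    slope, _ = np.polyfit(np.log(t[mask]), np.log(-np.log(1.0 - X_avg[mask])), 1)
    print(f"apparent Avrami exponent ~ {slope:.2f}")   # falls below the local value of 2
    ```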

  12. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses

    PubMed Central

    Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.

    2015-01-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447
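
    Two of the random-effects summaries mentioned above, the random-effects mean and the predictive distribution, can be sketched for a simple pairwise meta-analysis as follows. The log effects and standard errors are made-up numbers, and the paper's full network meta-regression with covariates is not reproduced here.

    ```python
    # DerSimonian-Laird random-effects mean and 95% predictive interval (illustrative data).
    import numpy as np
    from scipy import stats

    y = np.array([-0.35, -0.10, -0.55, -0.20, -0.05])   # study log effects (hypothetical)
    se = np.array([0.15, 0.20, 0.25, 0.10, 0.30])

    w = 1.0 / se**2
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_star = 1.0 / (se**2 + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)             # random-effects mean
    se_mu = np.sqrt(1.0 / np.sum(w_star))

    # Predictive interval for the effect in a new setting (t distribution with k-2 df)
    t_crit = stats.t.ppf(0.975, df=k - 2)
    half_width = t_crit * np.sqrt(tau2 + se_mu**2)
    print(mu, (mu - half_width, mu + half_width))
    ```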

  13. Demonstrating marketing accountability.

    PubMed

    Gombeski, William R; Britt, Jason; Taylor, Jan; Riggs, Karen; Wray, Tanya; Adkins, Wanda; Springate, Suzanne

    2008-01-01

    Pressure on health care marketers to demonstrate effectiveness of their strategies and show their contribution to organizational goals is growing. A seven-tiered model based on the concepts of structure (having the right people, systems), process (doing the right things in the right way), and outcomes (results) is discussed. Examples of measures for each tier are provided and the benefits of using the model as a tool for measuring, organizing, tracking, and communicating appropriate information are provided. The model also provides a framework for helping management understand marketing's value and can serve as a vehicle for demonstrating marketing accountability.

  14. Improving Quality and Accountability in Vocational Technological Programs: An Evaluation of Arizona's VTE Model and Performance Standards.

    ERIC Educational Resources Information Center

    Vandegrift, Judith A.; And Others

    A study examined statewide implementation of the Arizona Department of Education's vocational technological education (ADE/VTE) model and the feasibility of using Arizona's performance standards in evaluating processes/outcomes at model sites. Data were collected from a pilot study of 12 sites and survey of 128 Arizona local education authorities…

  15. Construction of a mathematical model of the human body, taking the nonlinear rigidity of the spine into account

    NASA Technical Reports Server (NTRS)

    Glukharev, K. K.; Morozova, N. I.; Potemkin, B. A.; Solovyev, V. S.; Frolov, K. V.

    1973-01-01

    A mathematical model of the human body under harmonic vibration in the 2.5-7 Hz frequency range was constructed. In this frequency range, the human body is treated as a vibrating system with concentrated (lumped) parameters. Vertical movements of the seat and the vertical components of vibration of the human body are investigated.
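
    In the same lumped-parameter spirit, a single-degree-of-freedom sketch of seat-to-body transmissibility over 2.5-7 Hz might look as follows; the parameter values are purely illustrative, and the original model may use several masses.

    ```python
    # Transmissibility of a base-excited single-degree-of-freedom system.
    import numpy as np

    f_n, zeta = 5.0, 0.3                   # assumed natural frequency (Hz) and damping ratio
    f = np.linspace(2.5, 7.0, 10)          # excitation frequencies (Hz)
    r = f / f_n                            # frequency ratio

    T = np.sqrt((1 + (2 * zeta * r) ** 2) / ((1 - r**2) ** 2 + (2 * zeta * r) ** 2))
    for fi, Ti in zip(f, T):
        print(f"{fi:4.1f} Hz  transmissibility {Ti:.2f}")
    ```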

  16. Production model in the conditions of unstable demand taking into account the influence of trading infrastructure: Ergodicity and its application

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2015-04-01

    A production model with allowance for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by the need to analyze the functioning of macroeconomic structures with low competitiveness. The model is formalized in the form of a Bellman equation, for which a closed-form solution is found. The stochastic process of product stock variations is proved to be ergodic and its final probability distribution is found. Expressions for the average production load and the average product stock are found by analyzing the stochastic process. A system of model equations relating the model variables to official statistical parameters is derived. The model is identified using data from the Fiat and KAMAZ companies. The influence of the credit interest rate on the firm market value assessment and the production load level is analyzed using comparative statics methods.

  17. The Asian clam Corbicula fluminea as a biomonitor of trace element contamination: Accounting for different sources of variation using a hierarchical linear model

    USGS Publications Warehouse

    Shoults-Wilson, W. A.; Peterson, J.T.; Unrine, J.M.; Rickard, J.; Black, M.C.

    2009-01-01

    In the present study, specimens of the invasive clam, Corbicula fluminea, were collected above and below possible sources of potentially toxic trace elements (As, Cd, Cr, Cu, Hg, Pb, and Zn) in the Altamaha River system (Georgia, USA). Bioaccumulation of these elements was quantified, along with environmental (water and sediment) concentrations. Hierarchical linear models were used to account for variability in tissue concentrations related to environmental (site water chemistry and sediment characteristics) and individual (growth metrics) variables while identifying the strongest relations between these variables and trace element accumulation. The present study found significantly elevated concentrations of Cd, Cu, and Hg downstream of the outfall of kaolin-processing facilities, Zn downstream of a tire cording facility, and Cr downstream of both a nuclear power plant and a paper pulp mill. Models of the present study indicated that variation in trace element accumulation was linked to distance upstream from the estuary, dissolved oxygen, percentage of silt and clay in the sediment, elemental concentrations in sediment, shell length, and bivalve condition index. By explicitly modeling environmental variability, the hierarchical linear modeling procedure allowed the identification of sites showing increased accumulation of trace elements that may have been caused by human activity. Hierarchical linear modeling is a useful tool for accounting for environmental and individual sources of variation in bioaccumulation studies. © 2009 SETAC.
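
    A two-level model of this general kind can be sketched with statsmodels; the variable names and formula below are hypothetical illustrations, not the fitted model of the study.

    ```python
    # Random-intercept (site-level) model for clam tissue concentrations.
    import pandas as pd
    import statsmodels.formula.api as smf

    clams = pd.read_csv("corbicula_hg.csv")   # hypothetical data: one row per clam

    model = smf.mixedlm(
        "log_hg ~ shell_length + condition_index + sediment_hg + dissolved_oxygen",
        data=clams,
        groups=clams["site"],                 # random intercept for each site
    )
    fit = model.fit()
    print(fit.summary())                      # site-level variance vs. residual variance
    ```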

  18. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    SciTech Connect

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.

    2012-04-24

    We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.

  19. Accounting for age uncertainty in growth modeling, the case study of yellowfin tuna (Thunnus albacares) of the Indian Ocean.

    PubMed

    Dortel, Emmanuelle; Massiot-Granier, Félix; Rivot, Etienne; Million, Julien; Hallier, Jean-Pierre; Morize, Eric; Munaron, Jean-Marie; Bousquet, Nicolas; Chassot, Emmanuel

    2013-01-01

    Age estimates, typically determined by counting periodic growth increments in calcified structures of vertebrates, are the basis of population dynamics models used for managing exploited or threatened species. In fisheries research, the use of otolith growth rings as an indicator of fish age has increased considerably in recent decades. However, otolith readings include various sources of uncertainty. Current ageing methods, which convert an average count of rings into age, only provide periodic age estimates in which the range of uncertainty is fully ignored. In this study, we describe a hierarchical model for estimating individual ages from repeated otolith readings. The model was developed within a Bayesian framework to explicitly represent the sources of uncertainty associated with age estimation, to allow for individual variations and to incorporate expert knowledge on parameters. The performance of the proposed model was examined through simulations, and then it was coupled to a two-stanza somatic growth model to evaluate the impact of the age estimation method on the age composition of commercial fisheries catches. We illustrate our approach using the sagittal otoliths of yellowfin tuna of the Indian Ocean collected through large-scale mark-recapture experiments. The simulation performance suggested that the ageing error model was able to estimate the ageing biases and provide accurate age estimates, regardless of the age of the fish. Coupled with the growth model, this approach appeared suitable for modeling the growth of Indian Ocean yellowfin and is consistent with findings of previous studies. The simulations showed that the choice of the ageing method can strongly affect growth estimates, with subsequent implications for age-structured data used as inputs for population models. Finally, our modeling approach proved particularly useful for propagating uncertainty in age estimates into the growth estimation process, and it can be applied to any

  1. Evolution of gene structural complexity: an alternative-splicing-based model accounts for intron-containing retrogenes.

    PubMed

    Zhang, Chengjun; Gschwend, Andrea R; Ouyang, Yidan; Long, Manyuan

    2014-05-01

    The structure of eukaryotic genes evolves extensively by intron loss or gain. Previous studies have revealed two models for gene structure evolution through the loss of introns: RNA-based gene conversion (dubbed the Fink model) and the retroposition model. However, retrogenes that experienced both intron loss and intron-retaining events have been ignored; the evolutionary processes responsible for the variation in complex exon-intron structure were unknown. We detected hundreds of retroduplication-derived genes in human (Homo sapiens), fly (Drosophila melanogaster), rice (Oryza sativa), and Arabidopsis (Arabidopsis thaliana) and categorized them either as duplicated genes that have lost all introns or as duplicated genes that have lost at least one intron and retained at least one intron compared with the parental copy (intron-retaining [IR] type). Our new model attributes the generation of these IR-type gene pairs to intron-retention alternative splicing. We present 25 parental genes that have an intron retention isoform and have retained introns in the same locations in the IR-type duplicate genes, which directly support our hypothesis. Our alternative-splicing-based model, in conjunction with the retroposition and Fink models, can explain the IR-type genes observed. We discovered a greater percentage of IR-type genes in plants than in animals, which may be due to the abundance of intron retention cases in plants. Given the prevalence of intron retention in plants, this new model supports the view that plant genomes have very complex gene structures.

  2. Elastic consequences of a single plastic event: Towards a realistic account of structural disorder and shear wave propagation in models of flowing amorphous solids

    NASA Astrophysics Data System (ADS)

    Nicolas, Alexandre; Puosi, Francesco; Mizuno, Hideyuki; Barrat, Jean-Louis

    2015-05-01

    Shear transformations (i.e., localized rearrangements of particles resulting in the shear deformation of a small region of the sample) are the building blocks of mesoscale models for the flow of disordered solids. In order to compute the time-dependent response of the solid material to such a shear transformation, with a proper account of elastic heterogeneity and shear wave propagation, we propose and implement a very simple Finite-Element (FE)-based method. Molecular Dynamics (MD) simulations of a binary Lennard-Jones glass are used as a benchmark for comparison, and information about the microscopic viscosity and the local elastic constants is directly extracted from the MD system and used as input in FE. We find very good agreement between FE and MD regarding the temporal evolution of the disorder-averaged displacement field induced by a shear transformation, which turns out to coincide with the response of a uniform elastic medium. However, fluctuations are relatively large, and their magnitude is satisfactorily captured by the FE simulations of an elastically heterogeneous system. Besides, accounting for elastic anisotropy on the mesoscale is not crucial in this respect. The proposed method thus paves the way for models of the rheology of amorphous solids which are both computationally efficient and realistic, in that structural disorder and inertial effects are accounted for.

  3. Accounting for the Impact of Impermeable Soil Layers on Pesticide Runoff and Leaching in a Landscape Vulnerability Model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A regional-scale model that estimates landscape vulnerability of pesticide leaching and runoff (solution and particle adsorbed) underestimated runoff vulnerability and overestimated leaching vulnerability compared to measured data when applied to a gently rolling landscape in northeast Missouri. Man...

  4. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.

    2013-02-01

    Humans are responsible for most forest fires in Europe, but anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of those factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop logistic models, and a continuous variable (fire density) to build standard linear regression models. A total of 383 657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation of the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related with agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. The results from GWR indicated a significant spatial variation in the local
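
    The global (stationary) logistic step can be sketched as follows; the variable names are hypothetical, and the GWR stage would refit a model of this form locally with spatial kernel weights rather than globally.

    ```python
    # Binary logistic model for fire presence per municipality (illustrative variables).
    import pandas as pd
    import statsmodels.formula.api as smf

    muni = pd.read_csv("municipalities.csv")   # hypothetical: one row per municipality

    logit = smf.logit(
        "fire ~ forest_share + summer_temp + agrarian_activity + "
        "land_abandonment + rural_exodus + wui_share",
        data=muni,
    ).fit()
    print(logit.summary())
    print(((logit.predict(muni) > 0.5) == muni["fire"]).mean())  # share correctly classified
    ```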

  5. Modeling scale-dependent runoff generation in a small semi-arid watershed accounting for rainfall intensity and water depth

    NASA Astrophysics Data System (ADS)

    Langhans, Christoph; Govers, Gerard; Diels, Jan; Stone, Jeffry J.; Nearing, Mark A.

    2014-07-01

    Observed scale effects of runoff on hillslopes and small watersheds derive from complex interactions of time-varying rainfall rates with runoff, infiltration and macro- and microtopographic structures. A little studied aspect of scale effects is the concept of water depth-dependent infiltration. For semi-arid rangeland it has been demonstrated that mounds underneath shrubs have a high infiltrability and lower lying compacted or stony inter-shrub areas have a lower infiltrability. It is hypothesized that runoff accumulation further downslope leads to increased water depth, inundating high infiltrability areas, which increases the area-averaged infiltration rate. A model was developed that combines the concepts of water depth-dependent infiltration, partial contributing area under variable rainfall intensity, and the Green-Ampt theory for point-scale infiltration. The model was applied to rainfall simulation data and natural rainfall-runoff data from a small sub-watershed (0.4 ha) of the Walnut Gulch Experimental Watershed in the semi-arid US Southwest. Its performance to reproduce observed hydrographs was compared to that of a conventional Green-Ampt model assuming complete inundation sheet flow, with runon infiltration, which is infiltration of runoff onto pervious downstream areas. Parameters were derived from rainfall simulations and from watershed-scale calibration directly from the rainfall-runoff events. The performance of the water depth-dependent model was better than that of the conventional model on the scale of a rainfall simulator plot, but on the scale of a small watershed the performance of both model types was similar. We believe that the proposed model contributes to a less scale-dependent way of modeling runoff and erosion on the hillslope-scale.
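
    The point-scale infiltration component can be illustrated with the standard Green-Ampt capacity equation, f = Ks(1 + psi*d_theta/F). The two-surface weighting below is only a crude stand-in for the depth-dependent inundation of high-infiltrability mounds, and all parameter values are assumptions.

    ```python
    # Green-Ampt infiltration on two surface types with assumed areal fractions.
    import numpy as np

    def green_ampt_rate(F, Ks, psi, d_theta):
        """Infiltration capacity (mm/h) given cumulative infiltration F (mm)."""
        return Ks * (1.0 + psi * d_theta / max(F, 1e-6))

    dt = 1.0 / 60.0                        # time step (h)
    rain = np.full(90, 40.0)               # 90 minutes of 40 mm/h rainfall
    params = {"shrub mound": dict(Ks=60.0, psi=90.0, d_theta=0.35),
              "intershrub":  dict(Ks=8.0, psi=110.0, d_theta=0.30)}
    frac = {"shrub mound": 0.3, "intershrub": 0.7}   # areal fractions (assumed)

    F = {name: 0.0 for name in params}
    runoff = 0.0
    for p in rain:
        for name, pr in params.items():
            f_cap = green_ampt_rate(F[name], **pr)
            f_act = min(p, f_cap)                    # rainfall-limited infiltration
            F[name] += f_act * dt
            runoff += frac[name] * max(p - f_act, 0.0) * dt
    print(f"runoff depth ~ {runoff:.1f} mm")
    ```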

  6. Can the Five Factor Model of Personality Account for the Variability of Autism Symptom Expression? Multivariate Approaches to Behavioral Phenotyping in Adult Autism Spectrum Disorder.

    PubMed

    Schwartzman, Benjamin C; Wood, Jeffrey J; Kapp, Steven K

    2016-01-01

    The present study aimed to: determine the extent to which the five factor model of personality (FFM) accounts for variability in autism spectrum disorder (ASD) symptomatology in adults, examine differences in average FFM personality traits of adults with and without ASD and identify distinct behavioral phenotypes within ASD. Adults (N = 828; nASD = 364) completed an online survey with an autism trait questionnaire and an FFM personality questionnaire. FFM facets accounted for 70 % of variance in autism trait scores. Neuroticism positively correlated with autism symptom severity, while extraversion, openness to experience, agreeableness, and conscientiousness negatively correlated with autism symptom severity. Four FFM subtypes emerged within adults with ASD, with three subtypes characterized by high neuroticism and none characterized by lower-than-average neuroticism.

  7. An analytical model for the celestial distribution of polarized light, accounting for polarization singularities, wavelength and atmospheric turbidity

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gao, Jun; Fan, Zhiguo; Roberts, Nicholas W.

    2016-06-01

    We present a computationally inexpensive analytical model for simulating celestial polarization patterns in variable conditions. We combine both the singularity theory of Berry et al (2004 New J. Phys. 6 162) and the intensity model of Perez et al (1993 Sol. Energy 50 235-245) such that our single model describes three key sets of data: (1) the overhead distribution of the degree of polarization as well as the existence of neutral points in the sky; (2) the change in sky polarization as a function of the turbidity of the atmosphere; and (3) sky polarization patterns as a function of wavelength, calculated in this work from the ultra-violet to the near infra-red. To verify the performance of our model we generate accurate reference data using a numerical radiative transfer model and statistical comparisons between these two methods demonstrate no significant difference in almost all situations. The development of our analytical model provides a novel method for efficiently calculating the overhead skylight polarization pattern. This provides a new tool of particular relevance for our understanding of animals that use the celestial polarization pattern as a source of visual information.

  8. Using a new high resolution regional model for malaria that accounts for population density and surface hydrology to determine sensitivity of malaria risk to climate drivers

    NASA Astrophysics Data System (ADS)

    Tompkins, Adrian; Ermert, Volker; Di Giuseppe, Francesca

    2013-04-01

    In order to better address the role of population dynamics and surface hydrology in the assessment of malaria risk, a new dynamical disease model has been developed at ICTP: the VECtor borne disease community model of ICTP, TRIeste (VECTRI). The model accounts for the temperature impact on the larvae, parasite and adult vector populations. Local host population density affects the transmission intensity, and the model thus reproduces the differences between peri-urban and rural transmission noted in Africa. A new simple pond model framework represents surface hydrology. The model can be used at spatial resolutions finer than 10 km to resolve individual health districts and thus can serve as a planning tool. Results of the model's representation of interannual variability and longer term projections of malaria transmission will be shown for Africa. These will show that the model represents the seasonality and spatial variations of malaria transmission well, matching a wide range of survey data of parasite rate and entomological inoculation rate (EIR) from across West and East Africa taken in the period prior to large-scale interventions. The model is used to determine the sensitivity of malaria risk to climate variations, both in rainfall and temperature, and then its use in a prototype forecasting system coupled with ECMWF forecasts will be demonstrated.

  9. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data

    USGS Publications Warehouse

    Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.

    2014-01-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallapavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
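
    A heavily simplified, one-release/one-season sketch of the joint idea, not the authors' full Brownie formulation: telemetry gives a binomial likelihood for tagging-to-season survival, reward-tag recoveries give a binomial likelihood for survival times harvest rate, and the two are maximized jointly. The data values are hypothetical.

    ```python
    # Joint maximum-likelihood estimate of tagging-to-season survival and harvest rate.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom

    n_radio, surv_radio = 25, 21          # radio-tagged animals alive at season start
    n_tags, recovered = 100, 18           # reward tags released / recovered in first season

    def neg_log_lik(params):
        phi0, h = params
        if not (0.0 < phi0 < 1.0 and 0.0 < h < 1.0):
            return np.inf
        ll = binom.logpmf(surv_radio, n_radio, phi0)        # known-fate component
        ll += binom.logpmf(recovered, n_tags, phi0 * h)     # tag-recovery component
        return -ll

    fit = minimize(neg_log_lik, x0=[0.8, 0.2], method="Nelder-Mead")
    phi0_hat, h_hat = fit.x
    print(f"tagging-to-season survival ~ {phi0_hat:.2f}, harvest rate ~ {h_hat:.2f}")
    ```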

  10. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data

    PubMed Central

    Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D

    2014-01-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallapavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior. PMID:24834339

  11. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data.

    PubMed

    Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D

    2014-04-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallapavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior. PMID:24834339

  12. A hybrid Bayesian hierarchical model combining cohort and case-control studies for meta-analysis of diagnostic tests: Accounting for partial verification bias.

    PubMed

    Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao

    2014-05-26

    To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented.

  13. Two-way FSI modelling of blood flow through CCA accounting on-line medical diagnostics in hypertension

    NASA Astrophysics Data System (ADS)

    Czechowicz, K.; Badur, J.; Narkiewicz, K.

    2014-08-01

    Flow parameters can induce pathological changes in the arteries. We propose a method to assess those parameters using a 3D computer model of the flow in the Common Carotid Artery. Input data were acquired using an automatic 2D ultrasound wall tracking system. These data were used to generate a 3D geometry of the artery. The diameter and wall thickness have been assessed individually for every patient, but the artery has been taken as a 75 mm straight tube. The Young's modulus for the arterial walls was calculated using the pulse pressure, diastolic (minimal) diameter and wall thickness (IMT). Blood flow was derived from the pressure waveform using a 2-parameter Windkessel model. The blood is assumed to be non-Newtonian. The computational models were generated and calculated using commercial code. The coupling method required the use of an Arbitrary Lagrangian-Eulerian formulation to solve the Navier-Stokes and Navier-Lamé equations in a moving domain. The calculations showed that the distention of the walls in the model is not significantly different from the measurements. Results from the model have been used to locate additional risk factors, such as wall shear stress or circumferential stress, that may predict adverse hypertension complications.
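
    The flow-from-pressure step described above can be sketched with a two-element Windkessel, Q(t) = C dP/dt + P/R; the pressure waveform and the R and C values below are synthetic placeholders rather than patient data.

    ```python
    # Two-element Windkessel: derive an inflow waveform from a synthetic pressure waveform.
    import numpy as np

    t = np.linspace(0.0, 0.9, 900)                                      # one cardiac cycle (s)
    P = 80 + 40 * np.clip(np.sin(2 * np.pi * t / 0.9), 0, None) ** 2    # pressure (mmHg), synthetic
    R = 12.0   # peripheral resistance (mmHg s / ml), assumed for one carotid branch
    C = 0.1    # compliance (ml / mmHg), assumed

    dPdt = np.gradient(P, t)
    Q = C * dPdt + P / R                                                # inflow (ml/s)
    print(f"mean flow ~ {Q.mean():.1f} ml/s, peak ~ {Q.max():.1f} ml/s")
    ```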

  14. Modelling runoff at the plot scale taking into account rainfall partitioning by vegetation: application to stemflow of banana (Musa spp.) plant

    NASA Astrophysics Data System (ADS)

    Charlier, J.-B.; Moussa, R.; Cattan, P.; Cabidoche, Y.-M.; Voltz, M.

    2009-06-01

    Rainfall partitioning by vegetation modifies the intensity of rainwater reaching the ground, which affects runoff generation. Incident rainfall is intercepted by the plant canopy and then redistributed into throughfall and stemflow. Rainfall intensities at the soil surface are therefore not spatially uniform, generating local variations of runoff production that are disregarded in runoff models. The aim of this paper was to model runoff at the plot scale, accounting for rainfall partitioning by vegetation in the case of plants concentrating rainwater at the plant foot and promoting stemflow. We developed a lumped modelling approach, including a stemflow function that divided the plot into two compartments: one compartment including stemflow and the relative water pathways and one compartment for the rest of the plot. This stemflow function was coupled with a production function and a transfer function to simulate a flood hydrograph using the MHYDAS model. Calibrated parameters were a "stemflow coefficient", which compartmented the plot; the saturated hydraulic conductivity (Ks), which controls infiltration and runoff; and the two parameters of the diffusive wave equation. We tested our model on a banana plot of 3000 m2 on permeable Andosol (mean Ks=75 mm h-1) under tropical rainfalls, in Guadeloupe (FWI). Runoff simulations without and with the stemflow function were performed and compared to 18 flood events from 10 to 130 mm rainfall depth. Modelling results showed that the stemflow function improved the calibration of hydrographs according to the error criteria on volume and on peakflow and to the Nash and Sutcliffe coefficient. This was particularly the case for low flows observed during residual rainfall, for which the stemflow function allowed runoff to be simulated for rainfall intensities lower than the Ks measured at the soil surface. This approach also allowed us to take into account the experimental data, without needing to calibrate the runoff volume on

  15. A Function Accounting for Training Set Size and Marker Density to Model the Average Accuracy of Genomic Prediction

    PubMed Central

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments (Me). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5,698 Holstein Friesian bulls genotyped with 50 K SNPs and 1,332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2–10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the achievable accuracy has an upper limit. The proportion of genetic variance captured by the complete SNP sets was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20,000 SNPs in the Brown Swiss population studied. PMID:24339895
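
    The base deterministic equation referred to above is the Daetwyler et al. (2010) form r = sqrt(N h^2 / (N h^2 + Me)). The sketch below simply scales it by sqrt(w) as one plausible way to impose a marker-density ceiling; the exact placement of w in the authors' fitted equation may differ.

    ```python
    # Expected accuracy of genomic prediction as a function of training set size.
    import numpy as np

    def expected_accuracy(n_train, h2, Me, w=1.0):
        """Daetwyler-type accuracy, scaled by sqrt(w) to reflect imperfect marker coverage."""
        return np.sqrt(w) * np.sqrt(n_train * h2 / (n_train * h2 + Me))

    for n in (1000, 5000, 20000):
        print(n, round(expected_accuracy(n, h2=0.4, Me=1500, w=0.8), 3))
    ```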

  16. Modeling complicated rheological behaviors in encapsulating shells of lipid-coated microbubbles accounting for nonlinear changes of both shell viscosity and elasticity.

    PubMed

    Li, Qian; Matula, Thomas J; Tu, Juan; Guo, Xiasheng; Zhang, Dong

    2013-02-21

    It has been accepted that the dynamic responses of ultrasound contrast agent (UCA) microbubbles will be significantly affected by the encapsulating shell properties (e.g., shell elasticity and viscosity). In this work, a new model is proposed to describe the complicated rheological behaviors in an encapsulating shell of UCA microbubbles by applying the nonlinear 'Cross law' to the shell viscous term in the Marmottant model. The proposed new model was verified by fitting the dynamic responses of UCAs measured with either a high-speed optical imaging system or a light scattering system. The comparison results between the measured radius-time curves and the numerical simulations demonstrate that the 'compression-only' behavior of UCAs can be successfully simulated with the new model. Then, the shell elastic and viscous coefficients of SonoVue microbubbles were evaluated based on the new model simulations, and compared to the results obtained from some existing UCA models. The results confirm the capability of the current model for reducing the dependence of bubble shell parameters on the initial bubble radius, which indicates that the current model might be more comprehensive to describe the complex rheological nature (e.g., 'shear-thinning' and 'strain-softening') in encapsulating shells of UCA microbubbles by taking into account the nonlinear changes of both shell elasticity and shell viscosity. PMID:23339902

  17. The Cost of Being Accountable: An Objective-Referenced Program Cost Model for Educational Management--A Maryland Perspective.

    ERIC Educational Resources Information Center

    Holowenzak, Stephen P.; Stagmer, Robert A.

    This publication describes in detail an objective-referenced program cost model for educational management that was developed by the Maryland State Department of Education. Primary purpose of the publication is to aid educational decision-makers in developing and refining their own method of cost-pricing educational programs for use in state and…

  18. Modelling scale-dependent runoff generation in a small semi-arid watershed accounting for rainfall intensity and water depth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Observed scale effects of runoff and erosion on hillslopes and small watersheds pose one of the most intriguing challenges to modellers, because it results from complex interactions of time-dependent rainfall input with runoff, infiltration and macro- and microtopographic structures. A little studie...

  19. Public Higher Education and the State: Models for Financing, Budgeting, and Accountability. ASHE 1986 Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Curry, Denis J.; Fischer, Norman M.

    The trend toward greater state regulation of public higher education is discussed, along with alternative structures or models for state financing of public institutions. The situation in Washington State is briefly described as an illustration. It is proposed that interests of the state, college, and student can be enhanced by allowing colleges…

  20. Diagnostic Competence of Teachers: A Process Model That Accounts for Diagnosing Learning Behavior Tested by Means of a Case Scenario

    ERIC Educational Resources Information Center

    Klug, Julia; Bruder, Simone; Kelava, Augustin; Spiel, Christiane; Schmitz, Bernhard

    2013-01-01

    Diagnosing learning behavior is one of teachers' most central tasks. So far, accuracy in teachers' judgments on students' achievement has been investigated. In this study, a new perspective is taken by developing and testing a three-dimensional model that describes the process of diagnosing learning behavior within a sample of N = 293…

  1. A general model to calculate the spin-lattice (T1) relaxation time of blood, accounting for haematocrit, oxygen saturation and magnetic field strength.

    PubMed

    Hales, Patrick W; Kirkham, Fenella J; Clark, Christopher A

    2016-02-01

    Many MRI techniques require prior knowledge of the T1-relaxation time of blood (T1bl). An assumed/fixed value is often used; however, T1bl is sensitive to magnetic field (B0), haematocrit (Hct), and oxygen saturation (Y). We aimed to combine data from previous in vitro measurements into a mathematical model, to estimate T1bl as a function of B0, Hct, and Y. The model was shown to predict T1bl from in vivo studies with good accuracy (±87 ms). This model allows for improved estimation of T1bl between 1.5 and 7.0 T while accounting for variations in Hct and Y, leading to improved accuracy of MRI-derived perfusion measurements.

  2. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we present a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging a few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in

  3. Modeling Water Resource Systems Accounting for Water-Related Energy Use, GHG Emissions and Water-Dependent Energy Generation in California

    NASA Astrophysics Data System (ADS)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Medellin-Azuara, J.

    2015-12-01

    Most individual processes relating water and energy interdependence have been assessed in many different ways over the last decade. It is time to step up and include the results of these studies in management by providing a tool for integrating these processes in decision-making to effectively understand the tradeoffs between water and energy from management options and scenarios. A simple but powerful decision support system (DSS) for water management is described that includes water-related energy use and GHG emissions not solely from the water operations, but also from final water end uses, including demands from cities, agriculture, environment and the energy sector. Because one of the main drivers of energy use and GHG emissions is water pumping from aquifers, the DSS combines a surface water management model with a simple groundwater model, accounting for their interrelationships. The model also explicitly includes economic data to optimize water use across sectors during shortages and calculate return flows from different uses. Capabilities of the DSS are demonstrated on a case study over California's intertied water system. Results show that urban end uses account for most GHG emissions of the entire water cycle, but large water conveyance produces significant peaks over the summer season. Also, the development of more efficient water application in the agricultural sector has increased the total energy consumption and the net water use in the basins.

  4. Constitutive modeling of hot forming of austenitic stainless steel 316LN by accounting for recrystallization in the dislocation evolution

    NASA Astrophysics Data System (ADS)

    Kooiker, H.; Perdahcioğlu, E. S.; van den Boogaard, A. H.

    2016-08-01

    Hot compression test data for metastable austenitic stainless steel AISI 316LN, taken from Zhang [1], over a range of strain rates and temperatures show typical dynamic recovery and recrystallization behavior. It is proposed to model this behavior by incorporating not only hardening and recovery but also recrystallization into the Bergstrom dislocation evolution equation. It is shown that the initial mechanical response before recrystallization can be accurately represented by assuming that the mean free path evolves as the microstructure evolves from homogeneously spaced dislocations to a cell pattern. Results show that this novel continuum mechanical model can predict the observed behavior, showing a good match to the experimental data and capturing the transition from recrystallization to (almost) no recrystallization.
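
    A hedged sketch of a Bergstrom-type dislocation balance, drho/deps = U - Omega*rho, extended with an ad hoc softening term standing in for recrystallization; the recrystallization kinetics and all parameter values are assumptions, not the authors' calibrated model.

    ```python
    # Dislocation-density evolution with storage, recovery and an assumed recrystallization term.
    import numpy as np

    U, Omega = 2.0e15, 5.0                         # storage (m^-2 per unit strain), recovery coefficient
    alpha, G, b, M = 0.3, 6.5e10, 2.5e-10, 3.06    # Taylor constant, shear modulus (Pa), Burgers vector (m), Taylor factor
    k_rex, eps_c = 8.0, 0.25                       # assumed recrystallization rate constant and onset strain

    deps = 1e-3
    rho, sigma = 1e12, []
    for e in np.arange(0.0, 0.8, deps):
        X = 1.0 - np.exp(-k_rex * max(e - eps_c, 0.0) ** 2)   # assumed recrystallized fraction
        rho += (U - Omega * rho) * deps                       # Bergstrom: storage - dynamic recovery
        rho *= 1.0 - X * k_rex * deps                         # extra annihilation by recrystallization
        sigma.append(M * alpha * G * b * np.sqrt(rho) / 1e6)  # Taylor equation, flow stress in MPa
    print(f"peak stress ~ {max(sigma):.0f} MPa, steady state ~ {sigma[-1]:.0f} MPa")
    ```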

  5. Accounting for anthropic energy flux of traffic in winter urban road surface temperature simulations with the TEB model

    NASA Astrophysics Data System (ADS)

    Khalifa, A.; Marchetti, M.; Bouilloud, L.; Martin, E.; Bues, M.; Chancibaut, K.

    2016-02-01

    Snowfall forecasts help winter maintenance of road networks, ensure better coordination between services, cost control, and a reduction in environmental impacts caused by an inappropriate use of de-icers. In order to determine the possible accumulation of snow on pavements, forecasting the road surface temperature (RST) is mandatory. Weather outstations are used along these networks to identify changes in pavement status, and to make forecasts by analyzing the data they provide. Physical numerical models provide such forecasts, and require an accurate description of the infrastructure along with meteorological parameters. The objective of this study was to build a reliable urban RST forecast with a detailed integration of traffic in the Town Energy Balance (TEB) numerical model for winter maintenance. The study first consisted in generating a physical and consistent description of traffic in the model with two approaches to evaluate traffic incidence on RST. Experiments were then conducted to measure the effect of traffic on RST increase with respect to non-circulated areas. These field data were then used for comparison with the forecast provided by this traffic-implemented TEB version.

  6. Accounting for anthropic energy flux of traffic in winter urban road surface temperature simulations with TEB model

    NASA Astrophysics Data System (ADS)

    Khalifa, A.; Marchetti, M.; Bouilloud, L.; Martin, E.; Bues, M.; Chancibaut, K.

    2015-06-01

    Snowfall forecasts help coordinate winter maintenance services, reducing the cost of maintenance actions and the environmental impacts caused by inappropriate use of de-icers. In order to determine the possible accumulation of snow on pavement, forecasting the road surface temperature (RST) is mandatory. Physical numerical models provide such forecasts and require an accurate description of the infrastructure along with meteorological parameters. The objective of this study was to build a reliable urban RST forecast with a detailed integration of traffic in the Town Energy Balance (TEB) numerical model for winter maintenance. The study first consisted of generating a physical and consistent description of traffic in the model with all the energy interactions, using two approaches to evaluate the influence of traffic on RST. Experiments were then conducted to measure the effect of traffic on RST increase with respect to non-circulated areas. These field data were then used for comparison with the forecast provided by this traffic-implemented TEB version.

  7. A ray-tracing model to account for off-great circle HF propagation over northerly paths

    NASA Astrophysics Data System (ADS)

    Zaalov, N. Y.; Warrington, E. M.; Stocker, A. J.

    2005-08-01

    Off-great circle HF propagation effects are a common feature of the northerly ionosphere (i.e., the subauroral trough region, the auroral zone, and the polar cap). In addition to their importance in radiolocation applications where deviations from the great circle path may result in significant triangulation errors, they are also important in two other respects: (1) In systems employing directional antennas pointed along the great circle path, the signal quality may be degraded at times when propagation is via off-great circle propagation modes; and (2) the off-great circle propagation mechanisms may result in propagation at times when the signal frequency exceeds the maximum usable frequency along the great circle path. A ray-tracing model covering the northerly ionosphere is described in this paper. The results obtained using the model are very reminiscent of the directional characteristics observed in various experimental measurement programs, and consequently, it is believed that the model may be employed to enable the nature of off-great circle propagation effects to be estimated for paths which were not subject to experimental investigation. Although it is not possible to predict individual off-great circle propagation events, it is possible to predict the periods during which large deviations are likely to occur and their magnitudes and directions.

  8. A simple differential diffusion model to account for the discrepancy between 223Ra- and 224Ra-based eddy diffusivities

    NASA Astrophysics Data System (ADS)

    Stachelhaus, Scott L.; Moran, S. Bradley

    2012-03-01

    A series of 223Ra (t1/2 = 11.4 d) and 224Ra (t1/2 = 3.66 d) measurements made in the Mid-Atlantic Bight yield eddy diffusivity (K) estimates of 1.2 ± 0.3 × 10² m² s⁻¹ and 1.4 ± 0.2 × 10² m² s⁻¹, respectively. These results fall in line with previous studies from multiple locations throughout the ocean, in which 224Ra-based eddy diffusivities invariably exceed those determined using 223Ra. Such a pattern conflicts with the Fickian model for eddy diffusivity, in which K is constant. Moreover, this trend runs counter to the length scale-dependent view of eddy diffusion, which suggests that K values estimated using 223Ra should exceed those of 224Ra, because the length scale of the former is greater than that of the latter. A finite mixing-length model based on the concept of differential diffusion is used to provide an explanation for this discrepancy.
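
    The standard calculation behind such radium-based diffusivities assumes a 1-D steady-state balance K d2A/dx2 = lambda*A, so that activity decays offshore as A(x) = A0 exp(-x*sqrt(lambda/K)) and K = lambda/slope^2. The offshore profile below is synthetic, not the Mid-Atlantic Bight data, but yields a K of the same order as the values quoted above.

    ```python
    # Eddy diffusivity from the offshore decline of 224Ra activity (synthetic profile).
    import numpy as np

    lam_224 = np.log(2) / 3.66 / 86400.0                 # 224Ra decay constant (s^-1)
    x = np.array([0, 2, 5, 10, 15, 20]) * 1e3            # offshore distance (m)
    A = np.array([12.0, 9.3, 6.4, 3.4, 1.8, 1.0])        # synthetic 224Ra activity (dpm per 100 L)

    slope, _ = np.polyfit(x, np.log(A), 1)               # slope of ln(activity) vs distance (1/m)
    K = lam_224 / slope**2                               # eddy diffusivity (m^2/s)
    print(f"K ~ {K:.0f} m2/s")
    ```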

  9. Accounting for uncertainty in confounder and effect modifier selection when estimating average causal effects in generalized linear models.

    PubMed

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-09-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.
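
    A simplified sketch of the estimation idea, leaving out the paper's confounder/model-selection layer: because the exposure coefficient and the average causal effect differ in a non-collapsible GLM, the effect is obtained by standardization (g-computation), and Dirichlet(1, ..., 1) Bayesian-bootstrap weights integrate over the empirical confounder distribution. Variable names are hypothetical.

    ```python
    # Standardized (g-computation) causal effect with Bayesian-bootstrap covariate weights.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("cohort.csv")   # hypothetical data: outcome y, exposure a, confounders x1, x2
    fit = smf.glm("y ~ a + x1 + x2 + a:x1", data=df, family=sm.families.Binomial()).fit()

    risk1 = fit.predict(df.assign(a=1))       # predicted risk with everyone exposed
    risk0 = fit.predict(df.assign(a=0))       # predicted risk with no one exposed

    # Bayesian bootstrap over the confounder distribution
    # (outcome-model uncertainty, handled in the paper, is not propagated here)
    rng = np.random.default_rng(1)
    effects = [np.average(risk1 - risk0, weights=rng.dirichlet(np.ones(len(df))))
               for _ in range(500)]
    print(np.mean(effects), np.percentile(effects, [2.5, 97.5]))
    ```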

  10. Accounting for Uncertainty in Confounder and Effect Modifier Selection when Estimating Average Causal Effects in Generalized Linear Models

    PubMed Central

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-01-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012) and Lefebvre et al. (2014), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to non-collapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100 to 150 observations and 50 covariates. The method is applied to data on 15060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within thirty days of diagnosis. PMID:25899155
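
    Both records above describe the same estimator; to make the mechanics concrete, here is a heavily simplified sketch of combining a Bayesian bootstrap with g-computation for an average causal risk difference under a logistic outcome model. It omits the paper's key ingredient (prior-driven selection of confounders and interaction terms), and all data and settings are synthetic placeholders.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # Synthetic data: binary exposure A, covariates X, binary outcome Y
      n, p = 150, 10
      X = rng.normal(size=(n, p))
      A = rng.binomial(1, 0.5, size=n)
      Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * A + X[:, 0] - 0.5 * X[:, 1]))))

      design = np.column_stack([A, X])
      effects = []
      for _ in range(500):                                   # Bayesian bootstrap draws
          w = rng.dirichlet(np.ones(n)) * n                  # Dirichlet(1,...,1) observation weights
          fit = LogisticRegression(max_iter=1000).fit(design, Y, sample_weight=w)
          p1 = fit.predict_proba(np.column_stack([np.ones(n), X]))[:, 1]   # everyone exposed
          p0 = fit.predict_proba(np.column_stack([np.zeros(n), X]))[:, 1]  # everyone unexposed
          effects.append(np.average(p1 - p0, weights=w))     # weighted average causal risk difference

      print("posterior mean ACE: %.3f" % np.mean(effects))
      print("95%% interval: (%.3f, %.3f)" % tuple(np.percentile(effects, [2.5, 97.5])))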

  11. A multiscale modelling methodology applicable for regulatory purposes taking into account effects of complex terrain and buildings on pollutant dispersion: a case study for an inner Alpine basin.

    PubMed

    Oettl, D

    2015-11-01

    Dispersion modelling in complex terrain has always been challenging for modellers. Although a large number of publications are dedicated to that field, candidate methods and models for usage in regulatory applications are scarce. This is all the more true when the combined effect of topography and obstacles on pollutant dispersion has to be taken into account. In Austria, largely situated in Alpine regions, such complex situations are quite frequent. This work deals with an approach which is, in principle, capable of considering both buildings and topography in simulations by combining state-of-the-art wind field models at the micro- (<1 km) and mesoscale γ (2-20 km) with a Lagrangian particle model. In order to make such complex numerical models applicable for regulatory purposes, meteorological input data for the models need to be readily derived from routine observations. Here, use was made of the traditional way to bin meteorological data based on wind direction, speed, and stability class, formerly mainly used in conjunction with Gaussian-type models. It is demonstrated that this approach leads to reasonable agreement (fractional bias < 0.1) between observed and modelled annual average concentrations in an Alpine basin with frequent low-wind-speed conditions, temperature inversions, and quite complex flow patterns, while keeping simulation times within practical limits for applications in licensing procedures. However, due to the simplifications in the derivation of meteorological input data as well as several ad hoc assumptions regarding the boundary conditions of the mesoscale wind field model, the methodology is not suited for computing detailed time and space variations of pollutant concentrations. PMID:26162440

  12. A multiscale modelling methodology applicable for regulatory purposes taking into account effects of complex terrain and buildings on pollutant dispersion: a case study for an inner Alpine basin.

    PubMed

    Oettl, D

    2015-11-01

    Dispersion modelling in complex terrain has always been challenging for modellers. Although a large number of publications are dedicated to that field, candidate methods and models for usage in regulatory applications are scarce. This is all the more true when the combined effect of topography and obstacles on pollutant dispersion has to be taken into account. In Austria, largely situated in Alpine regions, such complex situations are quite frequent. This work deals with an approach which is, in principle, capable of considering both buildings and topography in simulations by combining state-of-the-art wind field models at the micro- (<1 km) and mesoscale γ (2-20 km) with a Lagrangian particle model. In order to make such complex numerical models applicable for regulatory purposes, meteorological input data for the models need to be readily derived from routine observations. Here, use was made of the traditional way to bin meteorological data based on wind direction, speed, and stability class, formerly mainly used in conjunction with Gaussian-type models. It is demonstrated that this approach leads to reasonable agreement (fractional bias < 0.1) between observed and modelled annual average concentrations in an Alpine basin with frequent low-wind-speed conditions, temperature inversions, and quite complex flow patterns, while keeping simulation times within practical limits for applications in licensing procedures. However, due to the simplifications in the derivation of meteorological input data as well as several ad hoc assumptions regarding the boundary conditions of the mesoscale wind field model, the methodology is not suited for computing detailed time and space variations of pollutant concentrations.
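
    The acceptance criterion quoted in the two records above (fractional bias < 0.1) is a standard dispersion-model statistic; a minimal sketch of its usual definition follows, with made-up concentration values rather than anything from the study.

      import numpy as np

      def fractional_bias(observed, modelled):
          """FB = (mean_obs - mean_mod) / (0.5 * (mean_obs + mean_mod)); FB = 0 means no bias."""
          co, cm = np.mean(observed), np.mean(modelled)
          return (co - cm) / (0.5 * (co + cm))

      # Hypothetical annual-average concentrations (ug/m3) at a few monitoring sites
      obs = np.array([28.0, 35.0, 22.0, 41.0])
      mod = np.array([25.0, 36.0, 24.0, 38.0])
      print("FB = %.3f" % fractional_bias(obs, mod))   # |FB| < 0.1 would meet the quoted criterion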

  13. Computer-program documentation of an interactive-accounting model to simulate streamflow, water quality, and water-supply operations in a river basin

    USGS Publications Warehouse

    Burns, A.W.

    1988-01-01

    This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water-quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water-law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic-output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and a listing of the FORTRAN source code are included as attachments to the report. (USGS)
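
    A toy sketch of the node-accounting idea described above: flow is generated per incremental area (here simply given; in the model it comes from the regression equations), and both flow and a conservative dissolved-solids load are accumulated downstream. The node layout, units, and values are hypothetical and are not taken from the USGS program.

      # Node -> immediate upstream nodes (hypothetical network)
      network = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}
      incremental_flow = {"A": 3.2, "B": 1.1, "C": 0.7, "D": 0.4}          # cfs
      incremental_conc = {"A": 310.0, "B": 520.0, "C": 450.0, "D": 400.0}  # dissolved solids, mg/L

      def accumulate(node, flows, loads):
          """Recursively accumulate flow and conservative load from upstream nodes."""
          q = incremental_flow[node]
          load = q * incremental_conc[node]
          for up in network[node]:
              accumulate(up, flows, loads)
              q += flows[up]
              load += loads[up]
          flows[node], loads[node] = q, load

      flows, loads = {}, {}
      accumulate("D", flows, loads)
      for node in ("A", "B", "C", "D"):
          print(node, "flow = %.1f cfs" % flows[node],
                "mean concentration = %.0f mg/L" % (loads[node] / flows[node]))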

  14. An efficient model to predict guided wave radiation by finite-sized sources in multilayered anisotropic plates with account of caustics

    NASA Astrophysics Data System (ADS)

    Stévenin, M.; Lhémery, A.; Grondel, S.

    2016-01-01

    Elastic guided waves (GW), generated by finite-sized transducers, are used in various non-destructive testing (NDT) methods to inspect plate-like structures. Thanks to the long propagation range of GW, a few transducers at permanent positions can provide full coverage of the plate. Transducer diffraction effects take place, leading to complex radiated fields. Optimizing transducer positioning makes it necessary to accurately predict the GW field radiated by a transducer. Fraunhofer-like approximations applied to GW in isotropic homogeneous plates lead to fast and accurate field computation but can fail when applied to multi-layered anisotropic composite plates, as shown by some of the examples given. Here, a model is proposed for composite plates, based on the computation of the approximate Green's tensor describing modal propagation from a source point, with account taken of the caustics typically seen in strongly anisotropic plates. Modal solutions are otherwise obtained by the Semi-Analytic Finite Element method. Transducer diffraction effects are accounted for by means of an angular integration over the transducer surface as seen from the calculation point, that is, over the energy paths involved, which are mode-dependent. The model is validated by comparing its predictions with those computed by means of a full convolution integration of the Green's tensor with the source over the transducer surface. The examples given concern disk- and rectangular-shaped transducers commonly used in NDT.

  15. Mathematical Modeling of the Thermal State of an Isothermal Element with Account of the Radiant Heat Transfer Between Parts of a Spacecraft

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Paleshkin, A. V.; Terent'ev, V. V.; Firsyuk, S. O.

    2016-01-01

    A methodological approach to determination of the thermal state at a point on the surface of an isothermal element of a small spacecraft has been developed. A mathematical model of heat transfer between surfaces of intricate geometric configuration has been described. In this model, account was taken of the external field of radiant fluxes and of the differentiated mutual influence of the surfaces. An algorithm for calculation of the distribution of the density of the radiation absorbed by surface elements of the object under study has been proposed. The temperature field on the lateral surface of the spacecraft exposed to sunlight and on its shady side has been calculated. By determining the thermal state of magnetic controls of the orientation system as an example, the authors have assessed the contribution of the radiation coming from the solar-cell panels and from the spacecraft surface.

  16. The Luminosity Function of Quasars (active Galactic Nuclei) in a Merging Model with the Eddington Limit Taken Into Account

    NASA Astrophysics Data System (ADS)

    Kontorovich, V. M.; Krivitsky, D. S.

    The influence of Eddington's limit on the active galactic nuclei (AGN) luminosity function is investigated within the framework of a phenomenological activity model (Kats and Kontorovich, 1990, 1991) based on angular momentum compensation in the process of galaxy merging. In particular, it is shown that, in spite of the essential dependence of the galaxy merging probability on the galaxy masses, in the most important and interesting case it behaves effectively as a constant, so that the above-mentioned (Kats and Kontorovich, 1991) correspondence between the power exponents of the observed galaxy mass function (Binggeli et al., 1988) and the quasar luminosity function (Boyle et al., 1988; Koo and Kron, 1988; Cristiani et al., 1993) for a constant merger probability holds in reality. A break in the power-law dependence of the luminosity function due to Eddington's restriction (cf. Dibai, 1981; Padovani and Rafanelli, 1988) is obtained in certain cases. A possible correlation between the masses of black holes in AGN and the masses of their host galaxies is discussed. A more detailed paper containing the results presented at this conference was published in Pis'ma v Astron. Zh. (Kontorovich and Krivitsky, 1995). Here we have also added some further notes and references.

  17. Accounting for observational uncertainties in the evaluation of low latitude turbulent air-sea fluxes simulated in a suite of IPSL model versions

    NASA Astrophysics Data System (ADS)

    Servonnat, Jerome; Braconnot, Pascale; Gainusa-Bogdan, Alina

    2015-04-01

    Turbulent momentum and heat (sensible and latent) fluxes at the air-sea interface are key components of the energetics of the Earth's climate, and their good representation in climate models is of prime importance. In this work, we use the methodology developed by Braconnot & Frankignoul (1993) to perform a Hotelling T2 test on spatio-temporal fields (annual cycles). This statistic provides a quantitative measure, accounting for an estimate of the observational uncertainty, for the evaluation of low-latitude turbulent air-sea fluxes in a suite of IPSL model versions. The spread within the observational ensemble of turbulent flux data products assembled by Gainusa-Bogdan et al. (submitted) is used as an estimate of the observational uncertainty for the different turbulent fluxes. The methodology relies on the selection of a small number of dominant variability patterns (EOFs) that are common to both the model and the observations. Consequently, it focuses on the large-scale variability patterns and avoids the possibly noisy smaller scales. The results show that different versions of the IPSL coupled model share common large-scale biases, but also that the skill on sea surface temperature is not necessarily directly related to the skill in the representation of the different turbulent fluxes. Despite the large error bars on the observations, the test clearly distinguishes the different merits of the different model versions. The analyses of the common EOF patterns and related time series provide guidance on the major differences with the observations. This work is a first attempt to use such a statistic for the evaluation of the spatio-temporal variability of the turbulent fluxes while accounting for observational uncertainty, and it represents an efficient tool for the systematic evaluation of simulated air-sea fluxes, considering both the fluxes and the related atmospheric variables. References Braconnot, P., and C. Frankignoul (1993), Testing Model
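
    A schematic sketch of the two generic ingredients mentioned above, truncation onto a few leading EOFs and a Hotelling T2 statistic measured against the spread of an observational ensemble, is shown below with synthetic arrays. It is not the Braconnot & Frankignoul (1993) implementation, and the field sizes, ensemble size, and number of retained EOFs are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Synthetic stand-ins: an ensemble of observational flux products and one model field,
      # each flattened to a vector (grid points x calendar months of the annual cycle).
      n_obs, n_dim, k = 12, 400, 3
      obs_ens = rng.normal(size=(n_obs, n_dim))
      model = rng.normal(size=n_dim) + 0.3

      # EOFs (leading right singular vectors) of the observational-ensemble anomalies
      obs_mean = obs_ens.mean(axis=0)
      _, _, vt = np.linalg.svd(obs_ens - obs_mean, full_matrices=False)
      eofs = vt[:k]

      # Project onto the truncated EOF space and form a one-sample Hotelling T2 statistic
      obs_pc = (obs_ens - obs_mean) @ eofs.T          # observational scatter, (n_obs, k)
      diff_pc = (model - obs_mean) @ eofs.T           # model-minus-observation difference, (k,)
      S = np.cov(obs_pc, rowvar=False)
      t2 = n_obs * diff_pc @ np.linalg.solve(S, diff_pc)
      f_stat = t2 * (n_obs - k) / (k * (n_obs - 1))
      p_val = stats.f.sf(f_stat, k, n_obs - k)
      print("T2 = %.1f, p = %.3f" % (t2, p_val))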

  18. What is accountability in health care?

    PubMed

    Emanuel, E J; Emanuel, L L

    1996-01-15

    Accountability has become a major issue in health care. Accountability entails the procedures and processes by which one party justifies and takes responsibility for its activities. The concept of accountability contains three essential components: 1) the loci of accountability--health care consists of at least 11 different parties that can be held accountable or hold others accountable; 2) the domains of accountability--in health care, parties can be held accountable for as many as six activities: professional competence, legal and ethical conduct, financial performance, adequacy of access, public health promotion, and community benefit; and 3) the procedures of accountability, including formal and informal procedures for evaluating compliance with domains and for disseminating the evaluation and responses by the accountable parties. Different models of accountability stress different domains, evaluative criteria, loci, and procedures. We characterize and compare three dominant models of accountability: 1) the professional model, in which the individual physician and patient participate in shared decision making and physicians are held accountable to professional colleagues and to patients; 2) the economic model, in which the market is brought to bear in health care and accountability is mediated through consumer choice of providers; and 3) the political model, in which physicians and patients interact as citizen-members within a community and in which physicians are accountable to a governing board elected from the members of the community, such as the board of a managed care plan. We argue that no single model of accountability is appropriate to health care. Instead, we advocate a stratified model of accountability in which the professional model guides the physician-patient relationship, the political model operates within managed care plans and other integrated health delivery networks, and the economic and political models operate in the relations between

  19. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and subtropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, namely weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 weeks on the variation of dengue cases under conditions of uncertainty. Subsequently, the combination of the nonlinear lagged effects of the climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease-control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  20. Funding Medical Research Projects: Taking into Account Referees' Severity and Consistency through Many-Faceted Rasch Modeling of Projects' Scores.

    PubMed

    Tesio, Luigi; Simone, Anna; Grzeda, Mariuzs T; Ponzio, Michela; Dati, Gabriele; Zaratin, Paola; Perucca, Laura; Battaglia, Mario A

    2015-01-01

    The funding policy of research projects often relies on scores assigned by a panel of experts (referees). The non-linear nature of raw scores and the severity and inconsistency of individual raters may generate unfair numeric project rankings. Rasch measurement (in its many-facets version, MFRM) provides a valid alternative to raw scoring. MFRM was applied to the scores achieved by 75 research projects on multiple sclerosis sent in response to a previous annual call by FISM-Italian Foundation for Multiple Sclerosis. This allowed us to simulate, a posteriori, the impact of MFRM on the funding scenario. The applications were each scored by 2 to 4 independent referees (total = 131) on a 10-item, 0-3 rating scale called FISM-ProQual-P. The rotation plan assured "connection" of all pairs of projects through at least 1 shared referee. The questionnaire satisfactorily fulfilled the stringent criteria of Rasch measurement for psychometric quality (unidimensionality, reliability and data-model fit). Arbitrarily, 2 acceptability thresholds were set at a raw score of 21/30 and at the equivalent Rasch measure of 61.5/100, respectively. When the cut-off was switched from the score to the measure, 8 out of 18 acceptable projects had to be rejected, while 15 rejected projects became eligible for funding. Some referees, of various severity, were grossly inconsistent (z-std fit indexes less than -1.9 or greater than 1.9). The FISM-ProQual-P questionnaire seems a valid and reliable scale. MFRM may help the decision-making process for allocating funds, not only to MS research projects but also in other fields. In repeated assessment exercises it can help the selection of reliable referees. Their severity can be steadily calibrated, thus obviating the need to connect them with other referees assessing the same projects.
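
    For reference, the many-facet Rasch model underlying such an analysis is conventionally written, for project n, item i, referee j, and rating step k of the 0-3 scale, as (standard Facets notation, not a formula quoted from the paper):

      \log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k

    Here B_n is the project measure, D_i the item difficulty, C_j the referee severity, and F_k the step threshold; estimating C_j explicitly is what allows referee severity to be calibrated and removed from the project measures.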

  1. Assessing the performance of dispersionless and dispersion-accounting methods: helium interaction with cluster models of the TiO2(110) surface.

    PubMed

    de Lara-Castells, María Pilar; Stoll, Hermann; Mitrushchenkov, Alexander O

    2014-08-21

    As a prototypical dispersion-dominated physisorption problem, we analyze here the performance of dispersionless and dispersion-accounting methodologies on the helium interaction with cluster models of the TiO2(110) surface. A special focus has been given to the dispersionless density functional dlDF and the dlDF+Das construction for the total interaction energy (K. Pernal, R. Podeszwa, K. Patkowski, and K. Szalewicz, Phys. Rev. Lett. 2009, 109, 263201), where Das is an effective interatomic pairwise functional form for the dispersion. Likewise, the performance of the symmetry-adapted perturbation theory (SAPT) method is evaluated, where the interacting monomers are described by density functional theory (DFT) with the dlDF, PBE, and PBE0 functionals. Our benchmarks include CCSD(T)-F12b calculations and comparative analysis on the nuclear bound states supported by the He-cluster potentials. Moreover, intra- and intermonomer correlation contributions to the physisorption interaction are analyzed through the method of increments (H. Stoll, J. Chem. Phys. 1992, 97, 8449) at the CCSD(T) level of theory. This method is further applied in conjunction with a partitioning of the Hartree-Fock interaction energy to estimate individual interaction energy components, comparing them with those obtained using the different SAPT(DFT) approaches. The cluster size evolution of dispersionless and dispersion-accounting energy components is then discussed, revealing the reduced role of the dispersionless interaction and intramonomer correlation when the extended nature of the surface is better accounted for. On the contrary, both post-Hartree-Fock and SAPT(DFT) results clearly demonstrate the high-transferability character of the effective pairwise dispersion interaction whatever the cluster model is. Our contribution also illustrates how the method of increments can be used as a valuable tool not only to achieve the accuracy of CCSD(T) calculations using large cluster models but also to

  2. Modelling runoff at the plot scale taking into account rainfall partitioning by vegetation: application to stemflow of banana (Musa spp.) plant

    NASA Astrophysics Data System (ADS)

    Charlier, J.-B.; Moussa, R.; Cattan, P.; Cabidoche, Y.-M.; Voltz, M.

    2009-11-01

    Rainfall partitioning by vegetation modifies the intensity of rainwater reaching the ground, which affects runoff generation. Incident rainfall is intercepted by the plant canopy and then redistributed into throughfall and stemflow. Rainfall intensities at the soil surface are therefore not spatially uniform, generating local variations of runoff production that are disregarded in runoff models. The aim of this paper was to model runoff at the plot scale, accounting for rainfall partitioning by vegetation in the case of plants concentrating rainwater at the plant foot and promoting stemflow. We developed a lumped modelling approach, including a stemflow function that divided the plot into two compartments: one compartment including stemflow and the related water pathways and one compartment for the rest of the plot. This stemflow function was coupled with a production function and a transfer function to simulate a flood hydrograph using the MHYDAS model. Calibrated parameters were a "stemflow coefficient", which compartmented the plot; the saturated hydraulic conductivity (Ks), which controls infiltration and runoff; and the two parameters of the diffusive wave equation. We tested our model on a banana plot of 3000 m2 on permeable Andosol (mean Ks=75 mm h-1) under tropical rainfalls, in Guadeloupe (FWI). Runoff simulations without and with the stemflow function were performed and compared to 18 flood events from 10 to 140 rainfall mm depth. Modelling results showed that the stemflow function improved the calibration of hydrographs according to the error criteria on volume and on peakflow, to the Nash and Sutcliffe coefficient, and to the root mean square error. This was particularly the case for low flows observed during residual rainfall, for which the stemflow function allowed runoff to be simulated for rainfall intensities lower than the Ks measured at the soil surface. This approach also allowed us to take into account the experimental data, without needing to
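
    A minimal sketch of the kind of two-compartment partitioning such a stemflow function performs, under the simplifying assumption that a fixed fraction of incident rainfall is funnelled to the plant-foot area while the remainder reaches the ground as uniform throughfall. The coefficient values are placeholders, not the calibrated parameters of the study.

      def partition_rainfall(intensity, stemflow_coeff=0.2, foot_area_frac=0.05):
          """Split a plot-average rainfall intensity (mm/h) between the plant-foot compartment
          (throughfall + stemflow) and the rest of the plot (throughfall only).
          Mass is conserved: foot_area_frac*i_foot + (1 - foot_area_frac)*i_rest == intensity."""
          i_rest = intensity * (1.0 - stemflow_coeff)
          i_foot = i_rest + intensity * stemflow_coeff / foot_area_frac
          return i_foot, i_rest

      i_foot, i_rest = partition_rainfall(20.0)   # a 20 mm/h storm
      print("plant foot: %.0f mm/h, rest of plot: %.0f mm/h (vs. a measured Ks of 75 mm/h)"
            % (i_foot, i_rest))

    With these illustrative values the plant-foot compartment receives an intensity well above the measured Ks even when the plot-average rainfall is below it, which is the mechanism the stemflow function uses to generate runoff during residual rainfall.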

  3. Calibration and use of an interactive-accounting model to simulate dissolved solids, streamflow, and water-supply operations in the Arkansas River basin, Colorado

    USGS Publications Warehouse

    Burns, A.W.

    1989-01-01

    An interactive-accounting model was used to simulate dissolved solids, streamflow, and water supply operations in the Arkansas River basin, Colorado. Model calibration of specific conductance to streamflow relations at three sites enabled computation of dissolved-solids loads throughout the basin. To simulate streamflow only, all water supply operations were incorporated in the regression relations for streamflow. Calibration for 1940-85 resulted in coefficients of determination that ranged from 0.89 to 0.58, and values in excess of 0.80 were determined for 16 of 20 nodes. The model then incorporated 74 water users and 11 reservoirs to simulate the water supply operations for two periods, 1943-74 and 1975-85. For the 1943-74 calibration, coefficients of determination for streamflow ranged from 0.87 to 0.02. Calibration of the water supply operations resulted in coefficients of determination that ranged from 0.87 to negative values for simulated irrigation diversions of 37 selected water users. Calibration for 1975-85 was not evaluated statistically, but average values and plots of reservoir contents indicated reasonableness of the simulation. To demonstrate the utility of the model, six specific alternatives were simulated to consider effects of potential enlargement of Pueblo Reservoir. Three general major alternatives were simulated: the 1975-85 calibrated model data, the calibrated model data with an addition of 30 cu ft/sec in Fountain Creek flows, and the calibrated model data plus additional municipal water in storage. These three major alternatives considered the options of reservoir enlargement or no enlargement. A 40,000-acre-foot reservoir enlargement resulted in average increases of 2,500 acre-ft in transmountain diversions, of 800 acre-ft in storage diversions, and of 100 acre-ft in winter-water storage. (USGS)

  4. Examining the Role Played by Meteorological Ensemble Forecasts, Ensemble Kalman Filter Streamflow Assimilation, and Multiple Hydrological Models within a Prediction System Accounting for Three Sources of Uncertainty

    NASA Astrophysics Data System (ADS)

    Anctil, F.; Thiboult, A.; Boucher, M. A.

    2015-12-01

    Building a hydrological ensemble prediction system (H-EPS) from an operational deterministic one may look like an easy task. Indeed, one has only to issue many forecasts at each time step instead of a single one. The problem gets much more complicated when the predictive distribution is also required to be interpretable (reliable). This presentation examines the role played by meteorological ensemble forecasts (meteorological uncertainty), Ensemble Kalman Filter (EnKF) streamflow assimilation (initial-condition uncertainty), and multiple hydrological models (structural uncertainty), combined in a nearly reliable H-EPS. The EnKF is shown to contribute largely to the ensemble accuracy and dispersion, indicating that the initial-condition uncertainty is dominant. However, it fails to maintain the required dispersion throughout the entire forecast horizon and needs to be supported by a multimodel approach to take into account structural uncertainty. Moreover, the multimodel approach contributes to improving the general forecasting performance and prevents falling into the model-selection pitfall, since the models differ strongly in their ability. Finally, the use of probabilistic meteorological forcing was found to contribute mostly to long-lead-time reliability. The H-EPS was implemented on 38 catchments (Québec, Canada) characterized by a dominant spring freshet and tested for 9-day-ahead forecasts over a 2-year period. Twenty lumped hydrological models, chosen for their structural and conceptual diversity, were available to the project, as well as ECMWF probabilistic meteorological weather forecasts.

  5. Accounting for age Structure in Ponderosa Pine Ecosystem Analyses: Integrating Management, Disturbance Histories and Observations with the BIOME-BGC Model

    NASA Astrophysics Data System (ADS)

    Hibbard, K. A.; Law, B.; Thornton, P.

    2003-12-01

    61% for sites averaging 9, 16, and 23 years, respectively. It was assumed that changes in long-term pools (e.g., soil C) were negligible within these timeframes. In Law et al. (2003), the model performed well for old and mature sites; however, model simulations of the younger sites (9-50Y) were weak compared to NEP estimates from observations. Error for the young plots in Law et al. (2003) ranged from 150% to more than 400% of observed NEP. By accounting for the observed age structure through harvest removal, model error in this study ranged from 20% to 90% in young plots. This study is one of a few that have sought to account for age structure in simulating ecosystem dynamics and processes.

  6. S-R Associations, Their Extinction, and Recovery in an Animal Model of Anxiety: A New Associative Account of Phobias Without Recall of Original Trauma

    PubMed Central

    Laborda, Mario A.; Miller, Ralph R.

    2012-01-01

    Associative accounts of the etiology of phobias have been criticized because of numerous cases of phobias in which the client does not remember a relevant traumatic event (i.e., Pavlovian conditioning trial), instructions, or vicarious experience with the phobic object. In three lick suppression experiments with rats as subjects, we modeled an associative account of such fears. Experiment 1 assessed stimulus-response (S-R) associations in first-order fear conditioning. After behaviorally complete devaluation of the unconditioned stimulus, the target stimulus still produced strong conditioned responses, suggesting that an S-R association had been formed and that this association was not significantly affected when the outcome was devalued through unsignaled presentations of the unconditioned stimulus. Experiments 2 and 3 examined extinction and recovery of S-R associations. Experiment 2 showed that extinguished S-R associations returned when testing occurred outside of the extinction context (i.e., renewal) and Experiment 3 found that a long delay between extinction and testing also produced a return of the extinguished S-R associations (i.e., spontaneous recovery). These experiments suggest that fears for which people cannot recall a cause are explicable in an associative framework, and indicate that those fears are susceptible to relapse after extinction treatment just like stimulus-outcome (S-O) associations. PMID:21496503

  7. International Accounting and the Accounting Educator.

    ERIC Educational Resources Information Center

    Laribee, Stephen F.

    The American Assembly of Collegiate Schools of Business (AACSB) has been instrumental in internationalizing the accounting curriculum by means of accreditation requirements and standards. Colleges and universities have met the AACSB requirements either by providing separate international accounting courses or by integrating international topics…

  8. A Harmonious Accounting Duo?

    ERIC Educational Resources Information Center

    Schapperle, Robert F.; Hardiman, Patrick F.

    1992-01-01

    Accountants have urged "harmonization" of standards between the Governmental Accounting Standards Board and the Financial Accounting Standards Board, recommending similar reporting of like transactions. However, varying display of similar accounting events does not necessarily indicate disharmony. The potential for problems because of differing…

  9. Enhancing the Variable Infiltration Capacity Model to Account for Natural and Anthropogenic Impacts on Evapotranspiration in the North American Monsoon Region

    NASA Astrophysics Data System (ADS)

    Bohn, T. J.; Vivoni, E. R.

    2015-12-01

    Evapotranspiration (ET) is a poorly constrained flux in the North American monsoon (NAM) region, leading to potential errors in land-atmosphere feedbacks. Due to the region's arid to semi-arid climate, two factors play major roles in ET: sparse vegetation that exhibits dramatic seasonal greening, and irrigated agriculture. To more accurately characterize the spatio-temporal variations of ET in the NAM region, we used the Variable Infiltration Capacity (VIC) model, modified to account for soil evaporation (Esoil), irrigated agriculture, and the variability of land surface properties derived from the Moderate Resolution Imaging Spectroradiometer during 2000-2012. Simulated ET patterns were compared to field observations at fifty-nine eddy covariance towers, water balance estimates in nine basins, and six available gridded ET products. The modified VIC model performed well at eddy covariance towers representing the natural and agricultural land covers in the region. Simulations revealed that major source areas for ET were forested mountain areas during the summer season and irrigated croplands at peak times of growth in the winter and summer, accounting for 22% and 9% of the annual ET, respectively. Over the NAM region, Esoil was the largest component (60%) of annual ET, followed by plant transpiration (T, 32%) and evaporation of canopy interception (8%). Esoil and T displayed different relations with P in natural land covers, with Esoil tending to peak earlier than T by up to one month, while only a weak correlation between ET and P was found in irrigated croplands. These VIC-based estimates are the most realistic to date for this region, outperforming several other process-based and remote-sensing-based gridded ET products. Furthermore, spatio-temporal patterns reveal new information on the magnitudes, locations and timing of ET in the North American monsoon region, with implications for land-atmosphere feedbacks.

  10. Estimating the evolution of atrazine concentration in a fractured sandstone aquifer using lumped-parameter models and taking land-use into account

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Gallé, T.; Bayerle, M.; Pittois, D.; Braun, C.; El Khabbaz, H.; Maloszewski, P.; Elsner, M.

    2012-04-01

    The European water framework directive and the groundwater directive require member states to identify water bodies at risk and to assess the significance of increasing trends in pollutant concentration. For groundwater bodies, estimating the time to trend reversal or the pollution potential of the different sources present in the catchment requires a sound understanding of the hydraulic behaviour of the aquifer. Although numerical groundwater models can theoretically be used for such forecasts, their calibration remains problematic in many real-world cases. A more parsimonious lumped-parameter model was applied to predict the evolution of atrazine concentration in springs draining a fractured sandstone aquifer in Luxembourg. Despite a nationwide ban in 2005, spring water concentrations of both atrazine and its metabolite desethylatrazine still had not begun to decrease four years later. The transfer function of the model was calibrated using tritium measurements and modified to take into account the fact that, whereas tritium is applied uniformly over the entire catchment, atrazine was only used in areas where cereals are grown. We could also show that sorption processes in the aquifer can be neglected and that including pesticide degradation does not modify the shape of the atrazine breakthrough, but only affects the magnitude of the predicted spring water concentration. Results indicate that, due to the large hydraulic inertia of the aquifer, trend reversal should not be expected before 2018.
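
    The lumped-parameter idea described above amounts to a convolution of the input history with a transit-time distribution, optionally weighted by first-order decay. The sketch below uses an exponential transit-time distribution and placeholder values purely for illustration; the paper's calibrated transfer function, land-use weighting, and parameters are not reproduced here.

      import numpy as np

      years = np.arange(1960, 2031)
      c_in = ((years >= 1970) & (years <= 2005)).astype(float)  # hypothetical unit input, banned in 2005

      tau = np.arange(0, 120)          # transit time (years)
      tau_mean = 18.0                  # assumed mean transit time
      lam = 0.0                        # first-order degradation rate (0 = conservative), 1/yr
      g = np.exp(-tau / tau_mean) / tau_mean                    # exponential transit-time distribution

      # Spring concentration = convolution of the input with the decay-weighted transfer function
      c_spring = np.convolve(c_in, g * np.exp(-lam * tau))[:len(years)]
      for y in (2005, 2010, 2020):
          print(y, "relative spring-water concentration: %.2f" % c_spring[years == y][0])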

  11. Towards a Best Practice Approach in PBPK Modeling: Case Example of Developing a Unified Efavirenz Model Accounting for Induction of CYPs 3A4 and 2B6.

    PubMed

    Ke, A; Barter, Z; Rowland-Yeo, K; Almond, L

    2016-07-01

    In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug-drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0-1.7-fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz. PMID:27435752

  12. Towards a Best Practice Approach in PBPK Modeling: Case Example of Developing a Unified Efavirenz Model Accounting for Induction of CYPs 3A4 and 2B6

    PubMed Central

    Ke, A; Barter, Z; Rowland‐Yeo, K

    2016-01-01

    In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug‐drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0–1.7‐fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz. PMID:27435752

  13. Towards a Best Practice Approach in PBPK Modeling: Case Example of Developing a Unified Efavirenz Model Accounting for Induction of CYPs 3A4 and 2B6.

    PubMed

    Ke, A; Barter, Z; Rowland-Yeo, K; Almond, L

    2016-07-01

    In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug-drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0-1.7-fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz.
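
    As background to the three records above (not content taken from them): the induction liability that a PBPK model resolves dynamically is often first screened with a mechanistic static calculation. A hepatic-only sketch follows; fm, Emax, EC50, the scaling factor d, and the unbound inducer concentration are illustrative placeholders, and the full PBPK treatment additionally handles gut induction, autoinduction, and time dependence.

      def auc_ratio_induction(fm_enzyme, emax, ec50, inducer_conc, d=1.0):
          """Simplified hepatic-only static induction model:
          AUCR = 1 / (fm * E + (1 - fm)), with fold activity E = 1 + d*Emax*I/(I + EC50)."""
          e_fold = 1.0 + d * emax * inducer_conc / (inducer_conc + ec50)
          return 1.0 / (fm_enzyme * e_fold + (1.0 - fm_enzyme))

      # Illustrative victim drug cleared 80% by the induced enzyme
      print("predicted AUC ratio: %.2f"
            % auc_ratio_induction(fm_enzyme=0.8, emax=9.0, ec50=1.0, inducer_conc=0.5))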

  14. Information-Theoretic Benchmarking of Land Surface Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and the boundary conditions that describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
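
    To make the information-theoretic framing concrete, here is a crude histogram (plug-in) mutual-information sketch of the kind of quantity such benchmarks are built from; the estimator, binning, and synthetic data below are placeholders and are not the authors' method or the NLDAS results.

      import numpy as np

      def mutual_information(x, y, bins=20):
          """Crude plug-in estimate of I(X;Y) in bits from a 2-D histogram."""
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

      rng = np.random.default_rng(0)
      obs = rng.normal(size=5000)                      # synthetic "observed" soil moisture (standardized)
      sim = 0.7 * obs + 0.3 * rng.normal(size=5000)    # a model that captures part of the signal

      i_model = mutual_information(sim, obs)
      i_upper = mutual_information(obs, obs)           # upper bound at this binning
      print("fraction of recoverable information captured: %.2f" % (i_model / i_upper))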

  15. JSC interactive basic accounting system

    NASA Technical Reports Server (NTRS)

    Spitzer, J. F.

    1978-01-01

    Design concepts for an interactive basic accounting system (IBAS) are considered in terms of selecting the design option which provides the best response at the lowest cost. Modeling the IBAS workload and applying this workload to a U1108 EXEC 8 based system using both a simulation model and the real system is discussed.

  16. A Single-Level Tunnel Model to Account for Electrical Transport through Single Molecule- and Self-Assembled Monolayer-based Junctions

    PubMed Central

    Garrigues, Alvar R.; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R.; Thompson, Damien; del Barco, Enrique; Nijhuis, Christian A.

    2016-01-01

    We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction. PMID:27216489
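
    A minimal numerical sketch of the generic single-level (Landauer-type) picture invoked in this and the two related records: a Lorentzian transmission through one molecular level, with the temperature dependence entering only through the Fermi functions of the electrodes. The level position, couplings, and bias below are arbitrary illustrative values, not parameters fitted to the ferrocene junctions.

      import numpy as np

      KB = 8.617e-5          # Boltzmann constant, eV/K
      G0 = 7.748e-5          # conductance quantum 2e^2/h, siemens

      def current(voltage, eps0=0.3, gamma_l=0.01, gamma_r=0.01, temp=300.0):
          """Current (A) through a single level at eps0 (eV) with couplings gamma_l/r (eV)."""
          e = np.linspace(-2.0, 2.0, 4001)                               # energy grid (eV)
          trans = gamma_l * gamma_r / ((e - eps0) ** 2 + ((gamma_l + gamma_r) / 2.0) ** 2)
          fermi = lambda mu: 1.0 / (1.0 + np.exp((e - mu) / (KB * temp)))
          window = fermi(+voltage / 2.0) - fermi(-voltage / 2.0)         # symmetric bias drop
          return G0 * np.trapz(trans * window, e)                        # I = (2e/h) * integral

      for t in (200.0, 300.0, 400.0):
          print("T = %.0f K, I(0.2 V) = %.2e A" % (t, current(0.2, temp=t)))

    With the level lying outside the bias window, the current grows with temperature because thermal broadening of the Fermi functions brings carriers closer to resonance, which is the exponential temperature dependence discussed in the abstract.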

  17. A Single-Level Tunnel Model to Account for Electrical Transport through Single Molecule- and Self-Assembled Monolayer-based Junctions

    NASA Astrophysics Data System (ADS)

    Garrigues, Alvar R.; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R.; Thompson, Damien; Del Barco, Enrique; Nijhuis, Christian A.

    2016-05-01

    We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction.

  18. User's guide for RIV2; a package for routing and accounting of river discharge for a modular, three-dimensional, finite-difference, ground- water flow model

    USGS Publications Warehouse

    Miller, Roger S.

    1988-01-01

    RIV2 is a package for the U.S. Geological Survey's modular, three-dimensional, finite-difference, groundwater flow model developed by M. G. McDonald and A. W. Harbaugh that simulates river-discharge routing. RIV2 replaces RIV1, the original river package used in the model. RIV2 preserves the basic logic of RIV1, but better represents river-discharge routing. The main features of RIV2 are (1) The river system is divided into reaches and simulated river discharge is routed from one node to the next. (2) Inflow (river discharge) entering the upstream end of a reach can be specified. (3) More than one river can be represented at one node and rivers can cross, as when representing a siphon. (4) The quantity of leakage to or from the aquifer at a given node is proportional to the hydraulic-head difference between that specified for the river and that calculated for the aquifer. Also, the quantity of leakage to the aquifer at any node can be limited by the user and, within this limit, the maximum leakage to the aquifer is the discharge available in the river. This feature allows for the simulation of intermittent rivers and drains that have no discharge routed to their upstream reaches. (5) An accounting of river discharge is maintained. Neither stage-discharge relations nor storage in the river or river banks is simulated. (USGS)
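
    A toy sketch of the per-reach leakage rule described above: head-difference-driven exchange, with leakage to the aquifer capped by both a user limit and the discharge actually available in the river. Variable names and values are hypothetical; this is not the RIV2 FORTRAN source.

      def reach_leakage(conductance, river_stage, aquifer_head, q_available, q_limit):
          """Leakage to (+) or from (-) the aquifer for one reach; losses are capped by
          the user-set limit and by the discharge available in the river."""
          q = conductance * (river_stage - aquifer_head)   # head-difference-driven exchange
          if q > 0.0:                                      # river loses water to the aquifer
              q = min(q, q_limit, q_available)
          return q

      def route_reach(q_inflow, conductance, river_stage, aquifer_head, q_limit):
          """Route discharge through a reach, removing leakage to (or adding gains from) the aquifer."""
          q_leak = reach_leakage(conductance, river_stage, aquifer_head, q_inflow, q_limit)
          return q_inflow - q_leak, q_leak

      q_out, q_leak = route_reach(5.0, conductance=2.0, river_stage=101.5,
                                  aquifer_head=100.0, q_limit=2.5)
      print("leakage to aquifer: %.1f, routed downstream: %.1f (arbitrary units)" % (q_leak, q_out))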

  19. Toward a 3D cellular model for studying in vitro the outcome of photodynamic treatments: accounting for the effects of tissue complexity.

    PubMed

    Alemany-Ribes, Mireia; García-Díaz, María; Busom, Marta; Nonell, Santi; Semino, Carlos E

    2013-08-01

    Clinical therapies have traditionally been developed using two-dimensional (2D) cell culture systems, which fail to accurately capture tissue complexity. Therefore, three-dimensional (3D) cell cultures are more attractive platforms to integrate multiple cues that arise from the extracellular matrix and cells, closer to an in vivo scenario. Here we report the development of a 3D cellular model for the in vitro assessment of the outcome of oxygen- and drug-dependent therapies, exemplified by photodynamic therapy (PDT). Using a synthetic self-assembling peptide as a cellular scaffold (RAD16-I), we were able to recreate the in vivo limitation of oxygen and drug diffusion and its biological effect, which is the development of cellular resistance to therapy. For the first time, the production and decay of the cytotoxic species singlet oxygen could be observed in a 3D cell culture. Results revealed that the intrinsic mechanism of action is maintained in both systems and, hence, the dynamic mass transfer effects accounted for the major differences in efficacy between the 2D and 3D models. We propose that this methodological approach will help to improve the efficacy of future oxygen- and drug-dependent therapies such as PDT.

  20. A Single-Level Tunnel Model to Account for Electrical Transport through Single Molecule- and Self-Assembled Monolayer-based Junctions.

    PubMed

    Garrigues, Alvar R; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R; Thompson, Damien; Del Barco, Enrique; Nijhuis, Christian A

    2016-01-01

    We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction. PMID:27216489

  1. Testing for the dual-route cascade reading model in the brain: an fMRI effective connectivity account of an efficient reading style.

    PubMed

    Levy, Jonathan; Pernet, Cyril; Treserras, Sébastien; Boulanouar, Kader; Aubry, Florent; Démonet, Jean-François; Celsis, Pierre

    2009-01-01

    Neuropsychological data about the forms of acquired reading impairment provide a strong basis for the theoretical framework of the dual-route cascade (DRC) model which is predictive of reading performance. However, lesions are often extensive and heterogeneous, thus making it difficult to establish precise functional anatomical correlates. Here, we provide a connective neural account with the aim of accommodating the main principles of the DRC framework and making predictions about reading skill. We located prominent reading areas using fMRI and applied structural equation modeling to pinpoint distinct neural pathways. The functionality of the regions, together with neural network dissociations between words and pseudowords, corroborates the existing neuroanatomical view on the DRC and provides a novel outlook on the sub-regions involved. In a similar vein, congruent (or incongruent) reliance of pathways, that is reliance on the word (or pseudoword) pathway during word reading and on the pseudoword (or word) pathway during pseudoword reading predicted good (or poor) reading performance as assessed by out-of-magnet reading tests. Finally, inter-individual analysis unraveled an efficient reading style mirroring pathway reliance as a function of the fingerprint of the stimulus to be read, suggesting an optimal pattern of cerebral information trafficking which leads to high reading performance. PMID:19688099

  2. Testing for the Dual-Route Cascade Reading Model in the Brain: An fMRI Effective Connectivity Account of an Efficient Reading Style

    PubMed Central

    Levy, Jonathan; Pernet, Cyril; Treserras, Sébastien; Boulanouar, Kader; Aubry, Florent; Démonet, Jean-François; Celsis, Pierre

    2009-01-01

    Neuropsychological data about the forms of acquired reading impairment provide a strong basis for the theoretical framework of the dual-route cascade (DRC) model which is predictive of reading performance. However, lesions are often extensive and heterogeneous, thus making it difficult to establish precise functional anatomical correlates. Here, we provide a connective neural account with the aim of accommodating the main principles of the DRC framework and making predictions about reading skill. We located prominent reading areas using fMRI and applied structural equation modeling to pinpoint distinct neural pathways. The functionality of the regions, together with neural network dissociations between words and pseudowords, corroborates the existing neuroanatomical view on the DRC and provides a novel outlook on the sub-regions involved. In a similar vein, congruent (or incongruent) reliance of pathways, that is reliance on the word (or pseudoword) pathway during word reading and on the pseudoword (or word) pathway during pseudoword reading predicted good (or poor) reading performance as assessed by out-of-magnet reading tests. Finally, inter-individual analysis unraveled an efficient reading style mirroring pathway reliance as a function of the fingerprint of the stimulus to be read, suggesting an optimal pattern of cerebral information trafficking which leads to high reading performance. PMID:19688099

  3. Bayesian conjugate analysis using a generalized inverted Wishart distribution accounts for differential uncertainty among the genetic parameters--an application to the maternal animal model.

    PubMed

    Munilla, S; Cantet, R J C

    2012-06-01

    Consider the estimation of genetic (co)variance components from a maternal animal model (MAM) using a conjugate Bayesian approach. Usually, more uncertainty is expected a priori on the value of the maternal additive variance than on the value of the direct additive variance. However, it is not possible to model such differential uncertainty when assuming an inverted Wishart (IW) distribution for the genetic covariance matrix. Instead, consider the use of a generalized inverted Wishart (GIW) distribution. The GIW is essentially an extension of the IW distribution with a larger set of distinct parameters. In this study, the GIW distribution in its full generality is introduced and theoretical results regarding its use as the prior distribution for the genetic covariance matrix of the MAM are derived. In particular, we prove that the conditional conjugacy property holds so that parameter estimation can be accomplished via the Gibbs sampler. A sampling algorithm is also sketched. Furthermore, we describe how to specify the hyperparameters to account for differential prior opinion on the (co)variance components. A recursive strategy to elicit these parameters is then presented and tested using field records and simulated data. The procedure returned accurate estimates and reduced standard errors when compared with non-informative prior settings while improving the convergence rates. In general, faster convergence was always observed when a stronger weight was placed on the prior distributions. However, analyses based on the IW distribution have also produced biased estimates when the prior means were set to over-dispersed values.
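
    The GIW distribution itself is not available in standard statistical libraries; as a point of reference only, the sketch below draws a 2 x 2 direct-maternal genetic covariance matrix from the ordinary inverse Wishart full conditional (the special case the paper generalizes), assuming an identity relationship matrix and placeholder hyperparameters.

      import numpy as np
      from scipy.stats import invwishart

      rng = np.random.default_rng(0)

      # Placeholder "true" direct/maternal (co)variances and simulated additive effects
      g_true = np.array([[2.0, -0.4],
                         [-0.4, 1.0]])
      n = 500
      u = rng.multivariate_normal(np.zeros(2), g_true, size=n)   # (direct, maternal) effects

      # Conjugate update with G ~ IW(nu0, S0):  G | u ~ IW(nu0 + n, S0 + u'u).
      # A GIW prior would instead allow more prior uncertainty on the maternal variance
      # than on the direct variance via distinct hyperparameters.
      nu0, S0 = 5.0, np.eye(2)
      g_draws = invwishart.rvs(df=nu0 + n, scale=S0 + u.T @ u, size=1000, random_state=1)
      print("posterior mean of G:\n", g_draws.mean(axis=0))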

  4. Negotiations and Accountability

    ERIC Educational Resources Information Center

    Hough, Charles R.

    1971-01-01

    School boards by state statutes are alone accountable for the education of their communities' youth. What's needed, the writer contends, is a rectification of the statutes so that all parties to negotiations are accountable. (Editor)

  5. LMAL Accounting Office 1936

    NASA Technical Reports Server (NTRS)

    1936-01-01

    Accounting Office: The Langley Memorial Aeronautical Laboratory's accounting office, 1936, with photographs of the Wright brothers on the wall. Although the Lab was named after Samuel P. Langley, most of the NACA staff held the Wrights as their heroes.

  6. Viscoplastic Model Development to Account for Strength Differential: Application to Aged Inconel 718 at Elevated Temperature. Degree awarded by Pennsylvania State Univ., 2000

    NASA Technical Reports Server (NTRS)

    Iyer, Saiganesh; Lerch, Brad (Technical Monitor)

    2001-01-01

    The magnitudes of the yield and flow stresses in aged Inconel 718 are observed to be different in tension and compression. This phenomenon, called the strength differential (SD), contradicts the metal plasticity axiom that the second deviatoric stress invariant alone is sufficient for representing yield and flow. Apparently, at least one of the other two stress invariants is also significant. A unified viscoplastic model was developed that is able to account for the SD effect in aged Inconel 718. Building this model involved both theory and experiments. First, a general threshold function was proposed that depends on all three stress invariants, and then the flow and evolution laws were developed using a potential-based thermodynamic framework. Judiciously chosen shear and axial tests were conducted to characterize the material. Shear tests involved monotonic loading, relaxation, and creep tests with different loading rates and load levels. The axial tests were tension and compression tests that resulted in sufficiently large inelastic strains. All tests were performed at 650 C. The viscoplastic material parameters were determined by optimizing the fit to the shear tests, during which the first and the third stress invariants remained zero. The threshold surface parameters were then fit to the tension and compression test data. An experimental procedure was established to quantify the effect of each stress invariant on inelastic deformation. This requires conducting tests with nonproportional three-dimensional load paths. Validation of the model was done using biaxial tests on tubular specimens of aged Inconel 718 using proportional and nonproportional axial-torsion loading. These biaxial tests also helped to determine the most appropriate form of the threshold function; that is, how to combine the stress invariants. Of the set of trial threshold functions, the ones that incorporated the third stress invariant give the best predictions. However, inclusion of the first
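
    The central technical idea is that a threshold (yield) function built from all three stress invariants, rather than from J2 alone, can distinguish tension from compression. The sketch below computes I1, J2 and J3 for a stress tensor and evaluates a generic trial threshold; the functional form and the coefficients are hypothetical placeholders, not the form identified in the study.

    ```python
    import numpy as np

    def stress_invariants(sigma):
        """Return I1, J2, J3 for a 3x3 Cauchy stress tensor (MPa)."""
        I1 = np.trace(sigma)                    # first invariant of the stress tensor
        s = sigma - (I1 / 3.0) * np.eye(3)      # deviatoric part
        J2 = 0.5 * np.trace(s @ s)              # second deviatoric invariant
        J3 = np.linalg.det(s)                   # third deviatoric invariant
        return I1, J2, J3

    def trial_threshold(sigma, a=0.05, b=0.1):
        """A generic (hypothetical) threshold combining all three invariants.

        Classical J2 plasticity keeps only sqrt(J2); the extra I1 and J3 terms
        are what allow a tension/compression strength differential to appear.
        """
        I1, J2, J3 = stress_invariants(sigma)
        return np.sqrt(J2) + a * I1 + b * np.cbrt(J3)

    # Uniaxial tension vs. compression at 700 MPa: J2 is identical for both,
    # but the I1- and J3-dependent terms differ, so the threshold value differs.
    tension = np.diag([700.0, 0.0, 0.0])
    compression = np.diag([-700.0, 0.0, 0.0])
    print(trial_threshold(tension), trial_threshold(compression))
    ```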

  7. Managerial Accounting. Study Guide.

    ERIC Educational Resources Information Center

    Plachta, Leonard E.

    This self-instructional study guide is part of the materials for a college-level programmed course in managerial accounting. The study guide is intended for use by students in conjunction with a separate textbook, Horngren's "Accounting for Management Control: An Introduction," and a workbook, Curry's "Student Guide to Accounting for Management…

  8. Accounting & Computing Curriculum Guide.

    ERIC Educational Resources Information Center

    Avani, Nathan T.; And Others

    This curriculum guide consists of materials for use in teaching a competency-based accounting and computing course that is designed to prepare students for employability in the following occupational areas: inventory control clerk, invoice clerk, payroll clerk, traffic clerk, general ledger bookkeeper, accounting clerk, account information clerk,…

  9. Accounting Education in Crisis

    ERIC Educational Resources Information Center

    Turner, Karen F.; Reed, Ronald O.; Greiman, Janel

    2011-01-01

    Almost on a daily basis new accounting rules and laws are put into use, creating information that must be known and learned by the accounting faculty and then introduced to and understood by the accounting student. Even with the 150 hours of education now required for CPA licensure, it is impossible to teach and learn all there is to learn. Over…

  10. The Accounting Capstone Problem

    ERIC Educational Resources Information Center

    Elrod, Henry; Norris, J. T.

    2012-01-01

    Capstone courses in accounting programs bring students experiences integrating across the curriculum (University of Washington, 2005) and offer unique (Sanyal, 2003) and transformative experiences (Sill, Harward, & Cooper, 2009). Students take many accounting courses without preparing complete sets of financial statements. Accountants not only…

  11. Educational Leadership in an Era of Accountability.

    ERIC Educational Resources Information Center

    Riles, Wilson

    Given the present economic situation, it is inevitable that more state legislatures and school boards will adopt a "cost accounting" attitude toward education. However, schools aren't factories, and using an industrial model for accountability doesn't work. To have a viable system of accountability, everyone who is concerned with education must be…

  12. Learning by Doing: Concepts and Models for Service-Learning in Accounting. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Rama, D. V., Ed.

    This volume is part of a series of 18 monographs on service learning and the academic disciplines. It is designed to (1) develop a theoretical framework for service learning in accounting consistent with the goals identified by accounting educators and the recent efforts toward curriculum reform, and (2) describe specific active learning…

  13. (13)C metabolic flux analysis in neurons utilizing a model that accounts for hexose phosphate recycling within the pentose phosphate pathway.

    PubMed

    Gebril, Hoda M; Avula, Bharathi; Wang, Yan-Hong; Khan, Ikhlas A; Jekabsons, Mika B

    2016-02-01

    Glycolysis, mitochondrial substrate oxidation, and the pentose phosphate pathway (PPP) are critical for neuronal bioenergetics and oxidation-reduction homeostasis, but quantitating their fluxes remains challenging, especially when processes such as hexose phosphate (i.e., glucose/fructose-6-phosphate) recycling in the PPP are considered. A hexose phosphate recycling model was developed which exploited the rates of glucose consumption, lactate production, and mitochondrial respiration to infer fluxes through the major glucose consuming pathways of adherent cerebellar granule neurons by replicating [(13)C]lactate labeling from metabolism of [1,2-(13)C2]glucose. Flux calculations were predicated on a steady-state system with reactions having known stoichiometries and carbon atom transitions. Non-oxidative PPP activity and consequent hexose phosphate recycling, as well as pyruvate production by cytoplasmic malic enzyme, were optimized by the model and found to account for 28 ± 2% and 7.7 ± 0.2% of hexose phosphate and pyruvate labeling, respectively. From the resulting fluxes, 52 ± 6% of glucose was metabolized by glycolysis, compared to 19 ± 2% by the combined oxidative/non-oxidative pentose cycle that allows for hexose phosphate recycling, and 29 ± 8% by the combined oxidative PPP/de novo nucleotide synthesis reactions. By extension, 62 ± 6% of glucose was converted to pyruvate, the metabolism of which resulted in 16 ± 1% of glucose oxidized by mitochondria and 46 ± 6% exported as lactate. The results indicate a surprisingly high proportion of glucose utilized by the pentose cycle and the reactions synthesizing nucleotides, and exported as lactate. While the in vitro conditions to which the neurons were exposed (high glucose, no lactate or other exogenous substrates) limit extrapolating these results to the in vivo state, the approach provides a means of assessing a number of metabolic fluxes within the context of hexose phosphate recycling in the PPP from a
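
    The flux calculations rest on steady-state mass balances with known stoichiometry plus measured exchange rates. The toy Python sketch below shows the idea for a single pyruvate node solved by least squares; the network and the measured rates are hypothetical and far simpler than the hexose-phosphate-recycling model used in the paper.

    ```python
    import numpy as np

    # Toy steady-state balance around pyruvate (not the paper's full PPP model):
    #   glycolysis:      glucose -> 2 pyruvate            flux v_gly
    #   lactate export:  pyruvate -> lactate              flux v_lac
    #   oxidation:       pyruvate -> 3 CO2 (mitochondria) flux v_ox
    # Steady state for pyruvate: 2*v_gly - v_lac - v_ox = 0.

    # Hypothetical measured rates (nmol/min/mg protein):
    glucose_uptake = 10.0   # constrains v_gly
    lactate_prod   = 13.0   # constrains v_lac
    o2_consumption = 20.5   # ~3 O2 per pyruvate oxidised -> constrains v_ox

    # Unknown flux vector v = [v_gly, v_lac, v_ox]; rows are the balance plus measurements.
    A = np.array([[ 2.0, -1.0, -1.0],   # pyruvate balance = 0
                  [ 1.0,  0.0,  0.0],   # v_gly  = glucose uptake
                  [ 0.0,  1.0,  0.0],   # v_lac  = lactate production
                  [ 0.0,  0.0,  3.0]])  # 3*v_ox = O2 consumption
    b = np.array([0.0, glucose_uptake, lactate_prod, o2_consumption])

    v_gly, v_lac, v_ox = np.linalg.lstsq(A, b, rcond=None)[0]
    print(f"glycolytic flux {v_gly:.2f}, lactate export {v_lac:.2f}, oxidation {v_ox:.2f}")
    print(f"fraction of pyruvate carbon exported as lactate: {v_lac / (v_lac + v_ox):.1%}")
    ```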

  14. Do we need to account for scenarios of land use/land cover changes in regional climate modeling and impact studies?

    NASA Astrophysics Data System (ADS)

    Strada, Susanna; de Noblet-Ducoudré, Nathalie; Perrin, Mathieu; Stefanon, Marc

    2016-04-01

    By modifying the Earth's natural landscapes, humans have introduced an imbalance in the Earth System's energy, water and emission fluxes via land-use and land-cover changes (LULCCs). Through land-atmosphere interactions, LULCCs influence weather, air quality and climate at different scales, from regional/local (a few tens of kilometres) (Pielke et al., 2011) to global (a few hundred kilometres) (Mahmood et al., 2014). Therefore, in the context of climate change, LULCCs will play a role locally/regionally in altering weather/atmospheric conditions. In addition to the global climate change impacts, LULCCs will possibly induce further changes in the functioning of terrestrial ecosystems and thereby affect adaptation strategies. If LULCCs influence weather/atmospheric conditions, could land use planning alter climate conditions and ease the impact of climate change by wisely shaping urban and rural landscapes? Nowadays, numerical land-atmosphere modelling makes it possible to assess LULCC impacts at different scales (e.g., Marshall et al., 2003; de Noblet-Ducoudré et al., 2011). However, most climate change scenarios used to force impact models result from downscaling procedures that do not account for LULCCs (e.g., Jacob et al., 2014). Therefore, if numerical modelling can help tackle the discussion about LULCCs, do existing LULCC scenarios encompass realistic changes in terms of land use planning? In the present study, we apply a surface model to compare projected LULCC scenarios over France and to assess their impacts on surface fluxes (i.e., water, heat and carbon dioxide fluxes) and on water and carbon storage in soils. To depict future LULCCs in France, we use RCP scenarios from the IPCC AR5 report (Moss et al., 2011). LULCCs encompassed in RCPs are discussed in terms of: (a) their impacts on the water and energy balance over France, and (b) their feasibility in the framework of land use planning in France. This study is the first step to quantify the sensitivity of land

  15. The Ensemble Framework for Flash Flood Forecasting: Global and CONUS Applications

    NASA Astrophysics Data System (ADS)

    Flamig, Z.; Vergara, H. J.; Clark, R. A.; Gourley, J. J.; Kirstetter, P. E.; Hong, Y.

    2015-12-01

    The Ensemble Framework for Flash Flood Forecasting (EF5) is a distributed hydrologic modeling framework combining water balance components such as the Variable Infiltration Curve (VIC) and Sacramento Soil Moisture Accounting (SAC-SMA) with kinematic wave channel routing. The Snow-17 snowpack model is included as an optional component in EF5 for basins where snow impacts are important. EF5 also contains the Differential Evolution Adaptive Metropolis (DREAM) parameter estimation scheme for model calibration. EF5 is designed to be user friendly, and as such its training has been developed into a weeklong course. This course has been tested in modeling workshops held in Namibia and Mexico. EF5 has also been applied to specialized applications including the Flooded Locations and Simulated Hydrographs (FLASH) project. FLASH aims to provide flash flood monitoring and forecasting over the CONUS using Multi-Radar Multi-Sensor precipitation forcing. Using the extensive field measurements database from the 10,000 USGS measurement locations across the CONUS, parameters were developed for the kinematic wave routing in FLASH. This presentation will highlight FLASH performance over the CONUS on basins smaller than 1,000 km2 and discuss the development of a simulated streamflow climatology over the CONUS for data mining applications. A global application of EF5 has also been developed using satellite-based precipitation measurements combined with numerical weather prediction forecasts to produce flood and impact forecasts. The performance of this global system will be assessed and future plans detailed.
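
    For readers unfamiliar with the routing component named here, the sketch below shows a deliberately simplified, linearized kinematic-wave routing step (constant celerity, explicit upwind differencing). It illustrates the general technique only; it is not EF5/FLASH code, and every parameter value is illustrative.

    ```python
    import numpy as np

    def route_kinematic(q, lateral_in, celerity=1.5, dx=1000.0, dt=300.0):
        """Advance channel discharge q [m3/s] on a 1-D reach by one explicit time step."""
        c = celerity * dt / dx                 # Courant number; stability needs c <= 1
        q_new = q.copy()                       # node 0 acts as a fixed upstream boundary
        # Upwind (backward) difference: a kinematic wave only travels downstream.
        q_new[1:] = q[1:] - c * (q[1:] - q[:-1]) + lateral_in[1:]
        return q_new

    q = np.zeros(50)                           # initially dry 50-node reach
    lat = np.zeros(50)
    lat[1:11] = 0.2                            # hillslope runoff entering the upper reach each step
    for _ in range(200):                       # ~17 hours at a 300 s step
        q = route_kinematic(q, lat)
    print(f"outlet discharge after routing: {q[-1]:.2f} m3/s")
    ```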

  17. Zebrafish Seizure Model Identifies p,p′-DDE as the Dominant Contaminant of Fetal California Sea Lions That Accounts for Synergistic Activity with Domoic Acid

    PubMed Central

    Tiedeken, Jessica A.; Ramsdell, John S.

    2010-01-01

    Background Fetal poisoning of California sea lions (CSLs; Zalophus californianus) has been associated with exposure to the algal toxin domoic acid. These same sea lions accumulate a mixture of persistent environmental contaminants including pesticides and industrial products such as polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs). Developmental exposure to the pesticide dichlorodiphenyltrichloroethane (DDT) and its stable metabolite 1,1-bis-(4-chlorophenyl)-2,2-dichloroethene (p,p′-DDE) has been shown to enhance domoic acid–induced seizures in zebrafish; however, the contribution of other co-occurring contaminants is unknown. Objective We formulated a mixture of contaminants to include PCBs, PBDEs, hexachlorocyclohexane (HCH), and chlordane at levels matching those reported for fetal CSL blubber to determine the impact of co-occurring persistent contaminants with p,p′-DDE on chemically induced seizures in zebrafish as a model for the CSLs. Methods Embryos were exposed (6–30 hr postfertilization) to p,p′-DDE in the presence or absence of a defined contaminant mixture prior to neurodevelopment via either bath exposure or embryo yolk sac microinjection. After brain maturation (7 days postfertilization), fish were exposed to a chemical convulsant, either pentylenetetrazole or domoic acid; resulting seizure behavior was then monitored and analyzed for changes, using cameras and behavioral tracking software. Results Induced seizure behavior did not differ significantly between subjects with embryonic exposure to a contaminant mixture and those exposed to p,p′-DDE only. Conclusion These studies demonstrate that p,p′-DDE—in the absence of PCBs, HCH, chlordane, and PBDEs that co-occur in fetal sea lions—accounts for the synergistic activity that leads to greater sensitivity to domoic acid seizures. PMID:20368122

  18. Improving Hydrologic Data Assimilation by a Multivariate Particle Filter-Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Yan, H.; DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Data assimilation (DA) is a popular method for merging information from multiple sources (i.e., models and remote sensing), leading to improved hydrologic prediction. With the increasing availability of satellite observations (such as soil moisture) in recent years, DA is emerging in operational forecast systems. Although these techniques have seen widespread application, developmental research has continued to further refine their effectiveness. This presentation will examine potential improvements to the Particle Filter (PF) through the inclusion of multivariate correlation structures. Applications of the PF typically rely on univariate DA schemes (such as assimilating the observed discharge at the outlet), and multivariate schemes generally ignore the spatial correlation of the observations. In this study, a multivariate DA scheme is proposed by introducing geostatistics into the newly developed particle filter with Markov chain Monte Carlo (PF-MCMC) method. This new method is assessed in a case study over one of the basins with natural hydrologic processes in the Model Parameter Estimation Experiment (MOPEX), located in Arizona. The multivariate PF-MCMC method is used to assimilate the Advanced Scatterometer (ASCAT) grid (12.5 km) soil moisture retrievals and the observed streamflow at five gages (four inlet and one outlet) into the Sacramento Soil Moisture Accounting (SAC-SMA) model at the same scale (12.5 km), leading to greater skill in hydrologic predictions.
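
    The PF-MCMC and multivariate geostatistical extensions build on the basic sequential-importance-resampling step of a particle filter. A minimal scalar sketch of that step is given below; the ensemble, the observation and its error are hypothetical stand-ins for a modelled soil moisture state and an ASCAT-like retrieval.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def pf_update(particles, weights, observation, obs_std):
        """One particle-filter analysis step for a scalar state (e.g. soil moisture)."""
        # Re-weight particles by the Gaussian likelihood of the observation.
        lik = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
        weights = weights * lik
        weights /= weights.sum()

        # Systematic resampling when the effective sample size collapses.
        n = len(particles)
        if 1.0 / np.sum(weights ** 2) < n / 2:
            positions = (rng.random() + np.arange(n)) / n
            idx = np.searchsorted(np.cumsum(weights), positions)
            particles, weights = particles[idx], np.full(n, 1.0 / n)
        return particles, weights

    # Hypothetical ensemble of modelled soil moisture [m3/m3] and one satellite-like retrieval.
    particles = rng.normal(0.25, 0.05, size=500)
    weights = np.full(500, 1.0 / 500)
    particles, weights = pf_update(particles, weights, observation=0.31, obs_std=0.04)
    print("posterior mean soil moisture:", np.sum(weights * particles))
    ```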

  19. Runoff changes in Czech headwater regions after deforestation induced by acid rains

    NASA Astrophysics Data System (ADS)

    Buchtele, J.; Buchtelova, M.; Hrkal, Z.; Koskova, R.

    2003-04-01

    Tendencies in the water regime resulting from land-use change are an important research subject and, in the so-called Black Triangle region at the borders of the Czech Republic, Germany and Poland, an urgent practical problem. In particular, the extensive deforestation of Czech hilly basins induced by acid rain, which appeared in the 1970s and 1980s, requires attention. Discussions among professionals and the public, sometimes emotional in character, took place after the large floods on the Odra and Morava rivers in 1997 and in the Vltava and Elbe river basins in August 2002. Deforestation induced by acid rain in Central Europe has been considered an important contributor to the disastrous character of these floods. Simulations of the rainfall-runoff process in several catchments and experimental basins in two distinct headwater regions along the German border, with different extents of deforestation, have been carried out using daily time series up to 40 years long. The outputs of two hydrological models of different structure have been compared in these investigations: the conceptual SAC-SMA (Sacramento Soil Moisture Accounting) model and the physically based 1-D model BROOK90. The differences between observed and simulated discharge, which could reveal tendencies in the runoff, have been tracked; they indicate an increase in runoff after deforestation.
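
    Following the differences between observed and simulated discharge amounts to tracking the residuals of a model calibrated on pre-disturbance behaviour. A minimal sketch of that comparison (Nash-Sutcliffe efficiency plus the mean residual) is shown below with invented discharge values; it is not the analysis performed in the study.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency of a simulated discharge series."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical daily discharge (m3/s) after the disturbance. A model calibrated
    # on pre-disturbance conditions that systematically under-predicts the observed
    # flows is one signature of a runoff increase.
    obs_post = np.array([2.1, 2.4, 3.0, 5.6, 4.2, 3.1, 2.8])
    sim_post = np.array([1.9, 2.2, 2.6, 4.8, 3.7, 2.7, 2.5])

    residual = obs_post - sim_post
    print(f"NSE = {nash_sutcliffe(obs_post, sim_post):.2f}")
    print(f"mean residual = {residual.mean():+.2f} m3/s (positive: observed exceeds simulated)")
    ```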

  20. Managerial accounting applications in radiology.

    PubMed

    Lexa, Frank James; Mehta, Tushar; Seidmann, Abraham

    2005-03-01

    We review the core issues in managerial accounting for radiologists. We introduce the topic and then explore its application to diagnostic imaging. We define key terms such as fixed cost, variable cost, marginal cost, and marginal revenue and discuss their role in understanding the operational and financial implications for a radiology facility by using a cost-volume-profit model. Our work places particular emphasis on the role of managerial accounting in understanding service costs, as well as how it assists executive decision making.
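
    The cost-volume-profit relationships mentioned above reduce to simple arithmetic: a contribution margin per exam and a break-even volume. A short sketch with purely hypothetical figures for an imaging service line:

    ```python
    # Cost-volume-profit arithmetic for a hypothetical imaging service line.
    fixed_cost = 450_000.0          # annual fixed cost of the scanner suite ($)
    variable_cost_per_exam = 55.0   # consumables, contrast, per-exam labour ($)
    revenue_per_exam = 180.0        # average reimbursement per exam ($)

    contribution_margin = revenue_per_exam - variable_cost_per_exam
    break_even_volume = fixed_cost / contribution_margin

    def annual_profit(volume):
        """Operating profit at a given annual exam volume."""
        return volume * contribution_margin - fixed_cost

    print(f"break-even volume: {break_even_volume:.0f} exams/year")   # 3600 exams
    print(f"profit at 5,000 exams: ${annual_profit(5000):,.0f}")      # $175,000
    ```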

  1. Public Accountancy Handbook.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Office of the Professions.

    A reference guide to laws, rules, and regulations that govern public accountancy practice in New York State is presented. In addition to identifying licensing requirements/procedures for certified public accountants, general provisions of Title VIII of the Education Law are covered, along with state management, professional misconduct, and…

  2. PLATO IV Accountancy Index.

    ERIC Educational Resources Information Center

    Pondy, Dorothy, Comp.

    The catalog was compiled to assist instructors in planning community college and university curricula using the 48 computer-assisted accountancy lessons available on PLATO IV (Programmed Logic for Automatic Teaching Operation) for first semester accounting courses. It contains information on lesson access, lists of acceptable abbreviations for…

  3. Public Accountancy Handbook.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Office of the Professions.

    The laws, rules and regulations of the State Education Department governing public accountancy practice in New York State are provided in this handbook. Licensure requirements are also described, and the forms for obtaining a license and first registration as a certified public accountant are provided. The booklet is divided into the following…

  4. Leadership for Accountability.

    ERIC Educational Resources Information Center

    Lashway, Larry

    2001-01-01

    This document explores issues of leadership for accountability and reviews five resources on the subject. These include: (1) "Accountability by Carrots and Sticks: Will Incentives and Sanctions Motivate Students, Teachers, and Administrators for Peak Performance?" (Larry Lashway); (2) "Organizing Schools for Teacher Learning" (Judith Warren…

  5. The Accountability Illusion: Arizona

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  6. The Accountability Illusion: Minnesota

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  7. The Accountability Illusion: Nevada

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  8. Teaching Accounting with Computers.

    ERIC Educational Resources Information Center

    Shaoul, Jean

    This paper addresses the numerous ways that computers may be used to enhance the teaching of accounting and business topics. It focuses on the pedagogical use of spreadsheet software to improve the conceptual coverage of accounting principles and practice, increase student understanding by involvement in the solution process, and reduce the amount…

  9. The Accountability Illusion: California

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  10. Accountability for What?

    ERIC Educational Resources Information Center

    Knowles, Rex; Knowles, Trudy

    2001-01-01

    Our emphasis on accountability overlooks children's differences. Half of all individuals who take a norm-referenced test will be below average. Should such students be pushed, mauled, and remediated or squeezed into a common learning mold? Holding teachers accountable for humane treatment of "whole children" is a worthier pursuit. (MLH)

  11. The Evolution of Accountability

    ERIC Educational Resources Information Center

    Webb, P. Taylor

    2011-01-01

    Campus 2020: Thinking ahead is a policy in British Columbia (BC), Canada, that attempted to hold universities accountable to performance. Within, I demonstrate how this Canadian articulation of educational accountability intended to develop "governmentality constellations" to control the university and regulate its knowledge output. This research…

  12. Accountability in Education.

    ERIC Educational Resources Information Center

    Chippendale, P. R., Ed.; Wilkes, Paula V., Ed.

    This collection of papers delivered at a conference on accountability held at Darling Downs Institute of Advanced Education in Australia examines the meaning of accountability in education for teachers, lecturers, government, parents, administrators, education authorities, and the society at large. In Part 1, W. G. Walker attempts to answer the…

  13. Accountability in delivering care.

    PubMed

    Castledine, G

    In the penultimate part of this series on issues in ward management facing charge nurses, George Castledine concentrates on the issue of accountability. The immensely powerful position of the charge nurse as arbitrator and co-ordinator of all health care given to the patient demands that he/she exercises this power responsibly and positively; hence the crucial importance of accountability. The author explores this concept and also those of advocacy and conscientious objection. He concludes by suggesting that the ultimate area of accountability in nursing is the individual conscience of the practitioner and that in this may lie the key to the setting and maintenance of high standards of care.

  14. The National Biomass and Carbon Dataset 2000: A High Spatial Resolution Baseline to Reduce Uncertainty in Carbon Accounting and Flux Modeling

    NASA Astrophysics Data System (ADS)

    Kellndorfer, J. M.; Walker, W. S.; Hoppus, M.; Westfall, J.; Lapoint, E.

    2005-12-01

    A major goal of the North American Carbon Program (NACP) is to develop a quantitative scientific basis for regional to continental scale carbon accounting to reduce uncertainties about the carbon cycle component of the climate system. Given the highly complementary nature and quasi-synchronous data acquisition of the 2000 Shuttle Radar Topography Mission (SRTM) and the Landsat-based 2001 National Land Cover Database (NLCD 2001), an exceptional opportunity exists for exploiting data synergies afforded by the fusion of these high-resolution data sources. Accurate area-based estimates of terrestrial biomass and carbon require biophysical measures that capture both horizontal and vertical vegetation structure. Whereas the thematic layers of the NLCD are suitable for characterizing horizontal vegetation structure (i.e., cover type, canopy density, etc.), SRTM provides information relating to the vertical structure, i.e., primarily height. Research from pilot study sites in Georgia, Michigan, and California has shown that SRTM height information, analyzed in conjunction with bald Earth elevation data from the National Elevation Dataset (NED), is highly correlated with vegetation canopy height. Currently, a project funded under the NASA 'Carbon Cycle Science' program ('The National Biomass and Carbon Dataset 2000 - NBCD 2000') is underway to generate a 'millennium' high-resolution ecoregional database of circa-2000 vegetation canopy height, aboveground biomass, and carbon stocks for the conterminous U.S. which will provide an unprecedented baseline against which to compare data products from the next generation of advanced microwave and optical remote sensing platforms. In the NBCD 2000 initiative, data are analyzed in 60 ecologically diverse regions, consistent with the NLCD 2001 mapping zones, which cover the entire conterminous United States. Within each mapping zone, data from the space shuttle are combined with topographic survey data from the NED to form a radar
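
    The core fusion step described here is the subtraction of a bald-earth terrain model from the SRTM scattering surface to recover canopy height, followed by an allometric conversion to biomass. The sketch below reproduces that logic on synthetic rasters; the noise levels and allometric coefficients are placeholders, not the NBCD 2000 regressions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic tile: "bald-earth" terrain (NED-like) and an SRTM-like surface that
    # rides on top of the vegetation canopy (all elevations in metres).
    ned_ground = 300.0 + rng.normal(0.0, 2.0, size=(100, 100))
    true_canopy = np.clip(rng.normal(18.0, 6.0, size=(100, 100)), 0.0, None)
    srtm_surface = ned_ground + true_canopy + rng.normal(0.0, 1.5, size=(100, 100))

    # Scattering-phase-centre height: the basic SRTM-minus-NED signal.
    canopy_height = np.clip(srtm_surface - ned_ground, 0.0, None)

    # Hypothetical allometric conversion to aboveground biomass (Mg/ha).
    biomass = 8.0 * canopy_height ** 1.1

    print(f"mean estimated canopy height: {canopy_height.mean():.1f} m")
    print(f"mean estimated aboveground biomass: {biomass.mean():.0f} Mg/ha")
    ```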

  15. Accountability and values in radically collaborative research.

    PubMed

    Winsberg, Eric; Huebner, Bryce; Kukla, Rebecca

    2014-06-01

    This paper discusses a crisis of accountability that arises when scientific collaborations are massively epistemically distributed. We argue that social models of epistemic collaboration, which are social analogs to what Patrick Suppes called a "model of the experiment," must play a role in creating accountability in these contexts. We also argue that these social models must accommodate the fact that the various agents in a collaborative project often have ineliminable, messy, and conflicting interests and values; any story about accountability in a massively distributed collaboration must therefore involve models of such interests and values and their methodological and epistemic effects.

  16. Computerized material accounting

    SciTech Connect

    Claborn, J.; Erkkila, B.

    1995-07-01

    With the advent of fast, reliable database servers running on inexpensive networked personal computers, it is possible to create material accountability systems that are easy to learn, easy to use, and cost-effective to implement. Maintaining the material data in a relational database allows data to be viewed in ways that were previously very difficult. This paper describes the software and hardware platforms for the implementation of such an accountability system.

  17. Uncertainty calculation in the RIO air quality interpolation model and aggregation to yearly average and exceedance probability taking into account the temporal auto-correlation.

    NASA Astrophysics Data System (ADS)

    Maiheu, Bino; Nele, Veldeman; Janssen, Stijn; Fierens, Frans; Trimpeneers, Elke

    2010-05-01

    RIO is an operational air quality interpolation model developed by VITO and IRCEL-CELINE and produces hourly maps for different pollutant concentrations such as O3, PM10 and NO2 measured in Belgium [1]. The RIO methodology consists of residual interpolation by Ordinary Kriging of the residuals of the measured concentrations and pre-determined trend functions which express the relation between land cover information derived from the CORINE dataset and measured time-averaged concentrations [2]. RIO is an important tool for the Flemish administration and is among others used to report, as is required by each member state, on the air quality status in Flanders to the European Union. We feel that a good estimate of the uncertainty of the yearly average concentration maps and the probability of norm-exceedance are both as important as the values themselves. In this contribution we will discuss the uncertainties specific to the RIO methodology, where we have both contributions from the Ordinary Kriging technique as well as the trend functions. Especially the parameterisation of the uncertainty w.r.t. the trend functions will be the key indicator for the degree of confidence the model puts into using land cover information for spatial interpolation of pollutant concentrations. Next, we will propose a method which enables us to calculate the uncertainty on the yearly average concentrations as well as the number of exceedance days, taking into account the temporal auto-correlation of the concentration fields. It is clear that the autocorrelation will have a strong impact on the uncertainty estimation [3] of yearly averages. The method we propose is based on a Monte Carlo technique that generates an ensemble of interpolation maps with the correct temporal auto-correlation structure. From a generated ensemble, the calculation of norm-exceedance probability at each interpolation location becomes quite straightforward. A comparison with the ad-hoc method proposed in [3], where
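
    The proposed Monte Carlo aggregation can be pictured with a one-station example: generate an ensemble of daily error series with a prescribed lag-1 autocorrelation, add them to the interpolated concentrations, and read off the spread of yearly means and exceedance-day counts. The sketch below uses invented PM10 values, an assumed AR(1) error model and an assumed autocorrelation coefficient; it is not the RIO uncertainty parameterisation itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    days = 365
    mean_daily = 28.0 + 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, days))  # interpolated PM10 [ug/m3]
    sigma_daily = np.full(days, 8.0)                                        # interpolation std [ug/m3]

    rho = 0.6          # assumed lag-1 temporal autocorrelation of the interpolation error
    limit = 50.0       # daily PM10 limit value (35 exceedance days allowed per year)
    n_members = 2000

    # AR(1)-correlated standardized error series, one per ensemble member.
    eps = rng.normal(size=(n_members, days))
    err = np.empty_like(eps)
    err[:, 0] = eps[:, 0]
    for t in range(1, days):
        err[:, t] = rho * err[:, t - 1] + np.sqrt(1.0 - rho ** 2) * eps[:, t]
    ensemble = mean_daily + sigma_daily * err

    yearly_mean = ensemble.mean(axis=1)
    exceedance_days = (ensemble > limit).sum(axis=1)

    print(f"yearly average: {yearly_mean.mean():.1f} +/- {yearly_mean.std():.1f} ug/m3")
    print(f"P(more than 35 exceedance days) = {(exceedance_days > 35).mean():.2f}")
    ```

    Setting rho to zero in this sketch illustrates the point made above: ignoring the temporal autocorrelation shrinks the spread of both the yearly average and the exceedance-day count.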

  18. EPA’s ALPHA Model Fuel Economy Simulation and Refinements to Account for Fuel Economy Effects of a Vehicle’s Transient Operation and Overhead Needs

    EPA Science Inventory

    This paper will describe how ALPHA accounts for each type of fuel use overhead, using a variety of data from general vehicle and engine benchmarking, as well as data from special test procedures to characterize engine operation during the overhead conditions.

  19. Water Accounting from Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Bastiaanssen, W. G.; Savenije, H.

    2014-12-01

    Water scarcity is increasing globally. This requires a more accurate management of the water resources at river basin scale and understanding of withdrawals and return flows; both naturally and man-induced. Many basins and their tributaries are, however, ungauged or poorly gauged. This hampers sound planning and monitoring processes. While certain countries have developed clear guidelines and policies on data observatories and data sharing, other countries and their basin organization still have to start on developing data democracies. Water accounting quantifies flows, fluxes, stocks and consumptive use pertaining to every land use class in a river basin. The objective is to derive a knowledge base with certain minimum information that facilitates decision making. Water Accounting Plus (WA+) is a new method for water resources assessment reporting (www.wateraccounting.org). While the PUB framework has yielded several deterministic models for flow prediction, WA+ utilizes remote sensing data of rainfall, evaporation (including soil, water, vegetation and interception evaporation), soil moisture, water levels, land use and biomass production. Examples will be demonstrated that show how remote sensing and hydrological models can be smartly integrated for generating all the required input data into WA+. A standard water accounting system for all basins in the world - with a special emphasis on data scarce regions - is under development. First results of using remote sensing measurements and hydrological modeling as an alternative to expensive field data sets, will be presented and discussed.
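
    At its simplest, such accounting closes a basin-scale water balance from remotely sensed inputs. A minimal sketch with illustrative basin-average depths (not WA+ code or data):

    ```python
    # Annual basin water balance: P - ET - Q = change in storage (all in mm/year).
    precipitation = 780.0          # satellite rainfall product
    evapotranspiration = 540.0     # energy-balance ET estimate
    outflow = 190.0                # gauged or modelled river outflow

    storage_change = precipitation - evapotranspiration - outflow
    consumed_fraction = evapotranspiration / precipitation

    print(f"storage change: {storage_change:+.0f} mm/yr")
    print(f"fraction of rainfall consumed within the basin: {consumed_fraction:.0%}")
    ```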

  20. Accounting for the environment.

    PubMed

    Lutz, E; Munasinghe, M

    1991-03-01

    Environmental awareness in the 1980s has led to efforts to improve the current UN System of National Accounts (SNA) for better measurement of the value of environmental resources when estimating income. National governments, the UN, the International Monetary Fund, and the World Bank are interested in solving this issue. The World Bank relies heavily on national aggregates in income accounts compiled by means of the SNA, which was published in 1968 and stressed gross domestic product (GDP). GDP measures mainly market activity, but it does not consider the consumption of natural capital, and it indirectly inhibits sustained development. The deficiencies of the current method of accounting are the inconsistent treatment of manmade and natural capital, and the omission of natural resources and their depletion from balance sheets and of pollution cleanup costs from national income. In the calculation of GDP, pollution is overlooked and beneficial environmental inputs are valued at zero. The calculation of environmentally adjusted net domestic product (EDP) and environmentally adjusted net income (ENI) would lower income and growth rates, as the World Resources Institute found for Indonesia over 1971-84. When depreciation for oil, timber, and topsoil was included, net domestic product (NDP) growth was only 4%, compared with 7.1% GDP growth. The World Bank has advocated environmental accounting since 1983 in SNA revisions. The 1989 revised Blue Book of the SNA takes environmental concerns into account. Relevant research is under way in Mexico and Papua New Guinea using the UN Statistical Office framework as a system for environmentally adjusted economic accounts that computes EDP and ENI and integrates environmental data with national accounts while preserving SNA concepts. PMID:12285741
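
    The adjustment described here is straightforward arithmetic once estimates of depreciation and depletion exist. A sketch with invented figures (not the Indonesian data cited above):

    ```python
    # Illustrative national-accounts arithmetic showing how natural capital depletion
    # lowers a conventionally measured product aggregate. All values are hypothetical.
    gdp = 1_000.0                        # gross domestic product (billions)
    depreciation_produced_capital = 80.0
    depletion_natural_resources = 45.0   # oil, timber, topsoil drawn down during the year
    degradation_costs = 15.0             # pollution damage not captured by market prices

    ndp = gdp - depreciation_produced_capital
    edp = ndp - depletion_natural_resources - degradation_costs

    print(f"NDP = {ndp:.0f}, EDP = {edp:.0f} "
          f"({100 * (ndp - edp) / ndp:.1f}% of NDP is the environmental adjustment)")
    ```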

  1. Thinking about Accountability

    PubMed Central

    Deber, Raisa B.

    2014-01-01

    Accountability is a key component of healthcare reforms, in Canada and internationally, but there is increasing recognition that one size does not fit all. A more nuanced understanding begins with clarifying what is meant by accountability, including specifying for what, by whom, to whom and how. These papers arise from a Partnership for Health System Improvement (PHSI), funded by the Canadian Institutes of Health Research (CIHR), on approaches to accountability that examined accountability across multiple healthcare subsectors in Ontario. The partnership features collaboration among an interdisciplinary team, working with senior policy makers, to clarify what is known about best practices to achieve accountability under various circumstances. This paper presents our conceptual framework. It examines potential approaches (policy instruments) and postulates that their outcomes may vary by subsector depending upon (a) the policy goals being pursued, (b) governance/ownership structures and relationships and (c) the types of goods and services being delivered, and their production characteristics (e.g., contestability, measurability and complexity). PMID:25305385

  2. Significant decadal channel change 58-67 years post-dam accounting for uncertainty in topographic change detection between contour maps and point cloud models

    NASA Astrophysics Data System (ADS)

    Carley, Jennifer K.; Pasternack, Gregory B.; Wyrick, Joshua R.; Barker, Jesse R.; Bratovich, Paul M.; Massa, Duane A.; Reedy, Gary D.; Johnson, Thomas R.

    2012-12-01

    Construction of digital elevation models (DEMs) and the subtraction of DEMs between different points in time as a method to determine temporal patterns of scour and fill is a highly valuable procedure emerging in geomorphology. These DEMs of Differences (DoDs) must be assessed for error in order to distinguish actual topographic change from uncertainty and surface error. Current methods include: (1) uniformly excluding all values that fall below a minimum threshold; (2) using a spatially variable approach such as the construction of minimum Level of Detection (LoD) grids; or (3) the creation of a fuzzy inference system. Although spatially variable methods for determining error have been more accurate in excluding noise without discarding large amounts of meaningful data, a challenge remains in performing DoDs against preexisting contour-based maps for which no original point data are available. The goals of this study were to (1) develop a method that overcomes the unknown point density of contour (and other historical) data sets and allows for some assessment of DoD uncertainty on the basis of information on topographic variability, (2) perform comprehensive uncertainty analysis testing to understand the opportunities and constraints associated with this new method, and (3) report and interpret the overall pattern and volume of decadal topographic change for a regulated river 67 years post-dam in light of alternate conjectured mechanisms of post-dam longitudinal profile adjustment. The key feature of the new approach is the introduction of a high-density artificial point grid that samples the topographic variability evident in the available historical data set. The testbed used to develop and assess this new DoD method was the ~ 37.5-km lower Yuba River, California. Historical data consisted of 0.6-m contours from a 1999 survey, while a more detailed point cloud was available for the most recent survey in 2006-2008. To evaluate uncertainty in the method, this
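
    The baseline this paper improves upon, a DoD with a single uniform minimum level of detection, can be sketched in a few lines. The surfaces, noise levels and the 0.45 m threshold below are synthetic illustrations, not values derived for the Yuba River analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    cell_area = 1.0  # m2 per grid cell

    # Synthetic DEMs (m): an older contour-derived surface and a newer point-cloud DEM.
    dem_old = 50.0 + rng.normal(0.0, 0.3, size=(200, 200))
    dem_new = dem_old + rng.normal(0.0, 0.3, size=(200, 200))
    dem_new[50:120, 40:90] -= 0.8       # a patch of real scour
    dem_new[140:180, 120:170] += 0.6    # a patch of real fill

    dod = dem_new - dem_old             # DEM of Difference

    # Uniform minimum level of detection: changes smaller than the combined
    # surface uncertainty are treated as noise and excluded.
    lod = 0.45
    dod_detectable = np.where(np.abs(dod) >= lod, dod, 0.0)

    scour = -dod_detectable[dod_detectable < 0].sum() * cell_area
    fill = dod_detectable[dod_detectable > 0].sum() * cell_area
    print(f"detectable scour: {scour:.0f} m3, detectable fill: {fill:.0f} m3")
    ```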

  3. An Existentialist Account of Identity Formation.

    ERIC Educational Resources Information Center

    Bilsker, Dan

    1992-01-01

    Gives account of Marcia's identity formation model in language of existentialist philosophy. Examines parallels between ego-identity and existentialist approaches. Describes identity in terms of existentialist concepts of Heidegger and Sartre. Argues that existentialist account of identity formation has benefits of clarification of difficult…

  4. Reclaiming "Sense" from "Cents" in Accounting Education

    ERIC Educational Resources Information Center

    Dellaportas, Steven

    2015-01-01

    This essay adopts an interpretive methodology of relevant literature to explore the limitations of accounting education when it is taught purely as a technical practice. The essay proceeds from the assumption that conventional accounting education is captured by a positivistic neo-classical model of decision-making that draws on economic rationale…

  5. Materials for Training Specialized Accounting Clerks

    ERIC Educational Resources Information Center

    McKitrick, Max O.

    1974-01-01

    To prepare instructional materials for training specialized accounting clerks, teachers must visit offices and make task analyses of these jobs utilizing the systems approach. Described are models developed for training these types of accounting clerks: computer control clerks, coupon clerks, internal auditing clerks, and statement clerks. (SC)

  6. Integrated Approach to User Account Management

    NASA Technical Reports Server (NTRS)

    Kesselman, Glenn; Smith, William

    2007-01-01

    IT environments consist of both Windows and other platforms. Providing user account management for this model has become increasingly difficult. If Microsoft's Active Directory could be enhanced to extend a Windows identity for authentication services for Unix, Linux, Java and Macintosh systems, then an integrated approach to user account management could be realized.

  7. 17 CFR 17.01 - Identification of special accounts, volume threshold accounts, and omnibus accounts.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... accounts, volume threshold accounts, and omnibus accounts. 17.01 Section 17.01 Commodity and Securities..., CLEARING MEMBERS, AND FOREIGN BROKERS § 17.01 Identification of special accounts, volume threshold accounts... in § 17.02(b). (b) Identification of volume threshold accounts. Each clearing member shall...

  8. Mathematical modeling of liquid/liquid hollow fiber membrane contactor accounting for interfacial transport phenomena: Extraction of lanthanides as a surrogate for actinides

    SciTech Connect

    Rogers, J.D.

    1994-08-04

    This report is divided into two parts. The second part is divided into the following sections: experimental protocol; modeling the hollow fiber extractor using film theory; Graetz model of the hollow fiber membrane process; fundamental diffusive-kinetic model; and diffusive liquid membrane device-a rigorous model. The first part is divided into: membrane and membrane process-a concept; metal extraction; kinetics of metal extraction; modeling the membrane contactor; and interfacial phenomenon-boundary conditions-applied to membrane transport.

  9. Risk-Informed Monitoring, Verification and Accounting (RI-MVA). An NRAP White Paper Documenting Methods and a Demonstration Model for Risk-Informed MVA System Design and Operations in Geologic Carbon Sequestration

    SciTech Connect

    Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C.; Anderson, Richard M.

    2011-09-30

    This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.

  10. A Pariah Profession? Some Student Perceptions of Accounting and Accountancy.

    ERIC Educational Resources Information Center

    Fisher, Roy; Murphy, Vivienne

    1995-01-01

    Existing literature and a survey of 106 undergraduate accounting students in the United Kingdom were analyzed for perceptions of the accounting profession and the academic discipline of accounting. Results suggest that among accounting and nonaccounting students alike, there exist coexisting perceptions of accounting as having high status and low…

  11. 18 CFR 367.2320 - Account 232, Accounts payable.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Account 232, Accounts... POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO... ACT Balance Sheet Chart of Accounts Current and Accrued Liabilities § 367.2320 Account 232,...

  12. 18 CFR 367.2320 - Account 232, Accounts payable.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Account 232, Accounts... POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO... ACT Balance Sheet Chart of Accounts Current and Accrued Liabilities § 367.2320 Account 232,...

  13. 18 CFR 367.2320 - Account 232, Accounts payable.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Account 232, Accounts... POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO... ACT Balance Sheet Chart of Accounts Current and Accrued Liabilities § 367.2320 Account 232,...

  14. 18 CFR 367.2320 - Account 232, Accounts payable.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Account 232, Accounts... POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO... ACT Balance Sheet Chart of Accounts Current and Accrued Liabilities § 367.2320 Account 232,...

  15. 18 CFR 367.2320 - Account 232, Accounts payable.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Account 232, Accounts... POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO... ACT Balance Sheet Chart of Accounts Current and Accrued Liabilities § 367.2320 Account 232,...

  16. Excel in the Accounting Curriculum: Perceptions from Accounting Professors

    ERIC Educational Resources Information Center

    Ramachandran Rackliffe, Usha; Ragland, Linda

    2016-01-01

    Public accounting firms emphasize the importance of accounting graduates being proficient in Excel. Since many accounting graduates often aspire to work in public accounting, a question arises as to whether there should be an emphasis on Excel in accounting education. The purpose of this paper is to specifically look at this issue by examining…

  17. STAR facility tritium accountancy

    SciTech Connect

    Pawelko, R. J.; Sharpe, J. P.; Denny, B. J.

    2008-07-15

    The Safety and Tritium Applied Research (STAR) facility has been established to provide a laboratory infrastructure for the fusion community to study tritium science associated with the development of safe fusion energy and other technologies. STAR is a radiological facility with an administrative total tritium inventory limit of 1.5 g (14,429 Ci) [1]. Research studies with moderate tritium quantities and various radionuclides are performed in STAR. Successful operation of the STAR facility requires the ability to receive, inventory, store, dispense tritium to experiments, and to dispose of tritiated waste while accurately monitoring the tritium inventory in the facility. This paper describes tritium accountancy in the STAR facility. A primary accountancy instrument is the tritium Storage and Assay System (SAS): a system designed to receive, assay, store, and dispense tritium to experiments. Presented are the methods used to calibrate and operate the SAS. Accountancy processes utilizing the Tritium Cleanup System (TCS), and the Stack Tritium Monitoring System (STMS) are also discussed. Also presented are the equations used to quantify the amount of tritium being received into the facility, transferred to experiments, and removed from the facility. Finally, the STAR tritium accountability database is discussed. (authors)
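
    In outline, the accountancy reduces to a decay-corrected material balance over receipts, transfers and removals. The sketch below is a drastic simplification with hypothetical quantities; the actual STAR equations track each movement and assay individually.

    ```python
    import numpy as np

    T_HALF_YEARS = 12.32                     # tritium half-life
    LAMBDA = np.log(2.0) / T_HALF_YEARS      # decay constant [1/yr]

    def book_inventory(received_ci, dispensed_ci, stack_release_ci, waste_ci, years_elapsed):
        """Very simplified facility tritium balance in curies over one period."""
        undecayed = received_ci - dispensed_ci - stack_release_ci - waste_ci
        return undecayed * np.exp(-LAMBDA * years_elapsed)

    # Hypothetical one-year ledger, well under the 1.5 g (14,429 Ci) administrative limit.
    inventory = book_inventory(received_ci=9000.0, dispensed_ci=1200.0,
                               stack_release_ci=15.0, waste_ci=300.0, years_elapsed=1.0)
    print(f"decay-corrected book inventory: {inventory:.0f} Ci")
    ```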

  18. STAR Facility Tritium Accountancy

    SciTech Connect

    R. J. Pawelko; J. P. Sharpe; B. J. Denny

    2007-09-01

    The Safety and Tritium Applied Research (STAR) facility has been established to provide a laboratory infrastructure for the fusion community to study tritium science associated with the development of safe fusion energy and other technologies. STAR is a radiological facility with an administrative total tritium inventory limit of 1.5g (14,429 Ci) [1]. Research studies with moderate tritium quantities and various radionuclides are performed in STAR. Successful operation of the STAR facility requires the ability to receive, inventory, store, dispense tritium to experiments, and to dispose of tritiated waste while accurately monitoring the tritium inventory in the facility. This paper describes tritium accountancy in the STAR facility. A primary accountancy instrument is the tritium Storage and Assay System (SAS): a system designed to receive, assay, store, and dispense tritium to experiments. Presented are the methods used to calibrate and operate the SAS. Accountancy processes utilizing the Tritium Cleanup System (TCS), and the Stack Tritium Monitoring System (STMS) are also discussed. Also presented are the equations used to quantify the amount of tritium being received into the facility, transferred to experiments, and removed from the facility. Finally, the STAR tritium accountability database is discussed.

  19. Planning for Accountability.

    ERIC Educational Resources Information Center

    Cuneo, Tim; Bell, Shareen; Welsh-Gray, Carol

    1999-01-01

    Through its Challenge 2000 program, Joint Venture: Silicon Valley Network's 21st Century Education Initiative has been working with K-12 schools to improve student performance in literature, math, and science. Clearly stated standards, appropriate assessments, formal monitoring, critical friends, and systemwide accountability are keys to success.…

  20. Student Attendance Accounting Manual.

    ERIC Educational Resources Information Center

    Freitas, Joseph M.

    In response to state legislation authorizing procedures for changes in academic calendars and measurement of student workload in California community colleges, this manual from the Chancellor's Office provides guidelines for student attendance accounting. Chapter 1 explains general items such as the academic calendar, admissions policies, student…

  1. Accountability for Productivity

    ERIC Educational Resources Information Center

    Wellman, Jane

    2010-01-01

    Productivity gains in higher education won't be made just by improving cost effectiveness or even performance. They need to be documented, communicated, and integrated into a strategic agenda to increase attainment. This requires special attention to "accountability" for productivity, meaning public presentation and communication of evidence about…

  2. The Accountability Illusion

    ERIC Educational Resources Information Center

    Cronin, John; Dahlin, Michael; Xiang, Yun; McCahon, Donna

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states have leeway to: (1) Craft their own academic standards, select their own tests, and define…

  3. Accounting for What Counts

    ERIC Educational Resources Information Center

    Milner, Joseph O.; Ferran, Joan E.; Martin, Katharine Y.

    2003-01-01

    No Child Left Behind legislation makes it clear that outside evaluators determine what gets taught in the classroom. It is important to ensure they measure what truly counts in school. This fact is poignantly and sadly true for the underfunded, poorly resourced, "low performing" schools that may be hammered by administration accountants in the…

  4. Professional Capital as Accountability

    ERIC Educational Resources Information Center

    Fullan, Michael; Rincón-Gallardo, Santiago; Hargreaves, Andy

    2015-01-01

    This paper seeks to clarify and spell out the responsibilities of policy makers to create the conditions for an effective accountability system that produces substantial improvements in student learning, strengthens the teaching profession, and provides transparency of results to the public. The authors point out that U.S. policy makers will need…

  5. Accountability: A Rationale.

    ERIC Educational Resources Information Center

    Brademas, John

    1974-01-01

    The idea of accountability has by now been interpreted in ways which are different enough from one another to have permitted a certain ambiguity to creep into the notion in its present use within the educational community. The principal purpose of this report is, therefore, to try to set forth some clearer statement of what the idea of…

  6. Fiscal Accounting Manual.

    ERIC Educational Resources Information Center

    California State Dept. of Housing and Community Development, Sacramento. Indian Assistance Program.

    Written in simple, easy to understand form, the manual provides a vehicle for the untrained person in bookkeeping to control funds received from grants for Indian Tribal Councils and Indian organizations. The method used to control grants (federal, state, or private) is fund accounting, designed to organize rendering services on a non-profit…

  7. Curtail Accountability, Cultivate Attainability

    ERIC Educational Resources Information Center

    Wraga, William G.

    2011-01-01

    The current test-driven accountability movement, codified in the No Child Left Behind Act of 2001 ([NCLB] 2002), was a misguided idea that will have the effect not of improving the education of children and youth, but of indicting the public school system of the United States. To improve education in the United States, politicians, policy makers,…

  8. Legal responsibility and accountability.

    PubMed

    Cox, Chris

    2010-06-01

    Shifting boundaries in healthcare roles have led to anxiety among some nurses about their legal responsibilities and accountabilities. This is partly because of a lack of education about legal principles that underpin healthcare delivery. This article explains the law in terms of standards of care, duty of care, vicarious liability and indemnity insurance.

  9. Accounting 202, 302.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg.

    This teaching guide consists of guidelines for conducting two secondary-level introductory accounting courses. Intended for vocational business education students, the courses are designed to introduce financial principles and practices important to personal and business life, to promote development of clerical and bookkeeping skills sufficient…

  10. Democracy, Accountability, and Education

    ERIC Educational Resources Information Center

    Levinson, Meira

    2011-01-01

    Educational standards, assessments, and accountability systems are of immense political moment around the world. But there is no developed theory exploring the role that these systems should play within a democratic polity in particular. On the one hand, well-designed standards are public goods, supported by assessment and accountability…

  11. Educational Accounting Procedures.

    ERIC Educational Resources Information Center

    Tidwell, Sam B.

    This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing, annual…

  12. CEBAF beam loss accounting

    SciTech Connect

    Ursic, R.; Mahoney, K.; Hovater, C.; Hutton, A.; Sinclair, C.

    1995-12-31

    This paper describes the design and implementation of a beam loss accounting system for the CEBAF electron accelerator. This system samples the beam current throughout the beam path and measures it accurately. Personnel Safety and Machine Protection systems use this system to turn off the beam when hazardous beam losses occur.
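
    The record gives no implementation details; as a hedged illustration of the bookkeeping such a system performs, the Python sketch below computes the loss between successive beam-current monitors and flags a trip when a segment loss exceeds a limit. The readings and the trip threshold are invented for illustration, not CEBAF values.

      # Hedged sketch only: differential beam-loss accounting between current monitors.
      # The readings and the trip threshold are illustrative, not CEBAF values.
      def loss_accounting(currents_uA, trip_limit_uA=2.0):
          """currents_uA: beam-current readings ordered along the beam path (microamps).
          Returns per-segment losses and whether any segment exceeds the trip limit."""
          losses = [up - down for up, down in zip(currents_uA, currents_uA[1:])]
          trip = any(loss > trip_limit_uA for loss in losses)
          return losses, trip

      if __name__ == "__main__":
          readings = [100.0, 99.8, 99.7, 96.5]   # hypothetical monitor readings
          losses, trip = loss_accounting(readings)
          print(losses, "TRIP" if trip else "ok")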

  13. MATERIAL CONTROL ACCOUNTING INMM

    SciTech Connect

    Hasty, T.

    2009-06-14

    Since 1996, the Mining and Chemical Combine (MCC, formerly known as K-26) and the United States Department of Energy (DOE) have been cooperating under the cooperative Nuclear Material Protection, Control and Accounting (MPC&A) Program between the Russian Federation and the U.S. Governments. Since MCC continues to operate a reactor for steam and electricity production for the site and city of Zheleznogorsk, which results in the production of weapons-grade plutonium, one of the goals of the MPC&A program is to support implementation of an expanded comprehensive nuclear material control and accounting (MC&A) program. To date, MCC has completed upgrades identified in the initial gap analysis and documented in the site MC&A Plan and is implementing additional upgrades identified during an update to the gap analysis. The scope of these upgrades includes implementation of the MCC organization structure relating to MC&A, establishing the material balance area structure for special nuclear materials (SNM) storage and bulk processing areas, and material control functions including SNM portal monitors at target locations. Material accounting function upgrades include enhancements in the conduct of physical inventories, limit-of-error inventory difference procedure enhancements, implementation of a basic computerized accounting system for four SNM storage areas, implementation of measurement equipment for improved accountability reporting, and both new and revised site-level MC&A procedures. This paper will discuss the implementation of MC&A upgrades at MCC based on the requirements established in the comprehensive MC&A plan developed by the Mining and Chemical Combine as part of the MPC&A Program.

  14. Public Accountability in the Age of Neo-Liberal Governance.

    ERIC Educational Resources Information Center

    Ranson, Stewart

    2003-01-01

    Analyzes the impact of neo-liberal corporate accountability on educational governance since the demise of professional accountability in the mid-1970s. Argues that corporate accountability is inappropriate for educational governance. Proposes an alternative model: democratic accountability. (Contains 1 figure and 125 references.) (PKP)

  15. Iowa Community Colleges Accounting Manual.

    ERIC Educational Resources Information Center

    Iowa State Dept. of Education, Des Moines. Div. of Community Colleges and Workforce Preparation.

    This document describes account classifications and definitions for the accounting system of the Iowa community colleges. In view of the objectives of the accounting system, it is necessary to segregate the assets of the community college according to their source and intended use. Additionally, the accounting system should provide for accounting by…

  16. Numerical modeling of 1D heterogeneous combustion in porous media under free convection taking into account dependence of permeability on porosity

    NASA Astrophysics Data System (ADS)

    Lutsenko, N. A.

    2016-06-01

    Using numerical experiments, the one-dimensional unsteady process of heterogeneous combustion in a porous object under free convection is considered, taking into account the dependence of permeability on porosity. The combustion is driven by an exothermic reaction between the fuel in the solid porous medium and the oxidizer contained in the gas flowing through the porous object. The process is considered under natural convection, i.e. when the flow rate and velocity of the gas at the inlet to the porous object are unknown but the gas pressure at the object boundaries is known. The influence of permeability changes caused by porosity changes is investigated using an original numerical method based on a combination of explicit and implicit finite-difference schemes. It is shown that accounting for the dependence of permeability on porosity, described by known relations, can significantly change the one-dimensional solution: the permeability change increases the speed of both the cocurrent and countercurrent combustion waves and raises the temperature in the combustion zone of the countercurrent combustion wave.
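
    The abstract says only that the permeability-porosity dependence follows known equations, without naming them; a commonly used choice is the Kozeny-Carman relation, sketched below in Python purely as an assumption (the paper may use a different form).

      # Hedged sketch: a Kozeny-Carman-type permeability update from porosity.
      # The relation and the reference values are assumptions for illustration;
      # the paper does not specify which permeability-porosity equation it uses.
      def kozeny_carman(phi, phi0, k0):
          """Permeability k at porosity phi, scaled from a reference state (phi0, k0)."""
          return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

      if __name__ == "__main__":
          k0, phi0 = 1.0e-10, 0.4          # reference permeability [m^2] and porosity
          for phi in (0.4, 0.5, 0.6):      # porosity grows as the solid fuel burns out
              print(f"phi={phi:.2f}  k={kozeny_carman(phi, phi0, k0):.3e} m^2")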

  17. Managing global accounts.

    PubMed

    Yip, George S; Bink, Audrey J M

    2007-09-01

    Global account management--which treats a multinational customer's operations as one integrated account, with coherent terms for pricing, product specifications, and service--has proliferated over the past decade. Yet according to the authors' research, only about a third of the suppliers that have offered GAM are pleased with the results. The unhappy majority may be suffering from confusion about when, how, and to whom to provide it. Yip, the director of research and innovation at Capgemini, and Bink, the head of marketing communications at Uxbridge College, have found that GAM can improve customer satisfaction by 20% or more and can raise both profits and revenues by at least 15% within just a few years of its introduction. They provide guidelines to help companies achieve similar results. The first steps are determining whether your products or services are appropriate for GAM, whether your customers want such a program, whether those customers are crucial to your strategy, and how GAM might affect your competitive advantage. If moving forward makes sense, the authors' exhibit, "A Scorecard for Selecting Global Accounts," can help you target the right customers. The final step is deciding which of three basic forms to offer: coordination GAM (in which national operations remain relatively strong), control GAM (in which the global operation and the national operations are fairly balanced), and separate GAM (in which a new business unit has total responsibility for global accounts). Given the difficulty and expense of providing multiple varieties, the vast majority of companies should initially customize just one---and they should be careful not to start with a choice that is too ambitious for either themselves or their customers to handle.

  18. First-Person Accounts.

    ERIC Educational Resources Information Center

    Gribs, H.; And Others

    1995-01-01

    Personal accounts describe the lives of 2 individuals with deaf-blindness: one an 87-year-old woman who was deaf from birth and became totally blind over a 50-year period, and the other a woman who became deaf-blind as a result of a fever at the age of 7. Managing activities of daily life and experiencing sensory hallucinations are among topics…

  20. Hospitals' Internal Accountability

    PubMed Central

    Kraetschmer, Nancy; Jass, Janak; Woodman, Cheryl; Koo, Irene; Kromm, Seija K.; Deber, Raisa B.

    2014-01-01

    This study aimed to enhance understanding of the dimensions of accountability captured and not captured in acute care hospitals in Ontario, Canada. Based on an Ontario-wide survey and follow-up interviews with three acute care hospitals in the Greater Toronto Area, we found that the two dominant dimensions of hospital accountability being reported are financial and quality performance. These two dimensions drove both internal and external reporting. Hospitals' internal reports typically included performance measures that were required or mandated in external reports. Although respondents saw reporting as a valuable mechanism for hospitals and the health system to monitor and track progress against desired outcomes, multiple challenges with current reporting requirements were communicated, including the following: 58% of survey respondents indicated that performance-reporting resources were insufficient; manual data capture and performance reporting were prevalent, with the majority of hospitals lacking sophisticated tools or technology to effectively capture, analyze and report performance data; hospitals tended to focus on those processes and outcomes with high measurability; and 53% of respondents indicated that valuable cross-system accountability, performance measures or both were not captured by current reporting requirements. PMID:25305387

  1. Hospitals' internal accountability.

    PubMed

    Kraetschmer, Nancy; Jass, Janak; Woodman, Cheryl; Koo, Irene; Kromm, Seija K; Deber, Raisa B

    2014-09-01

    This study aimed to enhance understanding of the dimensions of accountability captured and not captured in acute care hospitals in Ontario, Canada. Based on an Ontario-wide survey and follow-up interviews with three acute care hospitals in the Greater Toronto Area, we found that the two dominant dimensions of hospital accountability being reported are financial and quality performance. These two dimensions drove both internal and external reporting. Hospitals' internal reports typically included performance measures that were required or mandated in external reports. Although respondents saw reporting as a valuable mechanism for hospitals and the health system to monitor and track progress against desired outcomes, multiple challenges with current reporting requirements were communicated, including the following: 58% of survey respondents indicated that performance-reporting resources were insufficient; manual data capture and performance reporting were prevalent, with the majority of hospitals lacking sophisticated tools or technology to effectively capture, analyze and report performance data; hospitals tended to focus on those processes and outcomes with high measurability; and 53% of respondents indicated that valuable cross-system accountability, performance measures or both were not captured by current reporting requirements. PMID:25305387

  2. Driving population health through accountable care organizations.

    PubMed

    Devore, Susan; Champion, R Wesley

    2011-01-01

    Accountable care organizations, scheduled to become part of the Medicare program under the Affordable Care Act, have been promoted as a way to improve health care quality, reduce growth in costs, and increase patients' satisfaction. It is unclear how these organizations will develop. Yet in principle they will have to meet quality metrics, adopt improved care processes, assume risk, and provide incentives for population health and wellness. These capabilities represent a radical departure from today's health delivery system. In May 2010 the Premier healthcare alliance formed the Accountable Care Implementation Collaborative, which consists of health systems that seek to pursue accountability by forming partnerships with private payers to evolve from fee-for-service payment models to new, value-driven models. This article describes how participants in the collaborative are building models and developing best practices that can inform the implementation of accountable care organizations as well as public policies.

  3. Cold formability prediction by the modified maximum force criterion with a non-associated Hill48 model accounting for anisotropic hardening

    NASA Astrophysics Data System (ADS)

    Lian, J.; Ahn, D. C.; Chae, D. C.; Münstermann, S.; Bleck, W.

    2016-08-01

    Experimental and numerical investigations on the characterisation and prediction of cold formability of a ferritic steel sheet are performed in this study. Tensile tests and Nakajima tests were performed for the plasticity characterisation and the forming limit diagram determination. In the numerical prediction, the modified maximum force criterion is selected as the localisation criterion. For the plasticity model, a non-associated formulation of the Hill48 model is employed. With the non-associated flow rule, the model achieves a predictive capability for stress and r-value directionality similar to that of advanced non-quadratic associated models. To accurately characterise the anisotropy evolution during hardening, anisotropic hardening is also calibrated and implemented in the model for the prediction of formability.
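
    As a hedged aside on how an r-value-based Hill48 description reproduces r-value directionality, the Python sketch below uses the standard plane-stress calibration of the Hill48 parameters from r0, r45 and r90 and evaluates r(theta); the sample r-values are illustrative, and the paper's non-associated flow rule and anisotropic-hardening calibration are not reproduced here.

      # Hedged sketch: classical plane-stress Hill48 calibration from r-values and the
      # resulting r-value directionality. Sample r-values are illustrative only; the
      # paper's non-associated flow rule and anisotropic hardening are not modeled here.
      import math

      def hill48_from_r(r0, r45, r90):
          """Return Hill48 parameters (F, G, H, N) calibrated from r0, r45, r90."""
          G = 1.0 / (1.0 + r0)
          H = r0 / (1.0 + r0)
          F = r0 / (r90 * (1.0 + r0))
          N = (r0 + r90) * (1.0 + 2.0 * r45) / (2.0 * r90 * (1.0 + r0))
          return F, G, H, N

      def r_value(theta_deg, F, G, H, N):
          """Predicted r-value for uniaxial tension at angle theta to the rolling direction."""
          s, c = math.sin(math.radians(theta_deg)), math.cos(math.radians(theta_deg))
          return (H + (2*N - F - G - 4*H) * s*s * c*c) / (F * s*s + G * c*c)

      if __name__ == "__main__":
          F, G, H, N = hill48_from_r(r0=0.8, r45=1.0, r90=1.2)   # illustrative values
          for th in (0, 15, 30, 45, 60, 75, 90):
              print(f"theta={th:2d}  r={r_value(th, F, G, H, N):.3f}")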

  4. Teaching Elementary Accounting to Non-Accounting Majors

    ERIC Educational Resources Information Center

    Lloyd, Cynthia B.; Abbey, Augustus

    2009-01-01

    A central recurring theme in business education is the optimal strategy for improving introductory accounting, the gateway subject of business education. For many students, especially non-accounting majors, who are required to take introductory accounting as a requirement of the curriculum, introductory accounting has become a major obstacle for…

  5. New Frontiers: Training Forensic Accountants within the Accounting Program

    ERIC Educational Resources Information Center

    Ramaswamy, Vinita

    2007-01-01

    Accountants have recently been subject to very unpleasant publicity following the collapse of Enron and other major companies. There has been a plethora of accounting failures and accounting restatements of falsified earnings, with litigations and prosecutions taking place every day. As the FASB struggles to tighten the loopholes in accounting,…

  6. 18 CFR 367.1420 - Account 142, Customer accounts receivable.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., FEDERAL POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT... GAS ACT Balance Sheet Chart of Accounts Current and Accrued Assets § 367.1420 Account 142, Customer... merchandising, jobbing and contract work. This account must not include amounts due from associate companies....

  7. Estimating indoor semi-volatile organic compounds (SVOCs) associated with settled dust by an integrated kinetic model accounting for aerosol dynamics

    NASA Astrophysics Data System (ADS)

    Shi, Shanshan; Zhao, Bin

    2015-04-01

    Due to their low vapor pressure, semi-volatile organic compounds (SVOCs) can sorb onto other compartments in indoor environments, including settled dust. Incidental ingestion of settled dust-bound SVOCs contributes the majority of daily non-dietary exposure to some SVOCs by human beings. With this pathway in mind, an integrated kinetic model to estimate indoor SVOC was developed to better predict the mass fraction of SVOC associated with settled dust, which is important for accurately assessing non-dietary ingestion exposure to SVOC. In this integrated kinetic model, aerosol dynamics were considered, including particle penetration, deposition and resuspension. The newly developed model was evaluated by comparing the predicted mass fraction of SVOC associated with settled dust (Xdust) with the measured Xdust from previous studies. Sixty Xdust values of thirty-eight different SVOCs measured in residences located in seven countries on four continents were involved in the model evaluation. The Xdust value predicted by the integrated kinetic model correlated linearly with the measured Xdust: y = 0.93x + 0.09 (R2 = 0.73), indicating that the Xdust predicted by the integrated kinetic model agrees well with the measured data. This model may be utilized to predict SVOC concentrations in different indoor compartments, including dust-bound SVOC.
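
    The full kinetic model with aerosol dynamics is not reproduced in the record; as a hedged point of comparison, the Python sketch below evaluates the simpler equilibrium dust-gas partitioning estimate of Xdust that such models refine, Xdust ≈ f_om · Koa · Cg / rho_oct, with illustrative parameter values.

      # Hedged sketch: equilibrium dust-gas partitioning estimate of the dust-phase
      # SVOC content, a common baseline that the paper's kinetic model refines by
      # adding aerosol dynamics. All parameter values below are illustrative only.
      def xdust_equilibrium(log_koa, c_gas_ug_m3, f_om=0.2, rho_oct_g_m3=8.2e5):
          """Xdust in micrograms of SVOC per gram of settled dust.
          log_koa: log10 octanol-air partition coefficient of the SVOC
          c_gas_ug_m3: gas-phase concentration (ug/m^3)
          f_om: assumed organic-matter fraction of the dust
          rho_oct_g_m3: density of octanol (~820 kg/m^3 expressed in g/m^3)"""
          k_dust_m3_per_g = f_om * (10.0 ** log_koa) / rho_oct_g_m3
          return k_dust_m3_per_g * c_gas_ug_m3

      if __name__ == "__main__":
          # Hypothetical phthalate-like compound: log Koa ~ 11, 0.05 ug/m^3 in the gas phase.
          print(f"Xdust ~ {xdust_equilibrium(11.0, 0.05):.1f} ug/g")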

  8. Enhancing the Reliability of GPCR Models by Accounting for Flexibility of Their Pro-Containing Helices: the Case of the Human mAChR1 Receptor.

    PubMed

    Pedretti, Alessandro; Mazzolari, Angelica; Ricci, Chiara; Vistoli, Giulio

    2015-04-01

    To better investigate the GPCR structures, we have recently proposed to explore their flexibility by simulating the bending of their Pro-containing TM helices, thereby generating a set of models (the so-called chimeras) which exhaustively combine the two conformations (bent and straight) of these helices. The primary objective of the study is to investigate whether such an approach can be exploited to enhance the reliability of GPCR models generated from distant templates. The study focused on the human mAChR1 receptor, for which a presumably reliable model was generated using the congener mAChR3 as the template, along with a second, less reliable model based on the distant β2-AR template. The second model was then utilized to produce the chimeras by combining the conformations of its Pro-containing helices (i.e., TM4, TM5, TM6 and TM7, yielding 16 modeled chimeras). The reliability of these chimeras was assessed through virtual screening campaigns evaluated with a novel skewness metric, in which they surpassed the predictive power of the more reliable mAChR1 model. Finally, the virtual screening campaigns emphasize the opportunity to synergistically combine the scores of multiple chimeras using a specially developed tool which generates highly predictive consensus functions by maximizing the corresponding enrichment factors. PMID:27490167

  9. Performance and Accountability Report

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The NASA Fiscal Year 2002 Performance and Accountability Report is presented. Over the past year, significant changes have been implemented to greatly improve NASA's management while continuing to break new ground in science and technology. Excellent progress has been made in implementing the President's Management Agenda. NASA is leading the government in its implementation of the five government-wide initiatives. NASA received an unqualified audit opinion on FY 2002 financial statements. The vast majority of performance goals have been achieved, furthering each area of NASA's mission. The contents include: 1) NASA Vision and Mission; 2) Management's Discussion and Analysis; 3) Performance; and 4) Financial.

  10. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    SciTech Connect

    Robert Pincus

    2011-05-17

    This project focused on the variability of clouds that is present across a wide range of scales, from the synoptic to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.
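
    The record does not spell out the parameterization itself; as a hedged illustration of the generic statistical-cloud-scheme idea (diagnosing cloud fraction from an assumed subgrid PDF of total water), the Python sketch below uses a Gaussian PDF, which is one common assumption and not necessarily the scheme developed in this project.

      # Hedged sketch: diagnosing subgrid cloud fraction from an assumed Gaussian PDF of
      # total water, the generic idea behind statistical cloud schemes. The PDF shape and
      # the numbers are illustrative; they are not this project's parameterization.
      import math

      def cloud_fraction(qt_mean, qt_std, q_sat):
          """Fraction of the grid box where total water qt exceeds saturation q_sat,
          assuming qt ~ Normal(qt_mean, qt_std)."""
          s = (qt_mean - q_sat) / qt_std
          return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

      if __name__ == "__main__":
          # qt and q_sat in g/kg: slightly subsaturated on average, but partly cloudy
          # because of subgrid variability.
          print(f"cloud fraction = {cloud_fraction(qt_mean=7.0, qt_std=0.5, q_sat=7.3):.2f}")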

  11. A modelling exercise to examine variations of NOx concentrations on adjacent footpaths in a street canyon: The importance of accounting for wind conditions and fleet composition.

    PubMed

    Gallagher, J

    2016-04-15

    Personal measurement studies and modelling investigations are used to examine pollutant exposure for pedestrians in the urban environment: each presenting various strengths and weaknesses in relation to labour and equipment costs, a sufficient sampling period and the accuracy of results. This modelling exercise considers the potential benefits of modelling results over personal measurement studies and aims to demonstrate how variations in fleet composition affect exposure results (presented as mean concentrations along the centre of both footpaths) in different traffic scenarios. A model of Pearse Street in Dublin, Ireland was developed by combining a computational fluid dynamic (CFD) model and a semi-empirical equation to simulate pollutant dispersion in the street. Using local NOx concentrations, traffic and meteorological data from a two-week period in 2011, the model was validated and a good fit was obtained. To explore the long-term variations in personal exposure due to variations in fleet composition, synthesised traffic data was used to compare short-term personal exposure data (over a two-week period) with the results for an extended one-year period. Personal exposure during the two-week period underestimated the one-year results by between 8% and 65% on adjacent footpaths. The findings demonstrate the potential for relative differences in pedestrian exposure to exist between the north and south footpaths due to changing wind conditions in both peak and off-peak traffic scenarios. This modelling approach may help overcome potential under- or over-estimations of concentrations in personal measurement studies on the footpaths. Further research aims to measure pollutant concentrations on adjacent footpaths in different traffic and wind conditions and to develop a simpler modelling system to identify pollutant hotspots on our city footpaths so that urban planners can implement improvement strategies to improve urban air quality. PMID:26859699
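
    As a hedged illustration of why a two-week sample can misstate annual footpath exposure, the Python sketch below weights hypothetical per-scenario mean NOx concentrations (by wind sector and traffic period) by their annual frequencies and compares the result with a short campaign dominated by a few scenarios; all numbers are invented and are not the study's Pearse Street results.

      # Hedged sketch: weighting per-scenario footpath concentrations by how often each
      # scenario occurs. All concentrations and frequencies are invented; the study's
      # actual CFD/semi-empirical results are not reproduced here.

      # Mean NOx (ug/m^3) on one footpath for (wind sector, traffic period) scenarios.
      scenario_conc = {
          ("westerly", "peak"): 95.0,
          ("westerly", "offpeak"): 60.0,
          ("easterly", "peak"): 70.0,
          ("easterly", "offpeak"): 45.0,
      }

      annual_freq = {  # fraction of the year in each scenario (sums to 1)
          ("westerly", "peak"): 0.25,
          ("westerly", "offpeak"): 0.35,
          ("easterly", "peak"): 0.15,
          ("easterly", "offpeak"): 0.25,
      }

      two_week_freq = {  # a short campaign that happened to catch mostly easterly winds
          ("westerly", "peak"): 0.10,
          ("westerly", "offpeak"): 0.15,
          ("easterly", "peak"): 0.30,
          ("easterly", "offpeak"): 0.45,
      }

      def weighted_mean(freq):
          """Frequency-weighted mean concentration over all scenarios."""
          return sum(freq[k] * scenario_conc[k] for k in scenario_conc)

      annual = weighted_mean(annual_freq)
      sample = weighted_mean(two_week_freq)
      print(f"annual mean {annual:.1f}, two-week mean {sample:.1f}, "
            f"bias {100 * (sample - annual) / annual:+.0f}%")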

  13. Comparison of the sequestering properties of yeast cell wall extract and hydrated sodium calcium aluminosilicate in three in vitro models accounting for the animal physiological bioavailability of zearalenone.

    PubMed

    Yiannikouris, A; Kettunen, H; Apajalahti, J; Pennala, E; Moran, C A

    2013-01-01

    The sequestration/inactivation of the oestrogenic mycotoxin zearalenone (ZEA) by two adsorbents--yeast cell wall extract (YCW) and hydrated sodium calcium aluminosilicate (HSCAS)--was studied in three laboratory models: (1) an in vitro model was adapted from referenced methods to test for the sequestrant sorption capabilities under buffer conditions at two pH values using liquid chromatography coupled to a fluorescence detector for toxin quantification; (2) a second in vitro model was used to evaluate the sequestrant sorption stability according to pH variations and using ³H-labelled ZEA at low toxin concentration; and (3) an original, ex vivo Ussing chamber model was developed to further understand the transfer of ZEA through intestinal tissue and the impact of each sequestrant on the mycotoxin bioavailability of ³H-labelled ZEA. YCW was a more efficient ZEA adsorbent than HSCAS in all three models, except under very acidic conditions (pH 2.5 or 3.0). The Ussing chamber model offered a novel, ex vivo, alternative method for understanding the effect of sequestrant on the bioavailability of ZEA. The results showed that compared with HSCAS, YCW was more efficient in sequestering ZEA and that it reduced the accumulation of ZEA in the intestinal tissue by 40% (p < 0.001).

  14. The union, the mining company, and the environment: steelworkers build a multi-stakeholder model for corporate accountability at Phelps Dodge.

    PubMed

    Lewis, S

    1999-01-01

    This is a case study of ongoing relations between the Phelps Dodge mining company, a United Steelworkers local representing 560 employees at the company's Chino Mines in New Mexico, and an array of other concerned stakeholders. This case study shows that labor can be a full partner in environmental advocacy, and even take a leadership role in building a strong multi-stakeholder alliance for corporate accountability. While the case also shows that corporate jobs blackmail is alive and well in the global economy, the labor-community coalition that has emerged at the mining complex has broken some new ground. The approach taken attends to diverse stakeholder interests--cultural protection issues of Native-American and Mexican-American ethnic groups; conservation, groundwater and Right-to-Know issues of traditional environmental constituencies; and environmental liability and disclosure concerns of corporate shareholders. Among the key developments are: a new approach to corporate reporting to shareholders as an enforcement and right-to-know tool; the use of the internet as an information dissemination and action tool; and the potential for environmentally needed improvements to serve as a receptor for employment of workers at a mine during periods of reduced production. PMID:17208913

  15. Revamping High School Accounting Courses.

    ERIC Educational Resources Information Center

    Bittner, Joseph

    2002-01-01

    Provides ideas for updating accounting courses: convert to semester length; focus on financial reporting/analysis, financial statements, the accounting cycle; turn textbook exercises into practice sets for the accounting cycle; teach about corporate accounting; and address individual line items on financial statements. (SK)

  16. Where Are the Accounting Professors?

    ERIC Educational Resources Information Center

    Chang, Jui-Chin; Sun, Huey-Lian

    2008-01-01

    Accounting education is facing a crisis: a shortage of accounting faculty. This study discusses the reasons behind the shortage and offers suggestions to increase the supply of accounting faculty. Our suggestions are as follows. First, educators should begin promoting accounting academia as one of the career choices for undergraduate and…

  17. Accountability and the New Essentials.

    ERIC Educational Resources Information Center

    Dowd, Steven B.

    The current emphasis in education on accountability is tending toward "push-button accountability." The challenge is to evaluate access and retention as well as other educationally relevant goals to define "quality" or "accountability." In higher education, accountability should be proven through assessment and should consist of proof that what…

  18. MuSE: accounting for tumor heterogeneity using a sample-specific error model improves sensitivity and specificity in mutation calling from sequencing data.

    PubMed

    Fan, Yu; Xi, Liu; Hughes, Daniel S T; Zhang, Jianjun; Zhang, Jianhua; Futreal, P Andrew; Wheeler, David A; Wang, Wenyi

    2016-01-01

    Subclonal mutations reveal important features of the genetic architecture of tumors. However, accurate detection of mutations in genetically heterogeneous tumor cell populations using next-generation sequencing remains challenging. We develop MuSE ( http://bioinformatics.mdanderson.org/main/MuSE ), Mutation calling using a Markov Substitution model for Evolution, a novel approach for modeling the evolution of the allelic composition of the tumor and normal tissue at each reference base. MuSE adopts a sample-specific error model that reflects the underlying tumor heterogeneity to greatly improve the overall accuracy. We demonstrate the accuracy of MuSE in calling subclonal mutations in the context of large-scale tumor sequencing projects using whole exome and whole genome sequencing. PMID:27557938
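
    MuSE's actual Markov substitution model is not given in the record; as a hedged toy version of the underlying idea (estimating a sample-specific error rate from the matched normal and testing the tumor allele counts against it), the Python sketch below uses a simple binomial likelihood-ratio comparison with hypothetical read counts.

      # Hedged toy sketch: a sample-specific error rate estimated from the matched normal
      # and a binomial likelihood-ratio test on the tumor allele counts. This is only a
      # simplified illustration of the idea, not MuSE's Markov substitution model.
      from math import lgamma, log

      def log_binom(k, n, p):
          """Log binomial likelihood of k alt reads out of n at alt fraction p."""
          p = min(max(p, 1e-6), 1 - 1e-6)
          return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                  + k * log(p) + (n - k) * log(1 - p))

      def call_site(alt_t, depth_t, alt_n, depth_n, pseudo=1.0):
          """Return (log-likelihood ratio, somatic call) for one site."""
          err = (alt_n + pseudo) / (depth_n + 2 * pseudo)   # sample-specific error from normal
          vaf = alt_t / depth_t                              # tumor variant allele fraction
          llr = log_binom(alt_t, depth_t, vaf) - log_binom(alt_t, depth_t, err)
          return llr, llr > 6.0                              # arbitrary illustrative threshold

      if __name__ == "__main__":
          # Hypothetical counts: a subclonal variant at ~10% VAF with a clean normal.
          print(call_site(alt_t=8, depth_t=80, alt_n=0, depth_n=60))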

  20. A DGTD method for the numerical modeling of the interaction of light with nanometer scale metallic structures taking into account non-local dispersion effects

    NASA Astrophysics Data System (ADS)

    Schmitt, Nikolai; Scheid, Claire; Lanteri, Stéphane; Moreau, Antoine; Viquerat, Jonathan

    2016-07-01

    The interaction of light with metallic nanostructures is increasingly attracting interest because of numerous potential applications. Sub-wavelength metallic structures, when illuminated with a frequency close to the plasma frequency of the metal, present resonances that cause extreme local field enhancements. Exploiting the latter in applications of interest requires detailed knowledge of the resulting fields, which cannot be obtained analytically. Numerical tools are therefore an absolute necessity. The insight they provide is very often the only way to get a deep enough understanding of the very rich physics at play. For the numerical modeling of light-structure interaction on the nanoscale, the choice of an appropriate material model is a crucial point. Approaches adopted in the first instance are based on local (i.e. with no interaction between electrons) dispersive models, e.g. Drude or Drude-Lorentz models. From the mathematical point of view, when a time-domain modeling is considered, these models lead to an additional system of ordinary differential equations coupled to Maxwell's equations. However, recent experiments have shown that the repulsive interaction between electrons inside the metal makes the response of metals intrinsically non-local and that this effect cannot generally be overlooked. Technological achievements have enabled the consideration of metallic structures in a regime where such non-localities have a significant influence on the structures' optical response. This leads to an additional, in general non-linear, system of partial differential equations which is, when coupled to Maxwell's equations, significantly more difficult to treat. Nevertheless, dealing with a linearized non-local dispersion model already opens the route to numerous practical applications of plasmonics. In this work, we present a Discontinuous Galerkin Time-Domain (DGTD) method able to solve the system of Maxwell…
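
    The abstract notes that local dispersive models add a system of ordinary differential equations coupled to Maxwell's equations, while non-local hydrodynamic models add a partial differential equation system; as a hedged illustration of the local case only, the Python sketch below time-steps the Drude polarization-current ODE dJ/dt = eps0*wp^2*E - gamma*J for a prescribed field, with illustrative material constants.

      # Hedged sketch: the auxiliary ODE added by a local Drude model,
      #   dJ/dt = eps0 * wp^2 * E - gamma * J,
      # time-stepped with forward Euler for a prescribed E(t). In a DGTD solver this ODE
      # is coupled to Maxwell's equations element by element; the non-local hydrodynamic
      # model discussed in the paper replaces it with a PDE in J and is not shown here.
      import math

      EPS0 = 8.854e-12          # vacuum permittivity [F/m]
      WP = 1.37e16              # plasma frequency [rad/s] (roughly gold-like, illustrative)
      GAMMA = 1.0e14            # collision rate [1/s] (illustrative)

      def step_current(J, E, dt):
          """One forward-Euler step of the Drude polarization-current ODE."""
          return J + dt * (EPS0 * WP**2 * E - GAMMA * J)

      if __name__ == "__main__":
          dt, J = 1.0e-18, 0.0
          omega = 2.4e15                       # driving frequency [rad/s]
          for n in range(2000):
              E = math.cos(omega * n * dt)     # prescribed illustrative field [V/m]
              J = step_current(J, E, dt)
          print(f"J after {2000 * dt:.1e} s: {J:.3e} A/m^2")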