Science.gov

Sample records for accounting sac-sma model

  1. Modeling the Hellenic karst catchments with the Sacramento Soil Moisture Accounting model

    NASA Astrophysics Data System (ADS)

    Katsanou, K.; Lambrakis, N.

    2017-01-01

    Karst aquifers are very complex due to the presence of dual porosity. Rain-runoff hydrological models are frequently used to characterize these aquifers and assist in their management. The calibration of such models requires knowledge of many parameters, whose quality directly affects the quality of the simulation results. The Sacramento Soil Moisture Accounting (SAC-SMA) model includes a number of physically based parameters that permit accurate simulations and predictions of the rain-runoff relationships. Due to common physical characteristics of mature karst structures, expressed by sharp recession limbs of the runoff hydrographs, the calibration of the model becomes relatively simple, and the values of the parameters range within narrow bands. The most sensitive parameters are those related to groundwater storage regulated by the zone of the epikarst. The SAC-SMA model was calibrated for data from the mountainous part of the Louros basin, north-western Greece, which is considered to be representative of such geological formations. Visual assessment of the hydrographs, together with the statistical outcomes, revealed that the SAC-SMA model simulated the timing and magnitude of the peak flow and the shape of the recession curves well.
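
    The sharp karst recession limbs mentioned above are often idealized as a linear-reservoir recession, Q(t) = Q0 exp(-t/k). The sketch below is purely illustrative; it is not part of SAC-SMA, and the discharge and coefficient values are invented. It contrasts a flashy karst-like recession with a slower one:

    ```python
    import numpy as np

    def recession_limb(q0, k, hours):
        """Exponential recession of a linear reservoir: Q(t) = q0 * exp(-t / k).

        A small recession coefficient k yields the sharp recession limbs that
        characterize mature karst systems in the abstract above.
        """
        t = np.arange(hours, dtype=float)
        return q0 * np.exp(-t / k)

    # Compare a "flashy" karst-like recession with a slower one (hypothetical values).
    sharp = recession_limb(q0=50.0, k=12.0, hours=72)    # drains within ~2 days
    slow = recession_limb(q0=50.0, k=120.0, hours=72)    # porous-medium-like
    print(f"flow after 24 h: sharp={sharp[24]:.1f}, slow={slow[24]:.1f} m^3/s")
    ```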

  2. Evaluation of the Sacramento Soil Moisture Accounting Model for Flood Forecasting in a Hawaiian Watershed

    NASA Astrophysics Data System (ADS)

    Awal, R.; Fares, A.; Michaud, J.; Chu, P.; Fares, S.; Rosener, M.; Kevin, K.

    2012-12-01

    The focus of this study was to assess the performance of the U.S. National Weather Service Sacramento Soil Moisture Accounting model (SAC-SMA) on the flash-flood-prone Hanalei watershed, Kauai, Hawaii, using site-specific hydrologic data. The model was calibrated and validated using six years of observed field hydrological data, e.g., streamflow, and spatially distributed rainfall. The ordinary kriging method was used to calculate mean watershed-wide hourly precipitation for the six years using data from twenty rain gauges on the north shore of Kauai, including five rain gauges within the watershed. Ranges of the values of a priori SAC-SMA parameters were also estimated based on the site-specific soil hydrological properties; these calculated values were well within those reported in the literature for different watersheds. SAC-SMA was run in one-year runs using the calibration and validation data. The performance of the model in predicting streamflow using average watershed-wide values of the a priori parameters was very poor: SAC-SMA overpredicted streamflow throughout the year as compared to observed streamflow data. The upper limit of the lower-zone tension water capacity parameter, LZTWM, was higher than those reported in the literature; this might be due to the wetter conditions (higher precipitation) in the Hanalei watershed (>6,400 mm) than in the other previously studied watersheds (<1,600 mm). When the upper bound of LZTWM varied between 2500 and 3000 during calibration, SAC-SMA's performance improved to satisfactory, and even to good, for almost all years based on PBIAS and Nash-Sutcliffe coefficients of efficiency. When the optimized parameters of one year were applied to other years for validation, the optimized parameters of year 2005 were satisfactory for most of the years when the upper bound of LZTWM = 2500, and the optimized parameters of year 2004 were satisfactory for most of the years when the upper bound of LZTWM = 3000. The annual precipitation of 2004 was the highest
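
    The performance scores named above are simple to compute. A minimal sketch with synthetic flows follows; note that the sign convention for PBIAS varies across papers, so the one below (positive = underestimation) is an assumption:

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """Percent bias, 100 * sum(obs - sim) / sum(obs).

        Under this (assumed) sign convention, positive values indicate that
        the model underestimates the observed flows.
        """
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    obs = np.array([1.0, 3.2, 8.5, 4.1, 2.0])   # observed hourly flows (m^3/s, toy)
    sim = np.array([1.2, 2.9, 7.0, 4.6, 2.3])   # simulated flows
    print(f"NSE = {nash_sutcliffe(obs, sim):.3f}, PBIAS = {pbias(obs, sim):+.1f}%")
    ```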

  3. Application of stochastic parameter optimization to the Sacramento Soil Moisture Accounting model

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Gupta, Hoshin V.; Dekker, Stefan C.; Sorooshian, Soroosh; Wagener, Thorsten; Bouten, Willem

    2006-06-01

    Hydrological models generally contain parameters that cannot be measured directly, but can only be meaningfully inferred by calibration against a historical record of input-output data. While considerable progress has been made in the development and application of automatic procedures for model calibration, such methods have received criticism for their lack of rigor in treating uncertainty in the parameter estimates. In this paper, we apply the recently developed Shuffled Complex Evolution Metropolis algorithm (SCEM-UA) to stochastic calibration of the parameters in the Sacramento Soil Moisture Accounting (SAC-SMA) model using historical data from the Leaf River in Mississippi. The SCEM-UA algorithm is a Markov Chain Monte Carlo sampler that provides an estimate of the most likely parameter set and underlying posterior distribution within a single optimization run. In particular, we explore the relationship between the length and variability of the streamflow data and the Bayesian uncertainty associated with the SAC-SMA model parameters and compare SCEM-UA derived parameter values with those obtained using deterministic SCE-UA calibrations. Most significantly, for the Leaf River catchment under study our results demonstrate that most of the 13 SAC-SMA parameters are well identified by calibration to daily streamflow data, suggesting that these data contain more information than has previously been reported in the literature.
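
    SCEM-UA evolves multiple shuffled chains in parallel; the single-chain random-walk Metropolis sketch below only illustrates the core MCMC idea of sampling a parameter posterior rather than one best set. The two-parameter recession "model" stands in for SAC-SMA, and all names and values are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def log_posterior(theta, obs, model):
        """Gaussian log-likelihood with a flat prior on theta > 0 (sigma fixed at 1)."""
        if np.any(theta <= 0.0):
            return -np.inf                         # outside the prior support
        resid = obs - model(theta)
        return -0.5 * np.sum(resid ** 2)

    def metropolis(obs, model, theta0, n_iter=5000, step=0.1):
        """Random-walk Metropolis chain over model parameters."""
        theta = np.asarray(theta0, float)
        lp = log_posterior(theta, obs, model)
        chain = []
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_posterior(prop, obs, model)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        return np.array(chain)

    # Toy "model": a two-parameter exponential recession in place of SAC-SMA.
    t = np.arange(20.0)
    toy_model = lambda th: th[0] * np.exp(-t / th[1])
    obs = toy_model([5.0, 8.0]) + 0.1 * rng.standard_normal(t.size)
    chain = metropolis(obs, toy_model, theta0=[1.0, 1.0])
    print("posterior mean (second half of chain):", chain[len(chain) // 2:].mean(axis=0))
    ```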

  4. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by the Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces a smaller mean absolute difference as well as a higher correlation between the a priori and the updated states than the SC approach, while producing a similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both WC and SC approaches, indicating possible model structural deficiency in describing low flows or
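
    A minimal sketch of the weakly-constrained cost function described above, assuming a quadratic penalty on both the state adjustment and the additive runoff error (the function names and weights are hypothetical; the paper's actual VAR formulation may differ):

    ```python
    import numpy as np

    def wc_cost(x, eps, xb, obs, simulate, w_x=1.0, w_eps=1.0):
        """Weakly-constrained variational cost function (a minimal sketch).

        x        : adjusted model states; xb is the background (a priori) state.
        eps      : additive error applied to runoff components before routing,
                   representing rainfall-runoff structural inadequacy.
        simulate : maps (states, runoff error) to simulated outlet discharge.
        """
        sim = simulate(x, eps)
        j_obs = np.sum((obs - sim) ** 2)        # misfit to observed discharge
        j_bkg = w_x * np.sum((x - xb) ** 2)     # penalty on state adjustment
        j_eps = w_eps * np.sum(eps ** 2)        # penalty on the structural error
        return j_obs + j_bkg + j_eps

    # Toy demo: a "simulation" that routes (state + error) through a unit gain.
    simulate = lambda x, eps: x + eps
    xb, obs = np.array([2.0]), np.array([3.0])
    print(wc_cost(np.array([2.5]), np.array([0.5]), xb, obs, simulate))
    ```

    The strongly-constrained cost is the special case eps = 0: the states alone must absorb all of the model-data misfit, which is consistent with the abstract's finding that SC analyses drift further from the a priori states than WC analyses.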

  5. Parameter estimation of hydrologic models using data assimilation

    NASA Astrophysics Data System (ADS)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
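
    A crude stand-in for the bound-narrowing idea behind LoBaRE follows: sample within bounds, keep the fittest parameter sets, and re-centre tighter bounds on them. LoBaRE's actual Bayesian updating of "parent" bounds is richer than this sketch, and all values are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def localized_search(loss, lo, hi, n_samples=200, n_iter=8, shrink=0.5):
        """Iteratively narrow parameter bounds around the fittest samples.

        Each iteration draws uniformly within the current bounds, keeps the
        best-scoring samples, and re-centres tighter bounds on them.
        """
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        for _ in range(n_iter):
            samples = rng.uniform(lo, hi, size=(n_samples, lo.size))
            scores = np.apply_along_axis(loss, 1, samples)
            elites = samples[np.argsort(scores)[: n_samples // 10]]
            centre = elites.mean(axis=0)
            span = shrink * (hi - lo)
            lo, hi = centre - span / 2, centre + span / 2
        return centre

    # Toy loss with optimum at (3, -1); stands in for a streamflow error metric.
    loss = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
    print(localized_search(loss, lo=[-10, -10], hi=[10, 10]))
    ```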

  6. Comparison of a Neural Network and a Conceptual Model for Rainfall-Runoff Modelling with Monthly Input

    NASA Astrophysics Data System (ADS)

    Chochlidakis, Chronis; Daliakopoulos, Ioannis; Tsanis, Ioannis

    2014-05-01

    Rainfall-runoff (RR) models contain parameters that can seldom be directly measured or estimated by expert judgment, but are rather inferred by calibration against a historical record of input-output datasets. Here, a comparison is made between a conceptual model and an Artificial Neural Network (ANN) for efficient modeling of complex hydrological processes. The monthly rainfall, streamflow, and evapotranspiration data from 15 catchments in Crete, Greece are used to compare the proposed methodologies. Genetic Algorithms (GA) are applied for the stochastic calibration of the parameters in the Sacramento Soil Moisture Accounting (SAC-SMA) model, yielding R2 values between 0.65 and 0.90. A Feedforward NN (FNN) is trained using a time delay approach, optimized through trial and error for each catchment, yielding R2 values between 0.70 and 0.91. The results obtained show that the ANN models can be superior to the conventional conceptual models due to their ability to handle the non-linearity and dynamic nature of the natural physical processes in a more efficient manner. On the other hand, SAC-SMA reproduces high flows with greater accuracy, and the results suggest that conceptual models can be more robust in extrapolating beyond historical record limits.
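
    A time-delay input for a feedforward network can be built by stacking lagged rainfall and the previous flow, as in the minimal sketch below (the lag depth and feature layout are assumptions; the study tuned them by trial and error per catchment):

    ```python
    import numpy as np

    def time_delay_features(rain, flow, n_lags=3):
        """Build a lagged input matrix for a time-delay feedforward network.

        Inputs at time t are [rain(t), rain(t-1), ..., rain(t-n_lags+1),
        flow(t-1)]; the target is flow(t).
        """
        X, y = [], []
        for t in range(n_lags, len(rain)):
            X.append(np.r_[rain[t - n_lags + 1 : t + 1][::-1], flow[t - 1]])
            y.append(flow[t])
        return np.array(X), np.array(y)

    rain = np.array([0.0, 10.0, 5.0, 0.0, 20.0, 3.0])   # monthly rainfall (toy)
    flow = np.array([1.0, 2.5, 3.0, 2.0, 4.5, 3.5])     # monthly streamflow (toy)
    X, y = time_delay_features(rain, flow)
    print(X.shape, y.shape)   # (3, 4) (3,)
    ```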

  7. Custom accounts receivable modeling.

    PubMed

    Veazie, J

    1994-04-01

    In hospital and clinic management, accounts are valued as units and handled equally--a $20 account receives the same minimum number of statements as a $20,000 account. Quite often, the sheer number of accounts a hospital or clinic has to handle forces executives to manage accounts by default and failure--accounts mature on an aging track and, if left unpaid by patients, eventually are sent to collections personnel. Of the bad-debt accounts placed with collections agencies, many are misclassified as charity or hardship cases, while others could be collected by hospital or clinic staff with a limited amount of additional effort.

  8. Model Accounting Program. Adopters Guide.

    ERIC Educational Resources Information Center

    Beaverton School District 48, OR.

    The accounting cluster demonstration project conducted at Aloha High School in the Beaverton, Oregon, school district developed a model curriculum for high school accounting. The curriculum is based on interviews with professionals in the accounting field and emphasizes the use of computers. It is suitable for use with special needs students as…

  9. An Integrated Bayesian Uncertainty Estimator: fusion of Input, Parameter and Model Structural Uncertainty Estimation in Hydrologic Prediction System

    NASA Astrophysics Data System (ADS)

    Ajami, N. K.; Duan, Q.; Sorooshian, S.

    2005-12-01

    To date, single conceptual hydrologic models have often been applied to interpret physical processes within a watershed. Nevertheless, hydrologic models, regardless of their sophistication and complexity, are simplified representations of a complex, spatially distributed, and highly nonlinear real-world system. Consequently, their hydrologic predictions contain considerable uncertainty from different sources, including hydrometeorological forcing inputs, boundary/initial conditions, model structure, and model parameters, all of which need to be accounted for. Thus far, effort has gone into addressing these sources of uncertainty separately, making an implicit assumption that uncertainties from different sources are additive. Nevertheless, because of the nonlinear nature of hydrologic systems, it is not feasible to account for these uncertainties independently. Here we present the Integrated Bayesian Uncertainty Estimator (IBUNE), which accounts for total uncertainty from all major sources: input forcings, model structure, and model parameters. This algorithm uses a multi-model framework to tackle model structural uncertainty while using Bayesian rules to estimate parameter and input uncertainty within individual models. Three hydrologic models, the SACramento Soil Moisture Accounting (SAC-SMA) model, the Hydrologic Model (HYMOD), and the Simple Water Balance (SWB) model, were considered within the IBUNE framework for this study. The results, which are presented for the Leaf River Basin, MS, indicate that IBUNE gives a better quantification of uncertainty through the hydrological modeling process, therefore providing more reliable and less biased predictions with realistic uncertainty boundaries.
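
    The abstract does not spell out IBUNE's combination rule, so the sketch below shows only a generic likelihood-weighted multi-model ensemble, one simple way to let the data weight competing structures such as SAC-SMA, HYMOD and SWB (all numbers invented):

    ```python
    import numpy as np

    def likelihood_weights(obs, sims, sigma=1.0):
        """Posterior-style model weights from Gaussian likelihoods of each
        model's simulation against the observations (equal priors assumed)."""
        ll = np.array([-0.5 * np.sum((obs - s) ** 2) / sigma ** 2 for s in sims])
        w = np.exp(ll - ll.max())            # subtract max for numerical stability
        return w / w.sum()

    obs = np.array([2.0, 3.5, 5.0])          # observed flows (toy values)
    sims = [np.array([2.1, 3.4, 5.2]),       # e.g., a SAC-SMA simulation
            np.array([2.5, 3.0, 4.0]),       # e.g., a HYMOD simulation
            np.array([1.0, 2.0, 3.0])]       # e.g., an SWB simulation
    w = likelihood_weights(obs, sims)
    ensemble = sum(wi * s for wi, s in zip(w, sims))   # weighted multi-model mean
    print(np.round(w, 3), np.round(ensemble, 2))
    ```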

  10. Assessing model state and forecasts variation in hydrologic data assimilation

    NASA Astrophysics Data System (ADS)

    Samuel, Jos; Coulibaly, Paulin; Dumedah, Gift; Moradkhani, Hamid

    2014-05-01

    Data assimilation (DA) has been widely used in hydrological models to improve model state and subsequent streamflow estimates. However, for poor or non-existent state observations, the state estimation in hydrological DA can be problematic, leading to inaccurate streamflow updates. This study evaluates the soil moisture and flow variations and forecasts by assimilating streamflow and soil moisture. Three approaches of the Ensemble Kalman Filter (EnKF) with dual state-parameter estimation are applied: (1) streamflow assimilation, (2) soil moisture assimilation, and (3) combined assimilation of soil moisture and streamflow. The assimilation approaches are evaluated using the Sacramento Soil Moisture Accounting (SAC-SMA) model in the Spencer Creek catchment in southern Ontario, Canada. The results show that there are significant differences in soil moisture variations and streamflow estimates when the three assimilation approaches are applied. In the streamflow assimilation, soil moisture states were markedly distorted, particularly the soil moisture of the lower soil layer; whereas, in the soil moisture assimilation, streamflow estimates are inaccurate. The combined assimilation of streamflow and soil moisture provides more accurate forecasts of both soil moisture and streamflow, particularly for shorter lead times. The combined approach has the flexibility to account for model adjustment through the time variation of parameters together with state variables when soil moisture and streamflow observations are integrated into the assimilation procedure. This evaluation is important for the application of DA methods to simultaneously estimate soil moisture states and watershed response and forecasts.
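
    A minimal stochastic EnKF analysis step for an augmented state-parameter ensemble is sketched below (the state layout and all values are invented; operational dual state-parameter EnKF implementations add inflation, localization and other refinements):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def enkf_update(X, y_obs, H, obs_var):
        """Stochastic EnKF analysis step for an ensemble of augmented
        state-parameter vectors X (n_members x n_state).

        H maps state vectors to observation space (here a linear operator).
        """
        n = X.shape[0]
        Y = X @ H.T                                     # predicted observations
        y_pert = y_obs + rng.normal(0, np.sqrt(obs_var), size=(n, y_obs.size))
        X_a = X - X.mean(axis=0)                        # ensemble anomalies
        Y_a = Y - Y.mean(axis=0)
        P_xy = X_a.T @ Y_a / (n - 1)                    # state-obs covariance
        P_yy = Y_a.T @ Y_a / (n - 1) + obs_var * np.eye(y_obs.size)
        K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
        return X + (y_pert - Y) @ K.T

    # Toy ensemble: columns = [upper-zone storage, lower-zone storage, one parameter].
    X = rng.normal([50.0, 120.0, 0.3], [10.0, 20.0, 0.05], size=(100, 3))
    H = np.array([[1.0, 0.0, 0.0]])     # observe the upper-zone storage only
    X_new = enkf_update(X, np.array([60.0]), H, obs_var=4.0)
    print(X_new.mean(axis=0))           # all components shift via covariances
    ```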

  11. Analysis of the Second Model Parameter Estimation Experiment Workshop Results

    NASA Astrophysics Data System (ADS)

    Duan, Q.; Schaake, J.; Koren, V.; Mitchell, K.; Lohmann, D.

    2002-05-01

    The goal of the Model Parameter Estimation Experiment (MOPEX) is to investigate techniques for a priori parameter estimation for land surface parameterization schemes of atmospheric models and for hydrologic models. A comprehensive database has been developed which contains historical hydrometeorologic time series data and land surface characteristics data for 435 basins in the United States and many international basins. A number of international MOPEX workshops have been convened or planned for MOPEX participants to share their parameter estimation experience. The Second International MOPEX Workshop was held in Tucson, Arizona, April 8-10, 2002. This paper presents the MOPEX goals/objectives and science strategy. Results from our participation in developing and testing the a priori parameter estimation procedures for the National Weather Service (NWS) Sacramento Soil Moisture Accounting (SAC-SMA) model, the Simple Water Balance (SWB) model, and the National Centers for Environmental Prediction (NCEP) NOAH Land Surface Model (NOAH LSM) are highlighted. The test results include model simulations using both a priori parameters and calibrated parameters for 12 basins selected for the Tucson MOPEX Workshop.

  12. Implementing a trustworthy cost-accounting model.

    PubMed

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  13. Combined assimilation of streamflow and satellite soil moisture with the particle filter and geostatistical modeling

    NASA Astrophysics Data System (ADS)

    Yan, Hongxiang; Moradkhani, Hamid

    2016-08-01

    Assimilation of satellite soil moisture and streamflow data into a distributed hydrologic model has received increasing attention over the past few years. This study provides a detailed analysis of the joint and separate assimilation of streamflow and Advanced Scatterometer (ASCAT) surface soil moisture into a distributed Sacramento Soil Moisture Accounting (SAC-SMA) model, with the use of the recently developed particle filter-Markov chain Monte Carlo (PF-MCMC) method. Performance is assessed over the Salt River Watershed in Arizona, which is one of the watersheds without anthropogenic effects in the Model Parameter Estimation Experiment (MOPEX). A total of five data assimilation (DA) scenarios are designed and the effects of the locations of streamflow gauges and the ASCAT soil moisture on the predictions of soil moisture and streamflow are assessed. In addition, a geostatistical model is introduced to overcome the significant bias in the satellite soil moisture and its discontinuous coverage. The results indicate that: (1) solely assimilating outlet streamflow can lead to biased soil moisture estimation; (2) when the study area is only partially covered by the satellite data, the geostatistical approach can estimate the soil moisture for the uncovered grid cells; (3) joint assimilation of streamflow and soil moisture from geostatistical modeling can further improve the surface soil moisture prediction. This study recommends the geostatistical model as a helpful tool to aid the remote sensing technique and the hydrologic DA study.
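
    The abstract does not give the geostatistical model's form; ordinary kriging is one standard choice for filling grid cells outside the satellite swath, sketched below with an assumed exponential semivariogram and invented coordinates:

    ```python
    import numpy as np

    def ordinary_kriging(xy_obs, z_obs, xy_target, sill=1.0, range_len=50.0):
        """Ordinary kriging with an exponential semivariogram (a minimal sketch).

        Estimates a value at xy_target from scattered observations, e.g.
        filling soil moisture at grid cells the satellite swath did not cover.
        """
        def gamma(h):                       # exponential semivariogram
            return sill * (1.0 - np.exp(-h / range_len))

        n = len(z_obs)
        d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))         # kriging system with unbiasedness row
        A[:n, :n] = gamma(d)
        A[n, n] = 0.0                       # Lagrange multiplier entry
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy_obs - xy_target, axis=1))
        w = np.linalg.solve(A, b)[:n]       # kriging weights (sum to 1)
        return w @ z_obs

    xy = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0]])   # covered cells (km)
    sm = np.array([0.22, 0.30, 0.26])                        # volumetric soil moisture
    print(f"{ordinary_kriging(xy, sm, np.array([20.0, 20.0])):.3f}")
    ```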

  14. Evaluation and Sensitivity Analysis of An Ensemble-based Coupled Flash Flood and Landslide Modelling System Using Remote Sensing Forcing

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Hong, Y.; Gourley, J. J.; Xue, X.; He, X.

    2015-12-01

    Heavy rainfall-triggered landslides are often associated with flood events and cause additional loss of life and property. It is pertinent to build a robust coupled flash flood and landslide disaster early warning system for disaster preparedness and hazard management. In this study, we built an ensemble-based coupled flash flood and landslide disaster early warning system, aimed at operational use by the US National Weather Service, by integrating the Coupled Routing and Excess STorage (CREST) model and the Sacramento Soil Moisture Accounting model (SAC-SMA) with the physically based SLope-Infiltration-Distributed Equilibrium (SLIDE) landslide prediction model. We further evaluated this ensemble-based prototype warning system by conducting multi-year simulations driven by the Multi-Radar Multi-Sensor (MRMS) rainfall estimates in North Carolina and Oregon. We comprehensively evaluated the predictive capabilities of this system against observed and reported flood and landslide events. We then evaluated the sensitivity of the coupled system to the simulated hydrological processes. Our results show that the system is generally capable of making accurate predictions of flash flood and landslide events in terms of their locations and times of occurrence. The occurrence of predicted landslides shows high sensitivity to total infiltration and soil water content, highlighting the importance of accurately simulating the hydrological processes for the accurate forecasting of rainfall-triggered landslide events.
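
    SLIDE's equations are not reproduced in the abstract; the classic infinite-slope factor of safety below (with invented soil parameters) illustrates the kind of stability calculation into which a hydrologic model's simulated soil water content feeds, and why wetter soils raise predicted landslide occurrence:

    ```python
    import numpy as np

    def factor_of_safety(slope_deg, soil_depth, wetness, cohesion=5e3,
                         phi_deg=30.0, gamma_s=18e3, gamma_w=9.81e3):
        """Infinite-slope factor of safety; FS < 1 flags potential failure.

        wetness is the saturated fraction of the soil column, which is where
        a coupled system would plug in the hydrologic model's soil moisture.
        Units: depths in m, unit weights in N/m^3, cohesion in Pa.
        """
        beta, phi = np.radians(slope_deg), np.radians(phi_deg)
        normal = gamma_s * soil_depth * np.cos(beta) ** 2       # normal stress
        pore = gamma_w * soil_depth * wetness * np.cos(beta) ** 2
        resisting = cohesion + (normal - pore) * np.tan(phi)
        driving = gamma_s * soil_depth * np.sin(beta) * np.cos(beta)
        return resisting / driving

    for w in (0.2, 0.6, 1.0):   # wetter soil lowers FS toward failure
        print(w, round(float(factor_of_safety(35.0, 2.0, w)), 2))
    ```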

  15. Modeling habitat dynamics accounting for possible misclassification

    USGS Publications Warehouse

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging, with error in classification being confused with change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel), and the case when true and observed states are obtained at one level of resolution but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
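
    A quick simulation makes the reported bias tangible: with a 10% per-survey misclassification rate, the naive estimate of a 0.9 "stay" probability drops to roughly 0.76 (all rates here are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # True probability of remaining in habitat state 1 between two surveys, and
    # a per-survey misclassification rate (both values are illustrative only).
    p_stay, p_misclass, n_pixels = 0.9, 0.1, 100_000

    state_t1 = rng.random(n_pixels) < 0.5                     # True = state 1
    state_t2 = np.where(rng.random(n_pixels) < p_stay, state_t1, ~state_t1)

    obs_t1 = state_t1 ^ (rng.random(n_pixels) < p_misclass)   # flip w.p. 0.1
    obs_t2 = state_t2 ^ (rng.random(n_pixels) < p_misclass)

    naive = np.mean(obs_t2[obs_t1])    # apparent P(stay in state 1)
    print(f"true stay probability = {p_stay:.2f}, naive estimate = {naive:.2f}")
    # The naive estimate lands near 0.76: classification errors masquerade as
    # habitat turnover, which is the bias the abstract's models correct for.
    ```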

  16. Carbon accounting model for forests in Australia.

    PubMed

    Brack, C L; Richards, G P

    2002-01-01

    CAMFor (Carbon Accounting Model for Forests) is a sophisticated spreadsheet model developed to assist in carbon accounting and projection. This model can integrate information from a range of alternate sources, including user input, default parameters and third-party model outputs, to calculate the carbon flows associated with a stand of trees and the wood products derived from harvests of that stand. Carbon is tracked in the following pools:

    * Biomass (stemwood, branches, bark, fine and coarse roots, leaves and twigs)
    * Soil (organic matter and inert charcoal)
    * Debris (coarse and fine litter, slash, below ground dead material)
    * Products (waste wood, sawn timber, paper, biofuel, reconstituted wood products)

    These pools can be tracked following thinning, fires and over multiple rotations. A sensitivity module has been developed to assist examination of the important assumptions and inputs. This paper reviews the functionality of CAMFor and reports on its use in a case study to explore the precision of estimates of carbon sequestration in a eucalypt plantation. Information on variability in unbiased models, measurement accuracy and other sources of error is combined in a sensitivity analysis to estimate the overall precision of sequestration estimates.
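
    A toy version of such pool bookkeeping is sketched below; CAMFor itself is a spreadsheet model, and the pool values and transfer fractions here are simplified guesses rather than CAMFor defaults:

    ```python
    # Illustrative pool bookkeeping only; these pool values and transfer
    # fractions are invented, not taken from CAMFor.
    pools = {
        "biomass": 120.0,   # t C/ha: stemwood, branches, bark, roots, leaves, twigs
        "debris": 15.0,     # coarse and fine litter, slash, below-ground dead material
        "soil": 60.0,       # organic matter and inert charcoal
        "products": 0.0,    # sawn timber, paper, biofuel, reconstituted wood, waste
    }

    def harvest(pools, fraction=0.5, to_products=0.6):
        """Move a harvested share of biomass into products; the rest becomes debris."""
        removed = pools["biomass"] * fraction
        pools["biomass"] -= removed
        pools["products"] += removed * to_products
        pools["debris"] += removed * (1.0 - to_products)
        return pools

    total_before = sum(pools.values())
    after = harvest(pools)
    print(after)
    print("carbon conserved:", abs(sum(after.values()) - total_before) < 1e-9)
    ```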

  17. Satellite-derived potential evapotranspiration for distributed hydrologic runoff modeling

    NASA Astrophysics Data System (ADS)

    Spies, R. R.; Franz, K. J.; Bowman, A.; Hogue, T. S.; Kim, J.

    2012-12-01

    Distributed models can incorporate spatially variable data, especially high-resolution forcing inputs such as precipitation, temperature and evapotranspiration, in hydrologic modeling. Use of distributed hydrologic models for operational streamflow prediction has been partially hindered by a lack of readily available, spatially explicit input observations. Potential evapotranspiration (PET), for example, is currently accounted for through PET input grids that are based on monthly climatological values. The goal of this study is to assess the use of satellite-based PET estimates that represent the temporal and spatial variability, as input to the National Weather Service (NWS) Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM). Daily PET grids are generated for six watersheds in the upper Mississippi River basin using a method that applies only MODIS satellite-based observations and the Priestley-Taylor formula (MODIS-PET). The use of MODIS-PET grids will be tested against the use of the current climatological PET grids for simulating basin discharge. Gridded surface temperature forcing data are derived by applying the inverse distance weighting spatial prediction method to point-based station observations from the Automated Surface Observing System (ASOS) and Automated Weather Observing System (AWOS). Precipitation data are obtained from the Climate Prediction Center's (CPC) Climatology-Calibrated Precipitation Analysis (CCPA). A priori gridded parameters for the Sacramento Soil Moisture Accounting Model (SAC-SMA), Snow-17 model, and routing model are initially obtained from the Office of Hydrologic Development and further calibrated using an automated approach. The potential of the MODIS-PET to be used in an operational distributed modeling system will be assessed with the long-term goal of promoting research to operations transfers and advancing the science of hydrologic forecasting.
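
    The Priestley-Taylor formula underlying the MODIS-PET product is compact enough to sketch. The version below uses standard FAO-style constants; the study's actual implementation derives its inputs from MODIS observations and may differ in detail:

    ```python
    import numpy as np

    def priestley_taylor_pet(t_air, rn, g=0.0, alpha=1.26):
        """Priestley-Taylor potential evapotranspiration (mm/day).

        t_air : air temperature (deg C), used for the slope of the saturation
                vapour pressure curve; rn, g : net radiation and ground heat
                flux (MJ m-2 day-1); alpha = 1.26 is the classic coefficient.
        """
        es = 0.6108 * np.exp(17.27 * t_air / (t_air + 237.3))   # sat. vapour pressure, kPa
        delta = 4098.0 * es / (t_air + 237.3) ** 2              # slope, kPa/degC
        gamma = 0.066                                           # psychrometric const., kPa/degC
        lam = 2.45                                              # latent heat, MJ/kg
        return alpha * (delta / (delta + gamma)) * (rn - g) / lam

    print(f"{priestley_taylor_pet(t_air=25.0, rn=18.0):.2f} mm/day")   # ~6.9 mm/day
    ```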

  18. To Trust or Not to Trust: Assessing the consistency of controls across hydrologic models

    NASA Astrophysics Data System (ADS)

    Herman, J. D.

    2011-12-01

    Watershed models can vary significantly in their formulation and complexity. Conceptual lumped models are the most widely used type, but they have received criticism for their limited physical interpretability. A key challenge in drawing process-level inferences from these models lies in the systematic assessment of how their controlling parameters and processes change over time. The extensive use of these models and the increasing popularity of multi-model frameworks highlight the need for diagnostic approaches that can rigorously evaluate conceptual model structures, with a particular focus on the consistency of their implied process controls. In this study, we develop a diagnostic method to explore the consistency of dominant process controls across the HBV, HyMod, and Sacramento Soil Moisture Accounting (SAC-SMA) model structures. The parametric controls for several signature metrics are determined using Sobol Sensitivity Analysis for twelve watersheds selected across a hydro-climatic gradient in the eastern United States. These controls are evaluated to determine whether, and under what conditions, the models' behavior is consistent with our perception of the underlying system. Controls are also compared across models to explore the impact of model structure choice on process-level inferences. Results indicate that each of the three model structures offers a functionally different simplification of the physical system. Strong seasonal variation in parametric sensitivities allows for comparisons between real-world dominant processes and those implied by the models. These dynamic sensitivities often behave differently across models, which emphasizes the danger of inferring process-level information from individual model structures.
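
    First-order Sobol' indices of the kind used for these parametric controls can be estimated with a pick-and-freeze scheme. The sketch below uses a toy response function in place of HBV, HyMod or SAC-SMA; the estimator follows a standard Saltelli formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sobol_first_order(f, n_params, n=4096):
        """Saltelli pick-and-freeze estimator of first-order Sobol indices
        for a model f acting on parameters scaled to [0, 1]."""
        A = rng.random((n, n_params))
        B = rng.random((n, n_params))
        fA, fB = f(A), f(B)
        var = np.var(np.r_[fA, fB])
        s1 = np.empty(n_params)
        for i in range(n_params):
            ABi = A.copy()
            ABi[:, i] = B[:, i]        # swap in column i, freeze the rest
            s1[i] = np.mean(fB * (f(ABi) - fA)) / var
        return s1

    # Toy surrogate for a hydrologic model: parameter 0 dominates, 2 is inert.
    f = lambda X: 5 * X[:, 0] + X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
    print(np.round(sobol_first_order(f, 3), 2))   # roughly [0.95, 0.05, 0.00]
    ```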

  19. Accountability.

    ERIC Educational Resources Information Center

    Lashway, Larry

    1999-01-01

    This issue reviews publications that provide a starting point for principals looking for a way through the accountability maze. Each publication views accountability differently, but collectively these readings argue that even in an era of state-mandated assessment, principals can pursue proactive strategies that serve students' needs. James A.…

  20. "Growth Models" Gaining in Accountability Debate

    ERIC Educational Resources Information Center

    Hoff, David J.

    2007-01-01

    In the debate over the future of the No Child Left Behind Act, policymakers, educators, and researchers seem to agree on one thing: The federal law's accountability system should be rewritten so it rewards or sanctions schools on the basis of students' academic growth. The U.S. Department of Education recently reaffirmed the Bush administration's…

  1. Evaluation of climate anomalies impacts on the Upper Blue Nile Basin in Ethiopia using a distributed and a lumped hydrologic model

    NASA Astrophysics Data System (ADS)

    Elsanabary, Mohamed Helmy; Gan, Thian Yew

    2015-11-01

    Evaluating the impacts of climate anomalies on the Upper Blue Nile Basin (UBNB), Ethiopia, a large basin with scarce hydroclimatic data, through hydrologic modeling is a challenge. A fully distributed, physically based model, a modified version of the Interactions Soil-Biosphere-Atmosphere model of Météo France (MISBA), and a lumped, conceptual rainfall-runoff Sacramento model, SAC-SMA of the US National Weather Service, were used to simulate the streamflow of the UBNB. To study the potential hydrologic effect of climate anomalies on the UBNB, rainfall and temperature data observed when climate anomalies were active were resampled and used to drive MISBA and SAC-SMA. To obtain representative, distributed precipitation data in mountainous basins, it was found that a 3% adjustment factor for every 25 m rise in elevation was needed to orographically correct the rainfall over the UBNB. The performance of MISBA applied to the UBNB improved after MISBA was modified so that it could simulate evaporation loss from the canopy, providing a coefficient of determination (R2) of 0.58 and a root mean square error (RMSE) of 0.34 m3/s in comparison with the observed streamflow. In contrast, the performance of SAC-SMA in the calibration and validation runs is better than that of MISBA, with R2 of 0.79 for calibration and 0.82 for validation, even though it models the hydrology of the UBNB in a lumped, conceptual framework as against the physically based, fully distributed framework of MISBA. El Niño tends to decrease the June-September rainfall but increase the February-May rainfall, while La Niña has the opposite effect on the rainfall of the UBNB. Based on the simulations of MISBA and SAC-SMA for the UBNB, La Niña and the Indian Ocean Dipole (IOD) tend to have a wetting effect while El Niño has a drying effect on the streamflow of the UBNB. In addition, the El Niño Southern Oscillation (ENSO) and IOD increase the streamflow variability more than they change the magnitude of streamflow. The results provide
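
    The 3% per 25 m orographic adjustment quoted above translates directly into code; whether the study compounds the factor or applies it linearly is not stated, so the linear form below is an assumption (gauge values invented):

    ```python
    def orographic_rainfall(p_gauge, z_gauge, z_cell, pct_per_25m=3.0):
        """Adjust gauge rainfall to a grid cell's elevation using a linear
        3% increase per 25 m of elevation rise (decrease going downhill)."""
        factor = 1.0 + (pct_per_25m / 100.0) * (z_cell - z_gauge) / 25.0
        return p_gauge * max(factor, 0.0)   # never return negative rainfall

    # A gauge at 1500 m records 12 mm; estimate rainfall for a cell at 2300 m.
    print(f"{orographic_rainfall(12.0, 1500.0, 2300.0):.1f} mm")   # 23.5 mm
    ```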

  2. Assimilation of AMSR-E snow water equivalent data in a spatially-lumped snow model

    NASA Astrophysics Data System (ADS)

    Dziubanski, David J.; Franz, Kristie J.

    2016-09-01

    Accurately initializing snow model states in hydrologic prediction models is important for estimating future snowmelt, water supplies, and flooding potential. While ground-based snow observations give the most reliable information about snowpack conditions, they are spatially limited. In the north-central USA, there are no continual observations of hydrologically critical snow variables. Satellites offer the most likely source of spatial snow data, such as the snow water equivalent (SWE), for this region. In this study, we test the impact of assimilating SWE data from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) instrument into the US National Weather Service (NWS) SNOW17 model for seven watersheds in the Upper Mississippi River basin. The SNOW17 is coupled with the NWS Sacramento Soil Moisture Accounting (SAC-SMA) model, and both simulated SWE and discharge are evaluated. The ensemble Kalman filter (EnKF) assimilation framework is applied and updating occurs on a daily cycle for water years 2006-2011. Prior to assimilation, AMSR-E data is bias-corrected using data from the National Operational Hydrologic Remote Sensing Center (NOHRSC) airborne snow survey program. An average AMSR-E SWE bias of -17.91 mm was found for the study basins. SNOW17 and SAC-SMA model parameters from the North Central River Forecast Center (NCRFC) are used. Compared to a baseline run without assimilation, the SWE assimilation improved discharge for five of the seven study sites, in particular for high discharge magnitudes associated with snowmelt runoff. SWE and discharge simulations suggest that the SNOW17 is underestimating SWE and snowmelt rates in the study basins. Deep snow conditions and periods of snowmelt may have introduced error into the assimilation due to difficulty obtaining accurate brightness temperatures under these conditions. Overall results indicate that the AMSR-E data and EnKF are viable and effective solutions for improving simulations
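
    A minimal additive bias correction of the kind implied above follows; the abstract reports the mean bias but not the exact correction scheme, so additive removal of the mean difference is an assumption, and all SWE values are invented:

    ```python
    import numpy as np

    def bias_correct_swe(amsre_swe, amsre_at_survey, airborne_swe):
        """Additive bias correction of AMSR-E SWE against airborne survey SWE.

        The correction is the mean difference between coincident AMSR-E and
        NOHRSC airborne estimates; the abstract reports an average AMSR-E
        bias of -17.91 mm for the study basins.
        """
        bias = np.mean(amsre_at_survey - airborne_swe)   # negative => satellite low
        return np.clip(amsre_swe - bias, 0.0, None)      # SWE cannot be negative

    amsre = np.array([20.0, 35.0, 0.0, 60.0])            # daily satellite SWE (mm)
    paired_sat = np.array([30.0, 52.0, 71.0])            # AMSR-E on survey dates
    paired_air = np.array([48.0, 70.0, 89.0])            # airborne survey SWE
    print(bias_correct_swe(amsre, paired_sat, paired_air))   # each raised ~18 mm
    ```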

  3. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown not to be effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient

  4. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    SciTech Connect

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown not to be effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient
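
    The MOAT screening that this record singles out as most efficient is based on Morris elementary effects. A minimal sketch with a toy function follows; the trajectory construction here is simplified relative to Morris' original design:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def morris_moat(f, n_params, n_trajectories=20, delta=0.1):
        """Morris One-At-a-Time screening: mean |elementary effect| per parameter.

        Each trajectory perturbs one parameter at a time by delta; large mean
        absolute effects flag the most important parameters at low sample cost.
        """
        mu_star = np.zeros(n_params)
        for _ in range(n_trajectories):
            x = rng.random(n_params) * (1.0 - delta)   # keep x + delta inside [0, 1]
            fx = f(x)
            for i in rng.permutation(n_params):
                x_new = x.copy()
                x_new[i] += delta
                f_new = f(x_new)
                mu_star[i] += abs(f_new - fx) / delta
                x, fx = x_new, f_new
        return mu_star / n_trajectories

    # Toy surrogate: parameter 0 dominates, parameter 2 is inactive.
    f = lambda x: 5 * x[0] + x[1] ** 2
    print(np.round(morris_moat(f, 3), 2))   # roughly [5.0, ~1.0, 0.0]
    ```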

  5. Accountability: An Action Model for the Public Schools.

    ERIC Educational Resources Information Center

    DeMont, Bill[ie; DeMont, Roger

    The model proposed in this book specifies that there are four types of interrelated practices that determine the extent to which accountability is realized. These are (1) the identification of primary accountability agents and their respective program responsibilities, (2) the execution of internal program reviews by those program officers, (3)…

  6. The Relevance of the CIPP Evaluation Model for Educational Accountability.

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.

    The CIPP Evaluation Model was originally developed to provide timely information in a systematic way for decision making, which is a proactive application of evaluation. This article examines whether the CIPP model also serves the retroactive purpose of providing information for accountability. Specifically, can the CIPP Model adequately assist…

  7. Individual Learning Accounts and Other Models of Financing Lifelong Learning

    ERIC Educational Resources Information Center

    Schuetze, Hans G.

    2007-01-01

    To answer the question "Financing what?" this article distinguishes several models of lifelong learning as well as a variety of lifelong learning activities. Several financing methods are briefly reviewed, however the principal focus is on Individual Learning Accounts (ILAs) which were seen by some analysts as a promising model for…

  8. Counselor Accountability Model of Grossmont College: A Working Paper.

    ERIC Educational Resources Information Center

    Anderson, Del M.

    In response to increased scrutiny of public education and the need for counselors to quantify and legitimate their work, Grossmont College (GC) has developed an accountability model for counselors. The model prescribes the identification of the statutory requirements, institutional needs and characteristics that establish the parameters for…

  9. Program Evaluation: The Accountability Bridge Model for Counselors

    ERIC Educational Resources Information Center

    Astramovich, Randall L.; Coker, J. Kelly

    2007-01-01

    The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning…

  10. A Diffusion Model Account of the Lexical Decision Task

    ERIC Educational Resources Information Center

    Ratcliff, Roger; Gomez, Pablo; McKoon, Gail

    2004-01-01

    The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables--accuracy, correct and error response times, and their…
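
    A toy single-trial simulator conveys the model's core mechanism: noisy evidence accumulates from a starting point to one of two boundaries. Ratcliff's full model adds non-decision time and across-trial variability parameters that are omitted here, and all parameter values are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def diffusion_trial(drift, boundary=0.1, start=0.05, dt=0.001, noise=0.1):
        """Simulate one 2-choice diffusion trial: evidence starts at `start` and
        drifts until it hits 0 (e.g., "nonword") or `boundary` ("word")."""
        x, t = start, 0.0
        while 0.0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return x >= boundary, t

    # Higher drift rates (e.g., high-frequency words) give faster, more accurate
    # responses, which is the qualitative pattern the diffusion model captures.
    trials = [diffusion_trial(drift=0.3) for _ in range(500)]
    accuracy = np.mean([hit for hit, _ in trials])
    mean_rt = np.mean([t for _, t in trials])
    print(f"accuracy={accuracy:.2f}, mean decision time={mean_rt * 1000:.0f} ms")
    ```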

  11. Regionalization of runoff models derived by genetic programming

    NASA Astrophysics Data System (ADS)

    Heřmanovský, M.; Havlíček, V.; Hanel, M.; Pech, P.

    2017-04-01

    The aim of this study is to assess the potential of hydrological models derived by genetic programming (GP) to estimate runoff at ungauged catchments by regionalization. A set of 176 catchments from the MOPEX (Model Parameter Estimation Experiment) project was used for our analysis. Runoff models for each catchment were derived by genetic programming (hereafter GP models). A comparison of efficiency was made between the GP models and three conceptual models (SAC-SMA, BTOPMC, GR4J). The efficiency of the GP models was in general comparable with that of the SAC-SMA and BTOPMC models but slightly lower (by up to 10% in calibration and 15% in validation) than that of the GR4J model. The relationship between the efficiency of the GP models and catchment descriptors (CDs) was investigated. Of the 13 available CDs, the aridity index and mean catchment elevation explained most of the variation in the efficiency of the GP models. The runoff for each catchment was then estimated considering GP models from single or multiple physically similar catchments (donors). Better results were obtained with multiple donor catchments. Increasing the number of CDs used for quantification of physical similarity improves the efficiency of the GP models in runoff simulation. The best regionalization results were obtained with 6 CDs together with 6 donors. Our results show that transfer of the GP models is possible and leads to satisfactory results when applied at physically similar catchments. The GP models can therefore be used as an alternative for runoff modelling at ungauged catchments if similar gauged catchments can be identified and successfully simulated.
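
    A minimal sketch of donor-based regionalization follows: rank gauged catchments by similarity in (normalised) catchment-descriptor space and average the simulations of the nearest donors. The study used 6 CDs and 6 donors; this toy uses fewer, and all data are invented:

    ```python
    import numpy as np

    def regionalize(cd_ungauged, cd_gauged, donor_models, n_donors=3):
        """Estimate runoff at an ungauged catchment from its physically most
        similar gauged donors (similarity = Euclidean distance over normalised
        catchment descriptors, e.g. aridity index and mean elevation)."""
        cds = np.asarray(cd_gauged, float)
        scale = cds.std(axis=0)
        dist = np.linalg.norm((cds - cd_ungauged) / scale, axis=1)
        donors = np.argsort(dist)[:n_donors]
        # Average the donor models' simulations driven by the target's forcing.
        return lambda forcing: np.mean(
            [donor_models[i](forcing) for i in donors], axis=0)

    # Toy example: 8 gauged catchments, 2 descriptors, trivial "models".
    cd_gauged = np.random.default_rng(2).random((8, 2))
    donor_models = [lambda forcing, k=k: k * forcing for k in range(8)]
    model = regionalize([0.4, 0.6], cd_gauged, donor_models)
    print(model(np.array([1.0, 2.0])))
    ```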

  12. Application of a predictive Bayesian model to environmental accounting.

    PubMed

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.
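
    A predictive distribution over annual accident cost can be sketched as a compound Poisson simulation with an uncertain rate; this is only a generic stand-in for the paper's predictive Bayesian model, and every parameter below is invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Predictive distribution of annual accident cost for a transformer fleet:
    # accident counts are Poisson with an uncertain rate (gamma-distributed to
    # mix variability with lack-of-information uncertainty); severities are
    # lognormal. All parameter values here are invented for illustration.
    n_sims = 20_000
    rate = rng.gamma(shape=2.0, scale=0.05, size=n_sims)   # accidents per year
    counts = rng.poisson(rate)
    cost = np.array([rng.lognormal(11.0, 1.0, size=k).sum() for k in counts])
    print(f"mean annual cost: ${cost.mean():,.0f}; "
          f"95th percentile: ${np.percentile(cost, 95):,.0f}")
    ```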

  13. Accommodating environmental variation in population models: metaphysiological biomass loss accounting.

    PubMed

    Owen-Smith, Norman

    2011-07-01

    1. There is a pressing need for population models that can reliably predict responses to changing environmental conditions and diagnose the causes of variation in abundance in space as well as through time. In this 'how to' article, it is outlined how standard population models can be modified to accommodate environmental variation in a heuristically conducive way. This approach is based on metaphysiological modelling concepts linking populations within food web contexts and underlying behaviour governing resource selection. Using population biomass as the currency, population changes can be considered at fine temporal scales taking into account seasonal variation. Density feedbacks are generated through the seasonal depression of resources even in the absence of interference competition.

    2. Examples described include (i) metaphysiological modifications of Lotka-Volterra equations for coupled consumer-resource dynamics, accommodating seasonal variation in resource quality as well as availability, resource-dependent mortality and additive predation, (ii) spatial variation in habitat suitability evident from the population abundance attained, taking into account resource heterogeneity and consumer choice using empirical data, (iii) accommodating population structure through the variable sensitivity of life-history stages to resource deficiencies, affecting susceptibility to oscillatory dynamics and (iv) expansion of density-dependent equations to accommodate various biomass losses reducing population growth rate below its potential, including reductions in reproductive outputs. Supporting computational code and parameter values are provided.

    3. The essential features of metaphysiological population models include (i) the biomass currency enabling within-year dynamics to be represented appropriately, (ii) distinguishing various processes reducing population growth below its potential, (iii) structural consistency in the representation of interacting populations and
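
    A toy, seasonally forced consumer-resource simulation in the spirit of point 2(i) is sketched below; all coefficients are invented, and the paper's own supporting code should be consulted for the real formulations:

    ```python
    import numpy as np

    def simulate_biomass(n_weeks=520, dt=1.0):
        """Weekly consumer-resource biomass dynamics with seasonal resource
        growth: consumer gains follow a saturating intake, and losses rise as
        the resource is seasonally depressed (all coefficients invented)."""
        R, C = 100.0, 10.0                       # resource and consumer biomass
        trajectory = []
        for t in range(n_weeks):
            season = 1.0 + 0.8 * np.sin(2 * np.pi * t / 52.0)   # annual cycle
            intake = 0.02 * R * C / (R + 50.0)   # saturating intake rate
            R += dt * (5.0 * season - 0.01 * R * season - intake)
            # Gains convert intake; losses grow when the resource is depleted.
            C += dt * (0.5 * intake - (0.01 + 0.5 / (R + 10.0)) * C)
            trajectory.append((R, C))
        return np.array(trajectory)

    traj = simulate_biomass()
    print("final resource, consumer biomass:", np.round(traj[-1], 1))
    ```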

  14. Short communication: Accounting for new mutations in genomic prediction models.

    PubMed

    Casellas, Joaquim; Esquivelzeta, Cecilia; Legarra, Andrés

    2013-08-01

    Genomic evaluation models so far do not allow for accounting of newly generated genetic variation due to mutation. The main target of this research was to extend current genomic BLUP models with mutational relationships (model AM), and compare them against standard genomic BLUP models (model A) by analyzing simulated data. Model performance and precision of the predicted breeding values were evaluated under different population structures and heritabilities. The deviance information criterion (DIC) clearly favored the mutational relationship model under large heritabilities or populations with moderate-to-deep pedigrees contributing phenotypic data (i.e., differences equal to or larger than 10 DIC units); this model provided slightly higher correlation coefficients between simulated and predicted genomic breeding values. On the other hand, null DIC differences, or even relevant advantages for the standard genomic BLUP model, were reported under small heritabilities and shallow pedigrees, although precision of the genomic breeding values did not differ across models at a significant level. This method allows for slightly more accurate genomic predictions and handling of newly created variation; moreover, this approach does not require additional genotyping or phenotyping efforts, but a more accurate handling of available data.
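
    For contrast with model AM, a minimal standard genomic BLUP (the abstract's model A) can be sketched as follows; the mutational-relationship extension itself is not shown, and all data are simulated:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Minimal GBLUP sketch: breeding values from a genomic relationship matrix G.
    # Model AM in the abstract augments the relationships to track new mutations;
    # that extension is not reproduced here.
    n_animals, n_markers = 50, 200
    M = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)  # 0/1/2 genotypes
    p = M.mean(axis=0) / 2.0                        # allele frequencies
    Z = M - 2.0 * p                                 # centred genotypes
    G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))     # VanRaden-style G matrix
    G += 1e-6 * np.eye(n_animals)                   # keep G well conditioned
    y = rng.standard_normal(n_animals)              # toy phenotypes
    lam = 1.0                                       # residual/genetic variance ratio
    u_hat = G @ np.linalg.solve(G + lam * np.eye(n_animals), y - y.mean())
    print("predicted breeding values (first 5):", np.round(u_hat[:5], 3))
    ```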

  15. Rainfall-runoff modeling in a flashy tropical watershed using the distributed HL-RDHM model

    NASA Astrophysics Data System (ADS)

    Fares, Ali; Awal, Ripendra; Michaud, Jene; Chu, Pao-Shin; Fares, Samira; Kodama, Kevin; Rosener, Matt

    2014-11-01

    Many watersheds in Hawai'i are flash-flood prone due to their small contributing areas and frequent intense rainfall. Motivated by the possibility of developing an operational flood forecasting system, this study evaluated the performance of the National Weather Service (NWS) model, the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM), in simulating the hydrology of the flood-prone Hanalei watershed in Kaua'i, Hawai'i. This rural watershed is very wet and has strong spatial rainfall gradients. Application of HL-RDHM to the Hanalei watershed required (i) modifying the Hydrologic Rainfall Analysis Project (HRAP) coordinate system; (ii) generating precipitation grids from rain gauge data, and (iii) generating parameter grids for the Sacramento Soil Moisture Accounting Model (SAC-SMA) and routing parameter grids for the modified HRAP coordinate system. Results were obtained for several spatial resolutions. Hourly basin-average rainfall calculated from one HRAP resolution grid (4 km × 4 km) was too low and inaccurate. More realistic rainfall and more accurate streamflow predictions were obtained with the ½ and ¼ HRAP grids. For a one-year period with the best precipitation data, the performance of HL-RDHM was satisfactory even without calibration for basin-averaged and distributed a priori parameter grids. Calibration and validation of HL-RDHM were conducted using a four-year data set each. The model reasonably matched the observed peak discharges and times to peak during the calibration and validation periods. The performance of the model was assessed using the following three statistical measures: Root Mean Square Error (RMSE), Nash-Sutcliffe efficiency (NSE) and Percent bias (PBIAS). Overall, HL-RDHM's performance was "very good (NSE > 0.75, PBIAS < ±10)" for the finer resolution grids (½ HRAP or ¼ HRAP). The quality of the flood forecasting capability of the model was assessed using four accuracy measures (probability of false detection, false alarm ratio, critical
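
    The categorical skill scores listed above come from a forecast/observation contingency table. A minimal sketch with hypothetical counts follows; note that the probability of false detection additionally requires correct negatives, which this toy omits:

    ```python
    def forecast_skill(hits, misses, false_alarms):
        """Categorical flood-forecast skill from a contingency table.

        POD = hits / (hits + misses)                 probability of detection
        FAR = false_alarms / (hits + false_alarms)   false alarm ratio
        CSI = hits / (hits + misses + false_alarms)  critical success index
        (POFD additionally needs the count of correct negatives.)
        """
        pod = hits / (hits + misses)
        far = false_alarms / (hits + false_alarms)
        csi = hits / (hits + misses + false_alarms)
        return pod, far, csi

    # Hypothetical counts from comparing forecast and observed flood events.
    pod, far, csi = forecast_skill(hits=18, misses=4, false_alarms=6)
    print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f}")
    ```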

  16. The Accountable Care Organization (ACO) model: building blocks for success.

    PubMed

    Lowell, Kristina Hanson; Bertko, John

    2010-01-01

    The Accountable Care Organization (ACO) model has received significant attention among policymakers and leaders in the healthcare community in the context of the ongoing debate over health reform, not only because of the unsustainable path on which the country now finds itself but also because it directly focuses on what must be a key goal of the healthcare system: higher value. The model offers a promising approach for achieving this goal. This article provides an overview of the ACO model and its role in the current policy context, highlights the key elements that will be common to all ACOs, and provides details of several challenges that may arise throughout the implementation process, including a host of technical, legal, and operational challenges. These challenges range from issues such as the organizational form and management of the ACO to analytic challenges such as the calculation of spending benchmarks and the selection of quality measures.

  17. Reconstruction of Danio rerio metabolic model accounting for subcellular compartmentalisation.

    PubMed

    Bekaert, Michaël

    2012-01-01

    Plant and microbial metabolic engineering is commonly used in the production of functional foods and quality trait improvement. Computational model-based approaches have been used in this important endeavour. However, to date, fish metabolic models have only been scarcely and partially developed, in marked contrast to their prominent success in metabolic engineering. In this study we present the reconstruction of a fully compartmentalised model of Danio rerio (zebrafish) on a global scale. This reconstruction involves extraction of known biochemical reactions in D. rerio for both primary and secondary metabolism and the implementation of methods for determining subcellular localisation and assignment of enzymes. The reconstructed model (ZebraGEM) is amenable to constraint-based modelling analysis, and accounts for 4,988 genes coding for 2,406 gene-associated reactions and only 418 non-gene-associated reactions. A set of computational validations (i.e., simulations of known metabolic functionalities and experimental data) strongly testifies to the predictive ability of the model. Overall, the reconstructed model is expected to lay down the foundations for computational-based rational design of fish metabolic engineering in aquaculture.
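
    Constraint-based analysis of such a reconstruction usually means flux balance analysis: maximise a biomass flux subject to steady-state mass balance and flux bounds. The sketch below uses an invented 2-metabolite network, not ZebraGEM's stoichiometry:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Steady-state mass balance S @ v = 0; maximise the "biomass" flux v3.
    # The 2-metabolite, 3-reaction stoichiometry is invented for illustration.
    S = np.array([
        [1.0, -1.0, 0.0],   # metabolite A: made by uptake v1, used by v2
        [0.0, 1.0, -1.0],   # metabolite B: made by v2, drained by biomass v3
    ])
    bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]   # uptake capped at 10
    c = [0.0, 0.0, -1.0]                 # linprog minimises, so negate the target
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal biomass flux:", round(res.x[2], 3))   # -> 10.0, the uptake cap
    ```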

  18. Accounting for Vegetation Effects in Spatially Distributed Snowmelt Modeling

    NASA Astrophysics Data System (ADS)

    Garen, D. C.; Marks, D.

    2004-05-01

    The effects of vegetation on snowpack energy dynamics can be highly significant and must be taken into account when simulating snowmelt. This becomes challenging, however, for spatially distributed models covering large areas such as river basins. In this case, processes occurring at the scale of individual trees or bushes must be parameterized and upscaled to the size of the model's grid cells, which could range from 10 up to a few hundred meters. An application of a spatially distributed energy balance snowmelt model to the Boise River basin in Idaho, USA has required the development of algorithms to account for the effects of vegetation (especially forest) on the climate input data to the model. This particularly affects the solar and thermal radiation input to the snowpack, including not only the direct effects of the vegetation but also the effect of vegetation debris on the snow albedo. Vegetation effects on vertical profiles of wind speed and temperature could not be considered due to limited measurements, and only a crude estimate of wind speed differences between forested and nonforested grid cells was used. The simulated snow fields were verified using point snow water equivalent and snow depth data as well as satellite images of snow covered area. Although good results were obtained in these comparisons, each of these methods has limitations, in that point measurements are not necessarily representative of a grid cell, and satellite images have a coarse resolution and cannot detect snow under trees. Another test was to use the simulated snowmelt fields as input to a spatially distributed water balance and streamflow simulation model, which indicated that the volume and timing of snowmelt input to the basin were accurately represented. A limitation of the modeling method used is that the models are run independently in sequence, the output of one being stored and becoming the input of the next. This means that there is no opportunity for feedbacks between
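
    A minimal sketch of the kind of canopy adjustment described above: a Beer's-law style solar attenuation with leaf area index, and a thermal blend of sky and canopy emission. The coefficients and the crude view factor are assumptions, not the model's actual parameterization:

    ```python
    import numpy as np

    def sub_canopy_radiation(solar_in, thermal_in, lai, t_canopy_k):
        """Estimate below-canopy solar and thermal radiation (W m-2).

        Solar input decays exponentially with leaf area index (Beer's-law
        style attenuation); thermal input blends sky emission with canopy
        emission, weighted by a crude canopy view factor.
        """
        tau = np.exp(-0.5 * lai)             # canopy solar transmissivity
        solar = tau * solar_in
        sigma, emissivity = 5.67e-8, 0.96    # Stefan-Boltzmann; canopy emissivity
        canopy_emission = emissivity * sigma * t_canopy_k ** 4
        view = 1.0 - tau                     # fraction of sky blocked by canopy
        thermal = (1.0 - view) * thermal_in + view * canopy_emission
        return solar, thermal

    sw, lw = sub_canopy_radiation(600.0, 250.0, lai=3.0, t_canopy_k=270.0)
    print(f"sub-canopy solar={sw:.0f} W/m^2, thermal={lw:.0f} W/m^2")
    ```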

  19. Accounting for nonlinear material characteristics in modeling ferroresonant transformers

    NASA Astrophysics Data System (ADS)

    Voisine, J. T.

    1985-04-01

    A mathematical model relating core material properties, including nonlinear magnetization characteristics, to the performance of ferroresonant transformers has been developed. In accomplishing this, other factors such as fabrication destruction factors, leakage flux, air gap characteristics, loading, and coil resistances and self-inductances are also accounted for. From a material manufacturer's view, knowing such information facilitates isolating sources of performance variations between units of similar design and is therefore highly desirable. The model predicts the primary induction necessary to establish a specified secondary induction and determines peak induction at other points in the magnetic circuit. A study comparing the model with a transformer indicated that each predicted peak induction was within ±5% of the corresponding measured peak induction. A generalized 4-node magnetic circuit having two shunt paths was chosen and modeled. Such a circuit is easily modified facilitating the analyses of numerous other core designs. A computer program designed to run on an HP-41 programmable calculator was also developed and is briefly described.

  20. Descriptive accounts of thermodynamic and colloidal models of asphaltene flocculation

    SciTech Connect

    Leontaritis, K.J.; Kawanaka, S.; Mansoori, G.A.

    1987-01-01

    At present, the oil industry combats the problem of asphaltene deposition mainly through remedial rather than preventive techniques. Mechanical and chemical cleaning methods are being improvised to maintain production, transportation, and processing of petroleum at economical levels, as a number of recent reports indicate. The research community is currently rather unfamiliar with the causes and extent of the asphaltene deposition problem. This paper reviews the experiences of the oil industry with asphaltene precipitation and presents justifications and a descriptive account for the development of two different models of asphaltene flocculation. In one model the authors consider the asphaltenes to be dissolved in the oil in a true liquid state and draw upon statistical thermodynamic techniques for multicomponent mixtures to predict their phase behavior. In the other model, they consider asphaltenes to exist in the oil in a colloidal state, as minute suspended particles, and utilize colloid science techniques to predict their colloidal behavior. Experimental work over the last 40 years suggests that asphaltenes possess a wide molecular weight distribution and exist in both colloidal and dissolved states in crude oil. Further pursuit of the subject in this direction by both the industrial and research communities is warranted.

  1. Accounting for Water Insecurity in Modeling Domestic Water Demand

    NASA Astrophysics Data System (ADS)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

    Water demand management uses price elasticity estimates to predict consumer demand in relation to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies which can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of household water consumption. The model denotes that, with all other variables held equal, a household will buy more water when its users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.
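
    A minimal sketch of this kind of demand regression, with an insecurity index added as a predictor, is given below; the variable names and synthetic data are hypothetical, not the study's data.

```python
# Sketch of the kind of multivariate demand regression described, with a
# subjective water-insecurity index added as a predictor. Variable names and
# the synthetic data are hypothetical, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
marginal_price = rng.uniform(1.0, 5.0, n)      # price per unit volume
income = rng.uniform(10.0, 50.0, n)
insecurity = rng.uniform(0.0, 1.0, n)          # subjective indicator scale

# Synthetic demand: insecure households buy more, higher price buys less.
quantity = (20 - 2.0 * marginal_price + 0.1 * income + 5.0 * insecurity
            + rng.normal(0.0, 1.0, n))

X = np.column_stack([np.ones(n), marginal_price, income, insecurity])
beta, *_ = np.linalg.lstsq(X, quantity, rcond=None)
print("intercept, price, income, insecurity:", np.round(beta, 2))
```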

  2. Accounting for agriculture in modelling the global terrestrial carbon cycle

    NASA Astrophysics Data System (ADS)

    Bondeau, A.; Smith, P.; Schaphoff, S.; Zaehle, S.; Smith, B.; Sitch, S.; Gerten, D.; Schröder, B.; Lucht, W.; Cramer, W.

    2003-04-01

    Among the different approaches that investigate the role of the terrestrial biosphere within the global carbon cycle, Dynamic Global Vegetation Models (DGVMs) are an important tool. They represent the major biogeochemical mechanisms (carbon and water fluxes), depending on climate and soil, in order to simulate vegetation type (tree/grass, evergreen/deciduous, etc.) as well as ecosystem function. The models should be validated for different features at various scales in order to be used to assess future terrestrial productivity under climate change scenarios. The Lund-Potsdam-Jena (LPJ) model (Sitch et al. 2002) is one of the few existing DGVMs, for which several interesting features have been validated, such as the seasonal atmospheric CO2 concentrations as measured at the global network of monitoring stations, the increase of the growing season length in the northern areas (Lucht et al. 2002), and the runoff of large catchments (Gerten et al., Nice 2003, session HS25). In agreement with other models, LPJ estimates that the terrestrial biosphere is currently a carbon sink that will weaken by the middle of the century because of climate change (Cramer et al. 2000). However, regarding terrestrial productivity, land use and cover change might be even more important than climate change. Until now, none of the global vegetation models has considered agriculture; at best, agricultural areas were represented as grassland. We describe the first implementation of a crop parameterization within LPJ. Compared to natural vegetation, the main features of crops that must be accounted for in a global vegetation model are: i) the specific phenology, related to the sowing date; ii) the farming practices (nutrient inputs, irrigation); iii) the man-made dynamics (harvest, choice of variety, crop rotation). In a first step we consider the 8 crop types for which a global land cover data set is available for the 20th century (RIVM). A simple phenological model

  3. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (i.e., SNOW17) and rainfall-runoff model (i.e., SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system from input data, model structure, model parameters, and initial states. The goal of the current study is to undertake verification of potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e., RFC, DREAM (Differential Evolution Adaptive Metropolis)) as well as a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB) located on the western side of the Sierra Nevada Mountains in northern California. Hindcasts are verified using both deterministic (i.e., Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (i.e., reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes a comparison of the performance of the different optimized parameters and the DA framework, as well as an assessment of the impact associated with the initial conditions used for streamflow forecasts for the NFARB.
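
    Of the deterministic verification statistics mentioned, Nash-Sutcliffe efficiency and root mean square error are straightforward to compute; the sketch below uses the standard definitions on made-up flow values.

```python
# Standard deterministic verification statistics of the kind cited; the
# observed/simulated flows below are invented for illustration.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return np.sqrt(np.mean((obs - sim) ** 2))

obs = [10.0, 30.0, 55.0, 42.0, 20.0]
sim = [12.0, 28.0, 50.0, 45.0, 18.0]
print(f"NSE  = {nash_sutcliffe(obs, sim):.3f}")
print(f"RMSE = {rmse(obs, sim):.2f}")
```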

  4. Capture-recapture survival models taking account of transients

    USGS Publications Warehouse

    Pradel, R.; Hines, J.E.; Lebreton, J.D.; Nichols, J.D.

    1997-01-01

    The presence of transient animals, common enough in natural populations, invalidates the estimation of survival by traditional capture-recapture (CR) models designed for the study of residents only. Also, the study of transit is interesting in itself. We thus develop here a class of CR models to describe the presence of transients. In order to assess the merits of this approach we examine the bias of the traditional survival estimators in the presence of transients in relation to the power of different tests for detecting transients. We also compare the relative efficiency of an ad hoc approach to dealing with transients that leaves out the first observation of each animal. We then study a real example using lazuli bunting (Passerina amoena) and, in conclusion, discuss the design of an experiment aiming at the estimation of transience. In practice, the presence of transients is easily detected whenever the risk of bias is high. The ad hoc approach, which yields unbiased estimates for residents only, is satisfactory in a time-dependent context but poorly efficient when parameters are constant. The example shows that intermediate situations between strict 'residence' and strict 'transience' may exist in certain studies. Yet, most of the time, if the study design takes into account the expected length of stay of a transient, it should be possible to efficiently separate the two categories of animals.

  5. Advancing Ensemble Streamflow Prediction with Stochastic Meteorological Forcings for Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Caraway, N.; Wood, A. W.; Rajagopalan, B.; Zagona, E. A.; Daugherty, L.

    2012-12-01

    River Forecast Centers of the National Weather Service (NWS) produce seasonal streamflow forecasts via a method called Ensemble Streamflow Prediction (ESP). NWS ESP forces the temperature-index SNOW17 and Sacramento Soil Moisture Accounting (SAC-SMA) models with historical weather sequences for the forecasting period, starting from the models' current watershed initial conditions, to produce ensemble streamflow forecasts. There are two major drawbacks to this method: (i) the ensembles are limited to the length of the historical record, limiting ensemble variability, and (ii) incorporating seasonal climate forecasts (e.g., El Nino Southern Oscillation) relies on adjustment or weighting of ESP streamflow sequences. These drawbacks motivate the research presented here, which has two components: (i) a multi-site stochastic weather generator and (ii) generation of ensemble weather forecast inputs to the NWS models to produce ensemble streamflow forecasts. Our enhancements to the K-nearest neighbor bootstrap based stochastic generator include: (i) clustering the forecast locations into climatologically homogeneous regions to better capture the spatial heterogeneity and (ii) conditioning the weather forecasts on a probabilistic seasonal climate forecast. The multi-site stochastic weather generator runs in R and the NWS models run within the new Community Hydrologic Prediction System, a forecasting sequence we label WG-ESP. The WG-ESP framework was applied to generate ensemble forecasts of spring season (April-July) streamflow in the San Juan River Basin, one of the major tributaries of the Colorado River, for the period 1981-2010. The hydrologic model requires daily weather sequences at 66 locations in the basin. The enhanced daily weather generator sequences captured the distributional properties and spatial dependence of the climatological ESP, and also generated weather sequences consistent with conditioning on seasonal climate forecasts. Spring season ensemble forecast lead times from
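
    A minimal sketch of the K-nearest-neighbor bootstrap idea is given below: tomorrow's weather is resampled from the successors of the k historical days most similar to today, using the classical 1/rank kernel weights. The value of k, the toy data, and the distance metric are illustrative only, not the enhanced generator described above.

```python
# Minimal K-nearest-neighbour bootstrap in the spirit of the weather
# generator described; all data and settings are illustrative.
import numpy as np

def knn_bootstrap_step(today, history, k=5, rng=None):
    """history: (n_days, n_vars) array; today: (n_vars,) current state.
    Returns a resampled 'next day' drawn from the successors of the k
    nearest historical analogues, with 1/rank kernel weights."""
    rng = rng or np.random.default_rng()
    dist = np.linalg.norm(history[:-1] - today, axis=1)  # skip last day
    nearest = np.argsort(dist)[:k]                       # sorted by distance
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()
    pick = rng.choice(nearest, p=weights)
    return history[pick + 1]          # successor of the chosen analogue

history = np.random.default_rng(1).normal(size=(365, 2))  # temp, precip
rng = np.random.default_rng(2)
state = history[100]
for _ in range(5):                    # generate a 5-day synthetic sequence
    state = knn_bootstrap_step(state, history, rng=rng)
    print(np.round(state, 2))
```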

  6. Conception of a cost accounting model for doctors' offices.

    PubMed

    Britzelmaier, Bernd; Eller, Brigitte

    2004-01-01

    Physicians are required, due to economic, financial, competitive, demographic and market-induced framework conditions, to pay increasing attention to the entrepreneurial administration of their offices. Because of restructuring policies throughout the public health system--on the grounds of increasing financing problems--more and better transparency of costs will be indispensable in all fields of medical activity in the future. The more cost-conscious public health insurance institutions and other public health funds will need professional cost accounting systems, which will provide, for minimal maintenance expense, standardised basic cost information as a basis for decisions. The conception of cost accounting for doctors' offices presented in this paper shows an integrated cost accounting approach based on an activity and marginal costing philosophy. The conception presented provides a suitable basis for the development of standard software for cost accounting systems for doctors' offices.

  7. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
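
    For intuition, information-criterion model weights are typically computed from IC differences; the sketch below shows the standard calculation on invented AIC values.

```python
# Standard information-criterion weights of the kind used in model
# averaging; the AIC values are invented for illustration.
import numpy as np

def ic_weights(ic_values):
    """Turn AIC or BIC values into normalised model-averaging weights."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [230.1, 232.4, 236.0]           # three plausible model structures
print(np.round(ic_weights(aic), 3))    # [0.731 0.231 0.038]
```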

  8. Teacher Effects, Value-Added Models, and Accountability

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2014-01-01

    Background: In the last decade, the effects of teachers on student performance (typically manifested as state-wide standardized tests) have been re-examined using statistical models that are known as value-added models. These statistical models aim to compute the unique contribution of the teachers in promoting student achievement gains from grade…

  9. 76 FR 34712 - Medicare Program; Pioneer Accountable Care Organization Model; Extension of the Submission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ...: This notice extends the deadlines for the submission of the Pioneer Accountable Care Organization Model...-coordinated-care-models/pioneer-aco . Application Submission Deadline: Applications must be postmarked on or before August 19, 2011. The Pioneer Accountable Care Organization Model Application is available...

  10. Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems

    EPA Science Inventory

    Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...

  11. 31 CFR Appendix A to Part 212 - Model Notice to Account Holder

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false Model Notice to Account Holder A Appendix A to Part 212 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... CONTAINING FEDERAL BENEFIT PAYMENTS Pt. 212, App. A Appendix A to Part 212—Model Notice to Account Holder...

  12. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    ERIC Educational Resources Information Center

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  13. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-20

    ... HUMAN SERVICES Centers for Medicare & Medicaid Services Medicare Program; Pioneer Accountable Care... participate in the Pioneer Accountable Care Organization Model for a period beginning in 2011 and ending...://innovations.cms.gov/areas-of-focus/seamless-and-coordinated-care-models/pioneer-aco . Application...

  14. Facilitative Orthographic Neighborhood Effects: The SERIOL Model Account

    ERIC Educational Resources Information Center

    Whitney, Carol; Lavidor, Michal

    2005-01-01

    A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the…

  15. Evaluating Value-Added Models for Teacher Accountability. Monograph

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Lockwood, J. R.; Koretz, Daniel M.; Hamilton, Laura S.

    2003-01-01

    Value-added modeling (VAM) to estimate school and teacher effects is currently of considerable interest to researchers and policymakers. Recent reports suggest that VAM demonstrates the importance of teachers as a source of variance in student outcomes. Policymakers see VAM as a possible component of education reform through improved teacher…

  16. Modeling tools to Account for Ethanol Impacts on BTEX Plumes

    EPA Science Inventory

    Widespread usage of ethanol in gasoline leads to impacts at leak sites which differ from those of non-ethanol gasolines. The presentation reviews current research results on the distribution of gasoline and ethanol, biodegradation, phase separation and cosolvency. Model results f...

  17. 76 FR 33306 - Medicare Program; Pioneer Accountable Care Organization Model, Request for Applications; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-08

    ... Care Organization Model: Request for Applications.'' FOR FURTHER INFORMATION CONTACT: Maria Alexander... http://innovations.cms.gov/areas-of-focus/seamless-and-coordinated-care-models/pioneer-aco... HUMAN SERVICES Centers for Medicare & Medicaid Services Medicare Program; Pioneer Accountable...

  18. An evacuation model accounting for elementary students' individual properties

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Chen, Liang; Guo, Ren-Yong; Shang, Hua-Yan

    2015-12-01

    In this paper, we propose a cellular automata model for pedestrian flow to investigate the effects of elementary students' individual properties on the evacuation process in a classroom with two exits. In this model, each student's route choice behavior is determined by the capacity of his current route to each exit, the distance between his current position and the corresponding exit, the repulsive interactions between him and his adjacent students, and the congestion degree near each exit; the elementary students are sorted into rational and irrational students. The simulation results show that the proportion of irrational students has significant impacts on the evacuation process and efficiency, and that having all students evacuate simultaneously may be inefficient.
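
    The sketch below is a toy version of such an exit-choice rule: each student scores the exits on distance and congestion, and irrational students ignore the congestion term. The weights and grid are hypothetical, not the paper's calibrated model.

```python
# Toy exit-choice rule in the spirit of the model described; all numbers
# are invented for illustration.
import numpy as np

def choose_exit(pos, exits, crowd_counts, rational=True,
                w_dist=1.0, w_crowd=0.5):
    """pos: (row, col); exits: list of (row, col); crowd_counts: people
    currently queued near each exit. Returns the index of the chosen exit."""
    costs = []
    for ex, crowd in zip(exits, crowd_counts):
        dist = abs(pos[0] - ex[0]) + abs(pos[1] - ex[1])  # Manhattan
        cost = w_dist * dist + (w_crowd * crowd if rational else 0.0)
        costs.append(cost)
    return int(np.argmin(costs))

exits = [(0, 0), (0, 9)]
print(choose_exit((5, 2), exits, crowd_counts=[12, 1], rational=True))   # 1
print(choose_exit((5, 2), exits, crowd_counts=[12, 1], rational=False))  # 0
```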

  19. Dynamic model of production enterprises based on accounting registers and its identification

    NASA Astrophysics Data System (ADS)

    Sirazetdinov, R. T.; Samodurov, A. V.; Yenikeev, I. A.; Markov, D. S.

    2016-06-01

    The report focuses on the mathematical modelling of economic entities based on accounting registers. A dynamic model of the financial and economic activity of an enterprise is developed as a system of differential equations, and algorithms for identifying the parameters of the dynamic model are created. The model is constructed and identified for Russian machine-building enterprises.

  20. Key Elements for Educational Accountability Models in Transition: A Guide for Policymakers

    ERIC Educational Resources Information Center

    Klau, Kenneth

    2010-01-01

    State educational accountability models are in transition. Whether modifying the present accountability system to comply with existing state and federal requirements or anticipating new ones--such as the U.S. Department of Education's (ED) Race to the Top competition--recording the experiences of state education agencies (SEAs) that are currently…

  1. Application of a Cooperative University/Middle School Model to Enhance Science Education Accountability.

    ERIC Educational Resources Information Center

    Zigler, John T.; And Others

    This study concerned whether a model designed to enhance educational accountability developed by university science educators could be used to enhance science educational accountability in a public middle school. Participants in the study were 39 teachers who attended a combined summer/inservice institute in the summer of 1973 at Ball State…

  2. A Dynamic Simulation Model of the Management Accounting Information Systems (MAIS)

    NASA Astrophysics Data System (ADS)

    Konstantopoulos, Nikolaos; Bekiaris, Michail G.; Zounta, Stella

    2007-12-01

    The aim of this paper is to examine the factors which determine the problems and the advantages on the design of management accounting information systems (MAIS). A simulation is carried out with a dynamic model of the MAIS design.

  3. A four-compartment PBPK heart model accounting for cardiac metabolism - model development and application.

    PubMed

    Tylutki, Zofia; Polak, Sebastian

    2017-01-04

    In the field of cardiac drug efficacy and safety assessment, information on drug concentration in heart tissue is desirable. Because measuring drug concentrations in human cardiac tissue is challenging in healthy volunteers, mathematical models are used to cope with such limitations. With a goal of predicting drug concentration in cardiac tissue, we have developed a whole-body PBPK model consisting of seventeen perfusion-limited compartments. The proposed PBPK heart model consisted of four compartments: the epicardium, midmyocardium, endocardium, and pericardial fluid, and accounted for cardiac metabolism using CYP450. The model was written in R. The plasma:tissues partition coefficients (Kp) were calculated in Simcyp Simulator. The model was fitted to the concentrations of amitriptyline in plasma and the heart. The estimated parameters were as follows: 0.80 for the absorption rate [h(-1)], 52.6 for Kprest, 0.01 for the blood flow through the pericardial fluid [L/h], and 0.78 for the P-parameter describing the diffusion between the pericardial fluid and epicardium [L/h]. The total cardiac clearance of amitriptyline was calculated as 0.316 L/h. Although the model needs further improvement, the results support its feasibility, and it is a first attempt to provide an active drug concentration in various locations within heart tissue using a PBPK approach.
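
    To make the compartment structure concrete, the sketch below integrates a reduced two-compartment pericardial-fluid/epicardium exchange as ODEs; only the diffusion parameter P = 0.78 L/h comes from the abstract, while the volumes and partition coefficient are invented.

```python
# Reduced sketch of a perfusion/diffusion-limited PBPK compartment pair
# (pericardial fluid <-> epicardium) solved as ODEs; the volumes and Kp are
# invented, not the paper's fitted amitriptyline parameters.
import numpy as np
from scipy.integrate import solve_ivp

V_peri, V_epi = 0.03, 0.05       # compartment volumes [L] (hypothetical)
P = 0.78                         # diffusion parameter [L/h] (from abstract)
Kp = 2.0                         # tissue:fluid partition coeff (invented)

def rhs(t, y):
    c_peri, c_epi = y
    # Passive diffusion driven by the partition-corrected gradient:
    flux = P * (c_peri - c_epi / Kp)          # [amount/h]
    return [-flux / V_peri, flux / V_epi]

sol = solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0])
print("concentrations after 24 h:", np.round(sol.y[:, -1], 3))
```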

  5. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    ERIC Educational Resources Information Center

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating to a worldwide audience the technique of teaching students to solve accounting cases using mutual calculations. The approach taken in this course…

  6. A Teacher Accountability Model for Overcoming Self-Exclusion of Pupils

    ERIC Educational Resources Information Center

    Jamal, Abu-Hussain; Tilchin, Oleg; Essawi, Mohammad

    2015-01-01

    Self-exclusion of pupils is one of the prominent challenges of education. In this paper we propose the TERA model, which shapes the process of creating formative accountability of teachers to overcome the self-exclusion of pupils. Development of the model includes elaboration and integration of interconnected model components. The TERA model…

  7. Articulated Instruction Objectives Guide for Accounting (Module 5.0--Accounting I) (Module 6.0--Accounting II). Project Period, March 1981-February 1982 (Pilot Model). Edition I.

    ERIC Educational Resources Information Center

    Chandler, Wylda; And Others

    Developed during a project designed to provide a continuous, competency-based line of vocational training in business and office education programs at the secondary and postsecondary levels, this package consists of an instructor's guide and learning modules for use in Accounting I and II. Various aspects of implementing and articulating secondary…

  8. Accounting for the influence of vegetation and landscape improves model transferability in a tropical savannah region

    NASA Astrophysics Data System (ADS)

    Gao, Hongkai; Hrachowitz, Markus; Sriwongsitanon, Nutchanart; Fenicia, Fabrizio; Gharari, Shervan; Savenije, Hubert H. G.

    2016-10-01

    Understanding which catchment characteristics dominate hydrologic response and how to take them into account remains a challenge in hydrological modeling, particularly in ungauged basins. This is even more so in nontemperate and nonhumid catchments, where—due to the combination of seasonality and the occurrence of dry spells—threshold processes are more prominent in rainfall runoff behavior. An example is the tropical savannah, the second largest climatic zone, characterized by pronounced dry and wet seasons and high evaporative demand. In this study, we investigated the importance of landscape variability on the spatial variability of stream flow in tropical savannah basins. We applied a stepwise modeling approach to 23 subcatchments of the Upper Ping River in Thailand, where gradually more information on landscape was incorporated. The benchmark is represented by a classical lumped model (FLEXL), which does not account for spatial variability. We then tested the effect of accounting for vegetation information within the lumped model (FLEXLM), and subsequently two semidistributed models: one accounting for the spatial variability of topography-based landscape features alone (FLEXT), and another accounting for both topographic features and vegetation (FLEXTM). In cross validation, each model was calibrated on one catchment, and then transferred with its fitted parameters to the remaining catchments. We found that when transferring model parameters in space, the semidistributed models accounting for vegetation and topographic heterogeneity clearly outperformed the lumped model. This suggests that landscape controls a considerable part of the hydrological function and explicit consideration of its heterogeneity can be highly beneficial for prediction in ungauged basins in tropical savannah.

  9. Development and application of a large scale river system model for National Water Accounting in Australia

    NASA Astrophysics Data System (ADS)

    Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien

    2017-04-01

    Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at much finer resolutions. Thus, these models are not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model called the Australian Water Resource Assessment River System model (AWRA-R) has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes the major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory performance of the model, with a median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and a median annual bias of less than 1% for the calibration period (1970-1991), and a median daily NSE of 0.69 and a median annual bias of 12% for the validation period (1992-2014). The results also demonstrate that the performance of the model is less satisfactory when key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental scale water accounting, has contributed to improvements in the national water account by substantially reducing the unaccounted difference volume (gain/loss).

  10. A simulation model of hospital management based on cost accounting analysis according to disease.

    PubMed

    Tanaka, Koji; Sato, Junzo; Guo, Jinqiu; Takada, Akira; Yoshihara, Hiroyuki

    2004-12-01

    Since a little before 2000, hospital cost accounting has been increasingly performed at Japanese national university hospitals. At Kumamoto University Hospital, for instance, departmental costs have been analyzed since 2000, and since 2003 the cost balance has been obtained for certain diseases in preparation for Diagnosis-Related Groups and the Prospective Payment System. On the basis of these experiences, we have constructed a simulation model of hospital management. This program has worked correctly in repeated trials and with satisfactory speed. Although there is room for improvement in the detailed accounts and the cost accounting engine, the basic model has proved satisfactory. We have constructed a hospital management model based on the financial data of an existing hospital. We will later improve this program in terms of its construction and by using a wider variety of hospital management data, which should provide a prospective outlook for the practical application of this hospital management model.

  11. Towards ecosystem accounting: a comprehensive approach to modelling multiple hydrological ecosystem services

    NASA Astrophysics Data System (ADS)

    Duku, C.; Rathjens, H.; Zwart, S. J.; Hein, L.

    2015-10-01

    Ecosystem accounting is an emerging field that aims to provide a consistent approach to analysing environment-economy interactions. One of the specific features of ecosystem accounting is the distinction between the capacity and the flow of ecosystem services. Ecohydrological modelling to support ecosystem accounting requires considering, among other things, the physical and mathematical representation of ecohydrological processes, the spatial heterogeneity of the ecosystem, the temporal resolution, and the required model accuracy. This study examines how a spatially explicit ecohydrological model can be used to analyse multiple hydrological ecosystem services in line with the ecosystem accounting framework. We use the Upper Ouémé watershed in Benin as a test case to demonstrate our approach. The Soil and Water Assessment Tool (SWAT), which has been configured with a grid-based landscape discretization and further enhanced to simulate water flow across the discretized landscape units, is used to simulate the ecohydrology of the Upper Ouémé watershed. Indicators consistent with the ecosystem accounting framework are used to map and quantify the capacities and flows of multiple hydrological ecosystem services based on the model outputs. Biophysical ecosystem accounts are subsequently set up based on the spatial estimates of hydrological ecosystem services. In addition, we conduct statistical trend tests on the biophysical ecosystem accounts to identify trends in the capacity of the watershed ecosystems to provide service flows. We show that the integration of hydrological ecosystem services into an ecosystem accounting framework provides relevant information on ecosystems and hydrological ecosystem services at scales suitable for decision-making.

  12. School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

    ERIC Educational Resources Information Center

    van Barneveld, Christina; Stienstra, Wendy; Stewart, Sandra

    2006-01-01

    For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement…

  13. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  14. A Balanced School Accountability Model: An Alternative to High-Stakes Testing

    ERIC Educational Resources Information Center

    Jones, Ken

    2004-01-01

    This article asserts that the health of public schools depends on defining a new model of accountability--one that is balanced and comprehensive. This new model needs to be one that involves much more than test scores. This article outlines the premises behind this argument, asking for what, to whom, and by what means schools should be held…

  15. Model Selection and Accounting for Model Uncertainty in Graphical Models Using OCCAM’s Window

    DTIC Science & Technology

    1991-07-22

  16. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    PubMed

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.

  17. Modelling characteristics of photovoltaic panels with thermal phenomena taken into account

    NASA Astrophysics Data System (ADS)

    Krac, Ewa; Górecki, Krzysztof

    2016-01-01

    In the paper a new form of the electrothermal model of photovoltaic panels is proposed. This model takes into account the optical, electrical and thermal properties of the considered panels, as well as electrical and thermal properties of the protecting circuit and thermal inertia of the considered panels. The form of this model is described and some results of measurements and calculations of mono-crystalline and poly-crystalline panels are presented.

  18. [The mathematical modelling of population dynamics taking into account the adaptive behavior of individuals].

    PubMed

    Abakumov, A I

    2000-01-01

    A general approach to modelling the abundance dynamics of biological populations and communities is offered. The mechanisms of individual adaptation in a changing environment are considered. The approach is detailed for population models without structure and with age structure. The properties of the solutions are investigated. As examples, the author studies concrete specifications of the general models by analogy with the models of Ricker and May. Theoretical analysis and calculations show that the survival of a model population in extreme situations increases if adaptive behaviour is taken into account.

  19. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The modal parameters identified over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  20. Students' Use of the Energy Model to Account for Changes in Physical Systems

    ERIC Educational Resources Information Center

    Papadouris, Nico; Constantinou, Constantinos P.; Kyratsi, Theodora

    2008-01-01

    The aim of this study is to explore the ways in which students, aged 11-14 years, account for certain changes in physical systems and the extent to which they draw on an energy model as a common framework for explaining changes observed in diverse systems. Data were combined from two sources: interviews with 20 individuals and an open-ended…

  1. Business Models, Accounting and Billing Concepts in Grid-Aware Networks

    NASA Astrophysics Data System (ADS)

    Kotrotsos, Serafim; Racz, Peter; Morariu, Cristian; Iskioupi, Katerina; Hausheer, David; Stiller, Burkhard

    The emerging Grid economy will set new challenges for the network. More and more literature underlines the significance of network-awareness for efficient and effective Grid services. Following this path of Grid evolution, this paper identifies some key challenges in the areas of business modeling, accounting and billing, and proposes an architecture that addresses them.

  2. The Politics and Statistics of Value-Added Modeling for Accountability of Teacher Preparation Programs

    ERIC Educational Resources Information Center

    Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas

    2014-01-01

    Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…

  3. Accounting for Co-Teaching: A Guide for Policymakers and Developers of Value-Added Models

    ERIC Educational Resources Information Center

    Isenberg, Eric; Walsh, Elias

    2015-01-01

    We outline the options available to policymakers for addressing co-teaching in a value-added model. Building on earlier work, we propose an improvement to a method of accounting for co-teaching that treats co-teachers as teams, with each teacher receiving equal credit for co-taught students. Hock and Isenberg (2012) described a method known as the…

  4. Measuring Resources in Education: A Comparison of Accounting and the Resource Cost Model Approach.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    2000-01-01

    The need for programmatic cost information, data compatibility, and understanding input/output relationships are sparking efforts to improve standards for organizing and reporting educational-resource data. Unlike accountants, economists measure resources in real terms and organize information around service delivery, using a resource-cost model.…

  5. Scale dependencies of hydrologic models to spatial variability of precipitation

    NASA Astrophysics Data System (ADS)

    Koren, V. I.; Finnerty, B. D.; Schaake, J. C.; Smith, M. B.; Seo, D.-J.; Duan, Q.-Y.

    1999-04-01

    This study is focused on analyses of the scale dependency of lumped hydrological models with different formulations of the infiltration processes. Three lumped hydrological models of differing complexity were used in the study: the SAC-SMA model, the Oregon State University (OSU) model, and the simple water balance (SWB) model. High-resolution (4×4 km) rainfall estimates from the Next Generation Weather Radar (NEXRAD) Stage III in the Arkansas-Red river basin were used in the study. These gridded precipitation estimates are a multi-sensor product which combines the spatial resolution of the radar data with the ground-truth estimates of the gage data. Results were generated from each model using different resolutions of spatial averaging of hourly rainfall. Although all selected models were scale dependent, the level of dependency varied significantly with different formulations of the rainfall-runoff partitioning mechanism. Infiltration-excess type models were the most sensitive; saturation-excess type models were less scale dependent. Probabilistic averaging of the point processes reduces scale dependency; however, its effectiveness varies depending on the scale and the spatial structure of rainfall.
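
    The scale dependency of infiltration-excess runoff can be illustrated numerically: applying a threshold infiltration rule to basin-averaged rainfall is not the same as averaging the runoff computed cell by cell. The rainfall values and infiltration capacity below are made up.

```python
# Numerical illustration of why infiltration-excess runoff is scale
# dependent; all numbers are invented.
import numpy as np

f_cap = 10.0                                   # infiltration capacity [mm/h]
cells = np.array([2.0, 4.0, 25.0, 9.0])        # hourly rain on 4 grid cells

runoff_distributed = np.maximum(cells - f_cap, 0.0).mean()
runoff_lumped = max(cells.mean() - f_cap, 0.0)

print(runoff_distributed)   # 3.75 mm/h: one intense cell produces runoff
print(runoff_lumped)        # 0.0  mm/h: averaging hides the intense cell
```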

  6. An analytical model for particulate reinforced composites (PRCs) taking account of particle debonding and matrix cracking

    NASA Astrophysics Data System (ADS)

    Jiang, Yunpeng

    2016-10-01

    In this work, a simple micromechanics-based model was developed to describe the overall stress-strain relations of particulate reinforced composites (PRCs), taking into account both particle debonding and matrix cracking damage. Based on the secant homogenization framework, the effective compliance tensor is first given for the perfect composite without any damage. The progressive interface debonding damage is controlled by a Weibull probability function, and the volume fraction of detached particles is then incorporated into the equivalent compliance tensor to account for the impact of particle debonding. Matrix cracking is introduced in the present model to capture the stress-softening stage in the deformation of PRCs. The analytical model is first verified by comparison with the corresponding experiment, and parameter analyses are then conducted. This modeling should shed some light on optimizing microstructures to effectively improve the mechanical behavior of PRCs.
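
    A sketch of the Weibull-controlled debonding evolution is given below; the Weibull parameters are illustrative, not fitted values from the paper.

```python
# Weibull damage-evolution idea described above: the fraction of debonded
# particles grows with the interface stress. Parameters are illustrative.
import numpy as np

def debonded_fraction(sigma, sigma0=100.0, m=4.0):
    """Cumulative Weibull probability that a particle has debonded at
    interfacial stress sigma (MPa); sigma0 and m are Weibull parameters."""
    return 1.0 - np.exp(-(np.asarray(sigma) / sigma0) ** m)

for s in (50.0, 100.0, 150.0):
    print(f"sigma = {s:5.1f} MPa -> debonded fraction = "
          f"{debonded_fraction(s):.3f}")
```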

  7. Aviation security cargo inspection queuing simulation model for material flow and accountability

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Rose, Terri A; Brumback, Daryl L

    2009-01-01

    Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we develop an aviation security cargo inspection queuing simulation model for material flow and accountability that will allow cargo managers to conduct impact studies of current and proposed business practices as they relate to inspection procedures, material flow, and accountability.

  8. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    PubMed

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
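
    The sketch below illustrates regression calibration in a simplified linear setting (standing in for the Cox model): the reliability ratio is estimated from replicate measurements and used to rescale the naive coefficient. The data are synthetic, not the depression-study data.

```python
# Regression-calibration sketch with replicate measurements of an
# error-prone covariate; linear regression stands in for the Cox model.
import numpy as np

rng = np.random.default_rng(3)
n = 500
x_true = rng.normal(0.0, 1.0, n)               # true log(total REM)
rep1 = x_true + rng.normal(0.0, 0.5, n)        # two replicate measurements
rep2 = x_true + rng.normal(0.0, 0.5, n)
y = 1.5 * x_true + rng.normal(0.0, 1.0, n)     # outcome (linear stand-in)

x_obs = (rep1 + rep2) / 2.0
beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Reliability of the replicate mean: var(true) / var(observed mean).
sigma2_u = np.var(rep1 - rep2, ddof=1) / 2.0   # error variance per replicate
lam = 1.0 - (sigma2_u / 2.0) / np.var(x_obs, ddof=1)
beta_corrected = beta_naive / lam

print(f"naive: {beta_naive:.2f}  corrected: {beta_corrected:.2f}  true: 1.50")
```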

  9. Event-based hydrological modeling for detecting dominant hydrological process and suitable model strategy for semi-arid catchments

    NASA Astrophysics Data System (ADS)

    Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng

    2016-11-01

    Properly simulating the hydrological processes in semi-arid areas is still challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performance of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel rather than in series. In addition, the results show that parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process in semi-arid catchments is infiltration-excess surface runoff, a nonlinear component.

  10. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    ERIC Educational Resources Information Center

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
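
    A bare-bones version of such a random walk model is sketched below: the drift scales with numerical distance, so larger distances reach the decision barrier sooner. The drift scaling, barrier, and noise level are invented, not the paper's fitted values.

```python
# Bare-bones random walk of the kind used to model digit comparison; all
# parameters are invented for illustration.
import numpy as np

def simulate_rt(distance, drift_per_dist=0.05, barrier=30.0,
                noise=1.0, max_steps=10_000, rng=None):
    rng = rng or np.random.default_rng()
    evidence, steps = 0.0, 0
    while abs(evidence) < barrier and steps < max_steps:
        evidence += drift_per_dist * distance + rng.normal(0.0, noise)
        steps += 1
    return steps                               # proxy for response time

rng = np.random.default_rng(4)
for d in (1, 3, 7):
    rts = [simulate_rt(d, rng=rng) for _ in range(200)]
    print(f"distance {d}: mean RT ~ {np.mean(rts):.0f} steps")
```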

  11. Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad

    2006-01-01

    This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'propagate-burn-propagate' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons between it and three methods using modified linearized covariance propagation are made. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. In the second method, the maneuver is not modeled but the uncertainty in its magnitude is accounted for by the inclusion of a process noise matrix. In the third method, which is essentially a hybrid of the first two, the nominal portion of the maneuver is included via the state transition matrix while a process noise matrix is used to account for the magnitude uncertainty. Since this method also correctly accounts for the presence of the maneuver in the nominal orbit, it is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, Pc. However, applications for the two other methods exist and are briefly discussed. Despite the fact that the process model ('propagate-burn-propagate') that was studied was very simple - point-mass gravitational effects due to the Earth combined with an impulsive delta-V in the velocity direction for the maneuver - generalizations to more complex scenarios, including high fidelity force models, finite duration maneuvers, and maneuver pointing errors, are straightforward and are discussed in the conclusion.
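
    The hybrid third method can be miniaturized as follows: propagate the covariance with the state transition matrix, inject the execution uncertainty as process noise at the burn, and continue propagating. The 1-D point-mass dynamics and all numbers below are invented; in this linear toy the nominal impulsive burn shifts the mean state but leaves the covariance itself unchanged, so only the noise term appears in the covariance update.

```python
# Miniature "propagate-burn-propagate" covariance propagation with the
# maneuver execution uncertainty injected as process noise; invented numbers.
import numpy as np

def stm(dt):
    """State transition matrix for a 1-D point mass over time dt."""
    return np.array([[1.0, dt],
                     [0.0, 1.0]])

P0 = np.diag([1.0e4, 1.0e-2])      # initial covariance [m^2, (m/s)^2]
dt1, dt2 = 600.0, 600.0            # propagate-burn-propagate legs [s]
sigma_dv = 0.05                    # 1-sigma maneuver magnitude error [m/s]

P = stm(dt1) @ P0 @ stm(dt1).T     # propagate to the burn epoch
Q = np.diag([0.0, sigma_dv**2])    # execution uncertainty as process noise
P = P + Q                          # impulsive burn injects velocity noise
P = stm(dt2) @ P @ stm(dt2).T      # propagate to the final epoch
print(np.round(P, 1))
```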

  12. Cost accounting models used for price-setting of health services: an international review.

    PubMed

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals.

  13. Numerical modeling of gravity-driven bubble flows with account of polydispersion

    NASA Astrophysics Data System (ADS)

    Chernyshev, A. S.; Schmidt, A. A.

    2016-10-01

    The present study focuses on the motion of a bubble-liquid medium inside bubble columns or vertical pipes, with account of polydisperse phenomena, by means of numerical simulation. The underlying mathematical model is based on the Euler-Euler approach, with interphase interaction described by momentum and mass transfer between phases, along with the k-ω SST turbulence model, which includes turbulence generation by bubble motion and bubble path dispersion. Polydispersion is taken into account by a multi-class model with a piecewise-constant distribution of bubble sizes per cell. Simulation of downward flow inside a straight vertical pipe resulted in a maximum of the bubble void fraction close to the pipe center, which is in good agreement with the experimental data. Simulation of multiphase flow inside a rectangular bubble column with an off-center sparger resulted in a vertical bubble-liquid jet biased towards the nearby wall, with correct prediction of the attachment point location.

  14. Accounting for Local Dependence with the Rasch Model: The Paradox of Information Increase.

    PubMed

    Andrich, David

    Test theories imply statistical, local independence. Where local independence is violated, models of modern test theory that account for it have been proposed. One violation of local independence occurs when the response to one item governs the response to a subsequent item. Expanding on a formulation of this kind of violation between two items in the dichotomous Rasch model, this paper derives three related implications. First, it formalises how the polytomous Rasch model for an item constituted by summing the scores of the dependent items absorbs the dependence in its threshold structure. Second, it shows that as a consequence the unit when the dependence is accounted for is not the same as if the items had no response dependence. Third, it explains the paradox, known, but not explained in the literature, that the greater the dependence of the constituent items the greater the apparent information in the constituted polytomous item when it should provide less information.

  15. A selection model for accounting for publication bias in a full network meta-analysis.

    PubMed

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully than the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency.

  16. Predicting NonInertial Effects with Algebraic Stress Models which Account for Dissipation Rate Anisotropies

    NASA Technical Reports Server (NTRS)

    Jongen, T.; Machiels, L.; Gatski, T. B.

    1997-01-01

    Three types of turbulence models which account for rotational effects in noninertial frames of reference are evaluated for the case of incompressible, fully developed rotating turbulent channel flow. The different types of models are a Coriolis-modified eddy-viscosity model, a realizable algebraic stress model, and an algebraic stress model which accounts for dissipation rate anisotropies. A direct numerical simulation of a rotating channel flow is used for the turbulence model validation. This simulation differs from previous studies in that significantly higher rotation numbers are investigated. Flows at these higher rotation numbers are characterized by a relaminarization on the cyclonic or suction side of the channel, and a linear velocity profile on the anticyclonic or pressure side of the channel. The predictive performance of the three types of models is examined in detail, and formulation deficiencies are identified which cause poor predictive performance for some of the models. Criteria are identified which allow for accurate prediction of such flows by algebraic stress models and their corresponding Reynolds stress formulations.

  17. A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks

    PubMed Central

    Wang, Ping; Zhang, Lin; Li, Victor O. K.

    2013-01-01

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708
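
    The core idea - a coherent sum over geometrical rays, each carrying a propagation phase plus boundary reflection phase shifts - can be sketched as follows. The geometry, reflection phases and sound speed are illustrative assumptions, not SAM's actual ray tracer.

```python
import numpy as np

# Minimal sketch of the idea behind SAM: transmission loss from a coherent
# sum of ray arrivals, each with a phase accumulated along its path length
# plus a boundary reflection phase shift. Geometry and values are assumed.

def transmission_loss(f, path_lengths, phase_shifts, c=1500.0):
    """Coherent TL (dB) from ray path lengths (m) and per-ray phase shifts (rad)."""
    k = 2.0 * np.pi * f / c                    # wavenumber at frequency f
    # spherical spreading amplitude 1/r and propagation phase k*r per ray
    field = np.sum(np.exp(1j * (k * path_lengths + phase_shifts)) / path_lengths)
    return -20.0 * np.log10(np.abs(field))

# direct path, one surface bounce (pi phase flip), one bottom bounce
paths = np.array([1000.0, 1004.0, 1010.0])
phases = np.array([0.0, np.pi, 0.6])           # assumed reflection phase shifts
for f in [100.0, 500.0, 1000.0]:
    print(f"f = {f:6.1f} Hz  TL = {transmission_loss(f, paths, phases):.1f} dB")
```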

  18. Modeling of vapor intrusion from hydrocarbon-contaminated sources accounting for aerobic and anaerobic biodegradation

    NASA Astrophysics Data System (ADS)

    Verginelli, Iason; Baciocchi, Renato

    2011-11-01

    A one-dimensional steady state vapor intrusion model including both anaerobic and oxygen-limited aerobic biodegradation was developed. The aerobic and anaerobic layer thicknesses are calculated by stoichiometrically coupling the reactive transport of vapors with oxygen transport and consumption. The model accounts for the different oxygen demand in the subsurface required to sustain the aerobic biodegradation of the compound(s) of concern and for the baseline soil oxygen respiration. In the case of anaerobic reaction under methanogenic conditions, the model accounts for the generation of methane, which leads to a further oxygen demand in the aerobic zone due to methane oxidation. The model was solved analytically and applied, using representative parameter ranges and values, to identify under which site conditions the attenuation of hydrocarbons migrating into indoor environments is likely to be significant. Simulations were performed assuming a soil contaminated by toluene only, by a BTEX mixture, by fresh gasoline and by weathered gasoline. The results show that, for several site conditions, the oxygen concentration below the building is sufficient to sustain aerobic biodegradation. For these scenarios aerobic biodegradation is the primary mechanism of attenuation, i.e. the anaerobic contribution is negligible and a model accounting only for aerobic biodegradation can be used. By contrast, in all cases where oxygen is not sufficient to sustain aerobic biodegradation alone (e.g. highly contaminated sources), anaerobic biodegradation can contribute significantly to the overall attenuation, depending on the site-specific conditions.

  19. Modeling of vapor intrusion from hydrocarbon-contaminated sources accounting for aerobic and anaerobic biodegradation.

    PubMed

    Verginelli, Iason; Baciocchi, Renato

    2011-11-01

    A one-dimensional steady state vapor intrusion model including both anaerobic and oxygen-limited aerobic biodegradation was developed. The aerobic and anaerobic layer thicknesses are calculated by stoichiometrically coupling the reactive transport of vapors with oxygen transport and consumption. The model accounts for the different oxygen demand in the subsurface required to sustain the aerobic biodegradation of the compound(s) of concern and for the baseline soil oxygen respiration. In the case of anaerobic reaction under methanogenic conditions, the model accounts for the generation of methane, which leads to a further oxygen demand in the aerobic zone due to methane oxidation. The model was solved analytically and applied, using representative parameter ranges and values, to identify under which site conditions the attenuation of hydrocarbons migrating into indoor environments is likely to be significant. Simulations were performed assuming a soil contaminated by toluene only, by a BTEX mixture, by fresh gasoline and by weathered gasoline. The results show that, for several site conditions, the oxygen concentration below the building is sufficient to sustain aerobic biodegradation. For these scenarios aerobic biodegradation is the primary mechanism of attenuation, i.e. the anaerobic contribution is negligible and a model accounting only for aerobic biodegradation can be used. By contrast, in all cases where oxygen is not sufficient to sustain aerobic biodegradation alone (e.g. highly contaminated sources), anaerobic biodegradation can contribute significantly to the overall attenuation, depending on the site-specific conditions.
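
    To make the role of the aerobic layer concrete, the sketch below works out a simplified single-layer case: steady diffusion with first-order aerobic decay, which reduces the vapor flux by a factor phi/sinh(phi), with Thiele modulus phi = L*sqrt(k/D). The stoichiometric oxygen coupling and the anaerobic/methanogenic terms of the full model are omitted, and all parameter values are assumptions.

```python
import numpy as np

# Simplified single-layer illustration of aerobic attenuation: source
# concentration c0 at the bottom of the layer, ~zero at the slab. The full
# model above also couples oxygen transport stoichiometrically; values here
# are assumed, not the paper's representative ranges.

def flux_attenuation(L, D_eff, k):
    """Ratio of reactive to non-reactive diffusive flux through a layer.

    Solving D c'' = k c with c(0)=c0, c(L)=0 gives a flux reduction of
    phi / sinh(phi), where phi = L * sqrt(k / D_eff) is a Thiele modulus.
    """
    phi = L * np.sqrt(k / D_eff)
    return phi / np.sinh(phi)

D_eff = 8.0e-7      # effective vapor diffusivity in soil (m^2/s), assumed
k = 2.0e-4          # first-order aerobic degradation rate (1/s), assumed
for L in [0.25, 0.5, 1.0, 2.0]:  # aerobic layer thickness (m)
    print(f"L = {L:4.2f} m  flux reduced to {flux_attenuation(L, D_eff, k):.3%}")
```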

  20. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection

    USGS Publications Warehouse

    Zipkin, Elise F.; Grant, Evan H. Campbell; Fagan, William F.

    2012-01-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multi-species hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions of species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation dataset. We found that wetland hydroperiod (the length of time that a wetland holds water) as well as the occurrence state in the prior year were generally the most important factors in determining occupancy. The model with only habitat covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multi-species models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for imperfect detection.

  1. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection.

    PubMed

    Zipkin, Elise F; Grant, Evan H Campbell; Fagan, William F

    2012-10-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multispecies hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions about species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation data set. We found that wetland hydroperiod (the length of time that a wetland holds water), as well as the occurrence state in the prior year, were generally the most important factors in determining occupancy. The model with habitat-only covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multispecies models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for imperfect detection.
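
    A minimal sketch of the AUC evaluation idea follows: each posterior draw of predicted occupancy probabilities is scored against validation data, yielding a posterior distribution of AUC rather than a single value. The synthetic data and the hand-rolled rank-based AUC below are placeholders; the study additionally corrects the evaluation data for false negatives, which this sketch omits.

```python
import numpy as np

# Sketch of a posterior distribution of ROC AUC: score each posterior draw
# of predicted occupancy probabilities against validation states, so that
# prediction uncertainty carries through to AUC. Data are synthetic.

rng = np.random.default_rng(1)

def auc(scores, labels):
    """Rank-based ROC AUC (probability a positive outranks a negative)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

n_sites, n_draws = 60, 1000
z_true = rng.binomial(1, 0.4, size=n_sites)      # validation occupancy states
# posterior draws of occupancy probability, loosely informative of truth
psi_draws = np.clip(0.4 + 0.3 * (z_true - 0.4) +
                    0.15 * rng.standard_normal((n_draws, n_sites)), 0.01, 0.99)

auc_posterior = np.array([auc(draw, z_true) for draw in psi_draws])
lo, hi = np.percentile(auc_posterior, [2.5, 97.5])
print(f"posterior mean AUC = {auc_posterior.mean():.3f}, "
      f"95% CrI = ({lo:.3f}, {hi:.3f})")
```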

  2. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    NASA Astrophysics Data System (ADS)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement or reduction due to a thin or thick debris layer, respectively. However, such models require a large amount of input data that are often unavailable, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt, prescribing a constant reduction in melt across the entire glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model's empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to that of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the Østrem curve.
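
    A toy version of the debris temperature-index idea is sketched below: melt at the debris-ice interface is driven by air temperature and shortwave radiation, with empirical factors that decay with debris thickness. The exponential decay, constants and inputs are illustrative assumptions (and the thin-debris melt enhancement on the rising limb of the Østrem curve is not represented).

```python
import numpy as np

# Minimal sketch of a debris-aware temperature-index melt model in the
# spirit of DETI: temperature and shortwave terms with empirical factors
# that decay with debris thickness. Functional forms and constants are
# illustrative assumptions, not the paper's calibrated values.

def deti_melt(T, I, albedo, debris_h, TF0=0.05, SRF0=0.0094, h_star=0.10):
    """Hourly melt (mm w.e.) at the debris-ice interface.

    TF0, SRF0 : clean-ice temperature and radiation factors (assumed)
    h_star    : e-folding debris thickness (m) for the parameter decay (assumed)
    """
    damping = np.exp(-debris_h / h_star)   # thicker debris -> smaller factors
    melt = damping * (TF0 * T + SRF0 * (1.0 - albedo) * I)
    return np.maximum(melt, 0.0)

T, I, albedo = 6.0, 550.0, 0.15   # air temp (C), shortwave (W/m2), debris albedo
for h in [0.0, 0.05, 0.10, 0.30]:
    print(f"debris {h:4.2f} m  melt = {deti_melt(T, I, albedo, h):.2f} mm w.e./h")
```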

  3. Generation of SEEAW asset accounts based on water resources management models

    NASA Astrophysics Data System (ADS)

    Pedro-Monzonís, María; Solera, Abel; Andreu, Joaquín

    2015-04-01

    One of the main challenges of the 21st century relates to the sustainable use of water, since water is an essential element for the life of all who inhabit our planet. In many cases, the lack of economic valuation of water resources leads to inefficient water use. In this regard, society expects policymakers and stakeholders to maximise the benefit produced per unit of natural resources. Water planning and Integrated Water Resources Management (IWRM) represent the best way to achieve this goal. The System of Environmental-Economic Accounting for Water (SEEAW) serves as a tool for water allocation that enables the building of water balances in a river basin. The main concern of the SEEAW is to provide a standard approach which allows policymakers to compare results between different territories. But building water accounts is a complex task due to the difficulty of collecting the required data. Because the components of the hydrological cycle are difficult to gauge, simulation models have become an essential tool, extensively employed in recent decades. The aim of this paper is to present the building of a database that enables the combined use of hydrological models and water resources models developed with the AQUATOOL DSS to fill in the SEEAW tables. This research is framed within the Water Accounting in a Multi-Catchment District (WAMCD) project, financed by the European Union. Its main goal is the development of water accounts in the Mediterranean Andalusian River Basin District, Spain. The research aims to contribute to the objectives of the "Blueprint to safeguard Europe's water resources". It is noteworthy that, in Spain, a large part of these methodological decisions are included in the Spanish Guideline of Water Planning with normative status, guaranteeing consistency and comparability of the results.

  4. Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad

    2006-01-01

    This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'prop-burn-prop' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons are made with three methods using modified linearized covariance propagation. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. The second method does not model the maneuver but includes a process noise matrix to account for the uncertainty in its magnitude. The third method, which is essentially a hybrid of the first two, includes the nominal portion of the maneuver via the state transition matrix and uses a process noise matrix to account for the magnitude uncertainty. The first method is unable to produce the final orbit covariance except in the case of zero maneuver uncertainty. The second method yields good accuracy for the final covariance matrix but fails to model the final orbital state accurately. Agreement between the simulated covariance data produced by this method and the Monte Carlo truth data fell within 0.5-2.5 percent over a range of maneuver sizes that span two orders of magnitude (0.1-20 m/s). The third method, which yields a combination of good accuracy in the computation of the final covariance matrix and correct accounting for the presence of the maneuver in the nominal orbit, is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, PC. However, applications for the two other methods exist and are briefly discussed. Although the process model ('prop-burn-prop') that was studied was very simple - point-mass gravitational effects due to the Earth combined with an impulsive delta-V in the velocity direction for the maneuver - generalizations to more complex scenarios, including high fidelity force models, finite duration maneuvers, and maneuver pointing errors, are straightforward.

  5. A Distributed Hydrologic Model, HL-RDHM, for Flash Flood Forecasting in Hawaiian Watersheds

    NASA Astrophysics Data System (ADS)

    Fares, A.; Awal, R.; Michaud, J.; Chu, P.; Fares, S.; Kevin, K.; Rosener, M.

    2012-12-01

    Hawai'i's watersheds are flash flood prone due to their small contributing areas and frequent, intense, spatially variable precipitation. Accurate simulation of the hydrology of these watersheds should incorporate the spatial variability of at least the major input data, e.g., precipitation. The goal of this study is to evaluate the performance of the U.S. National Weather Service Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) in flash flood forecasting at Hanalei watershed, Kauai, Hawai'i. Some of the major limitations of using HL-RDHM in Hawaii are: i) Hawaii lies outside the Hydrologic Rainfall Analysis Project (HRAP) coordinate system of the continental US (CONUS); ii) a priori SAC-SMA parameter grids are unavailable; and iii) hourly multi-sensor NEXRAD-based precipitation grids are absent. The specific objectives of this study were to i) run HL-RDHM outside the CONUS domain, and ii) evaluate the performance of HL-RDHM for flash flood forecasting in the flood-prone Hanalei watershed, Kauai, Hawai'i. We i) modified the HRAP coordinate system; ii) generated precipitation grid inputs at different resolutions using data from 20 precipitation gauges, five of which were within the Hanalei watershed; and iii) generated SAC-SMA and routing parameter grids for the modified HRAP coordinate system. The one-HRAP-resolution grid (4 km x 4 km) was not sufficiently accurate; the basin-averaged annual hourly precipitation of the 1 HRAP grid was comparatively lower than that of the ½ and ¼ HRAP grids. The performance of HL-RDHM using basin-averaged a priori grids and distributed a priori grids was reasonable, even using non-optimized a priori parameter values, for the 2008 data. HL-RDHM reasonably matched the observed streamflow magnitudes of peaks and times to peak during the calibration and validation periods. Overall, HL-RDHM performance is "good" to "very good" if input data of finer resolution grids (½ HRAP or ¼ HRAP) and precipitation grids interpolated from sufficient rain gauge data are used.
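
    As an illustration of the gauge-to-grid preprocessing step, the sketch below interpolates synthetic gauge totals onto a model grid with inverse-distance weighting; the companion Hanalei study used ordinary kriging, so IDW here is a stand-in, and the coordinates and values are placeholders.

```python
import numpy as np

# Sketch of building gridded precipitation input from gauge data, using
# inverse-distance weighting as a stand-in for the interpolation step.
# Gauge coordinates and rainfall values are synthetic placeholders.

rng = np.random.default_rng(2)

def idw_grid(gauge_xy, gauge_p, grid_x, grid_y, power=2.0):
    """Interpolate gauge precipitation onto a model grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    cells = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(cells[:, None, :] - gauge_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)                  # avoid division by zero at gauges
    w = 1.0 / d**power
    return (w @ gauge_p / w.sum(axis=1)).reshape(gy.shape)

gauges = rng.uniform(0.0, 16.0, size=(20, 2))   # 20 gauges in a 16 km domain
p_hour = rng.gamma(2.0, 3.0, size=20)           # one hour of rainfall (mm)
x_half = np.arange(1.0, 16.0, 2.0)              # ~2 km (half-HRAP-like) cells
grid = idw_grid(gauges, p_hour, x_half, x_half)
print("basin-average precip:", grid.mean().round(2), "mm")
```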

  6. Accounting for anatomical noise in search-capable model observers for planar nuclear imaging

    PubMed Central

    Sen, Anando; Gifford, Howard C.

    2016-01-01

    Model observers intended to predict the diagnostic performance of human observers should account for the effects of both quantum and anatomical noise. We compared the abilities of several visual-search (VS) and scanning Hotelling-type models to account for anatomical noise in a localization receiver operating characteristic (LROC) study involving simulated nuclear medicine images. Our VS observer invoked a two-stage process of search and analysis. The images featured lesions in the prostate and pelvic lymph nodes. Lesion contrast and the geometric resolution and sensitivity of the imaging collimator were the study variables. A set of anthropomorphic mathematical phantoms was imaged with an analytic projector based on eight parallel-hole collimators with different sensitivity and resolution properties. The LROC study was conducted with human observers and the channelized nonprewhitening, channelized Hotelling (CH) and VS model observers. The CH observer was applied in a "background-known-statistically" protocol while the VS observer performed a quasi-background-known-exactly task. Both of these models were applied with and without internal noise in the decision variables. A perceptual search threshold was also tested with the VS observer. The model observers without inefficiencies failed to mimic the average performance trend for the humans. The CH and VS observers with internal noise matched the humans primarily at low collimator sensitivities. With both internal noise and the search threshold, the VS observer attained quantitative agreement with the human observers. Computational efficiency is an important advantage of the VS observer. PMID:26835503

  7. A statistical RCL interconnect delay model taking account of process variations

    NASA Astrophysics Data System (ADS)

    Zhu, Zhang-Ming; Wan, Da-Jing; Yang, Yin-Tang; En, Yun-Fei

    2011-01-01

    As the feature size of CMOS integrated circuits continues to shrink, process variations have become a key factor affecting interconnect performance. Based on the equivalent Elmore model, polynomial chaos theory, and the Galerkin method, we propose a linear statistical RCL interconnect delay model that takes process variations into account through successive application of the linear approximation method. HSPICE simulation results based on a variety of nano-CMOS process parameters show that the maximum error of the proposed model is less than 3.5%. The proposed model is simple and highly accurate, and can be used in the analysis and design of nanometer integrated circuit interconnect systems.
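
    The linearization idea can be illustrated on a plain RC version of the problem: the Elmore delay of a uniform ladder is expanded to first order in the process variations, and the resulting variance is checked against Monte Carlo. The segment count, nominal values and variation levels below are assumptions, and the paper's RCL/polynomial-chaos machinery is not reproduced.

```python
import numpy as np

# Sketch of a first-order statistical delay model for a uniform RC ladder:
# the Elmore delay is linearized in the per-segment R and C variations.
# All nominal values and variation levels are illustrative assumptions.

N = 50                           # segments in the distributed RC line
R0, C0 = 2.0, 5e-15              # nominal ohms and farads per segment
sigma_R, sigma_C = 0.08, 0.06    # relative 1-sigma process variations

def elmore(R, C):
    """Elmore delay of a uniform ladder: sum_i R_i * (downstream C)."""
    return R * C * N * (N + 1) / 2.0

d_nom = elmore(R0, C0)
dd_dR = elmore(1.0, C0)          # partial derivative w.r.t. per-segment R
dd_dC = elmore(R0, 1.0)          # partial derivative w.r.t. per-segment C
var = (dd_dR * sigma_R * R0)**2 + (dd_dC * sigma_C * C0)**2
print(f"nominal delay = {d_nom*1e12:.2f} ps, "
      f"linear 1-sigma = {np.sqrt(var)*1e12:.2f} ps")

# Monte Carlo check (fully correlated segments assumed for simplicity)
rng = np.random.default_rng(3)
Rs = R0 * (1 + sigma_R * rng.standard_normal(100000))
Cs = C0 * (1 + sigma_C * rng.standard_normal(100000))
print(f"Monte Carlo 1-sigma = {np.std(elmore(Rs, Cs))*1e12:.2f} ps")
```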

  8. An analytical model accounting for tip shape evolution during atom probe analysis of heterogeneous materials.

    PubMed

    Rolland, N; Larson, D J; Geiser, B P; Duguay, S; Vurpillot, F; Blavette, D

    2015-12-01

    An analytical model describing the field evaporation dynamics of a tip made of a thin layer deposited on a substrate is presented. The difference in evaporation field between the materials is taken into account in this approach, in which the tip shape is modeled at a mesoscopic scale. It was found that the absence of a sharp edge on the surface is a sufficient condition to derive the morphological evolution during successive evaporation of the layers. This modeling gives an instantaneous and smooth analytical representation of the surface that shows good agreement with finite difference simulation results, and a specific regime of evaporation was highlighted when the substrate is a low-evaporation-field phase. In addition, the model makes it possible to calculate the analyzed volume of the tip theoretically, potentially opening up new horizons for atom probe tomographic reconstruction.

  9. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    EPA Science Inventory

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled Individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  10. A novel VLES model accounting for near-wall turbulence: physical rationale and applications

    NASA Astrophysics Data System (ADS)

    Jakirlic, Suad; Chang, Chi-Yao; Kutej, Lukas; Tropea, Cameron

    2014-11-01

    A novel VLES (Very Large Eddy Simulation) model is proposed in which the non-resolved residual turbulence is modelled with an advanced near-wall eddy-viscosity model; the latter accounts for the influence of near-wall Reynolds stress anisotropy on the turbulence viscosity by appropriately modelling the velocity scale in the relevant formulation (Hanjalic et al., 2004). It represents a variable-resolution hybrid LES/RANS (Reynolds-Averaged Navier-Stokes) computational scheme enabling a seamless transition from RANS to LES, governed by the ratio of the turbulent viscosity associated with the unresolved scales at the LES cut-off to that of the 'unsteady' scales pertinent to the turbulent properties of the VLES residual motion, a ratio which varies within the flow domain. The VLES method is validated interactively during the model derivation by computing fully-developed flow in a plane channel (an important representative of wall-bounded flows, underlying the log-law for the velocity field and well suited to studying near-wall Reynolds stress anisotropy) and separating flow over a periodic arrangement of smoothly-contoured 2-D hills. The model's performance is also assessed in capturing the natural decay of homogeneous isotropic turbulence. The model is finally applied to swirling flow in a vortex tube, flow in an IC-engine configuration, and flow past a realistic car model.

  11. Variance component model to account for sample structure in genome-wide association studies.

    PubMed

    Kang, Hyun Min; Sul, Jae Hoon; Service, Susan K; Zaitlen, Noah A; Kong, Sit-Yee; Freimer, Nelson B; Sabatti, Chiara; Eskin, Eleazar

    2010-04-01

    Although genome-wide association studies (GWASs) have identified numerous loci associated with complex traits, imprecise modeling of the genetic relatedness within study samples may cause substantial inflation of test statistics and possibly spurious associations. Variance component approaches, such as efficient mixed-model association (EMMA), can correct for a wide range of sample structures by explicitly accounting for pairwise relatedness between individuals, using high-density markers to model the phenotype distribution; but such approaches are computationally impractical. We report here a variance component approach implemented in publicly available software, EMMA eXpedited (EMMAX), that reduces the computational time for analyzing large GWAS data sets from years to hours. We apply this method to two human GWAS data sets, performing association analysis for ten quantitative traits from the Northern Finland Birth Cohort and seven common diseases from the Wellcome Trust Case Control Consortium. We find that EMMAX outperforms both principal component analysis and genomic control in correcting for sample structure.
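
    The computational trick behind this family of methods can be sketched compactly: eigendecompose the kinship matrix once, rotate the phenotype and genotypes so the covariance becomes diagonal, then test each SNP by fast weighted least squares. The sketch below uses synthetic data and takes the variance ratio delta as given (EMMAX estimates it once under the null); it illustrates the idea only and is not the EMMAX implementation.

```python
import numpy as np

# Sketch of the variance-component idea behind EMMAX (synthetic data):
# rotating by the eigenvectors of the kinship matrix K diagonalizes the
# covariance sigma_g^2 * K + sigma_e^2 * I, after which each SNP can be
# tested with cheap weighted least squares. delta = sigma_e^2 / sigma_g^2
# is treated as known here.

rng = np.random.default_rng(4)
n, m = 200, 500                          # individuals, SNPs
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)
Gc = G - G.mean(axis=0)                  # column-centered genotypes
K = Gc @ Gc.T / m                        # simple realized kinship matrix
y = rng.standard_normal(n) + G[:, 0] * 0.8
y = y - y.mean()                         # center phenotype (intercept absorbed)

delta = 1.0                              # variance ratio, assumed known
vals, vecs = np.linalg.eigh(K)           # K = U diag(vals) U^T
w = 1.0 / (vals + delta)                 # inverse variances after rotation
yt = vecs.T @ y
stats = np.empty(m)
for j in range(m):
    xt = vecs.T @ Gc[:, j]
    xtwx = np.sum(w * xt * xt)
    beta = np.sum(w * xt * yt) / xtwx
    resid = yt - beta * xt
    sigma2 = np.sum(w * resid * resid) / (n - 2)
    stats[j] = beta / np.sqrt(sigma2 / xtwx)
print("SNP with largest |t|:", int(np.argmax(np.abs(stats))))  # expect 0
```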

  12. Time lag model for batch bioreactor simulation accounting the effect of micro-organism mortality.

    PubMed

    Zahariev, Andrey; Kiskinov, Hristo; Angelov, Angel; Zlatev, Stoyan

    2015-01-02

    In the present work, a generalization of the classical Monod model is proposed that accounts for the influence of both delayed and instant mortality on the dynamics of the micro-organism population. The model was analysed and compared with respect to its quality and applicability for simulating the cultivation process of micro-organisms. Existence of a unique global positive solution of the Cauchy problem for the proposed model is proved, and explicit relations between the decay parameters and the nutrient substrate concentration are obtained. These mathematical results allow us to calculate the nutrient substrate concentration which guarantees that the biomass concentration is maximal for each specific taxonomic group of micro-organisms (bacteria, yeasts).
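
    A numerical sketch of such a model is below: Monod growth with an instant death term plus a delayed death term evaluated at t - tau, integrated with fixed-step Euler and a history buffer (a delay differential equation, so standard ODE solvers do not apply directly). The exact delay form and all parameter values are assumptions, not the paper's formulation.

```python
import numpy as np

# Sketch of a Monod-type batch model with instant and delayed mortality,
# integrated with fixed-step Euler and a history buffer for the delayed
# biomass (scipy has no built-in DDE solver). Parameters are assumed.

mu_max, Ks, Y = 0.5, 0.2, 0.5      # Monod parameters (1/h, g/L, g/g)
k_i, k_d, tau = 0.02, 0.05, 5.0    # instant/delayed death rates, delay (h)
dt, T = 0.01, 48.0
steps = int(T / dt)
lag = int(tau / dt)

x = np.empty(steps + 1); s = np.empty(steps + 1)
x[0], s[0] = 0.05, 2.0             # initial biomass and substrate (g/L)
for n in range(steps):
    mu = mu_max * s[n] / (Ks + s[n])
    x_delayed = x[n - lag] if n >= lag else x[0]   # history for t < tau
    dx = (mu - k_i) * x[n] - k_d * x_delayed
    ds = -mu * x[n] / Y
    x[n + 1] = max(x[n] + dt * dx, 0.0)
    s[n + 1] = max(s[n] + dt * ds, 0.0)

print(f"peak biomass {x.max():.3f} g/L at t = {x.argmax()*dt:.1f} h")
```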

  13. Micromechanical modeling of elastic properties of cortical bone accounting for anisotropy of dense tissue.

    PubMed

    Salguero, Laura; Saadat, Fatemeh; Sevostianov, Igor

    2014-10-17

    The paper analyzes the connection between the microstructure of osteonal cortical bone and its overall elastic properties. Existing models either neglect the anisotropy of the dense tissue or simplify the cortical bone microstructure (accounting for Haversian canals only). These simplifications (related mostly to an insufficient mathematical apparatus) complicate quantitative analysis of the effect of microstructural changes - produced by age, microgravity, or some diseases - on the overall mechanical performance of cortical bone. The present analysis fills this gap; it accounts for the anisotropy of the dense tissue and uses a realistic model of the porous microstructure. The approach is based on recent results of Sevostianov et al. (2005) and Saadat et al. (2012) on inhomogeneities in a transversely isotropic material. The bone microstructure is modeled following the descriptions of Martin and Burr (1989), Currey (2002), and Fung (1993) and includes four main families of pores. The calculated elastic constants for porous cortical bone are in agreement with available experimental data. The influence of each pore type on the overall moduli is examined.

  14. A parametric ribcage geometry model accounting for variations among the adult population.

    PubMed

    Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2016-09-06

    The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, sternum, and thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized Procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by the height (p=0.000) and age-sex-interaction (p=0.007) and the ribcage shape was significantly affected by the age (p=0.0005), sex (p=0.0002), height (p=0.0064) and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying effects of human characteristics on thoracic injury risks.

  15. Improvement of the integration of Soil Moisture Accounting into the NRCS-CN model

    NASA Astrophysics Data System (ADS)

    Durán-Barroso, Pablo; González, Javier; Valdés, Juan B.

    2016-11-01

    Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management, as it allows the identification, forecasting and explanation of the watershed response. This non-linear process depends on the watershed antecedent conditions, which are commonly related to the initial soil moisture content. Although several studies have highlighted the relevance of soil moisture measurements for improving flood modelling, the literature is still debating which approach to use in lumped models. The integration of these antecedent conditions in the widely used NRCS-CN rainfall-runoff model (the Natural Resources Conservation Service Curve Number model) can be handled in two ways: using the Antecedent Precipitation Index (API) concept to modify the model parameter, or alternatively embedding a Soil Moisture Accounting (SMA) procedure into the NRCS-CN, with soil moisture as a state variable. In the second option, the state variable has no direct physical representation, which makes it difficult to estimate the initial soil moisture store level. This paper presents a new formulation that overcomes this issue: the rainfall-runoff model called RSSa. Its suitability is evaluated by comparing the RSSa model with the original NRCS-CN model and alternative SMA procedures in 12 watersheds located in six different countries, with climatic conditions ranging from Mediterranean to semi-arid. The analysis shows that the new model, RSSa, performs better than previously proposed CN-based models. Finally, an assessment is made of the influence of the soil moisture parameter for each watershed and the relative weight of scale effects on model parameterization.
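
    For context, the sketch below shows the standard NRCS-CN event runoff equation together with a toy soil-moisture-accounting rule that scales the retention parameter with a store level between events. The SMA rule is a deliberately crude placeholder, not the RSSa formulation from the paper.

```python
# Minimal sketch of the NRCS-CN event runoff equation plus a toy SMA twist:
# the retention parameter S is scaled by the current moisture store before
# each event. The SMA rule here is an illustrative placeholder.

def cn_runoff(P, S, lam=0.2):
    """Event runoff Q (mm) from rainfall P (mm) and retention S (mm)."""
    Ia = lam * S                       # initial abstraction
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

CN = 75.0
S_dry = 25400.0 / CN - 254.0           # retention (mm) for the tabulated CN

moisture = 0.3                         # store level in [0, 1], assumed state
for P in [20.0, 50.0, 90.0]:
    S_now = S_dry * (1.0 - moisture)   # wetter soil -> less retention (toy rule)
    Q = cn_runoff(P, S_now)
    # crude accounting: infiltrated water refills the store, which then drains
    moisture = min(1.0, moisture + (P - Q) / S_dry) * 0.9
    print(f"P = {P:5.1f} mm  Q = {Q:5.1f} mm  store = {moisture:.2f}")
```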

  16. Does homologous reinfection drive multiple-wave influenza outbreaks? Accounting for immunodynamics in epidemiological models.

    PubMed

    Camacho, A; Cazelles, B

    2013-12-01

    Epidemiological models of influenza transmission usually assume that recovered individuals instantly develop a fully protective immunity against the infecting strain. However, recent studies have highlighted host heterogeneity in the development of this immune response, characterized by delay and even absence of protection, that could lead to homologous reinfection (HR). Here, we investigate how these immunological mechanisms at the individual level shape the epidemiological dynamics at the population level. In particular, because HR was observed during the successive waves of past pandemics, we assess its role in driving multiple-wave influenza outbreaks. We develop a novel mechanistic model accounting for host heterogeneity in the immune response. Immunological parameters are inferred by fitting our dynamical model to a two-wave influenza epidemic that occurred on the remote island of Tristan da Cunha (TdC) in 1971, and during which HR occurred in 92 of 284 islanders. We then explore the dynamics predicted by our model for various population settings. We find that our model can explain HR over both short (e.g. week) and long (e.g. month) time-scales, as reported during past pandemics. In particular, our results reveal that the HR wave on TdC was a natural consequence of the exceptional contact configuration and high susceptibility of this small and isolated community. By contrast, in larger, less mixed and partially protected populations, HR alone cannot generate multiple-wave outbreaks. However, in the latter case, we find that a significant proportion of infected hosts would remain unprotected at the end of the pandemic season and should therefore benefit from vaccination. Crucially, we show that failing to account for these unprotected individuals can lead to large underestimation of the magnitude of the first post-pandemic season. These results are relevant in the context of the 2009 A/H1N1 influenza post-pandemic era.

  17. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide reliable predictions of watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function yielded better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years have different information content in the hydrological observations. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
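
    The calibration loop itself is easy to sketch: a toy two-parameter linear-reservoir model stands in for the hydrologic models, and a global optimizer searches for the parameters minimizing 1 - NSE. SciPy's differential evolution is used here as a readily available stand-in for SCE-UA, Micro-GA or DREAM; the data are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Sketch of global-optimization calibration: a two-parameter linear
# reservoir stands in for the hydrologic model, and differential evolution
# (a stand-in for SCE-UA / Micro-GA / DREAM) minimizes 1 - NSE.

rng = np.random.default_rng(5)
rain = rng.gamma(0.3, 8.0, size=365)                 # daily rainfall (mm)

def simulate(params, rain):
    k, frac = params                                 # recession, runoff fraction
    store, q = 0.0, np.empty_like(rain)
    for t, p in enumerate(rain):
        store += frac * p
        q[t] = k * store
        store -= q[t]
    return q

q_obs = simulate([0.3, 0.6], rain) + 0.2 * rng.standard_normal(365)

def neg_nse(params):
    q_sim = simulate(params, rain)
    return np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

result = differential_evolution(neg_nse, bounds=[(0.01, 0.99), (0.1, 1.0)],
                                seed=0)
print("calibrated k, frac:", result.x.round(3),
      " NSE =", round(1 - result.fun, 3))
```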

  18. Accounting for sex differences in PTSD: A multi-variable mediation model

    PubMed Central

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

    Background: Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. Objective: The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method: The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). Mediation was examined using an analysis designed specifically to test a multiple mediator model. Results: Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusions: The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders.

  19. Toward a formalized account of attitudes: The Causal Attitude Network (CAN) model.

    PubMed

    Dalege, Jonas; Borsboom, Denny; van Harreveld, Frenk; van den Berg, Helma; Conner, Mark; van der Maas, Han L J

    2016-01-01

    This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions between these reactions arise through direct causal influences (e.g., the belief that snakes are dangerous causes fear of snakes) and mechanisms that support evaluative consistency between related contents of evaluative reactions (e.g., people tend to align their belief that snakes are useful with their belief that snakes help maintain ecological balance). In the CAN model, the structure of attitude networks conforms to a small-world structure: evaluative reactions that are similar to each other form tight clusters, which are connected by a sparser set of "shortcuts" between them. We argue that the CAN model provides a realistic formalized measurement model of attitudes and therefore fills a crucial gap in the attitude literature. Furthermore, the CAN model provides testable predictions for the structure of attitudes and how they develop, remain stable, and change over time. Attitude strength is conceptualized in terms of the connectivity of attitude networks, and we show that this provides a parsimonious account of the differences between strong and weak attitudes. We discuss the CAN model in relation to possible extensions, implications for the assessment of attitudes, and possibilities for further study.

  20. Modeling Tree Growth Taking into Account Carbon Source and Sink Limitations

    PubMed Central

    Hayat, Amaury; Hacket-Pain, Andrew J.; Pretzsch, Hans; Rademacher, Tim T.; Friend, Andrew D.

    2017-01-01

    Increasing CO2 concentrations are strongly controlled by the behavior of established forests, which are believed to be a major current sink of atmospheric CO2. There are many models which predict forest responses to environmental changes, but they are almost exclusively carbon source (i.e., photosynthesis) driven. Here we present a model for an individual tree that takes into account the intrinsic limits of meristems and cellular growth rates, as well as control mechanisms within the tree that influence its diameter and height growth over time. This new framework is built on process-based understanding combined with differential equations solved by numerical methods. Our aim is to construct a model framework of tree growth to replace current formulations in Dynamic Global Vegetation Models, and so address the issue of the terrestrial carbon sink. Our approach was successfully tested for stands of beech trees at two different sites representing part of a long-term forest yield experiment in Germany. This model provides new insights into tree growth and limits to tree height, and addresses limitations of previous models with respect to sink-limited growth. PMID:28377773

  1. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    PubMed Central

    2011-01-01

    Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric physician-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment. PMID:21974866

  2. Investigating a race model account of executive control in rats with the countermanding paradigm.

    PubMed

    Beuk, J; Beninger, R J; Paré, M

    2014-03-28

    The countermanding paradigm investigates the ability to withhold a response when a stop signal is occasionally presented. The race model (Logan and Cowan, 1984) was developed to account for performance in humans and to estimate the stop signal response time (SSRT). This model has yet to be fully validated for countermanding performance in rats. Furthermore, response adjustments observed in human performance of the task have not been examined in rodents. Male Wistar rats were trained to respond to a visual stimulus (go signal) by pressing a lever below that stimulus, but to countermand the lever press (25% of trials) following an auditory tone (stop signal) presented after a variable delay. We found decreased inhibitory success as stop signal delay (SSD) increased and estimated an SSRT of 157 ms. As expected by the race model, response times (RTs) of movements that escaped inhibition: (1) were faster than responses made in the absence of a stop signal; (2) lengthened with increasing SSD; and (3) were predictable by the race model. In addition, responses were slower after stop trial errors, suggestive of error monitoring. Amphetamine (AMPH) (0.25, 0.5 mg/kg) resulted in faster go trial RTs, baseline-dependent changes in SSRT, and attenuated response adjustments. These findings demonstrate that the race model of countermanding performance, applied successfully in human and nonhuman primate studies, can be employed in the countermanding performance of rodents. This is the first study to reveal response adjustments, and AMPH-induced alterations of response adjustments, in rodent countermanding.
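
    A common way to extract SSRT under the race model is the integration method: find the go-RT quantile corresponding to the observed probability of responding on stop trials, then subtract the mean SSD. The sketch below applies it to synthetic data; the RT distributions and the 'true' SSRT used to generate the stop outcomes are assumptions.

```python
import numpy as np

# Sketch of the race-model 'integration method' for estimating SSRT from
# synthetic countermanding data. All distributions and values are assumed.

rng = np.random.default_rng(6)
go_rt = rng.lognormal(mean=5.8, sigma=0.2, size=2000)   # go-trial RTs (ms)
ssd = np.repeat([100.0, 150.0, 200.0, 250.0], 50)       # stop-trial SSDs (ms)

# synthetic stop outcomes: the response escapes inhibition whenever the go
# process finishes before the stop process (SSD + true SSRT)
ssrt_true = 160.0
escaped = rng.lognormal(5.8, 0.2, size=ssd.size) < ssd + ssrt_true

p_respond = escaped.mean()                 # probability of failed inhibition
nth_rt = np.quantile(go_rt, p_respond)     # point where the stop process wins
ssrt_est = nth_rt - ssd.mean()
print(f"P(respond|stop) = {p_respond:.2f}, estimated SSRT = {ssrt_est:.0f} ms")
```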

  3. Advances in stream shade modelling. Accounting for canopy overhang and off-centre view

    NASA Astrophysics Data System (ADS)

    Davies-Colley, R.; Meleason, M. A.; Rutherford, K.

    2005-05-01

    Riparian shade controls the stream thermal regime and light for photosynthesis of stream plants. The quantity difn (diffuse non-interceptance), defined as the proportion of incident lighting received under a sky of uniform brightness, is useful for general specification of stream light exposure, having the virtue that it can be measured directly with common light sensors of appropriate spatial and spectral character. A simple model (implemented in EXCEL-VBA) (Davies-Colley & Rutherford, Ecol. Engrg, in press) successfully reproduces the broad empirical trend of decreasing difn at the channel centre with increasing ratio of canopy height to stream width. We have now refined this model to account for (a) foliage overhanging the channel (for trees of different canopy form), and (b) off-centre view of the shade (rather than just the channel centre view). We use two extreme geometries bounding real (meandering) streams: the 'canyon' model simulates an infinite straight canal, whereas the 'cylinder' model simulates a stream meandering so tightly that its geometry collapses into an isolated pool in the forest. The model has been validated using a physical 'rooftop' model of the cylinder case, with which it is possible to measure shade with different geometries.

  4. Implementation of a cost-accounting model in a biobank: practical implications.

    PubMed

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use them more efficiently are making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences on the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real, active biobank, including a comprehensive explanation of how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. The global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the level of governmental research and development policies.

  5. Superconfiguration accounting approach versus average-atom model in local-thermodynamic-equilibrium highly ionized plasmas.

    PubMed

    Faussurier, G

    1999-06-01

    Statistical methods for describing and simulating complex ionized plasmas require the development of reliable and computationally tractable models. In that spirit, we propose the screened-hydrogenic average atom, augmented with corrections resulting from fluctuations of the occupation probabilities around the mean-field equilibrium, as an approximation for calculating the grand potential and related statistical properties. Our main objective is to check the validity of this approach by comparing its predictions with those given by the superconfiguration accounting method. The latter is well suited to this purpose: it makes it possible to go beyond the mean-field model using nonperturbative, analytic, and systematic techniques, and it allows us to establish the relationship between the detailed configuration accounting and average-atom methods. To our knowledge, this is the first time that the superconfiguration description has been used in this context. Finally, this study also presents a powerful technique from analytic number theory for calculating superconfiguration-averaged quantities.

  6. An extended car-following model accounting for the average headway effect in intelligent transportation system

    NASA Astrophysics Data System (ADS)

    Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming

    2017-04-01

    In this paper, an extended car-following model is proposed to simulate traffic flow by considering the average headway of the preceding vehicle group in an intelligent transportation system environment. The stability condition of this model is obtained using linear stability analysis. The phase diagram can be divided into three regions, classified as stable, metastable and unstable. The theoretical result shows that the average headway plays an important role in improving the stability of the traffic system. The mKdV equation near the critical point is derived by applying the reductive perturbation method, to describe the evolution properties of traffic density waves. Furthermore, the simulation of the space-time evolution of vehicle headways shows that traffic jams can be suppressed efficiently by taking the average headway effect into account, and the analytical result is consistent with the simulation.
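
    The averaging mechanism is easy to simulate. The sketch below runs an optimal-velocity style car-following model on a ring road in which each driver reacts to the mean headway of the next m gaps; the OV function and constants are common illustrative choices rather than the paper's calibrated model, but averaging over a larger group visibly damps the growth of jam waves.

```python
import numpy as np

# Sketch of an optimal-velocity car-following model on a ring road, extended
# so each driver reacts to the average headway of the m gaps ahead. The OV
# function and all constants are illustrative choices.

N, L = 100, 1500.0                 # cars, ring length (m)
a, dt, steps = 0.8, 0.1, 20000     # sensitivity (1/s), time step (s), steps

def ov(dx, v_max=15.0, hc=15.0):
    """Optimal velocity as a function of headway (m/s)."""
    return 0.5 * v_max * (np.tanh(0.1 * (dx - hc)) + np.tanh(0.1 * hc))

def run(m):
    x = np.linspace(0.0, L, N, endpoint=False)
    x[0] += 2.0                    # small perturbation to trigger waves
    v = np.full(N, ov(L / N))
    for _ in range(steps):
        gaps = (np.roll(x, -1) - x) % L
        avg = sum(np.roll(gaps, -j) for j in range(m)) / m   # m gaps ahead
        v += dt * a * (ov(avg) - v)
        x = (x + dt * v) % L
    return np.std((np.roll(x, -1) - x) % L)

for m in [1, 3, 5]:
    print(f"group size m = {m}: final headway std = {run(m):6.2f} m")
```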

  7. Discrete-layered damping model of multilayer plate with account of internal damping

    NASA Astrophysics Data System (ADS)

    Paimushin, V. N.; Gazizullin, R. K.

    2016-11-01

    The construction of a discrete-layered damping model of a multilayer plate under small displacements and deformations is presented, with the internal damping of the layers described by the Thompson-Kelvin-Voigt model. Based on the derived equations, an analytical solution is given for the static damping problem of a simply supported single-layer rectangular plate subjected to a uniformly distributed pressure applied to one of its boundary planes. Convergence of the obtained solution to the three-dimensional case is analysed with respect to the mesh dimension in the thickness direction of the plate. For thin plates, a dimension reduction of the formulated problem is established on the basis of simplifying hypotheses applied to each layer.

  8. Water accounting for stressed river basins based on water resources management models.

    PubMed

    Pedro-Monzonís, María; Solera, Abel; Ferrer, Javier; Andreu, Joaquín; Estrela, Teodoro

    2016-09-15

    Water planning and Integrated Water Resources Management (IWRM) represent the best way to help decision makers identify and choose the most adequate alternatives among the possible ones. The System of Environmental-Economic Accounting for Water (SEEA-W) serves as a tool for building water balances in a river basin, providing a standard approach to achieve comparability of results between different territories. The aim of this paper is to present the development of a tool that enables the combined use of hydrological models and water resources models to fill in the SEEA-W tables. At every step of the modelling chain, we are able to build the asset accounts and the physical water supply and use tables according to the SEEA-W approach, along with an estimation of water service costs. The case study is the Jucar River Basin District (RBD), located in the eastern part of the Iberian Peninsula in Spain, which, like many other Mediterranean basins, is currently water-stressed. To guide this work we have used the PATRICAL model in combination with the AQUATOOL Decision Support System (DSS). The results indicate that in an average year the total use of water in the district amounts to 15,143 hm³/year, with total renewable water resources of 3909 hm³/year. The water service costs in the Jucar RBD amount to 1634 million € per year at constant 2012 prices. It is noteworthy that 9% of these costs correspond to non-conventional resources, such as desalinated water, reused water and water transferred from other regions.

  9. Modeling energy expenditure and oxygen consumption in human exposure models: accounting for fatigue and EPOC.

    PubMed

    Isaacs, Kristin; Glen, Graham; Mccurdy, Thomas; Smith, Luther

    2008-05-01

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled individual's physical activity level as described in an activity diary. Activity level is quantified via standardized values of metabolic equivalents of work (METS) for the activity being performed and converted into activity-specific oxygen consumption estimates. However, oxygen consumption remains elevated after a moderate- or high-intensity activity is completed. This effect, which is termed excess post-exercise oxygen consumption (EPOC), requires upward adjustment of the METS estimates that follow high-energy expenditure events, to model subsequent increased ventilation and intake dose rates. In addition, since an individual's capacity for work decreases during extended activity, methods are also required to adjust downward those METS estimates that exceed physiologically realistic limits over time. A unified method for simultaneously performing these adjustments is developed. The method simulates a cumulative oxygen deficit for each individual and uses it to impose appropriate time-dependent reductions in the METS time series and additions for EPOC. The relationships between the oxygen deficit and METS limits are nonlinear and are derived from published data on work capacity and oxygen consumption. These modifications result in improved modeling of ventilation patterns, and should improve intake dose estimates associated with exposure to airborne environmental contaminants.
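
    One way to realize this unified adjustment is sketched below: march through a METS diary, cap each entry at a work-capacity limit that falls as a cumulative oxygen deficit grows, and add an EPOC term that repays part of the deficit after hard activity. The limit curve, thresholds and rate constants are illustrative assumptions, not the published relationships derived from work-capacity data.

```python
import numpy as np

# Sketch of the unified fatigue/EPOC adjustment of a METS diary. All
# curves and constants below are illustrative assumptions.

mets = np.array([1.5] * 4 + [8.0] * 6 + [1.2] * 20)  # diary, 5-min steps

def capacity_limit(deficit):
    """Assumed ceiling on sustainable METS as the oxygen deficit grows."""
    return max(10.0 - 3.0 * deficit, 1.0)

deficit = 0.0
adjusted = np.empty_like(mets)
for i, m in enumerate(mets):
    m = min(m, capacity_limit(deficit))      # fatigue: cap high METS
    epoc = 0.2 * deficit                     # EPOC: slow repayment term
    adjusted[i] = m + epoc
    # deficit accumulates during hard work (above ~5 METS) and is repaid
    deficit = max(deficit + 0.1 * (m - 5.0) - epoc, 0.0)

print("raw diary mean METS     :", round(mets.mean(), 2))
print("adjusted diary mean METS:", round(adjusted.mean(), 2))
```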

  10. A gene network model accounting for development and evolution of mammalian teeth.

    PubMed

    Salazar-Ciudad, Isaac; Jernvall, Jukka

    2002-06-11

Generation of morphological diversity remains a challenge for evolutionary biologists because it is unclear how an ultimately finite number of genes involved in initial pattern formation integrates with morphogenesis. Ideally, models used to search for the simplest developmental principles of how genes produce form should account for both the developmental process and evolutionary change. Here we present a model reproducing the morphology of mammalian teeth by integrating experimental data on gene interactions and growth into a morphodynamic mechanism in which developing morphology has a causal role in patterning. The model predicts the course of tooth-shape development in different mammalian species and also reproduces key transitions in evolution. Furthermore, we reproduce the known expression patterns of several genes involved in tooth development and their dynamics over developmental time. According to this model, large morphological effects can frequently be achieved by small changes, and similar morphologies can be produced by different changes. This may help to explain why predicting the morphological outcomes of molecular experiments is challenging. Nevertheless, models incorporating morphology and gene activity show promise for linking genotypes to phenotypes.

  11. A kinetic model for type I and II IP3R accounting for mode changes.

    PubMed

    Siekmann, Ivo; Wagner, Larry E; Yule, David; Crampin, Edmund J; Sneyd, James

    2012-08-22

Based upon an extensive single-channel data set, a Markov model for types I and II inositol trisphosphate receptors (IP₃R) is developed. The model aims to represent accurately the kinetics of both receptor types as functions of the concentrations of inositol trisphosphate (IP₃), adenosine trisphosphate (ATP), and intracellular calcium (Ca²⁺). In particular, the model takes into account that, for some combinations of ligands, the IP₃R alternates between extended periods of inactivity and intervals of bursting activity (mode changes). In a first step, the inactive and active modes are modeled separately. It is found that, within modes, both receptor types are ligand-independent. In a second step, the submodels are connected by transition rates. Ligand-dependent regulation of the channel activity is achieved by modulating these transitions between active and inactive modes. As a result, a compact representation of the IP₃R is obtained that accurately captures stochastic single-channel dynamics, including mode changes, in a model with six states and 10 rate constants, only two of which are ligand-dependent.
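
    A mode-switching channel of this kind is naturally simulated with the Gillespie algorithm. The sketch below uses a deliberately tiny three-state chain (an inactive "park" mode plus an active mode with a brief open state); the state names and rate values are illustrative placeholders, not the published six-state scheme or its fitted constants.

        import random

        RATES = {  # per-second transition rates: state -> {target: rate}
            "park":  {"drive": 0.5},              # ligand-modulated in the paper
            "drive": {"park": 2.0, "open": 50.0},
            "open":  {"drive": 100.0},
        }

        def gillespie(state="park", t_end=1.0, seed=1):
            """Simulate one continuous-time Markov chain trajectory."""
            random.seed(seed)
            t, path = 0.0, [(0.0, state)]
            while t < t_end:
                targets = RATES[state]
                total = sum(targets.values())
                t += random.expovariate(total)        # exponential waiting time
                r, acc = random.uniform(0.0, total), 0.0
                for target, rate in targets.items():  # choose next state by rate
                    acc += rate
                    if r <= acc:
                        state = target
                        break
                path.append((t, state))
            return path

        for t, s in gillespie()[:10]:
            print(f"{t:7.4f}  {s}")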

  12. An agent-based simulation model to study accountable care organizations.

    PubMed

    Liu, Pai; Wu, Shinyi

    2016-03-01

Creating accountable care organizations (ACOs) has been widely discussed as a strategy to control rapidly rising healthcare costs and improve quality of care; however, building an effective ACO is a complex process involving multiple stakeholders (payers, providers, patients) with their own interests. Moreover, implementation of an ACO is costly in terms of time and money, and an immature design could cause safety hazards. Therefore, there is a need for analytical, model-based decision-support tools that can predict the outcomes of different strategies to facilitate ACO design and implementation. In this study, an agent-based simulation model was developed to study ACOs that considers payers, healthcare providers, and patients as agents under the shared-savings payment model of care for congestive heart failure (CHF), one of the most expensive causes of sometimes-preventable hospitalizations. The agent-based simulation model identified the critical determinants of payment model design that can motivate provider behavior changes to achieve maximum financial and quality outcomes for an ACO. The results show nonlinear provider behavior change patterns corresponding to changes in payment model designs. The outcomes vary across providers with different quality or financial priorities, and are most sensitive to the cost-effectiveness of the CHF interventions that an ACO implements. This study demonstrates an increasingly important method of constructing a healthcare system analytics model that can help inform health policy and healthcare management decisions. The study also points out that the likely success of an ACO is interdependent with payment model design, provider characteristics, and the cost and effectiveness of healthcare interventions.

  13. A comparison of two diffusion process models in accounting for payoff and stimulus frequency manipulations.

    PubMed

    Leite, Fábio P

    2012-08-01

I analyzed response time and accuracy data from a numerosity discrimination experiment in which both stimulus frequency and payoff structure were manipulated. The numerosity discrimination involved responding either "low" or "high" to the number of asterisks in a 10 × 10 grid, on the basis of an experimenter-determined decision cutoff (fixed at 50). In the stimulus frequency condition, there were more low than high stimuli in some blocks and more high than low stimuli in other blocks. In the payoff condition, responses were rewarded such that the relative value of a stimulus mimicked the relative frequency of that stimulus in the previous manipulation. I modeled the data using two sequential-sampling frameworks in which evidence was accumulated until either a "low" or a "high" decision criterion was reached and a response was initiated: a single-stage diffusion model and a two-stage diffusion model. The goal in using these two frameworks was to examine their relative merits across the stimulus frequency and payoff structure manipulations. I found that shifts in starting point in the single-stage diffusion framework and shifts in initial drift rate in the two-stage model were both able to account for the data. I also found, however, that these two shifts produced similar changes in the random walk that described the decision process. In conclusion, the similarities in the descriptions of the decision processes make it difficult to choose between the two models; I suggest that such a choice should consider model assumptions and the interpretation of parameter estimates.
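
    The mechanics of a starting-point shift are easy to demonstrate. The Python sketch below simulates a single-stage diffusion decision as a discrete random walk whose starting point is displaced toward the more frequent (or more rewarded) response; the drift, noise, and bound values are arbitrary illustrative choices, not estimates from the experiment.

        import random

        def trial(drift=0.05, start_bias=0.3, bound=1.0, dt=0.01, sd=0.1):
            """One decision: evidence walks from a biased start to a bound."""
            x, t = start_bias * bound, 0.0   # start shifted toward the "high" bound
            while abs(x) < bound:
                x += drift * dt + random.gauss(0.0, sd) * dt ** 0.5
                t += dt
            return ("high" if x > 0 else "low"), t

        random.seed(0)
        results = [trial() for _ in range(2000)]
        p_high = sum(r == "high" for r, _ in results) / len(results)
        mean_t = sum(t for _, t in results) / len(results)
        print(f"P(high) = {p_high:.2f}, mean decision time = {mean_t:.2f}")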

  14. Radiative opacities and configuration interaction effects of hot iron plasma using a detailed term accounting model

    NASA Astrophysics Data System (ADS)

    Jin, Fengtao; Zeng, Jiaolong; Yuan, Jianmin

    2003-12-01

We have calculated the radiative opacities of iron plasma in local thermodynamic equilibrium using a detailed term accounting model. The extensive atomic data are obtained by the multiconfiguration Hartree-Fock (MCHF) method with Breit-Pauli relativistic corrections. Extensive configuration interaction (CI) has been included, based on LS coupling, to obtain energy levels and bound-bound transition cross sections. A detailed configuration accounting model is applied to evaluate the bound-free absorption cross sections. We simulate two experimental transmission spectra [G. Winhart et al., Phys. Rev. E 53, R1332 (1996); P. T. Springer et al., J. Quant. Spectrosc. Radiat. Transf. 58, 927 (1997)] to verify our calculation model: one at a temperature of 22 eV and a density of 10⁻² g/cm³, the other at a temperature of 20 eV and a lower density of 10⁻⁴ g/cm³. It is shown that strong CI can effectively change the oscillator strengths relative to the single-configuration HF method. For both simulated transmission spectra, good agreement is obtained between the present MCHF results and the experimental data. Spectrally resolved opacities and Planck and Rosseland mean opacities are also calculated. For the isothermal sequence at T = 20 eV, as the density decreases from 10⁻² to 10⁻⁵ g/cm³, the linewidth also decreases, so the iron transition arrays show more discrete line structures and the linewidth becomes very important to the Rosseland mean opacity.
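
    For reference, the two mean opacities computed here have the standard definitions (B_nu is the Planck function and kappa_nu the spectral opacity):

        \[
        \kappa_P = \frac{\int_0^\infty \kappa_\nu\, B_\nu(T)\, d\nu}{\int_0^\infty B_\nu(T)\, d\nu},
        \qquad
        \frac{1}{\kappa_R} = \frac{\int_0^\infty \kappa_\nu^{-1}\, \dfrac{\partial B_\nu}{\partial T}\, d\nu}{\int_0^\infty \dfrac{\partial B_\nu}{\partial T}\, d\nu}.
        \]

    Because the Rosseland mean is a harmonic average, it is dominated by the most transparent frequencies, which is why the narrowing of lines at low density matters so much for it.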

  15. Regional collaboration as a model for fostering accountability and transforming health care.

    PubMed

    Speir, Alan M; Rich, Jeffrey B; Crosby, Ivan; Fonner, Edwin

    2009-01-01

An era of increasing budgetary constraints, misaligned payers and providers, and a competitive system in which United States health outcomes are outpaced by less well-funded nations is motivating policy-makers to seek more effective means of promoting cost-effective delivery and accountability. This article illustrates an effective working model of regional collaboration focused on improving health outcomes, containing costs, and making efficient use of resources in cardiovascular surgical care. The Virginia Cardiac Surgery Quality Initiative is a decade-old collaboration of cardiac surgeons and hospital providers in Virginia working to improve outcomes and contain costs by analyzing comparative data, identifying top performers, and replicating best clinical practices on a statewide basis. The group's goals and objectives, along with two generations of performance improvement initiatives, are examined. These involve attempts to improve postoperative outcomes and to use tools for decision support and modeling. This work has led the group to espouse a more integrated approach to performance improvement and to formulate the principles of a quality-focused payment system, one in which collaboration promotes regional accountability to deliver quality care on a cost-effective basis. The Virginia Cardiac Surgery Quality Initiative has attempted to test a global pricing model and has implemented a pay-for-performance program in which physicians and hospitals are aligned with common objectives. Although this collaborative approach is a work in progress, the authors point out preconditions applicable to other regions and medical specialties. A road map of short-term next steps is needed to create an adaptive payment system tied to the national agenda for reforming the delivery system.

  16. FPLUME-1.0: An integral volcanic plume model accounting for ash aggregation

    NASA Astrophysics Data System (ADS)

    Folch, A.; Costa, A.; Macedonio, G.

    2016-02-01

Eruption source parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1-D (one-dimensional) cross-section-averaged eruption column model based on buoyant plume theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients, and a model for wet aggregation of ash particles in the presence of liquid water or ice. When wet aggregation occurs, the model predicts an effective grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the effective particle grain size distribution resulting from any wet aggregation within the plume. As test cases we apply the model to eruptive phase B of the 4 April 1982 El Chichón eruption (Mexico) and the 6 May 2010 phase of the Eyjafjallajökull eruption (Iceland). The modular structure of the code will facilitate the implementation, in future versions, of more quantitative ash aggregation parameterizations as further observational and experimental data become available to constrain ash aggregation processes.
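
    For orientation, the core of buoyant plume theory that such column models extend is the classic set of conservation equations of Morton, Taylor and Turner for a windless Boussinesq plume; it is shown here as background, not as FPLUME's full equation set, with r the plume radius, u the axial velocity, g' the reduced gravity, alpha the entrainment coefficient and N the ambient buoyancy frequency:

        \[
        \frac{d}{dz}\left(r^2 u\right) = 2\alpha r u,
        \qquad
        \frac{d}{dz}\left(r^2 u^2\right) = r^2 g',
        \qquad
        \frac{d}{dz}\left(r^2 u g'\right) = -r^2 u N^2.
        \]

    FPLUME augments this core with wind-driven bending, moisture thermodynamics, and particle fallout, re-entrainment and aggregation terms.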

  18. A unifying modeling of plant shoot gravitropism with an explicit account of the effects of growth

    PubMed Central

    Bastien, Renaud; Douady, Stéphane; Moulia, Bruno

    2014-01-01

Gravitropism, the slow reorientation of plant growth in response to gravity, is a major determinant of the form and posture of land plants. Recently a universal model of shoot gravitropism, the AC model, was presented, in which the dynamics of the tropic movement are determined solely by the conflicting controls of (1) graviception, which tends to curve the plant toward the vertical, and (2) proprioception, which tends to keep the stem straight. This model was found to be valid for many species and over two orders of magnitude of organ size. However, the motor of the movement, the elongation, was purposely neglected in the AC model. If growth effects are to be taken into account, it is necessary to consider the material derivative, i.e., the rate of change of curvature bound to expanding and convected organ elements. Here we show that it is possible to rewrite the material equation of curvature in a compact simplified form that directly expresses the curvature variation as a function of the median elongation and of the distribution of the differential growth. By using this extended model, called the ACĖ model, growth is found to have two main destabilizing effects on the tropic movement: (1) passive orientation drift, which occurs when a curved element elongates without differential growth, and (2) fixed curvature, when an element leaves the elongation zone and is no longer able to actively change its curvature. By comparing the AC and ACĖ models to experiments, these two effects are found to be negligible. Our results show that the simplified AC model can be used to analyze gravitropism and posture control in actively elongating plant organs without significant information loss. PMID:24782876
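
    For context, the AC model referred to here balances two terms; in a hedged restatement of the published form, the local curvature C(s, t) of the organ evolves as

        \[
        \frac{\partial C(s,t)}{\partial t} = -\beta \sin A(s,t) - \gamma\, C(s,t),
        \]

    where A(s, t) is the local inclination from the vertical, beta the graviceptive sensitivity and gamma the proprioceptive sensitivity; the ACĖ extension rewrites the left-hand side as a material derivative so that elongation and convection of organ elements enter explicitly.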

  19. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    NASA Astrophysics Data System (ADS)

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m⁻³. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ~2.3 and 2.7 φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine (< 0.063 mm) ash (3-59%), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
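
    The aggregation adjustment itself is a simple redistribution of mass. The Python sketch below moves the fine fraction (phi >= 4, i.e. < 0.063 mm) of a grain-size distribution into log-normal-in-phi aggregate bins; the median is set near the paper's best-fit ~2.4 phi, while the sigma value, bin width and toy input distribution are illustrative choices.

        import math

        def bin_weight(phi, mu, sigma):
            """Relative weight of a 1-phi-wide bin under a normal density in phi."""
            return math.exp(-0.5 * ((phi - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

        def aggregate(mass_by_phi, mu_agg=2.4, sigma_agg=1.0, fine_cutoff=4):
            """Move mass in bins phi >= fine_cutoff into aggregate-sized bins."""
            fine = sum(m for phi, m in mass_by_phi.items() if phi >= fine_cutoff)
            out = {phi: (0.0 if phi >= fine_cutoff else m) for phi, m in mass_by_phi.items()}
            w = {phi: bin_weight(phi, mu_agg, sigma_agg) for phi in out}
            total = sum(w.values())
            for phi in out:                       # redistribute fine mass as aggregates
                out[phi] += fine * w[phi] / total
            return out

        dist = {phi: 1.0 / 9 for phi in range(-2, 7)}   # toy uniform distribution
        print({phi: round(m, 3) for phi, m in aggregate(dist).items()})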

  20. Material Protection, Accounting, and Control Technologies (MPACT): Modeling and Simulation Roadmap

    SciTech Connect

    Cipiti, Benjamin; Dunn, Timothy; Durbin, Samual; Durkee, Joe W.; England, Jeff; Jones, Robert; Ketusky, Edward; Li, Shelly; Lindgren, Eric; Meier, David; Miller, Michael; Osburn, Laura Ann; Pereira, Candido; Rauch, Eric Benton; Scaglione, John; Scherer, Carolynn P.; Sprinkle, James K.; Yoo, Tae-Sic

    2016-08-05

The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy's (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal. This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility (a distributed test bed) that connects the individual tools being developed at national laboratories and university research establishments is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling. To aid in framing its long-term goal, a modeling and simulation roadmap was developed during FY16 for three major areas of investigation: (1) radiation transport and sensors, (2) process and chemical models, and (3) shock physics and assessments. For each area, current modeling approaches are described, and gaps and needs are identified.

  1. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    PubMed

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

Collecting and analyzing high-frequency emission measurements has become common during the past decade, as significantly more information on formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
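
    As a hedged illustration of this model class (the paper's exact specification may differ), a first-order logistic smooth transition regression relates the tailpipe measurement y_t to engine covariates x_t observed d_t time steps earlier, with the delay itself a smooth function of the exhaust flow rate F_t, for instance through a physically motivated inverse relationship (residence volume over flow):

        \[
        y_t = \boldsymbol{\beta}_0' \mathbf{x}_{t-d_t} + G(F_t;\gamma,c)\, \boldsymbol{\beta}_1' \mathbf{x}_{t-d_t} + \varepsilon_t,
        \qquad
        G(F;\gamma,c) = \frac{1}{1 + e^{-\gamma (F - c)}},
        \qquad
        d_t = \theta_0 + \frac{\theta_1}{F_t},
        \]

    where the pair (theta_0, theta_1) plays the role of the two delay coefficients estimated by least squares.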

  2. Hierarchical modeling of contingency-based source monitoring: a test of the probability-matching account.

    PubMed

    Arnold, Nina R; Bayen, Ute J; Kuhlmann, Beatrice G; Vaterrodt, Bianca

    2013-04-01

According to the probability-matching account of source guessing (Spaniol & Bayen, Journal of Experimental Psychology: Learning, Memory, and Cognition 28:631-651, 2002), when people do not remember the source of an item in a source-monitoring task, they match the source-guessing probabilities to the perceived contingencies between sources and item types. In a source-monitoring experiment, half of the items presented by each of two sources were consistent with schematic expectations about that source, whereas the other half were consistent with schematic expectations about the other source. Participants' source schemas were activated either at the time of encoding or just before the source-monitoring test. After the test, participants judged the contingency between item type and source. Individual parameter estimates of source guessing were obtained via beta-multinomial processing tree modeling (beta-MPT; Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010). We found a significant correlation between perceived contingency and source guessing, as well as a correlation between the deviation of the guessing bias from the true contingency and source memory, when participants did not receive the schema information until retrieval. These findings support the probability-matching account.

  3. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    PubMed

    Xu, Selene Yue; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2016-07-10

Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing-data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing-data patterns. We developed statistical methods to design imputation and variance-weighting algorithms that account for missing-data effects when fitting regression models. The bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared with ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve the analysis of accelerometer data.
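
    A toy version of the variance-weighting idea can be written in a few lines (a simplified stand-in, not the authors' algorithm): scale each person-day total by its wear fraction, then weight the regression so that days with more nonwear, whose imputed values are noisier, count less.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        bmi = rng.normal(27, 3, n)                         # covariate
        activity = 600 - 8 * bmi + rng.normal(0, 30, n)    # true daily counts
        wear = rng.uniform(0.5, 1.0, n)                    # fraction of day worn
        observed = activity * wear + rng.normal(0, 20, n)  # nonwear lowers totals

        imputed = observed / wear          # naive rescaling imputation
        X = np.column_stack([np.ones(n), bmi])
        W = np.diag(wear)                  # shorter wear -> noisier -> lower weight
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ imputed)
        print("intercept, BMI slope:", np.round(beta, 2))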

  4. Causal Inference in Occupational Epidemiology: Accounting for the Healthy Worker Effect by Using Structural Nested Models

    PubMed Central

    Naimi, Ashley I.; Richardson, David B.; Cole, Stephen R.

    2013-01-01

    In a recent issue of the Journal, Kirkeleit et al. (Am J Epidemiol. 2013;177(11):1218–1224) provided empirical evidence for the potential of the healthy worker effect in a large cohort of Norwegian workers across a range of occupations. In this commentary, we provide some historical context, define the healthy worker effect by using causal diagrams, and use simulated data to illustrate how structural nested models can be used to estimate exposure effects while accounting for the healthy worker survivor effect in 4 simple steps. We provide technical details and annotated SAS software (SAS Institute, Inc., Cary, North Carolina) code corresponding to the example analysis in the Web Appendices, available at http://aje.oxfordjournals.org/. PMID:24077092

  6. Keeping Accountability Systems Accountable

    ERIC Educational Resources Information Center

    Foote, Martha

    2007-01-01

    The standards and accountability movement in education has undeniably transformed schooling throughout the United States. Even before President Bush signed the No Child Left Behind (NCLB) Act into law in January 2002, mandating annual public school testing in English and math for grades 3-8 and once in high school, most states had already…

  7. Can the Enceladus plume account for 16 GW? A Boiling Liquid Model

    NASA Astrophysics Data System (ADS)

    Nakajima, M.; Ingersoll, A. P.

    2012-12-01

Since the detection of water vapor plumes emanating from the tiger stripes at the south pole of Enceladus (Porco et al., 2006), several models have been suggested to explain the plume mechanism (e.g., Schmidt et al., 2008; Kieffer et al., 2009). The so-called "Icy Chamber Model" suggests that ice sublimation under the stripes causes the plumes. One problem with this model is that it cannot explain the high salinity of the plumes (Postberg et al., 2009), because ice particles condensing from a vapor are relatively salt-free. Second, the model has difficulty explaining the observed high heat flux (15.8 GW, Howett et al., 2011) with heat conduction through the ice as the only heat source. According to previous models (Nimmo et al., 2007; Abramov and Spencer, 2009), the conductive heat flux is only 1-4 GW. Nimmo et al. (2007) suggested that the latent heat released by condensation of a large fraction of the water vapor (90% by mass) into ice particles under the crack could account for the heat flux. However, Ingersoll and Pankine (2010) found that such condensation of the vapor occurs only up to 1% by mass under their parameter range. To solve these problems, we investigate the "Boiling Liquid Model", which assumes that liquid water under the stripes causes the plumes. This model is favored because ice particles derived from a salty liquid can have high salinity. Enforcing conservation of mass, momentum, and energy, we construct a simple atmospheric model that includes controlled boiling and interaction between the gas and the icy wall. We first assume that the heat radiated to space comes entirely from the heat generated by condensation of the gas onto the ice wall. We vary the crack width and height as parameters and find that the conductive heat flux is ~1 GW, just as in the icy chamber model. We then investigate additional heating processes, such as radiation from the particles after they come out of the crack and from the ones formed under the surface due to variations of

  8. How States Can Hold Schools Accountable: The Strong Schools Model of Standards-Based Reform.

    ERIC Educational Resources Information Center

    Brooks, Sarah R.

    Few states have overcome the political and practical obstacles to implementing a clear, feasible, comprehensive accountability system. The University of Washington's Center on Reinventing Public Education reviewed experiences from state accountability efforts. Workable accountability systems focus on results, clarify goals and roles, and…

  9. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Duputel, Z.; Jiang, J.; Jolivet, R.; Simons, M.; Rivera, L.; Ampuero, J.-P.; Riel, B.; Owen, S. E.; Moore, A. W.; Samsonov, S. V.; Ortega Culaciati, F.; Minson, S. E.

    2015-10-01

The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and most of the seismic gap was left unbroken, leaving the possibility of a future large earthquake in the region.
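
    The central device in this style of Bayesian inversion (sketched generically here, not quoted from the paper) is a misfit covariance that adds a prediction-error term C_p, representing uncertainty in the Green's functions, to the observational covariance C_d:

        \[
        p(\mathbf{m} \mid \mathbf{d}) \propto p(\mathbf{m})\,
        \exp\!\left[ -\tfrac{1}{2} \left( \mathbf{d} - \mathbf{g}(\mathbf{m}) \right)^{\top}
        \mathbf{C}_\chi^{-1} \left( \mathbf{d} - \mathbf{g}(\mathbf{m}) \right) \right],
        \qquad
        \mathbf{C}_\chi = \mathbf{C}_d + \mathbf{C}_p,
        \]

    so that data the forward model cannot reliably predict are automatically down-weighted, removing the need for nonphysical regularization.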

  11. Context-Specific Proportion Congruency Effects: An Episodic Learning Account and Computational Model

    PubMed Central

    Schmidt, James R.

    2016-01-01

    In the Stroop task, participants identify the print color of color words. The congruency effect is the observation that response times and errors are increased when the word and color are incongruent (e.g., the word “red” in green ink) relative to when they are congruent (e.g., “red” in red). The proportion congruent (PC) effect is the finding that congruency effects are reduced when trials are mostly incongruent rather than mostly congruent. This PC effect can be context-specific. For instance, if trials are mostly incongruent when presented in one location and mostly congruent when presented in another location, the congruency effect is smaller for the former location. Typically, PC effects are interpreted in terms of strategic control of attention in response to conflict, termed conflict adaptation or conflict monitoring. In the present manuscript, however, an episodic learning account is presented for context-specific proportion congruent (CSPC) effects. In particular, it is argued that context-specific contingency learning can explain part of the effect, and context-specific rhythmic responding can explain the rest. Both contingency-based and temporal-based learning can parsimoniously be conceptualized within an episodic learning framework. An adaptation of the Parallel Episodic Processing model is presented. This model successfully simulates CSPC effects, both for contingency-biased and contingency-unbiased (transfer) items. The same fixed-parameter model can explain a range of other findings from the learning, timing, binding, practice, and attentional control domains. PMID:27899907

  12. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    PubMed

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color, filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, Gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased to 10-15 times the estimated internal noise level over a threefold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model thus accounts for both the appearance and discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions.
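
    In the quadruple formulation of MLDS, the observer sees two stimulus pairs (a, b) and (c, d) and judges which pair differs more; under the equal-variance Gaussian model the decision variable is

        \[
        \Delta = \left[ \psi(d) - \psi(c) \right] - \left[ \psi(b) - \psi(a) \right] + \varepsilon,
        \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2),
        \]

    with the response "(c, d)" given whenever Delta > 0. The scale values psi are estimated by maximum likelihood and expressed in units of sigma, which is what allows the same scale to predict performance in the subsequent paired-comparison task.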

  13. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    PubMed

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that the parameter values are relatively stable and independent of individual projections. This technique thus offers a way to account for unknown smoking habits, enabling us to unravel the risks related to radon, to smoking, and to their combination.
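
    The projection step can be sketched in a few lines of Python: smoking categories are resampled within strata shared by the case-control study and the cohort, and the procedure is repeated to generate many completed data sets. The strata and category frequencies below are invented for illustration, not the WISMUT data.

        import random

        case_control = {  # stratum -> observed smoking categories (illustrative)
            ("1920s", "case"):    ["heavy"] * 6 + ["moderate"] * 3 + ["never"],
            ("1920s", "control"): ["heavy"] * 3 + ["moderate"] * 4 + ["never"] * 3,
            ("1930s", "case"):    ["heavy"] * 5 + ["moderate"] * 3 + ["never"] * 2,
            ("1930s", "control"): ["heavy"] * 2 + ["moderate"] * 4 + ["never"] * 4,
        }

        cohort = [("1920s", "control"), ("1930s", "case"), ("1930s", "control")] * 5

        def project(cohort, table, rng):
            """One Monte Carlo projection of smoking status onto the cohort."""
            return [rng.choice(table[stratum]) for stratum in cohort]

        rng = random.Random(42)
        # Each completed data set is then passed to the carcinogenesis model;
        # the spread of the fitted parameters measures the imputation's impact.
        projections = [project(cohort, case_control, rng) for _ in range(200)]
        print(projections[0][:5])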

  14. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    NASA Astrophysics Data System (ADS)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems the presence of a given species is the result both of its ecological suitability and of the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forested trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree
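
    One compact way to formalize such a model (a generic Heckman-type bivariate specification, offered as an illustration rather than the paper's exact equations) pairs a latent land-use equation with a latent suitability equation and lets their errors correlate:

        \[
        u_i^{*} = \mathbf{x}_i' \boldsymbol{\beta} + \varepsilon_{1i},
        \qquad
        y_i^{*} = \mathbf{z}_i' \boldsymbol{\gamma} + \varepsilon_{2i},
        \qquad
        \begin{pmatrix} \varepsilon_{1i} \\ \varepsilon_{2i} \end{pmatrix}
        \sim \mathcal{N}\!\left( \mathbf{0},
        \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right),
        \]

    where u_i* is the propensity of parcel i to be kept forested (the species can be recorded only where u_i* > 0), y_i* is the bioclimatic suitability, and rho = 0 recovers the classical SDM; the size of rho governs how far the two models diverge.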

  15. On the Value of Climate Elasticity Indices to Assess the Impact of Climate Change on Streamflow Projection using an ensemble of bias corrected CMIP5 dataset

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet; Moradkhani, Hamid

    2015-04-01

Changes in two climate elasticity indices, i.e. the temperature and precipitation elasticities of streamflow, were investigated using an ensemble of bias-corrected CMIP5 data as forcing to two hydrologic models. The Variable Infiltration Capacity (VIC) and the Sacramento Soil Moisture Accounting (SAC-SMA) hydrologic models were calibrated at 1/16 degree resolution and the simulated streamflow was routed to the basin outlet of interest. We estimated the precipitation and temperature elasticities of streamflow from: (1) observed streamflow; (2) streamflow simulated by the VIC and SAC-SMA models using observed climate for the current period (1963-2003); (3) streamflow simulated using climate from the 10-GCM CMIP5 dataset for the future period (2010-2099), including two concentration pathways (RCP4.5 and RCP8.5) and two downscaled climate products (BCSD and MACA). The streamflow sensitivity to long-term (e.g., 30-year) average annual changes in temperature and precipitation is estimated for three periods, i.e. 2010-40, 2040-70 and 2070-99. We compared the results of the three cases to reflect on the value of precipitation and temperature indices for assessing climate change impacts on Columbia River streamflow. Moreover, these three cases for the two models are used to assess the effects of different uncertainty sources (model forcing, model structure and different pathways) on the two climate elasticity indices.
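
    Both indices have standard nonparametric definitions; the convention assumed here expresses the precipitation index as a dimensionless elasticity and the temperature index as a fractional streamflow change per degree of warming:

        \[
        \varepsilon_P = \frac{\Delta Q / \bar{Q}}{\Delta P / \bar{P}},
        \qquad
        S_T = \frac{\Delta Q / \bar{Q}}{\Delta T},
        \]

    so that, for example, eps_P = 2 means a 10% increase in long-term precipitation maps to roughly a 20% increase in streamflow.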

  16. Carbon accounting of forest bioenergy: from model calibrations to policy options (Invited)

    NASA Astrophysics Data System (ADS)

    Lamers, P.

    2013-12-01

knowledge in the field by comparing different state-of-the-art temporal forest carbon modeling efforts, and discusses whether, or to what extent, a deterministic 'carbon debt' accounting is possible and appropriate. It concludes on the possible scientific and, eventually, political choices in temporal carbon accounting for regulatory frameworks, including alternative options to address unintentional carbon losses within forest ecosystems/bioenergy systems.

  17. Towards an improvement of carbon accounting for wildfires: incorporation of charcoal production into carbon emission models

    NASA Astrophysics Data System (ADS)

    Doerr, Stefan H.; Santin, Cristina; de Groot, Bill

    2015-04-01

Every year fires release to the atmosphere the equivalent of 20-30% of the carbon (C) emissions from fossil fuel consumption, and future emissions from wildfires are expected to increase under a warming climate. Critically, however, part of the biomass C affected by fire is not emitted during burning but converted into charcoal, which is very resistant to environmental degradation and thus contributes to long-term C sequestration. The magnitude of charcoal production from wildfires as a long-term C sink remains essentially unknown and, to date, charcoal production has not been included in wildfire emission and C budget models. Here we present complete inventories of charcoal production in two fuel-rich but otherwise very different ecosystems: i) a boreal conifer forest (experimental stand-replacing crown fire; Canada, 2012) and ii) a dry eucalyptus forest (high-intensity fuel reduction burn; Australia, 2014). Our data show that, when all the fuel components are considered and the charcoal produced from each is quantified (i.e. bark, dead wood debris, fine fuels), the overall amount of charcoal produced is significant: up to a third of the biomass C affected by fire. These findings indicate that charcoal production from wildfires could represent a major and currently unaccounted-for error in the estimation of the effects of wildfires on the global C balance. We suggest an initial approach to including charcoal production in C emission models, using our case study of a boreal forest fire and the Canadian Fire Effects Model (CanFIRE). We also provide recommendations on how a 'conversion factor' for charcoal production could be relatively easily estimated once emission factors for different types of fuels and fire conditions are experimentally obtained. Ultimately, this presentation is a call for integrative collaboration between the fire emission modelling and charcoal communities to work together towards the improvement of C accounting for wildfires.
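
    The suggested bookkeeping amounts to partitioning the fire-affected biomass carbon into emitted, charred and unburned pools (an illustrative identity consistent with the abstract, not a formula quoted from it):

        \[
        C_{\mathrm{emitted}} + C_{\mathrm{char}} + C_{\mathrm{unburned}} = C_{\mathrm{affected}},
        \qquad
        C_{\mathrm{char}} = f_{\mathrm{char}}\, C_{\mathrm{affected}},
        \]

    with the fuel- and fire-specific conversion factor f_char reaching up to roughly one third in the two inventories reported here; pairing f_char tables with existing emission factors is the proposed route into models such as CanFIRE.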

  18. Accounting for the kinetics in order parameter analysis: Lessons from theoretical models and a disordered peptide

    NASA Astrophysics Data System (ADS)

    Berezovska, Ganna; Prada-Gracia, Diego; Mostarda, Stefano; Rao, Francesco

    2012-11-01

Molecular simulations as well as single-molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such descriptions can be inaccurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single-molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, whose clustering into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy order parameter, as well as to the dynamics of a disordered peptide, illustrates the possibility of building accurate descriptions of molecular processes on the sole basis of order parameter time series, without using any supplementary information.
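
    The construction can be sketched concretely: augment each time point of an order-parameter trace with its local fluctuation, discretize the (value, fluctuation) pairs into microstates, and count transitions between them. Window size, grid spacing, and the toy trace below (two regimes with the same mean but different fluctuations, invisible to the raw order parameter) are illustrative choices, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(3)
        q = np.concatenate([rng.normal(0, 0.05, 500), rng.normal(0, 0.30, 500)])

        w = 20                                   # half-window, in time points
        feats = np.array([(q[i - w:i + w].mean(), q[i - w:i + w].std())
                          for i in range(w, len(q) - w)])

        # Microstate = cell of a coarse grid over (local mean, local std).
        cells = [tuple(c) for c in np.floor(feats / [0.25, 0.1]).astype(int)]
        labels = {c: k for k, c in enumerate(dict.fromkeys(cells))}
        states = [labels[c] for c in cells]

        n = len(labels)
        T = np.zeros((n, n))
        for a, b in zip(states[:-1], states[1:]):    # count transitions
            T[a, b] += 1
        T /= np.maximum(T.sum(axis=1, keepdims=True), 1.0)
        print(n, "microstates; mean self-transition:", np.diag(T).mean().round(2))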

  19. A performance weighting procedure for GCMs based on explicit probabilistic models and accounting for observation uncertainty

    NASA Astrophysics Data System (ADS)

    Renard, Benjamin; Vidal, Jean-Philippe

    2016-04-01

    In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards good-performing models and discards poorly-performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations, 2. It can use several members whenever available, 3. It accounts for the uncertainty in observations, 4. It assigns a weight to each GCM (all weights summing up to one), 5. It can also assign a weight to the "H0 hypothesis" that all GCMs in the multimodel ensemble are not compatible with observations. The application of the weighting procedure is illustrated with several case studies including synthetic experiments, simple cases where the target GCM output is a simple univariate variable and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure. In
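
    In Bayesian model averaging style (a generic form consistent with the description above, assumed here rather than quoted), each GCM's weight is its marginal likelihood against the observations, normalized over all models together with the "H0" hypothesis:

        \[
        w_i = \frac{p\!\left(\mathbf{y}^{\mathrm{obs}} \mid M_i\right) \pi_i}
        {p\!\left(\mathbf{y}^{\mathrm{obs}} \mid H_0\right) \pi_0
        + \sum_j p\!\left(\mathbf{y}^{\mathrm{obs}} \mid M_j\right) \pi_j},
        \]

    where the pi are prior weights, the likelihood for model i can pool several members, and observation uncertainty enters through the probabilistic model assumed for the observed target variable.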

  1. Historical Account to the State of the Art in Debris Flow Modeling

    NASA Astrophysics Data System (ADS)

    Pudasaini, Shiva P.

    2013-04-01

In this contribution, I present a historical account of debris flow modelling leading to the state of the art in simulations and applications. A generalized two-phase model is presented that unifies existing avalanche and debris flow theories. The new model (Pudasaini, 2012) covers both the single-phase and two-phase scenarios and includes many essential and observable physical phenomena. In this model, the solid-phase stress is closed by Mohr-Coulomb plasticity, while the fluid stress is modeled as a non-Newtonian viscous stress that is enhanced by the solid-volume-fraction gradient. A generalized interfacial momentum transfer includes viscous drag, buoyancy and virtual mass forces, and a new generalized drag force is introduced to cover both solid-like and fluid-like drags. Strong coupling between solid and fluid momentum transfer is observed. The two-phase model is further extended to describe the dynamics of rock-ice avalanches with new mechanical models. This model explains dynamic strength weakening and includes internal fluidization, basal lubrication, and exchanges of mass and momentum. The advantages of the two-phase model over classical (effectively single-phase) models are discussed. Advection and diffusion of the fluid through the solid are associated with non-linear fluxes. Several exact solutions are constructed, including the non-linear advection-diffusion of fluid, kinematic waves of the debris flow front and deposition, phase-wave speeds, and velocity distributions through the flow depth and along the channel length. The new model is employed to study two-phase subaerial and submarine debris flows, the tsunamis generated by debris impact at lakes/oceans, and rock-ice avalanches. Simulation results show that buoyancy enhances flow mobility. The virtual mass force alters flow dynamics by increasing the kinetic energy of the fluid. Newtonian viscous stress substantially reduces flow deformation, whereas non-Newtonian viscous stress may change the

  2. Modelling overbank flow on farmed catchments taking into account spatial hydrological discontinuities

    NASA Astrophysics Data System (ADS)

    Moussa, R.; Tilma, M.; Chahinian, N.; Huttel, O.

    2003-04-01

In agricultural catchments, hydrological processes are highly variable in space due to human impacts causing hydrological discontinuities such as ditch networks, field limits and terraces. The ditch network accelerates runoff by concentrating flows, and it drains the water table or replenishes it through reinfiltration of runoff water. During extreme flood events, overbank flow occurs and surface flow paths are modified. The purpose of this study is to assess the influence of overbank flow on hydrograph shape during flood events. To this end, MHYDAS, a physically based distributed hydrological model, was developed specifically to take these hydrological discontinuities into account. The model considers the catchment as a series of interconnected hydrological units. Runoff from each unit is estimated using a deterministic model based on the ponding-time algorithm and then routed through the ditch network using the diffusive wave equation. Overbank flow is modelled by modifying the links between the hydrological units and the ditch network. The model was applied to simulate the main hydrological processes in a small headwater farmed Mediterranean catchment located in southern France. The basic hydrometeorological equipment consists of a meteorological station, rain gauges, a tensio-neutronic and a piezometric measurement network, and eight water flow measurement points. A multi-criteria and multi-scale approach was used: three independent error criteria (Nash, error on volume and error on peak flow) were calculated and combined using the Pareto technique, and the model was then calibrated and validated against the eight water flow measurement points. The application of MHYDAS to the ten most extreme flood events of the last decade enables identification of the ditches where overbank flows occur and calculation of discharge at various points of the ditch network. Results show that for the most extreme flood event, more than 45% of surface runoff occurs due to overbank flow. Discussion shows that
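
    The routing component referred to here is the standard diffusive wave approximation of the Saint-Venant equations, in which the ditch discharge Q(x, t) obeys an advection-diffusion equation (the celerity and diffusivity expressions shown are the usual channel-flow forms, quoted as background rather than from the paper):

        \[
        \frac{\partial Q}{\partial t} + c\, \frac{\partial Q}{\partial x} = D\, \frac{\partial^2 Q}{\partial x^2},
        \qquad
        c = \frac{dQ}{dA},
        \qquad
        D = \frac{Q}{2 B S_0},
        \]

    with A the flow cross-section, B the channel top width and S_0 the bed slope.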

  3. Accounting for Forest Harvest and Wildfire in a Spatially-distributed Carbon Cycle Process Model

    NASA Astrophysics Data System (ADS)

    Turner, D. P.; Ritts, W.; Kennedy, R. E.; Yang, Z.; Law, B. E.

    2009-12-01

Forests are subject to natural disturbances in the form of wildfire, as well as management-related disturbances in the form of timber harvest. These disturbance events have strong impacts on local and regional carbon budgets, but quantifying the associated carbon fluxes remains challenging. The ORCA Project aims to quantify regional net ecosystem production (NEP) and net biome production (NBP) in Oregon, California, and Washington, and we have adopted an integrated approach based on Landsat imagery and ecosystem modeling. To account for stand-level carbon fluxes, the Biome-BGC model has been adapted to simulate multiple severities of fire and harvest. New variables include snags, direct fire emissions, and harvest removals. New parameters include fire-intensity-specific combustion factors for each carbon pool (based on field measurements) and proportional removal rates for harvest events. To quantify regional fluxes, the model is applied in a spatially distributed mode over the domain of interest, with disturbance history derived from a time series of Landsat images. In stand-level simulations, the post-disturbance transition from negative (source) to positive (sink) NEP is delayed by approximately a decade in the case of high-severity fire compared with harvest. Simulated direct pyrogenic emissions range from 11 to 25% of total non-soil ecosystem carbon. In spatial-mode applications over Oregon and California, the sum of annual pyrogenic emissions and harvest removals was generally less than half of total NEP, resulting in significant carbon sequestration on the land base. Spatially and temporally explicit simulation of disturbance-related carbon fluxes will contribute to our ability to evaluate the effects of management on regional carbon flux and to assess potential biospheric feedbacks to climate change mediated by changing disturbance regimes.
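
    The regional bookkeeping implied here can be stated compactly (an illustrative identity built from standard flux definitions, not quoted from the abstract):

        \[
        \mathrm{NBP} = \mathrm{NEP} - E_{\mathrm{pyrogenic}} - H_{\mathrm{removals}},
        \]

    where NEP is net ecosystem production, E_pyrogenic the direct fire emissions and H_removals the harvested carbon leaving the land base; the result quoted above says the last two terms summed to less than half of NEP over Oregon and California, leaving a net sink.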

  4. Modeling occupancy of hosts by mistletoe seeds after accounting for imperfect detectability.

    PubMed

    Fadini, Rodrigo F; Cintra, Renato

    2015-01-01

The detection of an organism at a given site is widely used as a state variable in many metapopulation and epidemiological studies. However, failure to detect the species does not necessarily mean that it is absent. Assessing detectability is important for occupancy (presence-absence) surveys, and identifying the factors reducing detectability may help improve survey precision and efficiency. A method was used to estimate the occupancy status of host trees colonized by mistletoe seeds of Psittacanthus plagiophyllus as a function of host covariates: host size and the presence of mistletoe infections on the same host or on the nearest neighboring host (the cashew tree Anacardium occidentale). The technique also evaluated the effect of taking detectability into account when estimating host occupancy by mistletoe seeds. Individual host trees were surveyed for the presence of mistletoe seeds with the aid of two or three observers to estimate detectability and occupancy. Detectability was, on average, 17% higher in focal-host trees with infected neighbors, while it decreased by about 23 to 50% from the smallest to the largest hosts. The presence of mistletoe plants in the sample tree had a negligible effect on detectability. Failure to detect hosts as occupied decreased estimated occupancy by 2.5% on average, with a maximum of 10% for large, isolated hosts. The method presented in this study has potential for use in metapopulation studies of mistletoes, especially those focusing on the seed stage, but also as a means of improving the accuracy of the occupancy-model estimates often used for metapopulation dynamics of tree-dwelling plants in general.
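
    The underlying estimator is the standard single-season occupancy likelihood (shown in its generic constant-parameter form; the study lets both psi and p depend on host covariates). For a host checked by K observers with y detections,

        \[
        P(y) = \psi \binom{K}{y} p^{\,y} (1 - p)^{K - y} + (1 - \psi)\, \mathbf{1}[y = 0],
        \]

    so an undetected host contributes both a "present but missed" and a "truly absent" term; this mixture is what corrects naive occupancy estimates for imperfect detection.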

  5. Associative Account of Self-Cognition: Extended Forward Model and Multi-Layer Structure

    PubMed Central

    Sugiura, Motoaki

    2013-01-01

    The neural correlates of “self” identified by neuroimaging studies differ depending on which aspects of self are addressed. Here, three categories of self are proposed based on neuroimaging findings and an evaluation of the likely underlying cognitive processes. The physical self, representing self-agency of action, body-ownership, and bodily self-recognition, is supported by the sensory and motor association cortices located primarily in the right hemisphere. The interpersonal self, representing the attention or intentions of others directed at the self, is supported by several amodal association cortices in the dorsomedial frontal and lateral posterior cortices. The social self, representing the self as a collection of context-dependent social-values, is supported by the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex. Despite differences in the underlying cognitive processes and neural substrates, all three categories of self are likely to share the computational characteristics of the forward model, which is underpinned by internal schema or learned associations between one’s behavioral output and the consequential input. Additionally, these three categories exist within a hierarchical layer structure based on developmental processes that updates the schema through the attribution of prediction error. In this account, most of the association cortices critically contribute to some aspect of the self through associative learning while the primary regions involved shift from the lateral to the medial cortices in a sequence from the physical to the interpersonal to the social self. PMID:24009578

  6. Underwriting information-theoretic accounts of quantum mechanics with a realist, psi-epistemic model

    NASA Astrophysics Data System (ADS)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    2016-05-01

    We propose an adynamical interpretation of quantum theory called Relational Blockworld (RBW) where the fundamental ontological element is a 4D graphical amalgam of space, time and sources called a “spacetimesource element.” These are fundamental elements of space, time and sources, not source elements in space and time. The transition amplitude for a spacetimesource element is computed using a path integral with discrete graphical action. The action for a spacetimesource element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint between sources, the spacetime metric and the energy-momentum content of the spacetimesource element, rather than a dynamical law for time-evolved entities. To illustrate this interpretation, we explain the simple EPR-Bell and twin-slit experiments. This interpretation of quantum mechanics constitutes a realist, psi-epistemic model that might underwrite certain information-theoretic accounts of the quantum.

  7. Modeling Occupancy of Hosts by Mistletoe Seeds after Accounting for Imperfect Detectability

    PubMed Central

    Fadini, Rodrigo F.; Cintra, Renato

    2015-01-01

    The detection of an organism in a given site is widely used as a state variable in many metapopulation and epidemiological studies. However, failure to detect the species does not necessarily mean that it is absent. Assessing detectability is important for occupancy (presence-absence) surveys, and identifying the factors that reduce detectability may help improve survey precision and efficiency. A method was used to estimate the occupancy status of host trees colonized by mistletoe seeds of Psittacanthus plagiophyllus as a function of host covariates: host size and presence of mistletoe infections on the same or on the nearest neighboring host (the cashew tree Anacardium occidentale). The technique also evaluated the effect of taking detectability into account when estimating host occupancy by mistletoe seeds. Individual host trees were surveyed for presence of mistletoe seeds with the aid of two or three observers to estimate detectability and occupancy. Detectability was, on average, 17% higher in focal-host trees with infected neighbors, and decreased by about 23 to 50% from the smallest to the largest hosts. The presence of mistletoe plants in the sample tree had a negligible effect on detectability. Failure to detect hosts as occupied decreased occupancy by 2.5% on average, with a maximum of 10% for large and isolated hosts. The method presented in this study has potential for use in metapopulation studies of mistletoes, especially those focusing on the seed stage, and also to improve the accuracy of occupancy-model estimates often used for the metapopulation dynamics of tree-dwelling plants in general. PMID:25973754

  8. Design of a Competency-Based Assessment Model in the Field of Accounting

    ERIC Educational Resources Information Center

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  9. Striving for Student Success: A Model of Shared Accountability. Education Sector Reports

    ERIC Educational Resources Information Center

    Bathgate, Kelly; Colvin, Richard Lee; Silva, Elena

    2011-01-01

    Instead of putting the entire achievement burden on schools, what would it look like to hold a whole community responsible for long-range student outcomes? This report explores the concept of "shared accountability" in education. The No Child Left Behind (NCLB) Act ushered in a new era of accountability in American education: for the…

  10. Measuring Resources in Education: From Accounting to the Resource Cost Model Approach. Working Paper Series.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    This report describes two alternative approaches to measuring resources in K-12 education. One approach relies heavily on traditional accounting data, whereas the other draws on detailed information about the jobs and assignments of individual school personnel. It outlines the differences between accounting and economics and discusses how each…

  11. Sediment erodability in sediment transport modelling: Can we account for biota effects?

    NASA Astrophysics Data System (ADS)

    Le Hir, P.; Monbet, Y.; Orvain, F.

    2007-05-01

    Sediment erosion results from hydrodynamic forcing, represented by the bottom shear stress (BSS), and from the erodability of the sediment, defined by the critical erosion shear stress and the erosion rate. Abundant literature has dealt with the effects of biological components on sediment erodability and concluded that sediment processes are highly sensitive to the biota. However, very few sediment transport models account for these effects. We provide some background on the computation of BSS, and on the classical erosion laws for fine sand and mud, followed by a brief review of biota effects with the aim of quantifying the latter into generic formulations, where applicable. The effects of macrophytes, microphytobenthos, and macrofauna are considered in succession. Marine vegetation enhances the bottom dissipation of current energy, but also reduces shear stress at the sediment-water interface, which can be significant when the shoot density is high. The microphytobenthos and secreted extracellular polymeric substances (EPS) stabilise the sediment, and an increase of up to a factor of 5 can be assigned to the erosion threshold on muddy beds. However, the consequences with respect to the erosion rate are debatable since, once the protective biofilm is eroded, the underlying sediment probably has the same erosion behaviour as bare sediment. In addition, the development of benthic diatoms tends to be seasonal, so that stabilising effects are likely to be minimal in winter. Macrofaunal effects are characterised by extreme variability. For muddy sediments, destabilisation seems to be the general trend; this can become critical when benthic communities settle on consolidated sediments that would not be eroded if they remained bare. Biodeposition and bioresuspension fluxes are mentioned, for comparison with hydrodynamically induced erosion rates. Unlike the microphytobenthos, epifaunal benthic organisms create local roughness and are likely to change the BSS generated
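    The erosion laws referred to above are commonly of the excess-shear (Partheniades) type, which makes it easy to see where a biotic stabilisation factor would enter; the following sketch is purely illustrative, with invented parameter values:

```python
# Hedged sketch of a Partheniades-type excess-shear erosion law with a biofilm
# stabilisation factor applied to the critical shear stress. Values are invented.

def erosion_rate(tau_b, tau_ce, M, biofilm_factor=1.0):
    """Erosion flux (kg m-2 s-1) under bottom shear stress tau_b (Pa).
    biofilm_factor > 1 raises the critical threshold tau_ce (e.g. up to ~5
    for a well-developed microphytobenthos biofilm on a muddy bed)."""
    tau_crit = biofilm_factor * tau_ce
    if tau_b <= tau_crit:
        return 0.0
    return M * (tau_b / tau_crit - 1.0)

print(erosion_rate(1.0, 0.3, 1e-4))                    # bare mud: erodes
print(erosion_rate(1.0, 0.3, 1e-4, biofilm_factor=5))  # biofilm: threshold not exceeded
```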

  12. Taking into account hydrological modelling uncertainty in Mediterranean flash-floods forecasting

    NASA Astrophysics Data System (ADS)

    Edouard, Simon; Béatrice, Vincendon; Véronique, Ducrocq

    2015-04-01

    Mediterranean intense weather events often lead to devastating flash-floods (FF). Increasing the lead time of FF forecasts would make it possible to better anticipate their catastrophic consequences. These events are one part of the Mediterranean hydrological cycle. HyMeX (HYdrological cycle in the Mediterranean EXperiment) aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean. Extensive measurement campaigns were conducted to gather data. The first special observing period (SOP1) of these campaigns served as a test-bed for a real-time hydrological ensemble prediction system (HEPS) dedicated to FF forecasting. It produced an ensemble of quantitative discharge forecasts (QDF) using the ISBA-TOP system. ISBA-TOP is a coupling between the surface scheme ISBA and a version of TOPMODEL dedicated to Mediterranean fast-responding rivers. ISBA-TOP was driven with several quantitative precipitation forecast (QPF) ensembles based on the AROME atmospheric convection-permitting model. This made it possible to take into account the uncertainty that affects the QPF and propagates up to the QDF. This uncertainty is major for discharge forecasting, especially in the case of Mediterranean flash-floods. But other sources of uncertainty need to be sampled in HEPS systems. One of them is inherent to the hydrological modelling. The ISBA-TOP coupled system has been improved since the initial version that was used, for instance, during HyMeX SOP1. The initial ISBA-TOP consisted of coupling a TOPMODEL approach with ISBA-3L, which represented the soil stratification with 3 layers. The new version couples the same TOPMODEL approach with a version of ISBA where more than ten layers describe the soil vertical

  13. A Pluralistic Account of Homology: Adapting the Models to the Data

    PubMed Central

    Haggerty, Leanne S.; Jachiet, Pierre-Alain; Hanage, William P.; Fitzpatrick, David A.; Lopez, Philippe; O’Connell, Mary J.; Pisani, Davide; Wilkinson, Mark; Bapteste, Eric; McInerney, James O.

    2014-01-01

    Defining homologous genes is important in many evolutionary studies but raises obvious issues. Some of these issues are conceptual and stem from our assumptions of how a gene evolves, others are practical, and depend on the algorithmic decisions implemented in existing software. Therefore, to make progress in the study of homology, both ontological and epistemological questions must be considered. In particular, defining homologous genes cannot be solely addressed under the classic assumptions of strong tree thinking, according to which genes evolve in a strictly tree-like fashion of vertical descent and divergence and the problems of homology detection are primarily methodological. Gene homology could also be considered under a different perspective where genes evolve as “public goods,” subjected to various introgressive processes. In this latter case, defining homologous genes becomes a matter of designing models suited to the actual complexity of the data and how such complexity arises, rather than trying to fit genetic data to some a priori tree-like evolutionary model, a practice that inevitably results in the loss of much information. Here we show how important aspects of the problems raised by homology detection methods can be overcome when even more fundamental roots of these problems are addressed by analyzing public goods thinking evolutionary processes through which genes have frequently originated. This kind of thinking acknowledges distinct types of homologs, characterized by distinct patterns, in phylogenetic and nonphylogenetic unrooted or multirooted networks. In addition, we define “family resemblances” to include genes that are related through intermediate relatives, thereby placing notions of homology in the broader context of evolutionary relationships. We conclude by presenting some payoffs of adopting such a pluralistic account of homology and family relationship, which expands the scope of evolutionary analyses beyond the traditional

  14. A Morphogenetic Model Accounting for Pollen Aperture Pattern in Flowering Plants.

    PubMed

    Ressayre; Godelle; Mignot; Gouyon

    1998-07-21

    Pollen grains are embedded in an extremely resistant wall. Apertures are well-defined places where the pollen wall is reduced or absent, permitting pollen tube germination. Pollen grains are produced by meiosis, and aperture number definition appears to be linked with the partition that follows meiosis and leads to the formation of a tetrad of four haploid microspores. In dicotyledonous plants, meiosis is simultaneous, which means that cytokinesis occurs once the two nuclear divisions are completed. A syncytium with the four nuclei stemming from meiosis is formed, and cytokinesis simultaneously isolates the four products of meiosis. We propose a theoretical morphogenetic model which takes into account some features of the ontogeny of the pollen grains. The nuclei are considered as attractors acting upon a morphogenetic substance distributed within the cytoplasm of the dividing cell. This leads to a partition of the volume of the cell into four domains that is similar to observations of cytokinesis in the studied species. The most widespread pattern of aperture distribution in dicotyledonous plants (three apertures equidistributed on the pollen grain equator) can be explained by bipolar interactions between nuclei stemming from the second meiotic division, and observed variations on these patterns by disturbances of these interactions. In numerous plant species, several pollen grains differing in aperture number are produced by a single individual. The distribution of the different morphs within tetrads indicates that the four daughter cells can have different aperture numbers. The model provides an explanation for the duplication of one of the apertures of a three-aperture pollen grain, leading to a four-aperture one, and in parallel it explains how heterogeneous tetrads can be formed. Copyright 1998 Academic Press.
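    The attractor idea can be illustrated numerically: assigning each cytoplasm location to its nearest nucleus partitions the cell volume into four domains, much like a Voronoi tessellation. A toy sketch with hypothetical tetrahedral nucleus positions, not the paper's actual morphogenetic dynamics:

```python
# Illustrative "nuclei as attractors" partition: each sampled cytoplasm point is
# assigned to the nearest of four post-meiotic nuclei. Geometry is hypothetical.

import numpy as np

rng = np.random.default_rng(0)
nuclei = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)  # tetrahedral
points = rng.uniform(-1, 1, size=(10000, 3))          # sample of cytoplasm positions

dist = np.linalg.norm(points[:, None, :] - nuclei[None, :, :], axis=2)
domain = dist.argmin(axis=1)                          # index of the attracting nucleus
print(np.bincount(domain) / len(points))              # ~equal quarters of the volume
```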

  15. An Interactive Activation Model of Context Effects in Letter Perception: Part 1. An Account of Basic Findings.

    ERIC Educational Resources Information Center

    McClelland, James L.; Rumelhart, David E.

    1981-01-01

    A model of context effects in perception is applied to perception of letters. Perception results from excitatory and inhibitory interactions of detectors for visual features, letters, and words. The model produces facilitation for letters in pronounceable pseudowords as well as words and accounts for rule-governed performance without any rules.…

  16. Taking the Missing Propensity into Account When Estimating Competence Scores: Evaluation of Item Response Theory Models for Nonignorable Omissions

    ERIC Educational Resources Information Center

    Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H.

    2015-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…

  17. Situated sentence processing: the coordinated interplay account and a neurobehavioral model.

    PubMed

    Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R

    2010-03-01

    Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). Finally, we argue that the mechanisms

  18. Pharmacokinetic Modeling of Manganese III. Physiological Approaches Accounting for Background and Tracer Kinetics

    SciTech Connect

    Teeguarden, Justin G.; Gearhart, Jeffrey; Clewell, III, H. J.; Covington, Tammie R.; Nong, Andy; Anderson, Melvin E.

    2007-01-01

    assessments (Dixit et al., 2003). With most exogenous compounds, there is often no background exposure and body concentrations are not under active control from homeostatic processes as occurs with essential nutrients. Any complete Mn PBPK model would include the homeostatic regulation as an essential nutritional element and the additional exposure routes by inhalation. Two companion papers discuss the kinetic complexities of the quantitative dose-dependent alterations in hepatic and intestinal processes that control uptake and elimination of Mn (Teeguarden et al., 2006a, b). Radioactive 54Mn has been used to investigate the behavior of the more common 55Mn isotope in the body, because the distribution and elimination of tracer doses reflects the overall distributional characteristics of Mn. In this paper, we take the first steps in developing a multi-route PBPK model for Mn. Here we develop a PBPK model to account for tissue concentrations and tracer kinetics of Mn under normal dietary intake. This model for normal levels of Mn will serve as the starting point for more complete model descriptions that include dose-dependencies in both oral uptake and biliary excretion. Material and Methods (Experimental Data): Two studies using the 54Mn tracer were employed in model development (Furchner et al. 1966; Wieczorek and Oberdorster 1989). In Furchner et al. (1966), male Sprague-Dawley rats received an ip injection of carrier-free 54MnCl2 while maintained on standard rodent feed containing ~45 ppm Mn. Tissue radioactivity of 54Mn was measured by liquid scintillation counting between post-injection days 1 and 89 and reported as percent of administered dose per kg tissue. 54Mn time courses were reported for liver, kidney, bone, brain, muscle, blood, lung and whole body. Because ip uptake is via the portal circulation to the liver, this data set had information on the distribution and clearance behaviors of Mn entering the systemic circulation from the liver.
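    As a rough illustration of the compartmental bookkeeping such a PBPK model performs, the sketch below integrates a two-compartment tracer system with an ip dose routed to the liver; the rate constants are invented placeholders, not the fitted model parameters:

```python
# Minimal two-compartment tracer sketch in the spirit of the PBPK approach:
# the liver receives the ip 54Mn dose via the portal route and clears it to
# bile. Rate constants are hypothetical.

import numpy as np
from scipy.integrate import solve_ivp

k_liver_to_blood = 0.5   # 1/day, transfer out of the liver (assumed)
k_blood_to_liver = 0.2   # 1/day, hepatic uptake from circulation (assumed)
k_biliary        = 0.1   # 1/day, biliary excretion from the liver (assumed)

def rhs(t, y):
    liver, blood = y
    dliver = k_blood_to_liver * blood - (k_liver_to_blood + k_biliary) * liver
    dblood = k_liver_to_blood * liver - k_blood_to_liver * blood
    return [dliver, dblood]

sol = solve_ivp(rhs, (0, 89), y0=[1.0, 0.0], t_eval=np.linspace(0, 89, 5))
print(sol.y[0] + sol.y[1])   # whole-body retention declines as tracer is excreted
```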

  19. Accounting for water management issues within hydrological simulation: Alternative modelling options and a network optimization approach

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Nalbantis, Ioannis; Rozos, Evangelos; Koutsoyiannis, Demetris

    2010-05-01

    In mixed natural and artificialized river basins, many complexities arise due to anthropogenic interventions in the hydrological cycle, including abstractions from surface water bodies, groundwater pumping or recharge, and water returns through drainage systems. Typical engineering approaches adopt a multi-stage modelling procedure, with the aim to handle the complexity of process interactions and the lack of measured abstractions. In such a context, the entire hydrosystem is separated into natural and artificial sub-systems or components; the natural ones are modelled individually, and their predictions (i.e. hydrological fluxes) are transferred to the artificial components as inputs to a water management scheme. To account for the interactions between the various components, an iterative procedure is essential, whereby the outputs of the artificial sub-systems (i.e. abstractions) become inputs to the natural ones. However, this strategy suffers from multiple shortcomings, since it presupposes that purely natural sub-systems can be located and that sufficient information is available for each sub-system modelled, including suitable, i.e. "unmodified", data for calibrating the hydrological component. In addition, implementing such a strategy is ineffective when the entire scheme runs in stochastic simulation mode. To cope with the above drawbacks, we developed a generalized modelling framework, following a network optimization approach. This originates from graph theory, which has been successfully implemented within some advanced computer packages for water resource systems analysis. The user formulates a unified system which is comprised of the hydrographical network and the typical components of a water management network (aqueducts, pumps, junctions, demand nodes, etc.). Input data for the latter include hydraulic properties, constraints, targets, priorities and operation costs. The real-world system is described through a conceptual graph, whose dummy properties
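    The graph formulation can be illustrated with an off-the-shelf min-cost-flow solver, where demand priorities enter as arc costs; the network below is a hypothetical toy, not one of the authors' case studies:

```python
# Hedged sketch of water allocation as a min-cost network flow: the hydrosystem
# is a digraph, priorities become arc weights. Node names and numbers are invented.

import networkx as nx

G = nx.DiGraph()
G.add_node("source", demand=-100)      # inflow from the hydrographical network
G.add_node("city",   demand=60)        # high-priority demand node
G.add_node("farm",   demand=40)        # lower-priority demand node
G.add_edge("source", "city", capacity=80, weight=1)    # low cost = high priority
G.add_edge("source", "farm", capacity=80, weight=5)

flows = nx.min_cost_flow(G)
print(flows["source"])                 # {'city': 60, 'farm': 40}
```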

  20. 31 CFR Appendix A to Part 212 - Model Notice to Account Holder

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... your account, you may contact us at . ... fill out a garnishment exemption form and submit it to the court. You may contact the creditor that... released back to you. (Conditional sentence if contact information is in the garnishment order)...

  1. 31 CFR Appendix A to Part 212 - Model Notice to Account Holder

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... your account, you may contact us at . ... fill out a garnishment exemption form and submit it to the court. You may contact the creditor that... released back to you. (Conditional sentence if contact information is in the garnishment order)...

  2. 31 CFR Appendix A to Part 212 - Model Notice to Account Holder

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... your account, you may contact us at . ... fill out a garnishment exemption form and submit it to the court. You may contact the creditor that... released back to you. (Conditional sentence if contact information is in the garnishment order)...

  3. Accounting Curriculum.

    ERIC Educational Resources Information Center

    Prickett, Charlotte

    This curriculum guide describes the accounting curriculum in the following three areas: accounting clerk, bookkeeper, and nondegreed accountant. The competencies and tasks complement the Arizona validated listing in these areas. The guide lists 24 competencies for nondegreed accountants, 10 competencies for accounting clerks, and 11 competencies…

  4. Crash Simulation of Roll Formed Parts by Damage Modelling Taking Into Account Preforming Effects

    NASA Astrophysics Data System (ADS)

    Till, Edwin T.; Hackl, Benjamin; Schauer, Hermann

    2011-08-01

    Complex-phase steels of strength levels up to 1200 MPa are suitable for roll forming. These may be applied in automotive structures to enhance crashworthiness, e.g. as stiffeners in doors. Even though the strain hardening of the material is low, there is considerable bending formability. However, ductility decreases with the strength level. Higher strength requires more attention to the structural integrity of the part during the process planning stage and with respect to the crash behavior. Nowadays numerical simulation is used as a process design tool for roll forming in a production environment. The assessment of the stability of a roll forming process is quite challenging for AHSS grades. The present work has two objectives: first, to provide a reliable assessment tool to the roll forming analyst for failure prediction; second, to establish simulation procedures in order to predict the part's behavior in crash applications, taking into account damage and failure. Today adequate ductile fracture models are available which can be used in forming and crash applications. These continuum models are based on failure strain curves or surfaces which depend on the stress triaxiality (e.g. Crach or GISSMO) and may additionally include the Lode angle (extended Mohr-Coulomb or extended GISSMO model). A challenging task is to obtain the respective failure strain curves. The paper describes in detail how these failure strain curves are obtained using small-scale tests within voestalpine Stahl: notch tensile, bulge and shear tests. It is shown that capturing the surface strains is not sufficient for obtaining reliable material failure parameters. The simulation tool for roll forming at the site of voestalpine Krems is Copra® FEA RF, which is a 3D continuum finite element solver based on MSC.Marc. The simulation environment for crash applications is LS-DYNA. Shell elements are used for this type of analysis. A major task is to provide results of
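    The damage models named above share a common skeleton: plastic strain increments are accumulated against a triaxiality-dependent failure strain. A schematic sketch follows; the failure curve is a made-up placeholder, not a voestalpine calibration:

```python
# Illustrative triaxiality-dependent damage accumulation of the kind used by
# GISSMO-type models. The failure curve below is invented for demonstration.

import numpy as np

def eps_failure(eta):
    """Hypothetical failure strain as a function of stress triaxiality eta."""
    return 0.9 * np.exp(-1.5 * eta) + 0.1

def accumulate_damage(strain_increments, triaxialities):
    """Linear damage rule: D = sum(d_eps_p / eps_f(eta)); failure at D >= 1."""
    d = 0.0
    for deps, eta in zip(strain_increments, triaxialities):
        d += deps / eps_failure(eta)
    return d

# Loading path: a few plastic strain increments at increasing triaxiality.
D = accumulate_damage([0.05, 0.05, 0.10], [0.0, 0.33, 0.58])
print(round(D, 3), "-> element fails" if D >= 1.0 else "-> intact")
```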

  5. [Optimization of ecological footprint model based on environmental pollution accounts: a case study in Pearl River Delta urban agglomeration].

    PubMed

    Bai, Yu; Zeng, Hui; Wei, Jian-bing; Zhang, Wen-juan; Zhao, Hong-wei

    2008-08-01

    To address the omission of environmental pollution from traditional ecological footprint accounting, this paper puts forward an optimized ecological footprint (EF) model that takes the pollution footprint into account. The calculation of environmental capacity was likewise added to the accounting of ecological capacity, and the optimized model was then used for an ecological assessment of the Pearl River Delta urban agglomeration in 2005. The results showed close agreement between the calculated ecological footprint and the region's development characteristics and spatial pattern, and illustrated that the optimized EF model can better represent the environmental pollution in the system and more fully explain the environmental effects of human activity. The optimized ecological footprint model had better completeness and objectivity than traditional models.
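    The optimization amounts to adding a pollution term on the demand side and an environmental-capacity term on the supply side of the standard EF balance; a purely illustrative calculation with invented numbers, not the Pearl River Delta accounts:

```python
# Sketch of the optimized EF balance: pollution footprint = emissions divided
# by absorptive capacity, added to the consumption footprint. All values invented.

consumption_ef = 1.8e7        # conventional ecological footprint, gha (assumed)
emissions = 5.0e6             # pollutant load, tonnes (assumed)
absorption_per_gha = 2.5      # tonnes absorbable per global hectare (assumed)
pollution_ef = emissions / absorption_per_gha

biocapacity = 9.0e6           # conventional ecological capacity, gha (assumed)
environmental_capacity = 3.0e6  # added environmental capacity, gha (assumed)

total_ef = consumption_ef + pollution_ef
total_capacity = biocapacity + environmental_capacity
print(total_ef, total_capacity, total_ef - total_capacity)   # ecological deficit
```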

  6. Accounting for delay of energy transfer between coupled rooms in statistical-acoustics models of reverberant-energy decay.

    PubMed

    Summers, Jason E

    2012-08-01

    A statistical-acoustics model for energy decay in systems of two or more coupled rooms is introduced, which accounts for the distribution of delay in the transfer of energy between subrooms that results from the finite speed of sound. The method extends previous models based on systems of coupled ordinary differential equations by using functional differential equations to explicitly model dependence on prior values of energy in adjacent subrooms. Predictions of the model are illustrated for a two-room coupled system and compared with the predictions of a benchmark computational geometrical-acoustics model.
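    A minimal numerical sketch of such a delayed energy exchange: two rooms, each with its own decay constant, coupled through terms evaluated at a delayed time, i.e. a delay differential equation integrated with a history buffer. All constants below are illustrative:

```python
# Delayed energy exchange between two coupled rooms, integrated by forward
# Euler with a history buffer. Decay and coupling constants are hypothetical.

import numpy as np

dt, T = 1e-4, 1.0                 # time step and duration, s
delay = 0.01                      # transfer delay between rooms, s
lag = int(delay / dt)
tau1, tau2 = 0.15, 0.40           # room decay constants, s (assumed)
k12, k21 = 2.0, 2.0               # coupling rates, 1/s (assumed)

n = int(T / dt)
E1, E2 = np.zeros(n), np.zeros(n)
E1[0] = 1.0                       # impulsive excitation of room 1

for i in range(n - 1):
    e1_past = E1[i - lag] if i >= lag else 0.0
    e2_past = E2[i - lag] if i >= lag else 0.0
    E1[i + 1] = E1[i] + dt * (-E1[i] / tau1 - k12 * E1[i] + k21 * e2_past)
    E2[i + 1] = E2[i] + dt * (-E2[i] / tau2 - k21 * E2[i] + k12 * e1_past)

print(E1[::2000], E2[::2000])     # room 2 energy rises with delay, then decays
```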

  7. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    NASA Astrophysics Data System (ADS)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term that accounts for interactions between resolved scales and unresolved scales, and the Reynolds term that accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models that better conform to the physics of the true SGS stress, while remaining stable, are thus desirable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term: in terms of dissipation spectra, and in terms of the probability density function (pdf) of dissipation in physical space, positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross-term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also does backscatter. It also has an eddy-viscosity multiscale model part that

  8. Multiple-breed reaction norm animal model accounting for robustness and heteroskedasticity in a Nelore-Angus crossed population.

    PubMed

    Oliveira, M M; Santana, M L; Cardoso, F F

    2016-07-01

    Our objective was to genetically characterize post-weaning weight gain (PWG), over a 345-day period after weaning, of Brangus-Ibagé (Nelore×Angus) cattle. Records (n=4016) were from the foundation herd of the Embrapa South Livestock Center. A Bayesian approach was used to assess genotype by environment (G×E) interaction and to identify a suitable model for the estimation of genetic parameters and use in genetic evaluation. A robust and heteroscedastic reaction norm multiple-breed animal model was proposed. The model accounted for heterogeneity of residual variance associated with effects of breed, heterozygosity, sex and contemporary group, and was robust with respect to outliers. Additive genetic effects were modeled for the intercept and slope of a reaction norm to changes in the environmental gradient. Inference was based on a Markov chain Monte Carlo run of 110 000 cycles, after 10 000 cycles of burn-in. Bayesian model choice criteria indicated that the proposed model was superior to simpler sub-models that did not account for G×E interaction, multiple-breed structure, robustness and heteroscedasticity. We conclude that, for the Brangus-Ibagé population, these factors should be jointly accounted for in the genetic evaluation of PWG. Heritability estimates increased proportionally with improvement in the environmental conditions gradient. Therefore, an increasing proportion of the differences in performance among animals was explained by genetic rather than environmental factors as rearing conditions improved. As a consequence, response to selection may be increased in favorable environments.

  9. [Parametric identification of mathematical models of population genetics taking into account the geographical dispersion in finite samples].

    PubMed

    Volkov, I K

    1988-02-01

    A method has been developed for parameter identification of mathematical models of population genetics that takes geographical dispersion into account using finite samples of experimental data on mutant frequencies. The existence of least-squares estimates of the studied model parameters is proved, a zero-order approximation of the desired estimates is found, and an iterative procedure for refining them is presented. A means of constructing the a posteriori probability density function of the zero-order and subsequent approximations of the model parameters is indicated. The applicability of the proposed method for estimating the parameters of population-genetics models that account for geographical dispersion is demonstrated on a specific example.
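    In modern terms the procedure resembles nonlinear least squares started from an explicit zero-order approximation. The sketch below uses an invented one-parameter decay model and synthetic frequencies to illustrate that workflow, not the paper's actual equations:

```python
# Hedged sketch: least-squares identification of model parameters from observed
# mutant frequencies, starting from a zero-order approximation that is then
# refined iteratively. Model form and data are purely illustrative.

import numpy as np
from scipy.optimize import least_squares

t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
q_obs = np.array([0.20, 0.16, 0.13, 0.09, 0.04])   # observed mutant frequencies

def model(theta, t):
    q0, m = theta
    return q0 * np.exp(-m * t)                      # frequency decaying under dispersal m

def residuals(theta):
    return model(theta, t_obs) - q_obs

theta0 = [q_obs[0], 0.1]                            # zero-order approximation
fit = least_squares(residuals, theta0)              # iterative refinement
print(fit.x)                                        # refined LS estimates of (q0, m)
```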

  10. Accountability to Whom? For What? Teacher Identity and the Force Field Model of Teacher Development

    ERIC Educational Resources Information Center

    Samuel, Michael

    2008-01-01

    The rise of fundamentalism in the sphere of teacher education points to a swing back towards teachers as service workers for State agendas. Increasingly, teachers are expected to account for the outcomes of their practices. This article traces the trajectory of trends in teacher education over the past five decades arguing that this "new…

  11. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    ERIC Educational Resources Information Center

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  12. Alternative Schools Accountability Model: 2001-2002 Indicator Selection and Reporting Guide.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    California has developed an alternative accountability system for schools with fewer than 100 students, alternative schools of various kinds, community day schools, and other schools under the jurisdiction of a county board of education or a county superintendent of schools. This document is a guide to assist local administrators in completing the…

  13. Accounting for Global Climate Model Projection Uncertainty in Modern Statistical Downscaling

    SciTech Connect

    Johannesson, G

    2010-03-17

    configuring and running a regional climate model (RCM) nested within a given GCM projection (i.e., the GCM provides boundary conditions for the RCM). On the other hand, statistical downscaling aims at establishing a statistical relationship between observed local/regional climate variables of interest and synoptic (GCM-scale) climate predictors. The resulting empirical relationship is then applied to future GCM projections. A comparison of the pros and cons of dynamical versus statistical downscaling is outside the scope of this effort, but has been extensively studied; the reader is referred to Wilby et al. (1998), Murphy (1999), Wood et al. (2004), Benestad et al. (2007), Fowler et al. (2007), and references therein. The scope of this effort is to study a methodology, a statistical framework, to propagate and account for GCM uncertainty in regional statistical downscaling (SDS) assessment. In particular, we explore how to leverage an ensemble of GCM projections to quantify the impact of the GCM uncertainty in such an assessment. There are three main components to this effort: (1) gather the necessary climate-related data for a regional SDS study, including multiple GCM projections, (2) carry out the SDS, and (3) assess the uncertainty. The first step is carried out using tools written in the Python programming language, while the analysis tools were developed in the statistical programming language R; see Figure 1.
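    The framework can be caricatured in a few lines: fit a transfer function on observations, push each GCM ensemble member through it, and read the GCM contribution to uncertainty off the spread of the downscaled results. Everything below is synthetic:

```python
# Toy statistical downscaling across a GCM ensemble: fit a regression on
# observations, apply it to each ensemble member, report the ensemble spread.

import numpy as np

rng = np.random.default_rng(1)
X_obs = rng.normal(size=(200, 3))                              # synoptic predictors
y_obs = X_obs @ np.array([1.5, -0.7, 0.3]) + rng.normal(0.2, 0.5, 200)  # local variable

beta, *_ = np.linalg.lstsq(np.c_[np.ones(200), X_obs], y_obs, rcond=None)

# Apply the fitted transfer function to, say, 5 GCM projections of the predictors.
ensemble = [rng.normal(0.5, 1.0, size=(100, 3)) for _ in range(5)]
downscaled = [np.c_[np.ones(100), X] @ beta for X in ensemble]
means = [d.mean() for d in downscaled]
print(np.mean(means), np.std(means))    # ensemble mean and GCM-induced spread
```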

  14. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology

    PubMed Central

    2013-01-01

    Background: The relative roles of climate variability and population-related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. Methods: A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles, which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Results: Model simulations of the entomological inoculation rate and the circumsporozoite protein rate compare well with data from field studies from a wide range of locations in West Africa that encompass both seasonally endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine-spatial-resolution regional integrations for Eastern Africa reproduce the Malaria Atlas Project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model broadly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities, probably due to the neglect of population migration. Conclusions: A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions. PMID:23419192
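    Two of the ingredients named above have familiar closed forms: a degree-day estimate of sporogonic development time and a biting rate that is diluted by human population density. A back-of-the-envelope sketch; the vector density and per-vector bite rate are assumptions:

```python
# Back-of-the-envelope sketch of two malaria-model ingredients. The degree-day
# constants are the classic textbook values; other numbers are invented.

def sporogony_days(temp_c, degree_days=111.0, t_min=16.0):
    """Detinova-type degree-day estimate of parasite development time in the vector."""
    return degree_days / (temp_c - t_min) if temp_c > t_min else float("inf")

def daily_bites_per_person(vectors_per_km2, humans_per_km2, bites_per_vector=0.3):
    """The same vector population spread over more hosts bites each person less."""
    return bites_per_vector * vectors_per_km2 / humans_per_km2

print(sporogony_days(25.0))                  # ~12.3 days at 25 degrees C
print(daily_bites_per_person(2e4, 500.0))    # bites per person per day (toy numbers)
```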

  15. A collaborative accountable care model in three practices showed promising early results on costs and quality of care.

    PubMed

    Salmon, Richard B; Sanderson, Mark I; Walters, Barbara A; Kennedy, Karen; Flores, Robert C; Muney, Alan M

    2012-11-01

    Cigna's Collaborative Accountable Care initiative provides financial incentives to physician groups and integrated delivery systems to improve the quality and efficiency of care for patients in commercial open-access benefit plans. Registered nurses who serve as care coordinators employed by participating practices are a central feature of the initiative. They use patient-specific reports and practice performance reports provided by Cigna to improve care coordination, identify and close care gaps, and address other opportunities for quality improvement. We report interim quality and cost results for three geographically and structurally diverse provider practices in Arizona, New Hampshire, and Texas. Although not statistically significant, these early results revealed favorable trends in total medical costs and quality of care, suggesting that a shared-savings accountable care model and collaborative support from the payer can enable practices to take meaningful steps toward full accountability for care quality and efficiency.

  16. On the treatment of evapotranspiration, soil moisture accounting, and aquifer recharge in monthly water balance models.

    USGS Publications Warehouse

    Alley, W.M.

    1984-01-01

    Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation.-from Author
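    For reference, the Thomas abcd model mentioned above fits in a dozen lines: parameters a and b shape the evapotranspiration opportunity, c splits the remaining water between direct runoff and groundwater recharge, and d releases groundwater as baseflow. The parameter values and forcings below are illustrative, not the New Jersey calibrations:

```python
# Sketch of the Thomas "abcd" monthly water balance model with its two storages,
# soil moisture S and groundwater G. Parameters and forcings are invented.

import numpy as np

def abcd(P, PET, a=0.98, b=250.0, c=0.35, d=0.10, S0=100.0, G0=50.0):
    """Return simulated monthly streamflow (mm) for precipitation P and PET (mm)."""
    S, G, Q = S0, G0, []
    for p, pet in zip(P, PET):
        W = p + S                                     # available water
        opp = (W + b) / (2 * a)
        Y = opp - np.sqrt(opp**2 - W * b / a)         # evapotranspiration opportunity
        S = Y * np.exp(-pet / b)                      # end-of-month soil moisture
        G = (G + c * (W - Y)) / (1 + d)               # groundwater storage
        Q.append((1 - c) * (W - Y) + d * G)           # direct runoff + baseflow
    return np.array(Q)

P   = np.array([120, 90, 80, 60, 40, 30, 70, 100, 110, 130, 95, 85], float)
PET = np.array([15, 20, 40, 70, 110, 130, 140, 120, 80, 45, 25, 15], float)
print(abcd(P, PET).round(1))
```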

  17. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    NASA Astrophysics Data System (ADS)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

    In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae connecting the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
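    The mechanism can be caricatured with the Kermack-McKendrick final-size relation plus slow replenishment of susceptibles by unvaccinated births: each outbreak burns the susceptible pool down, and the next waits until births rebuild it. All rates below are rough assumptions for illustration, not the paper's fitted values:

```python
# Toy outbreak cycle: susceptibles grow by births; once an imported case arrives
# above the epidemic threshold, the Kermack-McKendrick final-size relation
# resets the pool. R0, birth rate and the trigger level are assumptions.

import numpy as np
from scipy.optimize import brentq

R0 = 8.0          # effective reproduction number inside the community (assumed)
birth = 0.02      # yearly susceptible replenishment by unvaccinated births (assumed)
spark = 2.0 / R0  # susceptible fraction at which an imported case sparks an outbreak

def final_size(s0):
    """Final-size relation: ln(s0/s_inf) = R0 (s0 - s_inf), non-trivial root."""
    return brentq(lambda s: np.log(s0 / s) - R0 * (s0 - s), 1e-9, s0 * (1 - 1e-9))

s, t, outbreak_times = 1.0 / R0, 0.0, []
for _ in range(4):
    t += (spark - s) / birth          # years of susceptible buildup between outbreaks
    outbreak_times.append(t)
    s = final_size(spark)             # the epidemic burns the susceptible pool down
print(np.round(np.diff(outbreak_times), 1))   # inter-outbreak intervals, ~a decade
```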

  18. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    ERIC Educational Resources Information Center

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…

  19. Randomly Accountable

    ERIC Educational Resources Information Center

    Kane, Thomas J.; Staiger, Douglas O.; Geppert, Jeffrey

    2002-01-01

    The accountability debate tends to devolve into a battle between the pro-testing and anti-testing crowds. When it comes to the design of a school accountability system, the devil is truly in the details. A well-designed accountability plan may go a long way toward giving school personnel the kinds of signals they need to improve performance.…

  20. An Individual-Based Model of Zebrafish Population Dynamics Accounting for Energy Dynamics

    PubMed Central

    Beaudouin, Rémy; Goussen, Benoit; Piccini, Benjamin; Augustine, Starrlight; Devillers, James; Brion, François; Péry, Alexandre R. R.

    2015-01-01

    Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget model for individual zebrafish (DEB model) was coupled to an individual-based model of zebrafish population dynamics (IBM). Next, we fitted the DEB model to new experimental data on zebrafish growth and reproduction, thus improving existing models. We further analysed the DEB model and the DEB-IBM using a sensitivity analysis. Finally, the predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations, and the predicted population dynamics are realistic. While our zebrafish DEB-IBM can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can already serve to predict the impact of compounds at the population level. PMID:25938409

  1. Factors accounting for youth suicide attempt in Hong Kong: a model building.

    PubMed

    Wan, Gloria W Y; Leung, Patrick W L

    2010-10-01

    This study aimed at proposing and testing a conceptual model of youth suicide attempt. We proposed a model that began with family factors such as a history of physical abuse and parental divorce/separation. Family relationship, presence of psychopathology, life stressors, and suicide ideation were postulated as mediators leading to youth suicide attempt. The stepwise entry of the risk factors into a logistic regression model defined their proximity as related to suicide attempt. Path analysis further refined our proposed model of youth suicide attempt. Our originally proposed model was largely confirmed. The main revision was dropping parental divorce/separation as a risk factor in the model, due to its lack of significant contribution when examined alongside other risk factors. The model was cross-validated by gender. This study moved research on youth suicide from the identification of individual risk factors to model building, integrating separate findings of past studies.

  2. Recommended Method To Account For Daughter Ingrowth For The Portsmouth On-Site Waste Disposal Facility Performance Assessment Modeling

    SciTech Connect

    Phifer, Mark A.; Smith, Frank G. III

    2013-06-21

    A 3-D STOMP model has been developed for the Portsmouth On-Site Waste Disposal Facility (OSWDF) at Site D, as outlined in Appendix K of FBP 2013. This model projects the flow and transport of the following radionuclides to various points of assessment: Tc-99, U-234, U-235, U-236, U-238, Am-241, Np-237, Pu-238, Pu-239, Pu-240, Th-228, and Th-230. The model includes the radioactive decay of these parents but not the associated daughter ingrowth, because the STOMP code does not have the capability to model ingrowth. The Savannah River National Laboratory (SRNL) provides herein a recommended method to account for daughter ingrowth in association with the Portsmouth OSWDF Performance Assessment (PA) modeling.
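    For a single parent-daughter pair, ingrowth has the classic closed-form Bateman solution, so a post-processing step of this kind is straightforward to sketch. The snippet below uses the accepted half-lives of the U-234/Th-230 pair, but it is an illustration of the Bateman form, not SRNL's recommended procedure:

```python
# Two-member Bateman ingrowth: daughter atoms grown from an initially pure
# parent. Half-lives are the accepted values for U-234 -> Th-230.

import numpy as np

LN2 = np.log(2.0)
lam_p = LN2 / 2.455e5      # U-234 decay constant, 1/yr (half-life 245,500 yr)
lam_d = LN2 / 7.54e4       # Th-230 decay constant, 1/yr (half-life 75,400 yr)

def daughter_atoms(N_p0, t, lam_parent, lam_daughter):
    """Bateman solution for daughter atoms at time t from N_p0 parent atoms."""
    return (lam_parent / (lam_daughter - lam_parent) * N_p0
            * (np.exp(-lam_parent * t) - np.exp(-lam_daughter * t)))

t = np.array([1e3, 1e4, 1e5])                 # years
print(daughter_atoms(1.0, t, lam_p, lam_d))   # ingrown Th-230 per initial U-234 atom
```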

  3. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2015-08-11

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that the vine copula mixed model can improve on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.

  4. Nonlinear analysis of a new car-following model accounting for the global average optimal velocity difference

    NASA Astrophysics Data System (ADS)

    Peng, Guanghan; Lu, Weizhen; He, Hongdi

    2016-09-01

    In this paper, a new car-following model is proposed by considering the effect of the global average optimal velocity difference on the basis of the full velocity difference (FVD) model. We investigate the influence of the global average optimal velocity difference on the stability of traffic flow by means of linear stability analysis. The analysis indicates that the stable region is enlarged by taking the global average optimal velocity difference effect into account. Subsequently, the mKdV equation near the critical point and its kink-antikink soliton solution, which can describe the traffic jam transition, are derived from nonlinear analysis. Furthermore, numerical simulations confirm that the global average optimal velocity difference effect can efficiently improve the stability of traffic flow, which shows that the new term should be taken into account in car-following theory to suppress traffic congestion.
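    To see how such an extension is typically simulated, the sketch below integrates an FVD-type model on a ring road with an added global-average coupling term; the form of that term here is an assumption for illustration, not the paper's exact formulation:

```python
# Ring-road simulation of an FVD-type car-following model with an assumed
# global-average coupling term (gamma). OV function is the classic Bando form.

import numpy as np

N, L = 50, 500.0                  # number of cars, ring-road length (m)
a, lam, gamma = 0.8, 0.3, 0.2     # sensitivity, velocity-difference and global gains
V = lambda dx: 6.75 + 7.91 * np.tanh(0.13 * (dx - 5.0) - 1.57)  # optimal velocity (m/s)

rng = np.random.default_rng(2)
x = np.linspace(0.0, L, N, endpoint=False) + rng.normal(0.0, 0.5, N)  # perturbed positions
v = np.full(N, V(L / N))
dt = 0.1

for _ in range(5000):
    dx = np.roll(x, -1) - x       # headway to the car ahead
    dx[-1] += L                   # close the ring
    dv = np.roll(v, -1) - v
    g = V(dx).mean() - V(dx)      # global average optimal velocity difference (assumed form)
    v = np.maximum(v + dt * (a * (V(dx) - v) + lam * dv + gamma * g), 0.0)
    x = x + v * dt

print(round(float(v.std()), 6))   # small spread: the perturbed uniform flow relaxes back
```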

  5. An Instructional Model for Preparing Accounting/Computing Clerks in Michigan Secondary School Office Education Programs, Part I and Part II.

    ERIC Educational Resources Information Center

    Moskovis, L. Michael; McKitrick, Max O.

    Outlined in this two-part document is a model for the implementation of a business-industry oriented program designed to provide high school seniors with updated training in the skills and concepts necessary for developing competencies in entry-level and second-level accounting jobs that involve accounts receivable, accounts payable, and payroll…

  6. A new form of the elliptic relaxation equation to account for wall effects in RANS modeling

    NASA Astrophysics Data System (ADS)

    Manceau, Rémi; Hanjalić, Kemal

    2000-09-01

    Different methods for improving the behavior in the logarithmic layer of the elliptic relaxation equation, which enables the extension of Reynolds stress models or eddy viscosity models down to the wall, are tested in a channel flow at Reτ = 590 and compared with direct numerical simulation (DNS) data. First, a priori tests are performed in order to confirm the improvement predicted by the theory, with either the Rotta+IP (isotropization of production) model or the Speziale-Sarkar-Gatski (SSG) model as the source term of the elliptic relaxation equation. The best form of the model is then used for full simulations, in a Durbin second-moment closure or in the framework of the v2-f model. It is shown that the results can be significantly improved, in particular by using a formulation based on a refined modeling of the two-point correlations involved in the redistribution term.
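    For reference, Durbin's elliptic relaxation equation, whose log-layer behaviour the tested formulations aim to improve, has the standard form sketched below; phi^h_ij denotes the quasi-homogeneous pressure-strain model (here Rotta+IP or SSG), the redistribution term is modelled as k f_ij, and C_L, C_eta are model constants:

```latex
% Standard form of the elliptic relaxation equation and its length scale.
f_{ij} - L^2 \nabla^2 f_{ij} = \frac{\phi_{ij}^{h}}{k},
\qquad
L = C_L \max\!\left( \frac{k^{3/2}}{\varepsilon},\;
                     C_\eta \left( \frac{\nu^3}{\varepsilon} \right)^{1/4} \right)
```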

  7. Accounting for spatial effects in land use regression for urban air pollution modeling.

    PubMed

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects--e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models--may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R² values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models.
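    A minimal GWR sketch shows the mechanics: at each target location a Gaussian kernel of distance weights the observations and a local weighted least-squares fit is computed. The data, covariates, and bandwidth below are synthetic assumptions, not the study's models:

```python
# Geographically weighted regression by hand: kernel-weighted local least squares.
# Synthetic NO2 data with a spatial trend to mimic non-stationarity.

import numpy as np

rng = np.random.default_rng(3)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))                  # monitoring sites
X = np.c_[np.ones(n), rng.normal(size=(n, 2)),            # e.g. traffic, land use (assumed)
          rng.normal(size=n)]                             # wind speed/direction index (assumed)
beta_true = np.array([20.0, 4.0, -2.0, 3.0])
no2 = X @ beta_true + coords[:, 0] + rng.normal(0, 1, n)  # spatial trend in the intercept

def gwr_fit(target, bandwidth=2.0):
    d2 = ((coords - target) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))                # Gaussian kernel weights
    WX = X * w[:, None]
    return np.linalg.solve(X.T @ WX, WX.T @ no2)          # local weighted LS coefficients

print(gwr_fit(np.array([2.0, 5.0])))                      # local coefficients vary in space
print(gwr_fit(np.array([8.0, 5.0])))
```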

  8. An intuitive Bayesian spatial model for disease mapping that accounts for scaling.

    PubMed

    Riebler, Andrea; Sørbye, Sigrunn H; Simpson, Daniel; Rue, Håvard

    2016-08-01

    In recent years, disease mapping studies have become a routine application within geographical epidemiology and are typically analysed within a Bayesian hierarchical model formulation. A variety of model formulations for the latent level have been proposed, but all come with inherent issues. In the classical BYM (Besag, York and Mollié) model, the spatially structured component cannot be seen independently from the unstructured component. This makes prior definitions for the hyperparameters of the two random effects challenging. There are alternative model formulations that address this confounding; however, the issue of how to choose interpretable hyperpriors is still unsolved. Here, we discuss a recently proposed parameterisation of the BYM model that leads to improved parameter control, as the hyperparameters can be seen independently from each other. Furthermore, the need for a scaled spatial component is addressed, which facilitates assignment of interpretable hyperpriors and makes these transferable between spatial applications with different graph structures. The hyperparameters themselves are used to define flexible extensions of simple base models. Consequently, penalised complexity priors for these parameters can be derived based on the information-theoretic distance from the flexible model to the base model, giving priors with a clear interpretation. We provide implementation details for the new model formulation which preserve sparsity properties, and we investigate the model performance systematically, comparing it to existing parameterisations. Through a simulation study, we show that the new model performs well, exhibiting both good learning abilities and good shrinkage behaviour. In terms of model choice criteria, the proposed model performs at least as well as existing parameterisations, but only the new formulation offers parameters that are interpretable and hyperpriors that have a clear meaning.
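    For reference, the discussed parameterisation (often written as BYM2) combines the structured and unstructured effects through a single precision tau and a mixing parameter phi; a sketch of the latent level:

```latex
% Scaled BYM (BYM2) parameterisation of the latent field.
\eta_i = \mu + \frac{1}{\sqrt{\tau}}
         \left( \sqrt{1-\phi}\, v_i + \sqrt{\phi}\, u_i^{\star} \right)
```

    Here v_i is i.i.d. standard Gaussian, u*_i is the ICAR (Besag) component scaled to unit generalised variance, tau is the overall precision, and phi in [0, 1] is the proportion of the marginal variance attributed to spatial structure.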

  9. Reconstruction of Arabidopsis metabolic network models accounting for subcellular compartmentalization and tissue-specificity.

    PubMed

    Mintz-Oron, Shira; Meir, Sagit; Malitsky, Sergey; Ruppin, Eytan; Aharoni, Asaph; Shlomi, Tomer

    2012-01-03

    Plant metabolic engineering is commonly used in the production of functional foods and quality trait improvement. However, to date, computational model-based approaches have only been scarcely used in this important endeavor, in marked contrast to their prominent success in microbial metabolic engineering. In this study we present a computational pipeline for the reconstruction of fully compartmentalized tissue-specific models of Arabidopsis thaliana on a genome scale. This reconstruction involves automatic extraction of known biochemical reactions in Arabidopsis for both primary and secondary metabolism, automatic gap-filling, and the implementation of methods for determining subcellular localization and tissue assignment of enzymes. The reconstructed tissue models are amenable to constraint-based modeling analysis and significantly extend previous model reconstructions. A set of computational validations (i.e., cross-validation tests, simulations of known metabolic functionalities) and experimental validations (comparison with experimental metabolomics datasets across various compartments and tissues) strongly testify to the predictive ability of the models. The utility of the derived models was demonstrated in the prediction of measured fluxes in metabolically engineered seed strains and in the design of genetic manipulations that are expected to increase vitamin E content, a significant nutrient for human health. Overall, the reconstructed tissue models are expected to lay down the foundations for computational-based rational design of plant metabolic engineering. The reconstructed compartmentalized Arabidopsis tissue models are MIRIAM-compliant and are available upon request.
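    The constraint-based analysis such reconstructions enable reduces, in its simplest form, to a linear program: maximise an objective flux subject to steady-state mass balance and flux bounds. A toy example with an invented three-reaction network, not the Arabidopsis model itself:

```python
# Toy flux balance analysis: maximise an objective flux subject to S v = 0
# and bounds. The stoichiometry and bounds are invented for illustration.

import numpy as np
from scipy.optimize import linprog

# One metabolite x: produced by uptake (v1), consumed by growth (v2) and export (v3).
S = np.array([[1.0, -1.0, -1.0]])      # stoichiometric matrix, 1 metabolite x 3 fluxes
c = np.array([0.0, -1.0, 0.0])         # maximise v2 (linprog minimises, so negate)
bounds = [(0, 10), (0, None), (0, 2)]  # uptake capped at 10, export capped at 2

res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
print(res.x)                           # optimal flux distribution: v1 = v2 = 10, v3 = 0
```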

  10. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    NASA Astrophysics Data System (ADS)

    Branlard, E.; Gaunaa, M.; Machefaux, E.

    2014-06-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum (BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw-models. The model and its implementation are presented. Data from the MEXICO experiment are used as a basis for validation. Three tools using the same 2D airfoil coefficient data are compared: a BEM code, an Actuator-Line code and a vortex code. The vortex code is further used to validate the results from the newly implemented BEM yaw-model. Significant improvements are obtained for the prediction of loads and induced velocities. Further relaxation of the main assumptions of the model is briefly presented and discussed.

  11. Beyond Socks, Signs, and Alarms: A Reflective Accountability Model for Fall Prevention.

    PubMed

    Hoke, Linda M; Guarracino, Dana

    2016-01-01

    Despite standard fall precautions, including nonskid socks, signs, alarms, and patient instructions, our 48-bed cardiac intermediate care unit (CICU) had a 41% increase in the rate of falls (from 2.2 to 3.1 per 1,000 patient days) and a 65% increase in the rate of falls with injury (from 0.75 to 1.24 per 1,000 patient days) between fiscal years (FY) 2012 and 2013. An evaluation of the falls data conducted by a cohort of four clinical nurses found that the majority of falls occurred when patients were unassisted by nurses, most often during toileting. Supported by the leadership team, the clinical nurses developed an accountability care program that required nurses to use reflective practice to evaluate each fall, including sending an e-mail to all staff members with both the nurse's and the patient's perspective on the fall, as well as the nurse's reflection on what could have been done to prevent the fall. Other program components were a postfall huddle and guidelines for assisting and remaining with fall risk patients for the duration of their toileting. Placing the accountability for falls with the nurse resulted in decreases in the unit's rates of falls and falls with injury of 55% (from 3.1 to 1.39 per 1,000 patient days) and 72% (from 1.24 to 0.35 per 1,000 patient days), respectively, between FY2013 and FY2014. Prompt call bell response (less than 60 seconds) also contributed to the goal of fall prevention.

  12. A statistical finite element model of the knee accounting for shape and alignment variability.

    PubMed

    Rao, Chandreshwar; Fitzpatrick, Clare K; Rullkoetter, Paul J; Maletsky, Lorin P; Kim, Raymond H; Laz, Peter J

    2013-10-01

    By characterizing anatomical differences in size and shape between subjects, statistical shape models enable population-based evaluations in biomechanics. Statistical models have largely focused on individual bones with application to implant sizing, bone fracture and osteoarthritis; however, in joint mechanics applications, the statistical models must consider the geometry of multiple structures of a joint and their relative position. Accordingly, the objectives of this study were to develop a statistical shape and alignment modeling (SSAM) approach to characterize the intersubject variability in bone morphology and alignment for the structures of the knee, to demonstrate the statistical model's ability to describe variability in a training set and to generate realistic instances for use in finite element evaluation of joint mechanics. The statistical model included representations of the bone and cartilage for the femur, tibia and patella from magnetic resonance images and relative alignment of the structures at a known, loaded position in an experimental knee simulator for a training set of 20 specimens. The statistical model described relationships or modes of variation in shape and relative alignment of the knee structures. By generating new 'virtual subjects' with physiologically realistic knee anatomy, the modeling approach can efficiently perform investigations into joint mechanics and implant design which benefit from population-based considerations.
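
    Statistical shape-and-alignment models of this kind are typically built by principal component analysis of stacked, aligned coordinates, with new "virtual subjects" generated by sampling the mode scores. A minimal Python sketch with stand-in data (the dimensions and random data below are hypothetical, not the 20-specimen training set):

        import numpy as np

        rng = np.random.default_rng(0)
        n_subjects, n_coords = 20, 300                 # e.g., 100 landmarks x 3 coordinates
        X = rng.normal(size=(n_subjects, n_coords))    # stand-in for aligned training shapes

        mean_shape = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
        eigvals = s**2 / (n_subjects - 1)              # variance captured by each mode

        # Generate a virtual subject: perturb the mean along the first k modes
        k = 5
        b = rng.normal(size=k) * np.sqrt(eigvals[:k])  # mode scores within plausible range
        virtual_subject = mean_shape + b @ Vt[:k]
        print(virtual_subject.shape)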

  13. Accounting for Individual Differences in Bradley-Terry Models by Means of Recursive Partitioning

    ERIC Educational Resources Information Center

    Strobl, Carolin; Wickelmaier, Florian; Zeileis, Achim

    2011-01-01

    The preference scaling of a group of subjects may not be homogeneous, but different groups of subjects with certain characteristics may show different preference scalings, each of which can be derived from paired comparisons by means of the Bradley-Terry model. Usually, either different models are fit in predefined subsets of the sample or the…
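
    For reference, the Bradley-Terry model assigns each object i a positive worth parameter \pi_i and models a paired comparison as

        P(i preferred over j) = \pi_i / (\pi_i + \pi_j),

    so the recursive-partitioning approach described above amounts to refitting the worth parameters \pi within data-driven subgroups of subjects.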

  14. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    EPA Science Inventory

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization there is increased mobility, leading to a higher amount of traffic-related activity on a global scale. ...

  15. Value-Added Models of Assessment: Implications for Motivation and Accountability

    ERIC Educational Resources Information Center

    Anderman, Eric M.; Anderman, Lynley H.; Yough, Michael S.; Gimbert, Belinda G.

    2010-01-01

    In this article, we examine the relations of value-added models of measuring academic achievement to student motivation. Using an achievement goal orientation theory perspective, we argue that value-added models, which focus on the progress of individual students over time, are more closely aligned with research on student motivation than are more…

  16. Storage-based approaches to build floodplain inundation modelling capability in river system models for water resources planning and accounting

    NASA Astrophysics Data System (ADS)

    Dutta, Dushmanta; Teng, Jin; Vaze, Jai; Lerat, Julien; Hughes, Justin; Marvanek, Steve

    2013-11-01

    We develop two innovative approaches for floodplain modelling in river system models. Both approaches can estimate floodplain fluxes and stores at the river-reach scale. The performance of the second approach is equivalent to that of a hydrodynamic model, and it is suitable for rapid inundation estimates at high spatial resolution. These new developments enable river system models to improve environmental flow modelling.

  17. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    PubMed

    Alibrahim, Abdullah; Wu, Shinyi

    2016-10-04

    Accountable care organizations (ACOs) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like ACOs that depend on continuity of care and accountability. The financial, utilization, and behavioral implications associated with a patient's decision to forego local health care providers for more distant ones to access higher quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACOs serving patients, and embedded in it a conditional logit decision model representing patients' choice of care providers. This simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide whether they remain in an ACO and perform a quality-improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACOs. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand the implications of patient choice and assess potential policy controls.
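
    In a conditional logit choice model of the kind embedded in the simulation, a patient agent picks provider j with probability proportional to the exponentiated utility of that option. A minimal Python sketch (the attributes and taste weights below are hypothetical, not the study's calibration):

        import numpy as np

        # Hypothetical attributes for 4 providers: [quality score, travel distance (km)]
        attributes = np.array([
            [0.9, 30.0],
            [0.7,  5.0],
            [0.6,  2.0],
            [0.8, 15.0],
        ])
        beta = np.array([4.0, -0.05])    # assumed weights: prefers quality, dislikes distance

        utility = attributes @ beta
        p = np.exp(utility - utility.max())   # subtract max for numerical stability
        p /= p.sum()                          # conditional logit choice probabilities

        rng = np.random.default_rng(1)
        choice = rng.choice(len(p), p=p)      # simulate one patient's provider choice
        print(p.round(3), "chosen provider:", choice)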

  18. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers a dual-system account of the decision-making process. This theory holds that the process is governed by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. Goal-directed behaviors, by contrast, are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore which account of them is best supported. Based on the results, some testable predictions are also presented.
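
    The model-free (habitual) controller described above is commonly implemented as temporal-difference learning. A minimal Python sketch of Q-learning with softmax action selection on a two-armed probabilistic task (reward probabilities and parameters are illustrative, not the paper's):

        import numpy as np

        rng = np.random.default_rng(0)
        reward_prob = [0.8, 0.2]      # hypothetical reward probabilities of two actions
        Q = np.zeros(2)               # action values learned by trial and error
        alpha, beta = 0.1, 3.0        # learning rate and softmax inverse temperature

        for trial in range(1000):
            p = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax action selection
            a = rng.choice(2, p=p)
            r = float(rng.random() < reward_prob[a])        # probabilistic reward
            Q[a] += alpha * (r - Q[a])                      # prediction-error update

        print(Q)   # values approach the true reward probabilities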

  19. Modeling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    NASA Astrophysics Data System (ADS)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2014-01-01

    Coral reefs are diverse ecosystems threatened by rising CO2 levels that are driving the observed increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking the rates of growth, recovery and calcification to the rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The different characteristics of this model are also assessed against independent data to show that the model captures the observed response of corals. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles for understanding the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can reproduce much of the observed response of corals to changes in temperature and ocean acidification.
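
    The first of the four principles, the Arrhenius equation, is worth stating explicitly: a rate constant k depends on absolute temperature T as

        k = A \exp(-E_a / (R T)),

    with activation energy E_a, gas constant R and pre-exponential factor A; in models of this kind it sets the baseline temperature dependence of growth and calcification rates.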

  20. A novel constitutive model of skeletal muscle taking into account anisotropic damage.

    PubMed

    Ito, D; Tanaka, E; Yamamoto, S

    2010-01-01

    The purpose of this study is to develop a constitutive model of skeletal muscle that describes material anisotropy, viscoelasticity and damage of muscle tissue. A free energy function is described as the sum of volumetric elastic, isochoric elastic and isochoric viscoelastic parts. The isochoric elastic part is divided into two types of shear response and the response in the fiber direction. To represent the dependence of the mechanical properties on muscle activity, we incorporate a contractile element into the model. The viscoelasticity of muscle is represented by a three-dimensional model constructed by extending the one-dimensional generalized Maxwell model. Based on the framework of continuum damage mechanics, the anisotropic damage of muscle tissue is expressed by a second-order damage tensor. The evolution of the damage is assumed to depend on the current strain and damage. The evolution equation is formulated using the representation theorem of tensor functions. The proposed model is applied to experimental data from the literature on tensile mechanical properties in the fiber direction and compression properties in the fiber and cross-fiber directions. The model can predict non-linear mechanical properties and breaking points.

  1. A thermomechanical model accounting for the behavior of shape memory alloys in finite deformations

    NASA Astrophysics Data System (ADS)

    Haller, Laviniu; Nedjar, Boumedienne; Moumni, Ziad; Vedinaş, Ioan; Trană, Eugen

    2016-07-01

    Shape memory alloys (SMA) exhibit an interesting behavior: they can undergo large strains and then recover their undeformed shape upon heating. In this context, one of the aspects that challenged many researchers was the development of a mathematical model to predict the behavior of a known SMA under real-life conditions, i.e., finite strain. This paper develops a finite-strain mathematical model for a Ni-Ti SMA, under superelastic experimental conditions and uniaxial mechanical loading, based on the Zaki-Moumni 3D mathematical model developed under the small-perturbations assumption. A comparison between experimental findings and calculated results is also presented. The proposed finite-strain mathematical model shows good agreement with experimental data.

  2. Modelling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    NASA Astrophysics Data System (ADS)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2015-05-01

    Coral reefs are diverse ecosystems that are threatened by rising CO2 levels through increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking rates of growth, recovery and calcification to rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, correlated up- and down-regulation of traits that are consistent with resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The performance of the model is assessed against independent data to demonstrate how it can capture the observed response of corals to stress. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles to help understand the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can give insights into how corals respond to changes in temperature and ocean acidification.

  3. Accounting for nitrogen fixation in simple models of lake nitrogen loading/export.

    PubMed

    Ruan, Xiaodan; Schellenger, Frank; Hellweger, Ferdi L

    2014-05-20

    Coastal eutrophication, an important global environmental problem, is primarily caused by excess nitrogen and management efforts consequently focus on lowering watershed N export (e.g., by reducing fertilizer use). Simple quantitative models are needed to evaluate alternative scenarios at the watershed scale. Existing models generally assume that, for a specific lake/reservoir, a constant fraction of N loading is exported downstream. However, N fixation by cyanobacteria may increase when the N loading is reduced, which may change the (effective) fraction of N exported. Here we present a model that incorporates this process. The model (Fixation and Export of Nitrogen from Lakes, FENL) is based on a steady-state mass balance with loading, output, loss/retention, and N fixation, where the amount fixed is a function of the N/P ratio of the loading (i.e., when N/P is less than a threshold value, N is fixed). Three approaches are used to parametrize and evaluate the model, including microcosm lab experiments, lake field observations/budgets and lake ecosystem model applications. Our results suggest that N export will not be reduced proportionally with N loading, which needs to be considered when evaluating management scenarios.
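
    A minimal sketch of a steady-state balance with a threshold-triggered fixation term, in the spirit of FENL (the functional form and parameter values below are illustrative assumptions, not the calibrated model):

        def n_export(n_load, p_load, retention=0.4, np_threshold=16.0, fix_frac=0.5):
            """Steady-state lake N export with a simple N-fixation term.

            If the loading N/P ratio falls below a threshold, cyanobacteria are
            assumed to fix a fraction of the N deficit (illustrative formulation).
            """
            deficit = max(0.0, np_threshold * p_load - n_load)
            fixation = fix_frac * deficit            # N added by fixation
            return (1.0 - retention) * (n_load + fixation)

        # Halving N loading does not halve export once fixation kicks in:
        print(n_export(n_load=1600.0, p_load=100.0))   # N/P = 16: no fixation, export 960
        print(n_export(n_load=800.0,  p_load=100.0))   # N/P = 8: fixation compensates, export 720

    The example illustrates the paper's central point: because fixation responds to the N/P ratio of the loading, the effective export fraction rises as N loading falls.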

  4. Analysis of homogeneous/non-homogeneous nanofluid models accounting for nanofluid-surface interactions

    NASA Astrophysics Data System (ADS)

    Ahmad, R.

    2016-07-01

    This article reports an unbiased analysis for the water based rod shaped alumina nanoparticles by considering both the homogeneous and non-homogeneous nanofluid models over the coupled nanofluid-surface interface. The mechanics of the surface are found for both the homogeneous and non-homogeneous models, which were ignored in previous studies. The viscosity and thermal conductivity data are implemented from the international nanofluid property benchmark exercise. All the simulations are being done by using the experimentally verified results. By considering the homogeneous and non-homogeneous models, the precise movement of the alumina nanoparticles over the surface has been observed by solving the corresponding system of differential equations. For the non-homogeneous model, a uniform temperature and nanofluid volume fraction are assumed at the surface, and the flux of the alumina nanoparticle is taken as zero. The assumption of zero nanoparticle flux at the surface makes the non-homogeneous model physically more realistic. The differences of all profiles for both the homogeneous and nonhomogeneous models are insignificant, and this is due to small deviations in the values of the Brownian motion and thermophoresis parameters.

  5. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    PubMed

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined--US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel--95% of the results occurred within ±20 g CO2e MJ(-1) of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
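
    Parametric uncertainty of the kind quantified here is typically propagated by Monte Carlo sampling over the model inputs. A generic Python sketch (the distributions and the response surface below are placeholders, not the paired economic/emissions models of the study):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000

        # Placeholder parameter distributions (illustrative only)
        crop_yield_elasticity = rng.normal(0.25, 0.05, n)
        new_land_productivity = rng.uniform(0.5, 0.9, n)

        # Placeholder response surface mapping parameters to ILUC intensity (g CO2e/MJ)
        iluc = (30.0
                - 40.0 * (crop_yield_elasticity - 0.25)
                - 25.0 * (new_land_productivity - 0.7))

        lo, hi = np.percentile(iluc, [2.5, 97.5])
        print(f"mean = {iluc.mean():.1f}, 95% interval = ({lo:.1f}, {hi:.1f}) g CO2e/MJ")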

  6. Accounting for Misclassified Outcomes in Binary Regression Models Using Multiple Imputation With Internal Validation Data

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Troester, Melissa A.; Richardson, David B.

    2013-01-01

    Outcome misclassification is widespread in epidemiology, but methods to account for it are rarely used. We describe the use of multiple imputation to reduce bias when validation data are available for a subgroup of study participants. This approach is illustrated using data from 308 participants in the multicenter Herpetic Eye Disease Study between 1992 and 1998 (48% female; 85% white; median age, 49 years). The odds ratio comparing the acyclovir group with the placebo group on the gold-standard outcome (physician-diagnosed herpes simplex virus recurrence) was 0.62 (95% confidence interval (CI): 0.35, 1.09). We masked ourselves to physician diagnosis except for a 30% validation subgroup used to compare methods. Multiple imputation (odds ratio (OR) = 0.60; 95% CI: 0.24, 1.51) was compared with naive analysis using self-reported outcomes (OR = 0.90; 95% CI: 0.47, 1.73), analysis restricted to the validation subgroup (OR = 0.57; 95% CI: 0.20, 1.59), and direct maximum likelihood (OR = 0.62; 95% CI: 0.26, 1.53). In simulations, multiple imputation and direct maximum likelihood had greater statistical power than did analysis restricted to the validation subgroup, yet all 3 provided unbiased estimates of the odds ratio. The multiple-imputation approach was extended to estimate risk ratios using log-binomial regression. Multiple imputation has advantages regarding flexibility and ease of implementation for epidemiologists familiar with missing data methods. PMID:24627573

  7. Accounting for misclassified outcomes in binary regression models using multiple imputation with internal validation data.

    PubMed

    Edwards, Jessie K; Cole, Stephen R; Troester, Melissa A; Richardson, David B

    2013-05-01

    Outcome misclassification is widespread in epidemiology, but methods to account for it are rarely used. We describe the use of multiple imputation to reduce bias when validation data are available for a subgroup of study participants. This approach is illustrated using data from 308 participants in the multicenter Herpetic Eye Disease Study between 1992 and 1998 (48% female; 85% white; median age, 49 years). The odds ratio comparing the acyclovir group with the placebo group on the gold-standard outcome (physician-diagnosed herpes simplex virus recurrence) was 0.62 (95% confidence interval (CI): 0.35, 1.09). We masked ourselves to physician diagnosis except for a 30% validation subgroup used to compare methods. Multiple imputation (odds ratio (OR) = 0.60; 95% CI: 0.24, 1.51) was compared with naive analysis using self-reported outcomes (OR = 0.90; 95% CI: 0.47, 1.73), analysis restricted to the validation subgroup (OR = 0.57; 95% CI: 0.20, 1.59), and direct maximum likelihood (OR = 0.62; 95% CI: 0.26, 1.53). In simulations, multiple imputation and direct maximum likelihood had greater statistical power than did analysis restricted to the validation subgroup, yet all 3 provided unbiased estimates of the odds ratio. The multiple-imputation approach was extended to estimate risk ratios using log-binomial regression. Multiple imputation has advantages regarding flexibility and ease of implementation for epidemiologists familiar with missing data methods.

  8. Radiomagnetotelluric two-dimensional forward and inverse modelling accounting for displacement currents

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Pedersen, Laust B.; Siripunvaraporn, Weerachai

    2008-11-01

    Electromagnetic surface measurements with the radiomagnetotelluric (RMT) method in the frequency range between 10 and 300 kHz are typically interpreted in the quasi-static approximation, that is, assuming displacement currents are negligible. In this paper, the dielectric effect of displacement currents on RMT responses over resistive subsurface models is studied with a 2-D forward and inverse scheme that can operate both in the quasi-static approximation and including displacement currents. Forward computations of simple models exemplify how responses that allow for displacement currents deviate from responses computed in the quasi-static approximation. The differences become most obvious for highly resistive subsurface models of about 3000 Ωm and more and at high frequencies. For such cases, the apparent resistivities and phases of the transverse magnetic (TM) and transverse electric (TE) modes are significantly smaller than in the quasi-static approximation. Along profiles traversing 2-D subsurface models, sign reversals in the real part of the vertical magnetic transfer function (VMT) are often more pronounced than in the quasi-static approximation. On both sides of such sign reversals, the responses computed including displacement currents are larger than typical measurement errors. The 2-D inversion of synthetic data computed including displacement currents demonstrates that serious misinterpretations in the form of artefacts in inverse models can be made if displacement currents are neglected during the inversion. Hence, the inclusion of the dielectric effect is a crucial improvement over existing quasi-static 2-D inverse schemes. Synthetic data from a 2-D model with constant dielectric permittivity and a conductive block buried in a highly resistive layer, which in turn is underlain by a conductive layer, are inverted. In the quasi-static inverse model, the depth to the conductive structures is overestimated, artefactual resistors appear on both sides of the
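
    The physics at issue is visible in the plane-wave wavenumber. With displacement currents retained (in one common time convention),

        k^2 = \mu\varepsilon\omega^2 + i\mu\sigma\omega,

    whereas the quasi-static approximation keeps only the conduction term, k^2 \approx i\mu\sigma\omega. The approximation is adequate only when \sigma \gg \omega\varepsilon; at RMT frequencies up to 300 kHz over ground of several thousand Ωm, the two terms become comparable, which is why the dielectric term must be included.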

  9. Accounting for anthropogenic actions in modeling of stream flow at the regional scale

    NASA Astrophysics Data System (ADS)

    David, C. H.; Famiglietti, J. S.

    2013-12-01

    The modeling of the horizontal movement of water from land to coasts at scales ranging from 10^5 km^2 to 10^6 km^2 has benefited from extensive research within the past two decades. In parallel, community technology for gathering/sharing surface water observations and datasets for describing the geography of terrestrial water bodies have recently had groundbreaking advancements. Yet, the fields of computational hydrology and hydroinformatics have barely started to work hand-in-hand, and much research remains to be performed before we can better understand the anthropogenic impact on surface water through combined observations and models. Here, we build on our existing river modeling approach that leverages community state-of-the-art tools such as atmospheric data from the second phase of the North American Land Data Assimilation System (NLDAS2), river networks from the enhanced National Hydrography Dataset (NHDPlus), and observations from the U.S. Geological Survey National Water Information System (NWIS) obtained through CUAHSI webservices. Modifications are made to our integrated observational/modeling system to include treatment for anthropogenic actions such as dams, pumping and diversions in river networks. Initial results of a study focusing on the entire State of California suggest that availability of data describing human alterations on natural river networks, together with proper representation of such actions in our models, could help advance hydrology further. [Figure: snapshot from an animation of flow in California river networks; the full animation is available at http://www.ucchm.org/david/rapid.htm]

  10. A dissolution model that accounts for coverage of mineral surfaces by precipitation in core floods

    NASA Astrophysics Data System (ADS)

    Pedersen, Janne; Jettestuen, Espen; Madland, Merete V.; Hildebrand-Habel, Tania; Korsnes, Reidar I.; Vinningland, Jan Ludvig; Hiorth, Aksel

    2016-01-01

    In this paper, we propose a model for evolution of reactive surface area of minerals due to surface coverage by precipitating minerals. The model is used to interpret results from an experiment where a chalk core was flooded with MgCl2 for 1072 days, giving rise to calcite dissolution and magnesite precipitation. The model successfully describes both the long-term behavior of the measured effluent concentrations and the more or less homogeneous distribution of magnesite found in the core after 1072 days. The model also predicts that precipitating magnesite minerals form as larger crystals or aggregates of smaller size crystals, and not as thin flakes or as a monomolecular layer. Using rate constants obtained from literature gave numerical effluent concentrations that diverged from observed values only after a few days of flooding. To match the simulations to the experimental data after approximately 1 year of flooding, a rate constant that is four orders of magnitude lower than reported by powder experiments had to be used. We argue that a static rate constant is not sufficient to describe a chalk core flooding experiment lasting for nearly 3 years. The model is a necessary extension of standard rate equations in order to describe long term core flooding experiments where there is a large degree of textural alteration.

  11. An extended continuum model accounting for the driver's timid and aggressive attributions

    NASA Astrophysics Data System (ADS)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-04-01

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. By applying linear stability theory, we present an analysis of the new model's linear stability. Through nonlinear analysis, the KdV-Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving performs better than timid driving because the aggressive driver adjusts his speed promptly according to the leading car's speed. The key finding of the new model is that timid driving deteriorates traffic stability, while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption.
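
    For reference, the KdV-Burgers equation mentioned above combines nonlinear advection with both diffusive and dispersive terms; in one common form (coefficients and sign conventions vary across papers),

        \partial_t u + u\,\partial_x u - \nu\,\partial_x^2 u + \mu\,\partial_x^3 u = 0,

    where \nu > 0 controls dissipation and \mu controls dispersion; in traffic continuum models, u stands for the density perturbation near the neutral stability line.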

  12. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    SciTech Connect

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long, thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axisymmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  13. Variable parameter McCarthy-Muskingum flow transport model for compound channels accounting for distributed non-uniform lateral flow

    NASA Astrophysics Data System (ADS)

    Swain, Ratnakar; Sahoo, Bhabagrahi

    2015-11-01

    In this study, the fully volume conservative, simplified hydrodynamic-based variable parameter McCarthy-Muskingum (VPMM) flow transport model advocated by Perumal and Price in 2013 is extended to incorporate distributed non-uniform lateral flow in the routing scheme, accounting for compound river channel flows. The revised VPMM formulation is derived from the combined form of the de Saint-Venant continuity and momentum equations with spatiotemporally distributed lateral flow, which is solved using the finite difference box scheme. This revised model addresses the earlier model's limitations of: (i) not accounting for non-uniformly distributed lateral flow, (ii) ignoring floodplain flow, and (iii) not considering the catchment dynamics of lateral flow generation, which restricted its real-time application. The efficacy of the revised formulation is tested by simulating 16 years (1980-1995) of river runoff from real-time storm events under scarce morpho-hydrological data conditions in a tropical monsoon-type 48 km Bolani-Gomlai reach of the Brahmani River in eastern India. The spatiotemporally distributed lateral flows generated in real time are computed by a water balance approach accounting for catchment characteristics of normalized network area function, land use land cover classes, and soil textural classes; and hydro-meteorological variables of precipitation, soil moisture, minimum and maximum temperatures, wind speed, relative humidity, and solar radiation. The multiple error measures used in this study and the simulation results reveal that the revised VPMM model has considerable practical utility in estimating event-based and long-term meso-scale river runoff (both discharge and its stage) at any ungauged site, enhancing its application for real-time flood estimation.
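
    For orientation, the classic fixed-parameter Muskingum scheme that the VPMM family generalises routes inflow through the storage relation S = K[xI + (1-x)O]; discretising yields three routing coefficients. A minimal Python sketch with illustrative parameters (the VPMM itself makes the parameters variable and adds distributed lateral inflow, which this sketch omits):

        import numpy as np

        def muskingum(inflow, K=12.0, x=0.2, dt=6.0):
            """Classic Muskingum routing; K, x (dimensionless) and dt (hours) are illustrative."""
            denom = 2 * K * (1 - x) + dt
            c0 = (dt - 2 * K * x) / denom
            c1 = (dt + 2 * K * x) / denom
            c2 = (2 * K * (1 - x) - dt) / denom     # c0 + c1 + c2 = 1 (mass conservation)
            outflow = [inflow[0]]
            for i2, i1 in zip(inflow[1:], inflow[:-1]):
                outflow.append(c0 * i2 + c1 * i1 + c2 * outflow[-1])
            return np.array(outflow)

        hydrograph = np.array([10, 30, 80, 120, 90, 60, 35, 20, 12, 10], dtype=float)
        print(muskingum(hydrograph).round(1))       # attenuated, lagged outflow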

  14. Does Don Fisher's high-pressure manifold model account for phloem transport and resource partitioning?

    PubMed Central

    Patrick, John W.

    2013-01-01

    The pressure flow model of phloem transport envisaged by Münch (1930) has gained wide acceptance. Recently, however, the model has been questioned on structural and physiological grounds. For instance, sub-structures of sieve elements may reduce their hydraulic conductances to levels that impede flow rates of phloem sap and observed magnitudes of pressure gradients to drive flow along sieve tubes could be inadequate in tall trees. A variant of the Münch pressure flow model, the high-pressure manifold model of phloem transport introduced by Donald Fisher may serve to reconcile at least some of these questions. To this end, key predicted features of the high-pressure manifold model of phloem transport are evaluated against current knowledge of the physiology of phloem transport. These features include: (1) An absence of significant gradients in axial hydrostatic pressure in sieve elements from collection to release phloem accompanied by transport properties of sieve elements that underpin this outcome; (2) Symplasmic pathways of phloem unloading into sink organs impose a major constraint over bulk flow rates of resources translocated through the source-path-sink system; (3) Hydraulic conductances of plasmodesmata, linking sieve elements with surrounding phloem parenchyma cells, are sufficient to support and also regulate bulk flow rates exiting from sieve elements of release phloem. The review identifies strong circumstantial evidence that resource transport through the source-path-sink system is consistent with the high-pressure manifold model of phloem transport. The analysis then moves to exploring mechanisms that may link demand for resources, by cells of meristematic and expansion/storage sinks, with plasmodesmal conductances of release phloem. The review concludes with a brief discussion of how these mechanisms may offer novel opportunities to enhance crop biomass yields. PMID:23802003

  15. Assessing and accounting for the effects of model error in Bayesian solutions to hydrogeophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Koepke, C.; Irving, J.; Roubinet, D.

    2014-12-01

    Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models such that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data, collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independent and identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through

  16. Improved signal model for confocal sensors accounting for object depending artifacts.

    PubMed

    Mauch, Florian; Lyda, Wolfram; Gronle, Marc; Osten, Wolfgang

    2012-08-27

    The conventional signal model of confocal sensors is well established and has proven to be exceptionally robust, especially when measuring rough surfaces. Its physical derivation, however, is explicitly based on plane surfaces or point-like objects. Here we show experimental results of a confocal point sensor measurement of a surface standard. The results illustrate the rise of severe artifacts when measuring curved surfaces. On this basis, we present a systematic extension of the conventional signal model that is proven to be capable of qualitatively explaining these artifacts.

  17. Educational Quality Is Measured by Individual Student Achievement Over Time. Mt. San Antonio College AB 1725 Model Accountability System Pilot Proposal.

    ERIC Educational Resources Information Center

    Mount San Antonio Coll., Walnut, CA.

    In December 1990, a project was begun at Mt. San Antonio College (MSAC) in Walnut, California, to develop a model accountability system based on the belief that educational quality is measured by individual achievement over time. This proposal for the Accountability Model (AM) presents information on project methodology and organization in four…

  18. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  19. Teachers' Conceptions of Assessment in Chinese Contexts: A Tripartite Model of Accountability, Improvement, and Irrelevance

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Hui, Sammy K. F.; Yu, Flora W. M.; Kennedy, Kerry J.

    2011-01-01

    The beliefs teachers have about assessment influence classroom practices and reflect cultural and societal differences. This paper reports the development of a new self-report inventory to examine beliefs teachers in Hong Kong and southern China contexts have about the nature and purpose of assessment. A statistically equivalent model for Hong…

  20. An Exemplar-Model Account of Feature Inference from Uncertain Categorizations

    ERIC Educational Resources Information Center

    Nosofsky, Robert M.

    2015-01-01

    In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…

  1. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing antecedent soil moisture condition (SCSI) in field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...
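
    For reference, the curve number procedure referred to above computes storm runoff Q from rainfall P via

        Q = (P - I_a)^2 / (P - I_a + S)   for P > I_a (otherwise Q = 0),
        I_a ≈ 0.2 S,   S = 25400 / CN - 254   (S and P in mm),

    where the retention parameter S, and hence the curve number CN, is adjusted to reflect antecedent soil moisture conditions.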

  2. Trust, Accountability, Autonomy: Building a Teacher-Driven Professional Growth Model

    ERIC Educational Resources Information Center

    Jebson, Hugh; DiNota, Carlo

    2011-01-01

    Faculty evaluation--arguably no other topic in independent education evokes as much passionate discourse--mostly negative, or at least freighted with anxiety. But, in the authors' experience, it does not have to be this way. At their school, Berkeley Preparatory School (Florida), they have recently developed a teacher evaluation model that is…

  3. Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper

    ERIC Educational Resources Information Center

    Hock, Heinrich; Isenberg, Eric

    2012-01-01

    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…

  4. Using state-and-transition modeling to account for imperfect detection in invasive species management

    USGS Publications Warehouse

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.

  5. Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees.

    PubMed

    Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E

    2017-03-01

    Trees outside forest (TOF) can perform a variety of social, economic and ecological functions including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-squares regression and applies the model both inside and outside of forest areas to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R² of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large heterogeneous nationwide area. The model offers significant opportunity for improved estimates on land use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of providing greater spatial resolution and greater within-CC spatial variability compared to the current nationwide estimates.

  6. A communication model of shared decision making: accounting for cancer treatment decisions.

    PubMed

    Siminoff, Laura A; Step, Mary M

    2005-07-01

    The authors present a communication model of shared decision making (CMSDM) that explicitly identifies the communication process as the vehicle for decision making in cancer treatment. In this view, decision making is necessarily a sociocommunicative process whereby people enter into a relationship, exchange information, establish preferences, and choose a course of action. The model derives from contemporary notions of behavioral decision making and ethical conceptions of the doctor-patient relationship. This article briefly reviews the theoretical approaches to decision making, notes deficiencies, and embeds a more socially based process into the dynamics of the physician-patient relationship, focusing on cancer treatment decisions. In the CMSDM, decisions depend on (a) antecedent factors that have potential to influence communication, (b) jointly constructed communication climate, and (c) treatment preferences established by the physician and the patient.

  7. The effects of drugs on human models of emotional processing: an account of antidepressant drug treatment.

    PubMed

    Pringle, Abbie; Harmer, Catherine J

    2015-12-01

    Human models of emotional processing suggest that the direct effect of successful antidepressant drug treatment may be to modify biases in the processing of emotional information. Negative biases in emotional processing are documented in depression, and single or short-term dosing with conventional antidepressant drugs reverses these biases in depressed patients prior to any subjective change in mood. Antidepressant drug treatments also modulate emotional processing in healthy volunteers, which allows the consideration of the psychological effects of these drugs without the confound of changes in mood. As such, human models of emotional processing may prove to be useful for testing the efficacy of novel treatments and for matching treatments to individual patients or subgroups of patients.

  8. The effects of drugs on human models of emotional processing: an account of antidepressant drug treatment

    PubMed Central

    Pringle, Abbie; Harmer, Catherine J.

    2015-01-01

    Human models of emotional processing suggest that the direct effect of successful antidepressant drug treatment may be to modify biases in the processing of emotional information. Negative biases in emotional processing are documented in depression, and single or short-term dosing with conventional antidepressant drugs reverses these biases in depressed patients prior to any subjective change in mood. Antidepressant drug treatments also modulate emotional processing in healthy volunteers, which allows the consideration of the psychological effects of these drugs without the confound of changes in mood. As such, human models of emotional processing may prove to be useful for testing the efficacy of novel treatments and for matching treatments to individual patients or subgroups of patients. PMID:26869848

  9. Situational Force Scoring: Accounting for Combined Arms Effects in Aggregate Combat Models

    DTIC Science & Technology

    1992-01-01

  10. Structure-Based Statistical Mechanical Model Accounts for the Causality and Energetics of Allosteric Communication

    PubMed Central

    Guarnera, Enrico; Berezovsky, Igor N.

    2016-01-01

    Allostery is one of the pervasive mechanisms through which proteins in living systems carry out enzymatic activity, cell signaling, and metabolism control. Effective modeling of the protein function regulation requires a synthesis of the thermodynamic and structural views of allostery. We present here a structure-based statistical mechanical model of allostery, allowing one to observe causality of communication between regulatory and functional sites, and to estimate per residue free energy changes. Based on the consideration of ligand free and ligand bound systems in the context of a harmonic model, corresponding sets of characteristic normal modes are obtained and used as inputs for an allosteric potential. This potential quantifies the mean work exerted on a residue due to the local motion of its neighbors. Subsequently, in a statistical mechanical framework the entropic contribution to allosteric free energy of a residue is directly calculated from the comparison of conformational ensembles in the ligand free and ligand bound systems. As a result, this method provides a systematic approach for analyzing the energetics of allosteric communication based on a single structure. The feasibility of the approach was tested on a variety of allosteric proteins, heterogeneous in terms of size, topology and degree of oligomerization. The allosteric free energy calculations show the diversity of ways and complexity of scenarios existing in the phenomenology of allosteric causality and communication. The presented model is a step forward in developing computational techniques aimed at detecting allosteric sites and discriminating between agonistic and antagonistic effectors, which are among the major goals in allosteric drug design. PMID:26939022

  11. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    NASA Astrophysics Data System (ADS)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization.
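
    In subdomain methods of this kind, the field in each source-free region satisfies Laplace's equation, and separation of variables in polar coordinates yields the general solution (a standard form, not quoted from the paper):

        A_z(r, \theta) = \sum_n (a_n r^n + b_n r^{-n}) (c_n \cos n\theta + d_n \sin n\theta),

    with the coefficients fixed by matching boundary and interface conditions between neighbouring subdomains (magnets, air gap, slots and the equivalent fan-shaped saturation regions).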

  12. Accounting for Long Term Sediment Storage in a Watershed Scale Numerical Model for Suspended Sediment Routing

    NASA Astrophysics Data System (ADS)

    Keeler, J. J.; Pizzuto, J. E.; Skalak, K.; Karwan, D. L.; Benthem, A.; Ackerman, T. R.

    2015-12-01

    Quantifying the delivery of suspended sediment from upland sources to downstream receiving waters is important for watershed management, but current routing models fail to accurately represent lag times in delivery resulting from sediment storage. In this study, we route suspended sediment tagged by a characteristic tracer using a 1-dimensional model that implicitly includes storage and remobilization processes and timescales. From an input location where tagged sediment is added, the model advects suspended sediment downstream at the velocity of the stream (adjusted for the intermittency of transport events). Deposition rates are specified by the fraction of the suspended load stored per kilometer of downstream transport (presumably available from a sediment budget). Tagged sediment leaving storage is evaluated from a convolution equation based on the probability distribution function (pdf) of sediment storage waiting times; this approach avoids the difficulty of accurately representing complex processes of sediment remobilization from floodplain and other deposits. To illustrate the role of storage on sediment delivery, we compare exponential and bounded power-law waiting time pdfs with identical means of 94 years. In both cases, the median travel time for sediment to reach the depocenter in fluvial systems less than 40 km long is governed by in-channel transport and is unaffected by sediment storage. As the channel length increases, however, the median sediment travel time reflects storage rather than in-channel transport; travel times do not vary significantly between the two different waiting time functions. At distances of 50, 100, and 200 km, the median travel time for suspended sediment is 36, 136, and 325 years, orders of magnitude slower than travel times associated with in-channel transport. These computations demonstrate that storage can be neglected for short rivers, but for longer systems, storage controls the delivery of suspended sediment.
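
    The convolution step described above can be sketched directly: the tracer flux leaving storage is the past deposition flux convolved with the waiting-time pdf. A minimal numerical Python sketch with an exponential waiting-time distribution of mean 94 years, as in the comparison above (the deposition series itself is hypothetical):

        import numpy as np

        dt = 1.0                                    # years
        t = np.arange(0, 500, dt)
        mean_wait = 94.0
        pdf = np.exp(-t / mean_wait) / mean_wait    # exponential waiting-time pdf

        deposition = np.zeros_like(t)               # tagged sediment entering storage
        deposition[:10] = 1.0                       # hypothetical 10-year deposition pulse

        # Flux of tagged sediment leaving storage = convolution of inputs with the pdf
        release = np.convolve(deposition, pdf)[: t.size] * dt
        print(release[:5].round(4))                 # slow, multi-century release tail
        print(release.sum() * dt)                   # approaches the deposited mass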

  13. Accounting for management costs in sensitivity analyses of matrix population models.

    PubMed

    Baxter, Peter W J; McCarthy, Michael A; Possingham, Hugh P; Menkhorst, Peter W; McLean, Natasha

    2006-06-01

    Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
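
    Traditional sensitivity analysis of a matrix population model, the baseline the authors extend, computes ∂λ/∂a_ij = v_i w_j / <v, w> from the leading left (v) and right (w) eigenvectors; the cost-aware analysis proposed in the paper would further weight these sensitivities by the cost of changing each vital rate. A minimal Python sketch on a hypothetical two-stage matrix:

        import numpy as np

        A = np.array([[0.0, 1.5],     # hypothetical fecundities
                      [0.5, 0.8]])    # hypothetical survival/transition rates

        eigvals, W = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        lam = eigvals[k].real
        w = W[:, k].real                            # stable stage structure

        eigvals_T, V = np.linalg.eig(A.T)
        v = V[:, np.argmax(eigvals_T.real)].real    # reproductive values

        sensitivity = np.outer(v, w) / (v @ w)      # d(lambda)/d(a_ij)
        elasticity = sensitivity * A / lam          # proportional sensitivities
        print(lam.round(3), elasticity.round(3), sep="\n")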

  14. Accounting for Heaping in Retrospectively Reported Event Data – A Mixture-Model Approach

    PubMed Central

    Bar, Haim Y.; Lillard, Dean R.

    2012-01-01

    When event data are retrospectively reported, more temporally distal events tend to get “heaped” on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand side variables. We develop a model-based approach to estimate the extent of heaping in the data, and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that collect and potentially could collect event data. PMID:22733577

  15. A complete soil hydraulic model accounting for capillary and adsorptive water retention, capillary and film conductivity, and hysteresis

    NASA Astrophysics Data System (ADS)

    Rudiyanto; Sakai, Masaru; van Genuchten, Martinus Th.; Alazba, A. A.; Setiawan, Budi Indra; Minasny, Budiman

    2015-11-01

    A soil hydraulic model that considers capillary hysteretic and adsorptive water retention as well as capillary and film conductivity covering the complete soil moisture range is presented. The model was obtained by incorporating the capillary hysteresis model of Parker and Lenhard into the hydraulic model of Peters-Durner-Iden (PDI) as formulated for the van Genuchten (VG) retention equation. The formulation includes the following processes: capillary hysteresis accounting for air entrapment, closed scanning curves, nonhysteretic adsorption of water onto mineral surfaces, a hysteretic function for the capillary conductivity, a nonhysteretic function for the film conductivity, and a nearly nonhysteretic function of the conductivity as a function of water content (θ) for the entire range of water contents. The proposed model only requires two additional parameters to describe hysteresis. The model was found to accurately describe observed hysteretic water retention and conductivity data for a dune sand. Using a range of published data sets, relationships could be established between the capillary water retention and film conductivity parameters. Including vapor conductivity improved conductivity descriptions in the very dry range. The resulting model allows predictions of the hydraulic conductivity from saturation until complete dryness using water retention parameters.
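
    For orientation, a sketch of the underlying van Genuchten retention function is given below; the PDI adsorptive term, the film/vapor conductivity, and the Parker-Lenhard hysteresis machinery of the full model are omitted. Parameter values are textbook sand values (assumed), not the dune sand fit from the paper.

```python
import numpy as np

def vg_theta(h, theta_r=0.045, theta_s=0.43, alpha=0.145, n=2.68):
    """van Genuchten water retention: water content as a function of suction
    head h [cm]. Parameters are typical values for a sand (assumed here),
    not the dune sand fitted in the paper."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

for h in (0.0, 10.0, 100.0, 1000.0, 15000.0):
    print(f"h = {h:8.0f} cm  theta = {vg_theta(h):.3f}")
```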

  16. Singing with yourself: evidence for an inverse modeling account of poor-pitch singing.

    PubMed

    Pfordresher, Peter Q; Mantell, James T

    2014-05-01

    Singing is a ubiquitous and culturally significant activity that humans engage in from an early age. Nevertheless, some individuals - termed poor-pitch singers - are unable to match target pitches within a musical semitone while singing. In the experiments reported here, we tested whether poor-pitch singing deficits would be reduced when individuals imitate recordings of themselves as opposed to recordings of other individuals. This prediction was based on the hypothesis that poor-pitch singers have not developed an abstract "inverse model" of the auditory-vocal system and instead must rely on sensorimotor associations that they have experienced directly, which is true for sequences an individual has already produced. In three experiments, participants, both accurate and poor-pitch singers, were better able to imitate sung recordings of themselves than sung recordings of other singers. However, this self-advantage was enhanced for poor-pitch singers. These effects were not a byproduct of self-recognition (Experiment 1), vocal timbre (Experiment 2), or the absolute pitch of target recordings (i.e., the advantage remains when recordings are transposed, Experiment 3). Results support the conceptualization of poor-pitch singing as an imitative deficit resulting from a deficient inverse model of the auditory-vocal system with respect to pitch.

  17. Account of near-cathode sheath in numerical models of high-pressure arc discharges

    NASA Astrophysics Data System (ADS)

    Benilov, M. S.; Almeida, N. A.; Baeva, M.; Cunha, M. D.; Benilova, L. G.; Uhrlandt, D.

    2016-06-01

    Three approaches to describing the separation of charges in near-cathode regions of high-pressure arc discharges are compared. The first approach employs a single set of equations, including the Poisson equation, in the whole interelectrode gap. The second approach employs a fully non-equilibrium description of the quasi-neutral bulk plasma, complemented with a newly developed description of the space-charge sheaths. The third, and the simplest, approach exploits the fact that significant power is deposited by the arc power supply into the near-cathode plasma layer, which allows one to simulate the plasma-cathode interaction to the first approximation independently of processes in the bulk plasma. It is found that results given by the different models are generally in good agreement, and in some cases the agreement is even surprisingly good. It follows that the predicted integral characteristics of the plasma-cathode interaction are not strongly affected by details of the model provided that the basic physics is right.

  18. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    USGS Publications Warehouse

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.

  19. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    PubMed Central

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. In contrast to the model proposed by Wanninger and Beer (2015), more datasets (spanning almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, were given together with the correction values in the improved model; in the traditional model, only correction values were given and the precision indexes were completely missing. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which is used for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be largely removed, and the resulting wide lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model

  20. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections.

    PubMed

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-06-18

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. In contrast to the model proposed by Wanninger and Beer (2015), more datasets (spanning almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, were given together with the correction values in the improved model; in the traditional model, only correction values were given and the precision indexes were completely missing. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which is used for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be largely removed, and the resulting wide lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.
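
    The practical point about the stochastic model can be sketched as follows: in the traditional scheme the corrected code observation keeps its usual elevation-dependent variance, whereas the improved scheme also propagates the variance of the correction itself. All numbers below (sigma0, corrections and their precision indexes) are hypothetical placeholders, not values from either model.

```python
import numpy as np

def code_variance(elev_deg, sigma0=0.3, corr_sigma=0.0):
    """Variance of a corrected code observation (illustrative sketch).
    The base variance follows the common elevation-dependent rule
    sigma0^2 / sin(E)^2; the variance of the bias correction itself (the
    'precision index' of the improved model) is then added."""
    base_var = (sigma0 / np.sin(np.radians(elev_deg))) ** 2
    return base_var + corr_sigma ** 2

# Hypothetical corrections: (elevation [deg], correction [m], its sigma [m]).
corrections = [(15.0, -0.42, 0.08), (45.0, 0.10, 0.03), (75.0, 0.31, 0.02)]

for elev, corr, sig in corrections:
    w_trad = 1.0 / code_variance(elev)                  # correction treated as exact
    w_impr = 1.0 / code_variance(elev, corr_sigma=sig)  # correction uncertainty added
    print(f"E = {elev:4.1f} deg  corr = {corr:+.2f} m  "
          f"weight traditional = {w_trad:6.2f}, improved = {w_impr:6.2f}")
```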

  1. Modelling of trace metal uptake by roots taking into account complexation by exogenous organic ligands

    NASA Astrophysics Data System (ADS)

    Jean-Marc, Custos; Christian, Moyne; Sterckeman, Thibault

    2010-05-01

    The context of this study is phytoextraction of soil trace metals such as Cd, Pb or Zn. Trace metal transfer from soil to plant depends on physical and chemical processes such as mineral alteration, transport, adsorption/desorption, reactions in solution and biological processes including the action of plant roots and of associated micro-flora. Complexation of metal ions by organic ligands is considered to play a role in the availability of trace metals for roots, in particular when synthetic ligands (EDTA, NTA, etc.) are added to the soil to increase the solubility of the contaminants. As this role is not clearly understood, we wanted to simulate it in order to quantify the effect of organic ligands on root uptake of trace metals and produce a tool which could help in optimizing the conditions of phytoextraction. We studied the effect of an aminocarboxylate ligand on the absorption of the metal ion by roots, both in hydroponic solution and in soil solution, for which we had to formalize the buffer power for the metal. We assumed that the hydrated metal ion is the only form which can be absorbed by the plants. Transport and reaction processes were modelled for a system made up of the metal M, a ligand L and the metal complex ML. The Tinker-Nye-Barber model was adapted to describe the transport of the solutes M, L and ML in the soil and the absorption of M by the roots. This allowed us to represent the interactions between transport, chelating reactions, absorption of the solutes at the root surface, and root growth with time, in order to simulate metal uptake by a whole root system. Several assumptions were tested, such as (i) absorption of the metal by an infinite sink or according to Michaelis-Menten kinetics, and solute transport by diffusion with and without (ii) mass flow and (iii) soil buffer power for the ligand L. In hydroponic solution (without soil buffer power), ligands decreased the trace metal flux towards roots, as they reduced the concentration of hydrated
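
    The speciation-plus-uptake chain can be illustrated with a single equilibrium M + L <=> ML and Michaelis-Menten absorption of the free ion: increasing total ligand lowers the free-ion concentration and hence the uptake flux, as in the hydroponic case described above. The stability constant and kinetic constants below are illustrative assumptions.

```python
import numpy as np

def free_metal(M_tot, L_tot, K):
    """Free hydrated metal [M] from total metal and total ligand for the
    single equilibrium M + L <=> ML with K = [ML] / ([M][L]). Solves the
    quadratic K*m^2 + (K*(L_tot - M_tot) + 1)*m - M_tot = 0 for m = [M]."""
    b = K * (L_tot - M_tot) + 1.0
    return (-b + np.sqrt(b * b + 4.0 * K * M_tot)) / (2.0 * K)

def uptake_flux(M_free, Vmax=2e-7, Km=1e-6):
    """Michaelis-Menten root uptake flux of the free ion (illustrative
    kinetic constants; mol m^-2 s^-1 and mol/L)."""
    return Vmax * M_free / (Km + M_free)

M_tot = 1e-6          # total dissolved metal, mol/L (illustrative)
K = 1e7               # assumed stability constant of the aminocarboxylate complex
for L_tot in (0.0, 5e-7, 1e-6, 1e-5):
    m = free_metal(M_tot, L_tot, K)
    print(f"L_tot = {L_tot:8.1e} M  free M = {m:8.2e} M  "
          f"flux = {uptake_flux(m):8.2e}")
```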

  2. On the influence of debris in glacier melt modelling: a new temperature-index model accounting for the debris thickness feedback

    NASA Astrophysics Data System (ADS)

    Carenzo, Marco; Mabillard, Johan; Pellicciotti, Francesca; Reid, Tim; Brock, Ben; Burlando, Paolo

    2013-04-01

    The increase of rockfalls from the surrounding slopes and of englacial melt-out material has led to an increase of the debris cover extent on Alpine glaciers. In recent years, distributed debris energy-balance models have been developed to account for the melt-rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are often not available, especially in remote mountain areas such as the Himalaya. Some input data, such as wind or temperature, are also difficult to extrapolate from station measurements. Due to their lower data requirements, empirical models have been used in glacier melt modelling. However, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of debris thickness on melt. In this paper, we present a new temperature-index model accounting for the debris thickness feedback in the computation of melt rates at the debris-ice interface. The empirical parameters (temperature factor, shortwave radiation factor, and lag factor accounting for the energy transfer through the debris layer) are optimized at the point scale for several debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter has been validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. The new model is developed on Miage Glacier, Italy, a debris-covered glacier whose ablation area is mantled in a near-continuous layer of rock. Subsequently, its transferability is tested on Haut Glacier d'Arolla, Switzerland, where debris is thinner and its extent has expanded in recent decades. The results show that the performance of the new debris temperature-index model (DETI) in simulating the glacier melt rate at the point scale
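
    A sketch of the debris temperature-index idea is given below: melt at the debris-ice interface is a lagged linear function of air temperature and incoming shortwave radiation, with the empirical factors and the lag varying with debris thickness. The functional forms and all constants are assumptions for illustration, not the parameterization fitted on Miage Glacier.

```python
import numpy as np

def deti_melt(temp, swrad, debris_h, albedo=0.15):
    """Debris temperature-index melt (sketch of the DETI idea): hourly melt
    at the debris-ice interface as a lagged linear function of air
    temperature and shortwave radiation. The exponential decay of the
    factors with debris thickness, the lag rule and all constants are
    assumptions, not the fitted Miage Glacier parameterization."""
    tf = 0.05 * np.exp(-2.0 * debris_h)       # temperature factor, mm w.e. / (C h)
    srf = 0.0095 * np.exp(-2.0 * debris_h)    # shortwave radiation factor
    lag = int(round(8.0 * debris_h))          # hours of delay through the debris
    t_lag = np.roll(temp, lag)                # wrap-around at start ignored here
    i_lag = np.roll(swrad, lag)
    melt = tf * t_lag + srf * (1.0 - albedo) * i_lag
    return np.clip(melt, 0.0, None)           # no negative melt

hours = np.arange(48)
temp = 5.0 + 5.0 * np.sin(2 * np.pi * (hours - 9) / 24)               # deg C
swrad = np.clip(800 * np.sin(2 * np.pi * (hours - 6) / 24), 0, None)  # W m^-2

for d in (0.05, 0.23, 0.50):  # debris thickness in metres
    print(f"d = {d:4.2f} m  48-h melt = {deti_melt(temp, swrad, d).sum():6.1f} mm w.e.")
```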

  3. Accounting for false-positive acoustic detections of bats using occupancy models

    USGS Publications Warehouse

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    4. Synthesis and applications. Our results suggest that false positives sufficient to affect inferences may be common in acoustic surveys for bats. We demonstrate an approach that can estimate occupancy, regardless of the false-positive rate, when acoustic surveys are paired with capture surveys. Applications of this approach include monitoring the spread of White-Nose Syndrome, estimating the impact of climate change and informing conservation listing decisions. We calculate a site-specific probability of occupancy, conditional on survey results, which could inform local permitting decisions, such as for wind energy projects. More generally, the magnitude of false positives suggests that false-positive occupancy models can improve accuracy in research and monitoring of bats and provide wildlife managers with more reliable information.

  4. Accounting for crustal magnetization in models of the core magnetic field

    NASA Technical Reports Server (NTRS)

    Jackson, Andrew

    1990-01-01

    The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that in horizontal components.

  5. Using satellite and multi-modeling for improving soil moisture and streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Toll, David; Li, Bailing; Xiwu, Zhan; Brian, Cosgrove

    2010-05-01

    Work on this project aims to improve streamflow forecasts for the NOAA River Forecast Centers (RFC) throughout the U.S. using the multi-model capability of the NASA Land Information System and remote sensing data for soil moisture provided by AMSR-E. The RFCs address a range of issues, including peak and low flow predictions as well as river floods and flash floods. The NASA Land Information System (LIS) provides a data integration framework for combining a range of ancillary and satellite data with state-of-the-art data assimilation capabilities. We are currently including: 1) the Noah land surface model (LSM), which simulates soil moisture (both liquid and frozen), soil temperature, skin temperature, snowpack water equivalent, snowpack density, canopy water content, and the traditional energy flux and water flux terms of the surface energy and surface water balance; 2) the Sacramento Distributed model, which is based on the lumped 'SAC-SMA' model used for hydrological simulations; and 3) the Catchment land surface model, which is distinctive in the way land surface elements are depicted as hydrological catchments. We present results from assimilating AMSR-E (Advanced Microwave Scanning Radiometer) soil moisture into the Noah LSM using ensemble Kalman filter data assimilation. Results for a test site in Oklahoma, US, show significant improvement in soil moisture estimation when assimilating AMSR-E data. We used a conservation-of-mass procedure within a soil column to provide a more physically based approach for transferring the observed soil moisture state to the lower soil moisture profiles. Overall, the AMSR-E results show improvement in estimating the true spatial mean of soil moisture. Noah LSM comparisons to determine if AMSR-E contributed to improved streamflow showed inconclusive results. More accurate hydrologic improvements are expected from the new SMOS (Soil Moisture and Ocean Salinity) and the future SMAP (Soil Moisture Active Passive) missions. Future work will compare
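
    A minimal sketch of the ensemble Kalman filter analysis step used in such assimilation follows; it is a generic perturbed-observations EnKF, not the LIS implementation. Correlating the prior layers lets a surface-only observation update the deeper profile, mirroring the transfer of the observed state to lower layers described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, obs, obs_err_std, H):
    """One perturbed-observations EnKF analysis step: states are soil
    moisture profiles, H maps the state to the satellite-observed surface
    layer. Generic sketch, not the LIS code."""
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    Pxy = X @ HXp.T / (n_ens - 1)                    # state-obs covariance
    Pyy = HXp @ HXp.T / (n_ens - 1) + obs_err_std**2 * np.eye(H.shape[0])
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    obs_pert = obs[:, None] + obs_err_std * rng.standard_normal((H.shape[0], n_ens))
    return ensemble + K @ (obs_pert - HX)

# 3-layer profile; an AMSR-E-like retrieval senses only the top layer.
# Correlated layers let the surface observation update the deeper layers.
H = np.array([[1.0, 0.0, 0.0]])
common = rng.standard_normal((1, 50))
prior = 0.25 + 0.04 * common + 0.02 * rng.standard_normal((3, 50))
post = enkf_update(prior, obs=np.array([0.32]), obs_err_std=0.04, H=H)
print("prior layer means    :", prior.mean(axis=1).round(3))
print("posterior layer means:", post.mean(axis=1).round(3))
```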

  6. Rotating Stellar Models Can Account for the Extended Main-sequence Turnoffs in Intermediate-age Clusters

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; Huang, Chelsea X.

    2015-07-01

    We show that the extended main-sequence turnoffs seen in intermediate-age Large Magellanic Cloud (LMC) clusters, often attributed to age spreads of several 100 Myr, may be easily accounted for by variable stellar rotation in a coeval population. We compute synthetic photometry for grids of rotating stellar evolution models and interpolate them to produce isochrones at a variety of rotation rates and orientations. An extended main-sequence turnoff naturally appears in color-magnitude diagrams at ages just under 1 Gyr, peaks in extent between ~1 and 1.5 Gyr, and gradually disappears by around 2 Gyr in age. We then fit our interpolated isochrones by eye to four LMC clusters with very extended main-sequence turnoffs: NGC 1783, 1806, 1846, and 1987. In each case, stellar populations with a single age and metallicity can comfortably account for the observed extent of the turnoff region. The new stellar models predict almost no correlation of turnoff color with rotational v sin i. The red part of the turnoff is populated by a combination of slow rotators and edge-on rapid rotators, while the blue part contains rapid rotators at lower inclinations.

  7. A statistical model-based technique for accounting for prostate gland deformation in endorectal coil-based MR imaging.

    PubMed

    Tahmasebi, Amir M; Sharifi, Reza; Agarwal, Harsh K; Turkbey, Baris; Bernardo, Marcelino; Choyke, Peter; Pinto, Peter; Wood, Bradford; Kruecker, Jochen

    2012-01-01

    In prostate brachytherapy procedures, combining high-resolution endorectal coil (ERC)-MRI with Computed Tomography (CT) images has been shown to improve the diagnostic specificity for malignant tumors. Despite this advantage, there exists a major complication in the fusion of the two imaging modalities due to the deformation of the prostate shape in ERC-MRI. Conventionally, nonlinear deformable registration techniques have been utilized to account for such deformation. In this work, we present a model-based technique for accounting for the deformation of the prostate gland in ERC-MR imaging, in which a unique deformation vector is estimated for every point within the prostate gland. Modes of deformation for every point in the prostate are statistically identified using an MR-based training set (with and without ERC-MRI). Deformation of the prostate from a deformed (ERC-MRI) to a non-deformed state in a different modality (CT) is then realized by first calculating partial deformation information for a limited number of points (such as surface points or anatomical landmarks) and then utilizing the calculated deformation from a subset of the points to determine the coefficient values for the modes of deformation provided by the statistical deformation model. Using leave-one-out cross-validation, our results demonstrated a mean estimation error of 1 mm for MR-to-MR registration.

  8. A general model for likelihood computations of genetic marker data accounting for linkage, linkage disequilibrium, and mutations.

    PubMed

    Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter

    2015-09-01

    Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, where using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software has the possibility to perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.

  9. An extended macro traffic flow model accounting for the driver's bounded rationality and numerical tests

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan

    2017-02-01

    In this paper, we propose a macro traffic flow model to explore the effects of the driver's bounded rationality on the evolution of traffic waves (which include shock and rarefaction waves) and of a small perturbation, and on the fuel consumption and emissions (CO, HC and NOX) during the evolution process. The numerical results illustrate that considering the driver's bounded rationality prominently smooths the wavefront of the traffic waves and improves the stability of traffic flow, so the driver's bounded rationality has positive impacts on traffic flow. However, it reduces the fuel consumption and emissions only upstream of the rarefaction wave and enhances them in other situations; that is, the driver's bounded rationality has positive impacts on the fuel consumption and emissions only upstream of the rarefaction wave and negative effects otherwise. In addition, the numerical results show that the driver's bounded rationality has no prominent impact on the total fuel consumption and emissions during the whole evolution of a small perturbation.

  10. Hysteresis modelling of GO laminations for arbitrary in-plane directions taking into account the dynamics of orthogonal domain walls

    NASA Astrophysics Data System (ADS)

    Baghel, A. P. S.; Sai Ram, B.; Chwastek, K.; Daniel, L.; Kulkarni, S. V.

    2016-11-01

    The anisotropy of magnetic properties in grain-oriented steels is related to their microstructure. It results from the anisotropy of the single-crystal properties combined with crystallographic texture. The magnetization process along arbitrary directions can be explained using phase equilibrium for domain patterns, which can be described using Néel's phase theory. According to the theory, the fractions of 180° and 90° domain walls depend on the direction of magnetization. This paper presents an approach to model hysteresis loops of grain-oriented steels along arbitrary in-plane directions. The description is based on a modification of the Jiles-Atherton model. It includes a modified expression for the anhysteretic magnetization which takes into account contributions of the two types of domain walls. The computed hysteresis curves for different directions are in good agreement with experimental results.
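
    The two-wall idea can be sketched by writing the anhysteretic magnetization as a weighted sum of contributions from the 180° and 90° domain-wall populations, with weights varying with the in-plane angle from the rolling direction. The cos² weighting and the shape parameters below are assumptions for illustration, not the fitted modification of the Jiles-Atherton model.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the small-x limit x/3."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    xs = np.where(small, 1.0, x)              # placeholder avoids 0-division
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

def anhysteretic(H, theta_deg, Ms=1.7e6, a180=40.0, a90=400.0):
    """Anhysteretic magnetization as a weighted sum of 180-degree and
    90-degree domain-wall contributions; the wall fractions vary with the
    in-plane angle theta from the rolling direction. The cos^2 weighting
    and the shape parameters a180, a90 are illustrative assumptions."""
    w180 = np.cos(np.radians(theta_deg)) ** 2
    w90 = 1.0 - w180
    return Ms * (w180 * langevin(H / a180) + w90 * langevin(H / a90))

H = np.linspace(0.0, 5000.0, 6)               # applied field, A/m
for theta in (0, 55, 90):                     # rolling, hard, transverse axes
    print(f"theta = {theta:2d} deg  M(H) in kA/m:",
          (anhysteretic(H, theta) / 1e3).round(1))
```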

  11. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentration of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
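
    A stripped-down version of the broken-line linear (BLL) fit is sketched below using scipy's curve_fit on synthetic data; it ignores the block random effects and heteroskedastic error structure that the paper handles with NLMIXED, so it only illustrates the breakpoint idea.

```python
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, brk):
    """Broken-line linear ascending model: linear rise below the
    breakpoint brk, flat plateau above it."""
    return plateau - slope * np.clip(brk - x, 0.0, None)

# Synthetic stand-in data for G:F of nursery pigs vs SID Trp:Lys ratio (%),
# not the trial data.
x = np.array([14.0, 15.0, 16.0, 16.5, 17.0, 18.0, 19.0])
y = np.array([0.580, 0.610, 0.640, 0.650, 0.655, 0.652, 0.654])

popt, pcov = curve_fit(bll, x, y, p0=[0.65, 0.02, 16.0])
plateau, slope, brk = popt
se_brk = np.sqrt(pcov[2, 2])
print(f"breakpoint = {brk:.2f}% SID Trp:Lys (SE {se_brk:.2f}), "
      f"plateau G:F = {plateau:.3f}")
```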

  12. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili.

    PubMed

    Xiao, Ke; Malvankar, Nikhil S; Shu, Chuanjun; Martz, Eric; Lovley, Derek R; Sun, Xiao

    2016-03-22

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic-like conductivity of the pili that has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models, capable of explaining the observed conductive properties of Geobacter pili, are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics.

  13. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili

    NASA Astrophysics Data System (ADS)

    Xiao, Ke; Malvankar, Nikhil S.; Shu, Chuanjun; Martz, Eric; Lovley, Derek R.; Sun, Xiao

    2016-03-01

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic-like conductivity of the pili that has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models, capable of explaining the observed conductive properties of Geobacter pili, are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics.

  14. A micromechanics-inspired constitutive model for shape-memory alloys that accounts for initiation and saturation of phase transformation

    NASA Astrophysics Data System (ADS)

    Kelly, Alex; Stebner, Aaron P.; Bhattacharya, Kaushik

    2016-12-01

    A constitutive model to describe macroscopic elastic and transformation behaviors of polycrystalline shape-memory alloys is formulated using an internal variable thermodynamic framework. In a departure from prior phenomenological models, the proposed model treats initiation, growth kinetics, and saturation of transformation distinctly, consistent with physics revealed by recent multi-scale experiments and theoretical studies. Specifically, the proposed approach captures the macroscopic manifestations of three micromechanical facts, even though microstructures are not explicitly modeled: (1) Individual grains with favorable orientations and stresses for transformation are the first to nucleate martensite, and the local nucleation strain is relatively large. (2) Then, transformation interfaces propagate according to growth kinetics to traverse networks of grains, while previously formed martensite may reorient. (3) Ultimately, transformation saturates prior to 100% completion as some unfavorably-oriented grains do not transform; thus the total transformation strain of a polycrystal is modest relative to the initial, local nucleation strain. The proposed formulation also accounts for tension-compression asymmetry, processing anisotropy, and the distinction between stress-induced and temperature-induced transformations. Consequently, the model describes thermoelastic responses of shape-memory alloys subject to complex, multi-axial thermo-mechanical loadings. These abilities are demonstrated through detailed comparisons of simulations with experiments.

  15. An accountability model for integrating information systems, evaluation mechanisms, and decision making processes in alcohol and drug abuse agencies.

    PubMed

    Duncan, F H; Link, A D

    1979-01-01

    This article has attempted to demonstrate that decision making and evaluation can be carried out in a systematic fashion only if agencies make a commitment to do so, and only if adequate systems are established. The management information system is the most expensive and most sophisticated component of the integrated model presented here. Its existence, in some fashion, is essential to the operation of the model. Contrary to what many managers may believe and practice, the management information system is not in itself the final solution to evaluation. Neither is evaluation a panacea for all program ills. Evaluation can provide the information required to meet the ever increasing demands for agency or program accountability; evaluation can also provide insights for future decisions to change or alter the allocation of resources. Such evaluation must be carefully planned and implemented and, at the state level, can be successful only if executed in a systematic manner as suggested here. Regardless of the degree of sophistication of any system, it will work only when supported by users in the local treatment centers. If the model employed does little to serve them, it is not a model worth considering. It is with these needs in mind that this model was developed.

  16. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili

    PubMed Central

    Xiao, Ke; Malvankar, Nikhil S.; Shu, Chuanjun; Martz, Eric; Lovley, Derek R.; Sun, Xiao

    2016-01-01

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic-like conductivity of the pili that has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models, capable of explaining the observed conductive properties of Geobacter pili, are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics. PMID:27001169

  17. A nonlinear BOLD model accounting for refractory effect by applying the longitudinal relaxation in NMR to the linear BOLD model.

    PubMed

    Jung, Kwan-Jin

    2009-09-01

    A mathematical model to regress the nonlinear blood oxygen level-dependent (BOLD) fMRI signal has been developed by incorporating the refractory effect into the linear BOLD model of the biphasic gamma variate function. The refractory effect was modeled as a relaxation of two separate BOLD capacities corresponding to the biphasic components of the BOLD signal in analogy with longitudinal relaxation of magnetization in NMR. When tested with the published fMRI data of finger tapping, the nonlinear BOLD model with the refractory effect reproduced the nonlinear BOLD effects such as reduced poststimulus undershoot and saddle pattern in a prolonged stimulation as well as the reduced BOLD signal for repetitive stimulation.
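
    The two ingredients can be sketched as follows: a biphasic gamma variate impulse response for the linear BOLD model, and a "capacity" that is consumed by the ongoing response and recovers with a T1-like time constant, in analogy with longitudinal relaxation. All constants below are illustrative choices, not the fitted values from the paper.

```python
import numpy as np

dt = 0.1
t = np.arange(0, 60, dt)

def gamma_variate(t, tpeak, alpha):
    """Gamma variate with unit peak at t = tpeak (zero for t < 0)."""
    tt = np.clip(t, 0.0, None) / tpeak
    return tt**alpha * np.exp(alpha * (1.0 - tt))

# Biphasic impulse response: positive lobe minus a delayed undershoot lobe.
# Peak times, widths and the 0.35 amplitude ratio are illustrative.
h = gamma_variate(t, 5.0, 6.0) - 0.35 * gamma_variate(t - 10.0, 7.0, 6.0)

def apply_refractory(y_lin, T1=8.0, deplete=0.05):
    """Scale the linear response by a 'BOLD capacity' that is consumed by
    the ongoing response and recovers toward 1 with time constant T1, in
    analogy with longitudinal relaxation in NMR (constants illustrative)."""
    y = np.zeros_like(y_lin)
    cap = 1.0
    for i, drive in enumerate(y_lin):
        y[i] = cap * drive
        cap += dt * ((1.0 - cap) / T1 - deplete * y[i])
        cap = min(max(cap, 0.0), 1.0)
    return y

stim = ((t % 20) < 5).astype(float)            # repetitive 5-s stimulation blocks
linear = np.convolve(stim, h)[:len(t)] * dt    # linear BOLD prediction
nonlinear = apply_refractory(linear)
print("peak linear response    :", linear.max().round(3))
print("peak with refractoriness:", nonlinear.max().round(3))
```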

  18. A statistical human rib cage geometry model accounting for variations by age, sex, stature and body mass index.

    PubMed

    Shi, Xiangnan; Cao, Libo; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2014-07-18

    In this study, we developed a statistical rib cage geometry model accounting for variations by age, sex, stature and body mass index (BMI). Thorax CT scans were obtained from 89 subjects approximately evenly distributed among 8 age groups and both sexes. Threshold-based CT image segmentation was performed to extract the rib geometries, and a total of 464 landmarks on the left side of each subject's rib cage were collected to describe the size and shape of the rib cage as well as the cross-sectional geometry of each rib. Principal component analysis and multivariate regression analysis were conducted to predict rib cage geometry as a function of age, sex, stature, and BMI, all of which showed strong effects on rib cage geometry. Except for BMI, all parameters also showed significant effects on rib cross-sectional area using a linear mixed model. This statistical rib cage geometry model can serve as a geometric basis for developing a parametric human thorax finite element model for quantifying effects from different human attributes on thoracic injury risks.
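
    The PCA-plus-regression pipeline can be sketched on synthetic stand-in data: principal component scores of the landmark matrix are regressed on the covariates, and a new subject's geometry is reconstructed from the predicted scores. Dimensions and data below are placeholders, not the 89-subject CT dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: n subjects x p landmark coordinates plus four
# standardized covariates (age, sex, stature, BMI).
n, p, n_pc = 89, 60, 6
X = rng.standard_normal((n, 4))
shapes = rng.standard_normal((n, p))
mean_shape = shapes.mean(axis=0)

# 1) PCA of the centered landmark matrix via SVD.
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
scores = U[:, :n_pc] * s[:n_pc]                  # PC scores per subject

# 2) Multivariate linear regression of PC scores on the covariates.
A = np.hstack([np.ones((n, 1)), X])              # add intercept column
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

def predict_shape(covariates):
    """Predict a rib cage landmark vector for given standardized covariates:
    the regression predicts PC scores, the PCs reconstruct the shape."""
    a = np.concatenate([[1.0], covariates])
    return mean_shape + (a @ coef) @ Vt[:n_pc]

print(predict_shape(np.zeros(4))[:5].round(3))   # subject with mean covariates
```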

  19. Modelling the spatial distribution of snow water equivalent at the catchment scale taking into account changes in snow covered area

    NASA Astrophysics Data System (ADS)

    Skaugen, T.; Randen, F.

    2011-12-01

    A successful modelling of the snow reservoir is necessary for water resources assessments and the mitigation of spring flood hazards. A good estimate of the spatial probability density function (PDF) of snow water equivalent (SWE) is important for obtaining estimates of the snow reservoir, but also for modelling the changes in snow covered area (SCA), which is crucial for the runoff dynamics in spring. In a previous paper the PDF of SWE was modelled as a sum of temporally correlated gamma distributed variables. That methodology was constrained to estimate the PDF of SWE for snow covered areas only. In order to model the PDF of SWE for a catchment, we need to take into account the change in snow coverage and provide the spatial moments of SWE both for snow covered areas and for the catchment as a whole. In this study, too, the spatial PDF of accumulated SWE is modelled as a sum of correlated gamma distributed variables. After accumulation and melting events the changes in the spatial moments are weighted by changes in SCA. After both accumulation and melting events, the spatial variance of accumulated SWE is evaluated by use of the covariance matrix. For accumulation events there are only positive elements in the covariance matrix, whereas for melting events there are both positive and negative elements. The negative elements dictate that the correlation between melt and SWE is negative. The negative contributions become dominant only some time into the melting season, so at the onset of the melting season the spatial variance continues to increase, only to decrease later. This behaviour is consistent with observations and is called the “hysteretic” effect by some authors. The parameters for the snow distribution model can be estimated from observed historical precipitation data, which reduces by one the number of parameters to be calibrated in a hydrological model. Results from the model are in good agreement with observed spatial moments of SWE and SCA
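
    The covariance-matrix bookkeeping can be sketched directly: the spatial variance of accumulated SWE is the variance of a signed sum of correlated event fields, so melt events (sign -1) contribute negative cross terms that eventually pull the variance down. The event standard deviations and the equicorrelation structure below are illustrative assumptions, not the calibrated model.

```python
import numpy as np

def spatial_variance(event_std, corr, signs):
    """Spatial variance of accumulated SWE as the variance of a signed sum
    of correlated events, Var = sum_ij s_i s_j sigma_i sigma_j rho_ij.
    Accumulation events have sign +1, melt events sign -1, so melt-SWE
    covariance terms are negative (illustrative sketch)."""
    sd = np.asarray(event_std, dtype=float) * np.asarray(signs, dtype=float)
    return (np.outer(sd, sd) * corr).sum()

def equicorr(k, rho=0.4):
    """Equicorrelation matrix: rho between any two distinct events."""
    return np.full((k, k), rho) + (1 - rho) * np.eye(k)

stds = [5.0, 6.0, 4.0]   # three accumulation events (spatial std of SWE, mm)
print("variance after accumulation:",
      round(spatial_variance(stds, equicorr(3), [1, 1, 1]), 1))

# Melt events add negative cross terms that eventually reduce the variance.
stds2 = stds + [3.0, 6.0]
print("variance after two melt events:",
      round(spatial_variance(stds2, equicorr(5), [1, 1, 1, -1, -1]), 1))
```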

  20. A sampling design and model for estimating abundance of Nile crocodiles while accounting for heterogeneity of detectability of multiple observers

    USGS Publications Warehouse

    Shirley, Matthew H.; Dorazio, Robert M.; Abassery, Ekramy; Elhady, Amr A.; Mekki, Mohammed S.; Asran, Hosni H.

    2012-01-01

    As part of the development of a management program for Nile crocodiles in Lake Nasser, Egypt, we used a dependent double-observer sampling protocol with multiple observers to compute estimates of population size. To analyze the data, we developed a hierarchical model that allowed us to assess variation in detection probabilities among observers and survey dates, as well as account for variation in crocodile abundance among sites and habitats. We conducted surveys from July 2008 to June 2009 in 15 areas of Lake Nasser that were representative of 3 main habitat categories. During these surveys, we sampled 1,086 km of lake shore wherein we detected 386 crocodiles. Analysis of the data revealed significant variability in both inter- and intra-observer detection probabilities. Our raw encounter rate was 0.355 crocodiles/km. When we accounted for observer effects and habitat, we estimated a surface population abundance of 2,581 (2,239-2,987, 95% credible intervals) crocodiles in Lake Nasser. Our results underscore the importance of well-trained, experienced monitoring personnel in order to decrease heterogeneity in intra-observer detection probability and to better detect changes in the population based on survey indices. This study will assist the Egyptian government in establishing a monitoring program as an integral part of future crocodile harvest activities in Lake Nasser.

  1. Accounting for intracell flow in models with emphasis on water table recharge and stream-aquifer interaction. 2. A procedure

    USGS Publications Warehouse

    Jorgensen, D.G.; Signor, D.C.; Imes, J.L.

    1989-01-01

    Intercepted intracell flow, especially if a cell includes water table recharge and a stream (sink), can result in significant model error if not accounted for. A procedure utilizing net flow per cell (Fn) that accounts for intercepted intracell flow can be used for both steady state and transient simulations. Germane to the procedure is the determination of the ratio of the area of influence of the interior sink to the area of the cell (Ai/Ac). Ai is the area in which water table recharge has the potential to be intercepted by the sink. Determining Ai/Ac requires either a detailed water table map or observation of stream conditions within the cell. A proportioning parameter M, which is equal to 1 or slightly less and is a function of cell geometry, is used to determine how much of the water that has potential for interception is intercepted by the sink within the cell. Also germane to the procedure is the determination of the flow across the streambed (Fs), which is not directly a function of cell size and arises from the difference in head between the water level in the stream and the potentiometric surface of the aquifer underlying the streambed. -from Authors
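
    A rough sketch of the bookkeeping, assumed from the description above rather than taken from the published equations, is: recharge over the sink's area of influence Ai is intercepted in the proportion M before it reaches the regional flow system, and the streambed flow Fs is driven by the stream-aquifer head difference.

```python
def net_flow_per_cell(recharge, cell_area, ai_ratio, m, stream_cond,
                      h_stream, h_aquifer):
    """Net flow Fn for a model cell containing both water table recharge
    and a stream sink (a sketch of the idea, not the published procedure).
    Recharge over the sink's area of influence Ai = ai_ratio * Ac is
    intercepted in the proportion M; Fs is head-dependent streambed flow."""
    Ai = ai_ratio * cell_area                  # area of influence of the sink
    intercepted = m * recharge * Ai            # recharge captured by the stream
    Fs = stream_cond * (h_stream - h_aquifer)  # streambed leakage (+ = into aquifer)
    return recharge * cell_area - intercepted + Fs

# Illustrative numbers: a 1 km^2 cell, 30% of which drains to the stream.
print(net_flow_per_cell(recharge=2.0e-4,       # m/day
                        cell_area=1.0e6,       # m^2
                        ai_ratio=0.3, m=0.95,
                        stream_cond=50.0,      # m^2/day per unit head
                        h_stream=101.0, h_aquifer=100.4))
```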

  2. Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing.

    PubMed

    Tan, Cheston; Poggio, Tomaso

    2016-01-01

    Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled "holistic processing", while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, "neural tuning size", is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology.

  3. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models.

    PubMed

    Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-03-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for population structure. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied and nor have there been methods designed to correct for population structure on GEI statistics. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs and use both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure on GEI statistics. We find that our approach effectively controls population structure on statistics for GEIs as well as for genetic variants.

  4. Comparative evaluation of ensemble Kalman filter, particle filter and variational techniques for river discharge forecast

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Gebremichael, M.; LEE, H.; Hopson, T. M.

    2012-12-01

    Hydrologic data assimilation techniques provide a means to improve river discharge forecasts by updating hydrologic model states and correcting the atmospheric forcing data via optimally combining model outputs with observations. The performance of the assimilation procedure, however, depends on the data assimilation techniques used and the amount of uncertainty in the data sets. To investigate the effects of these, we comparatively evaluate three data assimilation techniques, namely the ensemble Kalman filter (EnKF), the particle filter (PF) and a variational (VAR) technique, which assimilate discharge and synthetic soil moisture data at various uncertainty levels into the Sacramento Soil Moisture Accounting (SAC-SMA) model used by the National Weather Service (NWS) for river forecasting in the United States. The study basin is the Greens Bayou watershed, with an area of 178 km2, in eastern Texas. In the presentation, we summarize the results of the comparisons and discuss the challenges of applying each technique to hydrologic applications.
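
    As one example of the compared techniques, a minimal bootstrap particle filter cycle is sketched below on a toy storage-discharge model standing in for SAC-SMA; the propagation noise, likelihood and resampling rule are generic textbook choices, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

def pf_step(particles, weights, propagate, likelihood, obs):
    """One bootstrap particle filter cycle: propagate states, reweight by
    the observation likelihood, and resample (multinomially) when the
    effective sample size degenerates. Generic sketch; a SAC-SMA state
    vector would replace the scalar storage used here."""
    particles = propagate(particles)
    weights = weights * likelihood(obs, particles)
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights**2)            # effective sample size
    if ess < 0.5 * len(particles):
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy storage-discharge model: S' = S + rain - k*S; discharge obs = k*S + noise.
k, rain, sigma_q = 0.2, 5.0, 0.5
propagate = lambda S: S + rain - k * S + rng.normal(0.0, 1.0, S.shape)
likelihood = lambda q, S: np.exp(-0.5 * ((q - k * S) / sigma_q) ** 2)

S = rng.uniform(0, 50, 500)                   # particle ensemble of storages
w = np.full(500, 1 / 500)
for q_obs in (4.2, 4.8, 5.5):                 # sequence of discharge observations
    S, w = pf_step(S, w, propagate, likelihood, q_obs)
print("posterior storage mean:", round(float((S * w).sum()), 2))
```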

  5. Accounting Specialist.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This publication identifies 20 subjects appropriate for use in a competency list for the occupation of accounting specialist, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 20 units are as follows:…

  6. Painless Accountability.

    ERIC Educational Resources Information Center

    Brown, R. W.; And Others

    The computerized Painless Accountability System is a performance objective system from which instructional programs are developed. Three main simplified behavioral response levels characterize this system: (1) cognitive, (2) psychomotor, and (3) affective domains. Each of these objectives is classified by one of 16 descriptors. The second major…

  7. Accountability Overboard

    ERIC Educational Resources Information Center

    Chieppo, Charles D.; Gass, James T.

    2009-01-01

    This article reports that special interest groups opposed to charter schools and high-stakes testing have hijacked Massachusetts's once-independent board of education and stand poised to water down the Massachusetts Comprehensive Assessment System (MCAS) tests and the accountability system they support. President Barack Obama and Massachusetts…

  8. Impact of accounting for coloured noise in radar altimetry data on a regional quasi-geoid model

    NASA Astrophysics Data System (ADS)

    Farahani, H. H.; Slobbe, D. C.; Klees, R.; Seitz, Kurt

    2017-01-01

    We study the impact of an accurate computation and incorporation of coloured noise in radar altimeter data when computing a regional quasi-geoid model using least-squares techniques. Our test area comprises the Southern North Sea including the Netherlands, Belgium, and parts of France, Germany, and the UK. We perform the study by modelling the disturbing potential with spherical radial base functions. To that end, we use the traditional remove-compute-restore procedure with a recent GRACE/GOCE static gravity field model. Apart from radar altimeter data, we use terrestrial, airborne, and shipboard gravity data. Radar altimeter sea surface heights are corrected for the instantaneous dynamic topography and used in the form of along-track quasi-geoid height differences. Noise in these data is estimated using repeat-track and post-fit residual analysis techniques and then modelled as an autoregressive moving average process. Quasi-geoid models are computed with and without taking the modelled coloured noise into account. The difference between them is used as a measure of the impact of coloured noise in radar altimeter along-track quasi-geoid height differences on the estimated quasi-geoid model. The impact strongly depends on the availability of shipboard gravity data. If no such data are available, the impact may attain values exceeding 10 centimetres in particular areas. In case shipboard gravity data are used, the impact is reduced, though it still attains values of several centimetres. We use geometric quasi-geoid heights from GPS/levelling data at height markers as control data to analyse the quality of the quasi-geoid models. The quasi-geoid model computed using a model of the coloured noise in radar altimeter along-track quasi-geoid height differences shows in some areas a significant improvement over a model that assumes white noise in these data. However, the interpretation in other areas remains a challenge due to the limited quality of the control data.
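
    The effect of acknowledging coloured noise can be sketched with an AR(1) special case of the ARMA noise model: generalized least squares with the AR(1) covariance replaces ordinary (white-noise) least squares. The trend, noise parameters and data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic along-track quasi-geoid height differences: a linear trend plus
# AR(1) coloured noise (the paper fits a full ARMA model; AR(1) keeps the
# sketch short).
n, phi, sigma = 200, 0.8, 0.02
x = np.linspace(0, 1, n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + sigma * rng.standard_normal()
y = 0.3 + 0.5 * x + noise

A = np.column_stack([np.ones(n), x])

# White-noise assumption: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coloured-noise model: generalized least squares with the AR(1)
# covariance C_ij = sigma^2 / (1 - phi^2) * phi**|i - j|.
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = sigma**2 / (1 - phi**2) * phi**lags
Ci = np.linalg.inv(C)
beta_gls = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ y)

print("OLS estimate:", beta_ols.round(3))
print("GLS estimate:", beta_gls.round(3))
```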

  9. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of estimated pollution-health effects. Other aspects of model choice, such as the estimation of pollution concentrations and the spatial autocorrelation model, may also affect the pollution-health estimate. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
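
    One common way to combine candidate models, sketched below, is to weight each by exp(-BIC/2), an approximation to its posterior probability under equal prior odds, and average the pollution-health estimates with those weights. The BIC values and relative risks are hypothetical placeholders, and the paper's actual BMA machinery may differ.

```python
import numpy as np

def bma_weights(bic):
    """Approximate posterior model probabilities from BIC values:
    w_m proportional to exp(-BIC_m / 2) under equal prior model odds."""
    bic = np.asarray(bic, dtype=float)
    rel = np.exp(-0.5 * (bic - bic.min()))   # subtract min for stability
    return rel / rel.sum()

# Hypothetical candidate models (deprivation index vs individual covariates,
# different spatial autocorrelation structures) with their BICs and the
# NO2 relative-risk estimate each produces.
bics = [2104.2, 2101.7, 2103.1, 2106.9]
rr_no2 = [1.040, 1.052, 1.047, 1.038]

w = bma_weights(bics)
print("model weights:", w.round(3))
print("model-averaged NO2 relative risk:", round(float(np.dot(w, rr_no2)), 3))
```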

  10. Contrast and lustre: A model that accounts for eleven different forms of contrast discrimination in binocular vision.

    PubMed

    Georgeson, Mark A; Wallis, Stuart A; Meese, Tim S; Baker, Daniel H

    2016-12-01

    Our goal here is a more complete understanding of how information about luminance contrast is encoded and used by the binocular visual system. In two-interval forced-choice experiments we assessed observers' ability to discriminate changes in contrast that could be an increase or decrease of contrast in one or both eyes, or an increase in one eye coupled with a decrease in the other (termed IncDec). The base or pedestal contrasts were either in-phase or out-of-phase in the two eyes. The opposed changes in the IncDec condition did not cancel each other out, implying that along with binocular summation, information is also available from mechanisms that do not sum the two eyes' inputs. These might be monocular mechanisms. With a binocular pedestal, monocular increments of contrast were much easier to see than monocular decrements. These findings suggest that there are separate binocular (B) and monocular (L,R) channels, but only the largest of the three responses, max(L,B,R), is available to perception and decision. Results from contrast discrimination and contrast matching tasks were described very accurately by this model. Stimuli, data, and model responses can all be visualized in a common binocular contrast space, allowing a more direct comparison between models and data. Some results with out-of-phase pedestals were not accounted for by the max model of contrast coding, but were well explained by an extended model in which gratings of opposite polarity create the sensation of lustre. Observers can discriminate changes in lustre alongside changes in contrast.
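
    The max(L, B, R) decision rule can be sketched with toy transducers: a monocular increment raises one channel above the pedestal response, while a monocular decrement leaves the max unchanged because the untouched eye's channel still responds, reproducing the asymmetry described above. The transducer form, the constants, and the binocular channel responding to the mean contrast are all illustrative simplifications, not the fitted model.

```python
def responses(cL, cR, p=2.4, z=0.2):
    """Channel responses to left/right eye contrasts: two monocular
    channels and a binocular channel (here simply driven by the mean of
    the two eyes), each passed through a toy transducer c**p / (z + c)."""
    f = lambda c: c**p / (z + c)
    return f(cL), f(0.5 * (cL + cR)), f(cR)   # L, B, R

def perceived(cL, cR):
    """Only the largest of the three channel outputs, max(L, B, R),
    reaches perception and decision in this scheme."""
    return max(responses(cL, cR))

base, inc, dec = 0.2, 0.25, 0.15
print("binocular pedestal      :", round(perceived(base, base), 4))
print("monocular increment     :", round(perceived(inc, base), 4))  # exceeds pedestal
print("monocular decrement     :", round(perceived(base, dec), 4))  # hidden by max rule
print("IncDec (opposed changes):", round(perceived(inc, dec), 4))   # does not cancel
```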

  11. An energy-based model accounting for snow accumulation and snowmelt in a coniferous forest and in an open area

    NASA Astrophysics Data System (ADS)

    Matějka, Ondřej; Jeníček, Michal

    2016-04-01

    An energy-balance approach was used to simulate snow water equivalent (SWE) evolution in an open area, a forest clearing and a coniferous forest during the winter seasons 2011/12 and 2012/13 in the Bystřice River basin (Krušné Mountains, Czech Republic). The aim was to describe the impact of vegetation on snow accumulation and snowmelt under different forest canopy structures and tree densities. Hemispherical photographs were used to describe the forest canopy structure. An energy-balance model of snow accumulation and melt was set up and adjusted to account for the effects of the forest canopy on the driving meteorological variables. Leaf area index derived from 32 hemispherical photographs of vegetation and sky was used to represent the forest influence in the snow model. The model was evaluated using snow depth and SWE data measured at 16 localities in the winter seasons from 2011 to 2013. The model was able to reproduce the SWE evolution in both winter seasons beneath the forest canopy, in the forest clearing and in the open area. The SWE maximum at forest sites was 18% lower than in open areas and forest clearings. The contribution of shortwave radiation to the snowmelt rate was 50% lower in forest areas than in open areas due to the shading effect. The contribution of turbulent fluxes was 30% lower at forest sites than in openings because wind speed was reduced to as little as 10% of the values at corresponding open areas. An indirect estimate of interception was also derived: between 14 and 60% of snowfall was intercepted and sublimated in the forest canopy in the two winter seasons. Based on the model results, an underestimation of solid precipitation at the weather station Hřebečná (measured with a heated precipitation gauge) was revealed: snowfall was underestimated by 40% in winter season 2011/12 and by 13% in winter season 2012/13. Although the model formulation appeared sufficient for both analysed winter seasons, canopy effects on the longwave radiation and ground heat flux were not
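
    A schematic of how canopy effects of the kind described above might enter an energy-balance melt calculation: shortwave input attenuated with leaf area index via a Beer-Lambert term, and turbulent exchange scaled down with in-canopy wind speed. The coefficients are assumed values for illustration, not the study's calibrated ones, and longwave and ground heat fluxes are deliberately omitted.

    import math

    LATENT_FUSION = 334e3   # latent heat of fusion, J kg-1
    RHO_WATER = 1000.0      # water density, kg m-3

    def forest_melt_rate(sw_open, turb_open, lai, k_ext=0.5, wind_frac=0.1):
        """Potential melt (m w.e. per second) beneath a canopy.

        sw_open   : net shortwave flux in the open (W m-2)
        turb_open : turbulent (sensible + latent) flux in the open (W m-2)
        lai       : leaf area index from the hemispherical photographs
        k_ext     : canopy extinction coefficient (assumed value)
        wind_frac : in-canopy wind as a fraction of the open-area wind; the
                    abstract reports reductions down to ~10% of open values
        """
        sw = sw_open * math.exp(-k_ext * lai)   # Beer-Lambert canopy shading
        turb = turb_open * wind_frac            # crude wind-speed scaling
        energy = max(sw + turb, 0.0)            # longwave/ground heat omitted
        return energy / (LATENT_FUSION * RHO_WATER)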

  12. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    SciTech Connect

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
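
    For the cause-specific analysis described above, each event type can be fitted with an ordinary Cox model in which the competing events are treated as censored observations; the subdistribution (Fine-Gray) analysis needs a dedicated routine and is not shown. A sketch with the lifelines package, with illustrative column names:

    from lifelines import CoxPHFitter

    def cause_specific_cox(df, event_of_interest):
        """Cause-specific Cox model: competing events are treated as censored.

        df columns (illustrative names): 'time', 'event_type' (0 = censored,
        'A'/'B'/'C' = event types), plus covariates 'ependymoma' and 'infant'.
        """
        d = df.copy()
        d["event"] = (d["event_type"] == event_of_interest).astype(int)
        cph = CoxPHFitter()
        cph.fit(d[["time", "event", "ependymoma", "infant"]],
                duration_col="time", event_col="event")
        return cph   # cph.hazard_ratios_ holds the cause-specific hazard ratios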

  13. Integrated water resources management of the Ichkeul basin taking into account the durability of its wetland ecosystem using WEAP model

    NASA Astrophysics Data System (ADS)

    Shabou, M.; Lili-Chabaane, Z.; Gastli, W.; Chakroun, H.; Ben Abdallah, S.; Oueslati, I.; Lasram, F.; Laajimi, R.; Shaiek, M.; Romdhane, M. S.; Mnajja, A.

    2012-04-01

    The conservation of coastal wetlands in the Mediterranean area is generally faced with development issues. This is the case in Tunisia, where precipitation is irregular in time and space. For equity of water use (drinking, irrigation), planning at the national level allows water to be transferred from regions rich in water resources to water-poor ones. This planning was initially carried out in Tunisia without taking into account wetland ecosystems and their specificities. The main purpose of this study is to find a model able to integrate simultaneously the available resources and the various water demands within a watershed, while taking into account the durability of the related wetland ecosystems. This is the case for the Ichkeul basin, situated in northern Tunisia, with an area of 2080 km2 and rainfall of about 600 mm/year. Downstream of this basin, the Ichkeul Lake is characterized by a double alternation of seasonal high water and low salinity in winter and spring, and low water levels and high salinity in summer and autumn, which makes the Ichkeul an exceptional ecosystem. The originality of this hydrological lake-marsh system is related to the presence of aquatic vegetation in the lake and of a particularly rich and varied hygrophilous vegetation in the marshes, which constitutes the main source of food for large migrating water birds. After the construction of three dams on the principal rivers feeding the Ichkeul Lake, aimed particularly at supplying local irrigation and the drinking water demand of cities in the north and east of Tunisia, freshwater inflow to the lake has been greatly reduced, causing a hydrological disequilibrium that influences the ecological conditions of the different species. Therefore, to ensure the sustainability of water resources management, it is important to find a trade-off between the existing hydrological and ecological systems, taking into account the water demands of the various users (drinking, irrigation, fishing, and

  14. Nonlinear analysis of a new car-following model accounting for the optimal velocity changes with memory

    NASA Astrophysics Data System (ADS)

    Peng, Guanghan; Lu, Weizhen; He, Hongdi; Gu, Zhenghua

    2016-11-01

    In this study, we construct a new car-following model by accounting for the effect of the optimal velocity changes with memory in terms of the full velocity difference (FVD) model. The stability condition and the mKdV equation concerning the optimal velocity changes with memory are derived through linear stability and nonlinear analyses, respectively. The phase space can then be divided into three regions: stable, metastable and unstable. Moreover, it is shown that the effect of the optimal velocity changes with memory enhances the stability of traffic flow. The numerical results verify that both the sensitivity parameter governing the driver's memory of optimal velocity changes and the memory step can effectively stabilize traffic flow. In addition, the stability of traffic flow is strengthened by increasing the memory step size of the optimal velocity changes and the intensity of drivers' memory of such changes. Most importantly, accounting for the optimal velocity changes with memory avoids the destabilizing influence that outdated historical information would otherwise have on traffic flow.
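
    A minimal sketch of an FVD-type car-following model with one plausible form of the optimal-velocity memory term (the difference between the current and a past optimal velocity); the paper's exact formulation may differ, and all parameter values here are illustrative.

    import numpy as np

    def V_opt(dx, v_max=2.0, h_c=4.0):
        # Standard tanh-shaped optimal velocity function used in OV/FVD models.
        return 0.5 * v_max * (np.tanh(dx - h_c) + np.tanh(h_c))

    def fvd_memory_step(x, v, dx_past, dt=0.1, a=0.8, lam=0.3, k=0.2):
        """One explicit-Euler step of an FVD model with an OV-memory term.

        x, v    : positions and velocities on a ring road (wrap-around of the
                  last headway is ignored here for brevity)
        dx_past : headways several steps in the past; the memory term
                  k*(V(dx) - V(dx_past)) is one plausible form, for illustration.
        """
        dx = np.roll(x, -1) - x     # headway to the leading car
        dv = np.roll(v, -1) - v     # velocity difference
        acc = a * (V_opt(dx) - v) + lam * dv + k * (V_opt(dx) - V_opt(dx_past))
        return x + v * dt, v + acc * dt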

  15. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system.

    PubMed

    Beckon, William N

    2016-07-01

    For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
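
    The lag-selection procedure described above reduces to a simple search: regress tissue concentration on water concentration shifted by each candidate lag, and keep the lag giving the best fit. A sketch under the assumption of evenly spaced (e.g. daily) series aligned on the same dates:

    import numpy as np
    from scipy.stats import linregress

    def best_lag(water, tissue, max_lag=600):
        """Pick the lag (in samples, e.g. days) maximizing R^2 between
        environmental exposure and tissue concentration."""
        best = (0, -np.inf)
        for lag in range(max_lag + 1):
            x = water[:-lag] if lag else water   # exposure 'lag' samples earlier
            y = tissue[lag:]
            r2 = linregress(x, y).rvalue ** 2
            if r2 > best[1]:
                best = (lag, r2)
        return best   # (lag giving the best regression, its R^2)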

  16. Modelling of the physico-chemical behaviour of clay minerals with a thermo-kinetic model taking into account particles morphology in compacted material.

    NASA Astrophysics Data System (ADS)

    Sali, D.; Fritz, B.; Clément, C.; Michau, N.

    2003-04-01

    Modelling of fluid-mineral interactions is widely used in Earth Sciences studies to better understand the physicochemical processes involved and their long-term effect on the behaviour of materials. Numerical models simplify the processes but try to preserve their main characteristics. The modelling results therefore strongly depend on the quality of the data describing the initial physicochemical conditions of the rock materials, fluids and gases, and on how realistically the processes are represented. Current geochemical models do not adequately take into account rock porosity and permeability or the particle morphology of clay minerals. In compacted materials like those considered as barriers in waste repositories, low-permeability rocks such as mudstones or compacted powders will be used: they contain mainly fine particles, and the geochemical models used for predicting their interactions with fluids tend to misjudge their surface areas, which are fundamental parameters in kinetic modelling. The purpose of this study was to improve how particle morphology is taken into account in the thermo-kinetic code KINDIS and the reactive transport code KIRMAT. A new function was integrated into these codes, treating the reaction surface area as a volume-dependent parameter, and the calculated evolution of the mass balance in the system was coupled with the evolution of reactive surface areas. We carried out application exercises for the numerical validation of these new versions of the codes, and the results were compared with those of the pre-existing thermo-kinetic code KINDIS. Several points are highlighted. Taking the evolution of the reactive surface area into account during simulation modifies the predicted mass transfers related to fluid-mineral interactions. Different secondary mineral phases are also observed during modelling. The evolution of the reactive surface parameter helps to resolve the competition effects between the different phases present in the system which are all able to fix the chemical

  17. Emerging accounting trends accounting for leases.

    PubMed

    Valletta, Robert; Huggins, Brian

    2010-12-01

    A new model for lease accounting can have a significant impact on hospitals and healthcare organizations. The new approach proposes a "right-of-use" model that involves complex estimates and significant administrative burden. Hospitals and health systems that draw heavily on lease arrangements should start preparing for the new approach now even though guidance and a final rule are not expected until mid-2011. This article highlights a number of considerations from the lessee point of view.

  18. Analytical modeling of demagnetizing effect in magnetoelectric ferrite/PZT/ferrite trilayers taking into account a mechanical coupling

    NASA Astrophysics Data System (ADS)

    Loyau, V.; Aubert, A.; LoBue, M.; Mazaleyrat, F.

    2017-03-01

    In this paper, we investigate the demagnetizing effect in ferrite/PZT/ferrite magnetoelectric (ME) trilayer composites consisting of commercial PZT discs bonded by epoxy layers to Ni-Co-Zn ferrite discs made by a reactive Spark Plasma Sintering (SPS) technique. ME voltage coefficients (transversal mode) were measured on ferrite/PZT/ferrite trilayer ME samples with different thicknesses or phase volume ratio in order to highlight the influence of the magnetic field penetration governed by these geometrical parameters. Experimental ME coefficients and voltages were compared to analytical calculations using a quasi-static model. Theoretical demagnetizing factors of two magnetic discs that interact together in parallel magnetic structures were derived from an analytical calculation based on a superposition method. These factors were introduced in ME voltage calculations which take account of the demagnetizing effect. To fit the experimental results, a mechanical coupling factor was also introduced in the theoretical formula. This reflects the differential strain that exists in the ferrite and PZT layers due to shear effects near the edge of the ME samples and within the bonding epoxy layers. From this study, an optimization in magnitude of the ME voltage is obtained. Lastly, an analytical calculation of demagnetizing effect was conducted for layered ME composites containing higher numbers of alternated layers (n ≥ 5). The advantage of such a structure is then discussed.

  19. Developing a Global Model of Accounting Education and Examining IES Compliance in Australia, Japan, and Sri Lanka

    ERIC Educational Resources Information Center

    Watty, Kim; Sugahara, Satoshi; Abayadeera, Nadana; Perera, Luckmika

    2013-01-01

    The introduction of International Education Standards (IES) signals a clear move by the International Accounting Education Standards Board (IAESB) to ensure high quality standards in professional accounting education at a global level. This study investigated how IES are perceived and valued by member bodies and academics in three countries:…

  20. An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain

    PubMed Central

    Krishna, B. Suresh; Treue, Stefan

    2016-01-01

    Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679

  1. Self-consistent modeling of induced magnetic field in Titan's atmosphere accounting for the generation of Schumann resonance

    NASA Astrophysics Data System (ADS)

    Béghin, Christian

    2015-02-01

    This model is worked out in the framework of physical mechanisms proposed in previous studies to account for the generation and observation of an atypical Schumann resonance (SR) during the descent of the Huygens probe in Titan's atmosphere on 14 January 2005. While Titan stays inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere, from the ionopause down to the Moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be made possible by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. However, a first cause of diffusion of the ELF magnetic components is probably the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with measurements of the Huygens wave-field strength. Pending future in-situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.

  2. Response function theories that account for size distribution effects - A review. [mathematical models concerning composite propellant heterogeneity effects on combustion instability

    NASA Technical Reports Server (NTRS)

    Cohen, N. S.

    1980-01-01

    The paper presents theoretical models developed to account for the heterogeneity of composite propellants in expressing the pressure-coupled combustion response function. It is noted that the model of Lengelle and Williams (1968) furnishes a viable basis to explain the effects of heterogeneity.

  3. Accounting for the effects of sastrugi in the CERES clear-sky Antarctic shortwave angular distribution models

    NASA Astrophysics Data System (ADS)

    Corbett, J.; Su, W.

    2015-08-01

    The Cloud and the Earth's Radiant Energy System (CERES) instruments on NASA's Terra, Aqua and Suomi NPP satellites are used to provide a long-term measurement of Earth's energy budget. To accomplish this, the radiances measured by the instruments must be inverted to fluxes by the use of a scene-type-dependent angular distribution model (ADM). For permanent snow scenes over Antarctica, shortwave (SW) ADMs are created by compositing radiance measurements over the full viewing zenith and azimuth range. However, the presence of small-scale wind-blown roughness features called sastrugi causes the BRDF (bidirectional reflectance distribution function) of the snow to vary significantly with solar azimuth angle and location. This can result in monthly regional biases between -12 and 7.5 Wm-2 in the inverted TOA (top-of-atmosphere) SW flux. The bias is assessed by comparing the CERES shortwave fluxes derived from nadir observations with those from all viewing zenith angles, as the sastrugi affect fluxes inverted from the oblique viewing angles more than for the nadir viewing angles. In this paper we further describe the clear-sky Antarctic ADMs from Su et al. (2015). These ADMs account for the sastrugi effect by using measurements from the Multi-Angle Imaging Spectro-Radiometer (MISR) instrument to derive statistical relationships between radiance from different viewing angles. We show here that these ADMs reduce the bias and artifacts in the CERES SW flux caused by sastrugi, both locally and Antarctic-wide. The regional monthly biases from sastrugi are reduced to between -5 and 7 Wm-2, and the monthly-mean biases over Antarctica are reduced by up to 0.64 Wm-2, a decrease of 74 %. These improved ADMs are used as part of the Edition 4 CERES SSF (Single Scanner Footprint) data.

  4. Extending the simple linear regression model to account for correlated responses: an introduction to generalized estimating equations and multi-level mixed modelling.

    PubMed

    Burton, P; Gurrin, L; Sly, P

    1998-06-15

    Much of the research in epidemiology and clinical science is based upon longitudinal designs which involve repeated measurements of a variable of interest in each of a series of individuals. Such designs can be very powerful, both statistically and scientifically, because they enable one to study changes within individual subjects over time or under varied conditions. However, this power arises because the repeated measurements tend to be correlated with one another, and this must be taken into proper account at the time of analysis or misleading conclusions may result. Recent advances in statistical theory and in software development mean that studies based upon such designs can now be analysed more easily, in a valid yet flexible manner, using a variety of approaches which include the use of generalized estimating equations, and mixed models which incorporate random effects. This paper provides a particularly simple illustration of the use of these two approaches, taking as a practical example the analysis of a study which examined the response of portable peak expiratory flow meters to changes in true peak expiratory flow in 12 children with asthma. The paper takes the reader through the relevant practicalities of model fitting, interpretation and criticism and demonstrates that, in a simple case such as this, analyses based upon these model-based approaches produce reassuringly similar inferences to standard analyses based upon more conventional methods.
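
    Both approaches discussed above are available in statsmodels; the sketch below fits a GEE with an exchangeable working correlation and a random-intercept mixed model to synthetic repeated-measures data standing in for the peak-flow example (all numbers and column names are invented for illustration).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Synthetic repeated-measures data: 12 children, 8 readings each.
    rng = np.random.default_rng(0)
    child = np.repeat(np.arange(12), 8)
    true_flow = rng.uniform(100, 400, child.size)
    meter = (5 + 0.95 * true_flow                # meter calibration line
             + rng.normal(0, 10, 12)[child]      # child-level random offset
             + rng.normal(0, 5, child.size))     # measurement noise
    df = pd.DataFrame({"child": child, "true_flow": true_flow, "meter": meter})

    # Generalized estimating equations: exchangeable working correlation
    # within each child, with robust (sandwich) standard errors.
    gee = smf.gee("meter ~ true_flow", groups="child", data=df,
                  cov_struct=sm.cov_struct.Exchangeable()).fit()

    # Multi-level mixed model: a random intercept for each child.
    mixed = smf.mixedlm("meter ~ true_flow", data=df, groups="child").fit()

    print(gee.params["true_flow"], mixed.params["true_flow"])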

  5. Can the Five Factor Model of Personality Account for the Variability of Autism Symptom Expression? Multivariate Approaches to Behavioral Phenotyping in Adult Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Schwartzman, Benjamin C.; Wood, Jeffrey J.; Kapp, Steven K.

    2016-01-01

    The present study aimed to: determine the extent to which the five factor model of personality (FFM) accounts for variability in autism spectrum disorder (ASD) symptomatology in adults, examine differences in average FFM personality traits of adults with and without ASD and identify distinct behavioral phenotypes within ASD. Adults (N = 828;…

  6. Observation-based modelling of permafrost carbon fluxes with accounting for deep carbon deposits and thermokarst activity

    NASA Astrophysics Data System (ADS)

    Schneider von Deimling, T.; Grosse, G.; Strauss, J.; Schirrmeister, L.; Morgenstern, A.; Schaphoff, S.; Meinshausen, M.; Boike, J.

    2015-06-01

    High-latitude soils store vast amounts of perennially frozen and therefore inert organic matter. With rising global temperatures and consequent permafrost degradation, a part of this carbon stock will become available for microbial decay and eventual release to the atmosphere. We have developed a simplified, two-dimensional multi-pool model to estimate the strength and timing of future carbon dioxide (CO2) and methane (CH4) fluxes from newly thawed permafrost carbon (i.e. carbon thawed when temperatures rise above pre-industrial levels). We have especially simulated carbon release from deep deposits in Yedoma regions by describing abrupt thaw under newly formed thermokarst lakes. The computational efficiency of our model allowed us to run large, multi-centennial ensembles under various scenarios of future warming to express uncertainty inherent to simulations of the permafrost carbon feedback. Under moderate warming of the representative concentration pathway (RCP) 2.6 scenario, cumulated CO2 fluxes from newly thawed permafrost carbon amount to 20 to 58 petagrams of carbon (Pg-C) (68% range) by the year 2100 and reach 40 to 98 Pg-C in 2300. The much larger permafrost degradation under strong warming (RCP8.5) results in cumulated CO2 release of 42 to 141 Pg-C and 157 to 313 Pg-C (68% ranges) in the years 2100 and 2300, respectively. Our estimates only consider fluxes from newly thawed permafrost, not from soils already part of the seasonally thawed active layer under pre-industrial climate. Our simulated CH4 fluxes contribute a few percent to total permafrost carbon release yet they can cause up to 40% of total permafrost-affected radiative forcing in the 21st century (upper 68% range). We infer largest CH4 emission rates of about 50 Tg-CH4 per year around the middle of the 21st century when simulated thermokarst lake extent is at its maximum and when abrupt thaw under thermokarst lakes is taken into account. CH4 release from newly thawed carbon in wetland

  7. Observation-based modelling of permafrost carbon fluxes with accounting for deep carbon deposits and thermokarst activity

    NASA Astrophysics Data System (ADS)

    Schneider von Deimling, T.; Grosse, G.; Strauss, J.; Schirrmeister, L.; Morgenstern, A.; Schaphoff, S.; Meinshausen, M.; Boike, J.

    2014-12-01

    High-latitude soils store vast amounts of perennially frozen and therefore inert organic matter. With rising global temperatures and consequent permafrost degradation, a part of this carbon store will become available for microbial decay and eventual release to the atmosphere. We have developed a simplified, two-dimensional multi-pool model to estimate the strength and timing of future carbon dioxide (CO2) and methane (CH4) fluxes from newly thawed permafrost carbon (i.e. carbon thawed when temperatures rise above pre-industrial levels). We have especially simulated carbon release from deep deposits in Yedoma regions by describing abrupt thaw under thermokarst lakes. The computational efficiency of our model allowed us to run large, multi-centennial ensembles under various scenarios of future warming to express uncertainty inherent to simulations of the permafrost-carbon feedback. Under moderate warming of the representative concentration pathway (RCP) 2.6 scenario, cumulated CO2 fluxes from newly thawed permafrost carbon amount to 20 to 58 petagrammes of carbon (Pg-C) (68% range) by the year 2100 and reach 40 to 98 Pg-C in 2300. The much larger permafrost degradation under strong warming (RCP8.5) results in cumulated CO2 release of 42-141 and 157-313 Pg-C (68% ranges) in the years 2100 and 2300, respectively. Our estimates only consider fluxes from newly thawed permafrost, not from soils already part of the seasonally thawed active layer under preindustrial climate. Our simulated methane fluxes contribute a few percent to total permafrost carbon release, yet they can cause up to 40% of total permafrost-affected radiative forcing in the 21st century (upper 68% range). We infer largest methane emission rates of about 50 Tg-CH4 year-1 around the middle of the 21st century, when simulated thermokarst lake extent is at its maximum and when abrupt thaw under thermokarst lakes is accounted for. CH4 release from newly thawed carbon in wetland-affected deposits is only

  8. A mathematical model of the global processes of plastic degradation in the World Ocean with account for the surface temperature distribution

    NASA Astrophysics Data System (ADS)

    Bartsev, S. I.; Gitelson, J. I.

    2016-02-01

    The suggested model of plastic garbage degradation allows us to obtain an estimate of the stationary density of its distribution over the surface of the World Ocean, taking into account the temperature dependence of the degradation rate. The model also allows us to estimate the characteristic time periods of degradation of plastic garbage and the dynamics of the mean density variation as the mean rate of plastic garbage entry into the ocean varies.

  9. A biophysical model of synaptic plasticity and metaplasticity can account for the dynamics of the backward shift of hippocampal place fields.

    PubMed

    Yu, Xintian; Shouval, Harel Z; Knierim, James J

    2008-08-01

    Hippocampal place cells in the rat undergo experience-dependent changes when the rat runs stereotyped routes. One such change, the backward shift of the place field center of mass, has been linked by previous modeling efforts to spike-timing-dependent plasticity (STDP). However, these models did not account for the termination of the place field shift and they were based on an abstract implementation of STDP that ignores many of the features found in cortical plasticity. Here, instead of the abstract STDP model, we use a calcium-dependent plasticity (CaDP) learning rule that can account for many of the observed properties of cortical plasticity. We use the CaDP learning rule in combination with a model of metaplasticity to simulate place field dynamics. Without any major changes to the parameters of the original model, the present simulations account both for the initial rapid place field shift and for the subsequent slowing down of this shift. These results suggest that the CaDP model captures the essence of a general cortical mechanism of synaptic plasticity, which may underlie numerous forms of synaptic plasticity observed both in vivo and in vitro.

  10. A physically meaningful equivalent circuit network model of a lithium-ion battery accounting for local electrochemical and thermal behaviour, variable double layer capacitance and degradation

    NASA Astrophysics Data System (ADS)

    von Srbik, Marie-Therese; Marinescu, Monica; Martinez-Botas, Ricardo F.; Offer, Gregory J.

    2016-09-01

    A novel electrical circuit analogy is proposed for modelling electrochemical systems under realistic automotive operation conditions. The model is developed for a lithium ion battery and is based on a pseudo 2D electrochemical model. Although cast in the framework familiar to application engineers, the model is essentially an electrochemical battery model: all variables have a direct physical interpretation and there is direct access to all states of the cell via the model variables (concentrations, potentials) for monitoring and control systems design. This is the first Equivalent Circuit Network-type model that directly tracks the evolution of species inside the cell. It accounts for complex electrochemical phenomena that are usually omitted in online battery performance predictors, such as variable double layer capacitance, the full current-overpotential relation and overpotentials due to mass transport limitations. The coupled electrochemical and thermal model accounts for capacity fade via a loss in active species and for power fade via an increase in resistive solid electrolyte passivation layers at both electrodes. The model's capability to simulate cell behaviour under dynamic events is validated against test procedures, such as standard battery testing load cycles for current rates up to 20 C, as well as realistic automotive drive cycle loads.

  11. Integrating a distributed hydrological model and SEEA-Water for improving water account and water allocation management under a climate change context.

    NASA Astrophysics Data System (ADS)

    Jauch, Eduardo; Almeida, Carina; Simionesei, Lucian; Ramos, Tiago; Neves, Ramiro

    2015-04-01

    The growing demand for water and situations of water scarcity and drought are a difficult problem for water managers to solve, with large repercussions for the entire society. The complexity of this question is increased by trans-boundary river issues and the environmental impacts of the solutions usually adopted to store water, such as reservoirs. To be able to answer society's requirements regarding water allocation in a sustainable way, managers must have a complete and clear picture of the present situation, as well as being able to understand the changes in water dynamics over both the short and the long term. One of the tools available to managers is the System of Environmental-Economic Accounts for Water (SEEA-Water), a subsystem of SEEA focused on water accounts, developed by the United Nations Statistical Division (UNSD) in collaboration with the London Group on Environmental Accounting. This system provides, among other things, a set of tables and accounts for water and water-related emissions, organizing statistical data and making possible the derivation of indicators that can be used to assess the relations between economy and environment. One of the main issues with the SEEA-Water framework seems to be its requirement for large amounts of data, including field measurements of water availability in rivers, lakes, reservoirs, soil and groundwater, as well as precipitation, irrigation and other water sources and uses. While this is an incentive to collect and use data, it diminishes the usefulness of the system in countries where such data are not yet available or are incomplete, as this can lead to a poor understanding of water availability and uses. Distributed hydrological models can be used to fill in missing data required by the SEEA-Water framework. They also make it easier to assess different scenarios (usually soil use, water demand and climate changes) for better planning of water allocation. In the context of the DURERO project (www

  12. Account of nonlocal ionization by fast electrons in the fluid models of a direct current glow discharge

    SciTech Connect

    Rafatov, I.; Bogdanov, E. A.; Kudryavtsev, A. A.

    2012-09-15

    We developed and tested a simple hybrid model for a glow discharge, which incorporates nonlocal ionization by fast electrons into the 'simple' and 'extended' fluid frameworks. Calculations have been performed for argon gas. Comparison with the experimental data, as well as with the hybrid (particle) and fluid modelling results, demonstrated the good applicability of the proposed model.

  13. Model-Based Assessments to Support Learning and Accountability: The Evolution of CRESST's Research on Multiple-Purpose Measures

    ERIC Educational Resources Information Center

    Baker, Eva L.

    2007-01-01

    This article describes the history, evidence warrants, and evolution of the Center for Research on Evaluation, Standards, and Student Testing's (CRESST) model-based assessments. It considers alternative interpretations of scientific or practical models and illustrates how model-based assessment addresses both definitions. The components of the…

  14. Demonstrating marketing accountability.

    PubMed

    Gombeski, William R; Britt, Jason; Taylor, Jan; Riggs, Karen; Wray, Tanya; Adkins, Wanda; Springate, Suzanne

    2008-01-01

    Pressure on health care marketers to demonstrate effectiveness of their strategies and show their contribution to organizational goals is growing. A seven-tiered model based on the concepts of structure (having the right people, systems), process (doing the right things in the right way), and outcomes (results) is discussed. Examples of measures for each tier are provided and the benefits of using the model as a tool for measuring, organizing, tracking, and communicating appropriate information are provided. The model also provides a framework for helping management understand marketing's value and can serve as a vehicle for demonstrating marketing accountability.

  15. A Comparison of Seven Cox Regression-Based Models to Account for Heterogeneity Across Multiple HIV Treatment Cohorts in Latin America and the Caribbean

    PubMed Central

    Giganti, Mark J.; Luz, Paula M.; Caro-Vega, Yanink; Cesar, Carina; Padgett, Denis; Koenig, Serena; Echevarria, Juan; McGowan, Catherine C.; Shepherd, Bryan E.

    2015-01-01

    Abstract Many studies of HIV/AIDS aggregate data from multiple cohorts to improve power and generalizability. There are several analysis approaches to account for cross-cohort heterogeneity; we assessed how different approaches can impact results from an HIV/AIDS study investigating predictors of mortality. Using data from 13,658 HIV-infected patients starting antiretroviral therapy from seven Latin American and Caribbean cohorts, we illustrate the assumptions of seven readily implementable approaches to account for across cohort heterogeneity with Cox proportional hazards models, and we compare hazard ratio estimates across approaches. As a sensitivity analysis, we modify cohort membership to generate specific heterogeneity conditions. Hazard ratio estimates varied slightly between the seven analysis approaches, but differences were not clinically meaningful. Adjusted hazard ratio estimates for the association between AIDS at treatment initiation and death varied from 2.00 to 2.20 across approaches that accounted for heterogeneity; the adjusted hazard ratio was estimated as 1.73 in analyses that ignored across cohort heterogeneity. In sensitivity analyses with more extreme heterogeneity, we noted a slightly greater distinction between approaches. Despite substantial heterogeneity between cohorts, the impact of the specific approach to account for heterogeneity was minimal in our case study. Our results suggest that it is important to account for across cohort heterogeneity in analyses, but that the specific technique for addressing heterogeneity may be less important. Because of their flexibility in accounting for cohort heterogeneity, we prefer stratification or meta-analysis methods, but we encourage investigators to consider their specific study conditions and objectives. PMID:25647087
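
    Among the approaches compared above, stratification is easy to sketch: a Cox model stratified by cohort gives each cohort its own baseline hazard while the hazard ratios are shared across cohorts. A sketch with the lifelines package; all column names are illustrative, not the study's variables.

    from lifelines import CoxPHFitter

    def stratified_cox(df):
        """Cox model stratified by cohort membership.

        df columns (illustrative names): 'time', 'death', 'aids_at_start',
        'age', 'cohort'. Each cohort keeps its own baseline hazard.
        """
        cph = CoxPHFitter()
        cph.fit(df[["time", "death", "aids_at_start", "age", "cohort"]],
                duration_col="time", event_col="death", strata=["cohort"])
        return cph   # cph.hazard_ratios_ gives the shared hazard ratios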

  16. A constitutive model for air-NAPL-water flow in the vadose zone accounting for immobile, non-occluded (residual) NAPL in strongly water-wet porous media.

    PubMed

    Lenhard, R J; Oostrom, M; Dane, J H

    2004-07-01

    A hysteretic constitutive model describing relations among relative permeabilities, saturations, and pressures in fluid systems consisting of air, nonaqueous-phase liquid (NAPL), and water is modified to account for NAPL that is postulated to be immobile in small pores and pore wedges and as films or lenses on water surfaces. A direct outcome of the model is prediction of the NAPL saturation that remains in the vadose zone after long drainage periods (residual NAPL). Using the modified model, water and NAPL (free, entrapped by water, and residual) saturations can be predicted from the capillary pressures and the water and total-liquid saturation-path histories. Relations between relative permeabilities and saturations are modified to account for the residual NAPL by adjusting the limits of integration in the integral expression used for predicting the NAPL relative permeability. When all of the NAPL is either residual or entrapped (i.e., no free NAPL), then the NAPL relative permeability will be zero. We model residual NAPL using concepts similar to those used to model residual water. As an initial test of the constitutive model, we compare predictions to published measurements of residual NAPL. Furthermore, we present results using the modified constitutive theory for a scenario involving NAPL imbibition and drainage.

  17. A constitutive model for air-NAPL-water flow in the vadose zone accounting for immobile, non-occluded (residual) NAPL in strongly water-wet porous media.

    PubMed

    Lenhard, R J; Oostrom, M; Dane, J H

    2004-09-01

    A hysteretic constitutive model describing relations among relative permeabilities, saturations, and pressures in fluid systems consisting of air, nonaqueous-phase liquid (NAPL), and water is modified to account for NAPL that is postulated to be immobile in small pores and pore wedges and as films or lenses on water surfaces. A direct outcome of the model is prediction of the NAPL saturation that remains in the vadose zone after long drainage periods (residual NAPL). Using the modified model, water and NAPL (free, entrapped by water, and residual) saturations can be predicted from the capillary pressures and the water and total-liquid saturation-path histories. Relations between relative permeabilities and saturations are modified to account for the residual NAPL by adjusting the limits of integration in the integral expression used for predicting the NAPL relative permeability. When all of the NAPL is either residual or entrapped (i.e., no free NAPL), then the NAPL relative permeability will be zero. We model residual NAPL using concepts similar to those used to model residual water. As an initial test of the constitutive model, we compare predictions to published measurements of residual NAPL. Furthermore, we present results using the modified constitutive theory for a scenario involving NAPL imbibition and drainage.

  18. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses

    PubMed Central

    Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.

    2015-01-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447
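
    As a frequentist analogue of the random-effects summaries discussed above, the sketch below computes the DerSimonian-Laird random-effects mean and the wider (approximately normal) predictive interval for the effect in a new setting; the paper itself works in a Bayesian network meta-analysis framework, so this is illustration only.

    import numpy as np

    def random_effects_summary(y, se):
        """DerSimonian-Laird random-effects meta-analysis.

        y  : study-level effect estimates (e.g. log odds ratios)
        se : their standard errors
        Returns the RE mean, its SE, and an approximate 95% predictive
        interval for the effect in a new study/setting.
        """
        w = 1.0 / se**2
        mu_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - mu_fixed)**2)            # heterogeneity statistic
        k = len(y)
        tau2 = max(0.0, (Q - (k - 1)) /
                   (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1.0 / (se**2 + tau2)
        mu = np.sum(w_re * y) / np.sum(w_re)         # random-effects mean
        se_mu = np.sqrt(1.0 / np.sum(w_re))
        half = 1.96 * np.sqrt(tau2 + se_mu**2)       # predictive half-width
        return mu, se_mu, (mu - half, mu + half)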

  19. Modeling storm-time electrodynamics of the low-latitude ionosphere thermosphere system: Can long lasting disturbance electric fields be accounted for?

    NASA Astrophysics Data System (ADS)

    Maruyama, Naomi; Sazykin, Stanislav; Spiro, Robert W.; Anderson, David; Anghel, Adela; Wolf, Richard A.; Toffoletto, Frank R.; Fuller-Rowell, Timothy J.; Codrescu, Mihail V.; Richmond, Arthur D.; Millward, George H.

    2007-07-01

    Storm-time ionospheric disturbance electric fields are studied for two large geomagnetic storms, March 31, 2001 and April 17-18, 2002, by comparing low-latitude observations of ionospheric plasma drifts with results from numerical simulations based on a combination of first-principles models. The simulation machinery combines the Rice convection model (RCM), used to calculate inner magnetospheric electric fields, and the coupled thermosphere ionosphere plasmasphere electrodynamics (CTIPe) model, driven, in part, by RCM-computed electric fields. Comparison of model results with measured or estimated low-latitude vertical drift velocities (zonal electric fields) shows that the coupled model is capable of reproducing measurements under a variety of conditions. In particular, our model results suggest, from theoretical grounds, a possibility of long-lasting penetration of magnetospheric electric fields to low latitudes during prolonged periods of enhanced convection associated with southward-directed interplanetary magnetic field, although the model probably overestimates the magnitude and duration of such penetration during extremely disturbed conditions. During periods of moderate disturbance, we found surprisingly good overall agreement between model predictions and data, with penetration electric fields accounting for early main phase changes and oscillations in low-latitude vertical drift, while the disturbance dynamo mechanism becomes increasingly important later in the modeled events. Discrepancies between the model results and the observations indicate some of the difficulties in validating these combined numerical models, and the limitations of the available experimental data.

  20. Arctic climate changes in the 21st century: Ensemble model estimates accounting for realism in present-day climate simulation

    NASA Astrophysics Data System (ADS)

    Eliseev, A. V.; Semenov, V. A.

    2016-11-01

    When future climate changes in the Arctic region are projected with an ensemble of state-of-the-art global climate models, the results depend on the method used to construct ensemble statistics from the individual models.

  1. Construction of a mathematical model of the human body, taking the nonlinear rigidity of the spine into account

    NASA Technical Reports Server (NTRS)

    Glukharev, K. K.; Morozova, N. I.; Potemkin, B. A.; Solovyev, V. S.; Frolov, K. V.

    1973-01-01

    A mathematical model of the human body under the action of harmonic vibrations in the 2.5-7 Hz frequency range was constructed. In this frequency range, the human body is modeled as a vibrating system with lumped (concentrated) parameters. Vertical movements of the seat and the vertical components of vibrations of the human body are investigated.

  2. Production model in the conditions of unstable demand taking into account the influence of trading infrastructure: Ergodicity and its application

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2015-04-01

    A production model with allowance for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by an attempt to analyze the problems of functioning of macroeconomic structures with low competitiveness. The model is formalized in the form of a Bellman equation, for which a closed-form solution is found. The stochastic process of product stock variations is proved to be ergodic, and its final probability distribution is found. Expressions for the average production load and the average product stock are found by analyzing the stochastic process. A system of model equations relating the model variables to official statistical parameters is derived. The model is identified using data from the Fiat and KAMAZ companies. The influence of the credit interest rate on the assessment of the firm's market value and on the production load level is analyzed using comparative statics methods.

  3. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    SciTech Connect

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.

    2012-04-24

    We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it, and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.

  4. The Asian clam Corbicula fluminea as a biomonitor of trace element contamination: Accounting for different sources of variation using an hierarchical linear model

    USGS Publications Warehouse

    Shoults-Wilson, W. A.; Peterson, J.T.; Unrine, J.M.; Rickard, J.; Black, M.C.

    2009-01-01

    In the present study, specimens of the invasive clam Corbicula fluminea were collected above and below possible sources of potentially toxic trace elements (As, Cd, Cr, Cu, Hg, Pb, and Zn) in the Altamaha River system (Georgia, USA). Bioaccumulation of these elements was quantified, along with environmental (water and sediment) concentrations. Hierarchical linear models were used to account for variability in tissue concentrations related to environmental (site water chemistry and sediment characteristics) and individual (growth metrics) variables while identifying the strongest relations between these variables and trace element accumulation. The present study found significantly elevated concentrations of Cd, Cu, and Hg downstream of the outfall of kaolin-processing facilities, Zn downstream of a tire cording facility, and Cr downstream of both a nuclear power plant and a paper pulp mill. Models of the present study indicated that variation in trace element accumulation was linked to distance upstream from the estuary, dissolved oxygen, percentage of silt and clay in the sediment, elemental concentrations in sediment, shell length, and bivalve condition index. By explicitly modeling environmental variability, the hierarchical linear modeling procedure allowed the identification of sites showing increased accumulation of trace elements that may have been caused by human activity. Hierarchical linear modeling is a useful tool for accounting for environmental and individual sources of variation in bioaccumulation studies. © 2009 SETAC.

  5. Evolution of Gene Structural Complexity: An Alternative-Splicing-Based Model Accounts for Intron-Containing Retrogenes

    PubMed Central

    Zhang, Chengjun; Gschwend, Andrea R.; Ouyang, Yidan; Long, Manyuan

    2014-01-01

    The structure of eukaryotic genes evolves extensively by intron loss or gain. Previous studies have revealed two models for gene structure evolution through the loss of introns: RNA-based gene conversion, dubbed the Fink model, and the retroposition model. However, retrogenes that experienced both intron loss and intron-retaining events have been ignored; the evolutionary processes responsible for the variation in complex exon-intron structure were unknown. We detected hundreds of retroduplication-derived genes in human (Homo sapiens), fly (Drosophila melanogaster), rice (Oryza sativa), and Arabidopsis (Arabidopsis thaliana) and categorized them either as duplicated genes that have lost all introns or as duplicated genes that have lost at least one and retained at least one intron compared with the parental copy (intron-retaining [IR] type). Our new model attributes intron retention alternative splicing to the generation of these IR-type gene pairs. We present 25 parental genes that have an intron retention isoform and have retained introns in the same locations in the IR-type duplicate genes, directly supporting our hypothesis. Our alternative-splicing-based model, in conjunction with the retroposition and Fink models, can explain the IR-type genes observed. We discovered a greater percentage of IR-type genes in plants than in animals, which may be due to the abundance of intron retention cases in plants. Given the prevalence of intron retention in plants, this new model supports the view that plant genomes have very complex gene structures. PMID:24520158

  6. Evolution of gene structural complexity: an alternative-splicing-based model accounts for intron-containing retrogenes.

    PubMed

    Zhang, Chengjun; Gschwend, Andrea R; Ouyang, Yidan; Long, Manyuan

    2014-05-01

    The structure of eukaryotic genes evolves extensively by intron loss or gain. Previous studies have revealed two models for gene structure evolution through the loss of introns: RNA-based gene conversion, dubbed the Fink model, and the retroposition model. However, retrogenes that experienced both intron loss and intron-retaining events have been ignored; the evolutionary processes responsible for the variation in complex exon-intron structure were unknown. We detected hundreds of retroduplication-derived genes in human (Homo sapiens), fly (Drosophila melanogaster), rice (Oryza sativa), and Arabidopsis (Arabidopsis thaliana) and categorized them either as duplicated genes that have lost all introns or as duplicated genes that have lost at least one and retained at least one intron compared with the parental copy (intron-retaining [IR] type). Our new model attributes intron retention alternative splicing to the generation of these IR-type gene pairs. We present 25 parental genes that have an intron retention isoform and have retained introns in the same locations in the IR-type duplicate genes, directly supporting our hypothesis. Our alternative-splicing-based model, in conjunction with the retroposition and Fink models, can explain the IR-type genes observed. We discovered a greater percentage of IR-type genes in plants than in animals, which may be due to the abundance of intron retention cases in plants. Given the prevalence of intron retention in plants, this new model supports the view that plant genomes have very complex gene structures.

  7. Accounting Fundamentals for Non-Accountants

    EPA Pesticide Factsheets

    The purpose of this module is to provide an introduction and overview of accounting fundamentals for non-accountants. The module also covers important topics such as communication, internal controls, documentation and recordkeeping.

  8. Accounting for model error in air quality forecasts: an application of 4DEnVar to the assimilation of atmospheric composition using QG-Chem 1.0

    NASA Astrophysics Data System (ADS)

    Emili, Emanuele; Gürol, Selime; Cariolle, Daniel

    2016-11-01

    Model errors play a significant role in air quality forecasts. Accounting for them in the data assimilation (DA) procedures is decisive to obtain improved forecasts. We address this issue using a reduced-order coupled chemistry-meteorology model based on quasi-geostrophic dynamics and a detailed tropospheric chemistry mechanism, which we name QG-Chem. This model has been coupled to the software library for the data assimilation Object Oriented Prediction System (OOPS) and used to assess the potential of the 4DEnVar algorithm for air quality analyses and forecasts. The assets of 4DEnVar include the possibility to deal with multivariate aspects of atmospheric chemistry and to account for model errors of a generic type. A simple diagnostic procedure for detecting model errors is proposed, based on the 4DEnVar analysis and one additional model forecast. A large number of idealized data assimilation experiments are shown for several chemical species of relevance for air quality forecasts (O3, NOx, CO and CO2) with very different atmospheric lifetimes and chemical couplings. Experiments are done both under a perfect model hypothesis and including model error through perturbation of surface chemical emissions. Some key elements of the 4DEnVar algorithm such as the ensemble size and localization are also discussed. A comparison with results of 3D-Var, widely used in operational centers, shows that, for some species, analysis and next-day forecast errors can be halved when model error is taken into account. This result was obtained using a small ensemble size, which remains affordable for most operational centers. We conclude that 4DEnVar has a promising potential for operational air quality models. We finally highlight areas that deserve further research for applying 4DEnVar to large-scale chemistry models, i.e., localization techniques, propagation of analysis covariance between DA cycles and treatment for chemical nonlinearities. QG-Chem can provide a useful tool in this
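
    For intuition about the ensemble-based family of methods to which 4DEnVar belongs, the sketch below implements the textbook stochastic EnKF analysis step, in which the background covariance is estimated from the ensemble. It is not the OOPS 4DEnVar algorithm: localization and the time dimension, both discussed in the paper, are omitted.

    import numpy as np

    def enkf_analysis(X, y, H, R, rng):
        """Stochastic EnKF analysis: background covariance from the ensemble.

        X : (n_state, n_ens) ensemble of model states (e.g. chemical fields)
        y : (n_obs,) observations; H : (n_obs, n_state) observation operator
        R : (n_obs, n_obs) observation-error covariance
        """
        n = X.shape[1]
        Xm = X.mean(axis=1, keepdims=True)
        A = (X - Xm) / np.sqrt(n - 1)                    # state anomalies
        HA = H @ A                                       # observed anomalies
        K = A @ HA.T @ np.linalg.inv(HA @ HA.T + R)      # ensemble Kalman gain
        # Perturb the observations so the analysis ensemble keeps the right spread.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
        return X + K @ (Y - H @ X)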

  9. Accounting for observation uncertainties in an evaluation metric of low latitude turbulent air-sea fluxes: application to the comparison of a suite of IPSL model versions

    NASA Astrophysics Data System (ADS)

    Servonnat, Jérôme; Găinuşă-Bogdan, Alina; Braconnot, Pascale

    2016-11-01

    Turbulent momentum and heat (sensible and latent heat) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate. Evaluating these fluxes in climate models remains difficult because of the large uncertainties associated with the reference products. In this paper we present an objective metric that accounts for reference uncertainties in evaluating the annual cycle of the low-latitude turbulent fluxes of a suite of IPSL climate models. The metric consists of a Hotelling T² test between the simulated and observed fields in a reduced space characterized by the dominant modes of variability that are common to both the model and the reference, taking into account the observational uncertainty. The test is thus more severe when uncertainties are small, as is the case for sea surface temperature (SST). The results of the test show that for almost all variables and all model versions the model-reference differences are not zero. It is not possible to distinguish between model versions for sensible heat and meridional wind stress, most likely because of the large observational uncertainties. All model versions share similar biases for the different variables. There is no improvement between the reference versions of the IPSL model used for CMIP3 and CMIP5. The test also reveals that higher horizontal resolution fails to improve the representation of the turbulent surface fluxes compared to the other versions. The representation of the fluxes is further degraded in a version with improved atmospheric physics, with an amplification of some of the biases in the Indian Ocean and in the intertropical convergence zone. The ranking of the model versions for the turbulent fluxes is not correlated with the ranking found for SST. This highlights that despite the fact that SST gradients are important for the large-scale atmospheric circulation patterns, other factors such as wind speed and air-sea temperature contrast play an
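
    For illustration, a Hotelling T² test on mode coefficients reduces to a few lines of linear algebra. The sketch below (Python) is a generic two-sample version comparing simulated and observed projections onto k shared dominant modes; the paper's exact construction of the reduced space and its treatment of observational uncertainty are not reproduced here.

        import numpy as np
        from scipy import stats

        def hotelling_t2(model_modes, obs_modes):
            """Two-sample Hotelling T^2 test on (n_samples, k) arrays of
            projections onto k shared dominant modes; returns (T^2, p)."""
            n1, k = model_modes.shape
            n2 = obs_modes.shape[0]
            diff = model_modes.mean(0) - obs_modes.mean(0)
            # pooled covariance of the mode coefficients
            S = ((n1 - 1) * np.cov(model_modes, rowvar=False) +
                 (n2 - 1) * np.cov(obs_modes, rowvar=False)) / (n1 + n2 - 2)
            t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
            f = t2 * (n1 + n2 - k - 1) / (k * (n1 + n2 - 2))  # T^2 -> F
            p = stats.f.sf(f, k, n1 + n2 - k - 1)
            return t2, p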

  10. Accounting for the Impact of Impermeable Soil Layers on Pesticide Runoff and Leaching in a Landscape Vulnerability Model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A regional-scale model that estimates landscape vulnerability of pesticide leaching and runoff (solution and particle adsorbed) underestimated runoff vulnerability and overestimated leaching vulnerability compared to measured data when applied to a gently rolling landscape in northeast Missouri. Man...

  11. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.

    2013-02-01

    Humans are responsible for most forest fires in Europe, but the anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of the factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop the logistic models and a continuous variable (fire density) to build the standard linear regression models. A total of 383,657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation in the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related to agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. The results from GWR indicated a significant spatial variation in the local
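
    The GWR idea itself is compact: fit one distance-weighted regression per location. A minimal sketch (Python, Gaussian kernel with a fixed bandwidth; the paper uses an adaptive bandwidth and a logistic variant as well, which are not shown):

        import numpy as np

        def gwr_fit(coords, X, y, bandwidth):
            """One weighted least-squares fit per location; X should
            include an intercept column. Returns local coefficients."""
            n = len(y)
            betas = np.empty((n, X.shape[1]))
            for i in range(n):
                d = np.linalg.norm(coords - coords[i], axis=1)
                w = np.exp(-0.5 * (d / bandwidth) ** 2)  # Gaussian kernel
                W = np.diag(w)
                betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            return betas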

  12. Modeling scale-dependent runoff generation in a small semi-arid watershed accounting for rainfall intensity and water depth

    NASA Astrophysics Data System (ADS)

    Langhans, Christoph; Govers, Gerard; Diels, Jan; Stone, Jeffry J.; Nearing, Mark A.

    2014-07-01

    Observed scale effects of runoff on hillslopes and small watersheds derive from complex interactions of time-varying rainfall rates with runoff, infiltration and macro- and microtopographic structures. A little-studied aspect of scale effects is the concept of water depth-dependent infiltration. For semi-arid rangeland it has been demonstrated that mounds underneath shrubs have high infiltrability while lower-lying compacted or stony inter-shrub areas have low infiltrability. It is hypothesized that runoff accumulation further downslope leads to increased water depth, inundating high-infiltrability areas, which increases the area-averaged infiltration rate. A model was developed that combines the concepts of water depth-dependent infiltration, partial contributing area under variable rainfall intensity, and the Green-Ampt theory for point-scale infiltration. The model was applied to rainfall simulation data and natural rainfall-runoff data from a small sub-watershed (0.4 ha) of the Walnut Gulch Experimental Watershed in the semi-arid US Southwest. Its ability to reproduce observed hydrographs was compared with that of a conventional Green-Ampt model assuming complete-inundation sheet flow with runon infiltration, i.e., infiltration of runoff into pervious downstream areas. Parameters were derived from rainfall simulations and from watershed-scale calibration directly against the rainfall-runoff events. The water depth-dependent model performed better than the conventional model at the scale of a rainfall simulator plot, but at the scale of a small watershed the performance of both model types was similar. We believe that the proposed model contributes to a less scale-dependent way of modeling runoff and erosion at the hillslope scale.
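
    The point-scale building block here is standard Green-Ampt infiltration. A minimal sketch (Python) of supply-limited Green-Ampt under time-varying rainfall is given below; the paper's water depth-dependent extension and partial-contributing-area logic are not reproduced, and the parameter names are illustrative.

        import numpy as np

        def green_ampt(Ks, psi, dtheta, rain, dt):
            """Ks: sat. conductivity (mm/h); psi: wetting-front suction
            (mm); dtheta: moisture deficit (-); rain: rates (mm/h) per
            step of length dt (h). Returns infiltration and runoff."""
            F = 1e-6                                  # cumulative (mm)
            infil, runoff = [], []
            for r in rain:
                fc = Ks * (1.0 + psi * dtheta / F)    # capacity
                f = min(r, fc)                        # supply-limited
                F += f * dt
                infil.append(f)
                runoff.append(r - f)
            return np.array(infil), np.array(runoff)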

  13. An analytical model for the celestial distribution of polarized light, accounting for polarization singularities, wavelength and atmospheric turbidity

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gao, Jun; Fan, Zhiguo; Roberts, Nicholas W.

    2016-06-01

    We present a computationally inexpensive analytical model for simulating celestial polarization patterns in variable conditions. We combine both the singularity theory of Berry et al (2004 New J. Phys. 6 162) and the intensity model of Perez et al (1993 Sol. Energy 50 235-245) such that our single model describes three key sets of data: (1) the overhead distribution of the degree of polarization as well as the existence of neutral points in the sky; (2) the change in sky polarization as a function of the turbidity of the atmosphere; and (3) sky polarization patterns as a function of wavelength, calculated in this work from the ultra-violet to the near infra-red. To verify the performance of our model we generate accurate reference data using a numerical radiative transfer model and statistical comparisons between these two methods demonstrate no significant difference in almost all situations. The development of our analytical model provides a novel method for efficiently calculating the overhead skylight polarization pattern. This provides a new tool of particular relevance for our understanding of animals that use the celestial polarization pattern as a source of visual information.
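
    As a point of orientation, the classical single-scattering Rayleigh description of the sky's degree of polarization can be coded in a few lines (Python sketch below). It peaks 90° from the sun and has zeros only at the solar and antisolar points; the Berry et al. singularity theory adopted in the paper generalizes this picture by splitting each zero into a pair of neutral points, and turbidity and wavelength effects enter through the intensity model. The maximum degree of polarization used here is illustrative.

        import numpy as np

        def scattering_angle(sun_zen, view_zen, rel_az):
            """Angular distance (rad) between view direction and sun."""
            cg = (np.cos(sun_zen) * np.cos(view_zen) +
                  np.sin(sun_zen) * np.sin(view_zen) * np.cos(rel_az))
            return np.arccos(np.clip(cg, -1.0, 1.0))

        def rayleigh_dop(gamma, dop_max=0.8):
            """Single-scattering Rayleigh degree of polarization."""
            c2 = np.cos(gamma) ** 2
            return dop_max * (1.0 - c2) / (1.0 + c2)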

  14. Accounting: Accountants Need Verbal Skill Training

    ERIC Educational Resources Information Center

    Whitaker, Bruce L.

    1978-01-01

    Verbal skills training is one aspect of accounting education not usually included in secondary and postsecondary accounting courses. The author discusses the need for verbal competency and methods of incorporating it into accounting courses, particularly a variation of the Keller plan of individualized instruction. (MF)

  15. Can the Five Factor Model of Personality Account for the Variability of Autism Symptom Expression? Multivariate Approaches to Behavioral Phenotyping in Adult Autism Spectrum Disorder.

    PubMed

    Schwartzman, Benjamin C; Wood, Jeffrey J; Kapp, Steven K

    2016-01-01

    The present study aimed to determine the extent to which the five factor model of personality (FFM) accounts for variability in autism spectrum disorder (ASD) symptomatology in adults, to examine differences in average FFM personality traits of adults with and without ASD, and to identify distinct behavioral phenotypes within ASD. Adults (N = 828; nASD = 364) completed an online survey with an autism trait questionnaire and an FFM personality questionnaire. FFM facets accounted for 70% of variance in autism trait scores. Neuroticism positively correlated with autism symptom severity, while extraversion, openness to experience, agreeableness, and conscientiousness negatively correlated with autism symptom severity. Four FFM subtypes emerged within adults with ASD, with three subtypes characterized by high neuroticism and none characterized by lower-than-average neuroticism.

  16. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data

    USGS Publications Warehouse

    Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.

    2014-01-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
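
    To make the joint structure concrete, the sketch below (Python) writes the combined likelihood for a single cohort: a binomial known-fate term for tagging-to-season survival S0 from the radioed animals, multiplied by a Brownie tag-recovery term with annual survival S and recovery rate f. The data values and the single-cohort simplification are hypothetical; the paper's model handles multiple cohorts and a transmitter effect.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(params, n_radio, x_alive, n_tag, rec):
            """Joint known-fate + Brownie negative log-likelihood.
            rec[j]: tags recovered in hunting season j (one cohort)."""
            S0, S, f = 1 / (1 + np.exp(-np.asarray(params)))  # in (0,1)
            ll = (x_alive * np.log(S0)
                  + (n_radio - x_alive) * np.log(1 - S0))     # known fate
            p = np.array([S0 * S**j * f for j in range(len(rec))])
            ll += rec @ np.log(p)                             # recoveries
            ll += (n_tag - rec.sum()) * np.log(1 - p.sum())   # never seen
            return -ll

        # hypothetical data: 30 radioed birds, 26 alive at season opening,
        # 80 reward tags, recoveries over three seasons
        res = minimize(neg_log_lik, x0=[1.5, 0.5, -1.5],
                       args=(30, 26, 80, np.array([14, 8, 4])))
        S0_hat, S_hat, f_hat = 1 / (1 + np.exp(-res.x))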

  18. Two-way FSI modelling of blood flow through CCA accounting on-line medical diagnostics in hypertension

    NASA Astrophysics Data System (ADS)

    Czechowicz, K.; Badur, J.; Narkiewicz, K.

    2014-08-01

    Flow parameters can induce pathological changes in the arteries. We propose a method to assess those parameters using a 3D computer model of the flow in the Common Carotid Artery. Input data were acquired using an automatic 2D ultrasound wall tracking system and used to generate a 3D geometry of the artery. The diameter and wall thickness were assessed individually for every patient, but the artery was taken as a 75 mm straight tube. The Young's modulus of the arterial walls was calculated using the pulse pressure, diastolic (minimal) diameter and wall thickness (IMT). Blood flow was derived from the pressure waveform using a 2-parameter Windkessel model, and the blood was assumed to be non-Newtonian. The computational models were generated and solved using commercial code. The coupling method required an Arbitrary Lagrangian-Eulerian formulation to solve the Navier-Stokes and Navier-Lamé equations in a moving domain. The calculations showed that the distension of the walls in the model is not significantly different from the measurements. Results from the model have been used to locate additional risk factors, such as wall shear stress or circumferential stress, that may predict adverse hypertension complications.
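
    Two of the ingredients mentioned here are easy to state explicitly. The sketch below (Python) shows one common thin-wall estimate of the incremental Young's modulus from pulse pressure, diameters and IMT, and the 2-element Windkessel relation for inferring flow from a pressure waveform. Both are textbook forms offered under the assumption that the paper used something close to them; all parameter values are patient-specific.

        import numpy as np

        def youngs_modulus(pulse_pressure, d_dia, d_sys, imt):
            """Thin-wall incremental modulus (one common form):
            E = dP * Dd^2 / (2 * h * (Ds - Dd)); SI units."""
            return pulse_pressure * d_dia**2 / (2 * imt * (d_sys - d_dia))

        def windkessel_flow(p, t, R, C):
            """2-element Windkessel: Q(t) = C dP/dt + P/R,
            with peripheral resistance R and compliance C."""
            return C * np.gradient(p, t) + p / R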

  19. Using a new high resolution regional model for malaria that accounts for population density and surface hydrology to determine sensitivity of malaria risk to climate drivers

    NASA Astrophysics Data System (ADS)

    Tompkins, Adrian; Ermert, Volker; Di Giuseppe, Francesca

    2013-04-01

    To better address the role of population dynamics and surface hydrology in the assessment of malaria risk, a new dynamical disease model has been developed at ICTP, known as VECTRI (VECtor borne disease community model of ICTP, TRIeste). The model accounts for the impact of temperature on the larvae, parasite and adult vector populations. Local host population density affects the transmission intensity, and the model thus reproduces the differences between peri-urban and rural transmission noted in Africa. A new simple pond-model framework represents surface hydrology. The model can be run at spatial resolutions finer than 10 km to resolve individual health districts and can thus be used as a planning tool. Results of the model's representation of interannual variability and longer-term projections of malaria transmission will be shown for Africa. These show that the model represents the seasonality and spatial variations of malaria transmission well, matching a wide range of survey data of parasite rate and entomological inoculation rate (EIR) from across West and East Africa taken in the period prior to large-scale interventions. The model is used to determine the sensitivity of malaria risk to climate variations, in both rainfall and temperature, and its use in a prototype forecasting system coupled with ECMWF forecasts will be demonstrated.
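
    One of the temperature dependencies such models rest on is the classical Detinova degree-day description of parasite development, sketched below in Python (with the classic Plasmodium falciparum constants; VECTRI's full scheme, with larval and vector dynamics, is far richer than this single rate).

        def sporogonic_rate(temp_c, t_min=16.0, degree_days=111.0):
            """Daily progress of sporogonic development; development
            completes when the cumulative rate reaches 1. No
            development occurs below t_min."""
            return max(temp_c - t_min, 0.0) / degree_days

        # days to complete sporogony at a constant 25 degC (~12.3 days)
        days = 1.0 / sporogonic_rate(25.0)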

  20. Reengineering Elementary Accounting. Final Report.

    ERIC Educational Resources Information Center

    California State Univ., Chico.

    This final report describes activities and accomplishments of a 3-year project at California State University Chico (CSUC) to reengineer the 2-semester elementary accounting course. The new model emphasized, first, shifting from the traditional view of the preparer of accounting information to that of the user; second, forcing the student to adopt…

  1. A hybrid Bayesian hierarchical model combining cohort and case-control studies for meta-analysis of diagnostic tests: Accounting for partial verification bias.

    PubMed

    Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao

    2016-12-01

    To account for between-study heterogeneity in meta-analyses of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model sensitivities and specificities. As study design and population vary, the definition of disease status or severity can differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model has been proposed; however, that approach can only include cohort studies with information for estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies verify only a subset of samples with the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study, but the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model that combines cohort and case-control studies while simultaneously correcting for partial verification bias. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies are presented, on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses.

  2. Quantitative modeling of the neural representation of objects: how semantic feature norms can account for fMRI activation.

    PubMed

    Chang, Kai-min Kevin; Mitchell, Tom; Just, Marcel Adam

    2011-05-15

    Recent multivariate analyses of fMRI activation have shown that discriminative classifiers such as Support Vector Machines (SVM) are capable of decoding fMRI-sensed neural states associated with the visual presentation of categories of various objects. However, the lack of a generative model of neural activity limits the generality of these discriminative classifiers for understanding the underlying neural representation. In this study, we propose a generative classifier that models the hidden factors underpinning the neural representation of objects, using a multivariate multiple linear regression model. The results indicate that object features derived from an independent behavioral feature norming study can explain a significant portion of the systematic variance in the neural activity observed in an object-contemplation task. Furthermore, the resulting regression model is useful for classifying a previously unseen neural activation vector, indicating that the distributed pattern of neural activities encodes sufficient signal to discriminate differences among stimuli. More importantly, there appears to be a double dissociation between the two classifier approaches and within- versus between-participants generalization. Whereas an SVM-based discriminative classifier achieves the best classification accuracy in within-participants analysis, the generative classifier outperforms an SVM-based model that does not utilize such intermediate representations in between-participants analysis. This pattern of results suggests the SVM-based classifier may be picking up idiosyncratic patterns that do not generalize well across participants, and that good generalization across participants may require the broad, large-scale patterns captured by our set of intermediate semantic features. Finally, this intermediate representation allows us to extrapolate the model of neural activity to previously unseen words, which cannot be done with a discriminative classifier.
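
    The generative scheme reduces to two small steps: regress voxel activations on semantic features, then classify a new activation by its nearest predicted pattern. A minimal sketch (Python, plain least squares and Euclidean distance; the authors' actual pipeline, regularization and evaluation protocol are not reproduced):

        import numpy as np

        def fit_generative_model(F_train, A_train):
            """Regress activations on features: A ~ F @ W.
            F_train: (stimuli, features); A_train: (stimuli, voxels)."""
            W, *_ = np.linalg.lstsq(F_train, A_train, rcond=None)
            return W

        def classify(a_new, F_candidates, W):
            """Return index of the candidate stimulus whose predicted
            activation pattern is closest to the observed one."""
            preds = F_candidates @ W
            return int(np.argmin(np.linalg.norm(preds - a_new, axis=1)))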

  3. Accounting for particle non-sphericity in modeling of mineral dust radiative properties in the thermal infrared

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Dubovik, O.; Lapyonok, T.; Derimian, Y.

    2014-12-01

    Spectral radiative parameters (extinction optical depth, single scattering albedo, asymmetry factor) of spheroids of mineral dust composed of quartz and clays have been simulated at wavelengths between 7.0 and 10.2 μm using a T-matrix code. In spectral intervals with high values of the complex refractive index, and for large particles, the parameters cannot be fully calculated with the code: in practice, the calculations stop at a truncation radius beyond which the contribution of particles cannot be taken into account. To deal with this issue, we have developed and applied an accurate corrective technique of T-matrix Size Truncation Compensation (TSTC). For a mineral dust described by its AERONET standard aspect ratio (AR) distribution, the full error margin when applying the TSTC is within 0.3% (i.e., ±0.15%), whatever the radiative parameter and the wavelength considered, for quartz (the most difficult case). Large AR values also limit what can be calculated with the code; the TSTC was able to complete the calculations of the T-matrix code for a modified AERONET AR distribution with a maximum AR of 4.7 instead of 3 for the standard distribution. Comparison between the simulated properties of spheroids and of spheres of the same volume confirms, in agreement with the literature, that significant differences are observed in the vicinity of the mineral resonant peaks (λ ca. 8.3-8.7 μm for quartz, ca. 9.3-9.5 μm for clays) and that they are due to absorption by the small particles. This is a favorable circumstance for the TSTC, which is concerned with the contribution of the largest particles. This technique improves the accuracy of the simulated radiative parameters of mineral dust, which should benefit applications such as remote sensing or the determination of the energy balance of dust in the thermal infrared (TIR), incompletely investigated so far.

  4. Diagnostic Competence of Teachers: A Process Model That Accounts for Diagnosing Learning Behavior Tested by Means of a Case Scenario

    ERIC Educational Resources Information Center

    Klug, Julia; Bruder, Simone; Kelava, Augustin; Spiel, Christiane; Schmitz, Bernhard

    2013-01-01

    Diagnosing learning behavior is one of teachers' most central tasks. So far, accuracy in teachers' judgments on students' achievement has been investigated. In this study, a new perspective is taken by developing and testing a three-dimensional model that describes the process of diagnosing learning behavior within a sample of N = 293…

  5. A Suggested Model for an Accountability System for Cost Effectiveness Management and Evaluation of Adult Basic Education Programs.

    ERIC Educational Resources Information Center

    Carr, Neil W.; And Others

    This model for a cost effective management and evaluation system is intended to help the administrator of an adult basic education (ABE) program (1) to gather data on a monthly basis, (2) to maintain a monthly data review, (3) to modify program costs and/or student enrollment and class size, and (4) to prepare the following year's budget.…

  6. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    PubMed

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases, with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs.
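
    The censoring piece of such a model is the Tobit contribution: values below the detection limit contribute a censoring probability rather than a density. A minimal sketch (Python) for left-censored normal data is below; in the paper the means come from non-linear decay curves with correlated random effects, which this fragment does not show.

        import numpy as np
        from scipy import stats

        def tobit_neg_loglik(y, mu, sigma, lod):
            """Left-censored normal log-likelihood: observations at or
            below the limit of detection contribute Phi((lod-mu)/sigma),
            the rest the normal density."""
            y, mu = np.asarray(y, float), np.asarray(mu, float)
            cens = y <= lod
            ll = np.sum(stats.norm.logcdf((lod - mu[cens]) / sigma))
            ll += np.sum(stats.norm.logpdf(y[~cens], mu[~cens], sigma))
            return -ll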

  7. Modelling scale-dependent runoff generation in a small semi-arid watershed accounting for rainfall intensity and water depth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Observed scale effects of runoff and erosion on hillslopes and small watersheds pose one of the most intriguing challenges to modellers, because it results from complex interactions of time-dependent rainfall input with runoff, infiltration and macro- and microtopographic structures. A little studie...

  8. Modelling runoff at the plot scale taking into account rainfall partitioning by vegetation: application to stemflow of banana (Musa spp.) plant

    NASA Astrophysics Data System (ADS)

    Charlier, J.-B.; Moussa, R.; Cattan, P.; Cabidoche, Y.-M.; Voltz, M.

    2009-06-01

    Rainfall partitioning by vegetation modifies the intensity of rainwater reaching the ground, which affects runoff generation. Incident rainfall is intercepted by the plant canopy and then redistributed into throughfall and stemflow. Rainfall intensities at the soil surface are therefore not spatially uniform, generating local variations in runoff production that are disregarded in runoff models. The aim of this paper was to model runoff at the plot scale, accounting for rainfall partitioning by vegetation in the case of plants that concentrate rainwater at the plant foot and promote stemflow. We developed a lumped modelling approach, including a stemflow function that divided the plot into two compartments: one comprising stemflow and the associated water pathways, and one for the rest of the plot. This stemflow function was coupled with a production function and a transfer function to simulate a flood hydrograph using the MHYDAS model. Calibrated parameters were a "stemflow coefficient", which compartmented the plot; the saturated hydraulic conductivity (Ks), which controls infiltration and runoff; and the two parameters of the diffusive wave equation. We tested our model on a banana plot of 3000 m2 on permeable Andosol (mean Ks = 75 mm h-1) under tropical rainfall in Guadeloupe (FWI). Runoff simulations without and with the stemflow function were performed and compared to 18 flood events of 10 to 130 mm rainfall depth. Modelling results showed that the stemflow function improved the calibration of hydrographs according to the error criteria on volume and peakflow and to the Nash-Sutcliffe coefficient. This was particularly the case for low flows observed during residual rainfall, for which the stemflow function allowed runoff to be simulated for rainfall intensities lower than the Ks measured at the soil surface. This approach also allowed us to take into account the experimental data, without needing to calibrate the runoff volume on

  9. Accounting for spatial variation in vegetation properties improves simulations of Amazon forest biomass and productivity in a global vegetation model

    NASA Astrophysics Data System (ADS)

    de Almeida Castanho, A. D.; Coe, M. T.; Heil Costa, M.; Malhi, Y.; Galbraith, D.; Quesada, C. A.

    2012-08-01

    Dynamic vegetation models forced with spatially homogeneous biophysical parameters are capable of producing average productivity and biomass values for the Amazon basin forest biome that are close to observed estimates, but are unable to reproduce the observed spatial variability. Recent observational studies have shown substantial regional variability of above-ground productivity and biomass across the Amazon basin, which is believed to be driven primarily by soil physical and chemical properties. In this study, spatial heterogeneity of vegetation properties is added to the IBIS land surface model, and the simulated productivity and biomass of the Amazon basin are compared to observations from undisturbed forest. The maximum Rubisco carboxylation capacity (Vcmax) and the woody biomass residence time (τw) were found to be the most important properties determining the modeled spatial variation of above-ground woody net primary productivity and biomass, respectively. Spatial heterogeneity of these properties may lead to a 1.8-fold spatial variability in simulated woody net primary productivity and a 2.8-fold variability in woody above-ground biomass. The coefficient of correlation between modeled and observed woody productivity improved from 0.10 with homogeneous parameters to 0.73 with spatially heterogeneous parameters, while the coefficient of correlation between simulated and observed woody above-ground biomass improved from 0.33 to 0.88. The results from our analyses with the IBIS dynamic vegetation model demonstrate that using single values for key ecological parameters in the tropical forest biome severely limits simulation accuracy. We emphasize that our approach must be viewed as an important first step and that a clearer understanding of the biophysical mechanisms that drive the spatial variability of carbon allocation, τw and Vcmax is necessary.

  10. Modeling complicated rheological behaviors in encapsulating shells of lipid-coated microbubbles accounting for nonlinear changes of both shell viscosity and elasticity.

    PubMed

    Li, Qian; Matula, Thomas J; Tu, Juan; Guo, Xiasheng; Zhang, Dong

    2013-02-21

    It is well accepted that the dynamic responses of ultrasound contrast agent (UCA) microbubbles are significantly affected by the properties of the encapsulating shell (e.g., shell elasticity and viscosity). In this work, a new model is proposed to describe the complicated rheological behaviors of the encapsulating shell of UCA microbubbles by applying the nonlinear 'Cross law' to the shell viscous term in the Marmottant model. The proposed model was verified by fitting the dynamic responses of UCAs measured with either a high-speed optical imaging system or a light scattering system. The comparison between the measured radius-time curves and the numerical simulations demonstrates that the 'compression-only' behavior of UCAs can be successfully simulated with the new model. The shell elastic and viscous coefficients of SonoVue microbubbles were then evaluated based on the new model simulations and compared to results obtained from existing UCA models. The results confirm the capability of the current model to reduce the dependence of bubble shell parameters on the initial bubble radius, indicating that the model may describe more comprehensively the complex rheological nature (e.g., 'shear-thinning' and 'strain-softening') of the encapsulating shells of UCA microbubbles by taking into account nonlinear changes in both shell elasticity and shell viscosity.
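
    The Cross law itself is a one-line shear-thinning closure between two viscosity plateaus, sketched below in Python; in the paper this form is applied to the shell viscous term of the Marmottant model and the parameters are fitted per bubble, so the names here are purely illustrative.

        def cross_viscosity(shear_rate, eta0, eta_inf, lam, n):
            """Cross model: eta(g) = eta_inf +
            (eta0 - eta_inf) / (1 + (lam*g)**n)."""
            return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * shear_rate) ** n)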

  11. The use of a random regression model to account for change in racing speed of German trotters with increasing age.

    PubMed

    Bugislaus, A-E; Roehe, R; Willms, F; Kalm, E

    2006-08-01

    In a genetic analysis of German trotters, the performance trait racing time per km was analysed using a random regression model on six age classes (2-, 3-, 4-, 5-year-old and 6-year-old and older trotters; the age class of 3-year-old trotters was additionally divided by birth month into two seasons). The best-fitting random regression model for racing time per km on the six age classes included as fixed effects sex, race track, condition of race track (fitted as a second-order polynomial on age), race distance and driver (fitted as a first-order polynomial on age), as well as year-season (fitted independent of age). The random additive genetic and permanent environmental effects were fitted as second-order polynomials on age. Data consisted of 138,620 performance observations from 2,373 trotters, and the pedigree data contained 9,952 horses from a four-generation pedigree. Heritabilities for racing time per km increased from 0.01 to 0.18 across the age classes from 2- to 4-year-old trotters, then decreased slightly for 5-year-old and substantially for 6-year-old horses. Genetic correlations of racing time per km among the six age classes were very high (rg = 0.82-0.99). Heritability was h2 = 0.13 when using a repeatability animal model for racing time per km with the six age classes as a fixed effect. Breeding values from repeatability analyses over all and within age classes resulted in slightly different rankings of trotters than those from random regression analysis; with random regression analysis almost no reranking of trotters over time took place. Generally, the analyses showed that using a random regression model improved the accuracy of selection of trotters across age classes.

  12. Modeling single-vehicle run-off-road crash severity in rural areas: Accounting for unobserved heterogeneity and age difference.

    PubMed

    Gong, Linfeng; Fan, Wei David

    2017-04-01

    This study investigates factors that significantly contribute to the severity of driver injuries resulting from single-vehicle run-off-road (SV ROR) crashes. A mixed logit model approach is employed to explore the potential unobserved heterogeneous effects associated with each age group: young (ages 16-24), middle-aged (ages 25-65), and older drivers (ages over 65). Likelihood ratio tests indicated that developing separate injury severity models for each age group is statistically superior to estimating a single model using all data. Based on crash data collected from 2009 to 2013 in North Carolina, a series of driver, vehicle, roadway, and environmental characteristics are examined. Both parameter estimates and their elasticities are developed and used to interpret the models. The estimation results show that the contributing factors which significantly affect the injury severity of an SV ROR crash differ across the three age groups. Use of a restraint device and horizontal curves are found to affect crash injuries and fatalities in all age groups. Reckless driving, speeding, distraction, inexperience, drug or alcohol involvement, presence of passengers, and driving an SUV or a van are found to have a more pronounced influence on young and middle-aged drivers than on older drivers. Compared to passenger cars, older drivers are less likely to experience possible injuries in a large vehicle (e.g., truck or bus). Average annual daily traffic volume and lighting conditions are also found to influence the resulting injury severity of SV ROR crashes specific to young drivers.

  13. Statistical Downscaling of General Circulation Model Outputs to Precipitation Accounting for Non-Stationarities in Predictor-Predictand Relationships

    PubMed Central

    Sachindra, D. A.; Perera, B. J. C.

    2016-01-01

    This paper presents a novel approach to incorporate the non-stationarities characterised in GCM outputs into the Predictor-Predictand Relationships (PPRs) of statistical downscaling models. In this approach, a series of 42 PPRs based on the multi-linear regression (MLR) technique were determined for each calendar month using a 20-year window moved at a 1-year time step over the predictor data obtained from the NCEP/NCAR reanalysis archive and observations of precipitation at 3 stations located in Victoria, Australia, for the period 1950–2010. The relationships between the constants and coefficients in the PPRs and the statistics of the reanalysis predictor data were then determined for the period 1950–2010, for each calendar month. Thereafter, using these relationships with the statistics of the past HadCM3 GCM data for the predictors, new PPRs were derived for the periods 1950–69, 1970–89 and 1990–99 for each station. This process yielded a non-stationary downscaling model consisting of one PPR per calendar month for each of the above three periods for each station. Non-stationarities in the climate are characterised by long-term changes in the statistics of the climate variables, and the above process enabled relating these non-stationarities to the PPRs. The new PPRs were then used with the past HadCM3 data to reproduce the observed precipitation. It was found that the non-stationary MLR-based downscaling model produced more accurate simulations of observed precipitation more often than conventional stationary downscaling models developed with MLR and Genetic Programming (GP).
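
    The moving-window regression at the heart of the approach is simple to sketch (Python, a single series rather than one model per calendar month; names are illustrative):

        import numpy as np

        def moving_window_mlr(X, y, years, window=20, step=1):
            """Fit one MLR per window of `window` years moved at `step`.
            X: (n_years, n_predictors). Returns (start_year, coeffs)."""
            fits = []
            for start in range(years.min(), years.max() - window + 2, step):
                m = (years >= start) & (years < start + window)
                A = np.column_stack([np.ones(m.sum()), X[m]])  # intercept
                beta, *_ = np.linalg.lstsq(A, y[m], rcond=None)
                fits.append((start, beta))
            return fits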

  14. Observation-based modelling of permafrost carbon fluxes with accounting for deep carbon deposits and thermokarst activity

    NASA Astrophysics Data System (ADS)

    Schneider von Deimling, Thomas; Grosse, Guido; Strauss, Jens; Schirrmeister, Lutz; Morgenstern, Anne; Schaphoff, Sibyll; Meinshausen, Malte; Boike, Julia

    2015-04-01

    With rising global temperatures and consequent permafrost degradation, a part of the old carbon stored in high-latitude soils will become available for microbial decay and eventual release to the atmosphere. To estimate the strength and timing of future carbon dioxide and methane fluxes from newly thawed permafrost carbon, we have developed a simplified, two-dimensional multi-pool model. As large amounts of soil organic matter are stored below three meters depth, we have also simulated carbon release from deep deposits in Yedoma regions. For this purpose we have modelled abrupt thaw under thermokarst lakes, which can unlock large amounts of soil carbon buried deep in the ground. The computational efficiency of our 2-D model allowed us to run large, multi-centennial ensembles of differing future warming scenarios to express the uncertainty inherent in simulations of the permafrost-carbon feedback. Our model simulations, which are constrained by multiple lines of recent observations, suggest cumulative CO2 fluxes from newly thawed permafrost until the year 2100 of 20-58 Pg-C under moderate warming (RCP2.6), and of 42-141 Pg-C under strong warming (RCP8.5). Under intense thermokarst activity, our simulated methane fluxes proved substantial and caused up to 40% of total permafrost-affected radiative forcing in the 21st century. By quantifying CH4 contributions from different pools and depth levels, we discuss the role of thermokarst dynamics in future Arctic carbon release. The additional global warming from the release of newly thawed permafrost carbon proved only slightly dependent on the anthropogenic emission pathway in our simulations and reached about 0.1°C by the end of the century. The long-term, permafrost-affected global warming increased further in the 22nd and 23rd centuries, reaching a maximum of about 0.4°C in the year 2300.

  15. A general model to calculate the spin-lattice (T1) relaxation time of blood, accounting for haematocrit, oxygen saturation and magnetic field strength.

    PubMed

    Hales, Patrick W; Kirkham, Fenella J; Clark, Christopher A

    2016-02-01

    Many MRI techniques require prior knowledge of the T1-relaxation time of blood (T1bl). An assumed/fixed value is often used; however, T1bl is sensitive to magnetic field strength (B0), haematocrit (Hct), and oxygen saturation (Y). We aimed to combine data from previous in vitro measurements into a mathematical model that estimates T1bl as a function of B0, Hct, and Y. The model was shown to predict T1bl from in vivo studies with good accuracy (±87 ms). This model allows for improved estimation of T1bl between 1.5 and 7.0 T while accounting for variations in Hct and Y, leading to improved accuracy of MRI-derived perfusion measurements.
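
    The published fit is not reproduced here, but the general shape of such a model is a relaxation rate that is linear in Hct and in deoxygenation, with field-dependent coefficients. The Python sketch below uses entirely hypothetical placeholder coefficients, chosen only to give plausible magnitudes (roughly 1.7 s for arterial blood at 3 T); it should not be used in place of the paper's model.

        def t1_blood_ms(b0, hct, y):
            """Illustrative only: 1/T1 = a(B0) + b*Hct + c*Hct*(1-Y).
            The coefficients below are hypothetical placeholders,
            NOT the published fit."""
            a = 0.45 / b0 ** 0.5          # field-dependent baseline (1/s)
            b, c = 0.75, 0.25             # Hct and deoxygenation terms
            r1 = a + b * hct + c * hct * (1.0 - y)
            return 1000.0 / r1            # T1 in ms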

  16. Spatially Explicit Full Carbon and Greenhouse Gas Accounting for the Midwestern and Continental US: Modeling and Decision Support for Carbon Management

    NASA Astrophysics Data System (ADS)

    West, T. O.; Brandt, C. C.; Wilson, B. S.; Hellwinckel, C. M.; Mueller, M.; Tyler, D. D.; de La Torre Ugarte, D. G.; Larson, J. A.; Nelson, R. G.; Marland, G.

    2006-12-01

    Full carbon accounting for terrestrial ecosystems is intended to quantify changes in net carbon emissions caused by changes in land management. On agricultural lands, changes in land management can alter CO2 emissions from fossil fuel use, agricultural lime, and decomposition of soil carbon, as well as off-site emissions from the manufacture of fertilizers, pesticides, and agricultural lime. We are developing a full carbon accounting framework that can be used for estimates of on-site net carbon flux or for full greenhouse gas accounting at high spatial resolution. Estimates are based on the assimilation of national inventory data, soil carbon dynamics based on empirical analyses of field data, and Landsat-derived remote sensing products at 30 x 30 m resolution. We applied this framework to a midwestern region of the US consisting of 679 counties approximately centered on Iowa. We estimate the 1990 baseline soil carbon for this region to be 4,099 Tg C to a maximum depth of 3 m. Soil carbon accumulation of 57.3 Tg C is estimated to have occurred in this region between 1991 and 2000. Without accounting for the soil carbon loss associated with changes to more intense tillage practices, our estimate increases to 66.3 Tg C. This indicates that on-site permanence of soil carbon is approximately 86% with no additional economic incentives provided for soil carbon sequestration practices. Total net carbon flux from agricultural activities in the Midwestern US in 2000 is estimated at about -5 Tg C. This estimate includes carbon uptake, decomposition, harvested products, and on-site fossil fuel emissions; soil carbon accumulation therefore offset on-site emissions in 2000. Our carbon accounting framework offers a method to integrate new inventory and remote sensing data on an annual basis, account for alternating annual trends in land management without the need for model equilibration, and provide a transparent means to monitor changes in soil carbon

  17. Cluster perturbation theory in Hubbard model exactly taking into account the short-range magnetic order in 2 x 2 cluster

    SciTech Connect

    Nikolaev, S. V. Ovchinnikov, S. G.

    2010-10-15

    Cluster perturbation theory for the 2D Hubbard model is constructed using X operators in the Hubbard-I approximation. The short-range magnetic order is taken into account by dividing the entire lattice into individual 2 x 2 clusters and solving the eigenvalue problem in an individual cluster by exact diagonalization, taking into account all excited levels. The case of half-filling with hopping between nearest neighbors is considered. The numerical solution reveals a shadow band in the quasiparticle spectrum. It is also found that a gap in the density of states of the quasiparticle spectrum at zero temperature exists for arbitrarily small values of the Coulomb repulsion parameter U and increases with this parameter; the presence of this gap is due to the formation of short-range antiferromagnetic order. An analysis of the temperature evolution of the density of states shows that the metal-insulator transition occurs continuously. The existence of two characteristic energy scales at finite temperatures is demonstrated: the larger scale is associated with the formation of a pseudogap in the vicinity of the Fermi level, and the smaller scale with the metal-insulator transition temperature. A peak in the density of states at the Fermi level, which is predicted by dynamical mean-field theory in the vicinity of the metal-insulator transition, is not observed.

  18. Modeling Water Resource Systems Accounting for Water-Related Energy Use, GHG Emissions and Water-Dependent Energy Generation in California

    NASA Astrophysics Data System (ADS)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Medellin-Azuara, J.

    2015-12-01

    Most individual processes relating water and energy interdependence have been assessed in many different ways over the last decade. It is time to step up and include the results of these studies in management by providing a tool that integrates these processes in decision-making, to effectively understand the tradeoffs between water and energy across management options and scenarios. A simple but powerful decision support system (DSS) for water management is described that includes water-related energy use and GHG emissions not solely from water operations but also from final water end uses, including demands from cities, agriculture, the environment and the energy sector. Because one of the main drivers of energy use and GHG emissions is water pumping from aquifers, the DSS combines a surface water management model with a simple groundwater model, accounting for their interrelationships. The model also explicitly includes economic data to optimize water use across sectors during shortages and to calculate return flows from different uses. Capabilities of the DSS are demonstrated in a case study of California's intertied water system. Results show that urban end uses account for most GHG emissions of the entire water cycle, but large water conveyance produces significant peaks over the summer season. Also, the development of more efficient water application in the agricultural sector has increased total energy consumption and net water use in the basins.

  19. A conservative vapour intrusion screening model of oxygen-limited hydrocarbon vapour biodegradation accounting for building footprint size

    NASA Astrophysics Data System (ADS)

    Knight, John H.; Davis, Gregory B.

    2013-12-01

    Petroleum hydrocarbon vapours pose a reduced risk to indoor air due to biodegradation processes where oxygen is available in the subsurface or below built structures. However, no previous assessment has been available to show the effects of a building footprint (slab size) on oxygen-limited hydrocarbon vapour biodegradation and the potential for oxygen to be present beneath the entire sub-slab region of a building. Here we provide a new, conservative and conceptually simple vapour screening model which links oxygen and hydrocarbon vapour transport and biodegradation in the vicinity and beneath an impervious slab. This defines when vapour risk is insignificant, or conversely when there is potential for vapour to contact the sub-slab of a building. The solution involves complex mathematics to determine the position of an unknown boundary interface between oxygen diffusing in from the ground surface and vapours diffusing upwards from a subsurface vapour source, but the mathematics reduces to a simple relationship between the vapour source concentration and the ratio of the half slab width and depth to the vapour source. Data from known field investigations are shown to be consistent with the model predictions. Examples of 'acceptable' slab sizes for vapour source depths and strengths are given. The predictions are conservative as an estimator of when petroleum hydrocarbon vapours might come in contact with a slab-on-ground building since additional sources of oxygen due to advective flow or diffusion through the slab are ignored. As such the model can be used for screening sites for further investigation.

  1. Accounting for uncertainty in confounder and effect modifier selection when estimating average causal effects in generalized linear models.

    PubMed

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-09-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty about which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.
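
    The Bayesian bootstrap step can be sketched compactly: given model-based predictions of each subject's two potential outcomes, draw Dirichlet weights over subjects and average the predicted differences. The Python fragment below shows only this step; the paper additionally integrates over confounder and interaction-term uncertainty, which is omitted here.

        import numpy as np

        def bayes_boot_ace(y1_pred, y0_pred, n_draws=2000, seed=0):
            """Bayesian bootstrap draws of the average causal effect from
            per-subject predicted potential outcomes."""
            rng = np.random.default_rng(seed)
            diff = np.asarray(y1_pred) - np.asarray(y0_pred)
            w = rng.dirichlet(np.ones(diff.size), size=n_draws)
            draws = w @ diff
            return draws.mean(), np.percentile(draws, [2.5, 97.5])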

  2. Controlled release from hydrogel-based solid matrices. A model accounting for water up-take, swelling and erosion.

    PubMed

    Lamberti, Gaetano; Galdi, Ivan; Barba, Anna Angela

    2011-04-04

    The design and realization of drug delivery systems based on polymer matrices could be greatly improved by modeling the phenomena which take place after their administration. A reliable mathematical model, able to predict the release kinetics from drug delivery systems, could actually replace the resource-consuming trial-and-error procedures usually followed in their manufacture. In this work, the complex problem of drug release from polymer (HPMC)-based matrix systems was faced. The phenomena of water up-take, system swelling and erosion, and drug release, previously observed and experimentally quantified, were described here by transient mass balances with diffusion. The resulting set of differential equations was solved using finite element methods. Two different systems were investigated: cylindrical matrices in which transport was allowed only through the lateral surface (the "radial" case), and cylindrical matrices with the overall surface exposed to the solvent (the "overall" case). A code able to describe quantitatively all the observed phenomena has been obtained.
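
    As a baseline for what such transient mass balances look like numerically, the sketch below (Python) solves pure Fickian diffusion out of a slab by explicit finite differences and tracks the released fraction. It deliberately omits the water up-take, swelling and erosion terms that the paper's finite element model includes; geometry and parameter values are illustrative.

        import numpy as np

        def release_profile(D, L, nx=101, nt=20000, t_end=3600.0):
            """Fraction released vs. time from a slab of half-thickness
            L (m), diffusivity D (m^2/s); symmetry at x=0, perfect sink
            at x=L."""
            dx, dt = L / (nx - 1), t_end / nt
            assert D * dt / dx**2 <= 0.5, "explicit stability limit"
            c = np.ones(nx)               # uniform initial loading
            m0 = c.sum()
            out = []
            for _ in range(nt):
                lap = np.empty(nx)
                lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
                lap[0] = 2 * (c[1] - c[0])    # zero-flux (symmetry)
                lap[-1] = 0.0
                c += D * dt / dx**2 * lap
                c[-1] = 0.0                   # sink boundary
                out.append(1.0 - c.sum() / m0)
            return np.array(out)

        frac = release_profile(D=1e-10, L=1e-3)   # stable: 0.18 <= 0.5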

  3. Accounting for structural and exchange mobility in models of status attainment: Social fluidity in five European countries.

    PubMed

    Menés, Jorge Rodríguez

    2017-01-01

    This paper proposes a new method to distinguish structural from exchange mobility in status attainment models with interval endogenous variables. To measure structural mobility, the paper proposes to trace occupational and educational changes across generations using information provided by children about their fathers. The validity of the method is assessed by comparing the effects of father's socio-economic status and education on son's status and educational attainment, net of occupational upgrading and educational expansion, in five European countries: Britain, Denmark, Germany, Norway, and Spain, using data from the 2005 EU-SILC survey. The results show that the effect of father's ISEI on son's ISEI weakens greatly in all countries after accounting for occupational upgrading, and that much of fathers' influence over sons operates by directing them towards occupations with good economic prospects. Useful extensions to the method are discussed in the conclusions.

  4. The Monod-Wyman-Changeux allosteric model accounts for the quaternary transition dynamics in wild type and a recombinant mutant human hemoglobin

    PubMed Central

    Levantino, Matteo; Spilotros, Alessandro; Cammarata, Marco; Schirò, Giorgio; Ardiccioni, Chiara; Vallone, Beatrice; Brunori, Maurizio; Cupane, Antonio

    2012-01-01

    The acknowledged success of the Monod-Wyman-Changeux (MWC) allosteric model stems from its efficacy in accounting for the functional behavior of many complex proteins, starting with hemoglobin (the paradigmatic case) and extending to channels and receptors. The kinetic aspects of the allosteric model, however, have often been neglected, with the exception of hemoglobin and a few other proteins where conformational relaxations can be triggered by a short and intense laser pulse and monitored by time-resolved optical spectroscopy. Only recently has the application of time-resolved wide-angle X-ray scattering (TR-WAXS), a directly structure-sensitive technique, unveiled the time scale of the hemoglobin quaternary structural transition. In order to test the generality of the MWC kinetic model, we carried out a TR-WAXS investigation in parallel on adult human hemoglobin and on a recombinant protein (HbYQ) carrying two mutations at the active site [Leu(B10)Tyr and His(E7)Gln]. HbYQ seemed an ideal test case because, although it exhibits allosteric properties, its kinetic and structural properties differ from those of adult human hemoglobin. The structural dynamics of HbYQ unveiled by TR-WAXS can be quantitatively accounted for by the MWC kinetic model. Interestingly, the main structural change associated with the R–T allosteric transition (i.e., the relative rotation and translation of the dimers) is approximately 10-fold slower in HbYQ, and the drop in the allosteric transition rate with ligand saturation is steeper. Our results extend the general validity of the MWC kinetic model and reveal peculiar thermodynamic properties of HbYQ. A possible structural interpretation of the characteristic kinetic behavior of HbYQ is also discussed. PMID:22927385
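
    For reference, the equilibrium form of the two-state MWC model that underlies the kinetic analysis can be written as follows (a standard textbook statement, not an equation quoted from the abstract):

    ```latex
    \bar{Y} \;=\; \frac{\alpha(1+\alpha)^{N-1} \;+\; L\,c\,\alpha(1+c\alpha)^{N-1}}
                       {(1+\alpha)^{N} \;+\; L\,(1+c\alpha)^{N}},
    \qquad
    \alpha=\frac{[\mathrm{X}]}{K_R},\quad
    c=\frac{K_R}{K_T},\quad
    L=\frac{[T_0]}{[R_0]},
    ```

    with N = 4 for tetrameric hemoglobin; the kinetic version of the model additionally assigns R-to-T and T-to-R switching rates at each ligation step, which is what the TR-WAXS relaxations probe.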

  5. Phytoplankton growth formulation in marine ecosystem models: Should we take into account photo-acclimation and variable stoichiometry in oligotrophic areas?

    NASA Astrophysics Data System (ADS)

    Ayata, S.-D.; Lévy, M.; Aumont, O.; Sciandra, A.; Sainte-Marie, J.; Tagliabue, A.; Bernard, O.

    2013-09-01

    The aim of this study is to evaluate the consequences of accounting for variable Chl:C (chlorophyll:carbon) and C:N (carbon:nitrogen) ratios in the formulation of phytoplankton growth in biogeochemical models. We compare the qualitative behavior of a suite of phytoplankton growth formulations with increasing complexity: 1) a Redfield formulation (constant C:N ratio) without photo-acclimation (constant Chl:C ratio), 2) a Redfield formulation with diagnostic chlorophyll (variable and empirical Chl:C ratio), 3) a quota formulation (variable C:N ratio) with diagnostic chlorophyll, and 4) a quota formulation with prognostic chlorophyll (dynamic variable). These phytoplankton growth formulations are embedded in a simple marine ecosystem model in a 1D framework at the Bermuda Atlantic Time-series (BATS) station. The model parameters are tuned using a stochastic assimilation method (micro-genetic algorithm), and skill assessment techniques are used to compare results. The lowest misfits with observations are obtained when photo-acclimation is taken into account (variable Chl:C ratio) and with non-Redfield stoichiometry (variable C:N ratio), under both spring and summer conditions. This indicates that the most flexible models (i.e., with variable ratios) are necessary to reproduce observations. As seen previously, photo-acclimation is essential in reproducing the observed deep chlorophyll maximum and subsurface production present during summer. Although Redfield and quota formulations of C:N ratios reproduce chlorophyll data equally well, the higher primary production that arises from the quota model is in better agreement with observations. Under the oligotrophic conditions that typify the BATS site, no clear difference was detected between quota formulations with diagnostic or prognostic chlorophyll.
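
    For context, quota formulations of the kind compared here typically build on the Droop relation, in which growth depends on an internal nutrient quota rather than a fixed C:N ratio (a generic statement of the approach, not the paper's exact equations):

    ```latex
    \mu(Q) \;=\; \mu_{\infty}\left(1-\frac{Q_{\min}}{Q}\right),
    \qquad
    \frac{dQ}{dt} \;=\; \rho\!\left(N_{\mathrm{ext}}\right) \;-\; \mu(Q)\,Q,
    ```

    where Q is the cellular N:C quota, Q_min its subsistence value, and rho the nutrient uptake rate; photo-acclimation adds an analogous dynamic equation for the Chl:C ratio.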

  6. Computer-program documentation of an interactive-accounting model to simulate streamflow, water quality, and water-supply operations in a river basin

    USGS Publications Warehouse

    Burns, A.W.

    1988-01-01

    This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water-quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water-law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic-output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and a listing of the FORTRAN source code are included as attachments to the report. (USGS)

  7. Branch-based model for the diameters of the pulmonary airways: accounting for departures from self-consistency and registration errors.

    PubMed

    Neradilek, Moni B; Polissar, Nayak L; Einstein, Daniel R; Glenny, Robb W; Minard, Kevin R; Carson, James P; Jiao, Xiangmin; Jacob, Richard E; Cox, Timothy C; Postlethwait, Edward M; Corley, Richard A

    2012-06-01

    We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it, and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of such error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis.

  8. An efficient model to predict guided wave radiation by finite-sized sources in multilayered anisotropic plates with account of caustics

    NASA Astrophysics Data System (ADS)

    Stévenin, M.; Lhémery, A.; Grondel, S.

    2016-01-01

    Elastic guided waves (GW), generated by finite-sized transducers, are used in various non-destructive testing (NDT) methods to inspect plate-like structures. Thanks to the long-range propagation of GWs, a few transducers at permanent positions can provide full coverage of the plate. Transducer diffraction effects take place, leading to complex radiated fields. Optimizing transducer positioning makes it necessary to accurately predict the GW field radiated by a transducer. Fraunhofer-like approximations applied to GWs in isotropic homogeneous plates lead to fast and accurate field computation but can fail when applied to multi-layered anisotropic composite plates, as shown by the examples given. Here, a model is proposed for composite plates, based on the computation of an approximate Green's tensor describing modal propagation from a source point, with account taken of the caustics typically seen when strong anisotropy is concerned. Modal solutions are otherwise obtained by the Semi-Analytic Finite Element method. Transducer diffraction effects are accounted for by means of an angular integration over the transducer surface as seen from the calculation point, that is, over the energy paths involved, which are mode-dependent. The model is validated by comparing its predictions with those computed by means of a full convolution integration of the Green's tensor with the source over the transducer surface. The examples given concern disk- and rectangular-shaped transducers commonly used in NDT.

  9. Mathematical modeling of gas-condensate mixture filtration in porous media taking into account non-equilibrium of phase transitions

    NASA Astrophysics Data System (ADS)

    Kachalov, V. V.; Molchanov, D. A.; Sokotushchenko, V. N.; Zaichenko, V. M.

    2016-11-01

    At present, a considerable share of the largest dry-gas reservoirs in Russia is in the stage of declining production; active exploitation of gas-condensate fields will therefore begin in the coming decades. There is a significant discrepancy between the projected and actual values of the condensate recovery factor when producing reservoirs of this type, caused by insufficient knowledge of the non-equilibrium filtration mechanisms of gas-condensate mixtures under reservoir conditions. A system of differential equations describing the filtration process of a two-phase multicomponent mixture in the one-, two- and three-dimensional cases is presented in this work. The described system was solved by the finite-element method in the software package FlexPDE. Comparative distributions of the velocities, pressures, saturations and phase compositions of a three-component mixture along the reservoir model and in time were obtained for both equilibrium and non-equilibrium filtration processes. The calculation results show that deviation of the system from thermodynamic equilibrium increases the gas-phase flow rate and reduces the liquid-phase flow rate during filtration of a gas-condensate mixture.
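
    Such systems typically combine multiphase Darcy velocities with component mass balances; a generic form, for orientation only (the abstract does not reproduce the paper's exact equations or its non-equilibrium closure), is:

    ```latex
    \mathbf{v}_{\alpha} \;=\; -\,\frac{k\,k_{r\alpha}(S_\alpha)}{\mu_\alpha}\,
    \bigl(\nabla p - \rho_\alpha \mathbf{g}\bigr), \qquad \alpha\in\{g,\,l\},
    \qquad
    \frac{\partial}{\partial t}\Bigl(\phi\sum_{\alpha}\rho_\alpha S_\alpha x_{i\alpha}\Bigr)
    + \nabla\cdot\Bigl(\sum_{\alpha}\rho_\alpha x_{i\alpha}\mathbf{v}_\alpha\Bigr) \;=\; 0.
    ```

    Non-equilibrium phase behavior is then commonly introduced by relaxing the phase compositions toward their flash-equilibrium values over a finite relaxation time, rather than imposing instantaneous equilibrium.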

  10. Mathematical Modeling of the Thermal State of an Isothermal Element with Account of the Radiant Heat Transfer Between Parts of a Spacecraft

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Paleshkin, A. V.; Terent'ev, V. V.; Firsyuk, S. O.

    2016-01-01

    A methodological approach to determination of the thermal state at a point on the surface of an isothermal element of a small spacecraft has been developed. A mathematical model of heat transfer between surfaces of intricate geometric configuration has been described. In this model, account was taken of the external field of radiant fluxes and of the differentiated mutual influence of the surfaces. An algorithm for calculation of the distribution of the density of the radiation absorbed by surface elements of the object under study has been proposed. The temperature field on the lateral surface of the spacecraft exposed to sunlight and on its shady side has been calculated. By determining the thermal state of magnetic controls of the orientation system as an example, the authors have assessed the contribution of the radiation coming from the solar-cell panels and from the spacecraft surface.

  11. International Accounting and the Accounting Educator.

    ERIC Educational Resources Information Center

    Laribee, Stephen F.

    The American Assembly of Collegiate Schools of Business (AACSB) has been instrumental in internationalizing the accounting curriculum by means of accreditation requirements and standards. Colleges and universities have met the AACSB requirements either by providing separate international accounting courses or by integrating international topics…

  12. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process, and most studies have understated its composite space-time effects. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, weekly minimum temperature and maximum 24-hour rainfall, with lagged effects on the variation of dengue cases extending over 15 weeks, under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system provides useful spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease-control tool for environmental regulators seeking more effective strategies for dengue fever prevention.
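
    To make the distributed-lag idea concrete, the sketch below fits a plain Poisson regression of weekly case counts on 0- to 15-week lags of a weather variable; it is a deliberately simplified stand-in (synthetic data, linear lag terms) for the DLNM/BME machinery described above.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Toy distributed-lag regression: weekly cases vs. 0-15 week lags of minimum
    # temperature. Synthetic data; a simplified stand-in for the paper's DLNM.
    rng = np.random.default_rng(1)
    weeks, max_lag = 300, 15
    tmin = rng.normal(22, 4, weeks)                      # weekly minimum temperature
    lags = np.column_stack([np.roll(tmin, l) for l in range(max_lag + 1)])
    lags = lags[max_lag:]                                # drop rows contaminated by wrap-around
    cases = rng.poisson(np.exp(1.0 + 0.005 * lags.sum(axis=1)))
    fit = sm.GLM(cases, sm.add_constant(lags), family=sm.families.Poisson()).fit()
    print(fit.params[1:])                                # one log-rate coefficient per lag
    ```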

  13. Funding Medical Research Projects: Taking into Account Referees' Severity and Consistency through Many-Faceted Rasch Modeling of Projects' Scores.

    PubMed

    Tesio, Luigi; Simone, Anna; Grzeda, Mariusz T; Ponzio, Michela; Dati, Gabriele; Zaratin, Paola; Perucca, Laura; Battaglia, Mario A

    2015-01-01

    The funding policy of research projects often relies on scores assigned by a panel of experts (referees). The non-linear nature of raw scores and the severity and inconsistency of individual raters may generate unfair numeric project rankings. Rasch measurement (in its many-facets version, MFRM) provides a valid alternative to raw scoring. MFRM was applied to the scores achieved by 75 research projects on multiple sclerosis submitted in response to a previous annual call by FISM-Italian Foundation for Multiple Sclerosis. This allowed us to simulate, a posteriori, the impact of MFRM on the funding scenario. The applications were each scored by 2 to 4 independent referees (131 in total) on a 10-item, 0-3 rating scale called FISM-ProQual-P. The rotation plan assured "connection" of all pairs of projects through at least 1 shared referee. The questionnaire satisfactorily fulfilled the stringent criteria of Rasch measurement for psychometric quality (unidimensionality, reliability and data-model fit). Two acceptability thresholds were set arbitrarily, at a raw score of 21/30 and at the equivalent Rasch measure of 61.5/100, respectively. When the cut-off was switched from score to measure, 8 out of 18 acceptable projects had to be rejected, while 15 rejected projects became eligible for funding. Some referees, of various severity, were grossly inconsistent (z-std fit indexes less than -1.9 or greater than 1.9). The FISM-ProQual-P questionnaire seems a valid and reliable scale. MFRM may help the decision-making process for allocating funds, not only to MS research projects but also in other fields. In repeated assessment exercises it can help the selection of reliable referees. Their severity can be steadily calibrated, thus obviating the need to connect them with other referees assessing the same projects.
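
    The many-facets Rasch model used in such analyses has the standard form (a generic statement of MFRM, not an equation taken from the paper):

    ```latex
    \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) \;=\; B_n \;-\; D_i \;-\; C_j \;-\; F_k,
    ```

    where B_n is the merit of project n, D_i the difficulty of item i, C_j the severity of referee j, and F_k the step calibration of rating category k; projects can then be ranked on the common linear scale B_n with referee severity partialled out.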

  14. Correction factor to account for dispersion in sharp-interface models of terrestrial freshwater lenses and active seawater intrusion

    NASA Astrophysics Data System (ADS)

    Werner, Adrian D.

    2017-04-01

    In this paper, a recent analytical solution that describes the steady-state extent of freshwater lenses adjacent to gaining rivers in saline aquifers is improved by applying an empirical correction for dispersive effects. Coastal aquifers experiencing active seawater intrusion (i.e., seawater is flowing inland) are presented as an analogous situation to the terrestrial freshwater lens problem, although the inland boundary in the coastal aquifer situation must represent both a source of freshwater and an outlet of saline groundwater. This condition corresponds to the freshwater river in the terrestrial case. The empirical correction developed in this research applies to situations of flowing saltwater and static freshwater lenses, although freshwater recirculation within the lens is a prominent consequence of dispersive effects, just as seawater recirculates within the stable wedges of coastal aquifers. The correction is a modification of a previous dispersive correction for Ghyben-Herzberg approximations of seawater intrusion (i.e., stable seawater wedges). Comparison between the sharp interface from the modified analytical solution and the 50% saltwater concentration from numerical modelling, using a range of parameter combinations, demonstrates the applicability of both the original analytical solution and its corrected form. The dispersive correction allows for a prediction of the depth to the middle of the mixing zone within about 0.3 m of numerically derived values, at least on average for the cases considered here. It is demonstrated that the uncorrected form of the analytical solution should be used to calculate saltwater flow rates, which closely match those obtained through numerical simulation. Thus, a combination of the unmodified and corrected analytical solutions should be utilized to explore both the saltwater fluxes and lens extent, depending on the dispersiveness of the problem. The new method developed in this paper is simple to apply and offers a
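
    For orientation, the classical sharp-interface relation that such dispersive corrections modify is the Ghyben-Herzberg approximation:

    ```latex
    z \;=\; \frac{\rho_f}{\rho_s-\rho_f}\,h_f \;\approx\; 40\,h_f,
    ```

    where z is the depth of the freshwater-saltwater interface below the saline head datum, h_f the freshwater head above that datum, and rho_f, rho_s the freshwater and saltwater densities; the empirical correction shifts this sharp interface so that it tracks the middle of the dispersive mixing zone.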

  15. Accounting for observational uncertainties in the evaluation of low latitude turbulent air-sea fluxes simulated in a suite of IPSL model versions

    NASA Astrophysics Data System (ADS)

    Servonnat, Jerome; Braconnot, Pascale; Gainusa-Bogdan, Alina

    2015-04-01

    Turbulent momentum and heat (sensible and latent) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate, and their good representation in climate models is of prime importance. In this work, we use the methodology developed by Braconnot & Frankignoul (1993) to perform a Hotelling T2 test on spatio-temporal fields (annual cycles). This statistic provides a quantitative measure, incorporating an estimate of the observational uncertainty, for the evaluation of low-latitude turbulent air-sea fluxes in a suite of IPSL model versions. The spread within the observational ensemble of turbulent flux data products assembled by Gainusa-Bogdan et al. (submitted) is used as an estimate of the observational uncertainty for the different turbulent fluxes. The methodology relies on the selection of a small number of dominant variability patterns (EOFs) that are common to both the model and the observations. Consequently it focuses on the large-scale variability patterns and avoids the possibly noisy smaller scales. The results show that different versions of the IPSL coupled model share common large-scale biases, but also that skill on sea surface temperature is not necessarily directly related to skill in the representation of the different turbulent fluxes. Despite the large error bars on the observations, the test clearly distinguishes the different merits of the different model versions. Analyses of the common EOF patterns and related time series provide guidance on the major differences with the observations. This work is a first attempt to use such a statistic in the evaluation of the spatio-temporal variability of turbulent fluxes while accounting for observational uncertainty, and it represents an efficient tool for systematic evaluation of simulated air-sea fluxes, considering both the fluxes and the related atmospheric variables. References Braconnot, P., and C. Frankignoul (1993), Testing Model
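
    The Hotelling T2 statistic at the core of the method has the standard one-sample form (written here for a model-minus-observation comparison in the reduced EOF space; notation is generic, not quoted from the paper):

    ```latex
    T^2 \;=\; n\,\bigl(\bar{\mathbf{x}}-\boldsymbol{\mu}_0\bigr)^{\!\top}\,
    \mathbf{S}^{-1}\,\bigl(\bar{\mathbf{x}}-\boldsymbol{\mu}_0\bigr),
    ```

    where the vector holds the model's EOF-projected annual-cycle coefficients, mu_0 the observational reference, and S the covariance estimated from the spread of the observational ensemble; large T2 values flag model-observation differences that exceed observational uncertainty.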

  16. A Harmonious Accounting Duo?

    ERIC Educational Resources Information Center

    Schapperle, Robert F.; Hardiman, Patrick F.

    1992-01-01

    Accountants have urged "harmonization" of standards between the Governmental Accounting Standards Board and the Financial Accounting Standards Board, recommending similar reporting of like transactions. However, varying display of similar accounting events does not necessarily indicate disharmony. The potential for problems because of…

  17. Modelling runoff at the plot scale taking into account rainfall partitioning by vegetation: application to stemflow of banana (Musa spp.) plant

    NASA Astrophysics Data System (ADS)

    Charlier, J.-B.; Moussa, R.; Cattan, P.; Cabidoche, Y.-M.; Voltz, M.

    2009-11-01

    Rainfall partitioning by vegetation modifies the intensity of rainwater reaching the ground, which affects runoff generation. Incident rainfall is intercepted by the plant canopy and then redistributed into throughfall and stemflow. Rainfall intensities at the soil surface are therefore not spatially uniform, generating local variations of runoff production that are disregarded in runoff models. The aim of this paper was to model runoff at the plot scale, accounting for rainfall partitioning by vegetation in the case of plants concentrating rainwater at the plant foot and promoting stemflow. We developed a lumped modelling approach, including a stemflow function that divided the plot into two compartments: one compartment for stemflow and the related water pathways and one compartment for the rest of the plot. This stemflow function was coupled with a production function and a transfer function to simulate a flood hydrograph using the MHYDAS model. Calibrated parameters were a "stemflow coefficient", which compartmented the plot; the saturated hydraulic conductivity (Ks), which controls infiltration and runoff; and the two parameters of the diffusive wave equation. We tested our model on a banana plot of 3000 m² on permeable Andosol (mean Ks = 75 mm h⁻¹) under tropical rainfall, in Guadeloupe (FWI). Runoff simulations without and with the stemflow function were performed and compared for 18 flood events of 10 to 140 mm rainfall depth. Modelling results showed that the stemflow function improved the calibration of hydrographs according to the error criteria on volume and on peakflow, the Nash-Sutcliffe coefficient, and the root mean square error. This was particularly the case for low flows observed during residual rainfall, for which the stemflow function allowed runoff to be simulated for rainfall intensities lower than the Ks measured at the soil surface. This approach also allowed us to take into account the experimental data, without needing to

  18. A new approach to account for the medium-dependent effect in model-based dose calculations for kilovoltage x-rays.

    PubMed

    Pawlowski, Jason M; Ding, George X

    2011-07-07

    This study presents a new approach to accurately account for the medium-dependent effect in model-based dose calculations for kilovoltage (kV) x-rays. This approach is based on the hypothesis that the correction factors needed to convert dose from model-based dose calculations to absorbed dose-to-medium depend on both the attenuation characteristics of the absorbing media and the changes to the energy spectrum of the incident x-rays as they traverse media with an effective atomic number different from that of water. Using Monte Carlo simulation techniques, we obtained empirical medium-dependent correction factors that take both effects into account. We found that the correction factors can be expressed as a function of a single quantity, called the effective bone depth, which is a measure of the amount of bone that an x-ray beam must penetrate to reach a voxel. Since the effective bone depth can be calculated from volumetric patient CT images, the medium-dependent correction factors can be obtained for model-based dose calculations based on patient CT images. We tested the accuracy of this new approach on 14 patients for the case of calculating imaging dose from kilovoltage cone-beam computed tomography used for patient setup in radiotherapy, and compared it with the Monte Carlo method, which is regarded as the 'gold standard'. For all patients studied, the new approach resulted in mean dose errors of less than 3%. This is in contrast to currently available inhomogeneity-corrected methods, which have been shown to result in mean errors of up to -103% for bone and 8% for soft tissue. Since there is a huge gain in calculation speed relative to the Monte Carlo method (∼two orders of magnitude) with an acceptable loss of accuracy, this approach provides an alternative accurate dose calculation method for kV x-rays.
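
    The sketch below illustrates the effective-bone-depth idea on a labeled CT grid: bone path length is accumulated along the ray from the source to the voxel of interest. The function name, the HU threshold, and the assumption of a 1 mm isotropic grid are ours for illustration, not the paper's implementation.

    ```python
    import numpy as np

    # Effective bone depth of a voxel: bone path length along the source-to-voxel
    # ray, sampled on a CT volume. Assumes a 1 mm isotropic grid; the HU threshold
    # and step size are illustrative assumptions.
    def effective_bone_depth(ct_hu, source, voxel, step_mm=0.5, bone_hu=300):
        src, dst = np.asarray(source, float), np.asarray(voxel, float)
        n_steps = max(int(np.linalg.norm(dst - src) / step_mm), 2)
        depth = 0.0
        for t in np.linspace(0.0, 1.0, n_steps):
            i, j, k = np.round(src + t * (dst - src)).astype(int)
            if ct_hu[i, j, k] >= bone_hu:        # sample point falls in bone
                depth += step_mm
        return depth                             # in mm; indexes the correction-factor table

    ct = np.zeros((100, 100, 100)); ct[40:60] = 1000   # toy volume with a bone slab
    print(effective_bone_depth(ct, (0, 50, 50), (99, 50, 50)))   # ~20 mm
    ```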

  19. Calibration and use of an interactive-accounting model to simulate dissolved solids, streamflow, and water-supply operations in the Arkansas River basin, Colorado

    USGS Publications Warehouse

    Burns, A.W.

    1989-01-01

    An interactive-accounting model was used to simulate dissolved solids, streamflow, and water-supply operations in the Arkansas River basin, Colorado. Model calibration of specific-conductance to streamflow relations at three sites enabled computation of dissolved-solids loads throughout the basin. To simulate streamflow only, all water-supply operations were incorporated in the regression relations for streamflow. Calibration for 1940-85 resulted in coefficients of determination that ranged from 0.58 to 0.89, and values in excess of 0.80 were determined for 16 of 20 nodes. The model then incorporated 74 water users and 11 reservoirs to simulate the water-supply operations for two periods, 1943-74 and 1975-85. For the 1943-74 calibration, coefficients of determination for streamflow ranged from 0.02 to 0.87. Calibration of the water-supply operations resulted in coefficients of determination that ranged from 0.87 down to negative values for the simulated irrigation diversions of 37 selected water users. Calibration for 1975-85 was not evaluated statistically, but average values and plots of reservoir contents indicated that the simulation was reasonable. To demonstrate the utility of the model, six specific alternatives were simulated to consider the effects of a potential enlargement of Pueblo Reservoir. Three general major alternatives were simulated: the 1975-85 calibrated model data, the calibrated model data with an addition of 30 cu ft/sec in Fountain Creek flows, and the calibrated model data plus additional municipal water in storage. These three major alternatives considered the options of reservoir enlargement or no enlargement. A 40,000-acre-foot reservoir enlargement resulted in average increases of 2,500 acre-ft in transmountain diversions, 800 acre-ft in storage diversions, and 100 acre-ft in winter-water storage. (USGS)

  20. High-Resolution Coarse-Grained Model of Hydrated Anion-Exchange Membranes that Accounts for Hydrophobic and Ionic Interactions through Short-Ranged Potentials.

    PubMed

    Lu, Jibao; Jacobson, Liam C; Perez Sirkin, Yamila A; Molinero, Valeria

    2017-01-10

    Molecular simulations provide a versatile tool to study the structure, anion conductivity, and stability of anion-exchange membrane (AEM) materials and can provide a fundamental understanding of the relation between structure and properties of membranes that is key to their use in fuel cells and other applications. The large spatial and temporal scales required to model the multiscale structure and transport processes in polymer electrolyte membranes, however, cannot be reached with fully atomistic models, and the available coarse-grained (CG) models suffer from several challenges associated with their low resolution. Here, we develop a high-resolution CG force field for hydrated polyphenylene oxide/trimethylamine chloride (PPO/TMACl) membranes compatible with the mW water model, using a hierarchical parametrization approach based on uncertainty quantification and reference atomistic simulations modeled with the Generalized Amber Force Field (GAFF) and TIP4P/2005 water. The parametrization weighs multiple properties, including coordination numbers, radial distribution functions (RDFs), self-diffusion coefficients of water and ions, relative vapor pressure of water in the solution, hydration enthalpy of the tetramethylammonium chloride (TMACl) salt, and cohesive energy of its aqueous solutions. We analyze the interdependence between properties and address how to compromise between the accuracies of the properties to achieve an overall best representability. Our optimized CG model FFcomp quantitatively reproduces the diffusivities and RDFs of the reference atomistic model and qualitatively reproduces the experimental relative vapor pressure of water in solutions of tetramethylammonium chloride. These properties are of utmost relevance for the design and operation of fuel cell membranes. To our knowledge, this is the first CG model that explicitly includes each water molecule and ion and accounts for hydrophobic, ionic, and intramolecular interactions.

  1. Numerical modelling of emission of a two-level atom near a metal nanoparticle with account for tunnelling of an electron from an atom into a particle

    SciTech Connect

    Fedorovich, S V; Protsenko, I E

    2016-01-31

    We report the results of numerical modelling of the emission of a two-level atom near a metal nanoparticle under resonant interaction of light with plasmon modes of the particle. Calculations have been performed for different polarisations of light by a dipole approximation method and a complex multipole method. Depending on the distance between the particle and the atom, the contribution of the nonradiative process of electron tunnelling from the two-level atom into the particle, calculated using the quasi-classical approximation, has been taken into account and assessed. We have studied spherical gold and silver particles of different diameters (10-100 nm). The rates of electron tunnelling and of spontaneous decay of the excited atomic state are found. The results can be used to develop nanoscale plasmonic emitters, lasers and photodetectors. (nanooptics)

  2. Do delivery routes of intranasally administered oxytocin account for observed effects on social cognition and behavior? A two-level model.

    PubMed

    Quintana, Daniel S; Alvares, Gail A; Hickie, Ian B; Guastella, Adam J

    2015-02-01

    Accumulating evidence demonstrates the important role of oxytocin (OT) in the modulation of social cognition and behavior. This has led many to suggest that the intranasal administration of OT may benefit psychiatric disorders characterized by social dysfunction, such as autism spectrum disorders and schizophrenia. Here, we review nasal anatomy and OT pathways to central and peripheral destinations, along with the impact of OT delivery to these destinations on social behavior and cognition. The primary goal of this review is to describe how these identified pathways may contribute to mechanisms of OT action on social cognition and behavior (that is, modulation of social information processing, anxiolytic effects, increases in approach-behaviors). We propose a two-level model involving three pathways to account for responses observed in both social cognition and behavior after intranasal OT administration and suggest avenues for future research to advance this research field.

  3. JSC interactive basic accounting system

    NASA Technical Reports Server (NTRS)

    Spitzer, J. F.

    1978-01-01

    Design concepts for an interactive basic accounting system (IBAS) are considered in terms of selecting the design option which provides the best response at the lowest cost. Modeling the IBAS workload and applying this workload to a U1108 EXEC 8 based system using both a simulation model and the real system is discussed.

  4. Denaturation of RNA secondary and tertiary structure by urea: simple unfolded state models and free energy parameters account for measured m-values

    PubMed Central

    Lambert, Dominic; Draper, David E.

    2012-01-01

    To investigate the mechanism by which urea destabilizes RNA structure, urea-induced unfolding of four different RNA secondary and tertiary structures was quantified in terms of an m-value, the rate at which the free energy of unfolding changes with urea molality. From literature data and our osmometric study of a backbone analog, we derived average interaction potentials (per Å² of solvent-accessible surface) between urea and three kinds of RNA surfaces: phosphate, ribose, and base. Estimates of the increases in solvent-accessible surface areas upon RNA denaturation were based on a simple model of unfolded RNA as a combination of helical and single-strand segments. These estimates, combined with the three interaction potentials and a term to account for urea interactions with released ions, yield calculated m-values in good agreement with experimental values (200 mm monovalent salt). Agreement was obtained only if single-stranded RNAs were modeled in a highly stacked, A-form conformation. The primary driving force for urea-induced denaturation is the strong interaction of urea with the large surface areas of bases that become exposed upon denaturation of either RNA secondary or tertiary structure, though urea interactions with backbone and released ions may account for up to a third of the m-value. Urea m-values for all four RNAs are salt-dependent, which we attribute to an increased extension (or decreased charge density) of unfolded RNAs with increased urea concentration. The sensitivity of the urea m-value to base surface exposure makes it a potentially useful probe of the conformations of RNA unfolded states. PMID:23088364
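
    In this framework the m-value enters the unfolding free energy linearly, and the abstract's surface-area decomposition can be written as (generic notation, not quoted from the paper):

    ```latex
    \Delta G_{\mathrm{unf}}([\mathrm{urea}]) \;=\; \Delta G_{\mathrm{unf}}^{\circ}
    \;-\; m\,[\mathrm{urea}],
    \qquad
    m \;\approx\; \sum_{i\in\{\text{phosphate, ribose, base}\}} \alpha_i\,\Delta\mathrm{ASA}_i
    \;+\; m_{\mathrm{ions}},
    ```

    where alpha_i is the urea interaction potential per Å² of surface type i, Delta ASA_i the surface area of that type exposed on unfolding, and m_ions the contribution from urea interactions with released ions.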

  5. Enhancing the Variable Infiltration Capacity Model to Account for Natural and Anthropogenic Impacts on Evapotranspiration in the North American Monsoon Region

    NASA Astrophysics Data System (ADS)

    Bohn, T. J.; Vivoni, E. R.

    2015-12-01

    Evapotranspiration (ET) is a poorly constrained flux in the North American monsoon (NAM) region, leading to potential errors in land-atmosphere feedbacks. Due to the region's arid to semi-arid climate, two factors play major roles in ET: sparse vegetation that exhibits dramatic seasonal greening, and irrigated agriculture. To more accurately characterize the spatio-temporal variations of ET in the NAM region, we used the Variable Infiltration Capacity (VIC) model, modified to account for soil evaporation (Esoil), irrigated agriculture, and the variability of land surface properties derived from the Moderate Resolution Imaging Spectroradiometer during 2000-2012. Simulated ET patterns were compared to field observations at fifty-nine eddy covariance towers, water balance estimates in nine basins, and six available gridded ET products. The modified VIC model performed well at eddy covariance towers representing the natural and agricultural land covers in the region. Simulations revealed that the major source areas for ET were forested mountain areas during the summer season and irrigated croplands at peak times of growth in the winter and summer, accounting for 22% and 9% of the annual ET, respectively. Over the NAM region, Esoil was the largest component (60%) of annual ET, followed by plant transpiration (T, 32%) and evaporation of canopy interception (8%). Esoil and T displayed different relations with precipitation (P) in natural land covers, with Esoil tending to peak earlier than T by up to one month, while only a weak correlation between ET and P was found in irrigated croplands. These VIC-based estimates are the most realistic to date for this region, outperforming several other process-based and remote-sensing-based gridded ET products. Furthermore, the spatio-temporal patterns reveal new information on the magnitudes, locations and timing of ET in the North American monsoon region, with implications for land-atmosphere feedbacks.

  6. How to conduct a proper sensitivity analysis in life cycle assessment: taking into account correlations within LCI data and interactions within the LCA calculation model.

    PubMed

    Wei, Wei; Larrey-Lassalle, Pyrene; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2015-01-06

    Sensitivity analysis (SA) is a significant tool for studying the robustness of results and their sensitivity to uncertainty factors in life cycle assessment (LCA). It highlights the most important set of model parameters, to determine whether data quality needs to be improved and to enhance interpretation of results. Interactions within the LCA calculation model and correlations within Life Cycle Inventory (LCI) input parameters are two main issues in the LCA calculation process. Here we propose a methodology for conducting a proper SA that takes into account the effects of these two issues. This study first presents SA in an uncorrelated case, comparing local and independent global sensitivity analysis. Independent global sensitivity analysis aims to analyze the variability of results due to the variation of input parameters over the whole domain of uncertainty, together with interactions among input parameters. We then apply a dependent global sensitivity approach that makes minor modifications to traditional Sobol indices to address the correlation issue. Finally, we propose some guidelines for choosing the appropriate SA method depending on the characteristics of the model and the goals of the study. Our results clearly show that the choice of sensitivity methods should be made according to the magnitude of uncertainty and the degree of correlation.
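
    As a concrete reference point for the independent (uncorrelated) case, the sketch below estimates first-order Sobol indices with the standard two-matrix Monte Carlo scheme; the test function and sample sizes are illustrative, and the correlated case discussed above requires the modified indices, not this estimator.

    ```python
    import numpy as np

    # First-order Sobol indices via the Saltelli two-matrix Monte Carlo scheme.
    # Illustrative of independent global SA; not valid for correlated inputs.
    rng = np.random.default_rng(2)

    def sobol_first_order(f, d, n=100_000):
        A, B = rng.random((n, d)), rng.random((n, d))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))    # total output variance
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                   # resample only factor i
            S[i] = np.mean(fB * (f(ABi) - fA)) / var
        return S

    # Toy model with one dominant factor and an interaction
    f = lambda X: np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + X[:, 0] * X[:, 2]
    print(sobol_first_order(f, d=3))
    ```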

  7. Estimating the evolution of atrazine concentration in a fractured sandstone aquifer using lumped-parameter models and taking land-use into account

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Gallé, T.; Bayerle, M.; Pittois, D.; Braun, C.; El Khabbaz, H.; Maloszewski, P.; Elsner, M.

    2012-04-01

    The European Water Framework Directive and the Groundwater Directive require member states to identify water bodies at risk and to assess the significance of increasing trends in pollutant concentrations. For groundwater bodies, estimating the time to trend reversal or the pollution potential of the different sources present in the catchment requires a sound understanding of the hydraulic behaviour of the aquifer. Although numerical groundwater models can theoretically be used for such forecasts, their calibration remains problematic in many real-world cases. A more parsimonious lumped-parameter model was applied to predict the evolution of atrazine concentration in springs draining a fractured sandstone aquifer in Luxembourg. Despite a nationwide ban in 2005, spring water concentrations of both atrazine and its metabolite desethylatrazine still had not begun to decrease four years later. The transfer function of the model was calibrated using tritium measurements and modified to take into account the fact that whereas tritium is applied uniformly over the entire catchment, atrazine was only used in areas where cereals are grown. We could also show that sorption processes in the aquifer can be neglected and that including pesticide degradation does not modify the shape of the atrazine breakthrough, but only affects the magnitude of the predicted spring water concentration. Results indicate that, due to the large hydraulic inertia of the aquifer, trend reversal should not be expected before 2018.
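
    Lumped-parameter models of this type propagate an input concentration through the aquifer by convolution with a transit-time distribution; a generic form, with first-order decay that can be switched off when degradation is neglected as found here, is:

    ```latex
    C_{\mathrm{out}}(t) \;=\; \int_{0}^{\infty} C_{\mathrm{in}}(t-\tau)\; g(\tau)\;
    e^{-\lambda\tau}\, d\tau,
    \qquad
    g(\tau) \;=\; \frac{1}{T}\,e^{-\tau/T}\ \ \text{(exponential model)},
    ```

    where g is the transit-time distribution calibrated on tritium, T the mean transit time, and lambda the degradation constant; restricting the input function to the cereal-growing fraction of the catchment implements the land-use modification described above.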

  8. Pain: A Statistical Account

    PubMed Central

    Thacker, Michael A.; Moseley, G. Lorimer

    2017-01-01

    Perception is seen as a process that utilises partial and noisy information to construct a coherent understanding of the world. Here we argue that the experience of pain is no different; it is based on incomplete, multimodal information, which is used to estimate potential bodily threat. We outline a Bayesian inference model, incorporating the key components of cue combination, causal inference, and temporal integration, which highlights the statistical problems in everyday perception. It is from this platform that we are able to review the pain literature, providing evidence from experimental, acute, and persistent phenomena to demonstrate the advantages of adopting a statistical account in pain. Our probabilistic conceptualisation suggests a principles-based view of pain, explaining a broad range of experimental and clinical findings and making testable predictions. PMID:28081134
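
    The cue-combination component of such Bayesian models has a standard closed form for two Gaussian cues (a textbook result, included here for concreteness): the inferred estimate is a precision-weighted average that is more reliable than either cue alone.

    ```latex
    \hat{\mu} \;=\; \frac{\mu_1/\sigma_1^2 \;+\; \mu_2/\sigma_2^2}
                         {1/\sigma_1^2 \;+\; 1/\sigma_2^2},
    \qquad
    \hat{\sigma}^2 \;=\; \left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right)^{-1},
    ```

    so a noisy nociceptive cue can be dominated by a more precise contextual cue, consistent with the statistical account of pain sketched above.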

  9. LMAL Accounting Office 1936

    NASA Technical Reports Server (NTRS)

    1936-01-01

    Accounting Office: The Langley Memorial Aeronautical Laboratory's accounting office, 1936, with photographs of the Wright brothers on the wall. Although the Lab was named after Samuel P. Langley, most of the NACA staff held the Wrights as their heroes.

  10. Intelligent Accountability in Education

    ERIC Educational Resources Information Center

    O'Neill, Onora

    2013-01-01

    Systems of accountability are "second order" ways of using evidence of the standard to which "first order" tasks are carried out for a great variety of purposes. However, more accountability is not always better, and processes of holding to account can impose high costs without securing substantial benefits. At their worst,…

  11. Accounting Education in Crisis

    ERIC Educational Resources Information Center

    Turner, Karen F.; Reed, Ronald O.; Greiman, Janel

    2011-01-01

    Almost on a daily basis new accounting rules and laws are put into use, creating information that must be known and learned by the accounting faculty and then introduced to and understood by the accounting student. Even with the 150 hours of education now required for CPA licensure, it is impossible to teach and learn all there is to learn. Over…

  12. Automated Accounting. Instructor Guide.

    ERIC Educational Resources Information Center

    Moses, Duane R.

    This curriculum guide was developed to assist business instructors using Dac Easy Accounting College Edition Version 2.0 software in their accounting programs. The module consists of four units containing assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting. The first…

  13. Accounting & Computing Curriculum Guide.

    ERIC Educational Resources Information Center

    Avani, Nathan T.; And Others

    This curriculum guide consists of materials for use in teaching a competency-based accounting and computing course that is designed to prepare students for employability in the following occupational areas: inventory control clerk, invoice clerk, payroll clerk, traffic clerk, general ledger bookkeeper, accounting clerk, account information clerk,…

  14. The Accounting Capstone Problem

    ERIC Educational Resources Information Center

    Elrod, Henry; Norris, J. T.

    2012-01-01

    Capstone courses in accounting programs bring students experiences integrating across the curriculum (University of Washington, 2005) and offer unique (Sanyal, 2003) and transformative experiences (Sill, Harward, & Cooper, 2009). Students take many accounting courses without preparing complete sets of financial statements. Accountants not only…

  15. A Single-Level Tunnel Model to Account for Electrical Transport through Single Molecule- and Self-Assembled Monolayer-based Junctions

    PubMed Central

    Garrigues, Alvar R.; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R.; Thompson, Damien; del Barco, Enrique; Nijhuis, Christian A.

    2016-01-01

    We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction. PMID:27216489
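
    A standard single-level (Landauer-type) expression of the kind analyzed here makes the role of the Fermi thermal broadening explicit (generic notation; parameter values and level alignment are junction-specific):

    ```latex
    I(V) \;=\; \frac{2e}{h}\int_{-\infty}^{\infty}
    \frac{\Gamma_L\,\Gamma_R}{\bigl(E-\varepsilon_0\bigr)^{2} + \bigl(\Gamma/2\bigr)^{2}}
    \Bigl[f\!\left(E-\tfrac{eV}{2}\right) - f\!\left(E+\tfrac{eV}{2}\right)\Bigr]\, dE,
    ```

    with Gamma = Gamma_L + Gamma_R the level broadening, epsilon_0 the level energy, and f the Fermi-Dirac distribution; when the level lies outside the bias window, the thermal tails of f give the exponential temperature dependence of the current noted above.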

  16. Toward a 3D cellular model for studying in vitro the outcome of photodynamic treatments: accounting for the effects of tissue complexity.

    PubMed

    Alemany-Ribes, Mireia; García-Díaz, María; Busom, Marta; Nonell, Santi; Semino, Carlos E

    2013-08-01

    Clinical therapies have traditionally been developed using two-dimensional (2D) cell culture systems, which fail to accurately capture tissue complexity. Therefore, three-dimensional (3D) cell cultures are more attractive platforms to integrate multiple cues that arise from the extracellular matrix and cells, closer to an in vivo scenario. Here we report the development of a 3D cellular model for the in vitro assessment of the outcome of oxygen- and drug-dependent therapies, exemplified by photodynamic therapy (PDT). Using a synthetic self-assembling peptide as a cellular scaffold (RAD16-I), we were able to recreate the in vivo limitation of oxygen and drug diffusion and its biological effect, which is the development of cellular resistance to therapy. For the first time, the production and decay of the cytotoxic species singlet oxygen could be observed in a 3D cell culture. Results revealed that the intrinsic mechanism of action is maintained in both systems and, hence, the dynamic mass transfer effects accounted for the major differences in efficacy between the 2D and 3D models. We propose that this methodological approach will help to improve the efficacy of future oxygen- and drug-dependent therapies such as PDT.

  17. User's guide for RIV2; a package for routing and accounting of river discharge for a modular, three-dimensional, finite-difference, ground- water flow model

    USGS Publications Warehouse

    Miller, Roger S.

    1988-01-01

    RIV2 is a package for the U.S. Geological Survey's modular, three-dimensional, finite-difference, groundwater flow model developed by M. G. McDonald and A. W. Harbaugh that simulates river-discharge routing. RIV2 replaces RIV1, the original river package used in the model. RIV2 preserves the basic logic of RIV1 but better represents river-discharge routing. The main features of RIV2 are: (1) The river system is divided into reaches, and simulated river discharge is routed from one node to the next. (2) Inflow (river discharge) entering the upstream end of a reach can be specified. (3) More than one river can be represented at one node, and rivers can cross, as when representing a siphon. (4) The quantity of leakage to or from the aquifer at a given node is proportional to the hydraulic-head difference between that specified for the river and that calculated for the aquifer. Also, the quantity of leakage to the aquifer at any node can be limited by the user and, within this limit, the maximum leakage to the aquifer is the discharge available in the river. This feature allows for the simulation of intermittent rivers and drains that have no discharge routed to their upstream reaches. (5) An accounting of river discharge is maintained. Neither stage-discharge relations nor storage in the river or river banks is simulated. (USGS)
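
    A minimal sketch of the node-level leakage rule described in feature (4), with variable names of our own choosing (this is not the USGS FORTRAN source):

    ```python
    # Node leakage rule, feature (4): leakage is proportional to the river-aquifer
    # head difference, capped by a user limit and by the discharge available in
    # the river. Names are illustrative, not from the USGS code.
    def node_leakage(conductance, h_river, h_aquifer, q_available, q_limit):
        q = conductance * (h_river - h_aquifer)      # + : river loses to aquifer
        if q > 0:
            q = min(q, q_limit, q_available)         # cannot exceed routed discharge
        return q

    # Routing, feature (1): discharge passed to the next downstream reach
    q_in = 10.0                                      # discharge entering the reach
    q_leak = node_leakage(0.5, 102.0, 100.0, q_in, 3.0)
    q_down = max(0.0, q_in - q_leak)                 # 9.0 here
    ```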

  18. Accounting: "Balancing Out" the Accounting Program.

    ERIC Educational Resources Information Center

    Babcock, Coleen

    1979-01-01

    The vocational accounting laboratory is a viable, meaningful educational experience for high school seniors, due to the uniqueness of its educational approach and the direct involvement of the professional and business community. A balance of experiences is provided to match individual needs and goals of students. (CT)

  19. Educational Leadership in an Era of Accountability.

    ERIC Educational Resources Information Center

    Riles, Wilson

    Given the present economic situation, it is inevitable that more state legislatures and school boards will adopt a "cost accounting" attitude toward education. However, schools aren't factories, and using an industrial model for accountability doesn't work. To have a viable system of accountability, everyone who is concerned with education must be…

  20. Viscoplastic Model Development to Account for Strength Differential: Application to Aged Inconel 718 at Elevated Temperature. Degree awarded by Pennsylvania State Univ., 2000

    NASA Technical Reports Server (NTRS)

    Iyer, Saiganesh; Lerch, Brad (Technical Monitor)

    2001-01-01

    The magnitudes of the yield and flow stresses in aged Inconel 718 are observed to differ in tension and compression. This phenomenon, called the strength differential (SD), contradicts the metal-plasticity axiom that the second deviatoric stress invariant alone is sufficient for representing yield and flow. Apparently, at least one of the other two stress invariants is also significant. A unified viscoplastic model was developed that is able to account for the SD effect in aged Inconel 718. Building this model involved both theory and experiments. First, a general threshold function was proposed that depends on all three stress invariants, and then the flow and evolution laws were developed using a potential-based thermodynamic framework. Judiciously chosen shear and axial tests were conducted to characterize the material. Shear tests involved monotonic loading, relaxation, and creep tests with different loading rates and load levels. The axial tests were tension and compression tests that resulted in sufficiently large inelastic strains. All tests were performed at 650 C. The viscoplastic material parameters were determined by optimizing the fit to the shear tests, during which the first and the third stress invariants remained zero. The threshold-surface parameters were then fit to the tension and compression test data. An experimental procedure was established to quantify the effect of each stress invariant on inelastic deformation. This requires conducting tests with nonproportional three-dimensional load paths. Validation of the model was done using biaxial tests on tubular specimens of aged Inconel 718 under proportional and nonproportional axial-torsion loading. These biaxial tests also helped to determine the most appropriate form of the threshold function, that is, how to combine the stress invariants. Of the set of trial threshold functions, the ones that incorporated the third stress invariant gave the best predictions. However, inclusion of the first
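
    For reference, the three stress invariants the threshold function draws on are (standard definitions):

    ```latex
    I_1 = \operatorname{tr}\boldsymbol{\sigma}, \qquad
    \mathbf{s} = \boldsymbol{\sigma} - \tfrac{1}{3}I_1\mathbf{I}, \qquad
    J_2 = \tfrac{1}{2}\,\mathbf{s}\!:\!\mathbf{s}, \qquad
    J_3 = \det\mathbf{s},
    ```

    so a threshold of the form f(I1, J2, J3), rather than the classical von Mises form f(J2), is what allows the tension-compression asymmetry (SD effect) described above.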

  1. Learning by Doing: Concepts and Models for Service-Learning in Accounting. AAHE's Series on Service-Learning in the Disciplines.

    ERIC Educational Resources Information Center

    Rama, D. V., Ed.

    This volume is part of a series of 18 monographs on service learning and the academic disciplines. It is designed to (1) develop a theoretical framework for service learning in accounting consistent with the goals identified by accounting educators and the recent efforts toward curriculum reform, and (2) describe specific active learning…

  2. Random regression models to account for the effect of genotype by environment interaction due to heat stress on the milk yield of Holstein cows under tropical conditions.

    PubMed

    Santana, Mário L; Bignardi, Annaiza Braga; Pereira, Rodrigo Junqueira; Menéndez-Buxadera, Alberto; El Faro, Lenira

    2016-02-01

    The present study had the following objectives: to compare random regression models (RRM) considering the time-dependent (days in milk, DIM) and/or temperature × humidity-dependent (THI) covariate for genetic evaluation; to identify the effect of genotype by environment interaction (G×E) due to heat stress on milk yield; and to quantify the loss of milk yield due to heat stress across the lactation of cows under tropical conditions. A total of 937,771 test-day records from 3603 first lactations of Brazilian Holstein cows obtained between 2007 and 2013 were analyzed. An important reduction in milk yield due to heat stress was observed for THI values above 66 (-0.23 kg/day/THI). Three phases of milk yield loss were identified during lactation, the most damaging one at the end of lactation (-0.27 kg/day/THI). Using the most complex RRM, the additive genetic variance could be altered simultaneously as a function of both DIM and THI values. This model could be recommended for genetic evaluation taking into account the effect of G×E. The response to selection in the comfort zone (THI ≤ 66) is expected to be higher than that obtained in the heat stress zone (THI > 66). The genetic correlations between milk yield in the comfort and heat stress zones were less than unity at opposite extremes of the environmental gradient. Thus, the best animals for milk yield in the comfort zone are not necessarily the best in the zone of heat stress and, therefore, G×E due to heat stress should not be neglected in genetic evaluation.
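
    The heat-stress term in such random regression models is commonly written as a broken-stick function of THI with the threshold found here at 66 (a schematic form; the full model also carries the DIM regressions and permanent environmental terms):

    ```latex
    y_{it} \;=\; \mathbf{x}_{it}^{\top}\boldsymbol{\beta}
    \;+\; a_{0i} \;+\; a_{1i}\,\max\!\bigl(0,\ \mathrm{THI}_t - 66\bigr)
    \;+\; e_{it},
    ```

    where a_{0i} is the general additive genetic effect of animal i and a_{1i} its genetic slope of tolerance to heat stress; the population-average counterpart of a_{1i} corresponds to the reported decline of -0.23 kg/day per THI unit above the threshold.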

  3. (13)C metabolic flux analysis in neurons utilizing a model that accounts for hexose phosphate recycling within the pentose phosphate pathway.

    PubMed

    Gebril, Hoda M; Avula, Bharathi; Wang, Yan-Hong; Khan, Ikhlas A; Jekabsons, Mika B

    2016-02-01

    Glycolysis, mitochondrial substrate oxidation, and the pentose phosphate pathway (PPP) are critical for neuronal bioenergetics and oxidation-reduction homeostasis, but quantitating their fluxes remains challenging, especially when processes such as hexose phosphate (i.e., glucose/fructose-6-phosphate) recycling in the PPP are considered. A hexose phosphate recycling model was developed which exploited the rates of glucose consumption, lactate production, and mitochondrial respiration to infer fluxes through the major glucose consuming pathways of adherent cerebellar granule neurons by replicating [(13)C]lactate labeling from metabolism of [1,2-(13)C2]glucose. Flux calculations were predicated on a steady-state system with reactions having known stoichiometries and carbon atom transitions. Non-oxidative PPP activity and consequent hexose phosphate recycling, as well as pyruvate production by cytoplasmic malic enzyme, were optimized by the model and found to account for 28 ± 2% and 7.7 ± 0.2% of hexose phosphate and pyruvate labeling, respectively. From the resulting fluxes, 52 ± 6% of glucose was metabolized by glycolysis, compared to 19 ± 2% by the combined oxidative/non-oxidative pentose cycle that allows for hexose phosphate recycling, and 29 ± 8% by the combined oxidative PPP/de novo nucleotide synthesis reactions. By extension, 62 ± 6% of glucose was converted to pyruvate, the metabolism of which resulted in 16 ± 1% of glucose oxidized by mitochondria and 46 ± 6% exported as lactate. The results indicate a surprisingly high proportion of glucose utilized by the pentose cycle and the reactions synthesizing nucleotides, and exported as lactate. While the in vitro conditions to which the neurons were exposed (high glucose, no lactate or other exogenous substrates) limit extrapolating these results to the in vivo state, the approach provides a means of assessing a number of metabolic fluxes within the context of hexose phosphate recycling in the PPP from a

  4. PLATO IV Accountancy Index.

    ERIC Educational Resources Information Center

    Pondy, Dorothy, Comp.

    The catalog was compiled to assist instructors in planning community college and university curricula using the 48 computer-assisted accountancy lessons available on PLATO IV (Programmed Logic for Automatic Teaching Operation) for first semester accounting courses. It contains information on lesson access, lists of acceptable abbreviations for…

  5. Leadership for Accountability.

    ERIC Educational Resources Information Center

    Lashway, Larry

    2001-01-01

    This document explores issues of leadership for accountability and reviews five resources on the subject. These include: (1) "Accountability by Carrots and Sticks: Will Incentives and Sanctions Motivate Students, Teachers, and Administrators for Peak Performance?" (Larry Lashway); (2) "Organizing Schools for Teacher Learning"…

  6. The Choreography of Accountability

    ERIC Educational Resources Information Center

    Webb, P. Taylor

    2006-01-01

    The prevailing performance discourse in education claims school improvements can be achieved through transparent accountability procedures. The article identifies how teachers generate performances of their work in order to satisfy accountability demands. By identifying sources of teachers' knowledge that produce choreographed performances, I…

  7. Cluster Guide. Accounting Occupations.

    ERIC Educational Resources Information Center

    Beaverton School District 48, OR.

    Based on a recent task inventory of key occupations in the accounting cluster taken in the Portland, Oregon, area, this curriculum guide is intended to assist administrators and teachers in the design and implementation of high school accounting cluster programs. The guide is divided into four major sections: program organization and…

  8. The Accountability Illusion: Ohio

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  9. The Accountability Illusion: Florida

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  10. The Accountability Illusion: Minnesota

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  11. The Accountability Illusion: Wisconsin

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  12. The Accountability Illusion: Vermont

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  13. The Evolution of Accountability

    ERIC Educational Resources Information Center

    Webb, P. Taylor

    2011-01-01

    Campus 2020: Thinking ahead is a policy in British Columbia (BC), Canada, that attempted to hold universities accountable to performance. Within, I demonstrate how this Canadian articulation of educational accountability intended to develop "governmentality constellations" to control the university and regulate its knowledge output. This…

  14. The Coming Accounting Crisis

    ERIC Educational Resources Information Center

    Eaton, Tim V.

    2007-01-01

    The accounting profession is facing a potential crisis not only from the overall shortage of accounting faculty driven by smaller numbers of new faculty entering the profession as many existing faculty retire but also from changes that have been less well documented. This includes: (1) changes in attitude towards the roles of teaching, service and…

  15. Accountability in Education.

    ERIC Educational Resources Information Center

    Chippendale, P. R., Ed.; Wilkes, Paula V., Ed.

    This collection of papers delivered at a conference on accountability held at Darling Downs Institute of Advanced Education in Australia examines the meaning of accountability in education for teachers, lecturers, government, parents, administrators, education authorities, and the society at large. In Part 1, W. G. Walker attempts to answer the…

  16. The Accountability Illusion: Nevada

    ERIC Educational Resources Information Center

    Thomas B. Fordham Institute, 2009

    2009-01-01

    The intent of the No Child Left Behind (NCLB) Act of 2001 is to hold schools accountable for ensuring that all their students achieve mastery in reading and math, with a particular focus on groups that have traditionally been left behind. Under NCLB, states submit accountability plans to the U.S. Department of Education detailing the rules and…

  17. Managerial accounting applications in radiology.

    PubMed

    Lexa, Frank James; Mehta, Tushar; Seidmann, Abraham

    2005-03-01

    We review the core issues in managerial accounting for radiologists. We introduce the topic and then explore its application to diagnostic imaging. We define key terms such as fixed cost, variable cost, marginal cost, and marginal revenue and discuss their role in understanding the operational and financial implications for a radiology facility by using a cost-volume-profit model. Our work places particular emphasis on the role of managerial accounting in understanding service costs, as well as how it assists executive decision making.
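
    A minimal sketch of the cost-volume-profit arithmetic the authors describe; all dollar figures below are hypothetical and serve only to show how the break-even volume falls out of fixed cost and contribution margin.

```python
# Cost-volume-profit: profit = (price - variable cost) x volume - fixed cost,
# so break-even volume = fixed cost / contribution margin. Figures invented.
price_per_study = 450.0       # revenue per imaging study (hypothetical)
variable_cost = 150.0         # per-study consumables and labor (hypothetical)
fixed_cost = 600_000.0        # annual lease, salaries, service (hypothetical)

contribution_margin = price_per_study - variable_cost
break_even_volume = fixed_cost / contribution_margin
print(f"break-even volume: {break_even_volume:.0f} studies per year")  # 2000
```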

  18. Do we need to account for scenarios of land use/land cover changes in regional climate modeling and impact studies?

    NASA Astrophysics Data System (ADS)

    Strada, Susanna; de Noblet-Ducoudré, Nathalie; Perrin, Mathieu; Stefanon, Marc

    2016-04-01

    By modifying the Earth's natural landscapes, humans have introduced an imbalance in the Earth System's energy, water and emission fluxes via land-use and land-cover changes (LULCCs). Through land-atmosphere interactions, LULCCs influence weather, air quality and climate at different scales, from regional/local (a few ten kilometres) (Pielke et al., 2011) to global (a few hundred kilometres) (Mahmood et al., 2014). Therefore, in the context of climate change, LULCCs will play a role locally/regionally in altering weather/atmospheric conditions. In addition to the global climate change impacts, LULCCs will possibly induce further changes in the functioning of terrestrial ecosystems and thereby affect adaptation strategies. If LULCCs influence weather/atmospheric conditions, could land use planning alter climate conditions and ease the impact of climate change by wisely shaping urban and rural landscapes? Nowadays, numerical land-atmosphere modelling makes it possible to assess LULCC impacts at different scales (e.g., Marshall et al., 2003; de Noblet-Ducoudré et al., 2011). However, most scenarios of climate changes used to force impact models result from downscaling procedures that do not account for LULCCs (e.g., Jacob et al., 2014). Therefore, if numerical modelling can help frame the discussion about LULCCs, do existing LULCC scenarios encompass realistic changes in terms of land use planning? In the present study, we apply a surface model to compare projected LULCC scenarios over France and to assess their impacts on surface fluxes (i.e., water, heat and carbon dioxide fluxes) and on water and carbon storage in soils. To depict future LULCCs in France, we use RCP scenarios from the IPCC AR5 report (Moss et al., 2011). LULCCs encompassed in RCPs are discussed in terms of: (a) their impacts on water and energy balance over France, and (b) their feasibility in the framework of land use planning in France. This study is the first step to quantify the sensitivity of land

  19. 77 FR 43542 - Cost Accounting Standards: Cost Accounting Standards 412 and 413-Cost Accounting Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-25

    ... BUDGET Office of Federal Procurement Policy 48 CFR Part 9904 Cost Accounting Standards: Cost Accounting Standards 412 and 413--Cost Accounting Standards Pension Harmonization Rule AGENCY: Cost Accounting... correcting amendments. SUMMARY: The Office of Federal Procurement Policy (OFPP), Cost Accounting...

  20. Zebrafish Seizure Model Identifies p,p′-DDE as the Dominant Contaminant of Fetal California Sea Lions That Accounts for Synergistic Activity with Domoic Acid

    PubMed Central

    Tiedeken, Jessica A.; Ramsdell, John S.

    2010-01-01

    Background Fetal poisoning of California sea lions (CSLs; Zalophus californianus) has been associated with exposure to the algal toxin domoic acid. These same sea lions accumulate a mixture of persistent environmental contaminants including pesticides and industrial products such as polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs). Developmental exposure to the pesticide dichlorodiphenyltrichloroethane (DDT) and its stable metabolite 1,1-bis-(4-chlorophenyl)-2,2-dichloroethene (p,p′-DDE) has been shown to enhance domoic acid–induced seizures in zebrafish; however, the contribution of other co-occurring contaminants is unknown. Objective We formulated a mixture of contaminants to include PCBs, PBDEs, hexachlorocyclohexane (HCH), and chlordane at levels matching those reported for fetal CSL blubber to determine the impact of co-occurring persistent contaminants with p,p′-DDE on chemically induced seizures in zebrafish as a model for the CSLs. Methods Embryos were exposed (6–30 hr postfertilization) to p,p′-DDE in the presence or absence of a defined contaminant mixture prior to neurodevelopment via either bath exposure or embryo yolk sac microinjection. After brain maturation (7 days postfertilization), fish were exposed to a chemical convulsant, either pentylenetetrazole or domoic acid; resulting seizure behavior was then monitored and analyzed for changes, using cameras and behavioral tracking software. Results Induced seizure behavior did not differ significantly between subjects with embryonic exposure to a contaminant mixture and those exposed to p,p′-DDE only. Conclusion These studies demonstrate that p,p′-DDE—in the absence of PCBs, HCH, chlordane, and PBDEs that co-occur in fetal sea lions—accounts for the synergistic activity that leads to greater sensitivity to domoic acid seizures. PMID:20368122

  1. Human Resource Accounting.

    DTIC Science & Technology

    1984-12-01

    Human Resource Accounting. Master's thesis by Joaquim C. Martins, Naval Postgraduate School, Monterey, California, December 1984; thesis advisor: R. A. McGonigal.

  2. Ideas for the Accounting Classroom.

    ERIC Educational Resources Information Center

    Kerby, Debra; Romine, Jeff

    2003-01-01

    Innovative ideas for accounting education include having students study accounting across historical periods, using businesses for student research, exploring nontraditional accounting careers, and collaborating with professional associations. (SK)

  3. Information-Theoretic Benchmarking of Land Surface Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. Here we extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions (the last describing the time-dependent details of each prediction scenario). The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
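
    The sketch below conveys the general flavor of information-theoretic benchmarking with a plug-in mutual-information estimate between synthetic observations and two synthetic model outputs; the toy setup is an assumption for illustration, not the authors' benchmark or the NLDAS data.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Plug-in (histogram) estimate of mutual information, in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
obs = rng.normal(size=5000)                        # synthetic "observations"
model_a = obs + rng.normal(scale=0.5, size=5000)   # more informative model
model_b = obs + rng.normal(scale=2.0, size=5000)   # less informative model

# Benchmarking compares how much information each model output carries
# about the observations; the shortfall is that model's information loss.
print("MI(model_a, obs):", round(mutual_information(model_a, obs), 2), "bits")
print("MI(model_b, obs):", round(mutual_information(model_b, obs), 2), "bits")
```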

  4. Readability of Accounting Books.

    ERIC Educational Resources Information Center

    Razek, Joseph R.; And Others

    1982-01-01

    This article describes the results of a survey of the readability of most of the intermediate and advanced accounting textbooks currently in use at colleges and universities throughout the United States. (CT)

  5. Accounting Equals Applied Algebra.

    ERIC Educational Resources Information Center

    Roberts, Sondra

    1997-01-01

    Argues that students should be given mathematics credits for completing accounting classes. Demonstrates that, although the terminology is different, the mathematical concepts are the same as those used in an introductory algebra class. (JOW)

  6. 18 CFR 367.1840 - Account 184, Clearing accounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Balance Sheet Chart of Accounts, Deferred Debits. § 367.1840 Account 184, Clearing accounts. This account must include undistributed balances in clearing accounts at the date of the balance sheet... (Conservation of Power and Water Resources, Federal Energy...)

  7. Accounting for the environment.

    PubMed

    Lutz, E; Munasinghe, M

    1991-03-01

    Environmental awareness in the 1980s has led to efforts to improve the current UN System of National Accounts (SNA) for better measurement of the value of environmental resources when estimating income. National governments, the UN, the International Monetary Fund, and the World Bank are interested in solving this issue. The World Bank relies heavily on national aggregates in income accounts compiled by means of the SNA that was published in 1968 and stressed gross domestic product (GDP). GDP measures mainly market activity, but it does not take into account the consumption of natural capital, and so indirectly inhibits sustained development. The deficiencies of the current method of accounting are the inconsistent treatment of manmade and natural capital, and the omission of natural resources and their depletion from balance sheets and of pollution cleanup costs from national income. In the calculation of GDP, pollution is overlooked and beneficial environmental inputs are valued at zero. The calculation of environmentally adjusted net domestic product (EDP) and environmentally adjusted net income (ENI) would lower income and growth rates, as the World Resources Institute found with respect to Indonesia for 1971-84: when depreciation for oil, timber, and topsoil was included, the growth rate of net domestic product (NDP) was only 4%, compared with 7.1% for GDP. The World Bank has advocated environmental accounting since 1983 in SNA revisions. The 1989 revised Blue Book of the SNA takes environmental concerns into account. Relevant research is under way in Mexico and Papua New Guinea using the UN Statistical Office framework as a system for environmentally adjusted economic accounts that computes EDP and ENI and integrates environmental data with national accounts while preserving SNA concepts.
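
    The adjustment itself is simple arithmetic, sketched below with numbers chosen only to echo the Indonesia figures quoted in the abstract.

```python
# Environmentally adjusted growth subtracts the drawdown of natural capital
# from conventional growth. Both numbers are assumptions chosen to echo the
# Indonesia figures above (7.1% GDP growth vs. 4% adjusted growth).
gdp_growth = 0.071
natural_capital_drag = 0.031    # oil, timber and topsoil depreciation (assumed)
adjusted_growth = gdp_growth - natural_capital_drag
print(f"GDP growth {gdp_growth:.1%} -> adjusted growth {adjusted_growth:.1%}")
```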

  8. Infusion of Ensemble Data Assimilation in the operational NWS-Ensemble Streamflow Prediction (ESP) System

    NASA Astrophysics Data System (ADS)

    Meskele, T. T.; Moradkhani, H.

    2009-12-01

    Operational streamflow forecasting accuracy is one of the issues yet to be addressed for effective water resources management. The traditional regression-based procedure produces a single-valued forecast that lacks treatment of the associated uncertainties. The Ensemble Streamflow Prediction (ESP) approach was developed to improve the quantity and quality of information in the forecasts. ESP produces multiple estimates of a streamflow variable based on current basin conditions and resampled past meteorological observations. Although the ESP forecasting process is designed to account for the uncertain nature of the climate variables during the forecast period, the procedure is entirely deterministic up to the starting point of the forecast. The operation thus represents forecast uncertainty due to forcing uncertainty only, treating the historical forcing and the initial conditions as perfect. However, in addition to future forcing uncertainty, uncertainty in streamflow prediction may arise from the initial conditions (moisture states and snowpack), model structure and parameters. In this study we therefore address initial condition, model structure and forcing data uncertainty via a data assimilation procedure known as the Particle Filter (PF). We show that ESP based on uncertain initial conditions produces higher uncertainty than traditional ESP. We used the Sacramento Soil Moisture Accounting model (SAC-SMA) for our study and tested the procedure over the Leaf River Basin in Mississippi.
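
    A bare-bones sequential importance resampling step, the core of the Particle Filter mentioned in the abstract, might look like the following; the one-line toy model and all numbers are stand-ins, not SAC-SMA.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sequential importance resampling (SIR) step of a particle filter.
# The "model" is a toy storage recession; observation values are invented.
n = 1000
states = rng.normal(5.0, 1.0, n)             # prior ensemble of moisture states

def toy_model(s):                            # stand-in for the hydrologic model
    return 0.9 * s + rng.normal(0.0, 0.1, s.shape)

obs, obs_err = 4.2, 0.3                      # synthetic observation and its std

states = toy_model(states)                   # forecast step
w = np.exp(-0.5 * ((states - obs) / obs_err) ** 2)   # Gaussian likelihood
w /= w.sum()
states = states[rng.choice(n, size=n, p=w)]  # resample: analysis ensemble
print("posterior mean:", round(float(states.mean()), 2))
```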

  9. Water Accounting from Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Bastiaanssen, W. G.; Savenije, H.

    2014-12-01

    Water scarcity is increasing globally. This requires a more accurate management of the water resources at river basin scale and understanding of withdrawals and return flows; both naturally and man-induced. Many basins and their tributaries are, however, ungauged or poorly gauged. This hampers sound planning and monitoring processes. While certain countries have developed clear guidelines and policies on data observatories and data sharing, other countries and their basin organization still have to start on developing data democracies. Water accounting quantifies flows, fluxes, stocks and consumptive use pertaining to every land use class in a river basin. The objective is to derive a knowledge base with certain minimum information that facilitates decision making. Water Accounting Plus (WA+) is a new method for water resources assessment reporting (www.wateraccounting.org). While the PUB framework has yielded several deterministic models for flow prediction, WA+ utilizes remote sensing data of rainfall, evaporation (including soil, water, vegetation and interception evaporation), soil moisture, water levels, land use and biomass production. Examples will be demonstrated that show how remote sensing and hydrological models can be smartly integrated for generating all the required input data into WA+. A standard water accounting system for all basins in the world - with a special emphasis on data scarce regions - is under development. First results of using remote sensing measurements and hydrological modeling as an alternative to expensive field data sets, will be presented and discussed.

  10. Thinking about Accountability

    PubMed Central

    Deber, Raisa B.

    2014-01-01

    Accountability is a key component of healthcare reforms, in Canada and internationally, but there is increasing recognition that one size does not fit all. A more nuanced understanding begins with clarifying what is meant by accountability, including specifying for what, by whom, to whom and how. These papers arise from a Partnership for Health System Improvement (PHSI), funded by the Canadian Institutes of Health Research (CIHR), on approaches to accountability that examined accountability across multiple healthcare subsectors in Ontario. The partnership features collaboration among an interdisciplinary team, working with senior policy makers, to clarify what is known about best practices to achieve accountability under various circumstances. This paper presents our conceptual framework. It examines potential approaches (policy instruments) and postulates that their outcomes may vary by subsector depending upon (a) the policy goals being pursued, (b) governance/ownership structures and relationships and (c) the types of goods and services being delivered, and their production characteristics (e.g., contestability, measurability and complexity). PMID:25305385

  11. A Structural Equation Modeling Investigation of the Theory of Planned Behavior Applied to Accounting Professors' Enforcement of Cheating Rules

    ERIC Educational Resources Information Center

    Brigham, Stephen Scott

    2010-01-01

    This dissertation concerns factors that influence accounting professors' formal enforcement of academic misconduct rules using the theory of planned behavior ("TPB") as a theoretical framework. The theory posits that intentional behavior, such as enforcement, can be predicted by peoples' perceived behavioral control and…

  12. Uncertainty calculation in the RIO air quality interpolation model and aggregation to yearly average and exceedance probability taking into account the temporal auto-correlation.

    NASA Astrophysics Data System (ADS)

    Maiheu, Bino; Nele, Veldeman; Janssen, Stijn; Fierens, Frans; Trimpeneers, Elke

    2010-05-01

    RIO is an operational air quality interpolation model developed by VITO and IRCEL-CELINE and produces hourly maps for different pollutant concentrations such as O3, PM10 and NO2 measured in Belgium [1]. The RIO methodology consists of residual interpolation by Ordinary Kriging of the residuals of the measured concentrations and pre-determined trend functions which express the relation between land cover information derived from the CORINE dataset and measured time-averaged concentrations [2]. RIO is an important tool for the Flemish administration and is among others used to report, as is required by each member state, on the air quality status in Flanders to the European Union. We feel that a good estimate of the uncertainty of the yearly average concentration maps and the probability of norm-exceedance are both as important as the values themselves. In this contribution we will discuss the uncertainties specific to the RIO methodology, where we have both contributions from the Ordinary Kriging technique as well as the trend functions. Especially the parameterisation of the uncertainty w.r.t. the trend functions will be the key indicator for the degree of confidence the model puts into using land cover information for spatial interpolation of pollutant concentrations. Next, we will propose a method which enables us to calculate the uncertainty on the yearly average concentrations as well as the number of exceedance days, taking into account the temporal auto-correlation of the concentration fields. It is clear that the autocorrelation will have a strong impact on the uncertainty estimation [3] of yearly averages. The method we propose is based on a Monte Carlo technique that generates an ensemble of interpolation maps with the correct temporal auto-correlation structure. From a generated ensemble, the calculation of norm-exceedance probability at each interpolation location becomes quite straightforward. A comparison with the ad-hoc method proposed in [3], where
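
    The Monte Carlo idea can be sketched as follows: draw ensemble members with an AR(1) temporal autocorrelation and count exceedance days per member. The AR(1) choice and all parameter values are assumptions for illustration, not the RIO parameterisation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ensemble of daily concentration series with AR(1) temporal autocorrelation;
# exceedance days are counted per member. All values are illustrative.
n_days, n_ens = 365, 2000
mean_conc, sigma, rho = 30.0, 12.0, 0.6    # PM10-like scales (assumed)
limit, allowed_days = 50.0, 35             # EU daily PM10 limit and allowance

eps = rng.normal(size=(n_ens, n_days))
x = np.empty_like(eps)
x[:, 0] = eps[:, 0]
for t in range(1, n_days):                 # x_t = rho x_{t-1} + sqrt(1-rho^2) e_t
    x[:, t] = rho * x[:, t - 1] + np.sqrt(1.0 - rho**2) * eps[:, t]
conc = mean_conc + sigma * x

exceed_days = (conc > limit).sum(axis=1)
print("P(norm exceedance):", (exceed_days > allowed_days).mean())
```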

  13. Evaluation of snow data assimilation using the ensemble Kalman filter for seasonal streamflow prediction in the western United States

    NASA Astrophysics Data System (ADS)

    Huang, Chengcheng; Newman, Andrew J.; Clark, Martyn P.; Wood, Andrew W.; Zheng, Xiaogu

    2017-01-01

    In this study, we examine the potential of snow water equivalent data assimilation (DA) using the ensemble Kalman filter (EnKF) to improve seasonal streamflow predictions. There are several goals of this study. First, we aim to examine some empirical aspects of the EnKF, namely the observational uncertainty estimates and the observation transformation operator. Second, we use a newly created ensemble forcing dataset to develop ensemble model states that provide an estimate of model state uncertainty. Third, we examine the impact of varying the observation and model state uncertainty on forecast skill. We use basins from the Pacific Northwest, Rocky Mountains, and California in the western United States with the coupled Snow-17 and Sacramento Soil Moisture Accounting (SAC-SMA) models. We find that most EnKF implementation variations result in improved streamflow prediction, but the methodological choices in the examined components impact predictive performance in a non-uniform way across the basins. Finally, basins with relatively higher calibrated model performance (> 0.80 NSE) without DA generally have lesser improvement with DA, while basins with poorer historical model performance show greater improvements.
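
    For reference, a single scalar EnKF analysis step (the update at the heart of the scheme) can be written in a few lines; the SWE numbers, identity observation operator and perturbed-observation variant used here are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# One scalar EnKF analysis step (observation operator H = identity).
# SWE values and error variances are illustrative only.
n = 100
x = rng.normal(250.0, 40.0, n)                   # forecast ensemble of SWE (mm)
y_obs, r = 220.0, 15.0 ** 2                      # observation and error variance

y_pert = y_obs + rng.normal(0.0, np.sqrt(r), n)  # perturbed observations
p = x.var(ddof=1)                                # ensemble forecast variance
k = p / (p + r)                                  # Kalman gain
x_a = x + k * (y_pert - x)                       # analysis ensemble
print(f"prior mean {x.mean():.1f} mm -> posterior mean {x_a.mean():.1f} mm")
```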

  14. Viewpoints on Accountability.

    ERIC Educational Resources Information Center

    Educational Innovators Press, Tucson, AZ.

    This booklet contains five papers which examine the activities, successes, and pitfalls encountered by educators who are introducing accountability techniques into instructional programs where they did not exist in the past. The papers are based on actual programs and offer possible solutions in the areas considered, which are 1) performance…

  15. Making Accountability Really Count

    ERIC Educational Resources Information Center

    Resnick, Lauren B.

    2006-01-01

    Standards-based education has now reached a stage where it is possible to evaluate its overall effectiveness. Several earlier papers in the special issue of "Educational Measurement: Issues and Practice" on "Test Scores and State Accountability" (Volume 24, Number 4) examined specific state policies and their effects on schools…

  16. Accountability Update, March 2000.

    ERIC Educational Resources Information Center

    Washington State Higher Education Coordinating Board, Olympia.

    This report provides the Washington State legislature, the Governor, and other interested parties with an update on the accountability performance of each of the state's public baccalaureate institutions (Central Washington University, Eastern Washington University, Evergreen State College, Washington State University, Western Washington…

  17. Educational Accounting Procedures.

    ERIC Educational Resources Information Center

    Tidwell, Sam B.

    This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing,…

  18. Professional Capital as Accountability

    ERIC Educational Resources Information Center

    Fullan, Michael; Rincón-Gallardo, Santiago; Hargreaves, Andy

    2015-01-01

    This paper seeks to clarify and spell out the responsibilities of policy makers to create the conditions for an effective accountability system that produces substantial improvements in student learning, strengthens the teaching profession, and provides transparency of results to the public. The authors point out that U.S. policy makers will need…

  19. Accountability for Productivity

    ERIC Educational Resources Information Center

    Wellman, Jane

    2010-01-01

    Productivity gains in higher education won't be made just by improving cost effectiveness or even performance. They need to be documented, communicated, and integrated into a strategic agenda to increase attainment. This requires special attention to "accountability" for productivity, meaning public presentation and communication of evidence about…

  20. Legal responsibility and accountability.

    PubMed

    Cox, Chris

    2010-06-01

    Shifting boundaries in healthcare roles have led to anxiety among some nurses about their legal responsibilities and accountabilities. This is partly because of a lack of education about legal principles that underpin healthcare delivery. This article explains the law in terms of standards of care, duty of care, vicarious liability and indemnity insurance.

  1. Democracy, Accountability, and Education

    ERIC Educational Resources Information Center

    Levinson, Meira

    2011-01-01

    Educational standards, assessments, and accountability systems are of immense political moment around the world. But there is no developed theory exploring the role that these systems should play within a democratic polity in particular. On the one hand, well-designed standards are public goods, supported by assessment and accountability…

  2. Community Accountability Conferencing.

    ERIC Educational Resources Information Center

    Thorsborne, Margaret

    Community Accountability Conferencing (CAC) was first introduced in Queensland, Australia schools in early 1994 after a serious assault in the school community. Some family members, students, and staff were dissatisfied with the solution of suspending the offenders. Seeking an alternative, comprehensive intervention strategy, the school community…

  3. Planning for Accountability.

    ERIC Educational Resources Information Center

    Cuneo, Tim; Bell, Shareen; Welsh-Gray, Carol

    1999-01-01

    Through its Challenge 2000 program, Joint Venture: Silicon Valley Network's 21st Century Education Initiative has been working with K-12 schools to improve student performance in literature, math, and science. Clearly stated standards, appropriate assessments, formal monitoring, critical friends, and systemwide accountability are keys to success.…

  4. Institutional Accountability Report, 2000.

    ERIC Educational Resources Information Center

    Santa Fe Community Coll., Gainesville, FL. Office of Institutional Research and Planning.

    This document discusses Santa Fe Community College's (SFCC) (Florida) five accountability measures. The type of data available provided on these measures is as follows: (1) District High School Enrollment Report and Retention and Success Rate Report; (2) Associate of Arts Degree Transfer Performance in the State University System; (3) Licensure…

  5. Fiscal Accounting Manual.

    ERIC Educational Resources Information Center

    California State Dept. of Housing and Community Development, Sacramento. Indian Assistance Program.

    Written in simple, easy to understand form, the manual provides a vehicle for the untrained person in bookkeeping to control funds received from grants for Indian Tribal Councils and Indian organizations. The method used to control grants (federal, state, or private) is fund accounting, designed to organize rendering services on a non-profit…

  6. Curtail Accountability, Cultivate Attainability

    ERIC Educational Resources Information Center

    Wraga, William G.

    2011-01-01

    The current test-driven accountability movement, codified in the No Child Left Behind Act of 2001 ([NCLB] 2002), was a misguided idea that will have the effect not of improving the education of children and youth, but of indicting the public school system of the United States. To improve education in the United States, politicians, policy makers,…

  7. Higher Education Accountability Plans

    ERIC Educational Resources Information Center

    Washington Higher Education Coordinating Board, 2003

    2003-01-01

    Washington state's public four-year universities and college have submitted their 2003-05 accountability plans to the Higher Education Coordinating Board (HECB). The state operating budget directs the Board to review these plans and set biennial performance targets for each institution. For 2003-05, the four-year institutions are reporting on a…

  8. Accounting 202, 302.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg.

    This teaching guide consists of guidelines for conducting two secondary-level introductory accounting courses. Intended for vocational business education students, the courses are designed to introduce financial principles and practices important to personal and business life, to promote development of clerical and bookkeeping skills sufficient…

  9. Student Attendance Accounting Manual.

    ERIC Educational Resources Information Center

    Freitas, Joseph M.

    In response to state legislation authorizing procedures for changes in academic calendars and measurement of student workload in California community colleges, this manual from the Chancellor's Office provides guidelines for student attendance accounting. Chapter 1 explains general items such as the academic calendar, admissions policies, student…

  10. Full Accounting for Curriculum.

    ERIC Educational Resources Information Center

    Paddock, Marie-Louise

    1988-01-01

    Given the curriculum's importance in the educational process, curriculum evaluation should be considered as essential as a district financial audit. When Fenwick English conducted a 1979 curriculum audit of Columbus, Ohio, schools, the accounting firm encountered numerous problems concerning development, review, and management practices. Planning…

  11. Excel in the Accounting Curriculum: Perceptions from Accounting Professors

    ERIC Educational Resources Information Center

    Ramachandran Rackliffe, Usha; Ragland, Linda

    2016-01-01

    Public accounting firms emphasize the importance of accounting graduates being proficient in Excel. Since many accounting graduates often aspire to work in public accounting, a question arises as to whether there should be an emphasis on Excel in accounting education. The purpose of this paper is to specifically look at this issue by examining…

  12. A Pariah Profession? Some Student Perceptions of Accounting and Accountancy.

    ERIC Educational Resources Information Center

    Fisher, Roy; Murphy, Vivienne

    1995-01-01

    Existing literature and a survey of 106 undergraduate accounting students in the United Kingdom were analyzed for perceptions of the accounting profession and the academic discipline of accounting. Results suggest that among accounting and nonaccounting students alike, there exist coexisting perceptions of accounting as having high status and low…

  13. 18 CFR 367.1420 - Account 142, Customer accounts receivable.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Conservation of Power and Water Resources. Balance Sheet Chart of Accounts, Current and Accrued Assets. § 367.1420 Account 142, Customer accounts receivable. (a) This account must include amounts due from customers for service, and...

  14. An Existentialist Account of Identity Formation.

    ERIC Educational Resources Information Center

    Bilsker, Dan

    1992-01-01

    Gives account of Marcia's identity formation model in language of existentialist philosophy. Examines parallels between ego-identity and existentialist approaches. Describes identity in terms of existentialist concepts of Heidegger and Sartre. Argues that existentialist account of identity formation has benefits of clarification of difficult…

  15. Integrated Approach to User Account Management

    NASA Technical Reports Server (NTRS)

    Kesselman, Glenn; Smith, William

    2007-01-01

    IT environments consist of both Windows and other platforms. Providing user account management for this model has become increasingly difficult. If Microsoft's Active Directory could be enhanced to extend a Windows identity for authentication services for Unix, Linux, Java and Macintosh systems, then an integrated approach to user account management could be realized.

  16. Reclaiming "Sense" from "Cents" in Accounting Education

    ERIC Educational Resources Information Center

    Dellaportas, Steven

    2015-01-01

    This essay adopts an interpretive methodology of relevant literature to explore the limitations of accounting education when it is taught purely as a technical practice. The essay proceeds from the assumption that conventional accounting education is captured by a positivistic neo-classical model of decision-making that draws on economic rationale…

  17. MATERIAL CONTROL ACCOUNTING INMM

    SciTech Connect

    Hasty, T.

    2009-06-14

    Since 1996, the Mining and Chemical Combine (MCC, formerly known as K-26) and the United States Department of Energy (DOE) have been cooperating under the Nuclear Material Protection, Control and Accounting (MPC&A) Program between the Russian Federation and U.S. Governments. Since MCC continues to operate a reactor for steam and electricity production for the site and the city of Zheleznogorsk, which results in the production of weapons-grade plutonium, one of the goals of the MPC&A program is to support implementation of an expanded, comprehensive nuclear material control and accounting (MC&A) program. To date MCC has completed upgrades identified in the initial gap analysis and documented in the site MC&A Plan and is implementing additional upgrades identified during an update to the gap analysis. The scope of these upgrades includes implementation of the MCC organization structure relating to MC&A, establishing a material balance area structure for special nuclear materials (SNM) storage and bulk processing areas, and material control functions including SNM portal monitors at target locations. Material accounting function upgrades include enhancements in the conduct of physical inventories, limit-of-error inventory difference procedure enhancements, implementation of a basic computerized accounting system for four SNM storage areas, implementation of measurement equipment for improved accountability reporting, and both new and revised site-level MC&A procedures. This paper will discuss the implementation of MC&A upgrades at MCC based on the requirements established in the comprehensive MC&A plan developed by the Mining and Chemical Combine as part of the MPC&A Program.

  18. The Ensemble Framework for Flash Flood Forecasting: Global and CONUS Applications

    NASA Astrophysics Data System (ADS)

    Flamig, Z.; Vergara, H. J.; Clark, R. A.; Gourley, J. J.; Kirstetter, P. E.; Hong, Y.

    2015-12-01

    The Ensemble Framework for Flash Flood Forecasting (EF5) is a distributed hydrologic modeling framework combining water balance components such as the Variable Infiltration Curve (VIC) and Sacramento Soil Moisture Accounting (SAC-SMA) with kinematic wave channel routing. The Snow-17 snow pack model is included as an optional component in EF5 for basins where snow impacts are important. EF5 also contains the Differential Evolution Adaptive Metropolis (DREAM) parameter estimation scheme for model calibration. EF5 is made to be user friendly and as such training has been developed into a weeklong course. This course has been tested in modeling workshops held in Namibia and Mexico. EF5 has also been applied to specialized applications including the Flooded Locations and Simulated Hydrographs (FLASH) project. FLASH aims to provide flash flood monitoring and forecasting over the CONUS using Multi-Radar Multi-Sensor precipitation forcing. Using the extensive field measurements database from the 10,000 USGS measurement locations across the CONUS, parameters were developed for the kinematic wave routing in FLASH. This presentation will highlight FLASH performance over the CONUS on basins less than 1,000 km2 and discuss the development of simulated streamflow climatology over the CONUS for data mining applications. A global application of EF5 has also been developed using satellite based precipitation measurements combined with numerical weather prediction forecasts to produce flood and impact forecasts. The performance of this global system will be assessed and future plans detailed.

  19. Mathematical modeling of liquid/liquid hollow fiber membrane contactor accounting for interfacial transport phenomena: Extraction of lanthanides as a surrogate for actinides

    SciTech Connect

    Rogers, J.D.

    1994-08-04

    This report is divided into two parts. The second part is divided into the following sections: experimental protocol; modeling the hollow fiber extractor using film theory; Graetz model of the hollow fiber membrane process; fundamental diffusive-kinetic model; and diffusive liquid membrane device-a rigorous model. The first part is divided into: membrane and membrane process-a concept; metal extraction; kinetics of metal extraction; modeling the membrane contactor; and interfacial phenomenon-boundary conditions-applied to membrane transport.

  20. First-Person Accounts.

    ERIC Educational Resources Information Center

    Gribs, H.; And Others

    1995-01-01

    Personal accounts describe the lives of 2 individuals with deaf-blindness, one an 87-year-old woman who was deaf from birth and became totally blind over a 50-year period and the other of a woman who became deaf-blind as a result of a fever at the age of 7. Managing activities of daily life and experiencing sensory hallucinations are among topics…

  1. Managing global accounts.

    PubMed

    Yip, George S; Bink, Audrey J M

    2007-09-01

    Global account management--which treats a multinational customer's operations as one integrated account, with coherent terms for pricing, product specifications, and service--has proliferated over the past decade. Yet according to the authors' research, only about a third of the suppliers that have offered GAM are pleased with the results. The unhappy majority may be suffering from confusion about when, how, and to whom to provide it. Yip, the director of research and innovation at Capgemini, and Bink, the head of marketing communications at Uxbridge College, have found that GAM can improve customer satisfaction by 20% or more and can raise both profits and revenues by at least 15% within just a few years of its introduction. They provide guidelines to help companies achieve similar results. The first steps are determining whether your products or services are appropriate for GAM, whether your customers want such a program, whether those customers are crucial to your strategy, and how GAM might affect your competitive advantage. If moving forward makes sense, the authors' exhibit, "A Scorecard for Selecting Global Accounts," can help you target the right customers. The final step is deciding which of three basic forms to offer: coordination GAM (in which national operations remain relatively strong), control GAM (in which the global operation and the national operations are fairly balanced), and separate GAM (in which a new business unit has total responsibility for global accounts). Given the difficulty and expense of providing multiple varieties, the vast majority of companies should initially customize just one---and they should be careful not to start with a choice that is too ambitious for either themselves or their customers to handle.

  2. Accounting for Every Kilowatt

    DTIC Science & Technology

    2014-10-01

    Equation 1. One estimate of the energy density of diesel fuel (ρdiesel) coupled with the efficiency (η) of a 60-kilowatt generator operating at...are going. Reducing demand without reducing our capability requires appliance-level feedback, which current smart-meter technology does not...event. Accountability: Soldiers need appliance-level feedback to reduce electrical consumption. Specifically, they need to know what loads are currently

  3. Integrated Cost Accounting System.

    DTIC Science & Technology

    1992-07-27

    few other companies. Harvard Business Review contained articles explaining the ideas behind the new costing methods and examples of applications...technical report. Peter Drucker, in an article in Harvard Business Review, carefully explains that accounting must change in response to the changes in...Kaplan, in a Harvard Business Review article, develops the idea of four levels of activities: facility-sustaining activities; product-sustaining activities

  4. Hospitals' Internal Accountability

    PubMed Central

    Kraetschmer, Nancy; Jass, Janak; Woodman, Cheryl; Koo, Irene; Kromm, Seija K.; Deber, Raisa B.

    2014-01-01

    This study aimed to enhance understanding of the dimensions of accountability captured and not captured in acute care hospitals in Ontario, Canada. Based on an Ontario-wide survey and follow-up interviews with three acute care hospitals in the Greater Toronto Area, we found that the two dominant dimensions of hospital accountability being reported are financial and quality performance. These two dimensions drove both internal and external reporting. Hospitals' internal reports typically included performance measures that were required or mandated in external reports. Although respondents saw reporting as a valuable mechanism for hospitals and the health system to monitor and track progress against desired outcomes, multiple challenges with current reporting requirements were communicated, including the following: 58% of survey respondents indicated that performance-reporting resources were insufficient; manual data capture and performance reporting were prevalent, with the majority of hospitals lacking sophisticated tools or technology to effectively capture, analyze and report performance data; hospitals tended to focus on those processes and outcomes with high measurability; and 53% of respondents indicated that valuable cross-system accountability, performance measures or both were not captured by current reporting requirements. PMID:25305387

  5. Risk-Informed Monitoring, Verification and Accounting (RI-MVA). An NRAP White Paper Documenting Methods and a Demonstration Model for Risk-Informed MVA System Design and Operations in Geologic Carbon Sequestration

    SciTech Connect

    Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C.; Anderson, Richard M.

    2011-09-30

    This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.

  6. Inflation Accounting Methods and their Effectiveness.

    DTIC Science & Technology

    accounting and current cost accounting are explained as the major inflation accounting methods. Inflation accounting standards announced in the United...inflation accounting, constant purchasing power accounting, constant dollar accounting, current cost accounting, current value.

  7. Public Accountability in the Age of Neo-Liberal Governance.

    ERIC Educational Resources Information Center

    Ranson, Stewart

    2003-01-01

    Analyzes the impact of neo-liberal corporate accountability on educational governance since the demise of professional accountability in the mid-1970s. Argues that corporate accountability is inappropriate for educational governance. Proposes an alternative model: democratic accountability. (Contains 1 figure and 125 references.)(PKP)

  8. Improving Hydrologic Data Assimilation by a Multivariate Particle Filter-Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Yan, H.; DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Data assimilation (DA) is a popular method for merging information from multiple sources (i.e., models and remote sensing), leading to improved hydrologic prediction. With the increasing availability of satellite observations (such as soil moisture) in recent years, DA is emerging in operational forecast systems. Although these techniques have seen widespread application, developmental research has continued to further refine their effectiveness. This presentation will examine potential improvements to the Particle Filter (PF) through the inclusion of multivariate correlation structures. Applications of the PF typically rely on univariate DA schemes (such as assimilating the observed discharge at the outlet), and multivariate schemes generally ignore the spatial correlation of the observations. In this study, a multivariate DA scheme is proposed by introducing geostatistics into the newly developed particle filter with Markov chain Monte Carlo (PF-MCMC) method. This new method is assessed in a case study over one of the basins with natural hydrologic processes in the Model Parameter Estimation Experiment (MOPEX), located in Arizona. The multivariate PF-MCMC method is used to assimilate the Advanced Scatterometer (ASCAT) gridded (12.5 km) soil moisture retrievals and the observed streamflow at five gages (four inlet and one outlet) into the Sacramento Soil Moisture Accounting (SAC-SMA) model at the same scale (12.5 km), leading to greater skill in hydrologic predictions.

  9. Numerical modeling of 1D heterogeneous combustion in porous media under free convection taking into account dependence of permeability on porosity

    NASA Astrophysics Data System (ADS)

    Lutsenko, N. A.

    2016-06-01

    Using numerical experiments, the one-dimensional unsteady process of heterogeneous combustion in a porous object under free convection is considered, taking into account the dependence of permeability on porosity. The combustion is due to an exothermic reaction between the fuel in the solid porous medium and the oxidizer contained in the gas flowing through the porous object. In the present work the process is considered under natural convection, i.e., when the flow rate and velocity of the gas at the inlet to the porous object are unknown, but the gas pressure at the object boundaries is known. The influence of permeability changes caused by porosity changes on the solution is investigated using an original numerical method based on a combination of explicit and implicit finite-difference schemes. It is shown that taking into account the dependence of permeability on porosity, described by known equations, can significantly change the solution in the one-dimensional case. The change of permeability due to the change of porosity increases the speed of both the cocurrent and the countercurrent combustion waves and raises the temperature in the combustion zone of the countercurrent combustion wave.
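
    The abstract does not name the permeability-porosity equations used, so the sketch below uses the common Kozeny-Carman scaling purely as an illustrative stand-in.

```python
# Kozeny-Carman scaling of permeability with porosity, relative to a
# reference state (k0 at phi0). The relation and all values are assumed.
def kozeny_carman(phi, k0=1e-12, phi0=0.4):
    return k0 * (phi**3 / (1.0 - phi)**2) / (phi0**3 / (1.0 - phi0)**2)

# Burning out solid fuel raises porosity, hence permeability and gas flow.
for phi in (0.3, 0.4, 0.5, 0.6):
    print(f"porosity {phi:.1f} -> permeability {kozeny_carman(phi):.2e} m^2")
```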

  10. The Integration of Behavioral Accounting in Undergraduate Accounting Curricula.

    ERIC Educational Resources Information Center

    Buchanan, Phillip G.; Cao, Le Thi

    1986-01-01

    The study reported here is part of a continuing project with the goal of determining the place of behavioral accounting in the accounting curricula. While the first two studies focused on the graduate accounting curricula and the practitioners' opinions on the subject, this study concentrates on the behavioral accounting content of undergraduate…

  11. New Frontiers: Training Forensic Accountants within the Accounting Program

    ERIC Educational Resources Information Center

    Ramaswamy, Vinita

    2007-01-01

    Accountants have recently been subject to very unpleasant publicity following the collapse of Enron and other major companies. There has been a plethora of accounting failures and accounting restatements of falsified earnings, with litigations and prosecutions taking place every day. As the FASB struggles to tighten the loopholes in accounting,…

  12. Performance and Accountability Report

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The NASA Fiscal Year 2002 Performance and Accountability Report is presented. Over the past year, significant changes have been implemented to greatly improve NASA's management while continuing to break new ground in science and technology. Excellent progress has been made in implementing the President's Management Agenda. NASA is leading the government in its implementation of the five government-wide initiatives. NASA received an unqualified audit opinion on FY 2002 financial statements. The vast majority of performance goals have been achieved, furthering each area of NASA's mission. The contents include: 1) NASA Vision and Mission; 2) Management's Discussion and Analysis; 3) Performance; and 4) Financial.

  13. Automated Accounting. Payroll. Instructor Module.

    ERIC Educational Resources Information Center

    Moses, Duane R.

    This teacher's guide was developed to assist business instructors using Dac Easy Accounting Payroll Version 3.0 edition software in their accounting programs. The module contains assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting--payroll. Basic accounting skills are…

  14. Where Are the Accounting Professors?

    ERIC Educational Resources Information Center

    Chang, Jui-Chin; Sun, Huey-Lian

    2008-01-01

    Accounting education is facing a crisis of shortage of accounting faculty. This study discusses the reasons behind the shortage and offers suggestions to increase the supply of accounting faculty. Our suggestions are as followings. First, educators should begin promoting accounting academia as one of the career choices to undergraduate and…

  15. Revamping High School Accounting Courses.

    ERIC Educational Resources Information Center

    Bittner, Joseph

    2002-01-01

    Provides ideas for updating accounting courses: convert to semester length; focus on financial reporting/analysis, financial statements, the accounting cycle; turn textbook exercises into practice sets for the accounting cycle; teach about corporate accounting; and address individual line items on financial statements. (SK)

  16. A multi-scale cardiovascular system model can account for the load-dependence of the end-systolic pressure-volume relationship

    PubMed Central

    2013-01-01

    Background The end-systolic pressure-volume relationship is often considered as a load-independent property of the heart and, for this reason, is widely used as an index of ventricular contractility. However, many criticisms have been expressed against this index and the underlying time-varying elastance theory: first, it does not consider the phenomena underlying contraction and second, the end-systolic pressure volume relationship has been experimentally shown to be load-dependent. Methods In place of the time-varying elastance theory, a microscopic model of sarcomere contraction is used to infer the pressure generated by the contraction of the left ventricle, considered as a spherical assembling of sarcomere units. The left ventricle model is inserted into a closed-loop model of the cardiovascular system. Finally, parameters of the modified cardiovascular system model are identified to reproduce the hemodynamics of a normal dog. Results Experiments that have proven the limitations of the time-varying elastance theory are reproduced with our model: (1) preload reductions, (2) afterload increases, (3) the same experiments with increased ventricular contractility, (4) isovolumic contractions and (5) flow-clamps. All experiments simulated with the model generate different end-systolic pressure-volume relationships, showing that this relationship is actually load-dependent. Furthermore, we show that the results of our simulations are in good agreement with experiments. Conclusions We implemented a multi-scale model of the cardiovascular system, in which ventricular contraction is described by a detailed sarcomere model. Using this model, we successfully reproduced a number of experiments that have shown the failing points of the time-varying elastance theory. In particular, the developed multi-scale model of the cardiovascular system can capture the load-dependence of the end-systolic pressure-volume relationship. PMID:23363818
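
    For contrast, the classical time-varying elastance relation the paper argues against is P(t) = E(t)(V(t) - V0); a toy version with an assumed activation curve and invented constants:

```python
import math

# Classical relation: P(t) = E(t) * (V(t) - V0), with E(t) sweeping between
# diastolic and systolic elastance. Activation shape and constants assumed.
def elastance(t, e_min=0.06, e_max=2.0, t_sys=0.3, period=0.8):
    tau = t % period
    act = math.sin(math.pi * tau / t_sys) ** 2 if tau < t_sys else 0.0
    return e_min + (e_max - e_min) * act   # mmHg/ml

v, v0 = 100.0, 10.0                        # volume and unstressed volume (ml)
for t in (0.0, 0.15, 0.30, 0.50):
    print(f"t = {t:.2f} s  P = {elastance(t) * (v - v0):6.1f} mmHg")
```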

  17. Automated attendance accounting system

    NASA Technical Reports Server (NTRS)

    Chapman, C. P. (Inventor)

    1973-01-01

    An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressible buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as temporary storage. Data from the terminals is applied to the data buffers on a digit-by-digit basis for transfer via a multiplexer to the computer.
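
    As a rough illustration of the digit-by-digit buffering scheme described above, the Python sketch below models hypothetical terminals feeding a multiplexer; the names and structure are illustrative only and are not taken from the patent.

      from collections import deque

      class DataTerminal:
          """Hypothetical keyboard terminal: converts decimal digits to 4-bit binary."""
          def __init__(self):
              self.buffer = deque()  # per-terminal data buffer (temporary storage)

          def press(self, digit: int) -> None:
              assert 0 <= digit <= 9
              self.buffer.append(format(digit, "04b"))  # decimal digit -> binary

      def multiplex(terminals):
          """Drain each terminal's buffer in turn, as the multiplexer would."""
          words = []
          for i, t in enumerate(terminals):
              while t.buffer:
                  words.append((i, t.buffer.popleft()))
          return words

      terminals = [DataTerminal(), DataTerminal()]
      terminals[0].press(4); terminals[0].press(2)  # badge number "42" keyed in
      terminals[1].press(7)
      print(multiplex(terminals))  # [(0, '0100'), (0, '0010'), (1, '0111')]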

  18. Financial accounting for radiology executives.

    PubMed

    Seidmann, Abraham; Mehta, Tushar

    2005-03-01

    The authors review the role of financial accounting information from the perspective of a radiology executive. They begin by introducing the role of pro forma statements. They discuss the fundamental concepts of accounting, including the matching principle and accrual accounting. The authors then explore the use of financial accounting information in making investment decisions in diagnostic medical imaging. The paper focuses on critically evaluating the benefits and limitations of financial accounting for decision making in a radiology practice.

  19. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    SciTech Connect

    Robert Pincus

    2011-05-17

    This project focused on the variability of clouds, which is present across a wide range of scales, from the synoptic to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.

  20. Accounting for Mass Transfer Kinetics when Modeling the Impact of Low Permeability Layers in a Groundwater Source Zone on Dissolved Contaminant Fate and Transport

    DTIC Science & Technology

    2014-03-27

    [Figures and tables referenced in the indexed excerpt: conceptual model of the contaminated aquifer (after Sievers, 2012); MODFLOW head output along a longitudinal cross-section; Bachman Road site hydrogeological values (Abriola et al., 2005); parameter values input into MODFLOW.] Two modeling packages within GMS were employed in this study, MODFLOW (Chapman and Parker, 2011) and RT3D (Clement et al., 2004). GMS was used…
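
    For orientation, the first-order dual-domain mass transfer formulation typically used in such studies (a generic form, as implemented in MT3DMS/RT3D-type codes, and not quoted from this report) is

        \theta_{im} \frac{\partial C_{im}}{\partial t} = \zeta\,(C_m - C_{im})

    where C_m and C_{im} are the solute concentrations in the mobile aquifer and the immobile low-permeability layers, \theta_{im} is the immobile-zone porosity, and \zeta [1/T] is the first-order mass transfer rate coefficient; small \zeta produces the long contaminant tailing that equilibrium models miss.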

  1. Extending Integrate-and-Fire Model Neurons to Account for the Effects of Weak Electric Fields and Input Filtering Mediated by the Dendrite.

    PubMed

    Aspart, Florian; Ladenbauer, Josef; Obermayer, Klaus

    2016-11-01

    Transcranial brain stimulation and evidence of ephaptic coupling have recently sparked strong interest in understanding the effects of weak electric fields on the dynamics of brain networks and of coupled populations of neurons. The collective dynamics of large neuronal populations can be efficiently studied using single-compartment (point) model neurons of the integrate-and-fire (IF) type as their elements. These models, however, lack the dendritic morphology required to biophysically describe the effect of an extracellular electric field on the neuronal membrane voltage. Here, we extend the IF point neuron models to accurately reflect morphology-dependent electric field effects extracted from a canonical spatial "ball-and-stick" (BS) neuron model. Even in the absence of an extracellular field, neuronal morphology by itself strongly affects the cellular response properties. We, therefore, derive additional components for leaky and nonlinear IF neuron models to reproduce the subthreshold voltage and spiking dynamics of the BS model exposed to both fluctuating somatic and dendritic inputs and an extracellular electric field. We show that an oscillatory electric field causes spike rate resonance, or equivalently, pronounced spike-to-field coherence. Its resonance frequency depends on the location of the synaptic background inputs. For somatic inputs the resonance appears in the beta and gamma frequency range, whereas for distal dendritic inputs it is shifted to even higher frequencies. Irrespective of an external electric field, the presence of a dendritic cable attenuates the subthreshold response at the soma to slowly-varying somatic inputs while implementing a low-pass filter for distal dendritic inputs. Our point neuron model extension is straightforward to implement and is computationally much more efficient compared to the original BS model. It is well suited for studying the dynamics of large populations of neurons with heterogeneous dendritic morphology with…
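
    A minimal sketch of this kind of model extension is given below, assuming illustrative parameter values; the paper's actual coefficients are derived from the BS model and are not reproduced here.

      # Leaky integrate-and-fire neuron with an added oscillatory-field drive.
      # All parameter values are assumptions for illustration only.
      import numpy as np

      def lif_with_field(T=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0,
                         mu=60.0, sigma=2.0, E_amp=1.0, f_field=40.0, c_field=5.0):
          rng = np.random.default_rng(0)
          v, spikes = 0.0, []
          for k in range(int(T / dt)):
              t = k * dt
              field = E_amp * np.sin(2 * np.pi * f_field * t)      # extracellular field
              noise = sigma * np.sqrt(dt) * rng.standard_normal()  # fluctuating input
              v += dt * (-v / tau + mu + c_field * field) + noise  # membrane update
              if v >= v_th:                                        # spike and reset
                  spikes.append(t)
                  v = v_reset
          return spikes

      print(len(lif_with_field()), "spikes in 1 s")

    Sweeping f_field and measuring the coherence between spike times and the field would qualitatively reproduce the reported spike rate resonance.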

  2. Extending Integrate-and-Fire Model Neurons to Account for the Effects of Weak Electric Fields and Input Filtering Mediated by the Dendrite

    PubMed Central

    Obermayer, Klaus

    2016-01-01

    Transcranial brain stimulation and evidence of ephaptic coupling have recently sparked strong interest in understanding the effects of weak electric fields on the dynamics of brain networks and of coupled populations of neurons. The collective dynamics of large neuronal populations can be efficiently studied using single-compartment (point) model neurons of the integrate-and-fire (IF) type as their elements. These models, however, lack the dendritic morphology required to biophysically describe the effect of an extracellular electric field on the neuronal membrane voltage. Here, we extend the IF point neuron models to accurately reflect morphology-dependent electric field effects extracted from a canonical spatial “ball-and-stick” (BS) neuron model. Even in the absence of an extracellular field, neuronal morphology by itself strongly affects the cellular response properties. We, therefore, derive additional components for leaky and nonlinear IF neuron models to reproduce the subthreshold voltage and spiking dynamics of the BS model exposed to both fluctuating somatic and dendritic inputs and an extracellular electric field. We show that an oscillatory electric field causes spike rate resonance, or equivalently, pronounced spike-to-field coherence. Its resonance frequency depends on the location of the synaptic background inputs. For somatic inputs the resonance appears in the beta and gamma frequency range, whereas for distal dendritic inputs it is shifted to even higher frequencies. Irrespective of an external electric field, the presence of a dendritic cable attenuates the subthreshold response at the soma to slowly-varying somatic inputs while implementing a low-pass filter for distal dendritic inputs. Our point neuron model extension is straightforward to implement and is computationally much more efficient compared to the original BS model. It is well suited for studying the dynamics of large populations of neurons with heterogeneous dendritic morphology…

  3. A modelling exercise to examine variations of NOx concentrations on adjacent footpaths in a street canyon: The importance of accounting for wind conditions and fleet composition.

    PubMed

    Gallagher, J

    2016-04-15

    Personal measurement studies and modelling investigations are used to examine pollutant exposure for pedestrians in the urban environment, each presenting various strengths and weaknesses in relation to labour and equipment costs, a sufficient sampling period and the accuracy of results. This modelling exercise considers the potential benefits of modelling results over personal measurement studies and aims to demonstrate how variations in fleet composition affect exposure results (presented as mean concentrations along the centre of both footpaths) in different traffic scenarios. A model of Pearse Street in Dublin, Ireland, was developed by combining a computational fluid dynamics (CFD) model and a semi-empirical equation to simulate pollutant dispersion in the street. Using local NOx concentrations, traffic and meteorological data from a two-week period in 2011, the model was validated and showed a good fit. To explore the long-term variations in personal exposure due to variations in fleet composition, synthesised traffic data were used to compare short-term personal exposure data (over a two-week period) with the results for an extended one-year period. Personal exposure during the two-week period underestimated the one-year results by between 8% and 65% on adjacent footpaths. The findings demonstrate the potential for relative differences in pedestrian exposure to exist between the north and south footpaths due to changing wind conditions in both peak and off-peak traffic scenarios. This modelling approach may help overcome potential under- or over-estimations of concentrations in personal measurement studies on the footpaths. Further research aims to measure pollutant concentrations on adjacent footpaths in different traffic and wind conditions and to develop a simpler modelling system to identify pollutant hotspots on city footpaths so that urban planners can implement strategies to improve urban air quality.
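
    The sampling-period effect reported here is easy to reproduce qualitatively. The toy Python sketch below uses synthetic numbers, not the Pearse Street data, to show how a two-week mean can misstate an annual mean when concentrations drift seasonally.

      # Synthetic illustration of short-campaign bias in mean NOx exposure.
      import numpy as np

      rng = np.random.default_rng(1)
      hours = 24 * 365
      seasonal = 10 * np.sin(2 * np.pi * np.arange(hours) / hours)  # seasonal cycle
      nox = 40 + seasonal + rng.normal(0, 8, hours)                 # hourly NOx, ug/m3

      annual_mean = nox.mean()
      campaign = nox[2000:2000 + 24 * 14].mean()  # two-week campaign near a seasonal peak
      bias = 100 * (campaign - annual_mean) / annual_mean
      print(f"annual {annual_mean:.1f}, two-week {campaign:.1f}, bias {bias:+.0f}%")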

  4. Austenite Grain Growth in a 2.25Cr-1Mo Vanadium-Free Steel Accounting for Zener Pinning and Solute Drag: Experimental Study and Modeling

    NASA Astrophysics Data System (ADS)

    Dépinoy, S.; Marini, B.; Toffolon-Masclet, C.; Roch, F.; Gourgues-Lorenzon, A.-F.

    2017-02-01

    Austenite grain size has been experimentally determined for various austenitization temperatures and times in a 2.25Cr-1Mo vanadium-free steel. Three grain growth regimes were highlighted: limited growth occurs at lower temperatures [1193 K (920 °C) and 1243 K (970 °C)]; parabolic growth prevails at higher temperatures [1343 K (1070 °C) and 1393 K (1120 °C)]. At the intermediate temperature of 1293 K (1020 °C), slowed growth was observed. Classical grain growth equations were applied to the experimental results, accounting for Zener pinning and solute drag as possible causes of temperature-dependent limited growth. It was shown that Zener pinning due to AlN particles could not be responsible for limited growth, although it has some effect at lower temperatures. Instead, the limited and slowed growth regimes are very likely the result of segregation of molybdenum atoms at austenite grain boundaries. The temperature dependence of this phenomenon may be linked to the co-segregation of molybdenum and carbon atoms.
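
    The "classical grain growth equations" invoked here typically take the following generic textbook forms (not the paper's fitted parameters): parabolic growth,

        D^2 - D_0^2 = k_0\, t\, \exp\!\left(-\frac{Q}{RT}\right),

    and the Zener limiting grain size set by a volume fraction f_v of pinning particles of radius r,

        D_{\lim} \approx \frac{4\,r}{3\,f_v}.

    Comparing the measured plateau grain size against D_{\lim} computed from the AlN particle population is the kind of check that allows Zener pinning to be ruled out as the cause of limited growth.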

  5. Comprehensive model of how reality distortion and symptoms occur in schizophrenia: could impairment in learning-dependent predictive perception account for the manifestations of schizophrenia?

    PubMed

    Krishnan, Ranga R; Kraus, Michael S; Keefe, Richard S E

    2011-06-01

    Conventional wisdom has not laid out a clear and uniform profile of schizophrenia as a unitary entity. One of the key first steps in elucidating the neurobiology of this entity would be to characterize the essential and common elements in the group of entities called schizophrenia. Kraepelin in his introduction notes 'the conviction seems to be more and more gaining ground that dementia praecox on the whole represents, a well characterized form of disease, and that we are justified in regarding the majority of the clinical pictures which are brought together here as the expression of a single morbid process, though outwardly they often diverge very far from one another'. But what is that single morbid process? We suggest that just as the uniform defect in all types of cancer is impaired regulation of cell proliferation, the primary defect in the group of entities called schizophrenia is persistent defective hierarchical temporal processing. This manifests in the form of chronic memory-prediction errors or deficits in learning-dependent predictive perception. These deficits account for the symptoms that present as reality distortion (delusions, thought disorder and hallucinations). This constellation of symptoms corresponds with the profile of most patients currently diagnosed as suffering from schizophrenia. In this paper we describe how these deficits can lead to the various symptoms of schizophrenia.

  6. The union, the mining company, and the environment: steelworkers build a multi-stakeholder model for corporate accountability at Phelps Dodge.

    PubMed

    Lewis, S

    1999-01-01

    This is a case study of ongoing relations between the Phelps Dodge mining company, a United Steelworkers local representing 560 employees at the company's Chino Mines in New Mexico, and an array of other concerned stakeholders. This case study shows that labor can be a full partner in environmental advocacy, and even take a leadership role in building a strong multi-stakeholder alliance for corporate accountability. While the case also shows that corporate jobs blackmail is alive and well in the global economy, the labor-community coalition that has emerged at the mining complex has broken some new ground. The approach taken attends to diverse stakeholder interests--cultural protection issues of Native-American and Mexican-American ethnic groups; conservation, groundwater and Right-to-Know issues of traditional environmental constituencies; and environmental liability and disclosure concerns of corporate shareholders. Among the key developments are: a new approach to corporate reporting to shareholders as an enforcement and right-to-know tool; the use of the internet as an information dissemination and action tool; and the potential for environmentally needed improvements to serve as a receptor for employment of workers at a mine during periods of reduced production.

  7. Free energy of mixing of an Fe-Co liquid alloy taking into account nondiagonal d-d-electron coupling in the framework of the Willis-Harrison model

    NASA Astrophysics Data System (ADS)

    Dubinin, N. E.; Vatolin, N. A.

    2016-11-01

    Within the framework of the Willis-Harrison model, the effect of taking into account d-d-electron couplings nondiagonal in the magnetic quantum number between the neighboring atoms in a transition metal on the partial pair potentials and the free energy of mixing of an Fe-Co liquid alloy near the melting temperature is investigated. It is found that an increase in the fraction of nondiagonal couplings results in a decrease in the depth of the first minimum of the partial pair potentials and in the displacement of its position towards larger r. It is shown that taking this factor into account considerably improves the agreement with experimental data of the concentration dependence of the free energy of mixing of the system under consideration.

  8. You Only Die Once: Accounting for Multi-Attributable Mortality Risks in Multi-Disease Models for Health-Economic Analyses.

    PubMed

    Hoogenveen, Rudolf T; Boshuizen, Hendriek C; Engelfriet, Peter M; van Baal, Pieter Hm

    2016-07-12

    Mortality rates in Markov models, as used in health economic studies, are often estimated from summary statistics that allow limited adjustment for confounders. If interventions are targeted at multiple diseases and/or risk factors, these mortality rates need to be combined in a single model. This requires them to be mutually adjusted to avoid 'double counting' of mortality. We present a mathematical modeling approach to describe the joint effect of mutually dependent risk factors and chronic diseases on mortality in a consistent manner. Most importantly, this approach explicitly allows the use of readily available external data sources. An additional advantage is that existing models can be smoothly expanded to encompass more diseases/risk factors. To illustrate the usefulness of this method and how it should be implemented, we present a health economic model that links risk factors for diseases to mortality from these diseases, and describe the causal chain running from these risk factors (e.g., obesity) through to the occurrence of disease (e.g., diabetes, CVD) and death. Our results suggest that these adjustment procedures may have a large impact on estimated mortality rates. An improper adjustment of the mortality rates could result in an underestimation of disease prevalence and, therefore, disease costs.
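
    A simplified sketch of the adjustment being described, with notation assumed for illustration: if \mu_{all}(a) is the all-cause mortality rate at age a taken from life tables, \pi_d(a) the prevalence of disease d, and \mu_d(a) the excess mortality among those with d, the background rate must be backed out as

        \mu_{other}(a) = \mu_{all}(a) - \sum_d \pi_d(a)\, \mu_d(a),

    so that summing \mu_{other} with the disease-specific excess rates of an individual's disease profile does not double count deaths already embedded in \mu_{all}.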

  9. USING FINANCIAL ACCOUNTING METHODS TO FURTHER DEVELOP AND COMMUNICATE ENVIRONMENTAL ACCOUNTING USING EMERGY

    EPA Science Inventory

    The idea that the methods and models of accounting and bookkeeping might be useful in describing, understanding, and managing environmental systems is implicit in the title of H.T. Odum's book, Environmental Accounting: Emergy and Environmental Decision Making. In this paper, I ...

  10. Numerical modeling approach taking into account the influence of delamination for performance capacity of reinforced concrete beam strengthened in bending by CFRP

    NASA Astrophysics Data System (ADS)

    Wibowo, Supardi

    2017-03-01

    Reinforced concrete members strengthened in bending by external bonding of Carbon Fiber Reinforced Polymer (CFRP) may present several failure modes: failure of the materials or failure of the concrete-CFRP interface. Nevertheless, experience gained from testing confirms that in most cases delamination prevails over the other possible rupture modes. Delamination in CFRP-strengthened sections is difficult to model because it involves multiple parameters such as FRP stiffness, adhesive material properties and the presence of cracks in the concrete, among others. A simplified numerical model to predict the flexural capacity at failure of a reinforced concrete beam strengthened by CFRP is presented in this paper, together with its experimental validation. Based on the results of the proposed model, an equation for the prediction of the ultimate flexural capacity that prevents CFRP debonding is proposed.
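
    The paper's proposed equation is not reproduced in this record. For orientation only, the generic form of the ultimate moment of an FRP-strengthened rectangular section (an ACI 440.2R-style expression, with the FRP stress capped by a debonding strain limit) reads

        M_u = A_s f_y \left(d - \frac{\beta_1 c}{2}\right) + \psi_f A_f f_{fe} \left(h - \frac{\beta_1 c}{2}\right),

    where A_s and A_f are the steel and CFRP areas, f_{fe} = E_f \varepsilon_{fd} is the effective CFRP stress limited by the debonding strain \varepsilon_{fd}, c is the neutral-axis depth, and \psi_f is an FRP strength-reduction factor.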

  11. The interaction of electric field and hydrostatic pressure in an electrical double layer: A simple "first principle" model that accounts for the finite sizes of counterions.

    PubMed

    Shapovalov, Vladimir L

    2015-09-15

    A simple model describing the influence of ion size in the electrical double layer (EDL) near a highly charged plane is proposed here. This model is based on the Poisson-Boltzmann equation with a single additional term representing the mechanical response of bulky ions to hydrostatic pressure. This pressure is produced by Coulomb forces, and increases to several kilobars in the vicinity of a highly charged plane. Numerical simulations demonstrate close packing as a limit for counterion concentrations. The differential capacity reaches a maximum at 0.1-0.3 V and remains reasonably small over a wide range of potentials.
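
    One plausible reading of the modification, shown schematically here (the exact term is given in the paper): the Boltzmann factor is augmented with a pressure-volume work term for ions of finite volume v_i,

        c_i(x) = c_{i0} \exp\!\left(-\frac{z_i e\,\phi(x) + p(x)\,v_i}{k_B T}\right),

    inserted into the Poisson equation \varepsilon\,\phi'' = -\sum_i z_i e\, c_i(x). As p(x) rises to kilobar levels near the charged plane, the exponential saturates and counterion concentrations level off at close packing instead of diverging as in the classical Gouy-Chapman solution.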

  12. A DGTD method for the numerical modeling of the interaction of light with nanometer scale metallic structures taking into account non-local dispersion effects

    NASA Astrophysics Data System (ADS)

    Schmitt, Nikolai; Scheid, Claire; Lanteri, Stéphane; Moreau, Antoine; Viquerat, Jonathan

    2016-07-01

    The interaction of light with metallic nanostructures is increasingly attracting interest because of numerous potential applications. Sub-wavelength metallic structures, when illuminated with a frequency close to the plasma frequency of the metal, present resonances that cause extreme local field enhancements. Exploiting the latter in applications of interest requires detailed knowledge of the occurring fields, which cannot be obtained analytically. Numerical tools are therefore an absolute necessity. The insight they provide is very often the only way to get a deep enough understanding of the very rich physics at play. For the numerical modeling of light-structure interaction on the nanoscale, the choice of an appropriate material model is a crucial point. Approaches that are adopted in a first instance are based on local (i.e. with no interaction between electrons) dispersive models, e.g. Drude or Drude-Lorentz models. From the mathematical point of view, when a time-domain modeling is considered, these models lead to an additional system of ordinary differential equations coupled to Maxwell's equations. However, recent experiments have shown that the repulsive interaction between electrons inside the metal makes the response of metals intrinsically non-local and that this effect cannot generally be overlooked. Technological achievements have enabled the consideration of metallic structures in a regime where such non-localities have a significant influence on the structures' optical response. This leads to an additional, in general non-linear, system of partial differential equations which is, when coupled to Maxwell's equations, significantly more difficult to treat. Nevertheless, dealing with a linearized non-local dispersion model already opens the route to numerous practical applications of plasmonics. In this work, we present a Discontinuous Galerkin Time-Domain (DGTD) method able to solve the system of Maxwell's equations…
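
    Concretely, the local Drude model mentioned above adds one ordinary differential equation for the polarization current, while the linearized non-local (hydrodynamic) correction turns it into a partial differential equation. These are standard forms from the plasmonics literature, not quoted from this record:

        \partial_t \mathbf{J} + \gamma\,\mathbf{J} = \varepsilon_0 \omega_p^2\,\mathbf{E}   (local Drude)

        \partial_t \mathbf{J} + \gamma\,\mathbf{J} = \varepsilon_0 \omega_p^2\,\mathbf{E} + \beta^2\,\nabla(\nabla\cdot\mathbf{P})   (linearized hydrodynamic)

    with \mathbf{J} = \partial_t \mathbf{P}, plasma frequency \omega_p, damping rate \gamma, and a non-locality parameter \beta proportional to the Fermi velocity. The \beta^2 term couples neighboring points in space, which is what makes the time-domain treatment substantially harder.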

  13. A DGTD method for the numerical modeling of the interaction of light with nanometer scale metallic structures taking into account non-local dispersion effects

    SciTech Connect

    Schmitt, Nikolai; Scheid, Claire; Lanteri, Stéphane; Moreau, Antoine; Viquerat, Jonathan

    2016-07-01

    The interaction of light with metallic nanostructures is increasingly attracting interest because of numerous potential applications. Sub-wavelength metallic structures, when illuminated with a frequency close to the plasma frequency of the metal, present resonances that cause extreme local field enhancements. Exploiting the latter in applications of interest requires detailed knowledge of the occurring fields, which cannot be obtained analytically. Numerical tools are therefore an absolute necessity. The insight they provide is very often the only way to get a deep enough understanding of the very rich physics at play. For the numerical modeling of light-structure interaction on the nanoscale, the choice of an appropriate material model is a crucial point. Approaches that are adopted in a first instance are based on local (i.e. with no interaction between electrons) dispersive models, e.g. Drude or Drude–Lorentz models. From the mathematical point of view, when a time-domain modeling is considered, these models lead to an additional system of ordinary differential equations coupled to Maxwell's equations. However, recent experiments have shown that the repulsive interaction between electrons inside the metal makes the response of metals intrinsically non-local and that this effect cannot generally be overlooked. Technological achievements have enabled the consideration of metallic structures in a regime where such non-localities have a significant influence on the structures' optical response. This leads to an additional, in general non-linear, system of partial differential equations which is, when coupled to Maxwell's equations, significantly more difficult to treat. Nevertheless, dealing with a linearized non-local dispersion model already opens the route to numerous practical applications of plasmonics. In this work, we present a Discontinuous Galerkin Time-Domain (DGTD) method able to solve the system of Maxwell's equations…

  14. Cost Accounting and Analysis for University Libraries.

    ERIC Educational Resources Information Center

    Leimkuhler, Ferdinand F.; Cooper, Michael D.

    The approach to library planning studied in this report is the use of accounting models to measure library costs and implement program budgets. A cost-flow model for a university library is developed and tested with historical data from the Berkeley General Library. Various comparisons of an exploratory nature are made of the unit costs for…

  15. Cost Accounting and Analysis for University Libraries

    ERIC Educational Resources Information Center

    Leimkuhler, Ferdinand F.; Cooper, Michael D.

    1971-01-01

    The approach to library planning studied in this paper is the use of accounting models to measure library costs and implement program budgets. A cost-flow model for a university library is developed and tested with historical data from the General Library at the University of California, Berkeley. (4 references) (Author)
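
    The flavor of such a cost-flow model can be conveyed with a toy Python calculation; the figures below are illustrative only, not the Berkeley data.

      # Toy cost-flow sketch: allocate library department costs to services
      # and compute unit costs per transaction.
      costs = {"acquisitions": 120_000, "cataloging": 90_000, "circulation": 60_000}
      volumes = {"acquisitions": 15_000,   # items purchased
                 "cataloging": 15_000,     # items cataloged
                 "circulation": 300_000}   # loans made

      unit_costs = {dept: costs[dept] / volumes[dept] for dept in costs}
      for dept, uc in unit_costs.items():
          print(f"{dept:12s} ${uc:.2f} per transaction")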

  16. NASA Accountability Report

    NASA Technical Reports Server (NTRS)

    1997-01-01

    NASA is piloting fiscal year (FY) 1997 Accountability Reports, which streamline and upgrade reporting to Congress and the public. The document presents statements by the NASA Administrator and the Chief Financial Officer, followed by an overview of NASA's organizational structure and the planning and budgeting process. The performance of NASA in four strategic enterprises is reviewed: (1) Space Science, (2) Mission to Planet Earth, (3) Human Exploration and Development of Space, and (4) Aeronautics and Space Transportation Technology. Those areas which support the strategic enterprises are also reviewed in a section called Crosscutting Processes. For each of the four enterprises, there is discussion about the long-term goals, the short-term objectives and the accomplishments during FY 1997. The Crosscutting Processes section reviews issues and accomplishments relating to human resources, procurement, information technology, physical resources, financial management, small and disadvantaged businesses, and policy and plans. Following the discussion of the individual areas is Management's Discussion and Analysis of NASA's financial statements. This is followed by a report by an independent commercial auditor and the financial statements.

  17. Spills, drills, and accountability

    SciTech Connect

    1993-12-31

    NRDC seeks preventive approaches to oil pollution on U.S. coasts. The recent oil spills in Spain and Scotland have highlighted a fact too easy to forget in a society that uses petroleum every minute of every day: oil is profoundly toxic. One tiny drop on a bald eagle's egg has been known to kill the embryo inside. Every activity involving oil (drilling for it, piping it, shipping it) poses risks that must be taken with utmost caution. Moreover, oil production is highly polluting. It emits substantial air pollution, such as nitrogen oxides that can form smog and acid rain. The wells bring up great quantities of toxic waste: solids, liquids and sludges often contaminated by oil, toxic metals, or even radioactivity. This article examines the following topics, focusing on oil pollution control and prevention in coastal regions of the USA: alternative energy sources and polluter accountability; a ban on offshore drilling as exemplified by the Energy Policy Act; tanker-free zones; and accurate damage evaluations. The policy of the Natural Resources Defense Council is articulated.

  18. The process of processing: exploring the validity of Neisser's perceptual cycle model with accounts from critical decision-making in the cockpit.

    PubMed

    Plant, Katherine L; Stanton, Neville A

    2015-01-01

    The perceptual cycle model (PCM) has been widely applied in ergonomics research in domains including road, rail and aviation. The PCM assumes that information processing occurs in a cyclical manner drawing on top-down and bottom-up influences to produce perceptual exploration and actions. However, the validity of the model has not been addressed. This paper explores the construct validity of the PCM in the context of aeronautical decision-making. The critical decision method was used to interview 20 helicopter pilots about critical decision-making. The data were qualitatively analysed using an established coding scheme, and composite PCMs for incident phases were constructed. It was found that the PCM provided a mutually exclusive and exhaustive classification of the information-processing cycles for dealing with critical incidents. However, a counter-cycle was also discovered which has been attributed to skill-based behaviour, characteristic of experts. The practical applications and future research questions are discussed. Practitioner Summary: This paper explores whether information processing, when dealing with critical incidents, occurs in the manner anticipated by the perceptual cycle model. In addition to the traditional processing cycle, a reciprocal counter-cycle was found. This research can be utilised by those who use the model as an accident analysis framework.

  19. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences

    NASA Astrophysics Data System (ADS)

    Bosman, Peter A. N.; Alderliesten, Tanja

    2016-03-01

    We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.

  20. Historical Psychology and the Milgram Paradigm: Tests of an Experimentally Derived Model of Defiance Using Accounts of Massacres by Nazi Reserve Police Battalion 101

    ERIC Educational Resources Information Center

    Navarick, Douglas J.

    2012-01-01

    In Milgram's (1963, 1965a, 1965b, 1974/2004) experiments on destructive obedience, an authority figure repeatedly ordered a resistant participant to deliver what seemed to be increasingly painful shocks to a confederate victim who demanded to be released. A three-stage behavioral model (aversive conditioning of contextual stimuli, emergence of a…