Sample records for default parameter values

  1. Finding Top-k Unexplained Activities in Video

    DTIC Science & Technology

    2012-03-09

    parameters that define a UAP instance affect the running time by varying the values of each parameter while keeping the others fixed to a default...value. Runtime of Top-k TUA. Table 1 reports the values we considered for each parameter along with the corresponding default value.

        Parameter    Values                  Default value
        k            1, 2, 5, All            All
        τ            0.4, 0.6, 0.8           0.6
        L            160, 200, 240, 280      200
        # worlds     7E+04, 4E+05, 2E+07     2E+07

    TABLE 1: Parameter values used in

  2. A Simulation Program with Latency Exploitation for the Transient Analysis of Digital Circuits.

    DTIC Science & Technology

    1983-08-01

    PW PER) Examples: VIN 3 0 PULSE(-5 5 1NS 1NS 1NS 50NS 100NS) parameters default values units V1 (initial value) volts or amps V2 (pulsed value) volts...TAU1 TD2 TAU2) Examples: VIN 3 0 EXP(-5 0 2NS 30NS 60NS 40NS) parameters default values units V1 (initial value) volts or amps V2 (pulsed value
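
    The record above lists SPICE-style source parameters whose unspecified values fall back to defaults. As a hedged illustration (not the original simulator's code), a minimal Python evaluator of a PULSE(V1 V2 TD TR TF PW PER) source could look like the following; the concrete fallbacks (1 ns edges, tstop for PW and PER) are illustrative assumptions in the spirit of SPICE's conventions:

        def pulse(t, v1, v2, td=0.0, tr=1e-9, tf=1e-9, pw=None, per=None, tstop=1e-6):
            """Evaluate a SPICE-style PULSE source at time t (illustrative defaults)."""
            pw = tstop if pw is None else pw      # unspecified PW defaults to TSTOP
            per = tstop if per is None else per   # unspecified PER defaults to TSTOP
            if t < td:
                return v1                         # before the delay: initial value
            tau = (t - td) % per                  # position within the current period
            if tau < tr:                          # rising edge
                return v1 + (v2 - v1) * tau / tr
            if tau < tr + pw:                     # pulse top
                return v2
            if tau < tr + pw + tf:                # falling edge
                return v2 + (v1 - v2) * (tau - tr - pw) / tf
            return v1                             # back at the initial value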

  3. VizieR Online Data Catalog: FAMA code for stellar parameters and abundances (Magrini+, 2013)

    NASA Astrophysics Data System (ADS)

    Magrini, L.; Randich, S.; Friel, E.; Spina, L.; Jacobson, H.; Cantat-Gaudin, T.; Donati, P.; Baglioni, R.; Maiorca, E.; Bragaglia, A.; Sordo, R.; Vallenari, A.

    2013-07-01

    FAMA v.1, July 2013, distributed with MOOGv2013 and Kurucz models. Perl codes: read_out2.pl read_final.pl driver.pl sclipping_26.0.pl sclipping_final.pl sclipping_26.1.pl confronta.pl fama.pl. Model atmospheres and interpolator (Kurucz models): MODEL_ATMO. MOOG_files: files to compile MOOG (the most recent version of MOOG can be obtained from http://www.as.utexas.edu/~chris/moog.html). FAMAmoogfiles: files to update when compiling MOOG. OUTPUT: directory in which the results will be stored; contains an sm macro to produce the final plots. automoog.par: file with parameters for FAMA:
    1) OUTPUTdir
    2) MOOGdir
    3) modelsdir
    4) 1.0 (default) percentage of the dispersion of FeI abundances to be considered when computing the errors on the stellar parameters; 1.0 means 100%, so to compute e.g. the error on Teff we allow the code to find the Teff corresponding to a slope given by σ(FeI)/range(EP)
    5) 1.2 (default) σ clipping for FeI lines
    6) 1.0 (default) σ clipping for FeII lines
    7) 1.0 (default) σ clipping for the other elements
    8) 1.0 (default) value of the QP parameter; higher values mean weaker convergence criteria
    star.iron: EWs in the correct format to test the code. sun.par: initial parameters for the test (1 data file).

  4. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    The forecasting skill of complex weather and climate models has been improved by using effective optimization methods to tune the sensitive parameters that exert the greatest impact on simulated results. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific problem deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from only 127 parameter samples, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.

  5. KABAM Version 1.0 User's Guide and Technical Documentation - Appendix C - Explanation of Default Values Representing Biotic Characteristics of Aquatic Ecosystem, Including Food Web Structure

    EPA Pesticide Factsheets

    Information relevant to KABAM and explanations of default parameters used to define the 7 trophic levels. KABAM is a simulation model used to predict pesticide concentrations in aquatic regions for use in exposure assessments.

  6. Description of the National Hydrologic Model for use with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.

    2018-01-08

    This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or to computed initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.

  7. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression, and calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternative models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations; the number of observations and estimated parameters; and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that one parameter value may be expected to be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
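
    The four default criteria named in this record have standard textbook forms for least-squares-calibrated models. The sketch below is my own illustration of those forms and of how criterion values become posterior model probabilities; it is not MMA's source code, and MMA's exact constants may differ:

        import numpy as np

        def information_criteria(sswr, n, k):
            """Standard AIC/AICc/BIC for a model calibrated by least squares.

            sswr: sum of weighted squared residuals, n: number of observations,
            k: number of estimated parameters.
            """
            aic = n * np.log(sswr / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # second-order bias correction
            bic = n * np.log(sswr / n) + k * np.log(n)
            return {"AIC": aic, "AICc": aicc, "BIC": bic}

        def posterior_model_probabilities(criterion_values):
            """Convert one criterion's values across competing models into weights."""
            c = np.asarray(criterion_values, dtype=float)
            delta = c - c.min()           # differences from the best (smallest) value
            w = np.exp(-0.5 * delta)
            return w / w.sum()

    Models with the smallest criterion value receive the largest weight, which is how rankings and model-averaged parameter estimates are formed.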

  8. Technical Report for Calculations of Atmospheric Dispersion at Onsite Locations for Department of Energy Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levin, Alan; Chaves, Chris

    2015-04-04

    The Department of Energy (DOE) has performed an evaluation of the technical bases for the default value for the atmospheric dispersion parameter χ/Q. This parameter appears in the calculation of radiological dose at the onsite receptor location (co-located worker at 100 meters) in safety analysis of DOE nuclear facilities. The results of the calculation are then used to determine whether safety significant engineered controls should be established to prevent and/or mitigate the event causing the release of hazardous material. An evaluation of methods for calculation of the dispersion of potential chemical releases for the purpose of estimating the chemical exposure at the co-located worker location was also performed. DOE’s evaluation consisted of: (a) a review of the regulatory basis for the default χ/Q dispersion parameter; (b) an analysis of this parameter’s sensitivity to various factors that affect the dispersion of radioactive material; and (c) performance of additional independent calculations to assess the appropriate use of the default χ/Q value.

  9. VLSI (Very Large Scale Integration) Design Tools Reference Manual - Release 1.0.

    DTIC Science & Technology

    1983-10-01

    Pulse PULSE(V1 V2 TD TR TF PW PER) Examples: VIN 3 0 PULSE(-1 1 2NS 2NS 2NS 50NS 100NS) parameter default units V1 (initial value) Volts or Amps V2...V0 VA FREQ TD THETA) Examples: VIN 3 0 SIN(0 1 100MEG 1NS 1E10) parameter default value units V0 (offset) Volts or Amps VA (amplitude) Volts or Amps...TD to TSTOP: V0 + VA·e^(−(time−TD)·THETA)·sin(2π·FREQ·(time+TD)). 3. Exponential EXP(V1 V2 TD1 TAU1 TD2 TAU2) Examples: VIN 3 0 EXP(-4 -1 2NS 30NS 60NS

  10. Derivation of WECC Distributed PV System Model Parameters from Quasi-Static Time-Series Distribution System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Boemer, Jens C.; Vittal, Eknath

    The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters seems to become important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.

  11. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  12. Default values for assessment of potential dermal exposure of the hands to industrial chemicals in the scope of regulatory risk assessments.

    PubMed

    Marquart, Hans; Warren, Nicholas D; Laitinen, Juha; van Hemmen, Joop J

    2006-07-01

    Dermal exposure needs to be addressed in regulatory risk assessment of chemicals. The models used so far are based on very limited data. The EU project RISKOFDERM has gathered a large number of new measurements on dermal exposure to industrial chemicals in various work situations, together with information on possible determinants of exposure. These data and information, together with some non-RISKOFDERM data were used to derive default values for potential dermal exposure of the hands for so-called 'TGD exposure scenarios'. TGD exposure scenarios have similar values for some very important determinant(s) of dermal exposure, such as amount of substance used. They form narrower bands within the so-called 'RISKOFDERM scenarios', which cluster exposure situations according to the same purpose of use of the products. The RISKOFDERM scenarios in turn are narrower bands within the so-called Dermal Exposure Operation units (DEO units) that were defined in the RISKOFDERM project to cluster situations with similar exposure processes and exposure routes. Default values for both reasonable worst case situations and typical situations were derived, both for single datasets and, where possible, for combined datasets that fit the same TGD exposure scenario. The following reasonable worst case potential hand exposures were derived from combined datasets: (i) loading and filling of large containers (or mixers) with large amounts (many litres) of liquids: 11,500 mg per scenario (14 mg cm(-2) per scenario with surface of the hands assumed to be 820 cm(2)); (ii) careful mixing of small quantities (tens of grams in <1l): 4.1 mg per scenario (0.005 mg cm(-2) per scenario); (iii) spreading of (viscous) liquids with a comb on a large surface area: 130 mg per scenario (0.16 mg cm(-2) per scenario); (iv) brushing and rolling of (relatively viscous) liquid products on surfaces: 6500 mg per scenario (8 mg cm(-2) per scenario) and (v) spraying large amounts of liquids (paints, cleaning products) on large areas: 12,000 mg per scenario (14 mg cm(-2) per scenario). These default values are considered useful for estimating exposure for similar substances in similar situations with low uncertainty. Several other default values based on single datasets can also be used, but lead to estimates with a higher uncertainty, due to their more limited basis. Sufficient analogy in all described parameters of the scenario, including duration, is needed to enable proper use of the default values. The default values lead to similar estimates as the RISKOFDERM dermal exposure model that was based on the same datasets, but uses very different parameters. Both approaches are preferred over older general models, such as EASE, that are not based on data from actual dermal exposure situations.
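
    The per-area figures quoted in this record follow from dividing each per-scenario default by the assumed 820 cm2 hand surface; a short script reproduces that arithmetic:

        # Reasonable worst-case defaults quoted in the abstract (mg per scenario),
        # converted to mg per cm2 using the assumed 820 cm2 hand surface.
        HAND_AREA_CM2 = 820.0

        defaults_mg_per_scenario = {
            "loading/filling large containers": 11_500,
            "careful mixing of small quantities": 4.1,
            "spreading with a comb": 130,
            "brushing and rolling": 6_500,
            "spraying large amounts": 12_000,
        }

        for scenario, mg in defaults_mg_per_scenario.items():
            print(f"{scenario}: {mg} mg = {mg / HAND_AREA_CM2:.3f} mg/cm2 per scenario")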

  13. Impact of Multileaf Collimator Configuration Parameters on the Dosimetric Accuracy of 6-MV Intensity-Modulated Radiation Therapy Treatment Plans.

    PubMed

    Petersen, Nick; Perrin, David; Newhauser, Wayne; Zhang, Rui

    2017-01-01

    The purpose of this study was to evaluate the impact of selected configuration parameters that govern multileaf collimator (MLC) transmission and rounded leaf offset in a commercial treatment planning system (TPS) (Pinnacle3, Philips Medical Systems, Andover, MA, USA) on the accuracy of intensity-modulated radiation therapy (IMRT) dose calculation. The MLC leaf transmission factor was modified based on measurements made with ionization chambers. The table of parameters containing rounded-leaf-end offset values was modified by measuring the radiation field edge as a function of leaf bank position with an ionization chamber in a scanning water-tank dosimetry system and comparing the locations to those predicted by the TPS. The modified parameter values were validated by performing IMRT quality assurance (QA) measurements on 19 gantry-static IMRT plans. Planar dose measurements were performed with radiographic film and a diode array (MapCHECK2) and compared to TPS calculated dose distributions using default and modified configuration parameters. Based on measurements, the leaf transmission factor was changed from a default value of 0.001 to 0.005. Surprisingly, this modification resulted in a small but statistically significant worsening of IMRT QA gamma-index passing rate, which revealed that the overall dosimetric accuracy of the TPS depends on multiple configuration parameters in a manner that is coupled and not intuitive because of the commissioning protocol used in our clinic. The rounded leaf offset table had little room for improvement, with the average difference between the default and modified offset values being -0.2 ± 0.7 mm. While our results depend on the current clinical protocols, treatment unit and TPS used, the methodology used in this study is generally applicable. Different clinics could potentially obtain different results and improve their dosimetric accuracy using our approach.

  14. Environment Modeling Using Runtime Values for JPF-Android

    NASA Technical Reports Server (NTRS)

    van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem

    2015-01-01

    Software applications are developed to be executed in a specific environment. This environment includes external native libraries that add functionality to the application and drivers that fire the application execution. For testing and verification, the environment of an application is abstracted and simplified using models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generate drivers and stubs based on values collected during runtime instead of using default values. Entry-points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
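
    The core idea of this record, replacing empty default-returning stubs with stubs that replay values observed at runtime, can be sketched in a few lines of Python. This is a toy illustration of the concept, not JPF-Android's actual instrumentation; all names are hypothetical:

        import functools

        RECORDED = {}  # method name -> list of (args, kwargs, return value)

        def record(fn):
            """Instrument a method to log its arguments and return values."""
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                result = fn(*args, **kwargs)
                RECORDED.setdefault(fn.__name__, []).append((args, kwargs, result))
                return result
            return wrapper

        def make_stub(name, default=None):
            """Build a stub that replays recorded values instead of a default."""
            observations = RECORDED.get(name, [])
            def stub(*args, **kwargs):
                for seen_args, seen_kwargs, result in observations:
                    if seen_args == args and seen_kwargs == kwargs:
                        return result  # replay a value seen at runtime
                # fall back to the last observation, then to the default value
                return observations[-1][2] if observations else default
            return stub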

  15. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    USGS Publications Warehouse

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, providing guidance for model improvement.
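
    The objective reduced during the calibration in this record is a sum of weighted squared residuals; a minimal sketch of that objective follows (the standard form minimized by regularized-inversion tools such as PEST, not DayCent or PEST source code):

        import numpy as np

        def weighted_ssr(observed, simulated, weights):
            """Sum of weighted squared residuals between measurements and model output."""
            r = np.asarray(weights) * (np.asarray(observed) - np.asarray(simulated))
            return float(np.sum(r ** 2))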

  16. Variations in algorithm implementation among quantitative texture analysis software packages

    NASA Astrophysics Data System (ADS)

    Foy, Joseph J.; Mitta, Prerana; Nowosatka, Lauren R.; Mendel, Kayla R.; Li, Hui; Giger, Maryellen L.; Al-Hallaq, Hania; Armato, Samuel G.

    2018-02-01

    Open-source texture analysis software allows for the advancement of radiomics research. Variations in texture features, however, result from discrepancies in algorithm implementation. Anatomically matched regions of interest (ROIs) that captured normal breast parenchyma were placed in the magnetic resonance images (MRI) of 20 patients at two time points. Six first-order features and six gray-level co-occurrence matrix (GLCM) features were calculated for each ROI using four texture analysis packages. Features were extracted using package-specific default GLCM parameters and using GLCM parameters modified to yield the greatest consistency among packages. Relative change in the value of each feature between time points was calculated for each ROI. Distributions of relative feature value differences were compared across packages. Absolute agreement among feature values was quantified by the intra-class correlation coefficient. Among first-order features, significant differences were found for max, range, and mean, and only kurtosis showed poor agreement. All six second-order features showed significant differences using package-specific default GLCM parameters, and five second-order features showed poor agreement; with modified GLCM parameters, no significant differences among second-order features were found, and all second-order features showed poor agreement. While relative texture change discrepancies existed across packages, these differences were not significant when consistent parameters were used.
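
    The GLCM parameters whose package-specific defaults drove the discrepancies in this record are typically the pixel offset, the number of gray levels, symmetry, and normalization. A compact reference implementation (my own sketch, not any of the four packages' code) makes that dependence explicit:

        import numpy as np

        def glcm(image, dx=1, dy=0, levels=8, symmetric=True, normed=True):
            """Gray-level co-occurrence matrix for one offset (dx, dy)."""
            img = np.asarray(image, dtype=float)
            # Quantize intensities into `levels` gray-level bins
            q = np.minimum((img / (img.max() + 1e-12) * levels).astype(int), levels - 1)
            m = np.zeros((levels, levels))
            h, w = q.shape
            for y in range(max(0, -dy), h - max(0, dy)):
                for x in range(max(0, -dx), w - max(0, dx)):
                    m[q[y, x], q[y + dy, x + dx]] += 1
            if symmetric:
                m += m.T
            if normed:
                m /= m.sum()
            return m

        def contrast(m):
            """GLCM contrast feature: sum of p(i, j) * (i - j)^2."""
            i, j = np.indices(m.shape)
            return float(np.sum(m * (i - j) ** 2))

    Computing contrast(glcm(img, levels=8)) and contrast(glcm(img, levels=32)) on the same region illustrates how two packages with different default gray-level counts can disagree.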

  17. Optimizing parameter choice for FSL-Brain Extraction Tool (BET) on 3D T1 images in multiple sclerosis.

    PubMed

    Popescu, V; Battaglini, M; Hoogstrate, W S; Verfaillie, S C J; Sluimer, I C; van Schijndel, R A; van Dijk, B W; Cover, K S; Knol, D L; Jenkinson, M; Barkhof, F; de Stefano, N; Vrenken, H

    2012-07-16

    Brain atrophy studies often use FSL-BET (Brain Extraction Tool) as the first step of image processing. Default BET does not always give satisfactory results on 3DT1 MR images, which negatively impacts atrophy measurements. Finding the right alternative BET settings can be a difficult and time-consuming task, which can introduce unwanted variability. The aim was to systematically analyze the performance of BET in images of MS patients by varying its parameter and option combinations, and to quantitatively compare its results to a manual gold standard. Images from 159 MS patients were selected from different MAGNIMS consortium centers, and 16 different 3DT1 acquisition protocols at 1.5 T or 3T. Before running BET, one of three pre-processing pipelines was applied: (1) no pre-processing, (2) removal of neck slices, or (3) additional N3 inhomogeneity correction. Then BET was applied, systematically varying the fractional intensity threshold (the "f" parameter) and with either one of the main BET options ("B" - bias field correction and neck cleanup, "R" - robust brain center estimation, or "S" - eye and optic nerve cleanup) or none. For comparison, intracranial cavity masks were manually created for all image volumes. FSL-FAST (FMRIB's Automated Segmentation Tool) tissue-type segmentation was run on all BET output images and on the image volumes masked with the manual intracranial cavity masks (thus creating the gold-standard tissue masks). The resulting brain tissue masks were quantitatively compared to the gold standard using the Dice overlap coefficient (DOC). Normalized brain volumes (NBV) were calculated with SIENAX. NBV values obtained when SIENAX used BET settings other than the default were compared to the gold-standard NBV with a paired t-test. The parameter/preprocessing/options combinations resulted in 20,988 BET runs. The median DOC for default BET (f=0.5, g=0) was 0.913 (range 0.321-0.977) across all 159 native scans. For all acquisition protocols, brain extraction was substantially improved for lower values of "f" than the default value. Using native images, optimum BET performance was observed for f=0.2 with option "B", giving median DOC=0.979 (range 0.867-0.994). Using neck removal before BET, optimum BET performance was observed for f=0.1 with option "B", giving median DOC 0.983 (range 0.844-0.996). Using the above BET options for SIENAX instead of the default, the NBV values obtained from images after neck removal with f=0.1 and option "B" did not differ statistically from NBV values obtained with the gold standard. Although default BET performs reasonably well on most 3DT1 images of MS patients, the performance can be improved substantially. The removal of the neck slices, either externally or within BET, has a marked positive effect on the brain extraction quality. BET option "B" with f=0.1 after removal of the neck slices seems to work best for all acquisition protocols.
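
    The parameter sweep in this record can be reproduced in outline with FSL's bet command-line tool, using the "-f" threshold and "-B" option named in the abstract, plus a Dice comparison against a manual mask. The file names are placeholders, and nibabel is assumed for NIfTI I/O:

        import subprocess
        import numpy as np
        import nibabel as nib  # assumed NIfTI reader

        def dice(a, b):
            """Dice overlap coefficient between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        gold = nib.load("manual_icc_mask.nii.gz").get_fdata() > 0  # manual gold standard

        # Sweep the fractional intensity threshold with bias-field/neck cleanup ("-B");
        # "-m" writes the binary brain mask as <output>_mask.nii.gz.
        for f in (0.1, 0.2, 0.3, 0.4, 0.5):
            out = f"brain_f{f:.1f}"
            subprocess.run(["bet", "T1.nii.gz", out, "-f", str(f), "-B", "-m"], check=True)
            mask = nib.load(out + "_mask.nii.gz").get_fdata() > 0
            print(f, dice(mask, gold))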

  18. Optimizing Vowel Formant Measurements in Four Acoustic Analysis Systems for Diverse Speaker Groups

    PubMed Central

    Derdemezis, Ekaterini; Kent, Ray D.; Fourakis, Marios; Reinicke, Emily L.; Bolt, Daniel M.

    2016-01-01

    Purpose This study systematically assessed the effects of select linear predictive coding (LPC) analysis parameter manipulations on vowel formant measurements for diverse speaker groups using 4 trademarked Speech Acoustic Analysis Software Packages (SAASPs): CSL, Praat, TF32, and WaveSurfer. Method Productions of 4 words containing the corner vowels were recorded from 4 speaker groups with typical development (male and female adults and male and female children) and 4 speaker groups with Down syndrome (male and female adults and male and female children). Formant frequencies were determined from manual measurements using a consensus analysis procedure to establish formant reference values, and from the 4 SAASPs (using both the default analysis parameters and with adjustments or manipulations to select parameters). Smaller differences between values obtained from the SAASPs and the consensus analysis implied more optimal analysis parameter settings. Results Manipulations of default analysis parameters in CSL, Praat, and TF32 yielded more accurate formant measurements, though the benefit was not uniform across speaker groups and formants. In WaveSurfer, manipulations did not improve formant measurements. Conclusions The effects of analysis parameter manipulations on accuracy of formant-frequency measurements varied by SAASP, speaker group, and formant. The information from this study helps to guide clinical and research applications of SAASPs. PMID:26501214
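
    The LPC settings manipulated in this record (notably the prediction order) map onto the textbook autocorrelation-LPC formant estimator sketched below. This is the standard method in rough outline, not the implementation of any of the four packages:

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def lpc_formants(frame, sr, order=12):
            """Rough formant estimates for one voiced frame via autocorrelation LPC."""
            x = np.asarray(frame, dtype=float)
            x = np.append(x[0], x[1:] - 0.97 * x[:-1])   # pre-emphasis
            x *= np.hamming(len(x))
            acf = np.correlate(x, x, mode="full")[len(x) - 1:]
            # Solve the Yule-Walker (normal) equations for the LPC coefficients
            a = solve_toeplitz(acf[:order], -acf[1:order + 1])
            roots = [z for z in np.roots(np.concatenate(([1.0], a))) if z.imag > 0]
            freqs = np.angle(roots) * sr / (2 * np.pi)
            bandwidths = -np.log(np.abs(roots)) * sr / np.pi
            # Keep well-resolved resonances in the speech range
            return sorted(f for f, b in zip(freqs, bandwidths) if f > 90 and b < 400)

    Raising or lowering the order changes which resonances are resolved, which is exactly the kind of default-parameter effect the study quantifies.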

  19. Design of a Sixteen Bit Pipelined Adder Using CMOS Bulk P-Well Technology.

    DTIC Science & Technology

    1984-12-01

    node’s current value. These rules are based on the assumption that the event that was last calculated reflects the latest configuration of the network...Lines beginning with - are treated as a comment. The parameter names and their default values are: ;configuration file for ’standard’ MPC process capm .2a

  20. Ada Compiler Validation Summary Report: NATO SWG on APSE Compiler for VAX/VMS to MC68020 Version VCM1.82-02, VAX 8350 under VMS 5.4-1 with CAIS 5.5E Host Motorola MVME 133XT (MC68020 bare machine) Target

    DTIC Science & Technology

    1992-03-06

    and their respective value.

        Macro Parameter     Macro Value
        $ACCSIZE            32
        $ALIGNMENT          4
        $COUNT-LAST         2 147 483 647
        $DEFAULT-KMNSIZE    2147483648
        $DEFAULT-STOR...

    The subprogram raise_exception raises the exception described by the information record supplied as parameter. In addition to the subprogram

  1. Green Infrastructure Tool | EPA Center for Exposure ...

    EPA Pesticide Factsheets

    2016-03-07

    Units option added – SI or US units; default option is US units.
    Additional options added to FTABLE, such as clear FTABLE.
    Significant digits for FTABLE calculations changed to 5.
    Previously a default Cd value was used for calculations (under-drain and riser); now a user-defined value option is given.
    Conversion options added wherever necessary.
    Default values of suction head and hydraulic conductivity are changed based on the units selected in the infiltration panel.
    Default value of Cd for riser orifice and under-drain textboxes changed to 0.6.
    Previously a default increment value of 0.1 was used for all the channel panels; now the user can specify the increment.

  2. An inventory of nitrous oxide emissions from agriculture in the UK using the IPCC methodology: emission estimate, uncertainty and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Brown, L.; Armstrong Brown, S.; Jarvis, S. C.; Syed, B.; Goulding, K. W. T.; Phillips, V. R.; Sneath, R. W.; Pain, B. F.

    Nitrous oxide emission from UK agriculture was estimated, using the IPCC default values of all emission factors and parameters, to be 87 Gg N2O-N in both 1990 and 1995. This estimate was shown, however, to have an overall uncertainty of 62%. The largest component of the emission (54%) was from the direct (soil) sector. Two of the three emission factors applied within the soil sector, EF1 (direct emission from soil) and EF3PRP (emission from pasture, range and paddock), were amongst the most influential on the total estimate, producing a ±31% and a +11% to -17% change in emissions, respectively, when varied through the IPCC range from the default value. The indirect sector (from leached N and deposited ammonia) contributed 29% of the total emission, and had the largest uncertainty (126%). The factors determining the fraction of N leached (FracLEACH) and emissions from it (EF5) were the two most influential. These parameters are poorly specified and there is great potential to improve the emission estimate for this component. Use of mathematical models (NCYCLE and SUNDIAL) to predict FracLEACH suggested that the IPCC default value for this parameter may be too high for most situations in the UK. Comparison with other UK-derived inventories suggests that the IPCC methodology may overestimate emission. Although the IPCC approach includes additional components to the other inventories (most notably emission from indirect sources), estimates for the common components (i.e. fertiliser and animals), and emission factors used, are higher than those of other inventories. Whilst it is recognised that the IPCC approach is generalised in order to allow widespread applicability, sufficient data are available to specify at least two of the most influential parameters, i.e. EF1 and FracLEACH, more accurately, and so provide an improved estimate of nitrous oxide emissions from UK agriculture.
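
    The leverage of EF1 on the total can be checked with the figures quoted in this record alone:

        # Back-of-envelope sensitivity using only numbers from the abstract:
        # 87 Gg N2O-N total, 54% from the direct (soil) sector, and a +/-31%
        # swing in the total as EF1 spans its IPCC range.
        total = 87.0         # Gg N2O-N per year (1990 and 1995 estimate)
        direct_share = 0.54  # direct (soil) sector fraction
        ef1_swing = 0.31     # relative change in total from EF1's IPCC range

        print(f"direct soil emission ~ {total * direct_share:.1f} Gg N2O-N")
        print(f"EF1 range alone moves the total by +/- {total * ef1_swing:.1f} Gg N2O-N")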

  3. Strontium-90 Biokinetics from Simulated Wound Intakes in Non-human Primates Compared with Combined Model Predictions from National Council on Radiation Protection and Measurements Report 156 and International Commission on Radiological Protection Publication 67.

    PubMed

    Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh

    2016-01-01

    The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving one-time intramuscular injections of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved model fit and predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample-population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity produced improvements in three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.

  4. TIM Version 3.0 beta Technical Description and User Guide - Appendix A - User's Guidance for TIM v.3.0(beta)

    EPA Pesticide Factsheets

    Provides detailed guidance to the user on how to select input parameters for running the Terrestrial Investigation Model (TIM) and recommendations for default values that can be used when no chemical-specific or species-specific information is available.

  5. Gray Infrastructure Tool | EPA Center for Exposure ...

    EPA Pesticide Factsheets

    2016-03-07

    Natural channel with flood plain panel added.
    Default depth increment of 0.5 is used for Natural Channel with FP.
    Units option added – SI or US units; default option is US units.
    Conversion options added wherever necessary.
    Additional options added to FTABLE, such as clear FTABLE.
    Significant digits for FTABLE calculations changed to 4.
    Previously a default Cd value was used for calculations (under-drain and riser); now a user-defined value is used.
    Default value of Cd for riser orifice and under-drain textboxes changed to 0.6.
    Previously a default increment value of 0.1 was used for all the channel panels; now the user can specify the increment.

  6. Assigning Robust Default Values in Building Performance Simulation Software for Improved Decision-Making in the Initial Stages of Building Design.

    PubMed

    Hiyama, Kyosuke

    2015-01-01

    Applying data mining techniques to a database of BIM models could provide valuable insights into key design patterns implicitly present in these BIM models. The architectural designer would then be able to use previous data from existing building projects as default values in building performance simulation software for the early phases of building design. The author has proposed a method to minimize the magnitude of the variation in these default values in subsequent design stages. This approach maintains the accuracy of the simulation results in the initial stages of building design. In this study, a more convincing argument is presented to demonstrate the significance of the new method. The variation in the ideal default values for different building design conditions is assessed first. Next, the influence of each condition on these variations is investigated. The space depth is found to have a large impact on the ideal default value of the window-to-wall ratio. In addition, the presence or absence of lighting control and natural ventilation has a significant influence on the ideal default value. These effects can be used to identify the types of building conditions that should be considered to determine the ideal default values.

  7. Assigning Robust Default Values in Building Performance Simulation Software for Improved Decision-Making in the Initial Stages of Building Design

    PubMed Central

    2015-01-01

    Applying data mining techniques to a database of BIM models could provide valuable insights into key design patterns implicitly present in these BIM models. The architectural designer would then be able to use previous data from existing building projects as default values in building performance simulation software for the early phases of building design. The author has proposed a method to minimize the magnitude of the variation in these default values in subsequent design stages. This approach maintains the accuracy of the simulation results in the initial stages of building design. In this study, a more convincing argument is presented to demonstrate the significance of the new method. The variation in the ideal default values for different building design conditions is assessed first. Next, the influence of each condition on these variations is investigated. The space depth is found to have a large impact on the ideal default value of the window-to-wall ratio. In addition, the presence or absence of lighting control and natural ventilation has a significant influence on the ideal default value. These effects can be used to identify the types of building conditions that should be considered to determine the ideal default values. PMID:26090512

  8. Early Detection | Division of Cancer Prevention

    Cancer.gov


  9. On the zeroth-order hamiltonian for CASPT2 calculations of spin crossover compounds.

    PubMed

    Vela, Sergi; Fumanal, Maria; Ribas-Ariño, Jordi; Robert, Vincent

    2016-04-15

    Complete active space self-consistent field theory (CASSCF) calculations and subsequent second-order perturbation theory treatment (CASPT2) are discussed in the evaluation of the spin-state energy difference (ΔH(elec)) of a series of seven spin crossover (SCO) compounds. The reference values have been extracted from a combination of experimental measurements and DFT + U calculations, as discussed in a recent article (Vela et al., Phys Chem Chem Phys 2015, 17, 16306). It is definitively shown that the critical IPEA parameter used in CASPT2 calculations of ΔH(elec), a key parameter in the design of SCO compounds, should be modified with respect to its default value of 0.25 a.u. and increased up to 0.50 a.u. The satisfactory agreement observed previously in the literature might result from an error cancellation originating in the default IPEA, which overestimates the stability of the HS state, and the erroneous atomic orbital basis set contraction of carbon atoms, which stabilizes the LS states.

  10. Development of genetic algorithm-based optimization module in WHAT system for hydrograph analysis and model application

    NASA Astrophysics Data System (ADS)

    Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.

    2010-07-01

    Many hydrologic and water quality computer models have been developed and applied to assess hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R2 value and the Nash-Sutcliffe coefficient values were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect hydrological and hydrogeological situations in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFImax parameter and filter parameter of the Eckhardt digital filter. With the automated recession curve analysis method and the BFImax GA-Analyzer module of the WHAT system, the optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using the optimized BFImax and filter parameter resulted in an R2 value of 0.66 and a Nash-Sutcliffe coefficient value of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33% and estimated NPS pollutant loadings increased by more than 20%. This indicates L-THIA model direct runoff estimates can be incorrect by 33% and NPS pollutant loading estimates by more than 20% if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
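
    The Eckhardt filter named in this record has a standard recursive form (Eckhardt, 2005). Below is a minimal implementation with the WHAT defaults quoted in the abstract (BFImax = 0.80, filter parameter = 0.98); the initialization of the first baseflow value is my own simple assumption:

        import numpy as np

        def eckhardt_baseflow(q, bfi_max=0.80, alpha=0.98):
            """Separate baseflow from a daily streamflow series q."""
            q = np.asarray(q, dtype=float)
            b = np.empty_like(q)
            b[0] = bfi_max * q[0]  # simple initialization assumption
            for t in range(1, len(q)):
                b[t] = ((1 - bfi_max) * alpha * b[t - 1]
                        + (1 - alpha) * bfi_max * q[t]) / (1 - alpha * bfi_max)
                b[t] = min(b[t], q[t])  # baseflow cannot exceed streamflow
            return b, q - b             # baseflow, direct runoff

    Re-running the separation with the optimized values from the abstract (bfi_max=0.491, alpha=0.987) shows how strongly the split between direct runoff and baseflow depends on these two parameters.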

  11. 40 CFR Appendix Ix to Part 266 - Methods Manual for Compliance With the BIF Regulations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Systems 2.1 Performance Specifications for Continuous Emission Monitoring of Carbon Monoxide and Oxygen for... Methodology for Bevill Residue Determinations 8.0 Procedures for Determining Default Values for Air Pollution Control System Removal Efficiencies 8.1 APCS RE Default Values for Metals 8.2 APCS RE Default Values for HCl...

  12. 40 CFR Appendix Ix to Part 266 - Methods Manual for Compliance With the BIF Regulations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Systems 2.1 Performance Specifications for Continuous Emission Monitoring of Carbon Monoxide and Oxygen for... Methodology for Bevill Residue Determinations 8.0 Procedures for Determining Default Values for Air Pollution Control System Removal Efficiencies 8.1 APCS RE Default Values for Metals 8.2 APCS RE Default Values for HCl...

  13. Data Science Bowl Launched to Improve Lung Cancer Screening | Division of Cancer Prevention

    Cancer.gov


  14. Assessment of an extended dataset of in vitro human dermal absorption studies on pesticides to determine default values, opportunities for read-across and influence of dilution on absorption.

    PubMed

    Aggarwal, M; Fisher, P; Hüser, A; Kluxen, F M; Parr-Dobrzanski, R; Soufi, M; Strupp, C; Wiemann, C; Billington, R

    2015-06-01

    Dermal absorption is a key parameter in non-dietary human safety assessments for agrochemicals. Conservative default values and other criteria in the EFSA guidance have substantially increased generation of product-specific in vitro data and in some cases, in vivo data. Therefore, data from 190 GLP- and OECD guideline-compliant human in vitro dermal absorption studies were published, suggesting EFSA defaults and criteria should be revised (Aggarwal et al., 2014). This follow-up article presents data from an additional 171 studies and also the combined dataset. Collectively, the data provide consistent and compelling evidence for revision of EFSA's guidance. This assessment covers 152 agrochemicals, 19 formulation types and representative ranges of spray concentrations. The analysis used EFSA's worst-case dermal absorption definition (i.e., an entire skin residue, except for surface layers of stratum corneum, is absorbed). It confirmed previously proposed default values of 6% for liquid and 2% for solid concentrates, irrespective of active substance loading, and 30% for all spray dilutions, irrespective of formulation type. For concentrates, absorption from solvent-based formulations provided reliable read-across for other formulation types, as did water-based products for solid concentrates. The combined dataset confirmed that absorption does not increase linearly beyond a 5-fold increase in dilution. Finally, despite using EFSA's worst-case definition for absorption, a rationale for routinely excluding the entire stratum corneum residue, and ideally the entire epidermal residue in in vitro studies, is presented.

  15. Development of a Three Dimensional Perfectly Matched Layer for Transient Elasto-Dynamic Analyses

    DTIC Science & Technology

    2006-12-01

    MacLean [Ref. 47] introduced a small tracked vehicle with dual inertial mass shakers mounted on top as a mobile source. It excited Rayleigh waves, but...routine initializes and sets default values for: * the application parameters * the material data base parameters * the entries to appear on the...Underground seismic array experiments. National Institute of Nuclear Physics, 2005. [47] D. J. MacLean. Mobile source development for seismic-sonar based

  16. NIH Seeks Input on In-patient Clinical Research Areas | Division of Cancer Prevention

    Cancer.gov


  17. Pancreatic Cancer Detection Consortium (PCDC) | Division of Cancer Prevention

    Cancer.gov


  18. PAR -- Interface to the ADAM Parameter System

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm J.; Chipperfield, Alan J.

    PAR is a library of Fortran subroutines that provides convenient mechanisms for applications to exchange information with the outside world, through input-output channels called parameters. Parameters enable a user to control an application's behaviour. PAR supports numeric, character, and logical parameters, and is currently implemented only on top of the ADAM parameter system. The PAR library permits parameter values to be obtained, with or without a variety of constraints. Results may be put into parameters to be passed on to other applications. Other facilities include setting a prompt string and suggested defaults. This document also introduces a preliminary C interface for the PAR library; this may be subject to change in the light of experience.

  19. 19 CFR 113.73 - Foreign trade zone operator bond conditions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the foreign trade zone or subzone. If the principal defaults and the default involves merchandise... merchandise involved in the default, or three times the value of the merchandise involved in the default if... as may be authorized by law or regulation. It is understood and agreed that whether the default...

  20. Emission of biocides from hospitals: comparing current survey results with European Union default values.

    PubMed

    Tluczkiewicz, Inga; Bitsch, Annette; Hahn, Stefan; Hahn, Torsten

    2010-04-01

    Under the European Union (EU) Biocidal Products Directive 98/8/EC, comprehensive evaluations on substances of the Third Priority List were conducted until 31 July 2007. This list includes, among other categories, disinfectants for human hygiene (e.g., skin and surface disinfection). For environmental exposure assessment of biocides, the EU emission scenarios apply. Currently available default values for disinfectants are based on consumption data from not more than 8 hospitals and were originally assembled for other purposes. To revalidate these default values, a survey on annual consumption data was performed in 27 German hospitals. These data were analyzed to provide consumption data per bed and day and per nurse and day for particular categories of active ingredients and were compared with default values from the EU emission scenario documents. Although several deviations were detected, an overall acceptable correspondence between Emission Scenario Documents default values and the current survey data was found.

  1. The hydraulic capacity of deteriorating sewer systems.

    PubMed

    Pollert, J; Ugarelli, R; Saegrov, S; Schilling, W; Di Federico, V

    2005-01-01

    Sewer and wastewater systems suffer from insufficient capacity, construction flaws and pipe deterioration. Consequences are structural failures, local floods, surface erosion and pollution of receiving water bodies. European cities spend on the order of five billion euros per year on wastewater network rehabilitation. This amount is estimated to increase due to network ageing. The project CARE-S (Computer Aided RE-habilitation of Sewer Networks) deals with sewer and storm water networks. The final project goal is to develop integrated software, which provides the most cost-efficient system of maintenance, repair and rehabilitation of sewer networks. Decisions on investments in rehabilitation often have to be made with uncertain information about the structural condition and the hydraulic performance of a sewer system. Because of this, decision-making involves considerable risks. This paper presents the results of research focused on the study of hydraulic effects caused by failures due to temporal decline of sewer systems. Hydraulic simulations are usually carried out by running commercial models that apply, as input, default values of parameters that strongly influence results. Using CCTV inspection information as a dataset to catalogue the principal types of failures affecting pipes, a 3D model was used to evaluate the hydraulic consequences of those failures. The translation of failure effects into parameter values producing the same hydraulic conditions was carried out by comparing laboratory experiments with 3D simulation results. Those parameters could then be input to 1D commercial models in place of the commonly inserted default values.

  2. General properties of solutions to inhomogeneous Black-Scholes equations with discontinuous maturity payoffs

    NASA Astrophysics Data System (ADS)

    O, Hyong-Chol; Jo, Jong-Jun; Kim, Ji-Sok

    2016-02-01

    We provide representations of solutions to terminal value problems of inhomogeneous Black-Scholes equations and study such general properties as min-max estimates, gradient estimates, monotonicity and convexity of the solutions with respect to the stock price variable, which are important for financial security pricing. In particular, we focus on finding representation of the gradient (with respect to the stock price variable) of solutions to the terminal value problems with discontinuous terminal payoffs or inhomogeneous terms. Such terminal value problems are often encountered in pricing problems of compound-like options such as Bermudan options or defaultable bonds with discrete default barrier, default intensity and endogenous default recovery. Our results can be used in pricing real defaultable bonds under consideration of existence of discrete coupons or taxes on coupons.
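
    For orientation, the terminal value problems studied here have the general shape of the inhomogeneous Black-Scholes equation; in a standard notation (assumed here, not quoted from the paper):

        \[ \frac{\partial V}{\partial t} + \frac{\sigma^2 S^2}{2}\frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = f(S,t), \qquad V(S,T) = \varphi(S), \]

    where S is the stock price, r the risk-free rate, sigma the volatility, f(S,t) an inhomogeneous term (e.g., coupon or tax flows), and varphi a possibly discontinuous terminal payoff.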

  3. A review of ADM1 extensions, applications, and analysis: 2002-2005.

    PubMed

    Batstone, D J; Keller, J; Steyer, J P

    2006-01-01

    Since publication of the Scientific and Technical Report (STR) describing the ADM1, the model has been extensively used, and analysed in both academic and practical applications. Adoption of the ADM1 in popular systems analysis tools such as the new wastewater benchmark (BSM2), and its use as a virtual industrial system can stimulate modelling of anaerobic processes by researchers and practitioners outside the core expertise of anaerobic processes. It has been used as a default structural element that allows researchers to concentrate on new extensions such as sulfate reduction, and new applications such as distributed parameter modelling of biofilms. The key limitations for anaerobic modelling originally identified in the STR were: (i) regulation of products from glucose fermentation, (ii) parameter values, and variability, and (iii) specific extensions. Parameter analysis has been widespread, and some detailed extensions have been developed (e.g., sulfate reduction). A verified extension that describes regulation of products from glucose fermentation is still limited, though there are promising fundamental approaches. This is a critical issue, given the current interest in renewable hydrogen production from carbohydrate-type waste. Critical analysis of the model has mainly focused on model structure reduction, hydrogen inhibition functions, and the default parameter set recommended in the STR. This default parameter set has largely been verified as a reasonable compromise, especially for wastewater sludge digestion. One criticism of note is that the ADM1 stoichiometry focuses on catabolism rather than anabolism. This means that inorganic carbon can be used unrealistically as a carbon source during some anabolic reactions. Advances and novel applications have also been made in the present issue, which focuses on the ADM1. These papers also explore a number of novel areas not originally envisaged in this review.

  4. Simulation of the right-angle car collision based on identified parameters

    NASA Astrophysics Data System (ADS)

    Kostek, R.; Aleksandrowicz, P.

    2017-10-01

    This article presents the influence of contact parameters on the collision pattern of vehicles. In this case, a crash of two Fiat Cinquecentos with perpendicular median planes was simulated. The first vehicle was driven at a speed of 50 km/h and crashed into the other one, which was standing still; this is a typical collision at junctions. For the first simulation, the default parameters of the V-SIM simulation program were assumed; then the parameters identified from the crash test of a Fiat Cinquecento published by ADAC (Allgemeiner Deutscher Automobil-Club) were used. Different post-impact movements were observed in the two simulations, which demonstrates the sensitivity of the simulation results to the assumed parameters. Applying the default parameters offered by the program can lead to an inadequate evaluation of the collision due to its only approximate reconstruction, which in consequence influences the court decision. It was demonstrated how complex it is to reconstruct the pattern of a vehicle crash and what problems are faced by expert witnesses who tend to use default parameters.

  5. Evaluation and improvement of wastewater treatment plant performance using BioWin

    NASA Astrophysics Data System (ADS)

    Oleyiblo, Oloche James; Cao, Jiashun; Feng, Qian; Wang, Gan; Xue, Zhaoxia; Fang, Fang

    2015-03-01

    In this study, the activated sludge model implemented in the BioWin® software was validated against full-scale wastewater treatment plant data. Only two stoichiometric parameters (Y_p/acetic and the heterotrophic yield, Y_H) required calibration. The value 0.42 was used for Y_p/acetic in this study, while the default value in the BioWin® software is 0.49; this makes it comparable with the default values of the corresponding parameter (yield of phosphorus release to substrate uptake) used in ASM2, ASM2d, and ASM3P, respectively. Three scenarios were evaluated to improve the performance of the wastewater treatment plant: wasting sludge from either the aeration tank or the secondary clarifier, constructing a new oxidation ditch, and constructing an equalization tank. The results suggest that construction of a new oxidation ditch or an equalization tank for the wastewater treatment plant is not necessary. However, sludge should be wasted from the aeration tank during wet weather to reduce the solids loading of the clarifiers and avoid effluent violations. Therefore, it is recommended that the design of wastewater treatment plants (WWTPs) include flexibility to operate the plants in various modes. This helps in selecting the appropriate operating mode when necessary, resulting in substantial reductions in operating costs.

  6. Use of DandD for dose assessment under NRC`s radiological criteria for license termination rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallegos, D.P.; Brown, T.J.; Davis, P.A.

    The Decontamination and Decommissioning (DandD) software package has been developed by Sandia National Laboratories for the Nuclear Regulatory Commission (NRC) specifically for the purpose of providing a user-friendly analytical tool to address the dose criteria contained in NRC's Radiological Criteria for License Termination rule (10 CFR Part 20 Subpart E; NRC, 1997). Specifically, DandD embodies the NRC's screening methodology to allow licensees to convert residual radioactivity contamination levels at their site to annual dose, in a manner consistent with both 10 CFR Part 20 and the corresponding implementation guidance developed by NRC. The screening methodology employs reasonably conservative scenarios, fate and transport models, and default parameter values that have been developed to allow the NRC to quantitatively estimate the risk of releasing a site given only information about the level of contamination. Therefore, a licensee has the option of specifying only the level of contamination and running the code with the default parameter values, or, in the case where site-specific information is available, of altering the appropriate parameter values and then calculating dose. DandD can evaluate dose for four different scenarios: residential, building occupancy, building renovation, or drinking water. The screening methodology and DandD are part of a larger decision framework that allows and encourages licensees to optimize decisions on choice of alternative actions at their site, including collection of additional data and information. This decision framework is integrated into and documented in NRC's technical guidance for decommissioning.

  7. Replacing Fortran Namelists with JSON

    NASA Astrophysics Data System (ADS)

    Robinson, T. E., Jr.

    2017-12-01

    Maintaining a log of input parameters for a climate model is very important for understanding potential causes of answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern approach to software infrastructure that accommodates code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code, avoiding the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about the location of syntax errors or typos. The output JSON provides a ground truth for the values that the model actually uses, containing not only the values loaded through the input JSON but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
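
    A minimal sketch of the pattern described above (not the GFDL implementation; parameter names are hypothetical): a JSON input is merged over hard-coded defaults, and the merged set is echoed to a single output file that records exactly what the model ran with.

        # Minimal sketch: read a JSON input file, merge it over model defaults,
        # and write the merged set back out as a "ground truth" record.
        # Parameter names here are hypothetical.
        import json

        DEFAULTS = {"dt_atmos": 1800, "use_new_physics": False, "albedo": 0.3}

        def load_input(path):
            with open(path) as f:
                user = json.load(f)      # syntax errors raise with line/column info
            return {**DEFAULTS, **user}  # user values override defaults

        params = load_input("input.json")
        with open("used_params.json", "w") as f:
            json.dump(params, f, indent=2)  # defaults plus overrides in one parsable file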

  8. Accounting for the dissociating properties of organic chemicals in LCIA: an uncertainty analysis applied to micropollutants in the assessment of freshwater ecotoxicity.

    PubMed

    Morais, Sérgio Alberto; Delerue-Matos, Cristina; Gabarrell, Xavier

    2013-03-15

    In life cycle impact assessment (LCIA) models, the sorption of the ionic fraction of dissociating organic chemicals is not adequately modeled because conventional non-polar partitioning models are applied. Therefore, high uncertainties are expected when modeling the mobility, as well as the bioavailability for uptake by exposed biota and degradation, of dissociating organic chemicals. Alternative regressions that account for the ionized fraction of a molecule to estimate fate parameters were applied to the USEtox model. The most sensitive model parameters in the estimation of ecotoxicological characterization factors (CFs) of micropollutants were evaluated by Monte Carlo analysis in both the default USEtox model and the alternative approach. Negligible differences of CFs values and 95% confidence limits between the two approaches were estimated for direct emissions to the freshwater compartment; however the default USEtox model overestimates CFs and the 95% confidence limits of basic compounds up to three orders and four orders of magnitude, respectively, relatively to the alternative approach for emissions to the agricultural soil compartment. For three emission scenarios, LCIA results show that the default USEtox model overestimates freshwater ecotoxicity impacts for the emission scenarios to agricultural soil by one order of magnitude, and larger confidence limits were estimated, relatively to the alternative approach. Copyright © 2013 Elsevier B.V. All rights reserved.
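
    For context, the neutral (non-ionized) fraction on which such regressions depend follows from the textbook Henderson-Hasselbalch speciation relation (shown here as background; not necessarily the exact regressions evaluated in the paper):

        \[ f_{\text{neutral}} = \frac{1}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_a}} \ \text{(acids)}, \qquad f_{\text{neutral}} = \frac{1}{1 + 10^{\,\mathrm{p}K_a-\mathrm{pH}}} \ \text{(bases)}. \]

    For a base with pKa = 9 at pH 7, for example, only about 1% of molecules are neutral, which is why non-polar partitioning models can badly misestimate the sorption and mobility of dissociating chemicals.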

  9. Numerically design the injection process parameters of parts fabricated with ramie fiber reinforced green composites

    NASA Astrophysics Data System (ADS)

    Chen, L. P.; He, L. P.; Chen, D. C.; Lu, G.; Li, W. J.; Yuan, J. M.

    2017-01-01

    The warpage deformation plays an important role in the performance of automobile interior components fabricated with natural fiber reinforced composites. The present work investigated the influence of process parameters on the warpage behavior of an A pillar trim made of ramie fiber (RF) reinforced polypropylene (PP) composites (RF/PP) via numerical simulation with the orthogonal experiment method and range analysis. The results indicated that fiber addition and packing pressure were the most important factors affecting warpage. The A pillar trim achieved a minimum warpage value of 2.124 mm under the optimum parameters. The optimal process parameters are: 70% of the default injection pressure for the packing pressure, 20 wt% for the fiber addition, 185 °C for the melt temperature, … °C for the mold temperature, 7 s for the filling time and 17 s for the packing time.

  10. Chemopreventive Agent Development | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"174","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Chemoprevenentive Agent Development Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Chemoprevenentive Agent Development Research Group Homepage

  11. Prostate and Urologic Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"183","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Prostate and Urologic Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Prostate and Urologic Cancer Research Group Homepage

  12. Collective firm bankruptcies and phase transition in rating dynamics

    NASA Astrophysics Data System (ADS)

    Sieczka, P.; Hołyst, J. A.

    2009-10-01

    We present a simple model of firm rating evolution. We consider two sources of defaults: the individual dynamics of economic development and Potts-like interactions between firms. We show that such a model leads to a phase transition, which results in collective defaults. The existence of the collective phase depends on the mean interaction strength. For small interaction strength parameters, there are many independent bankruptcies of individual companies. For large parameters, there are giant collective defaults of firm clusters. In the case when the individual firm dynamics favors damping of rating changes, there is an optimal strength of the firms' interactions from the systemic risk point of view.

  13. A Systems Model for Power Technology Assessment

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.

    2002-01-01

    A computer model is under continuing development at NASA Glenn Research Center that enables first-order assessments of space power technology. The model, an evolution of NASA Glenn's Array Design Assessment Model (ADAM), is an Excel workbook that consists of numerous spreadsheets containing power technology performance data and sizing algorithms. Underlying the model is a number of databases that contain default values for various power generation, energy storage, and power management and distribution component parameters. These databases are actively maintained by a team of systems analysts so that they contain state-of-the-art data as well as the most recent technology performance projections. Sizing of the power subsystems can be accomplished either by using an assumed mass specific power (W/kg) or energy (Wh/kg) or by a bottoms-up calculation that accounts for individual component performance and masses. The power generation, energy storage and power management and distribution subsystems are sized for given mission requirements for a baseline case and up to three alternatives. This allows four different power systems to be sized and compared using consistent assumptions and sizing algorithms. The component sizing models contained in the workbook are modular so that they can be easily maintained and updated. All significant input values have default values loaded from the databases that can be overwritten by the user. The default data and sizing algorithms for each of the power subsystems are described in some detail. The user interface and workbook navigational features are also discussed. Finally, an example study case that illustrates the model's capability is presented.
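
    The two sizing modes described above reduce to a simple contrast, sketched below; the function names and numbers are hypothetical illustrations, not ADAM's actual algorithms or database values.

        # Illustrative sketch of the two sizing approaches.
        def size_by_specific_power(load_w, specific_power_w_per_kg):
            """Top-level sizing from an assumed mass specific power (W/kg)."""
            return load_w / specific_power_w_per_kg           # subsystem mass, kg

        def size_bottoms_up(component_masses_kg):
            """Bottoms-up sizing: sum of individual component masses."""
            return sum(component_masses_kg.values())          # subsystem mass, kg

        array_kg = size_by_specific_power(5000.0, 80.0)       # 5 kW array at 80 W/kg
        pmad_kg = size_bottoms_up({"harness": 12.0, "converters": 8.5, "switchgear": 4.0})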

  14. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default Values...

  15. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...

  16. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...

  17. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...

  18. Combination of DTI and fMRI reveals the white matter changes correlating with the decline of default-mode network activity in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Wu, Xianjun; Di, Qian; Li, Yao; Zhao, Xiaojie

    2009-02-01

    Recently, evidence from fMRI studies has shown decreased activity in the default-mode network in Alzheimer's disease (AD), and DTI research has also demonstrated that demyelination exists in the white matter of AD patients. Combining these two MRI methods may therefore help to reveal the relationship between white matter damage and alterations of the resting-state functional connectivity network. In the present study, we addressed this issue by means of correlation analysis between DTI and resting-state fMRI images. First, the default-mode networks of the AD and normal control groups were compared to find the areas with significantly declined activity. Then, the white matter regions whose fractional anisotropy (FA) values correlated with this decline were located through multiple regressions between the FA values and the BOLD response of the default-mode network. Among these correlating white matter regions, those whose FA values also declined were found by a group comparison between AD patients and healthy elderly control subjects. Our results showed that the areas with decreased activity in the default-mode network included the left posterior cingulate cortex (PCC) and the left medial temporal gyrus, among others, and that the damaged white matter areas correlated with the default-mode network alterations were located around the left sub-gyral temporal lobe. These changes may relate to the decreased connectivity between the PCC and the medial temporal lobe (MTL), and thus correlate with the deficiency of default-mode network activity.

  19. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time, which constitutes a first approach to automating processes in mapping applications such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored over different images and parameter values, finding optimal values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
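
    By way of illustration, OpenCV's SIFT implementation exposes exactly five tunable detector parameters; the sketch below shows the knobs (with OpenCV's defaults), not the optimal values found in the paper.

        # Sketch: the five tunable SIFT detector parameters via cv2.SIFT_create.
        import cv2

        def detect(gray_image, nfeatures=0, nOctaveLayers=3,
                   contrastThreshold=0.04, edgeThreshold=10, sigma=1.6):
            sift = cv2.SIFT_create(nfeatures=nfeatures,
                                   nOctaveLayers=nOctaveLayers,
                                   contrastThreshold=contrastThreshold,
                                   edgeThreshold=edgeThreshold,
                                   sigma=sigma)
            # Keypoints carry a scale (kp.size); keeping only large ones mimics
            # the "focus on large features" strategy described above.
            keypoints, descriptors = sift.detectAndCompute(gray_image, None)
            return keypoints, descriptors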

  20. Markov Chain Model with Catastrophe to Determine Mean Time to Default of Credit Risky Assets

    NASA Astrophysics Data System (ADS)

    Dharmaraja, Selvamuthu; Pasricha, Puneet; Tardelli, Paola

    2017-11-01

    This article deals with the problem of probabilistic prediction of the time distance to default for a firm. To model the credit risk, the dynamics of an asset is described as a function of a homogeneous discrete time Markov chain subject to a catastrophe, the default. The behaviour of the Markov chain is investigated and the mean time to the default is expressed in a closed form. The methodology to estimate the parameters is given. Numerical results are provided to illustrate the applicability of the proposed model on real data and their analysis is discussed.
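
    Generically, for a discrete-time chain with transient rating states and an absorbing default state, the mean time to default solves a linear system; the sketch below shows this standard computation with an illustrative (not estimated) transition matrix.

        # Mean time to absorption: t = (I - Q)^{-1} * 1, where Q is the
        # substochastic transition matrix among transient (rating) states and
        # each row's missing probability mass flows to the default state.
        import numpy as np

        Q = np.array([[0.90, 0.08, 0.01],
                      [0.05, 0.85, 0.07],
                      [0.00, 0.10, 0.80]])  # illustrative values only

        t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
        print(t)  # expected number of steps to default from each rating state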

  1. Parameter optimization of parenchymal texture analysis for prediction of false-positive recalls from screening mammography

    NASA Astrophysics Data System (ADS)

    Ray, Shonket; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2016-03-01

    This work details a methodology to obtain optimal parameter values for a locally-adaptive texture analysis algorithm that extracts mammographic texture features representative of breast parenchymal complexity for predicting false-positive (FP) recalls from breast cancer screening with digital mammography. The algorithm has two components: (1) adaptive selection of localized regions of interest (ROIs) and (2) Haralick texture feature extraction via Gray-Level Co-Occurrence Matrices (GLCMs). The following parameters were systematically varied: the mammographic views used, the upper limit of the ROI window size used for adaptive ROI selection, the GLCM distance offsets, and the gray levels (binning) used for feature extraction. For each parameter set, logistic regression with stepwise feature selection was performed on a clinical screening cohort of 474 non-recalled women and 68 FP-recalled women; FP recall prediction was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), and associations between the extracted features and FP recall were assessed via odds ratios (OR). A default instance of the mediolateral oblique (MLO) view, an upper ROI size limit of 143.36 mm (2048 pixels²), a GLCM distance offset range of 0.07 to 0.84 mm (1 to 12 pixels), and 16 GLCM gray levels was set. The highest ROC performance value of AUC = 0.77 [95% confidence interval: 0.71-0.83] was obtained at three specific instances: the default instance, an upper ROI window equal to 17.92 mm (256 pixels²), and gray levels set to 128. The texture feature of sum average was chosen as a statistically significant (p < 0.05) predictor and was associated with higher odds of FP recall in 12 out of 14 total instances.
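
    A sketch of GLCM-based Haralick feature extraction over an ROI, using scikit-image (>= 0.19 naming); the distances and levels mirror the kinds of parameters varied in the study, but the exact feature set (e.g., sum average) is not reproduced here.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(roi_uint8, distances=(1, 6, 12), levels=16):
            # Quantize the 8-bit ROI to the requested number of gray levels.
            roi = (roi_uint8.astype(np.float32) / 256.0 * levels).astype(np.uint8)
            glcm = graycomatrix(roi, distances=list(distances),
                                angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                                levels=levels, symmetric=True, normed=True)
            # Average each property over all distances and angles.
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "correlation", "energy", "homogeneity")}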

  2. The Application of Optimal Defaults to Improve Elementary School Lunch Selections: Proof of Concept

    ERIC Educational Resources Information Center

    Loeb, Katharine L.; Radnitz, Cynthia; Keller, Kathleen L.; Schwartz, Marlene B.; Zucker, Nancy; Marcus, Sue; Pierson, Richard N.; Shannon, Michael; DeLaurentis, Danielle

    2018-01-01

    Background: In this study, we applied behavioral economics to optimize elementary school lunch choices via parent-driven decisions. Specifically, this experiment tested an optimal defaults paradigm, examining whether strategically manipulating the health value of a default menu could be co-opted to improve school-based lunch selections. Methods:…

  3. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    DOE PAGES

    Duarte, Henrique F.; Raczka, Brett M.; Ricciuto, Daniel M.; ...

    2017-09-28

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site or at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C:12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture. The calibration of the Ball-Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at sites characterized by the same plant functional type (needleleaf evergreen temperate forest), despite significant differences in stand composition, age, and climatology, suggesting that CLM could benefit from a revised mbb value of 6, rather than the default value of 9, for this plant functional type. Conversely, Wind River required a unique calibration of the hydrology submodel to simulate soil moisture, suggesting that the default hydrology has a more limited applicability. This study demonstrates that carbon isotope data can be used to constrain stomatal conductance and intrinsic water use efficiency in CLM, as an alternative to eddy covariance flux measurements. It also demonstrates that carbon isotopes can expose structural weaknesses in the model and provide a key constraint that may guide future model development.
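
    The parameter singled out above enters through the Ball-Berry stomatal conductance model; in its commonly published form (general notation, not necessarily CLM's internal formulation):

        \[ g_s = g_0 + m_{bb}\,\frac{A_n h_s}{c_s}, \]

    where g_s is stomatal conductance, g_0 a minimum conductance, A_n net assimilation, h_s the fractional relative humidity at the leaf surface, and c_s the CO2 concentration at the leaf surface; lowering m_bb from 9 to 6 thus reduces the conductance simulated per unit of assimilation.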

  4. 24 CFR 203.370 - Pre-foreclosure sales.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... sold by the mortgagor, after default and prior to foreclosure, at its current fair market value (less... determined by the Secretary, which default is the result of an adverse and unavoidable financial situation... whose current fair market value, compared to the amount needed to discharge the mortgage, meets the...

  5. Cancer Biomarkers | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"175","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Cancer Biomarkers Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Cancer Biomarkers Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Cancer Biomarkers Research Group Homepage Logo","title":"Cancer

  6. Gastrointestinal and Other Cancers | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"181","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Gastrointestinal and Other Cancers Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Gastrointestinal and Other Cancers Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Gastrointestinal and Other

  7. Biometry | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"66","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Biometry Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Biometry Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Biometry Research Group Homepage Logo","title":"Biometry Research Group Homepage

  8. Evaluation of SimpleTreat 4.0: Simulations of pharmaceutical removal in wastewater treatment plant facilities.

    PubMed

    Lautz, L S; Struijs, J; Nolte, T M; Breure, A M; van der Grinten, E; van de Meent, D; van Zelm, R

    2017-02-01

    In this study, the removal of pharmaceuticals from wastewater as predicted by SimpleTreat 4.0 was evaluated. Field data on 43 pharmaceuticals, obtained from the literature and measured in 51 different activated sludge WWTPs, were used. Based on reported influent concentrations, the effluent concentrations were calculated with SimpleTreat 4.0 and compared to measured effluent concentrations. The model predicts effluent concentrations mostly within a factor of 10, using the specific WWTP parameters as well as the SimpleTreat default parameters, while it systematically underestimates concentrations in secondary sludge. This may be caused by unexpected sorption resulting from variability in WWTP operating conditions, QSAR applicability domain mismatch, and/or background concentrations prior to measurements. Moreover, variability in detection techniques and sampling methods can cause uncertainty in measured concentration levels. To find possible structural improvements, we also evaluated SimpleTreat 4.0 using several specific datasets with different degrees of uncertainty and variability. This evaluation verified that the most influential parameters for water effluent predictions were biodegradation and the hydraulic retention time. Results showed that model performance is highly dependent on the nature and quality, i.e. the degree of uncertainty, of the data. The default values for reactor settings in SimpleTreat result in realistic predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Default risk modeling with position-dependent killing

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.

    2013-04-01

    Diffusion in a linear potential in the presence of position-dependent killing is used to mimic a default process. Different assumptions regarding transport coefficients, initial conditions, and elasticity of the killing measure lead to diverse models of bankruptcy. One “stylized fact” is fundamental for our consideration: empirically default is a rather rare event, especially in the investment grade categories of credit ratings. Hence, the action of killing may be considered as a small parameter. In a number of special cases we derive closed-form expressions for the entire term structure of the cumulative probability of default, its hazard rate, and intensity. Comparison with historical data on aggregate global corporate defaults confirms the validity of the perturbation method for estimations of long-term probability of default for companies with high credit quality. On a single company level, we implement the derived formulas to estimate the one-year likelihood of default of Enron on a daily basis from August 2000 to August 2001, three months before its default, and compare the obtained results with forecasts of traditional structural models.
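
    The quantities named above are tied together by standard survival-analysis identities (general definitions, not the paper's closed-form results): writing F(t) for the cumulative probability of default and S(t) = 1 - F(t) for survival,

        \[ h(t) = \frac{F'(t)}{1 - F(t)} = -\frac{d}{dt}\ln S(t), \qquad F(t) = 1 - \exp\!\left(-\int_0^t h(u)\,du\right). \]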

  10. Time varying default barrier as an agreement rules on bond contract

    NASA Astrophysics Data System (ADS)

    Maruddani, Di Asih I.; Safitri, Diah; Hoyyi, Abdul

    2018-05-01

    There are several default time rules in the contract agreement of a bond. The classical default time rule is known as the Merton model. The most important characteristic of Merton's model is the restriction of the default time to the maturity of the debt, not taking into consideration the possibility of an early default. If the firm's value falls to a minimal level before the maturity of the debt, but the firm is able to recover and meet the debt's payment at maturity, default is avoided in Merton's approach. The Merton model has been extended by Hull & White [6] and Avellaneda & Zhu [1], who introduced a time-varying default barrier for modelling the distance-to-default process. In this paper, we give a valuation of a bond with a time-varying default barrier agreement. We use straightforward integration to obtain the equity and liability equations. The theory is applied to an Indonesian corporate bond.

  11. A calibration protocol of a one-dimensional moving bed bioreactor (MBBR) dynamic model for nitrogen removal.

    PubMed

    Barry, U; Choubert, J-M; Canler, J-P; Héduit, A; Robin, L; Lessard, P

    2012-01-01

    This work suggests a procedure to correctly calibrate the parameters of a one-dimensional MBBR dynamic model for nitrification treatment. The study deals with an MBBR configuration with two reactors in series, one for carbon treatment and the other for nitrogen treatment. Because of the influence of the first reactor on the second one, the approach needs a specific calibration strategy. Firstly, a comparison between measured values and simulated values obtained with default parameters was carried out. Simulated values of filtered COD, NH(4)-N and dissolved oxygen are underestimated and nitrates are overestimated compared with observed data; thus, the nitrification rate and oxygen transfer into the biofilm are overestimated. Secondly, a sensitivity analysis was carried out for the parameters and for the COD fractionation. It revealed three classes of sensitive parameters: physical, diffusional and kinetic. A calibration protocol for the MBBR dynamic model was then proposed. It was successfully tested on data recorded at a pilot-scale plant, and a calibrated set of values was obtained for four parameters: the maximum biofilm thickness, the detachment rate, the maximum autotrophic growth rate and the oxygen transfer rate.

  12. Evaluation, Calibration and Comparison of the Precipitation-Runoff Modeling System (PRMS) National Hydrologic Model (NHM) Using Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) Gridded Datasets

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II; Haj, A. E., Jr.

    2014-12-01

    The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.

  13. Validating models of target acquisition performance in the dismounted soldier context

    NASA Astrophysics Data System (ADS)

    Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.

    2018-04-01

    The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
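
    For reference, the TTP metric maps delivered image quality to task performance through the target transfer probability function; in its commonly published form (assumed here), with V the TTP value delivered at the target's range,

        \[ P(V) = \frac{(V/V_{50})^{E}}{1 + (V/V_{50})^{E}}, \qquad E = 1.51 + 0.24\,(V/V_{50}), \]

    so fitting V50 shifts the predicted 50% performance point, while the exponent E controls the slope of the performance function.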

  14. [Factor structure of regional CBF and CMRglu values as a tool for the study of default mode of the brain].

    PubMed

    Kataev, G V; Korotkov, A D; Kireev, M V; Medvedev, S V

    2013-01-01

    In the present article it is shown that the functional connectivity of brain structures, revealed by factor analysis of resting-state PET rCBF and rCMRglu data, is an adequate tool for studying the default mode of the human brain. The identification of the neuroanatomic systems of the default mode (the default mode network) during routine clinical PET investigations is important for further study of the functional organization of the normal brain and of its reorganization in pathological conditions.

  15. 2014 Review on the Extension of the AMedP-8(C) Methodology to New Agents, Materials, and Conditions

    DTIC Science & Technology

    2015-08-01

    chemical agents, five biological agents, seven radioisotopes, nuclear fallout, or prompt nuclear effects. Each year since 2009, OTSG has sponsored IDA...evaluated four agents: anthrax, botulinum toxin, sarin (GB), and distilled mustard (HD), first using the default parameters and methods in HPAC and...the IDA team then made incremental changes to the default casualty parameters and methods to control for all known data and methodological

  16. Breast and Gynecologic Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"184","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Breast and Gynecologic Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Breast and Gynecologic Cancer Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Breast and Gynecologic Cancer Research

  17. Community Oncology and Prevention Trials | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"168","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Early Detection Research Group Homepage Image","field_file_image_title_text[und][0][value]":"Early Detection Research Group Homepage Image","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Early Detection Research Group Homepage Image","title":"Early

  18. Lung and Upper Aerodigestive Cancer | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"180","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Lung and Upper Aerodigestive Cancer Research Group Homepage Logo","field_file_image_title_text[und][0][value]":"Lung and Upper Aerodigestive Cancer Research Group Homepage Logo","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Lung and Upper Aerodigestive

  1. Risk Factors and Mortality Associated with Default from Multidrug-Resistant Tuberculosis Treatment

    PubMed Central

    Franke, Molly F.; Appleton, Sasha C.; Bayona, Jaime; Arteaga, Fernando; Palacios, Eda; Llaro, Karim; Shin, Sonya S.; Becerra, Mercedes C.; Murray, Megan B.; Mitnick, Carole D.

    2008-01-01

    Background Completing treatment for multidrug-resistant (MDR) tuberculosis (TB) may be more challenging than completing first-line TB therapy, especially in resource-poor settings. The objectives of this study were to (1) identify risk factors for default from MDR TB therapy; (2) quantify mortality among patients who default; and (3) identify risk factors for death following default. Methods We performed a retrospective chart review to identify risk factors for default and conducted home visits to assess mortality among patients who defaulted. Results 67 of 671 patients (10.0%) defaulted. The median time to default was 438 days (interquartile range [IQR]: 152−710), and 40.3% of patients had culture-positive sputum at the time of default. Substance use (hazard ratio [HR]: 2.96, 95% confidence interval [CI]: [1.56, 5.62], p-value [p]=0.001), substandard housing conditions (HR: 1.83, CI: [1.07, 3.11], p=0.03), later year of enrollment (HR: 1.62, CI: [1.09, 2.41], p=0.02) and health district (p=0.02) predicted default in a multivariable analysis. Severe adverse events did not predict default. Of 47 (70.1%) patients who defaulted and were successfully traced, 25 (53.2%) had died. Poor bacteriologic response, less than a year of treatment at default, low education level, and diagnosis with a psychiatric disorder significantly predicted death after default in a multivariable analysis. Conclusions The proportion of patients who defaulted from MDR TB treatment was relatively low. The large proportion of patients who defaulted while culture-positive underscores the public health importance of minimizing default. Prognosis for patients who defaulted was poor. Interventions aimed at preventing default may reduce TB-related mortality. PMID:18462099

  2. Estimation of Community Land Model parameters for an improved assessment of net carbon fluxes at European sites

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan

    2017-03-01

    The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1-year records of half-hourly net ecosystem CO2 exchange (NEE) observations at four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites at 600 km distance from the original sites. Latent variables (multipliers) were used to treat explicitly the uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs; in contrast to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality-of-fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
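
    Schematically, the estimation samples the Bayesian posterior with a Metropolis-type acceptance rule (generic form shown here; DREAM(zs)'s differential-evolution proposal mechanics are omitted):

        \[ p(\theta \mid \mathrm{NEE}) \propto L(\mathrm{NEE} \mid \theta)\, p(\theta), \qquad \alpha = \min\left\{1,\; \frac{L(\mathrm{NEE} \mid \theta^{*})\, p(\theta^{*})}{L(\mathrm{NEE} \mid \theta)\, p(\theta)}\right\}, \]

    where a candidate theta* replaces the current theta with probability alpha, so the chain concentrates on parameter values that explain the observed NEE.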

  3. A Neural Network Approach to Estimating the Allowance for Bad Debt

    ERIC Educational Resources Information Center

    Joyner, Donald Thomas

    2011-01-01

    The granting of credit is a necessary risk of doing business. If companies only accepted cash, sales would be negatively impacted. In a perfect world, all consumers would pay their bills when they become due. However, the fact is that some consumers do default on debt. Companies are willing to accept default risk because the value of defaults does…

  4. 40 CFR Table Tt-1 to Subpart Tt - Default DOC and Decay Rate Values for Industrial Waste Landfills

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Industrial Waste Landfills TT Table TT-1 to Subpart TT Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Industrial Waste Landfills Pt. 98, Subpt. TT, Table TT Table TT-1 to Subpart TT—Default DOC and Decay Rate Values for Industrial...

  5. TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.

    1994-01-01

    The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters. Output from TAIR may include aerodynamic coefficients, the airfoil surface solution, convergence histories, and printer plots of Mach number and density contour maps. The TAIR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 7600 computer with a central memory requirement of approximately 155K (octal) of 60-bit words. The TAIR program was developed in 1981.

  6. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
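
    The general device is to maximize a penalized log-likelihood (generic form shown here; the paper's specific penalty is the Beta-distribution-derived one described above):

        \[ \log L_P(\theta) = \log L(\theta) - \psi\, \mathcal{P}(\theta), \]

    where psi >= 0 sets the stringency of penalization and P(theta) penalizes extreme values of scale-free functions of the covariance components; a "mild default penalty" corresponds to a small fixed psi rather than one tuned laboriously from the data.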

  7. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements in models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf and above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their default values. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf and above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, parameter optimization was performed using the Dakota optimizer. Dakota is an optimization toolkit developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf and above- and below-ground woody carbon in each of the eleven years. In an alternative run, errors in the last year (the year of the field survey) were weighted for priority. We compared several global optimization methods in Dakota, starting from the default parameters of Biome-BGC. In the sensitivity analysis, the carbon allocation parameters between coarse root and leaf and between stem and leaf, and SLA, had high contributions to both leaf and woody biomass changes. These parameters were selected for optimization. The measured leaf and above- and below-ground woody biomass carbon densities in the last year were 0.22, 1.81 and 0.86 kgC m⁻², respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m⁻², respectively. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m⁻², respectively. The coliny global optimization method gave better fitness than the efficient global and ncsu direct methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and lower SLA than the default parameters, consistent with the general water-physiological response in a dry climate. The simulation using the weighted objective function yielded results closer to the measurements in the last year, at the cost of lower fitness in the previous years.
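
    The objective function described above is simple to state concretely. Below is a small Python sketch of a sum-of-relative-errors objective over years and carbon pools, with an optional extra weight on the final (survey) year; the exact weighting used in the study is not specified, so the weight here is a placeholder.

      import numpy as np

      def objective(sim, obs, final_year_weight=1.0):
          # sum of relative errors between simulated and measured leaf,
          # above- and below-ground woody carbon; rows = years, cols = pools
          rel_err = np.abs(sim - obs) / np.abs(obs)
          w = np.ones(obs.shape[0])
          w[-1] = final_year_weight          # optionally prioritize the survey year
          return float(np.sum(w[:, None] * rel_err))

      # toy usage: 11 years x 3 pools, simulation biased 10% high
      obs = np.random.default_rng(0).uniform(0.1, 2.0, (11, 3))
      sim = obs * 1.1
      print(objective(sim, obs), objective(sim, obs, final_year_weight=5.0))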

  8. Optimizing Photosynthetic and Respiratory Parameters Based on the Seasonal Variation Pattern in Regional Net Ecosystem Productivity Obtained from Atmospheric Inversion

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.

    2014-12-01

    In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. In addition, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of BEPS with the default parameter values. These results demonstrate the potential of using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
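
    Of the two optimized parameters, Q10 has a simple closed form worth stating: it scales respiration by a fixed factor for every 10 °C of warming. A minimal sketch of the generic Q10 response (not the BEPS implementation) in Python:

      import numpy as np

      def respiration(T, R_ref, Q10, T_ref=15.0):
          # generic Q10 temperature response: respiration is multiplied by
          # Q10 for each 10 degC increase above the reference temperature
          return R_ref * Q10 ** ((T - T_ref) / 10.0)

      T = np.linspace(-5.0, 25.0, 7)          # sample of seasonal temperatures
      print(respiration(T, R_ref=2.0, Q10=2.0))  # doubles per 10 degC when Q10 = 2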

  9. Adequacy of the default values for skin surface area used for risk assessment and French anthropometric data by a probabilistic approach.

    PubMed

    Dornic, N; Ficheux, A S; Bernard, A; Roudot, A C

    2017-08-01

    The Notes of Guidance for the testing of cosmetic ingredients and their safety evaluation by the Scientific Committee on Consumer Safety (SCCS) is a document dedicated to ensuring the safety of European consumers. It contains useful data for risk assessment, such as default values for skin surface area (SSA). A more in-depth study of anthropometric data across Europe reveals considerable variation. The default SSA value was derived from a study of the Dutch population, which is known to be one of the tallest in the world. This value could be inadequate for the shorter populations of Europe. Data were collected in a survey on cosmetic consumption in France. Probabilistic treatment of these data, and analysis of the case of methylisothiazolinone, a sensitizer recently evaluated by a deterministic approach submitted to the SCCS, suggest that the default SSA value used in quantitative risk assessment might not be relevant for a significant share of the French female population. Other female populations of Southern Europe may also be excluded. This is of importance given that some studies show an increasing risk of developing skin sensitization among women. The disparities in anthropometric data across Europe should be taken into consideration. Copyright © 2017 Elsevier Ltd. All rights reserved.
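
    The probabilistic argument can be illustrated with a short Monte Carlo sketch: draw a population SSA distribution and ask what share falls below the guidance default. The lognormal parameters below are invented for illustration, and the 17,500 cm² whole-body default is quoted from memory of the SCCS Notes of Guidance, so treat both as assumptions.

      import numpy as np
      rng = np.random.default_rng(1)

      # hypothetical lognormal whole-body surface-area distribution (cm^2)
      ssa_cm2 = rng.lognormal(mean=np.log(16500.0), sigma=0.08, size=100_000)

      default_ssa = 17500.0      # assumed SCCS whole-body default, cm^2
      share_below = np.mean(ssa_cm2 < default_ssa)
      print(f"{share_below:.1%} of this population has SSA below the default")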

  10. Simulation of car collision with an impact block

    NASA Astrophysics Data System (ADS)

    Kostek, R.; Aleksandrowicz, P.

    2017-10-01

    This article presents the experimental results of a crash test of a Fiat Cinquecento performed by the Allgemeiner Deutscher Automobil-Club (ADAC) and the simulation results obtained with the V-SIM program using default settings. In the next stage, a wheel was blocked and the parameters of contact between the vehicle and the barrier were changed to better match the results. The following contact parameters were identified: stiffness in the compression phase, stiffness in the restitution phase, and the coefficients of restitution and friction. The changes led to various post-impact positions, which shows the sensitivity of the results to the contact parameters. V-SIM is commonly used by expert witnesses, who tend to use the default settings; the companies offering simulation programs should therefore identify those parameters with due diligence.
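
    The role of separate compression- and restitution-phase stiffnesses can be sketched with a generic bilinear contact law: loading and unloading follow different slopes, and the mismatch dissipates energy. This is a textbook toy under assumed parameter values, not the contact formulation implemented in V-SIM.

      from math import sqrt

      def contact_force(penetration_m, rate_m_s, k_comp, k_rest):
          # bilinear spring: one stiffness while compressing (rate >= 0),
          # a softer one while restituting; no force once contact separates
          if penetration_m <= 0.0:
              return 0.0
          k = k_comp if rate_m_s >= 0.0 else k_rest
          return k * penetration_m

      # with k_rest <= k_comp the law is dissipative, and the effective
      # coefficient of restitution for this sketch is sqrt(k_rest / k_comp)
      k_comp, k_rest = 2.0e6, 8.0e5          # N/m, hypothetical values
      print(contact_force(0.05, -0.3, k_comp, k_rest), sqrt(k_rest / k_comp))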

  11. One period coupon bond valuation with revised first passage time approach and the application in Indonesian corporate bond

    NASA Astrophysics Data System (ADS)

    Maruddani, Di Asih I.; Rosadi, Dedi; Gunardic, Abdurakhman

    2015-02-01

    The value of a corporate bond is conventionally expressed in terms of a zero-coupon bond. In practice, the most common form of debt instrument is the coupon bond, which allows early default before maturity as a safety covenant for the bondholder. This paper studies valuation for a one-period coupon bond, a coupon bond that pays a single coupon during the bond period. It assumes that the model gives the bondholder the right to reorganize the firm if its value falls below a given barrier. The revised first-passage-time approach is applied for the default time rule. As a result, formulas for equity, liability, and probability of default are derived for this specified model. Straightforward integration under risk-neutral pricing is used to derive those formulas. For the application, a bond of Bank Rakyat Indonesia (BRI), one of the largest banks in Indonesia, is analyzed. Computation in R shows that the value of the equity is IDR 453.724.549.000.000, the liability is IDR 2.657.394.000.000, and the probability of default is 5.645305E-47 %.
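
    For reference, the unmodified first-passage machinery underlying such structural models has a compact closed form: the probability that a firm's value, following geometric Brownian motion, touches a default barrier before maturity. A Python sketch of that plain formula (without the paper's coupon and reorganization extensions):

      from math import exp, log, sqrt
      from statistics import NormalDist

      def first_passage_default_prob(V0, barrier, mu, sigma, T):
          # P(min_{t<=T} V_t <= barrier) for GBM with drift mu and vol sigma
          N = NormalDist().cdf
          b = log(barrier / V0)                # log-distance to barrier (< 0)
          nu = mu - 0.5 * sigma**2
          s = sigma * sqrt(T)
          return N((b - nu * T) / s) + exp(2 * nu * b / sigma**2) * N((b + nu * T) / s)

      # hypothetical firm: value 100, barrier 60, drift 5%, vol 20%, 1 year
      print(first_passage_default_prob(V0=100.0, barrier=60.0, mu=0.05, sigma=0.2, T=1.0))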

  12. Physical evaluations of Co-Cr-Mo parts processed using different additive manufacturing techniques

    NASA Astrophysics Data System (ADS)

    Ghani, Saiful Anwar Che; Mohamed, Siti Rohaida; Harun, Wan Sharuzi Wan; Noar, Nor Aida Zuraimi Md

    2017-12-01

    In recent years, additive manufacturing has become an important technique for fabricating highly customized designs in the aerospace and medical fields. Despite the ability of the process to produce complex components with highly controlled geometrical features, maintaining part accuracy, fabricating fully functional high-density components, and overcoming inferior surface quality remain the major obstacles to producing final parts by additive manufacturing for any selected application. This study aims to evaluate the physical properties of cobalt-chrome-molybdenum (Co-Cr-Mo) alloy parts fabricated by different additive manufacturing techniques. Fully dense Co-Cr-Mo parts were produced by Selective Laser Melting (SLM) and Direct Metal Laser Sintering (DMLS) with default process parameters. The density and relative density of the samples were calculated using Archimedes' principle, while the surface roughness on the top and side surfaces was measured using a surface profiler. The roughness average (Ra) of the top surface is 3.4 µm for SLM parts and 2.83 µm for DMLS parts; the Ra of the side surfaces is 4.57 µm for SLM parts and 9.0 µm for DMLS parts. The higher Ra values on the side surfaces compared to the top faces for both manufacturing techniques are due to the balling effect phenomenon. The relative density of the Co-Cr-Mo parts produced by both SLM and DMLS is 99.3%. Higher energy density contributed to the higher density of the samples produced by the SLM and DMLS processes. The findings of this work demonstrate that SLM and DMLS with default process parameters effectively produced fully dense Co-Cr-Mo parts with high density, good geometrical accuracy and good surface finish. Although both processes yielded components with high density, the current findings show that SLM can produce components with a smoother surface than DMLS when default parameters are used.
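
    The relative-density measurement reduces to Archimedes' principle: density follows from the dry mass and the apparent mass in water. A small Python sketch with hypothetical masses, taking the theoretical density of Co-Cr-Mo as roughly 8.3 g/cm³:

      def relative_density(m_air_g, m_water_g, rho_theoretical, rho_water=0.998):
          # Archimedes: sample density from dry mass and apparent mass in water
          rho = m_air_g / (m_air_g - m_water_g) * rho_water      # g/cm^3
          return rho, 100.0 * rho / rho_theoretical              # %, relative

      # hypothetical Co-Cr-Mo sample masses in air and suspended in water
      print(relative_density(m_air_g=16.52, m_water_g=14.53, rho_theoretical=8.3))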

  13. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
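
    A Monte Carlo search over muscle parameters can be sketched in a few lines: sample candidate parameter sets, score each by the mismatch between predicted and observed activation, and keep the best. The three-parameter toy "simulation" below is a hypothetical placeholder for the LifeModeler run, and the target activation is taken from the abstract.

      import numpy as np
      rng = np.random.default_rng(0)

      def simulate_activation(params):
          # hypothetical stand-in for a musculoskeletal simulation that
          # returns the predicted peak activation of a muscle of interest
          return float((params @ np.array([0.4, 0.3, 0.3])) % 1.0)

      target = 0.601                           # observed activation to match
      candidates = rng.uniform(0.0, 2.0, size=(5000, 3))
      errors = [abs(simulate_activation(p) - target) for p in candidates]
      best = candidates[int(np.argmin(errors))]
      print("best parameter set:", best, "error:", min(errors))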

  14. Splicing Ge-doped photonic crystal fibers using commercial fusion splicer with default discharge parameters.

    PubMed

    Wang, Yiping; Bartelt, Hartmut; Brueckner, Sven; Kobelke, Jens; Rothhardt, Manfred; Mörl, Klaus; Ecke, Wolfgang; Willsch, Reinhardt

    2008-05-12

    A novel technique for splicing a small-core Ge-doped photonic crystal fiber (PCF) was demonstrated using a commercial fusion splicer with the default discharge parameters for splicing two standard single-mode fibers (SMFs). Additional discharge parameter adjustments are not required to splice the PCF to several different SMFs. A low splice loss of 1.0-1.4 dB is achieved. Low or no light reflection is expected at the splice joint due to the complete fusion of the two fiber ends. The splice joint has a high bending strength and does not break when the bending radius is decreased to 4 mm.

  15. Evaluating drywells for stormwater management and enhanced aquifer recharge

    NASA Astrophysics Data System (ADS)

    Sasidharan, Salini; Bradford, Scott A.; Šimůnek, Jiří; DeJong, Bill; Kraemer, Stephen R.

    2018-06-01

    Drywells are increasingly used for stormwater management and enhanced aquifer recharge, but only limited research has quantitatively determined the performance of drywells. Numerical and field scale experiments were, therefore, conducted to improve our understanding and ability to characterize the drywell behavior. In particular, HYDRUS (2D/3D) was modified to simulate transient head boundary conditions for the complex geometry of the Maxwell Type IV drywell; i.e., a sediment chamber, an overflow pipe, and the variable geometry and storage of the drywell system with depth. Falling-head infiltration experiments were conducted on drywells located at the National Training Center in Fort Irwin, California (CA) and a commercial complex in Torrance, CA to determine in situ soil hydraulic properties (the saturated hydraulic conductivity, Ks, and the retention curve shape parameter, α) for an equivalent uniform soil profile by inverse parameter optimization. A good agreement between the observed and simulated water heights in the wells was obtained for both sites, as indicated by coefficients of determination of 0.95-0.99, unique parameter fits, and small standard errors. The Fort Irwin and Torrance drywells had very distinctive soil hydraulic characteristics. The fitted value of Ks = 1.01 × 10⁻³ m min⁻¹ at the Torrance drywell was consistent with the sandy soil texture at this site and the default value for sand in the HYDRUS soil catalog. The drywell with this Ks could easily infiltrate the predicted surface runoff from a design rain event (~51.3 m³) within 5760 min (4 d). In contrast, the fitted value of Ks = 2.25 × 10⁻⁶ m min⁻¹ at Fort Irwin was very low compared to the Torrance drywell and more than an order of magnitude smaller than the default value reported in the HYDRUS soil catalog for the sandy clay loam at this site, likely due to clogging. These experiments and simulations provide useful information to characterize effective soil hydraulic properties in situ, and to improve the design of drywells for enhanced recharge.

  16. How prior preferences determine decision-making frames and biases in the human brain

    PubMed Central

    Lopez-Persem, Alizée; Domenech, Philippe; Pessiglione, Mathias

    2016-01-01

    Understanding how option values are compared when making a choice is a key objective for decision neuroscience. In natural situations, agents may have a priori preferences that create default policies and shape the neural comparison process. We asked participants to make choices between items belonging to different categories (e.g., jazz vs. rock music). Behavioral data confirmed that the items taken from the preferred category were chosen more often and more rapidly, which qualified them as default options. fMRI data showed that baseline activity in classical brain valuation regions, such as the ventromedial Prefrontal Cortex (vmPFC), reflected the strength of prior preferences. In addition, evoked activity in the same regions scaled with the default option value, irrespective of the eventual choice. We therefore suggest that in the brain valuation system, choices are framed as comparisons between default and alternative options, which might save some resources but induce a decision bias. DOI: http://dx.doi.org/10.7554/eLife.20317.001 PMID:27864918

  17. Charm and beauty quark masses in the MMHT2014 global PDF analysis.

    PubMed

    Harland-Lang, L A; Martin, A D; Motylinski, P; Thorne, R S

    We investigate the variation in the MMHT2014 PDFs when we allow the heavy-quark masses m_c and m_b to vary away from their default values. We make PDF sets available in steps of m_c and m_b, and present the variation in the PDFs and in the predictions. We examine the comparison to the HERA data on charm and beauty structure functions and note that in each case the heavy-quark data, and the inclusive data, have a slight preference for lower masses than our default values. We provide PDF sets with three and four active quark flavours, as well as the standard value of five flavours. We use the pole mass definition of the quark masses, as in the default MMHT2014 analysis, but briefly comment on the MS-bar definition.

  18. Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment

    PubMed Central

    DeBlasio, Dan

    2013-01-01

    Abstract We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
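
    In the simplest (linear) member of this estimator family, accuracy is approximated as a weighted sum of feature values, with weights fit by least squares against benchmarks whose true accuracy is known. A toy sketch with synthetic features; Facet itself uses richer polynomial estimator classes and alignment-specific feature functions.

      import numpy as np
      rng = np.random.default_rng(0)

      # toy training data: feature values for example alignments, plus their
      # true accuracies (in practice computed against reference alignments)
      F = rng.uniform(0.0, 1.0, (200, 4))           # 4 feature functions
      true_acc = F @ np.array([0.4, 0.2, 0.3, 0.1]) + rng.normal(0, 0.02, 200)

      coeffs, *_ = np.linalg.lstsq(F, true_acc, rcond=None)

      def estimate_accuracy(features):
          # linear accuracy estimator: weighted combination of features
          return float(features @ coeffs)

      print(estimate_accuracy(np.array([0.5, 0.5, 0.5, 0.5])))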

  19. VizieR Online Data Catalog: WISE/NEOWISE Mars-crossing asteroids (Ali-Lagoa+, 2017)

    NASA Astrophysics Data System (ADS)

    Ali-Lagoa, V.; Delbo, M.

    2017-07-01

    We fitted the near-Earth asteroid thermal model of Harris (1998, Icarus, 131, 29) to WISE/NEOWISE thermal infrared data (see, e.g., Mainzer et al. 2011ApJ...736..100M, and Masiero et al. 2014, Cat. J/ApJ/791/121). The table contains the best-fitting values of size and beaming parameter. We note that the beaming parameter is a strictly positive quantity, but a negative sign is given to indicate whenever we could not fit it and had to assume a default value. We also provide the visible geometric albedos computed from the diameter and the tabulated absolute magnitudes. Minimum relative errors of 10, 15, and 20 percent should be considered for size, beaming parameter and albedo in those cases for which the beaming parameter could be fitted. Otherwise, the minimum relative errors in size and albedo increase to 20 and 40 percent (see, e.g., Mainzer et al. 2011ApJ...736..100M). The asteroid absolute magnitudes and slope parameters retrieved from the Minor Planet Center (MPC) are included, as well as the number of observations used in each WISE band (nW2, nW3, nW4) and the corresponding average values of heliocentric and geocentric distances and phase angle of the observations. The ephemerides were retrieved from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).
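
    The albedo computation mentioned above follows the standard asteroid size relation D = 1329 km × 10^(-H/5) / sqrt(pV), inverted for pV. A one-function Python sketch with hypothetical inputs:

      def geometric_albedo(D_km, H):
          # invert D = 1329 * 10**(-H/5) / sqrt(pV) for the geometric albedo
          return (1329.0 * 10 ** (-H / 5.0) / D_km) ** 2

      # hypothetical Mars-crosser: 2.5 km diameter, absolute magnitude H = 16
      print(geometric_albedo(D_km=2.5, H=16.0))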

  20. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.

  1. Adaptive Local Realignment of Protein Sequences.

    PubMed

    DeBlasio, Dan; Kececioglu, John

    2018-06-11

    While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully chosen collection of parameter settings, and replaces a region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy by as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
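
    The method's core loop is short enough to sketch: score each region, try candidate realignments under alternate parameter settings, and keep one only if its estimated accuracy improves. This is a schematic of the described procedure, not the Opal/Facet code; the toy usage treats strings as regions and a negated gap count as the accuracy score.

      def adaptive_local_realignment(regions, estimate, realign, param_choices):
          # regions: list of alignment regions; estimate: accuracy estimator;
          # realign(region, params): candidate realignment under a setting
          out = []
          for region in regions:
              best, best_score = region, estimate(region)
              for params in param_choices:
                  candidate = realign(region, params)
                  if estimate(candidate) > best_score:
                      best, best_score = candidate, estimate(candidate)
              out.append(best)           # keep the highest-scoring version
          return out

      # toy usage: strings as regions, fewer gaps = higher estimated accuracy
      print(adaptive_local_realignment(
          ["AC-GT", "A--CG"],
          estimate=lambda r: -r.count("-"),
          realign=lambda r, p: r.replace("-", "", p),
          param_choices=[1, 2]))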

  2. 40 CFR Table C-1 to Subpart C of... - Default CO2 Emission Factors and High Heat Values for Various Types of Fuel

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Heat Values for Various Types of Fuel C Table C-1 to Subpart C of Part 98 Protection of Environment... Stationary Fuel Combustion Sources Pt. 98, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 98—Default CO2... input from MSW and/or tires; and (c) small batch incinerators that combust no more than 1,000 tons of...

  3. Evaluation of bond strength of resin cements using different general-purpose statistical software packages for two-parameter Weibull statistics.

    PubMed

    Roos, Malgorzata; Stawarczyk, Bogna

    2012-07-01

    This study evaluated and compared Weibull parameters of resin bond strength values using six different general-purpose statistical software packages for the two-parameter Weibull distribution. Two hundred human teeth were randomly divided into 4 groups (n=50), prepared and bonded on dentin according to the manufacturers' instructions using the following resin cements: (i) Variolink (VAN, conventional resin cement), (ii) Panavia21 (PAN, conventional resin cement), (iii) RelyX Unicem (RXU, self-adhesive resin cement) and (iv) G-Cem (GCM, self-adhesive resin cement). Subsequently, all specimens were stored in water for 24 h at 37°C. Shear bond strength was measured and the data were analyzed using the Anderson-Darling goodness-of-fit test (MINITAB 16) and two-parameter Weibull statistics with the following statistical software packages: Excel 2011, SPSS 19, MINITAB 16, R 2.12.1, SAS 9.1.3 and STATA 11.2 (p≤0.05). Additionally, the three-parameter Weibull distribution was fitted using MINITAB 16. Two-parameter Weibull estimates calculated with MINITAB and STATA can be compared using an omnibus test and 95% CIs. In SAS, only 95% CIs were directly obtained from the output. R provided no estimates of 95% CIs. In both SAS and R, a global comparison of the characteristic bond strength among groups is provided by means of Weibull regression. EXCEL and SPSS provided no default information about 95% CIs and no significance test for the comparison of Weibull parameters among the groups. In summary, the conventional resin cement VAN showed the highest Weibull modulus and characteristic bond strength. There are discrepancies in the Weibull statistics depending on the software package and the estimation method, and the information content of the default output provided by the packages differs to a very high extent. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
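
    For comparison with the packages above, a two-parameter Weibull fit is a one-liner in SciPy: fixing the location at zero leaves the shape (Weibull modulus) and scale (characteristic strength) free. A sketch on synthetic bond-strength data:

      from scipy.stats import weibull_min

      # synthetic "bond strength" sample, MPa (invented for illustration)
      strengths = weibull_min.rvs(c=8.0, scale=20.0, size=50, random_state=0)

      # two-parameter fit: location fixed at zero, as in bond-strength analyses
      m, loc, s0 = weibull_min.fit(strengths, floc=0)
      print(f"Weibull modulus m = {m:.2f}, characteristic strength = {s0:.2f} MPa")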

  4. ASSESSMENT OF INTAKE ACCORDING TO IDEAS GUIDANCE: CASE STUDY.

    PubMed

    Bitar, A; Maghrabi, M

    2018-04-01

    Estimation of radiation intake and internal dose can be carried out through direct or indirect measurements during a routine or special monitoring program. In the case of iodine-131 contamination, direct measurements, such as thyroid counting, are fast and efficient ways to obtain quick results. Generally, the calculation method uses suitable values for known parameters, whereas default values are used if no information is available. However, to avoid significant discrepancies, the IDEAS guidelines set out a comprehensive method to evaluate the monitoring data for one or several types of monitoring. This article deals with a case of internal contamination of a worker who inhaled aerosols containing 131I during the production of radiopharmaceuticals. The interpretation of the data obtained was done by following the IDEAS guidelines.
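
    The calculation referred to above is simple in outline: divide the measured activity by the intake retention fraction (IRF) for the elapsed time, then multiply the resulting intake by a dose coefficient. In the sketch below both the IRF and the dose coefficient are illustrative placeholders; real values come from the ICRP biokinetic and dose-coefficient tables used by IDEAS.

      def intake_and_dose(measured_Bq, irf, e50_Sv_per_Bq):
          # intake = measurement / IRF(t); committed dose = intake * e(50)
          intake_Bq = measured_Bq / irf
          return intake_Bq, intake_Bq * e50_Sv_per_Bq

      # e.g. 5000 Bq of 131I in the thyroid some days after intake, with an
      # assumed IRF of 0.13 and an illustrative inhalation dose coefficient
      print(intake_and_dose(5000.0, irf=0.13, e50_Sv_per_Bq=2.0e-8))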

  5. Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2018-01-01

    Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ_NEE) for the different ensemble members, from ~2-3 g C m⁻² yr⁻¹ (with uncertain parameters) to ~45 g C m⁻² yr⁻¹ (C3 grass) and ~75 g C m⁻² yr⁻¹ (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and enhanced/reduced drought stress related to perturbation of precipitation. The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (σ_NEE ~ 4.0-13.5 g C m⁻² yr⁻¹ with perturbed parameters, meteorological forcings and initial states). We conclude that LAI and NEE uncertainty with CLM is clearly underestimated if uncertain meteorological forcings and initial states are not taken into account.

  6. Identification of novel uncertainty factors and thresholds of toxicological concern for health hazard and risk assessment: Application to cleaning product ingredients.

    PubMed

    Wang, Zhen; Scott, W Casan; Williams, E Spencer; Ciarlo, Michael; DeLeo, Paul C; Brooks, Bryan W

    2018-04-01

    Uncertainty factors (UFs) are commonly used during hazard and risk assessments to address uncertainties, including extrapolations among mammals and experimental durations. In risk assessment, default values are routinely used for interspecies extrapolation and interindividual variability. Whether default UFs are sufficient for various chemical uses or specific chemical classes remains understudied, particularly for ingredients in cleaning products. Therefore, we examined publicly available acute median lethal dose (LD50), and reproductive and developmental no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) values for the rat model (oral). We employed probabilistic chemical toxicity distributions to identify likelihoods of encountering acute, subacute, subchronic and chronic toxicity thresholds for specific chemical categories and ingredients in cleaning products. We subsequently identified thresholds of toxicological concern (TTC) and then various UFs for: 1) acute (LD50s)-to-chronic (reproductive/developmental NOAELs) ratios (ACRs), 2) exposure duration extrapolations (e.g., subchronic-to-chronic; reproductive/developmental), and 3) LOAEL-to-NOAEL ratios considering subacute/acute developmental responses. These ratios (95% CIs) were calculated from pairwise threshold levels using Monte Carlo simulations to identify UFs for all ingredients in cleaning products. Based on data availability, chemical category-specific UFs were also identified for aliphatic acids and salts, aliphatic alcohols, inorganic acids and salts, and alkyl sulfates. In a number of cases, derived UFs were smaller than default values (e.g., 10) employed by regulatory agencies; however, larger UFs were occasionally identified. Such UFs could be used by assessors instead of relying on default values. These approaches for identifying mammalian TTCs and diverse UFs represent robust alternatives to application of default values for ingredients in cleaning products and other chemical classes. Findings can also support chemical substitutions during alternatives assessment, and data dossier development (e.g., read across), identification of TTCs, and screening-level hazard and risk assessment when toxicity data is unavailable for specific chemicals. Copyright © 2018 Elsevier Ltd. All rights reserved.
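
    The ratio-based derivation of UFs can be sketched with Monte Carlo draws from two toxicity distributions: sample LD50s and NOAELs, form pairwise acute-to-chronic ratios, and read candidate UFs off the upper percentiles. The lognormal parameters below are invented for illustration; only the Monte Carlo ratio approach itself is taken from the abstract.

      import numpy as np
      rng = np.random.default_rng(42)

      # illustrative lognormal distributions for acute LD50s and chronic
      # reproductive/developmental NOAELs (mg/kg), not fitted to real data
      ld50 = rng.lognormal(mean=np.log(500.0), sigma=1.0, size=100_000)
      noael = rng.lognormal(mean=np.log(25.0), sigma=1.0, size=100_000)

      acr = ld50 / noael                      # acute-to-chronic ratios
      print("median ACR:", np.median(acr))
      print("95th percentile (candidate UF):", np.percentile(acr, 95))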

  7. Hawkes-diffusion process and the conditional probability of defaults in the Eurozone

    NASA Astrophysics Data System (ADS)

    Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin

    2016-05-01

    This study examines market information embedded in the European sovereign CDS (credit default swap) market by analyzing the sovereign CDSs of 13 Eurozone countries from January 1, 2008, to February 29, 2012, which includes the recent Eurozone debt crisis period. We design the conditional probability of defaults for the CDS prices based on the Hawkes-diffusion process and obtain the theoretical prices of CDS indexes. To estimate the model parameters, we calibrate the model prices to empirical prices obtained from individual sovereign CDS term structure data. The estimated parameters clearly explain both cross-sectional and time-series data. Our empirical results show that the probability of a huge loss event sharply increased during the Eurozone debt crisis, indicating a contagion effect. Even countries with strong and stable economies, such as Germany and France, suffered from the contagion effect. We also find that the probability of small events is sensitive to the state of the economy, spiking several times due to the global financial crisis and the Greek government debt crisis.
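
    The self-exciting ingredient of the Hawkes-diffusion process can be sketched independently of the CDS pricing layer: each event raises the intensity, which then decays exponentially. Below is a standard Ogata-thinning simulator for lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i)); the diffusion component and the pricing formulas of the paper are not reproduced here.

      import numpy as np
      rng = np.random.default_rng(7)

      def simulate_hawkes(mu, alpha, beta, T):
          # Ogata thinning: propose with an upper-bound rate, accept with
          # probability lambda(t)/lambda_bar; intensity decays between events
          t, events = 0.0, []
          while t < T:
              lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events) + alpha
              t += rng.exponential(1.0 / lam_bar)
              lam = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
              if t < T and rng.uniform() <= lam / lam_bar:
                  events.append(t)
          return events

      # stable regime requires alpha/beta < 1 (each event spawns < 1 on average)
      print(len(simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0)))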

  8. Adjoint-Based Climate Model Tuning: Application to the Planet Simulator

    NASA Astrophysics Data System (ADS)

    Lyu, Guokun; Köhl, Armin; Matei, Ion; Stammer, Detlef

    2018-01-01

    The adjoint method is used to calibrate the medium complexity climate model "Planet Simulator" through parameter estimation. Identical twin experiments demonstrate that this method can retrieve default values of the control parameters when using a long assimilation window of the order of 2 months. Chaos synchronization through nudging, required to overcome limits in the temporal assimilation window in the adjoint method, is employed successfully to reach this assimilation window length. When assimilating ERA-Interim reanalysis data, the observations of air temperature and the radiative fluxes are the most important data for adjusting the control parameters. The global mean net longwave fluxes at the surface and at the top of the atmosphere are significantly improved by tuning two model parameters controlling the absorption of clouds and water vapor. The global mean net shortwave radiation at the surface is improved by optimizing three model parameters controlling cloud optical properties. The optimized parameters improve the free model (without nudging terms) simulation in a way similar to that in the assimilation experiments. Results suggest a promising way for tuning uncertain parameters in nonlinear coupled climate models.
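
    Chaos synchronization through nudging amounts to adding a relaxation term that pulls the model state toward observations, which keeps trajectories from diverging inside long assimilation windows. A one-step Euler sketch of the generic scheme, with an invented toy drift function (not the Planet Simulator code):

      def nudged_step(x, f, obs, k, dt):
          # explicit Euler step of dx/dt = f(x) + k*(obs - x); the k-term
          # relaxes the model toward the observation, synchronizing trajectories
          return x + dt * (f(x) + k * (obs - x))

      # toy: a nonlinear drift nudged toward a reference value of 0.6
      x = 0.2
      for _ in range(200):
          x = nudged_step(x, f=lambda v: 4.0 * v * (1.0 - v) - v, obs=0.6, k=5.0, dt=0.05)
      print(x)    # settles near the observed value instead of wandering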

  9. On-Line Data Reconstruction in Redundant Disk Arrays.

    DTIC Science & Technology

    1994-05-01

    ...file servers that support a large number of clients with differing work schedules, and automated teller networks in banking systems... [Table 2.2, default array parameters: ...24KB...; head scheduling: FIFO; user data layout: sequential in the address space of the array; disk spindles: synchronized] ...package and a set of scheduling and queueing routines. 2.3.3. Default workload: this dissertation reports on many performance evaluations. In order to...

  10. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites: SURROGATE-BASED MCMC FOR CLM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  11. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  12. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  13. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  14. 40 CFR Table C-1 to Subpart C of... - Default CO2 Emission Factors and High Heat Values for Various Types of Fuel

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Heat Values for Various Types of Fuel C Table C-1 to Subpart C of Part 98 Protection of Environment... Stationary Fuel Combustion Sources Pt. 98, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 98—Default CO2... exception of ethylene. 2 Ethylene HHV determined at 41 °F (5 °C) and saturation pressure. 3 Use of this...

  15. 17 CFR 230.239T - Temporary exemption for eligible credit default swaps.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... be delivered if there is a credit-related event or whose value is used to determine the amount of the... a payout if there is a default or other credit event involving identified obligation(s) or... agreement; (iii) Notional amount upon which payment obligations are calculated; (iv) Credit-related events...

  16. Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.

    2014-07-01

    The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase in the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw clearly improves the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
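
    Parameter calibration with an ensemble Kalman filter treats the parameters themselves as the updated state: each observation shifts the parameter ensemble by a gain built from ensemble covariances. A scalar toy sketch with a hypothetical linear observation operator standing in for the sea ice model:

      import numpy as np
      rng = np.random.default_rng(3)

      def h(theta):
          # hypothetical observation operator: drift speed linear in a drag
          # parameter (a stand-in for running the sea ice model)
          return 2.0 * theta

      true_obs = h(1.5) + rng.normal(0, 0.1, size=20)     # "drift data"
      theta = rng.normal(1.0, 0.5, size=100)              # parameter ensemble

      for y in true_obs:                 # sequential perturbed-obs EnKF updates
          pred = h(theta)
          K = np.cov(theta, pred)[0, 1] / (np.var(pred) + 0.1**2)   # Kalman gain
          theta = theta + K * (y + rng.normal(0, 0.1, 100) - pred)

      print("estimated parameter:", theta.mean())   # approaches the true 1.5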

  17. Land-total and Ocean-total Precipitation and Evaporation from a Community Atmosphere Model version 5 Perturbed Parameter Ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covey, Curt; Lucas, Donald D.; Trenberth, Kevin E.

    2016-03-02

    This document presents the large scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the “C-Ensemble” described by Qian et al., “Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5” (Journal of Advances in Modeling the Earth System, 2015). As noted by Qian et al., the simulations are “AMIP type” with temperature and sea ice boundary conditions chosen to match surface observations for the five year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.

  18. Intelligent Weather Agent

    NASA Technical Reports Server (NTRS)

    Spirkovska, Liljana (Inventor)

    2006-01-01

    Method and system for automatically displaying, visually and/or audibly and/or by an audible alarm signal, relevant weather data for an identified aircraft pilot, when each of a selected subset of measured or estimated aviation situation parameters, corresponding to a given aviation situation, has a value lying in a selected range. Each range for a particular pilot may be a default range, may be entered by the pilot and/or may be automatically determined from experience and may be subsequently edited by the pilot to change a range and to add or delete parameters describing a situation for which a display should be provided. The pilot can also verbally activate an audible display or visual display of selected information by verbal entry of a first command or a second command, respectively, that specifies the information required.

  19. Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model

    NASA Astrophysics Data System (ADS)

    Washington, M. H.; Kumar, S.

    2017-12-01

    The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's prediction of water levels during a storm surge is inaccurate compared to the observed data. To address this issue, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to compare the accuracy of the model's water-level predictions using the realistic wind size parameter against the default constant wind size parameter for Hurricane Matthew, with the water level data observed from October 4th, 2016 to October 9th, 2016 from the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water level output with the realistic wind size parameter matches the NOAA reference water level data more accurately than the output with the default constant wind size parameter.

  20. Bayesian calibration of the Community Land Model using surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
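
    The surrogate-plus-MCMC pattern can be sketched compactly: fit a cheap polynomial to a handful of expensive model runs, then run random-walk Metropolis against the surrogate likelihood. Everything below (the one-parameter "model", the observation, the prior bounds) is an invented toy, not CLM.

      import numpy as np
      rng = np.random.default_rng(0)

      def model(theta):
          # stand-in for an expensive simulation: flux vs. one parameter
          return 50.0 + 30.0 * np.tanh(theta)

      # cheap polynomial surrogate fitted to a few "expensive" model runs
      design = np.linspace(-2.0, 2.0, 9)
      surrogate = np.polynomial.Polynomial.fit(design, model(design), deg=3)

      obs, sigma = 65.0, 2.0                  # observed flux and its error
      def log_post(theta):                    # flat prior on [-2, 2]
          if not -2.0 <= theta <= 2.0:
              return -np.inf
          return -0.5 * ((obs - surrogate(theta)) / sigma) ** 2

      theta, chain = 0.0, []
      for _ in range(20000):                  # random-walk Metropolis on the surrogate
          prop = theta + 0.3 * rng.normal()
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop
          chain.append(theta)
      print("posterior mean:", np.mean(chain[2000:]))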

  1. 40 CFR 98.463 - Calculating GHG emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... generation using Equation TT-1 of this section. Where: GCH4 = Modeled methane generation in... § 98.464(b)(4)(i), use a default value of 1.0. MCF = Methane correction factor (fraction). Use the default... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...

  2. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site. 2016 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, G. Tim; Hartman, Larry; Stagich, Brooke

    Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  3. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site 2017 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, T.; Stagich, B.

    Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  4. Entropy measure of credit risk in highly correlated markets

    NASA Astrophysics Data System (ADS)

    Gottschalk, Sylvia

    2017-07-01

    We compare the single and multi-factor structural models of corporate default by calculating the Jeffreys-Kullback-Leibler divergence between their predicted default probabilities when asset correlations are either high or low. Single-factor structural models assume that the stochastic process driving the value of a firm is independent of that of other companies. A multi-factor structural model, on the contrary, is built on the assumption that a single firm's value follows a stochastic process correlated with that of other companies. Our main results show that the divergence between the two models increases in highly correlated, volatile, and large markets, but that it is closer to zero in small markets, when asset correlations are low and firms are highly leveraged. These findings suggest that during periods of financial instability, when asset volatility and correlations increase, one of the models misreports actual default risk.
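
    The divergence used above is the symmetrized Kullback-Leibler (Jeffreys) divergence, which for discrete distributions collapses to a single sum. A small sketch with hypothetical default-probability vectors standing in for the two models' outputs:

      import numpy as np

      def jeffreys(p, q, eps=1e-12):
          # Jeffreys divergence J(p, q) = KL(p||q) + KL(q||p)
          #                             = sum_i (p_i - q_i) * log(p_i / q_i)
          p = np.asarray(p, float) + eps; p /= p.sum()
          q = np.asarray(q, float) + eps; q /= q.sum()
          return float(np.sum((p - q) * np.log(p / q)))

      # hypothetical default-probability distributions from the two models
      print(jeffreys([0.05, 0.15, 0.80], [0.10, 0.30, 0.60]))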

  5. Atmospheric refraction correction for Ka-band blind pointing on the DSS-13 beam waveguide antenna

    NASA Technical Reports Server (NTRS)

    Perez-Borroto, I. M.; Alvarez, L. S.

    1992-01-01

    An analysis of the atmospheric refraction corrections at the DSS-13 34-m diameter beam waveguide (BWG) antenna for the period Jul. - Dec. 1990 is presented. The current Deep Space Network (DSN) atmospheric refraction model and its sensitivity with respect to sensor accuracy are reviewed. Refraction corrections based on actual atmospheric parameters are compared with the DSS-13 station default corrections for the six-month period. Average blind-pointing improvement during the worst month would have amounted to 5 mdeg at 10 deg elevation using actual surface weather values. This would have resulted in an average gain improvement of 1.1 dB.

  6. A Value-Added Model to Measure Higher Education Returns on Government Investment

    ERIC Educational Resources Information Center

    Sparks, Roland J.

    2011-01-01

    The cost of college is increasing faster than inflation with the government funding over 19 million student loans that have a current outstanding balance of over $850 billion in 2010. Student default rates for 2008 averaged 7% but for some colleges, default rates were as high as 46.8%. Congress is demanding answers from colleges and universities…

  7. 40 CFR Table C-1 to Subpart C - Default CO2 Emission Factors and High Heat Values for Various Types of Fuel

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    … 52.07
    Biomass Fuels—Liquid (default high heat value, mmBtu/gallon; default CO2 emission factor, kg CO2/mmBtu):
      Ethanol              0.084   68.44
      Biodiesel            0.128   73.84
      Biodiesel (100%)     0.128   73.84
      Rendered Animal Fat  0.125   71.06
      Vegetable Oil        0.120   81.55
    1 Use of this default...

  8. 40 CFR Table C-1 to Subpart C of... - Default CO2 Emission Factors and High Heat Values for Various Types of Fuel

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    … 52.07
    Biomass Fuels—Liquid (default high heat value, mmBtu/gallon; default CO2 emission factor, kg CO2/mmBtu):
      Ethanol              0.084   68.44
      Biodiesel            0.128   73.84
      Biodiesel (100%)     0.128   73.84
      Rendered Animal Fat  0.125   71.06
      Vegetable Oil        0.120   81.55
    1 Use of this default...

  9. Introduction of risk size in the determination of uncertainty factor UFL in risk assessment

    NASA Astrophysics Data System (ADS)

    Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei

    2012-09-01

    The methodology for using uncertainty factors in health risk assessment has been developed over several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from the LOAEL (lowest observed adverse effect level) to the NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at the LOAEL, based on the dose-response information, a factor that should be carefully considered. This linear formula makes it possible to select UFL properly over the additional risk range from 5.3% to 16.2%. The results also indicate that the default value of 10 may not be conservative enough when the additional risk level at the LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL in place of the traditional default value, but also ensures a conservative estimate of UFL with fewer errors, and avoids the benchmark-response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in risk assessment.
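
    Since the abstract specifies only the valid risk range and the behavior near the default of 10, the following is a minimal sketch of how such a linear selection rule might be applied; the intercept and slope are illustrative placeholders, not the coefficients fitted in the study.

      # Hypothetical linear rule UFL = a + b * risk, applicable only over the
      # additional-risk range reported in the study (5.3%-16.2% at the LOAEL).
      # The coefficients a and b are placeholders, not the fitted values.
      def ufl_linear(risk_at_loael, a=1.0, b=55.0):
          """Uncertainty factor for LOAEL-to-NAEL extrapolation."""
          if not 0.053 <= risk_at_loael <= 0.162:
              raise ValueError("linear rule only valid for 5.3%-16.2% risk")
          return a + b * risk_at_loael

      # With these placeholder coefficients, ufl_linear(0.162) ~= 9.9, i.e.
      # near the traditional default of 10 at the upper end of the range.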

  10. Understanding the Day Cent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
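
    For context, the objective that PEST minimizes (the "total sum of weighted squared residuals" reported above) has the generic weighted least-squares form

      \Phi(\theta) = \sum_{i=1}^{n} \left[ w_i \left( y_i^{\mathrm{obs}} - y_i^{\mathrm{sim}}(\theta) \right) \right]^2

    where \theta collects the 67 calibrated DayCent parameters, y^{obs} the multiple observation types, and w_i the observation weights.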

  11. A review of lung-to-blood absorption rates for radon progeny.

    PubMed

    Marsh, J W; Bailey, M R

    2013-12-01

    The International Commission on Radiological Protection (ICRP) Publication 66 Human Respiratory Tract Model (HRTM) treats clearance of materials from the respiratory tract as a competitive process between absorption into blood and particle transport to the alimentary tract and lymphatics. The ICRP recommended default absorption rates for lead and polonium (Type M) in ICRP Publication 71 but stated that the values were not appropriate for short-lived radon progeny. This paper reviews and evaluates published data from volunteer and laboratory animal experiments to estimate the HRTM absorption parameter values for short-lived radon progeny. Animal studies showed that lead ions have two phases of absorption: ∼10% absorbed with a half-time of ∼15 min, the rest with a half-time of ∼10 h. The studies also indicated that some of the lead ions were bound to respiratory tract components. Bound fractions, f(b), for lead were estimated from volunteer and animal studies and ranged from 0.2 to 0.8. Based on the evaluations of published data, the following HRTM absorption parameter values were derived for lead as a decay product of radon: f(r) = 0.1, s(r) = 100 d(-1), s(s) = 1.7 d(-1), f(b) = 0.5 and s(b) = 1.7 d(-1). Effective doses calculated assuming these absorption parameter values instead of a single absorption half-time of 10 h with no binding (as has generally been assumed) are only a few per cent higher. However, as there is some conflicting evidence on the absorption kinetics for radon progeny, dose calculations have been carried out for different sets of absorption parameter values derived from different studies. The results of these calculations are discussed.

  12. Improvements on the relationship between plume height and mass eruption rate: Implications for volcanic ash cloud forecasting

    NASA Astrophysics Data System (ADS)

    Webley, P. W.; Dehn, J.; Mastin, L. G.; Steensen, T. S.

    2011-12-01

    Volcanic ash plumes and the clouds that disperse from them into the atmosphere are a hazard to local populations as well as to the aviation industry. Volcanic ash transport and dispersion (VATD) models, used to forecast the movement of these hazardous ash emissions, require eruption source parameters (ESP) such as plume height, eruption rate and duration. To estimate mass eruption rate, empirical relationships with observed plume height have been applied. Theoretical relationships defined by Morton et al. (1956) and Wilson et al. (1976) use default values for the environmental lapse rate (ELR), thermal efficiency, density of ash, specific heat capacity, initial temperature of the erupted material and final temperature of the material. Each volcano, based on its magma type, has a different density, specific heat capacity and initial eruptive temperature compared to these default parameters, and local atmospheric conditions can produce a very different ELR. Our research shows that a relationship between plume height and mass eruption rate can be defined for each eruptive event at each volcano. Additionally, using the one-dimensional plume model Plumeria, our analysis assesses the importance of factors such as vent diameter and eruption velocity on the relationship between the eruption rate and measured plume height. Coupling such a tool with a VATD model should improve pre-eruptive forecasts of ash emissions downwind and lead to improvements in the ESP data that VATD models use for operational volcanic ash cloud forecasting.
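
    For concreteness, one widely used empirical fit between plume height and eruption rate is H = 2.00 V^0.241 (Mastin et al., 2009), with H the plume height above the vent in km and V the dense-rock-equivalent (DRE) volumetric flow rate in m^3/s. Below is a minimal sketch inverting it to estimate the mass eruption rate; the DRE magma density used is an illustrative default, exactly the kind of per-volcano parameter the abstract argues should be adjusted.

      # Invert H = 2.00 * V**0.241 (an empirical fit; Mastin et al., 2009)
      # to estimate the mass eruption rate from an observed plume height.
      def mass_eruption_rate(plume_height_km, dre_density=2500.0):
          volume_flux = (plume_height_km / 2.00) ** (1.0 / 0.241)  # m^3/s DRE
          return dre_density * volume_flux                         # kg/s

      # Example: a 10 km high plume implies roughly 2e6 kg/s.
      print(f"{mass_eruption_rate(10.0):.2e} kg/s")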

  13. Towards General Evaluation of Intelligent Systems: Lessons Learned from Reproducing AIQ Test Results

    NASA Astrophysics Data System (ADS)

    Vadinský, Ondřej

    2018-03-01

    This paper attempts to replicate the results of evaluating several artificial agents using the Algorithmic Intelligence Quotient test originally reported by Legg and Veness. Three experiments were conducted: one using default settings, one in which the action space was varied, and one in which the observation space was varied. While the performance of freq, Q0, Qλ, and HLQλ corresponded well with the original results, the resulting values differed when using MC-AIXI. Varying the observation space seems to have no qualitative impact on the results as reported, while (contrary to the original results) varying the action space seems to have some impact. An analysis of the impact of modifying MC-AIXI's parameters on its performance in the default settings was carried out with the help of data-mining techniques used to identify highly performing configurations. Overall, the Algorithmic Intelligence Quotient test seems to be reliable; however, as a general artificial intelligence evaluation method it has several limits. The test is dependent on the chosen reference machine and also sensitive to changes to its settings. It brings out some differences among agents; however, since the environments are limited in size, the test setting may not yet be sufficiently complex. A demanding parameter sweep is needed to thoroughly evaluate configurable agents, which, together with the test format, further highlights the computational requirements of an agent. These and other issues are discussed in the paper, along with proposals suggesting how to alleviate them. An implementation of some of the proposals is also demonstrated.
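
    For reference, the universal-intelligence measure that the AIQ test approximates by Monte Carlo sampling of programs on a reference machine has (up to that sampling approximation) the form

      \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

    where E is the class of computable environments, K(\mu) is the complexity of environment \mu on the chosen reference machine, and V_{\mu}^{\pi} is the expected cumulative reward of agent \pi in \mu; the reference-machine dependence noted above enters through K.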

  14. Modeling fluctuations in default-mode brain network using a spiking neural network.

    PubMed

    Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko

    2012-08-01

    Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range.

  15. KABAM Version 1.0 User's Guide and Technical Documentation - Appendix E - Selection of Bird Species of Concern and Corresponding Biological Parameters

    EPA Pesticide Factsheets

    Bird species of concern were identified to define default parameters (body weight and diet composition) to represent birds in KABAM. KABAM is a simulation model used to predict pesticide concentrations in aquatic regions for use in exposure assessments.

  16. User interface user's guide for HYPGEN

    NASA Technical Reports Server (NTRS)

    Chiu, Ing-Tsau

    1992-01-01

    The user interface (UI) of HYPGEN was developed using the Panel Library to shorten the learning curve for new users and to provide easier ways to run HYPGEN for casual as well as advanced users. Menus, buttons, sliders, and type-in fields are used extensively in the UI to let users point and click with a mouse to choose among the available options or to change parameter values. On-line help gives users information on using the UI without consulting the manual. Default values are set for most parameters, and boundary conditions are determined by the UI to further reduce the effort needed to run HYPGEN; however, users are free to make any changes and save them in a file for later use. A hook to PLOT3D is built in to allow graphics manipulation. The viewpoint and min/max box for PLOT3D windows are computed by the UI and saved in a PLOT3D journal file. For large grids, which take a long time to generate on workstations, the grid generator (HYPGEN) can be run on faster computers such as Crays, while the UI stays on the workstation.

  17. Hybrid Closed-Loop Insulin Delivery in Type 1 Diabetes During Supervised Outpatient Conditions.

    PubMed

    Grosman, Benyamin; Ilany, Jacob; Roy, Anirban; Kurtz, Natalie; Wu, Di; Parikh, Neha; Voskanyan, Gayane; Konvalina, Noa; Mylonas, Chrystaleni; Gottlieb, Rebecca; Kaufman, Francine; Cohen, Ohad

    2016-05-01

    Efficacy and safety of the Medtronic Hybrid Closed-Loop (HCL) system were tested in subjects with type 1 diabetes in a supervised outpatient setting. The HCL system is a prototype research platform that includes a sensor-augmented insulin pump in communication with a control algorithm housed on an Android-based cellular device. Nine subjects with type 1 diabetes (5 female, mean age 53.3 years, mean A1C 7.2%) underwent 9 studies totaling 571 hours of closed-loop control using either default or personalized parameters. The system required meal announcements with estimates of carbohydrate (CHO) intake that were based on metabolic kitchen quantification (MK), dietician estimates (D), or subject estimates (Control). Postprandial glycemia was compared for MK, D, and Control meals. The overall sensor glucose mean was 145 ± 43 mg/dL; the overall percentage of time in the range 70-180 mg/dL was 80%; the overall percentage of time <70 mg/dL was 0.79%. During intervals of default parameter use (225 hours) and personalized parameter use (346 hours), the sensor glucose mean was 158 ± 49 and 137 ± 37 mg/dL, respectively (P < .001); personalized use included more time in range (87% vs 68%) and less time below range (0.54% vs 1.18%). Most subjects underestimated the CHO content of meals, but postprandial glycemia was not significantly different between MK and matched Control meals (P = .16) or between D and matched Control meals (P = .76). There were no episodes of severe hypoglycemia. The HCL system was efficacious and safe during this study. Personally adapted HCL parameters were associated with more time in range and less time below range than default parameters. Accurate estimates of meal CHO did not contribute to improved postprandial glycemia. © 2016 Diabetes Technology Society.

  18. Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices

    NASA Technical Reports Server (NTRS)

    Emmen, R. D.; Tischler, M. B.

    1982-01-01

    This table contains all of the input variables to the three programs. The variables are arranged according to the namelist groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer. These values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is listed for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.

  19. Constraining the GENIE model of neutrino-induced single pion production using reanalyzed bubble chamber data

    DOE PAGES

    Rodrigues, Philip; Wilkinson, Callum; McFarland, Kevin

    2016-08-24

    The longstanding discrepancy between bubble chamber measurements of νμ-induced single pion production channels has led to large uncertainties in pion production cross section parameters for many years. We extend the reanalysis of pion production data in deuterium bubble chambers where this discrepancy is solved to include the νμ n → μ− p π0 and νμ n → μ− n π+ channels, and use the resulting data to fit the parameters of the GENIE pion production model. We find a set of parameters that can describe the bubble chamber data better than the GENIE default parameters, and provide updated central values and reduced uncertainties for use in neutrino oscillation and cross section analyses which use the GENIE model. Here, we find that GENIE's non-resonant background prediction has to be significantly reduced to fit the data, which may help to explain the recent discrepancies between simulation and data observed by the MINERνA coherent pion and NOνA oscillation analyses.

  20. Use of Navier-Stokes methods for the calculation of high-speed nozzle flow fields

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Yoder, Dennis A.

    1994-01-01

    Flows through three reference nozzles have been calculated to determine the capabilities and limitations of the widely used Navier-Stokes solver PARC. The nozzles examined have dominant flow characteristics similar to those considered for supersonic transport programs. Flows from an inverted velocity profile (IVP) nozzle, an underexpanded nozzle, and an ejector nozzle were examined. PARC calculations were obtained with its standard algebraic turbulence model (Thomas) and with the two-equation Chien k-epsilon turbulence model. The Thomas model was run with the mixing coefficient set both to its default value of 0.09 and to a larger value of 0.13 to improve the mixing prediction. Calculations using the default value substantially underpredicted the mixing for all three flows. The calculations obtained with the higher mixing coefficient better predicted mixing in the IVP and underexpanded nozzle flows but adversely affected PARC's convergence characteristics for the IVP nozzle case. The ejector nozzle case did not converge with the Thomas model and the higher mixing coefficient. The Chien k-epsilon results were in better agreement with the experimental data overall than were those of the Thomas model run with the default mixing coefficient, but the default boundary conditions for k and epsilon underestimated the levels of mixing near the nozzle exits.

  1. Liquidity crisis detection: An application of log-periodic power law structures to default prediction

    NASA Astrophysics Data System (ADS)

    Wosnitza, Jan Henrik; Denz, Cornelia

    2013-09-01

    We employ the log-periodic power law (LPPL) to analyze the late-2000s financial crisis from the perspective of critical phenomena. The main purpose of this study is to examine whether LPPL structures in the development of credit default swap (CDS) spreads can be used for default classification. Based on the different triggers of Bear Stearns' near bankruptcy during the late-2000s financial crisis and Ford's insolvency in 2009, this study provides a quantitative description of the mechanism behind bank runs. We apply the Johansen-Ledoit-Sornette (JLS) positive feedback model to explain the rise of financial institutions' CDS spreads during the global financial crisis of 2007-2009. This investigation is based on CDS spreads of 40 major banks over the period from June 2007 to April 2009, which includes a significant CDS spread increase. The qualitative data analysis indicates that the CDS spread variations followed LPPL patterns during the global financial crisis. Furthermore, the univariate classification performances of seven LPPL parameters as default indicators are measured by Mann-Whitney U tests. The present study supports the hypothesis that discrete scale invariance governs the dynamics of financial markets and suggests the application of new and quickly updateable default indicators to capture the buildup of long-range correlations between creditors.
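
    A minimal sketch of the LPPL form used in such analyses (the JLS specification for the expected log-price, here applied to log CDS spreads); the parameter values in the example call are arbitrary placeholders, not values fitted to any CDS series.

      import numpy as np

      # Johansen-Ledoit-Sornette log-periodic power law: power-law growth
      # toward a critical time tc, decorated by log-periodic oscillations.
      def lppl(t, tc, A, B, C, m, omega, phi):
          dt = tc - t  # time remaining to the critical point (requires t < tc)
          return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

      t = np.linspace(0.0, 9.5, 200)
      log_spread = lppl(t, tc=10.0, A=4.0, B=-0.8, C=0.1, m=0.5, omega=8.0, phi=1.0)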

  2. ADS: A FORTRAN program for automated design synthesis: Version 1.10

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1985-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.

  3. Risk factors for default from tuberculosis treatment in HIV-infected individuals in the state of Pernambuco, Brazil: a prospective cohort study.

    PubMed

    Maruza, Magda; Albuquerque, Maria F P Militão; Coimbra, Isabella; Moura, Líbia V; Montarroyos, Ulisses R; Miranda Filho, Demócrito B; Lacerda, Heloísa R; Rodrigues, Laura C; Ximenes, Ricardo A A

    2011-12-16

    Concomitant treatment of Human Immunodeficiency Virus (HIV) infection and tuberculosis (TB) presents a series of challenges for treatment compliance for both providers and patients. We carried out this study to identify risk factors for default from TB treatment in people living with HIV. We conducted a cohort study to monitor HIV/TB co-infected subjects in Pernambuco, Brazil, on a monthly basis, until completion or default of treatment for TB. Logistic regression was used to calculate crude and adjusted odds ratios, 95% confidence intervals and P-values. From a cohort of 2310 HIV subjects, 390 individuals (16.9%) who had started treatment after a diagnosis of TB were selected, and data on 273 individuals who completed or defaulted on treatment for TB were analyzed. The default rate was 21.7% and the following risk factors were identified: male gender, smoking and CD4 T-cell count less than 200 cells/mm3. Age over 29 years, complete or incomplete secondary or university education and the use of highly active antiretroviral therapy (HAART) were identified as protective factors for the outcome. The results point to the need for more specific actions, aiming to reduce the default from TB treatment in males, younger adults with low education, smokers and people with CD4 T-cell counts < 200 cells/mm3. Default was less likely to occur in patients under HAART, reinforcing the strategy of early initiation of HAART in individuals with TB.

  4. Pricing for a basket of LCDS under fuzzy environments.

    PubMed

    Wu, Liang; Liu, Jie-Fang; Wang, Jun-Tao; Zhuang, Ya-Ming

    2016-01-01

    This paper considers both the prepayment risks of housing mortgage loan credit default swaps (LCDS) and the fuzziness and hesitation of investors regarding prepayments by borrowers. It discusses the first-default pricing of a basket of LCDS in a fuzzy environment by using stochastic analysis and triangular intuition-based fuzzy set theory. Through the 'fuzzification' of the sensitivity coefficient in the prepayment intensity, the paper describes the dynamic features of mortgaged housing values using the one-factor copula function and concludes with a formula for the fuzzy pricing of the first default of a basket of LCDS. Using analog simulation to analyze the sensitivity of hesitation, we derive a model of the fair LCDS premium in a fuzzy environment, of which a purely random environment is a special case. The model also shows that a suitable pricing range gives investors more flexible choices and brings the predictions of the model closer to real market values.

  5. Simulation-based comprehensive benchmarking of RNA-seq aligners

    PubMed Central

    Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R

    2018-01-01

    Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783

  6. Evaluation of tranche in securitization and long-range Ising model

    NASA Astrophysics Data System (ADS)

    Kitsukawa, K.; Mori, S.; Hisakado, M.

    2006-08-01

    This econophysics work studies the long-range Ising model of a finite system with N spins, exchange interaction J/N, and external field H as a model for a homogeneous credit portfolio of assets with default probability Pd and default correlation ρd. Based on the discussion of the (J,H) phase diagram, we develop a perturbative calculation method for the model and obtain explicit expressions for Pd, ρd and the normalization factor Z in terms of the model parameters N and J,H. The effect of the default correlation ρd on the probabilities P(Nd,ρd) for Nd defaults and on the cumulative distribution function D(i,ρd) is discussed. The latter gives the average loss rate of a “tranche” (layered structure) of securities (e.g. a CDO) synthesized from a pool of many assets. We show that the expected loss rate of the subordinated tranche decreases with ρd while that of the senior tranche increases linearly, which is important for their pricing and rating.
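
    A minimal sketch of the mean-field Hamiltonian implied by the abstract (exchange J/N, external field H); the sign conventions and the identification of default with one spin orientation are illustrative assumptions:

      \mathcal{H}_N = -\frac{J}{2N} \left( \sum_{i=1}^{N} S_i \right)^{2} - H \sum_{i=1}^{N} S_i, \qquad S_i \in \{-1, +1\},

    with asset i taken to be in default when, say, S_i = -1, so that Pd and ρd follow from the Boltzmann average ⟨(1 - S_i)/2⟩ and the corresponding pair correlations.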

  7. q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.; Tian, Li

    2013-10-01

    We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1

  8. Risk seeking for losses modulates the functional connectivity of the default mode and left frontoparietal networks in young males.

    PubMed

    Deza Araujo, Yacila I; Nebe, Stephan; Neukam, Philipp T; Pooseh, Shakoor; Sebold, Miriam; Garbusow, Maria; Heinz, Andreas; Smolka, Michael N

    2018-06-01

    Value-based decision making (VBDM) is a principle that states that humans and other species adapt their behavior according to the dynamic subjective values of the chosen or unchosen options. The neural bases of this process have been extensively investigated using task-based fMRI and lesion studies. However, the growing field of resting-state functional connectivity (RSFC) may shed light on the organization and function of brain connections across different decision-making domains. With this aim, we used independent component analysis to study the brain network dynamics in a large cohort of young males (N = 145) and the relationship of these dynamics with VBDM. Participants completed a battery of behavioral tests that evaluated delay aversion, risk seeking for losses, risk aversion for gains, and loss aversion, followed by an RSFC scan session. We identified a set of large-scale brain networks and conducted our analysis only on the default mode network (DMN) and networks comprising cognitive control, appetitive-driven, and reward-processing regions. Higher risk seeking for losses was associated with increased connectivity between medial temporal regions, frontal regions, and the DMN. Higher risk seeking for losses was also associated with increased coupling between the left frontoparietal network and occipital cortices. These associations illustrate the participation of brain regions involved in prospective thinking, affective decision making, and visual processing in participants who are greater risk-seekers, and they demonstrate the sensitivity of RSFC to detect brain connectivity differences associated with distinct VBDM parameters.

  9. Improving phylogenetic analyses by incorporating additional information from genetic sequence databases.

    PubMed

    Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A

    2009-10-01

    Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.

  10. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    PubMed

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters: stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of what is probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights, applying equations used for Gaussian dispersion models, shows significant differences depending on source and air pollutant and compared to approaches currently used for atmospheric transport modelling.
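
    A minimal sketch of the kind of bottom-up calculation described, using the widely cited Briggs final-rise approximation for buoyant plumes in neutral conditions; the choice of formula and the example stack data are assumptions for illustration, not values from the paper's database.

      G = 9.81  # gravitational acceleration, m/s^2

      def buoyancy_flux(stack_diam, exit_velocity, gas_temp, air_temp):
          """Briggs buoyancy flux F (m^4/s^3) for a vertical buoyant release."""
          return G * exit_velocity * stack_diam**2 * (gas_temp - air_temp) / (4.0 * gas_temp)

      def effective_height(stack_height, stack_diam, exit_velocity,
                           gas_temp, air_temp, wind_speed):
          """Physical stack height plus Briggs final plume rise (neutral)."""
          f = buoyancy_flux(stack_diam, exit_velocity, gas_temp, air_temp)
          rise = 21.425 * f**0.75 / wind_speed if f < 55.0 else 38.71 * f**0.6 / wind_speed
          return stack_height + rise

      # Illustrative stack: 100 m tall, 5 m exit diameter, 15 m/s exit
      # velocity, 400 K flue gas, 288 K ambient air, 5 m/s wind.
      print(effective_height(100.0, 5.0, 15.0, 400.0, 288.0, 5.0))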

  11. Computer modeling of occlusal surfaces of posterior teeth with the CICERO CAD/CAM system.

    PubMed

    Olthoff, L W; Van Der Zel, J M; De Ruiter, W J; Vlaar, S T; Bosman, F

    2000-08-01

    Static and dynamic occlusal interferences frequently need to be corrected by selective grinding of the occlusal surface of conventional cast and ceramic-fused-to-metal restorations. CAD/CAM techniques allow control of the dimensional contours of these restorations; however, the parameters responsible for the occlusal form need to be determined. In most articulators, these parameters are set as default values. Which technique is best for minimizing the introduction of occlusal interference in restorations has not been determined. This study investigated differences between the structure of a crown designed in static occlusion (STA) and designs adapted for dynamic occlusal interferences. To this end, values from an optoelectronic registration system (String-Condylocomp, KAVO; CON), an occlusal generated path (OGP) technique, and default settings (DEF) were used in the CICERO CAD/CAM system. The morphology of the CON, DEF, and OGP crowns was compared with that of the STA crown with respect to differences in a buccolingual section and the frequency of occlusal distances in an interocclusal range of 1 mm, measured from the occlusal surface of the crown. All crown types fulfilled the esthetic and morphologic criteria for restorations in clinical dentistry. The difference in morphology of the OGP crown, compared with that of the STA crown, was greater than that for the CON and DEF crowns. These differences were seen especially in the distobuccal part of the occlusal surface; however, the number of occlusal contacts was considered sufficient to stabilize occlusion. Functional occlusion, adapted to dynamic occlusion in a CICERO crown for the first mandibular molar, can be obtained using data acquired with the String-Condylocomp registration system. The OGP technique was preferred to the other techniques because of its simplicity in eliminating potential problems with opposing teeth during motion. However, this is achieved at the cost of fewer points of contact during occlusion than with the CON crown.

  12. Approaches to Estimate Consumer Exposure under TSCA

    EPA Pesticide Factsheets

    CEM contains a combination of models and default parameters which are used to estimate inhalation, dermal, and oral exposures to consumer products and articles for a wide variety of product and article use categories.

  13. Risk factors for default from tuberculosis treatment in HIV-infected individuals in the state of Pernambuco, Brazil: a prospective cohort study

    PubMed Central

    2011-01-01

    Background Concomitant treatment of Human Immunodeficiency Virus (HIV) infection and tuberculosis (TB) presents a series of challenges for treatment compliance for both providers and patients. We carried out this study to identify risk factors for default from TB treatment in people living with HIV. Methods We conducted a cohort study to monitor HIV/TB co-infected subjects in Pernambuco, Brazil, on a monthly basis, until completion or default of treatment for TB. Logistic regression was used to calculate crude and adjusted odds ratios, 95% confidence intervals and P-values. Results From a cohort of 2310 HIV subjects, 390 individuals (16.9%) who had started treatment after a diagnosis of TB were selected, and data on 273 individuals who completed or defaulted on treatment for TB were analyzed. The default rate was 21.7% and the following risk factors were identified: male gender, smoking and CD4 T-cell count less than 200 cells/mm3. Age over 29 years, complete or incomplete secondary or university education and the use of highly active antiretroviral therapy (HAART) were identified as protective factors for the outcome. Conclusion The results point to the need for more specific actions, aiming to reduce the default from TB treatment in males, younger adults with low education, smokers and people with CD4 T-cell counts < 200 cells/mm3. Default was less likely to occur in patients under HAART, reinforcing the strategy of early initiation of HAART in individuals with TB. PMID:22176628

  14. Neuromodulation impact on nonlinear firing behavior of a reduced model motoneuron with the active dendrite

    PubMed Central

    Kim, Hojeong; Heckman, C. J.

    2014-01-01

    Neuromodulatory inputs from brainstem systems modulate the normal function of spinal motoneurons by altering the activation properties of persistent inward currents (PICs) in their dendrites. However, the effect of the PIC on firing outputs also depends on its location in the dendritic tree. To investigate the interaction between PIC neuromodulation and PIC location dependence, we used a two-compartment model that was biologically realistic in that it retained the directional and frequency-dependent electrical coupling between the soma and the dendrites seen in multi-compartment models based on full anatomical reconstructions of motoneurons. Our two-compartment approach allowed us to systematically vary the coupling parameters between the soma and the dendrite to accurately reproduce the effect of the location of the dendritic PIC on the generation of nonlinear (hysteretic) motoneuron firing patterns. Our results show that as a single parameter value for PIC activation was either increased or decreased by 20% from its default value, the solution space of the coupling parameter values for nonlinear firing outputs was drastically reduced, by approximately 80%. As a result, the model tended to fire only in a linear mode at the majority of dendritic PIC sites. The same results were obtained when all parameters for PIC activation were simultaneously changed by only approximately ±10%. Our results suggest a democratization effect of neuromodulation: neuromodulation by the brainstem systems may play a role in switching motoneurons with PICs at different dendritic locations to a similar mode of firing by reducing the effect of the dendritic location of PICs on the firing behavior. PMID:25309410

  15. Quantitative and descriptive comparison of four acoustic analysis systems: vowel measurements.

    PubMed

    Burris, Carlyn; Vorperian, Houri K; Fourakis, Marios; Kent, Ray D; Bolt, Daniel M

    2014-02-01

    This study examines the accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL, using synthesized and natural vowels. Features of the AASPs are also described. Synthesized and natural vowels were analyzed using each AASP's default settings to secure 9 acoustic measures: fundamental frequency (F0), formant frequencies (F1-F4), and formant bandwidths (B1-B4). The discrepancy between the software-measured values and the input values (synthesized, previously reported, and manual measurements) was used to assess comparability and accuracy. Basic AASP features are described. Results indicate that Praat, WaveSurfer, and TF32 generate accurate and comparable F0 and F1-F4 data for synthesized vowels and adult male natural vowels. Results varied by vowel for women and children, with some serious errors. Bandwidth measurements by the AASPs were highly inaccurate compared with manual measurements and published data on formant bandwidths. Values of F0 and F1-F4 are generally consistent and fairly accurate for adult vowels and for some child vowels using the default settings in Praat, WaveSurfer, and TF32. Manipulation of default settings yields improved output values in TF32 and CSL. Caution is recommended especially before accepting F1-F4 results for children and B1-B4 results for all speakers.
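
    For readers who want a package-independent baseline, formant candidates can be computed with the classic LPC-root method that such packages implement internally. A minimal sketch, assuming a mono vowel recording; the file name, sampling rate, model order, and bandwidth cutoff are illustrative choices, not any package's defaults.

      import numpy as np
      import librosa

      y, sr = librosa.load("vowel.wav", sr=10000)   # 10 kHz suffices for F1-F4
      a = librosa.lpc(y, order=int(2 + sr / 1000))  # rule-of-thumb LPC order
      roots = np.roots(a)
      roots = roots[np.imag(roots) > 0]             # one root per conjugate pair
      freqs = np.angle(roots) * sr / (2 * np.pi)    # pole angles -> frequencies (Hz)
      bands = -(sr / np.pi) * np.log(np.abs(roots)) # pole radii -> bandwidths (Hz)
      formants = sorted(f for f, b in zip(freqs, bands) if f > 90 and b < 400)
      print(formants[:4])                           # candidate F1-F4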

  16. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    NASA Astrophysics Data System (ADS)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
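
    A minimal sketch of the optimization loop described; the parameter names and grids are illustrative, and the objective stub stands in for running BSE and measuring the resulting brain mask (the paper maximizes the surface-area-to-volume ratio of the extracted brain).

      import itertools

      def objective(params):
          """Stub: run brain extraction with `params` and return the mask's
          surface-area-to-volume ratio. Replaced by a dummy smooth function
          so that the sketch runs stand-alone."""
          return -(params["diffusion_constant"] - 25) ** 2 \
                 - 10 * (params["edge_constant"] - 0.62) ** 2

      grid = {  # illustrative ranges for four BSE-style parameters
          "diffusion_iterations": [3, 5, 7],
          "diffusion_constant":   [15, 25, 35],
          "edge_constant":        [0.50, 0.62, 0.75],
          "erosion_size":         [1, 2],
      }
      candidates = (dict(zip(grid, v)) for v in itertools.product(*grid.values()))
      best = max(candidates, key=objective)
      print(best)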

  17. A default Bayesian hypothesis test for mediation.

    PubMed

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  18. CEM-Consumer Exposure Model Download and Install Instructions

    EPA Pesticide Factsheets

    CEM contains a combination of models and default parameters which are used to estimate inhalation, dermal, and oral exposures to consumer products and articles for a wide variety of product and article use categories.

  19. Application of the migration models implemented in the decision system MOIRA-PLUS to assess the long term behaviour of (137)Cs in water and fish of the Baltic Sea.

    PubMed

    Monte, Luigi

    2014-08-01

    This work presents and discusses the results of an application of the contaminant migration models implemented in the decision support system MOIRA-PLUS to simulate the time behaviour of the concentrations of (137)Cs of Chernobyl origin in the water and fish of the Baltic Sea. The results of the models were compared with extensive sets of highly reliable empirical data on radionuclide contamination, available from international databases and covering a period of approximately twenty years. The model application involved three main phases: (a) customisation, performed using hydrological, morphometric and water-circulation data obtained from the literature; (b) a blind test of the model results, in the sense that the models made use of default values of the migration parameters to predict the dynamics of the contaminant in the environmental components; and (c) adjustment of the model parameter values to improve the agreement of the predictions with the empirical data. The results of the blind test showed that the models successfully predicted the empirical contamination values within the expected range of uncertainty of the predictions (approximately a factor of 2 at the 68% confidence level). The parameter adjustment can be helpful for assessing the fluxes of water circulating among the main sub-basins of the Baltic Sea, substantiating the usefulness of radionuclides as tracers of the movement of masses of water in seas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Anterior Cingulate Engagement in a Foraging Context Reflects Choice Difficulty, Not Foraging Value

    PubMed Central

    Shenhav, Amitai; Straccia, Mark A.; Cohen, Jonathan D.; Botvinick, Matthew M.

    2014-01-01

    Previous theories predict that the human dorsal anterior cingulate cortex (dACC) should respond to decision difficulty. An alternative theory has recently been advanced which proposes that dACC evolved to represent the value of “non-default” foraging behavior, calling into question its role in choice difficulty. However, this new theory does not take into account that choosing whether or not to pursue foraging-like behavior can also be more difficult than simply resorting to a “default.” The results of two neuroimaging experiments show that dACC is only associated with foraging value when foraging value is confounded with choice difficulty; when the two are dissociated, dACC engagement is explained only by choice difficulty, and not by the value of foraging. In addition to refuting this new theory, our studies help to formalize a fundamental connection between choice difficulty and foraging-like decisions, while also prescribing a solution for a common pitfall in studies of reward-based decision making. PMID:25064851

  1. Parameterizing the binding properties of dissolved organic matter with default values skews the prediction of copper solution speciation and ecotoxicity in soil.

    PubMed

    Djae, Tanalou; Bravin, Matthieu N; Garnier, Cédric; Doelsch, Emmanuel

    2017-04-01

    Parameterizing speciation models by setting the percentage of dissolved organic matter (DOM) that is reactive (% r-DOM) toward metal cations at a single 65% default value is very common in predictive ecotoxicology. The authors tested this practice by comparing the free copper activity (pCu2+ = −log10[Cu2+]) measured in 55 soil sample solutions with the pCu2+ predicted with the Windermere humic aqueous model (WHAM) parameterized by default. Predictions of Cu toxicity to soil organisms based on measured or predicted pCu2+ were also compared. Default WHAM parameterization substantially skewed the prediction of measured pCu2+ by up to 2.7 pCu2+ units (root mean square residual = 0.75-1.3) and subsequently the prediction of Cu toxicity for microbial functions, invertebrates, and plants by up to 36%, 45%, and 59% (root mean square residuals ≤9%, 11%, and 17%), respectively. Reparameterizing WHAM by optimizing the 2 DOM binding properties (i.e., % r-DOM and the Cu complexation constant) within a physically realistic value range much improved the prediction of measured pCu2+ (root mean square residual = 0.14-0.25). Accordingly, this WHAM parameterization successfully predicted Cu toxicity for microbial functions, invertebrates, and plants (root mean square residual ≤3.4%, 4.4%, and 5.8%, respectively). Thus, it is essential to account for the real heterogeneity in DOM binding properties for relatively accurate prediction of Cu speciation in soil solution and Cu toxic effects on soil organisms. Environ Toxicol Chem 2017;36:898-905. © 2016 SETAC.

  2. Taming parallel I/O complexity with auto-tuning

    DOE PAGES

    Behzad, Babak; Luu, Huong Vu Thanh; Huchette, Joseph; ...

    2013-11-17

    We present an auto-tuning system for optimizing I/O performance of HDF5 applications and demonstrate its value across platforms, applications, and at scale. The system uses a genetic algorithm to search a large space of tunable parameters and to identify effective settings at all layers of the parallel I/O stack. The parameter settings are applied transparently by the auto-tuning system via dynamically intercepted HDF5 calls. To validate our auto-tuning system, we applied it to three I/O benchmarks (VPIC, VORPAL, and GCRM) that replicate the I/O activity of their respective applications. We tested the system with different weak-scaling configurations (128, 2048, and 4096 CPU cores) that generate 30 GB to 1 TB of data, and executed these configurations on diverse HPC platforms (Cray XE6, IBM BG/P, and Dell Cluster). In all cases, the auto-tuning framework identified tunable parameters that substantially improved write performance over default system settings. In conclusion, we consistently demonstrate I/O write speedups between 2x and 100x for test configurations.
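
    A stripped-down sketch of evolutionary search over an I/O tuning space of the kind described (selection plus mutation standing in for the full genetic algorithm); the tunables, their value ranges, and the timing stub are illustrative assumptions, not the paper's actual search space.

      import random

      SPACE = {  # illustrative knobs spanning layers of the parallel I/O stack
          "stripe_count":   [4, 8, 16, 32, 64],
          "stripe_size_mb": [1, 4, 16, 64],
          "cb_nodes":       [1, 2, 4, 8],
          "alignment_kb":   [64, 256, 1024],
      }

      def write_time(cfg):
          """Stub: apply `cfg` (e.g. via intercepted HDF5 calls), run the
          I/O benchmark, and return the elapsed write time. Randomized here."""
          return random.random()

      def mutate(cfg):
          key = random.choice(list(SPACE))
          return {**cfg, key: random.choice(SPACE[key])}

      pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(16)]
      for _ in range(20):                  # generations
          pop.sort(key=write_time)         # fitness: faster writes rank first
          pop = pop[:8] + [mutate(random.choice(pop[:8])) for _ in range(8)]
      print(pop[0])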

  3. Unified Scaling Law for flux pinning in practical superconductors: II. Parameter testing, scaling constants, and the Extrapolative Scaling Expression

    NASA Astrophysics Data System (ADS)

    Ekin, Jack W.; Cheggour, Najib; Goodrich, Loren; Splett, Jolene; Bordini, Bernardo; Richter, David

    2016-12-01

    A scaling study of several thousand Nb3Sn critical-current (Ic) measurements is used to derive the Extrapolative Scaling Expression (ESE), a relation that can quickly and accurately extrapolate limited datasets to obtain full three-dimensional dependences of Ic on magnetic field (B), temperature (T), and mechanical strain (ε). The relation has the advantage of being easy to implement, and offers significant savings in sample characterization time and a useful tool for magnet design. Thorough data-based analysis of the general parameterization of the Unified Scaling Law (USL) shows the existence of three universal scaling constants for practical Nb3Sn conductors. The study also identifies the scaling parameters that are conductor specific and need to be fitted to each conductor. This investigation includes two new, rare, and very large Ic(B,T,ε) datasets (each with nearly a thousand Ic measurements spanning magnetic fields from 1 to 16 T, temperatures from ∼2.26 to 14 K, and intrinsic strains from −1.1% to +0.3%). The results are summarized in terms of the general USL parameters given in table 3 of Part 1 (Ekin J W 2010 Supercond. Sci. Technol. 23 083001) of this series of articles. The scaling constants determined for practical Nb3Sn conductors are: the upper-critical-field temperature parameter v = 1.50 ± 0.04, the cross-link parameter w = 3.0 ± 0.3, and the strain curvature parameter u = 1.7 ± 0.1 (from equation (29) for b_c2(ε) in Part 1). These constants and the required fitting parameters result in the ESE relation

      Ic(B,T,ε) B = C [b_c2(ε)]^s (1 − t^1.5)^(η−μ) (1 − t^2)^μ b^p (1 − b)^q,

    with reduced magnetic field b ≡ B/Bc2*(T,ε) and reduced temperature t ≡ T/Tc*(ε), where

      Bc2*(T,ε) = Bc2*(0,0) (1 − t^1.5) b_c2(ε),
      Tc*(ε) = Tc*(0) [b_c2(ε)]^(1/3),

    and fitting parameters C, Bc2*(0,0), Tc*(0), s, and either η or μ (but not both), plus the parameters in the strain function b_c2(ε). The pinning-force shape parameters p and q are also preferably fitted (simultaneously with the other parameters), but default values p = 0.5 and q = 2.0 also give high fitting accuracy when the range of relative magnetic fields is not extensive. Default values are also essential when the magnetic-field data range is insufficient to determine p and q. The scaling constants are remarkably stable (changes of less than ∼1%) with respect to different values of p and q, Nb3Sn conductor configurations, magnetic self-field corrections, and pinning-force trim values. The results demonstrate that the scaling of transport critical current holds down to the lowest temperatures measured, ∼2.2 K, for both magnetic self-field corrected and uncorrected data. An initial comparison is also made between transport and magnetization scaling data in matched Nb3Sn samples, and significant differences are found, especially for the upper critical field Bc2*(T,ε), which may be a result of inhomogeneous shielding currents. In Part 3 of this topical review series (Ekin J W 2017 Supercond. Sci. Technol. at press), the smallest practical minimum dataset for extrapolating full Ic(B,T,ε) datasets is derived. Application of the ESE relation is illustrated in several new areas, including full characterization of Nb3Sn conductors from as little as a single Ic(B) curve when a few core parameters have been determined for similar conductors.
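
    Because the ESE relation is fully specified above, it is straightforward to implement; a minimal sketch with the default shape parameters p = 0.5 and q = 2.0, leaving the strain function b_c2(ε) and the fitted constants as user-supplied inputs.

      # Extrapolative Scaling Expression: Ic(B,T,eps)*B = C * b_c2(eps)**s *
      # (1 - t**1.5)**(eta - mu) * (1 - t**2)**mu * b**p * (1 - b)**q.
      # Only one of eta or mu is independently fitted in practice.
      def ic_ese(B, T, eps, b_c2, C, Bc2_00, Tc_0, s, eta, mu, p=0.5, q=2.0):
          Tc = Tc_0 * b_c2(eps) ** (1.0 / 3.0)        # Tc*(eps)
          t = T / Tc                                   # reduced temperature
          Bc2 = Bc2_00 * (1.0 - t**1.5) * b_c2(eps)    # Bc2*(T, eps)
          b = B / Bc2                                  # reduced field
          return (C / B) * b_c2(eps)**s * (1.0 - t**1.5)**(eta - mu) \
                 * (1.0 - t**2)**mu * b**p * (1.0 - b)**q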

  4. Default network connectivity reflects the level of consciousness in non-communicative brain-damaged patients

    PubMed Central

    Vanhaudenhuyse, Audrey; Noirhomme, Quentin; Tshibanda, Luaba J.-F.; Bruno, Marie-Aurelie; Boveroux, Pierre; Schnakers, Caroline; Soddu, Andrea; Perlbarg, Vincent; Ledoux, Didier; Brichant, Jean-François; Moonen, Gustave; Maquet, Pierre; Greicius, Michael D.

    2010-01-01

    The ‘default network’ is defined as a set of areas, encompassing posterior-cingulate/precuneus, anterior cingulate/mesiofrontal cortex and temporo-parietal junctions, that show more activity at rest than during attention-demanding tasks. Recent studies have shown that it is possible to reliably identify this network in the absence of any task, by resting state functional magnetic resonance imaging connectivity analyses in healthy volunteers. However, the functional significance of these spontaneous brain activity fluctuations remains unclear. The aim of this study was to test if the integrity of this resting-state connectivity pattern in the default network would differ in different pathological alterations of consciousness. Fourteen non-communicative brain-damaged patients and 14 healthy controls participated in the study. Connectivity was investigated using probabilistic independent component analysis, and an automated template-matching component selection approach. Connectivity in all default network areas was found to be negatively correlated with the degree of clinical consciousness impairment, ranging from healthy controls and locked-in syndrome to minimally conscious, vegetative then coma patients. Furthermore, precuneus connectivity was found to be significantly stronger in minimally conscious patients as compared with unconscious patients. Locked-in syndrome patients’ default network connectivity was not significantly different from that of controls. Our results show that default network connectivity is decreased in severely brain-damaged patients, in proportion to their degree of consciousness impairment. Future prospective studies in a larger patient population are needed in order to evaluate the prognostic value of the presented methodology. PMID:20034928

  5. Indirect estimation of emission factors for phosphate surface mining using air dispersion modeling.

    PubMed

    Tartakovsky, Dmitry; Stern, Eli; Broday, David M

    2016-06-15

    To date, phosphate surface mining suffers from a lack of reliable emission factors. In the complete absence of data from which to derive emission factors, we developed a methodology for estimating them indirectly by studying a range of possible emission factors for surface phosphate mining operations and comparing AERMOD-calculated concentrations to concentrations measured around the mine. We applied this approach to the Khneifiss phosphate mine, Syria, and the Al-Hassa and Al-Abyad phosphate mines, Jordan. The work accounts for numerous model unknowns and parameter uncertainties by applying prudent assumptions concerning the parameter values. Our results suggest that the net mining operations (bulldozing, grading and dragline) contribute rather little to ambient TSP concentrations in comparison to phosphate processing and transport. Based on our results, the common practice of deriving emission rates for phosphate mining operations from the US EPA emission factors for surface coal mining, or from the default emission factor of the EEA, seems reasonable. Yet, since multiple factors affect dispersion from surface phosphate mines, a range of emission factors, rather than a single value, was found to satisfy the model performance. Copyright © 2016 Elsevier B.V. All rights reserved.
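
    A minimal sketch of the indirect-estimation idea: modeled concentrations scale linearly with source strength in a steady-state dispersion model, so a trial emission factor can be rescaled by the measured-to-modeled concentration ratio. The stub stands in for an actual AERMOD run, and the transfer coefficient is an arbitrary placeholder.

      def modeled_concentration(emission_factor):
          """Stub: run the dispersion model (AERMOD in the paper) with source
          strengths built from `emission_factor` and return the predicted
          concentration at a monitor. Linear in the emission rate."""
          return 0.04 * emission_factor  # placeholder transfer coefficient

      def calibrated_emission_factor(trial_ef, measured_conc):
          # One linear-rescaling step; in practice a range of factors is
          # retained, since several values may satisfy model performance.
          return trial_ef * measured_conc / modeled_concentration(trial_ef)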

  6. Systematic simulations of modified gravity: chameleon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models with only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc^−1, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance in determining the parts of the chameleon parameter space which are cosmologically interesting and thus merit further study in the future.

  7. Parameter investigation with line-implicit lower-upper symmetric Gauss-Seidel on 3D stretched grids

    NASA Astrophysics Data System (ADS)

    Otero, Evelyn; Eliasson, Peter

    2015-03-01

    An implicit lower-upper symmetric Gauss-Seidel (LU-SGS) solver has been implemented as a multigrid smoother combined with a line-implicit method as an acceleration technique for Reynolds-averaged Navier-Stokes (RANS) simulation on stretched meshes. The computational fluid dynamics code concerned is Edge, an edge-based finite volume Navier-Stokes flow solver for structured and unstructured grids. The paper focuses on the investigation of the parameters related to our novel line-implicit LU-SGS solver for convergence acceleration on 3D RANS meshes. The LU-SGS parameters are the Courant-Friedrichs-Lewy number, the left-hand-side dissipation, and the convergence tolerance of the iterative solution of the linear problem arising from the linearisation of the implicit scheme. The influence of these parameters on the overall convergence is presented, and default values are defined for maximum convergence acceleration. The optimised settings are applied to 3D RANS computations for comparison with explicit and line-implicit Runge-Kutta smoothing. For most of the cases, a computing-time speed-up of about a factor of 2 is found, depending on the mesh type (in particular the boundary layer) and the magnitude of residual reduction.
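
    For illustration, a symmetric Gauss-Seidel sweep of the kind an LU-SGS smoother performs can be sketched on a small linear system. This is a generic numerical sketch, not code from the Edge solver:

    ```python
    import numpy as np

    def sgs_sweeps(A, b, x, n_sweeps=10):
        """Symmetric Gauss-Seidel for A x = b: a forward (lower-triangular)
        pass followed by a backward (upper-triangular) pass per sweep."""
        n = len(b)
        for _ in range(n_sweeps):
            for i in range(n):            # forward sweep
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            for i in reversed(range(n)):  # backward sweep
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    x = sgs_sweeps(A, b, np.zeros(3))
    print("x =", x, " residual =", b - A @ x)
    ```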

  8. Comment on Geoengineering with seagrasses: is credit due where credit is given?

    NASA Astrophysics Data System (ADS)

    Oreska, Matthew P. J.; McGlathery, Karen J.; Emmer, Igino M.; Needelman, Brian A.; Emmett-Mattox, Stephen; Crooks, Stephen; Megonigal, J. Patrick; Myers, Doug

    2018-03-01

    In their recent review, ‘Geoengineering with seagrasses: is credit due where credit is given?,’ Johannessen and Macdonald (2016) invoke the prospect of carbon offset-credit over-allocation by the Verified Carbon Standard as a pretense for their concerns about published seagrass carbon burial rate and global stock estimates. Johannessen and Macdonald (2016) suggest that projects seeking offset-credits under the Verified Carbon Standard methodology VM0033: Methodology for Tidal Wetland and Seagrass Restoration will overestimate long-term (100 yr) sediment organic carbon (SOC) storage because issues affecting carbon burial rates bias storage estimates. These issues warrant serious consideration by the seagrass research community; however, VM0033 does not refer to seagrass SOC ‘burial rates’ or ‘storage.’ Projects seeking credits under VM0033 must document greenhouse gas emission reductions over time, relative to a baseline scenario, in order to receive credits. Projects must also monitor changes in carbon pools, including SOC, to confirm that observed benefits are maintained over time. However, VM0033 allows projects to conservatively underestimate project benefits by citing default values for specific accounting parameters, including CO2 emissions reductions. We therefore acknowledge that carbon crediting methodologies such as VM0033 are sensitive to the quality of the seagrass literature, particularly when permitted default factors are based in part on seagrass burial rates. Literature-derived values should be evaluated based on the concerns raised by Johannessen and Macdonald (2016), but these issues should not lead to credit over-allocation in practice, provided VM0033 is rigorously followed. These issues may, however, affect the feasibility of particular seagrass offset projects.

  9. 12 CFR 360.6 - Treatment of financial assets transferred in connection with a securitization or participation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... or market value characteristics and the credit quality of transferred financial assets (together with... with maximizing the net present value of the financial asset. Servicers shall have the authority to modify assets to address reasonably foreseeable default, and to take other action to maximize the value...

  10. Development of municipal solid waste classification in Korea based on fossil carbon fraction.

    PubMed

    Lee, Jeongwoo; Kang, Seongmin; Kim, Seungjin; Kim, Ki-Hyun; Jeon, Eui-Chan

    2015-10-01

    Environmental problems and climate change arising from waste incineration are taken seriously worldwide. In Korea, waste disposal methods are largely classified into landfill, incineration, recycling, etc., and the amount of incinerated waste has risen by 24.5% since 2002. According to the IPCC, the fossil carbon fraction (FCF) of the waste is the main factor in estimating CO₂ emissions from waste incinerators. FCF differs depending on the characteristics of waste in each country, and a wide range of default values are proposed by the IPCC. This study examined the existing IPCC classifications and the Korean waste classification system on the basis of FCF, aiming at accurate greenhouse gas emission estimates for waste incineration. The waste characteristics suitable for sorting were classified according to FCF and form; the characteristics sorted according to FCF were paper, textiles, rubber, and leather. Paper was classified into pure paper and processed paper; textiles were classified into cotton and synthetic fibers; and rubber and leather were classified into artificial and natural. FCF was analyzed by collecting representative samples from each classification group and applying the ¹⁴C method with AMS equipment, and the measured values were compared with the default values proposed by the IPCC. For garden and park waste and plastics, the differences were within the range of the IPCC default values or negligible. However, coated paper, synthetic textiles, natural rubber, synthetic rubber, artificial leather, and other wastes showed differences of over 10% in fossil carbon content. Since the IPCC scheme comprises only nine broad qualitative classifications, emission estimates can differ greatly between the existing IPCC classification and the finer-grained waste classification used in this study.
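
    A minimal sketch of the accounting this enables, assuming the IPCC-style product of mass, carbon content, FCF and an oxidation factor, and the common approximation that FCF follows from the AMS-measured percent modern carbon (pMC) as roughly 1 - pMC/100 (bomb-carbon corrections omitted); all numbers are illustrative:

    ```python
    def fcf_from_pmc(pmc):
        """Approximate fossil carbon fraction from percent modern carbon."""
        return max(0.0, 1.0 - pmc / 100.0)

    def fossil_co2_tonnes(mass_t, dry_matter, carbon_frac, fcf, oxidation=1.0):
        """IPCC-style fossil CO2 from incinerated waste, in tonnes CO2."""
        return mass_t * dry_matter * carbon_frac * fcf * oxidation * 44.0 / 12.0

    fcf = fcf_from_pmc(pmc=35.0)   # hypothetical AMS result for coated paper
    print(f"FCF = {fcf:.2f}, CO2 = {fossil_co2_tonnes(1000.0, 0.8, 0.45, fcf):.0f} t")
    ```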

  11. Predictive Models and Tools for Screening Chemicals under TSCA: Consumer Exposure Models 1.5

    EPA Pesticide Factsheets

    CEM contains a combination of models and default parameters which are used to estimate inhalation, dermal, and oral exposures to consumer products and articles for a wide variety of product and article use categories.

  12. Simulation model calibration and validation : phase II : development of implementation handbook and short course.

    DOT National Transportation Integrated Search

    2006-01-01

    A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...

  13. "Birds of a Feather" Fail Together: Exploring the Nature of Dependency in SME Defaults.

    PubMed

    Calabrese, Raffaella; Andreeva, Galina; Ansell, Jake

    2017-08-11

    This article studies the effects of incorporating the interdependence among London small business defaults into a risk analysis framework using the data just before the financial crisis. We propose an extension from standard scoring models to take into account the spatial dimensions and the demographic characteristics of small and medium-sized enterprises (SMEs), such as legal form, industry sector, and number of employees. We estimate spatial probit models using different distance matrices based only on the spatial location or on an interaction between spatial locations and demographic characteristics. We find that the interdependence or contagion component defined on spatial and demographic characteristics is significant and that it improves the ability to predict defaults of non-start-ups in London. Furthermore, including contagion effects among SMEs alters the parameter estimates of risk determinants. The approach can be extended to other risk analysis applications where spatial risk may incorporate correlation based on other aspects. © 2017 Society for Risk Analysis.
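
    The contagion component rests on a weight matrix combining spatial and demographic proximity. A minimal sketch of building such an interaction matrix (the inverse-distance form, cutoff, and within-sector boost are illustrative choices, not the article's specification):

    ```python
    import numpy as np

    # Hypothetical SME data: coordinates in km and industry sectors.
    def interaction_matrix(coords, sectors, cutoff_km=5.0, sector_boost=2.0):
        coords = np.asarray(coords, dtype=float)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        W = np.zeros_like(d)
        near = (d > 0) & (d <= cutoff_km)
        W[near] = 1.0 / d[near]                    # spatial proximity
        same = np.asarray(sectors)[:, None] == np.asarray(sectors)[None, :]
        W *= np.where(same, sector_boost, 1.0)     # demographic similarity
        rows = W.sum(axis=1, keepdims=True)        # row-normalise
        return np.divide(W, rows, out=np.zeros_like(W), where=rows > 0)

    coords = [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0), (20.0, 20.0)]
    sectors = ["retail", "retail", "transport", "retail"]
    print(interaction_matrix(coords, sectors).round(3))
    ```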

  14. Modelling and validation land-atmospheric heat fluxes by using classical surface parameters over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Ma, W.; Ma, Y.; Hu, Z.; Zhong, L.

    2017-12-01

    In this study, the field observation sites of ITPCAS (Institute of Tibetan Plateau Research, Chinese Academy of Sciences) are first introduced. A land-atmosphere model was then initialized by ingesting AMSR-E soil moisture products, and the results were compared with the default model configuration and with long-term in situ CAMP/Tibet observations. The comparison revealed an apparent inconsistency in the model-simulated land surface heat fluxes, showing that soil moisture is sensitive to the specific model configuration. To evaluate and verify model stability, a long-term modeling study with AMSR-E soil moisture data ingestion was performed: based on test simulations, AMSR-E data were assimilated into an atmospheric model for July and August 2007. The resulting land surface fluxes agreed well with both the in situ data and the results of the default model configuration. Therefore, the simulation can be used to retrieve land surface heat fluxes from an atmospheric model over the Tibetan Plateau.

  15. Artificial neural networks as alternative tool for minimizing error predictions in manufacturing ultradeformable nanoliposome formulations.

    PubMed

    León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa

    2018-01-01

    This work was aimed at determining the feasibility of artificial neural networks (ANN), implementing backpropagation algorithms with default settings, to generate better predictive models than multiple linear regression (MLR) analysis. Timolol-loaded liposomes were used as the test case. Causal formulation factors were fed to the ANN as training data, and the number of training cycles was tuned to optimize performance by minimizing the error between predicted and real response values in the training step. Training was stopped at 10 000 cycles with 80% of the pattern values, because at this point the ANN generalized best; the minimum validation error was achieved with 12 hidden neurons in a single layer. The ANN's performance was then compared to that of the MLR using a factorial design. MLR showed good prediction ability, with errors between predicted and real values lower than 1% for some of the parameters evaluated. Optimal formulations were identified by minimizing the distance between measured and theoretical parameters and estimating the prediction errors. The results indicate that the ANN has much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of combining ANN with design of experiments, compared to conventional MLR modeling techniques.
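
    A minimal sketch of this comparison using scikit-learn, with synthetic data standing in for the liposome formulation factors; the 12 hidden neurons and 10 000-iteration cap echo the abstract, but everything else is illustrative:

    ```python
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for formulation factors -> response data.
    X, y = make_regression(n_samples=200, n_features=4, noise=5.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    ann = MLPRegressor(hidden_layer_sizes=(12,), max_iter=10000,
                       random_state=0).fit(X_tr, y_tr)   # backprop network
    mlr = LinearRegression().fit(X_tr, y_tr)             # multiple linear regression

    print("ANN MAE:", mean_absolute_error(y_te, ann.predict(X_te)))
    print("MLR MAE:", mean_absolute_error(y_te, mlr.predict(X_te)))
    ```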

  16. Building an ACT-R Reader for Eye-Tracking Corpus Data.

    PubMed

    Dotlačil, Jakub

    2018-01-01

    Cognitive architectures have often been applied to data from individual experiments. In this paper, I develop an ACT-R reader that can model a much larger set of data, eye-tracking corpus data. It is shown that the resulting model has a good fit to the data for the considered low-level processes. Unlike previous related works (most prominently, Engelmann, Vasishth, Engbert & Kliegl), the model achieves the fit by estimating free parameters of ACT-R using Bayesian estimation and Markov-Chain Monte Carlo (MCMC) techniques, rather than by relying on the mix of manual selection + default values. The method used in the paper is generalizable beyond this particular model and data set and could be used on other ACT-R models. Copyright © 2017 Cognitive Science Society, Inc.
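
    A toy Metropolis-Hastings sampler illustrates the estimation idea: rather than fixing an ACT-R free parameter at its default, sample its posterior against observed times. The parameter here is a hypothetical latency factor F in RT = F·exp(−activation); data and model are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "observed" reading times generated with a true F of 0.3.
    activations = rng.normal(0.5, 0.2, size=50)
    rts = 0.3 * np.exp(-activations) + rng.normal(0.0, 0.02, size=50)

    def log_post(F):
        """Gaussian likelihood with a flat prior on F > 0."""
        if F <= 0.0:
            return -np.inf
        resid = rts - F * np.exp(-activations)
        return -0.5 * np.sum((resid / 0.02) ** 2)

    F, chain = 1.0, []                       # start far from the true value
    for _ in range(5000):
        prop = F + rng.normal(0.0, 0.05)     # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(F):
            F = prop
        chain.append(F)
    print("posterior mean F:", np.mean(chain[1000:]))   # discard burn-in
    ```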

  17. Global Parameter Optimization of CLM4.5 Using Sparse-Grid Based Surrogates

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Gu, L.

    2016-12-01

    Calibration of the Community Land Model (CLM) is challenging because of its model complexity, large parameter sets, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time. The goal of this study is to calibrate some of the CLM parameters in order to improve model projection of carbon fluxes. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first use advanced sparse grid (SG) interpolation to construct a surrogate system of the actual CLM model, and then we calibrate the surrogate model in the optimization process. As the surrogate model is a polynomial whose evaluation is fast, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate five parameters against 12 months of GPP, NEP, and TLAI data from the U.S. Missouri Ozark (US-MOz) tower. The results indicate that an accurate surrogate model can be created for the CLM4.5 with a relatively small number of SG points (i.e., CLM4.5 simulations), and the application of the optimized parameters leads to a higher predictive capacity than the default parameter values in the CLM4.5 for the US-MOz site.
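
    A minimal sketch of surrogate-based calibration with a one-parameter toy model in place of CLM4.5: sample the expensive model at design points, fit a cheap polynomial surrogate, and optimize that instead. A real sparse-grid surrogate would use hierarchical interpolation nodes in several dimensions:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def expensive_model(x):                   # pretend each call costs hours
        return (x - 0.3) ** 2 + 0.1 * np.sin(8.0 * x)

    xs = np.linspace(0.0, 1.0, 9)             # design points
    coeffs = np.polyfit(xs, expensive_model(xs), deg=4)

    def surrogate(x):                         # fast polynomial stand-in
        return float(np.polyval(coeffs, np.ravel(x)[0]))

    res = minimize(surrogate, x0=[0.5], bounds=[(0.0, 1.0)])
    x_opt = float(res.x[0])
    print(f"surrogate optimum x={x_opt:.3f}, true value {expensive_model(x_opt):.4f}")
    ```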

  18. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

    Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods mostly use unconstrained optimization, so simulations with the "optimal" parameters may violate conditions the model must maintain. In this study, the radiation balance constraint is taken as an example and incorporated into an automatic parameter optimization procedure; the Lagrange multiplier method is used to solve the resulting constrained optimization problem. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. The objective of the optimization is a synthesized metric combining global means of radiation, precipitation, relative humidity, and temperature, while the conditions that FLUT and FSNTOA must satisfy are treated as constraints: the global averages of FLUT and FSNTOA are each required to be approximately 240 W m⁻² in CAM5. Experimental results show that the synthesized metric is 13.6% better than the control run, while both FLUT and FSNTOA stay close to the constrained conditions. The FLUT condition is well satisfied, clearly better than the annual-mean FLUT obtained with the default parameters; FSNTOA deviates slightly from the target value, but the relative error is less than 7.7‰.
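
    A minimal sketch of tuning under an equality constraint, using SLSQP (equivalent in spirit to the Lagrange-multiplier formulation); the toy skill and flux functions merely stand in for CAM5 metrics, with 240 W m⁻² mirroring the target above:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def skill(p):                 # synthesized error metric to minimize
        return (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2

    def flux(p):                  # stand-in for the simulated global-mean FLUT
        return 230.0 + 4.0 * p[0] + 3.0 * p[1]

    cons = {"type": "eq", "fun": lambda p: flux(p) - 240.0}   # balance target
    res = minimize(skill, x0=np.zeros(2), constraints=[cons], method="SLSQP")
    print("tuned parameters:", res.x, " constrained flux:", flux(res.x))
    ```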

  19. Developing R&D portfolio business validity simulation model and system.

    PubMed

    Yeo, Hyun Jin; Im, Kwang Hyuk

    2015-01-01

    R&D has been recognized as a critical means for companies and nations alike to gain competitiveness through the value it creates, such as patents and new products. R&D is therefore a burden for decision makers: it is hard to decide how much money to invest, how much time to spend, and which technology to develop, because R&D consumes resources such as budget, time, and manpower. Although there is diverse research on R&D evaluation, business factors have received little attention, because almost all previous studies are technology-oriented evaluations based on a single R&D technology. We previously proposed an R&D business-aspect evaluation model consisting of nine business model components. In this research, we develop a simulation model and system that evaluates a company's or industry's R&D portfolio from a business model point of view, and we clarify the default and control parameters in each evaluation module, integrated into one screen, to facilitate the evaluator's business validity assessment.

  20. Developing R&D Portfolio Business Validity Simulation Model and System

    PubMed Central

    2015-01-01

    R&D has been recognized as a critical means for companies and nations alike to gain competitiveness through the value it creates, such as patents and new products. R&D is therefore a burden for decision makers: it is hard to decide how much money to invest, how much time to spend, and which technology to develop, because R&D consumes resources such as budget, time, and manpower. Although there is diverse research on R&D evaluation, business factors have received little attention, because almost all previous studies are technology-oriented evaluations based on a single R&D technology. We previously proposed an R&D business-aspect evaluation model consisting of nine business model components. In this research, we develop a simulation model and system that evaluates a company's or industry's R&D portfolio from a business model point of view, and we clarify the default and control parameters in each evaluation module, integrated into one screen, to facilitate the evaluator's business validity assessment. PMID:25893209

  1. 77 FR 46699 - Honey From the People's Republic of China: Preliminary Results of Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-06

    ... quantity and value, its separate rate status, structure and affiliations, sales process, accounting and... quantity and value, separate rate status, structure and affiliations, sales process, accounting and... (CIT August 10, 2009) (''Commerce may, of course, begin its total AFA selection process by defaulting...

  2. Greater preference consistency during the Willingness-to-Pay task is related to higher resting state connectivity between the ventromedial prefrontal cortex and the ventral striatum.

    PubMed

    Mackey, Scott; Olafsson, Valur; Aupperle, Robin L; Lu, Kun; Fonzo, Greg A; Parnass, Jason; Liu, Thomas; Paulus, Martin P

    2016-09-01

    The significance of why a similar set of brain regions is associated with the default mode network and value-related neural processes remains to be clarified. Here, we examined i) whether brain regions exhibiting willingness-to-pay (WTP) task-related activity are intrinsically connected when the brain is at rest, ii) whether these regions overlap spatially with the default mode network, and iii) whether individual differences in choice behavior during the WTP task are reflected in functional brain connectivity at rest. Blood-oxygen-level dependent (BOLD) signal was measured by functional magnetic resonance imaging while subjects performed the WTP task and at rest with eyes open. Brain regions that tracked the value of bids during the WTP task were used as seed regions in an analysis of functional connectivity in the resting state data. The seed in the ventromedial prefrontal cortex was functionally connected to core regions of the WTP task-related network. Brain regions within the WTP task-related network, namely the ventral precuneus, ventromedial prefrontal and posterior cingulate cortex, overlapped spatially with publicly available maps of the default mode network. Also, those individuals with higher functional connectivity during rest between the ventromedial prefrontal cortex and the ventral striatum showed greater preference consistency during the WTP task. Thus, WTP task-related regions are an intrinsic network of the brain that corresponds spatially with the default mode network, and individual differences in functional connectivity within the WTP network at rest may reveal a priori biases in choice behavior.

  3. Greater preference consistency during the Willingness-to-Pay task is related to higher resting state connectivity between the ventromedial prefrontal cortex and the ventral striatum

    PubMed Central

    Mackey, Scott; Olafsson, Valur; Aupperle, Robin; Lu, Kun; Fonzo, Greg; Parnass, Jason; Liu, Thomas; Paulus, Martin P.

    2015-01-01

    The significance of why a similar set of brain regions is associated with the default mode network and value-related neural processes remains to be clarified. Here, we examined i) whether brain regions exhibiting willingness-to-pay (WTP) task-related activity are intrinsically connected when the brain is at rest, ii) whether these regions overlap spatially with the default mode network, and iii) whether individual differences in choice behavior during the WTP task are reflected in functional brain connectivity at rest. Blood-oxygen-level dependent (BOLD) signal was measured by functional magnetic resonance imaging while subjects performed the WTP task and at rest with eyes open. Brain regions that tracked the value of bids during the WTP task were used as seed regions in an analysis of functional connectivity in the resting state data. The seed in the ventromedial prefrontal cortex was functionally connected to core regions of the WTP task-related network. Brain regions within the WTP task-related network, namely the ventral precuneus, ventromedial prefrontal and posterior cingulate cortex, overlapped spatially with publicly available maps of the default mode network. Also, those individuals with higher functional connectivity during rest between the ventromedial prefrontal cortex and the ventral striatum showed greater preference consistency during the WTP task. Thus, WTP task-related regions are an intrinsic network of the brain that corresponds spatially with the default mode network, and individual differences in functional connectivity within the WTP network at rest may reveal a priori biases in choice behavior. PMID:26271206

  4. Neural correlates of childhood trauma with executive function in young healthy adults.

    PubMed

    Lu, Shaojia; Pan, Fen; Gao, Weijia; Wei, Zhaoguo; Wang, Dandan; Hu, Shaohua; Huang, Manli; Xu, Yi; Li, Lingjiang

    2017-10-03

    The aim of this study was to investigate the relationship among childhood trauma, executive impairments, and altered resting-state brain function in young healthy adults. Twenty-four subjects with childhood trauma and 24 age- and gender-matched subjects without childhood trauma were recruited. Executive function was assessed by a series of validated test procedures. Localized brain activity was evaluated by the fractional amplitude of low frequency fluctuation (fALFF) method and compared between the two groups. Areas with altered fALFF were further selected as seeds in a subsequent functional connectivity analysis. Correlations of fALFF and connectivity values with severity of childhood trauma and executive dysfunction were analyzed as well. Subjects with childhood trauma exhibited impaired executive function as assessed by the Wisconsin Card Sorting Test and Stroop Color Word Test. Traumatic individuals also showed increased fALFF in the right precuneus and decreased fALFF in the right superior temporal gyrus. Significant correlations of specific childhood trauma severity with executive dysfunction and fALFF value in the right precuneus were found in the whole sample. In addition, individuals with childhood trauma also exhibited diminished precuneus-based connectivity in the default mode network with left ventromedial prefrontal cortex, left orbitofrontal cortex, and right cerebellum. Decreased default mode network connectivity was also associated with childhood trauma severity and executive dysfunction. The present findings suggest that childhood trauma is associated with executive deficits and aberrant default mode network functions even in healthy adults. Moreover, this study demonstrates that executive dysfunction is related to disrupted default mode network connectivity.

  5. A case study of tuning MapReduce for efficient Bioinformatics in the cloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Lizhen; Wang, Zhong; Yu, Weikuan

    The combination of the Hadoop MapReduce programming model and cloud computing allows biological scientists to analyze next-generation sequencing (NGS) data in a timely and cost-effective manner. Cloud computing platforms remove the burden of IT facility procurement and management from end users and provide ease of access to Hadoop clusters. However, biological scientists are still expected to choose appropriate Hadoop parameters for running their jobs. More importantly, the available Hadoop tuning guidelines are either obsolete or too general to capture the particular characteristics of bioinformatics applications. In this paper, we aim to minimize the cloud computing cost spent on bioinformatics data analysis by optimizing the extracted significant Hadoop parameters. When using MapReduce-based bioinformatics tools in the cloud, the default settings often lead to resource underutilization and wasteful expenses. We choose k-mer counting, a representative application used in a large number of NGS data analysis tools, as our study case. Experimental results show that, with the fine-tuned parameters, we achieve a total of 4× speedup compared with the original performance (using the default settings). Finally, this paper presents an exemplary case for tuning MapReduce-based bioinformatics applications in the cloud, and documents the key parameters that could lead to significant performance benefits.
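
    A single-process sketch of the k-mer counting pattern that MapReduce distributes; in Hadoop, the tuned parameters (split size, reducer count, buffer sizes) govern how this map and reduce work is spread across the cluster:

    ```python
    from collections import Counter
    from functools import reduce

    def map_read(read, k=4):
        """Map step: emit k-mer counts for one read."""
        return Counter(read[i:i + k] for i in range(len(read) - k + 1))

    reads = ["ACGTACGTGG", "TTACGTACGA", "GGACGTACGT"]   # toy NGS reads
    # Reduce step: sum the per-read counters into a global k-mer table.
    counts = reduce(lambda a, b: a + b, (map_read(r) for r in reads))
    print(counts.most_common(3))
    ```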

  6. The impact of manual threshold selection in medical additive manufacturing.

    PubMed

    van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan

    2017-04-01

    Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
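
    A minimal sketch of the thresholding step itself, on a synthetic volume; the grey values are hypothetical, and the point is only how much the bone mask, and hence the STL surface, shifts between a default and a manually chosen threshold:

    ```python
    import numpy as np

    # Synthetic "CT" volume in Hounsfield-like units.
    rng = np.random.default_rng(0)
    ct = rng.normal(300.0, 400.0, size=(64, 64, 64))

    def bone_mask(volume, threshold):
        """Voxels at or above the selected grey value are labelled bone."""
        return volume >= threshold

    default_mask = bone_mask(ct, 500.0)   # hypothetical default threshold
    manual_mask = bone_mask(ct, 350.0)    # hypothetical manual choice
    changed = np.mean(default_mask != manual_mask)
    print(f"voxels differing between the two masks: {changed:.1%}")
    ```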

  7. SiC JFET Transistor Circuit Model for Extreme Temperature Range

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.

    2008-01-01

    A technique for simulating extreme-temperature operation of integrated circuits that incorporate silicon carbide (SiC) junction field-effect transistors (JFETs) has been developed. The technique involves modification of NGSPICE, which is an open-source version of the popular Simulation Program with Integrated Circuit Emphasis (SPICE) general-purpose analog-integrated-circuit-simulating software. NGSPICE in its unmodified form is used for simulating and designing circuits made from silicon-based transistors that operate at or near room temperature. Two rapid modifications of NGSPICE source code enable SiC JFETs to be simulated to 500 C using the well-known Level 1 model for silicon metal oxide semiconductor field-effect transistors (MOSFETs). First, the default value of the MOSFET surface potential must be changed. In the unmodified source code, this parameter has a value of 0.6, which corresponds to slightly more than half the bandgap of silicon. In NGSPICE modified to simulate SiC JFETs, this parameter is changed to a value of 1.6, corresponding to slightly more than half the bandgap of SiC. The second modification consists of changing the temperature dependence of MOSFET transconductance and saturation parameters. The unmodified NGSPICE source code implements a T(sup -1.5) temperature dependence for these parameters. In order to mimic the temperature behavior of experimental SiC JFETs, a T(sup -1.3) temperature dependence must be implemented in the NGSPICE source code. Following these two simple modifications, the Level 1 MOSFET model of the NGSPICE circuit simulation program reasonably approximates the measured high-temperature behavior of experimental SiC JFETs properly operated with zero or reverse bias applied to the gate terminal. Modification of additional silicon parameters in the NGSPICE source code was not necessary to model experimental SiC JFET current-voltage performance across the entire temperature range from 25 to 500 C.
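
    A sketch of the two modifications in spirit, using a toy square-law (Level-1 style) saturation current in Python rather than the NGSPICE C source; parameter values are illustrative, not a validated SiC JFET model:

    ```python
    # PHI raised from 0.6 V (Si) to 1.6 V (SiC), and a T^-1.3 rather than
    # T^-1.5 scaling of the transconductance parameter. PHI enters the real
    # model's body-effect and junction equations, which this toy omits.
    PHI_SI, PHI_SIC = 0.6, 1.6   # V, roughly half the bandgap and a bit more

    def kp_at_T(kp_300, T_kelvin, exponent=-1.3):   # -1.5 in unmodified NGSPICE
        return kp_300 * (T_kelvin / 300.0) ** exponent

    def id_sat(vgs, vt0, kp, w_over_l):
        """Square-law saturation drain current for an n-channel device."""
        vov = vgs - vt0
        return 0.5 * kp * w_over_l * vov ** 2 if vov > 0.0 else 0.0

    for T in (300.0, 573.0, 773.0):                 # ~27, 300 and 500 degC
        print(f"T={T:5.0f} K  Id={id_sat(0.0, -5.0, kp_at_T(2e-4, T), 10.0):.4e} A")
    ```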

  8. Advisory Algorithm for Scheduling Open Sectors, Operating Positions, and Workstations

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Drew, Michael; Lai, Chok Fung; Bilimoria, Karl D.

    2012-01-01

    Air traffic controller supervisors configure available sector, operating position, and work-station resources to safely and efficiently control air traffic in a region of airspace. In this paper, an algorithm for assisting supervisors with this task is described and demonstrated on two sample problem instances. The algorithm produces configuration schedule advisories that minimize a cost. The cost is a weighted sum of two competing costs: one penalizing mismatches between configurations and predicted air traffic demand and another penalizing the effort associated with changing configurations. The problem considered by the algorithm is a shortest path problem that is solved with a dynamic programming value iteration algorithm. The cost function contains numerous parameters. Default values for most of these are suggested based on descriptions of air traffic control procedures and subject-matter expert feedback. The parameter determining the relative importance of the two competing costs is tuned by comparing historical configurations with corresponding algorithm advisories. Two sample problem instances for which appropriate configuration advisories are obvious were designed to illustrate characteristics of the algorithm. Results demonstrate how the algorithm suggests advisories that appropriately utilize changes in airspace configurations and changes in the number of operating positions allocated to each open sector. The results also demonstrate how the advisories suggest appropriate times for configuration changes.
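
    A minimal sketch of the dynamic-programming formulation: pick one configuration per time step to minimize a weighted sum of demand mismatch and reconfiguration effort, by backward value iteration. Costs, demand and the initial configuration are illustrative:

    ```python
    # Three candidate configurations (number of open sectors) and a per-step
    # demand forecast; W_CHANGE weights reconfiguration effort against
    # demand mismatch, echoing the two competing costs above.
    configs = [1, 2, 3]
    demand = [1, 1, 3, 3, 2]
    W_CHANGE = 0.8

    def best_schedule(configs, demand, start):
        T = len(demand)
        V = {c: 0.0 for c in configs}          # cost-to-go beyond the horizon
        best = {}
        for t in reversed(range(T)):           # backward value iteration
            V_new = {}
            for prev in configs:
                cost = {c: abs(c - demand[t]) + W_CHANGE * (c != prev) + V[c]
                        for c in configs}
                best[t, prev] = min(cost, key=cost.get)
                V_new[prev] = cost[best[t, prev]]
            V = V_new
        schedule, cur = [], start              # roll the optimal policy forward
        for t in range(T):
            cur = best[t, cur]
            schedule.append(cur)
        return schedule

    print(best_schedule(configs, demand, start=1))
    ```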

  9. The fundamental theorem of asset pricing under default and collateral in finite discrete time

    NASA Astrophysics Data System (ADS)

    Alvarez-Samaniego, Borys; Orrillo, Jaime

    2006-08-01

    We consider a financial market where time and uncertainty are modeled by a finite event-tree. The event-tree has length N, a unique initial node at the initial date, and a continuum of branches at each node of the tree. Prices and returns of J assets are modeled, respectively, by an ℝ^(2J) × ℝ^(2J)-valued stochastic process. In this framework we prove a version of the Fundamental Theorem of Asset Pricing which applies to defaultable securities backed by exogenous collateral suffering a contingent linear depreciation.

  10. VizieR Online Data Catalog: AKARI IRC asteroid sample diameters & albedos (Ali-Lagoa+, 2018)

    NASA Astrophysics Data System (ADS)

    Ali-Lagoa, V.; Mueller, T. G.; Usui, F.; Hasegawa, S.

    2017-11-01

    Table 1 contains the best-fitting values of size and beaming parameter and corresponding visible geometric albedos for the full AKARI IRC sample. We fitted the near-Earth asteroid thermal model (NEATM) of Harris (1998Icar..131..291H) to the AKARI IRC thermal infrared data (Murakami et al., 2007PASJ...59S.369M, Onaka et al., 2007PASJ...59S.401O, Ishihara et al., 2010A&A...514A...1I, Cat. II/297, Usui et al., 2011PASJ...63.1117U, Cat. J/PASJ/63/1117, Takita et al., 2012PASJ...64..126T, Hasegawa et al., 2013PASJ...65...34H, Cat. J/PASJ/65/34). The NEATM implementation is described in Ali-Lagoa and Delbo' (2017A&A...603A..55A, cat. J/A+A/603/A55). Minimum relative errors of 10, 15, and 20 percent are given for size, beaming parameter and albedo in those cases where the beaming parameter could be fitted. Otherwise, a default value of the beaming parameter is assumed based on Eq. 1 in the article, and the minimum relative errors in size and albedo increase to 20 and 40 percent (see the discussions in Mainzer et al., 2011ApJ...736..100M, Ali-Lagoa et al., 2016A&A...591A..14A, Cat. J/A+A/591/A14). We also provide the asteroid absolute magnitudes and G12 slope parameters retrieved from Oszkiewicz et al. (2012), the number of observations used in each IRC band (S9W and L18W), plus the heliocentric and geocentric distances and phase angle (r, Delta, alpha) based on the ephemerides taken from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).
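
    For reference, the standard relation between absolute magnitude H, effective diameter D (in km), and visible geometric albedo pV used in such catalogs is D = 1329·pV^(-1/2)·10^(-H/5), so a fitted diameter yields the albedo directly:

    ```python
    def albedo_from_diameter(H, D_km):
        """Visible geometric albedo pV from D = 1329 / sqrt(pV) * 10^(-H/5)."""
        return (1329.0 / D_km * 10.0 ** (-H / 5.0)) ** 2

    print(albedo_from_diameter(H=15.0, D_km=2.3))   # illustrative values
    ```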

  11. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    EPA Science Inventory

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  12. Optimization of microphysics in the Unified Model, using the Micro-genetic algorithm.

    NASA Astrophysics Data System (ADS)

    Jang, J.; Lee, Y.; Lee, H.; Lee, J.; Joo, S.

    2016-12-01

    This study focuses on parameter optimization of the microphysics in the Unified Model (UM) using the micro-genetic algorithm (Micro-GA). Optimization of UM microphysics is needed because microphysics in a Numerical Weather Prediction (NWP) model is critical to Quantitative Precipitation Forecasting (QPF). The Micro-GA searches for optimal parameters on the basis of a fitness function. Five target parameters are chosen: x1 and x2, related to the raindrop size distribution; the cloud-rain correlation coefficient; the surface droplet number; and the droplet taper height. The fitness function is based on the skill scores BIAS and Critical Success Index (CSI). An interface between UM and Micro-GA is developed and applied to three precipitation cases in Korea: (i) heavy rainfall in the southern area caused by typhoon NAKRI, (ii) heavy rainfall in the Youngdong area, and (iii) heavy rainfall in the Seoul metropolitan area. Compared to the control run using the UM default values (CNTL), the optimized parameters improve the precipitation forecast, especially for heavy rainfall at late forecast times. We also analyze the skill scores of the precipitation forecasts at various thresholds for CNTL, the fully optimized run, and runs optimizing each of the five parameters individually. In general, the improvement is largest when all five optimized parameters are used simultaneously. This study therefore demonstrates the ability to improve Korean precipitation forecasts by optimizing the microphysics in UM.
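
    A minimal sketch of a micro-GA loop, with a tiny elitist population that restarts around the elite on convergence (classic micro-GA uses no mutation); the quadratic fitness stands in for a BIAS/CSI skill score from UM runs, and the five genes mirror the five parameters above:

    ```python
    import random

    random.seed(1)

    N_GENES, POP, GENS = 5, 5, 200
    target = [0.2, 0.8, 0.5, 0.3, 0.9]     # unknown optimum (toy)

    def fitness(ind):
        return -sum((g - t) ** 2 for g, t in zip(ind, target))

    def rand_ind():
        return [random.random() for _ in range(N_GENES)]

    pop = [rand_ind() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]
        spread = max(abs(g - e) for ind in pop for g, e in zip(ind, elite))
        if spread < 1e-3:                  # converged: restart around elite
            pop = [elite] + [rand_ind() for _ in range(POP - 1)]
            continue
        # uniform crossover against the elite; no mutation in a micro-GA
        pop = [elite] + [[random.choice(pair) for pair in zip(elite, mate)]
                         for mate in pop[1:]]

    print([round(g, 2) for g in elite], round(fitness(elite), 4))
    ```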

  13. Understanding heart rate alarm adjustment in the intensive care units through an analytical approach.

    PubMed

    Fidler, Richard L; Pelter, Michele M; Drew, Barbara J; Palacios, Jorge Arroyo; Bai, Yong; Stannard, Daphne; Aldrich, J Matt; Hu, Xiao

    2017-01-01

    Heart rate (HR) alarms are prevalent in the ICU, and their parameters are configurable, yet little is known about nursing behavior in tailoring HR alarm parameters to individual patients to reduce clinical alarm fatigue. The objective was to understand the relationship between HR alarms and the adjustments made to reduce unnecessary HR alarms. We performed a retrospective, quantitative analysis of an adjudicated database to understand behaviors surrounding HR alarm parameter adjustments. Patients were sampled from five adult ICUs (77 beds) over one month at a quaternary care university medical center. A total of 337 of 461 ICU patients had HR alarms; 53.7% were male, mean age was 60.3 years, and 39% were non-Caucasian. Default HR alarm parameters were 50 and 130 beats per minute (bpm). The occurrence of each alarm, vital signs, and physiologic waveforms were stored in a relational database (SQL Server). There were 23,624 HR alarms for analysis, with 65.4% exceeding the upper heart rate limit. Only 51% of patients with HR alarms had parameters adjusted, with a median upper limit change of +5 bpm and a median lower limit change of -1 bpm. The median time to first HR parameter adjustment was 17.9 hours, without reduction in alarm occurrence (p = 0.57). HR alarms are prevalent in the ICU, and half of HR alarm settings remain at default. There is a long delay between HR alarms and parameter changes, with insufficient changes to decrease HR alarms. Increasing frequency of HR alarms shortens the time to first adjustment. Best practice guidelines for HR alarm limits are needed to reduce alarm fatigue and improve monitoring precision.

  14. Neural network topology in ADHD; evidence for maturational delay and default-mode network alterations.

    PubMed

    Janssen, T W P; Hillebrand, A; Gouw, A; Geladé, K; Van Mourik, R; Maras, A; Oosterlaan, J

    2017-11-01

    Attention-deficit/hyperactivity disorder (ADHD) has been associated with widespread brain abnormalities in white and grey matter, affecting not only local, but global functional networks as well. In this study, we explored these functional networks using source-reconstructed electroencephalography in ADHD and typically developing (TD) children. We expected evidence for maturational delay, with underlying abnormalities in the default mode network. Electroencephalograms were recorded in ADHD (n=42) and TD (n=43) during rest, and functional connectivity (phase lag index) and graph (minimum spanning tree) parameters were derived. Dependent variables were global and local network metrics in theta, alpha and beta bands. We found evidence for a more centralized functional network in ADHD compared to TD children, with decreased diameter in the alpha band (ηp² = 0.06) and increased leaf fraction (ηp² = 0.11 and 0.08) in the alpha and beta bands, with underlying abnormalities in hub regions of the brain, including default mode network. The finding of a more centralized network is in line with maturational delay models of ADHD and should be replicated in longitudinal designs. This study contributes to the literature by combining high temporal and spatial resolution to construct EEG network topology, and associates maturational-delay and default-mode interference hypotheses of ADHD. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
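
    A minimal sketch of the MST-based metrics, assuming networkx and a random symmetric matrix standing in for phase-lag-index connectivity; stronger connectivity is converted to shorter "distance" via 1 − w before extracting the tree:

    ```python
    import networkx as nx
    import numpy as np

    # Random symmetric matrix standing in for PLI connectivity between 12
    # sources; real data would come from source-space EEG.
    rng = np.random.default_rng(0)
    n = 12
    w = rng.uniform(0.1, 0.9, size=(n, n))
    w = (w + w.T) / 2.0

    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=1.0 - w[i, j])   # strong link = short edge
    mst = nx.minimum_spanning_tree(G)

    leaves = [v for v in mst.nodes if mst.degree(v) == 1]
    print("MST diameter (hops):", nx.diameter(mst))
    print("leaf fraction:", len(leaves) / n)
    ```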

  15. Default from tuberculosis treatment in Tashkent, Uzbekistan; who are these defaulters and why do they default?

    PubMed

    Hasker, Epco; Khodjikhanov, Maksad; Usarova, Shakhnoz; Asamidinov, Umid; Yuldashova, Umida; van der Werf, Marieke J; Uzakova, Gulnoz; Veen, Jaap

    2008-07-22

    In Tashkent (Uzbekistan), TB treatment is provided in accordance with the DOTS strategy. Of 1087 pulmonary TB patients started on treatment in 2005, 228 (21%) defaulted. This study investigates who the defaulters in Tashkent are, when they default and why they default. We reviewed the records of 126 defaulters (cases) and 132 controls and collected information on time of default, demographic factors, social factors, potential risk factors for default, characteristics of treatment and recorded reasons for default. Unemployment, being a pensioner, alcoholism and homelessness were associated with default. Patients defaulted mostly during the intensive phase, while they were hospitalized (61%), or just before they were to start the continuation phase (26%). Reasons for default listed in the records were various, 'Refusal of further treatment' (27%) and 'Violation of hospital rules' (18%) were most frequently recorded. One third of the recorded defaulters did not really default but continued treatment under 'non-DOTS' conditions. Whereas patient factors such as unemployment, being a pensioner, alcoholism and homelessness play a role, there are also system factors that need to be addressed to reduce default. Such system factors include the obligatory admission in TB hospitals and the inadequately organized transition from hospitalized to ambulatory treatment.

  16. Default from tuberculosis treatment in Tashkent, Uzbekistan; Who are these defaulters and why do they default?

    PubMed Central

    Hasker, Epco; Khodjikhanov, Maksad; Usarova, Shakhnoz; Asamidinov, Umid; Yuldashova, Umida; Werf, Marieke J van der; Uzakova, Gulnoz; Veen, Jaap

    2008-01-01

    Background In Tashkent (Uzbekistan), TB treatment is provided in accordance with the DOTS strategy. Of 1087 pulmonary TB patients started on treatment in 2005, 228 (21%) defaulted. This study investigates who the defaulters in Tashkent are, when they default and why they default. Methods We reviewed the records of 126 defaulters (cases) and 132 controls and collected information on time of default, demographic factors, social factors, potential risk factors for default, characteristics of treatment and recorded reasons for default. Results Unemployment, being a pensioner, alcoholism and homelessness were associated with default. Patients defaulted mostly during the intensive phase, while they were hospitalized (61%), or just before they were to start the continuation phase (26%). Reasons for default listed in the records were various, 'Refusal of further treatment' (27%) and 'Violation of hospital rules' (18%) were most frequently recorded. One third of the recorded defaulters did not really default but continued treatment under 'non-DOTS' conditions. Conclusion Whereas patient factors such as unemployment, being a pensioner, alcoholism and homelessness play a role, there are also system factors that need to be addressed to reduce default. Such system factors include the obligatory admission in TB hospitals and the inadequately organized transition from hospitalized to ambulatory treatment. PMID:18647400

  17. Bank Regulation: Analysis of the Failure of Superior Bank, FSB, Hinsdale, Illinois

    DTIC Science & Technology

    2002-02-07

    statement of financial position based on the fair value. The best evidence of fair value is a quoted market price in an active market, but if there is no...market price, the value must be estimated. In estimating the fair value of retained interests, valuation techniques include estimating the present...about interest rates, default, prepayment, and volatility. In 1999, FASB explained that when estimating the fair value for FAS No. 140: Accounting for

  18. Overestimating resource value and its effects on fighting decisions.

    PubMed

    Dugatkin, Lee Alan; Dugatkin, Aaron David

    2011-01-01

    Much work in behavioral ecology has shown that animals fight over resources such as food, and that they make strategic decisions about when to engage in such fights. Here, we examine the evolution of one, heretofore unexamined, component of that strategic decision about whether to fight for a resource. We present the results of a computer simulation that examined the evolution of over- or underestimating the value of a resource (food) as a function of an individual's current hunger level. In our model, animals fought for food when they perceived their current food level to be below the mean for the environment. We considered seven strategies for estimating food value: 1) always underestimate food value, 2) always overestimate food value, 3) never over- or underestimate food value, 4) overestimate food value when hungry, 5) underestimate food value when hungry, 6) overestimate food value when relatively satiated, and 7) underestimate food value when relatively satiated. We first competed all seven strategies against each other when they began at approximately equal frequencies. In such a competition, two strategies--"always overestimate food value," and "overestimate food value when hungry"--were very successful. We next competed each of these strategies against the default strategy of "never over- or underestimate," when the default strategy was set at 99% of the population. Again, the strategies of "always overestimate food value" and "overestimate food value when hungry" fared well. Our results suggest that overestimating food value when deciding whether to fight should be favored by natural selection.
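
    A minimal sketch of the decision rule described above, with hypothetical strategy factors and thresholds; the point is only how a strategy's distortion of perceived resource value feeds the fight decision:

    ```python
    # "over_when_hungry" was one of the two successful strategies reported
    # above; the numeric factors and thresholds here are illustrative.
    ENV_MEAN, FIGHT_COST = 10.0, 1.0

    def perceived_value(true_value, hungry, strategy):
        factor = {
            "never": 1.0,
            "always_over": 1.5,
            "always_under": 0.5,
            "over_when_hungry": 1.5 if hungry else 1.0,
        }[strategy]
        return true_value * factor

    def fights(food_level, item_value, strategy):
        hungry = food_level < ENV_MEAN       # fight only when below the mean
        return hungry and perceived_value(item_value, hungry, strategy) > FIGHT_COST

    for s in ("never", "always_over", "over_when_hungry"):
        print(s, fights(food_level=6.0, item_value=0.9, strategy=s))
    ```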

  19. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.

  20. Recovering stellar population parameters via two full-spectrum fitting algorithms in the absence of model uncertainties

    NASA Astrophysics Data System (ADS)

    Ge, Junqiang; Yan, Renbin; Cappellari, Michele; Mao, Shude; Li, Hongyu; Lu, Youjun

    2018-05-01

    Using mock spectra based on the Vazdekis/MILES library fitted within the wavelength region 3600-7350 Å, we analyze the bias and scatter in the resulting physical parameters induced by the choice of fitting algorithm and by observational uncertainties, while avoiding the effects of model uncertainties. We consider two full-spectrum fitting codes, pPXF and STARLIGHT, fitting for stellar population age, metallicity, mass-to-light ratio, and dust extinction. With pPXF we find that both the bias μ in the population parameters and the scatter σ in the recovered logarithmic values follow the expected trend μ ∝ σ ∝ 1/(S/N). The bias increases for younger ages and systematically makes recovered ages older, M*/Lr larger and metallicities lower than the true values. For reference, at S/N=30, and for the worst case (t = 10⁸ yr), the bias is 0.06 dex in M*/Lr and 0.03 dex in both age and [M/H]. There is no significant dependence on either E(B-V) or the shape of the error spectrum, and the results are consistent for both our 1-SSP and 2-SSP tests. With the STARLIGHT algorithm, we find trends similar to pPXF when the input E(B-V)<0.2 mag. However, with larger input E(B-V), the biases of the output parameters do not converge to zero even at the highest S/N and are strongly affected by the shape of the error spectra. This effect is particularly dramatic for the youngest age (t = 10⁸ yr), for which all population parameters can differ strongly from the input values, with significantly underestimated dust extinction and [M/H], and larger ages and M*/Lr. Results degrade when moving from our 1-SSP to the 2-SSP tests. The STARLIGHT convergence to the true values can be improved by increasing the Markov chains and annealing loops with the "slow mode". For the same input spectrum, pPXF is about two orders of magnitude faster than STARLIGHT's "default mode" and about three orders of magnitude faster than STARLIGHT's "slow mode".
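
    A stripped-down sketch of full-spectrum fitting as non-negative template combination, using synthetic templates in place of MILES SSPs; real codes such as pPXF and STARLIGHT additionally fit kinematics, extinction and regularization:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Synthetic "SSP templates" over the fitted wavelength range.
    rng = np.random.default_rng(0)
    wave = np.linspace(3600.0, 7350.0, 500)
    templates = np.column_stack([np.exp(-wave / s) + 0.1 * np.sin(wave / s)
                                 for s in (800.0, 1500.0, 3000.0)])
    true_w = np.array([0.2, 0.0, 0.8])
    spectrum = templates @ true_w + rng.normal(0.0, 0.001, wave.size)

    weights, residual = nnls(templates, spectrum)   # non-negative least squares
    print("recovered weights:", np.round(weights, 3), " residual:", residual)
    ```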

  1. SU-F-T-681: Does the Biophysical Modeling for Immunological Aspects in Radiotherapy Precisely Predict Tumor and Normal Tissue Responses?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oita, M; Nakata, K; Sasaki, M

    2016-06-15

    Purpose: Recent advances in immunotherapy make it possible to combine immunotherapy with radiotherapy. The aim of this study was to assess TCP/NTCP models with immunological aspects, including stochastic distributions as intercellular uncertainties. Methods: In the clinical treatment planning system (Eclipse ver. 11.0, Varian Medical Systems, US), biological parameters such as α/β, D50, γ, n, m, and TD50, including repair parameters (bi-exponential repair), can be set to any given values to calculate TCP/NTCP. Using data from a prostate cancer patient with VMAT, commissioned as a 6-MV photon beam of a Novalis-Tx (BrainLab, US) in clinical use, the fraction schedules were hypothesized as 70–78 Gy/35–39 fr, 72–81 Gy/40–45 fr, 52.5–66 Gy/16–22 fr, and 35–40 Gy/5 fr, at 5–7 fractions per week. Using a stochastic biological model with Gaussian distributions, we evaluated the effects on TCP/NTCP of variations in the repair parameters of the immune system as well as the intercellular uncertainties of tumor and normal tissues. Results: With respect to differences in α/β, the changes in TCP/NTCP increased in hypofractionated regimens. Differences between the values of n and m affected the variation of NTCP across the fraction schedules independently. Elongation of the repair half-time (long component) increased TCP/NTCP by a factor of two or more in the hypofractionated schemes. For tumor, repopulation parameters such as Tpot and Tstart, through which the immune system acts on the tumor, improved TCP. Conclusion: Compared with the default fixed values, which affect the probability of cell death and cure, hypofractionation schemes appear advantageous with respect to variations in the value of m. An increase of α/β, TD50, or the repair parameters in tumor and normal tissue through immunological effects is highly plausible. For more precise prediction, treatment planning systems should incorporate complex biological optimization in clinical practice, combined with data from basic experiments.
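
    For orientation, a minimal sketch of the standard dose-response ingredients behind such calculations: a Poisson TCP built on the linear-quadratic model, and a Lyman-Kutcher-Burman NTCP using m and TD50 (the volume parameter n enters via EUD, omitted in this uniform-dose toy); all parameter values are illustrative:

    ```python
    import math

    def tcp_poisson(D, d, alpha, alpha_beta, N0=1e7):
        """Poisson TCP with LQ cell kill for total dose D at dose/fraction d."""
        surviving = N0 * math.exp(-alpha * D * (1.0 + d / alpha_beta))
        return math.exp(-surviving)

    def ntcp_lkb(D, TD50, m):
        """LKB NTCP: probit in t = (D - TD50) / (m * TD50), uniform dose."""
        t = (D - TD50) / (m * TD50)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    # Illustrative, prostate-like numbers only; not the study's parameters.
    print("TCP :", tcp_poisson(D=78.0, d=2.0, alpha=0.15, alpha_beta=1.5))
    print("NTCP:", ntcp_lkb(D=60.0, TD50=75.0, m=0.12))
    ```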

  2. TU-FG-209-04: Testing of Digital Image Receptors Using AAPM TG-150’s Draft Recommendations - Investigating the Impact of Different Processing Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, C; Dave, J

    Purpose: To evaluate the implementation of AAPM TG-150's draft recommendations via a parameter study for testing the performance of digital image receptors. Methods: Flat-field images were acquired from 9 calibrated digital image receptors associated with 9 new portable digital radiography systems (Carestream Health, Inc.) following the draft recommendations and manufacturer-specified calibration conditions (sets of 4 images at input detector air kerma ranging from 1 to 25 µGy). The effects on signal, noise, signal-to-noise ratio (SNR) and the associated non-uniformities were evaluated for: the exposure response function (linearized and logarithmic), 'Presentation Intent Type' ('For Processing' and 'For Presentation'), detector orientation with respect to the anode-cathode axis (4 orientations; 90° rotation per iteration), different ROI sizes (5×5 to 40×40 mm²), and elimination of image borders of varying width (0 mm, i.e., no boundary elimination, to 150 mm). Images were analyzed in Matlab and quantities were compared using ANOVA. Results: Signal, noise and SNR values averaged over the 9 systems with the default parameter values in the draft recommendations were 4837.2±139.4, 19.7±0.9 and 246.4±10.1 (mean ± standard deviation), respectively, at an input detector air kerma of 12.5 µGy. Signal, noise and SNR showed characteristic dependency on the exposure response function and on 'Presentation Intent Type'. These values were not affected by ROI size or detector orientation, but eliminating the edge pixels along the boundary was required for the noise parameter (coefficient of variation range for noise: 72%–106% without and 3%–4% with boundary elimination). Local and global non-uniformities showed a similar dependence on boundary elimination. Interestingly, the computed non-uniformities agreed with manufacturer-reported values except for noise non-uniformities in two units; artifacts were seen in images from these two units, highlighting the importance of independent evaluations. Conclusion: The effect of different parameters on the performance characterization of digital image receptors was evaluated based on TG-150's draft recommendations.
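
    A minimal sketch of the flat-field analysis: trim a border, tile the image into ROIs, and derive signal, noise, SNR and simple non-uniformity measures. ROI and border sizes are in pixels here and the helper names are hypothetical; the mean and noise of the synthetic image echo the values above:

    ```python
    import numpy as np

    def flat_field_stats(img, roi=32, border=0):
        """Per-ROI signal/noise/SNR and (max-min)/mean non-uniformity."""
        img = img[border:img.shape[0] - border, border:img.shape[1] - border]
        ny, nx = img.shape[0] // roi, img.shape[1] // roi
        signal = np.empty((ny, nx))
        noise = np.empty((ny, nx))
        for iy in range(ny):
            for ix in range(nx):
                block = img[iy * roi:(iy + 1) * roi, ix * roi:(ix + 1) * roi]
                signal[iy, ix], noise[iy, ix] = block.mean(), block.std(ddof=1)
        snr = signal / noise
        nonuniformity = lambda a: (a.max() - a.min()) / a.mean()
        return {k: (v.mean(), nonuniformity(v))
                for k, v in {"signal": signal, "noise": noise, "snr": snr}.items()}

    rng = np.random.default_rng(0)
    flat = rng.normal(4837.0, 19.7, size=(512, 512))   # synthetic flat field
    print(flat_field_stats(flat, roi=32, border=16))
    ```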

  3. 40 CFR 98.464 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... = Degradable organic content of waste stream in Year X (weight fraction, wet basis) FDOC = Fraction of the volatile residue that is degradable organic carbon (weight fraction). Use a default value of 0.6...

  4. Neural signatures of economic parameters during decision-making: a functional MRI (FMRI), electroencephalography (EEG) and autonomic monitoring study.

    PubMed

    Minati, Ludovico; Grisoli, Marina; Franceschetti, Silvana; Epifani, Francesca; Granvillano, Alice; Medford, Nick; Harrison, Neil A; Piacentini, Sylvie; Critchley, Hugo D

    2012-01-01

    Adaptive behaviour requires an ability to obtain rewards by choosing between different risky options. Financial gambles can be used to study effective decision-making experimentally, and to distinguish processes involved in choice option evaluation from outcome feedback and other contextual factors. Here, we used a paradigm where participants evaluated 'mixed' gambles, each presenting a potential gain and a potential loss and an associated variable outcome probability. We recorded neural responses using autonomic monitoring, electroencephalography (EEG) and functional neuroimaging (fMRI), and used a univariate, parametric design to test for correlations with the eleven economic parameters that varied across gambles, including expected value (EV) and amount magnitude. Consistent with behavioural economic theory, participants were risk-averse. Gamble evaluation generated detectable autonomic responses, but only weak correlations with outcome uncertainty were found, suggesting that peripheral autonomic feedback does not play a major role in this task. Long-latency stimulus-evoked EEG potentials were sensitive to expected gain and expected value, while alpha-band power reflected expected loss and amount magnitude, suggesting parallel representations of distinct economic qualities in cortical activation and central arousal. Neural correlates of expected value representation were localized using fMRI to ventromedial prefrontal cortex, while the processing of other economic parameters was associated with distinct patterns across lateral prefrontal, cingulate, insula and occipital cortices including default-mode network and early visual areas. These multimodal data provide complementary evidence for distributed substrates of choice evaluation across multiple, predominantly cortical, brain systems wherein distinct regions are preferentially attuned to specific economic features. Our findings extend biologically-plausible models of risky decision-making while providing potential biomarkers of economic representations that can be applied to the study of deficits in motivational behaviour in neurological and psychiatric patients.

  5. Defaulters among lung cancer patients in a suburban district in a developing country.

    PubMed

    Ng, T H; How, S H; Kuan, Y C; Fauzi, A R

    2012-01-01

    This study was carried out to determine the prevalence, patient characteristics and reasons for defaulting from follow-up and treatment among patients with lung cancer. Patients with histologically confirmed lung cancer were recruited. Each patient's detailed demographic data, occupation, socioeconomic status, and the educational level of both the patient and their children were recorded. Defaulters were classified as either intermittent or persistent defaulters. Using the Chi-square test, defaulter status was compared against various demographic and disease characteristic factors, and the reasons for default were determined. Ninety-five patients were recruited. Among them, 81.1% were male and 66.3% were Malay. The mean age (SD) was 60 ± 10.5 years. About 46.3% of the patients had Eastern Cooperative Oncology Group (ECOG) functional status 0/1 and 96.8% of the patients presented with advanced stage disease (Stage 3b or 4). Overall, 20 patients (21.1%) were defaulters (35.0% intermittent defaulters; 65.0% persistent defaulters). Among the intermittent defaulters, 8 patients defaulted once and one patient defaulted 3 times. Among the 20 defaulters, only 2 (10%) patients turned up for the second follow-up appointment after a telephone reminder. The two main reasons for default were being 'too ill to come' (38.5%) and logistic difficulties (23.1%). No correlation was found between patient education, income, ECOG status, stage of the disease, race, or gender and the defaulter rate. The defaulter rate among lung cancer patients was 21.1%; children's education level was the only significant factor associated with it.

  6. Replica exchange enveloping distribution sampling (RE-EDS): A robust method to estimate multiple free-energy differences from a single simulation.

    PubMed

    Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina

    2016-10-21

    In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculating free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows the calculation of multiple free-energy differences in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
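
    The enveloping reference state at the heart of EDS has a standard closed form, E_R = -(1/(beta*s)) ln sum_i exp(-beta*s*(E_i - E_i^R)); the per-state offsets E_i^R and the smoothness parameter s are exactly the reference-state parameters RE-EDS must determine. A minimal NumPy sketch follows, with beta set to 1/kT near 300 K in kJ/mol units as an illustrative assumption.

        import numpy as np

        def eds_reference_energy(energies, offsets, s=1.0, beta=1.0 / 2.494):
            """Reference-state energy enveloping several end states (standard EDS form).

            energies: instantaneous end-state energies E_i; offsets: offsets E_i^R;
            s: smoothness parameter; beta: 1/kT (~kJ/mol at 300 K, an assumption here).
            """
            e = -beta * s * (np.asarray(energies) - np.asarray(offsets))
            m = e.max()  # log-sum-exp trick for numerical stability
            return -(m + np.log(np.exp(e - m).sum())) / (beta * s)

        print(eds_reference_energy([10.0, 12.5, 30.0], [0.0, 2.0, 15.0], s=0.3))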

  7. Evaluating the biochemical methane potential (BMP) of low-organic waste at Danish landfills.

    PubMed

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2014-11-01

    The biochemical methane potential (BMP) is an essential parameter when using first order decay (FOD) landfill gas (LFG) generation models to estimate methane (CH4) generation from landfills. Different categories of waste (mixed, shredder and sludge waste) with a low-organic content and temporarily stored combustible waste were sampled from four Danish landfills. The waste was characterized in terms of physical characteristics (TS, VS, TC and TOC) and the BMP was analyzed in batch tests. The experiment was set up in triplicate, including blank and control tests. Waste samples were incubated at 55°C for more than 60 days, with continuous monitoring of the cumulative CH4 generation. Results showed that samples of mixed waste and shredder waste had similar BMP results, in the range of 5.4-9.1 kg CH4/ton waste (wet weight) on average. Consequently, their calculated degradable organic carbon content (DOCC) was in the range of 0.44-0.70% of total weight (wet waste). Both parameters were much lower than the values for traditional municipal solid waste (MSW), as well as the default values in current FOD models. The sludge waste and temporarily stored combustible waste showed BMP values of 51.8-69.6 and 106.6-117.3 kg CH4/ton waste on average, respectively, and DOCC values of 3.84-5.12% and 7.96-8.74% of total weight. The same category of waste from different Danish landfills did not show significant variation. This research studied the BMP of Danish low-organic waste for the first time, which is important and valuable for using current FOD LFG generation models to estimate realistic CH4 emissions from modern landfills receiving low-organic waste. Copyright © 2014 Elsevier Ltd. All rights reserved.
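
    The back-calculation from BMP to DOCC implied above can be sketched in a few lines. Under the assumption that all degradable carbon is recovered as CH4 in the batch test (12/16 being the carbon mass fraction of methane), the conversion approximately reproduces the reported numbers, e.g. 5.4-9.1 kg CH4/ton giving roughly 0.41-0.68% DOCC.

        def docc_percent(bmp_kg_ch4_per_ton):
            """DOCC (% of wet weight) back-calculated from a BMP value.

            Assumes all degradable carbon leaves the batch test as CH4;
            12/16 converts kg CH4 to kg C, /10 converts kg/ton to percent.
            """
            return bmp_kg_ch4_per_ton * (12.0 / 16.0) / 10.0

        print(docc_percent(5.4), docc_percent(9.1))  # ~0.41% and ~0.68%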

  8. Acute and chronic environmental effects of clandestine methamphetamine waste.

    PubMed

    Kates, Lisa N; Knapp, Charles W; Keenan, Helen E

    2014-09-15

    The illicit manufacture of methamphetamine (MAP) produces substantial amounts of hazardous waste that is dumped illegally. This study presents the first environmental evaluation of waste produced from illicit MAP manufacture. Chemical oxygen demand (COD) was measured to assess immediate oxygen depletion effects. A mixture of five waste components (10 mg/L per chemical) was found to have a COD (130 mg/L) higher than the European Union wastewater discharge limit (125 mg/L). Two environmental partition coefficients, K(OW) and K(OC), were measured for several chemicals identified in MAP waste. Experimental values were input into a computer fugacity model (EPI Suite™) to estimate environmental fate. Experimental log K(OW) values ranged from -0.98 to 4.91, in accordance with computer-estimated values. Experimental K(OC) values ranged from 11 to 72, much lower than the default computer values. The experimental fugacity model for discharge to water estimates that waste components will remain in the water compartment for 15 to 37 days. Using a combination of laboratory experimentation and computer modelling, the environmental fate of MAP waste products was estimated. While fugacity models using experimental and computational values were very similar, default computer models should not take the place of laboratory experimentation. Copyright © 2014 Elsevier B.V. All rights reserved.
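
    The two partition coefficients measured above have simple operational definitions, sketched below: K_OW is the ratio of equilibrium concentrations in octanol and water, and K_OC is the soil-water distribution coefficient K_d normalized by the sorbent's organic-carbon fraction. The numbers in the usage lines are illustrative.

        import math

        def log_kow(c_octanol, c_water):
            """log10 octanol-water partition coefficient from phase concentrations."""
            return math.log10(c_octanol / c_water)

        def koc(kd, f_oc):
            """Organic-carbon-normalized sorption coefficient: Koc = Kd / f_OC,
            where Kd = C_sorbed / C_water and f_OC is the organic-carbon fraction."""
            return kd / f_oc

        print(log_kow(812.0, 1.0))  # ~2.9, within the reported log K(OW) range
        print(koc(0.72, 0.01))      # 72, the upper end of the reported K(OC) range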

  9. Radium concentration factors and their use in health and environmental risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meinhold, A.F.; Hamilton, L.D.

    1991-12-31

    Radium is known to be taken up by aquatic animals, and tends to accumulate in bone, shell and exoskeleton. The most common approach to estimating the uptake of a radionuclide by aquatic animals for use in health and environmental risk assessments is the concentration factor method. The concentration factor method relates the concentration of a contaminant in an organism to the concentration in the surrounding water. Site specific data are not usually available, and generic, default values are often used in risk assessment studies. This paper describes the concentration factor method, summarizes some of the variables which may influence the concentration factor for radium, reviews reported concentration factors measured in marine environments and presents concentration factors derived from data collected in a study in coastal Louisiana. The use of generic default values for the concentration factor is also discussed.
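
    The concentration factor method itself is a one-line calculation; a sketch with illustrative numbers follows (the CF shown is hypothetical, not one of the generic defaults discussed in the paper).

        def organism_concentration(cf, c_water):
            """Concentration factor method: tissue concentration predicted as
            CF times the concentration in the surrounding water."""
            return cf * c_water

        # e.g. a hypothetical CF of 100 L/kg and 0.05 Bq/L radium in water -> 5 Bq/kg
        print(organism_concentration(100.0, 0.05))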

  10. Radium concentration factors and their use in health and environmental risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meinhold, A.F.; Hamilton, L.D.

    1991-01-01

    Radium is known to be taken up by aquatic animals, and tends to accumulate in bone, shell and exoskeleton. The most common approach to estimating the uptake of a radionuclide by aquatic animals for use in health and environmental risk assessments is the concentration factor method. The concentration factor method relates the concentration of a contaminant in an organism to the concentration in the surrounding water. Site specific data are not usually available, and generic, default values are often used in risk assessment studies. This paper describes the concentration factor method, summarizes some of the variables which may influence the concentration factor for radium, reviews reported concentration factors measured in marine environments and presents concentration factors derived from data collected in a study in coastal Louisiana. The use of generic default values for the concentration factor is also discussed.

  11. Default "Gunel and Dickey" Bayes factors for contingency tables.

    PubMed

    Jamil, Tahira; Ly, Alexander; Morey, Richard D; Love, Jonathon; Marsman, Maarten; Wagenmakers, Eric-Jan

    2017-04-01

    The analysis of R×C contingency tables usually features a test for independence between row and column counts. Throughout the social sciences, the adequacy of the independence hypothesis is generally evaluated by the outcome of a classical p-value null-hypothesis significance test. Unfortunately, however, the classical p-value comes with a number of well-documented drawbacks. Here we outline an alternative Bayes factor method to quantify the evidence for and against the hypothesis of independence in R×C contingency tables. First, we describe different sampling models for contingency tables and provide the corresponding default Bayes factors as originally developed by Gunel and Dickey (Biometrika, 61(3):545-557 (1974)). We then illustrate the properties and advantages of a Bayes factor analysis of contingency tables through simulations and practical examples. Computer code is available online and has been incorporated in the "BayesFactor" R package and the JASP program ( jasp-stats.org ).

  12. Parameterization of Shape and Compactness in Object-based Image Classification Using Quickbird-2 Imagery

    NASA Astrophysics Data System (ADS)

    Tonbul, H.; Kavzoglu, T.

    2016-12-01

    In recent years, object based image analysis (OBIA) has become a widely accepted technique for the analysis of remotely sensed data. OBIA deals with grouping pixels into homogeneous objects based on spectral, spatial and textural features of contiguous pixels in an image. The first stage of OBIA, image segmentation, is the most prominent part of object recognition. In this study, multiresolution segmentation, a region-based approach, was employed to construct image objects. In applying multiresolution segmentation, three parameters, namely scale, shape and compactness, must be set by the analyst. Segmentation quality remarkably influences the fidelity of the thematic maps and accordingly the classification accuracy. Therefore, it is of great importance to search for and set optimal values for the segmentation parameters. In the literature, the main focus has been on the definition of the scale parameter, on the assumption that the effect of the shape and compactness parameters on classification accuracy is limited. The aim of this study is to analyze in depth the influence of the shape/compactness parameters by varying their values while using the optimal scale parameter determined with the Estimation of Scale Parameter (ESP-2) approach. A pansharpened Quickbird-2 image covering Trabzon, Turkey, was employed to investigate the objectives of the study. For this purpose, six different shape/compactness combinations were utilized to draw conclusions about the behavior of the shape and compactness parameters and the optimal setting for all parameters as a whole. Objects were assigned to classes using a nearest neighbor classifier in all segmentation runs, and an equal number of pixels was randomly selected to calculate accuracy metrics. The highest overall accuracy (92.3%) was achieved by setting the shape/compactness criteria to 0.3/0.3. The results of this study indicate that the shape/compactness parameters can have a significant effect on classification accuracy, with a 4% change in overall accuracy. Also, the statistical significance of differences in accuracy was tested using McNemar's test, which showed that the difference between poor and optimal settings of the shape/compactness parameters was statistically significant, suggesting a search for optimal parameterization instead of default settings.
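
    The parameter study described above amounts to a grid search over shape/compactness combinations at a fixed, pre-optimized scale. A minimal sketch follows; segment_and_classify is a hypothetical callable standing in for the segmentation-plus-classification pipeline, returning the overall accuracy for one parameter combination.

        from itertools import product

        def search_shape_compactness(image, scale, shape_values, compact_values,
                                     segment_and_classify):
            """Evaluate every shape/compactness pair at a fixed scale and return
            the best pair plus the full accuracy table."""
            results = {}
            for shape, compact in product(shape_values, compact_values):
                results[(shape, compact)] = segment_and_classify(image, scale,
                                                                 shape, compact)
            return max(results, key=results.get), results

        # e.g. best, table = search_shape_compactness(img, scale, [0.1, 0.3, 0.5],
        #                        [0.3, 0.5, 0.7], my_pipeline)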

  13. Risk factors and mortality associated with default from multidrug-resistant tuberculosis treatment.

    PubMed

    Franke, Molly F; Appleton, Sasha C; Bayona, Jaime; Arteaga, Fernando; Palacios, Eda; Llaro, Karim; Shin, Sonya S; Becerra, Mercedes C; Murray, Megan B; Mitnick, Carole D

    2008-06-15

    Completing treatment for multidrug-resistant (MDR) tuberculosis (TB) may be more challenging than completing first-line TB therapy, especially in resource-poor settings. The objectives of this study were to (1) identify risk factors for default from MDR TB therapy (defined as prolonged treatment interruption), (2) quantify mortality among patients who default from treatment, and (3) identify risk factors for death after default from treatment. We performed a retrospective chart review to identify risk factors for default from MDR TB therapy and conducted home visits to assess mortality among patients who defaulted from such therapy. Sixty-seven (10.0%) of 671 patients defaulted from MDR TB therapy. The median time to treatment default was 438 days (interquartile range, 152-710 days), and 27 (40.3%) of the 67 patients who defaulted from treatment had culture-positive sputum at the time of default. Substance use (hazard ratio, 2.96; 95% confidence interval, 1.56-5.62; P = .001), substandard housing conditions (hazard ratio, 1.83; 95% confidence interval, 1.07-3.11; P = .03), later year of enrollment (hazard ratio, 1.62, 95% confidence interval, 1.09-2.41; P = .02), and health district (P = .02) predicted default from therapy in a multivariable analysis. Severe adverse events did not predict default from therapy. Forty-seven (70.1%) of 67 patients who defaulted from therapy were successfully traced; of these, 25 (53.2%) had died. Poor bacteriologic response, <1 year of treatment at the time of default, low education level, and diagnosis with a psychiatric disorder significantly predicted death after default in a multivariable analysis. The proportion of patients who defaulted from MDR TB treatment was relatively low. The large proportion of patients who had culture-positive sputum at the time of treatment default underscores the public health importance of minimizing treatment default. Prognosis for patients who defaulted from therapy was poor. Interventions aimed at preventing treatment default may reduce TB-related mortality.

  14. A Study of the Impact of Default Management Practices and Other Factors on Student Loan Default Rates in Public Two-Year Community Colleges

    ERIC Educational Resources Information Center

    Daniels, Randell W.

    2013-01-01

    Default management practices and their relationship to the student loan default rate in public two-year community colleges was the focus of this investigation. Five research questions regarding written default management plans, default management practices, process management, accountability, and other factors impacting default guided the study.…

  15. Understanding heart rate alarm adjustment in the intensive care units through an analytical approach

    PubMed Central

    Pelter, Michele M.; Drew, Barbara J.; Palacios, Jorge Arroyo; Bai, Yong; Stannard, Daphne; Aldrich, J. Matt; Hu, Xiao

    2017-01-01

    Background: Heart rate (HR) alarms are prevalent in the ICU, and these parameters are configurable. Not much is known about nursing behavior associated with tailoring HR alarm parameters to individual patients to reduce clinical alarm fatigue. Objectives: To understand the relationship between HR alarms and the parameter adjustments made to reduce unnecessary HR alarms. Methods: Retrospective, quantitative analysis of an adjudicated database using analytical approaches to understand behaviors surrounding HR alarm parameter adjustments. Patients were sampled from five adult ICUs (77 beds) over one month at a quaternary care university medical center. A total of 337 of 461 ICU patients had HR alarms; 53.7% were male, mean age was 60.3 years, and 39% were non-Caucasian. Default HR alarm parameters were 50 and 130 beats per minute (bpm). The occurrence of each alarm, vital signs, and physiologic waveforms were stored in a relational database (SQL Server). Results: There were 23,624 HR alarms for analysis, with 65.4% exceeding the upper heart rate limit. Only 51% of patients with HR alarms had parameters adjusted, with a median upper limit change of +5 bpm and a median lower limit change of -1 bpm. The median time to the first HR parameter adjustment was 17.9 hours, without a reduction in alarm occurrence (p = 0.57). Conclusions: HR alarms are prevalent in the ICU, and half of HR alarm settings remain at default. There is a long delay between HR alarms and parameter changes, with insufficient changes to decrease HR alarms. An increasing frequency of HR alarms shortens the time to first adjustment. Best practice guidelines for HR alarm limits are needed to reduce alarm fatigue and improve monitoring precision. PMID:29176776
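
    The per-patient summaries reported above (share of patients with any adjustment, median time from first alarm to first parameter change) can be computed directly from the alarm table. Below is a pandas sketch over a hypothetical miniature of the study's relational schema; column names and values are invented for illustration.

        import pandas as pd

        # Hypothetical stand-in for the study's alarm table: one row per HR alarm,
        # with the time of the patient's first parameter adjustment (NaT if none).
        alarms = pd.DataFrame({
            "patient_id": [1, 1, 2, 3, 3, 3],
            "alarm_time": pd.to_datetime([
                "2017-01-01 08:00", "2017-01-01 09:30", "2017-01-02 10:00",
                "2017-01-03 07:15", "2017-01-03 12:40", "2017-01-04 01:05"]),
            "first_adjust": pd.to_datetime([
                "2017-01-02 02:00", "2017-01-02 02:00", pd.NaT,
                "2017-01-03 19:00", "2017-01-03 19:00", "2017-01-03 19:00"]),
        })

        per_patient = alarms.groupby("patient_id").agg(
            first_alarm=("alarm_time", "min"), first_adjust=("first_adjust", "first"))
        adjusted = per_patient["first_adjust"].notna()
        hours = (per_patient["first_adjust"]
                 - per_patient["first_alarm"]).dt.total_seconds() / 3600

        print(f"patients with any adjustment: {adjusted.mean():.0%}")
        print(f"median hours to first adjustment: {hours.median():.1f}")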

  16. Reduced uncertainty of regional scale CLM predictions of net carbon fluxes and leaf area indices with estimated plant-specific parameters

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2016-04-01

    Reliable estimates of carbon fluxes and states at regional scales are required to reduce uncertainties in regional carbon balance estimates and to support decision making in environmental politics. In this work the Community Land Model version 4.5 (CLM4.5-BGC) was applied at a high spatial resolution (1 km2) for the Rur catchment in western Germany. In order to improve the model-data consistency of net ecosystem exchange (NEE) and leaf area index (LAI) for this study area, five plant functional type (PFT)-specific CLM4.5-BGC parameters were estimated with time series of half-hourly NEE data for one year in 2011/2012, using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov Chain Monte Carlo (MCMC) approach. The parameters were estimated separately for four different plant functional types (needleleaf evergreen temperate tree, broadleaf deciduous temperate tree, C3-grass and C3-crop) at four different sites, located inside or close to the Rur catchment. We evaluated modeled NEE for one year in 2012/2013 with NEE measured at seven eddy covariance sites in the catchment, including the four parameter estimation sites. Modeled LAI was evaluated by means of LAI derived from remotely sensed RapidEye images from about 18 days in 2011/2012. Performance indices were based on a comparison between measurements and (i) a reference run with CLM default parameters, and (ii) a 60-instance CLM ensemble with parameters sampled from the DREAM posterior probability density functions (pdfs). The difference between the observed and simulated NEE sums was reduced by 23% when estimated parameters were used as input instead of default parameters. The mean absolute difference between modeled and measured LAI was reduced by 59% on average. Simulated LAI was improved not only in terms of the absolute value but in some cases also in terms of timing (beginning of vegetation onset), which was directly related to a substantial improvement of the NEE estimates in spring. In order to obtain a more comprehensive estimate of the model uncertainty, a second CLM ensemble was set up, in which initial conditions and atmospheric forcings were perturbed in addition to the parameter estimates. This resulted in very high standard deviations (STD) of the modeled annual NEE sums for the C3-grass and C3-crop PFTs, ranging between 24.1 and 225.9 gC m-2 y-1, compared to STD = 0.1-3.4 gC m-2 y-1 for the effect of parameter uncertainty only, without additional perturbation of initial states and atmospheric forcings. The higher spread of modeled NEE for C3-crop and C3-grass indicated that the model uncertainty was notably higher for those PFTs than for the forest PFTs. Our findings highlight the potential of parameter and uncertainty estimation to support the understanding and further development of land surface models such as CLM.
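
    DREAM belongs to the Metropolis family of MCMC samplers. The sketch below is a plain random-walk Metropolis stand-in (DREAM itself runs multiple chains with adaptive differential-evolution proposals); log_post would score a CLM parameter vector against the half-hourly NEE observations, e.g. via a Gaussian misfit.

        import numpy as np

        def metropolis(log_post, theta0, n_iter=5000, step=0.1, seed=0):
            """Random-walk Metropolis sampler; a generic stand-in for DREAM."""
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            lp = log_post(theta)
            samples = []
            for _ in range(n_iter):
                prop = theta + step * rng.standard_normal(theta.shape)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                    theta, lp = prop, lp_prop
                samples.append(theta.copy())
            return np.array(samples)

        # e.g., with nee_obs, sigma and a model wrapper run_clm(theta) in scope:
        # log_post = lambda th: -0.5 * np.sum((nee_obs - run_clm(th))**2 / sigma**2)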

  17. The Wageningen Lowland Runoff Simulator (WALRUS): a Novel Open Source Rainfall-Runoff Model for Areas with Shallow Groundwater

    NASA Astrophysics Data System (ADS)

    Brauer, C.; Teuling, R.; Torfs, P.; Uijlenhoet, R.

    2014-12-01

    Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS) to fill the gap between complex, spatially distributed models which are often used in lowland regions and simple, parametric models which have mostly been developed for mountainous catchments. This parametric rainfall-runoff model can be used all over the world, both in freely draining lowland catchments and polders with controlled water levels. Here, we present the model implementation and our recent experience in training students and practitioners to use the model. WALRUS has several advantages that facilitate practical application. Firstly, WALRUS is computationally efficient, which allows for operational forecasting and uncertainty estimation by running ensembles. Secondly, the code is set up so that it can be used by both practitioners and researchers. For direct use by practitioners, defaults are implemented for relations between model variables and for the computation of initial conditions based on discharge only, leaving only four parameters which require calibration. For research purposes, the defaults can easily be changed. Finally, an approach for flexible time steps increases numerical stability and makes model parameter values independent of time step size, which facilitates use of the model with the same parameter set for multi-year water balance studies as well as detailed analyses of individual flood peaks. The open source model code is currently implemented in R and compiled into a package. This package will be made available through the R CRAN server. A small massive open online course (MOOC) is being developed to give students, researchers and practitioners a step-by-step WALRUS-training. This course contains explanations about model elements and its advantages and limitations, as well as hands-on exercises to learn how to use WALRUS. All code, course, literature and examples will be collected on a dedicated website, which can be found via www.wageningenur.nl/hwm. References C.C. Brauer, et al. (2014a). Geosci. Model Dev. Discuss., 7, 1357—1411. C.C. Brauer, et al. (2014b). Hydrol. Earth Syst. Sci. Discuss., 11, 2091—2148.

  18. Default neglect in attempts at social influence.

    PubMed

    Zlatev, Julian J; Daniels, David P; Kim, Hajin; Neale, Margaret A

    2017-12-26

    Current theories suggest that people understand how to exploit common biases to influence others. However, these predictions have received little empirical attention. We consider a widely studied bias with special policy relevance: the default effect, which is the tendency to choose whichever option is the status quo. We asked participants (including managers, law/business/medical students, and US adults) to nudge others toward selecting a target option by choosing whether to present that target option as the default. In contrast to theoretical predictions, we find that people often fail to understand and/or use defaults to influence others, i.e., they show "default neglect." First, in one-shot default-setting games, we find that only 50.8% of participants set the target option as the default across 11 samples ( n = 2,844), consistent with people not systematically using defaults at all. Second, when participants have multiple opportunities for experience and feedback, they still do not systematically use defaults. Third, we investigate beliefs related to the default effect. People seem to anticipate some mechanisms that drive default effects, yet most people do not believe in the default effect on average, even in cases where they do use defaults. We discuss implications of default neglect for decision making, social influence, and evidence-based policy.

  19. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    ERIC Educational Resources Information Center

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  20. Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology

    NASA Astrophysics Data System (ADS)

    Jin, Z.; Azzari, G.; Lobell, D. B.

    2016-12-01

    Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.

  1. 26 CFR 1.503(b)-1 - Prohibited transactions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... otherwise disposed of in default of repayment of the loan, the value and liquidity of which security is such... to the issuer by the purchaser. For rules relating to loan of funds to, or investment of funds in...

  2. Volumetric breast density measurement: sensitivity analysis of a relative physics approach

    PubMed Central

    Lau, Susie; Abdul Aziz, Yang Faridah

    2016-01-01

    Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, although for more advanced applications, such as tracking density change over time, it remains to be seen how accurate the measures need to be. PMID:27452264
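
    The error-simulation step in the Methods, varying acquisition parameters in the DICOM tags, can be sketched with pydicom. The example below scales the kVp tag by +10% and writes a copy for re-processing; file names are placeholders, and other parameters (e.g. BodyPartThickness for CBT) would be perturbed the same way.

        import pydicom

        def perturb_kvp(src_path, dst_path, fraction=0.10):
            """Write a copy of a raw mammogram with its kVp tag scaled by `fraction`.

            A sketch of the tag-perturbation step only; the perturbed file would then
            be re-processed by the density software to measure the effect on VBD.
            """
            ds = pydicom.dcmread(src_path)
            ds.KVP = str(float(ds.KVP) * (1.0 + fraction))
            ds.save_as(dst_path)

        # perturb_kvp("mammo_raw.dcm", "mammo_raw_kvp_plus10.dcm", fraction=0.10)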

  3. OPTESIM, a Versatile Toolbox for Numerical Simulation of Electron Spin Echo Envelope Modulation (ESEEM) that Features Hybrid Optimization and Statistical Assessment of Parameters

    PubMed Central

    Sun, Li; Hernandez-Guzman, Jessica; Warncke, Kurt

    2009-01-01

    Electron spin echo envelope modulation (ESEEM) is a technique of pulsed electron paramagnetic resonance (EPR) spectroscopy. The analysis of ESEEM data to extract information about the nuclear and electronic structure of a disordered (powder) paramagnetic system requires accurate and efficient numerical simulations. A single coupled nucleus of known nuclear g value (gN) and spin I=1 can have up to eight adjustable parameters in the nuclear part of the spin Hamiltonian. We have developed OPTESIM, an ESEEM simulation toolbox, for automated numerical simulation of powder two- and three-pulse one-dimensional ESEEM for arbitrary number (N) and type (I, gN) of coupled nuclei, and arbitrary mutual orientations of the hyperfine tensor principal axis systems for N>1. OPTESIM is based in the Matlab environment, and includes the following features: (1) a fast algorithm for translation of the spin Hamiltonian into simulated ESEEM, (2) different optimization methods that can be hybridized to achieve an efficient coarse-to-fine grained search of the parameter space and convergence to a global minimum, (3) statistical analysis of the simulation parameters, which allows the identification of simultaneous confidence regions at specific confidence levels. OPTESIM also includes a geometry-preserving spherical averaging algorithm as default for N>1, and global optimization over multiple experimental conditions, such as the dephasing time (τ) for three-pulse ESEEM, and external magnetic field values. Application examples for simulation of 14N coupling (N=1, N=2) in biological and chemical model paramagnets are included. Automated, optimized simulations by using OPTESIM lead to convergence on dramatically shorter time scales relative to manual simulations. PMID:19553148

  4. Volumetric breast density measurement: sensitivity analysis of a relative physics approach.

    PubMed

    Lau, Susie; Ng, Kwan Hoong; Abdul Aziz, Yang Faridah

    2016-10-01

    To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, although for more advanced applications, such as tracking density change over time, it remains to be seen how accurate the measures need to be.

  5. Automating Geospatial Visualizations with Smart Default Renderers for Data Exploration Web Applications

    NASA Astrophysics Data System (ADS)

    Ekenes, K.

    2017-12-01

    This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that quickly analyze the data when it loads and suggest the most appropriate visualizations based on the statistics of the data. Since there are only a few ways to visualize any given dataset well, and since many users never go beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. Multiple functions for automating visualizations are available in the Smart APIs, along with UI elements allowing users to create more than one visualization for a dataset, since there isn't a single best way to visualize a given dataset. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and provides a number of ways to generate multivariate visualizations for the same variables, allowing the user to choose which visualization is most appropriate for their presentation. The methods used in these APIs and the renderers generated by them are not available elsewhere. The presentation will show how statistics can be used as the basis for automating default visualizations of data along continuous ramps, creating more refined visualizations while revealing the spread and outliers of the data. Adding interactive components to instantaneously alter visualizations allows users to unearth spatial patterns previously unknown among one or more variables. These applications may focus on a single dataset that is frequently updated, or be configurable for a variety of datasets from multiple sources.
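
    As a concrete illustration of statistics-driven defaults, the sketch below derives continuous color-ramp stops from the mean and standard deviation of a dataset, so the middle of the ramp carries the bulk of the data while the ends reveal spread and outliers. This is an illustrative heuristic, not the vendor API's actual algorithm.

        import numpy as np

        def smart_ramp_stops(values):
            """Data-driven stops for a continuous color ramp: anchor at the mean
            and +/- one standard deviation, clipped to the data range."""
            v = np.asarray(values, dtype=float)
            mean, std = v.mean(), v.std()
            return [max(v.min(), mean - std), mean, min(v.max(), mean + std)]

        print(smart_ramp_stops(np.random.default_rng(1).lognormal(3.0, 1.0, 1000)))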

  6. Determinants of default to fully completion of immunization among children aged 12 to 23 months in south Ethiopia: unmatched case-control study.

    PubMed

    Asfaw, Abiyot Getachew; Koye, Digsu Negese; Demssie, Amsalu Feleke; Zeleke, Ejigu Gebeye; Gelaw, Yalemzewod Assefa

    2016-01-01

    Immunization is a cost-effective intervention against vaccine-preventable disease. Still, 2.5 million children die of vaccine-preventable diseases every year in developing countries. In Ethiopia, default from full completion of child immunization is high, and the determinants of default are not well explored in the study setting. The aim of the study was to identify determinants of default from full completion of immunization among children aged 12 to 23 months in Sodo Zurea District, Southern Ethiopia. A community-based unmatched case-control study was conducted. A census was done to identify cases and controls before the actual data collection. A total of 344 samples (172 cases and 172 controls) were selected by a simple random sampling technique. Cases were children in the age group of 12 to 23 months who had missed at least one dose from the recommended schedule. Bivariable and multivariable binary logistic regression was used to identify the determinant factors. Odds ratios with 95% CIs and a p-value of less than 0.05 were used to measure the presence and strength of associations. Mothers who were unable to read and write (AOR=8.9; 95%CI: 2.4, 33.9) or had attended only primary school (AOR=4.1; 95%CI: 1.4, 15.8), mothers who had no postnatal care follow-up (AOR=0.4; 95%CI: 0.3, 0.7), good maternal knowledge of immunization (AOR=0.5; 95%CI: 0.3, 0.8) and favorable maternal perception of the use of health institutions for maternal and child care (AOR=0.2; 95%CI: 0.1, 0.6) were significant determinants of default from full completion of immunization. Improving maternal education, postnatal care follow-up, and maternal knowledge and perception of child immunization are recommended measures to reduce default from full completion of immunization.

  7. 77 FR 6610 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-08

    ... to the trade.\\6\\ NSCC will then look to CDS for the satisfaction of the clearance and settlement... indistinguishable from the risk of a clearing broker default, but because the value of the trades of the Canadian broker-dealers cleared through the mechanism is likely to be small in comparison to the values cleared...

  8. 40 CFR 60.759 - Specifications for active collection systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...: Qi = 2 k Lo Mi e^(−k t_i) (C_NMOC) (3.6 × 10^−9), where Qi = NMOC emission rate from the ith section, megagrams per year; k = methane generation rate constant, year^−1; Lo = methane generation potential, cubic... performed, the default values for k, Lo and C_NMOC provided in § 60.754(a)(1) or the alternative values from...
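
    The emission equation in these records is straightforward to evaluate once reconstructed. A sketch follows, using the commonly cited § 60.754(a)(1) defaults (k = 0.05/yr, Lo = 170 m3/Mg, C_NMOC = 4,000 ppmv as hexane); confirm those defaults against the CFR text before relying on them.

        import math

        def nmoc_emission_rate(m_i, t_i, k=0.05, lo=170.0, c_nmoc_ppmv=4000.0):
            """Qi = 2 k Lo Mi e^(-k t_i) (C_NMOC) (3.6e-9), in Mg/yr.

            m_i: refuse mass in the ith section (Mg); t_i: age of the section (yr).
            Default k, Lo, C_NMOC are the commonly cited rule defaults (assumption).
            """
            return 2.0 * k * lo * m_i * math.exp(-k * t_i) * c_nmoc_ppmv * 3.6e-9

        # e.g. a 1,000,000 Mg section, 10 years old -> roughly 150 Mg NMOC/yr
        print(nmoc_emission_rate(m_i=1.0e6, t_i=10.0))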

  9. 40 CFR 60.759 - Specifications for active collection systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...: Qi = 2 k Lo Mi e^(−k t_i) (C_NMOC) (3.6 × 10^−9), where Qi = NMOC emission rate from the ith section, megagrams per year; k = methane generation rate constant, year^−1; Lo = methane generation potential, cubic... performed, the default values for k, Lo and C_NMOC provided in § 60.754(a)(1) or the alternative values from...

  10. 40 CFR 60.759 - Specifications for active collection systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...: Qi = 2 k Lo Mi e^(−k t_i) (C_NMOC) (3.6 × 10^−9), where Qi = NMOC emission rate from the ith section, megagrams per year; k = methane generation rate constant, year^−1; Lo = methane generation potential, cubic... performed, the default values for k, Lo and C_NMOC provided in § 60.754(a)(1) or the alternative values from...

  11. 40 CFR 60.759 - Specifications for active collection systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...: Qi = 2 k Lo Mi e^(−k t_i) (C_NMOC) (3.6 × 10^−9), where Qi = NMOC emission rate from the ith section, megagrams per year; k = methane generation rate constant, year^−1; Lo = methane generation potential, cubic... performed, the default values for k, Lo and C_NMOC provided in § 60.754(a)(1) or the alternative values from...

  12. Estimation of Logistic Regression Models in Small Samples. A Simulation Study Using a Weakly Informative Default Prior Distribution

    ERIC Educational Resources Information Center

    Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel

    2012-01-01

    In this paper, we used simulations to compare the performance of classical and Bayesian estimation in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
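
    A widely used weakly informative default for logistic regression places an independent Cauchy(0, 2.5) prior on each coefficient (Gelman et al., 2008). The sketch below computes the corresponding MAP estimate on a small simulated sample; it illustrates the general idea rather than the paper's exact simulation design.

        import numpy as np
        from scipy.optimize import minimize

        def map_logistic(X, y, scale=2.5):
            """MAP logistic-regression fit under Cauchy(0, scale) priors."""
            def neg_log_post(beta):
                z = X @ beta
                loglik = np.sum(y * z - np.logaddexp(0.0, z))          # Bernoulli log-likelihood
                logprior = -np.sum(np.log(1.0 + (beta / scale) ** 2))  # Cauchy, up to a constant
                return -(loglik + logprior)
            return minimize(neg_log_post, np.zeros(X.shape[1]), method="BFGS").x

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(30), rng.standard_normal(30)])    # small sample, n = 30
        p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * X[:, 1])))
        y = (rng.random(30) < p).astype(float)
        print(map_logistic(X, y))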

  13. A Coordinate-Based Meta-Analysis of Overlaps in Regional Specialization and Functional Connectivity across Subjective Value and Default Mode Networks.

    PubMed

    Acikalin, M Yavuz; Gorgolewski, Krzysztof J; Poldrack, Russell A

    2017-01-01

    Previous research has provided qualitative evidence for overlap in a number of brain regions across the subjective value network (SVN) and the default mode network (DMN). In order to quantitatively assess this overlap, we conducted a series of coordinate-based meta-analyses (CBMA) of results from 466 functional magnetic resonance imaging experiments on task-negative or subjective value-related activations in the human brain. In these analyses, we first identified significant overlaps and dissociations across activation foci related to SVN and DMN. Second, we investigated whether these overlapping subregions also showed similar patterns of functional connectivity, suggesting a shared functional subnetwork. We find considerable overlap between SVN and DMN in subregions of central ventromedial prefrontal cortex (cVMPFC) and dorsal posterior cingulate cortex (dPCC). Further, our findings show that similar patterns of bidirectional functional connectivity between cVMPFC and dPCC are present in both networks. We discuss ways in which our understanding of how subjective value (SV) is computed and represented in the brain can be synthesized with what we know about the DMN, mind-wandering, and self-referential processing in light of our findings.

  14. Risk factors for treatment default among re-treatment tuberculosis patients in India, 2006.

    PubMed

    Jha, Ugra Mohan; Satyanarayana, Srinath; Dewan, Puneet K; Chadha, Sarabjit; Wares, Fraser; Sahu, Suvanand; Gupta, Devesh; Chauhan, L S

    2010-01-25

    Under India's Revised National Tuberculosis Control Programme (RNTCP), >15% of previously-treated patients in the reported 2006 patient cohort defaulted from anti-tuberculosis treatment. To assess the timing, characteristics, and risk factors for default amongst re-treatment TB patients. For this case-control study, in 90 randomly-selected programme units treatment records were abstracted from all 2006 defaulters from the RNTCP re-treatment regimen (cases), with one consecutively-selected non-defaulter per case. Patients who interrupted anti-tuberculosis treatment for >2 months were classified as defaulters. 1,141 defaulters and 1,189 non-defaulters were included. The median duration of treatment prior to default was 81 days (25%-75% interquartile range 44-117 days) and documented retrieval efforts after treatment interruption were inadequate. Defaulters were more likely to have been male (adjusted odds ratio [aOR] 1.4, 95% confidence interval [CI] 1.2-1.7), to have previously defaulted from anti-tuberculosis treatment (aOR 1.3, 95%CI 1.1-1.6), to have previous treatment from non-RNTCP providers (aOR 1.3, 95%CI 1.0-1.6), or to have public health facility-based treatment observation (aOR 1.3, 95%CI 1.1-1.6). Amongst the large number of re-treatment patients in India, default occurs early and often. Improved pre-treatment counseling and community-based treatment provision may reduce default rates. Efforts to retrieve treatment interrupters prior to default require strengthening.

  15. Schooling and variation in the COMT gene: The devil is in the details

    PubMed Central

    Campbell, Daniel; Bick, Johanna; Yrigollen, Carolyn M.; Lee, Maria; Joseph, Antony; Chang, Joseph T.; Grigorenko, Elena L.

    2013-01-01

    Background Schooling is considered to be one of the major contributors to the development of intelligence within societies and individuals. Genetic variation might modulate the impact of schooling and explain, at least partially, the presence of individual differences in classrooms. Method We studied a sample of 1502 children (mean age = 11.7 years) from Zambia. Approximately 57% of these children were enrolled in school, and the rest were not. To quantify genetic variation, we investigated a number of common polymorphisms in the catechol-O-methyltransferase (COMT) gene that controls the production of the protein thought to account for >60% of the dopamine degradation in the prefrontal cortex. Results Haplotype analyses generated results ranging from the presence to absence of significant interactions between a number of COMT haplotypes and indicators of schooling (i.e., in- vs. out-of-school and grade completed) in the prediction of nonverbal intelligence, depending on the parameter specification. However, an investigation of the distribution of corresponding p-values suggested that these positive results were false. Conclusions Convincing evidence that the variation in the COMT gene is associated with individual differences in nonverbal intelligence either directly or through interactions with schooling was not found. P-values produced by the method of testing for haplotype effects employed here may be sensitive to parameter settings, invalid under default settings, and should be checked for validity through simulation. PMID:23952646

  16. Customized rating assessment of climate suitability (CRACS): climate satisfaction evaluation based on subjective perception.

    PubMed

    Lin, Tzu-Ping; Yang, Shing-Ru; Matzarakis, Andreas

    2015-12-01

    Climate not only influences the behavior of people in urban environments but also affects people's schedules and travel plans. Therefore, providing people with appropriate long-term climate evaluation information is crucial. Accordingly, we developed an innovative climate assessment system based on field investigations conducted in three cities located in Northern, Central, and Southern Taiwan. The field investigations included questionnaire surveys and climate data collection. We first analyzed the relationship between the participants and climate parameters comprising physiologically equivalent temperature, air temperature, humidity, wind speed, solar radiation, cloud cover, and precipitation. Second, we established the neutral value, comfort range, and dissatisfied range of each parameter. Third, after verifying that the subjects' perceptions of the climate parameters vary based on individual preferences, we developed the customized rating assessment of climate suitability (CRACS) approach, which featured functions such as personalized and default climate suitability information to be used by users exhibiting varying demands. Finally, we performed calculations using the climate conditions of two cities during the past 10 years to demonstrate the performance of the CRACS approach. The results can be used as a reference when planning activities in the city or when organizing future travel plans. The flexibility of the assessment system enables it to be adjusted for varying regions and usage characteristics.

  17. Customized rating assessment of climate suitability (CRACS): climate satisfaction evaluation based on subjective perception

    NASA Astrophysics Data System (ADS)

    Lin, Tzu-Ping; Yang, Shing-Ru; Matzarakis, Andreas

    2015-12-01

    Climate not only influences the behavior of people in urban environments but also affects people's schedules and travel plans. Therefore, providing people with appropriate long-term climate evaluation information is crucial. Accordingly, we developed an innovative climate assessment system based on field investigations conducted in three cities located in Northern, Central, and Southern Taiwan. The field investigations included questionnaire surveys and climate data collection. We first analyzed the relationship between the participants and climate parameters comprising physiologically equivalent temperature, air temperature, humidity, wind speed, solar radiation, cloud cover, and precipitation. Second, we established the neutral value, comfort range, and dissatisfied range of each parameter. Third, after verifying that the subjects' perceptions of the climate parameters vary based on individual preferences, we developed the customized rating assessment of climate suitability (CRACS) approach, which featured functions such as personalized and default climate suitability information to be used by users exhibiting varying demands. Finally, we performed calculations using the climate conditions of two cities during the past 10 years to demonstrate the performance of the CRACS approach. The results can be used as a reference when planning activities in the city or when organizing future travel plans. The flexibility of the assessment system enables it to be adjusted for varying regions and usage characteristics.

  18. Creating History: By Design or by Default.

    ERIC Educational Resources Information Center

    Baugher, Shirley L.

    1989-01-01

    The author presents social demographic forecasts for the future. She examines social, economic, and political transitions in U.S. society separately and argues that the transitions that society makes depend ultimately on the values upon which individuals choose to act. (CH)

  19. Probabilistic framework for the estimation of the adult and child toxicokinetic intraspecies uncertainty factors.

    PubMed

    Pelekis, Michael; Nicolich, Mark J; Gauthier, Joseph S

    2003-12-01

    Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population that is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically-based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor UFH-TK was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue organ (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x). This article deals exclusively with the toxicokinetic component of the UF. The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than applied dose. It allows for the replacement of the default adult and child intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates for their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
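
    The data-derived UF defined above, the ratio of the 95th to the 50th percentile of the target-tissue dose distribution, is simple to compute once that distribution has been simulated. A Monte Carlo sketch with an illustrative lognormal population follows.

        import numpy as np

        def toxicokinetic_uf(target_doses):
            """UF_H-TK as the 95th/50th percentile ratio of tissue dose."""
            p95, p50 = np.percentile(target_doses, [95, 50])
            return p95 / p50

        # Illustrative population: lognormal variability in annual average tissue dose
        doses = np.random.default_rng(42).lognormal(mean=0.0, sigma=0.4, size=100_000)
        print(f"UF_H-TK = {toxicokinetic_uf(doses):.2f}")  # ~exp(1.645*0.4) = 1.93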

  20. 34 CFR 668.204 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.204 Draft cohort.... (1) We notify you of your draft cohort default rate before your official cohort default rate is...

  1. 34 CFR 668.185 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 3 2014-07-01 2014-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.185 Draft...) General. (1) We notify you of your draft cohort default rate before your official cohort default rate is...

  2. 34 CFR 668.185 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.185 Draft...) General. (1) We notify you of your draft cohort default rate before your official cohort default rate is...

  3. 34 CFR 668.185 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 3 2013-07-01 2013-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.185 Draft...) General. (1) We notify you of your draft cohort default rate before your official cohort default rate is...

  4. 34 CFR 668.185 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 3 2012-07-01 2012-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.185 Draft...) General. (1) We notify you of your draft cohort default rate before your official cohort default rate is...

  5. 34 CFR 668.204 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 3 2012-07-01 2012-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.204 Draft cohort.... (1) We notify you of your draft cohort default rate before your official cohort default rate is...

  6. 34 CFR 668.204 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 3 2014-07-01 2014-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.204 Draft cohort.... (1) We notify you of your draft cohort default rate before your official cohort default rate is...

  7. 34 CFR 668.204 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 3 2013-07-01 2013-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.204 Draft cohort.... (1) We notify you of your draft cohort default rate before your official cohort default rate is...

  8. Risk Factors for Treatment Default among Re-Treatment Tuberculosis Patients in India, 2006

    PubMed Central

    Jha, Ugra Mohan; Satyanarayana, Srinath; Dewan, Puneet K.; Chadha, Sarabjit; Wares, Fraser; Sahu, Suvanand; Gupta, Devesh; Chauhan, L. S.

    2010-01-01

    Setting: Under India's Revised National Tuberculosis Control Programme (RNTCP), >15% of previously-treated patients in the reported 2006 patient cohort defaulted from anti-tuberculosis treatment. Objective: To assess the timing, characteristics, and risk factors for default amongst re-treatment TB patients. Methodology: For this case-control study, in 90 randomly-selected programme units treatment records were abstracted from all 2006 defaulters from the RNTCP re-treatment regimen (cases), with one consecutively-selected non-defaulter per case. Patients who interrupted anti-tuberculosis treatment for >2 months were classified as defaulters. Results: 1,141 defaulters and 1,189 non-defaulters were included. The median duration of treatment prior to default was 81 days (25%–75% interquartile range 44–117 days) and documented retrieval efforts after treatment interruption were inadequate. Defaulters were more likely to have been male (adjusted odds ratio [aOR] 1.4, 95% confidence interval [CI] 1.2–1.7), have previously defaulted anti-tuberculosis treatment (aOR 1.3, 95%CI 1.1–1.6), have previous treatment from non-RNTCP providers (aOR 1.3, 95%CI 1.0–1.6), or have public health facility-based treatment observation (aOR 1.3, 95%CI 1.1–1.6). Conclusions: Amongst the large number of re-treatment patients in India, default occurs early and often. Improved pre-treatment counseling and community-based treatment provision may reduce default rates. Efforts to retrieve treatment interrupters prior to default require strengthening. PMID:20111727

  9. Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis

    ERIC Educational Resources Information Center

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia

    2016-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
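
    For illustration, a minimal sketch of the DerSimonian and Laird moment estimator of the between-study variance (Python; the function name and example values are ours, not from the record):

    ```python
    import numpy as np

    def dersimonian_laird_tau2(effects, variances):
        """Moment-based DL estimate of the between-study variance tau^2."""
        y = np.asarray(effects, dtype=float)    # per-study effect estimates
        v = np.asarray(variances, dtype=float)  # their within-study variances
        w = 1.0 / v                             # fixed-effect weights
        mu_fe = np.sum(w * y) / np.sum(w)       # fixed-effect pooled mean
        q = np.sum(w * (y - mu_fe) ** 2)        # Cochran's Q statistic
        k = len(y)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (q - (k - 1)) / c)      # truncated at zero

    # Example with three studies (illustrative numbers)
    print(dersimonian_laird_tau2([0.2, 0.5, 0.8], [0.04, 0.05, 0.06]))
    ```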

  10. WHO Multidrug Therapy for Leprosy: Epidemiology of Default in Treatment in Agra District, Uttar Pradesh, India

    PubMed Central

    Kumar, Anil; Girdhar, Anita; Chakma, Joy Kumar; Girdhar, Bhuwneswar Kumar

    2015-01-01

    Aim. To study the magnitude of default, time of default, its causes, and final clinical outcome. Methods. Data collected in active surveys in Agra is analyzed. Patients were given treatment after medical confirmation and were followed up. The treatment default and other clinical outcomes were recorded. Results. Patients who defaulted had comparable demographic characteristics. However, more women were seen among defaulters (62.7% in PB, 42.6% in MB) than among treatment completers (52.7% in PB, 35.9% in MB). Nerve involvement was high in treatment completers: 45.7% in PB and 91.3% in MB leprosy. The overall default rate was lower in ROM (14.8%) than in standard MDT (28.8%) for PB leprosy (χ²(1) = 11.6, P = 0.001) and also for MB leprosy: 9.1% in ROM compared to 34.5% in MDT (χ²(1) = 6.0, P = 0.015). The default rate did not differ between the two types of leprosy given MDT (28.8% versus 34.5%, P > 0.05). Most patients defaulted at an early stage of treatment, mainly due to manageable side effects. Conclusion. The default rate in standard MDT, both for PB and MB leprosy, was observed to be significantly higher than in ROM treatment. Most defaults occurred at an early stage of treatment, and a major contribution to default came from side effects such as drowsiness, weakness, vomiting, diarrhea, and so forth, related to poor general health. Although about half of the defaulters were observed to be cured, 2.2% in PB-MDT and 10.9% in MB-MDT developed disability; this is an issue attributable to default. Attempts are needed to increase treatment compliance. The use of specially designed disease-related health education along with easily administered drug regimens may help to reduce default. PMID:25705679

  11. Nudge for (the Public) Good: How Defaults Can Affect Cooperation

    PubMed Central

    Fosgaard, Toke R.; Piovesan, Marco

    2015-01-01

    In this paper we test the effect of non-binding defaults on the level of contribution to a public good. We manipulate the default numbers appearing on the decision screen to nudge subjects toward a free-rider strategy or a perfect conditional cooperator strategy. Our results show that the vast majority of our subjects did not adopt the default numbers, but their stated strategy was affected by the default. Moreover, we find that our manipulation spilled over to a subsequent repeated public goods game where default was not manipulated. Here we found that subjects who previously saw the free rider default were significantly less cooperative than those who saw the perfect conditional cooperator default. PMID:26717569

  12. Nudge for (the Public) Good: How Defaults Can Affect Cooperation.

    PubMed

    Fosgaard, Toke R; Piovesan, Marco

    2015-01-01

    In this paper we test the effect of non-binding defaults on the level of contribution to a public good. We manipulate the default numbers appearing on the decision screen to nudge subjects toward a free-rider strategy or a perfect conditional cooperator strategy. Our results show that the vast majority of our subjects did not adopt the default numbers, but their stated strategy was affected by the default. Moreover, we find that our manipulation spilled over to a subsequent repeated public goods game where default was not manipulated. Here we found that subjects who previously saw the free rider default were significantly less cooperative than those who saw the perfect conditional cooperator default.

  13. Lead and Arsenic Bioaccessibility and Speciation as a Function of Soil Particle Size

    EPA Science Inventory

    Bioavailability research of soil metals has advanced considerably from default values to validated in vitro bioaccessibility (IVBA) assays for site-specific risk assessment. Previously, USEPA determined that the soil-size fraction representative of dermal adherence and consequent...

  14. Episcleral eye plaque dosimetry comparison for the Eye Physics EP917 using Plaque Simulator and Monte Carlo simulation

    PubMed Central

    Amoush, Ahmad; Wilkinson, Douglas A.

    2015-01-01

    This work is a comparative study of the dosimetry calculated by Plaque Simulator, a treatment planning system for eye plaque brachytherapy, with the dosimetry calculated using Monte Carlo simulation for an Eye Physics model EP917 eye plaque. Monte Carlo (MC) simulation using MCNPX 2.7 was used to calculate the central axis dose in water for an EP917 eye plaque fully loaded with 17 IsoAid Advantage (125)I seeds. In addition, the dosimetry parameters Λ, gL(r), and F(r,θ) were calculated for the IsoAid Advantage model IAI-125 (125)I seed and benchmarked against published data. Bebig Plaque Simulator (PS) v5.74 was used to calculate the central axis dose based on the AAPM Updated Task Group 43 (TG-43U1) dose formalism. The calculated central axis dose from MC and PS was then compared. When the MC dosimetry parameters for the IsoAid Advantage (125)I seed were compared with the consensus values, Λ agreed with the consensus value to within 2.3%. However, much larger differences were found between MC-calculated gL(r) and F(r,θ) and the consensus values. The differences between MC-calculated dosimetry parameters are much smaller when compared with recently published data. The differences between the calculated central axis absolute dose from MC and PS ranged from 5% to 10% for distances between 1 and 12 mm from the outer scleral surface. When the dosimetry parameters for the (125)I seed from this study were used in PS, the calculated absolute central axis dose differences were reduced by 2.3% from depths of 4 to 12 mm from the outer scleral surface. We conclude that PS adequately models the central dose profile of this plaque using its defaults for the IsoAid model IAI-125 at distances of 1 to 7 mm from the outer scleral surface. However, improved dose accuracy can be obtained by using updated dosimetry parameters for the IsoAid model IAI-125 (125)I seed. PACS number: 87.55.K- PMID:26699577
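
    For context, the TG-43U1 formalism the record refers to reduces, in its 1D point-source form, to D(r) = S_K · Λ · (r0/r)² · g(r) · φan(r) with r0 = 1 cm. A rough sketch follows; the interpolation tables are placeholders, not the consensus data for the IAI-125 seed:

    ```python
    import numpy as np

    def tg43_point_dose_rate(s_k, big_lambda, r_cm, g_r, phi_an):
        """TG-43U1 1D (point-source) dose rate in water at radius r_cm.

        g_r and phi_an are callables interpolating the radial dose function
        and the 1D anisotropy function (values would come from consensus tables).
        """
        r0 = 1.0  # reference distance, cm
        return s_k * big_lambda * (r0 / r_cm) ** 2 * g_r(r_cm) * phi_an(r_cm)

    # Placeholder table lookups (illustrative numbers only)
    g = lambda r: np.interp(r, [0.5, 1.0, 2.0, 5.0], [1.05, 1.00, 0.82, 0.40])
    phi = lambda r: np.interp(r, [0.5, 1.0, 2.0, 5.0], [0.95, 0.94, 0.93, 0.92])
    print(tg43_point_dose_rate(s_k=3.0, big_lambda=0.98, r_cm=1.5, g_r=g, phi_an=phi))
    ```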

  15. Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters.

    PubMed

    Khamassi, Mehdi; Enel, Pierre; Dominey, Peter Ford; Procyk, Emmanuel

    2013-01-01

    Converging evidence suggests that the medial prefrontal cortex (MPFC) is involved in feedback categorization, performance monitoring, and task monitoring, and may contribute to the online regulation of reinforcement learning (RL) parameters that would affect decision-making processes in the lateral prefrontal cortex (LPFC). Previous neurophysiological experiments have shown MPFC activities encoding error likelihood, uncertainty, reward volatility, as well as neural responses categorizing different types of feedback, for instance, distinguishing between choice errors and execution errors. Rushworth and colleagues have proposed that the involvement of MPFC in tracking the volatility of the task could contribute to the regulation of one of the RL parameters, the learning rate. We extend this hypothesis by proposing that MPFC could contribute to the regulation of other RL parameters such as the exploration rate and default action values in case of task shifts. Here, we analyze the sensitivity of behavioral performance to RL parameters in two monkey decision-making tasks, one with a deterministic reward schedule and the other with a stochastic one. We show that there exist optimal parameter values specific to each of these tasks, that need to be found for optimal performance and that are usually hand-tuned in computational models. In contrast, automatic online regulation of these parameters using some heuristics can help produce a good, although non-optimal, behavioral performance in each task. We finally describe our computational model of MPFC-LPFC interaction used for online regulation of the exploration rate and its application to a human-robot interaction scenario. There, unexpected uncertainties are produced by the human introducing cued task changes or by cheating. The model enables the robot to autonomously learn to reset exploration in response to such uncertain cues and events. The combined results provide concrete evidence specifying how prefrontal cortical subregions may cooperate to regulate RL parameters. It also shows how such neurophysiologically inspired mechanisms can control advanced robots in the real world. Finally, the model's learning mechanisms that were challenged in the last robotic scenario provide testable predictions on the way monkeys may learn the structure of the task during the pretraining phase of the previous laboratory experiments. Copyright © 2013 Elsevier B.V. All rights reserved.
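
    As a cartoon of the regulation idea discussed above, the sketch below couples a standard delta-rule value update with softmax exploration, plus a meta-rule that resets action values to a default and lowers the inverse temperature when an uncertainty cue (e.g., a detected task shift) occurs. All parameter values are illustrative, not the paper's model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_actions, alpha, beta = 4, 0.1, 5.0        # learning rate, inverse temperature
    q = np.full(n_actions, 1.0 / n_actions)     # default action values

    def softmax_choice(q, beta):
        p = np.exp(beta * (q - q.max()))        # numerically stable softmax
        p /= p.sum()
        return rng.choice(len(q), p=p)

    def step(q, beta, reward_probs):
        a = softmax_choice(q, beta)
        r = float(rng.random() < reward_probs[a])
        q[a] += alpha * (r - q[a])              # delta-rule value update
        return a, r

    def on_uncertainty_cue(q, beta):
        """Meta-rule: reset values to default and boost exploration."""
        return np.full_like(q, 1.0 / len(q)), beta * 0.5

    for _ in range(200):
        step(q, beta, reward_probs=[0.2, 0.8, 0.5, 0.1])
    q, beta = on_uncertainty_cue(q, beta)       # e.g., cued task shift detected
    ```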

  16. 46 CFR 298.41 - Remedies after default.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 8 2011-10-01 2011-10-01 false Remedies after default. 298.41 Section 298.41 Shipping... Defaults and Remedies, Reporting Requirements, Applicability of Regulations § 298.41 Remedies after default... governing remedies after a default, which relate to our rights and duties, the rights and duties of the...

  17. 46 CFR 298.41 - Remedies after default.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 8 2010-10-01 2010-10-01 false Remedies after default. 298.41 Section 298.41 Shipping... Defaults and Remedies, Reporting Requirements, Applicability of Regulations § 298.41 Remedies after default... governing remedies after a default, which relate to our rights and duties, the rights and duties of the...

  18. The default response to uncertainty and the importance of perceived safety in anxiety and stress: An evolution-theoretical perspective.

    PubMed

    Brosschot, Jos F; Verkuil, Bart; Thayer, Julian F

    2016-06-01

    From a combined neurobiological and evolution-theoretical perspective, the stress response is a subcortically subserved response to uncertainty that is not 'generated' but 'default': the stress response is 'always there' but as long as safety is perceived, the stress response is under tonic prefrontal inhibition, reflected by high vagally mediated heart rate variability. Uncertainty of safety leads to disinhibiting the default stress response, even in the absence of threat. Due to the stress response's survival value, this 'erring on the side of caution' is passed to us via our genes. Thus, intolerance of uncertainty is not acquired during the life cycle, but is a given property of all living organisms, only to be alleviated in situations of which the safety is learned. When the latter is deficient, generalized unsafety ensues, which underlies chronic anxiety and stress and their somatic health risks, as well as other highly prevalent conditions carrying such risks, including loneliness, obesity, aerobic unfitness and old age. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Estimating national landfill methane emissions: an application of the 2006 Intergovernmental Panel on Climate Change Waste Model in Panama.

    PubMed

    Weitz, Melissa; Coburn, Jeffrey B; Salinas, Edgar

    2008-05-01

    This paper estimates national methane emissions from solid waste disposal sites in Panama over the time period 1990-2020 using both the 2006 Intergovernmental Panel on Climate Change (IPCC) Waste Model spreadsheet and the default emissions estimate approach presented in the 1996 IPCC Good Practice Guidelines. The IPCC Waste Model has the ability to calculate emissions from a variety of solid waste disposal site types, taking into account country- or region-specific waste composition and climate information, and can be used with a limited amount of data. Countries with detailed data can also run the model with country-specific values. The paper discusses methane emissions from solid waste disposal; explains the differences between the two methodologies in terms of data needs, assumptions, and results; describes solid waste disposal circumstances in Panama; and presents the results of this analysis. It also demonstrates the Waste Model's ability to incorporate landfill gas recovery data and to make projections. The former default method methane emissions estimates are 25 Gg in 1994, and range from 23.1 Gg in 1990 to a projected 37.5 Gg in 2020. The Waste Model estimates are 26.7 Gg in 1994, ranging from 24.6 Gg in 1990 to 41.6 Gg in 2020. Emissions estimates for Panama produced by the new model were, on average, 8% higher than estimates produced by the former default methodology. The increased estimate can be attributed to the inclusion of all solid waste disposal in Panama (as opposed to only disposal in managed landfills), but the increase was offset somewhat by the different default factors and regional waste values between the 1996 and 2006 IPCC guidelines, and the use of the first-order decay model with a time delay for waste degradation in the IPCC Waste Model.
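
    A simplified first-order-decay calculation in the spirit of the 2006 IPCC Waste Model (a sketch of the method's core equation, not the full spreadsheet; the default-style factors shown are placeholders):

    ```python
    import math

    def ch4_emitted(waste_by_year, year_t, doc=0.15, doc_f=0.5, mcf=1.0,
                    f=0.5, k=0.05, recovery=0.0, ox=0.0):
        """CH4 (same mass units as waste) emitted in year_t.

        DDOCm = W * DOC * DOCf * MCF; decay starts the year after deposit.
        """
        generated = 0.0
        for x, w in waste_by_year.items():
            if x >= year_t:
                continue
            ddocm = w * doc * doc_f * mcf
            # fraction of DDOCm decomposing during year_t under first-order decay
            decomposed = ddocm * math.exp(-k * (year_t - x - 1)) * (1 - math.exp(-k))
            generated += decomposed * f * 16.0 / 12.0   # carbon -> CH4 mass
        return (generated - recovery) * (1 - ox)        # net of recovery, oxidation

    waste = {1990: 300.0, 1991: 310.0, 1992: 320.0}     # illustrative Gg/yr series
    print(ch4_emitted(waste, 1994))
    ```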

  20. The comparison of fossil carbon fraction and greenhouse gas emissions through an analysis of exhaust gases from urban solid waste incineration facilities.

    PubMed

    Kim, Seungjin; Kang, Seongmin; Lee, Jeongwoo; Lee, Seehyung; Kim, Ki-Hyun; Jeon, Eui-Chan

    2016-10-01

    In this study, in order to understand how to calculate greenhouse gas emissions from urban solid waste incineration facilities accurately, and the problems likely to occur in doing so, emissions were calculated by classifying the calculation methods into three types. For the comparison of calculation methods, the waste characteristics ratio, dry substance content by waste characteristics, carbon content in dry substance, and (12)C content were analyzed; in particular, CO2 concentration in incineration gases and (12)C content were analyzed together. Three types of calculation methods were constructed from the assay values, and emissions of the urban solid waste incineration facilities were calculated with each method and then compared. As a result of the comparison, Calculation Method A, which used the default value presented in the IPCC guidelines, gave greenhouse gas emissions for the urban solid waste incineration facilities A and B of 244.43 ton CO2/day and 322.09 ton CO2/day, respectively. Hence, it differed considerably from Calculation Methods B and C, which used the assay values of this study. It is determined that this was because the IPCC default value, being a world average, could not reflect the characteristics of urban solid waste incineration facilities. Calculation Method B indicated 163.31 ton CO2/day and 230.34 ton CO2/day, respectively, for the urban solid waste incineration facilities A and B; Calculation Method C indicated 151.79 ton CO2/day and 218.99 ton CO2/day, respectively. This study compares greenhouse gas emissions calculated using the (12)C content default value provided by the IPCC (Intergovernmental Panel on Climate Change) with greenhouse gas emissions calculated using the (12)C content and waste assay values that reflect the characteristics of the target urban solid waste incineration facilities. Also, the concentration and (12)C content were determined by directly collecting incineration gases from the target facilities, and the greenhouse gas emissions obtained through this survey were compared with those based on the previously calculated assay values of the solid waste.
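
    The underlying arithmetic is the standard IPCC incineration form CO2 = MSW × dm × CF × FCF × OF × 44/12. A worked example with illustrative numbers (not the facility assays from the study):

    ```python
    # Fossil CO2 from waste incineration; all inputs below are illustrative.
    msw = 200.0   # tonnes of waste burned per day
    dm  = 0.80    # dry-matter content
    cf  = 0.40    # carbon fraction of dry matter
    fcf = 0.40    # fossil fraction of total carbon (the split the assays refine)
    of  = 1.0     # oxidation factor
    co2_per_day = msw * dm * cf * fcf * of * 44.0 / 12.0   # C mass -> CO2 mass
    print(f"{co2_per_day:.1f} t CO2/day")                  # ~93.9 with these inputs
    ```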

  1. Thresholds for Chemically Induced Toxicity: Theories and Evidence

    EPA Science Inventory

    Regulatory agencies define “science policies” as a means of proceeding with risk assessments and management decisions in the absence of all the data these bodies would like. Policies may include the use of default assumptions, values and methodologies. The U.S. EPA 20...

  2. User-Friendly Interface Developed for a Web-Based Service for SpaceCAL Emulations

    NASA Technical Reports Server (NTRS)

    Liszka, Kathy J.; Holtz, Allen P.

    2004-01-01

    A team at the NASA Glenn Research Center is developing a Space Communications Architecture Laboratory (SpaceCAL) for protocol development activities for coordinated satellite missions. SpaceCAL will provide a multiuser, distributed system to emulate space-based Internet architectures, backbone networks, formation clusters, and constellations. As part of a new effort in 2003, building blocks are being defined for an open distributed system to make the satellite emulation test bed accessible through an Internet connection. The first step in creating a Web-based service to control the emulation remotely is providing a user-friendly interface for encoding the data into a well-formed and complete Extensible Markup Language (XML) document. XML provides coding that allows data to be transferred between dissimilar systems. Scenario specifications include control parameters, network routes, interface bandwidths, delay, and bit error rate. Specifications for all satellites, instruments, and ground stations in a given scenario are also included in the XML document. For the SpaceCAL emulation, the XML document can be created using XForms, a Web-based forms language for data collection. Contrary to older forms technology, the interactive user interface makes the science prevalent, not the data representation. Required versus optional input fields, default values, automatic calculations, data validation, and reuse will help researchers quickly and accurately define missions. XForms can apply any XML schema defined for the test mission to validate data before forwarding it to the emulation facility. New instrument definitions, facilities, and mission types can be added to the existing schema. The first prototype user interface incorporates components for interactive input and form processing. Internet address, data rate, and the location of the facility are implemented with basic form controls, with default values provided for convenience and efficiency using basic XForms operations. Because different emulation scenarios will vary widely in their component structure, more complex operations are used to add and delete facilities.
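
    A toy illustration of serializing such a scenario to XML with Python's standard library; the element and attribute names here are invented for illustration, not the project's actual schema:

    ```python
    import xml.etree.ElementTree as ET

    # Build a minimal scenario document: control parameters plus one link spec.
    scenario = ET.Element("scenario", name="demo-constellation")
    ET.SubElement(scenario, "control", duration_s="3600")
    link = ET.SubElement(scenario, "link")
    ET.SubElement(link, "bandwidth").text = "1.544e6"    # bits per second
    ET.SubElement(link, "delay").text = "0.25"           # seconds
    ET.SubElement(link, "ber").text = "1e-7"             # bit error rate
    ground = ET.SubElement(scenario, "facility", type="ground-station")
    ET.SubElement(ground, "address").text = "192.0.2.10" # documentation address

    ET.ElementTree(scenario).write("scenario.xml", encoding="utf-8",
                                   xml_declaration=True)
    ```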

  3. Refinement of arsenic attributable health risks in rural Pakistan using population specific dietary intake values.

    PubMed

    Rasheed, Hifza; Slack, Rebecca; Kay, Paul; Gong, Yun Yun

    2017-02-01

    Previous risk assessment studies have often utilised generic consumption or intake values when evaluating ingestion exposure pathways. If these values do not accurately reflect the country or scenario in question, the resulting risk assessment will not provide a meaningful representation of cancer risks in that particular country/scenario. This study sought to determine water and food intake parameters for one region in South Asia, rural Pakistan, and assess the role population specific intake parameters play in cancer risk assessment. A questionnaire was developed to collect data on sociodemographic features and 24-h water and food consumption patterns from a rural community. The impact of dietary differences on cancer susceptibility linked to arsenic exposure was evaluated by calculating cancer risks using the data collected in the current study against standard water and food intake levels for the USA, Europe and Asia. A probabilistic cancer risk assessment was performed for each set of intake values of this study. Average daily total water intake based on drinking direct plain water and indirect water from food and beverages was found to be 3.5 L day-1 (95% CI: 3.38, 3.57), exceeding the US Environmental Protection Agency's default (2.5 L day-1) and the World Health Organization's recommended intake value (2 L day-1). Average daily rice intake (469 g day-1) was found to be lower than in India and Bangladesh, whereas wheat intake (402 g day-1) was higher than intake reported for the USA, Europe and Asian sub-regions. Consequently, arsenic-associated cumulative cancer risk determined for daily water intake was found to be 17 chances in children of 3-6 years (95% CI: 0.0014, 0.0017), 14 in children of age 6-16 years (95% CI: 0.001, 0.0011) and 6 in adults of 16-67 years (95% CI: 0.0006, 0.0006) in a population size of 10,000. This is higher than the risks estimated using the US Environmental Protection Agency and World Health Organization's default recommended water intake levels. Rice intake data showed early life cumulative cancer risks of 15 chances in 10,000 for children of 3-6 years (95% CI: 0.0012, 0.0015), 14 in children of 6-16 years (95% CI: 0.0011, 0.0014) and later life risk of 8 adults (95% CI: 0.0008, 0.0008) in a population of 10,000. This is lower than the cancer risks in countries with higher rice intake and elevated arsenic levels (Bangladesh and India). Cumulative cancer risk from arsenic exposure showed the relative risk contribution from total water to be 51%, from rice to be 44% and 5% from wheat intake. The study demonstrates the need to use population specific dietary information for risk assessment and risk management studies. Probabilistic risk assessment confirmed the importance of dietary intake in estimating cancer risk, along with arsenic concentrations in water or food and the age of the exposed rural population. Copyright © 2016 Elsevier Ltd. All rights reserved.
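
    The risk arithmetic such studies rest on is the standard ingestion-exposure form CDI = C × IR × EF × ED / (BW × AT) and Risk = CDI × CSF. A sketch with illustrative inputs; only the 3.5 L/day average intake comes from the record above, the slope factor is the EPA IRIS value for arsenic, and the rest are generic placeholders:

    ```python
    c_as   = 0.05      # arsenic in water, mg/L (placeholder)
    ir     = 3.5       # intake rate, L/day (the study's measured rural average)
    ef, ed = 365, 30   # exposure frequency (days/yr) and duration (yr)
    bw     = 60.0      # body weight, kg
    at     = 365 * 70  # averaging time, days (70-yr lifetime)
    csf    = 1.5       # EPA IRIS oral cancer slope factor for arsenic, (mg/kg-day)^-1

    cdi = c_as * ir * ef * ed / (bw * at)   # chronic daily intake, mg/kg-day
    risk = cdi * csf                        # excess lifetime cancer risk
    print(f"excess lifetime cancer risk = {risk:.1e}")
    ```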

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, Larry K.; Gustafson, William I.; Kassianov, Evgueni I.

    A new treatment for shallow clouds has been introduced into the Weather Research and Forecasting (WRF) model. The new scheme, called the cumulus potential (CuP) scheme, replaces the ad-hoc trigger function used in the Kain-Fritsch cumulus parameterization with a trigger function related to the distribution of temperature and humidity in the convective boundary layer via probability density functions (PDFs). An additional modification to the default version of WRF is the computation of a cumulus cloud fraction based on the time scales relevant for shallow cumuli. Results from three case studies over the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) site in north central Oklahoma are presented. These days were selected because of the presence of shallow cumuli over the ARM site. The modified version of WRF does a much better job predicting the cloud fraction and the downwelling shortwave irradiance than control simulations utilizing the default Kain-Fritsch scheme. The modified scheme includes a number of additional free parameters, including the number and size of bins used to define the PDF, the minimum frequency of a bin within the PDF before that bin is considered for shallow clouds to form, and the critical cumulative frequency of bins required to trigger deep convection. A series of tests were undertaken to evaluate the sensitivity of the simulations to these parameters. Overall, the scheme was found to be relatively insensitive to each of the parameters.
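
    A cartoon of the PDF-based trigger logic described above: build a joint histogram of boundary-layer temperature and humidity, let bins exceeding a minimum frequency seed shallow clouds, and trigger deep convection when the cumulative frequency of buoyant bins passes a critical value. The thresholds are the scheme's free parameters; all values and the buoyancy stand-in below are placeholders, not the WRF implementation:

    ```python
    import numpy as np

    def cup_style_trigger(theta, q, n_bins=10, min_freq=0.01, crit_cum_freq=0.25):
        """Return (bins allowed to form shallow cu, deep-convection flag)."""
        hist, _, _ = np.histogram2d(theta, q, bins=n_bins)
        freq = hist / hist.sum()
        shallow_bins = freq >= min_freq           # frequent-enough parcels
        # Placeholder buoyancy test: treat the most populated bins as buoyant.
        buoyant = freq >= np.quantile(freq[freq > 0], 0.9)
        deep = freq[shallow_bins & buoyant].sum() >= crit_cum_freq
        return shallow_bins, deep

    rng = np.random.default_rng(0)
    shallow, deep = cup_style_trigger(300.0 + rng.normal(0, 0.5, 5000),
                                      0.012 + rng.normal(0, 5e-4, 5000))
    print(deep)
    ```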

  5. Use of cellular phone contacts to increase return rates for immunization services in Kenya.

    PubMed

    Mokaya, Evans; Mugoya, Isaac; Raburu, Jane; Shimp, Lora

    2017-01-01

    In Kenya, failure to complete immunization schedules by children who previously accessed immunization services is an obstacle to ensuring that children are fully immunized. Home visit approaches used to track defaulting children have not been successful in reducing the drop-out rate. This study tested the use of phone contacts as an approach for tracking immunization defaulters in twelve purposively-selected facilities in three districts of western Kenya. For nine months, children accessing immunization services in the facilities were tracked and caregivers were asked their reasons for defaulting. In all of the facilities, caregiver phone ownership was above 80%. In 11 of the 12 facilities, defaulter rates between pentavalent1 and pentavalent3 vaccination doses were reduced significantly, to within the acceptable level of < 10%. Caregivers provided reliable contact information and health workers positively perceived phone-based defaulter communications. Tracking a defaulter required on average 2 minutes by voice and Ksh 6 ($ 0.07). Competing tasks and concerns about vaccinating sick children and side-effects were the most cited reasons for caregivers defaulting. Notably, a significant number of children categorised as defaulters had been vaccinated in a different facility (and were therefore "false defaulters"). Use of phone contacts for follow-up is a feasible and cost-effective method for tracking defaulters. This approach should complement traditional home visits, especially for caregivers without phones. Given communication-related reasons for defaulting, it is important that immunization programs scale-up community education activities. A system for health facilities to share details of defaulting children should be established to reduce "false defaulters".

  6. Time of default in tuberculosis patients on directly observed treatment.

    PubMed

    Pardeshi, Geeta S

    2010-09-01

    Default remains an important challenge for the Revised National Tuberculosis Control Programme, which has achieved improved cure rates. This study describes the pattern of time of default in patients on DOTS. Setting: Tuberculosis Unit in District Tuberculosis Centre, Yavatmal, India. Design: Retrospective cohort study. This analysis was done among the cohort of patients registered at the Tuberculosis Unit during the year 2004. The time of default was assessed from the tuberculosis register. The sputum smear conversion and treatment outcome were also assessed. Statistical analysis: Kaplan-Meier plots and log rank tests. Overall, the default rate amongst the 716 patients registered at the Tuberculosis Unit was 10.33%. There was a significant difference in the default rate over time between the three DOTS categories (log rank statistic = 15.49, P = 0.0004). Amongst the 331 smear-positive patients, the cumulative default rates at the end of the intensive phase were 4% and 16%, while by the end of the treatment period the default rates were 6% and 31% in category I and category II, respectively. A majority of the smear-positive patients in category II belonged to the group 'treatment after default' (56/95), and 30% of them defaulted during re-treatment. The sputum smear conversion rate at the end of the intensive phase was 84%. Amongst 36 patients without smear conversion at the end of the intensive phase, 55% had treatment failure. Patients defaulting in the intensive phase of treatment and patients without smear conversion at the end of the intensive phase should be retrieved on a priority basis. Default constitutes not only a major reason for patients needing re-treatment but also a risk for repeated default.

  7. Multiscale entropy analysis of biological signals: a fundamental bi-scaling law

    PubMed Central

    Gao, Jianbo; Hu, Jing; Liu, Feiyan; Cao, Yinhe

    2015-01-01

    Since it was introduced in the early 2000s, multiscale entropy (MSE) has found many applications in biosignal analysis, and has been extended to multivariate MSE. So far, however, no analytic results for MSE or multivariate MSE have been reported. This has severely limited our basic understanding of MSE. For example, it has not been studied whether MSE estimated using default parameter values and a short data set is meaningful or not. Nor is it known whether MSE has any relation to other complexity measures, such as the Hurst parameter, which characterizes the correlation structure of the data. To overcome this limitation, and more importantly, to guide more fruitful applications of MSE in various areas of the life sciences, we derive a fundamental bi-scaling law for fractal time series: one scaling for the scale in phase space, the other for the block size used for smoothing. We illustrate the usefulness of the approach by examining two types of physiological data. One is heart rate variability (HRV) data, for the purpose of distinguishing healthy subjects from patients with congestive heart failure, a life-threatening condition. The other is electroencephalogram (EEG) data, for the purpose of distinguishing epileptic seizure EEG from normal healthy EEG. PMID:26082711
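
    For readers unfamiliar with the estimator, a compact (not performance-tuned) sketch of coarse-graining plus sample entropy, using the common r = 0.15 × SD convention; the match counting is slightly simplified relative to the canonical definition:

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.15):
        """SampEn(m, r), with r given as a fraction of the series SD."""
        x = np.asarray(x, float)
        tol = r * x.std()
        def matches(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)  # Chebyshev
            return np.sum(d <= tol) - len(t)                     # drop self-matches
        b, a = matches(m), matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.15):
        """Coarse-grain by non-overlapping means at each scale, then SampEn."""
        x = np.asarray(x, float)
        out = []
        for s in scales:
            n = len(x) // s
            cg = x[:n * s].reshape(n, s).mean(axis=1)
            out.append(sample_entropy(cg, m, r))
        return out

    print(multiscale_entropy(np.random.default_rng(1).standard_normal(1000)))
    ```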

  8. The fractured landscape of RNA-seq alignment: the default in our STARs.

    PubMed

    Ballouz, Sara; Dobin, Alexander; Gingeras, Thomas R; Gillis, Jesse

    2018-06-01

    Many tools are available for RNA-seq alignment and expression quantification, but their comparative value is hard to establish. Benchmarking assessments often highlight methods' good performance, but either focus on model data or fail to explain variation in performance. This leaves us to ask: what is the most meaningful way to assess different alignment choices? And importantly, where is there room for progress? In this work, we explore the answers to these two questions by performing an exhaustive assessment of the STAR aligner. We assess STAR's performance across a range of alignment parameters using common metrics, and then on biologically focused tasks. We find technical metrics such as fraction mapping or expression profile correlation to be uninformative, capturing properties unlikely to have any role in biological discovery. Surprisingly, we find that changes in alignment parameters within a wide range have little impact on both technical and biological performance. Yet, when performance finally does break, it happens in difficult regions, such as X-Y paralogs and MHC genes. We believe improved reporting by developers will help establish where results are likely to be robust or fragile, providing a better baseline to establish where methodological progress can still occur.

  9. Lessons learned from comparing molecular dynamics engines on the SAMPL5 dataset.

    PubMed

    Shirts, Michael R; Klein, Christoph; Swails, Jason M; Yin, Jian; Gilson, Michael K; Mobley, David L; Case, David A; Zhong, Ellen D

    2017-01-01

    We describe our efforts to prepare common starting structures and models for the SAMPL5 blind prediction challenge. We generated the starting input files and single configuration potential energies for the host-guest systems in the SAMPL5 blind prediction challenge for the GROMACS, AMBER, LAMMPS, DESMOND and CHARMM molecular simulation programs. All conversions were fully automated from the originally prepared AMBER input files using a combination of the ParmEd and InterMol conversion programs. We find that the energy calculations for all molecular dynamics engines for this molecular set agree to better than 0.1% relative absolute energy for all energy components, and in most cases an order of magnitude better, when reasonable choices are made for different cutoff parameters. However, there are some surprising sources of statistically significant differences. Most importantly, different choices of Coulomb's constant between programs are one of the largest sources of discrepancies in energies. We discuss the measures required to get good agreement in the energies for equivalent starting configurations between the simulation programs, and the energy differences that occur when simulations are run with program-specific default simulation parameter values. Finally, we discuss what was required to automate this conversion and comparison.
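
    A minimal sketch of the kind of per-component tolerance check described above; the energy terms and values are invented for illustration:

    ```python
    def compare_energies(e_ref, e_test, rel_tol=1e-3):   # 0.1% relative tolerance
        """Flag energy components whose relative difference exceeds rel_tol."""
        report = {}
        for term, ref in e_ref.items():
            diff = abs(e_test[term] - ref) / max(abs(ref), 1e-12)
            report[term] = (diff, diff <= rel_tol)
        return report

    gromacs = {"bond": 1053.2, "angle": 2471.9, "coulomb": -35012.7}
    amber   = {"bond": 1053.3, "angle": 2472.0, "coulomb": -35020.1}
    for term, (diff, ok) in compare_energies(gromacs, amber).items():
        print(f"{term:8s} rel.diff={diff:.2e} {'OK' if ok else 'MISMATCH'}")
    ```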

  10. Lessons learned from comparing molecular dynamics engines on the SAMPL5 dataset

    PubMed Central

    Shirts, Michael R.; Klein, Christoph; Swails, Jason M.; Yin, Jian; Gilson, Michael K.; Mobley, David L.; Case, David A.; Zhong, Ellen D.

    2017-01-01

    We describe our efforts to prepare common starting structures and models for the SAMPL5 blind prediction challenge. We generated the starting input files and single configuration potential energies for the host-guest systems in the SAMPL5 blind prediction challenge for the GROMACS, AMBER, LAMMPS, DESMOND and CHARMM molecular simulation programs. All conversions were fully automated from the originally prepared AMBER input files using a combination of the ParmEd and InterMol conversion programs. We find that the energy calculations for all molecular dynamics engines for this molecular set agree to better than 0.1% relative absolute energy for all energy components, and in most cases an order of magnitude better, when reasonable choices are made for different cutoff parameters. However, there are some surprising sources of statistically significant differences. Most importantly, different choices of Coulomb's constant between programs are one of the largest sources of discrepancies in energies. We discuss the measures required to get good agreement in the energies for equivalent starting configurations between the simulation programs, and the energy differences that occur when simulations are run with program-specific default simulation parameter values. Finally, we discuss what was required to automate this conversion and comparison. PMID:27787702

  11. Lessons learned from comparing molecular dynamics engines on the SAMPL5 dataset

    NASA Astrophysics Data System (ADS)

    Shirts, Michael R.; Klein, Christoph; Swails, Jason M.; Yin, Jian; Gilson, Michael K.; Mobley, David L.; Case, David A.; Zhong, Ellen D.

    2017-01-01

    We describe our efforts to prepare common starting structures and models for the SAMPL5 blind prediction challenge. We generated the starting input files and single configuration potential energies for the host-guest systems in the SAMPL5 blind prediction challenge for the GROMACS, AMBER, LAMMPS, DESMOND and CHARMM molecular simulation programs. All conversions were fully automated from the originally prepared AMBER input files using a combination of the ParmEd and InterMol conversion programs. We find that the energy calculations for all molecular dynamics engines for this molecular set agree to better than 0.1% relative absolute energy for all energy components, and in most cases an order of magnitude better, when reasonable choices are made for different cutoff parameters. However, there are some surprising sources of statistically significant differences. Most importantly, different choices of Coulomb's constant between programs are one of the largest sources of discrepancies in energies. We discuss the measures required to get good agreement in the energies for equivalent starting configurations between the simulation programs, and the energy differences that occur when simulations are run with program-specific default simulation parameter values. Finally, we discuss what was required to automate this conversion and comparison.

  12. Children's Responses to Line Spacing in Early Reading Books or "Holes to Tell Which Line You're On"

    ERIC Educational Resources Information Center

    Reynolds, Linda; Walker, Sue; Duncan, Alison

    2006-01-01

    This paper describes a study designed to find out whether children's reading would be affected by line spacing that is wider or narrower than the commonly used default values. The realistic, high quality test material was set using a range of four different line spacing values, and twenty-four children in Years 1 and 2 (between five and seven…

  13. Method of Characteristic (MOC) Nozzle Flowfield Solver - User’s Guide and Input Manual

    DTIC Science & Technology

    2013-01-01

    Description: axisymmetric or planar calculation, selected by DELTA (default 0.0; 0.0 = planar solution, 1.0 = axisymmetric solution) in the &INPUT namelist. NI (data type: integer) ... angle error. Sample control-value settings from an input file: DELTA = 1.0 (1 = axi, 0 = planar; mass flux not working correctly), NI = 81 ... DELTA = 1.0 (1 = axi, 0 = planar), NI = 71 (number of radial points on inflow plane, max 99), NT = 35 (number of ...

  14. A Web-based tool for UV irradiance data: predictions for European and Southeast Asian sites.

    PubMed

    Kift, Richard; Webb, Ann R; Page, John; Rimmer, John; Janjai, Serm

    2006-01-01

    There are a range of UV models available, but one needs significant pre-existing knowledge and experience in order to be able to use them. In this article a comparatively simple Web-based model developed for the SoDa (Integration and Exploitation of Networked Solar Radiation Databases for Environment Monitoring) project is presented. This is a clear-sky model with modifications for cloud effects. To determine if the model produces realistic UV data, the output is compared with one-year sets of hourly measurements at sites in the United Kingdom and Thailand. The accuracy of the output depends on the input, but reasonable results were obtained with the use of the default database inputs, and improved when pyranometer instead of modeled data provided the global radiation input needed to estimate the UV. The average modeled values of UV for the UK site were found to be within 10% of measurements. For the tropical sites in Thailand the average modeled values were within 11-20% of measurements for the four sites with the use of the default SoDa database values. These results improved when pyranometer data and TOMS ozone data from 2002 replaced the standard SoDa database values, reducing the error range for all four sites to less than 15%.

  15. Student Loan Defaults in Texas: Yesterday, Today, and Tomorrow.

    ERIC Educational Resources Information Center

    Webster, Jeff; Meyer, Don; Arnold, Adreinne

    In 1988, the Texas student aid community addressed the issue of defaults in the guaranteed student loan program, creating a strategic default initiative. In June 1998, this same group of student aid officials met again to examine the current status of defaults and to share ideas on ways to prevent defaults. This report was intended as a resource…

  16. 34 CFR 674.5 - Federal Perkins Loan program cohort default rate and penalties.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... from an institution's cohort default rate calculation if the loan is— (A) Discharged due to death or... 34 Education 3 2011-07-01 2011-07-01 false Federal Perkins Loan program cohort default rate and... Provisions § 674.5 Federal Perkins Loan program cohort default rate and penalties. (a) Default penalty. If an...

  17. 34 CFR 674.5 - Federal Perkins Loan program cohort default rate and penalties.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... from an institution's cohort default rate calculation if the loan is— (A) Discharged due to death or... 34 Education 3 2014-07-01 2014-07-01 false Federal Perkins Loan program cohort default rate and... Provisions § 674.5 Federal Perkins Loan program cohort default rate and penalties. (a) Default penalty. If an...

  18. 34 CFR 674.5 - Federal Perkins Loan program cohort default rate and penalties.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... from an institution's cohort default rate calculation if the loan is— (A) Discharged due to death or... 34 Education 3 2010-07-01 2010-07-01 false Federal Perkins Loan program cohort default rate and... Provisions § 674.5 Federal Perkins Loan program cohort default rate and penalties. (a) Default penalty. If an...

  19. 34 CFR 674.5 - Federal Perkins Loan program cohort default rate and penalties.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... from an institution's cohort default rate calculation if the loan is— (A) Discharged due to death or... 34 Education 3 2013-07-01 2013-07-01 false Federal Perkins Loan program cohort default rate and... Provisions § 674.5 Federal Perkins Loan program cohort default rate and penalties. (a) Default penalty. If an...

  20. 34 CFR 674.5 - Federal Perkins Loan program cohort default rate and penalties.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... from an institution's cohort default rate calculation if the loan is— (A) Discharged due to death or... 34 Education 3 2012-07-01 2012-07-01 false Federal Perkins Loan program cohort default rate and... Provisions § 674.5 Federal Perkins Loan program cohort default rate and penalties. (a) Default penalty. If an...

  1. Uncertainty Quantification of Equilibrium Climate Sensitivity in CCSM4

    NASA Astrophysics Data System (ADS)

    Covey, C. C.; Lucas, D. D.; Tannahill, J.; Klein, R.

    2013-12-01

    Uncertainty in the global mean equilibrium surface warming due to doubled atmospheric CO2, as computed by a "slab ocean" configuration of the Community Climate System Model version 4 (CCSM4), is quantified using 1,039 perturbed-input-parameter simulations. The slab ocean configuration reduces the model's e-folding time when approaching an equilibrium state to ~5 years. This time is much less than for the full ocean configuration, consistent with the shallow depth of the upper well-mixed layer of the ocean represented by the "slab." Adoption of the slab ocean configuration requires the assumption of preset values for the convergence of ocean heat transport beneath the upper well-mixed layer. A standard procedure for choosing these values maximizes agreement with the full ocean version's simulation of the present-day climate when input parameters assume their default values. For each new set of input parameter values, we computed the change in ocean heat transport implied by a "Phase 1" model run in which sea surface temperatures and sea ice concentrations were set equal to present-day values. The resulting total ocean heat transport (= standard value + change implied by Phase 1 run) was then input into "Phase 2" slab ocean runs with varying values of atmospheric CO2. Our uncertainty estimate is based on Latin Hypercube sampling over expert-provided uncertainty ranges of N = 36 adjustable parameters in the atmosphere (CAM4) and sea ice (CICE4) components of CCSM4. Two-dimensional projections of our sampling distribution for the N(N-1)/2 possible pairs of input parameters indicate full coverage of the N-dimensional parameter space, including edges. We used a machine learning-based support vector regression (SVR) statistical model to estimate the probability density function (PDF) of equilibrium warming. This fitting procedure produces a PDF that is qualitatively consistent with the raw histogram of our CCSM4 results. Most of the values from the SVR statistical model are within ~0.1 K of the raw results, well below the inter-decile range inferred below. Independent validation of the fit indicates residual errors that are distributed about zero with a standard deviation of 0.17 K. Analysis of variance shows that the equilibrium warming in CCSM4 is mainly linear in parameter changes. Thus, in accord with the Central Limit Theorem of statistics, the PDF of the warming is approximately Gaussian, i.e. symmetric about its mean value (3.0 K). Since SVR allows for highly nonlinear fits, the symmetry is not an artifact of the fitting procedure. The 10-90 percentile range of the PDF is 2.6-3.4 K, consistent with earlier estimates from CCSM4 but narrower than estimates from other models, which sometimes produce a high-temperature asymmetric tail in the PDF. This work was performed under auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded by LLNL's Uncertainty Quantification Strategic Initiative (Laboratory Directed Research and Development Project 10-SI-013).
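
    The sampling-plus-emulator workflow can be sketched with standard tools (scipy's Latin Hypercube sampler and scikit-learn's SVR). The toy function below merely stands in for the climate model, and the parameter ranges are placeholders, not the expert-provided CCSM4 ranges:

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.svm import SVR

    n_params, n_runs = 36, 1039
    X = qmc.LatinHypercube(d=n_params, seed=0).random(n_runs)  # unit-cube design
    lo, hi = np.zeros(n_params), np.ones(n_params)             # placeholder ranges
    X = qmc.scale(X, lo, hi)

    def toy_model(x):            # stand-in for an expensive climate simulation
        return 3.0 + (x - 0.5) @ np.linspace(-0.5, 0.5, n_params)

    y = np.apply_along_axis(toy_model, 1, X)
    emulator = SVR(kernel="rbf", C=10.0).fit(X, y)             # nonlinear emulator

    # Dense re-sampling through the emulator approximates the output PDF.
    X_dense = qmc.scale(qmc.LatinHypercube(d=n_params, seed=1).random(20000), lo, hi)
    warming = emulator.predict(X_dense)
    print(np.percentile(warming, [10, 50, 90]))
    ```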

  2. A reduced-form intensity-based model under fuzzy environments

    NASA Astrophysics Data System (ADS)

    Wu, Liang; Zhuang, Yaming

    2015-05-01

    The external shocks and internal contagion are important sources of default events. However, the external shocks and the internal contagion effect on a company are not observed, so we cannot get an accurate size of the shocks. The information available to investors about the default process exhibits a certain fuzziness. Therefore, using randomness and fuzziness to study such problems as derivative pricing or default probability has practical value. But the idea of fuzzifying credit risk models is little exploited, especially in a reduced-form model. This paper proposes a new default intensity model with fuzziness, presents a fuzzy default probability and default loss rate, and puts them into default debt and credit derivative pricing. Finally, simulation analysis verifies the rationality of the model. Using fuzzy numbers and random analysis, one can consider more sources of uncertainty in the default process and investors' subjective judgment of financial markets under various degrees of fuzzy reliability, so as to broaden the scope of possible credit spreads.
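
    The crisp reduced-form building block being fuzzified is the survival probability Q(T) = exp(-∫₀ᵀ λ(t) dt). The sketch below brackets a constant intensity with an interval to show the induced band of default probabilities; it is a stand-in for the paper's fuzzy-number treatment, not its actual model:

    ```python
    import numpy as np

    def default_prob(lam, horizon):
        """Default probability under a constant default intensity lam."""
        return 1.0 - np.exp(-lam * horizon)

    lam_low, lam_mid, lam_high = 0.01, 0.02, 0.04   # illustrative intensities, 1/yr
    T = 5.0                                         # horizon, years
    band = [default_prob(l, T) for l in (lam_low, lam_mid, lam_high)]
    print(band)   # lower, modal, upper default probabilities over 5 years
    ```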

  3. Key role of coupling, delay, and noise in resting brain fluctuations

    PubMed Central

    Deco, Gustavo; Jirsa, Viktor; McIntosh, A. R.; Sporns, Olaf; Kötter, Rolf

    2009-01-01

    A growing body of neuroimaging research has documented that, in the absence of an explicit task, the brain shows temporally coherent activity. This so-called “resting state” activity or, more explicitly, the default-mode network, has been associated with daydreaming, free association, stream of consciousness, or inner rehearsal in humans, but similar patterns have also been found under anesthesia and in monkeys. Spatiotemporal activity patterns in the default-mode network are both complex and consistent, which raises the question whether they are the expression of an interesting cognitive architecture or the consequence of intrinsic network constraints. In numerical simulation, we studied the dynamics of a simplified cortical network using 38 noise-driven (Wilson–Cowan) oscillators, which in isolation remain just below their oscillatory threshold. Time delay coupling based on lengths and strengths of primate corticocortical pathways leads to the emergence of 2 sets of 40-Hz oscillators. The sets showed synchronization that was anticorrelated at <0.1 Hz across the sets, in line with a wide range of recent experimental observations. Systematic variation of conduction velocity, coupling strength, and noise level indicates a high sensitivity of emerging synchrony, as well as of the simulated blood oxygen level-dependent (BOLD) signal, to the underlying parameter values. Optimal sensitivity was observed around conduction velocities of 1–2 m/s, with very weak coupling between oscillators. An additional finding was that the optimal noise level had a characteristic scale, indicating the presence of stochastic resonance, which allows the network dynamics to respond with high sensitivity to changes in diffuse feedback activity. PMID:19497858
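
    A toy, two-unit version of the architecture described (noise-driven Wilson-Cowan units coupled with a conduction delay); all parameters are illustrative, not those of the 38-node primate network:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1e-4, 2.0                        # step and duration, seconds
    steps = int(T / dt)
    tau, c_ee, c_ei, c_ie, c_ii = 0.01, 16.0, 12.0, 15.0, 3.0
    g, delay_s, noise = 0.3, 0.01, 0.01      # coupling, 10 ms delay, noise SD
    d = int(delay_s / dt)

    S = lambda x: 1.0 / (1.0 + np.exp(-x + 4.0))   # sigmoid with threshold
    E = np.zeros((2, steps)); I = np.zeros((2, steps))
    for t in range(1, steps):
        for n in range(2):
            # delayed excitatory input from the other unit
            other = E[1 - n, t - 1 - d] if t - 1 - d >= 0 else 0.0
            inp_e = c_ee * E[n, t-1] - c_ei * I[n, t-1] + g * other
            inp_i = c_ie * E[n, t-1] - c_ii * I[n, t-1]
            E[n, t] = E[n, t-1] + dt / tau * (-E[n, t-1] + S(inp_e)) \
                      + noise * np.sqrt(dt) * rng.standard_normal()
            I[n, t] = I[n, t-1] + dt / tau * (-I[n, t-1] + S(inp_i))
    ```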

  4. Tier 1 Rice Model for Estimating Pesticide Concentrations in Rice Paddies

    EPA Science Inventory

    The Tier 1 Rice Model estimates screening level aquatic concentrations of pesticides in rice paddies. It is a simple pesticide soil:water partitioning model with default values for water volume, soil mass, and organic carbon. Pesticide degradation is not considered in the mode...
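
    The partitioning described reduces to C_w = M / (V_w + Kd · m_soil) with Kd = Koc · f_oc. A worked example with notional numbers, not the model's actual default values:

    ```python
    mass_g   = 100.0          # pesticide applied to the paddy, g
    v_water  = 1.0e5          # paddy water volume, L
    m_soil   = 1.3e5          # interacting soil mass, kg
    koc, foc = 500.0, 0.02    # organic-carbon partition coeff. (L/kg), OC fraction
    kd = koc * foc            # soil-water distribution coefficient, L/kg
    c_water = mass_g * 1e3 / (v_water + kd * m_soil)   # dissolved conc., mg/L
    print(f"C_w = {c_water:.3f} mg/L")                 # ~0.071 with these inputs
    ```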

  5. Default Trends in Major Postsecondary Education Sectors.

    ERIC Educational Resources Information Center

    Merisotis, Jamie P.

    1988-01-01

    Information on GSL defaults in five states is reviewed: California, Illinois, Massachusetts, New Jersey, and Pennsylvania. Default rates are defined and levels of default are examined using a variety of analytical methods. (Author/MLW)

  6. UFO (UnFold Operator) default data format

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kissel, L.; Biggs, F.; Marking, T.R.

    The default format for the storage of x,y data for use with the UFO code is described. The format assumes that the data stored in a file is a matrix of values; two columns of this matrix are selected to define a function of the form y = f(x). This format is specifically designed to allow for easy importation of data obtained from other sources, or easy entry of data using a text editor, with a minimum of reformatting. This format is flexible and extensible through the use of inline directives stored in the optional header of the file. A special extension of the format implements encoded data, which significantly reduces the storage required as compared with the unencoded form. UFO supports several extensions to the file specification that implement execute-time operations, such as transformation of the x and/or y values, selection of specific columns of the matrix for association with the x and y values, input of data directly from other formats (e.g., DAMP and PFF), and a simple type of library-structured file format. Several examples of the use of the format are given.
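
    A loose reader for a file of this general shape (an optional header of directive lines, then a whitespace-separated numeric matrix from which two columns define y = f(x)); the "#" directive prefix and the function name are our assumptions, not documented parts of the UFO format:

    ```python
    import numpy as np

    def read_xy(path, x_col=0, y_col=1):
        """Return (header directives, x column, y column) from a matrix file."""
        directives, rows = [], []
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                if line.startswith("#"):        # assumed header-directive marker
                    directives.append(line[1:].strip())
                else:
                    rows.append([float(v) for v in line.split()])
        matrix = np.asarray(rows)
        return directives, matrix[:, x_col], matrix[:, y_col]
    ```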

  7. Treatment default amongst patients with tuberculosis in urban Morocco: predicting and explaining default and post-default sputum smear and drug susceptibility results.

    PubMed

    Cherkaoui, Imad; Sabouni, Radia; Ghali, Iraqi; Kizub, Darya; Billioux, Alexander C; Bennani, Kenza; Bourkadi, Jamal Eddine; Benmamoun, Abderrahmane; Lahlou, Ouafae; Aouad, Rajae El; Dooley, Kelly E

    2014-01-01

    Setting: Public tuberculosis (TB) clinics in urban Morocco. Objectives: To explore risk factors for TB treatment default and develop a prediction tool, and to assess the consequences of default, specifically the risk of transmission or development of drug resistance. Design: Case-control study comparing patients who defaulted from TB treatment and patients who completed it, using quantitative methods and open-ended questions. Results were interpreted in light of health professionals' perspectives from a parallel study. A predictive model and simple tool to identify patients at high risk of default were developed. Sputum from cases with pulmonary TB was collected for smear and drug susceptibility testing. Results: 91 cases and 186 controls were enrolled. Independent risk factors for default included current smoking, retreatment, work interference with adherence, daily directly observed therapy, side effects, quick symptom resolution, and not knowing one's treatment duration. Age >50 years, never smoking, and having friends who knew one's diagnosis were protective. A simple scoring tool incorporating these factors was 82.4% sensitive and 87.6% specific for predicting default in this population. Clinicians and patients described additional contributors to default and suggested locally-relevant intervention targets. Among 89 cases with pulmonary TB, 71% had sputum that was smear positive for TB. Drug resistance was rare. Conclusions: The causes of default from TB treatment were explored through synthesis of qualitative and quantitative data from patients and health professionals. A scoring tool with high sensitivity and specificity to predict default was developed. Prospective evaluation of this tool coupled with targeted interventions based on our findings is warranted. Of note, the risk of TB transmission to others from patients who default treatment is likely to be high. The commonly-feared risk of drug resistance, though, may be low; a larger study is required to confirm these findings.
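
    The shape of such a bedside scoring tool can be sketched as a weighted checklist over the factors listed above; the point values and cutoff below are invented placeholders, not the study's fitted model:

    ```python
    RISK_POINTS = {"current_smoker": 2, "retreatment": 2, "work_interference": 1,
                   "daily_dot": 1, "side_effects": 1, "quick_resolution": 1,
                   "unknown_duration": 1}
    PROTECTIVE_POINTS = {"age_over_50": 1, "never_smoked": 1, "friends_know": 1}
    CUTOFF = 3   # placeholder decision threshold

    def default_risk(patient):
        """Return (score, flagged-as-high-risk) for a dict of boolean factors."""
        score = sum(p for k, p in RISK_POINTS.items() if patient.get(k))
        score -= sum(p for k, p in PROTECTIVE_POINTS.items() if patient.get(k))
        return score, score >= CUTOFF

    print(default_risk({"current_smoker": True, "retreatment": True,
                        "daily_dot": True, "friends_know": True}))  # (4, True)
    ```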

  8. Determinants of CD4 cell count change and time-to default from HAART; a comparison of separate and joint models.

    PubMed

    Tegegne, Awoke Seyoum; Ndlovu, Principal; Zewotir, Temesgen

    2018-04-27

    HIV has the most serious effects in Sub-Saharan African countries as compared to countries in other parts of the world. As one of these countries, Ethiopia has been affected significantly by the disease, and the burden of the disease is worst in the Amhara Region, one of the eleven regions of the country. Patients defaulting or dropping out of treatment plays a significant role in treatment failure. The current research was conducted with the objective of comparing the performance of the joint and the separate modelling approaches in determining important factors that affect HIV patients' longitudinal CD4 cell count change and time to default from treatment. Longitudinal data were obtained from the records of 792 adult HIV patients at Felege-Hiwot Teaching and Specialized Hospital in Ethiopia. Two alternative approaches, namely separate and joint modelling data analyses, were conducted in the current study. Joint modelling was conducted for an analysis of the change of CD4 cell count and the time to default from treatment. In the joint model, a generalized linear mixed effects model and a Weibull survival sub-model were combined for the repeated measures of the CD4 cell count change and the number of follow-ups for which patients remain in treatment. Finally, the two models were linked through their shared unobserved random effects using a shared parameter model. Both the separate and the joint modelling approaches revealed consistent results. However, the joint modelling approach was more parsimonious and fitted the given data better than the separate one. Age, baseline CD4 cell count, marital status, sex, ownership of a cell phone, adherence to HAART, disclosure of the disease and the number of follow-ups were important predictors for both the fluctuation of CD4 cell count and the time to default from treatment. The inclusion of patient-specific variations in the analyses of the two outcomes improved the model significantly. Certain groups of patients were identified in the current investigation; the groups identified had high fluctuation in CD4 cell count and defaulted from HAART without any convincing reasons. Such patients need intensive intervention to adhere to the prescribed medication.
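
    In symbols, a minimal shared-parameter formulation of the kind described, with a Weibull baseline hazard (notation ours, not the paper's):

    ```latex
    % Longitudinal (GLMM) submodel and Weibull survival submodel linked by a
    % shared random intercept b_i:
    y_{ij} = x_{ij}^{\top}\beta + b_i + \varepsilon_{ij},
        \qquad b_i \sim N(0,\sigma_b^{2}),\quad \varepsilon_{ij} \sim N(0,\sigma^{2})
    h_i(t) = \lambda\,\rho\,t^{\rho-1}\exp\!\bigl(w_i^{\top}\gamma + \alpha\,b_i\bigr)
    ```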

  9. Creative constraints: Brain activity and network dynamics underlying semantic interference during idea production.

    PubMed

    Beaty, Roger E; Christensen, Alexander P; Benedek, Mathias; Silvia, Paul J; Schacter, Daniel L

    2017-03-01

    Functional neuroimaging research has recently revealed brain network interactions during performance on creative thinking tasks, particularly among regions of the default and executive control networks, but the cognitive mechanisms related to these interactions remain poorly understood. Here we test the hypothesis that the executive control network can interact with the default network to inhibit salient conceptual knowledge (i.e., pre-potent responses) elicited from memory during creative idea production. Participants studied common noun-verb pairs and were given a cued-recall test with corrective feedback to strengthen the paired association in memory. They then completed a verb generation task that presented either a previously studied noun (high-constraint) or an unstudied noun (low-constraint), and were asked to "think creatively" while searching for a novel verb to relate to the presented noun. Latent Semantic Analysis of verbal responses showed decreased semantic distance values in the high-constraint (i.e., interference) condition, which corresponded to increased neural activity within regions of the default (posterior cingulate cortex and bilateral angular gyri), salience (right anterior insula), and executive control (left dorsolateral prefrontal cortex) networks. Independent component analysis of intrinsic functional connectivity networks extended this finding by revealing differential interactions among these large-scale networks across the task conditions. The results suggest that interactions between the default and executive control networks underlie response inhibition during constrained idea production, providing insight into specific neurocognitive mechanisms supporting creative cognition. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Time-saving design of experiment protocol for optimization of LC-MS data processing in metabolomic approaches.

    PubMed

    Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine

    2013-08-06

    We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks using design of experiment (DoE) approaches including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, which is based on evaluation of the linear response to a dilution series, was used as a parameter for the assessment of data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, as compared with the default setting, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized by using CCD for further improvement. The approach combining optimal parameter setting and the threshold method improved the reliability index about 9.5 times for a standard mixture and 14.5 times for human urine data, which required a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
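
    The screen-then-optimize workflow can be sketched with generic DoE tools. A minimal example, assuming the third-party pyDOE2 package and a placeholder objective in place of the paper's actual XCMS run and reliability index:

        import numpy as np
        from pyDOE2 import pbdesign, ccdesign  # third-party DoE package (assumed installed)

        def reliability_index(params):
            # Placeholder objective; in practice this would run XCMS and score
            # the linear response of features to a dilution series.
            return -np.sum((params - 0.3) ** 2)

        # 1) Plackett-Burman screening over 6 coded factors (-1/+1 levels).
        screen = pbdesign(6)
        responses = np.array([reliability_index(row) for row in screen])
        # Rank factors by the magnitude of their estimated main effects.
        main_effects = np.abs(screen.T @ responses) / len(screen)
        top2 = np.argsort(main_effects)[-2:]

        # 2) Central composite design on the two most influential factors.
        cc = ccdesign(2, center=(2, 2), face='ccf')
        scores = [reliability_index(row) for row in cc]
        best = cc[int(np.argmax(scores))]
        print("significant factors:", top2, "best coded setting:", best)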

  11. Subtle increases in interletter spacing facilitate the encoding of words during normal reading.

    PubMed

    Perea, Manuel; Gomez, Pablo

    2012-01-01

    Several recent studies have revealed that words presented with a small increase in interletter spacing are identified faster than words presented with the default interletter spacing (i.e., w a t e r faster than water). Modeling work has shown that this advantage occurs at an early encoding level. Given the implications of this finding for the ease of reading in the new digital era, here we examined whether the beneficial effect of small increases in interletter spacing can be generalized to a normal reading situation. We conducted an experiment in which the participant's eyes were monitored when reading sentences varying in interletter spacing: i) sentences presented with the default (0.0) interletter spacing; ii) sentences presented with a +1.0 interletter spacing; and iii) sentences presented with a +1.5 interletter spacing. Results showed shorter fixation duration times as an inverse function of interletter spacing (i.e., fixation durations were briefest with +1.5 spacing and longest with the default spacing). Subtle increases in interletter spacing facilitate the encoding of the fixated word during normal reading. Thus, interletter spacing is a parameter that may affect the ease of reading, and it could be adjustable in future implementations of e-book readers.

  12. Subtle Increases in Interletter Spacing Facilitate the Encoding of Words during Normal Reading

    PubMed Central

    Perea, Manuel; Gomez, Pablo

    2012-01-01

    Background Several recent studies have revealed that words presented with a small increase in interletter spacing are identified faster than words presented with the default interletter spacing (i.e., w a t e r faster than water). Modeling work has shown that this advantage occurs at an early encoding level. Given the implications of this finding for the ease of reading in the new digital era, here we examined whether the beneficial effect of small increases in interletter spacing can be generalized to a normal reading situation. Methodology We conducted an experiment in which the participant’s eyes were monitored when reading sentences varying in interletter spacing: i) sentences presented with the default (0.0) interletter spacing; ii) sentences presented with a +1.0 interletter spacing; and iii) sentences presented with a +1.5 interletter spacing. Principal Findings Results showed shorter fixation duration times as an inverse function of interletter spacing (i.e., fixation durations were briefest with +1.5 spacing and longest with the default spacing). Conclusions Subtle increases in interletter spacing facilitate the encoding of the fixated word during normal reading. Thus, interletter spacing is a parameter that may affect the ease of reading, and it could be adjustable in future implementations of e-book readers. PMID:23082178

  13. In Debt and in the Dark: It's Time for Better Information on Student Loan Defaults. Charts You Can Trust

    ERIC Educational Resources Information Center

    Gillen, Andrew

    2013-01-01

    Student college loan default rates have nearly doubled in recent years. The three-year default rate exceeds 13 percent nationally. Tracking and reporting default rates is a crucial means of monitoring how well higher education dollars are spent. Yet, the way default data is gathered, measured, and reported by the federal government clouds…

  14. Risk Factors Associated with Default from Multi- and Extensively Drug-Resistant Tuberculosis Treatment, Uzbekistan: A Retrospective Cohort Analysis

    PubMed Central

    Lalor, Maeve K.; Greig, Jane; Allamuratova, Sholpan; Althomsons, Sandy; Tigay, Zinaida; Khaemraev, Atadjan; Braker, Kai; Telnov, Oleksander; du Cros, Philipp

    2013-01-01

    Background The Médecins Sans Frontières project of Uzbekistan has provided multidrug-resistant tuberculosis treatment in the Karakalpakstan region since 2003. Rates of default from treatment have been high, despite psychosocial support, increasing particularly since programme scale-up in 2007. We aimed to determine factors associated with default in multi- and extensively drug-resistant tuberculosis patients who started treatment between 2003 and 2008 and thus had finished approximately 2 years of treatment by the end of 2010. Methods A retrospective cohort analysis of multi- and extensively drug-resistant tuberculosis patients enrolled in treatment between 2003 and 2008 compared baseline demographic characteristics and possible risk factors for default. Default was defined as missing ≥60 consecutive days of treatment (all drugs). Data were routinely collected during treatment and entered in a database. Potential risk factors for default were assessed in univariate analysis using chi-square test and in multivariate analysis with logistic regression. Results 20% (142/710) of patients defaulted after a median of 6 months treatment (IQR 2.6–9.9). Factors associated with default included severity of resistance patterns (pre-extensively drug-resistant/extensively drug-resistant tuberculosis adjusted odds ratio 0.52, 95%CI: 0.31–0.86), previous default (2.38, 1.09–5.24) and age >45 years (1.77, 1.10–2.87). The default rate was 14% (42/294) for patients enrolled 2003–2006 and 24% (100/416) for 2007–2008 enrolments (p = 0.001). Conclusions Default from treatment was high and increased with programme scale-up. It is essential to ensure that scale-up of treatment is accompanied by scale-up of staff and patient support. A successful first course of tuberculosis treatment is important; patients who had previously defaulted were at increased risk of default and death. The protective effect of severe resistance profiles suggests that understanding disease severity or fear may motivate against default. Targeted health education and support for at-risk patients after 5 months of treatment, when many begin to feel better, may decrease default. PMID:24223148
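
    The adjusted odds ratios reported here come from a standard multivariate logistic model. As a generic sketch of how such ratios and their confidence intervals are obtained, assuming statsmodels and invented variable names rather than the study's database:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Illustrative data; the real analysis used the programme's patient records.
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "default":          rng.integers(0, 2, 200),
            "previous_default": rng.integers(0, 2, 200),
            "age_over_45":      rng.integers(0, 2, 200),
        })
        X = sm.add_constant(df[["previous_default", "age_over_45"]])
        fit = sm.Logit(df["default"], X).fit(disp=0)

        # Exponentiated coefficients give adjusted odds ratios with 95% CIs.
        or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
        or_table.columns = ["aOR", "2.5%", "97.5%"]
        print(or_table)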

  15. Determinants of Default from Tuberculosis Treatment among Patients with Drug-Susceptible Tuberculosis in Karachi, Pakistan: A Mixed Methods Study.

    PubMed

    Chida, Natasha; Ansari, Zara; Hussain, Hamidah; Jaswal, Maria; Symes, Stephen; Khan, Aamir J; Mohammed, Shama

    2015-01-01

    Non-adherence to tuberculosis therapy can lead to drug resistance, prolonged infectiousness, and death; therefore, understanding what causes treatment default is important. Pakistan has one of the highest burdens of tuberculosis in the world, yet there have been no qualitative studies in Pakistan that have specifically examined why default occurs. We conducted a mixed methods study at a tuberculosis clinic in Karachi to understand why patients with drug-susceptible tuberculosis default from treatment, and to identify factors associated with default. Patients attending this clinic pick up medications weekly and undergo family-supported directly observed therapy. In-depth interviews were administered to 21 patients who had defaulted. We also compared patients who defaulted with those who were cured, had completed, or had failed treatment in 2013. Qualitative analyses showed the most common reasons for default were the financial burden of treatment, and medication side effects and beliefs. The influence of finances on other causes of default was also prominent, as was concern about the effect of treatment on family members. In quantitative analysis, of 2120 patients, 301 (14.2%) defaulted. Univariate analysis found that male gender (OR: 1.34, 95% CI: 1.04-1.71), being 35-59 years of age (OR: 1.54, 95% CI: 1.14-2.08), or being 60 years of age or older (OR: 1.84, 95% CI: 1.17-2.88) were associated with default. After adjusting for gender, disease site, and patient category, being 35-59 years of age (aOR: 1.49, 95% CI: 1.10-2.03) or 60 years of age or older (aOR: 1.76, 95% CI: 1.12-2.77) were associated with default. In multivariate analysis age was the only variable associated with default. This lack of identifiable risk factors and our qualitative findings imply that default is complex and often due to extrinsic and medication-related factors. More tolerable medications, improved side effect management, and innovative cost-reduction measures are needed to reduce default from tuberculosis treatment.

  16. Determinants of Default from Tuberculosis Treatment among Patients with Drug-Susceptible Tuberculosis in Karachi, Pakistan: A Mixed Methods Study

    PubMed Central

    Chida, Natasha; Ansari, Zara; Hussain, Hamidah; Jaswal, Maria; Symes, Stephen; Khan, Aamir J.; Mohammed, Shama

    2015-01-01

    Purpose Non-adherence to tuberculosis therapy can lead to drug resistance, prolonged infectiousness, and death; therefore, understanding what causes treatment default is important. Pakistan has one of the highest burdens of tuberculosis in the world, yet there have been no qualitative studies in Pakistan that have specifically examined why default occurs. We conducted a mixed methods study at a tuberculosis clinic in Karachi to understand why patients with drug-susceptible tuberculosis default from treatment, and to identify factors associated with default. Patients attending this clinic pick up medications weekly and undergo family-supported directly observed therapy. Methods In-depth interviews were administered to 21 patients who had defaulted. We also compared patients who defaulted with those who were cured, had completed, or had failed treatment in 2013. Results Qualitative analyses showed the most common reasons for default were the financial burden of treatment, and medication side effects and beliefs. The influence of finances on other causes of default was also prominent, as was concern about the effect of treatment on family members. In quantitative analysis, of 2120 patients, 301 (14.2%) defaulted. Univariate analysis found that male gender (OR: 1.34, 95% CI: 1.04–1.71), being 35–59 years of age (OR: 1.54, 95% CI: 1.14–2.08), or being 60 years of age or older (OR: 1.84, 95% CI: 1.17–2.88) were associated with default. After adjusting for gender, disease site, and patient category, being 35–59 years of age (aOR: 1.49, 95% CI: 1.10–2.03) or 60 years of age or older (aOR: 1.76, 95% CI: 1.12–2.77) were associated with default. Conclusions In multivariate analysis age was the only variable associated with default. This lack of identifiable risk factors and our qualitative findings imply that default is complex and often due to extrinsic and medication-related factors. More tolerable medications, improved side effect management, and innovative cost-reduction measures are needed to reduce default from tuberculosis treatment. PMID:26562787

  17. Risk factors associated with default from multi- and extensively drug-resistant tuberculosis treatment, Uzbekistan: a retrospective cohort analysis.

    PubMed

    Lalor, Maeve K; Greig, Jane; Allamuratova, Sholpan; Althomsons, Sandy; Tigay, Zinaida; Khaemraev, Atadjan; Braker, Kai; Telnov, Oleksander; du Cros, Philipp

    2013-01-01

    The Médecins Sans Frontières project of Uzbekistan has provided multidrug-resistant tuberculosis treatment in the Karakalpakstan region since 2003. Rates of default from treatment have been high, despite psychosocial support, increasing particularly since programme scale-up in 2007. We aimed to determine factors associated with default in multi- and extensively drug-resistant tuberculosis patients who started treatment between 2003 and 2008 and thus had finished approximately 2 years of treatment by the end of 2010. A retrospective cohort analysis of multi- and extensively drug-resistant tuberculosis patients enrolled in treatment between 2003 and 2008 compared baseline demographic characteristics and possible risk factors for default. Default was defined as missing ≥60 consecutive days of treatment (all drugs). Data were routinely collected during treatment and entered in a database. Potential risk factors for default were assessed in univariate analysis using chi-square test and in multivariate analysis with logistic regression. 20% (142/710) of patients defaulted after a median of 6 months treatment (IQR 2.6-9.9). Factors associated with default included severity of resistance patterns (pre-extensively drug-resistant/extensively drug-resistant tuberculosis adjusted odds ratio 0.52, 95%CI: 0.31-0.86), previous default (2.38, 1.09-5.24) and age >45 years (1.77, 1.10-2.87). The default rate was 14% (42/294) for patients enrolled 2003-2006 and 24% (100/416) for 2007-2008 enrolments (p = 0.001). Default from treatment was high and increased with programme scale-up. It is essential to ensure that scale-up of treatment is accompanied by scale-up of staff and patient support. A successful first course of tuberculosis treatment is important; patients who had previously defaulted were at increased risk of default and death. The protective effect of severe resistance profiles suggests that understanding disease severity or fear may motivate against default. Targeted health education and support for at-risk patients after 5 months of treatment, when many begin to feel better, may decrease default.

  18. Improving the Non-Hydrostatic Numerical Dust Model by Integrating Soil Moisture and Greenness Vegetation Fraction Data with Different Spatiotemporal Resolutions.

    PubMed

    Yu, Manzhu; Yang, Chaowei

    2016-01-01

    Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using four evaluation metrics (mean bias error, root mean square error, correlation coefficient and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of the soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces the model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of the reproduced dust storm. The adjustment of both parameters enables NMM-dust to capture the spatial distribution of dust storms, as well as reproducing more accurate dust concentrations.
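
    The four evaluation metrics are simple aggregate comparisons of modelled (M) and observed (O) values. A minimal sketch; the fractional gross error shown uses the common 2|M-O|/(M+O) convention, which may differ in detail from the paper's definition:

        import numpy as np

        def evaluate(model: np.ndarray, obs: np.ndarray) -> dict:
            """Mean bias error, RMSE, Pearson correlation, fractional gross error."""
            diff = model - obs
            return {
                "MBE":  float(np.mean(diff)),
                "RMSE": float(np.sqrt(np.mean(diff ** 2))),
                "r":    float(np.corrcoef(model, obs)[0, 1]),
                "FGE":  float(np.mean(2.0 * np.abs(diff) / (model + obs))),
            }

        # Example with made-up AOD values.
        print(evaluate(np.array([0.3, 0.5, 0.7]), np.array([0.25, 0.55, 0.9])))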

  19. Toward Ada Verification: A Collection of Relevant Topics

    DTIC Science & Technology

    1986-06-01

    presumably it is this: if there are no default values, a programming error which results in failure to initialize a variable is more likely to advertise ... disadvantages to using AVID. First, TDL is a more complicated interface than first-order logic (as used in the CSG). Second, AVID is unsupported and

  20. 40 CFR Table 4 to Subpart Oooo of... - Default Organic HAP Mass Fraction for Solvents and Solvent Blends

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... fraction values in the following table for solvent blends for which you do not have test data or... spirits 64742-89-6 0.15 Toluene. 14. Low aromatic white spirit 64742-82-1 0 None. 15. Mineral spirits...

  1. 40 CFR Table 3 to Subpart IIIi of... - Default Organic HAP Mass Fraction for Solvents and Solvent Blends

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... fraction values in the following table for solvent blends for which you do not have test data or... spirits 64742-89-6 0.15 Toluene. 14. Low aromatic white spirit 64742-82-1 0 None. 15. Mineral spirits...

  2. 40 CFR Table 4 to Subpart Oooo of... - Default Organic HAP Mass Fraction for Solvents and Solvent Blends

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... fraction values in the following table for solvent blends for which you do not have test data or... spirits 64742-89-6 0.15 Toluene. 14. Low aromatic white spirit 64742-82-1 0 None. 15. Mineral spirits...

  3. 40 CFR Table 3 to Subpart IIIi of... - Default Organic HAP Mass Fraction for Solvents and Solvent Blends

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... fraction values in the following table for solvent blends for which you do not have test data or... spirits 64742-89-6 0.15 Toluene. 14. Low aromatic white spirit 64742-82-1 0 None. 15. Mineral spirits...

  4. Influence of parameter settings in voxel-based morphometry 8. Using DARTEL and region-of-interest on reproducibility in gray matter volumetry.

    PubMed

    Goto, M; Abe, O; Aoki, S; Hayashi, N; Miyati, T; Takao, H; Matsuda, H; Yamashita, F; Iwatsubo, T; Mori, H; Kunimatsu, A; Ino, K; Yano, K; Ohtomo, K

    2015-01-01

    To investigate whether reproducibility of gray matter volumetry is influenced by parameter settings for VBM 8 using Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL) with region-of-interest (ROI) analyses. We prepared three-dimensional T1-weighted magnetic resonance images (3D-T1WIs) of 21 healthy subjects. All subjects were imaged with each of five MRI systems. Voxel-based morphometry 8 (VBM 8) and WFU PickAtlas software were used for gray matter volumetry. The bilateral ROI labels used were those provided as default settings with the software: Frontal Lobe, Hippocampus, Occipital Lobe, Orbital Gyrus, Parietal Lobe, Putamen, and Temporal Lobe. All 3D-T1WIs were segmented to gray matter with six parameters of VBM 8, with each parameter having between three and eight selectable levels. Reproducibility was evaluated as the standard deviation (mm³) of measured values for the five MRI systems. Reproducibility was influenced by 'Bias regularization (BiasR)', 'Bias FWHM', and 'De-noising filter' settings, but not by 'MRF weighting', 'Sampling distance', or 'Warping regularization' settings. Reproducibility in BiasR was influenced by ROI. Superior reproducibility was observed in Frontal Lobe with the BiasR1 setting, and in Hippocampus, Parietal Lobe, and Putamen with the BiasR3*, BiasR1, and BiasR5 settings, respectively. Reproducibility of gray matter volumetry was influenced by parameter settings in VBM 8 using DARTEL and ROI. In multi-center studies, the use of appropriate settings in VBM 8 with DARTEL results in reduced scanner effect.

  5. Default Network Modulation and Large-Scale Network Interactivity in Healthy Young and Old Adults

    PubMed Central

    Schacter, Daniel L.

    2012-01-01

    We investigated age-related changes in default, attention, and control network activity and their interactions in young and old adults. Brain activity during autobiographical and visuospatial planning was assessed using multivariate analysis and with intrinsic connectivity networks as regions of interest. In both groups, autobiographical planning engaged the default network while visuospatial planning engaged the attention network, consistent with a competition between the domains of internalized and externalized cognition. The control network was engaged for both planning tasks. In young subjects, the control network coupled with the default network during autobiographical planning and with the attention network during visuospatial planning. In old subjects, default-to-control network coupling was observed during both planning tasks, and old adults failed to deactivate the default network during visuospatial planning. This failure is not indicative of default network dysfunction per se, evidenced by default network engagement during autobiographical planning. Rather, a failure to modulate the default network in old adults is indicative of a lower degree of flexible network interactivity and reduced dynamic range of network modulation to changing task demands. PMID:22128194

  6. Default network connectivity as a vulnerability marker for obsessive compulsive disorder.

    PubMed

    Peng, Z W; Xu, T; He, Q H; Shi, C Z; Wei, Z; Miao, G D; Jing, J; Lim, K O; Zuo, X N; Chan, R C K

    2014-05-01

    Aberrant functional connectivity within the default network is generally assumed to be involved in the pathophysiology of obsessive compulsive disorder (OCD); however, the genetic risk of default network connectivity in OCD remains largely unknown. Here, we systematically investigated default network connectivity in 15 OCD patients, 15 paired unaffected siblings and 28 healthy controls. We sought to examine the profiles of default network connectivity in OCD patients and their siblings, exploring the correlation between abnormal default network connectivity and genetic risk for this population. Compared with healthy controls, OCD patients exhibited reduced strength of default network functional connectivity with the posterior cingulate cortex (PCC), and increased functional connectivity in the right inferior frontal lobe, insula, superior parietal cortex and superior temporal cortex, while their unaffected first-degree siblings only showed reduced local connectivity in the PCC. These findings suggest that the disruptions of default network functional connectivity might be associated with family history of OCD. The decreased default network connectivity in both OCD patients and their unaffected siblings may serve as a potential marker of OCD.

  7. Schooling and variation in the COMT gene: the devil is in the details.

    PubMed

    Campbell, Daniel; Bick, Johanna; Yrigollen, Carolyn M; Lee, Maria; Joseph, Antony; Chang, Joseph T; Grigorenko, Elena L

    2013-10-01

    Schooling is considered one of the major contributors to the development of intelligence within societies and individuals. Genetic variation might modulate the impact of schooling and explain, at least partially, the presence of individual differences in classrooms. We studied a sample of 1,502 children (mean age = 11.7 years) from Zambia. Approximately 57% of these children were enrolled in school, and the rest were not. To quantify genetic variation, we investigated a number of common polymorphisms in the catechol-O-methyltransferase (COMT) gene that controls the production of the protein thought to account for >60% of the dopamine degradation in the prefrontal cortex. Haplotype analyses generated results ranging from the presence to absence of significant interactions between a number of COMT haplotypes and indicators of schooling (i.e., in- vs. out-of-school and grade completed) in the prediction of nonverbal intelligence, depending on the parameter specification. However, an investigation of the distribution of corresponding p-values suggested that these positive results were false. Convincing evidence that the variation in the COMT gene is associated with individual differences in nonverbal intelligence either directly or through interactions with schooling was not found. p-values produced by the method of testing for haplotype effects employed here may be sensitive to parameter settings, invalid under default settings, and should be checked for validity through simulation. © 2013 The Authors. Journal of Child Psychology and Psychiatry © 2013 Association for Child and Adolescent Mental Health.
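
    The closing caution, that p-values should be checked by simulation, amounts to verifying that the test is calibrated under the null hypothesis. A generic sketch with an illustrative test statistic (not the haplotype method used in the paper):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        pvals = []
        for _ in range(2000):
            # Simulate data with NO true effect, then run the test of interest.
            x = rng.integers(0, 2, 300)   # e.g. a binary genotype indicator
            y = rng.normal(size=300)      # outcome independent of x
            pvals.append(stats.ttest_ind(y[x == 0], y[x == 1]).pvalue)

        # For a valid test, null p-values are uniform on [0, 1]:
        # about 5% should fall below 0.05.
        print("empirical type-I error at 0.05:", np.mean(np.array(pvals) < 0.05))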

  8. Integrating Representation Learning and Skill Learning in a Human-Like Intelligent Agent

    DTIC Science & Technology

    2013-06-21

    of 10 full-year controlled studies [Koedinger and MacLaren, 1997]. Nevertheless, the quality of the personalized instructions depends largely on the ... relation among its children. The value of the direction field can be d, h, or v. d is the default value set for grammar rules that have only one child ... nearly comparable performance while significantly reducing the amount of knowledge engineering effort needed. 6.3 Experimental Study on

  9. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH4 predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
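
    A stripped-down version of the calibrate-then-quantify-uncertainty step, assuming scipy and a toy one-parameter-pair decay model standing in for the full activated sludge model (parameter names are illustrative):

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0.0, 10.0, 40)

        def model(theta, t):
            # Toy first-order decay in place of an ASM sub-model.
            k, c0 = theta
            return c0 * np.exp(-k * t)

        rng = np.random.default_rng(1)
        obs = model((0.4, 5.0), t) + rng.normal(0.0, 0.1, t.size)

        res = least_squares(lambda th: model(th, t) - obs, x0=(1.0, 1.0))

        # Approximate 95% confidence intervals from the Jacobian at the optimum.
        dof = t.size - res.x.size
        s2 = 2.0 * res.cost / dof                      # residual variance
        cov = s2 * np.linalg.inv(res.jac.T @ res.jac)  # linearised covariance
        ci = 1.96 * np.sqrt(np.diag(cov))
        for name, val, half in zip(("k", "c0"), res.x, ci):
            print(f"{name} = {val:.3f} +/- {half:.3f}")

    Lower-frequency calibration data (fewer points in t) widens these intervals, which is the effect the study quantifies.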

  10. Developing a java android application of KMV-Merton default rate model

    NASA Astrophysics Data System (ADS)

    Yusof, Norliza Muhamad; Anuar, Aini Hayati; Isa, Norsyaheeda Natasha; Zulkafli, Sharifah Nursyuhada Syed; Sapini, Muhamad Luqman

    2017-11-01

    This paper presents a developed Java Android application for the KMV-Merton model in predicting the default rate of a firm. Predicting the default rate is essential in the risk management area, as default risk can be immediately transmitted from one entity to another; this is the reason default risk is known as a global risk. Although several efforts, instruments and methods are used to manage the risk, they are said to be insufficient. To the best of our knowledge, there has been limited innovation in developing the default risk mathematical model into a mobile application. Therefore, through this study, default risk is predicted quantitatively using the KMV-Merton model. The KMV-Merton model has been integrated in the form of a Java program using the Android Studio software. The developed Java Android application is tested by predicting the levels of default risk of three different rated companies. It is found that the levels of default risk are equivalent to the ratings of the respective companies. This shows that the default rate predicted by the KMV-Merton model using the developed Java Android application can be a significant tool in the risk management field. The developed Java Android application grants users an alternative way to predict the level of default risk with fewer procedures.
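
    The core of the KMV-Merton calculation is the distance to default and the corresponding normal-tail probability. A minimal sketch (in Python rather than Java, for consistency with the other examples here; the input values are illustrative):

        import math
        from statistics import NormalDist

        def default_probability(V: float, D: float, mu: float,
                                sigma: float, T: float = 1.0) -> float:
            """Merton-style default probability.

            V: market value of assets, D: face value of debt (default point),
            mu: expected asset return, sigma: asset volatility, T: horizon (years).
            """
            dd = (math.log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
            return NormalDist().cdf(-dd)  # probability assets fall below the debt level

        # A firm with assets worth 1.6x its debt and 25% asset volatility:
        print(f"{default_probability(V=160.0, D=100.0, mu=0.08, sigma=0.25):.4f}")

    A lower distance to default (dd) maps to a higher default probability, which is how the application ranks firms against their credit ratings.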

  11. Prevalence and characteristics associated with default of treatment and follow-up in patients with cancer.

    PubMed

    Chan, C M H; Wan Ahmad, W A; Md Yusof, M; Ho, G F; Krupat, E

    2015-11-01

    Defaulting is an important issue across all medical specialties, but much more so in cancer as delayed or incomplete treatment has been shown to result in worse clinical outcomes such as treatment resistance, disease progression as well as lower survival. Our objective was to identify psychosocial variables and characteristics associated with default among cancer patients. A total of 467 consecutive adult cancer patients attending the oncology clinic at a single academic medical centre completed the Hospital Anxiety and Depression Scale and reported their preference for psychological support at baseline, 4-6 weeks and 12-18 months follow-up. Default was defined as refusal, delay or discontinuation of treatment or visit, despite the ability to do so. A total of 159 of 467 (34.0%) cancer patients were defaulters. Of these 159 defaulters, 89 (56.0%) desired psychological support, compared to only 13 (4.2%) of 308 non-defaulters. Using a logistic regression, patients who were defaulters had 52 times higher odds (P = 0.001; 95% confidence interval 20.61-134.47) of desiring psychological support than non-defaulters after adjusting for covariates. These findings suggest that defaulters should be offered psychological support which may increase cancer treatment acceptance rates and improve survival. © 2015 John Wiley & Sons Ltd.

  12. PSYCHOACOUSTICS: a comprehensive MATLAB toolbox for auditory testing.

    PubMed

    Soranzo, Alessandro; Grassi, Massimo

    2014-01-01

    PSYCHOACOUSTICS is a new MATLAB toolbox which implements three classic adaptive procedures for auditory threshold estimation. The first includes those of the Staircase family (method of limits, simple up-down and transformed up-down); the second is the Parameter Estimation by Sequential Testing (PEST); and the third is the Maximum Likelihood Procedure (MLP). The toolbox comes with more than twenty built-in experiments each provided with the recommended (default) parameters. However, if desired, these parameters can be modified through an intuitive and user friendly graphical interface and stored for future use (no programming skills are required). Finally, PSYCHOACOUSTICS is very flexible as it comes with several signal generators and can be easily extended for any experiment.
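
    Of the three procedure families, the transformed up-down staircase is the simplest to sketch. A minimal 2-down/1-up example, which converges on the 70.7% correct point of the psychometric function (an illustrative Python re-creation, not the toolbox's MATLAB code):

        import numpy as np

        rng = np.random.default_rng(0)
        true_threshold = 0.5

        def trial(level: float) -> bool:
            """Simulated listener: detection probability rises with level."""
            p = 1.0 / (1.0 + np.exp(-(level - true_threshold) / 0.05))
            return rng.random() < p

        level, step, correct_run, reversals, last_dir = 1.0, 0.1, 0, [], 0
        while len(reversals) < 8:
            if trial(level):
                correct_run += 1
                if correct_run == 2:            # 2-down: harder after 2 correct
                    correct_run = 0
                    if last_dir == +1:
                        reversals.append(level)
                    level, last_dir = level - step, -1
            else:                               # 1-up: easier after any miss
                correct_run = 0
                if last_dir == -1:
                    reversals.append(level)
                level, last_dir = level + step, +1

        # Threshold estimate: mean level at the last few reversals.
        print("threshold estimate:", np.mean(reversals[-6:]))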

  13. PSYCHOACOUSTICS: a comprehensive MATLAB toolbox for auditory testing

    PubMed Central

    Soranzo, Alessandro; Grassi, Massimo

    2014-01-01

    PSYCHOACOUSTICS is a new MATLAB toolbox which implements three classic adaptive procedures for auditory threshold estimation. The first includes those of the Staircase family (method of limits, simple up-down and transformed up-down); the second is the Parameter Estimation by Sequential Testing (PEST); and the third is the Maximum Likelihood Procedure (MLP). The toolbox comes with more than twenty built-in experiments each provided with the recommended (default) parameters. However, if desired, these parameters can be modified through an intuitive and user friendly graphical interface and stored for future use (no programming skills are required). Finally, PSYCHOACOUSTICS is very flexible as it comes with several signal generators and can be easily extended for any experiment. PMID:25101013

  14. Vertical Scaling with the Rasch Model Utilizing Default and Tight Convergence Settings with WINSTEPS and BILOG-MG

    ERIC Educational Resources Information Center

    Custer, Michael; Omar, Md Hafidz; Pomplun, Mark

    2006-01-01

    This study compared vertical scaling results for the Rasch model from BILOG-MG and WINSTEPS. The item and ability parameters for the simulated vocabulary tests were scaled across 11 grades; kindergarten through 10th. Data were based on real data and were simulated under normal and skewed distribution assumptions. WINSTEPS and BILOG-MG were each…

  15. 40 CFR Table F-2 to Subpart F of... - Default Data Sources for Parameters Used for CO2 Emissions

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... metric ton Al (metric tons C/metric tons Al) Individual facility records. Sa: sulfur content in baked anode (percent weight) 2.0. Asha: ash content in baked anode (percent weight) 0.4. CO2 Emissions From... records. Hw: annual hydrogen content in green anodes (metric tons) 0.005 × GA. BA: annual baked anode...

  16. 40 CFR Table F-2 to Subpart F of... - Default Data Sources for Parameters Used for CO2 Emissions

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... metric ton Al (metric tons C/metric tons Al) Individual facility records. Sa: sulfur content in baked anode (percent weight) 2.0. Asha: ash content in baked anode (percent weight) 0.4. CO2 Emissions From... records. Hw: annual hydrogen content in green anodes (metric tons) 0.005 × GA. BA: annual baked anode...

  17. 40 CFR Table F-2 to Subpart F of... - Default Data Sources for Parameters Used for CO2 Emissions

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... metric ton Al (metric tons C/metric tons Al) Individual facility records. Sa: sulfur content in baked anode (percent weight) 2.0. Asha: ash content in baked anode (percent weight) 0.4. CO2 Emissions From... records. Hw: annual hydrogen content in green anodes (metric tons) 0.005 × GA. BA: annual baked anode...

  18. 40 CFR Table F-2 to Subpart F of... - Default Data Sources for Parameters Used for CO2 Emissions

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... metric ton Al (metric tons C/metric tons Al) Individual facility records. Sa: sulfur content in baked anode (percent weight) 2.0. Asha: ash content in baked anode (percent weight) 0.4. CO2 Emissions From... records. Hw: annual hydrogen content in green anodes (metric tons) 0.005 × GA. BA: annual baked anode...

  19. McGill algorithm for precipitation nowcasting by lagrangian extrapolation (MAPLE) applied to the South Korean radar network. Part I: Sensitivity studies of the Variational Echo Tracking (VET) technique

    NASA Astrophysics Data System (ADS)

    Bellon, Aldo; Zawadzki, Isztar; Kilambi, Alamelu; Lee, Hee Choon; Lee, Yong Hee; Lee, Gyuwon

    2010-08-01

    A Variational Echo Tracking (VET) technique has been applied to four months of archived data from the South Korean radar network in order to examine the influence of the various user-selectable parameters on the skill of the resulting 20-min to 4-h nowcasts. The latter are computed over a (512 × 512) array at 2-km resolution. After correcting the original algorithm to take into account the motion of precipitation across the boundaries of such a smaller radar network, we concluded that the set of default input parameters initially assumed is very close to the optimum combination. Decreasing the default vector density of (25 × 25) to (5 × 5) or increasing it to (50 × 50), using two or three maps for velocity determination, varying the relative weights for the constraints of conservation of reflectivity and of the smoothing of the velocity vectors, and finally the application of temporal smoothing all had only marginal effects on the skill of the forecasts. The relatively small sensitivity to significant variations of the VET default parameters is a direct consequence of the fact that the major source of the loss in forecast skill cannot be attributed to errors in the forecast motion, but to the unpredictable nature of storm growth and decay. Changing the time interval between maps from 20 to 10 minutes, and significantly increasing the reflectivity threshold from 15 to 30 dBZ, produced a more noticeable reduction in forecast skill. Comparisons with the Eulerian "zero velocity" forecast and with a "single" vector forecast have also been performed in order to determine the accrued skill of the VET algorithm. Because of the extensive stratiform nature of the precipitation areas affecting the Korean peninsula, the increased skill is not as large as might have been anticipated. This can be explained by the greater extent of the precipitation systems relative to the size of the radar coverage domain.
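
    The two constraints mentioned, conservation of reflectivity and smoothness of the motion field, enter VET as terms of a single variational cost function. A schematic form in LaTeX (notation illustrative; \gamma plays the role of the relative weight varied in the experiments):

        J(u, v) \;=\; \sum_{x,y}\Bigl[ Z\bigl(x + u\,\Delta t,\; y + v\,\Delta t,\; t_0 + \Delta t\bigr)
          - Z\bigl(x, y, t_0\bigr) \Bigr]^2
          \;+\; \gamma \sum_{x,y} \bigl\lVert \nabla^2 (u, v) \bigr\rVert^2

    The first term penalizes reflectivity changes under the displacement (u, v); the second penalizes roughness of the motion field. Minimizing J over the grid of vectors yields the motion field used for the Lagrangian extrapolation.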

  20. Default patterns of patients attending clinics for sexually transmitted diseases.

    PubMed Central

    Mahony, J D; Bevan, J; Wall, B

    1978-01-01

    The influence of gender, propaganda, and treatment methods was studied in relation to default behaviour of patients with sexually transmitted diseases. The overall default rate of men and women was similar, but a larger proportion of men defaulted after the initial visit, while the biggest fall-out in women was after the second attendance at the clinic. The institution of a propaganda campaign was followed by a reduction in defaulting. The statistical significance of this is open to question, however; moreover, the observed improvement in default rate was not maintained once the propaganda had been relaxed. Men treated for non-gonococcal urethritis by a regimen which included one injection a week for three weeks showed a highly significantly lower default rate compared with those who received tablets alone. PMID:580413

  1. Predictors of Default from Treatment for Tuberculosis: a Single Center Case–Control Study in Korea

    PubMed Central

    2016-01-01

    Default from tuberculosis (TB) treatment could exacerbate the disease and result in the emergence of drug resistance. This study identified the risk factors for default from TB treatment in Korea. This single-center case–control study analyzed 46 default cases and 100 controls. Default was defined as interrupting treatment for 2 or more consecutive months. The reasons for default were mainly incorrect perception or information about TB (41.3%) and experience of adverse events due to TB drugs (41.3%). In univariate analysis, low income (< 2,000 US dollars/month, 88.1% vs. 68.4%, P = 0.015), absence of TB stigma (4.3% vs. 61.3%, P < 0.001), treatment by a non-pulmonologist (74.1% vs. 25.9%, P < 0.001), history of previous treatment (37.0% vs. 19.0%, P = 0.019), former defaulter (15.2% vs. 2.0%, P = 0.005), and combined extrapulmonary TB (54.3% vs. 34.0%, P = 0.020) were significant risk factors for default. In multivariate analysis, the absence of TB stigma (adjusted odds ratio [aOR]: 46.299, 95% confidence interval [CI]: 8.078–265.365, P < 0.001), treatment by a non-pulmonologist (aOR: 14.567, 95% CI: 3.260–65.089, P < 0.001), former defaulters (aOR: 33.226, 95% CI: 2.658–415.309, P = 0.007), and low income (aOR: 5.246, 95% CI: 1.249–22.029, P = 0.024) were independent predictors of default from TB treatment. In conclusion, patients with absence of disease stigma, treated by a non-pulmonologist, who were former defaulters, and with low income should be carefully monitored during TB treatment in Korea to avoid treatment default. PMID:26839480

  2. Predictors of Default from Treatment for Tuberculosis: a Single Center Case-Control Study in Korea.

    PubMed

    Park, Cheol-Kyu; Shin, Hong-Joon; Kim, Yu-Il; Lim, Sung-Chul; Yoon, Jeong-Sun; Kim, Young-Su; Kim, Jung-Chul; Kwon, Yong-Soo

    2016-02-01

    Default from tuberculosis (TB) treatment could exacerbate the disease and result in the emergence of drug resistance. This study identified the risk factors for default from TB treatment in Korea. This single-center case-control study analyzed 46 default cases and 100 controls. Default was defined as interrupting treatment for 2 or more consecutive months. The reasons for default were mainly incorrect perception or information about TB (41.3%) and experience of adverse events due to TB drugs (41.3%). In univariate analysis, low income (< 2,000 US dollars/month, 88.1% vs. 68.4%, P = 0.015), absence of TB stigma (4.3% vs. 61.3%, P < 0.001), treatment by a non-pulmonologist (74.1% vs. 25.9%, P < 0.001), history of previous treatment (37.0% vs. 19.0%, P = 0.019), former defaulter (15.2% vs. 2.0%, P = 0.005), and combined extrapulmonary TB (54.3% vs. 34.0%, P = 0.020) were significant risk factors for default. In multivariate analysis, the absence of TB stigma (adjusted odds ratio [aOR]: 46.299, 95% confidence interval [CI]: 8.078-265.365, P < 0.001), treatment by a non-pulmonologist (aOR: 14.567, 95% CI: 3.260-65.089, P < 0.001), former defaulters (aOR: 33.226, 95% CI: 2.658-415.309, P = 0.007), and low income (aOR: 5.246, 95% CI: 1.249-22.029, P = 0.024) were independent predictors of default from TB treatment. In conclusion, patients with absence of disease stigma, treated by a non-pulmonologist, who were former defaulters, and with low income should be carefully monitored during TB treatment in Korea to avoid treatment default.

  3. Redundant mechanisms are involved in suppression of default cell fates during embryonic mesenchyme and notochord induction in ascidians.

    PubMed

    Kodama, Hitoshi; Miyata, Yoshimasa; Kuwajima, Mami; Izuchi, Ryoichi; Kobayashi, Ayumi; Gyoja, Fuki; Onuma, Takeshi A; Kumano, Gaku; Nishida, Hiroki

    2016-08-01

    During embryonic induction, the responding cells invoke an induced developmental program, whereas in the absence of an inducing signal, they assume a default uninduced cell fate. Suppression of the default fate during the inductive event is crucial for choice of the binary cell fate. In contrast to the mechanisms that promote an induced cell fate, those that suppress the default fate have been overlooked. Upon induction, intracellular signal transduction results in activation of genes encoding key transcription factors for induced tissue differentiation. It is elusive whether an induced key transcription factor has dual functions involving suppression of the default fates and promotion of the induced fate, or whether suppression of the default fate is independently regulated by other factors that are also downstream of the signaling cascade. We show that during ascidian embryonic induction, default fates were suppressed by multifold redundant mechanisms. The key transcription factor, Twist-related.a, which is required for mesenchyme differentiation, and another independent transcription factor, Lhx3, which is dispensable for mesenchyme differentiation, sequentially and redundantly suppress the default muscle fate in induced mesenchyme cells. Similarly in notochord induction, Brachyury, which is required for notochord differentiation, and other factors, Lhx3 and Mnx, are likely to suppress the default nerve cord fate redundantly. Lhx3 commonly suppresses the default fates in two kinds of induction. Mis-activation of the autonomously executed default program in induced cells is detrimental to choice of the binary cell fate. Multifold redundant mechanisms would be required for suppression of the default fate to be secure. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. True status of smear-positive pulmonary tuberculosis defaulters in Malawi.

    PubMed Central

    Kruyt, M. L.; Kruyt, N. D.; Boeree, M. J.; Harries, A. D.; Salaniponi, F. M.; van Noord, P. A.

    1999-01-01

    The article reports the results of a study to determine the true outcome of 8 months of treatment received by smear-positive pulmonary tuberculosis (PTB) patients who had been registered as defaulters in the Queen Elizabeth Central Hospital (QECH) and Mlambe Mission Hospital (MMH), Blantyre, Malawi. The treatment outcomes were documented from the tuberculosis registers of all patients registered between 1 October 1994 and 30 September 1995. The true treatment outcome for patients who had been registered as defaulters was determined by making personal inquiries at the treatment units and the residences of patients or relatives and, in a few cases, by writing to the appropriate postal address. Interviews were carried out with patients who had defaulted and were still alive and with matched, fully compliant PTB patients who had successfully completed the treatment to determine the factors associated with defaulter status. Of the 1099 patients, 126 (11.5%) had been registered as defaulters, and the true treatment outcome was determined for 101 (80%) of the latter; only 22 were true defaulters, 31 had completed the treatment, 31 had died during the treatment period, and 17 had left the area. A total of 8 of the 22 true defaulters were still alive and were compared with the compliant patients. Two significant characteristics were associated with the defaulters; they were unmarried; and they did not know the correct duration of antituberculosis treatment. Many of the smear-positive tuberculosis patients who had been registered as defaulters in the Blantyre district were found to have different treatment outcomes, without defaulting. The quality of reporting in the health facilities must therefore be improved in order to exclude individuals who are not true defaulters. PMID:10361755

  5. Review and analysis of global agricultural N₂O emissions relevant to the UK.

    PubMed

    Buckingham, S; Anthony, S; Bellamy, P H; Cardenas, L M; Higgins, S; McGeough, K; Topp, C F E

    2014-07-15

    As part of a UK government funded research project to update the UK N2O inventory methodology, a systematic review of published nitrous oxide (N2O) emission factors was carried out of non-UK research, for future comparison and synthesis with the UK measurement based evidence base. The aim of the study is to assess how the UK IPCC default emission factor for N2O emissions derived from synthetic or organic fertiliser inputs (EF1) compares to international values reported in published literature. The availability of data for comparing and/or refining the UK IPCC default value and the possibility of analysing sufficient auxiliary data to propose a Tier 2 EF1 reporting strategy is evaluated. The review demonstrated a lack of consistency in reporting error bounds for fertiliser-derived EFs and N2O flux data, with 8% and 44% of publications reporting EF and N2O flux error bounds respectively. There was also poor description of environmental (climate and soil) and experimental design auxiliary data. This is likely to be due to differences in study objectives; however, potential improvements to soil parameter reporting are proposed. The review demonstrates that emission factors for agricultural-derived N2O emissions ranged from -0.34% to 37%, showing high variation compared to the UK Tier 1 IPCC EF1 default values of 1.25% (IPCC 1996) and 1% (IPCC 2006). However, the majority (83%) of EFs reported for UK-relevant soils fell within the UK IPCC EF1 uncertainty range of 0.03% to 3%. Residual maximum likelihood (REML) analysis of the data collated in the review showed that the type and rate of fertiliser N applied and soil type were significant factors influencing the EFs reported. Country of emission, the length of the measurement period, the number of splits, the crop type, pH and SOC did not have a significant impact on N2O emissions. A subset of publications where sufficient data were reported for meta-analysis to be conducted was identified. Meta-analysis of effect sizes of 41 treatments demonstrated that the application of fertiliser has a significant effect on N2O emissions in comparison to control plots and that emission factors were significantly different from zero. However, no significant relationships were found between the quantity of fertiliser applied and the effect size of the amount of N2O emitted from fertilised plots compared to control plots. Annual addition of fertiliser of 35 to 557 kg N/ha gave a mean increase in emissions of 2.02 ± 0.28 g N2O/ha/day compared to control treatments (p<0.01). Emission factors were significantly different from zero, with a mean emission factor estimated directly from the meta-analysis of 0.17 ± 0.02%. This is lower than the IPCC 2006 Tier 1 EF1 value of 1% but falls within the uncertainty bound for the IPCC 2006 Tier 1 EF1 (0.03% to 3%). As only a small number of papers were viable for meta-analysis due to lack of reporting of the key controlling factors, the estimates of EF in this paper cannot include the true variability under conditions similar to the UK. Review-derived EFs of -0.34% to 37% and a mean EF from meta-analysis of 0.17 ± 0.02% highlight variability in reported EFs depending on the method applied and sample size. A protocol of systematic reporting of N2O emissions and key auxiliary parameters in publications across disciplines is proposed. If adopted, this would strengthen the community's ability to inform IPCC Tier 2 reporting development and reduce the uncertainty surrounding reported UK N2O emissions. Copyright © 2014 Elsevier B.V. All rights reserved.
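
    For reference, fertiliser-induced emission factors of this kind are conventionally computed with the difference method; a standard IPCC-style form, consistent with but not quoted from the paper:

        \mathrm{EF_1}\,(\%) \;=\; \frac{\mathrm{N_2O\text{-}N_{fertilised}} - \mathrm{N_2O\text{-}N_{control}}}{\mathrm{N_{applied}}} \times 100

    For example, an application of 100 kg N/ha that raises cumulative emissions by 1 kg N2O-N/ha over the control gives EF1 = 1%, the IPCC 2006 Tier 1 default.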

  6. Treatment Default amongst Patients with Tuberculosis in Urban Morocco: Predicting and Explaining Default and Post-Default Sputum Smear and Drug Susceptibility Results

    PubMed Central

    Ghali, Iraqi; Kizub, Darya; Billioux, Alexander C.; Bennani, Kenza; Bourkadi, Jamal Eddine; Benmamoun, Abderrahmane; Lahlou, Ouafae; Aouad, Rajae El; Dooley, Kelly E.

    2014-01-01

    Setting Public tuberculosis (TB) clinics in urban Morocco. Objective Explore risk factors for TB treatment default and develop a prediction tool. Assess consequences of default, specifically risk for transmission or development of drug resistance. Design Case-control study comparing patients who defaulted from TB treatment and patients who completed it using quantitative methods and open-ended questions. Results were interpreted in light of health professionals’ perspectives from a parallel study. A predictive model and simple tool to identify patients at high risk of default were developed. Sputum from cases with pulmonary TB was collected for smear and drug susceptibility testing. Results 91 cases and 186 controls enrolled. Independent risk factors for default included current smoking, retreatment, work interference with adherence, daily directly observed therapy, side effects, quick symptom resolution, and not knowing one’s treatment duration. Age >50 years, never smoking, and having friends who knew one’s diagnosis were protective. A simple scoring tool incorporating these factors was 82.4% sensitive and 87.6% specific for predicting default in this population. Clinicians and patients described additional contributors to default and suggested locally-relevant intervention targets. Among 89 cases with pulmonary TB, 71% had sputum that was smear positive for TB. Drug resistance was rare. Conclusion The causes of default from TB treatment were explored through synthesis of qualitative and quantitative data from patients and health professionals. A scoring tool with high sensitivity and specificity to predict default was developed. Prospective evaluation of this tool coupled with targeted interventions based on our findings is warranted. Of note, the risk of TB transmission from patients who default treatment to others is likely to be high. The commonly-feared risk of drug resistance, though, may be low; a larger study is required to confirm these findings. PMID:24699682
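
    Sensitivity and specificity of the scoring tool summarize its confusion matrix. A minimal sketch of the computation; the counts below are a reconstruction chosen to reproduce the reported 82.4%/87.6% on 91 cases and 186 controls, not figures taken from the paper:

        def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
            """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # Hypothetical counts at the chosen score cut-off.
        sensitivity, specificity = sens_spec(tp=75, fn=16, tn=163, fp=23)
        print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")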

  7. 48 CFR 609.405-70 - Termination action decision.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) For overseas posts, A/OPE. (b) Termination for default. Termination for default under a contract's default clause is appropriate when the circumstances giving rise to the debarment or suspension also constitute a default in the contractor's performance of that contract. Debarment or suspension of the...

  8. Determinants of default from pulmonary tuberculosis treatment in Kuwait.

    PubMed

    Zhang, Qing; Gaafer, Mohamed; El Bayoumy, Ibrahim

    2014-01-01

    To determine the prevalence and risk factors of default from pulmonary tuberculosis treatment in Kuwait. Retrospective study. We studied all patients who were registered for pulmonary tuberculosis treatment between January 1, 2010, and December 31, 2012, and admitted into TB wards in El Rashid Center or treated in the outpatient clinic in the TB Control Unit. There were 110 (11.5%) patients who defaulted from treatment. Fifty-six percent of those who defaulted did so in the first 2 months of treatment and 86.4% of them were still bacteriologically positive at the time of default. Key risk factors associated with noncompliance were male sex, low educational level, non-Kuwaiti nationality, history of default, and history of concomitant diabetes mellitus, liver disease, or lung cancer. Multiple drug resistance was also associated with default from treatment. Default from treatment may be partially responsible for the persistent relatively high rates of tuberculosis in Kuwait. Health professionals and policy makers should ensure that all barriers to treatment are removed and that incentives are used to encourage treatment compliance.

  9. SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2008-01-01

    This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space- or tab-delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently performs numerical calculations.
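
    The adjustment logic itself is simple to sketch: substitute a user-supplied alternative whenever the process model left a value missing or at a default flag. A schematic re-implementation in Python (the real SIM_ADJUST is Fortran90 and reads JUPITER-style files; the names and flag value below are illustrative):

        # Alternatives per observation name, tried in order: an alternative may be
        # another simulated name or a constant fallback value.
        alternatives = {
            "head_obs1": ["head_cell_above", -999.0],
            "flow_obs2": [0.0],
        }

        def adjust(simulated: dict, default_flag: float = 1e30) -> dict:
            """Replace missing/defaulted simulated equivalents with alternatives."""
            adjusted = {}
            for name, chain in alternatives.items():
                value = simulated.get(name, default_flag)
                for alt in chain:
                    if value != default_flag:
                        break
                    # An alternative is either another simulated name or a constant.
                    value = simulated.get(alt, alt) if isinstance(alt, str) else alt
                adjusted[name] = value
            return adjusted

        # head_obs1 falls back to the cell above; flow_obs2 falls back to 0.0.
        print(adjust({"head_obs1": 1e30, "head_cell_above": 12.7, "flow_obs2": 1e30}))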

  10. Design of oligonucleotides for microarrays and perspectives for design of multi-transcriptome arrays.

    PubMed

    Nielsen, Henrik Bjørn; Wernersson, Rasmus; Knudsen, Steen

    2003-07-01

    Optimal design of oligonucleotides for microarrays involves tedious and laborious work evaluating potential oligonucleotides relative to a series of parameters. The currently available tools for this purpose are limited in their flexibility and do not present the oligonucleotide designer with an overview of these parameters. We present here a flexible tool named OligoWiz for designing oligonucleotides for multiple purposes. OligoWiz presents a set of parameter scores in a graphical interface to facilitate an overview for the user. Additional custom parameter scores can easily be added to the program to extend the default parameters: homology, DeltaTm, low-complexity, position and GATC-only. Furthermore we present an analysis of the limitations in designing oligonucleotide sets that can detect transcripts from multiple organisms. OligoWiz is available at www.cbs.dtu.dk/services/OligoWiz/.
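
    The per-oligo parameter scores are ultimately combined into a single ranking. A minimal sketch of one plausible weighted combination (the score values and weights are invented for illustration; OligoWiz's actual scoring scheme is described in the paper and on the website above):

        def combined_score(scores: dict[str, float],
                           weights: dict[str, float]) -> float:
            """Weighted mean of per-parameter scores, each assumed in [0, 1]."""
            total_w = sum(weights.values())
            return sum(weights[k] * scores[k] for k in weights) / total_w

        # Hypothetical candidate oligo scored on the five default parameters.
        candidate = {"homology": 0.9, "delta_tm": 0.8, "low_complexity": 1.0,
                     "position": 0.6, "gatc_only": 1.0}
        weights   = {"homology": 2.0, "delta_tm": 1.5, "low_complexity": 1.0,
                     "position": 1.0, "gatc_only": 0.5}
        print(f"combined score: {combined_score(candidate, weights):.3f}")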

  11. User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes

    USGS Publications Warehouse

    Klein, Fred W.

    2002-01-01

    Hypoinverse is a computer program that processes files of seismic station data for an earthquake (like P wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user’s guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version and supersedes the earlier documents. It serves as a detailed user's guide to the current version running on Unix and VAX-Alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN Unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of Hypoinverse includes all of the previous capabilities, but adds Y2000 formats to those defined earlier. In most cases, the new formats add 2 digits to the year field to accommodate the century. Other fields are sometimes rearranged or expanded to give a better field order. The Y2000 formats are invoked with the “200” command. When the Y2000 flag is turned on, all files are read and written in the new format and there is no mixing of format types in a single run. Some formats without a date field, like station files, have not changed. A separate program called 2000CONV has been written to convert old formats to new. Other new features, like expanded station names, calculating amplitude magnitudes from a variety of digital seismometers, station history files, interactive earthquake processing, and locations from CUSP (Caltech USGS Seismic Processing) binary files, have also been added. General features. Hypoinverse will locate any number of events in an input file, which can be in one of several different formats. Any or all of printout, summary or archive output may be produced. Hypoinverse is driven by user commands. The various commands define input and output files, set adjustable parameters, and solve for locations of a file of earthquake data using the parameters and files currently set. It is both interactive and "batch" in that commands may be executed either from the keyboard or from a file. You execute the commands in a file by typing @filename at the Hypoinverse prompt. 
Users may either supply parameters on the command line, or omit them and be prompted interactively. The current parameter values are displayed and may be taken as defaults by pressing just the RETURN key after the prompt. This makes the program very easy to use, provided you can remember the names of the commands. Combining commands with and without their required parameters into a command file permits a variety of customized procedures, such as automatic input of crustal model and station data but prompting for a different phase file each time. All commands are 3 letters long and most require one or more parameters or file names. If they appear on a line with a command, character strings such as filenames must be enclosed in apostrophes (single quotes). Appendix 1 gives this and other free-format rules for supplying parameters, which are parsed in Fortran. When several parameters are required following a command, any of them may be omitted by replacing them with null fields (see appendix 1). A null field leaves that parameter unchanged from its current or default value. When you start HYPOINVERSE, default values are in effect for all parameters except file names. Hypoinverse is a complicated program with many features and options. Many of these "advanced" or seldom used features are documented here, but are more detailed than a typical user needs to read about when first starting with the program. I have put some of this material in smaller type so that a first-time user can concentrate on the more important information.
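
    The null-field convention is easy to demonstrate. This Python sketch parses one free-format command line of the kind described above, leaving any parameter whose field is empty at its current (default) value; the command name, parameter order and types are hypothetical, not actual Hypoinverse commands.

        current = {"param1": 5.0, "param2": 1.5, "file": None}  # startup defaults

        def apply_command(line, state, order=("param1", "param2", "file")):
            cmd, _, rest = line.strip().partition(" ")
            fields = [f.strip() for f in rest.split(",")] if rest else []
            for name, field in zip(order, fields):
                if field == "":        # null field: keep current/default value
                    continue
                if field.startswith("'") and field.endswith("'"):
                    state[name] = field[1:-1]   # quoted string, e.g. a filename
                else:
                    state[name] = float(field)
            return cmd, state

        apply_command("XYZ 4.0, , 'stations.dat'", current)
        print(current)  # {'param1': 4.0, 'param2': 1.5, 'file': 'stations.dat'}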

  12. 19 CFR 210.16 - Default.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Default. 210.16 Section 210.16 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.16 Default. (a) Definition of default. (1) A party shall be found in...

  13. 19 CFR 210.16 - Default.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 3 2011-04-01 2011-04-01 false Default. 210.16 Section 210.16 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.16 Default. (a) Definition of default. (1) A party shall be found in...

  14. 19 CFR 210.16 - Default.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 3 2013-04-01 2013-04-01 false Default. 210.16 Section 210.16 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.16 Default. (a) Definition of default. (1) A party shall be found in...

  15. 19 CFR 210.16 - Default.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 3 2014-04-01 2014-04-01 false Default. 210.16 Section 210.16 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.16 Default. (a) Definition of default. (1) A party shall be found in...

  16. 7 CFR 1980.470 - Defaults by borrower.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 14 2010-01-01 2009-01-01 true Defaults by borrower. 1980.470 Section 1980.470...) PROGRAM REGULATIONS (CONTINUED) GENERAL Business and Industrial Loan Program § 1980.470 Defaults by... property management. A. In case of any monetary or significant non-monetary default under the loan...

  17. Use of cellular phone contacts to increase return rates for immunization services in Kenya

    PubMed Central

    Mokaya, Evans; Mugoya, Isaac; Raburu, Jane; Shimp, Lora

    2017-01-01

    Introduction In Kenya, failure to complete immunization schedules by children who previously accessed immunization services is an obstacle to ensuring that children are fully immunized. Home visit approaches used to track defaulting children have not been successful in reducing the drop-out rate. Methods This study tested the use of phone contacts as an approach for tracking immunization defaulters in twelve purposively selected facilities in three districts of western Kenya. For nine months, children accessing immunization services in the facilities were tracked and caregivers were asked their reasons for defaulting. Results In all of the facilities, caregiver phone ownership was above 80%. In 11 of the 12 facilities, defaulter rates between the pentavalent1 and pentavalent3 vaccination doses fell significantly, to within the acceptable level of < 10%. Caregivers provided reliable contact information and health workers perceived phone-based defaulter communications positively. Tracking a defaulter required on average 2 minutes by voice and Ksh 6 ($0.07). Competing tasks and concerns about vaccinating sick children and side-effects were the most cited reasons for caregivers defaulting. Notably, a significant number of children categorised as defaulters had been vaccinated in a different facility (and were therefore “false defaulters”). Conclusion Use of phone contacts for follow-up is a feasible and cost-effective method for tracking defaulters. This approach should complement traditional home visits, especially for caregivers without phones. Given communication-related reasons for defaulting, it is important that immunization programs scale up community education activities. A system for health facilities to share details of defaulting children should be established to reduce “false defaulters”. PMID:29138660

  18. Choosers, Obstructed Choosers, and Nonchoosers: A Framework for Defaulting in Schooling Choices

    ERIC Educational Resources Information Center

    Delale-O'Connor, Lori

    2018-01-01

    Background/Context: Prior research overlooks the importance of drawing distinctions within the category of defaulters or "nonchoosers" in schooling choices. Defaulters are both a theoretically and empirically interesting population, and understanding the processes by which families come to or are assigned the default school offers…

  19. 7 CFR 3575.75 - Defaults by borrower.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Defaults by borrower. 3575.75 Section 3575.75... AGRICULTURE GENERAL Community Programs Guaranteed Loans § 3575.75 Defaults by borrower. (a) Lender... default. The lender will continue to keep the Agency informed on a bimonthly basis until such time as the...

  20. 42 CFR 1001.1501 - Default of health education loan or scholarship obligations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 5 2011-10-01 2011-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...

  1. 42 CFR 1001.1501 - Default of health education loan or scholarship obligations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 5 2013-10-01 2013-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...

  2. 42 CFR 1001.1501 - Default of health education loan or scholarship obligations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...

  3. 42 CFR 1001.1501 - Default of health education loan or scholarship obligations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 5 2014-10-01 2014-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...

  4. 42 CFR 1001.1501 - Default of health education loan or scholarship obligations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 5 2012-10-01 2012-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...

  5. 10 CFR 110.110 - Default.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Default. 110.110 Section 110.110 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) EXPORT AND IMPORT OF NUCLEAR EQUIPMENT AND MATERIAL Hearings § 110.110 Default. When a participant fails to act within a specified time, the presiding officer may consider him in default, issue an...

  6. 24 CFR 26.41 - Default.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) General. The respondent may be found in default, upon motion, for failure to file a timely response to the Government's complaint. The motion shall include a copy of the complaint and a proposed default order, and... motion. (b) Default order. The ALJ shall issue a decision on the motion within 15 days after the...

  7. Cohort Default Rate Guide.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Default Management Div.

    This guide is designed to assist schools with their Federal Family Education Loan Program (FFEL) and the William D. Ford Federal Direct Loan (Direct Loan) Program cohort default rate. The guide is a reference tool in understanding cohort default rates and processes. This guide incorporates two former guides, the "Draft Cohort Default Rate…

  8. 24 CFR 907.3 - Bases for substantial default.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false Bases for substantial default. 907.3 Section 907.3 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT... DEVELOPMENT SUBSTANTIAL DEFAULT BY A PUBLIC HOUSING AGENCY § 907.3 Bases for substantial default. (a...

  9. 24 CFR 907.3 - Bases for substantial default.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Bases for substantial default. 907.3 Section 907.3 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT... DEVELOPMENT SUBSTANTIAL DEFAULT BY A PUBLIC HOUSING AGENCY § 907.3 Bases for substantial default. (a...

  10. 24 CFR 907.3 - Bases for substantial default.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false Bases for substantial default. 907.3 Section 907.3 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT... DEVELOPMENT SUBSTANTIAL DEFAULT BY A PUBLIC HOUSING AGENCY § 907.3 Bases for substantial default. (a...

  11. 14 CFR 1261.413 - Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of overpayments, delinquencies, or defaults. 1261.413 Section 1261.413 Aeronautics and Space NATIONAL...) § 1261.413 Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults. The... internal controls to identify causes, if any, of overpayments, delinquencies, and defaults, and establish...

  12. 48 CFR 49.403 - Termination of cost-reimbursement contracts for default.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-reimbursement contracts for default. 49.403 Section 49.403 Federal Acquisition Regulations System FEDERAL... of cost-reimbursement contracts for default. (a) The right to terminate a cost-reimbursement contract... case by the clause. (b) Settlement of a cost-reimbursement contract terminated for default is subject...

  14. A Regional Model for Malaria Vector Developmental Habitats Evaluated Using Explicit, Pond-Resolving Surface Hydrology Simulations.

    PubMed

    Asare, Ernest Ohene; Tompkins, Adrian Mark; Bomblies, Arne

    2016-01-01

    Dynamical malaria models can relate precipitation to the availability of vector breeding sites using simple models of surface hydrology. Here, a revised scheme is developed for the VECTRI malaria model and evaluated alongside the default scheme using a two-year simulation by HYDREMATS, a 10-metre-resolution, village-scale model that explicitly simulates individual ponds. Despite the simplicity of the two VECTRI surface hydrology parametrization schemes, they can reproduce the sub-seasonal evolution of fractional water coverage. Calibration of the model parameters is required to simulate the mean pond fraction correctly. The default VECTRI model tended to overestimate water fraction in periods subject to light rainfall events and underestimate it during periods of intense rainfall. This systematic error was improved in the revised scheme by including a parametrization for surface run-off, such that light rainfall below the initial abstraction threshold does not contribute to ponds. After calibration of the pond model, the VECTRI model was able to simulate vector densities that compared well to the detailed agent-based model contained in HYDREMATS, without further parameter adjustment. Substituting local rain-gauge data with satellite-retrieved precipitation gave a reasonable approximation, raising the prospects for regional malaria simulations even in data-sparse regions. However, further improvements could be made if a method can be derived to calibrate the key hydrology parameters of the pond model in each grid cell location, possibly also incorporating slope and soil texture.
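
    As a rough illustration of the initial-abstraction idea, the Python sketch below steps a grid-cell pond fraction forward in time so that only rainfall above a threshold feeds the ponds. All constants are assumed for illustration; the actual VECTRI parametrization and its calibrated values are described in the paper.

        I_A = 5.0      # initial abstraction threshold, mm/day (assumed)
        K_FILL = 0.01  # pond fraction gained per mm of effective rain (assumed)
        TAU = 10.0     # e-folding time scale of pond losses, days (assumed)
        W_MAX = 0.3    # maximum pond fraction of the grid cell (assumed)

        def step_pond_fraction(w, rain_mm):
            effective = max(rain_mm - I_A, 0.0)  # light rain never reaches the ponds
            w += K_FILL * effective * (1.0 - w / W_MAX)  # filling saturates at W_MAX
            w -= w / TAU                                 # evaporation and infiltration
            return min(max(w, 0.0), W_MAX)

        w = 0.05
        for rain in [2.0, 0.0, 30.0, 12.0, 0.0]:  # daily rainfall, mm
            w = step_pond_fraction(w, rain)
        print(round(w, 4))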

  16. Determination of the cross section for (n,p) and (n,α) reactions on (165)Ho at 13.5 and 14.8MeV.

    PubMed

    Luo, Junhua; An, Li; Jiang, Li; He, Long

    2015-04-01

    Activation cross-sections for the (165)Ho(n,p)(165)Dy and (165)Ho(n,α)(162)Tb reactions were measured by means of the activation method at 13.5 and 14.8 MeV, to resolve inconsistencies in existing data. A neutron beam produced via the (3)H(d,n)(4)He reaction was used. Statistical model calculations were performed using the nuclear-reaction codes EMPIRE-3.2 Malta and TALYS-1.6 with default parameters, at neutron energies varying from the reaction threshold to 20 MeV. Results are also discussed and compared with corresponding values found in the literature. The calculated results for the (165)Ho(n,α)(162)Tb reaction agreed fairly well with the experimental data, but there were large discrepancies in the results for the (165)Ho(n,p)(165)Dy reaction.

  17. Data Build-up for the Construction of Korean Specific Greenhouse Gas Emission Inventory in Livestock Categories.

    PubMed

    Won, S G; Cho, W S; Lee, J E; Park, K H; Ra, C S

    2014-03-01

    Many studies on methane (CH4) and nitrous oxide (N2O) emissions from livestock industries have revealed that livestock production directly contributes to greenhouse gas (GHG) emissions through enteric fermentation and manure management, which negatively affects animal-environment sustainability. In the present study, three essential values for GHG emission were measured: i) the maximum CH4-producing capacity at mesophilic temperature (37°C) from anaerobically stored manure in each livestock category (B0,KM, Korean livestock manure for B0); ii) the EF3(s) value, the emission factor for direct N2O emissions from manure management system S in the country (kg N2O-N kg N(-1)), at mesophilic (37°C) and thermophilic (55°C) temperatures; and iii) Nex(T), the annual N excretion for livestock category T (kg N animal(-1) yr(-1)), from different livestock manures. Static incubation with and without aeration was performed to obtain the N2O and CH4 emissions from each sample, respectively. Chemical compositions of pre- and post-incubated manure were analyzed. Contents of total solids (% TS) and volatile solids (% VS) and the carbon-to-nitrogen ratio (C/N) decreased significantly in all the samples through generation of C-containing biogas, whereas moisture content (%) and pH increased after incubation. No large difference in total nitrogen content was observed between pre- and post-incubation during CH4 and N2O emission. CH4 emissions (g CH4 kg VS(-1)) from the three manures (sows, layers and Korean cattle) differed, and a high C/N ratio resulted in high CH4 emission. Similarly, N2O emission was found to be affected by % VS, pH, and temperature. The B0,KM values for sows, layers, and Korean cattle obtained at 37°C are 0.0579, 0.0006, and 0.0828 m(3) CH4 kg VS(-1), respectively, which are much lower than the default values in the IPCC guidelines (GL), except for the value from Korean cattle. For sows and Korean cattle, the Nex(T) values of 7.67 and 28.19 kg N yr(-1), respectively, are also 2.5-fold lower than the IPCC GL values. However, the Nex(T) value for layers, 0.63 kg N yr(-1), is very similar to the default value of 0.6 kg N yr(-1) in the IPCC GLs for national greenhouse gas inventories for countries such as South Korea/Asia. The EF3(s) values obtained at 37°C and 55°C were found to be far below the default value.

  18. Water quality guidelines for the Great Barrier Reef World Heritage Area: a basis for development and preliminary values.

    PubMed

    Moss, Andrew; Brodie, Jon; Furnas, Miles

    2005-01-01

    The Australian and New Zealand Guidelines for Fresh and Marine Water Quality (ANZECC Guidelines) provide default national guideline values for a wide range of indicators relevant to the protection of the ecological condition of natural waters. However, the ANZECC Guidelines also place a strong emphasis on the need to develop more locally relevant guidelines. Using a structured framework, this paper explores indicators and regional data sets that can be used to develop more locally relevant guidelines for the Great Barrier Reef World Heritage Area (GBRWHA). The paper focuses on the water quality impacts of adjacent catchments on the GBRWHA, with the key stressors addressed being nutrients, sediments and agricultural chemicals. Indicators relevant to these stressors are discussed, including both physico-chemical pressure indicators and biological condition indicators. Where adequate data sets are available, guideline values are proposed. Generally, data were much more readily available for physico-chemical pressure indicators than for biological condition indicators. Specifically, guideline values are proposed for the major nutrients nitrogen (N) and phosphorus (P) and for chlorophyll-a. More limited guidelines are proposed for sediment-related indicators. For most agricultural chemicals, the ANZECC Guidelines are likely to remain the default of choice for some time, but it is noted that there are data in the literature that could be used to develop more locally relevant guidelines.

  19. 40 CFR 98.463 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in... = Methane correction factor (fraction). Use the default value of 1 unless there is active aeration of waste... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...

  20. 40 CFR 98.463 - Calculating GHG emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in... = Methane correction factor (fraction). Use the default value of 1 unless there is active aeration of waste... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...

  1. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), the corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and the Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…

  2. 77 FR 3559 - Energy Conservation Program for Consumer Products: Test Procedures for Refrigerators...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ..., which is typical of an approach enabled by more sophisticated electronic controls. Id. The interim final... and long- time automatic defrost or variable defrost control and adjust the default values of maximum... accurate measurement of the energy use of products with variable defrost control. DATES: The amendments are...

  3. Determination of as-discarded methane potential in residential and commercial municipal solid waste.

    PubMed

    Chickering, Giles W; Krause, Max J; Townsend, Timothy G

    2018-06-01

    Methane generation potential, L0, is a primary parameter of the first-order decay (FOD) model used for prediction and regulation of landfill gas (LFG) generation in municipal solid waste (MSW) landfills. The current US EPA AP-42 default value for L0, which has been in place for almost 20 years, is 100 m3 CH4/Mg MSW as-discarded. Recent research suggests the yield of landfilled waste could be less than 60 m3 CH4/Mg MSW. This study aimed to measure the L0 of present-day residential and commercial as-discarded MSW. In doing so, 39 waste collection vehicles were sorted for composition before samples of each biodegradable fraction were analyzed for methane generation potential. Methane yields were determined for over 450 samples of 14 different biodegradable MSW fractions, later to be combined with moisture content and volatile solids data to calculate L0 values for each waste load. An average value of 80 m3 CH4/Mg MSW was determined for all samples, with 95% of values in the interval 74-86 m3 CH4/Mg MSW as-discarded. While no statistically significant difference was observed, commercial MSW (mean 85, median 88 m3 CH4/Mg MSW) showed a higher average L0 than residential MSW (mean 75, median 71 m3 CH4/Mg MSW). Many methane potential values for individual fractions described in previous work were found within the range of values determined by BMP in this study.
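
    To show how L0 feeds the FOD model, here is a minimal Python sketch of a LandGEM-style first-order decay calculation for a single year's waste placement. Only L0 = 80 m3 CH4/Mg comes from the study; the decay rate k and the waste mass are assumed values for illustration.

        import math

        L0 = 80.0   # methane generation potential, m3 CH4 per Mg MSW (study average)
        k = 0.05    # first-order decay rate, 1/yr (assumed)
        M = 1000.0  # mass of waste discarded in year 0, Mg (assumed)

        def annual_generation(t):
            """Methane generated in year t from the year-0 waste, m3 CH4/yr."""
            return L0 * M * k * math.exp(-k * t)

        total = sum(annual_generation(t) for t in range(200))
        print(round(total))  # ~82,000; the continuous-time integral equals L0 * M = 80,000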

  4. Biomass expansion factor and root-to-shoot ratio for Pinus in Brazil.

    PubMed

    Sanquetta, Carlos R; Corte, Ana Pd; da Silva, Fernando

    2011-09-24

    The Biomass Expansion Factor (BEF) and the Root-to-Shoot Ratio (R) are variables used to quantify carbon stock in forests. They are often treated as constant or species/area-specific values in most studies. This study aimed at showing the dependence of BEF and R on tree size and age, and proposed equations to improve estimates of forest biomass and carbon stock. Data from 70 sample trees of Pinus spp. grown in southern Brazil, spanning different diameter classes and ages, were used to demonstrate the correlation between BEF and R and forest inventory data such as DBH, tree height and age. Total dry biomass, carbon stock and CO2 equivalent were simulated using the IPCC default values of BEF and R, the corresponding averages calculated from the data used in this study, and the values estimated by regression equations. The mean values of BEF and R calculated in this study were 1.47 and 0.17, respectively. BEF and R were inversely related to the tree measurement variables, with negative exponential behavior. Simulations indicated that use of fixed values of BEF and R, whether IPCC defaults or current average data, may lead to unreliable estimates in carbon stock inventories and CDM projects. It was concluded that accounting for the variations in BEF and R, and using regression equations to relate them to DBH, tree height and age, is fundamental to obtaining reliable estimates of forest tree biomass, carbon sink and CO2 equivalent.
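
    A minimal worked example of how BEF and R enter a carbon-stock estimate, using the study's mean values (BEF = 1.47, R = 0.17). The stem biomass of the stand, the carbon fraction and the accounting convention are assumptions for illustration, not the paper's regression equations.

        BEF = 1.47               # biomass expansion factor (study mean)
        R = 0.17                 # root-to-shoot ratio (study mean)
        CARBON_FRACTION = 0.47   # IPCC-style default carbon fraction (assumed)

        stem_biomass = 120.0     # Mg/ha of stem dry biomass (hypothetical stand)

        above_ground = stem_biomass * BEF         # expand stem biomass to whole tree above ground
        total_biomass = above_ground * (1.0 + R)  # add roots via the root-to-shoot ratio
        carbon = total_biomass * CARBON_FRACTION
        co2_equivalent = carbon * 44.0 / 12.0     # molecular-weight conversion C -> CO2

        print(f"total biomass {total_biomass:.1f} Mg/ha, "
              f"carbon {carbon:.1f} Mg C/ha, CO2e {co2_equivalent:.1f} Mg/ha")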

  5. Intrinsic brain abnormalities in young healthy adults with childhood trauma: A resting-state functional magnetic resonance imaging study of regional homogeneity and functional connectivity.

    PubMed

    Lu, Shaojia; Gao, Weijia; Wei, Zhaoguo; Wang, Dandan; Hu, Shaohua; Huang, Manli; Xu, Yi; Li, Lingjiang

    2017-06-01

    Childhood trauma confers great risk for the development of multiple psychiatric disorders; however, the neural basis for this association is still unknown. The present resting-state functional magnetic resonance imaging study aimed to detect the effects of childhood trauma on brain function in a group of young healthy adults. In total, 24 healthy individuals with childhood trauma and 24 age- and sex-matched adults without childhood trauma were recruited. Each participant underwent resting-state functional magnetic resonance imaging scanning. Intra-regional brain activity was evaluated by the regional homogeneity method and compared between groups. Areas with altered regional homogeneity were further selected as seeds in a subsequent functional connectivity analysis. Statistical analyses were performed with current depression and anxiety as covariates. Adults with childhood trauma showed decreased regional homogeneity in the bilateral superior temporal gyrus and insula and the right inferior parietal lobule, as well as increased regional homogeneity in the right cerebellum and left middle temporal gyrus. Regional homogeneity values in the left middle temporal gyrus, right insula and right cerebellum were correlated with childhood trauma severity. In addition, individuals with childhood trauma also exhibited altered default mode network, cerebellum-default mode network and insula-default mode network connectivity when the left middle temporal gyrus, right cerebellum and right insula, respectively, were selected as seed areas. The present outcomes suggest that childhood trauma is associated with disturbed intrinsic brain function, especially in the default mode network, in adults even without psychiatric diagnoses, which may mediate the relationship between childhood trauma and psychiatric disorders in later life.

  6. 24 CFR 886.314 - Financial default.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Financial default. 886.314 Section... Program for the Disposition of HUD-Owned Projects § 886.314 Financial default. In the event of a financial... payments to the mortgagee until such time as the default is cured, or until some other time agreeable to...

  7. 17 CFR 201.155 - Default; motion to set aside default.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false Default; motion to set aside default. 201.155 Section 201.155 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION... instituting proceedings, the allegations of which may be deemed to be true, if that party fails: (1) To appear...

  8. 33 CFR 20.310 - Default by respondent.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Pleadings and Motions § 20.310 Default by respondent. (a) The ALJ may find a respondent in default upon failure to file a timely answer to the complaint or, after motion, upon failure to appear at a conference or hearing without good cause shown. (b) Each motion for default must conform to the rules of form...

  9. 33 CFR 20.310 - Default by respondent.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Pleadings and Motions § 20.310 Default by respondent. (a) The ALJ may find a respondent in default upon failure to file a timely answer to the complaint or, after motion, upon failure to appear at a conference or hearing without good cause shown. (b) Each motion for default must conform to the rules of form...

  10. 22 CFR 221.21 - Event of Default; Application for Compensation; payment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Event of Default; Application for Compensation... GUARANTEE STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 221.21 Event of Default; Application for Compensation; payment. At any time after an Event of Default, as this term is defined in an...

  11. 22 CFR 204.21 - Event of default; Application for compensation; Payment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...

  12. 22 CFR 221.21 - Event of Default; Application for Compensation; payment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Event of Default; Application for Compensation... GUARANTEE STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 221.21 Event of Default; Application for Compensation; payment. At any time after an Event of Default, as this term is defined in an...

  13. 42 CFR 23.28 - What events constitute default?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false What events constitute default? 23.28 Section 23.28... SERVICE CORPS Private Practice Special Loans for Former Corps Members § 23.28 What events constitute default? The following events will constitute defaults of the loan agreement: (a) Failure to make full...

  14. 22 CFR 204.21 - Event of default; Application for compensation; Payment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...

  15. 42 CFR 23.28 - What events constitute default?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false What events constitute default? 23.28 Section 23.28... SERVICE CORPS Private Practice Special Loans for Former Corps Members § 23.28 What events constitute default? The following events will constitute defaults of the loan agreement: (a) Failure to make full...

  16. 22 CFR 204.21 - Event of default; Application for compensation; Payment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...

  17. 22 CFR 221.21 - Event of Default; Application for Compensation; payment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Event of Default; Application for Compensation... GUARANTEE STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 221.21 Event of Default; Application for Compensation; payment. At any time after an Event of Default, as this term is defined in an...

  18. 22 CFR 204.21 - Event of default; Application for compensation; Payment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...

  19. 42 CFR 23.28 - What events constitute default?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false What events constitute default? 23.28 Section 23.28... SERVICE CORPS Private Practice Special Loans for Former Corps Members § 23.28 What events constitute default? The following events will constitute defaults of the loan agreement: (a) Failure to make full...

  20. 22 CFR 204.21 - Event of default; Application for compensation; Payment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...

  1. 42 CFR 23.28 - What events constitute default?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false What events constitute default? 23.28 Section 23.28... SERVICE CORPS Private Practice Special Loans for Former Corps Members § 23.28 What events constitute default? The following events will constitute defaults of the loan agreement: (a) Failure to make full...

  2. 22 CFR 221.21 - Event of Default; Application for Compensation; payment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Event of Default; Application for Compensation... GUARANTEE STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 221.21 Event of Default; Application for Compensation; payment. At any time after an Event of Default, as this term is defined in an...

  3. 24 CFR 27.15 - Notice of default and foreclosure sale.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... sale. 27.15 Section 27.15 Housing and Urban Development Office of the Secretary, Department of Housing... Foreclosure of Multifamily Mortgages § 27.15 Notice of default and foreclosure sale. (a) Within 45 days after... serving a Notice of Default and Foreclosure Sale. (b) The Notice of Default and Foreclosure Sale shall...

  4. Default risk modeling beyond the first-passage approximation: extended Black-Cox model.

    PubMed

    Katz, Yuri A; Shokhirev, Nikolai V

    2010-07-01

    We develop a generalization of the Black-Cox structural model of default risk. The extended model captures uncertainty related to a firm's ability to avoid default even if the company's liabilities momentarily exceed its assets. Diffusion in a linear potential with a radiation boundary condition is used to mimic a company's default process. The exact solution of the corresponding Fokker-Planck equation allows for derivation of analytical expressions for the cumulative probability of default and the relevant hazard rate. The obtained closed-form expressions fit the historical data on global corporate defaults well and demonstrate the split behavior of credit spreads for bonds of companies in different categories of speculative-grade ratings with varying time to maturity. Introduction of a finite rate of default at the boundary improves the valuation of credit risk for short time horizons, which is the key advantage of the proposed model. We also consider the influence of uncertainty in the initial distance to the default barrier on the outcome of the model and demonstrate that this additional source of incomplete information may be responsible for nonzero credit spreads for bonds with very short time to maturity.
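
    For context, the classical first-passage (Black-Cox) default probability that the extended model generalizes has a well-known closed form. The Python sketch below evaluates it for an assumed speculative-grade firm; all parameter values are illustrative, and the extended model's radiation boundary condition is not implemented here.

        from math import exp, log, sqrt
        from scipy.stats import norm

        def first_passage_default_prob(V0, K, mu, sigma, t):
            """P(firm value hits the barrier K by time t), assuming V0 > K
            and geometric Brownian motion for the firm value."""
            b = log(V0 / K)              # initial log-distance to the barrier
            nu = mu - 0.5 * sigma ** 2   # drift of the log firm value
            z1 = (-b - nu * t) / (sigma * sqrt(t))
            z2 = (-b + nu * t) / (sigma * sqrt(t))
            return norm.cdf(z1) + exp(-2.0 * nu * b / sigma ** 2) * norm.cdf(z2)

        # Hypothetical firm with assets 1.3x the default barrier.
        for t in (0.25, 1.0, 5.0):
            print(t, round(first_passage_default_prob(1.3, 1.0, 0.05, 0.25, t), 4))

    Note how the first-passage probability collapses toward zero as t approaches zero, producing near-zero short-maturity credit spreads: this is precisely the limitation that the finite default rate at the boundary in the extended model is designed to remove.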

  5. Directly observed treatment is associated with reduced default among foreign tuberculosis patients in Thailand.

    PubMed

    Kapella, B K; Anuwatnonthakate, A; Komsakorn, S; Moolphate, S; Charusuntonsri, P; Limsomboon, P; Wattanaamornkiat, W; Nateniyom, S; Varma, J K

    2009-02-01

    Thailand's Tuberculosis (TB) Active Surveillance Network in four provinces. As treatment default is common in mobile and foreign populations, we evaluated risk factors for default among non-Thai TB patients in Thailand. Observational cohort study using TB program data. Analysis was restricted to patients with an outcome categorized as cured, completed, failure or default. We used multivariate analysis to identify factors associated with default, including propensity score analysis to adjust for factors associated with receiving directly observed treatment (DOT). During October 2004-September 2006, we recorded data for 14,359 TB patients, of whom 995 (7%) were non-Thais. Of the 791 patients analyzed, 313 (40%) defaulted. In multivariate analysis, age ≥45 years (RR 1.47, 95%CI 1.25-1.74), mobility (RR 2.36, 95%CI 1.77-3.14) and lack of DOT (RR 2.29, 95%CI 1.45-3.61) were significantly associated with default among non-Thais. When controlling for propensity to be assigned DOT, the risk of default remained increased in those not assigned DOT (RR 1.99, 95%CI 1.03-3.85). In non-Thai TB patients, DOT was the only modifiable factor associated with default. Using DOT may help improve TB treatment outcomes in non-Thai TB patients.

  6. Simulating carbon and water fluxes at Arctic and boreal ecosystems in Alaska by optimizing the modified BIOME-BGC with eddy covariance data

    NASA Astrophysics Data System (ADS)

    Ueyama, M.; Kondo, M.; Ichii, K.; Iwata, H.; Euskirchen, E. S.; Zona, D.; Rocha, A. V.; Harazono, Y.; Nakai, T.; Oechel, W. C.

    2013-12-01

    To better predict carbon and water cycles in Arctic ecosystems, we modified a process-based ecosystem model, BIOME-BGC, by introducing new processes: change in active layer depth on permafrost and the phenology of tundra vegetation. The modified BIOME-BGC was then calibrated by optimizing its ecophysiological parameters. The model was constrained using gross primary productivity (GPP) and net ecosystem exchange (NEE) at 23 eddy covariance sites in Alaska, and vegetation/soil carbon from a literature survey. The model was used to simulate regional carbon and water fluxes of Alaska from 1900 to 2011. Simulated regional fluxes were validated against upscaled GPP, ecosystem respiration (RE), and NEE based on two methods: (1) a machine learning technique and (2) a top-down model. Our initial simulation suggests that the original BIOME-BGC with default ecophysiological parameters substantially underestimated GPP and RE for tundra and overestimated those fluxes for boreal forests, indicating that incorporating the active layer depth and plant phenology processes is important when simulating carbon and water fluxes in Arctic ecosystems. We will discuss how optimization using the eddy covariance data impacts the historical simulation by comparing the new version of the model with results from the original BIOME-BGC with default ecophysiological parameters.
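
    As a schematic of the calibration idea, the Python sketch below adjusts the parameters of a toy model so that its output matches synthetic eddy covariance observations. The toy model, cost function and optimizer are stand-ins; BIOME-BGC itself and the optimization method actually used in the study are far more involved.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        weeks = np.linspace(0.0, 2.0 * np.pi, 52)
        obs_gpp = 5.0 + 2.0 * np.sin(weeks) + rng.normal(0.0, 0.3, 52)  # synthetic "observations"

        def toy_model(params):
            base, amplitude = params
            return base + amplitude * np.sin(weeks)

        def cost(params):
            return float(np.sqrt(np.mean((toy_model(params) - obs_gpp) ** 2)))  # RMSE

        default_params = np.array([3.0, 1.0])  # analogous to default ecophysiological parameters
        result = minimize(cost, default_params, method="Nelder-Mead")
        print("default RMSE:", round(cost(default_params), 2),
              "optimized RMSE:", round(result.fun, 2))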

  7. New approach of a transient ICP-MS measurement method for samples with high salinity.

    PubMed

    Hein, Christina; Sander, Jonas Michael; Kautenburger, Ralf

    2017-03-01

    In the near future it will be necessary to establish a disposal facility for high-level nuclear waste (HLW) in deep and stable geological formations. In Germany, typical host rocks are salt or claystone. Suitable clay formations exist in the south and in the north of Germany, and their geochemical conditions differ strongly: in the northern formations, ionic strengths of the pore water of up to 5 M are observed. The determination of parameters like Kd values during sorption experiments of metal ions such as uranium or europium (as homologues for trivalent actinides) onto claystones is very important for long-term safety analysis. The measurement of the low-concentration, non-sorbed analytes commonly takes place by inductively coupled plasma mass spectrometry (ICP-MS). A direct measurement of high-saline samples like seawater, with more than 1% total dissolved salt content, is not possible. Alternatives like sample clean-up, preconcentration or strong dilution have more disadvantages than advantages, for example additional preparation steps or additional and expensive components. With a small modification of the ICP-MS sample introduction system and a home-made reprogramming of the autosampler, a transient measurement method was developed that is suitable for measuring metal ions like europium and uranium in high-saline sample matrices up to 5 M (NaCl). Comparisons at low ionic strength between the default and the transient measurement show that the latter performs as well as the default measurement. Additionally, no time-consuming sample clean-up or expensive online dilution or matrix removal systems are necessary, and the analysis shows high sensitivity because the data processing is based on the peak area.
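
    A minimal sketch of peak-area quantification from a transient signal, assuming a Gaussian-shaped transient and a calibration factor from matrix-matched standards; the signal shape and all constants are illustrative assumptions, not the instrument's actual data processing.

        import numpy as np

        t = np.linspace(0.0, 20.0, 400)                       # time, s
        signal = 1e5 * np.exp(-0.5 * ((t - 8.0) / 1.5) ** 2)  # counts/s, transient peak
        background = 200.0                                    # counts/s baseline

        peak_area = np.trapz(np.clip(signal - background, 0.0, None), t)  # counts
        sensitivity = 2.0e5  # counts per (ug/L), from standards (assumed)
        print(f"analyte concentration ~ {peak_area / sensitivity:.2f} ug/L")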

  8. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    PubMed

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  9. Adjusting the fairshare policy to prevent computing power loss

    NASA Astrophysics Data System (ADS)

    Dal Pra, Stefano

    2017-10-01

    On a typical WLCG site providing batch access to computing resources according to a fairshare policy, the idle time lapse after a job ends and before a new one begins on a given slot is negligible compared to the duration of typical jobs. The overall amount of these intervals over a time window increases with the size of the cluster and the inverse of job duration, and can be considered equivalent to an average number of unavailable slots over that time window. This value has been investigated for the Tier-1 at CNAF and observed to occasionally grow to more than 10% of the roughly 20,000 available computing slots. Analysis reveals that this happens when a sustained rate of short jobs is submitted to the cluster and dispatched by the batch system. Because of how the default fairshare policy works, it increases the dynamic priority of those users mostly submitting short jobs, since they are not accumulating runtime, and will dispatch more of their jobs at the next round, thus worsening the situation until the submission flow ends. To address this problem, the default behaviour of the fairshare has been altered by adding a correcting term to the default formula for the dynamic priority. The LSF batch system, currently adopted at CNAF, provides a way to define this value by invoking a C function, which returns it for each user in the cluster. The correcting term works by rounding the most recently completed jobs up to a minimum defined runtime. In this way, each short job looks almost like a regular one and the dynamic priority settles to a proper value. The net effect is a reduction of the dispatching rate of short jobs and, consequently, a great improvement in the average number of available slots. Furthermore, a potential starvation problem, actually observed at least once, is also prevented. After describing short jobs and reporting on their impact on the cluster, possible workarounds are discussed and the selected solution is motivated. Details on the most critical aspects of the implementation are explained and the observed results are presented.
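
    A minimal sketch of the correcting idea: when computing a user's dynamic fairshare priority, round the runtime of each recently finished job up to a minimum, so heavy short-job submitters accumulate "virtual" runtime and stop winning every dispatch round. The formula and constants are illustrative, not LSF's actual priority function.

        MIN_RUNTIME = 600.0  # seconds: floor applied to each recent job (assumed)

        def dynamic_priority(shares, recent_job_runtimes, running_slots,
                             run_weight=1.0, slot_weight=1.0):
            corrected = sum(max(r, MIN_RUNTIME) for r in recent_job_runtimes)
            # More accumulated (corrected) runtime and more running slots
            # lower the priority, as in a typical fairshare formula.
            return shares / (1.0 + run_weight * corrected + slot_weight * running_slots)

        # User A flooded the cluster with 100 ten-second jobs; user B ran one
        # 8-hour job. With the floor, A no longer outranks B.
        print(dynamic_priority(100.0, [10.0] * 100, 5))   # short-job submitter
        print(dynamic_priority(100.0, [8 * 3600.0], 5))   # regular user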

  10. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, the best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, outperforming the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates the use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
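
    As a schematic of the procedure, the sketch below runs Gaussian-process Bayesian optimization (scikit-optimize) over the two pre-processing parameters. The objective is a stand-in: in the study it would resample at the candidate voxel size, smooth with the candidate kernel, train the SVM and return the validation error. The bounds and the toy optimum are assumptions.

        from skopt import gp_minimize
        from skopt.space import Real

        def pipeline_error(params):
            voxel, kernel = params
            # Toy surrogate with a minimum near (3.7, 3.7), mimicking the idea
            # that an intermediate resampling/smoothing combination works best.
            return (voxel - 3.7) ** 2 + (kernel - 3.7) ** 2

        result = gp_minimize(
            pipeline_error,
            [Real(1.0, 12.0, name="voxel_mm"), Real(0.5, 10.0, name="kernel_mm")],
            n_calls=25, random_state=0)

        print("best voxel/kernel:", [round(x, 2) for x in result.x],
              "error:", round(result.fun, 4))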

  11. 34 CFR Appendix A to Subpart N of... - Sample Default Prevention Plan

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false Sample Default Prevention Plan A Appendix A to Subpart N of Part 668 Education Regulations of the Offices of the Department of Education (Continued) OFFICE... Default Rates Appendix A to Subpart N of Part 668—Sample Default Prevention Plan This appendix is provided...

  12. Student Loans: Characteristics of Students and Default Rates at Historically Black Colleges and Universities. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Health, Education, and Human Services Div.

    This report to Congress analyzes student loan default rates at historically black colleges and universities (HBCUs), focusing on student characteristics which may predict the likelihood of default. The study examined available student databases for characteristics identified by previous studies as related to level of student loan defaults. Among…

  13. 7 CFR 4287.145 - Default by borrower.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Default by borrower. 4287.145 Section 4287.145... Loans § 4287.145 Default by borrower. (a) The lender must notify the Agency when a borrower is 30 days past due on a payment or is otherwise in default of the Loan Agreement. Form FmHA 1980-44, “Guaranteed...

  14. 34 CFR 668.204 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Draft cohort default rates and your ability to challenge before official cohort default rates are issued. 668.204 Section 668.204 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT...

  15. 34 CFR 668.185 - Draft cohort default rates and your ability to challenge before official cohort default rates are...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Draft cohort default rates and your ability to challenge before official cohort default rates are issued. 668.185 Section 668.185 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT...

  16. 49 CFR 260.47 - Events of default for direct loans.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...

  17. 49 CFR 260.47 - Events of default for direct loans.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...

  18. 49 CFR 260.45 - Events of default for guaranteed loans.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...

  19. 49 CFR 260.45 - Events of default for guaranteed loans.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...

  20. 49 CFR 260.45 - Events of default for guaranteed loans.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...

  1. 49 CFR 260.45 - Events of default for guaranteed loans.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...

  2. 49 CFR 260.47 - Events of default for direct loans.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...

  3. 49 CFR 260.47 - Events of default for direct loans.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...

  4. 49 CFR 260.47 - Events of default for direct loans.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...

  5. 49 CFR 260.45 - Events of default for guaranteed loans.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...

  6. Consumer default risk assessment in a banking institution

    NASA Astrophysics Data System (ADS)

    Costa e Silva, Eliana; Lopes, Isabel Cristina; Correia, Aldina; Faria, Susana

    2016-12-01

    Credit scoring is an application of financial risk forecasting to consumer lending. In this study, statistical analysis is applied to credit scoring data from a financial institution to evaluate the default risk of consumer loans. The default risk was found to be influenced by the spread, the age of the consumer, and the number of credit cards owned by the consumer. A lower spread, a higher number of credit cards and a younger age of the borrower are factors that decrease the risk of default. Clients receiving their salary in the same banking institution as the loan have a lower chance of default than clients receiving their salary in another institution. We also found that clients in the lowest income tax echelon have a higher propensity to default.
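
    As a concrete illustration of this kind of credit-scoring analysis, the sketch below fits a logistic regression to synthetic data built around the predictors named in the abstract and reports odds ratios. All column names, coefficients, and data are hypothetical, chosen only to match the reported directions of effect.

      # Sketch: logistic regression for consumer default risk on synthetic data.
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 1000
      df = pd.DataFrame({
          "spread": rng.uniform(0.5, 5.0, n),
          "age": rng.integers(18, 75, n),
          "n_credit_cards": rng.integers(0, 6, n),
          "salary_same_bank": rng.integers(0, 2, n),
      })
      # Synthetic outcome consistent with the abstract: higher spread and
      # older age raise default risk; more credit cards and receiving the
      # salary at the lending bank lower it.
      logit = (0.8 * df["spread"] + 0.03 * df["age"]
               - 0.4 * df["n_credit_cards"] - 0.7 * df["salary_same_bank"] - 2.0)
      df["default"] = rng.random(n) < 1 / (1 + np.exp(-logit))

      model = LogisticRegression().fit(df.drop(columns="default"), df["default"])
      odds_ratios = np.exp(model.coef_[0])
      print(dict(zip(df.columns[:-1], odds_ratios.round(2))))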

  7. A Real Options Approach to Valuing the Risk Transfer in a Multi-Year Procurement Contract

    DTIC Science & Technology

    2009-10-01

    asset follows a Brownian motion process where the returns have a lognormal distribution. H. BLACK-SCHOLES MODEL: The value of the put option p on... risk in a firm-fixed-price contract. The government also provides interest-free financing that can greatly reduce the amount of capital a contractor... structured finance and credit default swap applications. E. OPTIONS THEORY: We will use closed-form BS-type option pricing methods to estimate the
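
    The Black-Scholes put value this excerpt refers to has a standard closed form; the sketch below implements it with illustrative inputs, not the report's contract figures.

      # Sketch: European put value under Black-Scholes (lognormal returns).
      from math import log, sqrt, exp
      from statistics import NormalDist

      def bs_put(S, K, r, sigma, T):
          """Put price for spot S, strike K, rate r, volatility sigma, maturity T."""
          N = NormalDist().cdf
          d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
          d2 = d1 - sigma * sqrt(T)
          return K * exp(-r * T) * N(-d2) - S * N(-d1)

      print(bs_put(S=100, K=100, r=0.03, sigma=0.25, T=1.0))  # ~8.40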

  8. Vertical Ship Motion Study for Ambrose Entrance Channel, New York

    DTIC Science & Technology

    2014-05-01

    channels, PIANC Bulletin 1971, Vol. 1, No. 7, 17-20. Hardy, T. A. 1993. The attenuation of spectral transformation of wind waves on a coral reef... A80(12): 95 p. Hearn, C. J. 1999. Wave-breaking hydrodynamics within coral reef systems and the effect of changing relative sea level, Journal of... Values of cf applied for coral reefs range from 0.05 to 0.40 (Hardy 1993; Hearn 1999 and Lowe et al. 2005). CMS-Wave uses a default value of cf

  9. The Preliminary Pollutant Limit Value Approach: Manual for Users

    DTIC Science & Technology

    1988-07-01

    Contents (excerpt): 5.2.3 Plant Consumption by Dairy Cows (Upd); 5.2.4 Water Consumption by Dairy Cows (Uwd); 5.2.5 Soil... Other equations include the effect of concurrent consumption of soil by grazing cows (equation 19), and for contaminated water intake, such as from a... ingestion of soil by dairy cows, kg/day. A default value of 0.87 kg/day is suggested (see Section 5.2.5). 4.2.6 Direct Soil Intake: two pathway equations are

  10. Third COS FUV Lifetime Position: FUV Target Acquisition Parameter Update {LENA3}

    NASA Astrophysics Data System (ADS)

    Penton, Steven

    2013-10-01

    Verify the ability of the Cycle 22 COS FSW to place an isolated point source at the center of the PSA, using FUV dispersed light target acquisition (TA) from the object and all three FUV gratings at the Third Lifetime Position (LP3). This program is modeled from the activity summary of LENA3. This program should be executed after the LP3 HV, XD spectral positions, aperture mechanism position, and focus are determined and updated. In addition, initial estimates of the LIFETIME=ALTERNATE TA FSW parameters and subarrays should be updated prior to execution of this program. After Visit 01, the subarrays will be updated. After Visit 2, the FUV WCA-to-PSA offsets will be updated. Prior to Visit 6, LV56 will be installed, which will include new values for the LP3 FUV plate scales. VISIT 6 exposures use the default lifetime position (LP3). NUV imaging TAs have previously been used to determine the correct locations for FUV spectra; we follow the same procedure here. Note that the ETC runs here were made using ETC22.2 and are therefore valid for March 2014. Some TDS drop will likely have occurred before these visits execute, but we have plenty of counts to do what we need to do in this program.

  11. Track propagation methods for the correlation of charged tracks with clusters in the calorimeter of the bar PANDA experiment

    NASA Astrophysics Data System (ADS)

    Nasawasd, T.; Simantathammakul, T.; Herold, C.; Stockmanns, T.; Ritman, J.; Kobdaj, C.

    2018-02-01

    To classify clusters of hits in the electromagnetic calorimeter (EMC) of bar PANDA (antiProton ANnihilation at DArmstadt), one has to match these EMC clusters with tracks of charged particles reconstructed from hits in the tracking system. The tracks are therefore propagated to the surface of the EMC and associated with nearby EMC clusters whose distance falls below a cut parameter. In this work, we propose a helix propagator to extrapolate the track from the Straw Tube Tracker (STT) to the inner surface of the EMC, instead of the GEANE propagator already embedded within the PandaRoot computational framework. The results for both propagation methods show a similar quality, with a 30% gain in CPU time when using the helix propagator. We use Monte-Carlo truth information to compare the particle ID of the EMC clusters with the ID of the extrapolated points, thus deciding upon the correctness of the matches. By varying the cut parameter as a function of transverse momentum and particle type, our simulations show that the purity can be increased by 3-5% compared to the default, which is a constant cut in the bar PANDA simulation framework PandaRoot.
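
    A helix propagator of this kind can be sketched compactly: in a uniform solenoidal field the transverse trajectory is a circle, so the track can be stepped analytically in arc length until it crosses the EMC inner radius. The parametrization and numbers below are illustrative and are not PandaRoot's implementation.

      # Sketch: analytic helix extrapolation of a charged track to a cylinder
      # of radius R_emc (a stand-in for the EMC inner surface). A uniform field
      # B along z is assumed; units (cm, GeV/c, Tesla) are illustrative.
      import numpy as np

      def propagate_to_cylinder(pos, mom, q, B, R_emc, ds=0.1, s_max=500.0):
          pT = np.hypot(mom[0], mom[1])
          kappa = 0.003 * q * B / pT           # curvature [1/cm]; R = pT/(0.003 q B)
          phi0 = np.arctan2(mom[1], mom[0])
          tan_lambda = mom[2] / pT
          x0, y0, z0 = pos
          s = 0.0
          while s < s_max:                     # step in transverse arc length
              s += ds
              phi = phi0 + kappa * s
              x = x0 + (np.sin(phi) - np.sin(phi0)) / kappa
              y = y0 - (np.cos(phi) - np.cos(phi0)) / kappa
              z = z0 + tan_lambda * s
              if np.hypot(x, y) >= R_emc:
                  return np.array([x, y, z])   # crossing point on the cylinder
          return None                          # track curls up before the EMC

      print(propagate_to_cylinder(pos=(0, 0, 0), mom=(0.4, 0.1, 0.2),
                                  q=1, B=2.0, R_emc=57.0))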

  12. Effect of varying two key parameters in simulating evacuation for a dormitory in China

    NASA Astrophysics Data System (ADS)

    Lei, Wenjun; Li, Angui; Gao, Ran

    2013-01-01

    Student dormitories are both living and resting areas for students in their spare time. Dormitories contain many small rooms, and students are densely distributed within them; high occupant density is the main characteristic of student dormitories. Once there is an accident, such as a fire or an earthquake, the losses can be severe. Computer evacuation models developed overseas are commonly applied in working out safety management schemes. The average minimum widths of the corridor and exit are the two key parameters affecting evacuation for a dormitory. This paper studies the effect of varying these two parameters, taking a dormitory in our university as an example. Evacuation performance is predicted with the software FDS + Evac. The default values in the software are used and adjusted through a field survey. The effect of varying either of the two parameters is discussed. The simulated results are found to agree well with the experimental results. Our study suggests that the evacuation time is not proportional to the evacuation distance; we also observed a phenomenon we call “the closer is not the faster”. For the building studied in this article, a corridor width of 3 m is the most appropriate, and the suitable exit width of the dormitory for evacuation is about 2.5 to 3 m. The number of people has a great influence on walking speed. The purpose of this study is to optimize the building in favor of personnel evacuation, so that damage can be minimized.

  13. Estimation Of Rheological Law By Inverse Method From Flow And Temperature Measurements With An Extrusion Die

    NASA Astrophysics Data System (ADS)

    Pujos, Cyril; Regnier, Nicolas; Mousseau, Pierre; Defaye, Guy; Jarny, Yvon

    2007-05-01

    Simulation quality is determined by knowledge of the parameters of the model. Yet rheological models for polymers are often not very accurate, since viscosity measurements are made under approximations such as homogeneous temperature and empirical corrections such as the Bagley correction. Furthermore, rheological behaviors are often described by mathematical laws such as the Cross or Carreau-Yasuda models, whose parameters are fitted from viscosity values obtained with corrected experimental data and are not tailored to each polymer. To correct these shortcomings, a table-like rheological model is proposed. This choice makes the estimation of model parameters easier, since each parameter has the same order of magnitude, and because the mathematical shape of the model is not imposed, the estimation process is appropriate for each polymer. The proposed method consists in minimizing the quadratic norm of the difference between calculated variables and measured data. In this study an extrusion die is simulated in order to provide temperature along the extrusion channel, pressure, and flow references. These data characterize the thermal transfer and flow phenomena in which the viscosity is implied, and their different natures allow viscosity to be estimated over a large range of shear rates. The estimated rheological model improves the agreement between measurements and simulation: for numerical cases, the error on the flow becomes less than 0.1% for non-Newtonian rheology. This method, which couples measurements and simulation, constitutes a very accurate means of determining rheology and improves the predictive abilities of the model.
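
    The estimation step, minimizing the squared mismatch between measured and simulated observables over the entries of a viscosity table, can be sketched as below. The linear "simulator" is a toy stand-in for the extrusion-die model, and all values are made up.

      # Sketch: inverse estimation of a table-like viscosity law by least squares.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(0)
      n_nodes = 8                                    # viscosity table nodes
      W = rng.uniform(0.0, 1.0, size=(20, n_nodes))  # toy sensitivity of 20
                                                     # pressure/temperature/flow
                                                     # readings to each node

      def simulate(eta_table):
          """Toy forward model: observables respond linearly to the table."""
          return W @ eta_table

      eta_true = np.linspace(3.0, 1.0, n_nodes)      # decreasing: shear-thinning
      measured = simulate(eta_true) + rng.normal(0, 0.05, 20)

      fit = least_squares(lambda eta: simulate(eta) - measured,
                          x0=np.full(n_nodes, 2.0))
      print(np.round(fit.x, 2))                      # recovered table values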

  14. Effects of pay-for-performance system on tuberculosis default cases control and treatment in Taiwan.

    PubMed

    Tsai, Wen-Chen; Kung, Pei-Tseng; Khan, Mahmud; Campbell, Claudia; Yang, Wen-Ta; Lee, Tsuey-Fong; Li, Ya-Hsin

    2010-09-01

    In order to make tuberculosis (TB) treatment more effective and to lower the treatment default rate, the Bureau of National Health Insurance (BNHI) in Taiwan implemented the "pay-for-performance on Tuberculosis" program (P4P on TB) in 2004. The purpose of this study is to investigate the effectiveness of the P4P system in terms of default rate. This is a retrospective study; National Health Insurance Research Datasets in Taiwan from 2002 to 2005 were used. The study compared the differences in TB default rate before and after the implementation of the P4P program, between participating and non-participating hospitals, and between P4P hospitals with and without case managers. Furthermore, logistic regression analysis was conducted to explore the factors associated with TB patients defaulting from treatment after TB was detected. The treatment default rate after "P4P on TB" was 11.37%, compared with 15.56% before its implementation. The treatment default rate in P4P hospitals was 10.67% compared to 12.7% in non-P4P hospitals. In addition, the default rate was 10.4% in hospitals with case managers compared with 12.68% in hospitals without case managers. The results of the study showed that the "P4P on TB" program improved the treatment default rate for TB patients. In addition, case managers improved the treatment outcome in controlling patients' default rate. Copyright 2010 The British Infection Society. Published by Elsevier Ltd. All rights reserved.

  15. Smear Conversion, Treatment Outcomes and the Time of Default in Registered Tuberculosis Patients on RNTCP DOTS in Puducherry, Southern India

    PubMed Central

    Jayakumar, Niranjana; Gnanasekaran, Dhivyalakshmi

    2014-01-01

    Background: The Revised National Tuberculosis Control Programme (RNTCP) in India has achieved improved cure rates. Objectives: This study describes the achievements under RNTCP in terms of conversion rates, treatment outcomes and the pattern of time of default in patients on directly observed short-course treatment for tuberculosis in Puducherry, Southern India. Settings: Retrospective cohort study; Tuberculosis Unit in District Tuberculosis Centre, Puducherry, India. Materials and Methods: Cohort analysis of patients registered at the Tuberculosis Unit during the 1st and 2nd quarters of the year 2011. Details about sputum conversion, treatment outcome and time of default were obtained from the tuberculosis register. Statistical Analysis: Kaplan-Meier plots & log rank tests. Results: RNTCP targets with respect to success rate (85.7%), death rate (2.7%) and failure rate (2.1%) in new cases have been achieved, but the sputum conversion rate (88%) and default rate (5.9%) targets have not. The overall default rate for all registered TB patients was 7.4%, significantly higher in category II. In retreatment cases registered as treatment after default, the default rate was high (9%). The cumulative default rate, though similar in the initial two months of treatment, was consistently higher in category II than in category I. Nearly 40% of all defaulters interrupted treatment between the second and fourth month after treatment initiation. Conclusion: Defaulting from treatment is more common among retreatment cases and usually occurs during the transition from the intensive phase to the continuation phase. PMID:25478371

  16. Risk factors associated with default among new pulmonary TB patients and social support in six Russian regions.

    PubMed

    Jakubowiak, W M; Bogorodskaya, E M; Borisov, S E; Danilova, I D; Kourbatova, E V

    2007-01-01

    Tuberculosis (TB) services in six Russian regions in which social support programmes for TB patients were implemented. To identify risk factors for default and to evaluate the possible impact of social support. Retrospective study of new pulmonary smear-positive and smear-negative TB patients registered during the second and third quarters of 2003. Data were analysed in a case-control study including default patients as cases and successfully treated patients as controls, using multivariate logistic regression modelling. A total of 1805 cases of pulmonary TB were enrolled. Default rates in the regions were 2.3-6.3%. On multivariate analysis, risk factors independently associated with default included: unemployment (OR 4.44; 95%CI 2.23-8.86), alcohol abuse (OR 1.99; 95%CI 1.04-3.81), and homelessness (OR 3.49; 95%CI 1.25-9.77). Social support reduced the default outcome (OR 0.13; 95%CI 0.06-0.28), controlling for age, sex, region, residence and acid-fast bacilli (AFB) smear of sputum. Unemployment, alcohol abuse and homelessness were associated with increased default among new TB patients, while social support for TB patients reduced default. Further prospective randomised studies are necessary to evaluate the impact and to determine the most cost-effective social support for improving TB treatment outcomes in Russia, especially among populations at risk of default.

  17. Correlates of default from anti-tuberculosis treatment: a case study using Kenya's electronic data system.

    PubMed

    Sitienei, J; Kipruto, H; Mansour, O; Ndisha, M; Hanson, C; Wambu, R; Addona, V

    2015-09-01

    In 2012, the World Health Organization estimated that there were 120,000 new cases and 9500 deaths due to tuberculosis (TB) in Kenya. Almost a quarter of the cases were not detected, and the treatment of 4% of notified cases ended in default. To identify the determinants of anti-tuberculosis treatment default. Data from 2012 and 2013 were retrieved from a national case-based electronic data recording system. A comparison was made between new pulmonary TB patients for whom treatment was interrupted vs. those who successfully completed treatment. A total of 106,824 cases were assessed. Human immunodeficiency virus infection was the single most influential risk factor for default (aOR 2.7). More than 94% of patients received family-based directly observed treatment (DOT) and were more likely to default than patients who received DOT from health care workers (aOR 2.0). Caloric nutritional support was associated with lower default rates (aOR 0.89). Males were more likely to default than females (aOR 1.6). Patients cared for in the private sector were less likely to default than those in the public sector (aOR 0.86). Understanding the factors contributing to default can guide future program improvements and serve as a proxy to understanding the factors that constrain access to care among undetected cases.

  18. 13 CFR 108.1810 - Events of default and SBA's remedies for NMVC Company's noncompliance with terms of Debentures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... diversity between management and ownership as required by § 108.150. (g) SBA remedies for events of default... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Events of default and SBA's... Company's Noncompliance With Terms of Leverage § 108.1810 Events of default and SBA's remedies for NMVC...

  19. 13 CFR 107.1810 - Events of default and SBA's remedies for Licensee's noncompliance with terms of Debentures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Events of default and SBA's... Noncompliance With Terms of Leverage § 107.1810 Events of default and SBA's remedies for Licensee's... time of their issuance. (b) Automatic events of default. The occurrence of one or more of the events in...

  20. 7 CFR 4290.1810 - Events of default and the Secretary's remedies for RBIC's noncompliance with terms of Debentures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Events of default and the Secretary's remedies for... With Terms of Leverage § 4290.1810 Events of default and the Secretary's remedies for RBIC's... and as if fully set forth in the Debentures. (b) Automatic events of default. The occurrence of one or...

  1. Student Loan Default: Do Characteristics of Four-Year Institutions Contribute to the Puzzle?

    ERIC Educational Resources Information Center

    Webber, Karen L.; Rogers, Sharon L.

    2010-01-01

    College student debt and loan default are growing concerns in the United States. For each U.S. institution, the federal government is now reporting a cohort default rate, which is the percent of students who defaulted on their loan, averaged over a three-year period. Previous studies have amply shown that student characteristics are strongly…

  2. Selected Amendments Enacted Since 1980 To Control Guaranteed Student Loan Defaults. CRS Report for Congress.

    ERIC Educational Resources Information Center

    Fraas, Charlotte J.

    Congress, over the past decade, has enacted a number of laws with provisions aimed at preventing defaults and improving collections on defaulted student loans. This report presents a synopsis of legislative provisions enacted to combat student loan defaults beginning with the Education Amendments of 1980. The laws included in the report are:…

  3. Multivariate Analysis of Student Loan Defaulters at Texas A&M University

    ERIC Educational Resources Information Center

    Steiner, Matt; Teszler, Natali

    2005-01-01

    In an effort to better understand student loan default behavior at Texas A&M University (TAMU), the research staff at TG, at the request of TAMU, conducted a study of the relationship between loan default, on the one hand, and many student and borrower characteristics, on the other hand. The study examines the default behavior of 12,776…

  4. Predicting Student Loan Default for the University of Texas at Austin

    ERIC Educational Resources Information Center

    Herr, Elizabeth; Burt, Larry

    2005-01-01

    During spring 2001, Noel-Levitz created a student loan default model for the University of Texas at Austin (UT Austin). The goal of this project was to identify students most likely to default, to identify as risk elements those characteristics that contributed to student loan default, and to use these risk elements to plan and implement targeted,…

  5. Two Studies Assessing the Effectiveness of Early Intervention on the Default Behavior of Student Loan Borrowers

    ERIC Educational Resources Information Center

    Seifert, Charles F.; Wordern, Lorenz

    2004-01-01

    The cost of student loan defaults is a growing problem. At the beginning of this century, defaulted student loans exceeded $25 billion (Student Aid News, 2001). In addition to the costs borne by the taxpayer as the federal government purchases defaulted accounts, there are costs incurred by schools, lenders, loan servicers, and guaranty agencies for…

  6. The maturing architecture of the brain's default network

    PubMed Central

    Fair, Damien A.; Cohen, Alexander L.; Dosenbach, Nico U. F.; Church, Jessica A.; Miezin, Francis M.; Barch, Deanna M.; Raichle, Marcus E.; Petersen, Steven E.; Schlaggar, Bradley L.

    2008-01-01

    In recent years, the brain's “default network,” a set of regions characterized by decreased neural activity during goal-oriented tasks, has generated a significant amount of interest, as well as controversy. Much of the discussion has focused on the relationship of these regions to a “default mode” of brain function. In early studies, investigators suggested that the brain's default mode supports “self-referential” or “introspective” mental activity. Subsequently, regions of the default network have been more specifically related to the “internal narrative,” the “autobiographical self,” “stimulus independent thought,” “mentalizing,” and most recently “self-projection.” However, the extant literature on the function of the default network is limited to adults, i.e., after the system has reached maturity. We hypothesized that further insight into the network's functioning could be achieved by characterizing its development. In the current study, we used resting-state functional connectivity MRI (rs-fcMRI) to characterize the development of the brain's default network. We found that the default regions are only sparsely functionally connected at early school age (7–9 years old); over development, these regions integrate into a cohesive, interconnected network. PMID:18322013

  7. Impact of patient and program factors on default during treatment of multidrug-resistant tuberculosis.

    PubMed

    Gler, M T; Podewils, L J; Munez, N; Galipot, M; Quelapio, M I D; Tupasi, T E

    2012-07-01

    In the Philippines, programmatic treatment of drug-resistant tuberculosis (TB) was initiated by the Tropical Disease Foundation in 1999 and transitioned to the National TB Program in 2006. To determine patient and socio-demographic characteristics associated with default, and the impact of patient support measures on default. Retrospective cohort analysis of 583 MDR-TB patients treated from 1999 to 2006. A total of 88 (15%) patients defaulted from treatment. The median follow-up time for patients who defaulted was 289 days (range 1-846). In multivariate analysis adjusted for age, sex and previous TB treatment, receiving a greater number of treatment drugs (≥ 5 vs. 2-3 drugs, HR 7.2, 95%CI 3.3-16.0, P < 0.001) was significantly associated with an increased risk of default, while decentralization reduced the risk of default (HR 0.3, 95%CI 0.2-0.7, P < 0.001). Improving access to treatment for MDR-TB through decentralization of care to centers near the patient's residence reduced the risk of default. Further research is needed to evaluate the feasibility, impact and cost-effectiveness of decentralized care models for MDR-TB treatment.

  8. Default risk modeling beyond the first-passage approximation: Extended Black-Cox model

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.; Shokhirev, Nikolai V.

    2010-07-01

    We develop a generalization of the Black-Cox structural model of default risk. The extended model captures uncertainty related to a firm's ability to avoid default even when the company's liabilities momentarily exceed its assets. Diffusion in a linear potential with a radiation boundary condition is used to mimic a company's default process. The exact solution of the corresponding Fokker-Planck equation allows for the derivation of analytical expressions for the cumulative probability of default and the relevant hazard rate. The closed formulas obtained fit the historical data on global corporate defaults well and demonstrate the split behavior of credit spreads for bonds of companies in different categories of speculative-grade ratings with varying time to maturity. Introduction of a finite rate of default at the boundary improves the valuation of credit risk for short time horizons, which is the key advantage of the proposed model. We also consider the influence of uncertainty in the initial distance to the default barrier on the outcome of the model and demonstrate that this additional source of incomplete information may be responsible for nonzero credit spreads for bonds with very short time to maturity.
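
    For reference, the classical first-passage (Black-Cox) cumulative default probability that this model extends has a closed form; the sketch below evaluates it with illustrative inputs. The extended model replaces the absorbing barrier with a radiation boundary, which this sketch does not include.

      # Sketch: classical Black-Cox first-passage default probability. Firm
      # value follows geometric Brownian motion; default occurs at the first
      # touch of barrier D.
      from math import log, sqrt, exp
      from statistics import NormalDist

      def black_cox_default_prob(V0, D, mu, sigma, t):
          """P(first passage to barrier D by time t), for V0 > D."""
          N = NormalDist().cdf
          b = log(V0 / D)                 # initial log-distance to default
          m = mu - 0.5 * sigma**2         # drift of log firm value
          st = sigma * sqrt(t)
          return (N((-b - m * t) / st)
                  + exp(-2 * m * b / sigma**2) * N((-b + m * t) / st))

      for t in (1, 5, 10):                # illustrative term structure
          print(t, round(black_cox_default_prob(120, 100, 0.05, 0.25, t), 4))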

  9. Guerrilla Video: A New Protocol for Producing Classroom Video

    ERIC Educational Resources Information Center

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  10. Relieving Consumer Overindebtedness in South Africa: Policy Reviews and Recommendations

    ERIC Educational Resources Information Center

    Ssebagala, Ralph Abbey

    2017-01-01

    A large fraction of South African consumers are highly leveraged, inadequately insured, and/or own little to no assets of value, which increases their exposure not only to idiosyncratic risk but also to severe indebtedness and/or default. This scenario can present negative ramifications that lead well beyond the confines of individual households.…

  11. Development of a Methodology to Guide the Replacement of Agency Default Uncertainty Factors with Those Based on Data

    EPA Science Inventory

    The Agency's guidance for the derivation of RfD and RfC values calls for the downward adjustment of exposure-response levels observed in animals and/or humans to account for the potentially greater sensitivity of humans as compared to test animals (UFA) and the differential sensit...

  12. JEDI Transmission Line Model | Jobs and Economic Development Impact Models

    Science.gov Websites

    Reasonable default values are provided; individual projects may vary, and project-specific data should be used when possible. JEDI Transmission Line Model rel. TL12.23.16; a JEDI Transmission Line Model User Reference Guide is available. When using MS Excel 2007 with macro security set to "High", set the level to "Medium" or "Low" and then re-open the JEDI worksheet.

  13. 40 CFR Table Tt-1 to Subpart Tt of... - Default DOC and Decay Rate Values for Industrial Waste Landfills

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Industrial Waste Landfills TT Table TT-1 to Subpart TT of Part 98 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Industrial Waste... for Industrial Waste Landfills Industry/Waste Type DOC(weight fraction, wet basis) k[dry climatea] (yr...

  14. 40 CFR Table Tt-1 to Subpart Tt of... - Default DOC and Decay Rate Values for Industrial Waste Landfills

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Industrial Waste Landfills TT Table TT-1 to Subpart TT of Part 98 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Industrial Waste... for Industrial Waste Landfills Industry/Waste Type DOC(weight fraction, wet basis) k[dry climatea] (yr...

  15. 40 CFR 98.408 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.408 Definitions. All terms... Default heat contents and emission factors (excerpt): Natural Gas, 1.027 MMBtu/Mscf, 53.02 kg CO2/MMBtu; Propane, 3.836 MMBtu/bbl, 63.02 kg CO2/MMBtu; Normal butane, 4.326 MMBtu/bbl, 64.93... Default CO2 emission value (MT CO2/Unit): Natural Gas, per Mscf, 0.054452; Propane, per Barrel, 0.241745; Normal...

  16. Performance Analysis of Hybrid Electric Vehicle over Different Driving Cycles

    NASA Astrophysics Data System (ADS)

    Panday, Aishwarya; Bansal, Hari Om

    2017-02-01

    This article examines the nature and response of a hybrid vehicle over various standard driving cycles. Road profile parameters play an important role in determining fuel efficiency. Typical road profile parameters can be reduced to a smaller, more useful set using principal component analysis and independent component analysis; the reduced data set yields a more appropriate and informative parameter cluster. With the reduced parameter set, fuel economies over the driving cycles are ranked using the TOPSIS and VIKOR multi-criteria decision-making methods. The ranking trend is then compared with the fuel economies achieved by driving the vehicle over the respective roads. The control strategy responsible for the power split is optimized using a genetic algorithm. A 1RC battery model and a modified SOC estimation method are used in the simulation, and improved results are obtained compared with the defaults.
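
    A minimal sketch of the TOPSIS step follows, with an illustrative decision matrix (fuel economy as a benefit criterion, emissions as a cost criterion); the weights and values are made up.

      # Sketch: TOPSIS ranking of alternatives (e.g. driving cycles).
      import numpy as np

      def topsis(matrix, weights, benefit):
          """Closeness scores for rows of `matrix` (higher = better);
          benefit[j] is True if criterion j is better-when-larger."""
          M = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalize columns
          M = M * weights
          ideal = np.where(benefit, M.max(axis=0), M.min(axis=0))
          anti = np.where(benefit, M.min(axis=0), M.max(axis=0))
          d_pos = np.linalg.norm(M - ideal, axis=1)
          d_neg = np.linalg.norm(M - anti, axis=1)
          return d_neg / (d_pos + d_neg)

      scores = topsis(np.array([[45.0, 120.0],          # three cycles scored on
                                [52.0, 140.0],          # fuel economy (benefit)
                                [48.0, 110.0]]),        # and emissions (cost)
                      weights=np.array([0.6, 0.4]),
                      benefit=np.array([True, False]))
      print(scores.argsort()[::-1] + 1)                 # ranking, best first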

  17. Payload accommodation and development planning tools - A Desktop Resource Leveling Model (DRLM)

    NASA Technical Reports Server (NTRS)

    Hilchey, John D.; Ledbetter, Bobby; Williams, Richard C.

    1989-01-01

    The Desktop Resource Leveling Model (DRLM) has been developed as a tool to rapidly structure and manipulate accommodation, schedule, and funding profiles for any kind of experiments, payloads, facilities, and flight systems or other project hardware. The model creates detailed databases describing 'end item' parameters, such as mass, volume, power requirements or costs and schedules for payload, subsystem, or flight system elements. It automatically spreads costs by calendar quarters and sums costs or accommodation parameters by total project, payload, facility, payload launch, or program phase. Final results can be saved or printed out, automatically documenting all assumptions, inputs, and defaults.

  18. Verification and Validation of the Coastal Modeling System. Report 3: CMS-Flow: Hydrodynamics

    DTIC Science & Technology

    2011-12-01

    Model settings (excerpt): Spectrum: TMA; Directional spreading distribution: Cosine Power; Directional spreading parameter γ: 3.3; Bottom friction: Off (default)...; Ramp duration: 3 hr. The wave breaking formula applied was Battjes and Janssen (1978) because it is the recommended wave breaking formula when using... Li, Z.H., K.D. Nguyen, J.C. Brun-Cottan and J.M. Martin. 1994. Numerical simulation of the turbidity maximum transport in the Gironde Estuary (France

  19. Alcohol, Hospital Discharge, and Socioeconomic Risk Factors for Default from Multidrug Resistant Tuberculosis Treatment in Rural South Africa: A Retrospective Cohort Study

    PubMed Central

    Kendall, Emily A.; Theron, Danie; Franke, Molly F.; van Helden, Paul; Victor, Thomas C.; Murray, Megan B.; Warren, Robin M.; Jacobson, Karen R.

    2013-01-01

    Background Default from multidrug-resistant tuberculosis (MDR-TB) treatment remains a major barrier to cure and epidemic control. We sought to identify patient risk factors for default from MDR-TB treatment and high-risk time periods for default in relation to hospitalization and transition to outpatient care. Methods We retrospectively analyzed a cohort of 225 patients who initiated MDR-TB treatment from 2007 through 2010 at a rural TB hospital in the Western Cape Province, South Africa. Results Fifty percent of patients were cured or completed treatment, 27% defaulted, 14% died, 4% failed treatment, and 5% transferred out. Recent alcohol use was common (63% of patients). In multivariable proportional hazards regression, older age (hazard ratio [HR]= 0.97 [95% confidence interval 0.94-0.99] per year of greater age), formal housing (HR=0.38 [0.19-0.78]), and steady employment (HR=0.41 [0.19-0.90]) were associated with decreased risk of default, while recent alcohol use (HR=2.1 [1.1-4.0]), recent drug use (HR=2.0 [1.0-3.6]), and Coloured (mixed ancestry) ethnicity (HR=2.3 [1.1-5.0]) were associated with increased risk of default (P<0.05). Defaults occurred throughout the first 18 months of the two-year treatment course but were especially frequent among alcohol users after discharge from the initial four-to-five-month in-hospital phase of treatment, with the highest default rates occurring among alcohol users within two months of discharge. Default rates during the first two months after discharge were also elevated for patients who received care from mobile clinics. Conclusions Among patients who were not cured or did not complete MDR-TB treatment, the majority defaulted from treatment. Younger, economically-unstable patients and alcohol and drug users were particularly at risk. For alcohol users as well as mobile-clinic patients, the early outpatient treatment phase is a high-risk period for default that could be targeted in efforts to increase treatment completion rates. PMID:24349518

  20. Alcohol, hospital discharge, and socioeconomic risk factors for default from multidrug resistant tuberculosis treatment in rural South Africa: a retrospective cohort study.

    PubMed

    Kendall, Emily A; Theron, Danie; Franke, Molly F; van Helden, Paul; Victor, Thomas C; Murray, Megan B; Warren, Robin M; Jacobson, Karen R

    2013-01-01

    Default from multidrug-resistant tuberculosis (MDR-TB) treatment remains a major barrier to cure and epidemic control. We sought to identify patient risk factors for default from MDR-TB treatment and high-risk time periods for default in relation to hospitalization and transition to outpatient care. We retrospectively analyzed a cohort of 225 patients who initiated MDR-TB treatment from 2007 through 2010 at a rural TB hospital in the Western Cape Province, South Africa. Fifty percent of patients were cured or completed treatment, 27% defaulted, 14% died, 4% failed treatment, and 5% transferred out. Recent alcohol use was common (63% of patients). In multivariable proportional hazards regression, older age (hazard ratio [HR]= 0.97 [95% confidence interval 0.94-0.99] per year of greater age), formal housing (HR=0.38 [0.19-0.78]), and steady employment (HR=0.41 [0.19-0.90]) were associated with decreased risk of default, while recent alcohol use (HR=2.1 [1.1-4.0]), recent drug use (HR=2.0 [1.0-3.6]), and Coloured (mixed ancestry) ethnicity (HR=2.3 [1.1-5.0]) were associated with increased risk of default (P<0.05). Defaults occurred throughout the first 18 months of the two-year treatment course but were especially frequent among alcohol users after discharge from the initial four-to-five-month in-hospital phase of treatment, with the highest default rates occurring among alcohol users within two months of discharge. Default rates during the first two months after discharge were also elevated for patients who received care from mobile clinics. Among patients who were not cured or did not complete MDR-TB treatment, the majority defaulted from treatment. Younger, economically-unstable patients and alcohol and drug users were particularly at risk. For alcohol users as well as mobile-clinic patients, the early outpatient treatment phase is a high-risk period for default that could be targeted in efforts to increase treatment completion rates.

  1. The rate of sputum smear-positive tuberculosis after treatment default in a high-burden setting: a retrospective cohort study.

    PubMed

    Marx, Florian M; Dunbar, Rory; Enarson, Donald A; Beyers, Nulda

    2012-01-01

    High rates of recurrent tuberculosis after successful treatment have been reported from different high burden settings in Sub-Saharan Africa. However, little is known about the rate of smear-positive tuberculosis after treatment default. In particular, it is not known whether or not treatment defaulters continue to be or become again smear-positive and thus pose a potential for transmission of infection to others. To investigate, in a high tuberculosis burden setting, the rate of re-treatment for smear-positive tuberculosis among cases defaulting from standardized treatment compared to successfully treated cases. Retrospective cohort study among smear-positive tuberculosis cases treated between 1996 and 2008 in two urban communities in Cape Town, South Africa. Episodes of re-treatment for smear-positive tuberculosis were ascertained via probabilistic record linkage. Survival analysis and Poisson regression were used to compare the rate of smear-positive tuberculosis after treatment default to that after successful treatment. A total of 2,136 smear-positive tuberculosis cases were included in the study. After treatment default, the rate of re-treatment for smear-positive tuberculosis was 6.86 (95% confidence interval [CI]: 5.59-8.41) per 100 person-years compared to 2.09 (95% CI: 1.81-2.41) after cure (adjusted Hazard Ratio [aHR]: 3.97; 95% CI: 3.00-5.26). Among defaulters, the rate was inversely associated with treatment duration and sputum conversion prior to defaulting. Smear grade at start of the index treatment episode (Smear3+: aHR 1.61; 95%CI 1.11-2.33) was independently associated with smear-positive tuberculosis re-treatment, regardless of treatment outcome. In this high-burden setting, there is a high rate of subsequent smear-positive tuberculosis after treatment default. Treatment defaulters are therefore likely to contribute to the pool of infectious source cases in the community. Our findings underscore the importance of preventing treatment default, as a means of successful tuberculosis control in high-burden settings.

  2. Cortical Thinning in Network-Associated Regions in Cognitively Normal and Below-Normal Range Schizophrenia

    PubMed Central

    Pinnock, Farena; Parlar, Melissa; Hawco, Colin; Hanford, Lindsay; Hall, Geoffrey B.

    2017-01-01

    This study assessed whether cortical thickness across the brain and regionally in terms of the default mode, salience, and central executive networks differentiates schizophrenia patients and healthy controls with normal range or below-normal range cognitive performance. Cognitive normality was defined using the MATRICS Consensus Cognitive Battery (MCCB) composite score (T = 50 ± 10) and structural magnetic resonance imaging was used to generate cortical thickness data. Whole brain analysis revealed that cognitively normal range controls (n = 39) had greater cortical thickness than both cognitively normal (n = 17) and below-normal range (n = 49) patients. Cognitively normal controls also demonstrated greater thickness than patients in regions associated with the default mode and salience, but not central executive networks. No differences on any thickness measure were found between cognitively normal range and below-normal range controls (n = 24) or between cognitively normal and below-normal range patients. In addition, structural covariance between network regions was high and similar across subgroups. Positive and negative symptom severity did not correlate with thickness values. Cortical thinning across the brain and regionally in relation to the default and salience networks may index shared aspects of the psychotic psychopathology that defines schizophrenia with no relation to cognitive impairment. PMID:28348889

  3. Boreal Winter MJO Teleconnection in the Community Atmosphere Model Version 5 with the Unified Convection Parameterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Changhyun; Park, Sungsu; Kim, Daehyun

    2015-10-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, influences weather and climate in the extratropics through atmospheric teleconnection. In this study, two simulations using the Community Atmosphere Model version 5 (CAM5) - one with the default shallow and deep convection schemes and the other with the Unified Convection scheme (UNICON) - are employed to examine the impacts of cumulus parameterizations on the simulation of the boreal wintertime MJO teleconnection in the Northern Hemisphere. We demonstrate that the UNICON substantially improves the MJO teleconnection. When the UNICON is employed, the simulated circulation anomalies associated with the MJO better resemble the observed counterpart, compared to the simulation with the default convection schemes. Quantitatively, the pattern correlation for the 300-hPa geopotential height anomalies between the simulations and observation increases from 0.07 for the default schemes to 0.54 for the UNICON. These circulation anomalies associated with the MJO further help to enhance the surface air temperature and precipitation anomalies over North America, although room for improvement is still evident. Initial value calculations suggest that the realistic MJO teleconnection with the UNICON is attributable not to changes in the background wind, but primarily to the improved tropical convective heating associated with the MJO.
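
    The pattern-correlation metric quoted here (0.07 versus 0.54) can be computed as a centered, area-weighted correlation between simulated and observed anomaly fields. The sketch below uses cosine-latitude weights and random placeholder fields; the exact weighting convention used in the study is an assumption here.

      # Sketch: area-weighted centered pattern correlation on a lat-lon grid.
      import numpy as np

      def pattern_correlation(sim, obs, lat):
          w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(sim)
          sim_a = sim - np.average(sim, weights=w)
          obs_a = obs - np.average(obs, weights=w)
          cov = np.average(sim_a * obs_a, weights=w)
          return cov / np.sqrt(np.average(sim_a**2, weights=w) *
                               np.average(obs_a**2, weights=w))

      lat = np.linspace(20, 80, 31)             # NH extratropics
      rng = np.random.default_rng(0)
      sim = rng.normal(size=(31, 72))           # placeholder anomaly fields
      obs = rng.normal(size=(31, 72))
      print(pattern_correlation(sim, obs, lat))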

  4. LAND AND WATER USE CHARACTERISTICS AND HUMAN HEALTH INPUT PARAMETERS FOR USE IN ENVIRONMENTAL DOSIMETRY AND RISK ASSESSMENTS AT THE SAVANNAH RIVER SITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, T.; Karapatakis, D.; Lee, P.

    2010-08-06

    Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) Regulatory Guides. Within the regulatory guides, default values are provided for many of the dose model parameters but the use of site-specific values by the applicant is encouraged. A detailed survey of land and water use parameters was conducted in 1991 and is being updated here. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors to be used in human health exposure calculations at SRS are documented. Based on comparisons to the 2009 SRS environmental compliance doses, the following effects are expected in future SRS compliance dose calculations: (1) Aquatic all-pathway maximally exposed individual doses may go up about 10 percent due to changes in the aquatic bioaccumulation factors; (2) Aquatic all-pathway collective doses may go up about 5 percent due to changes in the aquatic bioaccumulation factors that offset the reduction in average individual water consumption rates; (3) Irrigation pathway doses to the maximally exposed individual may go up about 40 percent due to increases in the element-specific transfer factors; (4) Irrigation pathway collective doses may go down about 50 percent due to changes in food productivity and production within the 50-mile radius of SRS; (5) Air pathway doses to the maximally exposed individual may go down about 10 percent due to the changes in food productivity in the SRS area and to the changes in element-specific transfer factors; and (6) Air pathway collective doses may go down about 30 percent mainly due to the decrease in the inhalation rate assumed for the average individual.

  5. Why do Patients in Pre-Anti Retroviral Therapy (ART) Care Default: A Cross-Sectional Study.

    PubMed

    Chakravarty, Jaya; Kansal, Sangeeta; Tiwary, Narendra; Sundar, Shyam

    2016-01-01

    Approximately 40% of the patients registered in the National AIDS Control Program in India are not on antiretroviral therapy (ART), i.e., are in pre-ART care. However, there are scarce data regarding the retention of pre-ART patients under routine program conditions. The main objective of this study was to find out the reasons for default among patients in pre-ART care. Patients enrolled at the ART Centre, Banaras Hindu University (BHU) between January and December 2009 and in pre-ART care were included in the study. Defaulters were those pre-ART patients who missed their last scheduled CD4 count appointment by more than 1 month. Defaulters were traced telephonically in 2011, and those who returned and gave their consent for the study were interviewed using a semi-structured questionnaire. Out of 620 patients in pre-ART care, 384 (68.2%) were defaulters. One hundred forty-four of the defaulters were traced, and only 83 reached the ART center for interview. Among defaulters who did not reach the ART center, the illiterate and unmarried were significantly over-represented, and the mean duration from registration to default was significantly shorter compared with those who came back for the interview. Most defaulters gave more than one reason for defaulting, as follows: inconvenient clinic timings (98%), need for multiple modes of transport (92%), perceived improved health (65%), distance of the center from home (61%), lack of social support (62%), and financial difficulty (59%). Active tracing of pre-ART patients through outreach and strengthening of the Link ART centers will improve the retention of patients in the program.

  6. The effect of a default-based nudge on the choice of whole wheat bread.

    PubMed

    van Kleef, Ellen; Seijdell, Karen; Vingerhoeds, Monique H; de Wijk, René A; van Trijp, Hans C M

    2018-02-01

    Consumer choices are often influenced by the default option presented. This study examines the effect of whole wheat bread as a default option in a sandwich choice situation. Whole wheat bread consists of 100% whole grain and is healthier than other bread types that are commonly consumed, such as brown or white bread. A pilot survey (N = 291) examined the strength of combinations of toppings and bread type as carrier to select stimuli for the main study. In the main experimental study, consisting of a two (bread type) by two (topping type) between-subjects design, participants (N = 226) were given a free sandwich at a university stand with either a relatively unhealthy deep-fried snack (croquette) or a healthy topping. About half of the participants were offered a whole wheat bun unless they asked for a white bun, and the other half were offered a white bun unless they asked for a whole wheat bun. Regardless of the topping, the results show that when the whole wheat bun was the default option, 108 out of 115 participants (94%) decided to stick with this default option. When the default bread offered was white, 89 out of 111 participants (80%) similarly chose to stick with this default. Across conditions, participants felt equally free to make a choice. The attractiveness of and willingness to pay for the sandwich were not affected by the default type of bread. This study demonstrated a strong default effect of bread type. This clearly shows the benefit of steering consumers towards a healthier bread choice by offering healthier default bread at various locations such as restaurants, schools and workplace canteens. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. SeisFlows-Flexible waveform inversion software

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.; Borisov, Dmitry; Lefebvre, Matthieu; Tromp, Jeroen

    2018-06-01

    SeisFlows is an open source Python package that provides a customizable waveform inversion workflow and framework for research in oil and gas exploration, earthquake tomography, medical imaging, and other areas. New methods can be rapidly prototyped in SeisFlows by inheriting from default inversion or migration classes, and code can be tested on 2D examples before application to more expensive 3D problems. Wave simulations must be performed using an external software package such as SPECFEM3D. The ability to interface with external solvers lends flexibility, and the choice of SPECFEM3D as a default option provides optional GPU acceleration and other useful capabilities. Through support for massively parallel solvers and interfaces for high-performance computing (HPC) systems, inversions with thousands of seismic traces and billions of model parameters can be performed. So far, SeisFlows has run on clusters managed by the Department of Defense, Chevron Corp., Total S.A., Princeton University, and the University of Alaska, Fairbanks.
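
    The customization-by-inheritance pattern the abstract describes looks roughly like the sketch below. The class and method names are hypothetical illustrations of the idea, not SeisFlows' actual API.

      # Sketch: prototyping a new method by inheriting from a default
      # inversion workflow and overriding only what changes.
      class InversionBase:
          """Generic iterative waveform-inversion workflow."""
          def run(self, n_iter=5):
              for it in range(n_iter):
                  g = self.compute_gradient(it)
                  d = self.search_direction(g)
                  self.update_model(d)

          def compute_gradient(self, it):
              raise NotImplementedError    # e.g. adjoint simulation via an
                                           # external solver such as SPECFEM3D

          def search_direction(self, g):
              return [-x for x in g]       # default: steepest descent

          def update_model(self, d):
              raise NotImplementedError

      class MyInversion(InversionBase):
          def search_direction(self, g):
              # swap in a preconditioned or quasi-Newton direction here
              return [-0.5 * x for x in g]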

  8. The application of defaults to optimize parents' health-based choices for children.

    PubMed

    Loeb, Katharine L; Radnitz, Cynthia; Keller, Kathleen; Schwartz, Marlene B; Marcus, Sue; Pierson, Richard N; Shannon, Michael; DeLaurentis, Danielle

    2017-06-01

    Optimal defaults is a compelling model from behavioral economics and the psychology of human decision-making, designed to shape or "nudge" choices in a positive direction without fundamentally restricting options. The current study aimed to test the effectiveness of optimal (less obesogenic) defaults and parent empowerment priming on health-based decisions with parent-child (ages 3-8) dyads in a community-based setting. Two proof-of-concept experiments (one on breakfast food selections and one on activity choice) were conducted comparing the main and interactive effects of optimal versus suboptimal defaults, and parent empowerment priming versus neutral priming, on parents' health-related choices for their children. We hypothesized that in each experiment, making the default option more optimal would lead to more frequent health-oriented choices, and that priming parents to be the ultimate decision-makers on behalf of their child's health would potentiate this effect. Results show that in both studies, default condition, but not priming condition or the interaction between default and priming, significantly predicted choice (healthier vs. less healthy option). There was also a significant main effect for default condition (and no effect for priming condition or the interaction term) on the quantity of healthier food children consumed in the breakfast experiment. These pilot studies demonstrate that optimal defaults can be practicably implemented to improve parents' food and activity choices for young children. Results can inform policies and practices pertaining to obesogenic environmental factors in school, restaurant, and home environments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Estimation of methane emission rate changes using age-defined waste in a landfill site.

    PubMed

    Ishii, Kazuei; Furuichi, Toru

    2013-09-01

    Long term methane emissions from landfill sites are often predicted by first-order decay (FOD) models, in which the default coefficients of the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients because they are derived from a calibration procedure under ideal steady-state conditions, not actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict more precise long term methane generation by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under actual landfill site conditions, were collected in Hokkaido, Japan (a cold region), and the time series data on the age-defined waste samples' methane generation potential were used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g-wet waste and 126.7 mL/g-wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients to calculate methane generation, the long term methane emissions from the landfill site were estimated at 1.35×10^4 m^3 CH4, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34×10^5 t CO2/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
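
    With the coefficients estimated in this study, annual methane generation follows directly from the first-order decay model, Q(t) = k · L0 · W · exp(-k · age), summed over past deposits. The sketch below applies this with a made-up deposition history.

      # Sketch: IPCC-style first-order decay (FOD) methane generation using
      # the paper's estimated coefficients; the waste history is illustrative.
      import numpy as np

      COEFF = {                      # k [1/yr], L0 [mL CH4 per g wet waste]
          "paper": (0.0501, 214.4),
          "food":  (0.0621, 126.7),
      }

      def fod_methane(dep_years, dep_mass_g, k, L0, year):
          """CH4 generated in `year` [mL] from all past deposits."""
          ages = year - np.asarray(dep_years)
          mass = np.asarray(dep_mass_g)
          valid = ages >= 0
          return np.sum(k * L0 * mass[valid] * np.exp(-k * ages[valid]))

      years = np.arange(2000, 2010)             # ten years of deposition
      mass = np.full(10, 1e9)                   # 1,000 t (wet) per type per year
      for waste, (k, L0) in COEFF.items():
          print(waste, f"{fod_methane(years, mass, k, L0, 2013):.3e} mL CH4 in 2013")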

  10. Social and Clinical Characteristics of Immigrants with Tuberculosis in South Korea.

    PubMed

    Min, Gee Ho; Kim, Young; Lee, Jong Seok; Oh, Jee Youn; Hur, Gyu Young; Lee, Young Seok; Min, Kyung Hoon; Lee, Sung Yong; Kim, Je Hyeong; Shin, Chol; Lee, Seung Heon

    2017-05-01

    To determine the social and clinical characteristics of immigrants with tuberculosis (TB) in South Korea, registered adult TB patients who were diagnosed and treated in Korea Medical Centers from January 2013 to December 2015 were analyzed retrospectively. A total of 105 immigrants with TB were compared to 932 native Korean TB patients. Among the 105 immigrants with TB, 86 (82%) were Korean-Chinese. The rate of drug-susceptible TB was lower in the immigrant group than in the native Korean group [odds ratio (OR): 0.46; 95% confidence interval (CI): 0.22-0.96, p=0.035]. The cure rate was higher in the immigrant group than in the native Korean group (OR: 2.03; 95% CI: 1.26-3.28, p=0.003). The treatment completion rate was lower in the immigrant group than in the native Korean group (OR: 0.50; 95% CI: 0.33-0.74, p=0.001). However, the treatment success rate showed no significant difference between the two groups (p=0.141). The lost-to-follow-up (default) rate was higher in the immigrant group than in the native Korean group after adjusting for age and drug resistance (OR: 3.61; 95% CI: 1.36-9.61, p=0.010). There were no significant differences between defaulters and non-defaulters in clinical characteristics or types of visa among these immigrants. However, 43 TB patients with recent immigration were diagnosed with TB even though they had been screened as normal at the time of immigration. Efforts to reduce the default rate among immigrants with TB and to reinforce TB screening during the immigration process are needed for TB infection control in South Korea. © Copyright: Yonsei University College of Medicine 2017

  11. The Influence of Aerosol Hygroscopicity on Precipitation Intensity During a Mesoscale Convective Event

    NASA Astrophysics Data System (ADS)

    Kawecki, Stacey; Steiner, Allison L.

    2018-01-01

    We examine how aerosol composition affects precipitation intensity using the Weather Research and Forecasting model with Chemistry (version 3.6). By changing the prescribed default hygroscopicity values to updated values from laboratory studies, we test model assumptions about the individual component hygroscopicity values of ammonium, sulfate, nitrate, and organic species. We compare a baseline simulation (BASE, using default hygroscopicity values) with four sensitivity simulations (SULF, increasing the sulfate hygroscopicity; ORG, decreasing organic hygroscopicity; SWITCH, using a concentration-dependent hygroscopicity value for ammonium; and ALL, including all three changes) to understand the role of aerosol composition on precipitation during a mesoscale convective system (MCS). Overall, the hygroscopicity changes influence the spatial patterns and the intensity of precipitation. Focusing on the maximum precipitation in the model domain downwind of an urban area, we find that changing the individual component hygroscopicities leads to bulk hygroscopicity changes, especially in the ORG simulation. Reducing bulk hygroscopicity (e.g., the ORG simulation) initially causes fewer activated drops, weakened updrafts in the midtroposphere, and increased precipitation from larger hydrometeors. Increasing bulk hygroscopicity (e.g., the SULF simulation) produces more numerous and smaller cloud drops and increases precipitation. In the ALL simulation, a stronger cold pool and downdrafts lead to precipitation suppression later in the MCS evolution. In this downwind region, the combined changes in hygroscopicity (ALL) reduce the overprediction of intense events (>70 mm d⁻¹) and better capture the range of moderate-intensity (30-60 mm d⁻¹) events. The results of this single MCS analysis suggest that aerosol composition can play an important role in simulating high-intensity precipitation events.
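
    A minimal sketch of the volume-weighted kappa mixing rule that underlies such bulk-hygroscopicity changes (Petters and Kreidenweis, 2007); the component kappa values and volume fractions below are illustrative placeholders, not the paper's settings (Python):

        # kappa_bulk = sum_i eps_i * kappa_i, with eps_i the volume fraction of
        # component i. All numbers here are illustrative, not WRF-Chem defaults.
        def bulk_kappa(volume_fractions, kappas):
            assert abs(sum(volume_fractions.values()) - 1.0) < 1e-9
            return sum(eps * kappas[comp] for comp, eps in volume_fractions.items())

        kappas = {"sulfate": 0.61, "nitrate": 0.67, "ammonium": 0.61, "organic": 0.10}
        eps = {"sulfate": 0.3, "nitrate": 0.2, "ammonium": 0.1, "organic": 0.4}
        print(f"kappa_bulk = {bulk_kappa(eps, kappas):.3f}")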

  12. Risk factors and timing of default from treatment for non-multidrug-resistant tuberculosis in Moldova.

    PubMed

    Jenkins, H E; Ciobanu, A; Plesca, V; Crudu, V; Galusca, I; Soltan, V; Cohen, T

    2013-03-01

    The Republic of Moldova, in Eastern Europe, has among the highest reported nationwide proportions of tuberculosis (TB) patients with multidrug-resistant tuberculosis (MDR-TB) worldwide. Default has been associated with increased mortality and amplification of drug resistance, and may contribute to the high MDR-TB rates in Moldova. The objective was to assess risk factors and the timing of default from treatment for non-MDR-TB from 2007 to 2010, using a retrospective analysis of routine surveillance data on all reported non-MDR-TB patients. A total of 14.7% of non-MDR-TB patients defaulted from treatment during the study period. Independent risk factors for default included sociodemographic factors, such as homelessness, living alone, less formal education and spending substantial time outside Moldova in the year prior to diagnosis, and health-related factors, such as human immunodeficiency virus co-infection, greater lung pathology and increasing TB drug resistance. Anti-tuberculosis treatment is usually initiated within an institutional setting in Moldova, and the default risk was highest in the month following the hospitalized phase of treatment (among civilians) and after leaving prison (among those diagnosed while incarcerated). Targeted interventions to increase treatment adherence among patients at highest risk of default, and improved continuity of care for patients transitioning from institutional to community care, may substantially reduce the risk of default.

  13. Risk factors and timing of default from treatment for non-multidrug-resistant tuberculosis in Moldova

    PubMed Central

    Jenkins, Helen E.; Ciobanu, Anisoara; Plesca, Valeriu; Crudu, Valeriu; Galusca, Irina; Soltan, Viorel; Cohen, Ted

    2013-01-01

    SUMMARY Setting The Republic of Moldova, Eastern Europe, 2007–2010. Moldova has among the highest reported nationwide proportions of TB patients with multidrug-resistant tuberculosis (MDR-TB) worldwide. Objective To assess risk factors and timing of default from treatment for non-MDR-TB. Default has been associated with increased mortality and amplification of drug resistance and may contribute to the high MDR-TB rates in Moldova. Design A retrospective analysis of routine surveillance data on all non-MDR-TB patients reported. Results 14.7% of non-MDR-TB patients defaulted from treatment during the study period. Independent risk factors for default included sociodemographic factors (i.e. homelessness, living alone, less formal education and spending substantial time outside Moldova in the year prior to diagnosis) and health-related factors (i.e. HIV-coinfection, greater lung pathology, and increasing TB drug resistance). TB treatment is usually initiated within an institutional setting in Moldova and the default risk was highest in the month following the hospitalized treatment phase (among civilians) and after leaving prison (among those diagnosed while incarcerated). Conclusions Targeted interventions to increase treatment adherence for patients at highest risk of default and improving the continuity of care for patients transitioning from institutional to community care may substantially reduce the default risk. PMID:23407226

  14. Local Risk-Minimization for Defaultable Claims with Recovery Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biagini, Francesca, E-mail: biagini@mathematik.uni-muenchen.de; Cretarola, Alessandra, E-mail: alessandra.cretarola@dmi.unipg.it

    We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0, τ ∧ T], where T denotes the fixed time horizon. We find the pseudo-locally risk-minimizing strategy in the case when the agent's information takes into account the possibility of a default event (local risk-minimization with G-strategies), and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy if we suppose the agent obtains her information only by observing the non-defaultable assets.

  15. 34 CFR 668.193 - Loan servicing appeals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... default rate; or (2) Any cohort default rate upon which a loss of eligibility under § 668.187 is based. (b... request for preclaims or default aversion assistance to the guaranty agency; and (ii) Submit a...

  16. 34 CFR 668.193 - Loan servicing appeals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... default rate; or (2) Any cohort default rate upon which a loss of eligibility under § 668.187 is based. (b... request for preclaims or default aversion assistance to the guaranty agency; and (ii) Submit a...

  17. Student Loan Defaults. Department of Education Limitations in Sanctioning Problem Schools. Report to the Ranking Minority Member, Subcommittee on Human Resources and Intergovernmental Relations, Committee on Government Reform and Oversight, House of Representatives.

    ERIC Educational Resources Information Center

    Blanchette, Cornelia M.

    This report examines the effectiveness of recent federal government efforts through amendments to the Higher Education Act (1993) to reduce student loan defaults. Key measures to curb defaults had been to make schools with high student loan default rates ineligible for federal student loan programs. However, many institutions have challenged…

  18. Computer Center CDC Libraries/NSRDC (Subprograms).

    DTIC Science & Technology

    1981-02-01

    TRANSFORM." COMM, OF THE ACM, VOL, 10, NO. 10, OCTOBER 1967. 3. SYSTEM/360 SCIENTIFIC SUBROUTINE PACKAGE, IBM TECHNICAL PUBLICATONS DEPARTMENT, 1967...VARIABLE 3) UP TO 9 DEPENDENT VARIABLES PER PLOT. FUNCTIONAL CATEGORIES: J5 LANGUAGE: FORTRAN IV USAGE COMMON /PLO/ NRUN, NPLOT, ITP .6), ITY(6), ITX(61...PLO/ NRUN - NUMBER OF THIS RUN iDEFAULT: 1) NPLOT - NUMBER OF PLOT (DEFAULT: 1 ITP - PAGE TITLE (DEFAULT: BLANK) ITY - Y TITLE (DEFAULT: BLANK) ITX - X

  19. The brain's default network: origins and implications for the study of psychosis.

    PubMed

    Buckner, Randy L

    2013-09-01

    The brain's default network is a set of regions that is spontaneously active during passive moments. The network is also active during directed tasks that require participants to remember past events or imagine upcoming events. One hypothesis is that the network facilitates construction of mental models (simulations) that can be used adaptively in many contexts. Extensive research has considered whether disruption of the default network may contribute to disease. While an intriguing possibility, a specific challenge to this notion is the fact that it is difficult to accurately measure the default network in patients, in whom confounds of head motion and compliance are prominent. Nonetheless, some intriguing recent findings suggest that dysfunctional interactions between frontoparietal control systems and the default network contribute to psychosis. Psychosis may be a network disturbance that manifests as disordered thought partly because it disrupts the fragile balance between the default network and competing brain systems.

  20. The brain's default network: origins and implications for the study of psychosis

    PubMed Central

    Buckner, Randy L.

    2013-01-01

    The brain's default network is a set of regions that is spontaneously active during passive moments. The network is also active during directed tasks that require participants to remember past events or imagine upcoming events. One hypothesis is that the network facilitates construction of mental models (simulations) that can be used adaptively in many contexts. Extensive research has considered whether disruption of the default network may contribute to disease. While an intriguing possibility, a specific challenge to this notion is the fact that it is difficult to accurately measure the default network in patients, in whom confounds of head motion and compliance are prominent. Nonetheless, some intriguing recent findings suggest that dysfunctional interactions between frontoparietal control systems and the default network contribute to psychosis. Psychosis may be a network disturbance that manifests as disordered thought partly because it disrupts the fragile balance between the default network and competing brain systems. PMID:24174906

  1. 78 FR 56188 - Wireline Competition Bureau Announces Availability of Version 3.2 of the Connect America Fund...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-12

    ... Communications Systems Group, Inc. (ACS) for Alaska, and using the default value of "1" for the regional cost adjustment for the U.S. Virgin Islands, which has the effect of increasing labor costs. Lastly, the Bureau... Puerto Rico Telephone Company, Inc. (PRTC) and Virgin Islands Telephone Corporation d/b/a Innovative...

  2. Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.

    PubMed

    Kodell, R L; Gaylor, D W

    1999-01-01

    Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
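
    A minimal sketch of this calculation, assuming each uncertainty factor is log-normal so that the log of their product is normal with summed means and variances; the (mu, sigma) pairs are invented for illustration, not the paper's estimates (Python):

        from math import exp, log, sqrt
        from statistics import NormalDist

        factors = [           # (mu_i, sigma_i) of ln(UF_i); illustrative values
            (log(3.0), 0.6),  # interspecies extrapolation
            (log(3.0), 0.6),  # intraspecies (human) variability
            (log(2.0), 0.5),  # duration (subchronic-to-chronic) extrapolation
            (log(2.0), 0.5),  # LOAEL-to-NOAEL extrapolation
        ]
        mu = sum(m for m, s in factors)
        sigma = sqrt(sum(s**2 for m, s in factors))

        # 95th percentile of the product of the four log-normal factors.
        p95 = exp(mu + NormalDist().inv_cdf(0.95) * sigma)
        print(f"combined uncertainty factor at the 95th percentile: {p95:.0f}")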

  3. Factors associated with default from treatment among tuberculosis patients in nairobi province, Kenya: A case control study

    PubMed Central

    2011-01-01

    Background Successful treatment of tuberculosis (TB) involves taking anti-tuberculosis drugs for at least six months. Poor adherence to treatment means patients remain infectious for longer and are more likely to relapse or succumb to tuberculosis; it could also result in treatment failure and foster the emergence of drug-resistant tuberculosis. Kenya is among the countries with a high tuberculosis burden globally. The purpose of this study was to determine the duration tuberculosis patients stay in treatment before defaulting and the factors associated with default in Nairobi. Methods A case-control study; cases were those who defaulted from treatment and controls those who completed the treatment course between January 2006 and March 2008. All 945 defaulters and 1033 randomly selected controls from among 5659 patients who completed the treatment course in 30 high-volume sites were enrolled. Secondary data were collected using a facility questionnaire. From among those enrolled, 120 cases and 154 controls were randomly selected and interviewed to obtain primary data not routinely collected. Data were analyzed using SPSS and Epi Info statistical software. Univariate and multivariate logistic regression analyses were used to determine associations, and the Kaplan-Meier method was used to estimate the probability of staying in treatment over time. Results Of 945 defaulters, 22.7% (215) and 20.4% (193) abandoned treatment within the first and second months (intensive phase) of treatment, respectively. Among the 120 defaulters interviewed, 16.7% (20) attributed their default to ignorance, 12.5% (15) to traveling away from the treatment site, 11.7% (14) to feeling better and 10.8% (13) to side-effects. On multivariate analysis, inadequate knowledge of tuberculosis (OR 8.67; 95% CI 1.47-51.3), herbal medication use (OR 5.7; 95% CI 1.37-23.7), low income (OR 5.57, CI 1.07-30.0), alcohol abuse (OR 4.97; 95% CI 1.56-15.9), previous default (OR 2.33; 95% CI 1.16-4.68), co-infection with human immunodeficiency virus (HIV) (OR 1.56; 95% CI 1.25-1.94) and male gender (OR 1.43; 95% CI 1.15-1.78) were independently associated with default. Conclusion The rate of defaulting was highest during the initial two months, the intensive phase of treatment. Multiple factors were cited by defaulting patients as causes for abandoning treatment, and several were independently associated with default. Enhanced pre-treatment counseling and patient education about TB are recommended. PMID:21906291

  4. Predictors and mortality associated with treatment default in pulmonary tuberculosis.

    PubMed

    Kliiman, K; Altraja, A

    2010-04-01

    To identify risk factors for default from pulmonary tuberculosis (TB) treatment and to assess mortality associated with default in Estonia. All patients with culture-confirmed pulmonary TB who started treatment during 2003-2005 were included in a retrospective cohort study. In 1107 eligible patients, the treatment success rate was 81.5% and the default rate 9.4% (respectively 60.4% and 17.0% in multidrug-resistant TB [MDR-TB]). Independent predictors of treatment default were alcohol abuse (OR 3.22, 95%CI 1.93-5.38), unemployment (OR 3.05, 95%CI 1.84-5.03), MDR-TB (OR 2.17, 95%CI 1.35-3.50), urban residence (OR 1.85, 95%CI 1.00-3.42) and previous incarceration (OR 1.78, 95%CI 1.05-3.03). Of the defaulters, 29.4% died during follow-up (median survival 342.0 days). Cox regression analysis revealed that unemployment was associated with all-cause and TB-related mortality among defaulters (respectively HR 4.58, 95%CI 1.05-20.1 and HR 11.2, 95%CI 1.58-80.2). HIV infection (HR 51.2, 95%CI 6.06-432), sputum smear positivity (HR 9.59, 95%CI 1.79-51.4), MDR-TB (HR 8.56, 95%CI 1.81-40.4) and previous TB (HR 5.15, 95%CI 1.64-16.2) were predictors of TB-related mortality. The main risk factors for treatment default can be influenced. Interventions to reduce default should therefore concentrate on socially disadvantaged patients and prevention of alcohol abuse, with special attention given to MDR-TB patients.

  5. Vehicle Tire and Wheel Creation in BRL-CAD

    DTIC Science & Technology

    2009-04-01

    Contents include: tire tread modeling; setting tire thickness; changing the rim width; changing the radial location of the ... treaded or non-treaded model in the tire-model.c combination based on the analysis. Setting tire thickness: tire thickness is manipulated via ... tread is not modeled by default but can be added using options. Fine-grained control of parameters such as tire thickness is available with ...

  6. Solar Cycle Variations of SABER CO2 and MLS H2O in the Mesosphere and Lower Thermosphere Region

    NASA Astrophysics Data System (ADS)

    Salinas, C. C. J.; Chang, L. C.; Liang, M. C.; Qian, L.; Yue, J.; Russell, J. M., III; Mlynczak, M. G.

    2017-12-01

    This work presents the solar cycle variations of SABER CO2 and MLS H2O in the Mesosphere and Lower Thermosphere region. These observations are compared to SD-WACCM outputs of CO2 and H2O in order to understand their physical mechanisms. We then attempt to model their solar cycle variations using the default TIME-GCM and the TIME-GCM with MERRA reanalysis as lower-boundary conditions. Comparing the outputs of the default TIME-GCM and the TIME-GCM with MERRA gives insight into the relative importance of solar forcing and lower-atmospheric forcing for the solar cycle variations of CO2 and H2O. The solar cycle influence in the parameters is calculated by performing a multiple linear regression against the F10.7 index. The solar cycle signal in SABER CO2 is reliable above 1e-2 mb and below 1e-3 mb. Preliminary results from the observations show that SABER CO2 has a stronger negative anomaly due to the solar cycle over the winter hemisphere. MLS H2O is reliable up to 1e-2 mb. Preliminary results from the observations show that MLS H2O also has a stronger negative anomaly due to the solar cycle over the winter hemisphere. Both SD-WACCM and the default TIME-GCM reproduce these stronger anomalies over the winter hemisphere. An analysis of the tendency equations in SD-WACCM and the default TIME-GCM then reveals that for CO2, the stronger winter anomaly may be attributed to stronger downward transport over the winter hemisphere. For H2O, an analysis of the tendency equations in SD-WACCM reveals that the stronger winter anomaly may be attributed to both stronger downward transport and stronger photochemical loss. In the default TIME-GCM, by contrast, the stronger winter anomaly in H2O may only be attributed to stronger downward transport. For both models, the stronger downward transport is attributed to an enhanced stratospheric polar winter jet during solar maximum. Future work will determine whether setting the lower-boundary conditions of TIME-GCM with MERRA improves the match between TIME-GCM and SD-WACCM. Also, with the TIME-GCM outputs, the influence of these MLT circulation changes on the ionospheric winter anomaly will be determined.
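
    A minimal sketch of the regression step described above: an ordinary-least-squares fit of a time series onto an intercept, a linear trend, and the F10.7 index; the synthetic data are purely illustrative (Python):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 240                                        # months
        t = np.arange(n)
        f107 = 120 + 50 * np.sin(2 * np.pi * t / 132)  # ~11-year solar cycle, sfu
        y = -0.02 * f107 + 0.001 * t + rng.normal(0, 0.5, n)  # fake CO2 anomaly

        X = np.column_stack([np.ones(n), t, f107])     # intercept, trend, solar
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(f"solar-cycle coefficient: {beta[2]:.4f} per sfu (true value: -0.02)")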

  7. Reasons for defaulting from drug-resistant tuberculosis treatment in Armenia: a quantitative and qualitative study.

    PubMed

    Sanchez-Padilla, E; Marquer, C; Kalon, S; Qayyum, S; Hayrapetyan, A; Varaine, F; Bastard, M; Bonnet, M

    2014-02-01

    Armenia, a country with a high prevalence of drug-resistant tuberculosis (DR-TB). To identify factors related to default from DR-TB treatment in Yerevan. Using a retrospective cohort design, we compared defaulters with patients who were cured, completed or failed treatment. Patients who initiated DR-TB treatment from 2005 to 2011 were included in the study. A qualitative survey was conducted including semi-structured interviews with defaulters and focus group discussions with care providers. Of 381 patients, 193 had achieved treatment success, 24 had died, 51 had failed treatment and 97 had defaulted. The number of drugs to which the patient was resistant at admission (aRR 1.16, 95%CI 1.05-1.27), the rate of treatment interruption based on patient's decision (aRR 1.03, 95%CI 1.02-1.05), the rate of side effects (aRR 1.18, 95%CI 1.09-1.27), and absence of culture conversion during the intensive phase (aRR 0.47, 95%CI 0.31-0.71) were independently associated with default from treatment. In the qualitative study, poor treatment tolerance, a perception that treatment was inefficient, lack of information, incorrect perception of being cured, working factors and behavioural problems were factors related to treatment default. In addition to economic reasons, poor tolerance of and poor response to treatment were the main factors associated with treatment default.

  8. Meditation leads to reduced default mode network activity beyond an active task

    PubMed Central

    Garrison, Kathleen A.; Zeffiro, Thomas A.; Scheinost, Dustin; Constable, R. Todd; Brewer, Judson A.

    2015-01-01

    Meditation has been associated with relatively reduced activity in the default mode network, a brain network implicated in self-related thinking and mind wandering. However, previous imaging studies have typically compared meditation to rest despite other studies reporting differences in brain activation patterns between meditators and controls at rest. Moreover, rest is associated with a range of brain activation patterns across individuals that has only recently begun to be better characterized. Therefore, this study compared meditation to another active cognitive task, both to replicate findings that meditation is associated with relatively reduced default mode network activity, and to extend these findings by testing whether default mode activity was reduced during meditation beyond the typical reductions observed during effortful tasks. In addition, prior studies have used small groups, whereas the current study tested these hypotheses in a larger group. Results indicate that meditation is associated with reduced activations in the default mode network relative to an active task in meditators compared to controls. Regions of the default mode showing a group by task interaction include the posterior cingulate/precuneus and anterior cingulate cortex. These findings replicate and extend prior work indicating that suppression of default mode processing may represent a central neural process in long-term meditation, and suggest that meditation leads to relatively reduced default mode processing beyond that observed during another active cognitive task. PMID:25904238

  9. Mitigation of biases in SMOS Level 2 soil moisture retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Mahmoodi, Ali; Richaume, Philippe; Kerr, Yann

    2017-04-01

    The Soil Moisture and Ocean Salinity (SMOS) mission of the European Space Agency (ESA) relies on the L-band Microwave Emission of the Biosphere (L-MEB) radiative transfer models to retrieve soil moisture (SM). These models require, as input, parameters which characterize the target, such as soil water content and temperature. The Soil Water Volume at Level 1 (SWVL1) from the European Centre for Medium-Range Weather Forecasts (ECMWF) is used in the SMOS Level 2 SM algorithms both as an initial guess for SM in the iterative retrieval process and to compute fixed contributions from the so-called "default" fractions. In the case of mixed fractions of nominal (low-vegetation land) and forest, retrieval is performed over one fraction while the contribution of the other is assumed to be fixed and known based on ECMWF data. Studies have shown that ECMWF SWVL1 is biased when compared to SMOS SM and represents values at a deeper soil layer (~7 cm) than that represented by SMOS (~2 to 5 cm). This study uses a well-known bias reduction technique, based on matching the cumulative distribution functions (CDFs) of the two distributions, to help reduce the biases. A linear matching method provides very encouraging early results. A complication in performing CDF matching is that SMOS SM values are not available where they are needed, i.e. over the default fractions. To remedy this, we treat mixed fractions as homogeneous targets and retrieve SM over the whole target. The obtained values are then used to derive the CDF matching coefficients. A set of CDF coefficients derived using the average and standard deviation of soil moisture values for 2014 has been used in reprocessing SMOS data for 2014 and 2015, as well as over selected sites (with in-situ data) over a longer period. The year 2014 was selected because of its lower radio frequency interference (RFI) contamination in comparison with other years. The application of the CDF coefficients has led to wetter SM for many pixels (both in 2014 and 2015) that are close to forested areas. It has also led to improvements in the frequency of successful retrievals for these pixels. These results are in agreement with our current state of knowledge that SMOS is drier than expected near forests, and hence are encouraging and support the future incorporation of CDF matching in the operational processor. We also discuss the performance of the CDF-matched SM values in comparison with the operational ones over a number of sites where in-situ data are available, such as the Soil Climate Analysis Network (SCAN) in North America.
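
    A minimal sketch of the linear (two-moment) CDF matching described above: the source distribution is rescaled so its mean and standard deviation match the target's; the arrays are synthetic stand-ins for ECMWF SWVL1 and SMOS SM (Python):

        import numpy as np

        def linear_cdf_match(src, target):
            """Rescale src to the target's mean and standard deviation."""
            return target.mean() + (src - src.mean()) * (target.std() / src.std())

        rng = np.random.default_rng(1)
        ecmwf = rng.normal(0.30, 0.05, 1000)  # deeper-layer, wetter-biased SM
        smos = rng.normal(0.22, 0.08, 1000)   # shallower-layer SM

        matched = linear_cdf_match(ecmwf, smos)
        print(round(matched.mean(), 3), round(matched.std(), 3))  # ~0.22, ~0.08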

  10. 29 CFR 4219.32 - Interest on overdue, defaulted and overpaid withdrawal liability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., as reported by the Board of Governors of the Federal Reserve System in Statistical Release H.15... default, the date of the missed payment that gave rise to the delinquency or the default. (e) Date paid...

  11. Qualitative study of perceived causes of tuberculosis treatment default among health care workers in Morocco.

    PubMed

    Kizub, D; Ghali, I; Sabouni, R; Bourkadi, J E; Bennani, K; El Aouad, R; Dooley, K E

    2012-09-01

    In Morocco, tuberculosis (TB) treatment default is increasing in some urban areas. To provide a detailed description of factors that contribute to patient default and solutions from the point of view of health care professionals who participate in TB care. In-depth interviews were conducted with 62 physicians and nurses at nine regional public pulmonary clinics and local health clinics. Participants had a median of 24 years of experience in health care. Treatment default was seen as a result of multilevel factors related to the patient (lack of means, being a migrant worker, distance to treatment site, poor understanding of treatment, drug use, mental illness), medical team (high patient load, low motivation, lack of resources for tracking defaulters), treatment organization (poor communication between treatment sites, no systematic strategy for patient education or tracking, incomplete record keeping), and health care system and society. Tailored recommendations for low- and higher-cost interventions are provided. Interventions to enhance TB treatment completion should take into account the local context and multilevel factors that contribute to default. Qualitative studies involving health care workers directly involved in TB care can be powerful tools to identify contributing factors and define strategies to help reduce treatment default.

  12. 7 CFR 2201.33 - Defaults.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Defaults. 2201.33 Section 2201.33 Agriculture Regulations of the Department of Agriculture (Continued) LOCAL TELEVISION LOAN GUARANTEE BOARD LOCAL TELEVISION LOAN GUARANTEE PROGRAM-PROGRAM REGULATIONS Loan Guarantees § 2201.33 Defaults. (a) In determining...

  13. Predictors of default from follow-up care in a cervical cancer screening program using direct visual inspection in south-western Nigeria.

    PubMed

    Ezechi, Oliver Chukwujekwu; Petterson, Karen Odberg; Gbajabiamila, Titilola A; Idigbe, Ifeoma Eugenia; Kuyoro, Olutunmike; Ujah, Innocent Achaya Otobo; Ostergren, Per Olof

    2014-03-31

    Increasing evidence is emerging from South East Asia, southern and east Africa on the burden of default from follow-up care after a positive cervical cancer screen or diagnosis, which impacts negatively on cervical cancer prevention and control. Unfortunately, little or no information exists on the subject in the West African sub-region. This study was designed to determine the proportion of, predictors of, and reasons for default from follow-up care after a positive cervical cancer screen. Women who screened positive at community cervical cancer screening using direct visual inspection were followed up to determine the proportion defaulting and the associated factors. Multivariate logistic regression was used to determine independent predictors of default. Of 673 women screened, the 108 (16.1%) who screened positive on direct visual inspection were enrolled into the study. Fifty-one (47.2%) of the 108 women who screened positive defaulted from the follow-up appointment. Women who were poorly educated (OR: 3.1, CI: 2.0-5.2), lived more than 10 km from the clinic (OR: 2.0, CI: 1.0-4.1), or had never been screened for cervical cancer before (OR: 3.5, CI: 3.1-8.4) were more likely to default from follow-up after screening positive for a precancerous lesion of the cervix. The main reasons for default were the cost of transportation (48.6%) and time constraints (25.7%). The rate of default was high (47.2%) as a result of unaffordable transportation costs and limited time to keep the scheduled appointment. A change from the present strategy that involves multiple visits to a "see and treat" strategy, in which both testing and treatment are performed at a single visit, is recommended.

  14. Risk factors associated with default among retreatment tuberculosis patients on DOTS in Paschim Medinipur district (West Bengal).

    PubMed

    Sarangi, S S; Dutt, D

    2014-07-01

    In India in 2010, 14.1% of retreatment TB patients had a treatment outcome of 'default'. Since 2002, in Paschim Medinipur district (West Bengal), the figure has been around 15-20%. The objective was to determine the timing, characteristics and risk factors associated with default among retreatment TB patients on DOTS. This was a case-control study conducted in six TB units (TUs) of Paschim Medinipur district, selected by simple random sampling. Data were collected from the treatment records of the TUs/DTC. Data were also collected through interviews, using the same pre-tested semi-structured questionnaire, of 87 defaulters and 86 consecutively registered non-defaulters registered from the first quarter of 2009 to the second quarter of 2010. The median duration of treatment taken before default was 121 days (inter-quartile range 64-176 days). The median number of doses taken before default was 36 (inter-quartile range 26-63 doses). No retrieval action was documented in 57.5% of cases. Retrieval was done within 0-7 days of missed doses in 29.9% of cases. Multiple logistic regression analysis indicated the following important risk factors for default, at the 95% confidence interval: male sex [aOR 3.957 (1.162-13.469)], alcoholic inebriation [aOR 6.076 (2.088-17.675)], distance from the DOT centre [aOR 4.066 (1.675-9.872)], number of missed doses during treatment [aOR 1.849 (1.282-2.669)] and no initial home visit [aOR 10.607 (2.286-49.221)]. In Paschim Medinipur district, default among retreatment TB patients occurs mostly after a few doses in the continuation phase. Initial home visits, patient-provider meetings, retrieval action and community-based treatment as per RNTCP guidelines are required to strengthen the programme.

  15. Weight and cost estimating relationships for heavy lift airships

    NASA Technical Reports Server (NTRS)

    Gray, D. W.

    1979-01-01

    Weight and cost estimating relationships, including additional parameters that influence the cost and performance of heavy-lift airships (HLA), are discussed. Inputs to a closed-loop computer program, consisting of useful load, forward speed, lift-module positive or negative thrust, and rotors and propellers, are examined. Detail is given to the HLA cost and weight program (HLACW), which computes component weights, vehicle size, buoyancy lift, rotor and propeller thrust, and engine horsepower. This program solves the problem of interrelating the different aerostat, rotor, engine and propeller sizes. Six sets of 'default parameters' are left for the operator to change during each computer run, enabling slight data manipulation without altering the program.
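
    A purely illustrative fixed-point loop in the spirit of such a closed-loop sizing program (not the HLACW code itself); every constant here is invented (Python):

        LIFT_PER_M3 = 1.05       # kg of buoyant lift per m^3 of helium (illustrative)
        STRUCT_FRACTION = 0.35   # structure mass as a fraction of buoyant lift
        ROTOR_THRUST = 20_000.0  # kg of lift from the rotor modules
        USEFUL_LOAD = 50_000.0   # kg, the design requirement

        # Guess an envelope volume, size the structure from it, and iterate until
        # buoyant lift plus rotor thrust carries both structure and useful load.
        volume = 100_000.0  # m^3, initial guess
        for i in range(100):
            structure = STRUCT_FRACTION * LIFT_PER_M3 * volume
            new_volume = (USEFUL_LOAD + structure - ROTOR_THRUST) / LIFT_PER_M3
            if abs(new_volume - volume) < 1.0:
                break
            volume = new_volume
        print(f"converged envelope volume ~ {volume:,.0f} m^3 in {i + 1} iterations")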

  16. Morality constrains the default representation of what is possible.

    PubMed

    Phillips, Jonathan; Cushman, Fiery

    2017-05-02

    The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.

  17. Burden Calculator: a simple and open analytical tool for estimating the population burden of injuries.

    PubMed

    Bhalla, Kavi; Harrison, James E

    2016-04-01

    Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. To provide a simple and open-source software tool that allows estimation of incidence-DALYs due to injury, given data on incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need to only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALYs estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
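
    A back-of-envelope sketch of the DALY arithmetic such a calculator performs (DALY = YLL + YLD); all numbers are invented, not the tool's default parameter set (Python):

        def dalys(deaths, life_expectancy, cases, disability_weight, duration_y):
            yll = deaths * life_expectancy                # years of life lost
            yld = cases * disability_weight * duration_y  # years lived with disability
            return yll + yld

        # e.g., one age/sex/external-cause stratum of injury data
        total = dalys(deaths=120, life_expectancy=42.0,
                      cases=3_500, disability_weight=0.2, duration_y=0.5)
        print(f"{total:.0f} DALYs")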

  18. Present-Day Vegetation Helps Quantifying Past Land Cover in Selected Regions of the Czech Republic

    PubMed Central

    Abraham, Vojtěch; Oušková, Veronika; Kuneš, Petr

    2014-01-01

    The REVEALS model is a tool for recalculating pollen data into vegetation abundances on a regional scale. We explored the general effect of selected parameters by performing simulations and ascertained the best model setting for the Czech Republic using the shallowest samples from 120 fossil sites and data on actual regional vegetation (60 km radius). Vegetation proportions of 17 taxa were obtained by combining the CORINE Land Cover map with forest inventories, agricultural statistics and habitat mapping data. Our simulation shows that changing the site radius for all taxa substantially affects REVEALS estimates of taxa with heavy or light pollen grains. Decreasing the site radius has a similar effect as increasing the wind speed parameter. However, adjusting the site radius to 1 m for local taxa only (even taxa with light pollen) yields lower, more correct estimates despite their high pollen signal. Increasing the background radius does not affect the estimates significantly. Our comparison of estimates with actual vegetation in seven regions shows that the most accurate relative pollen productivity estimates (PPEs) come from Central Europe and Southern Sweden. The initial simulation and pollen data yielded unrealistic estimates for Abies under the default setting of the wind speed parameter (3 m/s). We therefore propose a setting of 4 m/s, which corresponds to the spring average in most regions of the Czech Republic studied. Ad hoc adjustment of PPEs with this setting improves the match 3-4-fold. We consider these values (apart from four exceptions) to be appropriate because they lie within the standard-error ranges of the original PPEs. Setting a 1 m radius for local taxa (Alnus, Salix, Poaceae) significantly improves the match between estimates and actual vegetation. However, further adjustments to PPEs exceed the ranges of the original values, so their relevance is uncertain. PMID:24936973
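
    A simplified sketch of the REVEALS-style correction (ignoring the model's error structure and the details of the dispersal term): the regional abundance of a taxon is proportional to its pollen count divided by its relative pollen productivity and a dispersal factor; all values are illustrative (Python):

        def reveals_estimate(counts, ppe, K):
            """Normalized vegetation proportions from PPE/dispersal-corrected counts."""
            corrected = {t: n / (ppe[t] * K[t]) for t, n in counts.items()}
            total = sum(corrected.values())
            return {t: round(v / total, 3) for t, v in corrected.items()}

        counts = {"Pinus": 500, "Quercus": 200, "Poaceae": 300}  # pollen grains
        ppe = {"Pinus": 6.4, "Quercus": 5.8, "Poaceae": 1.0}     # rel. productivity
        K = {"Pinus": 1.1, "Quercus": 0.9, "Poaceae": 1.0}       # dispersal factors
        print(reveals_estimate(counts, ppe, K))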

  19. 34 CFR 668.217 - Default prevention plans.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Default prevention plans. 668.217 Section 668.217 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.217...

  20. 24 CFR 985.109 - Default under the Annual Contributions Contract (ACC).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Contributions Contract (ACC). 985.109 Section 985.109 Housing and Urban Development REGULATIONS RELATING TO... § 985.109 Default under the Annual Contributions Contract (ACC). HUD may determine that an PHA's failure... required by HUD constitutes a default under the ACC. ...

  1. 24 CFR 985.109 - Default under the Annual Contributions Contract (ACC).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Contributions Contract (ACC). 985.109 Section 985.109 Housing and Urban Development Regulations Relating to... § 985.109 Default under the Annual Contributions Contract (ACC). HUD may determine that an PHA's failure... required by HUD constitutes a default under the ACC. ...

  2. 29 CFR 2570.5 - Consequences of default.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Consequences of default. 2570.5 Section 2570.5 Labor Regulations Relating to Labor (Continued) EMPLOYEE BENEFITS SECURITY ADMINISTRATION, DEPARTMENT OF LABOR... ERISA Section 502(i) § 2570.5 Consequences of default. For prohibited transaction penalty proceedings...

  3. Specifics on a XML Data Format for Scientific Data

    NASA Astrophysics Data System (ADS)

    Shaya, E.; Thomas, B.; Cheung, C.

    An XML-based data format for interchange and archiving of scientific data would benefit in many ways from the features standardized in XML. Foremost of these features is the world-wide acceptance and adoption of XML. Applications, such as browsers, XQL and XSQL advanced query, XML editing, or CSS or XSLT transformation, that are coming out of industry and academia can be easily adopted and provide startling new benefits and features. We have designed a prototype of a core format for holding, in a very general way, parameters, tables, scalar and vector fields, atlases, animations and complex combinations of these. This eXtensible Data Format (XDF) makes use of XML functionalities such as: self-validation of document structure, default values for attributes, XLink hyperlinks, entity replacements, internal referencing, inheritance, and XSLT transformation. An API is available to aid in detailed assembly, extraction, and manipulation. Conversion tools to and from FITS and other existing data formats are under development. In the future, we hope to provide object oriented interfaces to C++, Java, Python, IDL, Mathematica, Maple, and various databases. http://xml.gsfc.nasa.gov/XDF
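
    A hypothetical illustration of how such a format might encode a parameter with a default value; the element and attribute names here are invented, not the actual XDF schema (Python):

        import xml.etree.ElementTree as ET

        # Build a toy document: a parameter carrying units and a default value.
        root = ET.Element("XDF", name="sample_dataset")
        param = ET.SubElement(root, "parameter", name="exposure_time",
                              units="s", default="30.0")
        ET.SubElement(param, "value").text = "45.2"

        print(ET.tostring(root, encoding="unicode"))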

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, George

    VICE 2.0 is the second generation of the VICE financial model developed by the National Renewable Energy Laboratory for fleet managers to assess the financial soundness of converting their fleets to run on CNG. VICE 2.0 uses a number of variables for infrastructure and vehicles to estimate the business case for decision-makers when considering CNG as a vehicle fuel. Enhancements in version 2.0 include the ability to select the project type (vehicles and infrastructure, or vehicle acquisitions only), and to decouple vehicle acquisition from the infrastructure investment, so the two investments may be made independently. Outputs now include graphical presentations of investment cash flow, payback period (simple and discounted), petroleum displacement (annual and cumulative), and annual greenhouse gas reductions. Also, the vehicle data are now built around several common conventionally fueled (gasoline and diesel) fleet vehicles. Descriptions of the various model sections and available inputs follow. Each description includes default values for the base-case business model, which was created so economic sensitivities can be investigated by altering various project parameters one at a time.
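
    A minimal sketch of the simple and discounted payback arithmetic such a fleet model performs; the capital cost and savings figures are invented, not VICE defaults (Python):

        def payback_years(capex, annual_savings, discount_rate=0.0, horizon=30):
            """First year in which cumulative (discounted) savings cover capex."""
            cumulative = 0.0
            for year in range(1, horizon + 1):
                cumulative += annual_savings / (1 + discount_rate) ** year
                if cumulative >= capex:
                    return year
            return None  # does not pay back within the horizon

        capex = 1_200_000.0  # CNG station plus vehicle cost increment, $
        savings = 180_000.0  # annual fuel savings, $
        print("simple payback:", payback_years(capex, savings), "years")
        print("discounted (7%):", payback_years(capex, savings, 0.07), "years")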

  5. Experimental investigation and CFD simulation of multi-pipe earth-to-air heat exchangers (EAHEs) flow performance

    NASA Astrophysics Data System (ADS)

    Amanowicz, Łukasz; Wojtkowiak, Janusz

    2017-11-01

    In this paper the experimentally obtained flow characteristics of multi-pipe earth-to-air heat exchangers (EAHEs) were used to validate an EAHE flow-performance numerical model prepared using the CFD software Ansys Fluent. Cut-cell meshing and the realizable k-ε turbulence model with default coefficient values and enhanced wall treatment were used. The total pressure losses and the airflow in each pipe of the multi-pipe exchangers were investigated both experimentally and numerically. The results show that the airflow in the individual pipes of multi-pipe EAHE structures is not equal. The validated numerical model can be used for proper design of multi-pipe EAHEs from the flow-characteristics point of view. The influence of EAHE geometrical parameters on the total pressure losses and on the airflow division between the exchanger pipes can also be analysed. Using CFD to design EAHEs can help HVAC (heating, ventilation and air conditioning) engineers optimize the geometrical structure of multi-pipe EAHEs in order to save energy and decrease the operational costs of low-energy buildings.
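
    An illustrative calculation of why parallel EAHE pipes need not share flow equally: each branch sees the same manifold pressure drop dp, and for turbulent flow dp ≈ R·Q², so Q_i = sqrt(dp/R_i); the branch resistances below are invented (Python):

        from math import sqrt

        R = [12.0, 10.5, 9.8, 9.2]  # branch resistances, Pa/(m^3/s)^2 (invented)
        Q_TOTAL = 0.25              # total airflow, m^3/s

        # Bisect for the dp at which the branch flows sum to Q_TOTAL.
        lo, hi = 0.0, 1e4
        for _ in range(60):
            dp = 0.5 * (lo + hi)
            if sum(sqrt(dp / r) for r in R) > Q_TOTAL:
                hi = dp
            else:
                lo = dp

        flows = [round(sqrt(dp / r), 4) for r in R]
        print(flows, "m^3/s per pipe; dp =", round(dp, 4), "Pa")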

  6. 26 CFR 20.2041-1 - Powers of appointment; in general.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Chapter 11. For example, if a trust created by S provides for payment of the income to A for life with... income to A's widow, W, for her life and for payment of the remainder to A's estate, the value of A's... to A for life, then to W for life, with power in A to appoint the remainder by will and in default of...

  7. 26 CFR 20.2041-1 - Powers of appointment; in general.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Chapter 11. For example, if a trust created by S provides for payment of the income to A for life with... income to A's widow, W, for her life and for payment of the remainder to A's estate, the value of A's... to A for life, then to W for life, with power in A to appoint the remainder by will and in default of...

  8. 26 CFR 20.2041-1 - Powers of appointment; in general.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Chapter 11. For example, if a trust created by S provides for payment of the income to A for life with... income to A's widow, W, for her life and for payment of the remainder to A's estate, the value of A's... to A for life, then to W for life, with power in A to appoint the remainder by will and in default of...

  9. 40 CFR 1066.610 - Dilution air background correction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... a = atomic hydrogen-to-carbon ratio of the test fuel (you may measure a or use default values from Table 1 of 40 CFR 1065.655); b = atomic oxygen-to-carbon ratio of the test fuel (you may measure b or use ...). [Equation given as graphic ER28AP14.100.] Where: xCO2 = amount of CO2 measured in the sample over the test interval; xNMHC = amount of ...
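
    A reading aid for this kind of correction, patterned on the 40 CFR 1065.667 background-subtraction form x_corr = x_meas − x_bkgnd·(1 − 1/DF); this is a sketch, not the regulation's exact Part 1066 equation, and the numbers are invented (Python):

        def background_correct(x_meas_ppm, x_bkgnd_ppm, dilution_factor):
            """Subtract background scaled by the dilution-air fraction."""
            return x_meas_ppm - x_bkgnd_ppm * (1.0 - 1.0 / dilution_factor)

        print(background_correct(x_meas_ppm=25.0, x_bkgnd_ppm=2.0,
                                 dilution_factor=8.0))  # -> 23.25 ppm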

  10. 40 CFR Table Nn-2 to Subpart Nn of... - Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING, Suppliers of Natural Gas and Natural Gas Liquids, Pt. 98, Subpt. NN, Table NN-2. Table NN-2 to Subpart NN of Part 98 (default values, .../unit):
        Natural gas      Mscf     0.0544
        Propane          Barrel   0.241
        Normal butane    Barrel   0.281
        Ethane           Barrel   0.170
    ...
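
    An illustrative use of per-unit default factors like those excerpted above: multiply supplied quantities by the default factor for each fuel; the factor units are truncated in the excerpt, so they are labeled generically here (Python):

        FACTORS = {  # fuel: (unit, default factor per unit, from the excerpt)
            "natural_gas":   ("Mscf",   0.0544),
            "propane":       ("barrel", 0.241),
            "normal_butane": ("barrel", 0.281),
            "ethane":        ("barrel", 0.170),
        }

        supplied = {"natural_gas": 10_000, "propane": 500}  # quantity, in each unit
        total = sum(FACTORS[f][1] * qty for f, qty in supplied.items())
        print(f"total = {total:.1f} (default factor x supplied quantity)")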

  11. The relationship between adherence to clinic appointments and year-one mortality for newly enrolled HIV infected patients at a regional referral hospital in Western Kenya, January 2011-December 2012.

    PubMed

    Kimeu, Muthusi; Burmen, Barbara; Audi, Beryl; Adega, Anne; Owuor, Karen; Arodi, Susan; Bii, Dennis; Zielinski-Gutiérrez, Emily

    2016-01-01

    This retrospective cohort analysis was conducted to describe the association between adherence to clinic appointments and mortality one year after enrollment into HIV care. We examined appointment adherence for patients newly enrolled between January 2011 and December 2012 at a regional referral hospital in western Kenya. The outcomes of interest were patient default, risk factors for repeat default, and year-one risk of death. Of 582 enrolled patients, 258 (44%) were defaulters. Generalized estimating equations (GEE) revealed that patients who had defaulted once were significantly more likely to default repeatedly (OR 1.4; 95% CI 1.12-1.77), especially the unemployed (OR 1.43; 95% CI 1.07-1.91), smokers (OR 2.22; 95% CI 1.31-3.76), and those with no known disclosure (OR 2.17; 95% CI 1.42-3.3). Nineteen patients (3%) died during the follow-up period. Cox proportional hazards regression revealed that the risk of death was significantly higher among defaulters (HR 3.12; 95% CI 1.2-8.0) and increased in proportion to the rate of patient default; the HR was 4.05 (95% CI 1.38-11.81) and 4.98 (95% CI 1.45-17.09) for a cumulative 4-60 days and ≥60 days, respectively, elapsed between all scheduled and actual clinic appointment dates. Risk factors for repeat default suggest a need to deliver targeted adherence programs.
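
    A minimal sketch of the survival-analysis step described above, fitting a Cox proportional-hazards model of time to death with default status as a covariate; the tiny dataset is synthetic, and lifelines is one possible library choice, not necessarily the authors' (Python):

        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({  # synthetic follow-up data (months)
            "months_followed": [12, 10, 5, 9, 12, 3, 12, 7, 12, 11],
            "died":            [0,  1,  1, 0,  0,  1, 0,  1, 0,  0],
            "defaulter":       [0,  0,  1, 1,  0,  1, 0,  1, 1,  0],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="months_followed", event_col="died")
        print(cph.hazard_ratios_)  # hazard ratio for 'defaulter'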

  12. Risk factors for treatment default among adult tuberculosis patients in Indonesia.

    PubMed

    Rutherford, M E; Hill, P C; Maharani, W; Sampurno, H; Ruslami, R

    2013-10-01

    Defaulting from anti-tuberculosis treatment hinders tuberculosis (TB) control. To identify potential defaulters, we conducted a cohort study in newly diagnosed Indonesian TB patients. We administered a questionnaire, prospectively identified defaulters (those who discontinued treatment for ≥2 weeks) and assessed risk factors using Cox regression. Of 249 patients, 39 (16%) defaulted, 61% of them in the first 2 months. Default was associated with liver disease (HR 3.40, 95%CI 1.02-11.78), chest pain (HR 2.25, 95%CI 1.06-4.77), night sweats (HR 1.98, 95%CI 1.03-3.79), characteristics of the head of the household (self-employed, HR 2.47, 95%CI 1.15-5.34; patient's mother, HR 7.72, 95%CI 1.66-35.88), household wealth (HR 4.24, 95%CI 1.12-16.09), walking to the clinic (HR 4.53, 95%CI 1.39-14.71), being unaccompanied at diagnosis (HR 30.49, 95%CI 7.55-123.07) or when collecting medication (HR 3.34, 95%CI 1.24-8.98), and a low level of satisfaction with the clinic (HR 3.85, 95%CI 1.17-12.62) or doctors (HR 2.45, 95%CI 1.18-5.10). Health insurance (HR 0.24, 95%CI 0.07-0.74) and paying for diagnosis (HR 0.14, 95%CI 0.04-0.48) were protective. Defaulting is common and occurs early. Interventions that improve clinic services, strengthen patient support and increase insurance coverage may reduce default in Indonesia.

  13. Who are the patients that default tuberculosis treatment? - space matters!

    PubMed

    Nunes, C; Duarte, R; Veiga, A M; Taylor, B

    2017-04-01

    The goals of this article are: (i) to understand how individual characteristics affect the likelihood of patients defaulting from their pulmonary tuberculosis (PTB) treatment regimens; (ii) to quantify the predictive capacity of these risk factors; and (iii) to quantify and map spatial variation in the risk of defaulting. We used logistic regression models and generalized additive models with a spatial component to determine the odds of default across continental Portugal. We focused on new PTB cases diagnosed between 2000 and 2013, and included some individual information (sex, age, residence area, alcohol abuse, intravenous drug use, homelessness, HIV, imprisonment status). We found that the global default rate was 4·88%, higher in individuals with well-known risk profiles (males, immigrants, HIV positive, homeless, prisoners, alcohol and drug users). Of specific epidemiological interest, our geographical analysis found that Portugal's main urban areas (the two biggest cities) and one tourist region have higher default rates compared to the rest of the country, after adjusting for the previously mentioned risk factors. The challenge of treatment defaulting, whether due to other non-measured individual characteristics, healthcare system failure or patient recalcitrance, requires further analysis in the spatio-temporal domain. Our findings suggest the presence of significant within-country variation in the risk of defaulting that cannot be explained by these classical individual risk factors alone. The methods we advocate are simple to implement and could easily be applied to other diseases.
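
    A minimal sketch of a logistic GAM with a smooth spatial term, in the spirit of the analysis above (default ~ age + smooth(longitude, latitude)); the data are synthetic, and pyGAM is one possible library choice, not necessarily the authors' (Python):

        import numpy as np
        from pygam import LogisticGAM, s, te

        rng = np.random.default_rng(2)
        n = 2000
        age = rng.uniform(18, 80, n)
        lon = rng.uniform(-9.5, -6.2, n)  # roughly continental Portugal
        lat = rng.uniform(37.0, 42.2, n)
        # Elevated default risk near an (invented) urban hotspot.
        logit = -3.5 + 0.01 * age + 0.8 * np.exp(-((lon + 9.1)**2 + (lat - 38.7)**2))
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X = np.column_stack([age, lon, lat])
        gam = LogisticGAM(s(0) + te(1, 2)).fit(X, y)  # smooth age + spatial surface
        gam.summary()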

  14. Meditation leads to reduced default mode network activity beyond an active task.

    PubMed

    Garrison, Kathleen A; Zeffiro, Thomas A; Scheinost, Dustin; Constable, R Todd; Brewer, Judson A

    2015-09-01

    Meditation has been associated with relatively reduced activity in the default mode network, a brain network implicated in self-related thinking and mind wandering. However, previous imaging studies have typically compared meditation to rest, despite other studies having reported differences in brain activation patterns between meditators and controls at rest. Moreover, rest is associated with a range of brain activation patterns across individuals that has only recently begun to be better characterized. Therefore, in this study we compared meditation to another active cognitive task, both to replicate the findings that meditation is associated with relatively reduced default mode network activity and to extend these findings by testing whether default mode activity was reduced during meditation, beyond the typical reductions observed during effortful tasks. In addition, prior studies had used small groups, whereas in the present study we tested these hypotheses in a larger group. The results indicated that meditation is associated with reduced activations in the default mode network, relative to an active task, for meditators as compared to controls. Regions of the default mode network showing a Group × Task interaction included the posterior cingulate/precuneus and anterior cingulate cortex. These findings replicate and extend prior work indicating that the suppression of default mode processing may represent a central neural process in long-term meditation, and they suggest that meditation leads to relatively reduced default mode processing beyond that observed during another active cognitive task.

  15. Further evidence of altered default network connectivity and association with theory of mind ability in schizophrenia.

    PubMed

    Mothersill, Omar; Tangney, Noreen; Morris, Derek W; McCarthy, Hazel; Frodl, Thomas; Gill, Michael; Corvin, Aiden; Donohoe, Gary

    2017-06-01

    Resting-state functional magnetic resonance imaging (rs-fMRI) has repeatedly shown evidence of altered functional connectivity of large-scale networks in schizophrenia. The relationship between these connectivity changes and behaviour (e.g. symptoms, neuropsychological performance) remains unclear. Functional connectivity in 27 patients with schizophrenia or schizoaffective disorder, and in 25 age- and gender-matched healthy controls, was examined using rs-fMRI. Based on seed regions from previous studies, we examined the functional connectivity of the default, cognitive control, affective and attention networks. Effects of symptom severity and theory of mind performance on functional connectivity were also examined. Patients showed increased connectivity between key nodes of the default network, including the precuneus and medial prefrontal cortex, compared to controls (p<0.01, FWE-corrected). Increasing positive symptoms and increasing theory of mind performance were both associated with altered connectivity of default regions within the patient group (p<0.01, FWE-corrected). This study confirms previous findings of default hyper-connectivity in schizophrenia spectrum patients and reveals an association between altered default connectivity and positive symptom severity. As a novel finding, this study also shows that default connectivity is correlated with and predictive of theory of mind performance. Extending these findings by examining the effects of emerging social cognition treatments on both default connectivity and theory of mind performance is now an important goal for research. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. 24 CFR 907.7 - Remedies for substantial default.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Remedies for substantial default... URBAN DEVELOPMENT SUBSTANTIAL DEFAULT BY A PUBLIC HOUSING AGENCY § 907.7 Remedies for substantial... staff; or (3) Provide assistance deemed necessary, in the discretion of HUD, to remedy emergency...

  17. 7 CFR 1779.75 - Defaults by borrower.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Defaults by borrower. 1779.75 Section 1779.75 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE (CONTINUED) WATER AND WASTE DISPOSAL PROGRAMS GUARANTEED LOANS § 1779.75 Defaults by borrower. (a...

  18. 10 CFR 609.15 - Default, demand, payment, and collateral liquidation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Default, demand, payment, and collateral liquidation. 609.15 Section 609.15 Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS LOAN GUARANTEES FOR PROJECTS THAT EMPLOY INNOVATIVE TECHNOLOGIES § 609.15 Default, demand, payment, and collateral liquidation...

  19. 24 CFR 266.515 - Record retention.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Project Management... insurance remains in force. (b) Defaults and claims. Records pertaining to a mortgage default and claim must be retained from the date of default through final settlement of the claim for a period of no less...

  20. 45 CFR 672.10 - Default order.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION ENFORCEMENT AND..., an admission of all facts alleged in the complaint and a waiver of respondent's right to a hearing on... with the Hearing Clerk. (c) Contents of a default order. A default order shall include findings of fact...
