Sample records for DWPF PCCS models

  1. SME Acceptability Determination For DWPF Process Control (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.

    2017-06-12

    The statistical system described in this document is called the Product Composition Control System (PCCS). K. G. Brown and R. L. Postles were the originators and developers of this system as well as the authors of the first three versions of this technical basis document for PCCS. PCCS has guided acceptability decisions for the processing at the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS) since the start of radioactive operations in 1996. The author of this revision to the document gratefully acknowledges the firm technical foundation that Brown and Postles established to support the ongoing successful operation at the DWPF. Their integration of the glass property-composition models, developed under the direction of C. M. Jantzen, into a coherent and robust control system has served the DWPF well over the last 20+ years, even as new challenges, such as the introduction into the DWPF flowsheet of auxiliary streams from the Actinide Removal Process (ARP) and other processes, were met. The purpose of this revision is to provide a technical basis for modifications to PCCS required to support the introduction of waste streams from the Salt Waste Processing Facility (SWPF) into the DWPF flowsheet. An expanded glass composition region is anticipated with the introduction of waste streams from SWPF, and property-composition studies of that glass region have been conducted. Jantzen, once again, directed the development of glass property-composition models applicable to this expanded composition region. The author gratefully acknowledges the technical contributions of C. M. Jantzen leading to the development of these glass property-composition models. The integration of these models into the PCCS constraints necessary to administer future acceptability decisions for the processing at DWPF is provided by this sixth revision of this document.

  2. Defense Waste Processing Facility (DWPF) Durability-Composition Models and the Applicability of the Associated Reduction of Constraints (ROC) Criteria for High TiO2 Containing Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C. M.; Edwards, T. B.; Trivelpiece, C. L.

    Radioactive high-level waste (HLW) at the Savannah River Site (SRS) has successfully been vitrified into borosilicate glass in the DWPF since 1996. Vitrification requires stringent product/process (P/P) constraints since the glass cannot be reworked once it has been poured into ten-foot-tall by two-foot-diameter canisters. A unique “feed forward” statistical process control (SPC) was developed for this control rather than relying on statistical quality control (SQC). In SPC, the feed composition to the DWPF melter is controlled prior to vitrification. In SQC, the glass product would be sampled after it is vitrified. Individual glass property-composition models form the basis for the “feed forward” SPC. The models transform constraints on the melt and glass properties into constraints on the feed composition going to the melter in order to determine, at the 95% confidence level, that the feed will be processable and that the durability of the resulting waste form will be acceptable to a geologic repository. The DWPF SPC system is known as the Product Composition Control System (PCCS). One of the process models within PCCS is known as the Thermodynamic Hydration Energy Reaction MOdel (THERMO™). The DWPF will soon be receiving TiO2-, Na2O-, and Cs2O-enriched wastes from the Salt Waste Processing Facility (SWPF). The SWPF has been built to pretreat the high-curie fraction of the salt waste to be removed from the HLW tanks in the F- and H-Area Tank Farms at the SRS. In order to validate the existing TiO2 term in THERMO™ beyond 2.0 wt% in the DWPF, new durability data were developed over the target range of 2.00 to 6.00 wt% TiO2 and evaluated against the 1995 durability model. The durability was measured by the 7-day Product Consistency Test. This study documents the adequacy of the existing THERMO™ terms.
It is recommended that the modified THERMO™ durability models and the modified property acceptable region limits for the durability constraints be incorporated in the next revision of the technical bases for PCCS and then implemented into PCCS. It is also recommended that a reduction of constraints of 4 wt% Al2O3 be implemented with no restrictions on the amount of alkali in the glass for TiO2 values ≥2 wt%. The ultimate limit on the amount of TiO2 that can be accommodated from SWPF will be determined by the three PCCS models, the waste composition of a given sludge batch, the waste loading of the sludge batch, and the frit used for vitrification.
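
    The "feed forward" control described in this record can be sketched in a few lines: a fitted property-composition model predicts a melt or glass property from the feed composition, and acceptability requires that a 95% confidence bound on the prediction, not just the point estimate, satisfy the property limit. This is only a hedged illustration of the idea; the coefficients, oxide terms, and limits below are hypothetical placeholders, not actual PCCS model terms.

```python
# Hedged sketch of feed-forward statistical process control (SPC):
# predict a property from the feed composition with a linear
# property-composition model, then require the 95% upper bound on the
# prediction to stay within the property limit. All numbers here are
# illustrative placeholders, not PCCS values.

def predict_property(composition, coeffs, intercept):
    """Linear property-composition model: property = b0 + sum(bi * xi)."""
    return intercept + sum(coeffs[ox] * composition.get(ox, 0.0)
                           for ox in coeffs)

def acceptable(composition, coeffs, intercept, upper_limit, half_width_95):
    """Feed-forward check: the upper 95% bound on the prediction,
    not just the point estimate, must not exceed the property limit."""
    pred = predict_property(composition, coeffs, intercept)
    return pred + half_width_95 <= upper_limit

coeffs = {"Na2O": 0.12, "B2O3": 0.08, "Al2O3": -0.05}  # hypothetical
feed = {"Na2O": 12.0, "B2O3": 8.0, "Al2O3": 5.0}       # wt%, hypothetical
print(acceptable(feed, coeffs, intercept=1.0, upper_limit=4.0,
                 half_width_95=0.3))
```

    The key distinction from after-the-fact SQC is that this check runs on the feed composition before vitrification, so an unacceptable batch can be adjusted rather than reworked.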

  3. SLURRY MIX EVAPORATOR BATCH ACCEPTABILITY AND TEST CASES OF THE PRODUCT COMPOSITION CONTROL SYSTEM WITH THORIUM AS A REPORTABLE ELEMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.

    2010-10-07

    The Defense Waste Processing Facility (DWPF), which is operated by Savannah River Remediation, LLC (SRR), has recently begun processing Sludge Batch 6 (SB6) by combining it with Frit 418 at a nominal waste loading (WL) of 36%. A unique feature of the SB6/Frit 418 glass system, as compared to the previous glass systems processed in DWPF, is that thorium will be a reportable element (i.e., concentrations of elemental thorium in the final glass product greater than 0.5 weight percent (wt%)) for the resulting wasteform. Several activities were initiated based upon this unique aspect of SB6. One of these was an investigation into the impact of thorium on the models utilized in DWPF's Product Composition Control System (PCCS). While the PCCS is described in more detail below, for now note that it is utilized by Waste Solidification Engineering (WSE) to evaluate the acceptability of each batch of material in the Slurry Mix Evaporator (SME) before this material is passed on to the melter. The evaluation employs models that predict properties associated with processability and product quality from the composition of vitrified samples of the SME material. The investigation of the impact of thorium on these models was conducted by Peeler and Edwards [1] and led to a recommendation that DWPF can process the SB6/Frit 418 glass system with ThO{sub 2} concentrations up to 1.8 wt% in glass. Questions also arose regarding the handling of thorium in the SME batch acceptability process as documented by Brown, Postles, and Edwards [2]. Specifically, that document is the technical basis of PCCS, and while Peeler and Edwards confirmed the reliability of the models, there is a need to confirm that the current implementation of DWPF's PCCS appropriately handles thorium as a reportable element.
Realization of this need led to a Technical Task Request (TTR) prepared by Bricker [3] that identified some specific SME-related activities that the Savannah River National Laboratory (SRNL) was requested to conduct. SRNL issued a Task Technical and Quality Assurance (TT&QA) plan [4] in response to the SRR request. The conclusions provided in this report are that no changes need to be made to the SME acceptability process (i.e., no modifications to WSRC-TR-95-00364, Revision 5, are needed) and no changes need to be made to the Product Composition Control System (PCCS) itself (i.e., the spreadsheet utilized by Waste Solidification Engineering (WSE) for acceptability decisions does not require modification) in response to thorium becoming a reportable element for DWPF operations. In addition, the inputs and results for the two test cases requested by WSE for use in confirming the successful activation of thorium as a reportable element for DWPF operations during the processing of SB6 are presented in this report.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D.; Edwards, T.; Fox, K.

    The Savannah River National Laboratory (SRNL) has developed, and continues to enhance, its integrated capability to evaluate the impact of proposed sludge preparation plans on the Defense Waste Processing Facility's (DWPF's) operation. One of the components of this integrated capability focuses on frit development, which identifies a viable frit or frits for each sludge option being contemplated for DWPF processing. A frit is considered viable if its composition allows for economic fabrication and if, when it is combined with the sludge option under consideration, the DWPF property/composition models (the models of DWPF's Product Composition Control System (PCCS)) indicate that the combination has the potential for an operating window (a waste loading (WL) interval over which the sludge/frit glass system satisfies processability and durability constraints) that would allow DWPF to meet its goals for waste loading and canister production. This report documents the results of SRNL's efforts to identify candidate frit compositions and corresponding predicted operating windows (defined in terms of WL intervals) for the February 2007 compositional projection of Sludge Batch 4 (SB4) developed by the Liquid Waste Organization (LWO). The nominal compositional projection was used to assess projected operating windows (in terms of a waste loading interval over which all predicted properties were classified as acceptable) for various frits, evaluate the applicability of the 0.6 wt% SO{sub 4}{sup =} PCCS limit to the glass systems of interest, and determine the impact (or lack thereof) to the previous SB4 variability studies. It should be mentioned that the information from this report will be coupled with assessments of melt rate to recommend a frit for SB4 processing. The results of this paper study suggest that candidate frits are available to process the nominal SB4 composition over attractive waste loadings of interest to DWPF.
Specifically, two primary candidate frits for SB4 processing, Frit 510 and Frit 418, have projected operating windows that should allow for successful processing at DWPF. While Frit 418 has been utilized at DWPF, Frit 510 is a higher-B{sub 2}O{sub 3} frit which could lead to improvements in melt rate. These frits provide relatively large operating windows and demonstrate robustness to possible sludge compositional variation while avoiding potential nepheline formation issues. In addition, assessments of SO{sub 4}{sup =} solubility indicate that the 0.6 wt% SO{sub 4}{sup =} limit in PCCS is applicable for the Frit 418 and the Frit 510 based SB4 glass systems.
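
    The operating-window concept used in this record can be illustrated with a small scan: blend a nominal sludge with a candidate frit at each waste loading and record the WL interval over which all constraints hold. This is a hedged sketch only; the sludge and frit compositions and the two stand-in constraints below are hypothetical, not the actual PCCS constraint set.

```python
# Hedged sketch of an operating-window assessment: compute the glass
# composition at each waste loading (WL) and keep the WLs where all
# (illustrative) constraints are satisfied. Compositions and limits
# are hypothetical placeholders.

def blend(sludge, frit, wl):
    """Glass composition (wt% oxides) at waste loading wl (0-1 fraction)."""
    oxides = set(sludge) | set(frit)
    return {ox: wl * sludge.get(ox, 0.0) + (1 - wl) * frit.get(ox, 0.0)
            for ox in oxides}

def satisfies_constraints(glass):
    # Stand-ins for the PCCS processability/durability checks.
    return glass.get("Al2O3", 0.0) >= 3.0 and glass.get("Na2O", 0.0) < 19.3

sludge = {"Al2O3": 12.0, "Na2O": 25.0, "Fe2O3": 30.0}         # hypothetical
frit = {"SiO2": 76.0, "B2O3": 8.0, "Na2O": 8.0, "Li2O": 8.0}  # hypothetical

window = [wl for wl in range(25, 51)
          if satisfies_constraints(blend(sludge, frit, wl / 100.0))]
print(f"operating window: {window[0]}-{window[-1]}% WL" if window
      else "no operating window")
```

    A "paper study" of candidate frits amounts to repeating this scan for each frit and comparing the resulting WL intervals against the facility's waste-loading goals.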

  5. Defense Waste Processing Facility (DWPF) Viscosity Model: Revisions for Processing High TiO2 Containing Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C. M.; Edwards, T. B.

    Radioactive high-level waste (HLW) at the Savannah River Site (SRS) has successfully been vitrified into borosilicate glass in the Defense Waste Processing Facility (DWPF) since 1996. Vitrification requires stringent product/process (P/P) constraints since the glass cannot be reworked once it is poured into ten-foot-tall by two-foot-diameter canisters. A unique “feed forward” statistical process control (SPC) was developed for this control rather than statistical quality control (SQC). In SPC, the feed composition to the DWPF melter is controlled prior to vitrification. In SQC, the glass product would be sampled after it is vitrified. Individual glass property-composition models form the basis for the “feed forward” SPC. The models transform constraints on the melt and glass properties into constraints on the feed composition going to the melter in order to guarantee, at the 95% confidence level, that the feed will be processable and that the durability of the resulting waste form will be acceptable to a geologic repository. The DWPF SPC system is known as the Product Composition Control System (PCCS). The DWPF will soon be receiving wastes from the Salt Waste Processing Facility (SWPF) containing increased concentrations of TiO2, Na2O, and Cs2O. The SWPF is being built to pretreat the high-curie fraction of the salt waste to be removed from the HLW tanks in the F- and H-Area Tank Farms at the SRS. In order to process TiO2 concentrations >2.0 wt% in the DWPF, new viscosity data were developed over the range of 1.90 to 6.09 wt% TiO2 and evaluated against the 2005 viscosity model. An alternate viscosity model is also derived for potential future use, should the DWPF ever need to process other titanate-containing ion exchange materials.
The ultimate limit on the amount of TiO2 that can be accommodated from SWPF will be determined by the three PCCS models, the waste composition of a given sludge batch, the waste loading of the sludge batch, and the frit used for vitrification.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D.; Edwards, T.

    High-level waste (HLW) throughput (i.e., the amount of waste processed per unit of time) is primarily a function of two critical parameters: waste loading (WL) and melt rate. For the Defense Waste Processing Facility (DWPF), increasing HLW throughput would significantly reduce the overall mission life cycle costs for the Department of Energy (DOE). Significant increases in waste throughput have been achieved at DWPF since initial radioactive operations began in 1996. Key technical and operational initiatives that supported increased waste throughput included improvements in facility attainment, the Chemical Processing Cell (CPC) flowsheet, process control models, and frit formulations. As a result of these key initiatives, DWPF increased WLs from a nominal 28% for Sludge Batch 2 (SB2) to {approx}34 to 38% for SB3 through SB6 while maintaining or slightly improving canister fill times. Although considerable improvements in waste throughput have been obtained, future contractual waste loading targets are nominally 40%, while canister production rates are also expected to increase (to a rate of 325 to 400 canisters per year). Although implementation of bubblers has made a positive impact on increasing melt rate for recent sludge batches targeting WLs in the mid-30s, higher WLs will ultimately make the feeds to DWPF more challenging to process. Savannah River Remediation (SRR) recently requested the Savannah River National Laboratory (SRNL) to perform a paper study assessment using future sludge projections to evaluate whether the current Product Composition Control System (PCCS) algorithms would provide projected operating windows to allow future contractual WL targets to be met. More specifically, the objective of this study was to evaluate future sludge batch projections (based on Revision 16 of the HLW Systems Plan) with respect to projected operating windows using current PCCS models and associated constraints.
Based on the assessments, the waste loading interval over which a glass system (i.e., a projected sludge composition with a candidate frit) is predicted to be acceptable can be defined (i.e., the projected operating window), which will provide insight into the ability to meet future contractual WL obligations. In this study, future contractual WL obligations are assumed to be 40%, which is the goal after all flowsheet enhancements have been implemented to support DWPF operations. For a system to be considered acceptable, candidate frits must be identified that provide access to at least 40% WL while accounting for potential variation in the sludge resulting from differences in batch-to-batch transfers into the Sludge Receipt and Adjustment Tank (SRAT) and/or analytical uncertainties. In more general terms, this study will assess whether or not the current glass formulation strategy (based on the use of the Nominal and Variation Stage assessments) and current PCCS models will allow access to the compositional regions required to target higher WLs for future operations. Some of the key questions to be considered in this study include: (1) If higher WLs are attainable with current process control models, are the models valid in these compositional regions? If the higher WL glass regions are outside current model development or validation ranges, is there existing data that could be used to demonstrate model applicability (or lack thereof)? If not, experimental data may be required to revise current models or serve as validation data for the existing models. (2) Are there compositional trends in frit space that are required by the PCCS models to obtain access to these higher WLs? If so, are there potential issues with the compositions of the associated frits (e.g., limitations on the B{sub 2}O{sub 3} and/or Li{sub 2}O concentrations) as they are compared to model development/validation ranges or to the definition of a 'borosilicate' glass?
If limitations on the frit compositional range are realized, what is the impact of these restrictions on other glass properties, such as the ability to suppress nepheline formation or influence melt rate? The model-based assessments being performed assume that the process control models are applicable over the glass compositional regions being evaluated. Although the glass compositional region of interest is ultimately defined by the specific frit, sludge, and WL interval used, there is no prescreening of these compositional regions with respect to the model development or validation ranges, which is consistent with current DWPF operations.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raszewski, F.; Edwards, T.; Peeler, D.

    Sludge Batch 4 (SB4) is currently being processed in the Defense Waste Processing Facility (DWPF) using Frit 510. The slurry pumps in Tank 40 are experiencing in-leakage of bearing water, which is causing the sludge slurry feed in Tank 40 to become dilute at a rapid rate. Currently, the DWPF is removing this dilution water by performing caustic boiling during the Sludge Receipt and Adjustment Tank (SRAT) cycle. In order to alleviate prolonged SRAT cycle times that may eventually impact canister production rates, decant scenarios of 100, 150, and 200 kilogallons of supernate were proposed for Tank 40 during the DWPF March outage. Based on the results of the preliminary assessment issued by the Savannah River National Laboratory (SRNL), the Liquid Waste Organization (LWO) issued a Technical Task Request (TTR) for SRNL to (1) perform a more detailed evaluation using updated SB4 compositional information and (2) assess the viability of Frit 510 and determine any potential impacts on the SB4 system. As defined in the TTR, LWO requested that SRNL validate the sludge-only SB4 flowsheet and the coupled operations flowsheet using the 100K gallon decant volume as well as the addition of 3 wt% sodium on a calcined oxide basis. Approximately 12 historical glasses were identified during a search of the ComPro™ database that are located within at least one of the five glass regions defined by the proposed SB4 flowsheet options. While these glasses meet the requirements of a variability study, there was some concern that the compositional coverage did not adequately bound all cases. Therefore, SRNL recommended that a supplemental experimental variability study be performed to support the various SB4 flowsheet options that may be implemented for future SB4 operations in DWPF. Eighteen glasses were selected based on nominal sludge projections representing the current as well as the proposed flowsheets over a WL interval of interest to DWPF (32-42%).
The intent of the experimental portion of the variability study is to demonstrate that the glasses of the Frit 510-modified SB4 compositional region (Cases No. 1-5) are both acceptable relative to the Environmental Assessment (EA) reference glass and predictable by the current DWPF process control models for durability. Frit 510 is a viable option for the processing of SB4 after a Tank 40 decant and the addition of products from the Actinide Removal Process (ARP). The addition of ARP did not have any negative impacts on the acceptability and predictability of the variability study glasses. The results of the variability study indicate that all of the study glasses (both quenched and centerline canister cooled (ccc)) have normalized releases for boron that are well below the reference EA glass (16.695 g/L). The durabilities of all of the study glasses are predictable using the current Product Composition Control System (PCCS) durability models, with the exception of SB4VAR24ccc (Case No. 2 at 41% WL). PCCS is not applicable to non-homogeneous glasses (i.e., glasses containing crystals such as acmite and nepheline); thus SB4VAR24ccc should not be predictable, as it contains nepheline. The presence of nepheline has been confirmed in both SB4VAR13ccc and SB4VAR24ccc by X-ray diffraction (XRD). These two glasses are the first results that indicate that the current nepheline discriminator value of 0.62 is not conservative. The nepheline discriminator was implemented into PCCS for SB4 based on the fact that all of the historical glasses evaluated with nepheline values of 0.62 or greater did not contain nepheline via XRD analysis. Although these two glasses do cause some concern over the use of the 0.62 nepheline value for future DWPF glass systems, the impact to the current SB4 system is of little concern. More specifically, the formation of nepheline was observed in glasses targeting 41 or 42% WL.
Current processing of the Frit 510-SB4 system in DWPF has nominally targeted 34% WL. For the SB4 variability study glasses targeting these lower WLs, nepheline formation was not observed and the minimal difference in PCT response between quenched and ccc versions supported its absence.« less
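
    The nepheline discriminator referenced in this record is commonly expressed as a mass-fraction ratio of glass oxides, with values at or above 0.62 taken to predict no nepheline crystallization on slow cooling. A minimal sketch of that check, with an illustrative (not actual SB4) glass composition:

```python
# Sketch of the nepheline discriminator: SiO2 / (SiO2 + Na2O + Al2O3)
# on a mass basis, with >= 0.62 read as "no nepheline predicted".
# The example glass composition below is illustrative only.

def nepheline_discriminator(glass):
    sio2 = glass["SiO2"]
    return sio2 / (sio2 + glass["Na2O"] + glass["Al2O3"])

glass = {"SiO2": 50.0, "Na2O": 13.0, "Al2O3": 5.0}  # wt%, hypothetical
nd = nepheline_discriminator(glass)
print(round(nd, 3), "no nepheline predicted" if nd >= 0.62
      else "nepheline possible")
```

    The record's finding that two ccc glasses at 41-42% WL formed nepheline anyway is why it flags the 0.62 threshold as not conservative for those compositions.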

  8. SUMMARY OF FY11 SULFATE RETENTION STUDIES FOR DEFENSE WASTE PROCESSING FACILITY GLASS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K.; Edwards, T.

    2012-05-08

    This report describes the results of studies related to the incorporation of sulfate in high level waste (HLW) borosilicate glass produced at the Savannah River Site (SRS) Defense Waste Processing Facility (DWPF). A group of simulated HLW glasses produced for earlier sulfate retention studies was selected for full chemical composition measurements to determine whether there is any clear link between composition and sulfate retention over the compositional region evaluated. In addition, the viscosity of several glasses was measured to support future efforts in modeling sulfate solubility as a function of predicted viscosity. The intent of these studies was to develop a better understanding of sulfate retention in borosilicate HLW glass to allow for higher loadings of sulfate-containing waste. Based on the results of these and other studies, the ability to improve sulfate solubility in DWPF borosilicate glasses lies in reducing the connectivity of the glass network structure. This can be achieved, as an example, by increasing the concentration of alkali species in the glass. However, this must be balanced with other effects of reduced network connectivity, such as reduced viscosity, potentially lower chemical durability, and in the case of higher sodium and aluminum concentrations, the propensity for nepheline crystallization. Future DWPF processing is likely to target higher waste loadings and higher sludge sodium concentrations, meaning that alkali concentrations in the glass will already be relatively high. It is therefore unlikely that there will be the ability to target significantly higher total alkali concentrations in the glass solely to support increased sulfate solubility without the increased alkali concentration causing failure of other Product Composition Control System (PCCS) constraints, such as low viscosity and durability.
No individual components were found to provide a significant improvement in sulfate retention (i.e., an increase of the magnitude necessary to have a dramatic impact on blending, washing, or waste loading strategies for DWPF) for the glasses studied here. In general, the concentrations of those species that significantly improve sulfate solubility in a borosilicate glass must be added in relatively large concentrations (e.g., 13 to 38 wt % or more of the frit) in order to have a substantial impact. For DWPF, these concentrations would constitute too large a portion of the frit to be practical. Therefore, it is unlikely that specific additives may be introduced into the DWPF glass via the frit to significantly improve sulfate solubility. The results presented here continue to show that sulfate solubility or retention is a function of individual glass compositions, rather than a property of a broad glass composition region. It would therefore be inappropriate to set a single sulfate concentration limit for a range of DWPF glass compositions. Sulfate concentration limits should continue to be identified and implemented for each sludge batch. The current PCCS limit is 0.4 wt % SO{sub 4}{sup 2-} in glass, although frit development efforts have led to an increased limit of 0.6 wt % for recent sludge batches. Slightly higher limits (perhaps 0.7-0.8 wt %) may be possible for future sludge batches. An opportunity for allowing a higher sulfate concentration limit at DWPF may lie in improving the laboratory experiments used to set this limit. That is, there are several differences between the crucible-scale testing currently used to define a limit for DWPF operation and the actual conditions within the DWPF melter. In particular, no allowance is currently made for sulfur partitioning (volatility versus retention) during melter processing as the sulfate limit is set for a specific sludge batch.
A better understanding of the partitioning of sulfur in a bubbled melter operating with a cold cap, as well as the impacts of sulfur on the off-gas system, may allow a higher sulfate concentration limit to be established for the melter feed. This approach would have to be taken carefully to ensure that a sulfur salt layer is not formed on top of the melt pool while allowing higher sulfur-based feeds to be processed through DWPF.

  9. REDUCTION OF CONSTRAINTS FOR COUPLED OPERATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raszewski, F.; Edwards, T.

    2009-12-15

    The homogeneity constraint was implemented in the Defense Waste Processing Facility (DWPF) Product Composition Control System (PCCS) to help ensure that the current durability models would be applicable to the glass compositions being processed during DWPF operations. While the homogeneity constraint is typically an issue at lower waste loadings (WLs), it may impact the operating windows for DWPF operations, where the glass forming systems may be limited to lower waste loadings based on fissile or heat load limits. In the sludge batch 1b (SB1b) variability study, application of the homogeneity constraint at the measurement acceptability region (MAR) limit eliminated much of the potential operating window for DWPF. As a result, Edwards and Brown developed criteria that allowed DWPF to relax the homogeneity constraint from the MAR to the property acceptance region (PAR) criterion, which opened up the operating window for DWPF operations. These criteria are defined as: (1) use the alumina constraint as currently implemented in PCCS (Al{sub 2}O{sub 3} {ge} 3 wt%) and add a sum of alkali constraint with an upper limit of 19.3 wt% ({Sigma}M{sub 2}O < 19.3 wt%), or (2) adjust the lower limit on the Al{sub 2}O{sub 3} constraint to 4 wt% (Al{sub 2}O{sub 3} {ge} 4 wt%). Herman et al. previously demonstrated that these criteria could be used to replace the homogeneity constraint for future sludge-only batches. The compositional region encompassing coupled operations flowsheets could not be bounded as these flowsheets were unknown at the time. With the initiation of coupled operations at DWPF in 2008, the need to revisit the homogeneity constraint was realized. This constraint was specifically addressed through the variability study for SB5, where it was shown that the homogeneity constraint could be ignored if the alumina and alkali constraints were imposed.
Additional benefit could be gained if the homogeneity constraint could be replaced by the Al{sub 2}O{sub 3} and sum of alkali constraint for future coupled operations processing based on projections from Revision 14 of the High Level Waste (HLW) System Plan. As with the first phase of testing for sludge-only operations, replacement of the homogeneity constraint with the alumina and sum of alkali constraints will ensure acceptable product durability over the compositional region evaluated. Although these study glasses only provide limited data in a large compositional region, the approach and results are consistent with previous studies that challenged the homogeneity constraint for sludge-only operations. That is, minimal benefit is gained by imposing the homogeneity constraint if the other PCCS constraints are satisfied. The normalized boron releases of all of the glasses are well below the Environmental Assessment (EA) glass results, regardless of thermal history. Although one of the glasses had a normalized boron release of approximately 10 g/L and was not predictable, the glass is still considered acceptable. This particular glass has a low Al{sub 2}O{sub 3} concentration, which may have contributed to the anomalous behavior. Given that poor durability has been previously observed in other glasses with low Al{sub 2}O{sub 3} and Fe{sub 2}O{sub 3} concentrations, including the sludge-only reduction of constraints study, further investigations appear to be warranted. Based on the results of this study, it is recommended that the homogeneity constraint (in its entirety with the associated low frit/high frit constraints) be eliminated for coupled operations as defined by Revision 14 of the HLW System Plan with up to 2 wt% TiO{sub 2}. The use of the alumina and sum of alkali constraints should be continued along with the variability study to determine the predictability of the current durability models and/or that the glasses are acceptable with respect to durability.
The use of a variability study for each batch is consistent with the glass product control program and it will help to assess new streams or compositional changes. It is also recommended that the influence of alumina and alkali on durability be studied in greater detail. Limited data suggests that there may be a need to adjust the lower Al{sub 2}O{sub 3} limit and/or the upper alkali limit in order to prevent the fabrication of unacceptable glasses. An in-depth evaluation of all previous data as well as any new data would help to better define an alumina and alkali combination that would avoid potential phase separation and ensure glass durability.« less
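
    The two alternative criteria quoted in this record reduce to simple composition checks. A minimal sketch, assuming a glass composition given as wt% oxides; the alkali oxide list and the example composition are illustrative assumptions, while the numeric limits are the ones stated in the text:

```python
# Sketch of the two reduction-of-constraints criteria quoted above:
# (1) Al2O3 >= 3 wt% and sum of alkali < 19.3 wt%, or
# (2) Al2O3 >= 4 wt% with no alkali restriction.
# The alkali oxide list and example glass are illustrative assumptions.

ALKALI = ("Li2O", "Na2O", "K2O", "Cs2O")

def meets_roc_criteria(glass):
    al2o3 = glass.get("Al2O3", 0.0)
    sum_alkali = sum(glass.get(ox, 0.0) for ox in ALKALI)
    criterion_1 = al2o3 >= 3.0 and sum_alkali < 19.3
    criterion_2 = al2o3 >= 4.0
    return criterion_1 or criterion_2

glass = {"Al2O3": 3.5, "Na2O": 14.0, "Li2O": 4.5}  # wt%, hypothetical
print(meets_roc_criteria(glass))
```

    Either criterion passing is sufficient, which is what allows the alkali restriction to be dropped once Al{sub 2}O{sub 3} reaches 4 wt%.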

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raszewski, F.; Edwards, T.; Peeler, D.

    Sludge Batch 4 (SB4) is currently being processed in the Defense Waste Processing Facility (DWPF) using Frit 510. The slurry pumps in Tank 40 are experiencing in-leakage of bearing water, which is causing the sludge slurry in Tank 40 to become dilute at a rapid rate. Currently, the DWPF is removing this dilution water by performing caustic boiling during the Sludge Receipt and Adjustment Tank (SRAT) cycle. In order to alleviate prolonged SRAT cycle times, which may eventually impact canister production rates, the Liquid Waste Organization (LWO) performed a 100K gallon supernate decant of Tank 40 in April 2008. SRNL performed a supplemental glass variability study to support the April 2008 100K gallon decant incorporating the impact of coupled operations (addition of the Actinide Removal Process (ARP) stream). Recently LWO requested that SRNL assess the impact of a second decant (up to 100K gallon) to the Frit 510-SB4 system. This second decant occurred in June 2008. LWO provided nominal compositions on May 6, 2008 representing Tank 40 prior to the second decant, following the second decant, and the SB4 Heel prior to blending with Tank 51 to constitute SB5. Paper study assessments were performed for these options based on sludge-only and coupled operations processing (ARP addition), as well as possible Na{sub 2}O additions (via NaOH additions) to both flowsheets. A review of the ComPro™ database relative to the compositional region defined by the projections after the second decant coupled with Frit 510 identified only a few glasses with similar glass compositions. These glasses were acceptable from a durability perspective, but did not sufficiently cover the new glass compositional region. Therefore, SRNL recommended that a supplemental variability study be performed to support the June 2008 Tank 40 decant.
Glasses were selected for the variability study based on three sludge compositional projections (sludge-only, coupled, and coupled + 2 wt% Na{sub 2}O) at waste loadings (WLs) of interest to DWPF (32%, 35% and 38%). These nine glasses were fabricated and characterized using chemical composition analysis, X-ray Diffraction (XRD), and the Product Consistency Test (PCT). All of the glasses selected for this study satisfy the Product Composition Control System (PCCS) criteria and are deemed processable and acceptable for the DWPF, except for the SB4VS2-03 (sludge-only at 38% WL) target composition. This glass fails the T{sub L} criterion and would not be considered processable based on Slurry Mix Evaporator (SME) acceptability decisions. The normalized leachate for boron (NL [B]) of all of the study glasses (both quenched and centerline canister cooled (ccc)) is well below that of the reference EA glass (16.695 g/L), and the durabilities are predictable using the current PCCS models. Very little variation exists between the NL [B] of the quenched and ccc versions of the glasses. There is some evidence of a trend toward a less durable glass as WL increases for some of the sludge projections. Frit 510 is a viable option for the processing of SB4 after a second Tank 40 decant, with or without the addition of products from the ARP stream or the 2 wt% Na{sub 2}O addition. The addition of ARP had no negative impacts on the acceptability and predictability of the variability study glasses.
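
The PCT acceptability comparison described above reduces to checking each glass's NL [B] against the EA benchmark. A minimal sketch follows; the glass names and NL [B] values below are hypothetical, and only the 16.695 g/L EA reference comes from the abstract:

```python
# Hypothetical sketch of the PCT durability screen: a study glass is
# acceptable when its normalized boron leachate NL[B] falls below that
# of the Environmental Assessment (EA) reference glass.

EA_NL_B = 16.695  # g/L, EA reference benchmark quoted in the abstract


def passes_pct_screen(nl_b: float, reference: float = EA_NL_B) -> bool:
    """Return True when a glass's NL[B] is below the EA reference."""
    return nl_b < reference


# Illustrative (made-up) NL[B] values for quenched and ccc versions
study_glasses = {
    "glass-A (quenched)": 0.62,
    "glass-A (ccc)": 0.65,
}

for name, nl_b in study_glasses.items():
    print(f"{name}: NL[B] = {nl_b} g/L, acceptable = {passes_pct_screen(nl_b)}")
```

The real SME acceptability decision layers model and measurement uncertainty onto this comparison; the sketch shows only the bare benchmark check.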

  11. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today.
Nonetheless, the engineers and statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. 
Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. Demonstrating representative sampling directly from the large Tank Farm tanks is a difficult, if not unsolvable, enterprise due to limited accessibility. However, the consistency and the adequacy of sampling and mixing at SRS could at least be studied under the controlled process conditions based on samples discussed by Ray and others [2012a] in Waste Form Qualification Report (WQR) Volume 2 and the transfers from Tanks 40H and 51H to the Sludge Receipt and Adjustment Tank (SRAT) within DWPF. It is important to realize that the need for sample representativeness becomes more stringent as the material gets closer to the melter, and the tanks within DWPF have been studied extensively to meet those needs.
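
The report describes combining analytical error with the established sampling and processing error before applying the PCCS model uncertainties. The abstract does not give the formula, but for independent error sources a root-sum-square combination is the conventional approach; a minimal sketch under that assumption, with illustrative numbers:

```python
import math


def combined_uncertainty(analytical_sd: float, sampling_sd: float) -> float:
    """Combine two independent error sources in quadrature (root-sum-square).

    This is a generic sketch, not the specific error model used at DWPF.
    """
    return math.sqrt(analytical_sd**2 + sampling_sd**2)


# Illustrative relative standard deviations only: 2% analytical, 3% sampling
total = combined_uncertainty(0.02, 0.03)
print(f"combined relative SD: {total:.4f}")
```

The combined measurement uncertainty would then be carried into the PCCS constraint evaluation alongside the property-model uncertainties.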

  12. Path modeling of knowledge, attitude and practice toward palliative care consultation service among Taiwanese nursing staff: a cross-sectional study.

    PubMed

    Pan, Hsueh-Hsing; Shih, Hsiu-Ling; Wu, Li-Fen; Hung, Yu-Chun; Chu, Chi-Ming; Wang, Kwua-Yun

    2017-08-17

    The Taiwanese government has promoted palliative care consultation services (PCCS) to support terminally ill patients in acute ward settings to receive palliative care since 2005. Such an intervention can enhance the quality of life and dignity of terminally ill patients. However, research using path modeling to examine the relationships among knowledge, attitude and practice of a PCCS in nursing staff is limited. Therefore, the aim of this study was to elucidate, through path modeling, the knowledge, attitude and practice toward PCCS in Taiwanese nursing staff. This was a cross-sectional, descriptive study design using convenience sampling. Data collected included demographics and knowledge, attitude and practice as measured by the PCCS inventory (KAP-PCCSI). Two hundred and eighty-four nursing staff from a medical center in northern Taiwan participated in the study in 2013. We performed descriptive statistics, regression analysis, and path modeling using SPSS 19.0 and set p < 0.05 as the statistical significance threshold. The results showed that the one factor significantly associated with all of knowledge, attitude, and practice toward PCCS among nurses was frequency of contact with PCCS. In addition, a higher level of knowledge toward PCCS was associated with working in haematology and oncology wards and with participation in education related to palliative care. A more positive attitude toward PCCS was associated with working in a haematology and oncology ward and with the experience of friends or relatives dying. A higher level of practice toward PCCS was associated with participation in education related to palliative care. In the path modeling, we found that holding a master's degree indirectly and positively affected practice toward PCCS. Possession of a bachelor's degree or above, being single, working within a haematology and oncology ward, and frequency of contact with PCCS positively affected practice toward PCCS. 
Based on this study, it is proposed that consultation with PCCS has a positive impact on the care of terminally ill patients. Encouragement of staff to undertake further education can improve the practice of ward staff providing palliative care.
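
The study fit its path model in SPSS 19.0. As a generic illustration only, a recursive path model of this kind can be estimated as a sequence of ordinary least squares regressions, one per endogenous variable; the data below are synthetic stand-ins, not the survey data, and the assumed path structure (contact → knowledge → attitude → practice) is a simplification of the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 284  # sample size matching the study

# Synthetic data standing in for standardized survey scores
contact = rng.normal(size=n)                    # frequency of contact with PCCS
knowledge = 0.5 * contact + rng.normal(size=n)  # knowledge toward PCCS
attitude = 0.4 * knowledge + 0.3 * contact + rng.normal(size=n)
practice = 0.6 * attitude + 0.2 * knowledge + rng.normal(size=n)


def ols_coefs(y, *xs):
    """OLS path coefficients for one endogenous variable (intercept dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]


# Each endogenous variable is regressed on its assumed predecessors
print("knowledge <- contact:           ", ols_coefs(knowledge, contact))
print("attitude  <- knowledge, contact:", ols_coefs(attitude, knowledge, contact))
print("practice  <- attitude, knowledge:", ols_coefs(practice, attitude, knowledge))
```

Indirect effects (e.g., of education on practice via knowledge) are then products of the coefficients along each path.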

  13. Condensation model for the ESBWR passive condensers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Revankar, S. T.; Zhou, W.; Wolf, B.

    2012-07-01

    In General Electric's Economic Simplified Boiling Water Reactor (GE-ESBWR), the passive containment cooling system (PCCS) plays a major role in containment pressure control in the event of a loss-of-coolant accident. The PCCS condenser must be able to remove sufficient energy from the reactor containment to prevent the containment from exceeding its design pressure following a design basis accident. There are three PCCS condensation modes, depending on the containment pressurization due to coolant discharge: complete condensation, cyclic venting, and flow-through mode. The present work reviews the models and presents their predictive capability, along with comparisons against existing data from separate effects tests. The condensation models in the thermal hydraulics code RELAP5 are also assessed to examine their application to the various flow modes of condensation. The default model in the code, which is essentially the Nusselt solution, predicts complete condensation well. The UCB model predicts flow-through condensation well. However, no condensation model in RELAP5 predicts complete condensation, cyclic venting, and flow-through condensation consistently. New condensation correlations are given that accurately predict all three modes of PCCS condensation.
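
The abstract notes that RELAP5's default condensation model is essentially the Nusselt solution. For reference, the classical Nusselt result for laminar film condensation on a vertical surface gives the average heat transfer coefficient as h = 0.943·[ρ_l(ρ_l − ρ_v)g·h_fg·k_l³ / (μ_l·L·ΔT)]^¼. A sketch with textbook-style water properties (illustrative values, not ESBWR PCCS conditions):

```python
def nusselt_avg_htc(rho_l, rho_v, g, h_fg, k_l, mu_l, L, dT):
    """Average heat transfer coefficient (W/m^2-K) for laminar film
    condensation on a vertical wall, per the classical Nusselt solution.

    rho_l, rho_v: liquid/vapor densities (kg/m^3)
    g: gravitational acceleration (m/s^2)
    h_fg: latent heat of vaporization (J/kg)
    k_l: liquid thermal conductivity (W/m-K)
    mu_l: liquid dynamic viscosity (Pa-s)
    L: wall height (m); dT: T_sat - T_wall (K)
    """
    return 0.943 * ((rho_l * (rho_l - rho_v) * g * h_fg * k_l**3)
                    / (mu_l * L * dT)) ** 0.25


# Saturated steam near 1 atm condensing on a 1 m wall with 10 K wall subcooling
h = nusselt_avg_htc(rho_l=958.0, rho_v=0.60, g=9.81, h_fg=2.257e6,
                    k_l=0.68, mu_l=2.82e-4, L=1.0, dT=10.0)
print(f"average h = {h:.0f} W/m^2-K")
```

This pure-vapor solution is the baseline; the PCCS correlations discussed in the paper must additionally handle noncondensable gases and the venting and flow-through modes.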

  14. Crystallization in high level waste (HLW) glass melters: Savannah River Site operational experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Kevin M.; Peeler, David K.; Kruger, Albert A.

    2015-06-12

    This paper provides a review of the scaled melter testing that was completed for design input to the Defense Waste Processing Facility (DWPF) melter. Testing with prototype melters provided the data to define the DWPF operating limits to avoid bulk (volume) crystallization in the un-agitated DWPF melter and provided the data to distinguish between spinels generated by refractory corrosion versus spinels that precipitated from the HLW glass melt pool. A review of the crystallization observed with the prototype melters and the full-scale DWPF melters (DWPF Melter 1 and DWPF Melter 2) is included. Examples of actual DWPF melter attainment with Melter 2 are given. The intent is to provide an overview of lessons learned, including some example data, that can be used to advance the development and implementation of an empirical model and operating limit for crystal accumulation for a waste treatment and immobilization plant.

  15. Crystallization In High Level Waste (HLW) Glass Melters: Operational Experience From The Savannah River Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K. M.

    2014-02-27

    processing strategy for the Hanford Tank Waste Treatment and Immobilization Plant (WTP). The basis of this alternative approach is an empirical model predicting the crystal accumulation in the WTP glass discharge riser and melter bottom as a function of glass composition, time, and temperature. When coupled with an associated operating limit (e.g., the maximum tolerable thickness of an accumulated layer of crystals), this model could then be integrated into the process control algorithms to formulate crystal tolerant high level waste (HLW) glasses targeting higher waste loadings while still meeting process related limits and melter lifetime expectancies. This report provides a review of the scaled melter testing that was completed in support of the Defense Waste Processing Facility (DWPF) melter. Testing with scaled melters provided the data to define the DWPF operating limits to avoid bulk (volume) crystallization in the un-agitated DWPF melter and provided the data to distinguish between spinels generated by K-3 refractory corrosion versus spinels that precipitated from the HLW glass melt pool. This report includes a review of the crystallization observed with the scaled melters and the full-scale DWPF melters (DWPF Melter 1 and DWPF Melter 2). Examples of actual DWPF melter attainment with Melter 2 are given. The intent is to provide an overview of lessons learned, including some example data, that can be used to advance the development and implementation of an empirical model and operating limit for crystal accumulation for WTP.
Operation of the first and second (current) DWPF melters has demonstrated that the strategy of using a liquidus temperature predictive model combined with a 100 °C offset from the normal melter operating temperature of 1150 °C (i.e., the predicted liquidus temperature (TL) of the glass must be 1050 °C or less) has been successful in preventing any detrimental accumulation of spinel in the DWPF melt pool, and spinel has not been observed in any of the pour stream glass samples. Spinel was observed at the bottom of DWPF Melter 1 as a result of K-3 refractory corrosion. Issues have occurred with accumulation of spinel in the pour spout during periods of operation at higher waste loadings. Given that both DWPF melters were or have been in operation for greater than 8 years, the service life of the melters has far exceeded design expectations. It is possible that the DWPF liquidus temperature approach is conservative, in that it may be possible to successfully operate the melter with a small degree of allowable crystallization in the glass. This could be a viable approach to increasing waste loading in the glass assuming that the crystals are suspended in the melt and swept out through the riser and pour spout. Additional study is needed, and development work for WTP might be leveraged to support a different operating limit for the DWPF. Several recommendations are made regarding considerations that need to be included as part of the WTP crystal tolerant strategy based on the DWPF development work and operational data reviewed here. These include: Identify and consider the impacts of potential heat sinks in the WTP melter and glass pouring system; Consider the contributions of refractory corrosion products, which may serve to nucleate additional crystals leading to further accumulation; Consider volatilization of components from the melt (e.g., boron, alkali, halides, etc.) 
and determine their impacts on glass crystallization behavior; Evaluate the impacts of glass REDuction/OXidation (REDOX) conditions and the distribution of temperature within the WTP melt pool and melter pour chamber on crystal accumulation rate; Consider the impact of precipitated crystals on glass viscosity; Consider the impact of an accumulated crystalline layer on thermal convection currents and bubbler effectiveness within the melt pool; Evaluate the impact of spinel accumulation on Joule heating of the WTP melt pool; and Include noble metals in glass melt experiments because of their potential to act as nucleation sites for spinel crystallization.« less
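
The liquidus temperature control strategy described above is a simple offset rule: the predicted TL of the glass must sit at least 100 °C below the nominal 1150 °C melter operating temperature. A minimal sketch of that screen (function and constant names are illustrative, not DWPF code):

```python
# Sketch of the DWPF liquidus temperature offset rule described above:
# accept a glass only when its predicted liquidus temperature (TL) is at
# least 100 C below the nominal melter operating temperature of 1150 C,
# i.e. predicted TL <= 1050 C.

MELTER_OPERATING_TEMP_C = 1150.0
LIQUIDUS_OFFSET_C = 100.0


def tl_acceptable(predicted_tl_c: float) -> bool:
    """Return True when the predicted TL satisfies the 100 C offset rule."""
    return predicted_tl_c <= MELTER_OPERATING_TEMP_C - LIQUIDUS_OFFSET_C


print(tl_acceptable(1040.0))  # comfortably within the limit
print(tl_acceptable(1065.0))  # fails the offset rule
```

In practice the PCCS applies this constraint to the model-predicted TL with its associated uncertainty, rather than to a bare point estimate as sketched here.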

  16. Planck 2015 results. XXVI. The Second Planck Catalogue of Compact Sources

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Beichman, C.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Böhringer, H.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Carvalho, P.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Clemens, M.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. 
F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Negrello, M.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanghera, H. S.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tornikoski, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Walter, B.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    The Second Planck Catalogue of Compact Sources is a list of discrete objects detected in single-frequency maps from the full duration of the Planck mission and supersedes previous versions. It consists of compact sources, both Galactic and extragalactic, detected over the entire sky. Compact sources detected in the lower frequency channels are assigned to the PCCS2, while at higher frequencies they are assigned to one of two subcatalogues, the PCCS2 or PCCS2E, depending on their location on the sky. The first of these (PCCS2) covers most of the sky and allows the user to produce subsamples at higher reliabilities than the target 80% integral reliability of the catalogue. The second (PCCS2E) contains sources detected in sky regions where the diffuse emission makes it difficult to quantify the reliability of the detections. Both the PCCS2 and PCCS2E include polarization measurements, in the form of polarized flux densities, or upper limits, and orientation angles for all seven polarization-sensitive Planck channels. The improved data-processing of the full-mission maps and their reduced noise levels allow us to increase the number of objects in the catalogue, improving its completeness for the target 80% reliability as compared with the previous versions, the PCCS and the Early Release Compact Source Catalogue (ERCSC).

  17. Characteristics of palliative care consultation services in California hospitals.

    PubMed

    Pantilat, Steven Z; Kerr, Kathleen M; Billings, J Andrew; Bruno, Kelly A; O'Riordan, David L

    2012-05-01

    Although hospital palliative care consultation services (PCCS) can improve a variety of clinical and nonclinical outcomes, little is known about how these services are structured. We surveyed all 351 acute care hospitals in California to examine the structure and characteristics of those hospitals with PCCS. We achieved a 92% response rate. Thirty-one percent (n=107) of hospitals reported having a PCCS. Teams commonly included physicians (87%), social workers (80%), spiritual care professionals (77%), and registered nurses (71%). Nearly all PCCS were available on-site during weekday business hours; 50% were available on-site or by phone in the weekday evenings and 54% were available during weekend daytime hours. The PCCS saw an average of 347 patients annually (median=310, standard deviation [SD]=217), or 258 patients per clinical full-time equivalent (FTE; median=250, SD=150.3). Overall, 60% of consultation services reported they are struggling to cope with the workload. On average, patients were in the hospital 5.9 days (median=5.5, SD=3.3) prior to referral to PCCS, and remained in the hospital for 6 days (median=4, SD=7.9) following the initial consultation. Patient and family meetings were an aspect of the consultation in 74% of cases. Overall, 21% of consultation patients were discharged home with hospice services and 25% died in the hospital. There is variation in how PCCS in California hospitals are structured and in the ways they engage with patients. Ultimately, linking PCCS characteristics and practices to patient and family outcomes will identify best practices that PCCS can use to maximize quality.

  18. Material compatibility evaluation for DWPF nitric-glycolic acid-literature review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mickalonis, J.; Skidmore, E.

    2013-06-01

    Glycolic acid is being evaluated as an alternative to formic and nitric acid in the DWPF flowsheet. Demonstration testing and modeling for this new flowsheet have shown that glycolic acid and glycolate have the potential to remain in certain streams generated during the production of the nuclear waste glass. A literature review was conducted to assess the impact of glycolic acid on the corrosion of the materials of construction for the DWPF facility, as well as facilities downstream that may have residual glycolic acid and glycolates present. The literature data were limited to solutions containing principally glycolic acid.

  19. Planck 2015 results: XXVI. The Second Planck Catalogue of Compact Sources

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Argüeso, F.; ...

    2016-09-20

    The Second Planck Catalogue of Compact Sources is a list of discrete objects detected in single-frequency maps from the full duration of the Planck mission and supersedes previous versions. It consists of compact sources, both Galactic and extragalactic, detected over the entire sky. Compact sources detected in the lower frequency channels are assigned to the PCCS2, while at higher frequencies they are assigned to one of two subcatalogues, the PCCS2 or PCCS2E, depending on their location on the sky. The first of these (PCCS2) covers most of the sky and allows the user to produce subsamples at higher reliabilities than the target 80% integral reliability of the catalogue. The second (PCCS2E) contains sources detected in sky regions where the diffuse emission makes it difficult to quantify the reliability of the detections. Both the PCCS2 and PCCS2E include polarization measurements, in the form of polarized flux densities, or upper limits, and orientation angles for all seven polarization-sensitive Planck channels. The improved data-processing of the full-mission maps and their reduced noise levels allow us to increase the number of objects in the catalogue, improving its completeness for the target 80% reliability as compared with the previous versions, the PCCS and the Early Release Compact Source Catalogue (ERCSC).

  20. Planck 2015 results: XXVI. The Second Planck Catalogue of Compact Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, P. A. R.; Aghanim, N.; Argüeso, F.

    The Second Planck Catalogue of Compact Sources is a list of discrete objects detected in single-frequency maps from the full duration of the Planck mission and supersedes previous versions. It consists of compact sources, both Galactic and extragalactic, detected over the entire sky. Compact sources detected in the lower frequency channels are assigned to the PCCS2, while at higher frequencies they are assigned to one of two subcatalogues, the PCCS2 or PCCS2E, depending on their location on the sky. The first of these (PCCS2) covers most of the sky and allows the user to produce subsamples at higher reliabilities than the target 80% integral reliability of the catalogue. The second (PCCS2E) contains sources detected in sky regions where the diffuse emission makes it difficult to quantify the reliability of the detections. Both the PCCS2 and PCCS2E include polarization measurements, in the form of polarized flux densities, or upper limits, and orientation angles for all seven polarization-sensitive Planck channels. The improved data-processing of the full-mission maps and their reduced noise levels allow us to increase the number of objects in the catalogue, improving its completeness for the target 80% reliability as compared with the previous versions, the PCCS and the Early Release Compact Source Catalogue (ERCSC).

  1. Impact of palliative care consultative service on disease awareness for patients with terminal cancer.

    PubMed

    Chou, Wen-Chi; Hung, Yu-Shin; Kao, Chen-Yi; Su, Po-Jung; Hsieh, Chia-Hsun; Chen, Jen-Shi; Liau, Chi-Ting; Lin, Yung-Chang; Liaw, Chuang-Chi; Wang, Hung-Ming

    2013-07-01

    Awareness of the status of disease among terminally ill cancer patients is an important part of end-of-life care. We evaluated how palliative care consultative service (PCCS) affects patient disease awareness and determined who may benefit from such services in Taiwan. In total, 2,887 terminally ill cancer patients consecutively received PCCS between January 2006 and December 2010 at a single medical center in Taiwan, after which they were evaluated for disease awareness. At the beginning of PCCS, 31 % of patients (n = 895) were unaware of their disease status. The characteristics of these 895 patients were analyzed retrospectively to determine variables pertinent to patient disease awareness after PCCS. In total, 485 (50 %) of the 895 patients became aware of their disease at the end of PCCS. Factors significantly associated with higher disease awareness included a longer interval between the date of hospital admission and that of PCCS referral (>4 weeks versus ≤2 weeks), a longer duration of PCCS (>14 days versus ≤7 days), the male gender, divorced marital status (versus married), and family awareness (versus lack of family awareness). Lower disease awareness was associated with older age (age > 75 years versus age = 18-65 years), referral from non-oncology departments, and primary cancer localization (lung, colon-rectum, or urological versus liver). Disease awareness is affected by multiple factors related to the patients, their families, and the clinicians. The promotion of PCCS increased disease awareness among terminally ill cancer patients in Taiwan.

  2. Palmatine suppresses glutamine-mediated interaction between pancreatic cancer and stellate cells through simultaneous inhibition of survivin and COL1A1

    PubMed Central

    Chakravarthy, Divya; Muñoz, Amanda R.; Su, Angel; Hwang, Rosa F.; Keppler, Brian R.; Chan, Daniel E.; Halff, Glenn; Ghosh, Rita; Kumar, Addanki P.

    2018-01-01

    Reciprocal interaction between pancreatic stellate cells (PSCs) and cancer cells (PCCs) in the tumor microenvironment (TME) promotes tumor cell survival and progression to lethal, therapeutically resistant pancreatic cancer. The goal of this study was to test the ability of palmatine (PMT) to disrupt this reciprocal interaction in vitro and examine the underlying mechanism of interaction. We show that PSCs secrete glutamine into the extracellular environment under nutrient deprivation. PMT suppresses glutamine-mediated changes in GLI signaling in PCCs, resulting in the inhibition of growth and migration while inducing apoptosis through inhibition of survivin. PMT-mediated inhibition of glioma-associated oncogene 1 (GLI) activity in stellate cells leads to suppression of collagen type 1 alpha 1 (COL1A1) activation. Remarkably, PMT potentiated gemcitabine's growth inhibitory activity in PSCs, PCCs, and inherently gemcitabine-resistant pancreatic cancer cells. This is the first study to show the ability of PMT to inhibit growth of PSCs and PCCs either alone or in combination with gemcitabine. These studies warrant additional investigations using preclinical models to develop PMT as an agent for clinical management of pancreatic cancer. PMID:29414301

  3. Disaster preparedness of poison control centers in the USA: a 15-year follow-up study.

    PubMed

    Darracq, Michael A; Clark, Richard F; Jacoby, Irving; Vilke, Gary M; DeMers, Gerard; Cantrell, F Lee

    2014-03-01

    There is limited published literature on the extent to which United States (US) Poison Control Centers (PCCs) are prepared for responding to disasters. We describe PCCs' disaster preparedness activities and compare and contrast these results to those previously reported in the medical literature. We also describe the extent to which PCCs are engaged in disaster and terrorism preparedness planning and other public health roles such as surveillance. An electronic questionnaire was sent via email to the managing directors of the 57 member PCCs of the American Association of Poison Control Centers. Collected data included the population served and number of calls received, extent of disaster preparedness including the presence of a written disaster plan and elements included in that plan, the presence and nature of regular disaster drills, experience with disaster including periods of inability to operate, involvement in terrorism and disaster preparedness/response policy development, and public health surveillance of US PCCs. Descriptive statistics were performed on collected data, and comparisons were made with the results from a previously published survey. A response was obtained from 40/57 (70 %) PCCs. Each PCC serves a larger population (p < 0.0001) and receives more calls per year (p = 0.0009) than in previous descriptions of PCC preparedness. More centers report the presence of a written disaster plan (p < 0.0001), backup by another center (p < 0.0001), regular disaster drills (p < 0.0001), and comfort with ability to operate in a disaster (p < 0.0001) than previously described. PCCs are involved in disaster (34/40, 85 %) and terrorism (29/40, 73 %) preparedness at the local, state, or federal levels. PCCs (36/40, 90 %) are also involved in public health functions (illness surveillance or answering "after hours" public health calls). 
Despite an increase in calls received and population served per center as compared to previous descriptions, more PCCs report the presence of a written disaster plan, backup by another center, regular disaster drills, and comfort in ability to operate in a disaster. PCCs are actively involved in terrorism and disaster preparedness and response planning and traditional public health responsibilities such as surveillance.
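    The before/after comparisons summarized above (e.g., more centers reporting a written disaster plan than in an earlier survey) are the kind of result a two-proportion z-test produces. The sketch below is a generic illustration of that test with made-up counts; it is not the paper's actual data or its stated analysis method.

```python
# Generic two-proportion z-test (pooled), stdlib only.
# Counts below are hypothetical, not taken from the survey.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2 (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 36/40 centers with a written plan now vs 20/40 previously
z, p = two_proportion_z(36, 40, 20, 40)
print(round(z, 2), p < 0.001)
```

Any real comparison of survey waves would also need to account for differing denominators and response rates; this is only the arithmetic core of such a test.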

  4. [Design and biological evaluation of poly-lactic-co-glycolic acid (PLGA) mesh/collagen-chitosan hybrid scaffold (CCS) as a dermal substitute].

    PubMed

    Wang, Xin-Gang; You, Chuan-Gang; Sun, Hua-Feng; Hu, Xin-Lei; Han, Chun-Mao; Zhang, Li-Ping; Zheng, Yu-Rong; Li, Qi-Yin

    2011-02-01

    To design and construct a dermal regeneration template with mesh, and to preliminarily evaluate its biological characteristics. PLGA mesh was integrated into CCS with a freeze-drying method to construct a PLGA mesh/CCS composite (PCCS). The micromorphologies and mechanical properties of the PLGA mesh, CCS, and PCCS were compared. PCCS and CCS were implanted into the subcutaneous tissue of SD rats (PCCS and CCS groups, 9 rats in each group). Tissue samples were collected at post operation week (POW) 1, 2, and 4 for histopathological and immunohistochemical observation. Protein levels of CD68, MPO, IL-1beta, and IL-10 were examined by Western blot and expressed as gray values. Data were processed with one-way analysis of variance and t test. The three-dimensional porous structure of PCCS was similar to that of CCS. The mechanical strengths of the PLGA mesh and PCCS were (3.07 +/- 0.10) and (3.26 +/- 0.15) MPa, respectively, both higher than that of CCS [(0.42 +/- 0.21) MPa, F = 592.3, P < 0.0001]. The scaffolds in the PCCS group were filled with newly formed tissue at POW 2, while those in the CCS group were not filled until POW 4. A large accumulation of macrophages was observed in both groups, especially at POW 2, with more macrophage infiltration in the CCS group. The protein level of IL-10 in the PCCS group at POW 2 was obviously higher than that in the CCS group, while the protein levels of CD68, MPO, and IL-1beta were significantly decreased as compared with those in the CCS group (with t values from -4.06 to 2.89, P < 0.05 or P < 0.01). PCCS has excellent mechanical properties and an appropriate three-dimensional porous structure. It can rapidly induce the formation of new tissue and vascularization, and it shows promise as a dermal substitute.

  5. Emotional exhaustion in primary care during early implementation of the VA's medical home transformation: Patient-aligned Care Team (PACT).

    PubMed

    Meredith, Lisa S; Schmidt Hackbarth, Nicole; Darling, Jill; Rodriguez, Hector P; Stockdale, Susan E; Cordasco, Kristina M; Yano, Elizabeth M; Rubenstein, Lisa V

    2015-03-01

    Transformation of primary care to new patient-centered models requires major changes in healthcare organizations, including interprofessional expectations and organizational policies. Emotional exhaustion (EE) among workers can accompany major organizational change, threatening its success. Yet little guidance exists about the magnitude of associations with EE during primary care transformation. We assessed EE during the initial phase of national primary care transformation in the Veterans Health Administration. Cross-sectional online surveys of primary care clinicians (PCCs) and staff in 23 primary care clinics within 5 healthcare systems in 1 Veterans Administration administrative region. We used descriptive, bivariate, and multivariable analyses adjusted for clinic membership and weighted for nonresponse. 515 Veterans Administration employees (191 PCCs and 324 other primary care staff). The outcome is the EE subscale of the Maslach Burnout Inventory. Predictors include clinic characteristics (from administrative data) and self-reported efficacy for change, experiences with transformation, and perspectives about the organization. The overall response rate was 64% (515/811). In total, 53% of PCCs and 43% of staff had high EE. PCCs (vs. other primary care staff), female (vs. male), and non-Latino (vs. Latino) respondents reported higher EE. Respondents reporting higher efficacy for change and participatory decision making had lower EE scores, adjusting for sex and race. Recognition by healthcare organizations of the potential for clinician and staff EE during primary care transformation is critical. Methods for reducing EE by increasing clinician and staff change efficacy and opportunities to participate in decision making should be considered, with attention to PCCs and women.

  6. Reversal of dabigatran anticoagulation ex vivo: Porcine study comparing prothrombin complex concentrates and idarucizumab.

    PubMed

    Honickel, Markus; Treutler, Stefanie; van Ryn, Joanne; Tillmann, Sabine; Rossaint, Rolf; Grottke, Oliver

    2015-04-01

    Urgent surgery or life-threatening bleeding requires prompt reversal of the anticoagulant effects of dabigatran. This study assessed the ability of three- and four-factor prothrombin complex concentrate (PCC) and idarucizumab (specific antidote for dabigatran) to reverse the anticoagulant effects of dabigatran in a porcine model of trauma. Twelve animals were given dabigatran etexilate (DE) orally and dabigatran intravenously, before infliction of trauma. Six animals received tranexamic acid plus fibrinogen concentrate 12 minutes post-injury. Six PCCs (each 30 and 60 U/kg) and idarucizumab (30 and 60 mg/kg) were added to blood samples ex vivo. Coagulation was assessed by several coagulation assays. All coagulation parameters were altered after dabigatran infusion (plasma level: 442 ± 138 ng/ml). Both three- and four-factor PCCs mostly or completely reversed the effects of dabigatran on thromboelastometry variables and PT but not on aPTT. Idarucizumab neutralised plasma concentrations of dabigatran, and reversed the effects of the drug on coagulation variables. Thrombin generation showed dose-dependent over-correction following the addition of PCC, implying that elevated levels of thrombin are required to overcome dabigatran-induced coagulopathy. In contrast, treatment with idarucizumab returned thrombin generation to baseline levels. Following trauma, therapy with tranexamic acid plus fibrinogen improved correction of coagulation parameters by PCC, and thromboelastometry parameters by idarucizumab. All investigated PCCs improved dabigatran- and trauma-induced coagulopathy to a similar degree. In conclusion, this study shows that three- and four-factor PCCs are similarly effective for dabigatran reversal. Idarucizumab also reversed the effects of dabigatran and, unlike PCCs, was not associated with over-correction of thrombin generation.

  7. In vivo study of the biocompatibility of a novel compressed collagen hydrogel scaffold for artificial corneas.

    PubMed

    Xiao, Xianghua; Pan, Shiyin; Liu, Xianning; Zhu, Xiuping; Connon, Che John; Wu, Jie; Mi, Shengli

    2014-06-01

    The experiments were designed to evaluate the biocompatibility of a plastically compressed collagen scaffold (PCCS). The ultrastructure of the PCCS was observed via scanning electron microscopy. Twenty New Zealand white rabbits were randomly divided into experimental and control groups that received corneal pocket transplantation with PCCS and an amniotic membrane, respectively; the contralateral eye of each implanted rabbit served as the normal group. On the 1st, 7th, 14th, 21st, 30th, 60th, 90th, and 120th postoperative day, the eyes were observed via a slit lamp. On the 120th postoperative day, the rabbit eyes were enucleated to examine the tissue compatibility of the implanted stroma. The PCCS was white and translucent. The scanning electron microscopy results showed that fibers within the PCCS were densely packed and evenly arranged. No edema, inflammation, or neovascularization was observed on the ocular surface under a slit lamp, and few lymphocytes were observed in the stroma of the rabbit cornea on histological examination. In conclusion, the PCCS has extremely high biocompatibility and is a promising scaffold for an artificial cornea. Copyright © 2013 Society of Plastics Engineers.

  8. Material Compatibility Evaluation for DWPF Nitric-Glycolic Acid - Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mickalonis, J. I.; Skidmore, T. E.

    Glycolic acid is being evaluated as an alternative to formic and nitric acid in the DWPF flowsheet. Demonstration testing and modeling for this new flowsheet have shown that glycolic acid and glycolate have the potential to remain in certain streams generated during the production of the nuclear waste glass. A literature review was conducted to assess the impact of glycolic acid on the corrosion of the materials of construction for the DWPF facility as well as facilities downstream which may have residual glycolic acid and glycolates present. The literature data were limited to solutions containing principally glycolic acid. The reported corrosion rates and degradation characteristics have shown the following for the materials of construction.

  9. Product/Process (P/P) Models For The Defense Waste Processing Facility (DWPF): Model Ranges And Validation Ranges For Future Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C.; Edwards, T.

    Radioactive high level waste (HLW) at the Savannah River Site (SRS) has successfully been vitrified into borosilicate glass in the Defense Waste Processing Facility (DWPF) since 1996. Vitrification requires stringent product/process (P/P) constraints since the glass cannot be reworked once it is poured into ten foot tall by two foot diameter canisters. A unique “feed forward” statistical process control (SPC) approach, rather than statistical quality control (SQC), was developed for this purpose. In SPC, the feed composition to the DWPF melter is controlled prior to vitrification. In SQC, the glass product would be sampled after it is vitrified. Individual glass property-composition models form the basis for the “feed forward” SPC. The models transform constraints on the melt and glass properties into constraints on the feed composition going to the melter in order to guarantee, at the 95% confidence level, that the feed will be processable and that the durability of the resulting waste form will be acceptable to a geologic repository.
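    The feed-forward idea described above can be sketched in a few lines: a property-composition model predicts a melt property from the feed's oxide fractions, and the feed is accepted only when the 95% confidence band on the prediction stays inside the property limits. All coefficients, limits, and the model-uncertainty term below are hypothetical placeholders, not the actual DWPF property-composition models or PCCS constraints.

```python
# Hypothetical first-order property model over oxide mass fractions.
# These coefficients and limits are illustrative, not DWPF's.
COEFF = {"SiO2": 30.0, "B2O3": -40.0, "Na2O": -55.0, "Fe2O3": -15.0}
VISCOSITY_LIMITS = (2.0, 11.0)   # processable range in Pa*s (hypothetical)
Z95 = 1.645                      # one-sided 95% normal quantile

def predict_viscosity(comp):
    """Linear property-composition model (hypothetical coefficients)."""
    return sum(COEFF[ox] * comp.get(ox, 0.0) for ox in COEFF)

def acceptable(comp, model_sigma=0.5):
    """Accept the feed only if the 95% band around the predicted
    property lies wholly inside the property limits."""
    pred = predict_viscosity(comp)
    lo, hi = VISCOSITY_LIMITS
    return pred - Z95 * model_sigma > lo and pred + Z95 * model_sigma < hi

feed = {"SiO2": 0.50, "B2O3": 0.08, "Na2O": 0.12, "Fe2O3": 0.10}
print(predict_viscosity(feed), acceptable(feed))  # ~3.7 Pa*s, accepted
```

The real system applies many such constraints simultaneously (durability, liquidus, viscosity, solubility) with statistically derived uncertainty terms; this sketch only shows the shape of the acceptance test.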

  10. Advances in proton-exchange membranes for fuel cells: an overview on proton conductive channels (PCCs).

    PubMed

    Wu, Liang; Zhang, Zhenghui; Ran, Jin; Zhou, Dan; Li, Chuanrun; Xu, Tongwen

    2013-04-14

    Proton-exchange membranes (PEMs) display unique ion-selective transport that has enabled a breakthrough in high-performance proton-exchange membrane fuel cells (PEMFCs). Fundamental understanding of the morphology and proton transport mechanisms of the commercially available Nafion® has prompted many researchers to tune proton conductive channels (PCCs). Specifically, knowledge of the morphology-property relationship gained from statistical and segmented copolymer PEMs has highlighted the importance of the alignment of PCCs. Furthermore, increasing efforts in fabricating and aligning artificial PCCs in field-aligned copolymer PEMs, nanofiber composite PEMs and mesoporous PEMs have set new paradigms for the improvement of membrane performance. This perspective profiles the recent development of the channels, from the self-assembled to the artificial, with a particular emphasis on their formation and alignment. It concludes with an outlook on the benefits of highly aligned PCCs for fuel cell operation, and gives further direction for developing new PEMs from a practical point of view.

  11. DWPF Melter Off-Gas Flammability Assessment for Sludge Batch 9

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, A. S.

    2016-07-11

    The slurry feed to the Defense Waste Processing Facility (DWPF) melter contains several organic carbon species that decompose in the cold cap and produce flammable gases that could accumulate in the off-gas system and create potential flammability hazard. To mitigate such a hazard, DWPF has implemented a strategy to impose the Technical Safety Requirement (TSR) limits on all key operating variables affecting off-gas flammability and operate the melter within those limits using both hardwired/software interlocks and administrative controls. The operating variables that are currently being controlled include; (1) total organic carbon (TOC), (2) air purges for combustion and dilution, (3)more » melter vapor space temperature, and (4) feed rate. The safety basis limits for these operating variables are determined using two computer models, 4-stage cold cap and Melter Off-Gas (MOG) dynamics models, under the baseline upset scenario - a surge in off-gas flow due to the inherent cold cap instabilities in the slurry-fed melter.« less
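    As background to the flammability control discussed above, a standard screening calculation for a multi-fuel gas stream is Le Chatelier's mixing rule, which combines the lower flammability limits (LFLs) of the individual fuel species. The LFL values below are common literature figures for fuels in air near room temperature; this is a generic illustration only, not the DWPF MOG dynamics model, which additionally accounts for temperature, dilution purges, and transient surges.

```python
# Le Chatelier screening rule for a fuel-gas mixture (illustrative).
# LFLs are typical literature values in air at ~25 C, vol%.
LFL_VOL_PCT = {"H2": 4.0, "CO": 12.5, "CH4": 5.0}

def mixture_lfl(fuel_vol_pct):
    """Le Chatelier LFL of a fuel mixture.
    fuel_vol_pct: vol% of each fuel species relative to total fuel."""
    total = sum(fuel_vol_pct.values())
    return total / sum(v / LFL_VOL_PCT[s] for s, v in fuel_vol_pct.items())

def is_flammable(offgas_vol_pct):
    """True if the composite fuel concentration ratio reaches 1, i.e.,
    the combined fuel concentration in the stream meets the mixture LFL."""
    ratio = sum(offgas_vol_pct.get(s, 0.0) / LFL_VOL_PCT[s] for s in LFL_VOL_PCT)
    return ratio >= 1.0

# Example: 2 vol% H2 + 3 vol% CO in the diluted off-gas
print(is_flammable({"H2": 2.0, "CO": 3.0}))  # 2/4 + 3/12.5 = 0.74 -> not flammable
```

Safety-basis calculations typically hold the composite ratio well below 1 (e.g., a fraction of the LFL) rather than testing against the limit itself.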

  12. Is the co-location of GPs in primary care centres associated with a higher patient satisfaction? Evidence from a population survey in Italy.

    PubMed

    Bonciani, Manila; Barsanti, Sara; Murante, Anna Maria

    2017-04-04

    Several countries have co-located General Practitioners (GPs) in Primary Care Centres (PCCs) with other health and social care professionals in order to improve integrated care. It is not clear whether the co-location of a multidisciplinary team actually facilitates a positive patient experience concerning GP care. The aim of this study was to verify whether the co-location of GPs in PCCs is associated positively with patient satisfaction with their GP when patients have experience of a multidisciplinary team. We also investigated whether patients who frequently use health services, due to their complex needs, benefitted the most from the co-location of a multidisciplinary team. The study used data from a population survey carried out in Tuscany (central Italy) at the beginning of 2015 to evaluate the patients' experience and satisfaction with their GPs. Multilevel linear regression models were implemented to verify the relationship between patient satisfaction and co-location. This key explanatory variable was measured by considering both the list of GPs working in PCCs and the answers of surveyed patients who had experienced the co-location of their GP in a multidisciplinary team. We also explored the effect modification on patient satisfaction due to the use of hospitalisation, access to emergency departments and visits with specialists, by performing the multilevel modelling on two strata of patient data: frequent and non-frequent health service users. A sample of 2025 GP patients were included in the study, 757 of which were patients of GPs working in a PCC. Patient satisfaction with their GP was generally positive. Results showed that having a GP working within a PCC and the experience of the co-located multidisciplinary team were associated with a higher satisfaction (p < 0.01). 
For non-frequent users of health services, on the other hand, the co-location of the multidisciplinary team in PCCs was not significantly associated with patient satisfaction, whereas for frequent users, the strength of the relationships identified in the overall model increased (p < 0.01). The co-location of GPs with other professionals and their joint working as experienced in PCCs seems to represent a greater benefit for patients, especially for those with complex needs who use primary care, hospitals, emergency care and specialized care frequently.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, M. S.; Miller, D. H.; Fowley, M. D.

    The Savannah River National Laboratory (SRNL) was tasked to support validation of the Defense Waste Processing Facility (DWPF) melter offgas flammability model for the nitric-glycolic (NG) flowsheet. The work supports Deliverable 4 of the DWPF & Saltstone Facility Engineering Technical Task Request (TTR) and is supplemental to the Cold Cap Evaluation Furnace (CEF) testing conducted in 2014. The Slurry-fed Melt Rate Furnace (SMRF) was selected for the supplemental testing as it requires significantly fewer resources than the CEF and could provide a tool for more rapid analysis of melter feeds in the future. The SMRF platform has been used previously to evaluate melt rate behavior of DWPF glasses, but was modified to accommodate analysis of the offgas stream. Additionally, the Melt Rate Furnace (MRF) and Quartz Melt Rate Furnace (QMRF) were utilized for evaluations. MRF data was used exclusively for melt behavior observations and REDuction/OXidation (REDOX) prediction comparisons and will be briefly discussed in conjunction with its support of the SMRF testing. The QMRF was operated similarly to the SMRF for the same TTR task, but will be discussed in a separate future report. The overall objectives of the SMRF testing were to: (1) evaluate the efficacy of the SMRF as a platform for steady state melter testing with continuous feeding and offgas analysis; and (2) generate supplemental melter offgas flammability data to support the melter offgas flammability modelling effort for DWPF implementation of the NG flowsheet.

  14. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold Cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H{sub 2} and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H{sub 2} and TOC-to-CO at varying melter vapor space temperatures were found to trend and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H{sub 2} and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (T{sub tw}). At each steady state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H{sub 2}, CO, CO{sub 2}, NO{sub x}, and organic gases such as CH{sub 4}.
The standard deviation of the average vapor space temperature during each steady state ranged from 2 to 6°C; however, those of the measured off-gas data were much larger due to the inherent cold cap instabilities in the slurry-fed melters. In order to predict the off-gas composition at the sampling location downstream of the film cooler, the measured feed composition was charge-reconciled and input into the DWPF melter off-gas flammability model, which was then run under the conditions for each of the six Phase 1 steady states. In doing so, it was necessary to perform an overall heat/mass balance calculation from the melter to the Off-Gas Condensate Tank (OGCT) in order to estimate the rate of air inleakage as well as the true gas temperature in the CEF vapor space (T{sub gas}) during each steady state by taking into account the effects of thermal radiation on the measured temperature (T{sub tw}). The results of Phase 1 data analysis and subsequent model runs showed that the predicted concentrations of H{sub 2} and CO by the DWPF model correctly trended and further bounded the respective measured data in the CEF off-gas by over predicting the TOC-to-H{sub 2} and TOC-to-CO conversion ratios by a factor of 2 to 5; an exception was the 7X over prediction of the latter at T{sub gas} = 371°C but the impact of CO on the off-gas flammability potential is only minor compared to that of H{sub 2}. More importantly, the seemingly-excessive over prediction of the TOC-to-H{sub 2} conversion by a factor of 4 or higher at T{sub gas} < ~350°C was attributed to the conservative antifoam decomposition scheme added recently to the model and therefore is considered a modeling issue and not a design issue. At T{sub gas} > ~350°C, the predicted TOC-to-H{sub 2} conversions were closer to but still higher than the measured data by a factor of 2, which may be regarded as adequate from the safety margin standpoint. 
The heat/mass balance calculations also showed that the correlation between T{sub tw} and T{sub gas} in the CEF vapor space was close to that of the ½ scale SGM, whose data were taken as directly applicable to the DWPF melter and thus used to set all the parameters of the original model. Based on these results of the CEF Phase 1 off-gas and thermal data analyses, it is concluded that: (1) The thermal characteristics of the CEF vapor space are prototypic thanks to its prototypic design; and (2) The CEF off-gas data are scalable in terms of predicting the flammability potential of the DWPF melter off-gas. These results also show that the existing DWPF safety controls on the TOC and antifoam as a function of nitrate are conservative by the same order of magnitude shown by the Phase 1 data at T{sub gas} < ~350°C, since they were set at T{sub gas} = 294°C, which falls into the region of excessive conservatism for the current DWPF model in terms of predicting the TOC-to-H{sub 2} conversion. In order to remedy the overly-conservative antifoam decomposition scheme used in the current DWPF model, the data from two recent tests will be analyzed in detail in order to gain additional insights into the antifoam decomposition chemistry in the cold cap. The first test was run in a temperature-programmed furnace using both normal and spiked feeds with fresh antifoam under inert and slightly oxidizing vapor space conditions. Phase 2 of the CEF test was run with the baseline nitric-glycolic acid flowsheet feeds that contained the “processed antifoam” and those spiked with fresh antifoam in order to study the effects of antifoam concentration as well as processing history on its decomposition chemistry under actual melter conditions. The goal is to develop an improved antifoam decomposition model from the analysis of these test data and incorporate it into a new multistage cold cap model to be developed concurrently for the nitric-glycolic acid flowsheet feeds. 
These activities will be documented in the Phase 2 report. Finally, it is recommended that some of the conservatism in the existing DWPF safety controls be removed by improving the existing measured-vs.-true gas temperature correlation used in the melter vapor space combustion calculations. The basis for this recommendation comes from the fact that the existing correlation was developed by linearly extrapolating the SGM data taken over a relatively narrow temperature range down to the safety basis minimum of 460°C, thereby under predicting the true gas temperature considerably, as documented in this report. Specifically, the task of improving the current temperature correlation will involve: (1) performing a similar heat/mass balance analysis used in this study on actual DWPF data, (2) validating the measured-vs.-true gas temperature correlation for the CEF developed in this study against the DWPF melter heat/mass balance results, and (3) making adjustments to the CEF correlation, if necessary, before incorporating it into the DWPF safety basis calculations. The steps described here can be completed with relatively minimal effort.
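    The scalability argument in this record reduces to simple arithmetic: if the CEF data are "scalable," the instantaneous H{sub 2} and CO flows in the DWPF melter exhaust can be bounded by multiplying the pilot-scale measured flows by the feed rate ratio, and conservatism is expressed as the factor by which the model over predicts a TOC-to-gas conversion. The sketch below illustrates that arithmetic with placeholder numbers, not CEF or DWPF measurements.

```python
# Feed-rate-ratio scaling and conservatism factor (illustrative numbers).

def scale_gas_flow(pilot_gas_flow, pilot_feed_rate, full_feed_rate):
    """Bound the full-scale gas flow by the pilot-scale flow times the
    feed rate ratio (the scalability argument in the report)."""
    return pilot_gas_flow * (full_feed_rate / pilot_feed_rate)

def conservatism_factor(predicted_conversion, measured_conversion):
    """Factor by which the model over predicts a TOC-to-gas conversion;
    values >= 1 mean the model bounds the measurement."""
    return predicted_conversion / measured_conversion

# Hypothetical: pilot melter produces 0.2 scfm H2 at a 0.2 gpm feed rate;
# the full-scale melter feeds at 1.0 gpm.
h2_bound = scale_gas_flow(0.2, 0.2, 1.0)    # upper-bound estimate, scfm
margin = conservatism_factor(0.10, 0.04)    # model predicts 2.5x the measured conversion
print(h2_bound, margin)
```

The report's "factor of 2 to 5" over prediction of TOC-to-H{sub 2} and TOC-to-CO conversions corresponds to conservatism factors in that range.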

  15. Reversal of apixaban anticoagulation by four-factor prothrombin complex concentrates in healthy subjects: a randomized three-period crossover study.

    PubMed

    Song, Y; Wang, Z; Perlstein, I; Wang, J; LaCreta, F; Frost, R J A; Frost, C

    2017-11-01

    Essentials Prothrombin complex concentrates (PCCs) may reverse the effect of factor Xa (FXa) inhibitors. We conducted an open-label, randomized, placebo-controlled, three-period crossover study in 15 subjects. Both PCCs rapidly reversed apixaban-mediated decreases in mean endogenous thrombin potential. Four-factor PCC administration had no effect on apixaban pharmacokinetics or anti-FXa activity. Background Currently, there is no approved reversal agent for direct activated factor Xa (FXa) inhibitors; however, several agents are under investigation, including prothrombin complex concentrates (PCCs). Objective This open-label, randomized, placebo-controlled, three-period crossover study assessed the effect of two four-factor PCCs on apixaban pharmacodynamics and pharmacokinetics in 15 healthy subjects. Methods Subjects received apixaban 10 mg twice daily for 3 days. On day 4, 3 h after apixaban, subjects received a 30-min infusion of 50 IU/kg of Cofact, Beriplex P/N (Beriplex), or saline. Change in endogenous thrombin potential (ETP), measured with a thrombin generation assay (TGA), was the primary endpoint. Secondary endpoints included changes in other TGA parameters, prothrombin time (PT), International Normalized Ratio (INR), activated partial thromboplastin time, anti-FXa activity, apixaban pharmacokinetics, and safety. Results Apixaban-related changes in ETP and several other pharmacodynamic measures occurred following apixaban administration. Both PCCs reversed apixaban's effect on ETP; the differences in adjusted mean change from pre-PCC baseline to end of infusion were 425 nM·min (95% confidence interval [CI] 219.8-630.7 nM·min; P < 0.001) for Cofact, and 91 nM·min (95% CI -31.3 to 212.4 nM·min; P > 0.05) for Beriplex. Both PCCs returned ETP to pre-apixaban baseline levels 4 h after PCC infusion, versus 45 h for placebo. For both PCCs, mean ETP peaked 21 h after PCC initiation, and then slowly decreased over the following 48 h.
Both PCCs reversed apixaban's effect on TGA peak height, PT, and INR. Apixaban pharmacokinetic and anti-FXa profiles were consistent across treatments. Conclusions Cofact and Beriplex reversed apixaban's steady-state effects on several coagulation assessments. © 2017 International Society on Thrombosis and Haemostasis.

  16. Devices and methods for managing noncombustible gasses in nuclear power plants

    DOEpatents

    Marquino, Wayne; Moen, Stephan C; Wachowiak, Richard M; Gels, John L; Diaz-Quiroz, Jesus; Burns, Jr., John C

    2014-12-23

    Systems passively eliminate noncondensable gasses from facilities susceptible to damage from combustion of built-up noncondensable gasses, such as H2 and O2 in nuclear power plants, without the need for external power and/or moving parts. Systems include catalyst plates installed in a lower header of the Passive Containment Cooling System (PCCS) condenser, a catalyst packing member, and/or a catalyst coating on an interior surface of a condensation tube of the PCCS condenser or an annular outlet of the PCCS condenser. Structures may have surfaces or hydrophobic elements that inhibit water formation and promote contact with the noncondensable gas. Noncondensable gasses in a nuclear power plant are eliminated by installing and using the systems individually or in combination. An operating pressure of the PCCS condenser may be increased to facilitate recombination of noncondensable gasses therein.

  17. Devices and methods for managing noncondensable gasses in nuclear power plants

    DOEpatents

    Marquino, Wayne; Moen, Stephan C.; Wachowiak, Richard M.; Gels, John L.; Diaz-Quiroz, Jesus; Burns, Jr., John C.

    2016-11-15

    Systems passively eliminate noncondensable gasses from facilities susceptible to damage from combustion of built-up noncondensable gasses, such as H2 and O2 in nuclear power plants, without the need for external power and/or moving parts. Systems include catalyst plates installed in a lower header of the Passive Containment Cooling System (PCCS) condenser, a catalyst packing member, and/or a catalyst coating on an interior surface of a condensation tube of the PCCS condenser or an annular outlet of the PCCS condenser. Structures may have surfaces or hydrophobic elements that inhibit water formation and promote contact with the noncondensable gas. Noncondensable gasses in a nuclear power plant are eliminated by installing and using the systems individually or in combination. An operating pressure of the PCCS condenser may be increased to facilitate recombination of noncondensable gasses therein.

  18. Characterization of DWPF recycle condensate materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.; Adamson, D. J.; King, W. D.

    2015-04-01

    A Defense Waste Processing Facility (DWPF) Recycle Condensate Tank (RCT) sample was delivered to the Savannah River National Laboratory (SRNL) for characterization with particular interest in the concentration of I-129, U-233, U-235, total U, and total Pu. Since a portion of Salt Batch 8 will contain DWPF recycle materials, the concentration of I-129 is important to understand for salt batch planning purposes. The chemical and physical characterizations are also needed as input to the interpretation of future work aimed at determining the propensity of the RCT material to foam, and methods to remediate any foaming potential. According to DWPF, the Tank Farm 2H evaporator has experienced foaming while processing DWPF recycle materials. The characterization work on the RCT samples has been completed and is reported here.

  19. ELIMINATION OF THE CHARACTERIZATION OF DWPF POUR STREAM SAMPLE AND THE GLASS FABRICATION AND TESTING OF THE DWPF SLUDGE BATCH QUALIFICATION SAMPLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amoroso, J.; Peeler, D.; Edwards, T.

    2012-05-11

    A recommendation to eliminate all characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification sample was made by a Six-Sigma team chartered to eliminate non-value-added activities for the Defense Waste Processing Facility (DWPF) sludge batch qualification program and is documented in the report SS-PIP-2006-00030. That recommendation was supported through a technical data review by the Savannah River National Laboratory (SRNL) and is documented in the memorandums SRNL-PSE-2007-00079 and SRNL-PSE-2007-00080. At the time of writing those memorandums, the DWPF was processing sludge-only waste but has since transitioned to a coupled operation (sludge and salt). The SRNL was recently tasked to perform a similar data review relevant to coupled operations and re-evaluate the previous recommendations. This report evaluates the validity of eliminating the characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification samples based on sludge-only and coupled operations. The pour stream sample has confirmed the DWPF's ability to produce an acceptable waste form from Slurry Mix Evaporator (SME) blending and product composition/durability predictions for the previous sixteen years, but ultimately the pour stream analysis has added minimal value to the DWPF's waste qualification strategy. Similarly, the information gained from the glass fabrication and PCT of the sludge batch qualification sample was determined to add minimal value to the waste qualification strategy since that sample is routinely not representative of the waste composition ultimately processed at the DWPF due to blending and salt processing considerations. Moreover, the qualification process has repeatedly confirmed minimal differences in glass behavior from actual radioactive waste to glasses fabricated from simulants or batch chemicals.
In contrast, the variability study has added significant value to the DWPF's qualification strategy. The variability study has evolved to become the primary aspect of the DWPF's compliance strategy, as it has been shown to be versatile and capable of adapting to the DWPF's various and diverse waste streams and blending strategies. The variability study, which aims to ensure that durability requirements are met and that the PCT and chemical composition correlations are valid for the compositional region to be processed at the DWPF, must continue to be performed. Due to the importance of the variability study and its place in the DWPF's qualification strategy, it is also discussed in this report. An analysis of historical data and Production Records indicated that the recommendation of the Six-Sigma team to eliminate all characterization of pour stream glass samples and the glass fabrication and PCT performed with the qualification glass does not compromise the DWPF's current compliance plan. Furthermore, the DWPF should continue to produce an acceptable waste form following the remaining elements of the Glass Product Control Program, regardless of a sludge-only or coupled operations strategy. If the DWPF does decide to eliminate the characterization of pour stream samples, pour stream samples should continue to be collected for archival reasons, which would allow testing to be performed should any issues arise or new repository test methods be developed.

  20. A multifaceted approach to spreading palliative care consultation services in California public hospital systems.

    PubMed

    Brousseau, Ruth Tebbets; Jameson, Wendy; Kalanj, Boris; Kerr, Kathleen; O'Malley, Kate; Pantilat, Steven

    2012-01-01

Historically, California's 17 public hospital systems (those that are county owned and operated, and those University of California medical centers with the mandate to serve low-income, vulnerable populations) have struggled to implement Palliative Care Consultation Services (PCCS), despite demonstrated need for these services among the uninsured and Medicaid populations served by these facilities. Since 2008, through a collaborative effort of a foundation, a palliative care training center, and a nonprofit quality improvement organization, the Spreading Palliative Care in Public Hospitals initiative (SPCPH) has resulted in a 3-fold increase in the number of California public hospitals providing PCCS, from 4 to 12. The SPCPH leveraged grant funding, the trusted relationships between California public hospitals and their quality improvement organization, technical assistance and training, peer support and learning, and a tailored business case demonstrating the financial/resource utilization benefits of dedicated PCCS. This article describes the SPCPH's distinctive design, features of the public hospital PCCS, patient and team characteristics, and PCCS provider perceptions of the environmental factors and SPCPH features that promoted or impeded their success. Lessons learned may have implications for other hospital systems undertaking implementation of palliative care services. © 2012 National Association for Healthcare Quality.

  1. Double minute chromosomes in mouse methotrexate-resistant cells studied by atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng Xinyu; Zhang Liangyu; Zhang Yu

    2006-08-11

Double minute chromosomes (DMs) are acentric, autonomously replicating extra-chromosomes and frequently mediate gene amplification in tumor and drug-resistant cells. Atomic force microscopy (AFM) is a powerful tool in microbiology. We used AFM to explore the ultrastructure of DMs in mouse fibroblasts 3T3R500. DMs in various phases of the cell cycle were also studied in order to elucidate the mechanisms of their duplication and separation. Metaphase spreads and induced premature condensed chromosomes (PCCs) were observed under the AFM. DMs were found to be composed of two compact spheres linked by fibers. Fibers of DMs directly connected with metaphase chromosomes were also observed. Many single-minutes and few DMs were detected in G1 PCCs, while more DMs were detected in S PCCs than in G1 PCCs. In addition, all of the DMs in G2 PCCs were coupled. Our present results suggested that DMs might divide into single-minutes during or before G1-phase, followed by duplication of the single-minutes in S-phase. Moreover, we introduced a new powerful tool to study DMs and obtained promising results.

  2. SCIX IMPACT ON DWPF CPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koopman, D.

    2011-07-14

A program was conducted to systematically evaluate potential impacts of the proposed Small Column Ion Exchange (SCIX) process on the Defense Waste Processing Facility (DWPF) Chemical Processing Cell (CPC). The program involved a series of interrelated tasks. Past studies of the impact of crystalline silicotitanate (CST) and monosodium titanate (MST) on DWPF were reviewed. Paper studies and material balance calculations were used to establish reasonable bounding levels of CST and MST in sludge. Following the paper studies, Sludge Batch 10 (SB10) simulant was modified to have both bounding and intermediate levels of MST and ground CST. The SCIX flowsheet includes grinding of the CST, whose particles are larger than DWPF frit when not ground. Nominal ground CST was not yet available; therefore, a similar CST previously ground at the Savannah River National Laboratory (SRNL) was used. It was believed that this CST was over-ground and that it would bound the impact of nominal CST on sludge slurry properties. Lab-scale simulations of the DWPF CPC were conducted using SB10 simulants with no, intermediate, and bounding levels of CST and MST. Tests included both the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles. Simulations were performed at high and low acid stoichiometry. A demonstration of the extended CPC flowsheet was made that included streams from the site interim salt processing operations. A simulation using irradiated CST and MST was also completed. An extensive set of rheological measurements was made to search for potential adverse consequences of CST and MST on slurry rheology in the CPC. The SCIX CPC impact program was conducted in parallel with a program to evaluate the impact of SCIX on the final DWPF glass waste form and on the DWPF melter throughput. The studies must be considered together when evaluating the full impact of SCIX on DWPF.
Because the alternative flowsheet for DWPF has not been selected, this study did not consider the impact of proposed future alternative DWPF CPC flowsheets. The impact of the SCIX streams on DWPF processing using the selected flowsheet needs to be considered as part of the technical baseline studies for coupled processing with the selected flowsheet. In addition, the downstream impact of aluminum dissolution on waste containing CST and MST has not yet been evaluated. The current baseline would not subject CST to the aluminum dissolution process, and technical concerns with performing the dissolution with CST have been expressed. Should this option become feasible, the downstream impact should be considered. The main area of concern for DWPF from aluminum dissolution is an impact on rheology. The SCIX project is planning for SRNL to complete MST, CST, and sludge rheology testing to evaluate any expected changes. The impact of ground CST transport and flush water on the DWPF CPC feed tank (and potential need for decanting) has not been defined or studied.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarrack, A.G.

The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems, were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the "Facility Commitments" section. The purpose of the "Assumptions" section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).

  4. Summary Report For The Analysis Of The Sludge Batch 7b (Macrobatch 9) DWPF Pour Stream Glass Sample For Canister S04023

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, F. C.

    2013-11-18

In order to comply with the Defense Waste Processing Facility (DWPF) Waste Form Compliance Plan for Sludge Batch 7b, Savannah River National Laboratory (SRNL) personnel characterized the DWPF pour stream (PS) glass sample collected while filling canister S04023. This report summarizes the results of the compositional analysis for reportable oxides and radionuclides and the normalized Product Consistency Test (PCT) results. The PCT responses indicate that the DWPF produced glass that is significantly more durable than the Environmental Assessment glass.

  5. Testing of the Defense Waste Processing Facility Cold Chemical Dissolution Method in Sludge Batch 9 Qualification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.; Pareizs, J.; Coleman, C.

For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) tests the applicability of the digestion methods used by the DWPF Laboratory for elemental analysis of Sludge Receipt and Adjustment Tank (SRAT) Receipt samples and SRAT Product process control samples. DWPF SRAT samples are typically dissolved using a method referred to as the DWPF Cold Chemical or Cold Chem Method (CC) (see DWPF Procedure SW4-15.201). Testing indicates that the CC method produced mixed results. The CC method did not result in complete dissolution of either the SRAT Receipt or SRAT Product, with some fine, dark solids remaining. However, elemental analyses did not reveal extreme biases for the major elements in the sludge when compared with analyses obtained following dissolution by hot aqua regia (AR) or sodium peroxide fusion (PF) methods. The CC elemental analyses agreed with the AR and PF methods well enough that the CC method should be adequate for routine process control analyses in the DWPF after much more extensive side-by-side tests of the CC method and the PF method are performed on the first 10 SRAT cycles of the Sludge Batch 9 (SB9) campaign. The DWPF Laboratory should continue with their plans for further tests of the CC method during these 10 SRAT cycles.

  6. Defense waste processing facility (DWPF) liquidus model: revisions for processing higher TiO2-containing glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C. M.; Edwards, T. B.; Trivelpiece, C. L.

Radioactive high level waste (HLW) at the Savannah River Site (SRS) has successfully been vitrified into borosilicate glass in the Defense Waste Processing Facility (DWPF) since 1996. Vitrification requires stringent product/process (P/P) constraints since the glass cannot be reworked once it is poured into ten-foot-tall by two-foot-diameter canisters. A unique "feed forward" statistical process control (SPC) was developed for this control rather than statistical quality control (SQC). In SPC, the feed composition to the DWPF melter is controlled prior to vitrification. In SQC, the glass product would be sampled after it is vitrified. Individual glass property-composition models form the basis for the "feed forward" SPC. The models transform constraints on the melt and glass properties into constraints on the feed composition going to the melter in order to guarantee, at the 95% confidence level, that the feed will be processable and that the durability of the resulting waste form will be acceptable to a geologic repository. This report documents the development of revised TiO2, Na2O, Li2O, and Fe2O3 coefficients in the DWPF liquidus model and revised coefficients (a, b, c, and d).
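The "feed forward" idea described above, checking a feed composition against property constraints before vitrification rather than sampling the product afterward, can be illustrated with a minimal sketch. The linear model coefficients, standard error, and property limit below are illustrative inventions, not DWPF model values:

```python
# Toy "feed forward" acceptability check: a linear glass-property model
# predicts a melt property from feed composition, and the feed is accepted
# only if the prediction plus a one-sided 95% margin stays under the limit.
# All numbers here are illustrative, not DWPF model values.

COEFFS = {"SiO2": 0.02, "B2O3": -0.05, "Na2O": 0.08}   # per wt% oxide
INTERCEPT = 1.0
STD_ERR = 0.10        # model prediction standard error (illustrative)
Z95 = 1.645           # one-sided 95% z-value (large-sample approximation)
LIMIT = 5.0           # property constraint (illustrative)

def predict(feed_wt_pct):
    """Predicted property value for a feed composition (wt% oxides)."""
    return INTERCEPT + sum(COEFFS[ox] * feed_wt_pct.get(ox, 0.0)
                           for ox in COEFFS)

def acceptable(feed_wt_pct):
    """Accept the feed only if prediction + 95% margin meets the limit."""
    return predict(feed_wt_pct) + Z95 * STD_ERR <= LIMIT

feed = {"SiO2": 50.0, "B2O3": 8.0, "Na2O": 12.0}
print(predict(feed), acceptable(feed))
```

The confidence margin is what makes the control "feed forward": a composition is rejected before melting if the model cannot guarantee, with the stated confidence, that the constraint will be met.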

  7. DWPF STARTUP FRIT VISCOSITY MEASUREMENT ROUND ROBIN RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crum, Jarrod V.; Edwards, Tommy B.; Russell, Renee L.

    2012-07-31

A viscosity standard is needed to replace the National Institute of Standards and Technology (NIST) glasses currently being used to calibrate viscosity measurement equipment. The current NIST glasses are either unavailable or less than ideal for calibrating equipment to measure the viscosity of high-level waste glasses. This report documents the results of a viscosity round robin study conducted on the Defense Waste Processing Facility (DWPF) startup frit. DWPF startup frit was selected because its viscosity-temperature relationship is similar to most DWPF and Hanford high-level waste glass compositions. The glass underwent grinding and blending to homogenize the large (100 lb) batch. Portions of the batch were supplied to the laboratories (named A through H) for viscosity measurements following a specified temperature schedule covering 1150 °C to 950 °C, with an option to measure viscosity at lower temperatures if their equipment was capable of measuring the higher viscosities. Results were used to fit the Vogel-Tammann-Fulcher and Arrhenius equations to viscosity as a function of temperature for the entire temperature range of 460 °C through 1250 °C as well as the limited temperature interval of approximately 950 °C through 1250 °C. The standard errors for confidence and prediction were determined for the fitted models.
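The Arrhenius fit mentioned above is linear in 1/T, so it can be sketched with ordinary least squares. The viscosity-temperature pairs below are illustrative values, not round-robin measurements:

```python
import math

# Hypothetical viscosity data (temperature in K, viscosity in Pa·s);
# illustrative values only, not round-robin results.
data = [(1223.0, 95.0), (1323.0, 38.0), (1423.0, 18.0), (1523.0, 9.5)]

# Arrhenius form: ln(eta) = A + B / T  ->  linear least squares in x = 1/T.
xs = [1.0 / T for T, _ in data]
ys = [math.log(eta) for _, eta in data]
n = len(data)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
B = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
A = y_bar - B * x_bar

def eta_arrhenius(T):
    """Predicted viscosity (Pa·s) at temperature T (K)."""
    return math.exp(A + B / T)

print(A, B, eta_arrhenius(1373.0))
```

The Vogel-Tammann-Fulcher form, ln(eta) = A + B/(T - T0), adds a third parameter T0 and requires nonlinear fitting; the report's standard errors for confidence and prediction come from the regression statistics of such fits.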

  8. Palliative care consultations for heart failure patients: how many, when, and why?

    PubMed

    Bakitas, Marie; Macmartin, Meredith; Trzepkowski, Kenneth; Robert, Alina; Jackson, Lisa; Brown, Jeremiah R; Dionne-Odom, James N; Kono, Alan

    2013-03-01

    In preparation for development of a palliative care intervention for patients with heart failure (HF) and their caregivers, we aimed to characterize the HF population receiving palliative care consultations (PCCs). Reviewing charts from January 2006 to April 2011, we analyzed HF patient data including demographic and clinical characteristics, Seattle Heart Failure scores, and PCCs. Using Atlas qualitative software, we conducted a content analysis of PCC notes to characterize palliative care assessment and treatment recommendations. There were 132 HF patients with PCCs, of which 37% were New York Heart Association functional class III and 50% functional class IV. Retrospectively computed Seattle Heart Failure scores predicted 1-year mortality of 29% [interquartile range (IQR) 19-45] and median life expectancy of 2.8 years [IQR 1.6-4.2] years. Of the 132 HF patients, 115 (87%) had died by the time of the audit. In that cohort the actual median time from PCC to death was 21 [IQR 3-125] days. Reasons documented for PCCs included goals of care (80%), decision making (24%), hospice referral/discussion (24%), and symptom management (8%). Despite recommendations, PCCs are not being initiated until the last month of life. Earlier referral for PCC may allow for integration of a broader array of palliative care services. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Production code control system for hydrodynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slone, D.M.

    1997-08-18

We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We will also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.
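The backplane idea, registering disparate tools behind one consistent interface so each new tool needs only minimal wiring, can be sketched briefly. The original system was written in Perl; this Python sketch and all its names (the `Backplane` class, the stage names) are hypothetical illustrations of the pattern, not the PCCS implementation:

```python
# Minimal sketch of a "backplane" that plugs independent tools into one
# framework, in the spirit of the PCCS design described above. All names
# here are hypothetical; the real system was written in Perl.

class Backplane:
    def __init__(self):
        self._tools = {}          # tool name -> callable stage

    def register(self, name, tool):
        """Teach the backplane a minimal amount about a new tool."""
        self._tools[name] = tool

    def run(self, pipeline, payload):
        """Run registered stages in order, passing results along."""
        for name in pipeline:
            payload = self._tools[name](payload)
        return payload

bp = Backplane()
bp.register("preprocess", lambda d: {**d, "clean": True})
bp.register("simulate", lambda d: {**d, "result": 42})
bp.register("postprocess", lambda d: {**d, "report": f"result={d['result']}"})

out = bp.run(["preprocess", "simulate", "postprocess"], {"input": "mesh"})
print(out["report"])
```

The key property matches the abstract's claim: existing stages never change when a new one is registered, because every tool sees the same payload-in, payload-out contract.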

  10. VERIFICATION OF THE DEFENSE WASTE PROCESSING FACILITY PROCESS DIGESTION METHOD FOR THE SLUDGE BATCH 6 QUALIFICATION SAMPLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Click, D.; Jones, M.; Edwards, T.

    2010-06-09

For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) confirms applicability of the digestion method to be used by the DWPF lab for elemental analysis of Sludge Receipt and Adjustment Tank (SRAT) receipt samples and SRAT product process control samples. DWPF SRAT samples are typically dissolved using a room temperature HF-HNO3 acid dissolution (i.e., DWPF Cold Chem (CC) Method, see DWPF Procedure SW4-15.201) and then analyzed by inductively coupled plasma - atomic emission spectroscopy (ICP-AES). In addition to the CC method confirmation, the DWPF lab's mercury (Hg) digestion method was also evaluated for applicability to SB6 (see DWPF procedure 'Mercury System Operating Manual', SW4-15.204, Section 6.1, Revision 5, effective date 12-04-03). This report contains the results and comparison of data generated from performing the Aqua Regia (AR), Sodium Peroxide/Hydroxide Fusion (PF), and DWPF Cold Chem (CC) method digestion of Sludge Batch 6 (SB6) SRAT Receipt and SB6 SRAT Product samples. For validation of the DWPF lab's Hg method, only SRAT receipt material was used and compared to AR digestion results. The SB6 SRAT Receipt and SB6 SRAT Product samples were prepared in the SRNL Shielded Cells, and the SRAT Receipt material is representative of the sludge that constitutes the SB6 Batch or qualification composition. This is the sludge in Tank 51 that is to be transferred into Tank 40, which will contain the heel of Sludge Batch 5 (SB5), to form the SB6 Blend composition. In addition to the 16 elements currently measured by the DWPF, this report includes Hg and thorium (Th) data (Th comprising approximately 2.5-3 wt % of the total solids in SRAT Receipt and SRAT Product, respectively) and provides specific details of ICP-AES analysis of Th.
Thorium was found to interfere with the U 367.007 nm emission line, and an inter-element correction (IEC) had to be applied to U data, which is also discussed. The results for any one particular element should not be used in any way to identify the form or speciation of a particular element without support from XRD analysis or used to estimate ratios of compounds in the sludge.

  11. ESIP's Emerging Provenance and Context Content Standard Use Cases: Developing Examples and Models for Data Stewardship

    NASA Astrophysics Data System (ADS)

    Ramdeen, S.; Hills, D. J.

    2013-12-01

Earth science data collections range from individual researchers' private collections to large-scale data warehouses, from computer-generated data to field or lab based observations. These collections require stewardship. Fundamentally, stewardship ensures long term preservation and the provision of access to the user community. In particular, stewardship includes capturing appropriate metadata and documentation, and thus the context of the data's creation and any changes the data underwent over time, to enable data reuse. But scientists and science data managers must translate these ideas into practice. How does one balance the needs of current and (projected) future stakeholders? In 2011, the Data Stewardship Committee (DSC) of the Federation of Earth Science Information Partners (ESIP) began developing the Provenance and Context Content Standard (PCCS). As an emerging standard, PCCS provides a framework for 'what' must be captured or preserved as opposed to describing only 'how' it should be done. Originally based on the experiences of NASA and NOAA researchers within ESIP, the standard currently provides data managers with content items aligned to eight key categories. While the categories and content items are based on data life cycles of remote sensing missions, they can be generalized to cover a broader set of activities, for example, preservation of physical objects. These categories will include the information needed to ensure the long-term understandability and usability of earth science data products. In addition to the PCCS, the DSC is developing a series of use cases based on the perspectives of the data archiver, data user, and data consumer that will connect theory and practice. These cases will act as specifications for developing PCCS-based systems.
They will also allow examination of the categories and content items covered in the PCCS to determine whether any additions are needed to cover the various use cases, and will provide rationale and indicate priorities for preservation. Though the use cases currently focus on two areas, 'creating' a data set and 'using' a data set, they will eventually cover the full data lifecycle. The DSC is currently developing a template for future use case creation and is preparing and testing more use case scenarios. This presentation will introduce the ESIP use cases based on the PCCS, with the aim of expanding stakeholder participation and showing the application of these materials beyond the ESIP community in which they were developed. More information about the ESIP use case activities can be found on the DSC wiki - http://wiki.esipfed.org/index.php/Preservation_Use_Case_Activity.

  12. Nuclear Waste: Defense Waste Processing Facility-Cost, Schedule, and Technical Issues.

    DTIC Science & Technology

    1992-06-17

gallons of high-level radioactive waste stored in underground tanks at the Savannah... The major facility involved is the Defense Waste Processing Facility (DWPF)... As a result of concerns about potential problems with the DWPF and delays in its scheduled start-up, the Chairman of the Environment, Energy, and Natural Resources Subcommittee, House Committee on Government Operations, asked GAO to review the status of the DWPF and other facilities. This report

  13. Comparative risk assessments for the production and interim storage of glass and ceramic waste forms: Defense waste processing facility

    NASA Astrophysics Data System (ADS)

    Huang, J. C.; Wright, W. V.

    1982-04-01

    The Defense Waste Processing Facility (DWPF) for immobilizing nuclear high level waste (HLW) is scheduled to be built. High level waste is produced when reactor components are subjected to chemical separation operations. Two candidates for immobilizing this HLW are borosilicate glass and crystalline ceramic, either being contained in weld sealed stainless steel canisters. A number of technical analyses are being conducted to support a selection between these two waste forms. The risks associated with the manufacture and interim storage of these two forms in the DWPF are compared. Process information used in the risk analysis was taken primarily from a DWPF processibility analysis. The DWPF environmental analysis provided much of the necessary environmental information.

  14. Long-Term Effectiveness of Two Educational Methods on Knowledge, Attitude, and Practice Toward Palliative Care Consultation Services Among Nursing Staff: A Longitudinal Follow-Up Study.

    PubMed

    Pan, Hsueh-Hsing; Wu, Li-Fen; Hung, Yu-Chun; Chu, Chi-Ming; Wang, Kwua-Yun

    2018-05-01

This experimental study investigated the long-term effectiveness of two educational methods on knowledge, attitude, and practice (KAP) regarding palliative care consultation services (PCCS) among nurses, recruited from a medical center located in Northern Taiwan in 2015 using a stratified cluster sampling method, with 88 participants in the multimedia (experimental) group and 92 in the traditional paper education (control) group. Data were collected using the KAP-PCCS questionnaire before education, immediately after, and at the 3rd and 6th months after education. Results showed that both K-PCCSI and P-PCCSI scores significantly increased immediately after and at the 3rd month after education for the experimental group; the K-PCCSI remained significantly higher for the experimental group at the 6th month. The highest increase in scores for both K-PCCSI and P-PCCSI was observed at the 3rd month. There was no significant change in A-PCCS in either group over the follow-up periods compared with before education. Therefore, using multimedia every 3 months to continue strengthening nurses' knowledge may increase referrals of terminal patients to PCCS.

  15. Infrasound Detection of Rocket Launches

    DTIC Science & Technology

    2000-09-01

infrasound pressure, and λ and µ are the Lamé and shear moduli. Seismic data were available from the IRIS data center for the seismic station DWPF ...the bandwidth of interest. Figure 4 shows a recording of STS-93 (07/24/99 04:31:00 GMT) at DWPF (97 km). The largest seismic amplitudes are consistent ...lasts ~400 seconds. The dominant frequency (~4 Hz) at DWPF is consistent with the long-range infrasound signals observed at DLIAR. Figure 3. Seismic

  16. Results for the DWPF Slurry Mix Evaporator Condensate Tank, Off Gas Condensate Tank, And Recycle Collection Tank Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Fellinger, Terri

    2004-12-21

The Defense Waste Processing Facility, DWPF, currently generates approximately 1.4 million gallons of recycle water per year during Sludge-Only operations. DWPF has minimized condensate generation to 1.4 million gallons by not operating the Steam Atomized Scrubbers, SASs, for the melter off gas system. By not operating the SASs, DWPF has reduced the total volume by approximately 800,000 gallons of condensate per year. Currently, the recycle stream is sent back to the Tank Farm and processed through the 2H Evaporator system. To alleviate the load on the 2H Evaporator system, an acid evaporator design is being considered as an alternate processing and/or concentration method for the DWPF recycle stream. In order to support this alternate processing option, the DWPF has requested that the chemical and radionuclide compositions of the Off Gas Condensate Tank, OGCT, Slurry Mix Evaporator Condensate Tank, SMECT, Recycle Collection Tank, RCT, and the Decontamination Waste Treatment Tank, DWTT, be determined as a part of the process development work for the acid evaporator design. Samples have been retrieved from the OGCT, RCT, and SMECT and have been sent to the Savannah River National Laboratory, SRNL, for this characterization. The DWTT samples have been recently shipped to SRNL. The results for the DWTT samples will be issued at a later date.

  17. Prevalence of Prostate Cancer Clinical States and Mortality in the United States: Estimates Using a Dynamic Progression Model

    PubMed Central

    Scher, Howard I.; Solo, Kirk; Valant, Jason; Todd, Mary B.; Mehra, Maneesha

    2015-01-01

Objective To identify patient populations most in need of treatment across the prostate cancer disease continuum, we developed a novel dynamic transition model based on risk of disease progression and mortality. Design and Outcome Measurements We modeled the flow of patient populations through eight prostate cancer clinical states (PCCS) that are characterized by the status of the primary tumor, presence of metastases, prior and current treatment, and testosterone levels. Simulations used published US incidence rates for each year from 1990. Progression and mortality rates were derived from published clinical trials, meta-analyses, and observational studies. Model outputs included the incidence, prevalence, and mortality for each PCCS. The impact of novel treatments was modeled in three distinct scenarios: metastatic castration-resistant prostate cancer (mCRPC), non-metastatic CRPC (nmCRPC), or both. Results and Limitations The model estimated the prevalence of prostate cancer as 2,219,280 in the US in 2009 and 3,072,480 in 2020, and incidence of mCRPC as 36,100 and 42,970, respectively. All-cause mortality in prostate cancer was estimated at 168,290 in 2009 and 219,360 in 2020, with 20.5% and 19.5% of these deaths, respectively, occurring in men with mCRPC. The majority (86%) of incidence flow into mCRPC states was from the nmCRPC clinical state. In the scenario with novel interventions for nmCRPC states, the progression to mCRPC is reduced, thus decreasing mCRPC incidence by 12% in 2020, with a sustained decline in mCRPC mortality. A limitation of the model is that it does not estimate prostate cancer-specific mortality. Conclusion The model informs clinical trial design for prostate cancer by quantifying outcomes in PCCS, and demonstrates the impact of an effective therapy applied in an earlier clinical state of nmCRPC on the incidence of mCRPC morbidity and subsequent mortality. PMID:26460686
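A dynamic transition model of the kind described above can be sketched as a discrete-time state-flow simulation: a population vector is advanced year by year through a matrix of transition probabilities. The three states and all rates below are illustrative inventions for a reduced example, not the published model's eight states or estimates:

```python
# Minimal discrete-time state-flow sketch in the spirit of the dynamic
# progression model described above. States and annual transition
# probabilities are illustrative inventions, not published values.

STATES = ["nmCRPC", "mCRPC", "dead"]
# Annual transition probabilities (each row sums to 1).
P = {
    "nmCRPC": {"nmCRPC": 0.80, "mCRPC": 0.15, "dead": 0.05},
    "mCRPC":  {"nmCRPC": 0.00, "mCRPC": 0.70, "dead": 0.30},
    "dead":   {"nmCRPC": 0.00, "mCRPC": 0.00, "dead": 1.00},
}

def step(pop):
    """Advance the population counts by one year of transitions."""
    new = {s: 0.0 for s in STATES}
    for src, count in pop.items():
        for dst, p in P[src].items():
            new[dst] += count * p
    return new

pop = {"nmCRPC": 1000.0, "mCRPC": 0.0, "dead": 0.0}
for year in range(5):
    pop = step(pop)
print({s: round(v, 1) for s, v in pop.items()})
```

Lowering the nmCRPC-to-mCRPC rate in such a model (the "novel intervention" scenario) directly reduces downstream mCRPC incidence and mortality, which is the mechanism the abstract quantifies.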

  18. Exploring Type and Amount of Parent Talk during Individualized Family Service Plan Meetings

    ERIC Educational Resources Information Center

    Ridgley, Robyn; Snyder, Patricia; McWilliam, R. A.

    2014-01-01

    We discuss the utility of a coding system designed to evaluate the amount and type of parent talk during individualized family service plan (IFSP) meetings. The iterative processes used to develop the "Parent Communication Coding System" (PCCS) and its associated codes are described. In addition, we explored whether PCCS codes could be…

  19. DWPF simulant CPC studies for SB8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koopman, D. C.; Zamecnik, J. R.

    2013-06-25

The Savannah River National Laboratory (SRNL) accepted a technical task request (TTR) from Waste Solidification Engineering to perform simulant tests to support the qualification of Sludge Batch 8 (SB8) and to develop the flowsheet for SB8 in the Defense Waste Processing Facility (DWPF). These efforts pertained to the DWPF Chemical Process Cell (CPC). Separate studies were conducted for frit development and glass properties (including REDOX). The SRNL CPC effort had two primary phases divided by the decision to drop Tank 12 from the SB8 constituents. This report focuses on the second phase, with SB8 compositions that do not contain the Tank 12 piece. A separate report will document the initial phase of SB8 testing that included Tank 12. The second phase of SB8 studies consisted of two sets of CPC studies. The first study involved CPC testing of an SB8 simulant for Tank 51 to support the CPC demonstration of the washed Tank 51 qualification sample in the SRNL Shielded Cells facility. SB8-Tank 51 was a high-iron, low-aluminum waste with fairly high mercury and moderate noble metal concentrations. Tank 51 was ultimately washed to about 1.5 M sodium, which is the highest wash endpoint since SB3-Tank 51. This study included three simulations of the DWPF Sludge Receipt and Adjustment Tank (SRAT) cycle and Slurry Mix Evaporator (SME) cycle with the sludge-only flowsheet at nominal DWPF processing conditions and three different acid stoichiometries. These runs produced a set of recommendations that were used to guide the successful SRNL qualification SRAT/SME demonstration with actual Tank 51 washed waste. The second study involved five SRAT/SME runs with SB8-Tank 40 simulant. Four of the runs were designed to define the acid requirements for sludge-only processing in DWPF with respect to nitrite destruction and hydrogen generation. The fifth run was an intermediate acid stoichiometry demonstration of the coupled flowsheet for SB8.
These runs produced a set of processing recommendations for DWPF along with some data related to Safety Class documentation at DWPF. Some significant observations regarding SB8 follow: Reduced washing in Tank 51 led to an increase in the wt.% soluble solids of the DWPF feed. If wt.% total solids for the SRAT and SME product weren’t adjusted upward to maintain insoluble solids levels similar to past sludge batches, then the rheological properties of the slurry fell below the low end of the DWPF design bases for the SRAT and SME. Much higher levels of dissolved manganese were found in the SRAT and SME products than in recent sludge batches. Closed crucible melts were more reduced than expected. The working hypothesis is that the soluble Mn is less oxidizing than assumed in the REDOX calculations. A change in the coefficient for Mn in the REDOX equation was recommended in a separate report. The DWPF (Hsu) stoichiometric acid equation was examined in detail to better evaluate how to control acid in DWPF. The existing DWPF equation can likely be improved without changing the required sample analyses through a paper study using existing data. The recommended acid stoichiometry for initial SB8 SRAT batches is 115-120% until some processing experience is gained. The conservative range (based on feed properties) of stoichiometric factors derived in this study was 110-147%, but SRNL recommends using only the lower half of this range, 110-126%, even after initial batches provide processing experience. The stoichiometric range for sludge-only processing appears to be suitable for coupled operation based on results from the run in the middle of the range. Catalytic hydrogen was detectable (>0.005 vol%) in all SRAT and SME cycles. Hydrogen reached 30-35% of the SRAT and SME limits at the mid-point of the stoichiometry window (bounding noble metals and acid demand).
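The acid-stoichiometry bookkeeping summarized above can be illustrated with a minimal sketch. This is not the DWPF (Hsu) or KMA acid equation; the demand value and function name are hypothetical stand-ins, and only the 110-126% operating window and 115-120% initial-batch guidance are taken from the report.

```python
# Hypothetical illustration only: the real DWPF (Hsu) and KMA equations compute
# the stoichiometric acid demand from sample analyses (nitrite, mercury, etc.).
# Here that demand is simply an input.

RECOMMENDED_PCT = (110, 126)     # lower half of the 110-147% conservative range
INITIAL_BATCH_PCT = (115, 120)   # guidance until processing experience is gained

def acid_addition(demand_mol: float, factor_pct: float) -> float:
    """Total acid to add (mol) at a chosen stoichiometric factor (percent)."""
    lo, hi = RECOMMENDED_PCT
    if not lo <= factor_pct <= hi:
        raise ValueError(f"{factor_pct}% is outside the recommended {lo}-{hi}% window")
    return demand_mol * factor_pct / 100.0

# 100 mol of stoichiometric demand at 115% calls for 115 mol of acid
print(acid_addition(100.0, 115))
```

A factor such as 147%, although inside the conservative range derived from feed properties, falls outside the recommended window and would be rejected by this check.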

  20. DWPF SIMULANT CPC STUDIES FOR SB7B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koopman, D.

    2011-11-01

    Lab-scale DWPF simulations of Sludge Batch 7b (SB7b) processing were performed. Testing was performed at the Savannah River National Laboratory - Aiken County Technology Laboratory (SRNL-ACTL). The primary goal of the simulations was to define a likely operating window for acid stoichiometry for the DWPF Sludge Receipt and Adjustment Tank (SRAT). In addition, the testing established conditions for the SRNL Shielded Cells qualification simulation of the SB7b-Tank 40 blend, supported validation of the current glass redox model, and validated the coupled process flowsheet at the nominal acid stoichiometry. An acid window of 105-140% by the Koopman minimum acid (KMA) equation (107-142% by the DWPF Hsu equation) worked for the sludge-only flowsheet. Nitrite was present in the SRAT product for the 105% KMA run at 366 mg/kg, while SME cycle hydrogen reached 94% of the DWPF Slurry Mix Evaporator (SME) cycle limit in the 140% KMA run. The window was determined for sludge with added caustic (0.28 M additional base, or roughly 12,000 gallons of 50% NaOH to 820,000 gallons of waste slurry). A suitable processing window appears to be 107-130% by the DWPF acid equation for sludge-only processing, allowing some conservatism for the mapping of lab-scale simulant data to full-scale real waste processing, including potentially non-conservative noble metal and mercury concentrations. This window should be usable with or without the addition of up to 7,000 gallons of caustic to the batch. The window could potentially be wider if caustic is not added to SB7b. It is recommended that DWPF begin processing SB7b at 115% stoichiometry using the current DWPF equation. The factor could be increased if necessary, but changes should be made with caution and in small increments. DWPF should not concentrate past 48 wt.% total solids in the SME cycle if moderate hydrogen generation is occurring simultaneously.
The coupled flowsheet simulation made more hydrogen in the SRAT and SME cycles than the sludge-only run with the same acid stoichiometric factor. The slow acid addition in MCU seemed to alter the reactions that consumed the small excess acid present such that hydrogen generation was promoted relative to sludge-only processing. The coupled test reached higher wt.% total solids, and this likely contributed to the SME cycle hydrogen limit being exceeded at 110% KMA. It is clear from the trends in the SME processing GC data, however, that the frit slurry formic acid contributed to driving the hydrogen generation rate above the SME cycle limit. Hydrogen generation rates after the second frit addition generally exceeded those after the first frit addition. SRAT formate loss increased with increasing acid stoichiometry (15% to 35%). A substantial nitrate gain observed after acid addition (and nitrite destruction) was reversed to a net nitrate loss in runs with higher acid stoichiometry (nitrate in the SRAT product less than the sum of sludge nitrate and added nitric acid). Increased ammonium ion formation was also indicated in the runs with nitrate loss. Oxalate loss on the order of 20% was indicated in three of the four acid stoichiometry runs and in the coupled flowsheet run. The minimum acid stoichiometry run had no indicated loss. The losses were of the same order as the official analytical uncertainty of the oxalate concentration measurement, but were not randomly distributed about zero loss, so some actual loss was likely occurring. Based on the entire set of SB7b test data, it is recommended that DWPF avoid concentrating additional sludge solids in single SRAT batches to limit the concentrations of noble metals to SB7a processing levels (on a grams of noble metal per SRAT batch basis).
It is also recommended that DWPF drop the formic acid addition that accompanies the process frit 418 additions, since SME cycle data showed considerable catalytic activity for hydrogen generation from this additional acid (about 5% increase in stoichiometry occurred from the frit formic acid). Frit 418 also does not appear to need formic acid addition to prevent gel formation in the frit slurry. Simulant processing was successful using 100 ppm of 747 antifoam added prior to nitric acid instead of 200 ppm. This is a potential area for DWPF to cut antifoam usage in any future test program. An additional 100 ppm was added before formic acid addition. Foaming during formic acid addition was not observed. No build-up of oily or waxy material was observed in the off-gas equipment. Lab-scale mercury stripping behavior was similar to SB6 and SB7a. More mercury was unaccounted for as the acid stoichiometry increased.

  1. Literature review: Assessment of DWPF melter and melter off-gas system lifetime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reigel, M. M.

    2015-07-30

    A glass melter for use in processing radioactive waste is a challenging environment for the materials of construction (MOC) resulting from a combination of high temperatures, chemical attack, and erosion/corrosion; therefore, highly engineered materials must be selected for this application. The focus of this report is to review the testing and evaluations used in the selection of the Defense Waste Processing Facility (DWPF) glass contact MOC, specifically the Monofrax® K-3 refractory and Inconel® 690 alloy. The degradation or corrosion mechanisms of these materials during pilot scale testing and in-service operation were analyzed over a range of oxidizing and reducing flowsheets; however, DWPF has primarily processed a reducing flowsheet (i.e., Fe2+/ΣFe of 0.09 to 0.33) since the start of radioactive operations. This report also discusses the materials selection for the DWPF off-gas system and the corrosion evaluation of these materials during pilot scale testing and non-radioactive operations of DWPF Melter #1. Inspection of the off-gas components has not been performed during radioactive operations, with the exception of maintenance because of plugging.
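The Fe2+/ΣFe redox ratio quoted above is simply the fraction of ferrous iron over total iron. A minimal sketch of the ratio and of a check against the historically processed reducing band (function names are hypothetical; only the 0.09-0.33 band comes from the report):

```python
REDUCING_BAND = (0.09, 0.33)  # Fe2+/ΣFe range DWPF has primarily processed

def redox_ratio(fe2: float, fe3: float) -> float:
    """Fe2+/ΣFe, with ΣFe = Fe2+ + Fe3+ (any consistent units)."""
    total = fe2 + fe3
    if total <= 0:
        raise ValueError("total iron must be positive")
    return fe2 / total

def in_reducing_band(fe2: float, fe3: float) -> bool:
    """True if the melt's redox ratio falls inside the historical band."""
    return REDUCING_BAND[0] <= redox_ratio(fe2, fe3) <= REDUCING_BAND[1]

print(in_reducing_band(0.2, 0.8))  # a ratio of 0.2 falls inside 0.09-0.33
```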

  3. Sludge Washing and Demonstration of the DWPF Nitric/Formic Flowsheet in the SRNL Shielded Cells for Sludge Batch 9 Qualification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pareizs, J.; Newell, D.; Martino, C.

    Savannah River National Laboratory (SRNL) was requested by Savannah River Remediation (SRR) to qualify the next batch of sludge – Sludge Batch 9 (SB9). Current practice is to prepare sludge batches in Tank 51 by transferring sludge to Tank 51 from other tanks. The sludge is washed and transferred to Tank 40, the current Defense Waste Processing Facility (DWPF) feed tank. Prior to sludge transfer from Tank 51 to Tank 40, the Tank 51 sludge must be qualified. SRNL qualifies the sludge in multiple steps. First, a Tank 51 sample is received, then characterized, washed, and again characterized. SRNL then demonstrates the DWPF Chemical Process Cell (CPC) flowsheet with the sludge. The final step of qualification involves chemical durability measurements of glass fabricated in the DWPF CPC demonstrations. In past sludge batches, SRNL had completed the DWPF demonstration with Tank 51 sludge. For SB9, SRNL has been requested to process a blend of Tank 51 and Tank 40 at a targeted ratio of 44% Tank 51 and 56% Tank 40 on an insoluble solids basis.

  4. Degradation of Akt using protein-catalyzed capture agents.

    PubMed

    Henning, Ryan K; Varghese, Joseph O; Das, Samir; Nag, Arundhati; Tang, Grace; Tang, Kevin; Sutherland, Alexander M; Heath, James R

    2016-04-01

    Abnormal signaling of the protein kinase Akt has been shown to contribute to human diseases such as diabetes and cancer, but Akt has proven to be a challenging target for drugging. Using iterative in situ click chemistry, we recently developed multiple protein-catalyzed capture (PCC) agents that allosterically modulate Akt enzymatic activity in a protein-based assay. Here, we utilize similar PCCs to exploit endogenous protein degradation pathways. We use the modularity of the anti-Akt PCCs to prepare proteolysis targeting chimeric molecules that are shown to promote the rapid degradation of Akt in live cancer cells. These novel proteolysis targeting chimeric molecules demonstrate that the epitope targeting selectivity of PCCs can be coupled with non-traditional drugging moieties to inhibit challenging targets. Copyright © 2016 European Peptide Society and John Wiley & Sons, Ltd.

  5. Ecological studies related to the construction of the Defense Waste Processing Facility on the Savannah River Site. Annual report, FY-1991 and FY-1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, D.E.; Chazel, A.C.; Pechmann, J.H.K.

    1993-06-01

    The Defense Waste Processing Facility (DWPF) was built on the Savannah River Site (SRS) during the mid-1980s. The Savannah River Ecology Laboratory (SREL) has completed 14 years of ecological studies related to the construction of the DWPF complex. Prior to construction, the 600-acre site (S-Area) contained a Carolina bay and the headwaters of a stream. Research conducted by the SREL has focused primarily on four questions related to these wetlands: (1) Prior to construction, what fauna and flora were present at the DWPF site and at similar, yet undisturbed, alternative sites? (2) By comparing the Carolina bay at the DWPF site (Sun Bay) with an undisturbed control Carolina bay (Rainbow Bay), what effect is construction having on the organisms that inhabited the DWPF site? (3) By comparing control streams with streams on the periphery of the DWPF site, what effect is construction having on the peripheral streams? (4) How effective have efforts been to lessen the impacts of construction, both with respect to erosion control measures and the construction of "refuge ponds" as alternative breeding sites for amphibians that formerly bred at Sun Bay? Through the long-term census-taking of biota at the DWPF site and Rainbow Bay, SREL has begun to evaluate the impact of construction on the biota and the effectiveness of mitigation efforts. Similarly, the effects of erosion from the DWPF site on the water quality of S-Area peripheral streams are being assessed. This research provides supporting data relevant to the National Environmental Policy Act (NEPA) of 1969, the Endangered Species Act of 1973, Executive Orders 11988 (Floodplain Management) and 11990 (Protection of Wetlands), and United States Department of Energy (DOE) Guidelines for Compliance with Floodplain/Wetland Environmental Review Requirements (10 CFR 1022).

  6. The Relationship of Financial Pressures and Community Characteristics to Closure of Private Safety Net Clinics.

    PubMed

    Li, Suhui; Dor, Avi; Pines, Jesse M; Zocchi, Mark S; Hsia, Renee Y

    2016-10-01

    In order to better understand what threatens vulnerable populations' access to primary care, it is important to understand the factors associated with closing safety net clinics. This article examines how a clinic's financial position, productivity, and community characteristics are associated with its risk of closure. We examine patterns of closures among private-run primary care clinics (PCCs) in California between 2006 and 2012. We use a discrete-time proportional hazard model to assess relative hazard ratios of covariates, and a random-effect hazard model to adjust for unobserved heterogeneity among PCCs. We find that lower net income from patient care, smaller amount of government grants, and lower productivity were associated with significantly higher risk of PCC closure. We also find that federally qualified health centers and nonfederally qualified health centers generally faced the same risk factors of closure. These results underscore the critical role of financial incentives in the long-term viability of safety net clinics. © The Author(s) 2015.
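In a discrete-time hazard framework like the one used in this study, each period's closure hazard multiplies into a cumulative survival probability. A toy sketch of that bookkeeping (numbers hypothetical; the fitted model additionally conditions the hazards on covariates and clinic-level random effects):

```python
def survival_curve(hazards):
    """Cumulative survival S(t) = prod_{k<=t} (1 - h_k) for per-period
    closure hazards h_k, each a probability in [0, 1]."""
    s, curve = 1.0, []
    for h in hazards:
        s *= 1.0 - h
        curve.append(s)
    return curve

# e.g. a clinic facing 5%, 5%, then 10% annual closure hazard
print(survival_curve([0.05, 0.05, 0.10]))
```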

  7. Radial and ulnar fracture treatment with paraosseous clamp-cerclage stabilisation technique in 17 toy breed dogs

    PubMed Central

    Manchi, George; Brunnberg, Mathias M; Shahid, Muhammad; Al Aiyan, Ahmad; Chow, Eric; Brunnberg, Leo; Stein, Silke

    2017-01-01

    Objective: Description of surgical technique, complications and outcome of radius/ulna fractures in toy and miniature breed dogs treated with the paraosseous clamp-cerclage stabilisation (PCCS) method. Study design: Retrospective study. Methods: Clinical records of small breed dogs with fractures of the radius and ulna were reviewed between January 2011 and January 2016. Inclusion criteria were bodyweight of ≤3.5 kg, fracture of the radius and ulna of one or two limbs without previous repair attempts, available follow-up information, and the use of PCCS for repair of the fracture as the sole method of fixation. Results: Seventeen fractures in 17 dogs were included in the study. Radiographic union was documented in 13/17 cases. Median time to radiographic union was 13 weeks (range: 5–53 weeks). Major complications occurred in 24 per cent (4/17) due to implant failure, and for revision surgery the PCCS method was chosen in all four cases. Three of the four revised fractures healed radiographically. One of the four dogs was lost to radiographic follow-up, but the owner could be contacted for a telephone questionnaire. Eleven of 17 dogs achieved an excellent return to function without any lameness during clinical examination, but 5/17 dogs showed an intermittent mild lameness despite full radiographic union. Routine implant removal was performed in 9/17 dogs. The owners of 15/17 dogs could be contacted for a telephone questionnaire for long-term follow-up. No further complications were reported. Conclusions: PCCS is a feasible low-cost internal fixation technique for repairing radial and ulnar fractures in toy breed dogs. Further biomechanical and clinical studies are needed for better evaluation of the PCCS method. PMID:28761666

  8. Controlled, prospective, randomized, clinical split-mouth evaluation of partial ceramic crowns luted with a new, universal adhesive system/resin cement: results after 18 months.

    PubMed

    Vogl, Vanessa; Hiller, Karl-Anton; Buchalla, Wolfgang; Federlin, Marianne; Schmalz, Gottfried

    2016-12-01

    A new universal adhesive with a corresponding luting composite was recently marketed which can be used in either a self-etch or an etch-and-rinse mode. In this study, the clinical performance of partial ceramic crowns (PCCs) inserted with this adhesive and the corresponding luting material used in a self-etch or selective-etch approach was compared with a self-adhesive universal luting material. Three PCCs were placed in a split-mouth design in 50 patients. Two PCCs were luted with a combination of a universal adhesive/resin cement (Scotchbond Universal/RelyX Ultimate, 3M ESPE) with (SB+E)/without (SB-E) selective enamel etching. Another PCC was luted with a self-adhesive resin cement (RelyX Unicem 2 [RXU2], 3M ESPE). Forty-eight patients were evaluated clinically according to FDI criteria at baseline and at 6, 12 and 18 months. For statistical analyses, the chi-square test (α = 0.05) and Kaplan-Meier analysis were applied. Clinically, no statistically significant differences between groups were detected over time. Within groups, a clinically significant increase for the criterion "marginal staining" was detected for SB-E over 18 months. Kaplan-Meier analysis revealed significantly higher retention rates for SB+E (97.8 %) and SB-E (95.6 %) in comparison to RXU2 (75.6 %). The 18-month clinical performance of the new universal adhesive/composite combination showed no differences with respect to bonding strategy, and it may be recommended for luting PCCs. Longer-term evaluation is needed to confirm superiority of SB+E over SB-E. At 18 months, the new multi-mode adhesive, Scotchbond Universal, showed clinically reliable results when used for luting PCCs.

  9. Molecular and biochemical characterization of caffeine synthase and purine alkaloid concentration in guarana fruit.

    PubMed

    Schimpl, Flávia Camila; Kiyota, Eduardo; Mayer, Juliana Lischka Sampaio; Gonçalves, José Francisco de Carvalho; da Silva, José Ferreira; Mazzafera, Paulo

    2014-09-01

    Guarana seeds have the highest caffeine concentration among plants accumulating purine alkaloids, but in contrast with coffee and tea, practically nothing is known about caffeine metabolism in this Amazonian plant. In this study, the levels of purine alkaloids in tissues of five guarana cultivars were determined. Theobromine was the main alkaloid that accumulated in leaves, stems, inflorescences and pericarps of fruit, while caffeine accumulated in the seeds and reached levels from 3.3% to 5.8%. In all tissues analysed, the alkaloid concentration, whether theobromine or caffeine, was higher in young/immature tissues, then decreasing with plant development/maturation. Caffeine synthase activity was highest in seeds of immature fruit. A nucleotide sequence (PcCS) was assembled with sequences retrieved from the EST database REALGENE using sequences of caffeine synthase from coffee and tea, whose expression was also highest in seeds from immature fruit. The PcCS has 1083 bp and the protein sequence has greater similarity and identity with the caffeine synthases from cocoa (BTS1) and tea (TCS1). A recombinant PcCS allowed functional characterization of the enzyme as a bifunctional CS, able to catalyse the methylation of 7-methylxanthine to theobromine (3,7-dimethylxanthine), and of theobromine to caffeine (1,3,7-trimethylxanthine). Among several substrates tested, PcCS showed higher affinity for theobromine, differing from all other caffeine synthases described so far, which have higher affinity for paraxanthine. When compared to previous knowledge of the protein structure of coffee caffeine synthase, the unique substrate affinity of PcCS is probably explained by the amino acid residues found in the active site of the predicted protein. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Evaluation of quartz melt rate furnace with the nitric-glycolic flowsheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, M. S.; Miller, D. H.

    The Savannah River National Laboratory (SRNL) was tasked to support validation of the Defense Waste Processing Facility (DWPF) melter offgas flammability model for the Nitric-Glycolic (NG) flowsheet. The work is supplemental to the Cold Cap Evaluation Furnace (CEF) testing conducted in 2014 [1] and the Slurry-fed Melt Rate Furnace (SMRF) testing conducted in 2016 [2] that supported Deliverable 4 of the DWPF & Saltstone Facility Engineering Technical Task Request (TTR) [3]. The Quartz Melt Rate Furnace (QMRF) was evaluated as a bench-scale scoping tool to potentially be used in lieu of, or simply prior to, the use of the larger-scale SMRF or CEF. The QMRF platform has been used previously to evaluate melt rate behavior and offgas compositions of DWPF glasses prepared from the Nitric-Formic (NF) flowsheet, but not for the NG flowsheet and not with continuous feeding [4]. The overall objective of the 2016-2017 testing was to evaluate the efficacy of the QMRF as a lab-scale platform for steady state, continuously fed melter testing with the NG flowsheet as an alternative to more expensive and complex testing with the SMRF or CEF platforms.

  11. DWPF RECYCLE EVAPORATOR FLOWSHEET EVALUATION (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M

    2005-04-30

    The Defense Waste Processing Facility (DWPF) converts the high level waste slurries stored at the Savannah River Site into borosilicate glass for long-term storage. The vitrification process results in the generation of approximately five gallons of dilute recycle streams for each gallon of waste slurry vitrified. This dilute recycle stream is currently transferred to the H-area Tank Farm and amounts to approximately 1,400,000 gallons of effluent per year. Process changes to incorporate salt waste could increase the amount of effluent to approximately 2,900,000 gallons per year. The recycle consists of two major streams and four smaller streams. The first major recycle stream is condensate from the Chemical Process Cell (CPC), and is collected in the Slurry Mix Evaporator Condensate Tank (SMECT). The second major recycle stream is the melter offgas which is collected in the Off Gas Condensate Tank (OGCT). The four smaller streams are the sample flushes, sump flushes, decon solution, and High Efficiency Mist Eliminator (HEME) dissolution solution. These streams are collected in the Decontamination Waste Treatment Tank (DWTT) or the Recycle Collection Tank (RCT). All recycle streams are currently combined in the RCT and treated with sodium nitrite and sodium hydroxide prior to transfer to the tank farm. Tank Farm space limitations and previous outages in the 2H Evaporator system due to deposition of sodium alumino-silicates have led to evaluation of alternative methods of dealing with the DWPF recycle. One option identified for processing the recycle was a dedicated evaporator to concentrate the recycle stream to allow the solids to be recycled to the DWPF Sludge Receipt and Adjustment Tank (SRAT) and the condensate from this evaporation process to be sent and treated in the Effluent Treatment Plant (ETP). In order to meet process objectives, the recycle stream must be concentrated to 1/30th of the feed volume during the evaporation process.
The concentrated stream must be pumpable to the DWPF SRAT vessel and should not precipitate solids, to avoid fouling the evaporator vessel and heat transfer coils. The evaporation process must not generate excessive foam and must have a high Decontamination Factor (DF) for many species in the evaporator feed to allow the condensate to be transferred to the ETP. An initial scoping study was completed in 2001 to evaluate the feasibility of the evaporator, which concluded that the concentration objectives could be met. This initial study relied on early estimates of recycle concentration and solely on OLI modeling of the evaporation process. The Savannah River National Laboratory (SRNL) has completed additional studies using simulated recycle streams and OLI® simulations. Based on this work, the proposed flowsheet for the recycle evaporator was evaluated for feasibility, evaporator design considerations, and impact on the DWPF process. This work was in accordance with guidance from DWPF-E and was performed in accordance with the Technical Task and Quality Assurance Plan.
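The two figures of merit named above, volume reduction to 1/30th of the feed and a high decontamination factor, are both simple ratios. A minimal sketch (function names hypothetical; DF is taken here in its usual sense of feed-to-condensate concentration ratio for a given species):

```python
def volume_reduction_factor(feed_vol: float, concentrate_vol: float) -> float:
    """How many feed volumes are reduced into one concentrate volume (target: 30)."""
    return feed_vol / concentrate_vol

def decontamination_factor(feed_conc: float, condensate_conc: float) -> float:
    """DF for a species: its concentration in the feed over that in the condensate."""
    return feed_conc / condensate_conc

print(volume_reduction_factor(30.0, 1.0))   # meets the 1/30th concentration target
print(decontamination_factor(1000.0, 1.0))  # DF of 1000 for this species
```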

  12. Preliminary analysis of species partitioning in the DWPF melter. Sludge batch 7A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, A. S.; Smith III, F. G.; McCabe, D. J.

    2017-01-01

    The work described in this report is preliminary in nature since its goal was to demonstrate the feasibility of estimating the off-gas carryover from the Defense Waste Processing Facility (DWPF) melter based on a simple mass balance using measured feed and glass pour stream (PS) compositions and time-averaged melter operating data over the duration of one canister-filling cycle. The DWPF has been in radioactive operation for over 20 years processing a wide range of high-level waste (HLW) feed compositions under varying conditions such as bubbled vs. non-bubbled and feeding vs. idling. So it is desirable to find out how the varying feed compositions and operating parameters would have impacted the off-gas entrainment. However, the DWPF melter is not equipped with off-gas sampling or monitoring capabilities, so it is not feasible to measure off-gas entrainment rates directly. The proposed method provides an indirect way of doing so.
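The mass-balance idea described above can be sketched in a few lines: whatever mass of a species enters in the feed over a canister-filling cycle but is not accounted for in the glass pour stream is attributed to off-gas carryover. All names and numbers below are hypothetical placeholders, not DWPF data.

```python
def offgas_carryover(feed_kg: dict, glass_kg: dict) -> dict:
    """Per-species carryover (kg) over one canister-filling cycle:
    mass in via the feed minus mass out via the glass pour stream."""
    return {sp: feed_kg[sp] - glass_kg.get(sp, 0.0) for sp in feed_kg}

feed  = {"Cs": 1.00, "B": 20.00}   # illustrative species masses, kg
glass = {"Cs": 0.97, "B": 19.90}
carry = offgas_carryover(feed, glass)
print({sp: round(m, 3) for sp, m in carry.items()})
```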

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prod'homme, A.; Drouvot, O.; Gregory, J.

    In 2009, Savannah River Remediation LLC (SRR) assumed the management lead of the Liquid Waste (LW) Program at the Savannah River Site (SRS). The four SRR partners and AREVA, as an integrated subcontractor, are performing the ongoing effort to safely and reliably: - Close High Level Waste (HLW) storage tanks; - Maximize waste throughput at the Defense Waste Processing Facility (DWPF); - Process salt waste into stable final waste form; - Manage the HLW liquid waste material stored at SRS. As part of these initiatives, SRR and AREVA deployed a performance management methodology based on Overall Equipment Effectiveness (OEE) at the DWPF in order to support the required production increase. This project took advantage of lessons learned by AREVA through the deployment of Total Productive Maintenance and Visual Management methodologies at the La Hague reprocessing facility in France. The project also took advantage of measurement data collected from different steps of the DWPF process by the SRR team (Melter Engineering, Chemical Process Engineering, Laboratory Operations, Plant Operations). Today the SRR team has a standard method for measuring processing time throughout the facility, a reliable source of objective data for use in decision-making at all levels, and a better balance between engineering department goals and operational goals. Preliminary results show that the deployment of this performance management methodology to the LW program at SRS has already significantly contributed to the DWPF throughput increases and is being deployed in the Saltstone facility. As part of the liquid waste program at the Savannah River Site, SRR committed to enhance production throughput of DWPF. Beyond technical modifications implemented at different locations of the facility, SRR deployed a performance management methodology based on OEE metrics. The implementation benefited from the experience gained by AREVA in its own facilities in France.
OEE proved to be a valuable tool in supporting the enhancement program in DWPF by providing unified metrics to measure plant performance, identify bottleneck locations, and rank the most time-consuming causes from objective data shared between the different groups belonging to the organization. Beyond OEE, the Visual Management tool adapted from the one used at La Hague was also provided in order to further enhance communication within the operating teams. As a result of all the initiatives implemented at DWPF, achieved production increased to record rates from FY10 to FY11. It is expected that, thanks to the performance management tools now available within DWPF, these results will be sustained and even improved in the future to meet system plan targets. (authors)
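OEE itself is a standard metric: the product of availability, performance, and quality, each expressed as a fraction. A minimal sketch (function name hypothetical; the metric definition is the standard one, not SRR's specific implementation):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness = availability x performance x quality."""
    for f in (availability, performance, quality):
        if not 0.0 <= f <= 1.0:
            raise ValueError("each OEE factor must be a fraction in [0, 1]")
    return availability * performance * quality

# e.g. 90% availability, 80% performance, 95% quality -> OEE of about 68%
print(round(oee(0.90, 0.80, 0.95), 3))
```

Because the three factors multiply, a bottleneck in any one of them caps the overall figure, which is what makes OEE useful for ranking the most time-consuming loss causes.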

  14. Characterization of Radioactive Waste Melter Feed Vitrified By Microwave Energy,

    DTIC Science & Technology

    processed in the Defense Waste Processing Facility (DWPF) and poured into stainless steel canisters for eventual disposal in a geologic repository... Vitrification of melter feed samples is necessary for DWPF process and product control. Microwave fusion of melter feed at approximately 1200 °C for 10

  15. Urban Fire Simulation. Version 2

    DTIC Science & Technology

    1993-02-01

    of the building. In this case the distribution of windows in the tract per floor ( DWPF (FLOORHT)) is calculated under the assumption that the number of...given urban area. The probability that no room on the subject floor will flash over is calculated at label (V) from PNRFOF DWPF (FLOORHT) (1 - FFORF

  16. Exosomes derived from pancreatic cancer cells induce activation and profibrogenic activities in pancreatic stellate cells.

    PubMed

    Masamune, Atsushi; Yoshida, Naoki; Hamada, Shin; Takikawa, Tetsuya; Nabeshima, Tatsuhide; Shimosegawa, Tooru

    2018-01-01

    Pancreatic cancer cells (PCCs) interact with pancreatic stellate cells (PSCs), which play a pivotal role in pancreatic fibrogenesis, to develop the cancer-conditioned tumor microenvironment. Exosomes are membrane-enclosed nanovesicles, and have been increasingly recognized as important mediators of cell-to-cell communications. The aim of this study was to clarify the effects of PCC-derived exosomes on cell functions in PSCs. Exosomes were isolated from the conditioned medium of Panc-1 and SUIT-2 PCCs. Human primary PSCs were treated with PCC-derived exosomes. PCC-derived exosomes stimulated the proliferation, migration, activation of ERK and Akt, the mRNA expression of α-smooth muscle actin (ACTA2) and fibrosis-related genes, and procollagen type I C-peptide production in PSCs. Ingenuity pathway analysis of the microarray data identified transforming growth factor β1 and tumor necrosis factor as top upstream regulators. PCCs increased the expression of miR-1246 and miR-1290, abundantly contained in PCC-derived exosomes, in PSCs. Overexpression of miR-1290 induced the expression of ACTA2 and fibrosis-related genes in PSCs. In conclusion, PCC-derived exosomes stimulate activation and profibrogenic activities in PSCs. Exosome-mediated interactions between PSCs and PCCs might play a role in the development of the tumor microenvironment. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Reversal of Apixaban Induced Alterations in Hemostasis by Different Coagulation Factor Concentrates: Significance of Studies In Vitro with Circulating Human Blood

    PubMed Central

    Arellano-Rodrigo, Eduardo; Roquer, Jaume; Reverter, Joan Carles; Sanz, Victoria Veronica; Molina, Patricia; Lopez-Vilchez, Irene; Diaz-Ricart, Maribel; Galan, Ana Maria

    2013-01-01

    Apixaban is a new oral anticoagulant with a specific inhibitory action on FXa. No information is available on the reversal of the antihemostatic action of apixaban in experimental or clinical settings. We have evaluated the effectiveness of different factor concentrates at reversing modifications of hemostatic mechanisms induced by moderately elevated concentrations of apixaban (200 ng/ml) added in vitro to blood from healthy donors (n = 10). Effects on thrombin generation (TG) and thromboelastometry (TEM) parameters were assessed. Modifications in platelet adhesive, aggregating and procoagulant activities were evaluated in studies with blood circulating through damaged vascular surfaces, at a shear rate of 600 s−1. The potential of prothrombin complex concentrates (PCCs; 50 IU/kg), activated prothrombin complex concentrates (aPCCs; 75 IU/kg), and activated recombinant factor VII (rFVIIa; 270 μg/kg) to reverse the antihemostatic actions of apixaban was investigated. Apixaban interfered with TG kinetics. The delayed lag phase, prolonged time to peak and reduced peak values were improved by the different concentrates, though modifications in TG patterns were diversely affected depending on the activating reagents. Apixaban significantly prolonged clotting times (CTs) in TEM studies. Prolongations in CTs were corrected by the different concentrates with variable efficacies (rFVIIa≥aPCC>PCC). Apixaban significantly reduced fibrin and platelet interactions with damaged vascular surfaces in perfusion studies (p<0.05 and p<0.01, respectively). Impairments in fibrin formation were normalized by the different concentrates. Only rFVIIa significantly restored levels of platelet deposition. Alterations in hemostasis induced by apixaban were variably compensated by the different factor concentrates investigated. However, the effects of these concentrates were not homogeneous in all the tests, with PCCs showing more efficacy in TG, and rFVIIa being more effective in TEM and perfusion studies. Our results indicate that rFVIIa, PCCs and aPCCs have the potential to restore platelet and fibrin components of hemostasis previously altered by apixaban. PMID:24244342

  18. Ecological studies related to construction of the Defense Waste Processing Facility on the Savannah River Site. Annual report, FY 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-11-01

    Construction of the Defense Waste Processing Facility (DWPF) on the Savannah River Site (SRS) began during FY-1984. The Savannah River Ecology Laboratory (SREL) has completed 15 years of ecological studies related to the construction of the DWPF complex. Prior to construction, the 600-acre site (S-Area) contained a Carolina bay and the headwaters of a stream. Through the long-term census taking of biota at the DWPF site and Rainbow Bay, SREL has been evaluating the impact of construction on the biota and the effectiveness of mitigation efforts. Similarly, the effects of erosion from the DWPF site on the water quality of S-Area peripheral streams are being assessed. This research provides supporting data relevant to the National Environmental Policy Act (NEPA) of 1969, the Endangered Species Act of 1973, Executive Orders 11988 (Floodplain Management) and 11990 (Protection of Wetlands), and United States Department of Energy (DOE) Guidelines for Compliance with Floodplain/Wetland Environmental Review Requirements (10 CFR 1022).

  19. Assessment of the impact of TOA partitioning on DWPF off-gas flammability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, W. E.

    2013-06-01

    An assessment has been made to evaluate the impact on DWPF melter off-gas flammability of increasing the amount of TOA in the current solvent used in the Modular Caustic-Side Solvent Extraction Process Unit (MCU) process. The results of this study showed that, with an addition of up to 3 ppm of TOA, the nonvolatile carbon concentration at the current solvent limit (150 ppm) in the Slurry Mix Evaporator (SME) product would be about 7% higher, and the nonvolatile hydrogen about 2% higher, than for the actual current solvent (126 ppm), when the concentration of Isopar L in the effluent transfer is controlled below 87 ppm and the volume of MCU effluent transferred to DWPF is limited to 15,000 gallons per Sludge Receipt and Adjustment Tank (SRAT)/SME cycle. Therefore, the DWPF melter off-gas flammability assessment is conservative for up to an additional 3 ppm of TOA in the effluent based on these assumptions. This report documents the calculations performed to reach this conclusion.

  20. Characterization of DWPF recycle condensate tank materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.

    2015-01-01

    A Defense Waste Processing Facility (DWPF) Recycle Condensate Tank (RCT) sample was delivered to the Savannah River National Laboratory (SRNL) for characterization, with particular interest in the concentrations of I-129, U-233, U-235, total U, and total Pu. Since a portion of Salt Batch 8 will contain DWPF recycle materials, the concentration of I-129 is important to understand for salt batch planning purposes. The chemical and physical characterizations are also needed as input to the interpretation of future work aimed at determining the propensity of the RCT material to foam, and methods to remediate any foaming potential. According to DWPF, the Tank Farm 2H evaporator has experienced foaming while processing DWPF recycle materials. The characterization work on the RCT samples has been completed and is reported here. The composition of the Sludge Batch 8 (SB8) RCT material is largely a low-base solution of 0.2 M NaNO2 and 0.1 M NaNO3 with a small amount of formate present. Insoluble solids comprise only 0.05 wt.% of the slurry. The solids appear to be largely sludge-like solids based on elemental composition and SEM-EDS analysis. The sample contains an elevated concentration of I-129 (38x) and a substantial fraction (59%) of Tc-99, as compared to the incoming SB8 Tank 40 feed material. The Hg concentration, normalized to Fe, is 5x that expected based on sludge carryover. The total U and Pu concentrations are reduced significantly, to 0.536 wt.% TS and 2.42E-03 wt.% TS, respectively, with the fissile components, U-233, U-235, Pu-239, and Pu-241, an order of magnitude lower in concentration than those in the SB8 Tank 40 DWPF feed material. This report will be revised to include the foaming study requested in the TTR and outlined in the TTQAP when that work is concluded.

  1. DWPF Simulant CPC Studies For SB8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newell, J. D.

    2013-09-25

    Prior to processing a Sludge Batch (SB) in the Defense Waste Processing Facility (DWPF), flowsheet studies using simulants are performed. Typically, the flowsheet studies are conducted based on projected composition(s). The results from the flowsheet testing are used to 1) guide decisions during sludge batch preparation, 2) serve as a preliminary evaluation of potential processing issues, and 3) provide a basis to support the Shielded Cells qualification runs performed at the Savannah River National Laboratory (SRNL). SB8 was initially projected to be a combination of the Tank 40 heel (Sludge Batch 7b), Tank 13, Tank 12, and the Tank 51 heel. In order to accelerate preparation of SB8, the decision was made to defer the oxalate-rich material from Tank 12 to a future sludge batch. SB8 simulant studies without Tank 12 were reported in a separate report.1 The data presented in this report will be useful when processing future sludge batches containing Tank 12. The wash endpoint target for SB8 was set at a significantly higher sodium concentration to allow acceptable glass compositions at the targeted waste loading. Four non-coupled tests were conducted using simulant representing Tank 40 at 110-146% of the Koopman Minimum Acid requirement. Hydrogen was generated during high acid stoichiometry (146% acid) SRAT testing up to 31% of the DWPF hydrogen limit; SME hydrogen generation reached 48% of the DWPF limit for the high acid run. Two non-coupled tests were conducted using simulant representing Tank 51 at 110-146% of the Koopman Minimum Acid requirement. Hydrogen was generated during high acid stoichiometry SRAT testing up to 16% of the DWPF limit; SME hydrogen generation reached 49% of the DWPF limit for the high acid run. Simulant processing was successful using the previously established antifoam addition strategy. Foaming during formic acid addition was not observed in any of the runs. Nitrite was destroyed in all runs and no N2O was detected during SME processing. Mercury behavior was consistent with that seen in previous SRAT runs. Mercury was stripped below the DWPF limit of 0.8 wt% for all runs. Rheology yield stress fell within or below the design basis of 1-5 Pa. The low acid Tank 40 run (106% acid stoichiometry) had the highest yield stress at 3.78 Pa.
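The percent-of-limit figures quoted in such flowsheet reports reduce to a simple ratio of a measured peak hydrogen generation rate to a facility limit. A minimal sketch, with hypothetical rate and limit values that are not taken from this report:

```python
# Sketch only: express a peak hydrogen generation rate as a percent of a
# facility limit. All numeric values here are hypothetical placeholders.

def percent_of_limit(rate, limit):
    """Return rate as a percentage of limit (both in the same units)."""
    return 100.0 * rate / limit

# e.g. an assumed peak SME rate of 0.108 lb/hr against an assumed
# 0.223 lb/hr limit reads as roughly 48% of the limit
pct = percent_of_limit(rate=0.108, limit=0.223)
```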

  2. Corrosion impact of reductant on DWPF and downstream facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mickalonis, J. I.; Imrich, K. J.; Jantzen, C. M.

    2014-12-01

    Glycolic acid is being evaluated as an alternate reductant in the preparation of high level waste for the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS). During processing, the glycolic acid is not completely consumed, and small quantities of the glycolate anion are carried forward to other high level waste (HLW) facilities. The impact of the glycolate anion on the corrosion of the materials of construction throughout the waste processing system has not been previously evaluated. A literature review revealed that corrosion data in glycolate-bearing solutions applicable to SRS systems were not available. Therefore, testing was recommended to evaluate the materials of construction of vessels, piping and components within DWPF and downstream facilities. The testing, conducted in non-radioactive simulants, consisted of both accelerated tests (electrochemical and hot-wall) with coupons in laboratory vessels and prototypical tests with coupons immersed in scale-up and mock-up test systems. Eight waste or process streams were identified in which the glycolate anion might impact the performance of the materials of construction. These streams were 70% glycolic acid (DWPF feed vessels and piping), SRAT/SME supernate (Chemical Processing Cell (CPC) vessels and piping), DWPF acidic recycle (DWPF condenser and recycle tanks and piping), basic concentrated recycle (HLW tanks, evaporators, and transfer lines), salt processing (ARP, MCU, and Saltstone tanks and piping), boric acid (MCU separators), and dilute waste (HLW evaporator condensate tanks and transfer line and ETF components). For each stream, high temperature limits and worst-case glycolate concentrations were identified for performing the recommended tests. Test solution chemistries were generally based on analytical results of actual waste samples taken from the various process facilities or of prototypical simulants produced in the laboratory. The materials of construction for most vessels, components and piping were not impacted by the presence of glycolic acid, or the impact is not expected to affect the service life. However, the presence of the glycolate anion was found to affect the corrosion susceptibility of some materials of construction in the DWPF and downstream facilities, especially at elevated temperatures. The following table summarizes the results of the electrochemical and hot-wall testing and indicates expected performance in service with the glycolate anion present.

  3. DWPF Safely Dispositioning Liquid Waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-01-05

    The only operating radioactive waste glassification plant in the nation, the Defense Waste Processing Facility (DWPF) converts the liquid radioactive waste currently stored at the Savannah River Site (SRS) into a solid glass form suitable for long-term storage and disposal. Scientists have long considered this glassification process, called “vitrification,” as the preferred option for treating liquid radioactive waste.

  4. Phase Stability Determinations of DWPF Waste Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marra, S.L.

    1999-10-22

    Liquid high-level nuclear waste will be immobilized at the Savannah River Site (SRS) by vitrification in borosilicate glass. To fulfill this requirement, glass samples were heat treated at various times and temperatures. These results will provide guidance to the repository program about conditions to be avoided during shipping, handling and storage of DWPF canistered waste forms.

  5. Promoting resiliency among palliative care clinicians: stressors, coping strategies, and training needs.

    PubMed

    Perez, Giselle K; Haime, Vivian; Jackson, Vicki; Chittenden, Eva; Mehta, Darshan H; Park, Elyse R

    2015-04-01

    Palliative care clinicians (PCCs) are susceptible to burnout, as they regularly witness immense patient and family suffering; however, little is known about their specific challenges and training needs to enhance their long-term sustainability. The purpose of this qualitative study was to explore common stressors, coping strategies, and training needs among PCCs in efforts to inform the development of a targeted Resiliency Program. Utilizing a semistructured interview guide, we conducted a series of in-depth interviews with 15 PCCs at the Massachusetts General Hospital. Content analysis highlighted three main areas of stressors: (1) systematic challenges related to managing large, emotionally demanding caseloads within time constraints; (2) patient factors, such as addressing patients' mutable needs, managing family dynamics, and meeting patient and family demands and expectations; and (3) personal challenges of delineating emotional and professional boundaries. Engaging in healthy behaviors and hobbies and seeking emotional support from colleagues and friends were among the most common methods of coping with stressors. In terms of programmatic topics, PCCs desired training in mind-body skills (e.g., breathing, yoga, meditation), health education about the effects of stress, and cognitive strategies to help reduce ruminative thoughts and negative self-talk. A majority of clinicians stressed the need for brief strategies that could be readily integrated in the workplace. These results suggest that an intervention aimed to enhance PCC sustainability should focus on utilizing a skill-building approach to stress reduction that imparts strategies that can be readily utilized during work hours.

  6. DWPF Safely Dispositioning Liquid Waste

    ScienceCinema

    None

    2018-06-21

    The only operating radioactive waste glassification plant in the nation, the Defense Waste Processing Facility (DWPF) converts the liquid radioactive waste currently stored at the Savannah River Site (SRS) into a solid glass form suitable for long-term storage and disposal. Scientists have long considered this glassification process, called “vitrification,” as the preferred option for treating liquid radioactive waste.

  7. SIMULANT DEVELOPMENT FOR SAVANNAH RIVER SITE HIGH LEVEL WASTE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M.; Eibling, R.; Koopman, D.

    2007-09-04

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site vitrifies High Level Waste (HLW) for repository internment. The process consists of three major steps: waste pretreatment, vitrification, and canister decontamination/sealing. The HLW consists of insoluble metal hydroxides (primarily iron, aluminum, magnesium, manganese, and uranium) and soluble sodium salts (carbonate, hydroxide, nitrite, nitrate, and sulfate). The HLW is processed in large batches through DWPF; DWPF has recently completed processing Sludge Batch 3 (SB3) and is currently processing Sludge Batch 4 (SB4). The composition of metal species in SB4 is shown in Table 1 as a function of the ratio of a metal to iron. Simulants omit the radioactive species and renormalize the remaining species. Supernate composition is shown in Table 2.
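The renormalization step mentioned above can be sketched in a few lines; the species names and fractions below are invented for illustration and are not the SB4 composition:

```python
# Illustrative sketch only: build a simulant composition by dropping the
# radioactive species and rescaling the remaining mass fractions to sum
# to 1. Species names and values are hypothetical, not from the report.

def renormalize(composition, radioactive):
    """Drop radioactive species and rescale the rest to sum to 1."""
    kept = {k: v for k, v in composition.items() if k not in radioactive}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

sludge = {"Fe(OH)3": 0.50, "Al(OH)3": 0.30, "MnO2": 0.10, "UO2": 0.10}
simulant = renormalize(sludge, {"UO2"})
# the non-radioactive fractions are rescaled so they again sum to 1.0
```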

  8. ROLE OF MANGANESE REDUCTION/OXIDATION (REDOX) ON FOAMING AND MELT RATE IN HIGH LEVEL WASTE (HLW) MELTERS (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C.; Stone, M.

    2007-03-30

    High-level nuclear waste is being immobilized at the Savannah River Site (SRS) by vitrification into borosilicate glass at the Defense Waste Processing Facility (DWPF). Control of the Reduction/Oxidation (REDOX) equilibrium in the DWPF melter is critical for processing high level liquid wastes. Foaming, cold cap roll-overs, and off-gas surges all have an impact on pouring and melt rate during processing of high-level waste (HLW) glass. All of these phenomena can impact waste throughput and attainment in Joule-heated melters such as the DWPF. These phenomena are caused by gas-glass disequilibrium when components in the melter feeds convert to glass and liberate gases such as H{sub 2}O vapor (steam), CO{sub 2}, O{sub 2}, H{sub 2}, NO{sub x}, and/or N{sub 2}. During the feed-to-glass conversion in the DWPF melter, multiple types of reactions occur in the cold cap and in the melt pool that release gaseous products. The various gaseous products can cause foaming at the melt pool surface. Foaming should be avoided as much as possible because an insulative layer of foam on the melt surface retards heat transfer to the cold cap and results in low melt rates. Uncontrolled foaming can also result in a blockage of critical melter or melter off-gas components. Foaming can also increase the potential for melter pressure surges, which would then make it difficult to maintain a constant pressure differential between the DWPF melter and the pour spout. Pressure surges can cause erratic pour streams and possible pluggage of the bellows as well. For these reasons, the DWPF uses a REDOX strategy and controls the melt REDOX between 0.09 {le} Fe{sup 2+}/{summation}Fe {le} 0.33. Controlling the DWPF melter at an equilibrium of Fe{sup 2+}/{summation}Fe {le} 0.33 prevents metallic and sulfide-rich species from forming nodules that can accumulate on the floor of the melter. Control of foaming, due to deoxygenation of manganic species, is achieved by converting oxidized MnO{sub 2} or Mn{sub 2}O{sub 3} species to MnO during melter preprocessing. At the lower REDOX limit of Fe{sup 2+}/{summation}Fe {approx} 0.09, about 99% of the Mn{sup 4+}/Mn{sup 3+} is converted to Mn{sup 2+}. Therefore, the lower REDOX limit eliminates melter foaming from deoxygenation.
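The REDOX control band quoted above (0.09 ≤ Fe2+/ΣFe ≤ 0.33) amounts to a simple ratio check. A minimal sketch, with hypothetical measurement values:

```python
# Minimal sketch: compute the REDOX ratio Fe2+/(total Fe) from assumed
# measured concentrations and test it against the DWPF control band
# 0.09 <= Fe2+/sum(Fe) <= 0.33 quoted in the abstract above.

def redox_ratio(fe2, fe3):
    """Fe2+/(Fe2+ + Fe3+); inputs in any consistent concentration unit."""
    return fe2 / (fe2 + fe3)

def in_control_band(ratio, lo=0.09, hi=0.33):
    """True when the melt REDOX ratio lies inside the control band."""
    return lo <= ratio <= hi

r = in_control_band(redox_ratio(fe2=0.5, fe3=2.0))  # hypothetical values
```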

  9. Short Term Weather Forecasting in Real Time in a Base Weather Station Setting

    DTIC Science & Technology

    1993-10-01

    ... SMSL DWPF. Figure 25. Plot of surface airways observations at 18 UTC, 1 April 1993. Data is plotted in conventional notation. ... CLCT TMPF WSYM SMSL DWPF. Figure 26. As in Figure 25, except for 06 UTC, 2 April 1993. Figure 27

  10. Long-term safety and efficacy of a pasteurized nanofiltrated prothrombin complex concentrate (Beriplex P/N): a pharmacovigilance study.

    PubMed

    Hanke, A A; Joch, C; Görlinger, K

    2013-05-01

    The rapid reversal of the effects of vitamin K antagonists is often required in cases of emergency surgery and life-threatening bleeding, or during bleeding associated with high morbidity and mortality such as intracranial haemorrhage. Increasingly, four-factor prothrombin complex concentrates (PCCs) containing high and well-balanced concentrations of vitamin K-dependent coagulation factors are recommended for emergency oral anticoagulation reversal. Both the safety and efficacy of such products are currently in focus, and their administration is now expanding into the critical care setting for the treatment of life-threatening bleeding and coagulopathy resulting either perioperatively or in cases of acute trauma. After 15 yr of clinical use, findings of a pharmacovigilance report (February 1996-March 2012) relating to the four-factor PCC Beriplex P/N (CSL Behring, Marburg, Germany) were analysed and are presented here. Furthermore, a review of the literature with regard to the efficacy and safety of four-factor PCCs was performed. Since receiving marketing authorization (February 21, 1996), ~647 250 standard applications of Beriplex P/N have taken place. During this time, 21 thromboembolic events judged to be possibly related to Beriplex P/N administration have been reported, while no incidences of viral transmission or heparin-induced thrombocytopenia were documented. The low risk of thromboembolic events reported during the observation period (one in ~31 000) is in line with the incidence observed with other four-factor PCCs. In general, four-factor PCCs have proven to be well tolerated and highly effective in the rapid reversal of vitamin K antagonists.
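The quoted incidence follows directly from the two figures given in the abstract (~647 250 applications, 21 possibly related events):

```python
# Reproduce the "one in ~31 000" incidence from the abstract's own numbers.
applications = 647_250   # standard applications since authorization
events = 21              # possibly related thromboembolic events reported
incidence = applications / events   # about 30 821, i.e. one in ~31 000
```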

  11. ISOLOK VALVE ACCEPTANCE TESTING FOR DWPF SME SAMPLING PROCESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.; Hera, K.; Coleman, C.

    2011-12-05

    Evaluation of the Defense Waste Processing Facility (DWPF) Chemical Process Cell (CPC) cycle time identified several opportunities to improve the CPC processing time. Of the opportunities, a focus area related to optimizing the equipment and efficiency of the sample turnaround time for the DWPF Analytical Laboratory was identified. The Mechanical Systems & Custom Equipment Development (MS&CED) Section of the Savannah River National Laboratory (SRNL) evaluated the possibility of using an Isolok{reg_sign} sampling valve as an alternative to the Hydragard{reg_sign} valve for taking process samples. Previous viability testing was conducted with favorable results using the Isolok sampler and reported in SRNL-STI-2010-00749 (1). This task has the potential to improve operability, reduce maintenance time and decrease CPC cycle time. This report summarizes the results from acceptance testing which was requested in Task Technical Request (TTR) HLW-DWPF-TTR-2010-0036 (2) and which was conducted as outlined in Task Technical and Quality Assurance Plan (TTQAP) SRNL-RP-2011-00145 (3). The Isolok to be tested is the same model that was tested, qualified, and installed in the Sludge Receipt Adjustment Tank (SRAT) sample system. RW-0333P QA requirements apply to this task. This task was to qualify the Isolok sampler for use in the DWPF Slurry Mix Evaporator (SME) sampling process. The Hydragard, which is the current baseline sampling method, was used for comparison to the Isolok sampling data. The Isolok sampler is an air-powered grab sampler used to 'pull' a sample volume from a process line. The operation of the sampler is shown in Figure 1. The image on the left shows the Isolok's spool extended into the process line, and the image on the right shows the sampler retracted and then dispensing the liquid into the sampling container. To determine tank homogeneity, a Coliwasa sampler was used to grab samples at a high and a low location within the mixing tank. Data from the two locations were compared to determine if the contents of the tank were well mixed. The Coliwasa sampler is a tube with a stopper at the bottom and is designed to obtain grab samples from specific locations within the drum contents. A position paper (4) was issued to address the prototypic flow loop issues and simulant selections. A statistically designed plan (5) was issued to address the total number of samples each sampler needed to pull, to provide the random order in which samples were pulled, and to group samples for elemental analysis. The TTR required that the Isolok sampler perform as well as the Hydragard sampler during these tests to ensure the acceptability of the Isolok sampler for use in the DWPF sampling cells. Procedure No. L9.4-5015 was used to document the sample parameters and process steps. Completed procedures are located in R&D Engineering job folder 23269.
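A comparison of results from two samplers can be sketched in miniature as below; the numbers and the 5% screen are invented for illustration and are not the acceptance criteria from the statistically designed plan:

```python
# Illustrative sketch only: compare elemental results from two samplers
# (e.g., Isolok vs. Hydragard) by the relative difference of their means.
# All values and the 5% screening threshold are hypothetical.
from statistics import mean

def mean_rel_diff(a, b):
    """Relative difference of the two sample means, vs. their average."""
    ma, mb = mean(a), mean(b)
    if ma == mb:
        return 0.0
    return abs(ma - mb) / ((ma + mb) / 2)

isolok    = [10.1, 9.9, 10.2, 10.0]   # hypothetical wt.% Fe results
hydragard = [10.0, 10.2, 9.8, 10.1]

close_enough = mean_rel_diff(isolok, hydragard) < 0.05  # assumed 5% screen
```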

  12. Pheochromocytoma/Paraganglioma: A Poster Child for Cancer Metabolism.

    PubMed

    Tevosian, Sergei G; Ghayee, Hans K

    2018-05-01

    Pheochromocytomas (PCCs) are tumors that are derived from the chromaffin cells of the adrenal medulla. Extra-adrenal PCCs called paragangliomas (PGLs) are derived from the sympathetic and parasympathetic chain ganglia. PCCs secrete catecholamines, which cause hypertension and have adverse cardiovascular consequences as a result of catecholamine excess. PGLs may or may not produce catecholamines depending on their genetic type and anatomical location. The most worrisome aspect of these tumors is their ability to become aggressive and metastasize; there are no known cures for metastasized PGLs. Original articles and reviews indexed in PubMed were identified by querying with specific PCC/PGL- and Krebs cycle pathway-related terms. Additional references were selected through the in-depth analysis of the relevant publications. We primarily discuss Krebs cycle mutations that can be instrumental in helping investigators identify key biological pathways and molecules that may serve as biomarkers of or treatment targets for PCC/PGL. The mainstay of treatment of patients with PCC/PGLs is surgical. However, the tide may be turning with the discovery of new genes associated with PCC/PGLs that may shed light on oncometabolites used by these tumors.

  13. Chloroplast behaviour and interactions with other organelles in Arabidopsis thaliana pavement cells.

    PubMed

    Barton, Kiah A; Wozny, Michael R; Mathur, Neeta; Jaipargas, Erica-Ashley; Mathur, Jaideep

    2018-01-29

    Chloroplasts are a characteristic feature of green plants. Mesophyll cells possess the majority of chloroplasts and it is widely believed that, with the exception of guard cells, the epidermal layer in most higher plants does not contain chloroplasts. However, recent observations on Arabidopsis thaliana have shown a population of chloroplasts in pavement cells that are smaller than mesophyll chloroplasts and have a high stroma to grana ratio. Here, using stable transgenic lines expressing fluorescent proteins targeted to the plastid stroma, plasma membrane, endoplasmic reticulum, tonoplast, nucleus, mitochondria, peroxisomes, F-actin and microtubules, we characterize the spatiotemporal relationships between the pavement cell chloroplasts (PCCs) and their subcellular environment. Observations on the PCCs suggest a source-sink relationship between the epidermal and the mesophyll layers, and experiments with the Arabidopsis mutants glabra2 (gl2) and immutans (im), which show altered epidermal plastid development, underscored their developmental plasticity. Our findings lay down the foundation for further investigations aimed at understanding the precise role and contributions of PCCs in plant interactions with the environment. © 2018. Published by The Company of Biologists Ltd.

  14. Nitric-glycolic flowsheet testing for maximum hydrogen generation rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, C. J.; Newell, J. D.; Williams, M. S.

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site is developing for implementation a flowsheet with a new reductant to replace formic acid. Glycolic acid has been tested over the past several years and found to effectively replace the function of formic acid in the DWPF chemical process. The nitric-glycolic flowsheet reduces mercury, significantly lowers the chemical generation of hydrogen and ammonia, allows purge reduction in the Sludge Receipt and Adjustment Tank (SRAT), stabilizes the pH and chemistry in the SRAT and the Slurry Mix Evaporator (SME), allows for effective adjustment of the SRAT/SME rheology, and is favorable with respect to melter flammability. The objective of this work was to perform DWPF Chemical Process Cell (CPC) testing at conditions that would bound the catalytic hydrogen production for the nitric-glycolic flowsheet.

  15. Literature Review: Assessment of DWPF Melter and Melter Off-gas System Lifetime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reigel, M.

    2015-07-30

    Testing to date for the materials of construction (MOC) for the Hanford Waste Treatment and Immobilization Plant (WTP) melters is being reviewed with the lessons learned from DWPF in mind and with consideration of the changes in the flowsheet/feed compositions that have occurred since the original testing was performed. This information will be presented in a separate technical report that identifies any potential gaps for WTP processing.

  16. Poison control centers in developing countries and Asia's need for toxicology education

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makalinao, Irma R.; Awang, Rahmat

    2005-09-01

    Poison control centers (PCCs) in developing countries have been set up in response to the challenge of decreasing mortality and morbidity from poisoning. The services range from poison information to actual clinical treatment, mostly of acute cases. Lately, PCCs have expanded from their traditional role to one that actively engages in community health studies and toxicovigilance, along with treatment of chronic poisoning. Recognizing that types of poisoning and specific needs may vary from country to country, toxicology education that addresses these unique regional issues has become more necessary. Toxicology education, both formal and informal, exists in various stages of development in Asia. Clearly, there are gaps that need to be addressed, especially in areas where there are no poison centers or where strengthening is necessary. Collaboration between PCCs in developing countries can help augment available resources, including human, analytical and technical expertise. The critical mass of trained toxicologists will fill the demand for clinical and regulatory specialists and educators as well. This paper highlights the experiences and resources available to the Philippine and Malaysian poison centers and the strengths generated by networking and collaboration. The role of the Asia Pacific Association of Medical Toxicology (APAMT) as the Science NGO representative to the Intergovernmental Forum on Chemical Safety (IFCS) forum standing committee in promoting chemical safety at the regional level will be discussed. The 'Clearinghouse on the Sound Management of Chemicals', a platform for engaging multi-stakeholder and interdisciplinary partnerships, will be described as a possible model for capacity building to advance chemical safety through education and training, not only in developing countries in Asia but globally as well.

  17. Actual waste demonstration of the nitric-glycolic flowsheet for sludge batch 9 qualification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newell, D.; Pareizs, J.; Martino, C.

    For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) performs qualification testing to demonstrate that the sludge batch is processable. Based on the results of this actual-waste qualification and previous simulant studies, SRNL recommends implementation of the nitric-glycolic acid flowsheet in DWPF. Other recommendations resulting from this demonstration are reported in section 5.0.

  18. Adrenal medullary hyperplasia is a precursor lesion for pheochromocytoma in MEN2 syndrome.

    PubMed

    Korpershoek, Esther; Petri, Bart-Jeroen; Post, Edward; van Eijck, Casper H J; Oldenburg, Rogier A; Belt, Eric J T; de Herder, Wouter W; de Krijger, Ronald R; Dinjens, Winand N M

    2014-10-01

    Adrenal medullary hyperplasias (AMHs) are adrenal medullary proliferations with a size < 1 cm, while larger lesions are considered pheochromocytomas (PCCs). This arbitrary distinction was proposed decades ago, although the biological relationship between AMH and PCC has never been investigated. Both lesions are frequently diagnosed in multiple endocrine neoplasia type 2 (MEN2) patients, in whom they are considered two unrelated clinical entities. In this study, we investigated the molecular relationship between AMH and PCC in MEN2 patients. Molecular aberrations of 19 AMHs and 13 PCCs from 18 MEN2 patients were determined by rearranged during transfection (RET) proto-oncogene mutation analysis and loss of heterozygosity (LOH) analysis for chromosomal regions 1p13, 1p36, 3p, and 3q, genomic areas covering commonly altered regions in RET-related PCC. Identical molecular aberrations were found in all AMHs and PCCs, at similar frequencies. LOH was seen for chromosome 1p13 in 8 of 18 (44%), 1p36 in 9 of 15 (60%), 3p12-13 in 12 of 18 (67%), and 3q23-24 in 10 of 16 (63%) of AMHs, and for chromosome 1p13 in 13 of 13 (100%), 1p36 in 7 of 11 (64%), 3p12-13 in 4 of 11 (36%), and 3q23-24 in 11 of 12 (92%) of PCCs. Our results indicate that AMHs are not hyperplasias and, in clinical practice, should be regarded as PCCs, which has an impact on the diagnosis and treatment of MEN2 patients. We therefore propose to replace the term AMH with micro-PCC to indicate adrenal medullary proliferations of less than 1 cm.
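
    The LOH frequencies quoted above can be recomputed directly from the raw positive/informative-case counts. The sketch below simply tabulates those ratios; the variable names are illustrative, not from the paper:

```python
# Recomputing the LOH percentages quoted in the abstract from the raw
# positive/informative-case counts (variable names are illustrative).
loh_counts = {
    # locus: {group: (LOH-positive cases, informative cases)}
    "1p13":    {"AMH": (8, 18),  "PCC": (13, 13)},
    "1p36":    {"AMH": (9, 15),  "PCC": (7, 11)},
    "3p12-13": {"AMH": (12, 18), "PCC": (4, 11)},
    "3q23-24": {"AMH": (10, 16), "PCC": (11, 12)},
}

for locus, groups in loh_counts.items():
    for group, (pos, n) in groups.items():
        print(f"{locus:8s} {group}: {pos}/{n} = {100.0 * pos / n:.1f}%")
```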

  19. IMPROVED ANTIFOAM AGENT STUDY END OF YEAR REPORT, EM PROJECT 3.2.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D.; Koopman, D.; Newell, J.

    2011-09-30

    Antifoam 747 is added to minimize foam produced by process gases and water vapor during chemical processing of sludge in the Defense Waste Processing Facility (DWPF). This allows DWPF to maximize acid addition and evaporation rates to minimize the cycle time in the Chemical Processing Cell (CPC). Improvements in DWPF melt rate due to the addition of bubblers in the melter have resulted in the need for further reductions in cycle time in the CPC. This can only be accomplished with an effective antifoam agent. DWPF production was suspended on March 22, 2011 as the result of a Flammable Gas New Information (NI)/Potential Inadequacy in the Safety Analysis (PISA). The issue was that the DWPF melter offgas flammability strategy did not take into account the H and C in the antifoam, potentially flammable components, in the melter feed. It was also determined that DWPF was using much more antifoam than anticipated due to a combination of longer processing in the CPC due to high Hg, longer processing due to Actinide Removal Process (ARP)/Modular Caustic Side Solvent Extraction Unit (MCU) additions, and adding more antifoam than recommended. The resolution to the PISA involved an assessment of the impact of the antifoam on melter flammability and the implementation of a strategy to control additions within acceptable levels. This led to the need to minimize the use of Antifoam 747 in processing beginning in May 2011. DWPF has had limited success in using Antifoam 747 in caustic processing. Since the startup of the ARP facility, the ARP product (chemically similar to caustic sludge) has been added to the Sludge Receipt and Adjustment Tank (SRAT) at boiling and evaporated to maintain a constant SRAT volume. Although there is very little offgas generated during caustic boiling, there is a large volume of water vapor produced, which can lead to foaming. Higher additions and more frequent use of antifoam are used to mitigate the foaming during caustic boiling.
The result of these three issues is that DWPF had three antifoam needs in FY2011: (1) determine the cause of the poor Antifoam 747 performance during caustic boiling; (2) determine the decomposition products of Antifoam 747 during CPC processing; and (3) improve the effectiveness of Antifoam 747 in order to minimize the amount used. Testing was completed by Illinois Institute of Technology (IIT) and Savannah River National Laboratory (SRNL) researchers to address these questions. The testing reported here was funded by both DWPF and DOE/EM 31, and both sets of results are reported in this document for completeness. The results of this research are summarized as follows: (1) The cause of the poor Antifoam 747 performance during caustic boiling was the high hydrolysis rate, which cleaves the antifoam molecule in two, leading to poor antifoam performance. In testing with solutions of pH 1 to 13, the antifoam degraded quickly at pH < 4 and pH > 10. As the antifoam decomposed it lost its spreading ability (wetting agent performance), which is crucial to its antifoaming performance. During testing of a caustic sludge simulant, there was more foam in tests with added Antifoam 747 than in tests without added antifoam. (2) Analyses were completed to determine the composition of the two antifoam components and of Antifoam 747. In addition, the decomposition products of Antifoam 747 during CPC processing of sludge simulants were determined. The main decomposition products were identified as long-chain siloxanes with boiling points > 400 °C. Total antifoam recovery was 33% by mass. In a subsequent study, various compounds potentially related to antifoam were found using semi-volatile organic analysis and volatile organic analysis on the hexane extractions and hexane rinses. These included siloxanes, trimethyl silanol, methoxy trimethyl silane, hexamethyl disiloxane, aliphatic hydrocarbons, dioctyl phthalate, and emulsifiers.
Cumulatively, these species amounted to less than 3% of the antifoam mass. The majority of the antifoam was identified using carbon analysis of the SRAT product (40-80% by mass) and silicon analysis (23-83% by mass) of the condensate. Both studies recommended a better solvent for antifoam and more specific tests for antifoam degradation products than the Si and C analyses used. (3) The DWPF Antifoam 747 Purchase Specification was revised in Month, 2011 with a goal of increasing the quality of Antifoam 747. The purchase specification was changed to specify the manufacturer and product for both components that are blended by Siovation to produce Antifoam 747 for DWPF. Antifoam produced under both the old and new specifications performed very similarly in testing. Since the change in purchase specification has not improved antifoam performance, an improved antifoam agent is required.

  20. Interim glycol flowsheet reduction/oxidation (redox) model for the Defense Waste Processing Facility (DWPF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C. M.; Williams, M. S.; Zamecnik, J. R.

    Control of the REDuction/OXidation (REDOX) state of glasses containing high concentrations of transition metals, such as High Level Waste (HLW) glasses, is critical in order to eliminate processing difficulties caused by overly reduced or overly oxidized melts. Operation of an HLW melter at Fe2+/ΣFe ratios between 0.09 and 0.33, a range that is neither overly oxidizing nor overly reducing, helps retain radionuclides in the melt, i.e. long-lived radioactive 99Tc species in the less volatile reduced Tc4+ state, 104Ru in the melt in the reduced Ru4+ state as insoluble RuO2, and hazardous volatile Cr6+ in the less soluble and less volatile Cr3+ state in the glass. The melter REDOX control balances the oxidants and reductants from the feed and from processing additives such as antifoam. Currently, the Defense Waste Processing Facility (DWPF) is running a formic acid-nitric acid (FN) flowsheet in which formic acid is the main reductant and nitric acid is the main oxidant. During decomposition, formate and formic acid release H2 gas, which requires close control of the melter vapor space flammability. A switch to a nitric acid-glycolic acid (GN) flowsheet is desired, as the glycolic acid flowsheet releases considerably less H2 gas upon decomposition. This would greatly simplify DWPF processing. Development of an EE term for glycolic acid in the GN flowsheet is documented in this study.
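
    As a quick illustration of the operating window described above, a measured Fe2+/ΣFe ratio can be checked against the 0.09-0.33 range. This is a minimal sketch with hypothetical inputs, not the DWPF REDOX model itself:

```python
# Minimal sketch (not the DWPF REDOX model): classify a melt by its
# Fe2+/total-Fe ratio against the operating window quoted in the abstract.
REDOX_MIN, REDOX_MAX = 0.09, 0.33

def redox_state(fe2, fe_total):
    """Return (ratio, classification) for a measured Fe2+ and total Fe."""
    ratio = fe2 / fe_total
    if ratio < REDOX_MIN:
        return ratio, "overly oxidized"
    if ratio > REDOX_MAX:
        return ratio, "overly reduced"
    return ratio, "acceptable"

print(redox_state(0.20, 1.0))  # mid-window: acceptable
print(redox_state(0.05, 1.0))  # below window: overly oxidized
```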

  1. EVALUATION OF THE IMPACT OF THE DEFENSE WASTE PROCESSING FACILITY (DWPF) LABORATORY GERMANIUM OXIDE USE ON RECYCLE TRANSFERS TO THE H-TANK FARM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C.; Laurinat, J.

    2011-08-15

    When processing High Level Waste (HLW) glass, the Defense Waste Processing Facility (DWPF) cannot wait until the melt or waste glass has been made to assess its acceptability, since by then no further changes to the glass composition and acceptability are possible. Therefore, the acceptability decision is made on the upstream feed stream, rather than on the downstream melt or glass product. This strategy is known as 'feed forward statistical process control.' The DWPF depends on chemical analysis of the feed streams from the Sludge Receipt and Adjustment Tank (SRAT) and the Slurry Mix Evaporator (SME), where the frit plus adjusted sludge from the SRAT are mixed. The SME is the last vessel in which any chemical adjustments or frit additions can be made. Once the analyses of the SME product are deemed acceptable, the SME product is transferred to the Melter Feed Tank (MFT) and on to the melter. The SRAT and SME samples have been analyzed by the DWPF laboratory using a 'Cold Chemical' method, but this dissolution did not adequately dissolve all the elemental components. A new dissolution method has been developed which fuses the SRAT or SME product with cesium nitrate (CsNO3), germanium (IV) oxide (GeO2), and cesium carbonate (Cs2CO3) into a cesium germanate glass at 1050 °C in platinum crucibles. Once the germanium glass is formed in that fusion, it is readily dissolved by concentrated nitric acid (about 1 M) to solubilize all the elements in the SRAT and/or SME product for elemental analysis. When the chemical analyses are completed, the acidic cesium germanate solution is transferred from the DWPF analytical laboratory to the Recycle Collection Tank (RCT), where the pH is increased to ~12 before it is released back to the tank farm and the 2H evaporator. As a result, about 2.5 kg of GeO2 per year will be diluted into 1.4 million gallons of recycle.
This 2.5 kg/yr of GeO2 may increase to 4 kg/yr when improvements are implemented to attain an annual canister production goal of 400 canisters. Since no Waste Acceptance Criteria (WAC) exist for germanium in the Tank Farm, the Effluent Treatment Project, or the Saltstone Production Facility, DWPF has requested an evaluation of the fate of the germanium in the caustic environment of the RCT, the 2H evaporator, and the tank farm. This report evaluates the effect of the addition of germanium to the tank farm based on: (1) the large dilution of Ge in the RCT and tank farm; (2) the solubility of germanium in caustic solutions (pH 12-13); (3) the potential of germanium to precipitate as germanium sodalites in the 2H evaporator; and (4) the potential of germanium compounds to precipitate in the evaporator feed tank. This study concludes that transferring up to 4 kg/yr of germanium to the RCT (and subsequently the 2H evaporator feed tank and the 2H evaporator) results in <2 ppm per year (1.834 mg/L), which is the maximum instantaneous concentration expected from DWPF. This concentration is insignificant, as most sodium germanates are soluble at the high pH of the feed tank and evaporator solutions. Even if sodium aluminosilicates form in the 2H evaporator, the Ge will likely substitute for a small amount of the Si in these structures and will be insignificant. It is recommended that the DWPF continue with its strategy to add germanium as a laboratory chemical to Attachment 8.2 of the DWPF Waste Compliance Plan (WCP).
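
    The dilution argument in point (1) can be sketched with back-of-envelope arithmetic. This is an illustrative annual-average estimate, not the report's calculation; the quoted 1.834 mg/L figure is a maximum instantaneous concentration computed on a different basis:

```python
# Back-of-envelope sketch: up to 4 kg/yr of GeO2 diluted into
# ~1.4 million gallons of recycle per year. The annual-average
# concentration comes out well under the <2 ppm bound quoted above.
GAL_TO_L = 3.785  # liters per US gallon

geo2_kg_per_yr = 4.0
recycle_gal_per_yr = 1.4e6

mg_per_yr = geo2_kg_per_yr * 1e6              # kg -> mg
liters_per_yr = recycle_gal_per_yr * GAL_TO_L
avg_mg_per_L = mg_per_yr / liters_per_yr      # ~0.75 mg/L

print(f"annual-average GeO2 concentration: {avg_mg_per_L:.2f} mg/L")
```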

  2. Corrosion Testing of Monofrax K-3 Refractory in Defense Waste Processing Facility (DWPF) Alternate Reductant Feeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, M.; Jantzen, C.; Burket, P.

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS) uses a combination of reductants and oxidants while converting high level waste (HLW) to a borosilicate waste form. A reducing flowsheet is maintained to retain radionuclides in their reduced oxidation states, which promotes their incorporation into borosilicate glass. For the last 20 years of processing, the DWPF has used formic acid as the main reductant and nitric acid as the main oxidant. During reaction in the Chemical Process Cell (CPC), formate and formic acid release measurable H2 gas, which requires monitoring of certain vessels' vapor spaces. A switch to a nitric acid-glycolic acid (NG) flowsheet from the nitric-formic (NF) flowsheet is desired, as the NG flowsheet releases considerably less H2 gas upon decomposition. This would greatly simplify DWPF processing from a safety standpoint, as close monitoring of the H2 gas concentration could become less critical. In terms of the waste glass melter vapor space flammability, the switch from the NF flowsheet to the NG flowsheet showed a reduction in H2 gas production from the vitrification process as well. Given the positive impact of the switch to glycolic acid on the flammability issues, the other impacts of glycolic acid on the facility must be examined.

  3. Integration of the Uncertainties of Anion and TOC Measurements into the Flammability Control Strategy for Sludge Batch 8 at the DWPF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T. B.

    2013-03-14

    The Savannah River National Laboratory (SRNL) has been working with the Savannah River Remediation (SRR) Defense Waste Processing Facility (DWPF) in the development and implementation of a flammability control strategy for DWPF’s melter operation during the processing of Sludge Batch 8 (SB8). SRNL’s support has been in response to technical task requests that have been made by SRR’s Waste Solidification Engineering (WSE) organization. The flammability control strategy relies on measurements that are performed on Slurry Mix Evaporator (SME) samples by the DWPF Laboratory. Measurements of nitrate, oxalate, formate, and total organic carbon (TOC) standards generated by the DWPF Laboratory are presented in this report, and an evaluation of the uncertainties of these measurements is provided. The impact of the uncertainties of these measurements on DWPF’s strategy for controlling melter flammability also is evaluated. The strategy includes monitoring each SME batch for its nitrate content and its TOC content relative to the nitrate content and relative to the antifoam additions made during the preparation of the SME batch. A linearized approach for monitoring the relationship between TOC and nitrate is developed, equations are provided that integrate the measurement uncertainties into the flammability control strategy, and sample calculations for these equations are shown to illustrate the impact of the uncertainties on the flammability control strategy.
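
    A conservative linear TOC-nitrate check of the general kind described above might look like the following sketch. The coefficients, uncertainty treatment, and function name are hypothetical placeholders, not DWPF's actual limits or equations:

```python
# Hedged illustration (hypothetical coefficients, not DWPF's actual
# limits): a linearized flammability constraint of the form
#   TOC <= a + b * nitrate
# evaluated conservatively by biasing measured TOC high and measured
# nitrate low by their one-sigma measurement uncertainties.
def toc_acceptable(toc, u_toc, nitrate, u_nitrate, a=1000.0, b=0.5):
    """Conservative check of TOC (ppm) against a linear nitrate limit."""
    worst_toc = toc + u_toc              # TOC biased high
    worst_nitrate = nitrate - u_nitrate  # nitrate biased low
    return worst_toc <= a + b * worst_nitrate

print(toc_acceptable(5000.0, 250.0, 20000.0, 1000.0))   # True
print(toc_acceptable(12000.0, 600.0, 20000.0, 1000.0))  # False
```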

  4. Modeling and simulation of large scale stirred tank

    NASA Astrophysics Data System (ADS)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from pilot plants that had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington; West Valley in New York; and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing vessel is a flat-bottomed cylindrical tank with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top and bottom regions of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive character that causes excessive erosion of internal vessel components at higher impeller speeds.
The goal for this mixing process is to ensure that the agitation of the vessel is adequate to produce a homogeneous mixture but not so high that it produces excessive erosion of internal components. The main findings reported by this study were: (1) Careful consideration of the fluid yield stress characteristic is required to make predictions of fluid flow behavior. Laminar models can predict flow patterns and stagnant regions in the tank until full movement of the flow field occurs. Power curves and flow patterns were developed for the full-scale mixing model to show the differences in expected performance of the mixing process for a broad range of fluids that exhibit Herschel-Bulkley and Bingham plastic flow behavior. (2) The impeller power demand is independent of the flow model selection for turbulent flow fields in the region of the impeller. The laminar models slightly over-predicted the agitator impeller power demand relative to the turbulent models. (3) The CFD results show that the power number produced by the mixing system is independent of size. The 40 gallon model produced the same power number results as the 9300 gallon model for the same process conditions. (4) The CFD results show that scale-up of fluid motion in a 40 gallon tank should compare with fluid motion at full scale (9300 gallons) by maintaining constant impeller tip speed.
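
    The constant-tip-speed scale-up and power-number findings in items (3) and (4) rest on standard stirred-tank definitions, which can be sketched as follows. These are generic textbook relations with hypothetical example dimensions, not code from the dissertation:

```python
import math

# Standard stirred-tank definitions (generic relations, not the
# dissertation's models):
#   tip speed      V  = pi * D * N
#   power number   Np = P / (rho * N**3 * D**5)
# Holding tip speed constant fixes the scaled impeller speed.

def tip_speed(D, N):
    """Impeller tip speed [m/s] for diameter D [m] and speed N [rev/s]."""
    return math.pi * D * N

def speed_for_constant_tip_speed(D_small, N_small, D_large):
    """Large-scale impeller speed that preserves the pilot tip speed."""
    return N_small * D_small / D_large

def power_number(P, rho, N, D):
    """Dimensionless power number from shaft power P [W]."""
    return P / (rho * N**3 * D**5)

# Example: a hypothetical small pilot impeller scaled up
N_large = speed_for_constant_tip_speed(0.2, 5.0, 1.2)
assert abs(tip_speed(0.2, 5.0) - tip_speed(1.2, N_large)) < 1e-9
print(f"large-scale speed: {N_large:.3f} rev/s")
```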

  5. Antifoam degradation testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D. P.; Zamecnik, J. R.; Newell, D. D.

    2015-08-20

    This report describes the results of testing to quantify the degradation products resulting from the dilution and storage of Antifoam 747. Antifoam degradation is of concern to the Defense Waste Processing Facility (DWPF) due to flammable decomposition products in the vapor phase of the Chemical Process Cell vessels, as well as the collection of flammable and organic species in the offgas condensate. The discovery that hexamethyldisiloxane is formed from the antifoam decomposition was the basis for a Potential Inadequacy in the Safety Analysis declaration by the DWPF.

  6. HLW system plan - revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-01-14

    The projected ability of the Tank Farm to support DWPF startup and continued operation has diminished somewhat since revision 1 of this Plan. The 13 month delay in DWPF startup, which actually helps the Tank Farm condition in the near term, was more than offset by the 9 month delay in ITP startup, the delay in the Evaporator startups and the reduction to Waste Removal funding. This Plan does, however, describe a viable operating strategy for the success of the HLW System and Mission, albeit with less contingency and operating flexibility than in the past. HLWM has focused resources from within the division on five near term programs: The three evaporator restarts, DWPF melter heatup and completion of the ITP outage. The 1H Evaporator was restarted 12/28/93 after a 9 month shutdown for an extensive Conduct of Operations upgrade. The 2F and 2H Evaporators are scheduled to restart 3/94 and 4/94, respectively. The RHLWE startup remains 11/17/97.

  7. The Reversal of Direct Oral Anticoagulants in Animal Models

    PubMed Central

    Honickel, Markus; Akman, Necib; Grottke, Oliver

    2017-01-01

    Several direct oral anticoagulants (DOACs), including direct thrombin and factor Xa inhibitors, have been approved as alternatives to vitamin K antagonist anticoagulants. As with any anticoagulant, DOAC use carries a risk of bleeding. In patients with major bleeding or needing urgent surgery, reversal of DOAC anticoagulation may be required, presenting a clinical challenge. The optimal strategy for DOAC reversal is being refined, and may include use of hemostatic agents such as prothrombin complex concentrates (PCCs; a source of concentrated clotting factors), or DOAC-specific antidotes (which bind their target DOAC to abrogate its activity). Though promising, most specific antidotes are still in development. Preclinical animal research is the key to establishing the efficacy and safety of potential reversal agents. Here, we summarize published preclinical animal studies on reversal of DOAC anticoagulation. These studies (n = 26) were identified via a PubMed search, and used rodent, rabbit, pig, and non-human primate models. The larger of these animals have the advantages of similar blood volume/hemodynamics to humans, and can be used to model polytrauma. We find that in addition to varied species being used, there is variability in the models and assays used between studies; we suggest that blood loss (bleeding volume) is the most clinically relevant measure of DOAC anticoagulation-related bleeding and its reversal. The studies covered indicate that both PCCs and specific reversal agents have the potential to be used as part of a clinical strategy for DOAC reversal. For the future, we advocate the development and use of standardized, clinically, and pharmacologically relevant animal models to study novel DOAC reversal strategies. PMID:28471371

  8. Evaluation of materials and surface treatments for the DWPF melter pour spout bellows protective liner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imrich, K.J.; Bickford, D.F.; Wicks, G.G.

    1997-06-27

    A study was undertaken to evaluate a variety of materials and coatings for the DWPF pour spout bellows liner. The intent was to identify materials that would minimize or eliminate adherence of glass to the bellows liner wall and help minimize possible pluggage during glass pouring operations in DWPF. Glass has been observed adhering to the current bellows liner, which is made of 304L stainless steel. Materials were identified that successfully allowed molten glass to hit these surfaces and not adhere. Results of this study suggest that if these materials are used in the pouring system, glass could still fall into the canister without appreciable plugging, even if an unstable glass stream is produced. The materials should next be evaluated under the most realistic DWPF conditions possible. Other findings of this study include the following: (1) increasing coupon thickness produced a favorable increase in the glass sticking temperature; (2) highly polished surfaces, with the exception of the oxygen-free copper coupon coated with Armoloy dense chromium, did not produce a significant improvement in the glass sticking temperature, and increasing the angle of contact of the coupon to the falling glass did not yield a significant performance improvement; (3) electroplating with gold and silver and various diffusion coatings did not produce a significant increase in the glass sticking temperature. However, they may provide added oxidation and corrosion resistance for copper and bronze liners. Boron nitride coatings delaminated immediately after contact with the molten glass.

  9. Assessment of the Impact of a New Guanidine Suppressor In NGS on F/H Laboratory Analyses For DWPF and Saltstone MCU Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.

    2013-04-29

    Implementation of the Next Generation Solvent (NGS) in the Modular Caustic-Side Solvent Extraction Unit (MCU) will now proceed with a new suppressor compound, 1,2,3-tris(3,7-dimethyloctyl)guanidine (TiDG), replacing the originally planned suppressor for NGS, 1,3-dicyclohexyl-2-(11-methyldodecyl)guanidine (DCiTG). The Savannah River National Laboratory (SRNL) was tasked with evaluating the potential impact to F/H Laboratory analyses supporting the Defense Waste Processing Facility (DWPF) Waste Acceptance Criteria (WAC) used to qualify transfers of MCU Strip Effluent (SE) into the facility and the Saltstone WAC used to qualify transfers of Tank 50 containing Decontaminated Salt Solution (DSS) from MCU into Saltstone. This assigned scope is covered by a Task Technical and Quality Assurance Plan (TTQAP). Previous impact evaluations were conducted when the DCiTG suppressor was planned for NGS and concluded that there was no impact to either the determination of MCU SE pH or the analysis of Isopar® L carryover in the MCU SE and DSS streams. SRNL reported on this series of cross-check studies between the SRNL and F/H Laboratories. The change in suppressor from DCiTG to TiDG in the NGS should not impact the measurement of Isopar® L or pH in SE or DSS necessary to satisfy the DWPF and Saltstone WAC (Tank 50) criteria, respectively. A statistical study of the low bias observed in Isopar® L measurements in both the SRNL and F/H Laboratories may be necessary now that the final NGS composition is fixed, in order to quantify the low bias so that a proper correction can be applied to measurements critical to the DWPF and Saltstone WACs. Depending upon the final DWPF WAC requirement put in place for SE pH, it could become necessary to implement an alternative ICP-AES measurement of boron. The current blended solvent system testing in SRNL should address any impacts to Isopar® L carryover into either the DSS or the SE.
It is recommended that SRNL monitor the current blended solvent work underway with simulants in SRNL, as well as any DWPF CPC testing done with the new SE stream, to ascertain whether any need develops that could result in modification of any currently planned F/H Laboratory testing protocols.

  10. Animal bites and stings reported by United States poison control centers, 2001-2005.

    PubMed

    Langley, Ricky L

    2008-01-01

    There is no single data source for information on the extent of nonfatal injuries inflicted by animals. Although individuals bitten or stung by animals may not visit a health care provider, they may call poison control centers (PCCs) for information. These centers are one source of information on the frequency of occurrence of injuries from animals. The American Association of Poison Control Centers compiles an annual report of exposure calls for various agents, including chemicals, medications, animal bites and stings, plants, and use of antivenoms, from its network of PCCs. An estimate of the severity of exposure for each call is also determined. This review examines summary data on bites and stings from different animal species reported by PCCs from 2001 to 2005. From 2001 to 2005 there were 472,760 reports of animal bites and stings, an average of 94,552 per year. A trend toward increasing use of antivenom was noted over this period. Twenty-seven deaths were recorded, most from snakebites. Poison control centers are a source of information for health care workers on the management of animal bites and stings. The database maintained by the American Association of Poison Control Centers is another source of information on the magnitude and public health impact of injuries from animals.

  11. The Five S’s: A Communication Tool for Child Psychiatric Access Projects

    PubMed Central

    Harrison, Joyce; Wasserman, Kate; Steinberg, Janna; Platt, Rheanna; Coble, Kelly; Bower, Kelly

    2017-01-01

    Given the gap in child psychiatric services available to meet existing pediatric behavioral health needs, children and families are increasingly seeking behavioral health services from their primary care clinicians (PCCs). However, many pediatricians report not feeling adequately trained to meet these needs. As a result, child psychiatric access projects (CPAPs) are being developed around the country to support the integration of care for children. Despite the promise and success of these programs, there are barriers, including the challenge of effective communication between PCCs and child psychiatrists. Consultants from the Maryland CPAP, the Behavioral Health Integration in Pediatric Primary Care (BHIPP) project, have developed a framework called the Five S’s. The Five S’s are Safety, Specific Behaviors, Setting, Scary Things, and Screening/Services. It is a tool that can be used to help PCCs and child psychiatrists communicate and collaborate to formulate pediatric behavioral health cases for consultation or referral requests. Each of these components and its importance to the case consultation are described. Two case studies are presented that illustrate how the Five S’s tool can be used in clinical consultation between PCC and child psychiatrist. We also describe the utility of the tool beyond its use in behavioral health consultation. PMID:27919566

  12. Real-Time Sensor Validation System Developed for Reusable Launch Vehicle Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.

    1997-01-01

    A real-time system for validating sensor health has been developed for the reusable launch vehicle (RLV) program. This system, which is part of the propulsion checkout and control system (PCCS), was designed for use in an integrated propulsion technology demonstrator testbed built by Rockwell International and located at the NASA Marshall Space Flight Center. Work on the sensor health validation system, a result of an industry-NASA partnership, was completed at the NASA Lewis Research Center, then delivered to Marshall for integration and testing. The sensor validation software performs three basic functions: it identifies failed sensors, it provides reconstructed signals for failed sensors, and it identifies off-nominal system transient behavior that cannot be attributed to a failed sensor. The code is initiated by host software before the start of a propulsion system test, and it is called by the host program every control cycle. The output is posted to global memory for use by other PCCS modules. Output includes a list indicating the status of each sensor (i.e., failed, healthy, or reconstructed) and a list of features that are not due to a sensor failure. If a sensor failure is found, the system modifies that sensor's data array by substituting a reconstructed signal, when possible, for use by other PCCS modules.
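
    The three functions described above (failure identification, signal reconstruction, and flagging) can be illustrated with a toy redundancy check on a group of redundant sensors. This is a hypothetical sketch with made-up names and thresholds, not the PCCS sensor validation algorithm:

```python
from statistics import median

# Illustrative sketch only (not the PCCS code): flag a sensor as failed
# when it disagrees with the median of its redundant group by more than
# a threshold, and substitute a reconstructed value from the survivors.
FAIL_THRESHOLD = 5.0  # hypothetical engineering-unit tolerance

def validate(readings):
    """Return per-sensor status and a reconstructed value set."""
    m = median(readings.values())
    status, reconstructed = {}, {}
    for name, value in readings.items():
        if abs(value - m) > FAIL_THRESHOLD:
            status[name] = "failed"
            reconstructed[name] = m   # substitute reconstructed signal
        else:
            status[name] = "healthy"
            reconstructed[name] = value
    return status, reconstructed

status, recon = validate({"p1": 101.2, "p2": 100.8, "p3": 250.0})
print(status)  # p3 is flagged as failed and its value reconstructed
```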

  13. Strain-specific genetics, anatomy and function of enteric neural serotonergic pathways in inbred mice

    PubMed Central

    Neal, Kathleen B; Parry, Laura J; Bornstein, Joel C

    2009-01-01

    Serotonin (5-HT) powerfully affects small intestinal motility, and 5-HT-immunoreactive (IR) neurones are highly conserved between species. 5-HT synthesis in central neurones and gastrointestinal mucosa depends on tissue-specific isoforms of the enzyme tryptophan hydroxylase (TPH). RT-PCR identified strain-specific expression of a polymorphism (1473C/G) of the tph2 gene in longitudinal muscle–myenteric plexus preparations of C57Bl/6 and Balb/c mice. The former expressed the high-activity C allele, the latter the low-activity G allele. Confocal microscopy was used to examine close contacts between 5-HT-IR varicosities and myenteric neurones immunoreactive for neuronal nitric oxide synthase (NOS) or calretinin in these two strains. Significantly more close contacts were identified to NOS- (P < 0.05) and calretinin-IR (P < 0.01) neurones in C57Bl/6 jejunum (NOS 1.6 ± 0.3, n = 52; calretinin 5.2 ± 0.4, n = 54) than in Balb/c jejunum (NOS 0.9 ± 0.2, n = 78; calretinin 3.5 ± 0.3, n = 98). Propagating contractile complexes (PCCs) were identified in the isolated jejunum by constructing spatiotemporal maps from video recordings of cannulated segments in vitro. These clusters of contractions usually arose towards the anal end and propagated orally. Regular PCCs were initiated at intraluminal pressures of 6 cmH2O and abolished by tetrodotoxin (1 μM). Jejunal PCCs from C57Bl/6 mice were suppressed by a combination of granisetron (1 μM, 5-HT3 antagonist) and SB207266 (10 nM, 5-HT4 antagonist), but PCCs from Balb/c mice were unaffected. There were, however, no strain-specific differences in sensitivity of longitudinal muscle contractions to exogenous 5-HT or to blockade of 5-HT3 and 5-HT4 receptors. These data associate a genetic difference with significant structural and functional consequences for enteric neural serotonergic pathways in the jejunum. PMID:19064621

  14. Examination Of Sulfur Measurements In DWPF Sludge Slurry And SRAT Product Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.; Wiedenman, B. J.

    2012-11-29

Savannah River National Laboratory (SRNL) was asked to re-sample the received SB7b WAPS material for wt. % solids, perform an aqua regia digestion and analyze the digested material by inductively coupled plasma–atomic emission spectroscopy (ICP-AES), as well as re-examine the supernate by ICP-AES. The new analyses were requested in order to provide confidence that the initial analytical subsample was representative of the Tank 40 sample received and to replicate the S results obtained on the initial subsample collected. The ICP-AES analyses for S were examined with both axial and radial detection of the sulfur ICP-AES spectroscopic emission lines to ascertain if there was any significant difference in the reported results. The outcome of this second subsample of the Tank 40 WAPS material is the first subject of this report. After examination of the data from the new subsample of the SB7b WAPS material, a team of DWPF and SRNL staff looked for ways to address the question of whether there was in fact insoluble S that was not being accounted for by ion chromatography (IC) analysis. The question of how much S is reaching the melter was thought best addressed by examining a DWPF Slurry Mix Evaporator (SME) Product sample, but the significant dilution of sludge material, containing the S species in question, that results from frit addition was believed to add additional uncertainty to the S analysis of SME Product material. At the time of these discussions it was believed that all S present in a Sludge Receipt and Adjustment Tank (SRAT) Receipt sample would be converted to sulfate during the course of the SRAT cycle. A SRAT Product sample would not have the S dilution effect resulting from frit addition, and hence, it was decided that a DWPF SRAT Product sample would be obtained and submitted to SRNL for digestion and sample preparation followed by a round-robin analysis of the prepared samples by the DWPF Laboratory, F/H Laboratories, and SRNL for S and sulfate. The results of this round-robin analytical study are the second subject of this report.

  15. Defense Waste Processing Facility Simulant Chemical Processing Cell Studies for Sludge Batch 9

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Tara E.; Newell, J. David; Woodham, Wesley H.

The Savannah River National Laboratory (SRNL) received a technical task request from Defense Waste Processing Facility (DWPF) and Saltstone Engineering to perform simulant tests to support the qualification of Sludge Batch 9 (SB9) and to develop the flowsheet for SB9 in the DWPF. These efforts pertained to the DWPF Chemical Process Cell (CPC). CPC experiments were performed using SB9 simulant (SB9A) to qualify SB9 for sludge-only and coupled processing using the nitric-formic flowsheet in the DWPF. Two simulant batches were prepared, one representing SB8 Tank 40H and another representing SB9 Tank 51H. The simulant used for SB9 qualification testing was prepared by blending the SB8 Tank 40H and SB9 Tank 51H simulants; the blended simulant is referred to as SB9A. Eleven CPC experiments were run with an acid stoichiometry ranging between 105% and 145% of the Koopman minimum acid equation (KMA), which is equivalent to 109.7% and 151.5% of the Hsu minimum acid factor. Three runs were performed in the 1-L laboratory-scale setup, whereas the remainder were in the 4-L laboratory-scale setup. Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles were performed on nine of the eleven experiments; the other two were SRAT cycles only. One coupled flowsheet run and one extended run were performed for SRAT and SME processing. Samples of the condensate, sludge, and off-gas were taken to monitor the chemistry of the CPC experiments.
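The two acid-stoichiometry scales quoted above (105-145% of the KMA equation corresponding to 109.7-151.5% of the Hsu factor) imply a single proportionality constant. A minimal sketch of the conversion, assuming the two scales are related by a constant ratio recoverable from either quoted endpoint (the function name is illustrative, not from the report):

```python
# Hedged sketch: the abstract pairs 105% / 145% KMA with 109.7% / 151.5%
# Hsu. Assuming the scales are proportional (both quoted pairs give the
# same ratio, ~1.0448), either pair fixes the conversion factor.

def kma_to_hsu(kma_pct: float, ratio: float = 109.7 / 105.0) -> float:
    """Convert a KMA-based acid stoichiometry (%) to the Hsu scale."""
    return kma_pct * ratio

low = kma_to_hsu(105.0)   # reproduces 109.7
high = kma_to_hsu(145.0)  # reproduces ~151.5
```

Both quoted endpoints reproduce the abstract's values, which supports the proportionality assumption.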

  16. Results of Hg speciation testing on DWPF SMECT-8, OGCT-1, AND OGCT-2 samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C.

    2016-02-22

The Savannah River National Laboratory (SRNL) was tasked with preparing and shipping samples for Hg speciation by Eurofins Frontier Global Sciences, Inc. in Seattle, WA on behalf of the Savannah River Remediation (SRR) Mercury Task Team. The sixteenth shipment of samples was designated to include a Defense Waste Processing Facility (DWPF) Slurry Mix Evaporator Condensate Tank (SMECT) sample from Sludge Receipt and Adjustment Tank (SRAT) Batch 738 processing and two Off-Gas Condensate Tank (OGCT) samples, one following Batch 736 and one following Batch 738. The DWPF sample designations for the three samples analyzed are provided. The Batch 738 ‘End of SME Cycle’ SMECT sample was taken at the conclusion of Slurry Mix Evaporator (SME) operations for this batch and represents the fourth SMECT sample examined from Batch 738. Batch 738 experienced a sludge slurry carryover event, which introduced sludge solids to the SMECT that were particularly evident in the SMECT-5 sample, but less evident in the ‘End of SME Cycle’ SMECT-8 sample.

  17. Impact of Glycolate Anion on Aqueous Corrosion in DWPF and Downstream Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mickalonis, J.

Glycolic acid is being evaluated as an alternate reductant in the preparation of high level waste for the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS). During processing, the glycolic acid may not be completely consumed, with small quantities of the glycolate anion being carried forward to other high level waste (HLW) facilities. The SRS liquid waste contractor requested an assessment of the impact of the glycolate anion on the corrosion of the materials of construction (MoC) throughout the waste processing system, since this impact had not been previously evaluated. A literature review revealed that corrosion data were not available for the MoCs in glycolate-bearing solutions applicable to SRS systems. Data on material compatibility with only glycolic acid or its derivative products were identified; however, data were limited for solutions containing glycolic acid or the glycolate anion. For the proprietary coating systems applied to the DWPF concrete, glycolic acid was deemed compatible since the coatings were resistant to more aggressive chemistries than glycolic acid. Additionally, similar coating resins showed acceptable resistance to glycolic acid.

  19. Nucleation and crystal growth behavior of nepheline in simulated high-level waste glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K.; Amoroso, J.; Mcclane, D.

The Savannah River National Laboratory (SRNL) has been tasked with supporting glass formulation development and process control strategies in key technical areas relevant to the Department of Energy’s Office of River Protection (DOE-ORP) and related to high-level waste (HLW) vitrification at the Waste Treatment and Immobilization Plant (WTP). Of specific interest is the development of predictive models for crystallization of nepheline (NaAlSiO4) in HLW glasses formulated at high alumina concentrations. This report summarizes recent progress by researchers at SRNL towards developing a predictive tool for quantifying nepheline crystallization in HLW glass canisters using laboratory experiments. In this work, differential scanning calorimetry (DSC) was used to obtain the temperature regions over which nucleation and growth of nepheline occur in three simulated HLW glasses - two glasses representative of WTP projections and one glass representative of the Defense Waste Processing Facility (DWPF) product. The DWPF glass, which has been studied previously, was chosen as a reference composition and for comparison purposes. Complementary quantitative X-ray diffraction (XRD) and optical microscopy confirmed the validity of the methodology to determine nucleation and growth behavior as a function of temperature. The nepheline crystallization growth region was determined to generally extend from ~500 to >850 °C, with the maximum growth rates occurring between 600 and 700 °C. For select WTP glass compositions (high Al2O3 and B2O3), the nucleation range extended from ~450 to 600 °C, with the maximum nucleation rates occurring at ~530 °C. For the DWPF glass composition, the nucleation range extended from ~450 to 750 °C, with the maximum nucleation rate occurring at ~640 °C. The nepheline growth at the peak temperature, as determined by XRD, was between 35 and 75 wt.%/hour. A maximum nepheline growth rate of ~0.1 mm/hour at 700 °C was measured for the DWPF composition using optical microscopy. This research establishes a viable alternative to more traditional techniques for evaluating nepheline crystallization in large numbers of glasses, which would otherwise be prohibitively time consuming or impractical. The ultimate objective is to combine the nucleation and growth information obtained from DSC, like that presented in this report, with computer simulations of glass cooling within the canister to accurately predict nepheline crystallization in HLW during processing through WTP.

  20. Winter Severe Weather: A Case Study of the Intense Squall Line of 6-7 January 1995 in the Carolinas

    DTIC Science & Technology

    1996-01-01

[Unrecoverable scanned-figure text from the original report; the legible fragments are meteogram trace labels TMPF (temperature, °F) and DWPF (dew point, °F). No readable abstract is available for this record.]

  1. Molecular and Therapeutic Advances in the Diagnosis and Management of Malignant Pheochromocytomas and Paragangliomas

    PubMed Central

    Lowery, Aoife J.; Walsh, Siun; McDermott, Enda W.

    2013-01-01

    Pheochromocytomas (PCCs) and paragangliomas (PGLs) are rare catecholamine-secreting tumors derived from chromaffin cells originating in the neural crest. These tumors represent a significant diagnostic and therapeutic challenge because the diagnosis of malignancy is frequently made in retrospect by the development of metastatic or recurrent disease. Complete surgical resection offers the only potential for cure; however, recurrence can occur even after apparently successful resection of the primary tumor. The prognosis for malignant disease is poor because traditional treatment modalities have been limited. The last decade has witnessed exciting discoveries in the study of PCCs and PGLs; advances in molecular genetics have uncovered hereditary and germline mutations of at least 10 genes that contribute to the development of these tumors, and increasing knowledge of genotype-phenotype interactions has facilitated more accurate determination of malignant potential. Elucidating the molecular mechanisms responsible for malignant transformation in these tumors has opened avenues of investigation into targeted therapeutics that show promising results. There have also been significant advances in functional and radiological imaging and in the surgical approach to adrenalectomy, which remains the mainstay of treatment for PCC. In this review, we discuss the currently available diagnostic and therapeutic options for patients with malignant PCCs and PGLs and detail the molecular rationale and clinical evidence for novel and emerging diagnostic and therapeutic strategies. PMID:23576482

  2. Building Resiliency in a Palliative Care Team: A Pilot Study.

    PubMed

    Mehta, Darshan H; Perez, Giselle K; Traeger, Lara; Park, Elyse R; Goldman, Roberta E; Haime, Vivian; Chittenden, Eva H; Denninger, John W; Jackson, Vicki A

    2016-03-01

    Palliative care clinicians (PCCs) are vulnerable to burnout as a result of chronic stress related to working with seriously ill patients. Burnout can lead to absenteeism, ineffective communication, medical errors, and job turnover. Interventions that promote better coping with stress are needed in this population. This pilot study tested the feasibility of the Relaxation Response Resiliency Program for Palliative Care Clinicians, a program targeted to decrease stress and increase resiliency, in a multidisciplinary cohort of PCCs (N = 16) at a major academic medical center. A physician delivered the intervention over two months in five sessions (12 hours total). Data were collected the week before the program start and two months after completion. The main outcome was feasibility of the program. Changes in perceived stress, positive and negative affect, perspective taking, optimism, satisfaction with life, and self-efficacy were examined using nonparametric statistical tests. Effect size was quantified using Cohen's d. The intervention was feasible; all participants attended at least four of the five sessions, and there was no attrition. After the intervention, participants showed reductions in perceived stress and improvements in perspective taking. Our findings suggest that a novel team-based resiliency intervention based on elicitation of the relaxation response was feasible and may help promote resiliency and protect against the negative consequences of stress for PCCs. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  3. Pancreatic cancer cells express CD44 variant 9 and multidrug resistance protein 1 during mitosis.

    PubMed

    Kiuchi, Shizuka; Ikeshita, Shunji; Miyatake, Yukiko; Kasahara, Masanori

    2015-02-01

    Pancreatic cancer is one of the most lethal cancers with high metastatic potential and strong chemoresistance. Its intractable natures are attributed to high robustness in tumor cells for their survival. We demonstrate here that pancreatic cancer cells (PCCs) with an epithelial phenotype upregulate cell surface expression of CD44 variant 9 (CD44v9), an important cancer stem cell marker, during the mitotic phases of the cell cycle. Of five human CD44(+) PCC lines examined, three cell lines, PCI-24, PCI-43 and PCI-55, expressed E-cadherin and CD44 variants, suggesting that they have an epithelial phenotype. By contrast, PANC-1 and MIA PaCa-2 cells expressed vimentin and ZEB1, suggesting that they have a mesenchymal phenotype. PCCs with an epithelial phenotype upregulated cell surface expression of CD44v9 in prophase, metaphase, anaphase and telophase and downregulated CD44v9 expression in late-telophase, cytokinesis and interphase. Sorted CD44v9-negative PCI-55 cells resumed CD44v9 expression when they re-entered the mitotic stage. Interestingly, CD44v9(bright) mitotic cells expressed multidrug resistance protein 1 (MDR1) intracellularly. Upregulated expression of CD44v9 and MDR1 might contribute to the intractable nature of PCCs with high proliferative activity. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. The first PANDA tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreier, J.; Huggenberger, M.; Aubert, C.

    1996-08-01

The PANDA test facility at PSI in Switzerland is used to study the long-term performance of the Simplified Boiling Water Reactor (SBWR) Passive Containment Cooling System (PCCS). The PANDA tests demonstrate performance on a larger scale than previous tests and examine the effects of any non-uniform spatial distributions of steam and non-condensables in the system. The PANDA facility has a 1:1 vertical scale and a 1:25 "system" scale (volume, power, etc.). Steady-state PCCS condenser performance tests and extensive facility characterization tests have been completed. Transient system behavior tests were conducted late in 1995; results from the first three transient tests (M3 series) are reviewed. The first PANDA tests showed that the overall behavior of the SBWR containment was repeatable and very favorable; the system exhibited great "robustness."

  5. DWPF Recycle Evaporator Simulant Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M

    2005-04-05

Testing was performed to determine the feasibility and processing characteristics of an evaporation process to reduce the volume of the recycle stream from the Defense Waste Processing Facility (DWPF). The concentrated recycle would be returned to DWPF while the overhead condensate would be transferred to the Effluent Treatment Plant. Various blends of evaporator feed were tested using simulants developed from characterization of actual recycle streams from DWPF and input from DWPF-Engineering. The simulated feed was evaporated in laboratory-scale apparatus to target a 30X volume reduction. Condensate and concentrate samples from each run were analyzed, and the process characteristics (foaming, scaling, etc.) were visually monitored during each run. The following conclusions were made from the testing: Concentration of the "typical" recycle stream in DWPF by 30X was feasible. The addition of DWTT recycle streams to the typical recycle stream raises the solids content of the evaporator feed considerably and lowers the amount of concentration that can be achieved. Foaming was noted during all evaporation tests and must be addressed prior to operation of the full-scale evaporator. Tests were conducted that identified Dow Corning 2210 as an antifoam candidate that warrants further evaluation. The condensate has the potential to exceed the ETP WAC for mercury, silicon, and TOC. Controlling the amount of equipment decontamination recycle in the evaporator blend would help meet the TOC limits. The evaporator condensate will be saturated with mercury, and elemental mercury will collect in the evaporator condensate collection vessel. No scaling on heating surfaces was noted during the tests, but splatter onto the walls of the evaporation vessels led to a buildup of solids. These solids were difficult to remove with 2M nitric acid. Precipitation of solids was not noted during the testing. Some of the aluminum present in the recycle streams was converted from gibbsite to aluminum oxide during the evaporation process.
The following recommendations were made: Recycle from the DWTT should be metered in slowly to the "typical" recycle streams to avoid spikes in solids content, allow consistent processing, and avoid process upsets. Additional studies should be conducted to determine acceptable volume ratios for the HEME dissolution and decontamination solutions in the evaporator feed. Dow Corning 2210 antifoam should be evaluated for use to control foaming. Additional tests are required to determine the concentration of antifoam required to prevent foaming during startup, the frequency of antifoam additions required to control foaming during steady-state processing, and the ability of the antifoam to control foam over a range of potential feed compositions. This evaluation should also include the degradation of the antifoam and its impact on the silicon and TOC content of the condensate. The caustic HEME dissolution recycle stream should be neutralized to at least pH 7 prior to blending with the acidic recycle streams. Dow Corning 2210 should be used during the evaporation testing using the radioactive recycle samples received from DWPF. Evaluation of additional antifoam candidates should be conducted as a backup for Dow Corning 2210. A camera and/or foam detection instrument should be included in the evaporator design to allow monitoring of the foaming behavior during operation. The potential for foam formation and high solids content should be considered during the design of the evaporator vessel.
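The 30X volume reduction described above implies a simple split of feed into concentrate and overhead condensate. A minimal mass-balance sketch under that assumption (the feed volume below is an arbitrary illustration; no volumes are quoted in the abstract):

```python
# Hedged sketch: a 30X volume reduction returns 1/30 of the feed volume as
# concentrate; the remainder leaves as overhead condensate. Purely a
# volume balance, ignoring any solids or chemistry effects.

def evaporator_split(feed_volume: float, reduction_factor: float = 30.0):
    """Return (concentrate, condensate) volumes for a target reduction."""
    concentrate = feed_volume / reduction_factor
    condensate = feed_volume - concentrate
    return concentrate, condensate

# Illustrative only: a 30,000-unit feed yields 1,000 units of concentrate.
conc, cond = evaporator_split(30_000.0)
```

This also makes clear why adding high-solids DWTT streams limits the achievable reduction: the concentrate volume is fixed by the target factor, so a higher-solids feed reaches its solids limit sooner.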

  6. Assessment of the impact of the next generation solvent on DWPF melter off-gas flammability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, W. E.

    2013-02-13

An assessment has been made to evaluate the impact on DWPF melter off-gas flammability of replacing the current solvent used in the Modular Caustic-Side Solvent Extraction Process Unit (MCU) with the Next Generation Solvent (NGS-MCU) and blended solvent. The results of this study showed that the concentrations of nonvolatile carbon and hydrogen of the current solvent in the Slurry Mix Evaporator (SME) product would both be about 29% higher than their counterparts for the NGS-MCU and blended solvent in the absence of guanidine partitioning. When 6 ppm of guanidine (TiDG) was added to the effluent transfer to DWPF to simulate partitioning for the NGS-MCU and blended solvent cases and the concentration of Isopar® L in the effluent transfer was controlled below 87 ppm, the concentrations of nonvolatile carbon and hydrogen of the NGS-MCU and blended solvent were still about 12% and 4% lower, respectively, than those of the current solvent. It is, therefore, concluded that as long as the volume of MCU effluent transfer to DWPF is limited to 15,000 gallons per Sludge Receipt and Adjustment Tank (SRAT)/SME cycle and the concentration of Isopar® L in the effluent transfer is controlled below 87 ppm, using the current solvent assumption of 105 ppm Isopar® L or 150 ppm solvent in lieu of NGS-MCU or blended solvent in the DWPF melter off-gas flammability assessment is conservative for up to an additional 6 ppm of TiDG in the effluent due to guanidine partitioning. This report documents the calculations performed to reach this conclusion.
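The conclusion above bounds the conditions under which the current-solvent flammability assumption remains conservative. A hedged sketch of that acceptance logic (the function and its structure are illustrative, not DWPF's actual control implementation; only the three quoted bounds are from the report):

```python
# Hedged sketch: the assessment's conservatism holds when the MCU effluent
# transfer is <= 15,000 gal per SRAT/SME cycle, Isopar L is controlled
# below 87 ppm, and TiDG from guanidine partitioning is at most 6 ppm.

def transfer_within_assessed_bounds(volume_gal: float,
                                    isopar_l_ppm: float,
                                    tidg_ppm: float) -> bool:
    """True if a transfer stays inside the bounds quoted in the assessment."""
    return (volume_gal <= 15_000
            and isopar_l_ppm < 87
            and tidg_ppm <= 6)

ok = transfer_within_assessed_bounds(15_000, 80, 6)        # within bounds
too_rich = transfer_within_assessed_bounds(15_000, 105, 6)  # Isopar L too high
```

Note that 105 ppm Isopar L is the conservative *assumption* used in the assessment, not an allowed transfer concentration, which is why the second call falls outside the bounds.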

  7. Inhibiting localized corrosion during storage of dilute SRP wastes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oblath, S.B.; Congdon, J.W.

    1986-01-01

High-level radioactive waste will be incorporated in borosilicate glass in the Defense Waste Processing Facility (DWPF) at the Savannah River Plant (SRP). As part of this process, large volumes of inorganic salt wastes will be decontaminated for disposal as low-level waste. The principal contaminants, ¹³⁷Cs and ⁹⁰Sr, are removed by treatment with sodium tetraphenylborate and sodium titanate. The resulting solids will be slurried with a dilute salt solution and stored in existing carbon steel tanks for several years prior to processing and disposal. Initial tests indicated a tendency for localized corrosion of the tanks. An investigation, using nonradioactive simulants for the expected solution compositions, identified inhibitors that would protect the steel. Changes in solution compositions over time, due to radiolytic effects, were also accounted for by the simulants. Six inhibitors were identified that would protect the steel tanks. The effects these inhibitors would have on later processing steps in the DWPF were then evaluated. After this evaluation, only sodium nitrite remained as an inhibitor that was both effective and compatible with the DWPF. The use of this inhibitor has been demonstrated on a real waste slurry.

  8. SLUDGE WASHING AND DEMONSTRATION OF THE DWPF FLOWSHEET IN THE SRNL SHIELDED CELLS FOR SLUDGE BATCH 7A QUALIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pareizs, J.; Billings, A.; Click, D.

    2011-07-08

Waste Solidification Engineering (WSE) has requested, via a Technical Task Request (TTR), that characterization and a radioactive demonstration of the next batch of sludge slurry (Sludge Batch 7a) be completed in the Shielded Cells Facility of the Savannah River National Laboratory (SRNL). This characterization and demonstration, or sludge batch qualification process, is required prior to transfer of the sludge from Tank 51 to the Defense Waste Processing Facility (DWPF) feed tank (Tank 40). The current WSE practice is to prepare sludge batches in Tank 51 by transferring sludge from other tanks. Discharges of nuclear materials from H Canyon are often added to Tank 51 during sludge batch preparation. The sludge is washed and transferred to Tank 40, the current DWPF feed tank. Prior to transfer of Tank 51 to Tank 40, SRNL simulates the Tank Farm and DWPF processes with a Tank 51 sample (referred to as the qualification sample). Sludge Batch 7a (SB7a) is composed of portions of Tanks 4, 7, and 12; the Sludge Batch 6 heel in Tank 51; and a plutonium stream from H Canyon. SRNL received the Tank 51 qualification sample (sample ID HTF-51-10-125) following sludge additions to Tank 51. This report documents: (1) the washing (addition of water to dilute the sludge supernate) and concentration (decanting of supernate) of the SB7a Tank 51 qualification sample to adjust sodium content and weight percent insoluble solids to Tank Farm projections; (2) the performance of a DWPF Chemical Process Cell (CPC) simulation using the washed Tank 51 sample, comprising a Sludge Receipt and Adjustment Tank (SRAT) cycle, where acid was added to the sludge to destroy nitrite and reduce mercury, and a Slurry Mix Evaporator (SME) cycle, where glass frit was added to the sludge in preparation for vitrification (the SME cycle also included replication of five canister decontamination additions and concentrations, with processing parameters based on work with a non-radioactive simulant); (3) vitrification of a portion of the SME product and characterization and durability testing (as measured by the Product Consistency Test (PCT)) of the resulting glass; and (4) rheology measurements of the initial slurry samples and samples after each phase of CPC processing. This program was controlled by a Task Technical and Quality Assurance Plan (TTQAP), and analyses were guided by an Analytical Study Plan. This work is Technical Baseline Research and Development (R&D) for the DWPF. It should be noted that much of the data in this document has been published in interoffice memoranda. The intent of this technical report is to bring all of the SB7a-related data together in a single permanent record and to discuss the overall aspects of SB7a processing.

  9. SLUDGE BATCH 6/TANK 40 SIMULANT CHEMICAL PROCESS CELL SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koopman, David

    2010-04-28

Phase III simulant flowsheet testing was completed using the latest composition estimates for SB6/Tank 40 feed to DWPF. The goals of the testing were to determine reasonable operating conditions and assumptions for the startup of SB6 processing in the DWPF. Testing covered the region from 102-159% of the current DWPF stoichiometric acid equation. Nitrite ion concentration was reduced to 90 mg/kg in the SRAT product of the lowest acid run. The 159% acid run reached 60% of the DWPF Sludge Receipt and Adjustment Tank (SRAT) limit of 0.65 lb H2/hr, and then sporadically exceeded the DWPF Slurry Mix Evaporator (SME) limit of 0.223 lb H2/hr. Hydrogen generation rates peaked at 112% of the SME limit, though higher-than-targeted wt% total solids levels may have been partially responsible for the rates seen. A stoichiometric factor of 120% met both objectives. A processing window for SB6 exists from 102% to something close to 159% based on the simulant results. An initial recommendation for SB6 processing is at 115-120% of the current DWPF stoichiometric acid equation. The addition of simulated Actinide Removal Process (ARP) and Modular Caustic Side Solvent Extraction Unit (MCU) streams to the SRAT cycle had no apparent impact on the preferred stoichiometric factor. Hydrogen generation occurred continuously after acid addition in three of the four tests. The three runs at 120%, 118.4% with ARP/MCU, and 159% stoichiometry were all still producing around 0.1 lb hydrogen/hr at DWPF scale after 36 hours of boiling in the SRAT. The 120% acid run reached 23% of the SRAT limit and 37% of the SME limit. Conversely, nitrous oxide generation was subdued compared to previous sludge batches, staying below 29 lb/hr in all four tests, or about a fourth as much as in comparable SB4 testing. Two processing issues, identified during SB6 Phase II flowsheet testing and qualification simulant testing, were monitored during Phase III.
Mercury material balance closure was impacted by acid stoichiometry, and significant mercury was not accounted for in the highest acid run. Coalescence of elemental mercury droplets in the mercury water wash tank (MWWT) appeared to degrade with increasing stoichiometry. Observations were made of mercury scale formation in the SRAT condenser and MWWT. A tacky mercury amalgam with Rh, Pd, and Cu, plus some Ru and Ca, formed on the impeller at 159% acid. It contained a significant fraction of the available Pd, Cu, and Rh, as well as about 25% of the total mercury charged. Free (elemental) mercury was found in all of the SME products. Ammonia scrubbers were used during the tests to capture off-gas ammonia for material balance purposes. Significant ammonium ion formation was again observed during the SRAT cycle, and ammonia gas entered the off-gas as the pH rose during boiling. Ammonium ion production was lower than in the SB6 Phase II and the qualification simulant testing. Similar ammonium ion formation was seen in the ARP/MCU simulation as in the 120% flowsheet run. A slightly higher pH caused most of the ammonium to vaporize and collect in the ammonia scrubber reflux solution. Two periods of foaminess were noted; neither required additional antifoam to control the foam growth. A steady foam layer formed during reflux in the 120% acid run. It was about an inch thick, with 2-3 times the volume of bubbles typically seen during reflux. A similar foam layer was also seen during caustic boiling of the simulant during the ARP addition. While frequently seen with radioactive sludge, foaminess during caustic boiling with simulants has been relatively rare. Two further flowsheet tests were performed and will be documented separately. One test was to evaluate the impact of process conditions that match current DWPF operation (lower rates). The second test was to evaluate the impact of SRAT/SME processing on the rheology of a modified Phase III simulant that had been made five times more viscous using ultrasonication.
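The hydrogen results above are reported as percentages of the DWPF SRAT (0.65 lb H2/hr) and SME (0.223 lb H2/hr) limits. A small sketch converting those quoted fractions back to absolute rates (variable names are illustrative; only the limits and percentages come from the abstract):

```python
# Hedged sketch: absolute hydrogen rates implied by the reported
# fractions of the DWPF SRAT and SME limits.

SRAT_LIMIT = 0.65   # lb H2/hr
SME_LIMIT = 0.223   # lb H2/hr

def rate_from_fraction(fraction_of_limit: float, limit: float) -> float:
    """Absolute generation rate implied by a fraction of a DWPF limit."""
    return fraction_of_limit * limit

peak_159_srat = rate_from_fraction(0.60, SRAT_LIMIT)  # 60% of SRAT limit -> 0.39 lb/hr
peak_159_sme = rate_from_fraction(1.12, SME_LIMIT)    # 112% of SME limit, i.e. above it
peak_120_srat = rate_from_fraction(0.23, SRAT_LIMIT)  # 23% of SRAT limit
```

This makes the margin at 120% stoichiometry explicit: the 120% run stayed well below both limits, while the 159% run exceeded the SME limit in absolute terms.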

  10. A novel 3-dimensional culture system uncovers growth stimulatory actions by TGFβ in pancreatic cancer cells.

    PubMed

    Sempere, Lorenzo F; Gunn, Jason R; Korc, Murray

    2011-08-01

Transforming Growth Factor-β (TGF-β) exerts cell type-specific and context-dependent effects. Understanding the intrinsic effects of TGF-β on cancer cells in pancreatic ductal adenocarcinoma (PDAC) is a prerequisite for rationalized clinical implementation of TGF-β targeting therapies. Since the tumor microenvironment can affect how cancer cells respond to TGF-β, we employed a novel three-dimensional (3D) culturing system to recapitulate stromal and extracellular matrix interactions. We show here that TGF-β stimulates growth of human and murine pancreatic cancer cell lines (PCCs) when embedded in a 3% collagen IV/laminin-rich gelatinous medium (Matrigel™) over a solidified layer of soft agar. Moreover, in this novel 3D model, concomitant treatment with TGF-β1 and epidermal growth factor (EGF) enhanced PCC growth to a greater extent than either growth factor alone, and conferred increased chemoresistance to cytotoxic compounds. These cooperative growth-stimulatory effects were blocked by pharmacological inhibition of the TGF-β type I receptor with SB431542 or of the EGF receptor with erlotinib. Co-incubation with SB431542 and erlotinib enhanced the efficacy of gemcitabine and cisplatin in PCCs and in primary cell cultures established from pancreata of genetically-engineered mouse models of PDAC. These findings suggest that concomitant inhibition of TGF-β and EGF signaling may represent an effective therapeutic strategy in PDAC, and that this 3D culturing system could be utilized to test ex vivo the therapeutic response of pancreatic tumor biopsies from PDAC patients, thereby providing a functional assay to facilitate personalized targeted therapies.

  11. CPTAC Teams | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    The following are the current CPTAC teams, representing a network of Proteome Characterization Centers (PCCs), Proteogenomic Translational Research Centers (PTRCs), and Proteogenomic Data Analysis Centers (PGDACs). Teams are listed alphabetically by institution, with their respective Principal Investigators:

  12. Polymer-Cement Composites Containing Waste Perlite Powder

    PubMed Central

    Łukowski, Paweł

    2016-01-01

    Polymer-cement composites (PCCs) are materials in which the polymer and mineral binder create an interpenetrating network and co-operate, significantly improving the performance of the material. At the same time, the utilization of waste materials is a demand of sustainable construction. Various mineral powders, such as fly ash or blast-furnace slag, are successfully used for the production of cement and concrete. This paper deals with the use of perlite powder, a burdensome waste from the process of thermal expansion of raw perlite, as a component of PCCs. The results of testing of the mechanical properties of the composite and some microscopic observations are presented, indicating that waste perlite powder can be rationally and efficiently utilized as a component of PCCs. This would lead to a new type of building material that meets the requirements of sustainable construction. PMID:28773961

  13. DWPF DECON FRIT SUPERNATE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D.; Crawford, C.

    2010-09-22

    The Savannah River National Laboratory (SRNL) has been requested to perform analyses on samples of the Defense Waste Processing Facility (DWPF) decon frit slurry (i.e., supernate samples and sump solid samples). Four 1-L liquid slurry samples were provided to SRNL by Savannah River Remediation (SRR) from the 'front-end' decon activities. Additionally, two 1-L sump solids samples were provided to SRNL for compositional and physical analysis. This report contains the results of the supernate analyses, while the solids (sump and slurry) results will be reported in a supplemental report. The analytical data from the decon frit supernate indicate that all of the radionuclide, organic, and inorganic concentrations met the limits in Revision 4 of the Effluent Treatment Plant (ETP) Waste Acceptance Criteria (WAC) with the exception of boron. The ETP WAC limit for boron is 15.0 mg/L while the average measured concentration (based on quadruplicate analysis) was 15.5 mg/L. The measured concentrations of Li, Na, and Si were also relatively high in the supernate analysis. These results are consistent with the relatively high measured value of B given the compositional make-up of Frit 418. Given these results, it was speculated that either (a) Frit 418 was dissolving into the supernate or aqueous fraction and/or (b) fine frit particulates were carried forward to the analytical instrument based on the sampling procedure used (i.e., the supernate samples were not filtered - only settled with the liquid fraction being transferred with a pipette). To address this issue, a filtered supernate sample (using a 0.45 µm filter) was prepared and submitted for analysis. The results of the filtered sample were consistent with the 'unfiltered or settled' sample - relatively high values of B, Li, Na, and Si were found. This suggests that Frit 418 is dissolving in the liquid phase, which could be enhanced by the high surface area of the frit fines or particulates in suspension.
Based on the results of this study, it is recommended that DWPF re-evaluate the technical basis for the B WAC limit (the only component that exceeds the ETP WAC limit from the supernate analyses) or assess if a waiver or exception can be obtained for exceeding this limit. Given the possible dissolution of B, Li, Na, and Si into the supernate (due to dissolution of frit), DWPF may need to assess if the release of these frit components into the supernate is a concern for the disposal options being considered. It should be noted that the results of this study may not be representative of future decon frit solutions or sump/slurry solids samples. Therefore, future DWPF decisions regarding the possible disposal pathways for either the aqueous or solid portions of the Decon Frit system need to factor in the potential differences. More specifically, introduction of a different frit or changes to other DWPF flowsheet unit operations (e.g., a different sludge batch or coupling with other process streams) may impact not only the results but also the conclusions regarding acceptability with respect to the ETP WAC limits.
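The comparison above (a quadruplicate mean of 15.5 mg/L against a 15.0 mg/L limit) can be sketched as a one-sided confidence-bound check. The four replicate values below are invented for illustration; the report gives only the limit and the mean.

```python
# Hypothetical sketch: does a quadruplicate mean exceed a WAC limit with
# statistical confidence? Replicate values are invented; only the limit
# (15.0 mg/L) and the mean (15.5 mg/L) come from the report.
from statistics import mean, stdev
from math import sqrt

WAC_LIMIT_B = 15.0                      # ETP WAC boron limit, mg/L
replicates = [15.3, 15.6, 15.4, 15.7]   # hypothetical quadruplicate results

m = mean(replicates)
se = stdev(replicates) / sqrt(len(replicates))
t_crit = 2.353                          # one-sided 95% Student t, df = 3

lower_95 = m - t_crit * se              # one-sided 95% lower confidence bound
print(f"mean = {m:.2f} mg/L, 95% lower bound = {lower_95:.2f} mg/L")
print("exceeds limit" if lower_95 > WAC_LIMIT_B else "not conclusively above limit")
```

With these hypothetical replicates the lower bound stays above 15.0 mg/L, so the exceedance would be statistically credible rather than measurement noise.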

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K. M.

    The U.S. Department of Energy (DOE), Office of Environmental Management (EM) is sponsoring an international, collaborative project to develop a fundamental model for sulfate solubility in nuclear waste glass. The solubility of sulfate has a significant impact on the achievable waste loading for nuclear waste forms within the DOE complex. These wastes can contain relatively high concentrations of sulfate, which has low solubility in borosilicate glass. This is a significant issue for low-activity waste (LAW) glass and is projected to have a major impact on the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Sulfate solubility has also been a limiting factor for recent high level waste (HLW) sludge processed at the Savannah River Site (SRS) Defense Waste Processing Facility (DWPF). The low solubility of sulfate in glass, along with melter and off-gas corrosion constraints, dictates that the waste be blended with lower sulfate concentration waste sources or washed to remove sulfate prior to vitrification. The development of enhanced borosilicate glass compositions with improved sulfate solubility will allow for higher waste loadings and accelerate mission completion. The objective of the current scope being pursued by SHU is to mature the sulfate solubility model to the point where it can be used to guide glass composition development for DWPF and WTP, allowing for enhanced waste loadings and waste throughput at these facilities. A series of targeted glass compositions was selected to resolve data gaps in the model and is identified as Stage III. SHU fabricated these glasses and sent samples to SRNL for chemical composition analysis. SHU will use the resulting data to enhance the sulfate solubility model and resolve any deficiencies. In this report, SRNL provides chemical analyses for the Stage III, simulated HLW glasses fabricated by SHU in support of the sulfate solubility model development.

  15. Experimental study on the heat transfer characteristics of a nuclear reactor containment wall cooled by gravitationally falling water

    NASA Astrophysics Data System (ADS)

    Pasek, Ari D.; Umar, Efrison; Suwono, Aryadi; Manalu, Reinhard E. E.

    2012-06-01

    Gravitationally falling water cooling is one of the mechanisms utilized by modern nuclear Pressurized Water Reactors (PWRs) for the Passive Containment Cooling System (PCCS). Since this cooling is closely related to safety, the water film cooling characteristics of the PCCS should be studied. This paper deals with an experimental study of laminar water film cooling on a containment model wall. The influences of water mass flow rate and wall heat rate on the heat transfer characteristics were studied. The research began with the design and assembly of a containment model equipped with the water cooling system, and calibration of all measurement devices. The containment model is a scaled-down model of the AP 1000 reactor. Below the containment, steam is generated using electrical heaters. The steam heated the containment wall, and the wall temperatures at several positions were then measured transiently using thermocouples and a data acquisition system. The containment was then cooled by falling water sprayed from the top of the containment. The experiments were done for various wall heat rates and cooling water flow rates. The objective of the research was to find the temperature profile along the wall before and after the water cooling was applied, and to predict water film characteristics such as mean velocity and thickness and their influence on the heat transfer coefficient. The results of the experiments show that the wall temperatures drop significantly after being sprayed with water. The thickness of the water film increases with increasing water flow rate and remains constant with increasing wall heat rate. The heat transfer coefficient decreases as the film mass flow rate increases, because the growing film thickness raises the thermal resistance. The heat transfer coefficient increases slightly as the wall heat rate increases. The experimental results were then compared with previous theoretical studies.
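The reported trend, a thicker film and a lower heat transfer coefficient at higher flow rate, is consistent with the classical Nusselt laminar falling-film relations. The sketch below is not the paper's analysis; the water properties (near 60 °C) are assumptions of this illustration.

```python
# Nusselt laminar falling-film sketch: film thickness grows with flow rate per
# unit width (gamma), so the conduction-limited coefficient h = k/delta falls.
# Property values are assumed (water near 60 C), not taken from the paper.
g   = 9.81       # gravitational acceleration, m/s^2
rho = 983.0      # water density, kg/m^3
mu  = 4.67e-4    # dynamic viscosity, Pa*s
k   = 0.654      # thermal conductivity, W/(m*K)

def film_thickness(gamma):
    """Nusselt laminar film thickness [m] for flow rate per unit width gamma [kg/(m*s)]."""
    return (3.0 * mu * gamma / (rho**2 * g)) ** (1.0 / 3.0)

def htc(gamma):
    """Conduction-limited film heat transfer coefficient h = k / delta [W/(m^2*K)]."""
    return k / film_thickness(gamma)

for gamma in (0.05, 0.10, 0.20):
    print(f"gamma={gamma:.2f} kg/(m*s): delta={film_thickness(gamma)*1e3:.3f} mm, "
          f"h={htc(gamma):.0f} W/(m^2*K)")
```

Doubling the flow rate thickens the film by the cube root of two, which is why the coefficient falls only gradually with flow rate.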

  16. Results of Hg speciation testing on DWPF SMECT-4, SMECT-6, and RCT-2 samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.

    2016-02-04

    The Savannah River National Laboratory (SRNL) was tasked with preparing and shipping samples for Hg speciation by Eurofins Frontier Global Sciences, Inc. in Seattle, WA on behalf of the Savannah River Remediation (SRR) Mercury Task Team. The fifteenth shipment of samples was designated to include Defense Waste Processing Facility (DWPF) Slurry Mix Evaporator Condensate Tank (SMECT) samples from Sludge Receipt and Adjustment Tank (SRAT) Batch 738 and a Recycle Condensate Tank (RCT) sample from SRAT Batch 736. The DWPF sample designations for the three samples analyzed are provided in Table 1. The Batch 738 ‘Baseline’ SMECT sample was taken prior to Precipitate Reactor Feed Tank (PRFT) addition and concentration and therefore precedes the SMECT-5 sample reported previously. The Batch 738 ‘End of SRAT Cycle’ SMECT sample was taken at the conclusion of SRAT operations for this batch (PRFT addition/concentration, acid additions, initial concentration, MCU addition, and steam stripping). Batch 738 experienced a sludge slurry carryover event, which introduced sludge solids to the SMECT that were particularly evident in the SMECT-5 sample, but less evident in the ‘End of SRAT Cycle’ SMECT-6 sample. The Batch 736 ‘After SME’ RCT sample was taken after completion of SMECT transfers at the end of the SME cycle.

  17. DEVELOPMENT OF AN ANTIFOAM TRACKING SYSTEM AS AN OPTION TO SUPPORT THE MELTER OFF-GAS FLAMMABILITY CONTROL STRATEGY AT THE DWPF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.; Lambert, D.

    The Savannah River National Laboratory (SRNL) has been working with the Savannah River Remediation (SRR) Defense Waste Processing Facility (DWPF) in the development and implementation of an additional strategy for confidently satisfying the flammability controls for DWPF’s melter operation. An initial strategy for implementing the operational constraints associated with flammability control in DWPF was based upon an analytically determined carbon concentration from antifoam. Due to the conservative error structure associated with the analytical approach, its implementation has significantly reduced the operating window for processing and has led to recurrent Slurry Mix Evaporator (SME) and Melter Feed Tank (MFT) remediation. To address the adverse operating impact of the current implementation strategy, SRR issued a Technical Task Request (TTR) to SRNL requesting the development and documentation of an alternate strategy for evaluating the carbon contribution from antifoam. The proposed strategy presented in this report was developed under the guidance of a Task Technical and Quality Assurance Plan (TTQAP) and involves calculating the carbon concentration from antifoam based upon the actual mass of antifoam added to the process assuming 100% retention. The mass of antifoam in the Additive Mix Feed Tank (AMFT), in the Sludge Receipt and Adjustment Tank (SRAT), and in the SME is tracked by mass balance as part of this strategy. As these quantities are monitored, the random and bias uncertainties affecting their values are also maintained and accounted for. This report documents: 1) the development of an alternate implementation strategy and associated equations describing the carbon concentration from antifoam in each SME batch derived from the actual amount of antifoam introduced into the AMFT, SRAT, and SME during the processing of the batch.
2) the equations and error structure for incorporating the proposed strategy into melter off-gas flammability assessments. Sample calculations of the system are also included in this report. Please note that the system developed and documented in this report is intended as an alternative to the current, analytically-driven system being utilized by DWPF; the proposed system is not intended to eliminate the current system. Also note that the system developed in this report to track antifoam mass in the AMFT, SRAT, and SME will be applicable beyond just Sludge Batch 8. While the model used to determine acceptability of the SME product with respect to melter off-gas flammability controls must be reassessed for each change in sludge batch, the antifoam mass tracking methodology is independent of sludge batch composition and as such will be transferable to future sludge batches.
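The tracking idea described above can be sketched in a few lines: accumulate the antifoam mass added across the AMFT, SRAT, and SME, assume 100% retention, and propagate random uncertainties by root-sum-square. This is a minimal illustration, not the DWPF implementation; all masses, uncertainties, and the carbon fraction are invented.

```python
# Minimal antifoam mass-balance sketch (NOT the DWPF system). Vessel names
# follow the abstract; the masses, 1-sigma uncertainties, and the antifoam
# carbon mass fraction are invented for illustration.
from math import sqrt

additions = {            # vessel -> (antifoam mass added [kg], random 1-sigma [kg])
    "AMFT": (12.0, 0.3),
    "SRAT": (4.0, 0.2),
    "SME":  (2.0, 0.1),
}

total_mass = sum(m for m, _ in additions.values())
total_sigma = sqrt(sum(s**2 for _, s in additions.values()))  # root-sum-square

CARBON_FRACTION = 0.32   # assumed carbon mass fraction of the antifoam
carbon = CARBON_FRACTION * total_mass
carbon_sigma = CARBON_FRACTION * total_sigma

print(f"antifoam: {total_mass:.1f} +/- {total_sigma:.2f} kg (1 sigma)")
print(f"carbon from antifoam: {carbon:.2f} +/- {carbon_sigma:.2f} kg")
```

A bias term, which the report says is also tracked, would be carried separately rather than root-sum-squared with the random components.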

  18. Lipid rescue 911: Are poison centers recommending intravenous fat emulsion therapy for severe poisoning?

    PubMed

    Christian, Michael R; Pallasch, Erin M; Wahl, Michael; Mycyk, Mark B

    2013-09-01

    Intravenous fat emulsion (IFE) therapy is a novel treatment that has been used to reverse the acute toxicity of some xenobiotics with varied success. We sought to determine how US Poison Control Centers (PCCs) have incorporated IFE as a treatment strategy for poisoning. A closed-format multiple-choice survey instrument was developed, piloted, revised, and then sent electronically to every medical director of an accredited US PCC in March 2011. Addresses were obtained from the American Association of Poison Control Centers listserv, and participation was voluntary and remained anonymous. Data were analyzed using descriptive statistics. The majority of PCC medical directors completed the survey (45 out of 57; 79 %). Of the 45 respondents, all felt that IFE therapy played a role in the acute overdose setting. Most PCCs (30 out of 45; 67 %) have a protocol for IFE therapy. In a scenario with "cardiac arrest" due to a single xenobiotic, directors stated that their center would "always" or "often" recommend IFE after overdose of bupivacaine (43 out of 45; 96 %), verapamil (36 out of 45; 80 %), amitriptyline (31 out of 45; 69 %), or an unknown xenobiotic (12 out of 45; 27 %). In a scenario with "shock" due to a single xenobiotic, directors stated that their PCC would "always" or "often" recommend IFE after overdose of bupivacaine (40 out of 45; 89 %), verapamil (28 out of 45; 62 %), amitriptyline (25 out of 45; 56 %), or an unknown xenobiotic (8 out of 45; 18 %). IFE therapy is being recommended by US PCCs; protocols and dosing regimens are nearly uniform. Most directors feel that IFE is safe but are more likely to recommend IFE in patients with cardiac arrest than in patients with severe hemodynamic compromise.
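The percentages quoted above follow directly from the reported counts; this small sketch just tabulates the count/denominator pairs from the abstract and reproduces the rounded percentages.

```python
# Reproduce the survey percentages from the reported counts (cardiac-arrest
# scenario plus overall figures); labels paraphrase the abstract.
results = {
    "response rate":              (45, 57),
    "have an IFE protocol":       (30, 45),
    "arrest: bupivacaine":        (43, 45),
    "arrest: verapamil":          (36, 45),
    "arrest: amitriptyline":      (31, 45),
    "arrest: unknown xenobiotic": (12, 45),
}

for label, (n, d) in results.items():
    print(f"{label}: {n}/{d} = {round(100 * n / d)}%")
```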

  19. Impact of platform switching on marginal peri-implant bone-level changes. A systematic review and meta-analysis

    PubMed Central

    Strietzel, Frank Peter; Neumann, Konrad; Hertel, Moritz

    2015-01-01

    Objective To address the focused question, is there an impact of platform switching (PS) on marginal bone level (MBL) changes around endosseous implants compared to implants with platform matching (PM) implant-abutment configurations? Material and methods A systematic literature search was conducted using the electronic databases PubMed, Web of Science, Journals@Ovid Full Text and Embase, plus a manual search, for human randomized clinical trials (RCTs) and prospective clinical controlled cohort studies (PCCS) reporting on MBL changes at implants with PS-, compared with PM-implant-abutment connections, published between 2005 and June 2013. Results Twenty-two publications were eligible for the systematic review. The qualitative analysis of 15 RCTs and seven PCCS revealed more studies (13 RCTs and three PCCS) showing significantly less mean marginal bone loss around implants with PS- compared to PM-implant-abutment connections, indicating a clear tendency favoring the PS technique. A meta-analysis including 13 RCTs revealed a significantly smaller mean MBL change at PS implants (0.49 mm [CI95% 0.38; 0.60]) compared with PM implants (1.01 mm [CI95% 0.62; 1.40]) (P < 0.0001). Conclusions The meta-analysis revealed a significantly smaller mean MBL change at implants with a PS compared to a PM implant-abutment configuration. Most of the included studies showed an unclear or high risk of bias and had relatively short follow-up periods. The qualitative analysis revealed a tendency favoring the PS technique to prevent or minimize peri-implant marginal bone loss compared with the PM technique. Due to the heterogeneity of the included studies, their results require cautious interpretation. PMID:24438506

  20. Alternate Reductant Cold Cap Evaluation Furnace Phase II Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, F. C.; Stone, M. E.; Miller, D. H.

    2014-09-03

    Savannah River Remediation (SRR) conducted a Systems Engineering Evaluation (SEE) to determine the optimum alternate reductant flowsheet for the Defense Waste Processing Facility (DWPF). Specifically, two proposed flowsheets (nitric–formic–glycolic and nitric–formic–sugar) were evaluated based upon results from preliminary testing. Comparison of the two flowsheets among evaluation criteria indicated a preference towards the nitric–formic–glycolic flowsheet. Further research and development of this flowsheet eliminated the formic acid, and as a result, the nitric–glycolic flowsheet was recommended for further testing. Based on the development of a roadmap for the nitric–glycolic acid flowsheet, Waste Solidification Engineering (WS-E) issued a Technical Task Request (TTR) to address flammability issues that may impact the implementation of this flowsheet. Melter testing was requested in order to define the DWPF flammability envelope for the nitric-glycolic acid flowsheet. The Savannah River National Laboratory (SRNL) Cold Cap Evaluation Furnace (CEF), a 1/12th scale DWPF melter, was selected by the SRR Alternate Reductant project team as the melter platform for this testing. The overall scope was divided into the following sub-tasks as discussed in the Task Technical and Quality Assurance Plan (TTQAP): Phase I - A nitric–formic acid flowsheet melter test (unbubbled) to baseline the CEF cold cap and vapor space data to the benchmark melter flammability models; Phase II - A nitric–glycolic acid flowsheet melter test (unbubbled and bubbled) to: Define new cold cap reactions and global kinetic parameters in support of the melter flammability model development; Quantify off-gas surging potential of the feed; Characterize off-gas condensate for complete organic and inorganic carbon species.
After charging the CEF with cullet from Phase I CEF testing, the melter was slurry-fed with glycolic flowsheet based SB6-Frit 418 melter feed at 36% waste loading and was operated continuously for 25 days. Process data were collected throughout testing and included melter operation parameters and off-gas chemistry. In order to generate off-gas data in support of the flammability model development for the nitric-glycolic flowsheet, vapor space steady state testing in the range of ~300-750°C was conducted under the following conditions: (i) 100% (nominal and excess antifoam levels) and 125% stoichiometry feed and (ii) with and without argon bubbling. Adjustments to feed rate, heater outputs and purge air flow were necessary in order to achieve vapor space temperatures in this range. Surge testing was also completed under nominal conditions for four days with argon bubbling and one day without argon bubbling.

  1. EVALUATION OF ARG-1 SAMPLES PREPARED BY CESIUM CARBONATE DISSOLUTION DURING THE ISOLOK SME ACCEPTABILITY TESTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.; Hera, K.; Coleman, C.

    2011-12-05

    Evaluation of Defense Waste Processing Facility (DWPF) Chemical Process Cell (CPC) cycle time identified several opportunities to improve the CPC processing time. The Mechanical Systems & Custom Equipment Development (MS&CED) Section of the Savannah River National Laboratory (SRNL) recently completed the evaluation of one of these opportunities - the possibility of using an Isolok sampling valve as an alternative to the Hydragard valve for taking DWPF process samples at the Slurry Mix Evaporator (SME). The use of an Isolok for SME sampling has the potential to improve operability, reduce maintenance time, and decrease CPC cycle time. The SME acceptability testing for the Isolok was requested in Task Technical Request (TTR) HLW-DWPF-TTR-2010-0036 and was conducted as outlined in Task Technical and Quality Assurance Plan (TTQAP) SRNLRP-2011-00145. RW-0333P QA requirements applied to the task, and the results from the investigation were documented in SRNL-STI-2011-00693. Measurement of the chemical composition of study samples was a critical component of the SME acceptability testing of the Isolok. A sampling and analytical plan supported the investigation, with the analytical plan directing that the study samples be prepared by a cesium carbonate (Cs2CO3) fusion dissolution method and analyzed by Inductively Coupled Plasma - Optical Emission Spectroscopy (ICP-OES). The use of the cesium carbonate preparation method for the Isolok testing provided an opportunity for an additional assessment of this dissolution method, which is being investigated as a potential replacement for the two methods (i.e., sodium peroxide fusion and mixed acid dissolution) that have been used at the DWPF for the analysis of SME samples. Earlier testing of the Cs2CO3 method yielded promising results, which led to a TTR from Savannah River Remediation, LLC (SRR) to SRNL for additional support and an associated TTQAP to direct the SRNL efforts.
A technical report resulting from this work was issued that recommended that the mixed acid method be replaced by the Cs2CO3 method for the measurement of magnesium (Mg), sodium (Na), and zirconium (Zr), with additional testing of the method by the DWPF Laboratory being needed before further implementation of the Cs2CO3 method at that laboratory. While the SME acceptability testing of the Isolok does not address any of the open issues remaining after the publication of the recommendation for the replacement of the mixed acid method by the Cs2CO3 method (since those issues are to be addressed by the DWPF Laboratory), the Cs2CO3 testing associated with the Isolok testing does provide additional insight into the performance of the method as conducted by SRNL. The performance is investigated by examining the composition measurement data generated by the samples of a standard glass, the Analytical Reference Glass - 1 (ARG-1), that were prepared by the Cs2CO3 method and included in the SME acceptability testing of the Isolok. The measurements of these samples were presented as part of the study results, but no statistical analysis of these measurements was conducted as part of those results. It is the purpose of this report to provide that analysis, which was supported using JMP Version 7.0.2.

  2. The melatonin action on stromal stem cells within pericryptal area in colon cancer model under constant light

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannen, Vinicius, E-mail: kannen71@yahoo.com.br; Marini, Tassiana; Zanette, Dalila L.

    Research highlights: → We investigated melatonin against the malignant effects of constant light. → Melatonin supplementation increased its serum levels and its receptor expression. → Melatonin decreased cancer stem cells and dysplastic injuries in colon tissue. → Melatonin controlled the proliferative process and apoptosis induction. -- Abstract: Constant light (LL) is associated with a high incidence of colon cancer. MLT supplementation was related to significant control of preneoplastic patterns. We sought to analyze preneoplastic patterns in colon tissue from animals exposed to an LL environment (14 days; 300 lx), MLT supplementation (10 mg/kg/day) and DMH treatment (1,2-dimethylhydrazine; 125 mg/kg). Rodents were sacrificed and MLT serum levels were measured by radioimmunoassay. Our results indicated that LL induced ACF development (p < 0.001) with a great potential to increase the number of CD133(+) and CD68(+) cells (p < 0.05 and p < 0.001). LL also increased the proliferative process (PCNA-Li; p < 0.001) as well as decreased caspase-3 protein (p < 0.001), related to higher COX-2 protein expression (p < 0.001) within the pericryptal colonic stroma (PCCS). However, MLT supplementation controlled the development of dysplastic ACF (p < 0.001), diminishing preneoplastic patterns within the PCCS such as CD133 and CD68 (p < 0.05 and p < 0.001). These events were related to a decreased PCNA-Li index and higher expression of caspase-3 protein. Thus, MLT showed a great potential to control the preneoplastic patterns induced by LL.

  3. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    ERIC Educational Resources Information Center

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  4. YIELD STRESS REDUCTION OF DWPF MELTER FEED SLURRIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M.; Smith, M.

    2006-12-28

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site vitrifies High Level Waste for repository internment. The process consists of three major steps: waste pretreatment, vitrification, and canister decontamination/sealing. The HLW consists of insoluble metal hydroxides (primarily iron, aluminum, magnesium, manganese, and uranium) and soluble sodium salts (carbonate, hydroxide, nitrite, nitrate, sulfate). The pretreatment process acidifies the sludge with nitric and formic acids, adds the glass formers as glass frit, then concentrates the resulting slurry to approximately 50 weight percent (wt%) total solids. This slurry is fed to the joule-heated melter where the remaining water is evaporated, followed by calcination of the solids and conversion to glass. The Savannah River National Laboratory (SRNL) is currently assisting DWPF efforts to increase throughput of the melter. As part of this effort, SRNL has investigated methods to increase the solids content of the melter feed to reduce the heat load required to complete the evaporation of water and allow more of the energy available to calcine and vitrify the waste. The process equipment in the facility is fixed and cannot process materials with high yield stresses; therefore, increasing the solids content will require that the yield stress of the melter feed slurries be reduced. Changing the glass former added during pretreatment from an irregularly shaped glass frit to nearly spherical beads was evaluated. The evaluation required a systems approach which included evaluations of the effectiveness of beads in reducing the melter feed yield stress as well as evaluations of the processing impacts of changing the frit morphology.
Processing impacts of beads include changing the settling rate of the glass former (which affects mixing and sampling of the melter feed slurry and the frit addition equipment) as well as impacts on the melt behavior due to the decreased surface area of the beads versus frit. Beads were produced from the DWPF process frit by fire polishing. The frit was allowed to free fall through a flame, then quenched with a water spray. Approximately 90% of the frit was converted to beads by this process, as shown in Figure 1. Borosilicate beads of various diameters were also procured for initial testing.
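The settling-rate concern above can be illustrated with Stokes' law: for small particles, terminal velocity scales with the square of the equivalent diameter, and smooth spheres settle faster than irregular frit of the same mass because of lower drag. The property values below (glass in a slurry supernate) are assumptions of this sketch, not DWPF data.

```python
# Stokes' law sketch of glass-former settling (assumed properties, not DWPF
# data): terminal velocity of a small sphere scales with diameter squared.
g     = 9.81      # m/s^2
rho_p = 2470.0    # glass particle density, kg/m^3 (assumed)
rho_f = 1100.0    # slurry liquid density, kg/m^3 (assumed)
mu    = 5.0e-3    # liquid viscosity, Pa*s (assumed)

def stokes_velocity(d):
    """Terminal settling velocity [m/s] of a sphere of diameter d [m]."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

for d_um in (50, 100, 200):
    v = stokes_velocity(d_um * 1e-6)
    print(f"d = {d_um} um: v = {v * 1e3:.2f} mm/s")
```

Doubling the diameter quadruples the settling velocity, which is why switching to beads of a different effective size would change mixing and sampling behavior.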

  5. EVALUATION OF REQUIREMENTS FOR THE DWPF HIGHER CAPACITY CANISTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.; Estochen, E.; Jordan, J.

    2014-08-05

    The Defense Waste Processing Facility (DWPF) is considering the option to increase canister glass capacity by reducing the wall thickness of the current production canister. This design has been designated as the DWPF Higher Capacity Canister (HCC). A significant decrease in the number of canisters processed during the life of the facility would be achieved if the HCC were implemented, leading to an overall reduction in life cycle costs. Prior to implementation of the change, Savannah River National Laboratory (SRNL) was requested to conduct an evaluation of the potential impacts. The specific areas of interest included loading and deformation of the canister during the filling process. Additionally, the effect of the reduced wall thickness on corrosion and material compatibility needed to be addressed. Finally, the integrity of the canister during decontamination and other handling steps needed to be determined. The initial request regarding canister fabrication was later addressed in an alternate study. A preliminary review of canister requirements and previous testing was conducted prior to determining the testing approach. Thermal and stress models were developed to predict the forces on the canister during the pouring and cooling process. The thermal model shows the HCC increasing and decreasing in temperature at a slightly faster rate than the original. The HCC is shown to have a 3°F ΔT between the inner and outer surfaces versus a 5°F ΔT for the original design. The stress model indicates strain values ranging from 1.9% to 2.9% for the standard canister and 2.5% to 3.1% for the HCC. These values are dependent on the glass level relative to the thickness transition between the top head and the canister wall. This information, along with field readings, was used to set up environmental test conditions for corrosion studies. Small 304-L canisters were filled with glass and subjected to accelerated environmental testing for 3 months.
No evidence of stress corrosion cracking was indicated on either the canisters or U-bend coupons. Calculations and finite element modeling were used to determine forces over a range of handling conditions along with possible forces during decontamination. While expected reductions in some physical characteristics were found in the HCC, none were found to be significant when compared to the required values necessary to perform its intended function. Based on this study and a review of successful testing of thinner canisters at the West Valley Demonstration Project (WVDP), the mechanical properties obtained with the thinner wall do not significantly undermine the ability of the canister to perform its intended function.
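The smaller through-wall ΔT reported for the thinner HCC wall is the expected direction: for steady one-dimensional conduction at a fixed heat flux, the temperature drop across the wall is proportional to its thickness (ΔT = q″t/k). The heat flux and wall thicknesses below are assumed numbers for a back-of-envelope sketch, not values from the SRNL model.

```python
# Back-of-envelope conduction check (assumed numbers, not the SRNL model):
# delta_T = q'' * t / k, so a thinner wall gives a smaller through-wall drop.
k_304L = 16.2          # W/(m*K), typical handbook value for 304L stainless
q_flux = 5000.0        # W/m^2, assumed heat flux during glass pouring

def wall_delta_t(thickness_m):
    """Steady-state conduction temperature drop across the wall [K]."""
    return q_flux * thickness_m / k_304L

t_std, t_hcc = 0.0095, 0.0060   # assumed wall thicknesses, m
for name, t in (("standard", t_std), ("HCC", t_hcc)):
    print(f"{name} wall ({t * 1e3:.1f} mm): delta_T = {wall_delta_t(t):.2f} K")
```

Whatever the actual thicknesses, the ratio of the two ΔT values equals the ratio of the wall thicknesses, matching the reported trend of a smaller drop for the HCC.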

  6. Defense Waste Processing Facility Process Enhancements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bricker, Jonathan

    2010-11-01

    Jonathan Bricker provides an overview of process enhancements under way at the Defense Waste Processing Facility (DWPF) at SRS. These enhancements include melter bubblers, reduced water use, and an alternate reductant.

  7. Impact of glycolate anion on aqueous corrosion in DWPF and downstream facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mickalonis, J. I.

    2015-12-15

    Glycolic acid is being evaluated as an alternate reductant in the preparation of high level waste for the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS). During processing, the glycolic acid may not be completely consumed, with small quantities of the glycolate anion being carried forward to other high level waste (HLW) facilities. The impact of the glycolate anion on the corrosion of the materials of construction (MoC) throughout the waste processing system has not been previously evaluated. A literature review revealed that corrosion data were not available for the MoCs in glycolate-bearing solutions applicable to SRS systems. Data on material compatibility with glycolic acid alone or its derivative products were identified; however, data were limited for solutions containing glycolic acid or the glycolate anion.

  8. Characterization Of The As-Received Sludge Batch 9 Qualification Sample (Htf-51-15-81)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pareizs, J.

    Savannah River National Laboratory (SRNL) personnel have been requested to qualify the next sludge batch (Sludge Batch 9 – SB9) for processing at the Defense Waste Processing Facility (DWPF). To accomplish this task, Savannah River Remediation (SRR) has sent SRNL a 3-L slurried sample of Tank 51H (HTF-51-15-81) to be characterized, washed, and then used in a lab-scale demonstration of the DWPF flowsheet (potentially after combining with Tank 40H sludge). This report documents the first step of the qualification process – characterization of the as-received Tank 51H qualification sample. These results will be used to support a reprojection of SB9 by SRR, on which final Tank 51H washing, frit development, and Chemical Processing Cell (CPC) activities will be based.

  9. Sludge batch 9 simulant runs using the nitric-glycolic acid flowsheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D. P.; Williams, M. S.; Brandenburg, C. H.

    Testing was completed to develop a Sludge Batch 9 (SB9) nitric-glycolic acid chemical process flowsheet for the Defense Waste Processing Facility's (DWPF) Chemical Process Cell (CPC). CPC simulations were completed using SB9 sludge simulant, Strip Effluent Feed Tank (SEFT) simulant, and Precipitate Reactor Feed Tank (PRFT) simulant. Ten sludge-only Sludge Receipt and Adjustment Tank (SRAT) cycles, four SRAT/Slurry Mix Evaporator (SME) cycles, and one SRAT/SME cycle with actual SB9 sludge were completed. As has been demonstrated in over 100 simulations, the replacement of formic acid with glycolic acid virtually eliminates the CPC's largest flammability hazards, hydrogen and ammonia. Recommended processing conditions are summarized in section 3.5.1. Testing demonstrated that the interim chemistry and Reduction/Oxidation (REDOX) equations are sufficient to predict the composition of DWPF SRAT product and SME product. Additional reports will finalize the chemistry and REDOX equations. Additional testing developed an antifoam strategy to minimize the hexamethyldisiloxane (HMDSO) peak at boiling while controlling foam, based on testing with simulant and actual waste. Implementation of the nitric-glycolic acid flowsheet in DWPF is recommended. This flowsheet not only eliminates the hydrogen and ammonia hazards but will lead to shorter processing times, higher elemental mercury recovery, and more concentrated SRAT and SME products. The steady pH profile is expected to provide flexibility in processing the high volume of strip effluent expected once the Salt Waste Processing Facility starts up.

  10. Parent-Child Center Short-Term Assessment Study. Final Report.

    ERIC Educational Resources Information Center

    Hubbell, Ruth; Barrett, Barbara

    A short-term descriptive assessment, this study provides summary data on the Parent-Child Center (PCC) comprehensive early childhood intervention programs initiated in 1967 and operated by the Administration for Children, Youth, and Families. PCCs provide low-income families with children under three with social service, health, and educational…

  11. PRELIMINARY EVALUATION OF DWPF IMPACTS OF BORIC ACID USE IN CESIUM STRIP FOR SWPF AND MCU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M.

    2010-09-28

    A new solvent system is being evaluated for use in the Modular Caustic-Side Solvent Extraction Unit (MCU) and in the Salt Waste Processing Facility (SWPF). The new system includes the option to replace the current dilute nitric acid strip solution with boric acid. To support this effort, the impact of using 0.01M, 0.1M, 0.25M and 0.5M boric acid in place of 0.001M nitric acid was evaluated for impacts on the DWPF facility. The evaluation covered only the impacts of boric acid in the strip effluent and does not address the other changes in solvents (i.e., the new extractant, called MaxCalix, or the new suppressor, guanidine). Boric acid additions may lead to increased hydrogen generation during the SRAT and SME cycles as well as change the rheological properties of the feed. The boron in the strip effluent will impact glass composition and could require each SME batch to be trimmed with boric acid to account for any changes in the boron from strip effluent additions. Addition of boron with the strip effluent will require changes in the frit composition and could lead to changes in melt behavior. The severity of the impacts from the boric acid additions is dependent on the amount of boric acid added by the strip effluent. The use of 0.1M or higher concentrations of boric acid in the strip effluent was found to significantly impact DWPF operations, while the impact of 0.01M boric acid is expected to be relatively minor. Experimental testing is required to resolve the issues identified during the preliminary evaluation. The issues to be addressed by the testing are: (1) impact on SRAT acid addition and hydrogen generation; (2) impact on melter feed rheology; (3) impact on glass composition control; (4) impact on frit production; and (5) impact on melter offgas. Experimental testing with the improved solvent is also required to determine the impact of any changes in the entrained solvent on DWPF processing.

  12. Solidification of Savannah River plant high level waste

    NASA Astrophysics Data System (ADS)

    Maher, R.; Shafranek, L. F.; Kelley, J. A.; Zeyfang, R. W.

    1981-11-01

    Authorization for construction of the Defense Waste Processing Facility (DWPF) is expected in FY-83. The optimum time for stage 2 authorization is about three years later. Detailed design and construction will require approximately five years for stage 1, with stage 2 construction completed about two to three years later. Production of canisters of waste glass would begin in 1988, and the existing backlog of high level waste sludge stored at SRP would be worked off by about the year 2000. Stage 2 operation could begin in 1990. The technology and engineering are ready for construction and eventual operation of the DWPF for immobilizing high level radioactive waste at Savannah River Plant (SRP). Proceeding with this project will provide the public, and the leadership of this country, with a crucial demonstration that a major quantity of existing high level nuclear wastes can be safely and permanently immobilized.

  13. Verification Of The Defense Waste Processing Facility's (DWPF) Process Digestion Methods For The Sludge Batch 8 Qualification Sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Click, D. R.; Edwards, T. B.; Wiedenman, B. J.

    2013-03-18

    This report contains the results and comparison of data generated from inductively coupled plasma – atomic emission spectroscopy (ICP-AES) analysis of Aqua Regia (AR), Sodium Peroxide/Sodium Hydroxide Fusion Dissolution (PF), and Cold Chem (CC) method digestions, and from Cold Vapor Atomic Absorption analysis of Hg digestions by the DWPF Hg digestion method, of Sludge Batch 8 (SB8) Sludge Receipt and Adjustment Tank (SRAT) Receipt and SB8 SRAT Product samples. The SB8 SRAT Receipt and SB8 SRAT Product samples were prepared in the SRNL Shielded Cells, and the SRAT Receipt material is representative of the sludge that constitutes the SB8 Batch, or qualification, composition. This is the sludge in Tank 51 that is to be transferred into Tank 40, which will contain the heel of Sludge Batch 7b (SB7b), to form the SB8 Blend composition.

  14. Antifoam Degradation Products in Off Gas and Condensate of Sludge Batch 9 Simulant Nitric-Formic Flowsheet Testing for the Defense Waste Processing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, T.

    Ten chemical processing cell (CPC) experiments were performed using simulant to evaluate Sludge Batch 9 for sludge-only and coupled processing using the nitric-formic flowsheet in the Defense Waste Processing Facility (DWPF). Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles were performed on eight of the ten; the other two were SRAT cycles only. Samples of the condensate, sludge, and off gas were taken to monitor the chemistry of the CPC experiments. The Savannah River National Laboratory (SRNL) has previously shown that antifoam decomposes to form flammable organic products (hexamethyldisiloxane (HMDSO), trimethylsilanol (TMS), and propanal) that are present in the vapor phase and condensate of the CPC vessels. To minimize antifoam degradation product formation, a new antifoam addition strategy was implemented at SRNL and DWPF to add antifoam undiluted.

  15. 77 FR 58533 - Notice of Availability of the Draft Environmental Impact Statement for the W.A. Parish Post...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-21

    …W.A. Parish Post-Combustion CO2 Capture and Sequestration Project, Southeastern TX. AGENCY: U.S. … availability of the Draft Environmental Impact Statement for the W.A. Parish Post-Combustion Carbon Dioxide … Parish Post-Combustion CO2 Capture and Sequestration Project (Parish PCCS Project). NRG's proposed…

  16. Modulation of interferon-γ synthesis by the effects of lignin-like enzymatically polymerized polyphenols on antigen-presenting cell activation and the subsequent cell-to-cell interactions.

    PubMed

    Yamanaka, Daisuke; Motoi, Masuro; Ishibashi, Ken-ichi; Miura, Noriko N; Adachi, Yoshiyuki; Ohno, Naohito

    2013-12-15

    Lignin-like polymerized polyphenols strongly activate lymphocytes and induce cytokine synthesis. We aimed to characterise the mechanisms of action of polymerized polyphenols on immunomodulating functions. We compared the reactivity of leukocytes from various organs to polymerized polyphenols. Splenocytes and resident peritoneal cavity cells (PCCs) responded to polymerized polyphenols and released several cytokines, whereas thymocytes and bone-marrow cells showed no response. Next, we eliminated antigen-presenting cells (APCs) from splenocytes to study their involvement in cytokine synthesis. We found that APC-negative splenocytes showed significantly reduced cytokine production induced by polymerized polyphenols. Additionally, adequate interferon-γ (IFN-γ) induction by polymerized polyphenols was mediated by the coexistence of APCs and T cells because the addition of T cells to PCCs increased IFN-γ production. Furthermore, inhibition of the T cell-APC interaction using neutralising antibodies significantly decreased cytokine production. Thus, cytokine induction by polymerized polyphenols was mediated by the interaction between APCs and T cells. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Designing Clinical Space for the Delivery of Integrated Behavioral Health and Primary Care.

    PubMed

    Gunn, Rose; Davis, Melinda M; Hall, Jennifer; Heintzman, John; Muench, John; Smeds, Brianna; Miller, Benjamin F; Miller, William L; Gilchrist, Emma; Brown Levey, Shandra; Brown, Jacqueline; Wise Romero, Pam; Cohen, Deborah J

    2015-01-01

    This study sought to describe features of the physical space in which practices integrating primary care and behavioral health care work and to identify the arrangements that enable integration of care. We conducted an observational study of 19 diverse practices located across the United States. Practice-level data included field notes from 2- to 4-day site visits, transcripts from semistructured interviews with clinicians and clinical staff, online implementation diary posts, and facility photographs. A multidisciplinary team used a 4-stage, systematic approach to analyze data and identify how physical layout enabled the work of integrated care teams. Two dominant spatial layouts emerged across practices: type-1 layouts were characterized by having primary care clinicians (PCCs) and behavioral health clinicians (BHCs) located in separate work areas, and type-2 layouts had BHCs and PCCs sharing work space. We describe these layouts and the influence they have on situational awareness, interprofessional "bumpability," and opportunities for on-the-fly communication. We observed BHCs and PCCs engaging in more face-to-face methods for coordinating integrated care for patients in type-2 layouts (41.5% of observed encounters vs 11.7%; P < .05). We show that practices needed to strike a balance between professional proximity and private work areas to accomplish job tasks. Private workspace was needed for focused work, to see patients, and for consults between clinicians and clinical staff. We describe the ways practices modified and built new space and provide 2 recommended layouts for practices integrating care based on study findings. Physical layout and positioning of professionals' workspace is an important consideration in practices implementing integrated care.
Clinicians, researchers, and health-care administrators are encouraged to consider the role of professional proximity and private working space when creating new facilities or redesigning existing space to foster delivery of integrated behavioral health and primary care. © Copyright 2015 by the American Board of Family Medicine.

  18. Impact of Palliative Care Screening and Consultation in the ICU: A Multihospital Quality Improvement Project.

    PubMed

    Zalenski, Robert J; Jones, Spencer S; Courage, Cheryl; Waselewsky, Denise R; Kostaroff, Anna S; Kaufman, David; Beemath, Afzal; Brofman, John; Castillo, James W; Krayem, Hicham; Marinelli, Anthony; Milner, Bradley; Palleschi, Maria Teresa; Tareen, Mona; Testani, Sheri; Soubani, Ayman; Walch, Julie; Wheeler, Judy; Wilborn, Sonali; Granovsky, Hanna; Welch, Robert D

    2017-01-01

    There are few multicenter studies that examine the impact of systematic screening for palliative care and specialty consultation in the intensive care unit (ICU). To determine the outcomes of receiving palliative care consultation (PCC) for patients who screened positive on palliative care referral criteria. In a prospective quality assurance intervention with a retrospective analysis, the covariate balancing propensity score method was used to estimate the conditional probability of receiving a PCC and to balance important covariates. For patients with and without PCCs, the outcomes studied were as follows: 1) change to "do not resuscitate" (DNR) status, 2) discharge to hospice, 3) 30-day readmission, 4) hospital length of stay (LOS), and 5) total direct hospital costs. Of 405 patients with positive screens, 161 (40%) who received a PCC were compared to 244 who did not. Patients receiving PCCs had higher rates of DNR status (adjusted odds ratio [AOR] = 7.5; 95% CI 5.6-9.9) and hospice referrals (AOR = 7.6; 95% CI 5.0-11.7). They had slightly lower 30-day readmissions (AOR = 0.7; 95% CI 0.5-1.0); no overall difference in direct costs or LOS was found between the two groups. When patients receiving PCCs were stratified by time to PCC initiation, early consultation, by Day 4 of admission, was associated with reductions in LOS (1.7 days; 95% CI −3.1 to −1.2) and average direct variable costs ($1815; 95% CI −$3322 to −$803) compared to those who received no PCC. Receiving a PCC in the ICUs was significantly associated with more frequent DNR code status and hospice referrals, but not with 30-day readmissions or hospital utilization. Early PCC was associated with significant LOS and direct cost reductions. Providing PCC early in the ICU should be considered. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
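The adjusted odds ratios above come from a covariate-balancing propensity score analysis. As a simpler illustration of the odds-ratio metric itself, a crude (unadjusted) 2×2-table calculation can be sketched as follows; the outcome counts used here are hypothetical, not the study's data:

```python
import math

# Crude odds ratio from a 2x2 table with a 95% CI on the log scale.
# Hypothetical counts for illustration only -- the study itself used
# covariate-balancing propensity scores, not a raw table.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = outcome yes/no with PCC; c, d = outcome yes/no without PCC."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 120/161 PCC patients changed to DNR vs 60/244 without a PCC.
or_, lo, hi = odds_ratio_ci(120, 41, 60, 184)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

Note that a crude OR like this ignores confounding; the propensity-score approach in the study exists precisely to adjust for covariate imbalance between the PCC and no-PCC groups.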

  19. DEVELOPMENT OF REMOTE HANFORD CONNECTOR GASKET REPLACEMENT TOOLING FOR DWPF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krementz, D.; Coughlin, Jeffrey

    2009-05-05

    The Defense Waste Processing Facility (DWPF) requested the Savannah River National Laboratory (SRNL) to develop tooling and equipment to remotely replace gaskets in mechanical Hanford connectors to reduce personnel radiation exposure as compared to the current hands-on method. It is also expected that radiation levels will continually increase with future waste streams. The equipment is operated in the Remote Equipment Decontamination Cell (REDC), which is equipped with compressed air, two master-slave manipulators (MSMs), and an electro-mechanical manipulator (EMM) arm for operation of the remote tools. The REDC does not provide access to electrical power, so the equipment must be manually or pneumatically operated. The MSMs have a load limit at full extension of ten pounds, which limited the weight of the installation tool. In order to remotely replace Hanford connector gaskets, several operations must be performed remotely. These include: removal of the spent gasket and retaining ring (the retaining ring is also called a snap ring), loading the new snap ring and gasket into the installation tool, and installation of the new gasket into the Hanford connector. SRNL developed and tested tools that successfully perform all of the necessary tasks. Removal of snap rings from horizontal and vertical connectors is performed by separate air-actuated retaining ring removal tools manipulated in the cell by the MSM. In order to install a new gasket, the snap ring loader is used to load a new snap ring into a groove in the gasket installation tool. A new gasket is placed on the installation tool and retained by custom springs. An MSM lifts the installation tool and presses the mounted gasket against the connector block. Once the installation tool is in position, the gasket and snap ring are installed onto the connector by pneumatic actuation.
All of the tools are located on a custom work table with a pneumatic valve station that directs compressed air to the desired tool and vents the tools as needed. Extensive testing of tooling operation was performed in the DWPF manipulator repair shop. This testing allowed the operators to gain confidence before the equipment was exposed to radioactive contamination. The testing also led to multiple design improvements. On July 17 and 29, 2008, the Remote Gasket Replacement Tooling was successfully demonstrated in the REDC at the DWPF at the Savannah River Site.

  20. SLUDGE BATCH 7B QUALIFICATION ACTIVITIES WITH SRS TANK FARM SLUDGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pareizs, J.; Click, D.; Lambert, D.

    2011-11-16

    Waste Solidification Engineering (WSE) has requested that characterization and a radioactive demonstration of the next batch of sludge slurry - Sludge Batch 7b (SB7b) - be completed in the Shielded Cells Facility of the Savannah River National Laboratory (SRNL) via a Technical Task Request (TTR). This characterization and demonstration, or sludge batch qualification process, is required prior to transfer of the sludge from Tank 51 to the Defense Waste Processing Facility (DWPF) feed tank (Tank 40). The current WSE practice is to prepare sludge batches in Tank 51 by transferring sludge from other tanks. Discharges of nuclear materials from H Canyon are often added to Tank 51 during sludge batch preparation. The sludge is washed and transferred to Tank 40, the current DWPF feed tank. Prior to transfer of Tank 51 to Tank 40, SRNL typically simulates the Tank Farm and DWPF processes with a Tank 51 sample (referred to as the qualification sample). With the tight schedule constraints for SB7b and the potential need for caustic addition to allow for an acceptable glass processing window, the qualification for SB7b was approached differently than for past batches. For SB7b, SRNL prepared a Tank 51 and a Tank 40 sample for qualification. SRNL did not receive the qualification sample from Tank 51 nor did it simulate all of the Tank Farm washing and decanting operations. Instead, SRNL prepared a Tank 51 SB7b sample from samples of Tank 7 and Tank 51, along with a wash solution to adjust the supernatant composition to the final SB7b Tank 51 Tank Farm projections. SRNL then prepared a sample to represent SB7b in Tank 40 by combining portions of the SRNL-prepared Tank 51 SB7b sample and a Tank 40 Sludge Batch 7a (SB7a) sample. The blended sample was 71% Tank 40 (SB7a) and 29% Tank 7/Tank 51 on an insoluble solids basis. This sample is referred to as the SB7b Qualification Sample.
The blend represented the highest projected Tank 40 heel (as of May 25, 2011), and thus, the highest projected noble metals content for SB7b. Characterization was performed on the Tank 51 SB7b samples and SRNL performed DWPF simulations using the Tank 40 SB7b material. This report documents: (1) The preparation and characterization of the Tank 51 SB7b and Tank 40 SB7b samples. (2) The performance of a DWPF Chemical Process Cell (CPC) simulation using the SB7b Tank 40 sample. The simulation included a Sludge Receipt and Adjustment Tank (SRAT) cycle, where acid was added to the sludge to destroy nitrite and reduce mercury, and a Slurry Mix Evaporator (SME) cycle, where glass frit was added to the sludge in preparation for vitrification. The SME cycle also included replication of five canister decontamination additions and concentrations. Processing parameters were based on work with a nonradioactive simulant. (3) Vitrification of a portion of the SME product and characterization and durability testing (as measured by the Product Consistency Test (PCT)) of the resulting glass. (4) Rheology measurements of the SRAT receipt, SRAT product, and SME product. This program was controlled by a Task Technical and Quality Assurance Plan (TTQAP), and analyses were guided by an Analytical Study Plan. This work is Technical Baseline Research and Development (R&D) for the DWPF. It should be noted that much of the data in this document has been published in interoffice memoranda. The intent of this technical report is to bring all of the SB7b-related data together in a single permanent record and to discuss the overall aspects of SB7b processing.

  1. Phase 2 Report--Mercury Behavior In The Defense Waste Processing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C.; Fellinger, T.

    2016-07-27

    The purpose of this report is to provide a summary of the DWPF processing history in regards to mercury, document the mercury results obtained on the product and condensate samples, and provide further recommendations based on the data obtained.

  2. AVP-IC50 Pred: Multiple machine learning techniques-based prediction of peptide antiviral activity in terms of half maximal inhibitory concentration (IC50).

    PubMed

    Qureshi, Abid; Tandon, Himani; Kumar, Manoj

    2015-11-01

    Peptide-based antiviral therapeutics have gradually paved their way into mainstream drug discovery research. Experimental determination of peptides' antiviral activity, as expressed by their IC50 values, involves substantial effort. Therefore, we have developed "AVP-IC50 Pred," a regression-based algorithm to predict the antiviral activity in terms of IC50 values (μM). A total of 759 non-redundant peptides from AVPdb and HIPdb were divided into a training/test set having 683 peptides (T(683)) and a validation set with 76 independent peptides (V(76)) for evaluation. We utilized important peptide sequence features like amino-acid compositions, binary profile of N8-C8 residues, physicochemical properties and their hybrids. Four different machine learning techniques (MLTs), namely Support vector machine, Random Forest, Instance-based classifier, and K-Star, were employed. During 10-fold cross validation, we achieved maximum Pearson correlation coefficients (PCCs) of 0.66, 0.64, 0.56, 0.55, respectively, for the above MLTs using the best combination of feature sets. All the predictive models also performed well on the independent validation dataset and achieved maximum PCCs of 0.74, 0.68, 0.59, 0.57, respectively, on the best combination of feature sets. The AVP-IC50 Pred web server is anticipated to assist the researchers working on antiviral therapeutics by enabling them to computationally screen many compounds and focus experimental validation on the most promising set of peptides, thus reducing cost and time efforts. The server is available at http://crdd.osdd.net/servers/ic50avp. © 2015 Wiley Periodicals, Inc.
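The Pearson correlation coefficient (PCC) used above to score the regression models can be computed directly from predicted and observed values. A minimal pure-Python sketch, using toy numbers rather than AVPdb/HIPdb data:

```python
import math

# Pearson correlation coefficient (PCC): covariance of the two series
# divided by the product of their standard deviations. The IC50 vectors
# below are toy illustrative values, not data from AVPdb or HIPdb.

def pearson_cc(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [1.2, 3.4, 2.2, 5.0, 4.1]   # toy predicted IC50 values (uM)
observed  = [1.0, 3.0, 2.5, 4.8, 4.5]   # toy observed IC50 values (uM)
print(f"PCC = {pearson_cc(predicted, observed):.3f}")
```

A PCC of 1.0 indicates perfect linear agreement; the cross-validation PCCs of 0.55-0.66 reported above therefore reflect moderate predictive correlation.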

  3. Psychometric Evaluation of the Knowledge, Skills, and Attitudes-Part I: Patient-Centered Care Scale (KSAI-PCCS): A Pilot Study

    ERIC Educational Resources Information Center

    Esslin, Patricia E.

    2016-01-01

    Recognition that adverse events are a significant cause for morbidity and mortality has led to a rise in global efforts to improve patient safety. Adaptations are needed in healthcare institutions and at the educational preparatory level for all healthcare providers. One change surrounds the significance of patient-centered care, an important…

  4. [Patterns of physical activity of people with chronic mental and behavioral disorders].

    PubMed

    Adamoli, Angélica Nickel; Azevedo, Mario Renato

    2009-01-01

    Since physical activity (PA) is capable of improving both the quality of life and the prognosis for individuals with mental and behavioral disorders (MBD), the main purpose of this study was to analyze the PA patterns in individuals with MBD frequenting a Psychosocial Care Center (PCC) in the city of Pelotas. The target population of this descriptive study consisted of individuals attended in any of the PCCs of Pelotas. The sample was selected from six PCCs and comprised 85 patients and their relatives. The mean age of the sample was 40.9 years (standard deviation 13.8). It was found that, in comparison with the general population, these individuals had a lower socioeconomic level and less schooling. The prevalence of leisure-time physical activity was low. In addition, women tended to dedicate the greater part of their time to household activities. Men participated more in the PA offered by the PCC than women. Therefore, incorporation of PA in PCC seems to be a feasible initiative for supporting the treatment of these patients and would offer a unique opportunity for the patients to engage in supervised and structured PA programs.

  5. Characterization of the SRNL-Washed tank 51 sludge batch 9 qualification sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pareizs, J. M.

    2016-01-01

    Savannah River National Laboratory (SRNL) personnel have been requested to qualify the next sludge batch (Sludge Batch 9 – SB9) for processing at the Defense Waste Processing Facility (DWPF). To accomplish this task, Savannah River Remediation (SRR) sent SRNL a 3-L sample of Tank 51H slurry to be characterized, washed, and then used in a lab-scale demonstration of the DWPF flowsheet (after combining with Tank 40H sludge). SRNL has washed the Tank 51H sample per the Tank Farm washing strategy as of October 20, 2015. A part of the qualification process is extensive radionuclide and chemical characterization of the SRNL-washed Tank 51H slurry. This report documents the chemical characterization of the washed slurry; radiological characterization is in progress and will be documented in a separate report. The analytical results of this characterization are comparable to the Tank Farm projections. Therefore, it is recommended that SRNL use this washed slurry for the ongoing SB9 qualification activities.

  6. Results of an inter-laboratory study of glass formulation for the immobilization of excess plutonium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D.K.

    1999-12-08

    The primary focus of the current study is to determine allowable loadings of feed streams containing different ratios of plutonium, uranium, and minor components into the LaBS glass and to evaluate thermal stability with respect to the DWPF pour.

  7. The Impact Of The MCU Life Extension Solvent On Sludge Batch 8 Projected Operating Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D. K.; Edwards, T. B.

    2013-06-26

    As a part of the Actinide Removal Process (ARP)/Modular Caustic Side Solvent Extraction Unit (MCU) Life Extension Project, a next generation solvent (NGS) and a new strip acid will be deployed. The strip acid will be changed from dilute nitric acid to dilute boric acid (0.01 M). Because of these changes, experimental testing or evaluations with the next generation solvent are required to determine the impact of these changes (if any) on Chemical Process Cell (CPC) activities, glass formulation strategies, and melter operations at the Defense Waste Processing Facility (DWPF). The introduction of the dilute (0.01 M) boric acid stream into the DWPF flowsheet has a potential impact on glass formulation and frit development efforts since B2O3 is a major oxide in frits developed for DWPF. Prior knowledge of this stream can be accounted for during frit development efforts, but that was not the case for Sludge Batch 8 (SB8). Frit 803 has already been recommended and procured for SB8 processing; altering the frit to account for the incoming boron from the strip effluent (SE) is not an option for SB8. Therefore, the operational robustness of Frit 803 to the introduction of SE, including its compositional tolerances (i.e., up to 0.0125M boric acid), is of interest and was the focus of this study. The primary question to be addressed in the current study was: What is the impact (if any) on the projected operating windows for the Frit 803 - SB8 flowsheet of additions of B2O3 from the SE in the Sludge Receipt and Adjustment Tank (SRAT)? More specifically, will Frit 803 be robust to the potential compositional changes occurring in the SRAT due to sludge variation, varying additions of ARP, and/or the introduction of SE by providing access to waste loadings (WLs) of interest to DWPF?
The Measurement Acceptability Region (MAR) results indicate there is very little, if any, impact on the projected operating windows for the Frit 803 - SB8 system regardless of the presence or absence of ARP and SE (up to 2 wt% B203 contained in the SRAT and up to 2000 gallons of ARP). It should be noted that 0.95 wt% B203 is the nominal projected concentration in the SRAT based on a 0.0125M boric acid flowsheet with 70,000 liters of SE being added to the SRAT.« less
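The nominal 0.95 wt% B2O3 figure can be sanity-checked with a simple mass balance. The SE volume (70,000 L) and boric acid molarity (0.0125 M) are from the abstract; the SRAT total-solids mass below is a hypothetical value chosen only to illustrate the arithmetic.

```python
# Rough check of the boron contribution from strip effluent (SE) to the SRAT.
# The SE volume and boric acid molarity come from the report; the assumed
# SRAT total-solids mass is HYPOTHETICAL, used only to show how a nominal
# ~0.95 wt% B2O3 figure can arise.

M_B2O3 = 69.62         # g/mol, molar mass of B2O3
se_volume_L = 70_000   # strip effluent added to the SRAT
boric_acid_M = 0.0125  # mol/L H3BO3 in the SE

mol_B = boric_acid_M * se_volume_L        # moles of boron added
mass_B2O3_kg = mol_B / 2 * M_B2O3 / 1000  # 2 mol B per mol B2O3

assumed_srat_solids_kg = 3200             # hypothetical total-solids basis
wt_pct_B2O3 = 100 * mass_B2O3_kg / assumed_srat_solids_kg

print(f"B2O3 added: {mass_B2O3_kg:.1f} kg -> {wt_pct_B2O3:.2f} wt% on assumed basis")
```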

  8. Rheological Characterization of Unusual DWPF Slurry Samples (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koopman, D. C.

    2005-09-01

    A study was undertaken to identify and clarify examples of unusual rheological behavior in Defense Waste Processing Facility (DWPF) simulant slurry samples. Identification was accomplished by reviewing sludge, Sludge Receipt and Adjustment Tank (SRAT) product, and Slurry Mix Evaporator (SME) product simulant rheological results from the prior year. Clarification of unusual rheological behavior was achieved by developing and implementing new measurement techniques. Development of these new methods is covered in a separate report, WSRC-TR-2004-00334. This report includes a review of recent literature on unusual rheological behavior, followed by a summary of the rheological measurement results obtained on a set of unusual simulant samples. Shifts in rheological behavior of slurries as the wt. % total solids changed have been observed in numerous systems. The main finding of the experimental work was that the various unusual DWPF simulant slurry samples exhibit some degree of time dependent behavior. When a given shear rate is applied to a sample, the apparent viscosity of the slurry changes with time rather than remaining constant. These unusual simulant samples are more rheologically complex than Newtonian liquids or simpler slurries, neither of which shows significant time dependence. The study concludes that the observed unusual rheological behavior is caused by time dependent rheological properties in the slurries being measured. Most of the changes are due to the effect of time under shear, but SB3 SME products were also changing properties while stored in sample bottles. The most likely source of this shear-related time dependence for sludge is in the simulant preparation. More than a single source of time dependence was inferred for the simulant SME product slurries based on the range of phenomena observed.
Rheological property changes were observed on the time scale of a single measurement (minutes) as well as on a time scale of hours to weeks. The unusual shape of the slurry flow curves was not an artifact of the rheometric measurement. Adjusting the user-specified parameters in the rheometer measurement jobs can alter the shape of the flow curve of these time dependent samples, but this was not the cause of the unusual behavior. Variations in the measurement parameters caused the time dependence of a given slurry to manifest at different rates. The premise of the controlled shear rate flow curve measurement is that the dynamic response of the sample to a change in shear rate is nearly instantaneous. When this is the case, the data can be fitted to a time independent rheological equation, such as the Bingham plastic model. In those cases where this does not happen, interpretation of the data is difficult, and fitting time dependent data to time independent rheological equations is not appropriate.
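For time-independent slurries, the Bingham plastic fit mentioned above is a straight-line regression of shear stress against shear rate, tau = tau0 + mu_p * gamma_dot. A minimal sketch with synthetic flow-curve data (not measurements from the report):

```python
# Fitting a time-independent Bingham plastic model, tau = tau0 + mu_p * gamma_dot,
# to flow-curve data by linear least squares. The data below are SYNTHETIC,
# generated from known parameters purely for illustration.
import numpy as np

gamma_dot = np.array([10., 50., 100., 200., 300.])  # shear rate, 1/s
tau = 5.0 + 0.02 * gamma_dot                        # synthetic shear stress, Pa

# slope = plastic viscosity mu_p, intercept = yield stress tau0
mu_p, tau0 = np.polyfit(gamma_dot, tau, 1)
print(f"yield stress = {tau0:.2f} Pa, plastic viscosity = {mu_p * 1000:.1f} mPa*s")
```

For the time-dependent samples described above, the apparent viscosity drifts during the measurement, so a single fit like this is not meaningful; that is the report's point about inappropriate fitting.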

  9. LITERATURE REVIEW ON IMPACT OF GLYCOLATE ON THE 2H EVAPORATOR AND THE EFFLUENT TREATMENT FACILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adu-Wusu, K.

    2012-05-10

    Glycolic acid (GA) is being studied as an alternate reductant in the Defense Waste Processing Facility (DWPF) feed preparation process. It will either be a total or partial replacement for the formic acid that is currently used. A literature review has been conducted on the impact of glycolate on two post-DWPF downstream systems - the 2H Evaporator system and the Effluent Treatment Facility (ETF). The DWPF recycle stream serves as a portion of the feed to the 2H Evaporator, so glycolate enters the evaporator system via the recycle stream. The overhead (i.e., condensed phase) from the 2H Evaporator serves as a portion of the feed to the ETF. The literature search revealed that virtually no impact is anticipated for the 2H Evaporator. Glycolate may help reduce scale formation in the evaporator due to its high complexing ability. The drawback of this solubilizing ability is the potential impact on the criticality analysis of the 2H Evaporator system. It is recommended that, at a minimum, a theoretical evaluation be performed to confirm the finding that no self-propagating violent reactions with nitrate/nitrite will occur. Similarly, identification of sources of ignition relevant to glycolate and/or an update of the composite flammability analysis to reflect the effects of the glycolate additions on the 2H Evaporator system are in order. An evaluation of the 2H Evaporator criticality analysis is also needed. A determination of the amount or fraction of the glycolate in the evaporator overhead is critical to more accurately assessing its impact on the ETF. Hence, use of predictive models such as the OLI Environmental Simulation Package software (OLI/ESP) and/or testing is recommended for determining the glycolate concentration in the overhead. The impact on the ETF depends on the concentration of glycolate in the ETF feed. The impact is classified as minor for feed glycolate concentrations ≤ 33 mg/L (0.44 mM).
The ETF unit operations that will have minor/major impacts are chlorination, pH adjustment, 1st mercury removal, organics removal, 2nd mercury removal, and ion exchange. For minor impacts, the general approach is to use historical process operations data, modeling software such as OLI/ESP, and/or monitoring and compiled process operations data to resolve any uncertainties, with testing as a last resort. For major impacts (i.e., glycolate concentrations > 33 mg/L or 0.44 mM), testing is recommended. No impact is envisaged for the following ETF unit operations regardless of the glycolate concentration: filtration, reverse osmosis, ion exchange resin regeneration, and evaporation.
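The minor/major threshold quoted in two equivalent units (33 mg/L and 0.44 mM) can be cross-checked from the molar mass of the glycolate anion (about 75.04 g/mol):

```python
# Unit check on the minor/major impact threshold: 33 mg/L glycolate vs 0.44 mM.
MW_glycolate = 75.04  # g/mol, molar mass of the glycolate anion (C2H3O3-)

threshold_mg_per_L = 33.0
threshold_mM = threshold_mg_per_L / MW_glycolate  # (mg/L) / (g/mol) = mmol/L

print(f"{threshold_mg_per_L} mg/L = {threshold_mM:.2f} mM")
```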

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamecnik, J. R.; Edwards, T. B.

    The conversions of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the Nitric-Glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by SRNL from 2011 to 2015. The goal of this work was to develop empirical correlations for these variables versus measurable variables from the chemical process so that these quantities could be predicted a priori from the sludge composition and measurable processing variables. The need for these predictions arises from the need to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the initial work on these correlations based on the aforementioned data. Further refinement of the models as additional data are collected is recommended.

  11. CPTAC Announces New PTRCs, PCCs, and PGDACs | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    This week, the Office of Cancer Clinical Proteomics Research (OCCPR) at the National Cancer Institute (NCI), part of the National Institutes of Health, announced its aim to further the convergence of proteomics with genomics (“proteogenomics”) to better understand the molecular basis of cancer, and to accelerate research in these areas by disseminating research resources to the scientific community.

  12. IMPACTS OF ANTIFOAM ADDITIONS AND ARGON BUBBLING ON DEFENSE WASTE PROCESSING FACILITY REDUCTION/OXIDATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C.; Johnson, F.

    2012-06-05

    During melting of HLW glass, the REDOX of the melt pool cannot be measured. Therefore, the Fe²⁺/ΣFe ratio in the glass poured from the melter must be related to melter feed organic and oxidant concentrations to ensure production of a high-quality glass without impacting production rate (e.g., foaming) or melter life (e.g., metal formation and accumulation). A production facility such as the Defense Waste Processing Facility (DWPF) cannot wait until the melt or waste glass has been made to assess its acceptability, since by then no further changes to the glass composition and acceptability are possible. Therefore, the acceptability decision is made on the upstream process, rather than on the downstream melt or glass product. That is, it is based on 'feed forward' statistical process control (SPC) rather than statistical quality control (SQC). In SPC, the feed composition to the melter is controlled prior to vitrification. Use of the DWPF REDOX model has controlled the balance of feed reductants and oxidants in the Sludge Receipt and Adjustment Tank (SRAT). Once the alkali/alkaline earth salts (both reduced and oxidized) are formed during reflux in the SRAT, the REDOX can only change if (1) additional reductants or oxidants are added to the SRAT, the Slurry Mix Evaporator (SME), or the Melter Feed Tank (MFT) or (2) the melt pool is bubbled with an oxidizing gas or sparging gas that imposes a different REDOX target than the chemical balance set during reflux in the SRAT.
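The feed-forward balance amounts to totaling electron equivalents contributed by reductants and consumed by oxidants in the melter feed. The sketch below uses generic half-reaction electron counts and invented concentrations; it is not the DWPF REDOX model, only an illustration of the form of such a balance.

```python
# Illustrative feed-forward reductant/oxidant bookkeeping. The species and
# electron-equivalent coefficients below are generic half-reaction chemistry,
# NOT the coefficients of the DWPF REDOX model; concentrations are invented.
feed_mol_per_L = {"formate": 0.5, "nitrate": 0.3, "nitrite": 0.1}
electron_equivalents = {
    "formate": +2,   # HCOO- oxidized to CO2 donates 2 e-
    "nitrate": -5,   # NO3- reduced to N2 accepts 5 e- per N
    "nitrite": -3,   # NO2- reduced to N2 accepts 3 e- per N
}

net = sum(feed_mol_per_L[s] * electron_equivalents[s] for s in feed_mol_per_L)
print("net electron equivalents per litre of feed:", net)
# net > 0 -> reducing feed (risk of metal formation);
# net < 0 -> oxidizing feed (risk of foaming)
```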

  13. TANK 40 FINAL SB7B CHEMICAL CHARACTERIZATION RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C.

    2012-03-15

    A sample of Sludge Batch 7b (SB7b) was taken from Tank 40 in order to obtain radionuclide inventory analyses necessary for compliance with the Waste Acceptance Product Specifications (WAPS). The SB7b WAPS sample was also analyzed for chemical composition including noble metals and fissile constituents, and these results are reported here. These analyses, along with the WAPS radionuclide analyses, will help define the composition of the sludge in Tank 40 that is currently being fed to the Defense Waste Processing Facility (DWPF) as SB7b. At the Savannah River National Laboratory (SRNL), the 3-L Tank 40 SB7b sample was transferred from the shipping container into a 4-L high density polyethylene bottle and the solids were allowed to settle over the weekend. Supernate was then siphoned off and circulated through the shipping container to complete the transfer of the sample. Following thorough mixing of the 3-L sample, a 558 g sub-sample was removed. This sub-sample was then utilized for all subsequent analytical samples. Eight separate aliquots of the slurry were digested, four with HNO₃/HCl (aqua regia) in sealed Teflon® vessels and four with NaOH/Na₂O₂ (alkali or peroxide fusion) using Zr crucibles. Two Analytical Reference Glass - 1 (ARG-1) standards were digested along with a blank for each preparation. Each aqua regia digestion and blank was diluted 1:100 with deionized water and submitted to Analytical Development (AD) for inductively coupled plasma - atomic emission spectroscopy (ICP-AES) analysis, inductively coupled plasma - mass spectrometry (ICP-MS) analysis, atomic absorption spectroscopy (AA) for As and Se, and cold vapor atomic absorption spectroscopy (CV-AA) for Hg. Equivalent dilutions of the alkali fusion digestions and blank were submitted to AD for ICP-AES analysis.
Tank 40 SB7b supernate was collected from a mixed slurry sample in the SRNL Shielded Cells and submitted to AD for ICP-AES, ion chromatography (IC), total base/free OH⁻/other base, total inorganic carbon/total organic carbon (TIC/TOC) analyses, and Cs-137 gamma scan. Weighted dilutions of slurry were submitted for IC, TIC/TOC, and total base/free OH⁻/other base analyses. Activities for U-233, U-235, and Pu-239 were determined from the ICP-MS data for the aqua regia digestions of the Tank 40 WAPS slurry using the specific activity of each isotope. The Pu-241 value was determined from a Pu-238/-241 method developed by SRNL AD and previously described. The following conclusions were drawn from the analytical results reported here: (1) The ratios of the major elements for the SB7b WAPS sample are different from those measured for the SB7a WAPS sample; there is less Al and Mn relative to Fe than in the previous sludge batch. (2) The elemental composition of this sample and the analyses conducted here are reasonable and consistent with DWPF batch data measurements in light of DWPF pre-sample concentration and SRAT product heel contributions to the DWPF SRAT receipt analyses. The element ratios for Al/Fe, Ca/Fe, Mn/Fe, and U/Fe agree within 10% between this work and the DWPF Sludge Receipt and Adjustment Tank (SRAT) receipt analyses. (3) Sulfur in the SB7b WAPS sample is 82% soluble, slightly less than results reported for the SB3, SB4, and SB6 samples but unlike the 50% insoluble sulfur observed in the SB5 WAPS sample. In addition, 23% of the soluble sulfur is not present as sulfate in SB7b. (4) The average activities of the fissile isotopes of interest in the SB7b WAPS sample are (in µCi/g of total dried solids): 4.22E-02 U-233, 6.12E-04 U-235, 1.08E+01 Pu-239, and 5.09E+01 Pu-241. The full radionuclide composition will be reported in a future document.
(5) The fission product noble metal and Ag concentrations appear to have largely peaked in previous DWPF sludge batches, with the exception of Ru, which still shows a slight increase in SB7b.
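The conversion from ICP-MS mass data to activity used for the fissile isotopes is multiplication by each isotope's specific activity. A sketch, with standard specific-activity values and a hypothetical mass fraction (not a measurement from this report):

```python
# Converting an ICP-MS mass measurement to activity, as done for the fissile
# isotopes: activity (uCi/g solids) = mass fraction (g isotope / g solids)
# x specific activity (uCi/g isotope). The specific activities are standard
# values; the ug/g mass fraction below is HYPOTHETICAL.
specific_activity_uCi_per_g = {
    "U-233": 9.68e3,   # ~9.68e-3 Ci/g
    "U-235": 2.16,     # ~2.16e-6 Ci/g
    "Pu-239": 6.22e4,  # ~6.22e-2 Ci/g
}

mass_fraction_ug_per_g = {"U-235": 100.0}  # hypothetical ICP-MS result

for iso, ug_per_g in mass_fraction_ug_per_g.items():
    act = ug_per_g * 1e-6 * specific_activity_uCi_per_g[iso]
    print(f"{iso}: {act:.2e} uCi per g total dried solids")
```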

  14. CHARACTERIZATION OF A PRECIPITATE REACTOR FEED TANK (PRFT) SAMPLE FROM THE DEFENSE WASTE PROCESSING FACILITY (DWPF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.; Bannochie, C.

    2014-05-12

    A sample from the Defense Waste Processing Facility (DWPF) Precipitate Reactor Feed Tank (PRFT) was pulled and sent to the Savannah River National Laboratory (SRNL) in June of 2013. The PRFT in DWPF receives Actinide Removal Process (ARP)/Monosodium Titanate (MST) material from the 512-S Facility via the 511-S Facility. This 2.2 L sample was to be used in small-scale DWPF chemical process cell testing in the Shielded Cells Facility of SRNL. A 1 L sub-sample portion was characterized to determine physical properties such as weight percent solids, density, particle size distribution, and crystalline phase identification. Further chemical analysis of the PRFT filtrate and dissolved slurry included metals and anions as well as carbon and base analysis. This technical report describes the characterization and analysis of the PRFT sample from DWPF. At SRNL, the 2.2 L PRFT sample was composited from eleven separate samples received from DWPF. The visible solids were observed to be relatively quick settling, which allowed for the rinsing of the original shipping vials with PRFT supernate on the same day as compositing. Most analyses were performed in triplicate except for particle size distribution (PSD), X-ray diffraction (XRD), Scanning Electron Microscopy (SEM), and thermogravimetric analysis (TGA). PRFT slurry samples were dissolved using a mixed HNO3/HF acid for subsequent Inductively Coupled Plasma Atomic Emission Spectroscopy (ICP-AES) and Inductively Coupled Plasma Mass Spectroscopy (ICP-MS) analyses performed by SRNL Analytical Development (AD). Per the task request for this work, analysis of the PRFT slurry and filtrate for metals, anions, carbon, and base was primarily performed to support the planned chemical process cell testing and to provide additional component concentrations beyond the limited data available from DWPF.
Analysis of the insoluble solids portion of the PRFT slurry was aimed at detailed characterization of these solids (TGA, PSD, XRD and SEM) in support of the Salt IPT chemistry team. The overall conclusions from the analyses performed in this study are that the PRFT slurry consists of 0.61 wt% insoluble MST solids suspended in a 0.77 M [Na+] caustic solution containing various anions such as nitrate, nitrite, sulfate, carbonate, and oxalate. The corresponding measured sulfur level in the PRFT slurry, a critical element for determining how much of the PRFT slurry gets blended into the SRAT, is 0.437 wt% TS. The PRFT slurry does not contain insoluble oxalates or significant quantities of high activity sludge solids. The lack of sludge solids has been alluded to by the Salt IPT chemistry team in citing that the mixing pump has been removed from Tank 49H, the feed tank to ARP-MCU, thus allowing the sludge solids to settle out. The PRFT aqueous slurry from DWPF was found to contain 5.96 wt% total dried solids. Of these total dried solids, relatively low levels of insoluble solids (0.61 wt%) were measured. The densities of both the filtrate and slurry were 1.05 g/mL. Particle size distribution of the PRFT solids in filtered caustic simulant and XRD analysis of washed/dried PRFT solids indicate that the PRFT slurry contains a bimodal distribution of particles between 1 and 6 μm and that the particles contain sodium titanium oxide hydroxide, Na2Ti2O4(OH)2, crystalline material as determined by XRD. These data are in excellent agreement with similar data obtained from laboratory sampling of vendor-supplied MST. SEM combined with Energy Dispersive X-ray Spectroscopy (EDS) analysis of washed/dried PRFT solids shows the particles to be like those in previous MST analyses, consisting of irregularly shaped micron-sized solids composed primarily of Na and Ti.
Thermogravimetric analysis of the washed and unwashed PRFT solids shows that the washed solids are very similar to MST solids. The TGA mass loss signal for the unwashed solids shows features similar to TGA performed on cellulose nitrate filter paper, indicating a significant presence of deteriorated filter material in this unwashed sample. Neither the washed nor unwashed PRFT solids TGA traces showed any features that would indicate the presence of sodium oxalate solids. The PRFT filtrate elemental analysis shows that Na, S, and Al are major soluble species, with trace levels of B, Cr, Cu, K, Li, Si, Tc, Th, and U present. Nitrate, nitrite, sulfate, oxalate, carbonate, and hydroxide are major soluble anion species. There is good agreement between the analyzed TOC and the total carbon calculated from the sum of oxalate and the minor species formate. Comparison of the amount and speciation of the carbon species between filtrate and slurry indicates no significant carbon-containing species, e.g., sodium oxalate, are present in the slurry solids. Dissolution of the PRFT slurry and subsequent analysis shows that Na, Ti, Si, and U are the major elements present on a wt% total dried solids basis, at 30, 5.8, 0.47, and 0.11 wt% total dried solids, respectively. The amount of Al in the dissolved PRFT slurry is less than that calculated from the PRFT filtrate alone, which suggests that the mixed acid digestion used in this work is not optimized for Al recovery. The concentrations of Ca, Fe, Hg, and U are all low (at or below 0.11 wt%), and there is no detectable Mn or Ni present, which indicates no significant HLW sludge solids are present in the PRFT slurry sample.
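The solids figures reported above can be combined in a simple bookkeeping step to estimate the dissolved-solids content of the supernate. The difference-based calculation below is a common simplification, not a calculation taken from the report:

```python
# Simple solids bookkeeping for the PRFT slurry using the reported values:
# 5.96 wt% total dried solids and 0.61 wt% insoluble solids. The soluble
# (dissolved) solids follow by difference; expressing them on a supernate
# basis divides by the liquid fraction of the slurry. This is a simplified
# illustration, not a calculation from the report.
total_wt = 5.96   # wt% total dried solids in slurry
insol_wt = 0.61   # wt% insoluble solids in slurry

soluble_in_slurry = total_wt - insol_wt                   # wt% of slurry mass
soluble_in_supernate = 100 * soluble_in_slurry / (100 - insol_wt)

print(f"soluble solids: {soluble_in_slurry:.2f} wt% of slurry, "
      f"{soluble_in_supernate:.2f} wt% of supernate")
```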

  15. ROAD MAP FOR DEVELOPMENT OF CRYSTAL-TOLERANT HIGH LEVEL WASTE GLASSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K.; Peeler, D.; Herman, C.

    The U.S. Department of Energy (DOE) is building a Tank Waste Treatment and Immobilization Plant (WTP) at the Hanford Site in Washington to remediate 55 million gallons of radioactive waste that is being temporarily stored in 177 underground tanks. Efforts are being made to increase the loading of Hanford tank wastes in glass while meeting melter lifetime expectancies and process, regulatory, and product quality requirements. This road map guides the research and development for formulation and processing of crystal-tolerant glasses, identifying near- and long-term activities that need to be completed over the period from 2014 to 2019. The primary objective is to maximize waste loading for Hanford waste glasses without jeopardizing melter operation by crystal accumulation in the melter or melter discharge riser. The potential applicability to the Savannah River Site (SRS) Defense Waste Processing Facility (DWPF) is also addressed in this road map. The planned research described in this road map is motivated by the potential for substantial economic benefits (significant reductions in glass volumes) that will be realized if the current constraints (T1% for WTP and TL for DWPF) are approached in an appropriate and technically defensible manner for defense waste and current melter designs. The basis of this alternative approach is an empirical model predicting the crystal accumulation in the WTP glass discharge riser and melter bottom as a function of glass composition, time, and temperature. When coupled with an associated operating limit (e.g., the maximum tolerable thickness of an accumulated layer of crystals), this model could then be integrated into the process control algorithms to formulate crystal-tolerant high-level waste (HLW) glasses targeting high waste loadings while still meeting process-related limits and melter lifetime expectancies.
The modeling effort will be an iterative process, in which the model form and a broader range of conditions, e.g., glass composition and temperature, will evolve as additional data on crystal accumulation are gathered. Model validation steps will be included to guide the development process and ensure the value of the effort (i.e., increased waste loading and waste throughput). A summary of the stages of the road map for developing the crystal-tolerant glass approach, their estimated durations, and deliverables is provided.

  16. VizieR Online Data Catalog: Planck Catalog of Compact Sources Release 1 (Planck, 2013)

    NASA Astrophysics Data System (ADS)

    Planck Collaboration

    2013-03-01

    Planck is a European Space Agency (ESA) mission, with significant contributions from the U.S. National Aeronautics and Space Administration (NASA). It is the third generation of space-based cosmic microwave background experiments, after the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP). Planck was launched on 14 May 2009 on an Ariane 5 rocket from Kourou, French Guiana. Following a cruise to the Earth-Sun L2 Lagrange point, cooling, and in-orbit checkout, Planck initiated the First Light Survey on 13 August 2009. Since then, Planck has been continuously measuring the intensity of the sky over a range of frequencies from 30 to 857 GHz (wavelengths of 1 cm to 350 μm) with spatial resolutions ranging from about 33' to 5', respectively. The Low Frequency Instrument (LFI) on Planck provides temperature and polarization information using radiometers which operate between 30 and 70 GHz. The High Frequency Instrument (HFI) uses pairs of polarization-sensitive bolometers at each of four frequencies between 100 and 353 GHz but does not measure polarization in the two upper HFI bands at 545 and 857 GHz. The lowest frequencies overlap with WMAP, and the highest frequencies extend far into the submillimeter in order to improve separation between Galactic foregrounds and the cosmic microwave background (CMB). By extending to wavelengths longer than those at which the Infrared Astronomical Satellite (IRAS) operated, Planck is providing an unprecedented window into dust emission at far-infrared and submillimeter wavelengths. The PCCS (Planck Catalog of Compact Sources) is the list of sources detected in the first 15 months of the Planck "nominal" mission. It consists of nine single-frequency catalogues of compact sources, both Galactic and extragalactic, detected over the entire sky.
The PCCS covers the frequency range 30-857 GHz with higher sensitivity (it is 90% complete at 180 mJy in the best channel) and better angular resolution than previous all-sky surveys in the microwave band. By construction its reliability is >80%, and more than 65% of the sources have been detected in at least two contiguous Planck channels. Many of the Planck PCCS sources can be associated with stars with dust shells, stellar cores, radio galaxies, blazars, infrared-luminous galaxies, and Galactic interstellar medium features. (12 data files).

  17. Poison control center experience with tianeptine: an unregulated pharmaceutical product with potential for abuse.

    PubMed

    Marraffa, Jeanna M; Stork, Christine M; Hoffman, Robert S; Su, Mark K

    2018-05-25

    Interest in tianeptine as a potential drug of abuse is increasing in the United States. We performed a retrospective study of calls to the New York State Poison Control Centers (PCCs) designed to characterize one state's experience with tianeptine. Data were gathered from existing records in the poison center data collection system, Toxicall®, entered between 1 January 2000 and 1 April 2017. Information regarding patient demographics, reported dose and formulation of tianeptine, reported coingestants, a brief narrative description of the case, disposition, and case outcome was collected. There were nine reported cases of tianeptine exposure. Seven patients were male, with a mean age of 27. Three reported therapeutic use of tianeptine and five reported intentional abuse. One case was an unintentional pediatric exposure. Doses were reported in three cases: 12.5 mg in a pediatric unintentional exposure, and 5 and 10 g daily in the two reports of intentional abuse. Of note, five patients complained of symptoms consistent with opioid withdrawal. In one of two cases in which naloxone was administered, an improvement in mental status and respiratory drive was noted. Outcomes reported in Toxicall® were minor in two cases, moderate in five cases, major in one case, and not reported in one case. These cases, reported to the New York State PCCs, should alert readers to the potential for tianeptine abuse, dependence, and withdrawal.

  18. Bubblers Speed Nuclear Waste Processing at SRS

    ScienceCinema

    None

    2018-05-23

    At the Department of Energy's Savannah River Site, American Recovery and Reinvestment Act funding has supported installation of bubbler technology and related enhancements in the Defense Waste Processing Facility (DWPF). The improvements will accelerate the processing of radioactive waste into a safe, stable form for storage and permit expedited closure of underground waste tanks holding 37 million gallons of liquid nuclear waste.

  19. Analysis Of Condensate Samples In Support Of The Antifoam Degradation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hay, M.; Martino, C.

    2016-01-12

    The degradation of Antifoam 747 to form flammable decomposition products has resulted in the declaration of a Potential Inadequacy in the Safety Analysis (PISA) for the Defense Waste Processing Facility (DWPF). Savannah River National Laboratory (SRNL) testing with simulants showed that hexamethyldisiloxane (HMDSO), trimethylsilanol (TMS), and propanal are formed in the offgas from the decomposition of the antifoam. A total of ten DWPF condensate samples from Batches 735 and 736 were analyzed by SRNL for the three degradation products and additional analytes. All of the samples were analyzed to determine the concentrations of HMDSO, TMS, and propanal. The organic analysis found concentrations of propanal and HMDSO near or below the detection limits. The TMS concentrations ranged from below detection to 11 mg/L. The samples from Batch 736 were also analyzed for formate and oxalate anions, total organic carbon, and aluminum, iron, manganese, and silicon. Most of the samples contained low levels of formate and therefore low levels of organic carbon; these two values for each sample show reasonable agreement in most cases. Low levels of all the metals (Al, Fe, Mn, and Si) were present in most of the samples.

  20. ANALYSIS OF CONDENSATE SAMPLES IN SUPPORT OF THE ANTIFOAM DEGRADATION STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hay, M.; Martino, C.

    2016-02-29

    The degradation of Antifoam 747 to form flammable decomposition products has resulted in the declaration of a Potential Inadequacy in the Safety Analysis (PISA) for the Defense Waste Processing Facility (DWPF). Savannah River National Laboratory (SRNL) testing with simulants showed that hexamethyldisiloxane (HMDSO), trimethylsilanol (TMS), and propanal are formed in the offgas from the decomposition of the antifoam. A total of ten DWPF condensate samples from Batches 735 and 736 were analyzed by SRNL for the three degradation products and additional analytes. All of the samples were analyzed to determine the concentrations of HMDSO, TMS, and propanal. The organic analysis found concentrations of propanal and HMDSO near or below the detection limits. The TMS concentrations ranged from below detection to 11 mg/L. The samples from Batch 736 were also analyzed for formate and oxalate anions, total organic carbon, and aluminum, iron, manganese, and silicon. Most of the samples contained low levels of formate and therefore low levels of organic carbon; these two values for each sample show reasonable agreement in most cases. Low levels of all the metals (Al, Fe, Mn, and Si) were present in most of the samples.

  1. Defense Waste Processing Facility Nitric- Glycolic Flowsheet Chemical Process Cell Chemistry: Part 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamecnik, J.; Edwards, T.

    The conversions of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the Nitric-Glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by Savannah River National Laboratory (SRNL) from 2011 to 2016. The goal of this work was to develop empirical correlation models to predict these values from measurable variables from the chemical process so that these quantities could be predicted a priori from the sludge or simulant composition and measurable processing variables. The need for these predictions arises from the need to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the work on these correlations based on the aforementioned data. Previous work on these correlations was documented in a technical report covering data from 2011-2015; the current report supersedes that report. Further refinement of the models as additional data are collected is recommended.
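An empirical correlation of the kind described is typically an ordinary least-squares fit of a conversion fraction against measurable process variables. The sketch below uses invented predictors and synthetic data; it is not the SRNL model, only the general fitting pattern:

```python
# Sketch of fitting an empirical correlation: predicting a synthetic CPC
# conversion fraction from measurable process variables by ordinary least
# squares. Predictor names and data are INVENTED for illustration; this is
# not the SRNL correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 20
# hypothetical predictors: an intercept column, an acid stoichiometry factor,
# and an initial nitrite concentration
X = np.column_stack([
    np.ones(n),
    rng.uniform(0.9, 1.3, n),   # acid stoichiometry factor (invented)
    rng.uniform(0.1, 0.6, n),   # initial nitrite, mol/L (invented)
])
true_beta = np.array([0.05, 0.20, 0.30])
y = X @ true_beta + rng.normal(0, 0.005, n)  # synthetic responses with noise

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```

New runs would be folded in by refitting on the enlarged data set, which is the refinement step the report recommends.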

  2. Therapeutic Correction of Thrombin Generation in Dilution-Induced Coagulopathy: Computational Analysis Based on a Data Set of Healthy Subjects

    DTIC Science & Technology

    2012-01-01

    Factor VIIa tended to primarily impact clotting time, thrombin peak time, and maximum slope of the thrombin curve, whereas in the case of PCC-FVII ... constituents of existing PCCs are the four coagulation factors (F) II (prothrombin), FVII, FIX, and FX. Notably, FVII inhibits thrombin generation by ... proposed PCC composition (coagulation factors [F] II, IX, and X and the anticoagulant antithrombin), designated PCC-AT, was compared with that of

  3. Duty Module Validation for Accomplishing Training Feedback. Volume 1. System Design for Training Feedback

    DTIC Science & Technology

    1977-11-14

    job element for developing PCCs for the measurement of an officer’s performance capability. As a result of the work already done on Duty Modules, new ... exercise is ended. Recent studies to evaluate the effect on unit mission performance as a result of assigning varying numbers of women soldiers to ... Passes driver’s test and demonstrates ability to perform effectively in all crew positions of track vehicles in his TOE. Radios, Telephones

  4. In vitro dissolution of strontium titanate to estimate clearance rates in human lungs

    NASA Astrophysics Data System (ADS)

    Anderson, Jeri Lynn

    At the In-Tank Precipitation (ITP) facility of the Savannah River Site, strontium and other radionuclides are removed from high-level radioactive waste and sent to the Defense Waste Processing Facility (DWPF). Strontium removal is accomplished by ion exchange using a monosodium titanate slurry, which creates a form of strontium titanate with unknown solubility characteristics. In the case of accidental inhalation of a compound containing radioactive strontium, the ICRP, in Publication 66, recommends using default values for rates of absorption into body fluids at the lungs in the absence of reliable human or animal data. The default value depends on whether the absorption is considered to be fast, moderate, or slow (Type F, M, or S). Current dose assessment for an individual upon inadvertent exposure to airborne radioactive strontium assumes that all strontium compounds are Type F (soluble) or Type S (insoluble). Pure high-fired strontium titanate (SrTiO3) is considered Type S. The purpose of this project was to determine the solubility of strontium titanate in the form created at the ITP facility. An in vitro dissolution study was performed with a precipitate simulant and with several types of strontium titanate, and the results were compared. An in vivo study was also performed with high-fired SrTiO3 in rats. The data from both studies were used independently to assign the compounds to absorption types based on criteria specified in ICRP 71. Results of the in vitro studies showed that the DWPF simulant should be assigned to Type M and the strontium titanate should be assigned to Type S. The difference for the DWPF simulant may be due to the other chemicals present. Results of the in vivo study verified that SrTiO3 should be assigned to Type S. Lung clearance data for SrTiO3 in rats showed that 85% cleared within the first 24 hours and the remaining 15% cleared with a half-time of 130 days. The initial rapid clearance is attributed to deposition in the airways rather than the alveolar region.
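The reported two-component clearance (85% within 24 hours, the remaining 15% with a 130-day half-time) can be sketched as a biexponential retention function; the fast-phase half-time below is an assumed value, since the abstract states only that the fast fraction cleared within the first day.

```python
import math

# Two-component lung retention sketch for SrTiO3 in rats.
# The 8-hour fast half-time is an assumption for illustration;
# the 0.85/0.15 split and 130-day slow half-time are from the study.
FAST_FRACTION, FAST_HALF_TIME_D = 0.85, 8.0 / 24.0
SLOW_FRACTION, SLOW_HALF_TIME_D = 0.15, 130.0

def retained_fraction(t_days: float) -> float:
    """Fraction of the initial lung deposit still present at time t (days)."""
    fast = FAST_FRACTION * math.exp(-math.log(2) * t_days / FAST_HALF_TIME_D)
    slow = SLOW_FRACTION * math.exp(-math.log(2) * t_days / SLOW_HALF_TIME_D)
    return fast + slow
```

At long times the fast term is negligible and the retained fraction is governed entirely by the 130-day component, which is what drives the Type S assignment.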

  5. Organics Characterization Of DWPF Alternative Reductant Simulants, Glycolic Acid, And Antifoam 747

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, T. L.; Wiedenman, B. J.; Lambert, D. P.

    The present study examines the fate of glycolic acid and other organics added in the Chemical Processing Cell (CPC) of the Defense Waste Processing Facility (DWPF) as part of the glycolic alternate flowsheet. Adoption of this flowsheet is expected to provide certain benefits: a reduction in processing time, a decrease in hydrogen generation, simplification of chemical storage and handling, and an improvement in the processing characteristics of the waste stream, including an increase in the amount of nitrate allowed in the CPC process. Understanding the fate of organics in this flowsheet is imperative because tank farm waste processed in the CPC is eventually immobilized by vitrification; thus, the type and amount of organics present in the melter feed may affect melt processing and the quality of the final glass product, as well as alter flammability calculations for the DWPF melter off-gas. To evaluate the fate of the organic compounds added as part of the glycolic flowsheet, mainly glycolic acid and antifoam 747, samples of simulated waste processed using the DWPF CPC protocol for tank farm sludge feed were generated and analyzed for organic compounds using a variety of analytical techniques at the Savannah River National Laboratory (SRNL). These techniques included Ion Chromatography (IC), Gas Chromatography-Mass Spectrometry (GC-MS), Inductively Coupled Plasma-Atomic Emission Spectroscopy (ICP-AES), and Nuclear Magnetic Resonance (NMR) Spectroscopy. A set of samples was also sent to the Catholic University of America Vitreous State Laboratory (VSL) for analysis by NMR Spectroscopy at the University of Maryland, College Park. The analytical methods developed and executed at SRNL collectively showed that glycolic acid was the most prevalent organic compound in the supernatants of the Slurry Mix Evaporator (SME) products examined. Furthermore, the studies suggested that commercially available glycolic acid contained minor amounts of impurities, such as formic and diglycolic acids, that were then carried over into the SME products. Oxalic acid present in the simulated tank farm waste was also detected. Finally, numerous other compounds were observed at low concentrations in etheric extracts of aqueous supernate solutions of the SME samples and are thought to be breakdown products of antifoam 747. The data collectively suggest that although the addition of glycolic acid and antifoam 747 will introduce a number of impurities and breakdown products into the melter feed, the concentrations of these organics are expected to remain low and may not significantly impact REDOX or off-gas flammability predictions. In the SME products examined here, which contained varying amounts of glycolic acid and antifoam 747, no unexpected organic degradation product was found at concentrations above 500 mg/kg, a reasonable threshold concentration for an organic compound to be taken into account in REDOX modeling. This statement does not include oxalic and formic acids, which were sometimes observed above 500 mg/kg, or acetic acid, which has an analytical detection limit of 1250 mg/kg due to the high glycolate concentration in the SME products tested. Once a finalized REDOX equation has been developed and implemented, the REDOX properties of known organic species will be determined and their impact assessed. Although no immediate concerns arose during the study in terms of a negative impact of organics present in SME products of the glycolic flowsheet, evidence of antifoam degradation suggests that an alternative to antifoam 747 is worth considering. The determination and implementation of an antifoam that is more hydrolysis-resistant would have benefits such as increased effectiveness over time and reduced generation of degradation products.

  6. Broadband mixing of $${\mathscr{P}}{\mathscr{T}}$$-symmetric and $${\mathscr{P}}{\mathscr{T}}$$-broken phases in photonic heterostructures with a one-dimensional loss/gain bilayer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Özgün, Ege; Serebryannikov, Andriy E.; Ozbay, Ekmel

    Combining loss and gain components in one photonic heterostructure opens a new route to efficient manipulation by radiation, transmission, absorption, and scattering of electromagnetic waves. Therefore, loss/gain structures enabling $${\mathscr{P}}{\mathscr{T}}$$-symmetric and $${\mathscr{P}}{\mathscr{T}}$$-broken phases for eigenvalues have extensively been studied in the last decade. In particular, translation from one phase to another, which occurs at the critical point in the two-channel structures with one-dimensional loss/gain components, is often associated with one-way transmission. In this report, broadband mixing of the $${\mathscr{P}}{\mathscr{T}}$$-symmetric and $${\mathscr{P}}{\mathscr{T}}$$-broken phases for eigenvalues is theoretically demonstrated in heterostructures with four channels obtained by combining a one-dimensional loss/gain bilayer and one or two thin polarization-converting components (PCCs). The broadband phase mixing in the four-channel case is expected to yield advanced transmission and absorption regimes. Various configurations are analyzed, which are distinguished in symmetry properties and polarization conversion regime of PCCs. The conditions necessary for phase mixing are then discussed. The simplest two-component configurations with broadband mixing are found, as well as the more complex three-component configurations wherein symmetric and broken sets are not yet mixed and appear in the neighbouring frequency ranges. Peculiarities of eigenvalue behaviour are considered for different permittivity ranges of loss/gain medium, i.e., from epsilon-near-zero to high-epsilon regime.
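The symmetric/broken eigenvalue phases mentioned above are commonly illustrated with a minimal two-site model rather than the four-channel heterostructure of this work; the sketch below shows the standard 2x2 PT-symmetric Hamiltonian for one gain and one loss site, whose eigenvalues are real when the coupling exceeds the gain/loss rate and form a complex-conjugate pair otherwise.

```python
import numpy as np

def eigenvalues(gamma: float, k: float) -> np.ndarray:
    """Eigenvalues of a 2x2 PT-symmetric gain/loss dimer.

    gamma: gain/loss rate; k: coupling between the two sites.
    Eigenvalues are +-sqrt(k^2 - gamma^2): real for k > gamma
    (PT-symmetric phase), purely imaginary for k < gamma
    (PT-broken phase); k = gamma is the exceptional point.
    """
    H = np.array([[1j * gamma, k],
                  [k, -1j * gamma]])
    return np.linalg.eigvals(H)

symmetric = eigenvalues(gamma=0.5, k=1.0)  # real pair
broken = eigenvalues(gamma=1.0, k=0.5)     # complex-conjugate (imaginary) pair
```

This toy model captures only the phase transition at the exceptional point, not the broadband phase mixing across four channels that the paper demonstrates.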

  7. Broadband mixing of [Formula: see text]-symmetric and [Formula: see text]-broken phases in photonic heterostructures with a one-dimensional loss/gain bilayer.

    PubMed

    Özgün, Ege; Serebryannikov, Andriy E; Ozbay, Ekmel; Soukoulis, Costas M

    2017-11-14

    Combining loss and gain components in one photonic heterostructure opens a new route to efficient manipulation by radiation, transmission, absorption, and scattering of electromagnetic waves. Therefore, loss/gain structures enabling [Formula: see text]-symmetric and [Formula: see text]-broken phases for eigenvalues have extensively been studied in the last decade. In particular, translation from one phase to another, which occurs at the critical point in the two-channel structures with one-dimensional loss/gain components, is often associated with one-way transmission. In this report, broadband mixing of the [Formula: see text]-symmetric and [Formula: see text]-broken phases for eigenvalues is theoretically demonstrated in heterostructures with four channels obtained by combining a one-dimensional loss/gain bilayer and one or two thin polarization-converting components (PCCs). The broadband phase mixing in the four-channel case is expected to yield advanced transmission and absorption regimes. Various configurations are analyzed, which are distinguished in symmetry properties and polarization conversion regime of PCCs. The conditions necessary for phase mixing are discussed. The simplest two-component configurations with broadband mixing are found, as well as the more complex three-component configurations wherein symmetric and broken sets are not yet mixed and appear in the neighbouring frequency ranges. Peculiarities of eigenvalue behaviour are considered for different permittivity ranges of loss/gain medium, i.e., from epsilon-near-zero to high-epsilon regime.

  8. Broadband mixing of $${\mathscr{P}}{\mathscr{T}}$$-symmetric and $${\mathscr{P}}{\mathscr{T}}$$-broken phases in photonic heterostructures with a one-dimensional loss/gain bilayer

    DOE PAGES

    Özgün, Ege; Serebryannikov, Andriy E.; Ozbay, Ekmel; ...

    2017-11-14

    Combining loss and gain components in one photonic heterostructure opens a new route to efficient manipulation by radiation, transmission, absorption, and scattering of electromagnetic waves. Therefore, loss/gain structures enabling $${\mathscr{P}}{\mathscr{T}}$$-symmetric and $${\mathscr{P}}{\mathscr{T}}$$-broken phases for eigenvalues have extensively been studied in the last decade. In particular, translation from one phase to another, which occurs at the critical point in the two-channel structures with one-dimensional loss/gain components, is often associated with one-way transmission. In this report, broadband mixing of the $${\mathscr{P}}{\mathscr{T}}$$-symmetric and $${\mathscr{P}}{\mathscr{T}}$$-broken phases for eigenvalues is theoretically demonstrated in heterostructures with four channels obtained by combining a one-dimensional loss/gain bilayer and one or two thin polarization-converting components (PCCs). The broadband phase mixing in the four-channel case is expected to yield advanced transmission and absorption regimes. Various configurations are analyzed, which are distinguished in symmetry properties and polarization conversion regime of PCCs. The conditions necessary for phase mixing are then discussed. The simplest two-component configurations with broadband mixing are found, as well as the more complex three-component configurations wherein symmetric and broken sets are not yet mixed and appear in the neighbouring frequency ranges. Peculiarities of eigenvalue behaviour are considered for different permittivity ranges of loss/gain medium, i.e., from epsilon-near-zero to high-epsilon regime.

  9. Improving Adherence to Long-term Opioid Therapy Guidelines to Reduce Opioid Misuse in Primary Care: A Cluster-Randomized Clinical Trial

    PubMed Central

    Liebschutz, Jane M.; Xuan, Ziming; Shanahan, Christopher W.; LaRochelle, Marc; Keosaian, Julia; Beers, Donna; Guara, George; O’Connor, Kristen; Alford, Daniel P.; Parker, Victoria; Weiss, Roger D.; Samet, Jeffrey H.; Crosson, Julie; Cushman, Phoebe A.; Lasser, Karen E.

    2017-01-01

    IMPORTANCE Prescription opioid misuse is a national crisis. Few interventions have improved adherence to opioid-prescribing guidelines. OBJECTIVE To determine whether a multicomponent intervention, Transforming Opioid Prescribing in Primary Care (TOPCARE; http://mytopcare.org/), improves guideline adherence while decreasing opioid misuse risk. DESIGN, SETTING, AND PARTICIPANTS Cluster-randomized clinical trial among 53 primary care clinicians (PCCs) and their 985 patients receiving long-term opioid therapy for pain. The study was conducted from January 2014 to March 2016 in 4 safety-net primary care practices. INTERVENTIONS Intervention PCCs received nurse care management, an electronic registry, 1-on-1 academic detailing, and electronic decision tools for safe opioid prescribing. Control PCCs received electronic decision tools only. MAIN OUTCOMES AND MEASURES Primary outcomes included documentation of guideline-concordant care (both a patient-PCC agreement in the electronic health record and at least 1 urine drug test [UDT]) over 12 months and 2 or more early opioid refills. Secondary outcomes included opioid dose reduction (ie, 10% decrease in morphine-equivalent daily dose [MEDD] at trial end) and opioid treatment discontinuation. Adjusted outcomes controlled for differing baseline patient characteristics: substance use diagnosis, mental health diagnoses, and language. RESULTS Of the 985 participating patients, 519 were men, and 466 were women (mean [SD] patient age, 54.7 [11.5] years). Patients received a mean (SD) MEDD of 57.8 (78.5) mg. At 1 year, intervention patients were more likely than controls to receive guideline-concordant care (65.9% vs 37.8%; P < .001; adjusted odds ratio [AOR], 6.0; 95% CI, 3.6–10.2), to have a patient-PCC agreement (of the 376 without an agreement at baseline, 53.8% vs 6.0%; P < .001; AOR, 11.9; 95% CI, 4.4–32.2), and to undergo at least 1 UDT (74.6% vs 57.9%; P < .001; AOR, 3.0; 95% CI, 1.8–5.0). 
There was no difference in odds of early refill receipt between groups (20.7% vs 20.1%; AOR, 1.1; 95% CI, 0.7–1.8). Intervention patients were more likely than controls to have either a 10% dose reduction or opioid treatment discontinuation (AOR, 1.6; 95% CI, 1.3–2.1; P < .001). In adjusted analyses, intervention patients had a mean (SE) MEDD 6.8 (1.6) mg lower than controls (P < .001). CONCLUSIONS AND RELEVANCE A multicomponent intervention improved guideline-concordant care but did not decrease early opioid refills. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT01909076 PMID:28715535
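The secondary-outcome definition above (a 10% or greater reduction in morphine-equivalent daily dose, or opioid discontinuation) reduces to a simple comparison; the function below is an illustrative sketch with hypothetical field names, not the study's analysis code.

```python
def met_dose_outcome(baseline_medd_mg: float, final_medd_mg: float) -> bool:
    """Return True if a patient met the trial's secondary dose outcome.

    Met by either opioid treatment discontinuation (final MEDD of 0)
    or at least a 10% decrease in MEDD at trial end.
    """
    if final_medd_mg == 0:
        return True  # treatment discontinued
    return final_medd_mg <= 0.9 * baseline_medd_mg  # >=10% reduction
```

For example, a patient starting at 60 mg MEDD meets the outcome at 54 mg or below, but not at 58 mg.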

  10. Improving Adherence to Long-term Opioid Therapy Guidelines to Reduce Opioid Misuse in Primary Care: A Cluster-Randomized Clinical Trial.

    PubMed

    Liebschutz, Jane M; Xuan, Ziming; Shanahan, Christopher W; LaRochelle, Marc; Keosaian, Julia; Beers, Donna; Guara, George; O'Connor, Kristen; Alford, Daniel P; Parker, Victoria; Weiss, Roger D; Samet, Jeffrey H; Crosson, Julie; Cushman, Phoebe A; Lasser, Karen E

    2017-09-01

    Prescription opioid misuse is a national crisis. Few interventions have improved adherence to opioid-prescribing guidelines. To determine whether a multicomponent intervention, Transforming Opioid Prescribing in Primary Care (TOPCARE; http://mytopcare.org/), improves guideline adherence while decreasing opioid misuse risk. Cluster-randomized clinical trial among 53 primary care clinicians (PCCs) and their 985 patients receiving long-term opioid therapy for pain. The study was conducted from January 2014 to March 2016 in 4 safety-net primary care practices. Intervention PCCs received nurse care management, an electronic registry, 1-on-1 academic detailing, and electronic decision tools for safe opioid prescribing. Control PCCs received electronic decision tools only. Primary outcomes included documentation of guideline-concordant care (both a patient-PCC agreement in the electronic health record and at least 1 urine drug test [UDT]) over 12 months and 2 or more early opioid refills. Secondary outcomes included opioid dose reduction (ie, 10% decrease in morphine-equivalent daily dose [MEDD] at trial end) and opioid treatment discontinuation. Adjusted outcomes controlled for differing baseline patient characteristics: substance use diagnosis, mental health diagnoses, and language. Of the 985 participating patients, 519 were men, and 466 were women (mean [SD] patient age, 54.7 [11.5] years). Patients received a mean (SD) MEDD of 57.8 (78.5) mg. At 1 year, intervention patients were more likely than controls to receive guideline-concordant care (65.9% vs 37.8%; P < .001; adjusted odds ratio [AOR], 6.0; 95% CI, 3.6-10.2), to have a patient-PCC agreement (of the 376 without an agreement at baseline, 53.8% vs 6.0%; P < .001; AOR, 11.9; 95% CI, 4.4-32.2), and to undergo at least 1 UDT (74.6% vs 57.9%; P < .001; AOR, 3.0; 95% CI, 1.8-5.0). There was no difference in odds of early refill receipt between groups (20.7% vs 20.1%; AOR, 1.1; 95% CI, 0.7-1.8). 
Intervention patients were more likely than controls to have either a 10% dose reduction or opioid treatment discontinuation (AOR, 1.6; 95% CI, 1.3-2.1; P < .001). In adjusted analyses, intervention patients had a mean (SE) MEDD 6.8 (1.6) mg lower than controls (P < .001). A multicomponent intervention improved guideline-concordant care but did not decrease early opioid refills. clinicaltrials.gov Identifier: NCT01909076.

  11. Results Of Hg Speciation Testing On DWPF SMECT-1, SMECT-3, And SMECT-5 Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C.

    2016-01-07

    The Savannah River National Laboratory (SRNL) was tasked with preparing and shipping samples for Hg speciation by Eurofins Frontier Global Sciences, Inc. in Seattle, WA on behalf of the Savannah River Remediation (SRR) Mercury Task Team. The thirteenth shipment of samples was designated to include Defense Waste Processing Facility (DWPF) Slurry Mix Evaporator Condensate Tank (SMECT) samples from Sludge Receipt and Adjustment Tank (SRAT) Batches 736 and 738. Triplicate samples of each material were prepared for this shipment. Each replicate was analyzed for seven Hg species: total Hg, total soluble (dissolved) Hg, elemental Hg [Hg(0)], ionic (inorganic) Hg [Hg(I) and Hg(II)], methyl Hg [CH3Hg-X, where X is a counter anion], ethyl Hg [CH3CH2-Hg-X, where X is a counter anion], and dimethyl Hg [(CH3)2Hg]. The difference between the total Hg and total soluble Hg measurements gives the particulate Hg concentration, i.e., Hg adsorbed to the surface of particulate matter in the sample, but without resolution of the specific adsorbed species. The average concentrations of Hg species in the aqueous samples, derived from the Eurofins reported data and corrected for dilutions performed by SRNL, are tabulated.
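The by-difference particulate Hg calculation described above can be sketched directly; the concentrations in the usage example are hypothetical, not measured values from the shipment.

```python
def particulate_hg(total_ug_L: float, dissolved_ug_L: float) -> float:
    """Particulate Hg = total Hg - total soluble (dissolved) Hg.

    Resolves only the particulate-bound fraction, not which Hg
    species are adsorbed to the particulate matter.
    """
    if dissolved_ug_L > total_ug_L:
        raise ValueError("dissolved Hg cannot exceed total Hg")
    return total_ug_L - dissolved_ug_L

# Hypothetical example concentrations in ug/L:
particulate = particulate_hg(total_ug_L=120.0, dissolved_ug_L=45.0)
```

The guard clause reflects that a dissolved result above the total would indicate an analytical inconsistency rather than a physical fraction.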

  12. Inhibition Of Washed Sludge With Sodium Nitrite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Congdon, J. W.; Lozier, J. S.

    2012-09-25

    This report describes the results of electrochemical tests used to determine the relationship between the concentration of aggressive anions in washed sludge and the minimum effective inhibitor concentration. Sodium nitrite was added as the inhibitor because of its compatibility with the DWPF process. A minimum of 0.05 M nitrite is required to inhibit the washed sludge simulant solution used in this study. When worst-case compositions and safety margins are considered, it is expected that a minimum operating limit of nearly 0.1 M nitrite will be specified. The validity of this limit depends on the accuracy of the concentrations and solubility splits previously reported. Sodium nitrite additions to obtain 0.1 M nitrite concentrations in washed sludge will necessitate additional washing of the washed precipitate in order to decrease its sodium nitrite inhibitor requirements sufficiently to remain below the sodium limits in the feed to the DWPF. Nitrite will be the controlling anion in "fresh" washed sludge unless the soluble chloride concentration is about ten times higher than predicted by the solubility splits. Inhibition of "aged" washed sludge will not be a problem unless significant chloride dissolution occurs during storage. It will be very important to monitor the composition of washed sludge during processing and storage.

  13. Hydrogen Production in Radioactive Solutions in the Defense Waste Processing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CRAWFORD, CHARLES L.

    2004-05-26

    In the radioactive slurries and solutions to be processed in the Defense Waste Processing Facility (DWPF), hydrogen will be produced continuously by radiolysis. This production results from alpha, beta, and gamma rays from the decay of radionuclides in the slurries and solutions interacting with the water. More than 1000 research reports have published data concerning this radiolytic production, and the results of these studies have been reviewed in a comprehensive monograph. Information about radiolytic hydrogen production from the different process tanks is necessary to determine the air purge rates needed to prevent flammable mixtures from accumulating in the vapor spaces above these tanks. Radiolytic hydrogen production rates are usually presented in terms of G values, or molecules of hydrogen produced per 100 eV of radioactive decay energy absorbed by the slurry or solution. With the G value for hydrogen production, G(H2), for a particular slurry and the concentrations of radioactive species in that slurry, the rate of H2 production for that slurry can be calculated. An earlier investigation estimated that the maximum rate of hydrogen production from the sludge slurry stream to the DWPF corresponds to a G value of 0.45 molecules per 100 eV of radioactive decay energy absorbed by the slurry.
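The G-value arithmetic described above converts absorbed decay energy into a hydrogen generation rate: molecules of H2 = G(H2) x (energy absorbed / 100 eV). The sketch below uses the quoted bounding value G(H2) = 0.45, with an assumed example decay power that is not a DWPF tank figure.

```python
# Physical constants
EV_PER_JOULE = 1.0 / 1.602176634e-19   # eV per joule
AVOGADRO = 6.02214076e23               # molecules per mole

def h2_rate_mol_per_hr(decay_power_watts: float, g_h2: float = 0.45) -> float:
    """H2 generation rate (mol/hr) from absorbed decay power.

    g_h2 is the G value in molecules per 100 eV; 0.45 is the bounding
    value cited above for the DWPF sludge slurry stream.
    """
    ev_per_s = decay_power_watts * EV_PER_JOULE   # energy absorbed per second
    molecules_per_s = g_h2 * ev_per_s / 100.0     # G is defined per 100 eV
    return molecules_per_s * 3600.0 / AVOGADRO

rate = h2_rate_mol_per_hr(100.0)  # assumed 100 W of absorbed decay power
```

A rate in mol/hr can then be compared, via the ideal gas law and the tank vapor-space volume, against the air purge needed to stay below the flammability limit.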

  14. DWPF Melt Cell Crawler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, C.R.

    2003-04-08

    On December 2, 2002, Remote and Specialty Equipment Systems (RSES) of the Savannah River Technology Center (SRTC) was requested to build a remotely operated crawler to assist in cleaning the Defense Waste Processing Facility (DWPF) melt cell floor of glass, tools, and other debris. The crawler was to assist a grapple and vacuum system in cleaning the cell. The crawler was designed to push glass and debris into piles so that the grapple could pick up the material and place it in waste bins. The crawler was also designed to maneuver the end of the vacuum hose, if needed. In addition, the crawler was designed to clean the area beneath the cell worktable that was inaccessible to the grapple and vacuum system. Originally, the system was to be ready for deployment by December 17; the date was moved up to December 12 to better utilize the available time for cleanup. The crawler was designed and built in 10 days and completed cleaning the melt cell in 8 days. Due to initial problems with the grapple and vacuum system, the crawler completed essentially all of the cleanup tasks by itself. The crawler also cleaned an area on the west side of the cell that was not initially slated for cleaning.

  15. DETERMINATION OF REPORTABLE RADIONUCLIDES FOR DWPF SLUDGE BATCH 7B (MACROBATCH 9)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C. L.; Diprete, D. P.

    The Waste Acceptance Product Specifications (WAPS) 1.2 require that “The Producer shall report the inventory of radionuclides (in Curies) that have half-lives longer than 10 years and that are, or will be, present in concentrations greater than 0.05 percent of the total inventory for each waste type indexed to the years 2015 and 3115”. As part of the strategy to comply with WAPS 1.2, the Defense Waste Processing Facility (DWPF) will report, for each waste type, all radionuclides (with half-lives greater than 10 years) that have concentrations greater than 0.01 percent of the total inventory from the time of production through the 1100-year period from 2015 through 3115. The initial listing of radionuclides to be included is based on the design-basis glass as identified in the Waste Form Compliance Plan (WCP) and Waste Form Qualification Report (WQR). However, it is required that this list be expanded if other radionuclides with half-lives greater than 10 years are identified that may meet the greater-than-0.01% criterion for Curie content. Specification 1.6 of the WAPS, International Atomic Energy Agency (IAEA) Safeguards Reporting for High Level Waste (HLW), requires that the ratio by weights of the following uranium and plutonium isotopes be reported: U-233, U-234, U-235, U-236, U-238, Pu-238, Pu-239, Pu-240, Pu-241, and Pu-242. Therefore, the complete set of reportable radionuclides must also include this set of U and Pu isotopes. The DWPF is receiving radioactive sludge slurry from HLW Tank 40. The radioactive sludge slurry in Tank 40 is a blend of the heel from Sludge Batch 7a (SB7a) and Sludge Batch 7b (SB7b) that was transferred to Tank 40 from Tank 51. The blend of sludge in Tank 40 is also referred to as Macrobatch 9 (MB9). This report develops the list of reportable radionuclides and their associated activities as a function of time. 
The DWPF will use this list and the activities as one of the inputs for the development of the Production Records that relate to radionuclide inventory. This work was initiated through Technical Task Request (TTR) HLW-DWPF-TTR-2011-0004, Rev. 0, entitled Sludge Batch 7b Qualification Studies. Specifically, this report details results from performing Subtask II, Item 2 of the TTR and, in part, meets Deliverable 6 of the TTR. The work was performed following the Task Technical and Quality Assurance Plan (TTQAP), SRNL-RP-2011-00247, Rev. 0, and the Analytical Study Plan (ASP), SRNL-RP-2011-00248, Rev. 0. In order to determine the reportable radionuclides for SB7b (MB9), a list of radioisotopes that may meet the criteria specified by the Department of Energy’s (DOE) WAPS was developed. All radioactive U-235 fission products and all radioactive activation products that could be in the SRS HLW were considered. In addition, all U and Pu isotopes identified in WAPS 1.6 were included in the list. This list was then evaluated, and some isotopes were excluded from the projection calculations. Based on measurements and analytical detection limits, 27 radionuclides have been identified as reportable for DWPF SB7b as specified by WAPS 1.2. The WCP and WQR require that all of the radionuclides present in the Design Basis glass be considered as the initial set of reportable radionuclides. For SB7b, all of the radionuclides in the Design Basis glass are reportable except for three: Pd-107, Cs-135, and Th-230. At no time during the 1100-year period between 2015 and 3115 did any of these three radionuclides contribute more than 0.01% of the radioactivity on a Curie basis. Two additional uranium isotopes (U-235 and U-236) must be added to the list of reportable radionuclides in order to meet WAPS 1.6. 
All of the Pu isotopes (Pu-238, -239, -240, -241, and -242) and other U isotopes (U-233, -234, and -238) identified in WAPS 1.6 were already determined to be reportable according to WAPS 1.2. This brings the total number of reportable radionuclides for SB7b to 29. The radionuclide measurements made for SB7b are similar to those performed in the previous SB7a MB8 work. Some method development and refinement occurred during the conduct of these measurements, leading to lower detection limits and more accurate measurement of some isotopes than was previously possible. Improvement in the analytical measurements will likely continue, and this in turn should lead to improved detection limit values for some radionuclides and actual measurements for still others.
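The WAPS 1.2 screening described above amounts to comparing each nuclide's activity against 0.01% of the total Curie inventory; the sketch below applies that threshold to a hypothetical inventory (the separate half-life-greater-than-10-years screen and the WAPS 1.6 U/Pu additions are omitted).

```python
def reportable(inventory_ci, threshold=1e-4):
    """Nuclides whose activity exceeds `threshold` of the total inventory.

    inventory_ci maps nuclide name -> activity in Curies; the default
    threshold of 1e-4 is the 0.01% criterion from WAPS 1.2. Assumes
    all entries already passed the >10-year half-life screen.
    """
    total = sum(inventory_ci.values())
    return sorted(n for n, a in inventory_ci.items() if a > threshold * total)

# Hypothetical inventory (Ci), not SB7b data:
example = {"Sr-90": 5.0e5, "Cs-137": 4.8e5, "Pu-238": 2.0e3, "Th-230": 1.0}
```

In the actual compliance calculation this screen is applied at every point over the 1100-year projection, so a nuclide is excluded only if it never crosses the threshold between 2015 and 3115.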

  16. Determination Of Reportable Radionuclides For DWPF Sludge Batch 7B (Macrobatch 9)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C. L.; DiPrete, D. P.

    The Waste Acceptance Product Specifications (WAPS) 1.2 require that “The Producer shall report the inventory of radionuclides (in Curies) that have half-lives longer than 10 years and that are, or will be, present in concentrations greater than 0.05 percent of the total inventory for each waste type indexed to the years 2015 and 3115”. As part of the strategy to comply with WAPS 1.2, the Defense Waste Processing Facility (DWPF) will report for each waste type, all radionuclides (with half-lives greater than 10 years) that have concentrations greater than 0.01 percent of the total inventory from time of production through the 1100-year period from 2015 through 3115. The initial listing of radionuclides to be included is based on the design-basis glass as identified in the Waste Form Compliance Plan (WCP) and Waste Form Qualification Report (WQR). However, it is required that this list be expanded if other radionuclides with half-lives greater than 10 years are identified that may meet the greater than 0.01% criterion for Curie content. Specification 1.6 of the WAPS, International Atomic Energy Agency (IAEA) Safeguards Reporting for High Level Waste (HLW), requires that the ratio by weights of the following uranium and plutonium isotopes be reported: U-233, U-234, U-235, U-236, U-238, Pu-238, Pu-239, Pu-240, Pu-241, and Pu-242. Therefore, the complete set of reportable radionuclides must also include this set of U and Pu isotopes. The DWPF is receiving radioactive sludge slurry from HLW Tank 40. The radioactive sludge slurry in Tank 40 is a blend of the heel from Sludge Batch 7a (SB7a) and Sludge Batch 7b (SB7b) that was transferred to Tank 40 from Tank 51. The blend of sludge in Tank 40 is also referred to as Macrobatch 9 (MB9). This report develops the list of reportable radionuclides and associated activities as a function of time. 
The DWPF will use this list and the activities as one of the inputs for the development of the Production Records that relate to radionuclide inventory. This work was initiated through Technical Task Request (TTR) HLW-DWPF-TTR-2011-0004; Rev. 0 entitled Sludge Batch 7b Qualification Studies. Specifically, this report details results from performing Subtask II, Item 2 of the TTR and, in part, meets Deliverable 6 of the TTR. The work was performed following the Task Technical and Quality Assurance Plan (TTQAP), SRNL-RP-2011-00247, Rev. 0 and Analytical Study Plan (ASP), SRNL-RP-2011-00248, Rev. 0. In order to determine the reportable radionuclides for SB7b (MB9), a list of radioisotopes that may meet the criteria as specified by the Department of Energy’s (DOE) WAPS was developed. All radioactive U-235 fission products and all radioactive activation products that could be in the SRS HLW were considered. In addition, all U and Pu isotopes identified in WAPS 1.6 were included in the list. This list was then evaluated and some isotopes were excluded from the projection calculations. Based on measurements and analytical detection limits, 27 radionuclides have been identified as reportable for DWPF SB7b as specified by WAPS 1.2. The WCP and WQR require that all of the radionuclides present in the Design Basis glass be considered as the initial set of reportable radionuclides. For SB7b, all of the radionuclides in the Design Basis glass are reportable except for three radionuclides: Pd-107, Cs-135, and Th-230. At no time during the 1100-year period between 2015 and 3115 did any of these three radionuclides contribute to more than 0.01% of the radioactivity on a Curie basis. Two additional uranium isotopes (U-235 and -236) must be added to the list of reportable radionuclides in order to meet WAPS 1.6. 
All of the Pu isotopes (Pu-238, -239, -240, -241, and -242) and other U isotopes (U-233, -234, and -238) identified in WAPS 1.6 were already determined to be reportable according to WAPS 1.2. This brings the total number of reportable radionuclides for SB7b to 29. The radionuclide measurements made for SB7b are similar to those performed in the previous SB7a MB8 work. Some method development/refinement occurred during the conduct of these measurements, leading to lower detection limits and more accurate measurement of some isotopes than was previously possible. Improvement in the analytical measurements will likely continue, and this in turn should lead to improved detection limit values for some radionuclides and actual measurements for still others.
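    The screening logic described in this record (half-life greater than 10 years and more than 0.01% of total curies at any point between 2015 and 3115) can be sketched as follows. This is an illustrative sketch only, not the report's methodology; the nuclide half-lives and activities below are hypothetical placeholders, not measured SB7b values, and no decay-chain ingrowth is modelled:

```python
def reportable(nuclides, start=2015, end=3115, fraction=1e-4, min_half_life=10.0):
    """Return the set of nuclides meeting a WAPS 1.2 style criterion.

    nuclides: dict name -> (half_life_years, activity_curies_at_start)
    A nuclide is reportable if its half-life exceeds min_half_life and its
    share of total activity exceeds `fraction` in any year of the window.
    Simple exponential decay only; ingrowth from parents is ignored.
    """
    result = set()
    for year in range(start, end + 1):
        t = year - start
        activities = {name: a0 * 2.0 ** (-t / hl)
                      for name, (hl, a0) in nuclides.items()}
        total = sum(activities.values())
        if total == 0:
            continue
        for name, (hl, _) in nuclides.items():
            if hl > min_half_life and activities[name] / total > fraction:
                result.add(name)
    return result

# Hypothetical illustration: curie values are made up for the demo.
demo = {
    "Cs-137": (30.1, 1.0e6),   # (half-life in years, curies at 2015)
    "Tc-99":  (2.11e5, 5.0e2),
    "Ru-106": (1.02, 1.0e5),   # half-life < 10 y: excluded regardless of share
}
print(sorted(reportable(demo)))  # → ['Cs-137', 'Tc-99']
```

A real evaluation would also force the WAPS 1.6 U and Pu isotopes into the final list regardless of their activity share, as the abstract notes for U-235 and U-236.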

  17. Yield Stress Reduction of DWPF Melter Feed Slurries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M.E.; Smith, M.E.

    2007-07-01

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site vitrifies High Level Waste for repository interment. The process consists of three major steps: waste pretreatment, vitrification, and canister decontamination/sealing. The HLW consists of insoluble metal hydroxides and soluble sodium salts. The pretreatment process acidifies the sludge with nitric and formic acids, adds the glass formers as glass frit, then concentrates the resulting slurry to approximately 50 weight percent (wt%) total solids. This slurry is fed to the joule-heated melter, where the remaining water is evaporated, followed by calcination of the solids and conversion to glass. The Savannah River National Laboratory (SRNL) is currently assisting DWPF efforts to increase throughput of the melter. As part of this effort, SRNL has investigated methods to increase the solids content of the melter feed to reduce the heat load required to complete the evaporation of water and allow more of the available energy to calcine and vitrify the waste. The process equipment in the facility is fixed and cannot process materials with high yield stresses; therefore, increasing the solids content will require that the yield stress of the melter feed slurries be reduced. Changing the glass former added during pretreatment from an irregularly shaped glass frit to nearly spherical beads was evaluated. The evaluation required a systems approach which included evaluations of the effectiveness of beads in reducing the melter feed yield stress as well as evaluations of the processing impacts of changing the frit morphology. Processing impacts of beads include changing the settling rate of the glass former (which affects mixing and sampling of the melter feed slurry and the frit addition equipment) as well as impacts on the melt behavior due to the decreased surface area of the beads versus frit. Beads were produced from the DWPF process frit by fire polishing. 
The frit was allowed to free fall through a flame, then quenched with a water spray. Approximately 90% of the frit was converted to beads by this process. Yield stress reduction was measured by preparing melter feed slurries (using nonradioactive HLW simulants) that contain beads and comparing the yield stress with melter feed containing frit. A second set of tests was performed with beads of various diameters to determine if a decrease in diameter affected the results. Smaller particle size was shown to increase yield stress when frit is utilized. The settling rate of the beads was required to match the settling rate of the frit; therefore, a decrease in particle size was anticipated. Settling tests were conducted in water, xanthan gum solutions, and in non-radioactive simulants of the HLW. The tests used time-lapse videography as well as solids sampling to evaluate the settling characteristics of beads compared to frit of the same particle size. A preliminary melt rate evaluation was performed using a dry-fed Melt Rate Furnace (MRF) developed by SRNL. A preliminary evaluation of the impact of beading the frit on the frit addition system was completed by conducting flow loop testing. A recirculation loop was built with a total length of about 30 feet. Pump power, flow rate, outlet pressure, and observations of the flow in the horizontal upper section of the loop were noted. The recirculation flow was then gradually reduced and the above items recorded until settling was noted in the recirculation line. Overall, the data show that the line pressure increased as the solids content was increased for the same flow rate. In addition, the line pressure was higher for Frit 320 than for the beads at the same solids level and flow. With these observations, a determination of the minimum velocity to prevent settling could be made, but a graph of the line pressures versus velocity for the various tests was deemed to be more objective. 
The graph shows that the inflection point in pressure drop is about the same for the beads and Frit 320. This indicates that the bead slurry would not require higher flow rates than frit slurry at DWPF during transfers. Another key finding was that the pump impeller was not significantly damaged by the bead slurry, while the Frit 320 slurry rapidly destroyed the impeller. Evidence of this was first observed when black particles were seen in the Frit 320 slurry being recirculated and then confirmed by a post-test inspection of the impeller. Finally, the pumping of bead slurry could be recovered even if flow is stopped. The Frit 320 slurry could not be restarted after stopping flow due to the tendency of the frit to pack tightly when settled. Beads were shown to represent a significant process improvement versus frit for the DWPF process in lowering the yield stress of the melter feed. Lower erosion of process equipment is another expected benefit.

  18. DWPF DECON FRIT: SUMP AND SLURRY SOLIDS ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.; Peeler, D.; Click, D.

    The Savannah River National Laboratory (SRNL) has been requested to perform analyses on samples of the Defense Waste Processing Facility (DWPF) decon frit slurry (i.e., supernate samples and sump solid samples). Four 1-L liquid slurry samples were provided to SRNL by Savannah River Remediation (SRR) from the 'front-end' decon activities. Additionally, two 1-L sump solids samples were provided to SRNL for compositional and physical analysis. In this report, the physical and chemical characterization results of the slurry solids and sump solids are reported. Crawford et al. (2010) provide the results of the supernate analysis. The results of the sump solids are reported on a mass basis given the samples were essentially dry upon receipt. The results of the slurry solids were converted to a volume basis given approximately 2.4 grams of slurry solids were obtained from the approximately 4 liters of liquid slurry sample. Although there were slight differences in the analytical results between the sump solids and slurry solids, the following general summary statements can be made. Slight differences in the results are also captured for specific analyses. (1) Physical characterization - (a) SEM/EDS analysis suggested that the samples were enriched in Li and Si (B and Na not detectable using the current EDS system), which is consistent with two of the four principal oxides of Frit 418 (B2O3, Na2O, Li2O and SiO2). (b) SEM/EDS analysis also identified impurities which were elementally consistent with stainless steel (i.e., Fe, Ni, Cr contamination). (c) XRD results indicated that the sump solids samples were amorphous, which is consistent with XRD results expected for a Frit 418 based sample. (d) For the sump solids, SEM/EDS analysis indicated that the particle size of the sump solids was consistent with that of an as-received Frit 418 sample from a current DWPF vendor. 
(e) For the slurry solids, SEM/EDS analysis indicated that the particle size range of the slurry solids was much broader than that of the sump solids. More specifically, there were significantly more fines in the slurry solids as compared to the sump solids. (f) PSD results indicated that > 99% of both the sump and slurry solids were less than 350 microns. The PSD results also supported the SEM/EDS analysis in that there were significantly more fines in the slurry solids as compared to the sump solids. (2) Weight Percent Solids - Based on the measured supernate density and mass of insoluble solids (2.388 grams) filtered from the four liters of liquid slurry samples, the weight percent insoluble solids was estimated to be 0.060 wt%. This level of insoluble solids is higher than the ETP WAC limit of 100 mg/L, or 0.01 wt%, which suggests that a separation technology of some type would be required. (3) Chemical Analysis - (a) Elemental results from ICP-ES analysis indicated that the sump solids and slurry solids were very consistent with the nominal composition of Frit 418. There were other elements identified by ICP analysis which were either consistent with the presence of stainless steel (as identified by SEM/EDS analysis) or impurities that have been observed in 'as received' Frit 418 from the vendor. (b) IC anion analysis of the sump solids and slurry solids indicated all of the species were less than detection limits. (c) Radionuclide analysis of the sump solids also indicated that most of the analytes were either at or below the detection limits. (d) Organic analysis of the sump solids and slurry solids indicated all of the species were less than detection limits. It should be noted that the results of this study may not be representative of future decon frit solutions or sump/slurry solids samples. 
Therefore, future DWPF decisions regarding the possible disposal pathways for either the aqueous or solid portions of the Decon Frit system need to factor in the potential differences. More specifically, introduction of a different frit or changes to other DWPF flowsheet unit operations (e.g., different sludge batch or coupling with other process streams) may impact not only the results but also the conclusions regarding acceptability with respect to the ETF WAC limits or other alternative disposal options.
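    The weight-percent-solids estimate in item (2) of this record is simple arithmetic; the sketch below reproduces it. The supernate density used here (~1.0 g/mL) is an assumed placeholder, since the measured value is not given in this summary:

```python
def insoluble_wt_percent(solids_g, volume_ml, density_g_per_ml):
    # wt% insoluble solids = solids mass / total slurry mass * 100
    return 100.0 * solids_g / (volume_ml * density_g_per_ml)

# 2.388 g of insoluble solids filtered from ~4 L of slurry;
# density assumed to be ~1.0 g/mL for illustration
wt = insoluble_wt_percent(2.388, 4000.0, 1.0)
print(round(wt, 3))  # → 0.06, consistent with the reported 0.060 wt%
# The WAC limit of 100 mg/L corresponds to 0.01 wt% at this assumed density.
```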

  19. Implementation of flowsheet change to minimize hydrogen and ammonia generation during chemical processing of high level waste in the Defense Waste Processing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Dan P.; Woodham, Wesley H.; Williams, Matthew S.

    Testing was completed to develop a chemical processing flowsheet for the Defense Waste Processing Facility (DWPF), designed to vitrify and stabilize high level radioactive waste. DWPF processing uses a reducing acid (formic acid) and an oxidizing acid (nitric acid) to rheologically thin the slurry and complete the necessary acid-base and reduction reactions (primarily mercury and manganese). Formic acid reduces mercuric oxide to elemental mercury, allowing the mercury to be removed during the boiling phase of processing through steam stripping. In runs with active catalysts, formic acid can decompose to hydrogen and nitrate can be reduced to ammonia, both flammable gases, due to rhodium and ruthenium catalysis. Replacement of formic acid with glycolic acid eliminates the generation of rhodium- and ruthenium-catalyzed hydrogen and ammonia. In addition, mercury reduction is still effective with glycolic acid. Hydrogen, ammonia and mercury are discussed in the body of the report. Ten abbreviated tests were completed to develop the operating window for implementation of the flowsheet and determine the impact of changes in acid stoichiometry and the blend of nitric and glycolic acid as it impacts various processing variables over a wide processing region. Three full-length 4-L lab-scale simulations demonstrated the viability of the flowsheet under planned operating conditions. The flowsheet is planned for implementation in early 2017.

  20. Sludge batch 9 (SB9) acceptance evaluation. Radionuclide concentrations in tank 51 SB9 qualification sample prepared at SRNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.; Diprete, D. P.; Pareizs, J. M.

    Presented in this report are radionuclide concentrations required as part of the program of qualifying Sludge Batch 9 (SB9) for processing in the Defense Waste Processing Facility (DWPF). The SB9 material is currently in Tank 51 and has been washed and prepared for transfer to Tank 40. The acceptance evaluation needs to be completed prior to the transfer of the material in Tank 51 to Tank 40. The sludge slurry in Tank 40 has already been qualified for DWPF processing and is currently being processed as Sludge Batch 8 (SB8). The radionuclide concentrations were measured or estimated in the Tank 51 SB9 Washed Qualification Sample prepared at Savannah River National Laboratory (SRNL). This sample was prepared from a three-liter sample of Tank 51 sludge slurry (HTF-51-15-81) taken on July 23, 2015. The sample was delivered to SRNL where it was initially characterized in the Shielded Cells. Under the direction of Savannah River Remediation (SRR), it was then adjusted per the Tank Farm washing strategy as of October 20, 2015. This final slurry now has a composition expected to be similar to that of the slurry in Tank 51 after final preparations have been made for transfer of that slurry to Tank 40.

  2. The Impact Of The MCU Life Extension Solvent On Sludge Batch 8 Projected Operating Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D. K.; Edwards, T. B.; Stone, M. E.

    2013-08-14

    As a part of the Actinide Removal Process (ARP)/Modular Caustic Side Solvent Extraction Unit (MCU) Life Extension Project, a next generation solvent (NGS) and a new strip acid will be deployed. The strip acid will be changed from dilute nitric acid to dilute boric acid (0.01 M). Because of these changes, experimental testing or evaluations with the next generation solvent are required to determine the impact of these changes (if any) to Chemical Process Cell (CPC) activities, glass formulation strategies, and melter operations at the Defense Waste Processing Facility (DWPF). The introduction of the dilute (0.01 M) boric acid stream into the DWPF flowsheet has a potential impact on glass formulation and frit development efforts since B2O3 is a major oxide in frits developed for DWPF. Prior knowledge of this stream can be accounted for during frit development efforts, but that was not the case for Sludge Batch 8 (SB8). Frit 803 has already been recommended and procured for SB8 processing; altering the frit to account for the incoming boron from the strip effluent (SE) is not an option for SB8. Therefore, the operational robustness of Frit 803 to the introduction of SE, including its compositional tolerances (i.e., up to 0.0125 M boric acid), is of interest and was the focus of this study. The primary question to be addressed in the current study was: What is the impact (if any) on the projected operating windows for the Frit 803 – SB8 flowsheet to additions of B2O3 from the SE in the Sludge Receipt and Adjustment Tank (SRAT)? More specifically, will Frit 803 be robust to the potential compositional changes occurring in the SRAT due to sludge variation, varying additions of ARP and/or the introduction of SE by providing access to waste loadings (WLs) of interest to DWPF? 
The Measurement Acceptability Region (MAR) results indicate there is very little, if any, impact on the projected operating windows for the Frit 803 – SB8 system regardless of the presence or absence of ARP and SE (up to 2 wt% B2O3 contained in the SRAT and up to 2000 gallons of ARP). It should be noted that 0.95 wt% B2O3 is the nominal projected concentration in the SRAT based on a 0.0125 M boric acid flowsheet with 70,000 liters of SE being added to the SRAT. The impact on CPC processing of a 0.01 M boric acid solution for elution of cesium during Modular Caustic Side Solvent Extraction Unit (MCU) processing has previously been evaluated by the Savannah River National Laboratory (SRNL). Increasing the acid strength to 0.0125 M boric acid to account for variations in the boric acid strength was reviewed against that previous evaluation. The acid from the boric acid represented approximately 5% of the total acid during the previous evaluation. An increase from 0.01 to 0.0125 M boric acid represents a change of approximately 1.3%, which is well within the error of the acid calculation. Therefore, no significant changes to CPC processing (hydrogen generation, metal solubilities, rheological properties, REDOX control, etc.) are expected from an increase in allowable boric acid concentration from 0.01 M to 0.0125 M.
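    The ~1.3% figure in this record follows from simple proportions; the sketch below just reproduces that arithmetic, taking the 5% boric-acid share of total acid directly from the text:

```python
# Boric acid supplied ~5% of the total acid in the previous CPC evaluation.
boric_share = 0.05
# Going from 0.01 M to 0.0125 M is a 25% relative increase in boric acid...
relative_increase = (0.0125 - 0.01) / 0.01           # = 0.25
# ...so the change in *total* acid is the product of the two fractions.
total_acid_change = boric_share * relative_increase  # = 0.0125
print(f"{100 * total_acid_change:.2f}% change in total acid")  # → 1.25%
```

The 1.25% result is consistent with the "approximately 1.3%" quoted in the abstract, and is well within the stated error of the acid calculation.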

  3. SLUDGE BATCH 4 BASELINE MELT RATE FURNACE AND SLURRY-FED MELT RATE FURNACE TESTS WITH FRITS 418 AND 510 (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, M. E.; Jones, T. M.; Miller, D. H.

    Several Slurry-Fed Melt Rate Furnace (SMRF) tests with earlier projections of the Sludge Batch 4 (SB4) composition have been performed.1,2 The first SB4 SMRF test used Frits 418 and 320; however, it was found after the test that the REDuction/OXidation (REDOX) correlation at that time did not have the proper oxidation state for manganese. Because the manganese level in the SB4 sludge was higher than in previous sludge batches tested, the impact of the higher manganese oxidation state was greater. The glasses were highly oxidized and very foamy, and therefore the results were inconclusive. After resolving this REDOX issue, Frits 418, 425, and 503 were tested in the SMRF with the updated baseline SB4 projection. Based on dry-fed Melt Rate Furnace (MRF) tests and the above-mentioned SMRF tests, two previous frit recommendations were made by the Savannah River National Laboratory (SRNL) for processing of SB4 in the Defense Waste Processing Facility (DWPF). The first was Frit 503, based on the June 2006 composition projections.3 The recommendation was changed to Frit 418 as a result of the October 2006 composition projections (after the Tank 40 decant was implemented as part of the preparation plan). However, the start of SB4 processing was delayed due to the control room consolidation outage and the repair of the valve box in the Tank 51 to Tank 40 transfer line. These delays resulted in changes to the projected SB4 composition. Due to the slight change in composition and based on preliminary dry-fed MRF testing, SRNL believed that Frit 510 would increase throughput in processing SB4 in DWPF. Frit 418, which was used in processing Sludge Batch 3 (SB3), was a viable candidate and available in DWPF. Therefore, it was used during the initial SB4 processing. Due to the potential for higher melt rates with Frit 510, SMRF tests with the latest SB4 composition (1298 canisters) and Frits 510 and 418 were performed at a targeted waste loading (WL) of 35%. 
The '1298 canisters' refers to the number of equivalent canisters that would be produced from the beginning of the current contract period before SB3 is blended with SB4. The melt rate for the SMRF SB4/Frit 510 test was 14.6 grams/minute. Due to cold cap mounding problems with the SMRF SB4/Frit 418 feed at 50 weight % solids that prevented a melt rate determination, this feed was diluted to 45 weight % solids. The melt rate for this diluted feed was 8.9 grams/minute. A correction factor of 1.2 for estimating the melt rate at 50 weight % solids from 45 weight % solids test results (based on previous SMRF testing5) was then used to estimate a melt rate of 10.7 grams/minute for SB4/Frit 418 at 50 weight % solids. Therefore, the use of Frit 510 versus Frit 418 with SB4 resulted in a higher melt rate (14.6 versus an estimated 10.7 grams/minute). For reference, a previous SMRF test with SB3/Frit 418 feed at 35% waste loading and 50 weight % solids resulted in a melt rate of 14.1 grams/minute. Therefore, depending on the actual feed rheology, the use of Frit 510 with SB4 could result in melt rates similar to those experienced with SB3/Frit 418 feed in the DWPF.
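    The 10.7 grams/minute estimate in this record is a single multiplication; the sketch below restates it, with the 1.2 factor taken from the prior SMRF testing cited in the text:

```python
measured_45wt = 8.9  # g/min, SB4/Frit 418 feed diluted to 45 wt% solids
correction = 1.2     # empirical 45 -> 50 wt% solids factor from prior SMRF tests
estimated_50wt = measured_45wt * correction
print(round(estimated_50wt, 1))  # → 10.7 g/min, vs 14.6 g/min for SB4/Frit 510
```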

  4. Cytogenetic effects of space-relevant hze-particles in human blood lymphocytes

    NASA Astrophysics Data System (ADS)

    Lee, R.; Nasonova, E.; Ritter, S.

    The analysis of aberrations in human lymphocytes collected 48 h after exposure has been used since the 1960s to estimate radiation risk. However, evidence is increasing that this protocol is not reliable in the case of high LET exposure, because particle-induced cell cycle delays influence the aberration yield. To contribute to this issue, lymphocytes obtained from a healthy donor were irradiated with Fe-ions (200 MeV/n, 440 keV/μm), iron-like particles (~4 MeV/n Ni- and Cr-ions, ~4000 keV/μm) and X-rays. Directly after irradiation, PHA and BrdU were added to the cell culture medium. Aberrations were measured in first mitoses collected at 48, 60 and 72 h post-irradiation following colcemid treatment and in prematurely condensed G2-cells (PCCs) at 48 h using calyculin A. Samples were stained with the FPG-technique to allow cell cycle discrimination. Additionally, the mitotic index, the BrdU-labelling index and the number of apoptotic cells were determined at several time-points. Analysis of the BrdU-labelling indices and the mitotic indices revealed a dose- and LET-dependent delay in cell cycle progression. Cells that reached the first mitosis 48 h after high LET exposure carried only a few aberrations. However, cells that entered the first mitosis 60 to 72 h after high LET exposure carried at least five times more aberrations than those collected at 48 h. The analysis of chromosomal damage in G2-PCCs showed that the delayed entry of severely damaged cells into mitosis results from a prolonged arrest in G2. Conversely, after X-ray exposure a stable aberration yield was observed in lymphocytes collected at different time-points post-irradiation, and the number of aberrations measured in G2-PCCs was only slightly higher than in metaphase cells. 
Furthermore, a high frequency of apoptotic cells was detected only in samples exposed to stopping heavy charged particles, indicating that under these exposure conditions a large proportion of heavily damaged cells is rapidly removed from the cell population. In summary, our data demonstrate that the genetic risk of high energy Fe-ions will be considerably underestimated when metaphase cells are analysed only at 48 h post-treatment, because of the selective delay of heavily damaged lymphocytes. Similarly, stopping heavy particles affect the time-course of aberrations. However, due to the low mitotic indices and the high apoptotic rates observed in samples exposed to stopping particles, it is reasonable to assume that most aberrant cells are rapidly eliminated from the cell population. Thus, the neoplastic risk of low energy, high LET particles is expected to be low. Supported by BMBF (Bonn), grant 02S8203.

  5. Mercury Phase II Study - Mercury Behavior across the High-Level Waste Evaporator System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.; Crawford, C. L.; Jackson, D. G.

    2016-06-17

    The Mercury Program team’s effort continues to develop more fundamental information concerning mercury behavior across the liquid waste facilities and unit operations. Previously, the team examined the mercury chemistry across salt processing, including the Actinide Removal Process/Modular Caustic Side Solvent Extraction Unit (ARP/MCU), and the Defense Waste Processing Facility (DWPF) flowsheets. This report documents the data and understanding of mercury across the high level waste 2H and 3H evaporator systems.

  6. Road Map for Development of Crystal-Tolerant High Level Waste Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matyas, Josef; Vienna, John D.; Peeler, David

    This road map guides the research and development for formulation and processing of crystal-tolerant glasses, identifying near- and long-term activities that need to be completed over the period from 2014 to 2019. The primary objective is to maximize waste loading for Hanford waste glasses without jeopardizing melter operation by crystal accumulation in the melter or melter discharge riser. The potential applicability to the Savannah River Site (SRS) Defense Waste Processing Facility (DWPF) is also addressed in this road map.

  7. Multivariate Statistical Modelling of Drought and Heat Wave Events

    NASA Astrophysics Data System (ADS)

    Manning, Colin; Widmann, Martin; Vrac, Mathieu; Maraun, Douglas; Bevaqua, Emanuele

    2016-04-01

    Compound extreme events are a combination of two or more contributing events which in themselves may not be extreme but through their joint occurrence produce an extreme impact. Compound events are noted in the latest IPCC report as an important type of extreme event that has been given little attention so far. As part of the CE:LLO project (Compound Events: muLtivariate statisticaL mOdelling) we are developing a multivariate statistical model to gain an understanding of the dependence structure of certain compound events. One focus of this project is on the interaction between drought and heat wave events. Soil moisture has both a local and non-local effect on the occurrence of heat waves, where it strongly controls the latent heat flux affecting the transfer of sensible heat to the atmosphere. These processes can create a feedback whereby a heat wave may be amplified or suppressed by the soil moisture preconditioning, and vice versa, the heat wave may in turn have an effect on soil conditions. An aim of this project is to capture this dependence in order to correctly describe the joint probabilities of these conditions and the resulting probability of their compound impact. We will show an application of Pair Copula Constructions (PCCs) to study the aforementioned compound event. PCCs allow, in theory, for the formulation of multivariate dependence structures in any dimension, where the PCC is a decomposition of a multivariate distribution into a product of bivariate components modelled using copulas. 
A copula is a multivariate distribution function which allows one to model the dependence structure of given variables separately from their marginal behaviour. We first look at the structure of soil moisture drought over the whole of France using the SAFRAN dataset between 1959 and 2009. Soil moisture is represented using the Standardised Precipitation Evapotranspiration Index (SPEI). Drought characteristics are computed at grid point scale, where drought conditions are identified as those with an SPEI value below -1.0. We model the multivariate dependence structure of drought events defined by certain characteristics and compute return levels of these events. We initially find that drought characteristics such as duration, mean SPEI and the maximum contiguous area to a grid point all have positive correlations, though the degree to which they are correlated can vary considerably spatially. A spatial representation of return levels may then provide insight into the areas most prone to drought conditions. As a next step, we analyse the dependence structure between soil moisture conditions preceding the onset of a heat wave and the heat wave itself.
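    For context on the pair copula construction (PCC) described in this record: a PCC factorises a multivariate density into a product of bivariate copulas, so a single bivariate copula is the basic building block. The sketch below is an illustrative stand-in, not the authors' model; it samples from a Clayton copula (one common bivariate family, with Kendall's tau equal to theta/(theta+2)) by conditional inversion. In an application like the one above, drought variables such as duration and mean SPEI would enter through their marginal CDFs:

```python
import random

def sample_clayton(theta, n, seed=7):
    """Draw n pairs of uniforms from a bivariate Clayton copula.

    Uses conditional inversion: draw u ~ U(0,1), then invert the
    conditional CDF C(v | u) at a second uniform w.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        pairs.append((u, v))
    return pairs

def kendall_tau(pairs):
    # Naive O(n^2) concordance count; fine for a small demo.
    n = len(pairs)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            du = pairs[i][0] - pairs[j][0]
            dv = pairs[i][1] - pairs[j][1]
            s += (du * dv > 0) - (du * dv < 0)
    return 2.0 * s / (n * (n - 1))

samples = sample_clayton(theta=2.0, n=500)  # theoretical tau = 2/(2+2) = 0.5
print(round(kendall_tau(samples), 2))       # empirical tau, close to 0.5
```

A full PCC (vine copula) would chain such bivariate building blocks together, conditioning each successive pair on the variables already modelled.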

  8. Motility of the oesophagus and small bowel in adults treated for Hirschsprung's disease during early childhood.

    PubMed

    Medhus, A W; Bjørnland, K; Emblem, R; Husebye, E

    2010-02-01

    Dysmotility of the upper gastrointestinal (GI) tract has been reported in children with Hirschsprung's disease (HD). In the present study, motility of the oesophagus and the small bowel was studied in adults treated for HD during early childhood to elucidate whether there are alterations in motility of the upper GI tract in this patient group. Ambulatory small bowel manometry with recording sites in duodenum/jejunum was performed in 16 adult patients with surgically treated HD and 17 healthy controls. In addition, oesophageal manometry was performed with station pull-through technique. The essential patterns of small bowel motility were recognized in all patients and controls. During fasting, phase III of the migrating motor complex (MMC) was more prominent in patients with HD than in controls when accounting for duration and propagation velocity (P = 0.006). Phase I of the MMC was of shorter duration (P = 0.008), and phase II tended to be of longer duration (P = 0.05) in the patients. During daytime fasting, propagated clustered contractions (PCCs) were more frequent in the patients (P = 0.01). Postprandially, the patients demonstrated a higher contractile frequency (P = 0.02), a shorter duration of contractions (P = 0.008) and more frequent PCCs (P < 0.001). The patients had normal oesophageal motility. This study demonstrates that adult patients with HD have preserved essential patterns of oesophageal and small bowel motility. However, abnormalities mainly characterized by increased contractile activity of the small bowel during fasting and postprandially are evident. These findings indicate alterations in neuronal control of motility and persistent involvement of the upper GI tract in this disease.

  9. Role of Geography and Nurse Practitioner Scope-of-Practice in Efforts to Expand Primary Care System Capacity: Health Reform and the Primary Care Workforce.

    PubMed

    Graves, John A; Mishra, Pranita; Dittus, Robert S; Parikh, Ravi; Perloff, Jennifer; Buerhaus, Peter I

    2016-01-01

    Little is known about the geographic distribution of the overall primary care workforce that includes both physician and nonphysician clinicians--particularly in areas with restrictive nurse practitioner scope-of-practice laws and where there are relatively large numbers of uninsured. We investigated whether geographic accessibility to primary care clinicians (PCCs) differed across urban and rural areas and across states with more or less restrictive scope-of-practice laws. This was an observational study using the 2013 Area Health Resource File (AHRF) and US Census Bureau county travel data. The measures included the percentage of the population in low-accessibility, medium-accessibility, and high-accessibility areas; the number of geographically accessible primary care physicians (PCMDs), nurse practitioners (PCNPs), and physician assistants (PCPAs) per 100,000 population; and the number of uninsured per PCC. We found divergent patterns in the geographic accessibility of PCCs. PCMDs constituted the largest share of the workforce across all settings, but were relatively more concentrated within urban areas. Accessibility to nonphysicians was highest in rural areas: there were more accessible PCNPs per 100,000 population in rural areas of restricted scope-of-practice states (21.4) than in urban areas of full practice states (13.9). Despite having more accessible nonphysician clinicians, rural areas had the largest number of uninsured per PCC in 2012. While less restrictive scope-of-practice states had up to 40% more PCNPs in some areas, we found little evidence of differences in the share of the overall population in low-accessibility areas across scope-of-practice categorizations. Removing restrictive scope-of-practice laws may expand the overall capacity of the primary care workforce, but only modestly in the short run. Additional efforts are needed that recognize the locational tendencies of physicians and nonphysicians.

  10. Lactobacillus acidophilus suppresses intestinal inflammation by inhibiting endoplasmic reticulum stress.

    PubMed

    Kim, Da Hye; Kim, Soochan; Lee, Jin Ha; Kim, Jae Hyeon; Che, Xiumei; Ma, Hyun Woo; Seo, Dong Hyuk; Kim, Tae Il; Kim, Won Ho; Kim, Seung Won; Cheon, Jae Hee

    2018-06-22

    Nuclear factor kappa B (NF-κB) activation and endoplasmic reticulum (ER) stress signaling play significant roles in the pathogenesis of inflammatory bowel disease (IBD). Thus, we evaluated whether new therapeutic probiotics have anti-colitic effects and investigated their mechanisms related to the NF-κB and ER-stress pathways. Luciferase, nitric oxide (NO), and cytokine assays using HT-29 or RAW264.7 cells were conducted. Mouse colitis was induced using dextran sulfate sodium (DSS) and confirmed by disease activity index and histology. Macrophages and T-cell subsets in isolated peritoneal cavity cells (PCCs) and splenocytes were analyzed by flow cytometry. Gene and cytokine expression profiles were determined using RT-PCR. Lactobacillus acidophilus (LA1) and Pediococcus pentosaceus inhibited NO production in RAW264.7 cells, but only LA1 inhibited Tnfa and induced Il10 expression. LA1 increased the life span of DSS-treated mice and attenuated the severity of colitis by inducing M2 macrophages in PCCs and Th2 and Treg cells in splenocytes. The restoration of goblet cells in the colon was accompanied by the induction of Il10 expression and the suppression of proinflammatory cytokines. Additionally, we found that LA1 exerts an anti-colitic effect by alleviating ER stress in HT-29 cells as well as in vivo. We showed that LA1 significantly interferes with ER stress and suppresses NF-κB activation. Our findings suggest that LA1 can be used as a potent immunomodulator in IBD treatment and that the regulation of ER stress may have significant implications in treating IBD.

  11. The regional cerebral blood flow changes in major depressive disorder with and without psychotic features.

    PubMed

    Gonul, Ali Saffet; Kula, Mustafa; Bilgin, Arzu Guler; Tutus, Ahmet; Oguz, Aslan

    2004-09-01

    Depressive patients with psychotic features demonstrate distinct biological abnormalities in the hypothalamic-pituitary-adrenal (HPA) axis, dopaminergic activity, electroencephalogram sleep profiles, and measures of serotonergic function when compared to nonpsychotic depressive patients. However, very few functional neuroimaging studies have been specifically designed to study the effects of psychotic features on neuroimaging findings in depressed patients. The objective of the present study was to compare brain single photon emission computed tomography (SPECT) images in a group of unmedicated depressive patients with and without psychotic features. Twenty-eight patients who fully met DSM-IV criteria for major depressive disorder (MDD; 12 with psychotic features) were included in the study. They were compared with 16 control subjects matched for age, gender, and education. Both psychotic and nonpsychotic depressed patients showed significantly lower regional cerebral blood flow (rCBF) values in the left and right superior frontal cortex and left anterior cingulate cortex compared to those of controls. In comparison with depressive patients without psychotic features (DwoPF), depressive patients with psychotic features (DwPF) showed significantly lower rCBF perfusion ratios in the left parietal cortex and left cerebellum, but higher rCBF perfusion ratios in the left inferior frontal cortex and caudate nucleus. The present study showed that DwPF have a different rCBF pattern compared to patients without psychotic features. Abnormalities involving the inferior frontal cortex, striatum, and cerebellum may play an important role in the generation of psychotic symptoms in depression.

  12. Simulated Waste Testing Of Glycolate Impacts On The 2H-Evaporator System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, C. J.

    2013-08-13

    Glycolic acid is being studied as a total or partial replacement for formic acid in the Defense Waste Processing Facility (DWPF) feed preparation process. After implementation, the recycle stream from DWPF back to the high-level waste tank farm will contain soluble sodium glycolate. Most of the potential impacts of glycolate in the tank farm were addressed via a literature review, but several outstanding issues remained. This report documents non-radioactive simulant tests of the impacts of glycolate on storage and evaporation of Savannah River Site high-level waste. The testing for which non-radioactive simulants could be used involved the following: the partitioning of glycolate into the evaporator condensate, the impacts of glycolate on metal solubility, and the impacts of glycolate on the formation and dissolution of sodium aluminosilicate scale within the evaporator. The following are among the conclusions from this work: Evaporator condensate did not contain appreciable amounts of glycolate anion. Of all tests, the highest glycolate concentration in the evaporator condensate was 0.38 mg/L. A significant portion of the tests had glycolate concentrations in the condensate below the limit of quantification (0.1 mg/L). At ambient conditions, evaporator testing did not show significant effects of glycolate on the soluble components in the evaporator concentrates. Testing with sodalite solids and silicon-containing solutions did not show significant effects of glycolate on sodium aluminosilicate formation or dissolution.

  13. Actual Waste Demonstration of the Nitric-Glycolic Flowsheet for Sludge Batch 9 Qualification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newell, J. D.; Pareizs, J. M.; Martino, C. J.

    For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) performs qualification testing to demonstrate that the sludge batch is processable. Testing performed by SRNL has shown glycolic acid to be effective in replacing the function of formic acid in the DWPF chemical process. The nitric-glycolic flowsheet reduces mercury, significantly lowers the catalytic generation of hydrogen and ammonia (which could allow purge reduction in the Sludge Receipt and Adjustment Tank (SRAT)), stabilizes the pH and chemistry in the SRAT and the Slurry Mix Evaporator (SME), allows for effective rheology adjustment, and is favorable with respect to melter flammability. In order to implement the new flowsheet, SRAT and SME cycles, designated SC-18, were performed using a Sludge Batch (SB) 9 slurry blended from SB8 Tank 40H and Tank 51H samples. The SRAT cycle involved adding nitric and glycolic acids to the sludge, refluxing to steam strip mercury, and dewatering to a targeted solids concentration. Data collected during the SRAT cycle included offgas analyses, process temperatures, heat transfer, and pH measurements. The SME cycle demonstrated the addition of glass frit and the replication of six canister decontamination additions. The demonstration concluded with dewatering to a targeted solids concentration. Data collected during the SME cycle included offgas analyses, process temperatures, heat transfer, and pH measurements. Slurry and condensate samples were collected for subsequent analysis.

  14. Glycolic acid physical properties and impurities assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D. P.; Pickenheim, B. R.; Bibler, N. E.

    This document has been revised due to recent information that the glycolic acid used in Savannah River National Laboratory (SRNL) experiments contains both formaldehyde and methoxyacetic acid. These impurities were in the glycolic acid used in the testing included in this report and in subsequent testing using DuPont (now Chemours) supplied Technical Grade 70 wt% glycolic acid. However, these impurities were not reported in earlier revisions. Additional data concerning the properties of glycolic acid have also been added to this report. The Defense Waste Processing Facility (DWPF) is planning to implement a nitric-glycolic acid flowsheet to increase attainment to meet closure commitment dates during Sludge Batch 9. In fiscal year 2009, SRNL was requested to determine the physical properties of formic and glycolic acid blends. Blends of formic acid in glycolic acid were prepared and their physical properties tested. Increasing amounts of glycolic acid led to increases in blend density, viscosity, and surface tension as compared to the 90 wt% formic acid that is currently used at DWPF. These increases are small, however, and are not expected to present any difficulties in terms of processing. The effect of sulfur impurities in Technical Grade glycolic acid was studied for its impact on DWPF glass quality. While the glycolic acid specification allows for more sulfate than the current formic acid specification, the ultimate impact is expected to be on the order of 0.033 wt% sulfur in glass. Note that lower sulfur content glycolic acid could likely be procured at some increased cost if deemed necessary. A paper study on the effects of radiation on glycolic acid was performed. The analysis indicates that substitution of glycolic acid for formic acid would not increase the radiolytic production rate of H2 or cause an adverse effect in the Sludge Receipt and Adjustment Tank (SRAT) or Slurry Mix Evaporator (SME) process.
It has been reported that glycolic acid solutions depleted of O2 produce considerable quantities of a non-diffusive polymeric material when subjected to large radiation doses. Because a constant air purge is maintained in the SRAT and the solution is continuously mixed, oxygen depletion seems unlikely; however, if this polymer were formed in the SRAT solution, the rheology of the solution could be affected and pumping of the solution could be hindered. An irradiation test with a simulated SRAT product supernate containing glycolic acid in an oxygen-depleted atmosphere found no evidence of polymerization.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiedenman, B. J.; White, T. L.; Mahannah, R. N.

    Ion Chromatography (IC) is the principal analytical method used to support studies of Sludge Receipt and Adjustment Tank (SRAT) chemistry at DWPF. A series of prior analytical "Round Robin" (RR) studies, which included both supernate and sludge samples of SRAT simulant and were previously reported as memos, is tabulated in this report [2,3]. From these studies it was determined to standardize the IC column size to 4 mm diameter, eliminating the capillary column from use. As a follow-on test, the DWPF laboratory, the PSAL laboratory, and the AD laboratory participated in the current analytical RR to determine a suite of anions in SRAT simulant by IC; those results are also tabulated in this report. The particular goal was to confirm the laboratories' ability to measure and quantitate glycolate ion. The target was ±20% inter-laboratory agreement of the analyte averages for the RR. Each of the three laboratories analyzed a batch of 12 samples. For each laboratory, the percent relative standard deviation (%RSD) of the averages for nitrate, glycolate, and oxalate was 10% or less. The three laboratories all met the goal of 20% relative agreement for nitrate and glycolate. For oxalate, the PSAL laboratory reported an average value that was 20% higher than the average values reported by the DWPF laboratory and the AD laboratory. Because of this wider window of agreement, it was decided to continue the practice of an additional acid digestion for total oxalate measurement. It should also be noted that large amounts of glycolate in the SRAT samples will have an impact on detection limits of near-eluting peaks, namely fluoride and formate. A suite of scoping experiments is presented in the report to identify and isolate other potential inter-laboratory discrepancies. Specific ion chromatography inter-laboratory method conditions and differences are tabulated.
Most differences were minor, but there are some significant temperature control equipment differences, leading to a recommendation of a heated jacket for analytical columns that are remoted for use in radiohoods. A suggested method improvement would be to implement column temperature control at a temperature slightly above ambient to avoid peak shifting due to temperature fluctuations. Temperature control in this manner would improve short- and longer-term peak retention time stability. An unknown peak was observed during the analysis of glycolic acid and SRAT simulant; it was determined to best match diglycolic acid. The development of a method for acetate is summarized, and no significant amount of acetate was observed in the SRAT products tested. In addition, an alternative Gas Chromatograph (GC) method for glycolate is summarized.
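
    The %RSD screen and the ±20% inter-laboratory agreement target described above amount to a small calculation. The sketch below illustrates it in Python; the laboratory names are from this report, but the replicate values are hypothetical, chosen only to show the arithmetic:

```python
import statistics

def pct_rsd(values):
    """Percent relative standard deviation of replicate results."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def within_agreement(lab_means, tolerance_pct=20.0):
    """True if every lab mean lies within +/- tolerance_pct of the grand mean."""
    grand = statistics.mean(lab_means)
    return all(abs(m - grand) / grand * 100.0 <= tolerance_pct for m in lab_means)

# Hypothetical glycolate results (mg/L), four replicates per laboratory
labs = {
    "DWPF": [102.0, 98.5, 101.2, 99.8],
    "PSAL": [95.1, 97.3, 96.0, 94.8],
    "AD":   [104.2, 103.1, 105.0, 102.5],
}
means = [statistics.mean(v) for v in labs.values()]
for name, vals in labs.items():
    print(f"{name}: mean={statistics.mean(vals):.1f} mg/L, %RSD={pct_rsd(vals):.1f}")
print("within +/-20% agreement:", within_agreement(means))  # True for these values
```

    With one lab reporting roughly 20% or more above the others (as happened for oxalate), `within_agreement` flips to False, which is the situation that motivated retaining the separate acid digestion for total oxalate.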

  16. ACTUAL WASTE TESTING OF GLYCOLATE IMPACTS ON THE SRS TANK FARM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, C.

    2014-05-28

    Glycolic acid is being studied as a replacement for formic acid in the Defense Waste Processing Facility (DWPF) feed preparation process. After implementation, the recycle stream from DWPF back to the high-level waste Tank Farm will contain soluble sodium glycolate. Most of the potential impacts of glycolate in the Tank Farm were addressed via a literature review and simulant testing, but several outstanding issues remained. This report documents the actual-waste tests to determine the impacts of glycolate on storage and evaporation of Savannah River Site high-level waste. The objectives of this study were the following: (1) determine the extent to which sludge constituents (Pu, U, Fe, etc.) dissolve (the solubility of sludge constituents) in the glycolate-containing 2H-evaporator feed; (2) determine the impact of glycolate on the sorption of fissile (Pu, U, etc.) components onto sodium aluminosilicate solids. The first objective was accomplished through actual-waste testing using Tank 43H and 38H supernatant and Tank 51H sludge at Tank Farm storage conditions. The second objective was accomplished by contacting actual 2H-evaporator scale with the products from the testing for the first objective. There is no anticipated impact of up to 10 g/L of glycolate in DWPF recycle to the Tank Farm on tank waste component solubilities as investigated in this test. Most components were not influenced by glycolate during solubility tests, including major components such as aluminum, sodium, and most salt anions. There was potentially a slight increase in soluble iron with added glycolate, but the soluble iron concentration remained so low (on the order of 10 mg/L) as not to impact the iron-to-fissile ratio in sludge. Uranium and plutonium appear to have been supersaturated in the 2H-evaporator feed solution mixture used for this testing. As a result, there was a reduction of soluble uranium and plutonium as a function of time.
The change in soluble uranium concentration was independent of added glycolate concentration. The change in soluble plutonium content was dependent on the added glycolate concentration, with higher levels of glycolate (5 g/L and 10 g/L) appearing to suppress the plutonium solubility. The inclusion of glycolate did not change the dissolution of, or sorption onto, actual-waste 2H-evaporator pot scale to an extent that will impact Tank Farm storage and concentration. The effects that were noted involved dissolution of components from evaporator scale and precipitation of components onto evaporator scale, and these were independent of the level of added glycolate.

  17. PANDA asymmetric-configuration passive decay heat removal test results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, O.; Dreier, J.; Aubert, C.

    1997-12-01

    PANDA is a large-scale, low-pressure test facility for investigating passive decay heat removal systems for the next generation of LWRs. In the first series of experiments, PANDA was used to examine the long-term LOCA response of the Passive Containment Cooling System (PCCS) for the General Electric (GE) Simplified Boiling Water Reactor (SBWR). The test objectives include concept demonstration and extension of the database available for qualification of containment codes. Also included is the study of the effects of nonuniform distributions of steam and noncondensable gases in the Dry-well (DW) and in the Suppression Chamber (SC). 3 refs., 9 figs.

  18. Defense Waste Processing Facility Canister Closure Weld Current Validation Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korinko, P. S.; Maxwell, D. N.

    Two closure welds on filled Defense Waste Processing Facility (DWPF) canisters failed to be within the acceptance criteria in the DWPF operating procedure SW4-15.80-2.3 (1). In one case, the weld heat setting was inadvertently applied at the value used for test welds (72%) rather than the operating procedure value (82%), producing a weld current of nominally 210 kA compared to the procedure range of 240 kA to 263 kA. The second weld appeared to experience an instrumentation and data acquisition upset. The current for this weld was reported as 191 kA. Review of the data from the Data Acquisition System (DAS) indicated that three of the four current legs were reading the expected values, approximately 62 kA each, and the fourth leg read zero current. Since there is no feasible way by further examination of the process data to ascertain whether this weld was actually made at the target current or the lower current, a test plan was executed to provide assurance that these Nonconforming Welds (NCWs) meet the requirements for strength and leak tightness. Acceptance of the welds is based on evaluation of Test Nozzle Welds (TNW) made specifically for comparison. The TNW were nondestructively and destructively evaluated for plug height, heat tint, ultrasonic testing (UT) for bond length, ultrasonic volumetric examination for weld defects, burst pressure, fractography, and metallography. The testing was conducted in agreement with a Task Technical and Quality Assurance Plan (TTQAP) (2) and applicable procedures.
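
    The leg-current reasoning above (four legs of roughly 62 kA each, a 240 kA to 263 kA acceptance range, and a reported 191 kA total with one leg reading zero) can be illustrated with a short consistency check. This is a hypothetical sketch, not the DWPF acceptance logic; the function name and thresholds are assumptions chosen to match the numbers in the abstract:

```python
def classify_weld(leg_currents, low=240.0, high=263.0):
    """Classify a closure weld from its four leg-current readings (kA).

    A dead leg (zero reading) with the remaining legs in family suggests an
    instrumentation dropout rather than a genuinely low-current weld.
    """
    total = sum(leg_currents)
    live = [c for c in leg_currents if c > 0.0]
    if len(live) < len(leg_currents) and live:
        # Estimate what the total would be if the dead leg read like the others.
        estimated = (sum(live) / len(live)) * len(leg_currents)
        if low <= estimated <= high:
            return "suspect instrumentation dropout"
    if low <= total <= high:
        return "conforming"
    return "nonconforming"

print(classify_weld([62.0, 62.0, 62.0, 62.0]))  # 248 kA total, in range
print(classify_weld([62.0, 62.0, 67.0, 0.0]))   # 191 kA reported, one dead leg
print(classify_weld([52.0, 53.0, 52.0, 53.0]))  # 210 kA, genuinely low
```

    On the second case the estimated total (about 255 kA) falls inside the range, which is why the zero-reading leg points to a data acquisition upset rather than a low-current weld.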

  19. Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)

    1998-01-01

    The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. The real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability-centered maintenance. Predictive maintenance can reduce turnaround time and cost and increase safety as compared to conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine, developed by Qualtech Systems, Inc., was used to exercise the multi-signal models. The capability of predicting the maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations for feature recognition updates identified an otherwise undetectable Main Propulsion System 12-inch prevalve degradation. The algorithms were loaded into the Propulsion Checkout and Control System for further development and are the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real time, meeting the required performance goal of a 1 second cycle time.
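
    The limit checking and statistical anomaly detection mentioned above can be sketched for a single sensor channel. This is an illustrative toy, not the PCCS or TEAMS-RT algorithms; the limits, rolling-window size, and z-score threshold are assumptions:

```python
import math
from collections import deque

class SensorMonitor:
    """Rolling limit check plus z-score anomaly flag for one sensor channel.

    A simplified sketch of real-time sensor validation: a hard limit check
    against an allowed band, and a statistical check against the recent
    history of the channel. Thresholds are illustrative only.
    """
    def __init__(self, low, high, window=20, z_thresh=3.0):
        self.low, self.high = low, high
        self.window = deque(maxlen=window)
        self.z_thresh = z_thresh

    def update(self, reading):
        flags = []
        if not (self.low <= reading <= self.high):
            flags.append("limit")
        if len(self.window) >= 5:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / (len(self.window) - 1)
            std = math.sqrt(var)
            if std > 0 and abs(reading - mean) / std > self.z_thresh:
                flags.append("anomaly")
        self.window.append(reading)
        return flags

mon = SensorMonitor(low=0.0, high=100.0)
for v in [50.1, 49.8, 50.3, 50.0, 49.9, 50.2]:
    assert mon.update(v) == []      # steady readings raise no flags
print(mon.update(80.0))             # within limits but a statistical outlier
print(mon.update(120.0))            # out of limits and an outlier
```

    In a checkout system, a "limit" flag maps naturally to an out-of-range fault, while an "anomaly" flag on an in-range reading is the kind of condition-based cue that scheduled maintenance would miss.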

  20. Passive containment cooling system with drywell pressure regulation for boiling water reactor

    DOEpatents

    Hill, Paul R.

    1994-01-01

    A boiling water reactor having a regulating valve for placing the wetwell in flow communication with an intake duct of the passive containment cooling system. This subsystem can be adjusted to maintain the drywell pressure at (or slightly below or above) wetwell pressure after the initial reactor blowdown transient is over. This addition to the PCCS design has the benefit of eliminating or minimizing steam leakage from the drywell to the wetwell in the longer-term post-LOCA time period and also minimizes the temperature difference between drywell and wetwell. This in turn reduces the rate of long-term pressure buildup of the containment, thereby extending the time to reach the design pressure limit.
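
    The regulation scheme described above, holding drywell pressure at (or slightly below) wetwell pressure, can be illustrated with a toy hysteresis rule for the regulating valve. The function name and setpoints below are hypothetical and are not taken from the patent:

```python
def valve_command(p_drywell_kpa, p_wetwell_kpa, open_above=0.0,
                  close_below=-2.0, is_open=False):
    """Hysteresis logic for a drywell/wetwell regulating valve (sketch).

    Opens when drywell pressure exceeds wetwell pressure and closes once
    drywell falls slightly below it, so the long-term drywell pressure is
    held at or just under wetwell pressure. Setpoints are illustrative.
    """
    dp = p_drywell_kpa - p_wetwell_kpa  # positive: drywell above wetwell
    if dp > open_above:
        return True
    if dp < close_below:
        return False
    return is_open  # inside the deadband: hold the previous state

# Walk the controller through a hypothetical post-blowdown pressure trace.
state = False
for dp in [1.5, 0.5, -0.5, -2.5, -1.0]:
    state = valve_command(100.0 + dp, 100.0, is_open=state)
    print(f"dp={dp:+.1f} kPa -> valve {'open' if state else 'closed'}")
```

    The deadband between the open and close setpoints is what prevents the valve from chattering as the pressure difference hovers near zero.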

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newell, J; Miller, D; Stone, M

    The Savannah River National Laboratory (SRNL) was tasked to provide an assessment of the downstream impacts to the Defense Waste Processing Facility (DWPF) of decisions regarding the implementation of Al-dissolution to support sludge mass reduction and processing. Based on future sludge batch compositional projections from the Liquid Waste Organization's (LWO) sludge batch plan, assessments have been made with respect to the ability to maintain comparable projected operating windows for sludges with and without Al-dissolution. As part of that previous assessment, candidate frits were identified to provide insight into melt rate for average sludge batches representing the with- and without-Al-dissolution flowsheets. Initial melt rate studies using the melt rate furnace (MRF) were performed using five frits each for Cluster 2 and Cluster 4 compositions, representing the average without and with Al-dissolution, respectively. It was determined, however, that the REDOX endpoint (Fe²⁺/ΣFe for the glass) for Clusters 2 and 4 resulted in an overly oxidized feed, which negatively affected the initial melt rate tests. After the sludge was adjusted to a more reduced state, additional testing was performed with frits that contained both high and low concentrations of sodium and boron oxides. These frits were selected strictly based on the ability to ascertain compositional trends in melt rate and did not necessarily apply to any acceptability criteria for DWPF processing. The melt rate data are in general agreement with historical trends observed at SRNL and during processing of SB3 (Sludge Batch 3) and SB4 in DWPF. When MAR acceptability criteria were applied, Frit 510 was seen to have the highest melt rate at 0.67 in/hr for Cluster 2 (without Al-dissolution), which is compositionally similar to SB4. For Cluster 4 (with Al-dissolution), which is compositionally similar to SB3, Frit 418 had the highest melt rate at 0.63 in/hr.
Based on these data, there appears to be a slight advantage of the Frit 510 based system without Al-dissolution relative to the Frit 418 based system with Al-dissolution. Though the without-Al-dissolution scenario suggests a slightly higher melt rate with Frit 510, several points must be taken into consideration: (1) The MRF does not have the ability to assess liquid feeds and, thus, rheology impacts. Instead, the MRF is a 'static' test bed in which a mass of dried melter feed (SRAT product plus frit) is placed in an 'isothermal' furnace for a period of time to assess melt rate. These conditions, although historically effective in terms of identifying candidate frits for specific sludge batches and mapping out melt rate versus waste loading trends, do not allow for assessments of the potential impact of feed rheology on melt rate. That is, if the rheological properties of the slurried melter feed resulted in mounding of the feed in the melter (i.e., the melter feed was thick and did not flow across the cold cap), melt rate and/or melter operations (i.e., surges) could be negatively impacted. This could affect one or both flowsheets. (2) Waste throughput factors were not determined for Frit 510 and Frit 418 over multiple waste loadings. In order to provide insight into the mission life versus canister count question, one needs to define the maximum waste throughput for both flowsheets. Due to funding limitations, the melt rate testing only evaluated melt rate at a fixed waste loading. (3) DWPF will be processing SB5 through the facility in mid-November 2008. Insight into the overarching questions of melt rate, waste throughput, and mission life can be obtained directly from the facility. It is recommended that processing of SB5 through the facility be monitored closely and that those data be used as input into the decision-making process on whether to implement Al-dissolution for future sludge batches.

  2. Final Report - "Foaming and Antifoaming and Gas Entrainment in Radioactive Waste Pretreatment and Immobilization Processes"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wasan, Darsh T.

    2007-10-09

    The Savannah River Site (SRS) and Hanford site are in the process of stabilizing millions of gallons of radioactive waste slurries remaining from production of nuclear materials for the Department of Energy (DOE). The Defense Waste Processing Facility (DWPF) at SRS is currently vitrifying the waste in borosilicate glass, while the facilities at the Hanford site are in the construction phase. Both processes utilize slurry-fed joule-heated melters to vitrify the waste slurries. The DWPF has experienced difficulty during operations. The cause of the operational problems has been attributed to foaming, gas entrainment, and the rheological properties of the process slurries. The rheological properties of the waste slurries limit the total solids content that can be processed by the remote equipment during the pretreatment and melter feed processes. Highly viscous material can lead to air entrainment during agitation and difficulties with pump operations. Excessive foaming in waste evaporators can cause carryover of radionuclides and non-radioactive waste to the condensate system. Experimental and theoretical investigations of the surface phenomena, suspension rheology, and bubble-generation interactions that lead to foaming and air entrainment problems in the DOE High Level and Low Activity Radioactive Waste separation and immobilization processes were pursued under this project. The first major task from the grant proposal involved development of a theoretical model of the phenomenon of foaming in a three-phase gas-liquid-solid slurry system. This work was presented in a recently completed Ph.D. thesis (9). The second major task involved the investigation of the inter-particle interaction and microstructure formation in a model slurry by the batch sedimentation method. Both experiments and modeling studies were carried out, and the results were presented in a recently completed Ph.D. thesis.
The third task involved the use of laser confocal microscopy to study the effectiveness of three slurry rheology modifiers. An effective modifier was identified which resulted in lowering the yield stress of the waste simulant. The results of this research have therefore led to a basic understanding of the foaming/antifoaming mechanism in waste slurries as well as identification of a rheology modifier, which enhances the processing throughput and accelerates the DOE mission. The objectives of this research effort were to develop a fundamental understanding of the physico-chemical mechanisms that produce foaming and air entrainment in the DOE High Level (HLW) and Low Activity (LAW) radioactive waste separation and immobilization processes, and to develop and test advanced antifoam/defoaming/rheology modifier agents. Antifoams/rheology modifiers developed from this research were tested using non-radioactive simulants of the radioactive wastes obtained from Hanford and the Savannah River Site (SRS).

  3. Application of selection and estimation regular vine copula on go public company share

    NASA Astrophysics Data System (ADS)

    Hasna Afifah, R.; Noviyanti, Lienda; Bachrudin, Achmad

    2018-03-01

    The accuracy of financial risk management involving a large number of assets is needed, but information about dependencies among assets cannot be adequately analyzed. To analyze dependencies among a number of assets, several tools have been added to the standard multivariate copula. However, these tools have not been adequately used in applications of higher dimension. The bivariate parametric copula families can be used to solve this: a multivariate copula can be built from bivariate parametric copulas connected by a graphical representation to become Pair Copula Constructions (PCCs), or vine copulas. C-vine and D-vine copulas have been applied in some research, but their use is more limited than that of the R-vine copula. Therefore, this study used the R-vine copula to provide flexibility for modeling complex dependencies in high dimensions. Since the copula is a static model while stock values change over time, the copula is combined with an ARMA-GARCH model for modeling the movement of shares (volatility). The objective of this paper is to select and estimate an R-vine copula used to analyze PT Jasa Marga (Persero) Tbk (JSMR), PT Waskita Karya (Persero) Tbk (WSKT), and PT Bank Mandiri (Persero) Tbk (BMRI) from August 31, 2014 to August 31, 2017. From this method it is found that the selected copulas for the two edges in the first tree are survival Gumbel and the copula for the edge in the second tree is Gaussian.
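
    The pair-copula construction described above can be sketched for three variables in plain Python. This is a simplified illustration, not the authors' procedure: it uses Gaussian pair copulas throughout (the paper selected survival Gumbel and Gaussian families), estimates each parameter by inverting Kendall's tau, skips the ARMA-GARCH filtering and structure-selection steps, and runs on synthetic data rather than the JSMR/WSKT/BMRI returns:

```python
import math
import random
from statistics import NormalDist

N = NormalDist()

def pseudo_obs(x):
    """Rank-transform a sample to (0,1) pseudo-observations."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)
    return u

def kendall_tau(u, v):
    """Kendall's tau from concordant/discordant pair counts (O(n^2))."""
    n, conc = len(u), 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (u[i] - u[j]) * (v[i] - v[j])
            conc += 1 if s > 0 else (-1 if s < 0 else 0)
    return 2.0 * conc / (n * (n - 1))

def tau_to_rho(tau):
    """Invert Kendall's tau for the Gaussian copula: rho = sin(pi*tau/2)."""
    return math.sin(math.pi * tau / 2.0)

def h_func(u, v, rho):
    """Conditional CDF h(u|v) of the bivariate Gaussian copula."""
    a = (N.inv_cdf(u) - rho * N.inv_cdf(v)) / math.sqrt(1.0 - rho * rho)
    return N.cdf(a)

# Toy three-asset example: a shared factor plus idiosyncratic noise.
random.seed(1)
z = [random.gauss(0, 1) for _ in range(300)]
x1 = [zi + 0.3 * random.gauss(0, 1) for zi in z]
x2 = [zi + 0.3 * random.gauss(0, 1) for zi in z]
x3 = [0.5 * zi + random.gauss(0, 1) for zi in z]
u1, u2, u3 = pseudo_obs(x1), pseudo_obs(x2), pseudo_obs(x3)

# Tree 1: pair copulas (1,2) and (2,3); Tree 2: conditional pair (1,3 | 2).
rho12 = tau_to_rho(kendall_tau(u1, u2))
rho23 = tau_to_rho(kendall_tau(u2, u3))
u1_g2 = [h_func(a, b, rho12) for a, b in zip(u1, u2)]
u3_g2 = [h_func(a, b, rho23) for a, b in zip(u3, u2)]
rho13_2 = tau_to_rho(kendall_tau(u1_g2, u3_g2))
print(f"rho(1,2)={rho12:.2f}  rho(2,3)={rho23:.2f}  rho(1,3|2)={rho13_2:.2f}")
```

    The h-function step is what makes this a vine rather than a flat multivariate copula: tree-2 dependence is estimated on observations conditioned through the tree-1 pair copulas.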

  4. Results For The Fourth Quarter 2014 Tank 50 WAC Slurry Sample: Chemical And Radionuclide Contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.

    2015-09-30

    This report details the chemical and radionuclide contaminant results for the characterization of the Calendar Year (CY) 2014 Fourth Quarter sampling of Tank 50 for the Saltstone Waste Acceptance Criteria (WAC) in effect at that time. Information from this characterization will be used by DWPF & Saltstone Facility Engineering (DSFE) to support the transfer of low-level aqueous waste from Tank 50 to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50 Waste Characterization System.

  5. Barriers and facilitators to the implementation of antenatal syphilis screening and treatment for the prevention of congenital syphilis in the Democratic Republic of Congo and Zambia: results of qualitative formative research.

    PubMed

    Nkamba, Dalau; Mwenechanya, Musaku; Kilonga, Arlette Mavila; Cafferata, Maria Luisa; Berrueta, Amanda Mabel; Mazzoni, Agustina; Althabe, Fernando; Garcia-Elorrio, Ezequiel; Tshefu, Antoniette K; Chomba, Elwyn; Buekens, Pierre M; Belizan, Maria

    2017-08-14

    The impact of untreated syphilis during pregnancy on neonatal health remains a major public health threat worldwide. Given the high prevalence of syphilis during pregnancy in Zambia and the Democratic Republic of Congo (DRC), the Preventive Congenital Syphilis Trial (PCS Trial), a cluster randomized trial, was proposed to increase same-day screening and treatment of syphilis during antenatal care visits. To design an accepted and feasible intervention, we conducted qualitative formative research. Our objective was to identify context-specific barriers and facilitators to the implementation of antenatal screening and treatment during pregnancy. The qualitative research included in-depth semi-structured interviews with clinic administrators, group interviews with health care providers, and focus groups with pregnant women in primary care clinics (PCCs) in Kinshasa (DRC) and Lusaka (Zambia). A total of 112 individuals participated in the interviews and focus groups. Barriers to the implementation of syphilis testing and treatment were identified at a) the system level: fragmentation of the health system, existence of ANC guidelines in conflict with the proposed intervention, poor accessibility of clinics (geographical and functional), and staff and product shortages at the PCCs; b) the healthcare providers' level: lack of knowledge and training about evolving best practices, and reservations regarding same-day screening and treatment; and c) the pregnant women's level: late enrollment in ANC, lack of knowledge about consequences and treatment of syphilis, and stigma. Based on these results, we developed recommendations for the design of the PCS Trial intervention. This research allowed us to identify barriers and facilitators to improve the feasibility and acceptability of a behavioral intervention. Formative research is a critical step in designing appropriate and effective interventions by closing the "know-do gap".

  6. Pheochromocytoma of the Organ Zuckerkandl.

    PubMed

    Lee, C; Chang, E; Gimenez, J; McCarron, R

    2017-01-01

    Pheochromocytomas (PCCs), or intra-adrenal paragangliomas (PGLs), are neuroendocrine tumors arising within the adrenal medulla. Extra-adrenal paragangliomas may arise in the sympathetic or parasympathetic paraganglia and, more rarely, in other organs. One of the most common extra-adrenal sites is the organ of Zuckerkandl, a collection of chromaffin cells near the origin of the inferior mesenteric artery or near the aortic bifurcation. The following is a case of a patient with resistant hypertension secondary to an extra-adrenal paraganglioma in the organ of Zuckerkandl. The patient is a 43-year-old man with a history of depression, type 2 diabetes mellitus, and hypertension who was sent to the emergency department by his primary care physician for severely elevated blood pressures. The patient also had diaphoresis, tachycardia, and a new, fine tremor of his left hand. Upon presentation, the patient's blood pressure was 260/120 mmHg with a heart rate of 140 beats per minute. Plasma fractionated metanephrines sent on admission revealed significantly elevated levels of total plasma metanephrines (2558 pg/mL), free metanephrine (74 pg/mL), and free normetanephrine (2484 pg/mL). An I-123 metaiodobenzylguanidine (MIBG) scan showed abnormal uptake in the lower abdomen at the level of the aortic bifurcation. The patient was started on alpha-blockade, with subsequent addition of a beta-blocker prior to surgery. He underwent surgical removal of the tumor, with pathology consistent with a paraganglioma. Pheochromocytomas and paragangliomas are responsible for approximately 0.5 percent of cases of secondary hypertension. Many different biochemical markers have been used to aid in the diagnosis of PCC/PGL, including plasma catecholamines, plasma metanephrines, urine fractionated metanephrines, urine catecholamines, total metanephrines, and vanillylmandelic acid. Definitive management of a PCC or PGL involves surgical removal of the tumor. Finally, there should be a discussion with each patient to determine whether he or she should undergo genetic testing, as studies show that approximately 25 percent of catecholamine-producing PCCs and PGLs are due to heritable genetic mutations.

  7. PROCESSING ALTERNATIVES FOR DESTRUCTION OF TETRAPHENYLBORATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D; Peters, T; Fink, S

    Two processes were chosen in the 1980s at the Savannah River Site (SRS) to decontaminate the soluble High Level Waste (HLW). The In-Tank Precipitation (ITP) process (1,2) was developed at SRS for the removal of radioactive cesium and actinides from the soluble HLW. Sodium tetraphenylborate was added to the waste to precipitate cesium, and monosodium titanate (MST) was added to adsorb actinides, primarily uranium and plutonium. Two products of this process were a low-activity waste stream and a concentrated organic stream containing cesium tetraphenylborate and actinides adsorbed on MST. A copper-catalyzed acid hydrolysis process was built to process (3,4) the Tank 48H cesium tetraphenylborate waste in SRS's Defense Waste Processing Facility (DWPF). Operation of the DWPF would have resulted in the production of benzene for incineration in SRS's Consolidated Incineration Facility. This process was abandoned together with the ITP process in 1998 due to high benzene levels in ITP caused by decomposition of excess sodium tetraphenylborate. Processing in ITP resulted in the production of approximately 1.0 million liters of HLW. SRS has chosen a solvent extraction process combined with adsorption of the actinides to decontaminate the soluble HLW stream (5). However, the waste in Tank 48H is incompatible with existing waste processing facilities. As a result, a processing facility is needed to disposition the HLW in Tank 48H. This paper describes the search for processing options by SRS task teams for the disposition of the waste in Tank 48H. In addition, attempts to develop a caustic hydrolysis process for in-tank destruction of tetraphenylborate will be presented. Lastly, the development of both caustic and acidic copper-catalyzed peroxide oxidation processes will be discussed.

  8. Results from tests of TFL Hydragard sampling loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steimke, J.L.

    When the Defense Waste Processing Facility (DWPF) is operational, processed radioactive sludge will be transferred in batches to the Slurry Mix Evaporator (SME), where glass frit will be added and the contents concentrated by boiling. Batches of the slurry mixture are transferred from the SME to the Melter Feed Tank (MFT). Hydragard® sampling systems are used on the SME and the MFT for collecting slurry samples in vials for chemical analysis. An accurate replica of the Hydragard sampling system was built and tested in the Thermal Fluids Laboratory (TFL) to determine the Hydragard accuracy. It was determined that the original Hydragard valve frequently drew a non-representative sample stream through the sample vial that ranged from frit-enriched to frit-depleted. The Hydragard valve was modified by moving the plunger and its seat backwards so that the outer surface of the plunger was flush with the inside diameter of the transfer line when the valve was open. The slurry flowing through the vial accurately represented the composition of the slurry in the reservoir for two types of slurries, different dilution factors, a range of transfer flows, and a range of vial flows. It was then found that the 15 mL of slurry left in the vial when the Hydragard valve was closed, which is what will be analyzed at DWPF, had a lower ratio of frit to sludge, as characterized by the lithium-to-iron ratio, than the slurry flowing through it. The reason for these differences is not understood at this time, but it is recommended that additional experimentation be performed with the TFL Hydragard loop to determine the cause.

  9. VizieR Online Data Catalog: Second Planck Catalogue of Compact Sources (PCCS2) (Planck+, 2016)

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argueso, F.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Beichman, C.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bohringer, H.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Carvalho, P.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Clemens, M.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; De Rosa, A.; de Zotti, G.; Delabrouille, J.; Desert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejse, L. A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P. M.; Macias-Perez, J. 
F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschenes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Negrello, M.; Netterfield, C. B.; Norgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prezeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J. A.; Rusholme, B.; Sandri, M.; Sanghera, H. S.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Torni Koski, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Walter, B.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2017-01-01

    The Low Frequency Instrument (LFI) DPC produced the 30, 44, and 70 GHz maps after the completion of eight full surveys (spanning the period 12 August 2009 to 3 August 2013). In addition, special LFI maps covering the period 1 April 2013 to 30 June 2013 were produced in order to compare the Planck flux-density scales with those of the Very Large Array and the Australia Telescope Compact Array, by performing simultaneous observations of a sample of sources over that period. The High Frequency Instrument (HFI) DPC produced the 100, 143, 217, 353, 545, and 857 GHz maps after five full surveys (12 August 2009 to 11 January 2012). (16 data files).

  10. Epidemiology of organophosphate pesticide poisoning in Taiwan.

    PubMed

    Lin, Tzeng Jih; Walter, Frank Gardner; Hung, Dong Zong; Tsai, Jin Lian; Hu, Sheng Chuan; Chang, Jung San; Deng, Jou-Fang; Chase, Jung San; Denninghoff, Kurt; Chan, Hon Man

    2008-11-01

    The nationwide epidemiology of organophosphate pesticide (OP) poisoning has never been reported in detail for Taiwan. This study retrospectively reviewed all human OP exposures reported to Taiwan's Poison Control Centers (PCCs) from July 1985 through December 2006. There were 4799 OP exposures. Most OP exposures were acute (98.37%) ingestions (74.50%) of a single OP (80.37%) to attempt suicide (64.72%) in adults (93.25%). Males were the most common gender (64.95%). Most patients (61.97%) received atropine and/or pralidoxime. The mortality rate for all 4799 OP exposures was 12.71%. Exposures to single OPs without co-intoxicants caused 524 deaths; of these, 63.36% were due to dimethyl OPs. Dimethyl OPs cause the majority of deaths in Taiwan.

  11. Passive containment cooling system with drywell pressure regulation for boiling water reactor

    DOEpatents

    Hill, P.R.

    1994-12-27

    A boiling water reactor is described having a regulating valve for placing the wetwell in flow communication with an intake duct of the passive containment cooling system. This subsystem can be adjusted to maintain the drywell pressure at (or slightly below or above) wetwell pressure after the initial reactor blowdown transient is over. This addition to the PCCS design has the benefit of eliminating or minimizing steam leakage from the drywell to the wetwell in the longer-term post-LOCA time period and also minimizes the temperature difference between drywell and wetwell. This in turn reduces the rate of long-term pressure buildup of the containment, thereby extending the time to reach the design pressure limit. 4 figures.

  12. Factors Associated with Parental Satisfaction with a Pediatric Crisis Clinic (PCC).

    PubMed

    Lee, Jonathan; Korczak, Daphne

    2014-05-01

    Little is known about parental satisfaction with pediatric crisis clinics (PCCs) that provide a single consultation to families in need of urgent psychiatric care. Parental satisfaction may improve long-term adherence to physician recommendations. To explore parental satisfaction with a PCC. Parental satisfaction was ascertained by a structured telephone interview following crisis consultation at the PCC of an academic, tertiary care centre. Parents of 71% (n = 124) of 174 pediatric patients seen in the PCC from 2007-2008 participated in the post-consultation interview. The majority of parents stated they were either somewhat satisfied (49/122, 40.2%) or very satisfied (49/122, 40.2%) with the PCC. Parental satisfaction correlated with time between referral and consultation (p<0.05), the degree to which parents felt listened to by the consultant (p<0.01), the amount of psychoeducation parents felt they received (p<0.01), and appointment length (p<0.001). Parents were satisfied overall with an urgent care service model. Satisfaction was correlated with the time between referral and consultation, degree to which they felt their consultant had listened to them, and the amount of information they received at the consultation's conclusion.

  13. Factors Associated with Parental Satisfaction with a Pediatric Crisis Clinic (PCC)

    PubMed Central

    Lee, Jonathan; Korczak, Daphne

    2014-01-01

    Introduction: Little is known about parental satisfaction with pediatric crisis clinics (PCCs) that provide a single consultation to families in need of urgent psychiatric care. Parental satisfaction may improve long-term adherence to physician recommendations. Objective: To explore parental satisfaction with a PCC. Methods: Parental satisfaction was ascertained by a structured telephone interview following crisis consultation at the PCC of an academic, tertiary care centre. Parents of 71% (n = 124) of 174 pediatric patients seen in the PCC from 2007–2008 participated in the post-consultation interview. Results: The majority of parents stated they were either somewhat satisfied (49/122, 40.2%) or very satisfied (49/122, 40.2%) with the PCC. Parental satisfaction correlated with time between referral and consultation (p<0.05), the degree to which parents felt listened to by the consultant (p<0.01), the amount of psychoeducation parents felt they received (p<0.01), and appointment length (p<0.001). Conclusions: Parents were satisfied overall with an urgent care service model. Satisfaction was correlated with the time between referral and consultation, degree to which they felt their consultant had listened to them, and the amount of information they received at the consultation’s conclusion. PMID:24872827

  14. ANALYTICAL PLANS SUPPORTING THE SWPF GAP ANALYSIS BEING CONDUCTED WITH ENERGYSOLUTIONS AND THE VITREOUS STATE LABORATORY AT THE CUA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.; Peeler, D.

    2014-10-28

    EnergySolutions (ES) and its partner, the Vitreous State Laboratory (VSL) of The Catholic University of America (CUA), are to provide engineering and technical services support to Savannah River Remediation, LLC (SRR) for ongoing operation of the Defense Waste Processing Facility (DWPF) flowsheet, as well as for modifications to improve overall plant performance. SRR has requested that the glass formulation team of the Savannah River National Laboratory (SRNL) and ES-VSL develop a technical basis that validates the current Product Composition Control System models for use during the processing of the coupled flowsheet, or that leads to the refinements of or modifications to the models needed so that they may be used during the processing of the coupled flowsheet. SRNL has developed a matrix of test glasses that are to be batched and fabricated by ES-VSL as part of this effort. This document provides two analytical plans for use by ES-VSL: one plan is to guide the measurement of the chemical composition of the study glasses, while the second is to guide the measurement of the durability of the study glasses based upon the results of testing by ASTM’s Product Consistency Test (PCT) Method A.

  15. Comparison of two approaches for determining ground-water discharge and pumpage in the lower Arkansas River Basin, Colorado, 1997-98

    USGS Publications Warehouse

    Dash, Russell G.; Troutman, Brent M.; Edelmann, Patrick

    1999-01-01

    In March 1994, the Colorado Division of Water Resources (CDWR) adopted "Rules Governing the Measurement of Tributary Ground Water Diversions Located in the Arkansas River Basin" (Office of the State Engineer, 1994); these initial rules were amended in February 1996 (Office of the State Engineer, 1996). The amended rules require users of wells that divert tributary ground water to annually report the water pumped monthly by each well. The rules allow a well owner to report the pumpage measured by a totalizing flowmeter (TFM) or pumpage determined from electrical power data and a power conversion coefficient (PCC) (Hurr and Litke, 1989). Opinions by representatives of the State of Kansas, presented before the Special Master hearing a court case [State of Kansas v. State of Colorado, No. 105 Original (1996)] concerning post-Compact well pumping, stated that the PCC approach does not provide the same level of accuracy and reliability as a TFM when used to determine pumpage. In 1997, the U.S. Geological Survey (USGS), in cooperation with the CDWR, began a 2-year study to compare ground-water pumpage estimates made using the TFM and the PCC approaches. The study area was along the Arkansas River between Pueblo, Colorado, and the Colorado-Kansas State line (fig. 1). The two approaches for estimating ground-water discharge and pumpage were compared for more than 100 wells completed in the alluvial aquifer of the Arkansas River Basin. The TFM approach uses an inline flowmeter to directly measure instantaneous discharge and the total volume of water pumped at a well. The PCC approach uses electrical power consumption records and a power conversion coefficient to estimate the pumpage at ground-water wells. This executive summary describes the results of the comparison of the two approaches. Specifically, (1) the differences in instantaneous discharge measured with three portable flowmeters and measured with an inline TFM are evaluated, and the statistical differences in paired instantaneous discharge between the two approaches are determined; (2) short- and long-term variations in the PCCs are presented; (3) differences in pumpage between the two approaches are evaluated, and the statistical differences in pumpage between the two approaches are determined; (4) potential sources of discrepancy between pumpage estimates are discussed; and (5) differences in total network pumpage using the two approaches are presented. During the irrigation seasons of 1997 and 1998, instantaneous discharge and electrical power demand were measured at randomly selected wells to determine PCCs. At more than 100 wells, the PCCs determined during the 1998 season were applied to total electrical power consumption data recorded between the initial and final readings at each network well site in 1998 to estimate total ground-water pumpage. At each site, an inline TFM was installed in a full-flowing, acceptable test section of pipe on the discharge side of the pump, where the measurement of discharge was made. Measurements of instantaneous ground-water discharge also were made using three different types of portable flowmeters. The average velocity multiplied by the cross-sectional area of the discharge pipe was used to compute the discharge in gallons per minute. Whenever possible, discharge measurements were made at each network site using all three types of portable flowmeters.
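    The two estimation approaches compared above reduce to simple arithmetic: the TFM approach meters discharge directly (with instantaneous discharge computed as average velocity times pipe cross-section), while the PCC approach multiplies metered electrical energy by a power conversion coefficient. A minimal sketch; the function names, units, and numbers are illustrative, not from the report:

```python
# Illustrative sketch of the two pumpage-estimation approaches.
# Names and numbers are hypothetical, not from the USGS study.
import math

def discharge_gpm(velocity_ft_s: float, pipe_diameter_in: float) -> float:
    """Instantaneous discharge = average velocity x pipe cross-section.
    Conversion: 1 ft^3/s = 448.831 gal/min."""
    area_ft2 = math.pi * (pipe_diameter_in / 12.0) ** 2 / 4.0
    return velocity_ft_s * area_ft2 * 448.831

def pumpage_pcc_gal(pcc_gal_per_kwh: float, energy_kwh: float) -> float:
    """PCC approach: pumpage = power conversion coefficient x energy used."""
    return pcc_gal_per_kwh * energy_kwh
```

    For example, a measured average velocity of 5 ft/s in an 8-inch discharge pipe corresponds to roughly 780 gal/min.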

  16. NAT2, meat consumption and colorectal cancer incidence: an ecological study among 27 countries.

    PubMed

    Ognjanovic, Simona; Yamamoto, Jennifer; Maskarinec, Gertraud; Le Marchand, Loïc

    2006-11-01

    The polymorphic gene NAT2 is a major determinant of N-acetyltransferase activity and, thus, may be responsible for differences in one's ability to bioactivate heterocyclic amines, a class of procarcinogens in cooked meat. An unusually marked geographic variation in enzyme activity has been described for NAT2. The present study re-examines the international direct correlation reported for meat intake and colorectal cancer (CRC) incidence, and evaluates the potential modifying effects of NAT2 phenotype and other lifestyle factors on this correlation. Country-specific CRC incidence data, per capita consumption data for meat and other dietary factors, prevalence of the rapid/intermediate NAT2 phenotype, and prevalence of smoking for 27 countries were used. Multiple linear regression models were fit and partial correlation coefficients (PCCs) were computed for men and women separately. Inclusion of the rapid/intermediate NAT2 phenotype with meat consumption improved the fit of the regression model for CRC incidence in both sexes (males: R² = 0.78, compared to 0.70 for meat alone, p for difference in model fit = 0.009; females: R² = 0.76, compared to 0.69 for meat alone, p = 0.02). Vegetable consumption (inversely, and in both sexes) and fish consumption (directly, and in men only) were also weakly correlated with CRC, whereas smoking prevalence and alcohol consumption had no effects on the models. The PCC between NAT2 and CRC incidence was 0.46 in males and 0.48 in females when meat consumption was included in the model, compared to 0.14 and 0.15, respectively, when it was not. These data suggest that, in combination with meat intake, some proportion of the international variability in CRC incidence may be attributable to genetic susceptibility to heterocyclic amines, as determined by NAT2 genotype.
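    A partial correlation coefficient of the kind reported above can be obtained by residualizing both variables on the controlled covariate and then correlating the residuals. A minimal sketch, not the study's code; variable roles (x = NAT2 prevalence, y = CRC incidence, z = meat consumption) are illustrative:

```python
# Sketch of a partial correlation via residualization (illustrative only).
from statistics import mean

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def residuals(y, z):
    """Residuals of y after simple linear regression on z."""
    mz, my = mean(z), mean(y)
    beta = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
            / sum((zi - mz) ** 2 for zi in z))
    alpha = my - beta * mz
    return [yi - (alpha + beta * zi) for zi, yi in zip(z, y)]

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    return pearson(residuals(x, z), residuals(y, z))
```

    Controlling for z in this way is what lets the study separate the NAT2-CRC association from the shared dependence of both on meat consumption.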

  17. Cost-effectiveness analysis of internet-mediated cognitive behavioural therapy for depression in the primary care setting: results based on a controlled trial

    PubMed Central

    Metsini, Alexandra; Madsen, Jens-Henrik; Hange, Dominique; Petersson, Eva-Lisa L; Eriksson, Maria CM; Kivi, Marie; Andersson, Per-Åke Å; Svensson, Mikael

    2018-01-01

    Objective To perform a cost-effectiveness analysis of a randomised controlled trial of internet-mediated cognitive behavioural therapy (ICBT) compared with treatment as usual (TaU) for patients with mild to moderate depression in the Swedish primary care setting. In particular, the objective was to assess from a healthcare and societal perspective the incremental cost-effectiveness ratio (ICER) of ICBT versus TaU at 12 months follow-up. Design A cost-effectiveness analysis alongside a pragmatic effectiveness trial. Setting Sixteen primary care centres (PCCs) in south-west Sweden. Participants Ninety patients diagnosed with mild to moderate depression at the PCCs. Main outcome measure ICERs calculated as (Cost_ICBT − Cost_TaU)/(Health outcome_ICBT − Health outcome_TaU) = ΔCost/ΔHealth outcomes, the health outcomes being changes in the Beck Depression Inventory-II (BDI-II) score and quality-adjusted life-years (QALYs). Results The total cost per patient for ICBT was 4044 Swedish kronor (SEK) (€426) (healthcare perspective) and SEK47 679 (€5028) (societal perspective). The total cost per patient for TaU was SEK4434 (€468) and SEK50 343 (€5308). In both groups, the largest cost was associated with productivity loss. The differences in cost per patient were not statistically significant. The mean reduction in BDI-II score was 13.4 and 13.8 units in the ICBT and TaU groups, respectively. The mean QALYs per patient was 0.74 and 0.79 in the ICBT and TaU groups, respectively. The differences in BDI-II score reduction and mean QALYs were not statistically significant. The uncertainty of the study estimates when assessed by bootstrapping indicated that no firm conclusion could be drawn as to whether ICBT treatment compared with TaU was the most cost-effective use of resources. Conclusions ICBT was regarded to be as cost-effective as TaU as costs, health outcomes and cost-effectiveness were similar for ICBT and TaU, both from a healthcare and societal perspective.
Trial registration number ID NR 30511. PMID:29903785
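    The ICER definition used in the trial above is a single ratio of cost and outcome differences between two treatment arms. A minimal sketch with made-up numbers (not the trial's results):

```python
# Illustrative sketch of the incremental cost-effectiveness ratio (ICER):
# ICER = (Cost_A - Cost_B) / (Effect_A - Effect_B).
# Inputs below are hypothetical, not data from the ICBT trial.

def icer(cost_a: float, cost_b: float,
         effect_a: float, effect_b: float) -> float:
    """ICER of treatment A versus comparator B (cost per unit of effect)."""
    d_effect = effect_a - effect_b
    if d_effect == 0:
        raise ValueError("no incremental effect; ICER is undefined")
    return (cost_a - cost_b) / d_effect
```

    For hypothetical arms costing 5000 and 4000 with 0.8 and 0.7 QALYs, the ICER is 1000 / 0.1 = 10 000 per QALY gained; when, as in the trial above, neither the cost nor the outcome difference is statistically significant, the point estimate carries little weight on its own.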

  18. The genomic landscape of phaeochromocytoma.

    PubMed

    Flynn, Aidan; Benn, Diana; Clifton-Bligh, Roderick; Robinson, Bruce; Trainer, Alison H; James, Paul; Hogg, Annette; Waldeck, Kelly; George, Joshy; Li, Jason; Fox, Stephen B; Gill, Anthony J; McArthur, Grant; Hicks, Rodney J; Tothill, Richard W

    2015-05-01

    Phaeochromocytomas (PCCs) and paragangliomas (PGLs) are rare neural crest-derived tumours originating from adrenal chromaffin cells or extra-adrenal sympathetic and parasympathetic tissues. More than a third of PCC/PGL cases are associated with heritable syndromes involving 13 or more known genes. These genes have been broadly partitioned into two groups based on pseudo-hypoxic and receptor tyrosine kinase (RTK) signalling pathways. Many of these genes can also become somatically mutated, although up to one third of sporadic cases have no known genetic driver. Furthermore, little is known of the genes that co-operate with known driver genes to initiate and drive tumourigenesis. To explore the genomic landscape of PCC/PGL, we applied exome sequencing, high-density SNP-array analysis, and RNA sequencing to 36 PCCs and four functional PGL tumours. All tumours displayed low mutation frequency, in contrast to frequent large segmental copy-number alterations, aneuploidy, and evidence for chromothripsis in one case. Multi-region sampling of one benign familial PCC tumour provided evidence for the timing of mutations during tumourigenesis and ongoing clonal evolution. Thirty-one of 40 (77.5%) cases could be explained by germline or somatic mutations or structural alterations affecting known PCC/PGL genes. Deleterious somatic mutations were also identified in known tumour-suppressor genes associated with genome maintenance and epigenetic modulation. A multitude of other genes were also found mutated that are likely important for normal neuroendocrine cell function. We revisited the gene-expression subtyping of PCC/PGL by integrating published microarray data with our RNA-seq data, enabling the identification of six robust gene-expression subtypes. The majority of cases in our cohort with no identifiable driver mutation were classified into a gene-expression subtype bearing similarity to MAX mutant PCC/PGL. 
Our data suggest there are yet unknown PCC/PGL cancer genes that can phenocopy MAX mutant PCC/PGL tumours. This study provides new insight into the molecular diversity and genetic origins of PCC/PGL tumours. Copyright © 2014 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

  19. The prognostic effect of cardiac rehabilitation in the era of acute revascularisation and statin therapy: A systematic review and meta-analysis of randomized and non-randomized studies – The Cardiac Rehabilitation Outcome Study (CROS)

    PubMed Central

    Davos, Constantinos H; Doherty, Patrick; Saure, Daniel; Metzendorf, Maria-Inti; Salzwedel, Annett; Völler, Heinz; Jensen, Katrin; Schmid, Jean-Paul

    2016-01-01

    Background The prognostic effect of multi-component cardiac rehabilitation (CR) in the modern era of statins and acute revascularisation remains controversial. Focusing on actual clinical practice, the aim was to evaluate the effect of CR on total mortality and other clinical endpoints after an acute coronary event. Design Structured review and meta-analysis. Methods Randomised controlled trials (RCTs), retrospective controlled cohort studies (rCCSs) and prospective controlled cohort studies (pCCSs) evaluating patients after acute coronary syndrome (ACS), coronary artery bypass grafting (CABG) or mixed populations with coronary artery disease (CAD) were included, provided the index event was in 1995 or later. Results Out of n = 18,534 abstracts, 25 studies were identified for final evaluation (RCT: n = 1; pCCS: n = 7; rCCS: n = 17), including n = 219,702 patients (after ACS: n = 46,338; after CABG: n = 14,583; mixed populations: n = 158,781; mean follow-up: 40 months). Heterogeneity in design, biometrical assessment of results and potential confounders was evident. CCSs evaluating ACS patients showed a significantly reduced mortality for CR participants (pCCS: hazard ratio (HR) 0.37, 95% confidence interval (CI) 0.20–0.69; rCCS: HR 0.64, 95% CI 0.49–0.84; odds ratio 0.20, 95% CI 0.08–0.48), but the single RCT fulfilling Cardiac Rehabilitation Outcome Study (CROS) inclusion criteria showed neutral results. CR participation was also associated with reduced mortality after CABG (rCCS: HR 0.62, 95% CI 0.54–0.70) and in mixed CAD populations. Conclusions CR participation after ACS and CABG is associated with reduced mortality even in the modern era of CAD treatment. However, the heterogeneity of study designs and CR programmes highlights the need for defining internationally accepted standards in CR delivery and scientific evaluation. PMID:27777324
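    The pooled hazard ratios quoted above come from standard meta-analytic machinery. As a rough sketch of the generic fixed-effect inverse-variance method (not the CROS analysis code; the single input tuple is illustrative):

```python
# Illustrative fixed-effect inverse-variance pooling of hazard ratios.
# Each study contributes (HR, 95% CI lower, 95% CI upper).
import math

def pooled_hr(studies):
    """Return (pooled HR, 95% CI lower, 95% CI upper).

    Weights each study's log-HR by the inverse of its variance,
    with the standard error recovered from the 95% CI width."""
    num = den = 0.0
    for hr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2
        num += w * math.log(hr)
        den += w
    centre = num / den
    half = 1.96 / math.sqrt(den)
    return math.exp(centre), math.exp(centre - half), math.exp(centre + half)
```

    With a single study the function simply returns that study's estimate, which is a convenient sanity check.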

  20. New PANDA Tests to Investigate Effects of Light Gases on Passive Safety Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paladino, D.; Auban, O.; Candreia, P.

    The large-scale thermal-hydraulic PANDA facility (located at PSI in Switzerland) has been used over the last few years for investigating different passive decay-heat removal systems and containment phenomena for the next generation of light water reactors (Simplified Boiling Water Reactor: SBWR; European Simplified Boiling Water Reactor: ESBWR; Siedewasserreaktor: SWR-1000). Currently, as part of the European Commission's 5th EURATOM Framework Programme project 'Testing and Enhanced Modelling of Passive Evolutionary Systems Technology for Containment Cooling' (TEMPEST), a new series of tests is being planned in the PANDA facility to experimentally investigate the distribution of non-condensable gases inside the containment and their effect on the performance of the 'Passive Containment Cooling System' (PCCS). Hydrogen release caused by the metal-water reaction in the case of a postulated severe accident will be simulated in PANDA by injecting helium into the reactor pressure vessel. In order to provide suitable data for Computational Fluid Dynamics (CFD) code assessment and improvement, the instrumentation in PANDA has been upgraded for the new tests. In the present paper, a detailed discussion is given of the new PANDA tests to be performed to investigate the effects of light gas on passive safety systems. The tests are scheduled for the first half of the year 2002. (authors)

  1. A comparison of individual and population-derived vascular input functions for quantitative DCE-MRI in rats.

    PubMed

    Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E

    2014-05-01

    Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult as it requires high temporal resolution images limiting the achievable number of slices, field-of-view, spatial resolution, and signal-to-noise. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed to investigate, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated by a population based VIF differ from those estimated by an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K(trans), ve, and vp were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K(trans) and ve in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population based VIF if an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
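    The two agreement measures used in the study above differ in what they reward: the Pearson correlation coefficient (PCC) captures any linear association, while Lin's concordance correlation coefficient (CCC) additionally penalizes departures from the identity line, making it the stricter test of agreement between individual- and population-VIF estimates. A minimal sketch (not the paper's code):

```python
# Illustrative sketch of Pearson's r versus Lin's concordance
# correlation coefficient for paired parameter estimates.
from statistics import mean

def pearson_cc(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((y - mb) ** 2 for y in b) / len(b)
    return cov / (va * vb) ** 0.5

def lin_ccc(a, b):
    """Lin's CCC: also penalizes location and scale shifts from y = x."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((y - mb) ** 2 for y in b) / len(b)
    return 2 * cov / (va + vb + (ma - mb) ** 2)
```

    For instance, estimates that are exactly double the reference values have a Pearson coefficient of 1 but a much lower CCC, which is why the paper reports both.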

  2. Impact of scaling on the nitric-glycolic acid flowsheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D.

    Savannah River Remediation (SRR) is considering using glycolic acid as a replacement for formic acid in Sludge Receipt and Adjustment Tank (SRAT) processing in the Defense Waste Processing Facility (DWPF). Catalytic decomposition of formic acid is responsible for the generation of hydrogen, a potentially flammable gas, during processing. To prevent the formation of a flammable mixture in the offgas, an air purge is used to dilute the hydrogen concentration below 60% of the Composite Lower Flammability Limit (CLFL). The offgas is continuously monitored for hydrogen using Gas Chromatographs (GCs). Since formic acid is much more volatile and toxic than glycolic acid, a formic acid spill would lead to the release of much larger quantities to the environment. Switching from formic acid to glycolic acid is expected to eliminate the hydrogen flammability hazard, allowing lower air purges, permitting the downgrading of Safety Significant GCs to Process Support GCs, and minimizing the consequence of a glycolic acid tank leak in DWPF. Overall, this leads to a reduction in process operation costs and an increase in safety margin. Experiments were completed at three different scales to demonstrate that the nitric-glycolic acid flowsheet scales from the 4-L lab scale to the 22-L bench scale and the 220-L engineering scale. Ten process demonstrations of the sludge-only flowsheet for SRAT and Slurry Mix Evaporator (SME) cycles were performed using Sludge Batch 8 (SB8)-Tank 40 simulant. No Actinide Removal Process (ARP) product or strip effluent was added during the runs. Six experiments were completed at the 4-L scale, two at the 22-L scale, and two at the 220-L scale. The experiments completed at the 4-L scale (100 and 110% acid stoichiometry) were repeated at the 22-L and 220-L scales for scale comparisons.

  3. The PANDA tests for SBWR certification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varadi, G.; Dreier, J.; Bandurski, Th.

    1996-03-01

    The ALPHA project is centered around the experimental and analytical investigation of the long-term decay heat removal from the containments of the next generation of "passive" ALWRs. The project includes integral system tests in the large-scale (1:25 in volume) PANDA facility as well as several other series of tests and supporting analytical work. The first series of experiments to be conducted in PANDA have become a required experimental element in the certification process for the General Electric Simplified Boiling Water Reactor (SBWR). The PANDA general experimental philosophy, facility design, scaling, and instrumentation are described. Steady-state PCCS condenser performance tests and extensive facility characterization tests have already been conducted. The transient system behavior tests are underway; preliminary results from the first transient test, M3, are reviewed.

  4. Classification of platelet concentrates: from pure platelet-rich plasma (P-PRP) to leucocyte- and platelet-rich fibrin (L-PRF).

    PubMed

    Dohan Ehrenfest, David M; Rasmusson, Lars; Albrektsson, Tomas

    2009-03-01

    The topical use of platelet concentrates is recent, and its efficiency remains controversial. Several techniques for platelet concentrates are available; however, their applications have been confusing because each method leads to a different product with different biology and potential uses. Here, we present a classification of the different platelet concentrates into four categories, depending on their leucocyte and fibrin content: pure platelet-rich plasma (P-PRP), such as cell separator PRP, Vivostat PRF or Anitua's PRGF; leucocyte- and platelet-rich plasma (L-PRP), such as Curasan, Regen, Plateltex, SmartPReP, PCCS, Magellan or GPS PRP; pure platelet-rich fibrin (P-PRF), such as Fibrinet; and leucocyte- and platelet-rich fibrin (L-PRF), such as Choukroun's PRF. This classification should help to elucidate the successes and failures that have occurred so far, as well as providing an objective approach for the further development of these techniques.

  5. Miscibility Evaluation Of The Next Generation Solvent With Polymers Currently Used At DWPF, MCU, And Saltstone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fondeur, F. F.

    The Office of Waste Processing, within the Office of Technology Innovation and Development, funded the development of an enhanced Caustic-Side Solvent Extraction (CSSX) solvent for deployment at the Savannah River Site for removal of cesium from High Level Waste. This effort led to the development of the Next Generation Solvent (NGS) with Tris(3,7-dimethyloctyl)guanidine (TiDG). The first deployment target for the NGS solvent is within the Modular CSSX Unit (MCU). Deployment of a new chemical within an existing facility requires verification that the new chemical components are compatible with the installed equipment. In the instance of a new organic solvent, the primary focus is on compatibility of the solvent with organic polymers used in the affected facility. This report provides the calculated data from exposing these polymers to the Next Generation Solvent. An assessment of the dimensional stability of polymers known to be used or present in the MCU, Defense Waste Processing Facility (DWPF), and Saltstone facilities that will be exposed to the NGS showed that TiDG could selectively affect the elastomers and some thermoplastics to varying extents, but the typical use of these polymers in a confined geometry will likely prevent the NGS from impacting component performance. The polymers identified as of primary concern include Grafoil® (flexible graphite), Tefzel®, Isolast®, ethylene-propylene-diene monomer (EPDM) rubber, nitrile-butadiene rubber (NBR), styrene-butadiene rubber (SBR), ultra-high-molecular-weight polyethylene (UHMWPE), and fluorocarbon rubber (FKM). Certain polymers, such as NBR and EPDM, were found to interact mildly with NGS, but their calculated swelling and confined geometry will impede interaction with NGS. In addition, it was found that Vellumoid (cellulose-fiber-reinforced glycerin and protein) may leach protein, and polyvinyl chloride (PVC) may leach plasticizer (such as bis(2-ethylhexyl) phthalate) into the NGS solvent.
Neither case will impact decontamination or immobilization operations at the Savannah River Site (SRS). Some applications, such as the operation of valves, have zero tolerance for dimensional changes, while in other applications, such as seals and gaskets, a finite dimensional change improves function. Additional considerations are required before using the conclusions from this work to judge outcomes in field applications. Decane, the component of Isopar L most likely to interact with the polymers, interacted mildly with the elastomers and the propylene-based polymers, but their degree of swelling is at most 10%, and the confined geometry in which they are typically placed indicates this is not significant. In addition, it was found that Vellumoid may leach protein into the NGS solvent. Since Vellumoid is used at the mixer in Saltstone, where it sees minimal quantities of solvent, this leaching has no effect on the extraction process at MCU or the immobilization process at Saltstone. No significant interaction is expected between MaxCalix and the polymers and elastomers used at MCU, DWPF, and Saltstone. Overall, minimal and insignificant interactions are expected for extraction and immobilization operations when MCU switches from the CSSX to the NGS solvent. Contacting NGS is not expected to accelerate the aging rate of polymers and elastomers under radiation and heat, owing to the minimal interaction between NGS and the polymers and the confined geometries of these polymers. SRNL recommends the use of the HSP method (for screening), along with some testing, to evaluate the impact of other organics, such as alcohols, glycolate, and their byproducts, on the polymers used throughout the site.

  6. THE IMPACT OF THE MCU LIFE EXTENSION SOLVENT ON DWPF GLASS FORMULATION EFFORTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D; Edwards, T

    2011-03-24

    As a part of the Actinide Removal Process (ARP)/Modular Caustic Side Solvent Extraction Unit (MCU) Life Extension Project, a next generation solvent (NG-CSSX), a new strip acid, and modified monosodium titanate (mMST) will be deployed. The strip acid will be changed from dilute nitric acid to dilute boric acid (0.01 M). Because of these changes, experimental testing with the next generation solvent and mMST is required to determine the impact of these changes on 512-S operations as well as on Chemical Process Cell (CPC), Defense Waste Processing Facility (DWPF) glass formulation activities, and melter operations at DWPF. To support programmatic objectives, the downstream impacts of the boric acid strip effluent (SE) on the glass formulation activities and melter operations are considered in this study. More specifically, the impacts of boric acid additions on the projected SB7b operating windows, potential impacts to frit production temperatures, and the potential impact of boron volatility are evaluated. Although various boric acid molarities have been reported and discussed, the baseline flowsheet used to support this assessment was 0.01 M boric acid. The results of the paper study assessment indicate that Frit 418 and Frit 418-7D are robust to the implementation of the 0.01 M boric acid SE into the SB7b flowsheet (sludge-only or ARP-added). More specifically, the projected operating windows for the nominal SB7b projections remain essentially constant (i.e., 25-43 or 25-44% waste loading (WL)) regardless of the flowsheet options (sludge-only, ARP-added, and/or the presence of the new SE). These results indicate that even if SE is not transferred to the Sludge Receipt and Adjustment Tank (SRAT), there would be no need to add boric acid (from a trim tank) to compositionally compensate for the absence of the boric acid SE in either a sludge-only or ARP-added SB7b flowsheet.
With respect to boron volatility, the Measurement Acceptability Region (MAR) assessments also suggest that Slurry Mix Evaporator (SME) acceptability decisions would not differ whether 100% of the B2O3 from the SE were retained or volatilized. More specifically, the 0.84 wt% B2O3 in the SE is so minor that its presence in the SME analysis does not influence SME acceptability decisions. In fact, using the 100% retention and 100% volatilization composition projections, only minor differences in the predicted properties of the glass product occur, with all of the glasses being acceptable over a WL interval of 32-42%. Based on the 0.01 M boric acid flowsheet, there is very little difference between Frit 418 and Frit 418-7D (a frit that was compositionally altered to account for the 0.84 wt% B2O3 in the SE) with respect to melt temperature. In fact, the composition of Frit 418-7D lies within the current Frit 418 vendor specifications and therefore could have been produced by the vendor targeting the nominal composition of Frit 418.

  7. Nitric-glycolic flowsheet reduction/oxidation (redox) model for the defense waste processing facility (DWPF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C. M.; Williams, M. S.; Edwards, T. B.

    Control of the REDuction/OXidation (REDOX) state of glasses containing high concentrations of transition metals, such as High Level Waste (HLW) glasses, is critical in order to eliminate processing difficulties caused by overly reduced or overly oxidized melts. Operation of an HLW melter at Fe2+/ΣFe ratios between 0.09 and 0.33 retains radionuclides in the melt and thus in the final glass. Specifically, long-lived radioactive 99Tc species are less volatile in the reduced Tc4+ state as TcO2 than as NaTcO4 or Tc2O7, and ruthenium radionuclides in the reduced Ru4+ state form insoluble RuO2 in the melt, which is not as volatile as NaRuO4, where the Ru is in the +7 oxidation state. Similarly, hazardous volatile Cr6+ occurs in oxidized melt pools as Na2CrO4 or Na2Cr2O7, while the Cr3+ state is less volatile and remains in the melt as NaCrO2 or precipitates as chrome-rich spinels. The melter REDOX control balances the oxidants and reductants from the feed and from processing additives such as antifoam.
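
    As a minimal illustrative sketch (not part of the original report), the melter operating window quoted above reduces to a simple ratio check on measured ferrous and total iron. The helper names and example numbers are hypothetical.

```python
# Operating window quoted in the abstract: 0.09 <= Fe2+/ΣFe <= 0.33
REDOX_MIN, REDOX_MAX = 0.09, 0.33

def redox_ratio(fe2_wt, fe_total_wt):
    """Fe2+/ΣFe from measured ferrous and total iron (same units, e.g. wt%)."""
    if fe_total_wt <= 0:
        raise ValueError("total Fe must be positive")
    return fe2_wt / fe_total_wt

def in_redox_window(fe2_wt, fe_total_wt):
    """True if the melt is neither overly oxidized nor overly reduced."""
    return REDOX_MIN <= redox_ratio(fe2_wt, fe_total_wt) <= REDOX_MAX

print(in_redox_window(1.5, 10.0))  # ratio 0.15, inside the window
print(in_redox_window(0.5, 10.0))  # ratio 0.05, overly oxidized
```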

  8. Overview of Corrosion, Erosion, and Synergistic Effects of Erosion and Corrosion in the WTP Pre-treatment Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imrich, K. J.

    2015-03-27

    Corrosion is an extremely complex process that is affected by numerous factors, and the addition of a flowing multi-phase solution further complicates the analysis. The synergistic effects of the multiple corrosive species, as well as the flow-induced synergistic effects of erosion and corrosion, must be thoroughly evaluated in order to predict material degradation responses. Public domain data can help guide the analysis but cannot reliably provide the design basis, especially when the process is one-of-a-kind, designed for 40-plus years of service, and has no viable means for repair or replacement. Testing in representative simulants and environmental conditions with prototypic components will provide a stronger technical basis for design. This philosophy was exemplified by the Defense Waste Processing Facility (DWPF) at the Savannah River Site, and only after 15-plus years of successful operation has it been validated. There have been “hiccups”, some identified during the cold commissioning phase and some during radioactive operations, but they were minor and were overcome. In addition, the system is robust enough to tolerate most flowsheet changes, and the DWPF design allows minor modifications and replacements, approaches not available with the Hanford Waste Treatment Plant (WTP) “Black Cell” design methodology. Based on the available data, the synergistic effect between erosion and corrosion is a credible (virtually certain) degradation mechanism and must be considered in the design of the WTP process systems. Testing is recommended because of the number of variables (e.g., material properties, process parameters, and component design) that can affect the synergy between erosion and corrosion, and because the available literature is of limited applicability for the complex process chemistries anticipated in the WTP. Applicable testing will provide a reasonable and defensible path forward for the design of the WTP Black Cell and Hard-to-Reach process equipment.
These conclusions are consistent with findings from the various Bechtel National Inc. Independent Review Teams and Department of Energy (DOE) reviews. A test methodology is outlined, which should provide a clear, logical road map for the testing necessary to provide applicable and defensible data essential to support design calculations.

  9. Description of Defense Waste Processing Facility reference waste form and canister. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, R.G.

    1983-08-01

    The Defense Waste Processing Facility (DWPF) will be located at the Savannah River Plant in Aiken, SC, and is scheduled for construction authorization during FY-1984. The reference waste form is borosilicate glass containing approx. 28 wt % sludge oxides, with the balance glass frit. Borosilicate glass was chosen because of its high resistance to leaching by water, its relatively high solubility for nuclides found in the sludge, and its reasonably low melting temperature. The glass frit contains about 58% SiO2 and 15% B2O3. Leachabilities of SRP waste glasses are expected to approach 10^-8 g/(m^2·day) based upon 1000-day tests using glasses containing SRP radioactive waste. Tests were performed under a wide variety of conditions simulating repository environments. The canister is filled with 3260 lb of glass, which occupies about 85% of the free canister volume. The filled canister will generate approx. 470 watts when filled with oxides from 5-year-old sludge and 15-year-old supernate from the sludge and supernate processes. The radionuclide content of the canister is about 177,000 Ci, with a radiation level of 5500 rem/h at canister surface contact. The reference canister is fabricated of standard 24-in.-OD, Schedule 20, 304L stainless steel pipe with a dished bottom, domed head, and a combined lifting and welding flange on the head neck. The overall canister length is 9 ft 10 in. with a 3/8-in. wall thickness. The 3-m canister length was selected to reduce equipment cell height in the DWPF to a practical size. The canister diameter was selected as an optimum size from glass quality considerations, a logical size for repository handling, and to ensure that a filled canister with its double-containment shipping cask could be accommodated on a legal-weight truck. The overall dimensions and weight appear to be compatible with preliminary assessments of repository requirements. 10 references.

  10. Definition of an Acceptable Glass composition Region (AGCR) via an Index System and a Partitioning Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D. K.; Taylor, A. S.; Edwards, T.B.

    2005-06-26

    The objective of this investigation was to appeal to the available ComPro™ database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS); a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system in differentiating between acceptable and unacceptable glasses, two types of errors (Type I and Type II) were monitored. A Type I error occurs when a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on an evolving system of compositional constraints used to explore the possibility of defining an AGCR. This approach relied primarily on glass-science insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest for improving melt rate or increasing waste loadings for DWPF as compared to the current durability model, Type II errors were also committed.
In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error is a more serious misclassification that could result in allowing the transfer of a Slurry Mix Evaporator (SME) batch to the melter which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data-driven and empirically derived; glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned, or split, based on the best available criteria and variables. More specifically, the JMP software chose the oxide (Al2O3 for this dataset) that most effectively partitions the PCT responses (NL [B]'s), though perhaps not 100% effectively based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable. This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only 3 oxides (Al2O3, CaO, and MgO) and critical values of > 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned in such a way that no Type II errors were committed.
The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism, regardless of its ability to identify an AGCR. However, a preliminary review of the 1428 "acceptable" glasses defining the AGCR shows that they include glass systems of interest to support the accelerated mission.
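
    As an illustrative aside (not from the original report), the three-oxide screen derived by the partitioning process can be written as a one-line composition test on the thresholds quoted above. The example glass compositions are hypothetical, not ComPro entries.

```python
# Partition-derived screen quoted in the abstract (all values in wt% oxide):
# Al2O3 > 3.75, CaO >= 0.616, MgO < 3.521
def in_agcr(al2o3, cao, mgo):
    """True if a glass composition falls inside the partition-derived AGCR."""
    return al2o3 > 3.75 and cao >= 0.616 and mgo < 3.521

# Hypothetical compositions illustrating each constraint
glasses = {
    "glass_A": (4.10, 1.00, 1.50),  # satisfies all three constraints
    "glass_B": (3.50, 1.00, 1.50),  # fails Al2O3 > 3.75
    "glass_C": (4.10, 0.50, 1.50),  # fails CaO >= 0.616
}
for name, composition in glasses.items():
    print(name, in_agcr(*composition))
```

    A screen of this shape can commit Type I errors (rejecting glasses that would in fact pass the PCT) but, per the abstract, the chosen thresholds produced no Type II errors on the ComPro data.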

  11. Impact of Salt Waste Processing Facility Streams on the Nitric-Glycolic Flowsheet in the Chemical Processing Cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, C.

    An evaluation of previous Chemical Processing Cell (CPC) testing was performed to determine whether the planned concurrent, or "coupled", operation of the Defense Waste Processing Facility (DWPF) with the Salt Waste Processing Facility (SWPF) has been adequately covered. Tests with the nitric-glycolic acid flowsheet, both coupled and uncoupled with salt waste streams, included several tests that required extended boiling times. This report provides the evaluation of previous testing and the testing recommendation requested by Savannah River Remediation. The focus of the evaluation was the impact on flammability in CPC vessels (i.e., hydrogen generation rate, SWPF solvent components, antifoam degradation products) and processing impacts (i.e., acid window, melter feed target, rheological properties, antifoam requirements, and chemical composition).

  12. Results for the First, Second, and Third Quarter Calendar Year 2015 Tank 50H WAC slurry samples chemical and radionuclide contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.

    2016-02-18

    This report details the chemical and radionuclide contaminant results for the characterization of the Calendar Year (CY) 2015 First, Second, and Third Quarter sampling of Tank 50H for the Saltstone Waste Acceptance Criteria (WAC) in effect at that time. Information from this characterization will be used by Defense Waste Processing Facility (DWPF) & Saltstone Facility Engineering (D&S-FE) to support the transfer of low-level aqueous waste from Tank 50H to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50H Waste Characterization System. Previous memoranda documenting the WAC analyses results have been issued for these three samples.

  13. Digestion of Crystalline Silicotitanate (CST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DARREL, WALKER

    2004-11-04

    Researchers tested methods for chemically dissolving crystalline silicotitanate (CST) as a substitute for mechanical grinding to reduce particle size before vitrification. Testing used the commercially available form of CST, UOP IONSIV® IE-911. Reduction of the particle size to a range similar to that of the glass frit used by the Defense Waste Processing Facility (DWPF) could reduce problems with coupling cesium ion exchange to the vitrification process. This study found that IONSIV® IE-911 dissolves completely using a combination of acid, hydrogen peroxide, and fluoride ion. Neutralization of the resulting acidic solution precipitates components of the IONSIV® IE-911. Digestion requires extremely corrosive conditions. Also, large particles may re-form during neutralization, and the initiation and rate of gas generation are unpredictable. Therefore, the method is not recommended as a substitute for mechanical grinding.

  14. Prothrombin complex concentrates and a specific antidote to dabigatran are effective ex-vivo in reversing the effects of dabigatran in an anticoagulation/liver trauma experimental model.

    PubMed

    Grottke, Oliver; van Ryn, Joanne; Spronk, Henri M H; Rossaint, Rolf

    2014-02-05

    New oral anticoagulants are effective alternatives to warfarin. However, no specific reversal agents are available for life-threatening bleeding or emergency surgery. Using a porcine model of trauma, this study assessed the ability of prothrombin complex concentrate (PCC), activated PCC (aPCC), recombinant FVIIa (rFVIIa) and a specific antidote to dabigatran (aDabi-Fab) to reverse the anticoagulant effects of dabigatran. Dabigatran etexilate (DE) was given orally for 3 days (30 mg/kg bid) and intravenously on day 4 to achieve consistent, supratherapeutic concentrations of dabigatran. Blood samples were collected at baseline, after oral DE, after intravenous dabigatran, and 60 minutes post-injury. PCC (30 and 60 U/kg), aPCC (30 and 60 U/kg), rFVIIa (90 and 180 μg/kg) and antidote (60 and 120 mg/kg) were added to blood samples ex vivo. Coagulation was assessed by thromboelastometry, global coagulation assays and diluted thrombin time. Plasma concentrations of dabigatran were 380 ± 106 ng/ml and 1423 ± 432 ng/ml after oral and intravenous administration, respectively, and all coagulation parameters were affected by dabigatran. Both PCCs and aDabi-Fab, but not rFVIIa, reversed the effects of dabigatran on thromboelastometry parameters and prothrombin time. In contrast, aPTT was only normalised by aDabi-Fab. Plasma concentration (activity) of dabigatran remained elevated after PCC and rFVIIa therapy, but was not measurable after aDabi-Fab. In conclusion, PCC and aPCC were effective in reducing the anticoagulant effects of dabigatran under different conditions, while aDabi-Fab fully corrected all coagulation measures and decreased the plasma concentration of dabigatran below the limit of detection. No significant effects were observed with rFVIIa.

  15. EANM 2012 guidelines for radionuclide imaging of phaeochromocytoma and paraganglioma

    PubMed Central

    Timmers, Henri J.; Hindié, Elif; Guillet, Benjamin A.; Neumann, Hartmut P.; Walz, Martin K.; Opocher, Giuseppe; de Herder, Wouter W.; Boedeker, Carsten C.; de Krijger, Ronald R.; Chiti, Arturo; Al-Nahhas, Adil; Pacak, Karel

    2016-01-01

    Purpose: Radionuclide imaging of phaeochromocytomas (PCCs) and paragangliomas (PGLs) involves various functional imaging techniques and approaches for accurate diagnosis, staging and tumour characterization. The purpose of the present guidelines is to assist nuclear medicine practitioners in performing, interpreting and reporting the results of the currently available SPECT and PET imaging approaches. These guidelines are intended to present information specifically adapted to European practice. Methods: Guidelines from related fields, issued by the European Association of Nuclear Medicine and the Society of Nuclear Medicine, were taken into consideration and are partially integrated within this text. The same was applied to the relevant literature, and the final result was discussed with leading experts involved in the management of patients with PCC/PGL. The information provided should be viewed in the context of local conditions, laws and regulations. Conclusion: Although several radionuclide imaging modalities are considered herein, considerable focus is given to PET imaging, which offers high-sensitivity targeted molecular imaging approaches. PMID:22926712

  16. Thrombocytogenesis by megakaryocyte; Interpretation by protoplatelet hypothesis

    PubMed Central

    KOSAKI, Goro; KAMBAYASHI, Junichi

    2011-01-01

    Serial transmission electron microscopy of human megakaryocytes (MKs) revealed their polyploidization and gradual maturation through consecutive transitions in the characteristics of various organelles. At the beginning of differentiation, an MK with ploidy 32N, for example, has 16 centrosomes in the cell center surrounded by the 32N nucleus. Each bundle of microtubules (MTs) emanating from a centrosome supports and organizes one of 16 equal-volume cytoplasmic compartments, which together compose a single 32N MK. During differentiation, a single centriole separated from the centriole pair (i.e., the centrosome) migrates to the periphery of the cell along the MT bundle, corresponding to half of the interphase array originating from one centrosome and supporting one "putative cytoplasmic compartment" (PCC). The platelet demarcation membrane (DM) is constructed on the boundary surface between neighbouring PCCs. A matured PCC, composed of a tandem array of platelet territories covered by a sheet of DM, is designated a protoplatelet. Eventually, rupture of the MK results in the release of platelets from the protoplatelets. PMID:21558761

  17. Breathing pattern and thoracoabdominal asynchrony in horses with chronic obstructive and inflammatory lung disease.

    PubMed

    Haltmayer, E; Reiser, S; Schramel, J P; van den Hoven, R

    2013-10-01

    The aim of the study was to show that changes in thoracoabdominal asynchrony (TAA) between quiet breathing and CO2-induced hyperpnoea can be used to differentiate between horses with healthy airways and those suffering from inflammatory airway disease (IAD) or recurrent airway obstruction (RAO). The level of TAA was quantified by the Pearson correlation coefficient (PCC) of thoracic and abdominal signals generated by respiratory ultrasonic plethysmography (RUP) during quiet breathing and hyperpnoea. Changes in TAA were expressed as the quotient of the PCCs (PCCQ) during normal breathing and hyperpnoea. Horses with RAO and IAD showed a significantly higher median PCCQ than healthy horses. The median PCCQ of horses with RAO and IAD was not significantly different. Horses affected by a pulmonary disorder showed lower TAA compared to the control group. This study suggests that TAA provides a useful parameter to differentiate horses with RAO and IAD from healthy horses. Copyright © 2013 Elsevier Ltd. All rights reserved.
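
    As an illustrative sketch (not the study's actual pipeline), the PCCQ described above is simply the ratio of two Pearson coefficients, one per breathing condition. The synthetic thoracic/abdominal signals below are hypothetical; the growing phase lag under hyperpnoea stands in for increased asynchrony.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Synthetic thoracic/abdominal excursion signals (arbitrary units, 4 s at 50 Hz).
# Quiet breathing: nearly synchronous; hyperpnoea: the abdomen lags the thorax.
t = [i / 50 for i in range(200)]
thorax_quiet  = [math.sin(2 * math.pi * 0.5 * s) for s in t]
abdomen_quiet = [math.sin(2 * math.pi * 0.5 * s - 0.1) for s in t]
thorax_hyp    = [math.sin(2 * math.pi * 1.5 * s) for s in t]
abdomen_hyp   = [math.sin(2 * math.pi * 1.5 * s - 1.0) for s in t]

pcc_quiet = pearson(thorax_quiet, abdomen_quiet)
pcc_hyp   = pearson(thorax_hyp, abdomen_hyp)
pccq = pcc_quiet / pcc_hyp  # quotient of the two PCCs (PCCQ)
print(round(pcc_quiet, 3), round(pcc_hyp, 3), round(pccq, 3))
```

    Because the hyperpnoea PCC drops as asynchrony grows, a PCCQ well above 1 flags a larger change in TAA between the two conditions, which is the direction the study reports for RAO- and IAD-affected horses.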

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raszewski, F; Edwards, T; Peeler, D

    The Liquid Waste Organization (LWO) has requested that the Savannah River National Laboratory (SRNL) to assess the impact of a 100K gallon decant volume from Tank 40H on the existing sludge-only Sludge Batch 4 (SB4)-Frit 510 flowsheet and the coupled operations flowsheet (SB4 with the Actinide Removal Process (ARP)). Another potential SB4 flowsheet modification of interest includes the addition of 3 wt% sodium (on a calcined oxide basis) to a decanted sludge-only or coupled operations flowsheet. These potential SB4 flowsheet modifications could result in significant compositional shifts to the SB4 system. This paper study provides an assessment of the impactmore » of these compositional changes to the projected glass operating windows and to the variability study for the Frit 510-SB4 system. The influence of the compositional changes on melt rate was not assessed in this study nor was it requested. Nominal Stage paper study assessments were completed using the projected compositions for the various flowsheet options coupled with Frit 510 (i.e., variation was not applied to the sludge and frit compositions). In order to gain insight into the impacts of sludge variation and/or frit variation (due to the procurement specifications) on the projected operating windows, three versions of the Variation Stage assessment were performed: (1) the traditional Variation Stage assessment in which the nominal Frit 510 composition was coupled with the extreme vertices (EVs) of each sludge, (2) an assessment of the impact of possible frit variation (within the accepted frit specification tolerances) on each nominal SB4 option, and (3) an assessment of the impact of possible variation in the Frit 510 composition due to the vendor's acceptance specifications coupled with the EVs of each sludge case. The results of the Nominal Stage assessment indicate very little difference among the various flowsheet options. 
All of the flowsheets provide DWPF with the possibility of targeting waste loadings (WLs) from the low 30s to the low 40s with Frit 510. In general, the Tank 40H decant has a slight negative impact on the operating window, but DWPF still has the ability to target current WLs (34%) and higher WLs if needed. While the decant does not affect practical WL targets in DWPF, melt rate could be reduced due to the lower Na{sub 2}O content. If true, the addition of 3 wt% Na{sub 2}O to the glass system may regain melt rate, assuming that the source of alkali is independent of the impact on melt rate. Coupled operations with Frit 510 via the addition of ARP to the decanted SB4 flowsheet also appears to be viable based on the projected operating windows. The addition of both ARP and 3 wt% Na{sub 2}O to a decanted Tank 40H sludge may be problematic using Frit 510. Although the Nominal Stage assessments provide reasonable operating windows for the SB4 flowsheets being considered with Frit 510, introduction of potential sludge and/or frit compositional variation does have a negative impact. The magnitude of the impact on the projected operating windows is dependent on the specific flowsheet options as well as the applied variation. The results of the traditional Variation Stage assessments indicate that the three proposed Tank 40H decanted flowsheet options (Case No.2--100K gallon decant, Case No.3--100K gallon decant and 3 wt% Na{sub 2}O addition and Case No.4--100K gallon decant and ARP) demonstrate a relatively high degree of robustness to possible sludge variation over WLs of interest with Frit 510. However, the case where the addition of both ARP and 3 wt% Na{sub 2}O is considered was problematic during the traditional Variation Stage assessment. 
The impact of coupling the frit specifications with the nominal SB4 flowsheet options on the projected operating windows is highly dependent on whether the upper WLs are low-viscosity or liquidus-temperature limited in the Nominal Stage assessments. Systems that are liquidus-temperature limited exhibit a high degree of robustness to the applied frit and sludge variation, while those that are low-viscosity limited show significant reductions (6 percentage points) in the upper WLs that can be obtained. When both frit and sludge variations are applied, the paper study results indicate that DWPF could be severely restricted in terms of projected operating windows for the ARP and Na{sub 2}O addition options. An experimental variability study was not performed using the final SB4 composition and Frit 510, since glasses in the ComPro{trademark} database were identified that bounded the potential operating window of this system. The bounding ARP case was not considered in that assessment. After the flowsheet cases were identified, an electronic search of ComPro{trademark} identified approximately 12 historical glasses within the compositional regions defined by at least one of the five flowsheet options, but the compositional coverage did not appear adequate to bound all cases.
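The Nominal Stage logic described in this record — blending sludge and frit oxide compositions at candidate waste loadings and checking predicted properties against limits — can be sketched as follows. The compositions, linear property models, and limits below are hypothetical placeholders, not the actual DWPF property-composition models:

```python
# Illustrative sketch of a Nominal Stage operating-window check.
# All compositions, model coefficients, and limits are hypothetical,
# NOT the DWPF PCCS models.

def blend(sludge, frit, wl):
    """Glass oxide composition at waste loading wl (mass fractions)."""
    return {ox: wl * sludge.get(ox, 0.0) + (1 - wl) * frit.get(ox, 0.0)
            for ox in set(sludge) | set(frit)}

def operating_window(sludge, frit, predict, limits, wls):
    """Return the waste loadings at which every property stays within limits."""
    window = []
    for wl in wls:
        props = predict(blend(sludge, frit, wl))
        if all(limits[p][0] <= v <= limits[p][1] for p, v in props.items()):
            window.append(wl)
    return window

# Hypothetical sludge and frit oxide fractions.
sludge = {"Fe2O3": 0.30, "Al2O3": 0.20, "Na2O": 0.18}
frit = {"SiO2": 0.77, "B2O3": 0.08, "Na2O": 0.12, "Li2O": 0.08}

def predict(glass):
    # Toy linear models: melt viscosity (Pa*s) and liquidus temperature (C).
    visc = 2.0 + 60.0 * glass.get("SiO2", 0.0) - 40.0 * glass.get("Na2O", 0.0)
    tliq = 800.0 + 900.0 * glass.get("Fe2O3", 0.0) + 500.0 * glass.get("Al2O3", 0.0)
    return {"viscosity": visc, "liquidus": tliq}

limits = {"viscosity": (20.0, 100.0), "liquidus": (0.0, 1050.0)}
wls = [w / 100 for w in range(25, 51)]  # candidate waste loadings, 25-50%
window = operating_window(sludge, frit, predict, limits, wls)
```

In this toy system the window closes at high waste loading because viscosity falls below its lower limit, mirroring the low-viscosity-limited behaviour the record discusses.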

  19. Evaluation and assessment of the efficacy of an abatement strategy in a former lead smelter community, Boolaroo, Australia.

    PubMed

    Harvey, P J; Taylor, M P; Kristensen, L J; Grant-Vest, S; Rouillon, M; Wu, L; Handley, H K

    2016-08-01

    This study examines the recent soil Lead Abatement Strategy (LAS) in Boolaroo, New South Wales, Australia, which was designed to "achieve a reduction in human exposure to lead dust contamination in surface soils". The abatement programme addressed legacy contamination of residential areas following closure of lead smelting operations in 2003 at the Pasminco Cockle Creek Smelter (PCCS). The principal objective of the LAS was to "cap and cover" lead-contaminated soils within the urban environment surrounding the PCCS. Soils with lead concentrations of 2500-5000 mg/kg were scheduled for removal and replacement, while those with concentrations between 1500 and 2500 mg/kg were replaced only under limited circumstances. To date, there has been no industry, government or independent assessment of the clean-up programme, which involved >2000 homes in the township of Boolaroo. By measuring post-abatement soil lead concentrations in Boolaroo, this study addresses this knowledge gap and evaluates the effectiveness of the LAS for reducing the potential for lead exposure. Soil lead concentrations above the Australian health investigation level for residential soils (300 mg/kg) were identified at all but one of the residential properties examined (n = 19). Vacuum dust samples (n = 17) from the same homes had a mean lead concentration of 495 mg/kg (median 380 mg/kg). Bio-accessibility testing revealed that lead in household vacuum dust was readily bio-accessible (mean = 92%, median = 90%), demonstrating that the risk of exposure via this pathway remains. Assessment of a limited number of properties (n = 8) where pre-abatement soil lead levels were available for comparison showed they were not statistically different from post-abatement levels. 
Although the LAS did not include treatment of non-residential properties, sampling of community areas including public sports fields, playgrounds and schools (n = 32) was undertaken to determine the contamination legacy in these areas. Elevated mean soil lead concentrations were found across public lands: sports fields = 5130 mg/kg (median = 1275 mg/kg), playgrounds and schools = 812 mg/kg (median = 920 mg/kg) and open space = 778 mg/kg (median = 620 mg/kg). Overall, the study results show that the LAS programme that was dominated by a "cap and cover" approach to address widespread lead contamination was inadequate for mitigating current and future risk of lead exposures.

  20. IMPACT OF NOBLE METALS AND MERCURY ON HYDROGEN GENERATION DURING HIGH LEVEL WASTE PRETREATMENT AT THE SAVANNAH RIVER SITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M; Edwards, T; Koopman, D

    2009-03-03

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site vitrifies radioactive High Level Waste (HLW) for repository internment. The process consists of three major steps: waste pretreatment, vitrification, and canister decontamination/sealing. HLW consists of insoluble metal hydroxides (primarily iron, aluminum, calcium, magnesium, manganese, and uranium) and soluble sodium salts (carbonate, hydroxide, nitrite, nitrate, and sulfate). The pretreatment process in the Chemical Processing Cell (CPC) consists of two process tanks, the Sludge Receipt and Adjustment Tank (SRAT) and the Slurry Mix Evaporator (SME), as well as a melter feed tank. During SRAT processing, nitric and formic acids are added to the sludge to lower pH, destroy nitrite and carbonate ions, and reduce mercury and manganese. During the SME cycle, glass formers are added, and the batch is concentrated to the final solids target prior to vitrification. During these processes, hydrogen can be produced by catalytic decomposition of excess formic acid. The waste contains silver, palladium, rhodium, ruthenium, and mercury, but silver and palladium have been shown to be insignificant factors in catalytic hydrogen generation during the DWPF process. A full factorial experimental design was developed to ensure that the existence of statistically significant two-way interactions could be determined without confounding of the main effects with the two-way interaction effects. Rh ranged from 0.0026-0.013% and Ru ranged from 0.010-0.050% in the dried sludge solids, while initial Hg ranged from 0.5-2.5 wt%, as shown in Table 1. The nominal matrix design consisted of twelve SRAT cycles. Testing included: a three factor (Rh, Ru, and Hg) study at two levels per factor (eight runs), three duplicate midpoint runs, and one additional replicate run to assess reproducibility away from the midpoint. Midpoint testing was used to identify potential quadratic effects from the three factors. 
A single sludge simulant was used for all tests and was spiked with the required amount of noble metals immediately prior to performing the test. Acid addition was kept effectively constant except to compensate for variations in the starting mercury concentration. SME cycles were also performed during six of the tests.
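The test matrix described in this record — a 2^3 full factorial in Rh, Ru, and Hg plus midpoint and replicate runs — can be laid out programmatically. The factor levels follow the abstract; the run ordering and the choice of which corner to replicate are illustrative:

```python
from itertools import product

# 2^3 full factorial in Rh, Ru, and Hg plus three midpoint replicates and
# one extra replicate away from the midpoint: twelve SRAT runs in total.
# Levels follow the abstract; run ordering here is illustrative.
levels = {
    "Rh": (0.0026, 0.013),  # wt% in dried sludge solids
    "Ru": (0.010, 0.050),   # wt% in dried sludge solids
    "Hg": (0.5, 2.5),       # initial wt% mercury
}

# Eight corner runs: every low/high combination of the three factors,
# which keeps main effects unconfounded with two-way interactions.
corners = [dict(zip(levels, combo)) for combo in product(*levels.values())]

# Three duplicate midpoint runs to detect curvature (quadratic effects).
midpoint = {factor: (lo + hi) / 2 for factor, (lo, hi) in levels.items()}
midpoints = [dict(midpoint) for _ in range(3)]

# One additional replicate away from the midpoint for reproducibility.
design = corners + midpoints + [dict(corners[0])]
```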

  1. Acute Pesticide-Related Illness Among Farmworkers: Barriers to Reporting to Public Health Authorities

    PubMed Central

    Prado, Joanne Bonnar; Mulay, Prakash R.; Kasner, Edward J.; Bojes, Heidi K.; Calvert, Geoffrey M.

    2018-01-01

    Farmworkers are at high risk of acute occupational pesticide-related illness (AOPI) and AOPI surveillance is vital to preventing these illnesses. Data on such illnesses are collected and analyzed to identify high-risk groups, high-risk pesticides, and root causes. Interventions to address these risks and root causes include farmworker outreach, education, and regulation. Unfortunately, it is well known that AOPI is underreported, meaning that the true burden of this condition remains unknown. This article reviews the barriers to reporting of farmworker AOPI to public health authorities and provides some practical solutions. Information is presented using the social-ecological model spheres of influence. Factors that contribute to farmworker AOPI underreporting include fear of job loss or deportation, limited English proficiency (LEP), limited access to health care, lack of clinician recognition of AOPI, farmworker ineligibility for workers’ compensation (WC) benefits in many states, insufficient resources to conduct AOPI surveillance, and constraints in coordinating AOPI investigations across state agencies. Solutions to address these barriers include: emphasizing that employers encourage farmworkers to report safety concerns; raising farmworker awareness of federally qualified health centers (FQHCs) and increasing the availability of these clinics; improving environmental toxicology training to health-care students and professionals; encouraging government agencies to investigate pesticide complaints and provide easy-to-read reports of investigation findings; fostering public health reporting from electronic medical records, poison control centers (PCCs), and WC; expanding and strengthening AOPI state-based surveillance programs; and developing interagency agreements to outline the roles and responsibilities of each state agency involved with pesticide safety. PMID:28762882

  2. MAX mutations status in Swedish patients with pheochromocytoma and paraganglioma tumours.

    PubMed

    Crona, Joakim; Maharjan, Rajani; Delgado Verdugo, Alberto; Stålberg, Peter; Granberg, Dan; Hellman, Per; Björklund, Peyman

    2014-03-01

    Pheochromocytoma (PCC) and paraganglioma (PGL) are rare tumours originating from neuroendocrine cells. Up to 60% of cases have either a germline or a somatic mutation in one of eleven described susceptibility loci: SDHA, SDHB, SDHC, SDHD, SDHAF2, VHL, EPAS1, RET, NF1, TMEM127 and MYC-associated factor X (MAX). Recently, germline mutations in MAX were found to confer susceptibility to PCC and PGL. A subsequent multicentre study found about 1% of PCCs and PGLs to have germline or somatic mutations in MAX. However, there has been no study investigating the frequency of MAX mutations in a Scandinavian cohort. We re-sequenced MAX by automated Sanger sequencing in tumour specimens from 63 patients with PCC and PGL treated at Uppsala University Hospital, Sweden. Our results show that 0% (0/63) of tumours had mutations in MAX. Allele frequencies of the known single nucleotide polymorphisms rs4902359, rs45440292, rs1957948 and rs1957949 corresponded to those available in the Single Nucleotide Polymorphism Database. We conclude that MAX mutations remain unusual events and that targeted genetic screening should be considered after more common genetic events have been excluded.

  3. Palliative Care Planner: A Pilot Study to Evaluate Acceptability and Usability of an Electronic Health Records System-integrated, Needs-targeted App Platform.

    PubMed

    Cox, Christopher E; Jones, Derek M; Reagan, Wen; Key, Mary D; Chow, Vinca; McFarlin, Jessica; Casarett, David; Creutzfeldt, Claire J; Docherty, Sharron L

    2018-01-01

    The quality and patient-centeredness of intensive care unit (ICU)-based palliative care delivery is highly variable. To develop and pilot an app platform for clinicians and ICU patients and their family members that enhances the delivery of needs-targeted palliative care. In the development phase of the study, we developed an electronic health record (EHR) system-integrated mobile web app system prototype, PCplanner (Palliative Care Planner). PCplanner screens the EHR for ICU patients meeting any of five prompts (triggers) for palliative care consultation, allows families to report their unmet palliative care needs, and alerts clinicians to these needs. The evaluation phase included a prospective before/after study conducted at a large academic medical center. Two control populations were enrolled in the before period to serve as context for the intervention. First, 25 ICU patients who received palliative care consults served as patient-level controls. Second, 49 family members of ICU patients who received mechanical ventilation for at least 48 hours served as family-level controls. Afterward, 14 patients, 18 family members, and 10 clinicians participated in the intervention evaluation period. Family member outcomes measured at baseline and 4 days later included acceptability (Client Satisfaction Questionnaire [CSQ]), usability (Systems Usability Scale [SUS]), and palliative care needs, assessed with the adapted needs of social nature, existential concerns, symptoms, and therapeutic interaction (NEST) scale; the Patient-Centeredness of Care Scale (PCCS); and the Perceived Stress Scale (PSS). Patient outcomes included frequency of goal concordant treatment, hospital length of stay, and discharge disposition. Family members reported high PCplanner acceptability (mean CSQ, 14.1 [SD, 1.4]) and usability (mean SUS, 21.1 [SD, 1.7]). 
PCplanner family member recipients experienced a 12.7-unit reduction in NEST score compared with a 3.4-unit increase among controls (P = 0.002), as well as improved mean scores on the PCCS (6.6 [SD, 5.8]) and the PSS (-0.8 [SD, 1.9]). The frequency of goal-concordant treatment increased over the course of the intervention (n = 14 [79%] vs. n = 18 [100%]). Compared with palliative care controls, intervention patients received palliative care consultation sooner (mean, 3.9 [SD, 2.7] vs. 6.9 [SD, 7.1] days), had a shorter mean hospital length of stay (20.5 [SD, 9.1] vs. 22.3 [SD, 16.0] days), and received hospice care more frequently (5 [36%] vs. 5 [20%]), although these differences were not statistically significant. PCplanner represents an acceptable, usable, and clinically promising systems-based approach to delivering EHR-triggered, needs-targeted ICU-based palliative care within a standard clinical workflow. A clinical trial in a larger population is needed to evaluate its efficacy.

  4. Glycolic acid physical properties and impurities assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D. P.; Pickenheim, B. R.; Hay, M. S.

    This document has been revised to add analytical data for fresh, 1 year old, and 4 year old glycolic acid, as recommended in Revision 2 of this document. These data were needed to understand the concentrations of formaldehyde and methoxyacetic acid, impurities present in the glycolic acid used in Savannah River National Laboratory (SRNL) experiments. Based on this information, the concentrations of these impurities did not change during storage. These impurities were in the glycolic acid used in the testing included in this report and in subsequent testing using DuPont (now called Chemours) supplied Technical Grade 70 wt% glycolic acid. However, these impurities were not reported in the first two versions of this report. The Defense Waste Processing Facility (DWPF) is planning to implement a nitric-glycolic acid flowsheet to increase attainment to meet closure commitment dates during Sludge Batch 9. In fiscal year 2009, SRNL was requested to determine the physical properties of formic and glycolic acid blends.

  5. A Multivariate Statistical Model Describing the Compound Nature of Soil Moisture Drought

    NASA Astrophysics Data System (ADS)

    Manning, Colin; Widmann, Martin; Bevacqua, Emanuele; Maraun, Douglas; Van Loon, Anne; Vrac, Mathieu

    2017-04-01

    Soil moisture in Europe acts to partition incoming energy into sensible and latent heat fluxes, thereby exerting a large influence on temperature variability. Soil moisture is predominantly controlled by precipitation and evapotranspiration. When these meteorological variables are accumulated over different timescales, their joint multivariate distribution and dependence structure can be used to provide information on soil moisture. We therefore consider soil moisture drought as a compound event of meteorological drought (deficits of precipitation) and heat waves, or more specifically, periods of high Potential Evapotranspiration (PET). We present here a statistical model of soil moisture based on Pair Copula Constructions (PCC) that can describe the dependence amongst soil moisture and its contributing meteorological variables. The model is designed in such a way that it can account for concurrences of meteorological drought and heat waves and describe the dependence between these conditions at a local level. The model is composed of four variables: daily soil moisture (h); a short-term and a long-term accumulated precipitation variable (Y_1 and Y_2) that account for the propagation of meteorological drought to soil moisture drought; and accumulated PET (Y_3), calculated using the Penman-Monteith equation, which can represent the effect of a heat wave on soil conditions. Copulas are multivariate distribution functions that allow one to model the dependence structure of given variables separately from their marginal behaviour. PCCs then allow, in theory, for the formulation of a multivariate distribution of any dimension, where the multivariate distribution is decomposed into a product of marginal probability density functions and two-dimensional copulas, some of which are conditional. 
We apply the PCC here in such a way that allows us to provide estimates of h and their uncertainty by conditioning on the Y variables, i.e. through the conditional distribution h | Y_1 = y_1, Y_2 = y_2, Y_3 = y_3 (Eq. 1). Applying the model to various Fluxnet sites across Europe, we find the model has good skill and can particularly capture periods of low soil moisture well. We illustrate the relevance of the dependence structure of these Y variables to soil moisture and show how the model may be generalised to offer information on soil moisture on a widespread scale where few observations of soil moisture exist. We then present results from a validation study of a selection of EURO-CORDEX climate models, where we demonstrate the skill of these models in representing these dependencies and so offer insight into the skill seen in the representation of soil moisture in these models.
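A drastically simplified version of the conditioning in Eq. (1) — a single bivariate Gaussian copula standing in for the full pair-copula construction, and one driver Y instead of three — can be sketched as follows. The marginal distributions and the correlation are illustrative assumptions, not fitted values:

```python
from statistics import NormalDist

# Sketch: condition soil moisture h on one meteorological driver y via a
# bivariate Gaussian copula -- a stand-in for the full pair-copula
# construction (PCC) of h | y1, y2, y3 used in the study.

std_norm = NormalDist()

def conditional_median(y, y_margin, h_margin, rho):
    """Median of h given Y = y under a Gaussian copula with correlation rho.

    On the normal-score scale the conditional law is N(rho * z_y, 1 - rho**2),
    so the conditional median of h is the h-quantile of Phi(rho * z_y).
    """
    z_y = std_norm.inv_cdf(y_margin.cdf(y))  # normal score of the observed y
    u_h = std_norm.cdf(rho * z_y)            # conditional median on the U-scale
    return h_margin.inv_cdf(u_h)             # back to soil-moisture units

# Illustrative margins: accumulated precipitation (mm), soil moisture (m3/m3).
precip = NormalDist(mu=50.0, sigma=15.0)
soil = NormalDist(mu=0.30, sigma=0.05)

# A dry month pulls the conditional soil-moisture median down; a wet one up.
dry = conditional_median(20.0, precip, soil, rho=0.8)
wet = conditional_median(80.0, precip, soil, rho=0.8)
```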

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, V.; Shah, H.; Bannochie, C. J.

    Mercury (Hg) in the Savannah River Site Liquid Waste System (LWS) originated from decades of canyon processing where it was used as a catalyst for dissolving the aluminum cladding of reactor fuel. Approximately 60 metric tons of mercury is currently present throughout the LWS. Mercury has long been a consideration in the LWS, from both hazard and processing perspectives. In February 2015, a Mercury Program Team was established at the request of the Department of Energy to develop a comprehensive action plan for long-term management and removal of mercury. Evaluation was focused in two Phases. Phase I activities assessed the Liquid Waste inventory and chemical processing behavior using a system-by-system review methodology, and determined the speciation of the different mercury forms (Hg+, Hg++, elemental Hg, organomercury, and soluble versus insoluble mercury) within the LWS. Phase II activities are building on the Phase I activities, and results of the LWS flowsheet evaluations will be summarized in three reports: Mercury Behavior in the Salt Processing Flowsheet (i.e. this report); Mercury Behavior in the Defense Waste Processing Facility (DWPF) Flowsheet; and Mercury Behavior in the Tank Farm Flowsheet (Evaporator Operations). The evaluation of the mercury behavior in the salt processing flowsheet indicates, inter alia, the following: (1) In the assembled Salt Batches 7, 8 and 9 in Tank 21, the total mercury is mostly soluble with methylmercury (MHg) contributing over 50% of the total mercury. Based on the analyses of samples from 2H Evaporator feed and drop tanks (Tanks 38/43), the source of MHg in Salt Batches 7, 8 and 9 can be attributed to the 2H evaporator concentrate used in assembling the salt batches. The 2H Evaporator is used to evaporate DWPF recycle water. 
(2) Comparison of data between Tank 21/49, Salt Solution Feed Tank (SSFT), Decontaminated Salt Solution Hold Tank (DSSHT), and Tank 50 samples suggests that the total mercury as well as speciated forms in the assembled salt batches in Tanks 21/49 pass through the Actinide Removal Process (ARP) / Modular Caustic Side Solvent Extraction Unit (MCU) process to Tank 50 with no significant change in the mercury chemistry. (3) In Tank 50, Decontaminated Salt Solution (DSS) from ARP/MCU is the major contributor to the total mercury including MHg. (4) Speciation analyses of TCLP leached solutions of the grout samples prepared from Tank 21, as well as Tank 50 samples, show the majority of the mercury released in the solution is MHg.

  7. Modelling innovative interventions for optimising healthy lifestyle promotion in primary health care: "Prescribe Vida Saludable" phase I research protocol

    PubMed Central

    Sanchez, Alvaro; Grandes, Gonzalo; Cortada, Josep M; Pombo, Haizea; Balague, Laura; Calderon, Carlos

    2009-01-01

    Background The adoption of a healthy lifestyle, including physical activity, a balanced diet, a moderate alcohol consumption and abstinence from smoking, are associated with large decreases in the incidence and mortality rates for the most common chronic diseases. That is why primary health care (PHC) services are trying, so far with less success than desirable, to promote healthy lifestyles among patients. The objective of this study is to design and model, under a participative collaboration framework between clinicians and researchers, interventions that are feasible and sustainable for the promotion of healthy lifestyles in PHC. Methods and design Phase I formative research and a quasi-experimental evaluation of the modelling and planning process will be undertaken in eight primary care centres (PCCs) of the Basque Health Service – OSAKIDETZA, of which four centres will be assigned for convenience to the Intervention Group (the others being Controls). Twelve structured study, discussion and consensus sessions supported by reviews of the literature and relevant documents, will be undertaken throughout 12 months. The first four sessions, including a descriptive strategic needs assessment, will lead to the prioritisation of a health promotion aim in each centre. In the remaining eight sessions, collaborative design of intervention strategies, on the basis of a planning process and pilot trials, will be carried out. The impact of the formative process on the practice of healthy lifestyle promotion, attitude towards health promotion and other factors associated with the optimisation of preventive clinical practice will be assessed, through pre- and post-programme evaluations and comparisons of the indicators measured in professionals from the centres assigned to the Intervention or Control Groups. 
Discussion There are four necessary factors for the outcome to be successful and result in important changes: (1) the commitment of professional and community partners who are involved; (2) their competence for change; (3) the active cooperation and participation of the interdisciplinary partners involved throughout the process of change; and (4) the availability of resources necessary to facilitate the change. PMID:19534832

  8. Establishment and assessment of code scaling capability

    NASA Astrophysics Data System (ADS)

    Lim, Jaehyok

    In this thesis, a method for using RELAP5/MOD3.3 (Patch03) code models is described to establish and assess the code scaling capability and to corroborate the scaling methodology that has been used in the design of the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. It was sponsored by the United States Nuclear Regulatory Commission (USNRC) under the program "PUMA ESBWR Tests". PUMA-E facility was built for the USNRC to obtain data on the performance of the passive safety systems of the General Electric (GE) Nuclear Energy Economic Simplified Boiling Water Reactor (ESBWR). Similarities between the prototype plant and the scaled-down test facility were investigated for a Gravity-Driven Cooling System (GDCS) Drain Line Break (GDLB). This thesis presents the results of the GDLB test, i.e., the GDLB test with one Isolation Condenser System (ICS) unit disabled. The test is a hypothetical multi-failure small break loss of coolant (SB LOCA) accident scenario in the ESBWR. The test results indicated that the blow-down phase, Automatic Depressurization System (ADS) actuation, and GDCS injection processes occurred as expected. The GDCS as an emergency core cooling system provided adequate supply of water to keep the Reactor Pressure Vessel (RPV) coolant level well above the Top of Active Fuel (TAF) during the entire GDLB transient. The long-term cooling phase, which is governed by the Passive Containment Cooling System (PCCS) condensation, kept the reactor containment system that is composed of Drywell (DW) and Wetwell (WW) below the design pressure of 414 kPa (60 psia). In addition, the ICS continued participating in heat removal during the long-term cooling phase. A general Code Scaling, Applicability, and Uncertainty (CSAU) evaluation approach was discussed in detail relative to safety analyses of Light Water Reactor (LWR). 
The major components of the CSAU methodology that were highlighted particularly focused on the scaling issues of experiments and models and their applicability to the nuclear power plant transient and accidents. The major thermal-hydraulic phenomena to be analyzed were identified and the predictive models adopted in RELAP5/MOD3.3 (Patch03) code were briefly reviewed.

  9. The NuRD complex component p66 suppresses photoreceptor neuron regeneration in planarians.

    PubMed

    Vásquez-Doorman, Constanza; Petersen, Christian P

    2016-06-01

    Regeneration involves precise control of cell fate to produce an appropriate complement of tissues formed within a blastema. Several chromatin-modifying complexes have been identified as required for regeneration in planarians, but it is unclear whether this class of molecules uniformly promotes the production of differentiated cells. We identify a function for p66, encoding a DNA-binding protein component of the NuRD (nucleosome remodeling and deacetylase) complex, as well as the chromodomain helicase chd4, in suppressing production of photoreceptor neurons (PRNs) in planarians. This suppressive effect appeared restricted to PRNs because p66 inhibition did not influence numbers of eye pigment cup cells (PCCs) and decreased numbers of brain neurons and epidermal progenitors. PRNs from p66(RNAi) animals differentiated with some abnormalities but nonetheless produced arrestin+ projections to the brain. p66 inhibition produced excess ovo+otxA+ PRN progenitors without affecting numbers of ovo+otxA- PCC progenitors, and ovo and otxA were each required for the p66(RNAi) excess PRN phenotype. Together these results suggest that p66 acts through the NuRD complex to suppress PRN production by limiting expression of lineage-specific transcription factors.

  10. Reliability and validity of the PHQ-9 for screening late-life depression in Chinese primary care.

    PubMed

    Chen, Shulin; Chiu, Helen; Xu, Baihua; Ma, Yan; Jin, Tao; Wu, Manhua; Conwell, Yeates

    2010-11-01

    The aim of this study was to examine the reliability and validity of the 9-item Patient Health Questionnaire (PHQ-9) for screening late-life depression in Chinese primary care. In the primary care clinics (PCCs) of Hangzhou city, we recruited 364 older patients (aged ≥ 60) for PHQ-9 screening. Then 77 of them were further interviewed with the Structured Clinical Interview for DSM Disorders (SCID) for the diagnosis of major depression in late life. Statistical analyses of feasibility, reliability, validity, and the receiver operating characteristic curve were performed. The mean administration time was 7.5 min, and Cronbach's α was 0.91. The optimal cut-off score of PHQ-9 ≥ 9 revealed a sensitivity of 0.86, specificity of 0.77, and positive likelihood ratio of 5.73. The area under the curve (AUC) in this study was 0.92 (SD = 0.02, 95% CI 0.88-0.96). The PHQ-2 also revealed good sensitivity (0.84) and specificity (0.90) at the cut-off point ≥ 3. The PHQ-9 performs well and has acceptable psychometric properties for screening patients with late-life depression in Chinese primary care settings.
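The screening statistics quoted in this record come from a standard 2x2 table. A minimal sketch follows, with hypothetical counts chosen only to reproduce the reported sensitivity (0.86) and specificity (0.77); note that an LR+ computed from rounded sensitivity and specificity values need not match the published 5.73, which reflects the study's exact data:

```python
# Screening metrics from a 2x2 confusion table: sensitivity, specificity,
# and positive likelihood ratio. Counts below are hypothetical, not the
# study's data.

def screening_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and LR+ from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # P(test positive | depressed)
    specificity = tn / (tn + fp)              # P(test negative | not depressed)
    lr_pos = sensitivity / (1 - specificity)  # odds shift from a positive test
    return sensitivity, specificity, lr_pos

# Hypothetical counts at a PHQ-9 cut-off of >= 9.
sens, spec, lr = screening_metrics(tp=86, fn=14, fp=23, tn=77)
```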

  11. [Accessibility and resolution of mental health care: the matrix support experience].

    PubMed

    Quinderé, Paulo Henrique Dias; Jorge, Maria Salete Bessa; Nogueira, Maria Sônia Lima; Costa, Liduina Farias Almeida da; Vasconcelos, Mardenia Gomes Ferreira

    2013-07-01

    Psycho-social Care Centers (PCCs) are designed to coordinate actions in mental health care in Brazil, mainly at the Primary Health Care (PHC) level. Matrix support is one of the pillars of the program, as it aims to ensure assistance by specialized back-up staff to the health teams. In this respect, this research seeks to understand how matrix actions in mental health contribute to the accessibility and resolution of mental health cases. This study involved qualitative research conducted in the cities of Fortaleza and Sobral in the State of Ceará, where 37 (thirty-seven) mental health workers, 14 (fourteen) primary health care users and 13 (thirteen) relatives who took part in matrix support actions were interviewed. As the results revealed, the PHC workers do not feel qualified to intervene in mental health cases. There is also excess haste in referring users to PCCs, making access to mental health care more difficult. However, it was identified that discussions on mental health in primary care allow the appropriation of cases by PHC workers and promote rapprochement between the teams. In this way, they influence the resolution of mental health cases.

  12. Long-term high-level waste technology. Composite report

    NASA Astrophysics Data System (ADS)

    Cornman, W. R.

    1981-12-01

    Research and development studies on the immobilization of high-level wastes from the chemical reprocessing of nuclear reactor fuels are summarized. The reports are grouped under the following tasks: (1) program management and support; (2) waste preparation; (3) waste fixation; and (4) final handling. Some of the highlights are: leaching properties were obtained for titanate and tailored ceramic materials being developed at ICPP to immobilize zirconia calcine; comparative leach tests, hot-cell tests, and process evaluations were conducted of waste form alternatives to borosilicate glass for the immobilization of SRP high-level wastes; experiments were run at ANL to qualify neutron activation analysis and radioactive tracers for measuring leach rates from simulated waste glasses; comparative leach test samples of SYNROC D were prepared, characterized, and tested at LLNL; encapsulation of glass marbles with lead or lead alloys was demonstrated on an engineering scale at PNL; a canister for reference Commercial HLW was designed at PNL; a study of the optimization of salt-crete was completed at SRL; and a risk assessment showed that an investment in tornado dampers in the interim storage building of the DWPF is unjustified.

  13. Mercury Reduction and Removal from High Level Waste at the Defense Waste Processing Facility - 12511

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrouzi, Aria; Zamecnik, Jack

    2012-07-01

    The Defense Waste Processing Facility processes legacy nuclear waste generated at the Savannah River Site during production of the enriched uranium and plutonium required by the Cold War. The nuclear waste is first treated via a complex sequence of controlled chemical reactions and then vitrified into a borosilicate glass form and poured into stainless steel canisters. Converting the nuclear waste into borosilicate glass is a safe, effective way to reduce the volume of the waste and stabilize the radionuclides. One of the constituents in the nuclear waste is mercury, which is present because it served as a catalyst in the dissolution of uranium-aluminum alloy fuel rods. At high temperatures mercury is corrosive to off-gas equipment; this poses a major challenge to the overall vitrification process, since mercury must be separated from the waste stream prior to feeding the high-temperature melter. Mercury is currently removed during the chemical process via formic acid reduction followed by steam stripping, which allows elemental mercury to be evaporated with the water vapor generated during boiling. The vapors are then condensed and sent to a hold tank, where mercury coalesces and is recovered in the tank's sump via gravity settling. Next, mercury is transferred from the tank sump to a purification cell, where it is washed with water and nitric acid and removed from the facility. Throughout the chemical processing cell, compounds of mercury exist in the sludge, condensate, and off-gas, all of which present unique challenges. Mercury removal from sludge waste being fed to the DWPF melter is required to avoid exhausting it to the environment or any negative impacts to the Melter Off-Gas system. The mercury concentration must be reduced to 0.8 wt % or less before being introduced to the melter. Even though this is being successfully accomplished, the material balances accounting for incoming and collected mercury are not equal.
In addition, mercury has not been effectively purified and collected in the Mercury Purification Cell (MPC) since 2008. A significant cleaning campaign aims to bring the MPC back up to facility housekeeping standards. Two significant investigations are being undertaken to restore mercury collection: the SMECT mercury pump has been removed from the tank and will be functionally tested, and research is being conducted by the Savannah River National Laboratory to determine the effects of antifoam addition on the behavior of mercury. These path-forward items will help clarify what is occurring in the mercury collection system and ultimately lead to improved DWPF production and mercury recovery rates. (authors)

  14. ANALYSIS OF 2H-EVAPORATOR SCALE WALL [HTF-13-82] AND POT BOTTOM [HTF-13-77] SAMPLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oji, L.

    2013-06-21

    Savannah River Remediation (SRR) is planning to remove a buildup of sodium aluminosilicate scale from the 2H-evaporator pot by loading and soaking the pot with heated 1.5 M nitric acid solution. Sampling and analysis of the scale material has been performed so that uranium and plutonium isotopic analyses can be input into a Nuclear Criticality Safety Assessment (NCSA) for scale removal by chemical cleaning. Historically, since the start of operation of the Defense Waste Processing Facility (DWPF), silicon in the DWPF recycle stream has combined with aluminum in the typical tank farm supernate to form sodium aluminosilicate scale mineral deposits in the 2H-evaporator pot and gravity drain line. The 2H-evaporator scale samples analyzed by Savannah River National Laboratory (SRNL) came from the bottom cone section of the 2H-evaporator pot [sample HTF-13-77] and the wall of the 2H-evaporator [sample HTF-13-82]. X-ray diffraction (XRD) analysis confirmed that both the pot scale and wall samples consist of nitrated cancrinite (a crystalline sodium aluminosilicate solid) and clarkeite (a uranium oxy-hydroxide mineral). On an “as received” basis, the bottom pot section scale sample contained an average of 2.59E+00 ± 1.40E-01 wt % total uranium with a U-235 enrichment of 6.12E-01 ± 1.48E-02 %, while the wall sample contained an average of 4.03E+00 ± 9.79E-01 wt % total uranium with a U-235 enrichment of 6.03E-01 ± 1.66E-02 %. The bottom pot section scale sample analysis results for Pu-238, Pu-239, and Pu-241 are 3.16E-05 ± 5.40E-06 wt %, 3.28E-04 ± 1.45E-05 wt %, and <8.80E-07 wt %, respectively. The evaporator wall scale sample analysis values for Pu-238, Pu-239, and Pu-241 average 3.74E-05 ± 6.01E-06 wt %, 4.38E-04 ± 5.08E-05 wt %, and <1.38E-06 wt %, respectively. The Pu-241 analysis results, as presented, are upper-limit values. These results are provided so that SRR can calculate the equivalent uranium-235 concentrations for the NCSA.
Results confirm that the uranium contained in the scale remains depleted with respect to natural uranium. SRNL did not calculate an equivalent U-235 enrichment, which would take into account the other fissionable isotopes U-233, Pu-239, and Pu-241. The applicable method for calculating equivalent U-235 will be determined in the NCSA.

  15. Towards a Standard for Provenance and Context for Preservation of Data for Earth System Science

    NASA Technical Reports Server (NTRS)

    Ramaprian, Hampapuram K.; Moses, John F.

    2011-01-01

    Long-term data sets with data from many missions are needed to study trends and validate model results that are typical in Earth System Science research. Data and derived products originate from multiple missions (spaceborne, airborne and/or in situ) and from multiple organizations. During the missions, as well as long past their termination, it is essential to preserve the data and products to support future studies. Key aspects of preservation are: preserving bits and ensuring data are uncorrupted, preserving understandability with appropriate documentation, and preserving reproducibility of science with appropriate documentation and other artifacts. Computer technology provides adequate standards to ensure that, with proper engineering, bits are preserved as hardware evolves. However, to ensure understandability and reproducibility, it is essential to plan ahead to preserve all the relevant data and information. There are currently no standards identifying the content that needs to be preserved, leading to non-uniformity in content and to users being unsure whether preserved content is comprehensive. Each project, program or agency can specify the items to be preserved as a part of its data management requirements. However, broader community consensus that cuts across organizational or national boundaries would be needed to ensure comprehensiveness, uniformity and long-term utility of archived data. The Federation of Earth Science Information Partners (ESIP), a diverse network of scientists, data stewards and technology developers, has a forum for ESIP members to collaborate on data preservation issues. During early 2011, members discussed the importance of developing a Provenance and Context Content Standard (PCCS) and developed an initial list of content items.
This list is based on the outcome of a NASA and NOAA meeting held in 1998 under the auspices of the USGCRP, documentation requirements from NOAA, and our experience with some of the NASA Earth science missions. The items are categorized into the following eight high-level categories: Preflight/Pre-Operations, Products (Data), Product Documentation, Mission Calibration, Product Software, Algorithm Input, Validation, and Software Tools.

  16. A small increase in UV-B increases the susceptibility of tadpoles to predation

    PubMed Central

    Alton, Lesley A.; Wilson, Robbie S.; Franklin, Craig E.

    2011-01-01

    Increased ultraviolet-B (UV-B) radiation as a consequence of ozone depletion is one of the many potential drivers of ongoing global amphibian declines. Both alone and in combination with other environmental stressors, UV-B is known to have detrimental effects on the early life stages of amphibians, but our understanding of the fitness consequences of these effects remains superficial. We examined the independent and interactive effects of UV-B and predatory chemical cues (PCC) on a suite of traits of Limnodynastes peronii embryos and tadpoles, and assessed tadpole survival time in a predator environment to evaluate the potential fitness consequences. Exposure to a 3 to 6 per cent increase in UV-B, which is comparable to changes in terrestrial UV-B associated with ozone depletion, had no effect on any of the traits measured, except survival time in a predator environment, which was reduced by 22 to 28 per cent. Exposure to PCC caused tadpoles to hatch earlier, have reduced hatching success, have improved locomotor performance and survive for longer in a predator environment, but had no effect on tadpole survival, behaviour or morphology. Simultaneous exposure to UV-B and PCC resulted in no interactive effects. These findings demonstrate that increased UV-B has the potential to reduce tadpole fitness, while exposure to PCCs improves their fitness. PMID:21270039

  17. Outcomes Following Three-Factor Inactive Prothrombin Complex Concentrate Versus Recombinant Activated Factor VII Administration During Cardiac Surgery.

    PubMed

    Harper, Patrick C; Smith, Mark M; Brinkman, Nathan J; Passe, Melissa A; Schroeder, Darrell R; Said, Sameh M; Nuttall, Gregory A; Oliver, William C; Barbara, David W

    2018-02-01

    To compare outcomes following inactive prothrombin complex concentrate (PCC) or recombinant activated factor VII (rFVIIa) administration during cardiac surgery. Retrospective propensity-matched analysis. Academic tertiary-care center. Patients undergoing cardiac surgery requiring cardiopulmonary bypass who received either rFVIIa or the inactive 3-factor PCC. Outcomes following intraoperative administration of rFVIIa (n = 263) or factor IX complex (n = 72) as rescue therapy to treat bleeding. In the 24 hours after surgery, propensity-matched patients receiving PCC versus rFVIIa had significantly lower chest tube output (median difference -464 mL, 95% confidence interval [CI] -819 mL to -110 mL), fresh frozen plasma transfusion rates (17% v 38%, p = 0.028), and platelet transfusion rates (26% v 49%, p = 0.027). There were no significant differences between propensity-matched groups in postoperative stroke, deep venous thrombosis, pulmonary embolism, myocardial infarction, or intracardiac thrombus. Postoperative dialysis was significantly less likely in patients administered PCC versus rFVIIa following propensity matching (odds ratio = 0.3, 95% CI 0.1-0.7). No significant difference in 30-day mortality between patients receiving PCC versus rFVIIa was present following propensity matching. Use of rFVIIa versus inactive PCC was significantly associated with renal failure requiring dialysis and with increased postoperative bleeding and transfusions. Copyright © 2018 Elsevier Inc. All rights reserved.
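
    The propensity matching used in the study above can be illustrated with a minimal sketch: fit a logistic propensity model for treatment assignment, then greedily pair each treated subject with the nearest-score control within a caliper. Everything below (cohort, covariates, coefficients, caliper) is synthetic and invented for illustration; it is not the study's data or its exact matching procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort (illustrative only): two covariates, with treatment
# probability depending on the first covariate (a confounder).
n = 400
X = rng.normal(size=(n, 2))
T = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))).astype(int)

# Fit a logistic propensity model P(T=1|X) by plain gradient ascent
Xb = np.hstack([np.ones((n, 1)), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w += 0.05 * Xb.T @ (T - p) / n
scores = 1 / (1 + np.exp(-Xb @ w))

# Greedy 1:1 nearest-neighbor matching on the score, with a caliper
caliper = 0.05
controls = list(np.where(T == 0)[0])
pairs = []
for t in np.where(T == 1)[0]:
    if not controls:
        break
    c = min(controls, key=lambda j: abs(scores[j] - scores[t]))
    if abs(scores[c] - scores[t]) <= caliper:
        pairs.append((t, c))
        controls.remove(c)

# Matched groups should be better balanced on the confounding covariate
t_idx = [t for t, _ in pairs]
c_idx = [c for _, c in pairs]
gap_before = abs(X[T == 1, 0].mean() - X[T == 0, 0].mean())
gap_after = abs(X[t_idx, 0].mean() - X[c_idx, 0].mean())
print(f"covariate gap before matching: {gap_before:.3f}, after: {gap_after:.3f}")
```

    Outcomes (bleeding, transfusion, dialysis) would then be compared only within the matched pairs, which is what makes the group comparisons in the abstract interpretable.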

  18. In search of a consensus terminology in the field of platelet concentrates for surgical use: platelet-rich plasma (PRP), platelet-rich fibrin (PRF), fibrin gel polymerization and leukocytes.

    PubMed

    Dohan Ehrenfest, David M; Bielecki, Tomasz; Mishra, Allan; Borzini, Piero; Inchingolo, Francesco; Sammartino, Gilberto; Rasmusson, Lars; Everts, Peter A

    2012-06-01

    In the field of platelet concentrates for surgical use, most products are termed Platelet-Rich Plasma (PRP). Unfortunately, this term is very general and incomplete, leading to much confusion in the scientific database. In this article, a panel of experts discusses this issue and proposes an accurate and simple terminology system for platelet concentrates for surgical use. Four main categories of products can easily be defined, depending on their leukocyte content and fibrin architecture: Pure Platelet-Rich Plasma (P-PRP), such as cell separator PRP, Vivostat PRF or Anitua's PRGF; Leukocyte- and Platelet-Rich Plasma (L-PRP), such as Curasan, Regen, Plateltex, SmartPReP, PCCS, Magellan, Angel or GPS PRP; Pure Platelet-Rich Fibrin (P-PRF), such as Fibrinet; and Leukocyte- and Platelet-Rich Fibrin (L-PRF), such as Choukroun's PRF. P-PRP and L-PRP refer to the unactivated liquid form of these products; their activated versions are named P-PRP gels and L-PRP gels, respectively. The purpose of this search for a terminology consensus is to plead for a more serious characterization of these products. Researchers have to be aware of the complex nature of these living biomaterials in order to avoid misunderstandings and erroneous conclusions. Understanding the biomaterials, or believing in the magic of growth factors? The future of the field depends on this choice.

  19. Recent progress on the scalable fabrication of hybrid polymer/SiO2 nanophotonic cavity arrays with an encapsulated MoS2 film

    NASA Astrophysics Data System (ADS)

    Hammer, Sebastian; Mangold, Hans-Moritz; Nguyen, Ariana E.; Martinez-Ta, Dominic; Naghibi Alvillar, Sahar; Bartels, Ludwig; Krenner, Hubert J.

    2018-02-01

    We review the fully scalable fabrication of a large array of hybrid molybdenum disulfide (MoS2) - silicon dioxide (SiO2) one-dimensional (1D), freestanding photonic-crystal cavities (PCCs) capable of enhancing the MoS2 photoluminescence (PL) at the narrow cavity resonance. As demonstrated in our prior work [S. Hammer et al., Sci. Rep. 7, 7251 (2017)], geometric mode tuning over the wide spectral range of the MoS2 PL can be achieved by changing the PC period. In this contribution, we provide a step-by-step description of the fabrication process and give additional detailed information on the degradation of MoS2 by XeF2 vapor. We avoid potential damage to the MoS2 monolayer during the crucial XeF2 etch by refraining from stripping the electron beam (e-beam) resist after dry etching of the photonic crystal pattern. The remaining resist on top of the samples encapsulates and protects the MoS2 film during the entire fabrication process. Although the thickness of the remaining resist depends strongly on the fabrication process, the resulting encapsulation of the MoS2 layer improves the confinement of the optical modes and gives rise to a potential enhancement of the light-matter interaction.

  20. Results for the Fourth Quarter Calendar Year 2015 Tank 50H Salt Solution Sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.

    In this memorandum, the chemical and radionuclide contaminant results from the Fourth Quarter Calendar Year 2015 (CY15) sample of Tank 50H salt solution are presented in tabulated form. The Fourth Quarter CY15 Tank 50H samples were obtained on October 29, 2015 and received at Savannah River National Laboratory (SRNL) on October 30, 2015. The information from this characterization will be used by Defense Waste Processing Facility (DWPF) & Saltstone Facility Engineering for the transfer of aqueous waste from Tank 50H to the Salt Feed Tank in the Saltstone Production Facility, where the waste will be treated and disposed of in the Saltstone Disposal Facility. This memorandum compares results, where applicable, to Saltstone Waste Acceptance Criteria (WAC) limits and targets. Data pertaining to the regulatory limits for Resource Conservation and Recovery Act (RCRA) metals will be documented at a later time per the Task Technical and Quality Assurance Plan (TTQAP) for the Tank 50H saltstone task. The chemical and radionuclide contaminant results from the characterization of the Fourth Quarter CY15 sampling of Tank 50H were requested by SRR personnel, and details of the testing are presented in the SRNL Task Technical and Quality Assurance Plan.

  1. Analysis Of 2H-Evaporator Scale Pot Bottom Sample [HTF-13-11-28H]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oji, L. N.

    2013-07-15

    Savannah River Remediation (SRR) is planning to remove a buildup of sodium aluminosilicate scale from the 2H-evaporator pot by loading and soaking the pot with heated 1.5 M nitric acid solution. Sampling and analysis of the scale material from the 2H evaporator has been performed so that the evaporator can be chemically cleaned beginning in July of 2013. Historically, since the start of operation of the Defense Waste Processing Facility (DWPF), silicon in the DWPF recycle stream has combined with aluminum in the typical tank farm supernate to form sodium aluminosilicate scale mineral deposits in the 2H-evaporator pot and gravity drain line. The 2H-evaporator scale samples analyzed by Savannah River National Laboratory (SRNL) came from the bottom cone section of the 2H-evaporator pot. The sample holder from the 2H-evaporator wall was virtually empty and was not included in the analysis. It is worth noting that after the delivery of these 2H-evaporator scale samples to SRNL for analysis, the plant customer determined that the 2H evaporator could be operated for an additional period before requiring cleaning. Therefore, there was no need for the expedited sample analysis called for in the Technical Task Request. However, a second set of 2H evaporator scale samples was expected in May of 2013, which would need expedited analysis. X-ray diffraction (XRD) analysis confirmed the bottom cone section sample from the 2H-evaporator pot consisted of nitrated cancrinite (a crystalline sodium aluminosilicate solid), clarkeite, and uranium oxide. There were also mercury compound XRD peaks that could not be matched, and further X-ray fluorescence (XRF) analysis of the sample confirmed the existence of elemental mercury or mercuric oxide. On an “as received” basis, the scale contained an average of 7.09E+00 wt % total uranium (n = 3; st. dev. = 8.31E-01 wt %) with a U-235 enrichment of 5.80E-01 % (n = 3; st. dev. = 3.96E-02 %).
The measured U-238 concentration was 7.05E+00 wt % (n = 3; st. dev. = 8.25E-01 wt %). Analysis results for Pu-238, Pu-239, and Pu-241 are 7.06E-05 ± 7.63E-06 wt %, 9.45E-04 ± 3.52E-05 wt %, and <2.24E-06 wt %, respectively. These results are provided so that SRR can calculate the equivalent uranium-235 concentrations for the NCSA. Because this 2H evaporator pot bottom scale sample contained a significant amount of elemental mercury (11.7 wt % average), it is recommended that analysis for mercury be included in future Technical Task Requests on 2H evaporator sample analysis at SRNL. Results confirm that the uranium contained in the scale remains depleted with respect to natural uranium. SRNL did not calculate an equivalent U-235 enrichment, which would take into account the other fissionable isotopes U-233, Pu-239, and Pu-241.

  2. Biodosimetry estimate for high-LET irradiation.

    PubMed

    Wang, Z Z; Li, W J; Zhi, D J; Jing, X G; Wei, W; Gao, Q X; Liu, B

    2007-08-01

    The purpose of this paper is to develop an easy and reliable biodosimetry protocol for radiation accidents involving high linear energy transfer (LET) exposure. Human peripheral blood lymphocytes were irradiated with carbon ions (LET: 34.6 keV/μm), and the induced chromosome aberrations were analyzed using both a conventional colcemid block method and a calyculin A induced premature chromosome condensation (PCC) method. At the lower dose range (0-4 Gy), the measured dicentric (dics) and centric ring chromosomes (cRings) provided reasonable dose information. At higher doses (8 Gy), however, the frequency of dics and cRings was not suitable for dose estimation. Instead, we found that the number of Giemsa-stained drug-induced G2 prematurely condensed chromosomes (G2-PCC) can be used for dose estimation, since the total chromosome number (including fragments) was linearly correlated with radiation dose (r = 0.99). The ratio of the longest to the shortest chromosome length of the drug-induced G2-PCCs increased with radiation dose in a linear-quadratic manner (r = 0.96), which indicates that this ratio can also be used to estimate radiation doses. It is easier to establish the dose-response curve using the PCC technique than using the conventional metaphase chromosome method. Combining the ratio of the longest to the shortest chromosome length with analysis of the total chromosome number might be a valuable tool for rapid and precise dose estimation for victims of radiation accidents.
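
    The calibration approach described above (a linear fit for total chromosome number and a linear-quadratic fit for the length ratio) can be sketched as follows; the dose points and responses are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration data (not from the paper): radiation dose in Gy
doses = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
# Synthetic mean total chromosome counts per G2-PCC spread (linear response)
counts = np.array([46.1, 48.3, 50.2, 54.5, 58.8, 62.9])
# Synthetic longest/shortest chromosome length ratios (linear-quadratic response)
ratios = np.array([5.0, 5.6, 6.5, 9.1, 13.0, 18.2])

# Linear calibration: counts = a*D + b
a, b = np.polyfit(doses, counts, 1)
r_lin = np.corrcoef(doses, counts)[0, 1]

# Linear-quadratic calibration: ratio = c2*D^2 + c1*D + c0
c2, c1, c0 = np.polyfit(doses, ratios, 2)

# Dose estimate for a victim sample: invert the linear curve
observed_count = 56.0
estimated_dose = (observed_count - b) / a
print(f"linear fit r = {r_lin:.2f}, estimated dose = {estimated_dose:.2f} Gy")
```

    In practice a calibration like this is established once per radiation quality, then inverted for each victim sample; the two endpoints can be combined to cross-check the estimate.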

  3. Assessing the need for communication training for specialists in poison information.

    PubMed

    Planalp, Sally; Crouch, Barbara; Rothwell, Erin; Ellington, Lee

    2009-07-01

    Effective communication has been shown to be essential to physician-patient communication and may be even more critical for poison control center (PCC) calls because of the absence of visual cues, the need for quick and accurate information exchange, and possible suboptimal conditions such as call surges. Professionals who answer poison control calls typically receive extensive training in toxicology but very little formal training in communication. An instrument was developed to assess the perceived need for communication training for specialists in poison information (SPIs) with input from focus groups and a panel of experts. Requests to respond to an online questionnaire were made to PCCs throughout the United States and Canada. The 537 respondents were 70% SPIs or poison information providers (PIPs), primarily educated in nursing or pharmacy, working across the United States and Canada, and employed by their current centers an average of 10 years. SPIs rated communication skills as extremely important to securing positive outcomes for PCC calls even though they reported that their own training was not strongly focused on communication and existing training in communication was perceived as only moderately useful. Ratings of the usefulness of 21 specific training units were consistently high, especially for new SPIs but also for experienced SPIs. Directors rated the usefulness of training for experienced SPIs higher for 5 of the 21 challenges compared to the ratings of SPIs. Findings support the need for communication training for SPIs and provide an empirical basis for setting priorities in developing training units.

  4. Breathing exercises in upper abdominal surgery: a systematic review and meta-analysis.

    PubMed

    Grams, Samantha T; Ono, Lariane M; Noronha, Marcos A; Schivinski, Camila I S; Paulin, Elaine

    2012-01-01

    There is currently no consensus on the indication and benefits of breathing exercises for the prevention of postoperative pulmonary complications (PPCs) and for the recovery of pulmonary mechanics. To undertake a systematic review of randomized and quasi-randomized studies that assessed the effects of breathing exercises on the recovery of pulmonary function and prevention of PPCs after upper abdominal surgery (UAS). We searched the Physiotherapy Evidence Database (PEDro), Scientific Electronic Library Online (SciELO), MEDLINE, and the Cochrane Central Register of Controlled Trials. We included randomized controlled trials and quasi-randomized controlled trials on pre- and postoperative UAS patients in which the primary intervention was breathing exercises without the use of incentive spirometers. The methodological quality of the studies was rated according to the PEDro scale. Data on maximal respiratory pressures (MIP and MEP), spirometry, diaphragm mobility, and postoperative complications were extracted and analyzed. Data were pooled in fixed-effect meta-analysis whenever possible. Six studies were used for analysis. Two meta-analyses including 66 participants each showed that, on the first postoperative day, the breathing exercises were likely to have induced MEP and MIP improvements with treatment effects of 11.44 mmH2O (95% CI 0.88 to 22) and 11.78 mmH2O (95% CI 2.47 to 21.09), respectively. Breathing exercises are likely to have a beneficial effect on respiratory muscle strength in patients submitted to UAS; however, the lack of good-quality studies hinders a clear conclusion on the subject.
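
    The fixed-effect pooling used in such meta-analyses is inverse-variance weighting of per-study effects. A minimal sketch, with hypothetical mean differences and standard errors rather than the review's actual study data:

```python
import math

# Hypothetical per-study mean differences and their standard errors;
# the values are illustrative, not taken from the review.
studies = [
    (10.5, 4.0),   # (mean difference, SE) study 1
    (12.8, 5.5),   # (mean difference, SE) study 2
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled MD = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

    The pooled estimate lands between the study effects, pulled toward the more precise (smaller-SE) study, which is the defining behavior of the fixed-effect model.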

  5. System analyses on advanced nuclear fuel cycle and waste management

    NASA Astrophysics Data System (ADS)

    Cheon, Myeongguk

    To evaluate the impacts of the accelerator-driven transmutation of waste (ATW) fuel cycle on a geological repository, two mathematical models are developed: a reactor system analysis model and a high-level waste (HLW) conditioning model. With the former, the fission products and residual trans-uranium (TRU) contained in HLW generated from reference ATW plant operations are quantified and the reduction of the TRU inventory contained in commercial spent nuclear fuel (CSNF) is evaluated. With the latter, an optimized waste loading and composition for solidification of HLW are determined and the volume reduction of waste packages associated with CSNF is evaluated. WACOM, a reactor system analysis code developed in this study for burnup calculation, is validated against ORIGEN2.1 and MCNP. WACOM is used to perform multicycle analysis for the reference lead-bismuth eutectic (LBE) cooled transmuter. By applying the results of this analysis to the reference ATW deployment scenario considered in the ATW roadmap, the HLW generated from the ATW fuel cycle is quantified and the reduction of the TRU inventory contained in CSNF is evaluated. A linear programming (LP) model has been developed for determination of an optimized waste loading and composition in solidification of HLW. The model has been applied to a US defense HLW. The optimum waste loading evaluated by the LP model was compared with that estimated by the Defense Waste Processing Facility (DWPF) in the US, and good agreement was observed. The LP model was then applied to the volume reduction of waste packages associated with CSNF. Based on the obtained reduction factors, the expansion of Yucca Mountain Repository (YMR) capacity is evaluated. It is found that with the reference ATW system, the TRU contained in CSNF could be reduced by a factor of ~170 in terms of inventory and by a factor of ~40 in terms of toxicity under the assumed scenario.
The number of waste packages related to CSNF could be reduced by a factor of ~8 in terms of volume and by a factor of ~10 on the basis of electricity generation, when a sufficient cooling time for discharged spent fuel and zero process chemicals in HLW are assumed. The expansion factor of Yucca Mountain Repository capacity is estimated to be 2.4, much smaller than the reduction factor for CSNF waste packages, due to the existence of DOE-owned spent fuel and HLW. The YMR, however, could support 10 times greater electricity generation as long as the statutory capacity for DOE-owned SNF and HLW remains unchanged. This study also showed that the reduction in the number of waste packages depends strongly on the heat generation rate of the HLW and the amount of process chemicals contained in it. For a greater reduction in the number of waste packages, a sufficient cooling time for discharged fuel and efforts to minimize the amount of process chemicals contained in HLW are crucial.
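
    The idea of an LP-based optimum waste loading can be sketched in miniature: treat each glass property as a linear function of the waste loading and take the tightest constraint as binding. The constraint names and coefficients below are invented for illustration; they are not the dissertation's actual model.

```python
# Illustrative sketch of an optimal waste-loading calculation: each glass
# property is modeled as a linear function of waste loading w (0-1),
# property_i(w) = a_i * w + b_i, and must stay within a limit c_i.
# Coefficients below are invented for illustration, not real model values.
constraints = {
    # name: (a, b, limit), with each property increasing in w (a > 0)
    "viscosity_proxy":  (4.0, 1.0, 3.5),
    "leach_rate_proxy": (2.5, 0.2, 1.8),
    "liquidus_proxy":   (3.0, 0.5, 2.6),
}

# For a one-variable LP with increasing constraints, the optimum is the
# tightest upper bound w <= (c - b) / a across all constraints.
bounds = {name: (c - b) / a for name, (a, b, c) in constraints.items()}
w_opt = min(min(bounds.values()), 1.0)
binding = min(bounds, key=bounds.get)
print(f"optimal waste loading = {w_opt:.3f}, binding constraint: {binding}")
```

    A full LP adds composition variables and many more constraints, but the structure is the same: maximize loading subject to linear property limits, with one constraint active at the optimum.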

  6. Automatic detection of blood versus non-blood regions on intravascular ultrasound (IVUS) images using wavelet packet signatures

    NASA Astrophysics Data System (ADS)

    Katouzian, Amin; Baseri, Babak; Konofagou, Elisa E.; Laine, Andrew F.

    2008-03-01

    Intravascular ultrasound (IVUS) has proven to be a reliable imaging modality that is widely employed in cardiac interventional procedures. It can provide morphologic as well as pathologic information on the occluding plaques in the coronary arteries. In this paper, we present a new technique using wavelet packet analysis that differentiates between blood and non-blood regions on IVUS images. We utilized a multi-channel texture segmentation algorithm based on discrete wavelet packet frames (DWPF). A k-means clustering algorithm was deployed to partition the extracted textural features into blood and non-blood in an unsupervised fashion. Finally, the geometric and statistical information of the segmented regions was used to estimate the set of pixels closest to the lumen border, and a spline curve was fitted to the set. The presented algorithm may be helpful in delineating the lumen border automatically and more reliably prior to plaque characterization, especially with 40 MHz transducers, where the appearance of red blood cells renders border detection more challenging, even manually. Experimental results are shown and quantitatively compared with borders manually traced by an expert. We conclude that our two-dimensional (2-D) algorithm, which is independent of cardiac and catheter motions, performs well in both in-vivo and in-vitro cases.
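
    The segmentation pipeline described above (wavelet packet texture features followed by unsupervised k-means clustering) can be sketched on synthetic data. The sketch below uses a hand-rolled one-level 2D Haar transform and a minimal k-means loop rather than the authors' DWPF implementation; the image, block size, and feature choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "IVUS-like" image: left half speckle-heavy (blood-like texture),
# right half smoother and brighter (tissue-like); purely illustrative data.
img = np.empty((64, 64))
img[:, :32] = rng.normal(0.0, 1.0, (64, 32))          # high-frequency speckle
img[:, 32:] = rng.normal(0.0, 0.1, (64, 32)) + 2.0    # smooth, brighter

def haar_energies(block):
    """One-level 2D Haar transform of a block; return the 4 subband energies."""
    a = (block[0::2, :] + block[1::2, :]) / 2.0   # rows: average
    d = (block[0::2, :] - block[1::2, :]) / 2.0   # rows: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return [float(np.mean(s**2)) for s in (ll, lh, hl, hh)]

# Texture feature vector per 8x8 block
B = 8
feats, coords = [], []
for i in range(0, 64, B):
    for j in range(0, 64, B):
        feats.append(haar_energies(img[i:i+B, j:j+B]))
        coords.append((i, j))
feats = np.log1p(np.array(feats))   # compress dynamic range

# Minimal k-means (k=2), unsupervised as in the paper's approach
centers = feats[[0, len(feats) - 1]].copy()
for _ in range(20):
    d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    for k in range(2):
        if np.any(labels == k):
            centers[k] = feats[labels == k].mean(axis=0)

# Blocks on the speckle (left) side should mostly share one label
left = np.array([labels[n] for n, (i, j) in enumerate(coords) if j < 32])
right = np.array([labels[n] for n, (i, j) in enumerate(coords) if j >= 32])
print("left-side label purity:", np.mean(left == left[0]))
```

    The paper's DWPF features are richer (deeper packet tree, overcomplete frames), but the principle is the same: subband energies separate speckled blood texture from smoother tissue, and clustering needs no labeled training data.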

  7. Using polymerization, glass structure, and quasicrystalline theory to produce high level radioactive borosilicate glass remotely: a 20+ year legacy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, Carol M.

    Vitrification is currently the most widely used technology for the treatment of high-level radioactive wastes (HLW) throughout the world. Most of the nations that have generated HLW are immobilizing it in borosilicate glass. One of the primary reasons glass has become the most widely used immobilization medium is the relative simplicity of the vitrification process: melt a highly variable waste with glass-forming additives such as SiO2 and B2O3 in the form of a premelted frit, pour the molten mixture into a stainless steel canister (10’ tall by 2’ in diameter), and seal the canister before moisture can enter so that it does not corrode from the inside out. Another reason glass has become widely used for HLW is that the short-range order (SRO) and medium-range order (MRO) found in the structure of glass atomistically bond the radionuclides and hazardous species in the waste. The SRO and MRO have also been found to govern melt properties such as viscosity and resistivity, as well as the crystallization potential and the solubility of certain species. Furthermore, the molecular structure of the glass also controls the glass durability, i.e. the contaminant/radionuclide release, by establishing the distribution of ion exchange sites, hydrolysis sites, and the access of water to those sites. The molecular structure is flexible and hence accounts for the adaptability of glass formulations to HLW waste variability. Nuclear waste glasses melt at 1050-1150°C, which minimizes the volatility of radioactive components such as 99Tc, 137Cs, and 129I. Nuclear waste glasses have good long-term stability, including irradiation resistance. Process control models were developed based on the molecular structure of glass, the polymerization theory of glass, and the quasicrystalline theory of glass crystallization.
These models create a glass that is durable, pourable, and processable, with 95% accuracy, without knowing from batch to batch what the composition of the waste coming out of the storage tanks will be. These models have operated the Savannah River Site Defense Waste Processing Facility (SRS DWPF), the world’s largest HLW Joule-heated ceramic melter, since 1996. This unique “feed forward” process control, which qualifies the durability, pourability, and processability of the waste plus glass-additive mixture before it enters the melter, has enabled ~8000 tons of HLW glass and 4242 canisters to be produced since 1996 with only one melter replacement.
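
    The “feed forward” acceptability idea, predicting glass properties from the batch composition and accepting the batch only if the prediction plus an uncertainty allowance stays within the property limit, can be sketched as follows. All coefficients, component names, limits, and the allowance are invented for illustration; they are not the actual DWPF property-composition models or PCCS constraints.

```python
# Sketch of a feed-forward acceptability check: a glass property is
# predicted from the batch composition with a linear model, and the batch
# is accepted only if the prediction plus an uncertainty allowance stays
# inside the property limit. All numbers below are hypothetical.
coeffs = {"SiO2": 0.8, "B2O3": -0.4, "Na2O": 1.5}   # hypothetical model terms
intercept = -0.2
limit = 1.0              # hypothetical upper property limit
uncertainty = 0.15       # hypothetical model + measurement allowance

def acceptable(composition):
    """Return True if predicted property + allowance is within the limit."""
    pred = intercept + sum(coeffs[c] * x for c, x in composition.items())
    return pred + uncertainty <= limit

batch = {"SiO2": 0.55, "B2O3": 0.10, "Na2O": 0.12}  # hypothetical mass fractions
print("batch acceptable:", acceptable(batch))
```

    The key design choice, mirrored from the description above, is that acceptability is decided before the melt: the uncertainty allowance makes the check conservative, so batch-to-batch composition variation cannot push a marginal glass past its property limit.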

  8. Using polymerization, glass structure, and quasicrystalline theory to produce high level radioactive borosilicate glass remotely: a 20+ year legacy

    DOE PAGES

    Jantzen, Carol M.

    2017-03-27

    Vitrification is currently the most widely used technology for the treatment of high level radioactive wastes (HLW) throughout the world. Most of the nations that have generated HLW are immobilizing it in borosilicate glass. One of the primary reasons that glass has become the most widely used immobilization medium is the relative simplicity of the vitrification process: melt a highly variable waste with glass forming additives such as SiO2 and B2O3 in the form of a premelted frit, pour the molten mixture into a stainless steel canister (10’ tall by 2’ in diameter), and seal the canister before moisture can enter it so that it does not corrode from the inside out. Glass has also become widely used for HLW because the short range order (SRO) and medium range order (MRO) found in the structure of glass atomistically bond the radionuclides and hazardous species in the waste. The SRO and MRO have also been found to govern melt properties such as the viscosity and resistivity of the melt, the crystallization potential, and the solubility of certain species. Furthermore, the molecular structure of the glass also controls the glass durability, i.e. the contaminant/radionuclide release, by establishing the distribution of ion exchange sites, hydrolysis sites, and the access of water to those sites. The molecular structure is flexible and hence accounts for the adaptability of glass formulations to HLW waste variability. Nuclear waste glasses melt between 1050 and 1150°C, which minimizes the volatility of radioactive components such as Tc-99, Cs-137, and I-129. Nuclear waste glasses also have good long term stability, including irradiation resistance. Process control models were developed based on the molecular structure of glass, the polymerization theory of glass, and the quasicrystalline theory of glass crystallization. 
These models produce a glass that is durable, pourable, and processable with 95% accuracy, without knowing from batch to batch what the composition of the waste coming out of the storage tanks will be. These models have operated the Savannah River Site Defense Waste Processing Facility (SRS DWPF), the world’s largest HLW Joule heated ceramic melter, since 1996. This unique “feed forward” process control, which qualifies the durability, pourability, and processability of the waste plus glass additive mixture before it enters the melter, has enabled ~8000 tons of HLW glass and 4242 canisters to be produced since 1996 with only one melter replacement.
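The “feed forward” control idea described above, evaluating property-composition models on the blended feed before it goes to the melter, can be sketched in a few lines. This is a minimal illustration only: the oxide coefficients, feed fractions, and acceptance limits below are invented placeholders, not the actual DWPF/PCCS models.

```python
# Minimal sketch of a feed-forward acceptability check: predict glass
# properties from the feed composition, then accept or reject the batch
# before it enters the melter. All numbers are hypothetical.

def predict_property(composition, coefficients):
    """Linear property-composition model: property = sum(b_i * x_i)."""
    return sum(coefficients[oxide] * x for oxide, x in composition.items())

def is_acceptable(composition, models, limits):
    """Accept the batch only if every predicted property is inside its limits."""
    for name, coeffs in models.items():
        value = predict_property(composition, coeffs)
        low, high = limits[name]
        if not (low <= value <= high):
            return False
    return True

# Hypothetical blended feed (mass fractions) and one hypothetical model.
feed = {"SiO2": 0.50, "B2O3": 0.08, "Na2O": 0.12, "Fe2O3": 0.12, "Al2O3": 0.05}
models = {"log_viscosity": {"SiO2": 4.0, "B2O3": -3.0, "Na2O": -6.0,
                            "Fe2O3": -1.0, "Al2O3": 2.0}}
limits = {"log_viscosity": (0.5, 2.5)}

print(is_acceptable(feed, models, limits))  # True: this feed would be accepted
```

In the real system several properties (durability, viscosity, liquidus) are checked simultaneously, with statistical uncertainty folded into the constraints.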

  9. SUMMARY OF 2010 DOE EM INTERNATIONAL PROGRAM STUDIES OF WASTE GLASS STRUCTURE AND PROPERTIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K.; Choi, A.; Marra, J.

    2011-02-07

    Collaborative work between the Savannah River National Laboratory (SRNL) and SIA Radon in Russia was divided among three tasks for calendar year 2010. The first task focused on the study of simplified high level waste glass compositions with the objective of identifying the compositional drivers that lead to crystallization and poor chemical durability. The second task focused on detailed characterization of more complex waste glass compositions with unexpectedly poor chemical durabilities. The third task focused on determining the structure of select high level waste glasses made with varying frit compositions in order to improve models under development for predicting the melt rate of the Defense Waste Processing Facility (DWPF) glasses. The majority of these tasks were carried out at SIA Radon. Selection and fabrication of the glass compositions, along with chemical composition measurements and evaluations of durability, were carried out at SRNL and are described in this report. SIA Radon provided three summary reports based on the outcome of the three tasks. These reports are included as appendices to this document. Briefly, the result of characterization of the Task 1 glasses may indicate that glass compositions where iron is predominantly tetrahedrally coordinated have more of a tendency to crystallize nepheline or nepheline-like phases. For the Task 2 glasses, the results suggested that the relatively low fraction of tetrahedrally coordinated boron and the relatively low concentrations of Al{sub 2}O{sub 3} available to form [BO{sub 4/2}]{sup -}Me{sup +} and [AlO{sub 4/2}]{sup -}Me{sup +} tetrahedral units are not sufficient to consume all of the alkali ions, and thus these alkali ions are easily leached from the glasses. All of the twelve Task 3 glass compositions were determined to be mainly amorphous, with some minor spinel phases. 
Several key structural units such as metasilicate chains and rings were identified, which confirms the current modeling approach for the silicate phase. The coordination of aluminum and iron was found to be mainly tetrahedral, with some octahedral iron ions. In all samples, trigonally-coordinated boron was determined to dominate over tetrahedrally-coordinated boron. The results further suggested that BO{sub 4} tetrahedra and BO{sub 3} triangles form complex borate units and may be present as separate constituents. However, no quantification of the tetrahedral-to-trigonal boron ratio was made.

  10. Legal liability of medical toxicologists serving as poison control center consultants: a review of relevant legal statutes and survey of the experience of medical toxicologists.

    PubMed

    Curtis, John A; Greenberg, Michael

    2009-09-01

    Legal liability is an increasing concern in many areas of medicine, although the extent to which this alters the practice of medicine is unclear. To date, the risk for litigation against medical toxicologists serving in the role of poison control center (PCC) consultants has not been assessed. A survey questionnaire was mailed to medical toxicologists in the United States to assess their litigation history with regard specifically to their role as PCC consultants. In addition, state laws were examined for statutes that provide protective language with regard to medical toxicologists working as PCC consultants. This survey revealed that most medical toxicologists have served or currently serve as PCC consultants. Most had some degree of concern over legal liability, and several had been sued as a result of PCC consultations. Several states have specific statutes that limit the legal liability of PCCs and their employees, including medical directors and consulting medical toxicologists. Based on the survey results, legal action against toxicologists serving as PCC consultants appears to be an uncommon occurrence. Lawsuits are usually based upon nonfeasance and have typically been settled or dropped before trial. Legal liability is a concern for PCC consultants. However, legal action against consultants appears to be rare, and respondents to the survey indicated that it did not affect their advice or willingness to serve as PCC consultants. A limited number of states have enacted laws that provide protection for medical toxicologists serving as PCC consultants.

  11. Software Prototyping

    PubMed Central

    Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R.

    2016-01-01

    Background: Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. Objective: To describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Methods: Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users. Users were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Results: Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included 1) having subject-matter experts heavily involved with the design; 2) flexibility to make rapid changes; 3) the ability to minimize software development efforts early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communication of the design to the programmers. Challenges included 1) time and effort to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system. PMID:27081404

  12. Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation.

    PubMed

    Quiroga-Lombard, Claudio S; Hass, Joachim; Durstewitz, Daniel

    2013-07-01

    Correlations among neurons are thought to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then "slicing" spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Beyond removing nonstationarities, the present method may also be used to detect significant events in spike trains.
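The core idea, computing the PCC within (approximately) stationary segments so that slow shared rate changes do not inflate the correlation, can be sketched as follows. This is a simplified illustration with Poisson toy data and a fixed segment length, not the authors' exact stationarity test or correction.

```python
# Sketch: segment-wise Pearson cross-correlation for binned spike counts.
# Removing each segment's own mean discounts slow, shared rate changes
# (nonstationarities) that would otherwise inflate the correlation.
import numpy as np

def segment_pcc(x, y, seg_len):
    """Average the PCC computed within each segment of length seg_len."""
    n_seg = len(x) // seg_len
    pccs = []
    for k in range(n_seg):
        xs = x[k * seg_len:(k + 1) * seg_len]
        ys = y[k * seg_len:(k + 1) * seg_len]
        if xs.std() > 0 and ys.std() > 0:
            pccs.append(np.corrcoef(xs, ys)[0, 1])
    return float(np.mean(pccs))

rng = np.random.default_rng(0)
rate = np.repeat([2.0, 10.0], 500)      # shared slow rate change (nonstationarity)
x = rng.poisson(rate)                   # two conditionally independent spike counts
y = rng.poisson(rate)

naive = np.corrcoef(x, y)[0, 1]         # inflated by the shared rate covariance
sliced = segment_pcc(x, y, seg_len=500) # computed within stationary segments
print(naive > sliced)                   # True: slicing removes the inflation
```

Here the two trains are independent given the rate, so the segment-wise estimate falls near zero while the naive PCC is large; this is exactly the firing-rate covariance the abstract distinguishes from spike-time covariance.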

  13. TRIPLICATE SODIUM IODIDE GAMMA RAY MONITORS FOR THE SMALL COLUMN ION EXCHANGE PROGRAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Couture, A.

    2011-09-20

    This technical report contains recommendations from the Analytical Development (AD) organization of the Savannah River National Laboratory (SRNL) for a system of triplicate Sodium Iodide (NaI) detectors to be used to monitor Cesium-137 ({sup 137}Cs) content of the Decontaminated Salt Solution (DSS) output of the Small Column Ion Exchange (SCIX) process. These detectors need to be gain stabilized with respect to temperature shifts since they will be installed on top of Tank 41 at the Savannah River Site (SRS). This will be accomplished using NaI crystals doped with the alpha-emitting isotope, Americium-241 ({sup 241}Am). Two energy regions of the detector output will be monitored using single-channel analyzers (SCAs), the {sup 137}Cs full-energy {gamma}-ray peak and the {sup 241}Am alpha peak. The count rate in the gamma peak region will be proportional to the {sup 137}Cs content in the DSS output. The constant rate of alpha decay in the NaI crystal will be monitored and used as feedback to adjust the high voltage supply to the detector in response to temperature variation. An analysis of theoretical {sup 137}Cs breakthrough curves was used to estimate the gamma activity expected in the DSS output during a single iteration of the process. Count rates arising from the DSS and background sources were predicted using Microshield modeling software. The current plan for shielding the detectors within an enclosure with four-inch thick steel walls should allow the detectors to operate with the sensitivity required to perform these measurements. Calibration, testing, and maintenance requirements for the detector system are outlined as well. The purpose of SCIX is to remove and concentrate high-level radioisotopes from SRS salt waste resulting in two waste streams. 
The concentrated high-level waste containing {sup 137}Cs will be sent to the Defense Waste Processing Facility (DWPF) for vitrification and the low-level DSS will be sent to the Saltstone Production Facility (SPF) to be incorporated into grout.
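The gain-stabilization loop described above, using the constant {sup 241}Am alpha rate as a reference for adjusting the detector high voltage, could be sketched as a simple proportional controller. The sign convention, gain, setpoint, and operating voltage below are all invented placeholders; the abstract does not specify the actual SCIX electronics.

```python
# Loose sketch of rate-based gain stabilization: the Am-241 alpha peak rate
# is nominally constant, so a drift in the measured rate within the SCA
# window signals a gain shift, and the high voltage is nudged to compensate.
# Gain k_p, setpoint, and correction direction are hypothetical.

def adjust_high_voltage(hv, measured_am_rate, reference_am_rate, k_p=0.5):
    """Proportional feedback: shift HV in proportion to the rate error."""
    error = reference_am_rate - measured_am_rate
    return hv + k_p * error

hv = 900.0  # volts, hypothetical operating point
hv = adjust_high_voltage(hv, measured_am_rate=98.0, reference_am_rate=100.0)
print(hv)   # 901.0
```

A deployed system would filter the rate estimate over many counting intervals and clamp the correction step; this sketch only shows the feedback structure.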

  14. THE HYDROTHERMAL REACTIONS OF MONOSODIUM TITANATE, CRYSTALLINE SILICOTITANATE AND SLUDGE IN THE MODULAR SALT PROCESS: A LITERATURE SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fondeur, F.; Pennebaker, F.; Fink, S.

    2010-11-11

    The use of crystalline silicotitanate (CST) is proposed for an at-tank process to treat High Level Waste at the Savannah River Site. The proposed configuration includes deployment of ion exchange columns suspended in the risers of existing tanks to process salt waste without building a new facility. The CST is available in an engineered form, designated as IE-911-CW, from UOP. Prior data indicates CST has a proclivity to agglomerate from deposits of silica rich compounds present in the alkaline waste solutions. This report documents the prior literature and provides guidance for the design and operations that include CST to mitigate that risk. The proposed operation will also add monosodium titanate (MST) to the supernate of the tank prior to the ion exchange operation to remove strontium and select alpha-emitting actinides. The cesium loaded CST is ground and then passed forward to the sludge washing tank as feed to the Defense Waste Processing Facility (DWPF). Similarly, the MST will be transferred to the sludge washing tank. Sludge processing includes the potential to leach aluminum from the solids at elevated temperature (e.g., 65 C) using concentrated (3M) sodium hydroxide solutions. Prior literature indicates that both CST and MST will agglomerate and form higher yield stress slurries with exposure to elevated temperatures. This report assessed that data and provides guidance on minimizing the impact of CST and MST on sludge transfer and aluminum leaching of sludge.

  15. Software Prototyping: A Case Report of Refining User Requirements for a Health Information Exchange Dashboard.

    PubMed

    Nelson, Scott D; Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R

    2016-01-01

    Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. To describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users. Users were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included 1) having subject-matter experts heavily involved with the design; 2) flexibility to make rapid changes; 3) the ability to minimize software development efforts early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communication of the design to the programmers. Challenges included 1) time and effort to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system.

  16. Comparison of drug administration logistics between prothrombin complex concentrates and plasma in the emergency department.

    PubMed

    Alarfaj, Sumaiah J; Jarrell, Daniel H; Patanwala, Asad E

    2018-03-24

    Prothrombin complex concentrate (PCC) is used as an alternative to fresh frozen plasma (FFP) for emergency bleeding. The primary objective of this study was to compare the time from order to start of administration between 3-factor PCC (PCC3), 4-factor PCC (PCC4), and FFP in the emergency department (ED). The secondary objective was to evaluate the effect of an ED pharmacist on time to administration of PCCs. This was a single-center, three-arm retrospective cohort study. Adult patients in the ED with bleeding were included. The primary outcome measure was the time from order to administration, which was compared between PCC3, PCC4, and FFP. The time from order to administration was also compared when the ED pharmacist was involved versus not involved in the care of patients receiving PCC. There were 90 patients included in the study cohort (30 in each group). The median age was 69 years (IQR 57-82 years), and 57% (n=52) were male. The median time from order to administration was 36 min (IQR 20-58 min) for PCC3, 34 min (IQR 18-48 min) for PCC4, and 92 min (IQR 63-133 min) for FFP (PCC3 versus PCC4, p=0.429; PCC3 versus FFP, p<0.001; PCC4 versus FFP, p<0.001). The median time from order to administration was significantly decreased when the ED pharmacist was involved (24 min [IQR 15-35 min] versus 42 min [IQR 32-59 min], p<0.001). Time from order to administration is faster with PCC than FFP. ED pharmacist involvement decreases the time from order to administration of PCC. Copyright © 2018. Published by Elsevier Inc.
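The summary statistics reported above (median time with interquartile range per group) can be computed with the standard library alone. The times below are invented example data for illustration, not the study's dataset.

```python
# Sketch of the median/IQR comparison used in the study above,
# on hypothetical order-to-administration times in minutes.
import statistics

def median_iqr(times):
    """Return (median, 25th percentile, 75th percentile)."""
    q = statistics.quantiles(sorted(times), n=4)  # exclusive method by default
    return statistics.median(times), q[0], q[2]

pcc4 = [18, 22, 30, 34, 40, 48, 55]       # hypothetical PCC4 group, minutes
ffp = [60, 70, 85, 92, 110, 130, 140]     # hypothetical FFP group, minutes

m1, lo1, hi1 = median_iqr(pcc4)
m2, lo2, hi2 = median_iqr(ffp)
print(f"PCC4: {m1} min (IQR {lo1}-{hi1}); FFP: {m2} min (IQR {lo2}-{hi2})")
```

A full analysis would add a nonparametric test (e.g. Mann-Whitney U) to produce the p-values quoted in the abstract.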

  17. Human exposures to pesticides in the United States.

    PubMed

    Langley, Ricky L; Mort, Sandra Amiss

    2012-01-01

    Pesticides are used in most homes, businesses, and farms to control a variety of pests, including insects, weeds, fungi, rodents, and even microbial organisms. Inappropriate use of pesticides can lead to adverse effects to humans and the environment. This study provides updated information on the magnitude of adverse pesticide exposures in the United States. Data on pesticide exposure were obtained from calls to poison control centers (PCCs) reported by the American Association of Poison Control Centers. Estimates of emergency department visits, hospitalizations, and health care costs were reported by the Agency for Healthcare Research and Quality (AHRQ), and deaths from pesticide poisonings reported by the Centers for Disease Control and Prevention (CDC) WONDER (Wide-ranging Online Data for Epidemiologic Research). An average of 23 deaths occur each year with pesticides as the underlying cause of death, most due to suicidal ingestions. An average of 130,136 calls to poison control centers were reported from 2006 to 2010, with an average of 20,116 cases (17.8%) treated in health care facilities annually. AHRQ reported an annual average of 7385 emergency room visits during 2006 to 2008, and 1419 annual hospitalizations during 2005 to 2009. Excluding cost from lost work time, hospital physician fees, and pesticide-induced cancers, the annual national cost associated with pesticide exposures was estimated as nearly $200 million USD based on data from emergency department visits, hospitalizations, and deaths. Pesticide exposures remain a significant public health issue. Health care providers, cooperative extension agents, and pesticide manufacturers can help prevent exposures by increasing education of parents and workers, encouraging the use of less toxic agents, and encouraging the practice of integrated pest management.

  18. Epidemic gasoline exposures following Hurricane Sandy.

    PubMed

    Kim, Hong K; Takematsu, Mai; Biary, Rana; Williams, Nicholas; Hoffman, Robert S; Smith, Silas W

    2013-12-01

    Major adverse climatic events (MACEs) in heavily-populated areas can inflict severe damage to infrastructure, disrupting essential municipal and commercial services. Compromised health care delivery systems and limited utilities such as electricity, heating, potable water, sanitation, and housing, place populations in disaster areas at risk of toxic exposures. Hurricane Sandy made landfall on October 29, 2012 and caused severe infrastructure damage in heavily-populated areas. The prolonged electrical outage and damage to oil refineries caused a gasoline shortage and rationing unseen in the USA since the 1970s. This study explored gasoline exposures and clinical outcomes in the aftermath of Hurricane Sandy. Prospectively collected, regional poison control center (PCC) data regarding gasoline exposure cases from October 29, 2012 (hurricane landfall) through November 28, 2012 were reviewed and compared to the previous four years. The trends of gasoline exposures, exposure type, severity of clinical outcome, and hospital referral rates were assessed. Two-hundred and eighty-three gasoline exposures were identified, representing an 18 to 283-fold increase over the previous four years. The leading exposure route was siphoning (53.4%). Men comprised 83.0% of exposures; 91.9% were older than 20 years of age. Of 273 home-based calls, 88.7% were managed on site. Asymptomatic exposures occurred in 61.5% of the cases. However, minor and moderate toxic effects occurred in 12.4% and 3.5% of cases, respectively. Gastrointestinal (24.4%) and pulmonary (8.4%) symptoms predominated. No major outcomes or deaths were reported. Hurricane Sandy significantly increased gasoline exposures. While the majority of exposures were managed at home with minimum clinical toxicity, some patients experienced more severe symptoms. Disaster plans should incorporate public health messaging and regional PCCs for public health promotion and toxicological surveillance.

  19. Zooming in on star formation in the brightest galaxies of the early Universe discovered with the Planck and Herschel satellites

    NASA Astrophysics Data System (ADS)

    Canameras, Raoul

    2016-09-01

    Strongly gravitationally lensed galaxies offer an outstanding opportunity to characterize the most intensely star-forming galaxies in the high-redshift universe. In the most extreme cases, one can probe the mechanisms that underlie the intense star formation on the scales of individual star-forming regions. This requires very fortuitous gravitational lensing configurations offering magnification factors >>10, which are particularly rare toward the high-redshift dusty star-forming galaxies. The Planck's Dusty GEMS (Gravitationally Enhanced subMillimeter Sources) sample contains eleven of the brightest high-redshift galaxies discovered with the Planck submillimeter all-sky survey, with flux densities between 300 and 1000 mJy at 350 microns, factors of a few brighter than the majority of lensed sources previously discovered with other surveys. Six of them are above the 90% completeness limit of the Planck Catalog of Compact Sources (PCCS), suggesting that they are among the brightest high-redshift sources on the sky selected by their active star formation. This thesis comes within the framework of the extensive multi-wavelength follow-up programme designed to determine the overall properties of the high-redshift sources and to probe the lensing configurations. Firstly, to characterize the intervening lensing structures and calculate lensing models, I use optical and near/mid-infrared imaging and spectroscopy. I deduce that our eleven GEMS are aligned with intervening matter overdensities at intermediate redshift, either massive isolated galaxies or galaxy groups and clusters. The foreground sources exhibit evolved stellar populations a few gigayears old, characteristic of early-type galaxies. Moreover, the first detailed models of the light deflection toward the GEMS suggest magnification factors systematically >10, and >20 for some lines-of-sight. 
Secondly, we observe the GEMS in the far-infrared and sub-millimeter domains in order to characterize the background sources. The sub-arcsec resolution IRAM and SMA interferometry shows distorted morphologies which definitively confirm that the eleven sources are strongly lensed. I obtain dust temperatures between 33 and 50 K, and outstanding far-infrared luminosities of up to 2×10^14 solar luminosities before correcting for the gravitational magnification. The relationship between dust temperatures and far-infrared luminosities also confirms that the GEMS are brighter than field galaxies at a given dust temperature. I conclude that dust heating seems to be strongly dominated by the star formation activity with an AGN contamination systematically below 30%. We find secure spectroscopic redshifts between 2.2 and 3.6 for the eleven targets thanks to the detection of at least two CO emission lines per source. Finally, I focus on the three gravitationally lensed sources showing the most remarkable properties including the brightest GEMS, a maximal starburst with star formation surface densities near the Eddington limit.

  20. The U.S. Department of Energy - Office of Environmental Management Cooperation Program with the Russian Federal Atomic Energy Agency (ROSATOM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerdes, K.D.; Holtzscheiter, E.W.

    2006-07-01

    The U.S. Department of Energy's (DOE) Office of Environmental Management (EM) has collaborated with the Russian Federal Atomic Energy Agency - Rosatom (formerly Minatom) for 14 years on waste management challenges of mutual concern. Currently, EM is cooperating with Rosatom to explore issues related to high-level waste and investigate Russian experience and technologies that could support EM site cleanup needs. EM and Rosatom are currently implementing six collaborative projects on high-level waste issues: 1) Advanced Melter Technology Application to the U.S. DOE Defense Waste Processing Facility (DWPF) - Cold Crucible Induction Heated Melter (CCIM); 2) Design Improvements to the Cold Crucible Induction Heated Melter; 3) Long-term Performance of Hanford Low-Activity Glasses in Burial Environments; 4) Low-Activity-Waste (LAW) Glass Sulfur Tolerance; 5) Improved Retention of Key Contaminants of Concern in Low Temperature Immobilized Waste Forms; and 6) Documentation of Mixing and Retrieval Experience at Zheleznogorsk. Preliminary results and the path forward for these projects will be discussed. An overview of two new projects, 7) Entombment Technology Performance and Methodology for the Future, and 8) Radiation Migration Studies at Key Russian Nuclear Disposal Sites, is also provided. The purpose of this paper is to provide an overview of EM's objectives for participating in cooperative activities with the Russian Federal Atomic Energy Agency, present programmatic and technical information on these activities, and outline specific technical collaborations currently underway and planned to support DOE's cleanup and closure mission. (authors)

  1. Analysis of Tank 13H (HTF-13-14-156, 157) Surface and Subsurface Supernatant Samples in Support of Enrichment Control, Corrosion Control and Sodium Aluminosilicate Formation Potential Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oji, L. N.

    2015-02-18

    The 2H Evaporator system includes mainly Tank 43H (feed tank) and Tank 38H (drop tank), with Tank 22H acting as the DWPF recycle receipt tank. Tank 13H is being characterized to ensure that its contents can be transferred to the 2H evaporator. This report provides the results of analyses on Tank 13H surface and subsurface supernatant liquid samples to ensure compliance with the Enrichment Control Program (ECP), the Corrosion Control Program, and Sodium Aluminosilicate Formation Potential in the Evaporator. The U-235 mass divided by the total uranium averaged 0.00799 (0.799% uranium enrichment) for both the surface and subsurface Tank 13H samples. This enrichment is slightly above the enrichment for Tanks 38H and 43H, where the enrichment normally ranges from 0.59 to 0.7 wt%. The U-235 concentration in Tank 13H samples ranged from 2.01E-02 to 2.63E-02 mg/L, while the U-238 concentration in Tank 13H ranged from 2.47E+00 to 3.21E+00 mg/L. Thus, the U-235/total uranium ratio is in line with the prior 2H-evaporator ECP samples. Measured sodium and silicon concentrations averaged, respectively, 2.46 M and 1.42E-04 M (3.98 mg/L) in the Tank 13H subsurface sample. The measured aluminum concentration in Tank 13H subsurface samples averaged 2.01E-01 M.
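The enrichment figure quoted above is simply the U-235 mass divided by the total uranium mass. A quick check against the reported concentration ranges can be sketched as follows; the pairing of low and high values is illustrative only, since the abstract gives ranges rather than paired measurements.

```python
# Sketch of the enrichment calculation: U-235 weight fraction of total
# uranium, treating U-235 + U-238 as the total (minor isotopes neglected).

def enrichment(u235_mg_per_L, u238_mg_per_L):
    """Weight-fraction enrichment from isotope concentrations."""
    return u235_mg_per_L / (u235_mg_per_L + u238_mg_per_L)

low = enrichment(2.01e-2, 2.47e0)    # ~0.008, i.e. ~0.8 wt% U-235
high = enrichment(2.63e-2, 3.21e0)
print(round(low, 4), round(high, 4))
```

Both pairings land near the 0.00799 average reported, consistent with the statement that Tank 13H sits slightly above the 0.59-0.7 wt% range typical of Tanks 38H and 43H.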

  2. Results For The Third Quarter Calendar Year 2016 Tank 50H Salt Solution Sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.

    2016-10-13

    In this memorandum, the chemical and radionuclide contaminant results from the Third Quarter Calendar Year 2016 (CY16) sample of Tank 50H salt solution are presented in tabulated form. The Third Quarter CY16 Tank 50H samples (a 200 mL sample obtained 6” below the surface (HTF-5-16-63) and a 1 L sample obtained 66” from the tank bottom (HTF-50-16-64)) were obtained on July 14, 2016 and received at Savannah River National Laboratory (SRNL) on the same day. Prior to obtaining the samples from Tank 50H, a single pump was run at least 4.4 hours, and the samples were pulled immediately after pump shutdown. The information from this characterization will be used by Defense Waste Processing Facility (DWPF) & Saltstone Facility Engineering for the transfer of aqueous waste from Tank 50H to the Saltstone Production Facility, where the waste will be treated and disposed of in the Saltstone Disposal Facility. This memorandum compares results, where applicable, to Saltstone Waste Acceptance Criteria (WAC) limits and targets. Data pertaining to the regulatory limits for Resource Conservation and Recovery Act (RCRA) metals will be documented at a later time per the Task Technical and Quality Assurance Plan (TTQAP) for the Tank 50H saltstone task. The chemical and radionuclide contaminant results from the characterization of the Third Quarter CY16 sampling of Tank 50H were requested by Savannah River Remediation (SRR) personnel and details of the testing are presented in the SRNL TTQAP.

  3. Experimental and theoretical study of horizontal tube bundle for passive condensation heat transfer

    NASA Astrophysics Data System (ADS)

    Song, Yong Jae

    The research in this thesis supports the design of a horizontal tube bundle condenser for a passive heat removal system in nuclear reactors. In nuclear power plant containment, condensation of steam from a steam/noncondensable gas mixture occurs on the primary side and boiling occurs on the secondary side; thus, heat exchanger modeling is a challenge. For the purpose of this experimental study, a six-tube bundle is used, where the outer diameter, inner diameter, and length of each stainless steel tube measure 38.10 mm (1.5 inches), 31.75 mm (1.25 inches), and 3.96 m (156 inches), respectively. The pitch to diameter ratio was determined based on information gathered from literature surveys, and the dimensions were determined from calculations and experimental data. The objective of the calculations, correlations, and experimental data was to obtain complete condensation within the tube bundle. Experimental conditions for the tests in this thesis work were determined from Design Basis Accident (DBA) conditions. The applications are for an actual Passive Containment Cooling Systems (PCCS) condenser under postulated accident conditions in future light water reactors. In this research, steady state and transient experiments were performed to investigate the effect of noncondensable gas on steam condensation inside and boiling outside a tube bundle heat exchanger. The condenser tube inlet steam mass flow rate varied from 18.0 to 48.0 g/s, the inlet pressure varied from 100 kPa to 400 kPa, and the inlet noncondensable gas mass fraction varied from 1% to 10%. The effect of the noncondensable gas was examined by comparing the tube centerline temperatures for various inlet and system conditions. As a result, it was determined that the noncondensable gas accumulated near the condensate film causing a decrease of mass and energy transfer. 
In addition, the effect of the inlet steam flow rate was investigated by comparing the tube centerline temperatures, the conclusion being that, as the inlet steam mass flow rate increased, the length required for complete condensation also increased. Comparison of tube centerline temperature profiles was also used to examine the effect of inlet pressure on the heat transfer performance. From this assessment, it was determined that as the inlet pressure increased, the length required for complete condensation decreased. The investigation of tube bundle effects was conducted by comparing the condensate flow rates. The experimental results showed that the upper tubes in the bundle had better heat transfer performance than the lower tubes. In regard to modeling of the heat exchanger in this study, for the primary side, an empirical correlation was developed herein to provide Nusselt numbers for condensation heat transfer in horizontal tubes with noncondensable gases. Nusselt numbers were correlated as: Nu = 106.31·Re_m^0.147·W_a^(-0.843). The empirical model for condensation heat transfer coefficients and the secondary-side model were integrated within a Matlab program to provide an analysis tool for horizontal tube bundle condenser heat exchangers. On the secondary side, two-phase heat transfer coefficients were modeled considering both convective boiling and nucleate boiling as: h_TP = 10.03·exp(-2.28·α)·h_CV + 0.076·exp[3.73×10^-6·(Re_f − 1.6×10^5)]·h_NB.
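As a rough illustration, the two correlations quoted above can be evaluated directly. The function and variable names below are assumptions made for this sketch, and the input values are illustrative rather than taken from the thesis.

```python
import math

def nusselt_condensation(re_m, w_a):
    """Empirical Nusselt number for in-tube condensation with a
    noncondensable gas, per the correlation quoted in the abstract:
    Nu = 106.31 * Re_m^0.147 * W_a^(-0.843)."""
    return 106.31 * re_m**0.147 * w_a**-0.843

def h_two_phase(alpha, h_cv, re_f, h_nb):
    """Secondary-side two-phase coefficient combining convective (h_cv)
    and nucleate-boiling (h_nb) contributions:
    h_TP = 10.03*exp(-2.28*alpha)*h_cv
         + 0.076*exp(3.73e-6*(Re_f - 1.6e5))*h_nb."""
    return (10.03 * math.exp(-2.28 * alpha) * h_cv
            + 0.076 * math.exp(3.73e-6 * (re_f - 1.6e5)) * h_nb)

# Illustrative inputs only (not values from the thesis):
nu = nusselt_condensation(re_m=5.0e4, w_a=0.05)
htp = h_two_phase(alpha=0.3, h_cv=2000.0, re_f=1.0e5, h_nb=8000.0)
print(nu, htp)
```

Note the qualitative behavior the correlation encodes: a larger noncondensable mass fraction W_a lowers Nu, consistent with the gas blanket effect described in the abstract.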

  4. Commercial Submersible Mixing Pump For SRS Tank Waste Removal - 15223

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, Mike; Herbert, James E.; Scheele, Patrick W.

    The Savannah River Site Tank Farms have 45 active underground waste tanks used to store and process nuclear waste materials. There are four different tank types, ranging in capacity from 2,839 m³ to 4,921 m³ (750,000 to 1,300,000 gallons). Eighteen of the tanks are of an older style and do not meet all current federal standards for secondary containment. The older style tanks are the initial focus of waste removal efforts for tank closure and are referred to as closure tanks. Of the original 51 underground waste tanks, six of the 24 older style tanks have completed waste removal and are filled with grout. The insoluble waste fraction that resides within most waste tanks at SRS requires vigorous agitation to suspend the solids within the waste liquid in order to transfer this material for eventual processing into glass-filled canisters at the Defense Waste Processing Facility (DWPF). SRS suspends the solid waste by use of recirculating mixing pumps. Older style tanks generally have limited riser openings which will not support larger mixing pumps, since the riser access is typically 58.4 cm (23 inches) in diameter. Agitation for these tanks has been provided by four long-shafted standard slurry pumps (SLPs), each powered by an above-tank 112 kW (150 HP) electric motor. The pump shaft is lubricated and cooled in a pressurized water column that is sealed from the surrounding waste in the tank. Closure of four waste tanks has been accomplished utilizing long-shafted pump technology combined with heel removal using multiple technologies. Newer style waste tanks at SRS have larger riser openings, allowing the processing of waste solids to be accomplished with four large-diameter SLPs equipped with 224 kW (300 HP) motors. These tanks are used to process the waste from closure tanks for DWPF. In addition to the SLPs, a 224 kW (300 HP) submersible mixer pump (SMP) has also been developed and deployed within older style tanks.
The SMPs are product-cooled and product-lubricated canned motor pumps designed to fit within available risers, with significant agitation capabilities to suspend waste solids. Waste removal and closure of two tanks has been accomplished with agitation provided by three SMPs installed within the tanks. In 2012, a team was assembled to investigate alternative solids removal technologies to support waste removal for closing tanks. The goal of the team was to find a more cost-effective approach that could be used to replace the current mixing pump technology. This team was unable to identify an alternative technology outside of mixing pumps to support waste agitation and removal from SRS waste tanks. However, the team did identify a potentially lower-cost mixing pump compared to the baseline SLPs and SMPs. Rather than following the traditional procurement approach based on an engineering specification, the team proposed to seek commercially available submersible mixer pumps (CSMPs) as alternatives to SLPs and SMPs. SLPs and SMPs have a high procurement cost, and the actual cost of moving pumps between tanks has proven to be significantly higher than the original estimates that justified the reuse of SMPs and SLPs. The team recommended procurement of “off-the-shelf” industry pumps which may be available for significant savings, but at an increased risk of failure and reduced operating life in the waste tank. The goal of the CSMP program is to obtain mixing pumps that can mix from bulk waste removal through tank closure and then be abandoned in place as part of tank closure. This paper presents the development, progress, and relative advantages of the CSMP.

  5. Analysis Of 2H-Evaporator Scale Wall [HTF-13-82] And Pot Bottom [HTF-13-77] Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oji, L. N.

    2013-09-11

    Savannah River Remediation (SRR) is planning to remove a buildup of sodium aluminosilicate scale from the 2H-evaporator pot by loading and soaking the pot with heated 1.5 M nitric acid solution. Sampling and analysis of the scale material has been performed so that uranium and plutonium isotopic analyses can be input into a Nuclear Criticality Safety Assessment (NCSA) for scale removal by chemical cleaning. Historically, since the start of operation of the Defense Waste Processing Facility (DWPF), silicon in the DWPF recycle stream has combined with aluminum in the typical tank farm supernate to form sodium aluminosilicate scale mineral deposits in the 2H-evaporator pot and gravity drain line. The 2H-evaporator scale samples analyzed by Savannah River National Laboratory (SRNL) came from two different locations within the evaporator pot: the bottom cone section of the pot [sample HTF-13-77] and the wall of the evaporator [sample HTF-13-82]. X-ray diffraction (XRD) analysis confirmed that both the bottom and wall scale samples consist of nitrated cancrinite (a crystalline sodium aluminosilicate solid) and clarkeite (a uranium oxyhydroxide mineral). On an "as received" basis, the bottom pot section scale sample contained an average of 2.59E+00 ± 1.40E-01 wt % total uranium with a U-235 enrichment of 6.12E-01 ± 1.48E-02 %, while the wall sample contained an average of 4.03E+00 ± 9.79E-01 wt % total uranium with a U-235 enrichment of 6.03E-01 ± 1.66E-02 %. The bottom pot section scale sample analysis results for Pu-238, Pu-239, and Pu-241 are 3.16E-05 ± 5.40E-06 wt %, 3.28E-04 ± 1.45E-05 wt %, and <8.80E-07 wt %, respectively. The evaporator wall scale sample analysis values for Pu-238, Pu-239, and Pu-241 average 3.74E-05 ± 6.01E-06 wt %, 4.38E-04 ± 5.08E-05 wt %, and <1.38E-06 wt %, respectively. The Pu-241 analysis results, as presented, are upper-limit values.
For these two evaporator scale samples obtained at two different locations within the evaporator pot, the major radioactive components (on a mass basis) in the additional radionuclide analyses were Sr-90, Cs-137, Np-237, Pu-239/240, and Th-232. Small quantities of americium and curium were detected in the blanks used in the Am/Cm method for these radionuclides. These trace radionuclide amounts are assumed to come from airborne contamination in the shielded cells drying or digestion oven, which has been replaced. Therefore, the Am/Cm results, as presented, may be higher than the true Am/Cm values for these samples. These results are provided so that SRR can calculate the equivalent uranium-235 concentrations for the NCSA. Results confirm that the uranium contained in the scale remains depleted with respect to natural uranium. SRNL did not calculate an equivalent U-235 enrichment, which takes into account the other fissionable isotopes U-233, Pu-239, and Pu-241. The applicable method for calculation of equivalent U-235 will be determined in the NCSA. With a few exceptions, a comparison of selected radionuclide measurements from this 2013 2H-evaporator scale characterization (pot bottom and wall scale samples) with measurements for the same radionuclides in the 2010 2H-evaporator scale analysis shows that the radionuclide analyses for both years are fairly comparable; the results are of about the same order of magnitude.

  6. High Level Waste System Impacts from Small Column Ion Exchange Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCabe, D. J.; Hamm, L. L.; Aleman, S. E.

    2005-08-18

    The objective of this task is to identify potential waste streams that could be treated with the Small Column Ion Exchange (SCIX) and perform an initial assessment of the impact of doing so on the High-Level Waste (HLW) system. Design of the SCIX system has been performed as a backup technology for decontamination of HLW at the Savannah River Site (SRS). The SCIX consists of three modules which can be placed in risers inside underground HLW storage tanks. The pump and filter module and the ion exchange module are used to filter and decontaminate the aqueous tank wastes for disposition in Saltstone. The ion exchange module contains Crystalline Silicotitanate (CST; in its engineered granular form it is referred to as IONSIV® IE-911), which is selective for removal of cesium ions. After the IE-911 is loaded with Cs-137, it is removed and the column is refilled with a fresh batch. The grinder module is used to size-reduce the cesium-loaded IE-911 to make it compatible with the sludge vitrification system in the Defense Waste Processing Facility (DWPF). If installed at the SRS, this SCIX would need to operate within the current constraints of the larger HLW storage, retrieval, treatment, and disposal system. Although the equipment has been physically designed to comply with system requirements, there is also a need to identify which waste streams could be treated, how it could be implemented in the tank farms, and when this system could be incorporated into the HLW flowsheet and planning. This document summarizes a preliminary examination of the tentative HLW retrieval plans, facility schedules, decontamination factor targets, and vitrified waste form compatibility, with recommendations for a more detailed study later. The examination was based upon diverting four batches of salt solution from the currently planned disposition pathway to treatment in the SCIX.
Because of differences in capabilities between the SRS baseline and SCIX, these four batches were combined into three batches for a total of about 3.2 million gallons of liquid waste. The chemical and radiological composition of these batches was estimated from the SpaceMan Plus™ model using the same data set and assumptions as the baseline plans.

  7. Alpine radar conversion for LAWR

    NASA Astrophysics Data System (ADS)

    Savina, M.; Burlando, P.

    2012-04-01

    The Local Area Weather Radar (LAWR) is an X-band weather radar system, derived from marine (ship-borne) radar technology, developed by the DHI Group to detect precipitation in urban areas. To date, more than thirty units are installed in different settings around the world. A LAWR was also deployed in the Alps, at 3883 m a.s.l. on the Kl. Matterhorn (Valais, Switzerland). This was the highest LAWR in the world, and it led to the development of an Alpine LAWR system that, besides featuring important technological improvements needed to withstand the severe Alpine conditions, required the development of a new Alpine Radar COnversion Model (ARCOM), which is the main focus of this contribution. The LAWR system is equipped with the original FURUNO fan-beam slotted antenna and the original logarithmic receiver, which limits the radar observations to the video signal (L) without providing the reflectivity (Z). The beam is 0.95 deg wide and 20 deg high, and the radar can detect precipitation to a maximum range of 60 km. To account for the limited availability of raw signal and information and for the specific mountain set-up, the conversion model had to be developed differently from the state-of-the-art radar conversion technique used for this class of radars. In particular, ARCOM is based on a model that simulates a spatially dependent factor, hereafter called ACF, which is in turn a function of parameters that take into account climatological conditions, as also used in other conversion methods, but additionally accounts for local radar beam features and for orographic forcings through the effective sampling power (sP), which is modelled by means of the antenna pattern, geometric ground clutter, and their interaction. The result is a conversion factor formulated to account for a range correction that is based on the increase of the sampling volume, partial beam blocking, and local climatological conditions.
The importance of the latter in this study is twofold with respect to the standard conversion technique for this class of radars, because it accounts for the large variability of hydrometeor reflectivity and of vertical hydrometeor positioning (echo-top), which is strongly influenced by the high location of the radar. The ARCOM procedure is in addition embedded in a multistep quality control framework, which also includes calibration against raingauge observations, and can be summarized as follows: 1) correction of both LAWR and raingauge observations for known errors (e.g., magnetron decay and heating-related water loss); 2) evaluation of the local Pearson's correlation coefficient (PCC) as an estimator of the linear correlation between raingauge and LAWR observations (logarithmic receiver); 3) computation of the local ACF in the form of the local linear regression coefficient between raingauge and LAWR observations; 4) calibration of the ARCOM, i.e., definition of the parametrization able to reproduce the spatial variability of ACF as a function of the local sP, with the PCCs used as weights in the calibration procedure. The resulting calibrated ARCOM finally allows LAWR observations to be converted into precipitation rate at any ungauged mountain location. The temporal and spatial transferability of the ARCOM were evaluated via split-sample and leave-one-out cross-validation. The results revealed good spatial transferability and a seasonal bias within 7%, thus opening new opportunities for local, range-distributed measurements of precipitation in mountain regions.
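A minimal numerical sketch of calibration steps 2-4 follows, with synthetic data standing in for the gauge and LAWR series. All names, the synthetic data, and the simple linear ACF(sP) form are assumptions of this sketch, not the paper's actual parametrization.

```python
import numpy as np

# Hypothetical sketch of ARCOM calibration steps 2-4: per-gauge Pearson
# correlations (PCC), local regression slopes (ACF), then a PCC-weighted
# fit of ACF as a function of effective sampling power sP.
rng = np.random.default_rng(0)
n_gauges, n_times = 12, 200
sP = rng.uniform(0.2, 1.0, n_gauges)              # effective sampling power (assumed)
true_acf = 2.0 + 3.0 * sP                          # assumed linear ACF(sP) relation
lawr = rng.gamma(2.0, 1.0, (n_gauges, n_times))    # LAWR video signal L (synthetic)
gauge = true_acf[:, None] * lawr + rng.normal(0, 0.5, (n_gauges, n_times))

# Step 2: local PCC between each gauge series and the co-located LAWR series
pcc = np.array([np.corrcoef(lawr[i], gauge[i])[0, 1] for i in range(n_gauges)])
# Step 3: local ACF as the linear regression slope (through the origin)
acf = np.array([lawr[i] @ gauge[i] / (lawr[i] @ lawr[i]) for i in range(n_gauges)])
# Step 4: PCC-weighted least-squares fit of ACF = a + b*sP
W = np.diag(pcc)
X = np.column_stack([np.ones(n_gauges), sP])
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ acf)
print(coef)  # should recover roughly (2.0, 3.0)
```

The PCC weighting downplays gauges whose LAWR signal correlates poorly with rainfall, so noisy sites contribute less to the fitted ACF(sP) relation.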

  8. Pilot-scale tests of HEME and HEPA dissolution process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qureshi, Z.H.; Strege, D.K.

    A series of pilot-scale demonstration tests for the dissolution of High Efficiency Mist Eliminators (HEMEs) and High Efficiency Particulate Air (HEPA) filters were performed on a 1/5th linear scale. These fiberglass filters are to be used in the Defense Waste Processing Facility (DWPF) to decontaminate the effluents from the off-gases generated during the feed preparation process and vitrification. When removed, these filters will be dissolved in the Decontamination Waste Treatment Tank (DWTT) using 5 wt% NaOH solution. The contaminated fiberglass is converted to an aqueous stream which will be transferred to the waste tanks. The filter metal structure will be rinsed with process water before its disposal as low-level solid waste. The pilot-scale study reported here successfully demonstrated a simple one-step process using 5 wt% NaOH solution. The proposed process requires the installation of a new water spray ring with 30 nozzles. In addition to the reduced waste generated, the total process time is reduced to only 48 hours (a 66% saving in time). The pilot-scale tests clearly demonstrated that the dissolution process for HEMEs has two stages: chemical digestion of the filter and mechanical erosion of the digested filter. The digestion is achieved by a boiling 5 wt% caustic solution, whereas the mechanical breakdown of the digested filter is successfully achieved by spraying process water on the digested filter. An alternate method of breaking down the digested filter by increased air sparging of the solution was found to be marginally successful at best. The pilot-scale tests also demonstrated that the products of dissolution are easily pumpable by a centrifugal pump.

  9. FY13 GLYCOLIC-NITRIC ACID FLOWSHEET DEMONSTRATIONS OF THE DWPF CHEMICAL PROCESS CELL WITH SIMULANTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D.; Zamecnik, J.; Best, D.

    Savannah River Remediation is evaluating changes to its current Defense Waste Processing Facility flowsheet to replace formic acid with glycolic acid in order to improve processing cycle times and to decrease, by approximately a factor of 100, the production of hydrogen, a potentially flammable gas. Higher throughput is needed in the Chemical Processing Cell since the installation of the bubblers into the melter has increased melt rate. Due to the significant maintenance required for the safety-significant gas chromatographs and the potential for production of flammable quantities of hydrogen, eliminating the use of formic acid is highly desirable. Previous testing at the Savannah River National Laboratory has shown that replacing formic acid with glycolic acid allows the reduction and removal of mercury without significant catalytic hydrogen generation. Five back-to-back Sludge Receipt and Adjustment Tank (SRAT) cycles and four back-to-back Slurry Mix Evaporator (SME) cycles were successful in demonstrating the viability of the nitric/glycolic acid flowsheet. The testing was completed in FY13 to determine the impact of process heels (approximately 25% of the material is left behind after transfers). In addition, back-to-back experiments might identify longer-term processing problems. The testing was designed to be prototypic by including sludge simulant, Actinide Removal Product simulant, nitric acid, glycolic acid, and Strip Effluent simulant containing Next Generation Solvent in the SRAT processing, and SRAT product simulant, decontamination frit slurry, and process frit slurry in the SME processing. A heel was produced in the first cycle, and each subsequent cycle utilized the remaining heel from the previous cycle. Lower SRAT purges were utilized due to the low hydrogen generation. Design-basis addition rates and boilup rates were used, so the processing time was shorter than with current processing rates.

  10. Report of the oversight assessment of the operational readiness review of the Savannah River Site Defense Waste Processing Facility Cold Chemical Runs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B.

    1993-03-01

    This report presents the results of an oversight assessment (OA) conducted by the US Department of Energy's (DOE) Office of Environment, Safety and Health (EH) of the operational readiness review (ORR) activities for the Cold Chemical Runs (CCRs) at the Defense Waste Processing Facility (DWPF) located at the Savannah River Site (SRS). The EH OA of this facility took place concurrently with an ORR performed by the DOE Office of Environmental Restoration and Waste Management (EM). The EM ORR was conducted from September 28, 1992, through October 9, 1992, although portions of the EM ORR were extended beyond this period. The EH OA evaluated the comprehensiveness and effectiveness of the EM ORR. The EH OA was designed to ascertain whether the EM ORR was thorough and demonstrated sufficient inquisitiveness to verify that the implementation of programs and procedures is adequate to assure the protection of worker safety and health. The EH OA was carried out in accordance with the protocol and procedures of the "EH Program for Oversight Assessment of Operational Readiness Evaluations for Startups and Restarts," dated September 15, 1992. Based on its OA and verification of the resolution of EH OA findings, the EH OA Team believes that the startup of the CCRs may be safely begun, pending satisfactory completion and verification of the prestart findings identified by the EM ORR. The EH OA was based primarily on an evaluation of the comprehensiveness and effectiveness of the EM ORR and addressed the following areas: industrial safety, industrial hygiene, and respiratory protection; fire protection; and chemical safety. The EH OA conducted independent "vertical-slice" reviews to confirm EM ORR results in the areas of confined-space entry, respiratory protection, fire protection, and chemical safety.

  11. Evaluation of cosmetic product exposures reported to the Milan Poison Control Centre, Italy from 2005 to 2010.

    PubMed

    Ruggiero, Simona; Moro, Paola Angela; Davanzo, Franca; Capuano, Annalisa; Rossi, Francesco; Sautebin, Lidia

    2012-12-01

    To the average consumer, "cosmetics" are not considered to cause damage to human health under normal conditions of use. Thus, cosmetic "safety" does not usually prompt any particular attention to the possibility that cosmetics may result in a toxic exposure, especially for children. Poison Control Centres (PCCs) provide specialized and rapid information for consumers and health professionals to ensure management of events related to exposures to different agents, including cosmetics. Poison Control Centres also represent a unique source of information for investigating the frequency and type of exposures to cosmetics and the related risks. An analysis of cases concerning human exposures to cosmetics collected from 2005 to 2010 by the PCC at the Ospedale Niguarda Ca' Granda (Milan, Italy) was performed. During this period, 11 322 human exposure cases related to cosmetics were collected, accounting for 4.5% of the total human clinical cases. Almost all the requests for assistance came from consumers (53%) and hospitals (40%). The most frequently reported site of exposure was the consumer's own residence (94%). The exposures mainly involved children younger than 4 years (77%). No difference in gender distribution was observed (female 49%, male 51%). Almost all of the exposures were unintentional (94%). Intentional exposures were mainly related to suicide attempts and accounted for 6% of cases, involving persons aged more than 12 years. Personal hygiene products (30%), perfumes and hair care products (excluding hair dyes) (both 13%) were the most frequently involved categories. Symptoms were present in only 26% of the exposures and were mostly gastrointestinal (46%). Most of the cases were managed at home (43%), whereas hospital intervention was required in 38%.
Since the exposure frequency seems more likely to reflect product availability and accessibility to ingestors, our results call for closer attention to this type of hazard, especially for children younger than 4 years of age.

  12. Serum Cholinesterase Is Inversely Associated with Body Weight Change in Men Undergoing Routine Health Screening.

    PubMed

    Oda, Eiji

    2015-01-01

    The purpose of this study is to investigate the relationships between serum cholinesterase and body weight change, as well as incident obesity defined as a body mass index (BMI) of 25 kg/m² or greater. A retrospective 5-year follow-up study was conducted. The crude incidence and hazard ratios (HRs) of obesity, adjusted for the BMI and other confounders, were calculated for cholinesterase quartiles in 1,412 men and 921 women. Partial correlation coefficients (PCCs) were calculated between cholinesterase and changes in the BMI during the 5-year follow-up period, adjusted for age and other confounders, and the changes in the BMI were compared among cholinesterase quartiles in 1,223 men and 681 women. During the 5-year follow-up period, 149 men (10.6%) and 65 women (7.1%) developed obesity. The adjusted HRs of obesity decreased, although the crude incidence of obesity increased, along the quartiles of cholinesterase in men. The adjusted HRs of obesity for the first (lowest), second, and third quartiles of cholinesterase were 2.02 (p=0.006), 1.45 (p=0.122), and 1.28 (p=0.265), respectively, compared with the highest quartile in men. The PCC between the baseline level of cholinesterase and the change in the BMI was -0.16 (p<0.001) in men. The mean changes in BMI over 5 years were 0.31 kg/m², 0.17 kg/m², 0.01 kg/m², and -0.04 kg/m² in the first, second, third, and fourth quartiles of cholinesterase in men, respectively (p=0.005). Neither incident obesity nor weight gain was significantly associated with cholinesterase in women. The serum cholinesterase level was inversely associated with body weight change, as well as with incident obesity, after adjusting for the BMI in men.
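For readers unfamiliar with the statistic, a partial correlation coefficient (PCC) as used here correlates two variables after regressing out a confounder. The sketch below uses synthetic data and a single confounder (age); all names and values are assumptions for illustration, not data from the study.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x on z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y on z
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic cohort: cholinesterase rises weakly with age; BMI change
# declines with cholinesterase (an assumed effect, for illustration only).
rng = np.random.default_rng(1)
age = rng.uniform(30, 70, 500)
chol = 0.02 * age + rng.normal(0, 1, 500)                  # arbitrary units
dbmi = -0.15 * chol + 0.01 * age + rng.normal(0, 1, 500)   # 5-year BMI change
print(partial_corr(chol, dbmi, age))  # negative by construction
```

In the study itself, additional confounders beyond age were adjusted for; those would simply become extra columns of the regressor matrix Z.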

  13. Human Plant Exposures Reported to a Regional (Southwestern) Poison Control Center Over 8 Years.

    PubMed

    Enfield, Ben; Brooks, Daniel E; Welch, Sharyn; Roland, Maureen; Klemens, Jane; Greenlief, Kim; Olson, Rachel; Gerkin, Richard D

    2018-03-01

    There are few published data about human plant exposures reported to US poison control centers (PCCs). A retrospective chart review of all plant exposures reported to a single regional PCC between January 1, 2003 and December 31, 2010 was performed to better characterize plant exposure cases. Specific generic plant codes were used to identify cases. Recorded variables included patient demographics, plant involved, exposure variables, symptoms, management site, treatments, and outcome. Univariate and multivariate regression were used to identify outcome predictors. A total of 6492 charts met inclusion criteria. The average age was 16.6 years (2 months-94 years); 52.4% were male. The most common exposure reason was unintentional (98%), and the majority (92.4%) occurred at the patient's home. Ingestions (58.3%) and dermal exposures (34.3%) accounted for most cases. Cactus (27.5%), oleander (12.5%), Lantana (5.7%), and Bougainvillea (3.8%) were most commonly involved. Symptoms developed in 47.1% of patients, and were more likely to occur following Datura (66.7%), and Morning Glory or Milkweed (25% each) exposures. Almost 94% of patients were managed onsite (at home), and only 5.2% of cases involved evaluation in a health care facility (HCF). Only 37 (0.6%) patients required hospital admission, and 2.9% of cases resulted in more than minimal effects. Exposures resulting in more than minimal clinical effects were predicted by several variables: abnormal vital signs (OR = 35.62), abnormal labs (OR = 14.87), and management at a HCF (OR = 7.37). Hospital admission was more likely for patients already at a HCF (OR = 54.01), with abnormal vital signs (OR = 23.28), or with intentional exposures (OR = 14.7). Plant exposures reported to our poison control center were typically unintentional ingestions occurring at home. Most patients were managed onsite and few developed significant symptoms.

  14. RESULTS OF THE FY09 ENHANCED DOE HIGH LEVEL WASTE MELTER THROUGHPUT STUDIES AT SRNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, F.; Edwards, T.

    2010-06-23

    High-level waste (HLW) throughput (i.e., the amount of waste processed per unit time) is a function of two critical parameters: waste loading (WL) and melt rate. For the Waste Treatment and Immobilization Plant (WTP) at the Hanford Site and the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS), increasing HLW throughput would significantly reduce the overall mission life-cycle costs for the Department of Energy (DOE). The objective of this task is to develop data, assess property models, and refine or develop the necessary models to support increased WL of HLW at SRS. It is a continuation of the studies initiated in FY07, but is under the specific guidance of a Task Change Request (TCR)/Work Authorization received from DOE headquarters (Project Number RV071301). Using the data generated in FY07, FY08, and historical data, two test matrices (60 glasses total) were developed at the Savannah River National Laboratory (SRNL) in order to generate data in broader compositional regions. These glasses were fabricated and characterized using chemical composition analysis, X-ray Diffraction (XRD), viscosity, liquidus temperature (T_L) measurement, and durability as defined by the Product Consistency Test (PCT). The results of this study are summarized below: (1) In general, the current durability model predicts the durabilities of higher waste loading glasses quite well. A few of the glasses exhibited poorer durability than predicted. (2) Some of the glasses exhibited anomalous behavior with respect to durability (normalized leachate for boron, NL [B]). The quenched samples of FY09EM21-02, -07, and -21 contained no nepheline or other wasteform-affecting crystals, but have unacceptable NL [B] values (> 10 g/L). The ccc sample of FY09EM21-07 has an NL [B] value that is more than one half the value of the quenched sample. These glasses also have lower concentrations of Al2O3 and SiO2.
(3) Five of the ccc samples (EM-13, -14, -15, -29, and -30) completely crystallized with both magnetite and nepheline, and still had extremely low NL [B] values. These particular glasses have more CaO present than any of the other glasses in the matrix. It appears that while all of these glasses contain nepheline, the NL [B] values decrease as the CaO concentration increases from 2.3 wt% to 4.3 wt%. A different form of nepheline may be created at higher concentrations of CaO that does not significantly reduce glass durability. (4) The T_L model appears to be under-predicting the measured values of higher waste loading glasses. Trends in T_L with composition are not evident in the data from these studies. (5) A small number of glasses in the FY09 matrix have measured viscosities that are much lower than the viscosity range over which the current model was developed. The decrease in viscosity is due to a higher concentration of non-bridging oxygens (NBO); a high iron concentration is the cause of the increase in NBO. Durability, viscosity, and T_L data collected during FY07 and FY09 that specifically targeted higher waste loading glasses were compiled and assessed. It appears that additional data may be required to expand the coverage of the T_L and viscosity models for higher waste loading glasses. In general, the compositional regions of the higher waste loading glasses are very different from those used to develop these models. On the other hand, the current durability model seems to be applicable to the new data. At this time, there is no evidence to modify this model; however, additional experimental studies should be conducted to determine the cause of the anomalous durability data.
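For orientation, the PCT normalized boron release quoted above (NL [B], in g/L) is commonly computed as the leachate boron concentration divided by the boron mass fraction in the glass. The values below are illustrative assumptions, not measurements from this study.

```python
# Hedged sketch: PCT normalized release for boron, NL[B] = c_B / f_B,
# where c_B is the boron concentration in the leachate (g/L) and f_B is
# the mass fraction of boron in the glass. Values are illustrative only.
c_b = 0.090   # assumed boron concentration in the leachate, g/L
f_b = 0.025   # assumed boron mass fraction in glass (~8 wt% B2O3)
nl_b = c_b / f_b
print(nl_b)   # 3.6 g/L, below the 10 g/L acceptability mark noted above
```

Normalizing by the glass composition lets leach results from glasses with different boron contents be compared on a common basis.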

  16. Riveting Two-Dimensional Materials: Exploring Strain Physics in Atomically Thin Crystals with Microelectromechanical Systems

    NASA Astrophysics Data System (ADS)

    Christopher, Jason W.

    This thesis includes four studies that explore and compare the impacts of four contributing factors resulting in regional climate change on the North Slope of Alaska based on a numerical simulation approach. These four contributing factors include global warming due to changes in radiative forcing, sea ice decline, earlier Arctic lake ice-off, and atmospheric circulation change over the Arctic. A set of dynamically downscaled regional climate products has been developed for the North Slope of Alaska over the period from 1950 up to 2100. A fine grid spacing (10 km) is employed to develop products that resolve detailed mesoscale features in the temperature and precipitation fields on the North Slope of Alaska. Processes resolved include the effects of topography on regional climate and extreme precipitation events. The Representative Concentration Pathway (RCP) 4.5 scenario projects lower rates of precipitation and temperature increase than RCP8.5 compared to the historical product. The increases in precipitation and temperature trends in the RCP8.5 projection are higher in fall and winter compared to the historical product and the RCP4.5 projection. The impacts of sea ice decline are addressed by conducting sensitivity experiments employing both an atmospheric model and a permafrost model. The sea ice decline impacts are most pronounced in late fall and early winter. The near-surface atmospheric warming in late spring and early summer due to sea ice decline is projected to be stronger in the 21st century. Such a warming effect also reduces the total cloud cover on the North Slope of Alaska in summer by destabilizing the atmospheric boundary layer. The sea ice decline warms the atmosphere and the permafrost on the North Slope of Alaska less strongly than the global warming does, while it primarily results in higher seasonal variability of the positive temperature trend, which is larger in late fall and early winter than in other seasons. 
The ongoing and projected earlier melt of the Arctic lake ice also contributes to regional climate change on the northern coast of Alaska, though only on a local and seasonal scale. Heat and moisture released from the opened lake surface primarily propagate downwind of the lakes. The impacts of the earlier lake ice-off on both the atmosphere and the permafrost underneath are comparable to those of the sea ice decline in late spring and early summer, while they are roughly six times weaker than those of sea ice decline in late fall and early winter. The permafrost warming resulting from the earlier lake ice-off is speculated to be stronger with more snowfall expected in the 21st century, while the overall atmospheric warming of global origin is speculated to continue growing. Two major Arctic summer-time climatic variability patterns, the Arctic Oscillation (AO) and the Arctic Dipole (AD), are evaluated in 12 global climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5). A combined metric ranking approach ranks the models by the Pattern Correlation Coefficients (PCCs) and explained variances calculated from the model-produced summer AO and AD over the historical period. Higher-ranked models more consistently project a positive trend of the summer AO index and a negative trend of the summer AD index in their RCP8.5 projections. Such long-term trends of large-scale climate patterns will inhibit the increase in air temperature while favoring the increase in precipitation on the North Slope of Alaska. In summary, this thesis bridges the gaps by quantifying the relative importance of multiple contributing factors to the regional climate change on the North Slope of Alaska. Global warming is the leading contributing factor, while other factors primarily contribute to the spatial and temporal asymmetries of the regional climate change. 
The results of this thesis lead to a better understanding of the physical mechanisms behind the climatic impacts on the hydrological and ecological changes of the North Slope of Alaska, which have become more severe and more frequent. These results, together with the developed downscaling data products, serve as climatic background information for such studies.
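The combined metric ranking above scores models by pattern correlation between simulated and observed summer AO/AD patterns. As a rough illustration only (not the thesis's actual code), a centered pattern correlation coefficient (PCC) over flattened grid-point values can be sketched as:

```python
def pattern_correlation(a, b):
    """Centered pattern correlation coefficient (PCC) between two
    equally sized fields, given as flat lists of grid-point values."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = (sum((x - mean_a) ** 2 for x in a)
           * sum((y - mean_b) ** 2 for y in b)) ** 0.5
    return num / den

# A field perfectly proportional to another has PCC = 1.0
print(pattern_correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

In practice such fields would be 2-D latitude-longitude grids (often area-weighted) flattened before the computation; the unweighted version above is the simplest form.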

  17. A retrospective review of 911 calls to a regional poison control center.

    PubMed

    Bosak, Adam; Brooks, Daniel E; Welch, Sharyn; Padilla-Jones, Angie; Gerkin, Richard D

    2015-01-01

    There are few data on the extent to which national Emergency Medical Services (EMS; 911) utilize poison control centers (PCCs). A review of data from our PCC was done to better understand this relationship and to identify potential improvements in patient care and health care savings. We performed a retrospective chart review at a single PCC to identify calls originating from 911 sources over a 4-year study period (1/1/08-12/31/11). Recorded variables included the origin of the call to the PCC, intent of exposure, symptoms, management site, hospital admission, and death. Odds ratios (OR) were developed using multiple logistic regression to identify risk factors for EMS dispatch, management site, and the need for hospital admission. A total of 7556 charts were identified; 4382 (58%) met inclusion criteria. Most calls (63.3%) involved accidental exposures and 31% were self-harm or misuse. A total of 2517 (57.4%) patients had symptoms and 2044 (50.8%) were transported to an Emergency Department (ED). Over 38% of calls (n = 1696) were handled primarily by the PCC and did not result in EMS dispatch; only 6.5% of cases (n = 287) with initial PCC involvement resulted in crew dispatch. There were 955 (21.8%) cases that resulted in admission, and five deaths. The OR for being transported to an ED was 45.4 (95% confidence interval [CI]: 30.2-68.4) when the crew was dispatched by the PCC. Hospital admission was predicted by intent for self-harm (OR 5.0; 95% CI: 4.1-6.2) and the presence of symptoms (OR 2.43; 95% CI: 1.9-3.0). The ORs for several other predictive variables are also reported. When 911 providers contact a PCC about poisoning-related emergencies, a history of intentional exposure and the presence of symptoms each predicted EMS dispatch by the PCC, patient transport to an ED, and hospital admission. Early involvement of a PCC may prevent the need for EMS activation or patient transfer to a health care facility.
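The odds ratios above come from multiple logistic regression, but the underlying 2x2 odds ratio with a Wald confidence interval is easy to illustrate. The counts below are hypothetical, for illustration only, and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Sample odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not taken from the abstract
or_, lo, hi = odds_ratio_ci(50, 10, 20, 40)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")  # OR = 10.00, 95% CI: 4.21-23.76
```

A multivariable model adjusts each OR for the other covariates, so published values like OR 45.4 cannot be reproduced from a single 2x2 table; this sketch only shows the unadjusted calculation.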

  18. State of emergency medicine in Spain

    PubMed Central

    2010-01-01

    Spain has universal public health care coverage. Emergency care is offered to patients in different modalities and at different levels according to the characteristics of the medical complaint: at primary care centers (PCC), in an extrahospital setting by emergency medical services (EMS), and at hospital emergency departments (ED). We have more than 3,000 PCCs, which are run by family doctors (general practitioners) and pediatricians. On average, there is 1 PCC for every 15,000 to 20,000 inhabitants, and every family doctor is in charge of 1,500 to 2,000 citizens, although less populated zones tend to have lower ratios. Doctors spend part of their duty time providing emergency care to their own patients. While not fully devoted to emergency medicine (EM) practice, they do manage minor emergencies. However, Spanish EMSs contribute hugely to guaranteeing population coverage in all situations. These EMSs are run by EM technicians (EMT), nurses and doctors, who usually work exclusively in the emergency arena. EDs dealt with more than 25 million consultations in 2008, which implies, on average, that one out of two Spaniards visited an ED during this time. They are usually equipped with a wide range of diagnostic tools, in most cases including ultrasonography and computerized tomography scanners. The academic and training background of doctors working in the ED varies: nearly half lack any structured specialty residency training, but many have done specific master or postgraduate studies within the EM field. The demand for emergency care has grown at an annual rate of over 4% during the last decade. This percentage, which was greater than the 2% population increase during the same period, has outpaced the growth in ED capacity. Therefore, Spanish EDs become overcrowded whenever even minimal stress is placed on the system. Despite the high EM caseload and the potential severity of the conditions, training in EM is still unregulated in Spain. 
However, in April 2009 the Spanish Minister of Health announced the imminent approval of an EM specialty, allowing the first EM resident to officially start in 2011. Spanish emergency physicians look forward to the final approval, which will complete the modernization of emergency health care provision in Spain. PMID:21373287

  19. The status of ABWR-II development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, Hiroyuki; Kitamura, Hideya; Moriya, Kumiaki

    This paper reports on the current development status of the ABWR-II project, a next generation reactor design based on the ABWR. In the early 90's, a program to develop the next generation reactor for the 21st century was launched, at a time when the first ABWR was still under construction. At the initial stage of this project, development of a 'user friendly' plant design was the primary objective. Thus, the main focus was placed on selecting a design with features promoting ease of operation and maintenance. Meanwhile, the circumstances surrounding the Japanese nuclear power industry changed. The delay of FBR development and the deregulation of the power generation market have significantly boosted the role of light water reactors, and accelerated the need to improve LWR economics. For these reasons, economic competitiveness became an overriding objective in the development of the ABWR-II, with no less importance placed on achieving the highest standards of safety. Several new features were adopted to enhance economic performance: 1700 MW electric output, large fuel bundles, a simplified MSIV, and a large-capacity SRV. An output of 1700 MWe was selected for compatibility with the Japanese power grid, and with consideration of current reactor pressure vessel manufacturing capability. Large fuel bundles will contribute to a shortened refueling outage period and a reduction of CRDs. For enhanced safety, the reference design implements a modified ECCS with four-subdivision RHR, a diversified power source incorporating gas turbine generators (GTG), an advanced RCIC (ARCIC) and passive heat removal systems consisting of a passive containment cooling system (PCCS) and a passive reactor cooling system (PRCS). The modified ECCS configuration also enables on-line maintenance. 
While current reactors rely on complex accident management (AM) procedures, implemented by operators in the event of a serious accident, the ABWR-II incorporates severe accident countermeasures at the design stage, to eliminate the need for operator-initiated AM procedures. The ABWR-II represents one of the most promising and reliable options for the future replacement of older units, without incurring excessive R&D costs. (authors)

  20. Final Report - Crystal Settling, Redox, and High Temperature Properties of ORP HLW and LAW Glasses, VSL-09R1510-1, Rev. 0, dated 6/18/09

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruger, Albert A.; Wang, C.; Gan, H.

    2013-11-13

    The radioactive tank waste treatment programs at the U.S. Department of Energy (DOE) have featured joule-heated ceramic melter technology for the vitrification of high level waste (HLW). The Hanford Tank Waste Treatment and Immobilization Plant (WTP) employs this same basic technology not only for the vitrification of HLW streams but also for the vitrification of Low Activity Waste (LAW) streams. Because of the much greater throughput rates required of the WTP as compared to the vitrification facilities at the West Valley Demonstration Project (WVDP) or the Defense Waste Processing Facility (DWPF), the WTP employs advanced joule-heated melters with forced mixing of the glass pool (bubblers) to improve heat and mass transport and increase melting rates. However, for both HLW and LAW treatment, the ability to increase waste loadings offers the potential to significantly reduce the amount of glass that must be produced and disposed of and, therefore, the overall project costs. This report presents the results from a study to investigate several glass property issues related to WTP HLW and LAW vitrification: crystal formation and settling in selected HLW glasses; redox behavior of vanadium and chromium in selected LAW glasses; and key high temperature thermal properties of representative HLW and LAW glasses. The work was conducted according to Test Plans that were prepared for the HLW and LAW scope, respectively. One part of this work thus addresses some of the possible detrimental effects due to considerably higher crystal content in waste glass melts and, in particular, the impact of high crystal contents on the flow property of the glass melt and the settling rate of representative crystalline phases in an environment similar to that of an idling glass melter. Characterization of vanadium redox shifts in representative WTP LAW glasses is the second focal point of this work. 
The third part of this work focused on key high temperature thermal properties of representative WTP HLW and LAW glasses over a wide range of temperatures, from the melter operating temperature to the glass transition.

  1. WTP Waste Feed Qualification: Glass Fabrication Unit Operation Testing Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M. E.; Newell, J. D.; Johnson, F. C.

    The waste feed qualification program is being developed to protect the Hanford Tank Waste Treatment and Immobilization Plant (WTP) design, safety basis, and technical basis by assuring waste acceptance requirements are met for each staged waste feed campaign prior to transfer from the Tank Operations Contractor to the feed receipt vessels inside the Pretreatment Facility. The Waste Feed Qualification Program Plan describes the three components of waste feed qualification: (1) demonstrate compliance with the waste acceptance criteria, (2) determine waste processability, and (3) test unit operations at laboratory scale. The glass fabrication unit operation is the final step in the process demonstration portion of the waste feed qualification process. This unit operation generally consists of combining each of the waste feed streams (high-level waste (HLW) and low-activity waste (LAW)) with Glass Forming Chemicals (GFCs), fabricating glass coupons, performing chemical composition analysis before and after glass fabrication, measuring hydrogen generation rate either before or after glass former addition, measuring rheological properties before and after glass former addition, and visually observing the resulting glass coupons. Critical aspects of this unit operation are mixing and sampling of the waste and melter feeds to ensure representative samples are obtained, as well as ensuring the fabrication process for the glass coupon is adequate. Testing was performed using a range of simulants (LAW and HLW simulants), and these simulants were mixed with high and low bounding amounts of GFCs to evaluate the mixing, sampling, and glass preparation steps in shielded cells using laboratory techniques. 
The tests were performed with off-the-shelf equipment at the Savannah River National Laboratory (SRNL) that is similar to equipment used in the SRNL work during qualification of waste feed for the Defense Waste Processing Facility (DWPF) and other waste treatment facilities at the Savannah River Site. It is not expected that the exact equipment used during this testing will be used during the waste feed qualification testing for WTP, but functionally similar equipment will be used such that the techniques demonstrated would be applicable. For example, the mixing apparatus could use any suitable mixer capable of being remoted and achieving mixing speeds similar to those tested.

  2. Platelet-rich plasma and platelet gel preparation using Plateltex.

    PubMed

    Mazzucco, L; Balbo, V; Cattana, E; Borzini, P

    2008-04-01

    The platelet gel is made by embedding concentrated platelets within a semisolid (gel) network of polymerized fibrin. It is believed that this blood component will be used more and more in the treatment of several clinical conditions and as an adjunctive material in tissue engineering. Several systems are available to produce platelet-rich plasma (PRP) for topical therapy. Recently, a new system became commercially available, Plateltex. Here we report the technical performance of this system in comparison with that of other commercially available systems: PRGF, PRP-Landesber, Curasan, PCCS, Harvest, Vivostat, Regen and Fibrinet. Both the PRP and the gel were prepared according to the manufacturer's directions. Blood samples from 20 donors were used. The yield, the efficiency, and the amounts of platelet-derived growth factor AB (PDGF-AB), transforming growth factor beta, vascular endothelial growth factor and fibroblast growth factor were measured in the resulting PRP. The features of batroxobin-induced gelation were evaluated. The yield, the collection efficiency and the growth factor content of Plateltex were comparable to those of most of the other available systems. The gelation time was not dependent on the fibrinogen concentration; however, it was strongly influenced by the contact surface area of the container where the clotting reaction took place (P < 0.0001). Plateltex provided platelet recovery, collection efficiency and PDGF-AB availability close to those provided by other systems marketed with the same intended use. Batroxobin, the enzyme provided to induce gelation, acts differently from thrombin, which is used by most other systems. Platelets treated with thrombin become activated; they release their growth factors quickly. Furthermore, thrombin-platelet interaction is a physiological mechanism that hastens the clot-retraction rate. 
On the contrary, platelets treated with batroxobin do not become activated; they are passively entrapped within the fibrin network, and their growth factor release occurs slowly. In these conditions, the clot retraction takes longer to occur. According to these differences between thrombin and batroxobin, it is expected that batroxobin-induced PRP activation will tailor slow release of the platelet content, thus, providing longer in loco availability of trophic factors. In selected clinical conditions, this durable anabolic factor availability might be preferable to quick thrombin-induced growth factor release.

  3. Effectiveness of a psycho-educational group program for major depression in primary care: a randomized controlled trial

    PubMed Central

    2012-01-01

    Background Studies show the effectiveness of group psychoeducation in reducing symptoms in people with depression. However, few controlled studies of psychoeducation that includes aspects of personal care and a healthy lifestyle (diet, physical exercise, sleep) together with cognitive-behavioral techniques have proven effective. The objective of this study is to assess the effectiveness of a psychoeducational program, which includes aspects of personal care and healthy lifestyle, in patients with mild/moderate depression symptoms in Primary Care (PC). Methods In a randomized, controlled trial, 246 participants over 20 years old with ICD-10 major depression were recruited through nurses/general practitioners at 12 urban Primary Care Centers (PCCs) in Barcelona. The intervention group (IG) (n=119) received a group psychoeducational program (12 weekly, 1.5 h sessions led by two nurses) and the control group (CG) (n=112) received usual care. Patients were assessed at baseline and at 3, 6 and 9 months. The main outcome measures were the BDI, the EQ-5D and remission based upon the BDI. Results 231 randomized patients were included, of whom 85 had mild depression and 146 moderate depression. The analyses showed significant differences between groups in relation to remission of symptoms, especially in the mild depression group, with a high rate of 57% (p=0.009) at post-treatment and 65% (p=0.006) at 9-month follow-up, and showed significant differences on the BDI only at post-treatment (p=0.016; effect size Cohen's d=.51) and at 6- and 9-month follow-up (p=0.048; d=.44). In the overall and moderate samples, the analyses only showed significant differences between groups on the BDI at post-treatment, p=0.02 (d=.29) and p=0.010 (d=.47), respectively. The psychoeducation group improved significantly on the EQ-5D in both the short and long term. 
Conclusions This psychoeducational intervention is an effective short- and long-term treatment for patients with mild depression symptoms. It results in a high remission rate, is recommended in PC and can be carried out by nurses with previous training. In moderately depressed patients, group psychoeducation is effective in the short term. Trial registration ClinicalTrials.gov identifier NCT00841737 PMID:23249399
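The Cohen's d effect sizes reported above compare group means against a pooled standard deviation. A minimal sketch with hypothetical inputs (not the trial's data):

```python
def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups, using the pooled
    standard deviation. Inputs here are illustrative only."""
    pooled = (((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
              / (n1 + n2 - 2)) ** 0.5
    return (mean1 - mean2) / pooled

# Hypothetical BDI-style means/SDs for two groups of 50 patients each
print(round(cohens_d(10, 12, 4, 4, 50, 50), 3))  # -0.5
```

By the usual rule of thumb, |d| near 0.2 is a small effect, 0.5 medium, and 0.8 large, which puts the trial's reported d=.29 to d=.51 in the small-to-medium range.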

  4. Variability of differences between two approaches for determining ground-water discharge and pumpage, including effects of time trends, Lower Arkansas River Basin, southeastern Colorado, 1998-2002

    USGS Publications Warehouse

    Troutman, Brent M.; Edelmann, Patrick; Dash, Russell G.

    2005-01-01

    In the mid-1990s, the Colorado Division of Water Resources (CDWR) adopted rules governing measurement of tributary ground-water pumpage for the Arkansas River Basin. The rules allowed ground-water pumpage to be determined using one of two approaches: power conversion coefficient (PCC) or totalizing flowmeters (TFM). In addition, the rules allowed a PCC to be applied to the electrical power usage up to 4 years in the future to estimate ground-water pumpage. As a result of concerns about potential errors in applying the PCC approach forward in time, a study was done by the U.S. Geological Survey, in cooperation with CDWR and the Colorado Water Conservation Board, to evaluate the variability in differences in pumpage between the two approaches, including the effects of time trends. This report compares measured ground-water pumpage using TFMs to computed ground-water pumpage using PCCs by developing statistical models of relations between explanatory variables, such as site, time, and pumping water level, and dependent variables, which are based on discharge, PCC, and pumpage. When differences in pumpage (diffP) were computed using PCC measurements and power consumption for the same year (1998-2002), the median diffP, depending on the year, ranged from +0.1 to -2.9 percent; the median diffP for the entire period was -1.5 percent. However, when diffP was computed using PCC measurements applied to the next year's power consumption, the median diffP was -0.3 percent; and when PCC measurements were applied 2, 3, or 4 years into the future, median diffPs were +1.8 percent for a 2-year forward lag and +5.3 percent for a 4-year forward lag, indicating that pumpage computed with the PCC approach, as generally applied under the ground-water pumpage measurement rules by CDWR, tended to overestimate pumpage as compared to pumpage using TFMs when the PCC measurement was applied to future years of measured power consumption. 
Analyses were done to better understand the causes of the time trend; an estimate of the overall trend with time (uncorrected for pumping water-level changes) yielded a trend of about 2.2 percent per lag year for diffP. A separate analysis that incorporated a surface-water diversion term in the statistical model rendered the time-trend term insignificant, indicating that the time trend in the models served as a surrogate for other variables, some of which reflect underlying hydrologic conditions. A more precise explanation of the potential causes of the time trend was not obtained with the available data. However, the model results with the surface-water diversion term indicate that much of the trend of 2.2 percent per lag year in diffP resulted from applying a PCC to estimate pumpage under hydrologic conditions different from those under which the PCC was measured. Although there is no evidence to conclude that the upward time trend determined in the data for this 5-year period would hold in the future, historical static ground-water levels in the study area generally have exhibited small variations over multidecadal time scales. Therefore, the approximately 2 percent per lag year trend determined in these data is expected to be a reasonable guideline for estimating potential errors in the PCC approach resulting from temporally varying hydrologic conditions between time of PCC measurement and pumpage estimation. Comparisons also were made between total, or aggregated, pumpage for a network of wells as computed by the PCC approach and the TFM approach. For 100 wells and a lag of 4 years between PCC measurement and pumpage estimation, there was a 95-percent probability that the difference between total network pumpage measured by the PCC approach and that measured using a TFM would be between 5.2 and 14.4 percent. 
These estimates were based on a bias of 2.2 percent per lag year estimated for the period 1998-2002 during which hydrologic conditions were known to have changed. Using the same assumptions, the estimated d
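The PCC approach estimates pumpage as the coefficient times metered electrical energy use, and the report's roughly 2 percent-per-lag-year trend serves as a guideline for the bias of forward-applied PCCs. A sketch under those assumptions (units and values hypothetical; the observed 4-year median diffP of +5.3 percent shows the linear guideline is only approximate):

```python
def pumpage_from_pcc(pcc, power_kwh):
    """Estimated pumpage = power conversion coefficient (volume
    pumped per kWh) times metered electrical energy use."""
    return pcc * power_kwh

def lag_bias_pct(lag_years, trend_pct_per_year=2.2):
    """Rough systematic bias (percent) when a PCC measured
    `lag_years` earlier is applied to current power consumption,
    using the report's ~2.2 percent-per-lag-year guideline."""
    return trend_pct_per_year * lag_years

# Hypothetical PCC of 0.25 (volume units per kWh) and 1000 kWh metered
print(pumpage_from_pcc(0.25, 1000))  # 250.0
print(lag_bias_pct(4))               # 8.8
```

The bias function is a linear extrapolation of the estimated trend, intended only as an error guideline, not a correction to be applied blindly under differing hydrologic conditions.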

  5. BEHAVIOR OF MERCURY DURING DWPF CHEMICAL PROCESS CELL PROCESSING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamecnik, J.; Koopman, D.

    2012-04-09

    The Defense Waste Processing Facility has experienced significant issues with the stripping and recovery of mercury in the Chemical Processing Cell (CPC). The stripping rate has been inconsistent, often resulting in extended processing times to remove mercury to the required endpoint concentration. The recovery of mercury in the Mercury Water Wash Tank has never been high, and has decreased significantly since the Mercury Water Wash Tank was replaced after the seventh batch of Sludge Batch 5. Since this time, essentially no recovery of mercury has been seen. Pertinent literature was reviewed, previous lab-scale data on mercury stripping and recovery was examined, and new lab-scale CPC Sludge Receipt and Adjustment Tank (SRAT) runs were conducted. For previous lab-scale data, many of the runs with sufficient mercury recovery data were examined to determine what factors affect the stripping and recovery of mercury and to improve closure of the mercury material balance. Ten new lab-scale SRAT runs (HG runs) were performed to examine the effects of acid stoichiometry, sludge solids concentration, antifoam concentration, form of mercury added to simulant, presence of a SRAT heel, operation of the SRAT condenser at higher than prototypic temperature, varying noble metals from none to very high concentrations, and higher agitation rate. Data from simulant runs from SB6, SB7a, glycolic/formic, and the HG tests showed that a significant amount of Hg metal was found on the vessel bottom at the end of tests. Material balance closure improved from 12-71% to 48-93% when this segregated Hg was considered. The amount of Hg segregated as elemental Hg on the vessel bottom was 4-77% of the amount added. The highest recovery of mercury in the offgas system generally correlated with the highest retention of Hg in the slurry. Low retention in the slurry (high segregation on the vessel bottom) resulted in low recovery in the offgas system. 
High agitation rates appear to result in lower retention of mercury in the slurry. Both recovery of mercury in the offgas system and removal (segregation + recovery) from the slurry correlate with slurry consistency. Higher slurry consistency results in better retention of Hg in the slurry (less segregation) and better recovery in the offgas system, but the relationships of recovery and retention with consistency are sludge dependent. Some correlation with slurry yield stress and acid stoichiometry was also found. Better retention of mercury in the slurry results in better recovery in the offgas system because the mercury in the slurry is stripped more easily than the segregated mercury at the bottom of the vessel. Although better retention gives better recovery, the time to reach a particular slurry mercury content (wt%) is longer than if the retention is poorer because the segregation is faster. The segregation of mercury is generally a faster process than stripping. The stripping factor (mass of water evaporated per mass of mercury stripped) at the start of boiling was found to be less than 1000, compared to the assumed design basis value of 750 (the theoretical factor is 250). However, within two hours, this value increased to at least 2000 lb water per lb Hg. For runs with higher mercury recovery in the offgas system, the stripping factor remained around 2000, but runs with low recovery had stripping factors of 4000 to 40,000. DWPF data show similar trends, with the stripping factor value increasing during boiling. These high values correspond to high segregation and low retention of mercury in the sludge. The stripping factor for a pure Hg metal bead in water was found to be about 10,000 lb/lb. About 10-36% of the total Hg evaporated in a SRAT cycle was refluxed back to the SRAT during formic acid addition and boiling. Mercury is dissolved as a result of nitric acid formation from absorption of NO{sub x}. 
The actual solubility of dissolved mercury in the acidic condensate is about 100 times higher than the actual concentrations measured. Mercury metal present in the MWWT from previous batches could be dissolved by this acidic condensate. The test of the effect of higher SRAT condenser temperature on recovery of mercury in the MWWT and offgas system was inconclusive. The recovery at higher temperature was lower than in several low temperature runs, but about the same as in other runs. Factors other than temperature appear to affect the mercury recovery. The presence of chloride and iodide in simulants resulted in the formation of mercurous chloride and mercurous iodide, respectively, in the offgas system. Actual waste data show that the chloride content is much less than the simulant concentrations. Future simulant tests should minimize the addition of chloride. Similarly, iodine addition should be eliminated unless actual waste analyses show it to be present; currently, total iodine is not measured on actual waste samples.
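The stripping factor discussed above is a simple ratio of water evaporated to mercury removed to the offgas system; lower values mean more efficient stripping. A minimal sketch (values hypothetical; 750 lb/lb is the assumed design basis cited in the abstract):

```python
def stripping_factor(water_evaporated_lb, hg_stripped_lb):
    """Stripping factor: lb of water boiled off per lb of mercury
    carried to the offgas system. The abstract cites an assumed
    design basis of 750 lb/lb and a theoretical value of 250."""
    return water_evaporated_lb / hg_stripped_lb

# Hypothetical run: 1500 lb of water evaporated while stripping 2 lb Hg
print(stripping_factor(1500.0, 2.0))  # 750.0
```

The abstract's observed values of 2000 to 40,000 lb/lb thus correspond to far more water boiled per pound of mercury removed than the 750 lb/lb design basis assumed.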

  6. Planck intermediate results: XXVII. High-redshift infrared galaxy overdensity candidates and lensed sources discovered by Planck and confirmed by Herschel -SPIRE

    DOE PAGES

    Aghanim, N.; Altieri, B.; Arnaud, M.; ...

    2015-09-30

    We have used the Planck all-sky submillimetre and millimetre maps to search for rare sources distinguished by extreme brightness, a few hundred millijanskies, and their potential for being situated at high redshift. These “cold” Planck sources, selected using the High Frequency Instrument (HFI) directly from the maps and from the Planck Catalogue of Compact Sources (PCCS), all satisfy the criterion of having their rest-frame far-infrared peak redshifted to the frequency range 353–857 GHz. This colour-selection favours galaxies in the redshift range z = 2–4, which we consider as cold peaks in the cosmic infrared background. With a 4'.5 beam at the four highest frequencies, our sample is expected to include overdensities of galaxies in groups or clusters, lensed galaxies, and chance line-of-sight projections. In this paper, we perform a dedicated Herschel-SPIRE follow-up of 234 such Planck targets, finding a significant excess of red 350 and 500 μm sources in comparison to reference SPIRE fields. About 94% of the SPIRE sources in the Planck fields are consistent with being overdensities of galaxies peaking at 350 μm, with 3% peaking at 500 μm, and none peaking at 250 μm. About 3% are candidate lensed systems, all 12 of which have secure spectroscopic confirmations, placing them at redshifts z > 2.2. Only four targets are Galactic cirrus, yielding a success rate in our search strategy for identifying extragalactic sources within the Planck beam of better than 98%. The galaxy overdensities are detected with high significance, half of the sample showing statistical significance above 10σ. The SPIRE photometric redshifts of galaxies in overdensities suggest a peak at z ≃ 2, assuming a single common dust temperature for the sources of Td = 35 K. Under this assumption, we derive an infrared (IR) luminosity for each SPIRE source of about 4 × 10^12 L⊙, yielding star formation rates of typically 700 M⊙ yr^-1. 
If the observed overdensities are actual gravitationally-bound structures, the total IR luminosity of all their SPIRE-detected sources peaks at 4 × 10¹³ L⊙, leading to total star formation rates of perhaps 7 × 10³ M⊙ yr⁻¹ per overdensity. Taken together, these sources show the signatures of high-z (z > 2) protoclusters of intensively star-forming galaxies. Finally, all these observations confirm the uniqueness of our sample compared to reference samples and demonstrate the ability of the all-sky Planck-HFI cold sources to select populations of cosmological and astrophysical interest for structure formation studies.
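The selection criterion above can be made concrete: for an assumed modified-blackbody spectrum the rest-frame peak frequency follows from the dust temperature, and requiring that peak to redshift into the 353-857 GHz bands fixes a redshift window. The sketch below is illustrative only; the temperature T = 35 K matches the abstract, while the emissivity index β = 1.5 is an assumption not stated there.

```python
import numpy as np

# Physical constants (SI)
h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K

def greybody_peak_ghz(T=35.0, beta=1.5):
    """Locate the peak frequency of a modified blackbody S_nu ~ nu^beta * B_nu(T)
    by brute-force search over a frequency grid (returns GHz)."""
    nu = np.linspace(100e9, 10000e9, 200001)   # 100 GHz .. 10 THz
    x = h * nu / (k * T)
    s = nu ** (3.0 + beta) / np.expm1(x)       # B_nu ~ nu^3/(e^x - 1), times nu^beta
    return nu[np.argmax(s)] / 1e9

peak = greybody_peak_ghz()        # rest-frame peak for T = 35 K, beta = 1.5
z_lo = peak / 857.0 - 1.0         # redshift placing the peak at 857 GHz
z_hi = peak / 353.0 - 1.0         # redshift placing the peak at 353 GHz
```

The exact window depends on the assumed T and β (warmer dust or steeper emissivity pushes the peak, and hence the selected redshifts, higher), which is why the abstract quotes a favoured range rather than hard limits.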

  7. Planck intermediate results: XXVII. High-redshift infrared galaxy overdensity candidates and lensed sources discovered by Planck and confirmed by Herschel -SPIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghanim, N.; Altieri, B.; Arnaud, M.

    We have used the Planck all-sky submillimetre and millimetre maps to search for rare sources distinguished by extreme brightness, a few hundred millijanskies, and their potential for being situated at high redshift. These “cold” Planck sources, selected using the High Frequency Instrument (HFI) directly from the maps and from the Planck Catalogue of Compact Sources (PCCS), all satisfy the criterion of having their rest-frame far-infrared peak redshifted to the frequency range 353–857 GHz. This colour selection favours galaxies in the redshift range z = 2–4, which we consider as cold peaks in the cosmic infrared background. With a 4.5 arcmin beam at the four highest frequencies, our sample is expected to include overdensities of galaxies in groups or clusters, lensed galaxies, and chance line-of-sight projections. In this paper, we perform a dedicated Herschel-SPIRE follow-up of 234 such Planck targets, finding a significant excess of red 350 and 500 μm sources in comparison to reference SPIRE fields. About 94% of the SPIRE sources in the Planck fields are consistent with being overdensities of galaxies peaking at 350 μm, with 3% peaking at 500 μm and none peaking at 250 μm. About 3% are candidate lensed systems, all 12 of which have secure spectroscopic confirmations, placing them at redshifts z > 2.2. Only four targets are Galactic cirrus, yielding a success rate of better than 98% in our search strategy for identifying extragalactic sources within the Planck beam. The galaxy overdensities are detected with high significance, half of the sample showing statistical significance above 10σ. The SPIRE photometric redshifts of galaxies in overdensities suggest a peak at z ≃ 2, assuming a single common dust temperature of T_d = 35 K for the sources. Under this assumption, we derive an infrared (IR) luminosity for each SPIRE source of about 4 × 10¹² L⊙, yielding star formation rates of typically 700 M⊙ yr⁻¹.
If the observed overdensities are actual gravitationally-bound structures, the total IR luminosity of all their SPIRE-detected sources peaks at 4 × 10¹³ L⊙, leading to total star formation rates of perhaps 7 × 10³ M⊙ yr⁻¹ per overdensity. Taken together, these sources show the signatures of high-z (z > 2) protoclusters of intensively star-forming galaxies. Finally, all these observations confirm the uniqueness of our sample compared to reference samples and demonstrate the ability of the all-sky Planck-HFI cold sources to select populations of cosmological and astrophysical interest for structure formation studies.

  8. Planck intermediate results. XXVII. High-redshift infrared galaxy overdensity candidates and lensed sources discovered by Planck and confirmed by Herschel-SPIRE

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Aghanim, N.; Altieri, B.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Beelen, A.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Burigana, C.; Calabrese, E.; Canameras, R.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Couchot, F.; Crill, B. P.; Curto, A.; Danese, L.; Dassas, K.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Falgarone, E.; Flores-Cacho, I.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Frye, B.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Guéry, D.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Hernández-Monteagudo, C.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Floc'h, E.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; MacKenzie, T.; Maffei, B.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martinache, C.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Natoli, P.; Negrello, M.; Nesvadba, N. P. 
H.; Novikov, D.; Novikov, I.; Omont, A.; Pagano, L.; Pajot, F.; Pasian, F.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ristorcelli, I.; Rocha, G.; Roudier, G.; Rusholme, B.; Sandri, M.; Santos, D.; Savini, G.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Sunyaev, R.; Sutton, D.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Valtchanov, I.; Van Tent, B.; Vieira, J. D.; Vielva, P.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Welikala, N.; Zacchei, A.; Zonca, A.

    2015-10-01

    We have used the Planck all-sky submillimetre and millimetre maps to search for rare sources distinguished by extreme brightness, a few hundred millijanskies, and their potential for being situated at high redshift. These "cold" Planck sources, selected using the High Frequency Instrument (HFI) directly from the maps and from the Planck Catalogue of Compact Sources (PCCS), all satisfy the criterion of having their rest-frame far-infrared peak redshifted to the frequency range 353-857 GHz. This colour selection favours galaxies in the redshift range z = 2-4, which we consider as cold peaks in the cosmic infrared background. With a 4.5 arcmin beam at the four highest frequencies, our sample is expected to include overdensities of galaxies in groups or clusters, lensed galaxies, and chance line-of-sight projections. We perform a dedicated Herschel-SPIRE follow-up of 234 such Planck targets, finding a significant excess of red 350 and 500 μm sources in comparison to reference SPIRE fields. About 94% of the SPIRE sources in the Planck fields are consistent with being overdensities of galaxies peaking at 350 μm, with 3% peaking at 500 μm and none peaking at 250 μm. About 3% are candidate lensed systems, all 12 of which have secure spectroscopic confirmations, placing them at redshifts z > 2.2. Only four targets are Galactic cirrus, yielding a success rate of better than 98% in our search strategy for identifying extragalactic sources within the Planck beam. The galaxy overdensities are detected with high significance, half of the sample showing statistical significance above 10σ. The SPIRE photometric redshifts of galaxies in overdensities suggest a peak at z ≃ 2, assuming a single common dust temperature of T_d = 35 K for the sources. Under this assumption, we derive an infrared (IR) luminosity for each SPIRE source of about 4 × 10¹² L⊙, yielding star formation rates of typically 700 M⊙ yr⁻¹.
If the observed overdensities are actual gravitationally-bound structures, the total IR luminosity of all their SPIRE-detected sources peaks at 4 × 10¹³ L⊙, leading to total star formation rates of perhaps 7 × 10³ M⊙ yr⁻¹ per overdensity. Taken together, these sources show the signatures of high-z (z > 2) protoclusters of intensively star-forming galaxies. All these observations confirm the uniqueness of our sample compared to reference samples and demonstrate the ability of the all-sky Planck-HFI cold sources to select populations of cosmological and astrophysical interest for structure formation studies. Appendices are available in electronic form at http://www.aanda.org
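The step from IR luminosity to star formation rate uses a linear calibration; a common choice (an assumption here, since the abstract does not name its calibration) is the Kennicutt relation SFR ≈ 1.7 × 10⁻¹⁰ (L_IR/L⊙) M⊙ yr⁻¹, which reproduces the quoted numbers:

```python
# Kennicutt (1998) IR calibration, Salpeter IMF; the coefficient is an
# assumption here -- the abstract does not state which calibration was used.
K98 = 1.73e-10   # (Msun/yr) per (L_IR / Lsun)

def sfr_from_lir(lir_in_lsun):
    """Star formation rate in Msun/yr from total IR luminosity in solar units."""
    return K98 * lir_in_lsun

sfr_source      = sfr_from_lir(4e12)   # per SPIRE source: roughly 700 Msun/yr
sfr_overdensity = sfr_from_lir(4e13)   # per overdensity: roughly 7e3 Msun/yr
```

Because the relation is linear, the factor-of-ten step from a single SPIRE source to a whole overdensity carries straight through to the star formation rate.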

  9. Tank 40 Final Sludge Batch 8 Chemical Characterization Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, Christopher J.

    2013-09-19

    A sample of Sludge Batch 8 (SB8) was pulled from Tank 40 in order to obtain radionuclide inventory analyses necessary for compliance with the Waste Acceptance Product Specifications (WAPS). The SB8 WAPS sample was also analyzed for chemical composition, including noble metals and fissile constituents, and these results are reported here. These analyses, along with the WAPS radionuclide analyses, will help define the composition of the sludge in Tank 40 that is currently being fed to the Defense Waste Processing Facility (DWPF) as SB8. At SRNL, the 3-L Tank 40 SB8 sample was transferred from the shipping container into a 4-L high density polyethylene bottle and the solids were allowed to settle. Supernate was then siphoned off and circulated through the shipping container to complete the transfer of the sample. Following thorough mixing of the 3-L sample, a 553 g sub-sample was removed. This sub-sample was then utilized for all subsequent slurry sample preparations. Eight separate aliquots of the slurry were digested, four with HNO₃/HCl (aqua regia) in sealed Teflon® vessels and four with NaOH/Na₂O₂ (alkali or peroxide fusion) using Zr crucibles. Two Analytical Reference Glass - 1 (ARG-1) standards were digested along with a blank for each preparation. Each aqua regia digestion and blank was diluted 1:100 with deionized water and submitted to Analytical Development (AD) for inductively coupled plasma - atomic emission spectroscopy (ICP-AES) analysis, inductively coupled plasma - mass spectrometry (ICP-MS) analysis, atomic absorption spectroscopy (AA) for As and Se, and cold vapor atomic absorption spectroscopy (CV-AA) for Hg. Equivalent dilutions of the alkali fusion digestions and blank were submitted to AD for ICP-AES analysis.
Tank 40 SB8 supernate was collected from a mixed slurry sample in the SRNL Shielded Cells and submitted to AD for ICP-AES, ion chromatography (IC), total base/free OH⁻/other base, and total inorganic carbon/total organic carbon (TIC/TOC) analyses. Weighted dilutions of slurry were submitted for IC, TIC/TOC, and total base/free OH⁻/other base analyses. Activities for U-233, U-235, and Pu-239 were determined from the ICP-MS data for the aqua regia digestions of the Tank 40 WAPS slurry using the specific activity of each isotope. The Pu-241 value was determined from a Pu-238/-241 method developed by SRNL AD and previously described.
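The activity calculation described in the last two sentences is a direct application of specific activity, A = ln(2) · N_A · m / (t_half · M). A minimal sketch, with rounded literature half-lives rather than values from the report:

```python
import math

N_A  = 6.02214e23     # Avogadro's number, mol^-1
YEAR = 3.15576e7      # seconds per Julian year

def specific_activity_bq_per_g(half_life_yr, molar_mass_g_per_mol):
    """Specific activity A/m = ln(2) * N_A / (t_half * M), in Bq per gram."""
    decay_const = math.log(2.0) / (half_life_yr * YEAR)   # s^-1
    return decay_const * N_A / molar_mass_g_per_mol

# Illustrative nuclear data (rounded literature values, not from the report)
sa_pu239 = specific_activity_bq_per_g(2.411e4, 239.05)   # about 2.3e9 Bq/g
sa_u235  = specific_activity_bq_per_g(7.04e8, 235.04)    # about 8.0e4 Bq/g

# Activity implied by a hypothetical 1 microgram of Pu-239 measured by ICP-MS
activity_bq = 1.0e-6 * sa_pu239
```

This is why ICP-MS mass data suffice for long-lived isotopes such as U-235 and Pu-239: multiplying each measured isotope mass by its specific activity converts directly to activity.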

  10. Tank 40 final sludge batch 9 chemical and fissile radionuclide characterization results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bannochie, C. J.; Kubilius, W. P.; Pareizs, J. M.

    A sample of Sludge Batch (SB) 9 was pulled from Tank 40 in order to obtain radionuclide inventory analyses necessary for compliance with the Waste Acceptance Product Specifications (WAPS). The SB9 WAPS sample was also analyzed for chemical composition, including noble metals and fissile constituents, and these results are reported here. These analyses, along with the WAPS radionuclide analyses, will help define the composition of the sludge in Tank 40 that is fed to the Defense Waste Processing Facility (DWPF) as SB9. At the Savannah River National Laboratory (SRNL), the 3-L Tank 40 SB9 sample was transferred from the shipping container into a 4-L high density polyethylene bottle and the solids were allowed to settle. Supernate was then siphoned off and circulated through the shipping container to complete the transfer of the sample. Following thorough mixing of the 3-L sample, a 547 g sub-sample was removed. This sub-sample was then utilized for all subsequent slurry sample preparations. Eight separate aliquots of the slurry were digested, four with HNO₃/HCl (aqua regia) in sealed Teflon® vessels and four with NaOH/Na₂O₂ (alkali or peroxide fusion) using Zr crucibles. Three Analytical Reference Glass - 1 (ARG-1) standards were digested along with a blank for each preparation. Each aqua regia digestion and blank was diluted 1:100 with deionized water and submitted to Analytical Development (AD) for inductively coupled plasma - atomic emission spectroscopy (ICP-AES) analysis, inductively coupled plasma - mass spectrometry (ICP-MS) analysis, atomic absorption spectroscopy (AA) for As and Se, and cold vapor atomic absorption spectroscopy (CV-AA) for Hg. Equivalent dilutions of the alkali fusion digestions and blank were submitted to AD for ICP-AES analysis.
Tank 40 SB9 supernate was collected from a mixed slurry sample in the SRNL Shielded Cells and submitted to AD for ICP-AES, ion chromatography (IC), total base/free OH⁻/other base, and total inorganic carbon/total organic carbon (TIC/TOC) analyses. Weighted dilutions of slurry were submitted for IC, TIC/TOC, and total base/free OH⁻/other base analyses. Activities for U-233, U-235, and Pu-239 were determined from the ICP-MS data for the aqua regia digestions of the SB9 WAPS slurry using the specific activity of each isotope. The Pu-241 value was determined from a Pu-238/-241 method developed by SRNL AD and previously described.

  11. iPhone Sensors in Tracking Outcome Variables of the 30-Second Chair Stand Test and Stair Climb Test to Evaluate Disability: Cross-Sectional Pilot Study

    PubMed Central

    Samaan, Michael A; Schultz, Brooke; Popovic, Tijana; Souza, Richard B; Majumdar, Sharmila

    2017-01-01

    Background: Performance tests are important to characterize patient disabilities and functional changes. The Osteoarthritis Research Society International and others recommend the 30-second Chair Stand Test and Stair Climb Test, among others, as core tests that capture two distinct types of disability during activities of daily living. However, these two tests are limited by current protocols of testing in clinics. There is a need for an alternative that allows remote testing of functional capabilities during these tests in the osteoarthritis patient population. Objective: The objectives are to (1) develop an app for testing the functionality of an iPhone’s accelerometer and gravity sensor and (2) conduct a pilot study objectively evaluating the criterion validity and test-retest reliability of outcome variables obtained from these sensors during the 30-second Chair Stand Test and Stair Climb Test. Methods: An iOS app was developed with data collection capabilities from the built-in iPhone accelerometer and gravity sensor tools and linked to Google Firebase. A total of 24 subjects performed the 30-second Chair Stand Test with an iPhone accelerometer collecting data and an external rater manually counting sit-to-stand repetitions. A total of 21 subjects performed the Stair Climb Test with an iPhone gravity sensor turned on and an external rater timing the duration of the test on a stopwatch. App data from Firebase were converted into graphical data and exported into MATLAB for data filtering. Multiple iterations of a data processing algorithm were used to increase robustness and accuracy. MATLAB-generated outcome variables were compared to the manually determined outcome variables of each test.
Pearson’s correlation coefficients (PCCs), Bland-Altman plots, intraclass correlation coefficients (ICCs), standard errors of measurement, and repeatability coefficients were generated to evaluate criterion validity, agreement, and test-retest reliability of iPhone sensor data against gold-standard manual measurements. Results: App accelerometer data during the 30-second Chair Stand Test (PCC=.890) and gravity sensor data during the Stair Climb Test (PCC=.865) were highly correlated with gold-standard manual measurements. Greater than 95% of values on Bland-Altman plots comparing the manual data to the app data fell within the 95% limits of agreement. Strong intraclass correlation was found for trials of the 30-second Chair Stand Test (ICC=.968) and Stair Climb Test (ICC=.902). Standard errors of measurement for both tests were found to be within acceptable thresholds for MATLAB. Repeatability coefficients for the 30-second Chair Stand Test and Stair Climb Test were 0.629 and 1.20, respectively. Conclusions: App-based performance testing of the 30-second Chair Stand Test and Stair Climb Test is valid and reliable, suggesting its applicability to future, larger-scale studies in the osteoarthritis patient population. PMID:29079549
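The two headline statistics, a Pearson correlation against the gold standard and Bland-Altman limits of agreement, are straightforward to compute; the sketch below uses invented repetition counts purely for illustration:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def bland_altman_limits(reference, test):
    """Bias and 95% limits of agreement (bias +/- 1.96 * SD of differences)."""
    d = np.asarray(test, dtype=float) - np.asarray(reference, dtype=float)
    bias = float(d.mean())
    sd = float(d.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented counts for illustration (NOT data from the study)
manual = [12, 14, 11, 15, 13, 10]   # rater-counted sit-to-stand repetitions
app    = [12, 13, 11, 16, 13, 10]   # app-derived counts
r = pearson_r(manual, app)
bias, lo, hi = bland_altman_limits(manual, app)
```

The study's criterion "greater than 95% of values fell within the 95% limits of agreement" corresponds to checking each paired difference against the (lo, hi) interval computed here.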

  12. Data Preservation -Progress in NASA's Earth Observing System Data and Information System (EOSDIS)

    NASA Astrophysics Data System (ADS)

    Ramapriyan, H. K.

    2013-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) has been operational since August 1994, processing, archiving and distributing data from a variety of Earth science missions. The data sources include instruments on-board satellites and aircraft and field campaigns. In addition, EOSDIS manages socio-economic data. The satellite missions whose data are managed by EOSDIS range from the Nimbus series of the 1960s and 1970s to the EOS series launched during 1997 through 2004 to the Suomi National Polar Partnership (SNPP) launched in October 2011. Data from future satellite missions such as the Decadal Survey missions will also be archived and distributed by EOSDIS. NASA is not legislatively mandated to preserve data permanently as are other agencies such as USGS, NOAA and NARA. However, NASA must preserve all the data and associated content beyond the lives of NASA's missions to meet NASA's near-term objective of supporting active scientific research. Also, NASA must ensure that the data and associated content are preserved for transition to permanent archival agencies. The term preservation implies ensuring long-term protection of bits, readability, understandability, usability and reproducibility of results. To ensure preservation of bits, EOSDIS makes sure that data are backed-up adequately. Periodically, the risk of data loss is assessed and corrective action is taken as needed. Data are copied to more modern media on a routine basis to ensure readability. For some of the oldest data within EOSDIS, we have had to go through special data rescue efforts. Data from very old media have been restored and film data have been scanned and digitized. For example, restored data from the Nimbus missions are available for ftp access at the Goddard Earth Sciences Data and Information Services Center (GES DISC). 
The Earth Science Data and Information System Project, which is responsible for EOSDIS, has been active within the Data Stewardship and Preservation Committee of the Earth Science Information Partners' (ESIP) Federation in developing an emerging 'Provenance and Context Content Standard (PCCS)', a matrix that details various content items that must be preserved to ensure understandability, usability and reproducibility of results. Starting with this matrix, we have developed the NASA Earth Science Data Preservation Content Specification (PCS), which identifies, for NASA missions, what categories of items must be preserved and why. The PCS is to be treated as a guideline for current and heritage missions, and as a requirement for missions still in planning. The PCS is being applied to instruments that are no longer operating to gather content to be preserved and checklists of the collected items are being generated. We are also considering the preservation information architecture to address where the various content items will be preserved and how they are linked to each other. The following key aspects of preservation are being considered by the four working groups in the Data Stewardship Interest Area of NASA's Earth Science Data System Working Groups - Preservation Information Architecture, Implementation of Digital Object Identifiers, Hierarchical Data Format (HDF) conventions to promote interoperability, and Provenance representation for Earth Science (PROV-ES).

  13. iPhone Sensors in Tracking Outcome Variables of the 30-Second Chair Stand Test and Stair Climb Test to Evaluate Disability: Cross-Sectional Pilot Study.

    PubMed

    Adusumilli, Gautam; Joseph, Solomon Eben; Samaan, Michael A; Schultz, Brooke; Popovic, Tijana; Souza, Richard B; Majumdar, Sharmila

    2017-10-27

    Performance tests are important to characterize patient disabilities and functional changes. The Osteoarthritis Research Society International and others recommend the 30-second Chair Stand Test and Stair Climb Test, among others, as core tests that capture two distinct types of disability during activities of daily living. However, these two tests are limited by current protocols of testing in clinics. There is a need for an alternative that allows remote testing of functional capabilities during these tests in the osteoarthritis patient population. Objectives are to (1) develop an app for testing the functionality of an iPhone's accelerometer and gravity sensor and (2) conduct a pilot study objectively evaluating the criterion validity and test-retest reliability of outcome variables obtained from these sensors during the 30-second Chair Stand Test and Stair Climb Test. An iOS app was developed with data collection capabilities from the built-in iPhone accelerometer and gravity sensor tools and linked to Google Firebase. A total of 24 subjects performed the 30-second Chair Stand Test with an iPhone accelerometer collecting data and an external rater manually counting sit-to-stand repetitions. A total of 21 subjects performed the Stair Climb Test with an iPhone gravity sensor turned on and an external rater timing the duration of the test on a stopwatch. App data from Firebase were converted into graphical data and exported into MATLAB for data filtering. Multiple iterations of a data processing algorithm were used to increase robustness and accuracy. MATLAB-generated outcome variables were compared to the manually determined outcome variables of each test. 
Pearson's correlation coefficients (PCCs), Bland-Altman plots, intraclass correlation coefficients (ICCs), standard errors of measurement, and repeatability coefficients were generated to evaluate criterion validity, agreement, and test-retest reliability of iPhone sensor data against gold-standard manual measurements. App accelerometer data during the 30-second Chair Stand Test (PCC=.890) and gravity sensor data during the Stair Climb Test (PCC=.865) were highly correlated to gold-standard manual measurements. Greater than 95% of values on Bland-Altman plots comparing the manual data to the app data fell within the 95% limits of agreement. Strong intraclass correlation was found for trials of the 30-second Chair Stand Test (ICC=.968) and Stair Climb Test (ICC=.902). Standard errors of measurement for both tests were found to be within acceptable thresholds for MATLAB. Repeatability coefficients for the 30-second Chair Stand Test and Stair Climb Test were 0.629 and 1.20, respectively. App-based performance testing of the 30-second Chair Stand Test and Stair Climb Test is valid and reliable, suggesting its applicability to future, larger-scale studies in the osteoarthritis patient population. ©Gautam Adusumilli, Solomon Eben Joseph, Michael A Samaan, Brooke Schultz, Tijana Popovic, Richard B Souza, Sharmila Majumdar. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 27.10.2017.

  14. Low-threshold indium gallium nitride quantum dot microcavity lasers

    NASA Astrophysics Data System (ADS)

    Woolf, Alexander J.

    Gallium nitride (GaN) microcavities with embedded optical emitters have long been sought after as visible light sources as well as platforms for cavity quantum electrodynamics (cavity QED) experiments. Specifically, materials containing indium gallium nitride (InGaN) quantum dots (QDs) offer an outstanding platform to study light-matter interactions and realize practical devices, such as on-chip light emitting diodes and nanolasers. Inherent advantages of nitride-based microcavities include low surface recombination velocities, enhanced room-temperature performance (due to their high exciton binding energy, as high as 67 meV for InGaN QDs), and emission wavelengths in the blue region of the visible spectrum. In spite of these advantages, several challenges must be overcome in order to capitalize on the potential of this material system. Such difficulties include the processing of GaN into high-quality devices due to the chemical inertness of the material, low material quality as a result of strain-induced defects, reduced carrier recombination efficiencies due to internal fields, and a lack of characterization of the InGaN QDs themselves due to the difficulty of their growth and their consequent lack of development relative to other semiconductor QDs. In this thesis we seek to understand and address such issues by investigating the interaction of light coupled to InGaN QDs via a GaN microcavity resonator. Such coupling led us to the demonstration of the first InGaN QD microcavity laser, whose performance offers insights into the properties and current limitations of the nitride materials and their emitters. This work is organized into three main sections. Part I outlines the key advantages and challenges regarding indium gallium nitride (InGaN) emitters embedded within gallium nitride (GaN) optical microcavities. Previous work is also discussed to establish context for the work presented here.
Part II covers the fundamentals of laser operation, including the derivation and analysis of the laser rate equations. A thorough examination of the rate equations serves as a natural motivation for QDs and high-quality-factor, low-modal-volume resonators as an optimal laser gain medium and cavity, respectively. The combination of the two theoretically yields the most efficient semiconductor laser device possible. Part III describes in detail the design, growth, fabrication, and characterization of the first InGaN QD microcavity laser. Additional experiments are also conducted in order to conclusively prove that the InGaN QDs serve as the gain medium and facilitate laser oscillation within the microdisk cavities. Part III continues with work directed toward the development of the next generation of nitride light emitting devices. This includes the realization of photonic crystal cavity (PCC) fragmented quantum well (FQW) lasers that exhibit record-low lasing thresholds of 9.1 μJ/cm², comparable to the best devices in other III-V material systems. Part III also discusses cavity QED experiments on InGaN QDs embedded within GaN PCCs in order to quantify the degree of light-matter interaction. The lack of experimental evidence for weak or strong coupling, in the form of the Purcell effect or cavity-mode anti-crossing respectively, naturally motivates the question of what mechanism is limiting the device performance. Part III concludes with cathodoluminescence and tapered-fiber measurements in order to identify the limiting factor towards achieving strong coupling between InGaN QDs and GaN microcavities.
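The threshold analysis described for Part II can be illustrated with the textbook normalized rate equations for carrier density N and photon density S (a generic model, not the specific equations derived in the thesis). Solving the steady state shows gain clamping and the jump in photon number at threshold; all parameter values are arbitrary illustrative numbers.

```python
def steady_state(pump, g0=1.0e4, n_tr=1.0, tau_n=1.0, tau_p=0.01, beta=1.0e-4):
    """Steady state of the normalized semiconductor-laser rate equations
         dN/dt = P - N/tau_n - g0*(N - n_tr)*S
         dS/dt = g0*(N - n_tr)*S - S/tau_p + beta*N/tau_n
    found by bisection on the carrier density N.  All parameter values are
    arbitrary normalized numbers chosen for illustration."""
    n_th = n_tr + 1.0 / (g0 * tau_p)          # gain-clamping carrier density

    def photons(n):
        # from dS/dt = 0:  S = beta*(N/tau_n) / (1/tau_p - g)
        g = g0 * (n - n_tr)
        return beta * (n / tau_n) / (1.0 / tau_p - g)

    def residual(n):
        # dN/dt = 0 rearranged; negative when N is too small for this pump
        return n / tau_n + g0 * (n - n_tr) * photons(n) - pump

    lo, hi = 0.0, n_th * (1.0 - 1e-12)        # S stays finite below n_th
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    n = 0.5 * (lo + hi)
    return n, photons(n)

below = steady_state(0.5)   # pump below threshold: S is tiny (spontaneous only)
above = steady_state(2.0)   # pump above threshold: N clamps near n_th, S jumps
```

Below threshold the photon number is set by the spontaneous-emission fraction β; above threshold N clamps at n_th and S grows linearly with pump, which is the behavior that motivates high-Q, low-modal-volume cavities (larger effective β, lower threshold).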

  15. On the structure and radiation chemistry of iron phosphate glasses: New insights from electron spin resonance, Mössbauer, and evolved-gas mass spectroscopy

    NASA Astrophysics Data System (ADS)

    Griscom, D. L.; Merzbacher, C. I.; Bibler, N. E.; Imagawa, H.; Uchiyama, S.; Namiki, A.; Marasinghe, G. K.; Mesko, M.; Karabulut, M.

    1998-05-01

    Several vitreous forms for immobilization of plutonium and/or high-level nuclear wastes have been surveyed by electron spin resonance (ESR) to gain insights into their atomic-scale structures and to look for signs of radiolytic decomposition resulting from exposures to γ-ray doses of 30 MGy. While preliminary results are reported for Defense Waste Processing Facility (DWPF) borosilicate compositions and an experimental lanthanum-silicate glass, this paper focuses primarily on a class of glasses containing 40-75 mol% P₂O₅ and up to 40 mol% Fe₂O₃. Each of the six diverse compositions investigated displayed characteristic ESR signals (not resembling those of the iron-containing phosphorus-free glasses) comprising combinations of an extremely broad "X resonance" and a narrow "Z resonance", both centered near g = 2.00 and both displaying nearly perfect Lorentzian line shapes (peak-to-peak derivative widths ~300-600 mT and ~30 mT, respectively, at 300 K). The X-resonance intensities in the air-melted glasses correlated linearly with the Fe:P ratio up to [Fe₂O₃]/[P₂O₅] ≈ 0.6, where intensity values of ~1 spin/phosphorus were reached. Mössbauer studies showed that the [Fe³⁺]/[Fe]_tot ratio could be varied from 0.82 to 0.49 by raising the melting temperature in air from 1150°C to 1450°C and/or by employing mildly reducing atmospheres. The combined X + Z resonance intensities were reduced to zero for [Fe³⁺]/[Fe]_tot less than ~0.6, leaving only a much weaker spectrum attributable to Fe³⁺ ions. The X and Z ESR signals of the iron phosphate glasses resemble nothing else in the literature except the correspondingly denoted signals in an iron-free amorphous peroxyborate (APB) preparation. The X and Z resonances in the latter are deemed to arise from superoxide ions (O₂⁻) in the borate network and in a separated Na₂O₂ phase, respectively.
An asymmetric Z-resonance signal attributable to interstitial O₂⁻ species was a radiation-induced manifestation in a phosphate glass of composition 50P₂O₅-20Fe₂O₃-23Li₂O-7CeO₂. Irradiated and unirradiated samples of this same glass were studied by ESR isochronal annealing and differential thermal analysis, revealing a one-for-one conversion of X to Z upon partial crystallization near 670°C and a Z → X reconversion upon partial remelting near 970°C. Heating to 1070°C in dry Ar resulted in a weight loss of ~5 wt%, while quadrupole mass spectrometry (QMS) during ramped heating to 1550°C at a pressure of 10⁻⁵ Pa revealed the evolution of O₂ molecules with (radiation-sensitive) ion-current peaks near 1170°C and 1350-1400°C. To account for the totality of these and other results, it is suggested that air-melted iron phosphate glasses may contain macroscopic numbers of superoxide ions as an intrinsic chemical feature of their as-quenched structures. A specific four-connected phosphorus-oxygen glass network incorporating O₂⁻ ions is proposed.

  16. RADIOACTIVE DEMONSTRATIONS OF FLUIDIZED BED STEAM REFORMING AS A SUPPLEMENTARY TREATMENT FOR HANFORD'S LOW ACTIVITY WASTE AND SECONDARY WASTES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jantzen, C.; Crawford, C.; Cozzi, A.

    The U.S. Department of Energy's Office of River Protection (ORP) is responsible for the retrieval, treatment, immobilization, and disposal of Hanford's tank waste. Currently there are approximately 56 million gallons of highly radioactive mixed wastes awaiting treatment. A key aspect of the River Protection Project (RPP) cleanup mission is to construct and operate the Waste Treatment and Immobilization Plant (WTP). The WTP will separate the tank waste into high-level and low-activity waste (LAW) fractions, both of which will subsequently be vitrified. The projected throughput capacity of the WTP LAW Vitrification Facility is insufficient to complete the RPP mission in the time frame required by the Hanford Federal Facility Agreement and Consent Order, also known as the Tri-Party Agreement (TPA), i.e. December 31, 2047. Therefore, Supplemental Treatment is required both to meet the TPA treatment requirements and to more cost-effectively complete the tank waste treatment mission. The Supplemental Treatment chosen will immobilize the portion of the retrieved LAW that is not sent to the WTP's LAW Vitrification facility into a solidified waste form. The solidified waste will then be disposed of on the Hanford site in the Integrated Disposal Facility (IDF). In addition, the WTP LAW vitrification facility off-gas condensate, known as WTP Secondary Waste (WTP-SW), will be generated and enriched in volatile components such as Cs-137, I-129, Tc-99, Cl, F, and SO₄ that volatilize at the vitrification temperature of 1150 °C in the absence of a continuous cold cap. The current waste disposal path for the WTP-SW is to recycle it to the supplemental LAW treatment to avoid a large steady-state accumulation in the pretreatment-vitrification loop.
Fluidized Bed Steam Reforming (FBSR) offers a moderate temperature (700-750 C) continuous method by which LAW and/or WTP-SW wastes can be processed irrespective of whether they contain organics, nitrates, sulfates/sulfides, chlorides, fluorides, volatile radionuclides or other aqueous components. The FBSR technology can process these wastes into a crystalline ceramic (mineral) waste form. The mineral waste form that is produced by co-processing waste with kaolin clay in an FBSR process has been shown to be as durable as LAW glass. Monolithing of the granular FBSR product is being investigated to prevent dispersion during transport or burial/storage but is not necessary for performance. A Benchscale Steam Reformer (BSR) was designed and constructed at the Savannah River National Laboratory (SRNL) to treat actual radioactive wastes to confirm the findings of the non-radioactive FBSR pilot scale tests and to qualify the waste form for applications at Hanford. Radioactive testing commenced in 2010 with a demonstration of Hanford's WTP-SW where Savannah River Site (SRS) High Level Waste (HLW) secondary waste from the Defense Waste Processing Facility (DWPF) was shimmed with a mixture of I-125/129 and Tc-99 to chemically resemble WTP-SW. Ninety-six grams of radioactive product were made for testing. The second campaign commenced using SRS LAW chemically trimmed to look like Hanford's LAW. Six hundred grams of radioactive product were made for extensive testing and comparison to the non-radioactive pilot scale tests. The same mineral phases were found in the radioactive and non-radioactive testing.

  17. RADIOACTIVE DEMONSTRATION OF FINAL MINERALIZED WASTE FORMS FOR HANFORD WASTE TREATMENT PLANT SECONDARY WASTE BY FLUIDIZED BED STEAM REFORMING USING THE BENCH SCALE REFORMER PLATFORM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.; Burket, P.; Cozzi, A.

    2012-02-02

    The U.S. Department of Energy's Office of River Protection (ORP) is responsible for the retrieval, treatment, immobilization, and disposal of Hanford's tank waste. Currently there are approximately 56 million gallons of highly radioactive mixed wastes awaiting treatment. A key aspect of the River Protection Project (RPP) cleanup mission is to construct and operate the Waste Treatment and Immobilization Plant (WTP). The WTP will separate the tank waste into high-level and low-activity waste (LAW) fractions, both of which will subsequently be vitrified. The projected throughput capacity of the WTP LAW Vitrification Facility is insufficient to complete the RPP mission in the time frame required by the Hanford Federal Facility Agreement and Consent Order, also known as the Tri-Party Agreement (TPA), i.e. December 31, 2047. Therefore, Supplemental Treatment is required both to meet the TPA treatment requirements as well as to more cost effectively complete the tank waste treatment mission. In addition, the WTP LAW vitrification facility off-gas condensate known as WTP Secondary Waste (WTP-SW) will be generated and enriched in volatile components such as Cs-137, I-129, Tc-99, Cl, F, and SO4 that volatilize at the vitrification temperature of 1150 C in the absence of a continuous cold cap (that could minimize volatilization). The current waste disposal path for the WTP-SW is to process it through the Effluent Treatment Facility (ETF). Fluidized Bed Steam Reforming (FBSR) is being considered for immobilization of the ETF concentrate that would be generated by processing the WTP-SW. The focus of this current report is the WTP-SW. FBSR offers a moderate temperature (700-750 C) continuous method by which WTP-SW wastes can be processed irrespective of whether they contain organics, nitrates, sulfates/sulfides, chlorides, fluorides, volatile radionuclides or other aqueous components.
The FBSR technology can process these wastes into a crystalline ceramic (mineral) waste form. The mineral waste form that is produced by co-processing waste with kaolin clay in an FBSR process has been shown to be as durable as LAW glass. Monolithing of the granular FBSR product is being investigated to prevent dispersion during transport or burial/storage, but is not necessary for performance. A Benchscale Steam Reformer (BSR) was designed and constructed at the SRNL to treat actual radioactive wastes to confirm the findings of the non-radioactive FBSR pilot scale tests and to qualify the waste form for applications at Hanford. BSR testing with WTP-SW waste surrogates and associated analytical analyses and tests of granular products (GP) and monoliths began in the Fall of 2009, and then was continued from the Fall of 2010 through the Spring of 2011. Radioactive testing commenced in 2010 with a demonstration of Hanford's WTP-SW where Savannah River Site (SRS) High Level Waste (HLW) secondary waste from the Defense Waste Processing Facility (DWPF) was shimmed with a mixture of I-125/129 and Tc-99 to chemically resemble WTP-SW. Prior to these radioactive feed tests, non-radioactive simulants were also processed. Ninety-six grams of radioactive granular product were made for testing and comparison to the non-radioactive pilot scale tests. The same mineral phases were found in the radioactive and non-radioactive testing.

  18. Methods to determine pumped irrigation-water withdrawals from the Snake River between Upper Salmon Falls and Swan Falls Dams, Idaho, using electrical power data, 1990-95

    USGS Publications Warehouse

    Maupin, Molly A.

    1999-01-01

    Pumped withdrawals compose most of the irrigation-water diversions from the Snake River between Upper Salmon Falls and Swan Falls Dams in southwestern Idaho. Pumps at 32 sites along the reach lift water as high as 745 feet to irrigate croplands on plateaus north and south of the river. The number of pump sites at which withdrawals are being continuously measured has been steadily decreasing, from 32 in 1990 to 7 in 1998. A cost-effective and accurate means of estimating annual irrigation-water withdrawals at pump sites that are no longer continuously measured was needed. Therefore, the U.S. Geological Survey began a study in 1998, as part of its Water-Use Program, to determine power-consumption coefficients (PCCs) for each pump site so that withdrawals could be estimated by using electrical power-consumption and total head data. PCC values for each pump site were determined by using withdrawal data that were measured by the U.S. Geological Survey during 1990–92 and 1994–95, energy data reported by Idaho Power Company during the same period, and total head data collected at each site during a field inventory in 1998. Individual average annual withdrawals for the 32 pump sites ranged from 1,120 to 44,480 acre-feet; average PCC values ranged from 103 to 1,248 kilowatthours per acre-foot. During the 1998 field season, power demand, total head, and withdrawal at 18 sites were measured to determine 1998 PCC values. Most of the 1998 PCC values were within 10 percent of the 5-year average, which demonstrates that withdrawals for a site that is no longer continuously measured can be calculated with reasonable accuracy by using the PCC value determined from this study and annual power-consumption data. K-factors, coefficients that describe the amount of energy necessary to lift water, were determined for each pump site by using values of PCC and total head and ranged from 1.11 to 1.89 kilowatthours per acre-foot per foot.
Statistical methods were used to define the relations among PCC values and selected pump-site characteristics. Multiple correlation analysis between average PCC values and total head, total horsepower, and total number of pumps revealed the strongest correlation was between average PCC and total head. Linear regression of these two variables resulted in a strong coefficient of determination (R2=0.986) and a representative K-factor of 1.463. Pump sites were subdivided into two groups on the basis of total head: 0 to 300 feet and greater than 300 feet. Regression of average PCC values for eight pump sites with total head less than 300 feet produced a good coefficient of determination (R2=0.870) and a representative K-factor of 1.682. The second group consisted of 10 pump sites with total head greater than 300 feet; regression produced a coefficient of determination of R2=0.939 and a representative K-factor of 1.405. Data on pump-site characteristics were successfully used to determine individual PCC and K-factor values. Statistical relations between pump-site characteristics and PCC values were defined and used to determine regression equations that resulted in good coefficients of determination and representative K-factors. The individual PCC values will be used in the future to calculate irrigation-water withdrawals at sites that are no longer continuously measured. The representative K-factors and regression equations will be used to calculate irrigation-water withdrawals at sites that have not been previously measured and where total head and power consumption are known.
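    The PCC and K-factor relationships described in this record reduce to simple ratios. The sketch below illustrates that arithmetic with invented numbers; the 4,500,000 kWh, 10,000 acre-feet, and 300 feet of head are hypothetical values, not data from the study.

```python
# Illustrative sketch of the power-consumption-coefficient (PCC) arithmetic.
# All numeric values below are hypothetical, not measured USGS data.

def pcc(energy_kwh, withdrawal_acre_ft):
    """Power-consumption coefficient: kilowatthours per acre-foot pumped."""
    return energy_kwh / withdrawal_acre_ft

def k_factor(pcc_value, total_head_ft):
    """Energy needed to lift one acre-foot of water one foot (kWh/acre-ft/ft)."""
    return pcc_value / total_head_ft

def withdrawal_from_power(energy_kwh, pcc_value):
    """Estimate annual withdrawal at an unmetered site from power data alone."""
    return energy_kwh / pcc_value

# Hypothetical pump site: 4,500,000 kWh consumed, 10,000 acre-ft pumped, 300 ft head
c = pcc(4_500_000, 10_000)                 # 450 kWh per acre-foot
k = k_factor(c, 300)                       # 1.5 kWh per acre-foot per foot
est = withdrawal_from_power(4_500_000, c)  # recovers 10,000 acre-ft
```

    This is the sense in which a site's PCC, once established from concurrent power and withdrawal records, lets later withdrawals be estimated from power-consumption data alone.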

  19. Community wide interventions for increasing physical activity.

    PubMed

    Baker, Philip Ra; Francis, Daniel P; Soares, Jesus; Weightman, Alison L; Foster, Charles

    2011-04-13

    Multi-strategic community wide interventions for physical activity are increasingly popular but their ability to achieve population level improvements is unknown. To evaluate the effects of community wide, multi-strategic interventions upon population levels of physical activity. We searched the Cochrane Public Health Group Specialised Register, The Cochrane Library, MEDLINE, MEDLINE in Process, EMBASE, CINAHL, LILACS, PsycINFO, ASSIA, The British Nursing Index, Chinese CNKI databases, EPPI Centre (DoPHER, TRoPHI), ERIC, HMIC, Sociological Abstracts, SPORTDiscus, Transport Database and Web of Science (Science Citation Index, Social Sciences Citation Index, Conference Proceedings Citation Index). We also scanned websites of the EU Platform on Diet, Physical Activity and Health; Health-Evidence.ca; the International Union for Health Promotion and Education; the NIHR Coordinating Centre for Health Technology (NCCHTA) and NICE and SIGN guidelines. Reference lists of all relevant systematic reviews, guidelines and primary studies were followed up. We contacted experts in the field from the National Obesity Observatory Oxford, Oxford University; Queensland Health, Queensland University of Technology, the University of Central Queensland; the University of Tennessee and Washington University; and handsearched six relevant journals. The searches were last updated to the end of November 2009 and were not restricted by language or publication status. Cluster randomised controlled trials, randomised controlled trials (RCT), quasi-experimental designs which used a control population for comparison, interrupted time-series (ITS) studies, and prospective controlled cohort studies (PCCS) were included. Only studies with a minimum six-month follow up from the start of the intervention to measurement of outcomes were included. Community wide interventions had to comprise at least two broad strategies aimed at physical activity for the whole population. 
Studies which randomised individuals from the same community were excluded. At least two review authors independently extracted the data and assessed the risk of bias of each included study. Non-English language papers were reviewed with the assistance of an epidemiologist interpreter. Each study was assessed for the setting, the number of included components and their intensity. Outcome measures were grouped according to whether they were dichotomous (physically active, physically active during leisure time and sedentary or physically inactive) or continuous (leisure time physical activity, walking, energy expenditure). For dichotomous measures we calculated the unadjusted and adjusted risk difference, and the unadjusted and adjusted relative risk. For continuous measures we calculated net percentage change from baseline, unadjusted and adjusted risk difference, and the unadjusted and adjusted relative risk. After the selection process had been completed 25 studies were included in the review. Of the included studies, 19 were set in high income countries, using the World Bank economic classification, and the remaining six were in low income countries. The interventions varied by the number of strategies included and their intensity. Almost all of the interventions included a component of building partnerships with local governments or non-governmental organisations (NGOs) (22 studies). None of the studies provided results by socio-economic disadvantage or other markers of equity consideration. However, of those included studies undertaken in high income countries, 11 studies were described by the authors as being provided to deprived, disadvantaged, or low socio-economic communities. Fifteen studies were identified as having a high risk of bias, 10 studies were unclear, and no studies had a low risk of bias. Selection bias was a major concern with these studies, with only one study using randomisation to allocate communities (Simon 2008).
No studies were judged as being at low risk of selection bias although 16 studies were considered to have an unclear risk of bias. Eleven studies had a high risk of detection bias, 10 with an unclear risk and four with no risk. Assessment of detection bias included an assessment of the validity of the measurement tools and quality of outcome measures. The effects reported were inconsistent across the studies and the measures. Some of the better designed studies showed no improvement in measures of physical activity. Publication bias was evident. Although numerous studies have been undertaken, there is a noticeable inconsistency of the findings of the available studies and this is confounded by serious methodological issues within the included studies. The body of evidence in this review does not support the hypothesis that multi-component community wide interventions effectively increase population levels of physical activity. There is a clear need for well-designed intervention studies and such studies should focus on the quality of the measurement of physical activity, the frequency of measurement and the allocation to intervention and control communities.
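    For the dichotomous outcomes named in this record, the unadjusted effect measures are simple functions of 2x2 counts. A minimal sketch follows; the counts are invented for illustration and do not come from any included study.

```python
# Unadjusted effect measures for a dichotomous outcome (e.g. "physically
# active") from intervention and control community counts. Counts are
# hypothetical, for illustration only.

def risk(events, total):
    """Proportion with the outcome."""
    return events / total

def risk_difference(e1, n1, e0, n0):
    """Risk in the intervention group minus risk in the control group."""
    return risk(e1, n1) - risk(e0, n0)

def relative_risk(e1, n1, e0, n0):
    """Ratio of intervention risk to control risk."""
    return risk(e1, n1) / risk(e0, n0)

# Hypothetical: 300/1000 active in the intervention community, 250/1000 in control
rd = risk_difference(300, 1000, 250, 1000)  # 0.05, i.e. 5 percentage points
rr = relative_risk(300, 1000, 250, 1000)    # 1.2
```

    Adjusted versions of these measures additionally control for baseline differences and covariates, which is why the review extracted both forms where available.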

  20. LITERATURE REVIEWS TO SUPPORT ION EXCHANGE TECHNOLOGY SELECTION FOR MODULAR SALT PROCESSING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, W

    2007-11-30

    This report summarizes the results of literature reviews conducted to support the selection of a cesium removal technology for application in a small column ion exchange (SCIX) unit supported within a high level waste tank. SCIX is being considered as a technology for the treatment of radioactive salt solutions in order to accelerate closure of waste tanks at the Savannah River Site (SRS) as part of the Modular Salt Processing (MSP) technology development program. Two ion exchange materials, spherical Resorcinol-Formaldehyde (RF) and engineered Crystalline Silicotitanate (CST), are being considered for use within the SCIX unit. Both ion exchange materials have been studied extensively and are known to have high affinities for cesium ions in caustic tank waste supernates. RF is an elutable organic resin and CST is a non-elutable inorganic material. Waste treatment processes developed for the two technologies will differ with regard to solutions processed, secondary waste streams generated, optimum column size, and waste throughput. Pertinent references, anticipated processing sequences for utilization in waste treatment, gaps in the available data, and technical comparisons will be provided for the two ion exchange materials to assist in technology selection for SCIX. The engineered, granular form of CST (UOP IE-911) was the baseline ion exchange material used for the initial development and design of the SRS SCIX process (McCabe, 2005). To date, in-tank SCIX has not been implemented for treatment of radioactive waste solutions at SRS. Since initial development and consideration of SCIX for SRS waste treatment an alternative technology has been developed as part of the River Protection Project Waste Treatment Plant (RPP-WTP) Research and Technology program (Thorson, 2006).
Spherical RF resin is the baseline media for cesium removal in the RPP-WTP, which was designed for the treatment of radioactive waste supernates and is currently under construction in Hanford, WA. Application of RF for cesium removal in the Hanford WTP does not involve in-riser columns but does utilize the resin in large scale column configurations in a waste treatment facility. The basic conceptual design for SCIX involves the dissolution of saltcake in SRS Tanks 1-3 to give approximately 6 M sodium solutions and the treatment of these solutions for cesium removal using one or two columns supported within a high level waste tank. Prior to ion exchange treatment, the solutions will be filtered for removal of entrained solids. In addition to Tanks 1-3, solutions in two other tanks (37 and 41) will require treatment for cesium removal in the SCIX unit. The previous SCIX design (McCabe, 2005) utilized CST for cesium removal with downflow supernate processing and included a CST grinder following cesium loading. Grinding of CST was necessary to make the cesium-loaded material suitable for vitrification in the SRS Defense Waste Processing Facility (DWPF). Because RF resin is elutable (and reusable), and because processing requires conversion between sodium and hydrogen forms using caustic and acidic solutions, more liquid processing steps are involved. The WTP baseline process involves a series of caustic and acidic solutions (downflow processing) with water washes between pH transitions across neutral. In addition, due to resin swelling during conversion from hydrogen to sodium form, an upflow caustic regeneration step is required. Presumably, one of these basic processes (or some variation) will be utilized for MSP for the appropriate ion exchange technology selected. CST processing involves two primary waste products: loaded CST and decontaminated salt solution (DSS).
RF processing involves three primary waste products: spent RF resin, DSS, and acidic cesium eluate, although the resin is reusable and typically does not require replacement until completion of multiple treatment cycles. CST processing requires grinding of the ion exchange media, handling of solids with high cesium loading, and handling of liquid wash and conditioning solutions. RF processing requires handling and evaporation of cesium eluates, disposal of spent organic resin, and handling of the various liquid wash and regenerate solutions used. In both cases, the DSS will be immobilized in a low activity waste form. It appears that both technologies are mature, well studied, and generally suitable for this application. Technology selection will likely be based on downstream impacts or preferences between the various processing options for the two materials rather than on some unacceptable performance property identified for one material. As a result, the following detailed technical review and summary of the two technologies should be useful to assist in technology selection for SCIX.

  1. Radiation Environment Modeling for Spacecraft Design: New Model Developments

    NASA Technical Reports Server (NTRS)

    Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray

    2006-01-01

    A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.

  2. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    PubMed

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  3. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    PubMed

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of qualitative analysis models for near infrared spectra were studied. Separate modeling can significantly improve model stability, but its ability to improve model adaptability is limited. Joint modeling improves both the adaptability and the stability of the model; compared with separate modeling, it also shortens modeling time, reduces the modeling workload, extends the model's term of validity, and improves modeling efficiency. The adaptability experiments show that the correct recognition rate of the separate-modeling method is relatively low and cannot meet application requirements, whereas the joint-modeling method reaches a correct recognition rate of 90% and markedly improves recognition. The stability experiments show that models built by joint modeling identify samples better than models built by separate modeling, and the method has good application value.

  4. Evaluating the Bias of Alternative Cost Progress Models: Tests Using Aerospace Industry Acquisition Programs

    DTIC Science & Technology

    1992-12-01

    … suspect that the extent of prediction bias was positively correlated among the various models: the random walk, learning curve, fixed-variable, and Bemis models. Keywords: … Functions, Production Rate Adjustment Model, Learning Curve Model, Random Walk Model, Bemis Model, Evaluating Model Bias, Cost Prediction Bias, Cost … of four cost progress models--a random walk model, the traditional learning curve model, a production rate model (fixed-variable model), and a model …

  5. Experience with turbulence interaction and turbulence-chemistry models at Fluent Inc.

    NASA Technical Reports Server (NTRS)

    Choudhury, D.; Kim, S. E.; Tselepidakis, D. P.; Missaghi, M.

    1995-01-01

    This viewgraph presentation discusses (1) turbulence modeling: challenges in turbulence modeling, desirable attributes of turbulence models, turbulence models in FLUENT, and examples using FLUENT; and (2) combustion modeling: turbulence-chemistry interaction and FLUENT equilibrium model. As of now, three turbulence models are provided: the conventional k-epsilon model, the renormalization group model, and the Reynolds-stress model. The renormalization group k-epsilon model has broadened the range of applicability of two-equation turbulence models. The Reynolds-stress model has proved useful for strongly anisotropic flows such as those encountered in cyclones, swirlers, and combustors. Issues remain, such as near-wall closure, with all classes of models.

  6. Leadership Models.

    ERIC Educational Resources Information Center

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  7. SUMMA and Model Mimicry: Understanding Differences Among Land Models

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.

    2016-12-01

    Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. 
SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.

  8. Seven Modeling Perspectives on Teaching and Learning: Some Interrelations and Cognitive Effects

    ERIC Educational Resources Information Center

    Easley, J. A., Jr.

    1977-01-01

    The categories of models associated with the seven perspectives are designated as combinatorial models, sampling models, cybernetic models, game models, critical thinking models, ordinary language analysis models, and dynamic structural models. (DAG)

  9. Pursuing the method of multiple working hypotheses to understand differences in process-based snow models

    NASA Astrophysics Data System (ADS)

    Clark, Martyn; Essery, Richard

    2017-04-01

    When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently, the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a wide variety of process parameterizations and spatial configurations that are used in existing models.
Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.

  10. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper mainly studies the multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics system, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, the separation of roles, and even real-time remote synchronized modeling.

  11. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A. S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi:10.1016/j.jhydrol.2014.05.027.
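    The within-model/between-model variance split underlying this kind of BMA tree follows the law of total variance. A minimal one-level sketch is below; the weights, means, and variances are hypothetical values for illustration, not results from the groundwater study.

```python
# One level of a BMA-style variance decomposition (law of total variance).
# Given posterior model weights and each model's predictive mean and
# variance, split total predictive variance into within- and between-model
# parts. All numbers are hypothetical.

def bma_moments(weights, means, variances):
    """Return (averaged mean, within-model var, between-model var, total var)."""
    avg = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - avg) ** 2 for w, m in zip(weights, means))
    return avg, within, between, within + between

# Three candidate models with posterior probabilities summing to 1
avg, within, between, total = bma_moments(
    weights=[0.5, 0.3, 0.2],
    means=[10.0, 12.0, 8.0],
    variances=[1.0, 2.0, 1.5],
)
```

    The hierarchical method applies this split recursively, one level per uncertain model component, which is what allows each component's contribution to total uncertainty to be ranked.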

  12. The Influence of a Model's Reinforcement Contingency and Affective Response on Children's Perceptions of the Model

    ERIC Educational Resources Information Center

    Thelen, Mark H.; And Others

    1977-01-01

    Assesses the influence of model consequences on perceived model affect and, conversely, assesses the influence of model affect on perceived model consequences. Also appraises the influence of model consequences and model affect on perceived model attractiveness, perceived model competence, and perceived task attractiveness. (Author/RK)

  13. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address this issue. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, it was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when a single AI model is used.
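    The BIC-based weighting described above can be sketched as follows. The conversion w_k ∝ exp(-ΔBIC_k/2) is the standard BIC-to-probability approximation; the BIC values and conductivity estimates below are illustrative, not results from the Tasuj case study:

```python
import numpy as np

def bic_weights(bic):
    """Approximate posterior model probabilities from BIC values,
    using the standard conversion w_k proportional to exp(-dBIC_k / 2)."""
    bic = np.asarray(bic, float)
    delta = bic - bic.min()          # dBIC relative to the best (lowest-BIC) model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical BIC values for three AI models (e.g., TS-FL, ANN, NF)
w = bic_weights([10.0, 12.0, 20.0])
estimates = np.array([3.1, 2.4, 2.8])   # hypothetical K estimates (m/day)
k_bma = w @ estimates                   # BMA-weighted hydraulic conductivity
```

    Note how the third model, with a BIC ten units above the best, receives almost no weight even before averaging; this is how the parsimony principle can nearly discard a model such as NF.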

  14. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environments, making it easy for modelers to maintain and update them. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present an architecture for coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with the CSDMS (Community Surface Dynamics Modeling System) Basic Model Interface (BMI) as web services. The BMI-enabled models are self-describing, uncovering their metadata through BMI functions. After a BMI-enabled model is exposed as a service, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing the model interface using BMI, and provides a set of utilities that smooth the integration process (e.g., temporal interpolation). We modified the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of BMI-enabled web service models. Using the revised EMELI, an example is presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
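    The BMI functions mentioned above (initialize, update, get_value, and the metadata getters) are what make a component self-describing. A toy Python sketch of such a component, using a simplified subset of the CSDMS BMI specification and a made-up storage model:

```python
class ToyBmiModel:
    """Minimal sketch of a BMI-style self-describing component
    (a simplified subset of the CSDMS BMI specification)."""

    def initialize(self, config=None):
        self._time = 0.0
        self._storage = (config or {}).get("initial_storage", 10.0)

    def update(self):
        self._storage *= 0.9      # toy linear-reservoir drawdown per step
        self._time += 1.0

    def get_component_name(self):
        return "toy_linear_reservoir"

    def get_output_var_names(self):
        return ("reservoir__storage",)

    def get_var_units(self, name):
        return {"reservoir__storage": "m3"}[name]

    def get_value(self, name):
        return {"reservoir__storage": self._storage}[name]

    def get_current_time(self):
        return self._time

    def finalize(self):
        pass

model = ToyBmiModel()
model.initialize()
model.update()
model.update()
```

    A framework such as EMELI, or a web-service wrapper, can drive any such component through this same call sequence without knowing its internals, which is what makes loose coupling over the web feasible.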

  15. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    USGS Publications Warehouse

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive log-score results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance: retaining only structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
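    The predictive log score used to compare MLBMA against individual models is, for a Gaussian predictive distribution, the negative log-density of the observation; lower is better. A minimal sketch of the standard functional form (not the paper's code):

```python
import math

def log_score(obs, mean, var):
    """Negative Gaussian predictive log-density; lower means the model's
    predictive distribution explains the observation better."""
    return 0.5 * math.log(2 * math.pi * var) + (obs - mean) ** 2 / (2 * var)
```

    Averaged over held-out observations, this score rewards predictions that are both accurate and appropriately confident: an overconfident model (small variance, wrong mean) is penalized heavily by the squared-error term.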

  16. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if they were not designed for it. One example is implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may require months for a developer to learn the LIS and model software structures. Debugging and testing of the model implementation is also time-consuming when LIS or the model is not fully understood. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. A general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and the development load is reduced by about 80-90%. In this presentation, the automated model implementation approach is described along with the LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These models vary in complexity and software structure. We also describe how these complexities were overcome using this approach and present results of model benchmarks within LIS.
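    The template-driven generation described above can be illustrated with a small sketch: a model specification is expanded into a FORTRAN 90 wrapper subroutine from a code template. The specification fields and template here are hypothetical, not the toolkit's actual Excel/VBA schema:

```python
def generate_wrapper(spec):
    """Expand a (hypothetical) model specification dict into a
    FORTRAN 90 wrapper subroutine, mimicking template-based codegen."""
    ins, outs = spec["inputs"], spec["outputs"]
    # One declaration line per variable, dimensioned by the grid size n
    decl = "\n".join(f"  real, intent(in)  :: {v}(n)" for v in ins)
    decl += "\n" + "\n".join(f"  real, intent(out) :: {v}(n)" for v in outs)
    args = ", ".join(ins + outs)
    return (f"subroutine {spec['name']}_wrapper(n, {args})\n"
            f"  implicit none\n"
            f"  integer, intent(in) :: n\n"
            f"{decl}\n"
            f"  call {spec['name']}(n, {args})\n"
            f"end subroutine {spec['name']}_wrapper\n")

src = generate_wrapper({"name": "toymodel",
                        "inputs": ["precip"],
                        "outputs": ["runoff"]})
```

    Because the wrapper's shape is fixed by the general model interface, only the specification changes from model to model, which is what makes the implementation step automatable.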

  17. Literature review of models on tire-pavement interaction noise

    NASA Astrophysics Data System (ADS)

    Li, Tan; Burdisso, Ricardo; Sandu, Corina

    2018-04-01

    Tire-pavement interaction noise (TPIN) becomes dominant at speeds above 40 km/h for passenger vehicles and 70 km/h for trucks. Several models have been developed to describe and predict the TPIN. However, these models do not fully reveal the physical mechanisms or predict TPIN accurately. It is well known that all the models have both strengths and weaknesses, and different models fit different investigation purposes or conditions. The numerous papers that present these models are widely scattered among thousands of journals, and it is difficult to get the complete picture of the status of research in this area. This review article aims at presenting the history and current state of TPIN models systematically, making it easier to identify and distribute the key knowledge and opinions, and providing insight into the future research trend in this field. In this work, over 2000 references related to TPIN were collected, and 74 models were reviewed from nearly 200 selected references; these were categorized into deterministic models (37), statistical models (18), and hybrid models (19). The sections explaining the models are self-contained with key principles, equations, and illustrations included. The deterministic models were divided into three sub-categories: conventional physics models, finite element and boundary element models, and computational fluid dynamics models; the statistical models were divided into three sub-categories: traditional regression models, principal component analysis models, and fuzzy curve-fitting models; the hybrid models were divided into three sub-categories: tire-pavement interface models, mechanism separation models, and noise propagation models. At the end of each category of models, a summary table is presented to compare these models with the key information extracted. Readers may refer to these tables to find models of their interest. The strengths and weaknesses of the models in different categories were then analyzed. 
Finally, the modeling trend and future direction in this area are given.

  18. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.

  19. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their on-screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built into the modeling program, with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.

  20. Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis

    DTIC Science & Technology

    2017-02-01

    Working Paper: Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis. Paul K. Davis, RAND National Security Research... This paper proposes and illustrates an analysis-centric paradigm (model-game-model, or what might be better called model-exercise-model in some cases) for... to involve stakeholders in model development from the outset. The model-game-model paradigm was illustrated in an application to crisis planning.

  1. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations of model outputs from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Nevertheless, site history, analysis of model structure changes, and a more objective model calibration procedure should be included in further analyses.

  2. Conceptual and logical level of database modeling

    NASA Astrophysics Data System (ADS)

    Hunka, Frantisek; Matula, Jiri

    2016-06-01

    Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of a database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.

  3. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DOE PAGES

    King, Zachary A.; Lu, Justin; Drager, Andreas; ...

    2015-10-17

    In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.

  4. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    PubMed Central

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456

  5. Service-oriented model-encapsulation strategy for sharing and integrating heterogeneous geo-analysis models in an open web environment

    NASA Astrophysics Data System (ADS)

    Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian

    2016-04-01

    Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service onto another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models that reduces the problems encountered when sharing models between model providers and model users, and supports these tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program as model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.

  6. Object-oriented biomedical system modelling--the language.

    PubMed

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input, output and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way, complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.

  7. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…

  8. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.

  9. An empirical model to forecast solar wind velocity through statistical modeling

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Ridley, A. J.

    2013-12-01

    The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either historical observations, e.g. the persistence model, or instantaneous observations of the sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of 4 models often used in the literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. Comparing its performance against the 4 aforementioned models shows that the general persistence model outperforms the other 4 models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
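    The 'general persistence model' above is a least-squares linear combination of component forecasts. A minimal sketch with synthetic data (not WIND observations), where the true series is an exact weighted sum of two forecasts so the weights are recoverable:

```python
import numpy as np

def fit_combination(forecasts, observed):
    """Least-squares weights for linearly combining forecast series.
    `forecasts` is (n_samples, n_models); returns the weight vector."""
    X = np.asarray(forecasts, float)
    y = np.asarray(observed, float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def combined_forecast(forecasts, w):
    """Apply fitted weights to new forecasts from the component models."""
    return np.asarray(forecasts, float) @ w

# Two component forecasts over four time steps; the observation is an
# exact 0.7/0.3 blend, so lstsq should recover those weights.
f = [[1.0, 2.0], [2.0, 1.0], [3.0, 0.0], [4.0, 1.0]]
y = [0.7 * a + 0.3 * b for a, b in f]
w = fit_combination(f, y)
```

    In practice the weights would be fitted on a training period and held fixed for forecasting, with predictive skill measured by root mean square error on held-out data, as in the study.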

  10. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
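    Two of the most common criteria in these classes, AIC and BIC, differ only in how strongly they penalize complexity. A minimal sketch for a Gaussian error model with n observations, residual sum of squares sse, and k parameters (additive constants dropped, as is conventional):

```python
import math

def aic(n, sse, k):
    """Akaike information criterion (Gaussian errors, constants dropped):
    targets good predictive performance; penalty is 2k."""
    return n * math.log(sse / n) + 2 * k

def bic(n, sse, k):
    """Bayesian (Schwarz) criterion: targets high model probability;
    penalty k*ln(n) grows with sample size."""
    return n * math.log(sse / n) + k * math.log(n)
```

    Once n exceeds about 7 samples (ln n > 2), BIC's penalty exceeds AIC's, which is why the consistent, model-probability-oriented class tends to select simpler models than the predictive-error-oriented class for the same data.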

  11. Women's Endorsement of Models of Sexual Response: Correlates and Predictors.

    PubMed

    Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert

    2016-02-01

    Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.

  12. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  13. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.

  14. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    DOE PAGES

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive log score results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
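    The model-probability weights used in BMA-style averaging are commonly approximated from an information criterion (MLBMA typically uses a criterion such as KIC or BIC). A minimal Python sketch of this weighting, with invented criterion values and predictions (not the Naturita models):

```python
import numpy as np

def bma_weights(ic_values, priors=None):
    """Posterior model probabilities from information-criterion values,
    using the common exp(-delta/2) approximation."""
    ic = np.asarray(ic_values, dtype=float)
    if priors is None:
        priors = np.ones_like(ic) / ic.size       # uniform prior model probabilities
    delta = ic - ic.min()                         # rescale for numerical stability
    w = np.exp(-0.5 * delta) * priors
    return w / w.sum()

# three hypothetical alternative models: their criterion values and predictions
weights = bma_weights([100.2, 101.8, 107.5])
prediction = np.dot(weights, [2.1, 2.4, 3.0])     # model-averaged prediction
print(weights.round(3), round(float(prediction), 3))
```

Assigning smaller prior probabilities to structurally correlated models, as the study describes, would enter through the `priors` argument.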

  15. Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. 
O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
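    A first-order Takagi-Sugeno-Kang model computes its output as a firing-strength-weighted average of linear rule consequents. The following Python sketch is a generic one-input illustration with Gaussian membership functions; the rules and parameters are hypothetical, not those calibrated in the study:

```python
import numpy as np

def gaussmf(x, c, s):
    """Gaussian membership function centred at c with spread s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_output(x, rules):
    """First-order TSK inference: each rule is (centre, spread, a, b),
    meaning IF x is Gaussian(centre, spread) THEN y = a*x + b.
    Output is the firing-strength-weighted average of consequents."""
    w = np.array([gaussmf(x, c, s) for c, s, _, _ in rules])
    y = np.array([a * x + b for _, _, a, b in rules])
    return np.sum(w * y) / np.sum(w)

# two hypothetical rules, e.g. 'dry' vs 'wet' antecedent conditions
rules = [(0.0, 1.0, 0.1, 0.0),   # low input -> weak linear response
         (5.0, 1.5, 0.8, 1.0)]   # high input -> strong linear response
print(tsk_output(1.0, rules), tsk_output(5.0, rules))
```

The smooth blending between rules is what lets such models represent non-linear effects like soil-moisture dependence or seasonality with piecewise-linear local models.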

  16. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences--namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
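    The parallel (hybrid) account referenced above is often implemented as a weighted mixture of model-based and model-free action values, with the model-free values learned by temporal-difference updates. A schematic Python sketch of that structure, with arbitrary parameters rather than the authors' fitted model:

```python
import numpy as np

def td_update(q, state, action, reward, alpha=0.1):
    """Model-free temporal-difference update for a one-step problem."""
    q[state, action] += alpha * (reward - q[state, action])
    return q

def hybrid_value(q_mf, q_mb, w):
    """Weighted mixture of model-based and model-free action values,
    as in hybrid accounts of the two-stage task (w = model-based weight)."""
    return w * q_mb + (1 - w) * q_mf

q_mf = np.zeros((2, 2))                     # states x actions
rng = np.random.default_rng(1)
for _ in range(200):                        # learn that action 1 usually pays off
    r = float(rng.random() < 0.8)           # action 1 rewarded 80% of the time
    q_mf = td_update(q_mf, 0, 1, r)

q_mb = np.array([[0.2, 0.8], [0.5, 0.5]])   # values derived from a known environment model
print(hybrid_value(q_mf[0], q_mb[0], w=0.5))
```

The study's sequential alternative instead lets the model-based information modulate the model-free update itself (e.g. by weighting the eligibility trace) rather than mixing two separately learned value tables.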

  17. Airborne Wireless Communication Modeling and Analysis with MATLAB

    DTIC Science & Technology

    2014-03-27

    This research develops a physical layer model that combines antenna modeling using computational electromagnetics and the two-ray propagation model to predict the received signal strength. The antenna is modeled with triangular patches and analyzed by extending the antenna modeling algorithm by Sergey…

  18. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    ERIC Educational Resources Information Center

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  19. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.

    PubMed

    Jenness, Samuel M; Goodreau, Steven M; Morris, Martina

    2018-04-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel , designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel , designed to facilitate the exploration of novel research questions for advanced modelers.

  20. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks

    PubMed Central

    Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina

    2018-01-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699
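    EpiModel itself is an R package whose network models rest on temporal exponential random graph models. As a language-neutral illustration of the underlying idea only, here is a minimal discrete-time stochastic SIR process on a fixed (not dynamic) contact network, written in Python with an invented ring network:

```python
import random

def sir_on_network(adj, beta, gamma, seed_node, steps, rng):
    """Discrete-time stochastic SIR on a fixed contact network.
    adj: adjacency list; beta: per-contact infection probability per step;
    gamma: recovery probability per step."""
    state = {v: 'S' for v in adj}
    state[seed_node] = 'I'
    for _ in range(steps):
        new_state = dict(state)
        for v, s in state.items():
            if s == 'I':
                for u in adj[v]:                     # transmission along edges
                    if state[u] == 'S' and rng.random() < beta:
                        new_state[u] = 'I'
                if rng.random() < gamma:             # recovery
                    new_state[v] = 'R'
        state = new_state
    return state

rng = random.Random(42)
n = 50
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # hypothetical ring network
final = sir_on_network(adj, beta=0.5, gamma=0.1, seed_node=0, steps=100, rng=rng)
print(sum(s != 'S' for s in final.values()), 'ever infected')
```

EpiModel's contribution is precisely what this sketch omits: contact networks that form and dissolve over time, estimated from empirical data.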

  1. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. Using an implemented example, it is illustrated how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  2. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    PubMed

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  3. The applicability of turbulence models to aerodynamic and propulsion flowfields at McDonnell-Douglas Aerospace

    NASA Technical Reports Server (NTRS)

    Kral, Linda D.; Ladd, John A.; Mani, Mori

    1995-01-01

    The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log layer and free shear layers. Also, the Spalart-Allmaras model is not grid dependent like the Baldwin-Barth model. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon model are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take from two to five times the CPU time. Future directions are in further benchmarking the Menter blended k-omega/k-epsilon model and in algorithmic improvements to reduce the CPU time of the two-equation models.

  4. The determination of third order linear models from a seventh order nonlinear jet engine model

    NASA Technical Reports Server (NTRS)

    Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex

    1989-01-01

    Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
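    The second method described, identifying a low-order linear model directly from I/O data by recursive least squares, can be sketched as a (batch) ARX fit. This Python example uses a simple first-order 'truth' system in place of the turbojet model; note that an over-parameterized fit reproduces the I/O behavior even though its individual coefficients need not be uniquely determined:

```python
import numpy as np

def fit_arx(u, y, na=3, nb=3):
    """Identify a linear ARX model
    y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] by least squares on I/O data."""
    k0 = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(k0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[k0:], rcond=None)
    return theta[:na], theta[na:]

# hypothetical I/O data from the 'high-order' system (here a known first-order lag)
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1] + 0.01 * rng.normal()

a, b = fit_arx(u, y, na=3, nb=3)   # third-order model from I/O data
print(a.round(3), b.round(3))
```

As in the paper, the identified model would then be judged by its frequency response or prediction error rather than by its coefficient values.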

  5. BioModels: expanding horizons to include more modelling approaches and formats

    PubMed Central

    Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi

    2018-01-01

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614

  6. Modelling, teachers' views on the nature of modelling, and implications for the education of modellers

    NASA Astrophysics Data System (ADS)

    Justi, Rosária S.; Gilbert, John K.

    2002-04-01

    In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.

  7. Modelling land use change with generalized linear models--a multi-model analysis of change between 1860 and 2000 in Gallatin Valley, Montana.

    PubMed

    Aspinall, Richard

    2004-08-01

    This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time which presents challenges for validation and testing of empirical models. 
The research demonstrates (i) model selection as a mechanism for rating among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed based on hypothesised relationships based on consideration of underlying and proximate causes of change, and (iv) that models are not invariant over time.

  8. Investigation of prospective teachers' knowledge and understanding of models and modeling and their attitudes towards the use of models in science education

    NASA Astrophysics Data System (ADS)

    Aktan, Mustafa B.

    The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. 
Implications for the development of science teaching and teacher education programs are discussed. Directions for future research are suggested. Overall, based on the results, I suggest that prospective science teachers should engage in more modeling activities through their preparation programs, gain more modeling experience, and collaborate with their colleagues to better understand and implement scientific models in science teaching.

  9. Validation of Groundwater Models: Meaningful or Meaningless?

    NASA Astrophysics Data System (ADS)

    Konikow, L. F.

    2003-12-01

    Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.

  10. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

    A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including:
    * occurrence or occupancy models for estimating species distribution
    * abundance models based on many sampling protocols, including distance sampling
    * capture-recapture models with individual effects
    * spatial capture-recapture models based on camera trapping and related methods
    * population and metapopulation dynamic models
    * models of biodiversity, community structure and dynamics

  11. Using the Model Coupling Toolkit to couple earth system models

    USGS Publications Warehouse

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
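    The coupling pattern described, with each model advancing independently and exchanging fields at predetermined synchronization intervals, can be caricatured in a few lines. This Python sketch uses two invented scalar-field 'models' and a simple relaxation exchange; it illustrates only the synchronization structure, not MCT's distributed-memory transfer or sparse-matrix regridding:

```python
import numpy as np

def run_coupled(steps, sync_every):
    """Two toy 'models' advance independently and exchange coupling
    fields every sync_every steps, mimicking a sync-interval exchange."""
    ocean = np.zeros(4)          # e.g. a sea-surface temperature field
    atmos = np.full(4, 10.0)     # e.g. a near-surface air temperature field
    for step in range(steps):
        ocean += 0.1             # each model integrates its own 'physics'
        atmos -= 0.05
        if step % sync_every == sync_every - 1:
            # exchange step: relax each model toward the other's field
            flux = 0.2 * (atmos - ocean)
            ocean += flux
            atmos -= flux
    return ocean, atmos

ocean, atmos = run_coupled(steps=100, sync_every=10)
print(ocean.mean().round(2), atmos.mean().round(2))
```

In a real MCT-based system the two components would run on separate processor sets, with the toolkit handling the field transfer and interpolation between grids.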

  12. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  13. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.

  14. Premium analysis for copula model: A case study for Malaysian motor insurance claims

    NASA Astrophysics Data System (ADS)

    Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah

    2014-06-01

    This study performs premium analysis for copula models with regression marginals. For illustration purposes, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from the Archimedean and Elliptical families, and marginal distributions of Gamma and Inverse Gaussian regression models. The simulated results from the independent model, which is obtained from fitting regression models separately to each claim category, and the dependent model, which is obtained from fitting copula models to all claim categories, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under this model most closely approximate the actual claims experience relative to the other copula models.

  15. Utilizing Biological Models to Determine the Recruitment of the IRA by Modeling the Voting Behavior of Sinn Fein

    DTIC Science & Technology

    2006-03-01

    After evaluating the strengths and weaknesses of sociological and biological models, the thesis applies a biological model, the Lotka-Volterra predator-prey model, to a highly suggestive case study: that of the Irish Republican Army and the voting behavior of Sinn Féin. (Report keywords: Irish Republican Army, Sinn Féin, Lotka-Volterra predator-prey model, recruitment, British Army.)
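For readers unfamiliar with the model named above, here is a minimal numerical sketch of the Lotka-Volterra predator-prey equations; the coefficients are illustrative textbook values, not estimates for the IRA/Sinn Féin case:

```python
# Lotka-Volterra predator-prey system, integrated numerically.
# Coefficients are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, alpha, beta, delta, gamma):
    """Prey x, predator y: dx/dt = alpha*x - beta*x*y, dy/dt = delta*x*y - gamma*y."""
    x, y = z
    return [alpha * x - beta * x * y, delta * x * y - gamma * y]

# Equilibrium for these coefficients is (gamma/delta, alpha/beta) = (20, 10).
sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 5.0],
                args=(1.0, 0.1, 0.075, 1.5), max_step=0.01)
x, y = sol.y
print(x.min() > 0 and y.min() > 0)   # both populations cycle without dying out
```

The characteristic feature exploited by such applications is the coupled oscillation: growth of one population feeds, with a lag, the growth of the other.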

  16. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  17. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved.
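The model-comparison logic can be shown with a deliberately simplified sketch: two nested mean structures for longitudinal data compared by AIC. Random effects and the full 3-step procedure are omitted, and the data are synthetic, so this only illustrates the "right-sizing" idea, not the authors' method:

```python
# Toy "right-sizing": compare a no-growth mean structure against a linear
# slope-intercept structure on simulated longitudinal data.
import numpy as np

rng = np.random.default_rng(2)
n_kids, n_waves = 50, 4
t = np.tile(np.arange(n_waves), n_kids)        # occasions 0..3 per child
intercepts = rng.normal(10.0, 2.0, n_kids)     # child-specific starting level
y = np.repeat(intercepts, n_waves) + 1.5 * t + rng.normal(0, 1, n_kids * n_waves)

def gaussian_aic(X, y):
    """AIC of an OLS fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    return len(y) * np.log(sigma2) + 2 * (X.shape[1] + 1)

aic_flat = gaussian_aic(np.ones((len(y), 1)), y)                    # no growth
aic_linear = gaussian_aic(np.column_stack([np.ones_like(t), t]), y) # linear SI
print(aic_linear < aic_flat)   # growth structure wins when means really change
```

With real data the comparison would run in both directions: toward more complex models when fit is poor, and toward simpler ones when extra variance components buy nothing.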

  18. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
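Of the techniques named, GLUE is the most direct to sketch: sample the parameter space, keep "behavioral" runs above an efficiency threshold, and weight predictions by normalized likelihood. The toy recession model, synthetic observations, threshold, and parameter ranges below are arbitrary choices (which is precisely the subjectivity the criterion-based alternatives try to avoid):

```python
# GLUE-style sketch (after Beven & Binley 1992) on a toy recession model.
# Model, threshold, and ranges are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
obs = 2.0 * np.exp(-3.0 * t) + rng.normal(0.0, 0.05, t.size)  # synthetic data

# Monte Carlo sampling of the parameter space of a*exp(-k*t).
a_s = rng.uniform(0.5, 4.0, 5000)
k_s = rng.uniform(0.5, 6.0, 5000)
sims = a_s[:, None] * np.exp(-k_s[:, None] * t)

# Informal likelihood: Nash-Sutcliffe efficiency of each run.
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

behavioral = nse > 0.7                        # subjective behavioral threshold
w = nse[behavioral] / nse[behavioral].sum()   # normalized likelihood weights
pred = w @ sims[behavioral]                   # likelihood-weighted prediction
print(int(behavioral.sum()), round(float(np.abs(pred - obs).mean()), 3))
```

A criterion-based scheme (MLBMA, AICMA) would instead weight a small set of calibrated models by an information criterion; as the abstract notes, the two routes can rank models quite differently.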

  19. Examination of various turbulence models for application in liquid rocket thrust chambers

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1991-01-01

    There is a large variety of turbulence models available, including direct numerical simulation, large eddy simulation, Reynolds stress/flux models, zero-equation models, one-equation models, two-equation k-epsilon models, multiple-scale models, etc. Each turbulence model embodies different physical assumptions and requirements. Turbulence is characterized by randomness, irregularity, diffusivity and dissipation. The capabilities of the turbulence models, including their physical strengths, weaknesses and limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs; the full Reynolds stress model is recommended. In a workshop specifically called to assess turbulence models for applications in liquid rocket thrust chambers, most of the experts present also favored the Reynolds stress model.

  20. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  2. [The reliability of dento-maxillary models created by cone-beam CT and rapid prototyping:a comparative study].

    PubMed

    Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua

    2012-04-01

    To analyze the reliability of dento-maxillary models created by cone-beam CT and rapid prototyping (RP). Plaster models were obtained from 20 orthodontic patients who had also been scanned by cone-beam CT, and 3-D models were reconstructed by software calculation. Computerized composite models (RP models) were then produced by the rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths were measured on each plaster model, 3-D model and RP model, followed by statistical analysis with the SPSS 17.0 software package. Crown widths, dental arch lengths and crowding differed significantly (P<0.05) among the 3 models, whereas dental arch widths did not. Measurements on 3-D models were significantly smaller than those on the other two models (P<0.05). Compared with the 3-D models, more of the RP-model measurements showed no significant difference from the plaster models (P>0.05). The regression coefficients among the three models were significant (P<0.01), ranging from 0.8 to 0.9, and the coefficient between RP and plaster models was larger than that between 3-D and plaster models. There is high consistency among the 3 models, and the remaining differences are clinically acceptable. It is therefore possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.

  3. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. 
If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
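To make the interface idea concrete, here is a minimal sketch of a model component exposing BMI-style control and description functions. The class, its toy linear-reservoir physics, and the variable names (which only imitate the pattern of the CSDMS Standard Names) are invented for illustration; the real BMI specifies a larger, precisely defined function set:

```python
# Toy component with BMI-style control (initialize/update/finalize) and
# self-description functions. Names and physics are illustrative assumptions.
import numpy as np

class LinearReservoir:
    """Toy hydrologic component: dS/dt = P - k*S, advanced by explicit Euler."""

    # --- control functions ---
    def initialize(self, k=0.1, dt=1.0):
        self.k, self.dt, self.time = k, dt, 0.0
        self.storage = np.zeros(1)
        self.precip = np.zeros(1)

    def update(self):
        self.storage += self.dt * (self.precip - self.k * self.storage)
        self.time += self.dt

    def finalize(self):
        self.storage = self.precip = None

    # --- description functions a coupling framework can query ---
    def get_input_var_names(self):
        return ("atmosphere_water__precipitation_volume_flux",)

    def get_output_var_names(self):
        return ("land_surface_water__runoff_volume_flux",)

    def get_time_step(self):
        return self.dt

    def get_value(self, name):
        return self.k * self.storage       # outflow = k*S

    def set_value(self, name, value):
        self.precip[:] = value

# A "framework" drives the component only through its interface.
m = LinearReservoir()
m.initialize()
m.set_value("atmosphere_water__precipitation_volume_flux", 5.0)
for _ in range(100):
    m.update()
outflow = float(m.get_value("land_surface_water__runoff_volume_flux")[0])
print(round(outflow, 3))
```

Because the caller never touches the component's internals, another component with the same interface could be swapped in, and a framework can match the declared output names of one component against the input names of another.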

  4. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
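The within-model (parametric) versus between-model (conceptual) split described above can be sketched in a few lines. The three "models", their weights, and the Monte Carlo draws below are synthetic stand-ins, not DVRFS results:

```python
# Decompose total predictive variance into within-model (parametric) and
# between-model (conceptual) parts. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
weights = np.array([0.5, 0.3, 0.2])          # model-averaging weights
# Monte Carlo draws of a prediction (e.g. hydraulic head) under each model.
draws = [rng.normal(100.0, 1.0, 2000),       # model 1
         rng.normal(104.0, 1.5, 2000),       # model 2
         rng.normal(97.0, 0.8, 2000)]        # model 3

means = np.array([d.mean() for d in draws])
variances = np.array([d.var() for d in draws])

avg_mean = weights @ means
within = weights @ variances                  # parametric uncertainty
between = weights @ (means - avg_mean) ** 2   # conceptual (model) uncertainty
total = within + between
print(round(between / total, 2))              # share due to model uncertainty
```

In this synthetic example the between-model term dominates, mirroring the paper's finding that conceptual uncertainty can outweigh parametric uncertainty.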

  5. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
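The dimensional-consistency checking mentioned above can be sketched with a toy unit representation (exponent maps over base dimensions). This is an invented illustration of the kind of check such background knowledge enables, not SIGMA's actual mechanism:

```python
# Toy dimensional-consistency check: units as exponent maps over SI base
# dimensions. Representation is an illustrative assumption.
M = {"m": 1}                    # metre
S = {"s": 1}                    # second
M_PER_S = {"m": 1, "s": -1}     # metres per second

def multiply(u1, u2):
    """Multiply two quantities' units by adding exponents."""
    out = dict(u1)
    for dim, exp in u2.items():
        out[dim] = out.get(dim, 0) + exp
    return {dim: exp for dim, exp in out.items() if exp != 0}

def check_assign(lhs, rhs, name):
    """Raise if an assignment's left- and right-hand units disagree."""
    if lhs != rhs:
        raise ValueError(f"dimension error in '{name}': {lhs} != {rhs}")

check_assign(M, multiply(M_PER_S, S), "distance = velocity * time")  # passes
print("units consistent")
# check_assign(M, M_PER_S, "distance = velocity") would raise, catching
# the coding error before the model runs.
```

The point is that once a model's variables carry declared units, whole classes of implementation errors become mechanically detectable.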

  6. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  7. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  8. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    PubMed

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
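A toy sketch of the underlying idea of detecting changes between two releases of a model encoding follows; BiVeS works far more carefully at the XML-tree level, and the snippets below are invented, not the actual Repressilator encoding:

```python
# Count line-level additions/removals between two releases of a model file.
# Both "releases" are invented examples, not real BioModels content.
import difflib

v1 = """<model id="repressilator">
  <species id="LacI" initial="10"/>
  <species id="TetR" initial="5"/>
</model>""".splitlines()

v2 = """<model id="repressilator" name="Elowitz2000">
  <species id="LacI" initial="10"/>
  <species id="TetR" initial="8"/>
  <species id="cI" initial="0"/>
</model>""".splitlines()

added = removed = 0
for line in difflib.unified_diff(v1, v2, lineterm=""):
    if line.startswith("+") and not line.startswith("+++"):
        added += 1
    elif line.startswith("-") and not line.startswith("---"):
        removed += 1
print(added, removed)   # changed encoding lines between the two releases
```

Even this crude count shows why small edits matter: a changed initial value and an added species are textually tiny but can alter the model's interpretation entirely, which is why provenance tracking is worth the effort.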

  9. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is very important for the application of distributed hydrological models, and poses challenges that include the effect of the model's spatial resolution on performance and accuracy. To study this resolution effect, the distributed hydrological model (the Liuxihe model) was built at several resolutions (1000 m, 600 m, 500 m, 400 m and 200 m grid cells) to find the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. The terrain data (digital elevation model, DEM), soil type and land use type were downloaded from freely available websites. The model parameters are optimized with an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists in physically derived model parameters. The best spatial resolution for flood simulation and forecasting was found to be the 200 m grid, and model performance and accuracy worsen as the spatial resolution coarsens. At the 1000 m resolution the simulation and forecasting results are the worst, and the river channel network delineated at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed: the suggested threshold for modeling the Liujiang River basin flood is a 500 m grid cell, but a 200 m grid cell is recommended to keep the model at its best performance.
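The resolution effect can be illustrated by block-aggregating a fine grid to a coarser one, which smooths exactly the terrain variability that drives flow routing. The DEM below is synthetic and the Liuxihe model itself is not reproduced here:

```python
# Block-average a fine synthetic DEM to a coarser grid and observe how
# relief detail is lost. Purely illustrative; not the Liuxihe model.
import numpy as np

rng = np.random.default_rng(5)
fine = rng.normal(200.0, 30.0, (200, 200))   # synthetic "200 m" elevation grid

def coarsen(dem, factor):
    """Aggregate a DEM by block means (e.g. 200 m -> 1000 m for factor 5)."""
    h = (dem.shape[0] // factor) * factor
    w = (dem.shape[1] // factor) * factor
    d = dem[:h, :w]
    return d.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

coarse = coarsen(fine, 5)                    # "1000 m" grid
print(coarse.shape, round(float(fine.std()), 1), round(float(coarse.std()), 1))
```

The sharp drop in elevation variability at the coarse grid hints at why channel networks delineated from coarse DEMs can diverge from the real drainage pattern.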

  10. Computational Models for Calcium-Mediated Astrocyte Functions.

    PubMed

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. 
Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.

  11. Computational Models for Calcium-Mediated Astrocyte Functions

    PubMed Central

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. 
Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes. PMID:29670517

  12. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM). I: Model intercomparison with current land use

    USGS Publications Warehouse

    Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.

    2009-01-01

    This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the way for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for these types of models.
Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations. © 2008 Elsevier Ltd. All rights reserved.
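The Nash-Sutcliffe efficiency used to score the models above is simple to compute; a minimal implementation with invented discharge values:

```python
# Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the
# observed mean. Discharge values are invented for illustration.
import numpy as np

def nse(obs, sim):
    """NSE = 1 - SSE/SST over an observed/simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([3.0, 5.0, 9.0, 6.0, 4.0])    # observed discharge (m^3/s)
good = np.array([3.2, 4.8, 8.7, 6.1, 4.2])   # close simulation
poor = np.full(5, obs.mean())                # mean-only "model" scores 0
print(round(nse(obs, good), 2), round(nse(obs, poor), 2))
```

Against this yardstick, the 0.53-0.92 range reported for the ten LUCHEM models spans everything from barely-better-than-the-mean to quite close simulations.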

  13. Benchmarking test of empirical root water uptake models

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman

    2017-01-01

    Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution predicted under varying environmental conditions by numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at reproducing the RWU patterns of the physical model, and the statistical indices point to them as the best alternatives for mimicking the RWU predictions of the physical model.
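
    The transpiration reduction function shared by these Feddes-type models is a standard piecewise-linear stress factor of the soil pressure head. A minimal sketch in Python, with illustrative threshold values rather than those used in the study:

    ```python
    def feddes_alpha(h, h1=-0.1, h2=-0.25, h3=-5.0, h4=-80.0):
        """Piecewise-linear Feddes water-stress reduction factor.

        h is the soil pressure head in metres (more negative = drier);
        the thresholds h1..h4 are illustrative, not the study's values.
        """
        if h >= h1 or h <= h4:
            return 0.0                   # too wet or too dry: no uptake
        if h >= h2:
            return (h1 - h) / (h1 - h2)  # ramp up from the wet side
        if h >= h3:
            return 1.0                   # optimal range
        return (h - h4) / (h3 - h4)      # ramp down under drought stress
    ```

    Depth-dependent RWU is then typically the product of this factor, a root-distribution weighting, and the potential transpiration rate.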

  14. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper addresses important components of uncertainty in modeling water temperatures and discusses several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  15. Energy modeling. Volume 2: Inventory and details of state energy models

    NASA Astrophysics Data System (ADS)

    Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.

    1981-05-01

    An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes, such as the supply or demand of energy or of certain types of energy, emergency management of energy, and energy economics. Ten models are described. The purpose, use, and history of each model are discussed, and information is given on its outputs, inputs, and mathematical structure. The models include five dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.

  16. Advances in Geoscience Modeling: Smart Modeling Frameworks, Self-Describing Models and the Role of Standardized Metadata

    NASA Astrophysics Data System (ADS)

    Peckham, Scott

    2016-04-01

    Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. 
regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.
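
    The control and description functions described above can be illustrated with a toy model. This sketch is modeled loosely on the BMI pattern (initialize/update/finalize plus variable queries); the method names and dynamics are illustrative, not the official CSDMS BMI specification:

    ```python
    class MinimalBMI:
        """Toy model exposing BMI-style control and description functions."""

        def initialize(self, config=None):
            # set up state; a real model would read a config file here
            self.time = 0.0
            self.dt = 1.0
            self.state = {"water__depth": 0.0}

        def update(self):
            # advance state variables by one time step (trivial dynamics)
            self.state["water__depth"] += 0.5 * self.dt
            self.time += self.dt

        def finalize(self):
            self.state = None

        # description functions let a caller discover the model's variables
        def get_output_var_names(self):
            return list(self.state)

        def get_value(self, name):
            return self.state[name]

        def get_current_time(self):
            return self.time


    # a framework-like caller needs no model-specific knowledge:
    model = MinimalBMI()
    model.initialize()
    while model.get_current_time() < 4.0:
        model.update()
    ```

    Because the caller touches only the standardized interface, a framework can swap in any conforming component and couple its outputs to other models by name.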

  17. [A review on research of land surface water and heat fluxes].

    PubMed

    Sun, Rui; Liu, Changming

    2003-03-01

    Many field experiments have been conducted, and soil-vegetation-atmosphere transfer (SVAT) models have been established, to estimate land surface heat fluxes. In this paper, the progress of experimental research on land surface water and heat fluxes is reviewed, and three kinds of SVAT models (single-layer, two-layer and multi-layer) are analyzed. Remote sensing data are widely used to estimate land surface heat fluxes. Based on remote sensing and the energy balance equation, different models, such as the simplified model, single-layer model, extra resistance model, crop water stress index model and two-source resistance model, have been developed to estimate land surface heat fluxes and evapotranspiration. These models are also analyzed in this paper.
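
    A simple example of the surface energy balance underlying such models is the Bowen-ratio closure of Rn - G = H + LE. A sketch in Python with invented flux values:

    ```python
    def bowen_partition(rn, g, bowen_ratio):
        """Split available energy (Rn - G) into sensible (H) and latent (LE)
        heat flux using the Bowen ratio beta = H / LE, so Rn - G = H + LE.
        Fluxes in W m-2; the sample values below are invented.
        """
        le = (rn - g) / (1.0 + bowen_ratio)
        return bowen_ratio * le, le      # (H, LE)

    h, le = bowen_partition(rn=500.0, g=50.0, bowen_ratio=0.5)
    ```

    The latent heat flux LE divided by the latent heat of vaporization gives the evapotranspiration rate.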

  18. Examination of simplified travel demand model. [Internal volume forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.L. Jr.; McFarlane, W.J.

    1978-01-01

    A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correcting the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.

  19. MPTinR: analysis of multinomial processing tree models in R.

    PubMed

    Singmann, Henrik; Kellen, David

    2013-06-01

    We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
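
    MPT models of the kind MPTinR fits express category probabilities as polynomial functions of latent process parameters. A minimal illustration in Python (not using MPTinR itself) fits the classic one-high-threshold recognition model by maximum likelihood over a parameter grid; the data counts are invented:

    ```python
    import math

    def othm_loglik(d, g, hits, misses, fas, crs):
        """Log-likelihood of the one-high-threshold model, a classic
        two-parameter MPT: P(hit) = d + (1 - d) * g, P(false alarm) = g."""
        p_hit = d + (1.0 - d) * g
        return (hits * math.log(p_hit) + misses * math.log(1.0 - p_hit)
                + fas * math.log(g) + crs * math.log(1.0 - g))

    def fit_grid(hits, misses, fas, crs, steps=200):
        """Crude maximum-likelihood fit over a parameter grid."""
        best_params, best_ll = None, -math.inf
        for i in range(1, steps):
            for j in range(1, steps):
                d, g = i / steps, j / steps
                ll = othm_loglik(d, g, hits, misses, fas, crs)
                if ll > best_ll:
                    best_params, best_ll = (d, g), ll
        return best_params, best_ll

    # invented recognition-memory counts: 70/100 hits, 20/100 false alarms
    params, ll = fit_grid(hits=70, misses=30, fas=20, crs=80)
    ```

    For this model the maximum-likelihood solution is also available in closed form (g equals the false-alarm rate, and d is recovered from the hit rate), which makes the grid fit easy to check.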

  20. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
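
    The core idea of a log-linear mixture, marginalizing a latent component inside each class score, can be sketched as follows. The weights and features below are invented for illustration; this is not the paper's trained model:

    ```python
    import math

    def softmax(scores):
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def mixture_posterior(x, weights):
        """Class posterior of a log-linear mixture model: for each class c,
        the latent component k is marginalized inside the model, giving
        score_c = log sum_k exp(w[c][k] . x)."""
        scores = []
        for comps in weights:
            dots = [sum(wi * xi for wi, xi in zip(w, x)) for w in comps]
            m = max(dots)
            scores.append(m + math.log(sum(math.exp(v - m) for v in dots)))
        return softmax(scores)

    # two classes, two latent components each, two features
    W = [[[2.0, 0.0], [0.0, 0.0]],   # class 0
         [[0.0, 1.0], [0.0, 0.0]]]   # class 1
    p = mixture_posterior([1.0, 0.0], W)
    ```

    Training such a model alternates between assigning responsibility to latent components and updating the log-linear weights, which is the alternating optimization the abstract refers to.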

  1. Understanding and Predicting Urban Propagation Losses

    DTIC Science & Technology

    2009-09-01

    Only table-of-contents fragments are available for this record; the models listed include the extended (COST) Hata model, a modified Hata model, an urban Hata model, and the Walfisch-Ikegami model, applied in several urban propagation scenarios.
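
    For reference, the widely published Okumura-Hata urban median path-loss formula, which the extended and modified variants above build on, can be coded directly. This sketch uses the small/medium-city mobile-antenna correction term:

    ```python
    import math

    def hata_urban_loss(f_mhz, hb_m, hm_m, d_km):
        """Okumura-Hata median path loss (dB) for a small/medium urban cell.

        Standard published form, valid roughly for 150-1500 MHz, base
        antenna 30-200 m, mobile antenna 1-10 m, distance 1-20 km.
        """
        lf = math.log10(f_mhz)
        # mobile-antenna-height correction for a small/medium city
        a_hm = (1.1 * lf - 0.7) * hm_m - (1.56 * lf - 0.8)
        return (69.55 + 26.16 * lf - 13.82 * math.log10(hb_m) - a_hm
                + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))

    # e.g. 900 MHz, 30 m base station, 1.5 m mobile, 2 km separation
    loss_db = hata_urban_loss(900.0, 30.0, 1.5, 2.0)
    ```

    The COST extension shifts the constants to cover 1500-2000 MHz, and modified variants adjust the correction terms for particular environments.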

  2. A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service

    PubMed Central

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016

  3. A framework for sharing and integrating remote sensing and GIS models based on Web service.

    PubMed

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.

  4. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been predominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments, in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
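
    The three classical heuristic rules the paper extends can be stated compactly; the thresholds and attribute orderings here are illustrative:

    ```python
    def conjunctive(attrs, thresholds):
        """Accept an option only if every attribute meets its threshold."""
        return all(a >= t for a, t in zip(attrs, thresholds))

    def disjunctive(attrs, thresholds):
        """Accept an option if any attribute meets its threshold."""
        return any(a >= t for a, t in zip(attrs, thresholds))

    def lexicographic(options, importance_order):
        """Pick the option best on the most important attribute,
        breaking ties with successively less important ones."""
        key = lambda i: tuple(options[i][j] for j in importance_order)
        return max(range(len(options)), key=key)
    ```

    The paper's extension makes the thresholds heterogeneous across decision makers rather than fixed constants as above.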

  5. The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.

    Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e, MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.

  6. Semi-automated Modular Program Constructor for physiological modeling: Building cell and organ models.

    PubMed

    Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B

    2015-01-01

    The Modular Program Constructor (MPC) is an open-source, Java-based modeling utility, built upon JSim's Mathematical Modeling Language (MML) (http://www.physiome.org/jsim/), that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.

  7. Comparison of dark energy models after Planck 2015

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Yao; Zhang, Xin

    2016-11-01

    We make a comparison for ten typical, popular dark energy models according to their capabilities of fitting the current observational data. The observational data we use in this work include the JLA sample of type Ia supernovae observation, the Planck 2015 distance priors of cosmic microwave background observation, the baryon acoustic oscillations measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison, we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis results show that, according to the capability of explaining observations, the cosmological constant model is still the best one among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the α dark energy model are worse than the cosmological constant model, but still are good models compared to others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevallier-Polarski-Linder model can still fit the current observations well, but from the standpoint of model economy they are not as good. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.
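
    The Akaike and Bayesian information criteria used for such comparisons penalize the maximized log-likelihood by the number of free parameters; a minimal sketch with invented fit results:

    ```python
    import math

    def aic(loglik, k):
        """Akaike information criterion: 2k - 2 ln L (lower is better)."""
        return 2.0 * k - 2.0 * loglik

    def bic(loglik, k, n):
        """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
        return k * math.log(n) - 2.0 * loglik

    # invented fits: model B gains little likelihood for two extra parameters
    aic_a, aic_b = aic(-100.0, 2), aic(-99.5, 4)
    ```

    Here model B fits slightly better in raw likelihood but is penalized for its extra parameters, so the simpler model A is preferred, mirroring how ΛCDM wins against higher-dimensional alternatives in the abstract above.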

  8. Parametric regression model for survival data: Weibull regression model as an example

    PubMed Central

    2016-01-01

    The Weibull regression model is one of the most popular forms of parametric regression model, in that it provides an estimate of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared to the semi-parametric proportional hazards model. To make clinical investigators familiar with the Weibull regression model, this article introduces some basic knowledge of it and then illustrates how to fit the model with the R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by a categorical variable. The eha package provides an alternative method to fit the Weibull regression model. The check.dist() function helps to assess the goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualization of the Weibull regression model after model development is worthwhile, as it provides another way to report the findings. PMID:28149846
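
    The quantities the article works with, the Weibull survival and hazard functions and the ETR-HR relationship that holds specifically for the Weibull model, can be written out directly (a sketch in Python rather than the article's R code):

    ```python
    import math

    def weibull_survival(t, shape, scale):
        """S(t) = exp(-(t/scale)**shape)."""
        return math.exp(-((t / scale) ** shape))

    def weibull_hazard(t, shape, scale):
        """h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
        return (shape / scale) * (t / scale) ** (shape - 1.0)

    def etr_to_hr(etr, shape):
        """The Weibull model is the only one that is both accelerated-
        failure-time and proportional-hazards, so an event time ratio
        maps to a hazard ratio as HR = ETR**(-shape)."""
        return etr ** (-shape)
    ```

    For example, a covariate that doubles event times (ETR = 2) halves the hazard when the shape parameter is 1 (the exponential special case).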

  9. Inner Magnetosphere Modeling at the CCMC: Ring Current, Radiation Belt and Magnetic Field Mapping

    NASA Astrophysics Data System (ADS)

    Rastaetter, L.; Mendoza, A. M.; Chulaki, A.; Kuznetsova, M. M.; Zheng, Y.

    2013-12-01

    Modeling of the inner magnetosphere has entered center stage with the launch of the Van Allen Probes (RBSP) in 2012. The Community Coordinated Modeling Center (CCMC) has drastically improved its offerings of inner magnetosphere models that cover energetic particles in the Earth's ring current and radiation belts. Models added to the CCMC include the stand-alone Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model by M.C. Fok, the Rice Convection Model (RCM) by R. Wolf and S. Sazykin and numerous versions of the Tsyganenko magnetic field model (T89, T96, T01quiet, TS05). These models join the LANL* model by Y. Yu that was offered for instant run earlier in the year. In addition to these stand-alone models, the Comprehensive Ring Current Model (CRCM) by M.C. Fok and N. Buzulukova joined as a component of the Space Weather Modeling Framework (SWMF) in the magnetosphere model run-on-request category. We present modeling results of the ring current and radiation belt models and demonstrate tracking of satellites such as RBSP. Calculations using the magnetic field models include mappings to the magnetic equator or to minimum-B positions and the determination of foot points in the ionosphere.
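
    Under a pure dipole assumption, the field-line mapping described above reduces to the relation r = L cos²λ, so the ionospheric foot-point latitude follows directly from the equatorial crossing distance L. A sketch (the CCMC models, of course, use far more realistic field models such as Tsyganenko's):

    ```python
    import math

    def dipole_footpoint_latitude(l_shell):
        """Foot-point (invariant) magnetic latitude in degrees of a dipole
        field line with equatorial crossing distance L in Earth radii,
        from r = L * cos(lambda)**2 evaluated at r = 1."""
        return math.degrees(math.acos(math.sqrt(1.0 / l_shell)))
    ```

    For example, a field line crossing the equator at L = 4 reaches the ground near 60 degrees magnetic latitude.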

  10. A diversity index for model space selection in the estimation of benchmark and infectious doses via model averaging.

    PubMed

    Kim, Steven B; Kodell, Ralph L; Moon, Hojin

    2014-03-01

    In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
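
    A common way to weight models in MA is by Akaike weights derived from each model's AIC; a minimal sketch (the paper's diversity index itself is not reproduced here):

    ```python
    import math

    def akaike_weights(aics):
        """Relative likelihoods exp(-0.5 * delta_AIC), normalized to sum to 1."""
        best = min(aics)
        rel = [math.exp(-0.5 * (a - best)) for a in aics]
        total = sum(rel)
        return [r / total for r in rel]

    def model_averaged(estimates, aics):
        """AIC-weighted average of per-model point estimates."""
        return sum(w * e for w, e in zip(akaike_weights(aics), estimates))
    ```

    Two models with equal AIC share the weight equally, while a model trailing by 10 AIC units contributes almost nothing, which is why simply enlarging the model space with poor fits does not address model uncertainty.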

  11. Standard fire behavior fuel models: a comprehensive set for use with Rothermel's surface fire spread model

    Treesearch

    Joe H. Scott; Robert E. Burgan

    2005-01-01

    This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.

  12. [Parameters modification and evaluation of two evapotranspiration models based on Penman-Monteith model for summer maize].

    PubMed

    Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing

    2017-06-18

    The dynamic variations of evapotranspiration (ET) and weather data during the summer maize growing seasons of 2013-2015 were monitored with an eddy covariance system, and the applicability of two operational models (the FAO-PM model and the KP-PM model) based on the Penman-Monteith model was analyzed. Firstly, the key parameters in the two models were calibrated with the measured data from 2013 and 2014; secondly, the daily ET in 2015 calculated by the FAO-PM model and the KP-PM model was compared to the observed ET. Finally, the coefficients in the KP-PM model were further revised with coefficients calculated for the different growth stages, and the performance of the revised KP-PM model was evaluated. The statistical parameters indicated that the daily ET for 2015 calculated by the FAO-PM model was closer to the observed ET than that by the KP-PM model, while the daily ET calculated from the revised KP-PM model was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so calibration is necessary before using the models to predict ET. The above results could provide some guidelines for predicting ET with the two models.
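
    The FAO-PM model referred to above is presumably the FAO-56 form of the Penman-Monteith reference evapotranspiration equation, which can be transcribed directly (inputs assumed already derived from the weather data; the sample values below are invented):

    ```python
    def fao56_et0(delta, rn, g, gamma, t_mean, u2, es, ea):
        """FAO-56 reference evapotranspiration ET0 (mm/day).

        delta: slope of the vapour-pressure curve (kPa/degC); rn, g: net
        radiation and soil heat flux (MJ m-2 day-1); gamma: psychrometric
        constant (kPa/degC); t_mean: mean air temperature (degC); u2: wind
        speed at 2 m (m/s); es, ea: saturation and actual vapour pressure (kPa).
        """
        num = (0.408 * delta * (rn - g)
               + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea))
        den = delta + gamma * (1.0 + 0.34 * u2)
        return num / den

    et0 = fao56_et0(delta=0.2, rn=12.0, g=0.0, gamma=0.066,
                    t_mean=20.0, u2=2.0, es=2.0, ea=1.0)
    ```

    Crop ET is then obtained by scaling ET0 with a crop coefficient, which is the kind of key parameter the study calibrates against the eddy covariance data.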

  13. Implementation of Dryden Continuous Turbulence Model into Simulink for LSA-02 Flight Test Simulation

    NASA Astrophysics Data System (ADS)

    Ichwanul Hakim, Teuku Mohd; Arifianto, Ony

    2018-04-01

    Turbulence is small-scale air movement in the atmosphere caused by instabilities in the pressure and temperature distribution. A turbulence model is integrated into a flight mechanical model as an atmospheric disturbance. Common turbulence models used in flight mechanical models are the Dryden and von Karman models. In this study, only the Dryden continuous turbulence model was implemented, following the military specification MIL-HDBK-1797. The model was implemented in Matlab Simulink and will be integrated with a flight mechanical model to observe the response of the aircraft when it flies through a turbulence field. The turbulence model is characterized by multiplying filters generated from the power spectral density with a band-limited Gaussian white noise input. To ensure that the model provides good results, it was verified by comparing the implemented model with the similar model provided in the Aerospace Blockset. The results show some differences in the two linear velocities (vg and wg) and the three angular rates (pg, qg and rg), caused by a different determination of the turbulence scale length in the Aerospace Blockset. After adjusting the turbulence scale length in the implemented model, both models produce similar outputs.
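
    The filtered-white-noise construction described above can be sketched for the longitudinal gust component as a first-order digital filter. The coefficients below are one common discretization of that idea; the parameter values are illustrative, not the MIL-HDBK-1797 tabulated ones:

    ```python
    import math
    import random

    def dryden_u(sigma, scale_l, airspeed, dt, n, seed=0):
        """Longitudinal Dryden-like gust series: Gaussian white noise
        passed through a first-order low-pass shaping filter.

        Valid for airspeed * dt / scale_l << 1; the gains are chosen so
        the stationary variance is close to sigma**2.
        """
        rng = random.Random(seed)
        eps = airspeed * dt / scale_l     # time step over correlation time L/V
        a = 1.0 - eps                     # pole of the discrete filter
        b = sigma * math.sqrt(2.0 * eps)  # noise gain setting the variance
        u, series = 0.0, []
        for _ in range(n):
            u = a * u + b * rng.gauss(0.0, 1.0)
            series.append(u)
        return series

    gust = dryden_u(sigma=1.0, scale_l=250.0, airspeed=25.0, dt=0.5, n=20000)
    ```

    The vertical and lateral components use second-order shaping filters in the specification, but the same noise-plus-filter structure applies.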

  14. THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability

    PubMed Central

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.

    2017-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125

  15. THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.

    PubMed

    Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R

    2016-07-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  16. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    NASA Technical Reports Server (NTRS)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.

    2016-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  17. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    DOE PAGES

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...

    2016-08-22

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.

  18. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.
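
    The interoperability contract described in these abstracts can be illustrated with a minimal sketch. The class and method names below are invented for illustration and are not the actual ESMF/NUOPC API; the point is only that when every component exposes the same initialize/run/finalize phases, a generic driver can couple components without knowing their internals.

```python
# Illustrative sketch of a NUOPC-style component contract (invented names,
# not the real ESMF/NUOPC API): every component exposes the same phases,
# so a generic driver can couple arbitrary components through shared state.

class Component:
    """Base class defining the technical contract all components follow."""
    def initialize(self, state): ...
    def run(self, state): ...
    def finalize(self, state): ...

class ToyAtmosphere(Component):
    def initialize(self, state):
        state["air_temperature"] = 288.0          # K, initial condition
    def run(self, state):
        # relax the air temperature toward the sea surface temperature
        state["air_temperature"] += 0.1 * (
            state["sea_surface_temperature"] - state["air_temperature"])
    def finalize(self, state):
        pass

class ToyOcean(Component):
    def initialize(self, state):
        state["sea_surface_temperature"] = 285.0  # K, initial condition
    def run(self, state):
        # relax the sea surface temperature toward the air temperature
        state["sea_surface_temperature"] += 0.05 * (
            state["air_temperature"] - state["sea_surface_temperature"])
    def finalize(self, state):
        pass

def drive(components, n_steps):
    """Generic driver: couples any components honoring the contract."""
    state = {}
    for c in components:
        c.initialize(state)
    for _ in range(n_steps):
        for c in components:
            c.run(state)
    for c in components:
        c.finalize(state)
    return state

state = drive([ToyOcean(), ToyAtmosphere()], n_steps=100)
```

    Because the driver depends only on the shared phase contract, swapping one toy component for another requires no change to the coupling code, which is the behavior the NUOPC conventions guarantee for the real models.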

  19. An ontology for component-based models of water resource systems

    NASA Astrophysics Data System (ADS)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
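
    One practical service such an ontology can provide, namely checking that two components are sensible to couple, can be sketched with plain data structures. The model names, metadata keys, and units below are invented for illustration; the actual WRC ontology expresses these concepts and relationships in OWL.

```python
# Illustrative sketch of ontology-assisted coupling checks.  The catalog
# entries and metadata keys are invented; the WRC ontology itself encodes
# such concepts and relationships in OWL rather than Python dicts.

catalog = {
    "RunoffModel": {
        "domain": "land_surface",
        "outputs": {"streamflow": "m3/s"},
        "inputs": {"precipitation": "mm/day"},
    },
    "RoutingModel": {
        "domain": "river_network",
        "outputs": {"routed_flow": "m3/s"},
        "inputs": {"streamflow": "m3/s"},
    },
    "CropModel": {
        "domain": "agriculture",
        "outputs": {"yield": "kg/ha"},
        "inputs": {"soil_moisture": "mm"},
    },
}

def can_couple(upstream, downstream):
    """A coupling is plausible if some output of `upstream` matches an
    input of `downstream` in both variable name and declared units."""
    up, down = catalog[upstream], catalog[downstream]
    return any(var in down["inputs"] and down["inputs"][var] == unit
               for var, unit in up["outputs"].items())

def find_couplings(model):
    """Identify catalog models that can consume this model's outputs."""
    return [m for m in catalog if m != model and can_couple(model, m)]
```

    This mirrors two of the stated uses of the ontology: identifying relevant models and encouraging proper model coupling by making mismatched variables or units detectable before any code is linked.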

  20. Novel forecasting approaches using combination of machine learning and statistical models for flood susceptibility mapping.

    PubMed

    Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah

    2018-07-01

    In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic curve (AUROC) belonged to boosted regression trees (0.975) and the lowest was recorded for the generalized linear model (0.642). The proposed EMmedian, on the other hand, resulted in the highest accuracy (0.976) among all models. Despite the outstanding performance of some individual models, the variability among their predictions was considerable. Therefore, to reduce uncertainty and to create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
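
    The EMmedian ensemble described above is simply the per-site median of the individual models' susceptibility scores, evaluated here with AUROC. A minimal sketch (with invented toy scores and a rank-based AUROC, not the study's data) makes the mechanics concrete:

```python
import statistics

def em_median(model_scores):
    """EMmedian-style ensemble: per-site median of individual model scores.
    `model_scores` is a list of score lists, one list per model."""
    return [statistics.median(site) for site in zip(*model_scores)]

def auroc(scores, labels):
    """Rank-based AUROC: the probability that a random positive site
    outranks a random negative one (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: three models scoring five sites (labels: 1 = flooded).
labels = [1, 1, 0, 0, 0]
models = [
    [0.9, 0.6, 0.7, 0.2, 0.1],   # this model confuses site 3
    [0.8, 0.7, 0.3, 0.4, 0.2],
    [0.7, 0.8, 0.2, 0.3, 0.1],
]
ensemble = em_median(models)
```

    In this toy case the median discards the one model's error at site 3, so the ensemble separates flooded from non-flooded sites better than the individual model that made the error, which is the stabilizing effect the abstract reports.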

  1. Exploring Several Methods of Groundwater Model Selection

    NASA Astrophysics Data System (ADS)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) ranking the models by their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculating model probability using the GLUE method, (3) evaluating model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluating model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
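
    The information criteria named in approach (3) can be computed directly from the calibration residuals in their standard least-squares forms. The sketch below shows AIC, AICc, and BIC only; KIC additionally requires the Fisher information matrix of the calibrated parameters and is omitted. The residual series and parameter counts are invented for illustration.

```python
import math

def information_criteria(residuals, k):
    """AIC, AICc and BIC in their least-squares forms, computed from
    calibration residuals and the number of parameters k.  (KIC would
    additionally need the Fisher information matrix, omitted here.)"""
    n = len(residuals)
    sse = sum(r * r for r in residuals)
    aic = n * math.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * math.log(sse / n) + k * math.log(n)
    return {"AIC": aic, "AICc": aicc, "BIC": bic}

# Two hypothetical calibrated models with identical fit: every criterion
# penalizes the one carrying more parameters.
resid = [((-1) ** i) * 0.1 * (1 + (i % 5)) for i in range(40)]
model_a = information_criteria(resid, k=6)
model_b = information_criteria(resid, k=15)
```

    With equal residuals, the lower-dimensional model scores better on all three criteria, which is exactly the guard against over-parameterization the abstract describes.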

  2. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective approach for optimizing surfactant-enhanced aquifer remediation (SEAR) strategies for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is key to such studies. However, previous studies have generally relied on a stand-alone surrogate model, and have rarely tried to improve the surrogate's approximation accuracy by combining multiple methods. In this regard, we present set pair analysis (SPA) as a new method for building an ensemble surrogate (ES) model, and conducted a comparative study to select the better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
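
    The first ensemble-surrogate pattern, combining the three surrogates' predictions with performance-based weights, can be sketched generically. The study derives its weights via set pair analysis; the sketch substitutes a simpler inverse-error weighting, and the validation errors and predictions are invented toy numbers.

```python
def skill_weights(errors):
    """Weights inversely proportional to each surrogate's validation error,
    normalized to sum to one.  (A simple stand-in for the set pair weights
    the study actually derives.)"""
    inv = [1.0 / e for e in errors]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble_predict(predictions, weights):
    """Weighted average of the individual surrogates' predictions,
    computed point by point."""
    return [sum(w * p for w, p in zip(weights, site))
            for site in zip(*predictions)]

# Hypothetical validation RMSEs for the RBFANN, SVR and Kriging surrogates:
weights = skill_weights([0.08, 0.06, 0.02])   # Kriging is most accurate
preds = [
    [10.2, 20.5],   # RBFANN predictions at two test points
    [10.0, 20.1],   # SVR
    [9.9, 19.8],    # Kriging
]
combined = ensemble_predict(preds, weights)
```

    The ensemble output is pulled toward the most accurate surrogate while still drawing on the others, which is the accuracy-improving effect the comparative study exploits.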

  3. Models Archive and ModelWeb at NSSDC

    NASA Astrophysics Data System (ADS)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need by the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSISE-90) model, the International Geomagnetic Reference Field (IGRF), and the AP-8/AE-8 models for radiation belt protons and electrons. User accesses to both systems have been steadily increasing over the last years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 to the models archive.

  4. Towards methodical modelling: Differences between the structure and output dynamics of multiple conceptual models

    NASA Astrophysics Data System (ADS)

    Knoben, Wouter; Woods, Ross; Freer, Jim

    2016-04-01

    Conceptual hydrologic models represent a catchment's spatial and temporal dynamics through an arrangement of stores, fluxes and transformation functions, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, having model structures that are relatively easy to reconfigure, and having relatively low input data demands. This makes them well suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model, and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists, and there is no clear method for selecting appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
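
    The three basic elements identified above, a store, a flux, and a transformation function, can be made concrete with the simplest possible conceptual model: a single linear reservoir. This is an illustrative sketch with invented forcing, not one of the reviewed models.

```python
def linear_reservoir(precip, k=0.2, s0=0.0):
    """Single-store conceptual model: one store S, one input flux
    (rainfall), and one transformation function Q = k * S that drains
    the store each time step."""
    s, flows = s0, []
    for p in precip:
        s += p            # input flux fills the store
        q = k * s         # transformation: outflow proportional to storage
        s -= q            # output flux empties the store
        flows.append(q)
    return flows, s

# Toy rainfall series (mm per time step):
rain = [10.0, 0.0, 0.0, 5.0, 0.0]
flow, storage = linear_reservoir(rain)
# Mass balance: total rainfall equals total outflow plus final storage.
```

    More elaborate conceptual structures are, as the abstract argues, just additional stores of this kind linked by further fluxes and transformation functions; the exponential recession between rain events here is one such "inner working" that alternative formulations would change.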

  5. Synthesizing models useful for ecohydrology and ecohydraulic approaches: An emphasis on integrating models to address complex research questions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert

    Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalize heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio-economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user-friendliness, and excellent user-support. Forty-one of the 43 reviewed models were linked to at least one other model, especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user-support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user-friendly forms, increasing user-support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.

  6. Synthesizing models useful for ecohydrology and ecohydraulic approaches: An emphasis on integrating models to address complex research questions

    USGS Publications Warehouse

    Brewer, Shannon K.; Worthington, Thomas; Mollenhauer, Robert; Stewart, David; McManamay, Ryan; Guertault, Lucie; Moore, Desiree

    2018-01-01

    Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalize heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio-economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user-friendliness, and excellent user-support. Forty-one of the 43 reviewed models were linked to at least one other model, especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user-support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user-friendly forms, increasing user-support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines.
Nonetheless, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.

  7. How does a three-dimensional continuum muscle model affect the kinematics and muscle strains of a finite element neck model compared to a discrete muscle model in rear-end, frontal, and lateral impacts.

    PubMed

    Hedenstierna, Sofia; Halldin, Peter

    2008-04-15

    A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics was compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.

  8. Synthesizing models useful for ecohydrology and ecohydraulic approaches: An emphasis on integrating models to address complex research questions

    DOE PAGES

    Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert; ...

    2018-04-06

    Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalize heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio-economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user-friendliness, and excellent user-support. Forty-one of the 43 reviewed models were linked to at least one other model, especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user-support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user-friendly forms, increasing user-support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.

  9. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment

    PubMed Central

    2014-01-01

    Background: Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results: MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions: Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID: 24731387

  10. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment.

    PubMed

    Cao, Renzhi; Wang, Zheng; Cheng, Jianlin

    2014-04-15

    Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
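
    The clustering idea behind MULTICOM-REFINE, scoring each model by its average structural similarity to the rest of the pool, can be sketched with a deliberately simplified similarity measure. The toy "structures" below are 1-D coordinate lists and the similarity is the fraction of positions within a cutoff; real assessors use superposition-based scores such as GDT-TS.

```python
def similarity(a, b, cutoff=2.0):
    """Toy structural similarity: fraction of corresponding 1-D residue
    coordinates within `cutoff` of each other.  (A stand-in for
    superposition-based scores such as GDT-TS.)"""
    close = sum(abs(x - y) <= cutoff for x, y in zip(a, b))
    return close / len(a)

def global_quality(pool):
    """Pairwise (clustering) quality assessment: each model is scored by
    its average similarity to every other model in the pool."""
    return [sum(similarity(m, other) for j, other in enumerate(pool) if j != i)
            / (len(pool) - 1)
            for i, m in enumerate(pool)]

# Toy pool: three mutually similar models and one outlier.
pool = [
    [0.0, 1.0, 2.0, 3.0],
    [0.5, 1.5, 2.5, 3.5],
    [0.2, 0.9, 2.1, 3.2],
    [9.0, 9.0, 9.0, 9.0],   # outlier / low-quality model
]
scores = global_quality(pool)
```

    The outlier receives the lowest score because it resembles nothing else in the pool, which also illustrates the reported weakness: when most of the pool is bad, consensus similarity rewards the wrong models, and single-model methods become preferable.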

  11. Replicating Health Economic Models: Firm Foundations or a House of Cards?

    PubMed

    Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee

    2017-11-01

    Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, the replication of an existing model may require substantially less effort than developing a new de novo model by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines. 
To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.

  12. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    NASA Astrophysics Data System (ADS)

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

    Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model with heteroscedastic error variance as well as from a hydrologic model that exhibits a different structure than that of the candidate models (i.e., the "abcd" or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained for each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across the models, whereas MM-O always assigns higher weights to the candidate model that performs best during the calibration period.
Applying the multimodel algorithms to predict streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
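The optimal-combination idea (MM-O) can be illustrated with a least-squares weight for two candidate predictions. This is a generic sketch, not the authors' actual algorithm; the function names and the clipping of the weight to [0, 1] are assumptions:

```python
def optimal_weight(obs, p1, p2):
    """Least-squares weight w for combining two model predictions as
    w*p1 + (1-w)*p2.  Minimizing sum((obs - p2 - w*(p1 - p2))**2)
    over w gives the closed form below; w is clipped to [0, 1]."""
    num = sum((a - b) * (o - b) for o, a, b in zip(obs, p1, p2))
    den = sum((a - b) ** 2 for a, b in zip(p1, p2))
    w = num / den if den else 0.5
    return min(1.0, max(0.0, w))

def combine(obs, p1, p2):
    """Multimodel prediction using the calibrated weight."""
    w = optimal_weight(obs, p1, p2)
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]
```

In a scheme like MM-1, the weight would additionally vary with the predictor state rather than being a single calibration-period constant.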

  13. Comparing the cognitive differences resulting from modeling instruction: Using computer microworld and physical object instruction to model real world problems

    NASA Astrophysics Data System (ADS)

    Oursland, Mark David

This study compared the modeling achievement of students receiving mathematical modeling instruction using the computer microworld, Interactive Physics, and students receiving instruction using physical objects. Modeling instruction included activities where students applied the (a) linear model to a variety of situations, (b) linear model to two-rate situations with a constant rate, and (c) quadratic model to familiar geometric figures. Both quantitative and qualitative methods were used to analyze achievement differences between students (a) receiving different methods of modeling instruction, (b) with different levels of beginning modeling ability, or (c) with different levels of computer literacy. Student achievement was analyzed quantitatively through a three-factor analysis of variance where modeling instruction, beginning modeling ability, and computer literacy were used as the three independent factors. The SOLO (Structure of the Observed Learning Outcome) assessment framework was used to design written modeling assessment instruments to measure the students' modeling achievement. The same three independent factors were used to collect and analyze the interviews and observations of student behaviors. Both methods of modeling instruction used the data analysis approach to mathematical modeling. The instructional lessons presented problem situations where students were asked to collect data, analyze the data, write a symbolic mathematical equation, and use the equation to solve the problem. The researcher recommends the following practices for modeling instruction based on the conclusions of this study. A variety of activities with a common structure are needed to make explicit the modeling process of applying a standard mathematical model. The modeling process is influenced strongly by prior knowledge of the problem context and previous modeling experiences.
The conclusions of this study imply that knowledge of the properties of squares improved the students' ability to model a geometric problem more than instruction in data analysis modeling did. The use of computer microworlds such as Interactive Physics in conjunction with cooperative groups is a viable method of modeling instruction.

  14. A physical data model for fields and agents

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek

    2016-04-01

    Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. 
Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.

  15. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  16. Modeling Information Accumulation in Psychological Tests Using Item Response Times

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2015-01-01

    In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…

  17. Climate and atmospheric modeling studies

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.

  18. Models in Science Education: Applications of Models in Learning and Teaching Science

    ERIC Educational Resources Information Center

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  19. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.

  20. Vector models and generalized SYK models

    DOE PAGES

    Peng, Cheng

    2017-05-23

    Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  1. Validation of the PVSyst Performance Model for the Concentrix CPV Technology

    NASA Astrophysics Data System (ADS)

    Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault

    2011-12-01

    The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.

  2. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406

  3. A comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh

    1993-01-01

    A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).

  4. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  5. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  6. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.

  7. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.

  8. Analysis of terahertz dielectric properties of pork tissue

    NASA Astrophysics Data System (ADS)

    Huang, Yuqing; Xie, Qiaoling; Sun, Ping

    2017-10-01

Since about 70% of fresh biological tissue is water, many scientists try to use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are the Debye model, the Double Debye model and the Cole-Cole model. This work aims to determine a suitable model by comparing the three models above with experimental data from fresh pork tissue. By means of the least squares method, the parameters of the different models are fitted to the experimental data. Comparing the fitted dielectric functions, the Cole-Cole model is verified to describe the pork tissue experiments best. The correction factor α of the Cole-Cole model is an important modification for biological tissues, so the Cole-Cole model should be the preferred choice for describing the dielectric properties of biological tissues in the terahertz range.
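The two single-relaxation models compared above can be sketched directly from their standard formulas; setting the correction factor α = 0 in the Cole-Cole expression recovers the Debye model. The parameter values in the test are illustrative water-like numbers, not the fitted values from this study:

```python
def debye(omega, eps_inf, eps_s, tau):
    """Debye relaxation: eps(omega) = eps_inf + (eps_s - eps_inf) / (1 + i*omega*tau)."""
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

def cole_cole(omega, eps_inf, eps_s, tau, alpha):
    """Cole-Cole model: the exponent (1 - alpha) broadens the relaxation
    around tau; alpha = 0 reduces exactly to the Debye model."""
    return eps_inf + (eps_s - eps_inf) / (1 + (1j * omega * tau) ** (1 - alpha))
```

Fitting α, τ, and the permittivity limits to measured complex permittivity (e.g. by least squares, as in the abstract) then decides which model describes the tissue best.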

  9. Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model

    NASA Astrophysics Data System (ADS)

    Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus

    2017-12-01

The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s model in a mathematical model. Voice-of-customer data for the QFD were collected using a questionnaire developed based on Kano’s model. An operational research methodology was then applied to build the objective function and constraints of the mathematical model. The relationship between the voice of customer and the engineering characteristics was modelled using linear regression. The output of the mathematical model is the detailed specification of the engineering characteristics. The objective function of this model is to maximize satisfaction while also minimizing dissatisfaction. The result of this model is 62%. The major contribution of this research is implementing the existing mathematical model integrating QFD and Kano’s model in the case study of a shoe cabinet.

  10. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2017-06-01

The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models.
The necessity of using such models may change the nature of mathematical modeling in science and, thus, the nature of science, as it happened in the case of Q-models, which not only led to a revolutionary transformation of physics but also opened new possibilities for scientific thinking and mathematical modeling beyond physics.

  11. Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.

    2017-12-01

Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of VE models to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models to CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and rock matrix are modeled with 3D models, with the hybrid VE-3D model having significantly reduced computational cost.
In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.

  12. ATMOSPHERIC DISPERSAL AND DEPOSITION OF TEPHRA FROM A POTENTIAL VOLCANIC ERUPTION AT YUCCA MOUNTAIN, NEVADA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Harrington

    2004-10-25

The purpose of this model report is to provide documentation of the conceptual and mathematical model (Ashplume) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. These aspects of volcanism-related dose calculation are described in the context of the entire igneous disruptive events conceptual model in ''Characterize Framework for Igneous Activity'' (BSC 2004 [DIRS 169989], Section 6.1.1). The Ashplume conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The Ashplume mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report update the previous documentation of the Ashplume mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model. In this report, ''Ashplume'' is used when referring to the atmospheric dispersal model and ''ASHPLUME'' is used when referencing the code of that model. Two analysis and model reports provide direct inputs to this model report, namely ''Characterize Eruptive Processes at Yucca Mountain, Nevada'' and ''Number of Waste Packages Hit by Igneous Intrusion''.
This model report provides direct inputs to the TSPA, which uses the ASHPLUME software described and used in this model report. Thus, ASHPLUME software inputs are inputs to this model report for ASHPLUME runs in this model report. However, ASHPLUME software inputs are outputs of this model report for ASHPLUME runs by TSPA.

  13. Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.

    PubMed

    Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong

    2007-09-01

Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity of estimating this kind of model as well as the problem of "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than that of the BNN model.
In addition, the data fitting performance of the BPNN model is consistently worse than the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives.

  14. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with a Bayesian joint probability approach under alternative error models to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of bias, variance and autocorrelation parameters for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. Judged by probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters.
Furthermore, the results of the hierarchical error model show that most of the model parameters are not seasonally variant, except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
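The CRPS used above to score probabilistic predictions has a simple estimator for a finite ensemble of forecast samples: the mean absolute error of the members minus half the mean pairwise spread. This is a standard generic estimator, not the authors' implementation:

```python
def crps_ensemble(members, obs):
    """CRPS estimator for a finite forecast ensemble:
    mean |x_i - obs| - 0.5 * mean |x_i - x_j| over all pairs (i, j).
    For a single-member ensemble this reduces to the absolute error."""
    n = len(members)
    term1 = sum(abs(x - obs) for x in members) / n
    term2 = sum(abs(xi - xj) for xi in members for xj in members) / (2 * n * n)
    return term1 - term2
```

Lower CRPS is better; the second term rewards forecasts whose spread honestly reflects their uncertainty, which is why CRPS discriminates between the error models above where a plain mean-error score would not.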

  15. [Suitability of four stomatal conductance models in agro-pastoral ecotone in North China: A case study for potato and oil sunflower.

    PubMed

    Huang, Ming Xia; Wang, Jing; Tang, Jian Zhao; Yu, Qiang; Zhang, Jun; Xue, Qing Yu; Chang, Qing; Tan, Mei Xiu

    2016-11-18

The suitability of four popular empirical and semi-empirical stomatal conductance models (Jarvis model, Ball-Berry model, Leuning model and Medlyn model) was evaluated based on parallel observation data of leaf stomatal conductance, leaf net photosynthetic rate and meteorological factors during the vigorous growing period of potato and oil sunflower at Wuchuan experimental station in the agro-pastoral ecotone in North China. It was found that there was a significant linear relationship between leaf stomatal conductance and leaf net photosynthetic rate for potato, whereas the linear relationship appeared weaker for oil sunflower. The results of model evaluation showed that the Ball-Berry model performed best in simulating leaf stomatal conductance of potato, followed by the Leuning model and Medlyn model, while the Jarvis model ranked last. The root-mean-square error (RMSE) was 0.0331, 0.0371, 0.0456 and 0.0794 mol·m⁻²·s⁻¹, the normalized root-mean-square error (NRMSE) was 26.8%, 30.0%, 36.9% and 64.3%, and R-squared (R²) was 0.96, 0.61, 0.91 and 0.88 between simulated and observed leaf stomatal conductance of potato for the Ball-Berry model, Leuning model, Medlyn model and Jarvis model, respectively. For leaf stomatal conductance of oil sunflower, the Jarvis model performed slightly better than the Leuning model, Ball-Berry model and Medlyn model. RMSE was 0.2221, 0.2534, 0.2547 and 0.2758 mol·m⁻²·s⁻¹, NRMSE was 40.3%, 46.0%, 46.2% and 50.1%, and R² was 0.38, 0.22, 0.23 and 0.20 between simulated and observed leaf stomatal conductance of oil sunflower for the Jarvis model, Leuning model, Ball-Berry model and Medlyn model, respectively. A path analysis was conducted to identify the effects of specific meteorological factors on leaf stomatal conductance. The diurnal variation of leaf stomatal conductance was principally affected by the vapour pressure saturation deficit for both potato and oil sunflower.
The model evaluation suggested that the stomatal conductance models for oil sunflower are to be improved in further research.
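The error metrics reported above can be reproduced from paired series of observed and simulated conductance. A minimal sketch, using invented conductance values rather than the study's data:

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated values."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nrmse(obs, sim):
    """RMSE normalized by the mean of the observations, as a percentage."""
    return 100.0 * rmse(obs, sim) / (sum(obs) / len(obs))

def r_squared(obs, sim):
    """Coefficient of determination between observed and simulated values."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Invented (not measured) conductance values, mol·m⁻²·s⁻¹:
observed  = [0.10, 0.15, 0.22, 0.30, 0.25]
simulated = [0.12, 0.14, 0.20, 0.28, 0.27]
```

Note that NRMSE depends on the normalizing constant (here the observed mean); reported NRMSE values are only comparable when the same normalization is used.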

  16. Evaluation of chiller modeling approaches and their usability for fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya

Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression air conditioning units, commonly known as chillers. Three models were studied: two based on first principles and one empirical. The first-principles models are the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model. The DOE-2 chiller model, as implemented in CoolTools™, was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The CoolTools package contains a library of calibrated DOE-2 curves for a variety of chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available.
The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
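Calibrating a model that is linear in its parameters, as noted above for the Gordon-Ng model, reduces to ordinary least squares. The sketch below uses synthetic data, not real chiller measurements; the transformation that produces the regressor matrix from measured temperatures and loads is model-specific and assumed away here:

```python
import numpy as np

# Synthetic design matrix: each row holds regressors derived from measured
# chiller conditions (the exact transformation depends on the model form).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20),
                     rng.uniform(5, 15, 20),
                     rng.uniform(0.2, 1.0, 20)])
true_beta = np.array([1.5, -0.2, 3.0])   # "unknown" parameters to recover
y = X @ true_beta                        # noise-free transformed response

# Ordinary least squares: closed-form, robust, and the residuals it yields
# support estimating the uncertainty in the fitted parameter values.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noise-free data the fit recovers the generating parameters exactly; with real measurements the residuals quantify model-data mismatch.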

  17. PyMT: A Python package for model-coupling in the Earth sciences

    NASA Astrophysics Data System (ADS)

    Hutton, E.

    2016-12-01

The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. However, the heterogeneity in model type acts to inhibit model coupling, model development, or even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors becomes an obstruction to scientists wanting to couple, extend, or simply run existing models. For scientists whose main focus may not be computer science, these barriers loom even larger and become significant logistical hurdles, and this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System (CSDMS) uses for the coupling of models that expose the Basic Model Interface (BMI). It contains: (1) tools necessary for coupling models of disparate time and space scales, including grid mappers; (2) time-steppers that coordinate the sequencing of coupled models; (3) exchange of data between BMI-enabled models; (4) wrappers that automatically load BMI-enabled models into the PyMT framework; (5) utilities that support open-source interfaces (UGRID, SGRID, CSDMS Standard Names, etc.); (6) a collection of community-submitted models, written in a variety of programming languages and from a variety of process domains, all usable from within the Python programming language; and (7) a plug-in framework for adding additional BMI-enabled models to the framework.
In this presentation we introduce the basics of PyMT and provide an example of coupling models of different domains and grid types.
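The time-stepping coordination described above can be illustrated with a toy BMI-style component; the class and method names below are illustrative stand-ins, not the actual PyMT or BMI API:

```python
class ToyBMIModel:
    """Minimal sketch of a BMI-style component (illustrative names only)."""

    def __init__(self, name, dt):
        self.name = name
        self.dt = dt          # this component's own time step
        self.time = 0.0
        self.value = 0.0

    def initialize(self):
        self.time = 0.0
        self.value = 0.0

    def update(self):
        """Advance one time step (a stand-in for the real physics)."""
        self.time += self.dt
        self.value += self.dt

    def get_current_time(self):
        return self.time


def run_coupled(models, end_time):
    """Advance heterogeneous components to end_time by always stepping the
    component that lags furthest behind, as a coupling framework must."""
    for m in models:
        m.initialize()
    while min(m.get_current_time() for m in models) < end_time:
        laggard = min(models, key=lambda m: m.get_current_time())
        laggard.update()


a = ToyBMIModel("fast", dt=1.0)
b = ToyBMIModel("slow", dt=2.5)
run_coupled([a, b], end_time=5.0)
```

The point of the sketch is the sequencing rule: components with different native time steps stay synchronized because the framework, not the components, decides who advances next.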

  18. Can the super model (SUMO) method improve hydrological simulations? Exploratory tests with the GR hydrological models

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2017-04-01

Errors made by hydrological models may stem from problems in parameter estimation, uncertainty in observed measurements, numerical problems, and from the model conceptualization that simplifies reality. Here we focus on this last issue of hydrological modeling. One solution to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and variability of existing hydrological models. In particular, because different models are not equally good in all situations, multimodel approaches can improve the robustness of modeled outputs. Traditionally, multimodel methods in hydrology are based on the output of the models (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO; van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of one or more other models. This correction is made bilaterally between the different models. The ensemble of models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will first be tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References: van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1):161-177.
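The bilateral correction of internal variables can be sketched with two toy single-reservoir members; the nudging coefficients below are arbitrary illustrative values, whereas in the SUMO method proper they are learned from data:

```python
def sumo_step(states, params, rain, coupling):
    """One synchronized step of a two-member 'super model' (illustrative).

    Each member is a single leaky reservoir, S' = S + rain - k*S; the
    SUMO-style term nudges each member's internal state toward the other's
    at every time step."""
    s1, s2 = states
    k1, k2 = params
    c12, c21 = coupling  # connection coefficients (learned in the real method)
    new_s1 = s1 + rain - k1 * s1 + c12 * (s2 - s1)
    new_s2 = s2 + rain - k2 * s2 + c21 * (s1 - s2)
    return new_s1, new_s2

states = (10.0, 0.0)   # members start from very different internal states
for _ in range(5):
    states = sumo_step(states, params=(0.1, 0.2), rain=1.0,
                       coupling=(0.3, 0.3))
```

With nonzero coupling the two members' storages converge over the simulation, which is the sense in which the exchange is "continuous" rather than a one-off output combination.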

  19. Downscaling GISS ModelE Boreal Summer Climate over Africa

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2° latitude by 2.5° longitude and the RM3 grid spacing is 0.44°. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE computed sea-surface temperatures (SST) in the eastern South Atlantic Ocean are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent; it also eliminates the ModelE double ITCZ over the Atlantic and produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements in the meridional movement of the rain band over West Africa and in the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  20. A tool for multi-scale modelling of the renal nephron

    PubMed Central

    Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.

    2011-01-01

    We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210

  1. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely: (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input; (2) the iModel Tool, a platform for users to upload their own models to compose; and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The Model Composition Tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a good starting point for beginners, and, for more advanced purposes, users will be able to access and employ models of the BioModels Database as well. PMID:24006914

  2. A parsimonious dynamic model for river water quality assessment.

    PubMed

    Mannina, Giorgio; Viviani, Gaspare

    2010-01-01

Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Model types range from detailed physical models to simplified conceptual models. A possible middle ground between detailed and simplified models is a parsimonious model, which represents the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on the data available for correct model application. When data are inadequate, it is necessary to use a simple river water quality model rather than a detailed one. This study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: one for water quantity and one for water quality. The model employs a river schematisation that divides the river into stretches according to the geometric characteristics and the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs: the channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess river water quality, the model employs four state variables: DO, BOD, NH₄, and NO. The model was applied to the Savena River (Italy), the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input and parameters was performed based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
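The channels-and-reservoirs schematisation can be sketched as a cascade of linear reservoirs: each reservoir spreads (disperses) the pollution wave, and the cascade as a whole delays and attenuates its peak. The routine below is an explicit-Euler illustration under invented parameters, not the paper's actual implementation:

```python
def route_reservoirs(inflow, n_reservoirs, k, dt=1.0):
    """Route an inflow series through n identical linear reservoirs in series.

    Each reservoir obeys dS/dt = q_in - S/k with outflow S/k; an explicit
    Euler step is used (stable here because dt < k)."""
    series = list(inflow)
    for _ in range(n_reservoirs):
        storage = 0.0
        out = []
        for q in series:
            storage += dt * (q - storage / k)
            out.append(storage / k)
        series = out
    return series

# An instantaneous pollution pulse entering the first reservoir:
hydrograph = route_reservoirs([10.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                              n_reservoirs=3, k=2.0)
```

The routed pulse peaks later and lower than the input pulse, which is exactly the delay-plus-dispersion behaviour the conceptual model is built to capture.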

  3. The cost of simplifying air travel when modeling disease spread.

    PubMed

    Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V

    2009-01-01

Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of the differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model in which all routes are modeled individually. We also compared the pipe model to a "gravity" model, in which the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in the specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
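The "pipe" simplification can be sketched as follows: instead of modeling each route, travellers departing an airport are allocated among destinations in proportion to each destination's share of arrivals. The airport totals below are invented for illustration:

```python
def pipe_model_flows(departures, arrivals):
    """Estimate origin-destination flows from airport totals alone.

    Travellers leaving airport i are allocated to destinations j != i in
    proportion to j's share of arrivals elsewhere (the 'pipe' idea), so no
    route-level ticket data are required."""
    flows = {}
    for i, dep in departures.items():
        denom = sum(a for j, a in arrivals.items() if j != i)
        for j, arr in arrivals.items():
            if j != i:
                flows[(i, j)] = dep * arr / denom
    return flows

# Invented daily totals for three airports:
departures = {"A": 100, "B": 50, "C": 50}
arrivals = {"A": 100, "B": 50, "C": 50}
flows = pipe_model_flows(departures, arrivals)
```

By construction the flows out of each airport sum to its departure total; what the simplification loses is any route-specific structure, which is why a few heavily structured routes are badly approximated.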

  4. Risk prediction models of breast cancer: a systematic review of model performances.

    PubMed

    Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin

    2012-05-01

A growing number of risk prediction models have been developed to estimate the breast cancer risk of individual women, but their performance is questionable. We therefore conducted a study to systematically review previous risk prediction models. The results of this review help to identify the most reliable model and indicate the strengths and weaknesses of each model to guide future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies that constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail and the Rosner and Colditz models were the seminal models, subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy might be due to a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive for measuring improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the improvement in performance of a newly developed model.
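The concordance statistic reported in the review can be computed directly: it is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case, with ties counted as one half. A sketch with invented risk scores:

```python
from itertools import product

def concordance(scores, outcomes):
    """Concordance (c) statistic for a binary outcome.

    Compares every case/non-case pair: a pair is concordant when the case
    has the higher predicted risk; ties contribute one half."""
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if c > n else 0.5 if c == n else 0.0
               for c, n in product(cases, controls))
    return wins / (len(cases) * len(controls))
```

A value of 0.5 corresponds to no discrimination (coin-flipping) and 1.0 to perfect ranking, which puts the review's reported range of 0.53-0.66 in context.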

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. A. Wasiolek

The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) validating the ERMYN model by corroborating it with published biosphere models, comparing conceptual models, mathematical models, and numerical results (Section 7).

  6. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and increased the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions and 1,000 x 1,000 km² in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.

  7. Intelligent Decisions Need Intelligent Choice of Models and Data - a Bayesian Justifiability Analysis for Models with Vastly Different Complexity

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.

    2016-12-01

    Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
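The BMA weighting at the heart of the justifiability analysis can be sketched as follows: given each model's log marginal likelihood of a data set, posterior model probabilities follow from Bayes' rule, and one row of the "model confusion matrix" is simply these weights evaluated on synthetic data generated by one of the models. The log-evidence values below are invented:

```python
import math

def bma_weights(log_evidences, priors=None):
    """Posterior model probabilities from log marginal likelihoods.

    A log-sum-exp shift keeps very negative log evidences from
    underflowing to zero."""
    n = len(log_evidences)
    priors = priors or [1.0 / n] * n
    shift = max(log_evidences)
    unnorm = [math.exp(le - shift) * p for le, p in zip(log_evidences, priors)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# One row of a confusion matrix: weights of models 1..3 given synthetic
# data generated by model 1 (invented log-evidence values).
row = bma_weights([-10.0, -12.0, -30.0])
```

A model whose synthetic data reliably puts most of the posterior weight back on itself is identifiable from the available data; smeared-out rows signal that the data cannot justify distinguishing that model from simpler ones.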

  8. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    PubMed

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

To consider the methods available to model Alzheimer's disease (AD) progression over time, to inform the structure and development of model-based evaluations and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with the models are model structure, the limited characterization of disease progression, and the use of a limited number of health states to capture events related to disease progression over time. None of the available models has been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  9. Nursing resources and responsibilities according to hospital organizational model for management of inflammatory bowel disease in Spain.

    PubMed

    Marín, Laura; Torrejón, Antonio; Oltra, Lorena; Seoane, Montserrat; Hernández-Sampelayo, Paloma; Vera, María Isabel; Casellas, Francesc; Alfaro, Noelia; Lázaro, Pablo; García-Sánchez, Valle

    2011-06-01

Nurses play an important role in the multidisciplinary management of inflammatory bowel disease (IBD), but little is known about this role and the associated resources. To improve knowledge of resource availability for health care activities and of the different organizational models for managing IBD in Spain, a cross-sectional study was conducted with data obtained by a questionnaire directed at Spanish Gastroenterology Services (GS). Five GS models were identified according to whether they have: no specific service for IBD management (Model A); an IBD outpatient office for physician consultations (Model B); a general outpatient office for nurse consultations (Model C); both Model B and Model C (Model D); or an IBD Unit (Model E), when the hospital has a Comprehensive Care Unit for IBD with telephone helpline and computer, including a Model B. Available resources and activities performed were compared according to GS model (chi-square test and test for linear trend). Responses were received from 107 GS: 33 Model A (31%), 38 Model B (36%), 4 Model C (4%), 16 Model D (15%) and 16 Model E (15%). The model in which nurses have the most resources and responsibilities is Model E. The more complete the organizational model, the more frequent the availability of nursing resources (educational material, databases, office, and specialized software) and responsibilities (management of walk-in appointments, provision of emotional support, health education, follow-up of drug treatment and treatment adherence) (p<0.05). In short, the more complete the organizational model for IBD management, the more resources and responsibilities nurses have. Development of these areas may improve patient outcomes. Copyright © 2011 European Crohn's and Colitis Organisation. Published by Elsevier B.V. All rights reserved.

  10. Template-free modeling by LEE and LEER in CASP11.

    PubMed

    Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung

    2016-09-01

For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method of a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was due to efficient server model screening. The average backbone accuracy of selected server models was similar to that of top 30% server models. The second factor was that a proper energy function along with our optimization method guided us, so that we successfully generated better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in the backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.

  11. Plausible combinations: An improved method to evaluate the covariate structure of Cormack-Jolly-Seber mark-recapture models

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.

    2013-01-01

Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
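The combinatorial growth of the candidate set described above is easy to see: with separate additive sub-models for survival (phi) and recapture (p), the model space is the cross product of all covariate subsets for each. The covariate names below are invented for illustration:

```python
from itertools import combinations, product

def candidate_submodels(covariates):
    """Every additive sub-model from a covariate pool: all subsets of the
    covariates, including the intercept-only (empty) sub-model."""
    subsets = [()]
    for r in range(1, len(covariates) + 1):
        subsets.extend(combinations(covariates, r))
    return subsets

# Illustrative (invented) covariate pools for survival and recapture:
phi_covariates = ["sex", "age", "mass"]
p_covariates = ["effort", "season"]

# The candidate set is the cross product of the two sub-model lists:
model_set = list(product(candidate_submodels(phi_covariates),
                         candidate_submodels(p_covariates)))
```

With only 3 survival and 2 recapture covariates the set already holds 2³ × 2² = 32 models, which is why strategies that fit only a searched subset of the space are attractive and why the search method itself matters.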

  12. Framework for Understanding Structural Errors (FUSE): A modular framework to diagnose differences between hydrological models

    USGS Publications Warehouse

    Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.

    2008-01-01

    The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN-90 source code for FUSE is available upon request from the lead author.
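
    The combinatorial construction at the heart of FUSE, building many unique structures by mixing interchangeable components of existing models, can be sketched as follows. The component names and choices here are illustrative stand-ins, not FUSE's actual modeling decisions.

```python
import itertools

# Hypothetical component choices; FUSE's real decisions differ in number
# and naming, but the combinatorial idea is the same: each structure is
# one selection from every component slot.
components = {
    "upper_layer": ["single_state", "tension_storage"],
    "lower_layer": ["power_law", "parallel_reservoirs"],
    "percolation": ["field_capacity", "saturated_area"],
    "surface_runoff": ["arno", "prms", "topmodel"],
}

structures = [dict(zip(components, combo))
              for combo in itertools.product(*components.values())]
print(len(structures))  # 2 * 2 * 2 * 3 = 24 unique structures
```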

  13. Moving alcohol prevention research forward, Part II: new directions grounded in community-based system dynamics modeling.

    PubMed

    Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller

    2018-02-01

    Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties (stocks, flows and feedbacks) capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can ultimately result in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
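
    A minimal stock-and-flow simulation illustrates the three foundational properties the abstract names: a stock, its flows, and a balancing feedback. The rates and the college-drinking interpretation below are hypothetical, not parameters from any fitted model.

```python
def simulate_stock(initial, inflow_rate, outflow_fraction, steps, dt=1.0):
    """Euler integration of a single stock with a constant inflow and an
    outflow proportional to the stock (a balancing feedback loop)."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        inflow = inflow_rate                 # constant flow into the stock
        outflow = outflow_fraction * stock   # feedback: drains faster as stock grows
        stock += dt * (inflow - outflow)
        history.append(stock)
    return history

# Hypothetical numbers: students taking up misuse vs. ceasing it per semester.
traj = simulate_stock(initial=100.0, inflow_rate=20.0,
                      outflow_fraction=0.25, steps=50)
print(round(traj[-1], 1))  # approaches the equilibrium 20 / 0.25 = 80
```

    Because the outflow grows with the stock, the trajectory settles at the level where inflow and outflow balance, which is the kind of non-linear, delayed behavior the methodology is designed to expose.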

  14. A Comparison of Two Mathematical Modeling Frameworks for Evaluating Sexually Transmitted Infection Epidemiology.

    PubMed

    Johnson, Leigh F; Geffen, Nathan

    2016-03-01

    Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
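
    The frequency-dependent assumption (incidence proportional to prevalence) can be made concrete with a minimal SIS-type model; the rates below are illustrative only, not fitted South African parameters, and a network model would replace the beta * i * (1 - i) term with partner-level bookkeeping.

```python
def sis_frequency_dependent(beta, gamma, prev0, steps, dt=0.01):
    """Frequency-dependent SIS dynamics: incidence is beta * S * I / N,
    i.e. proportional to prevalence, with recovery at rate gamma."""
    i = prev0  # prevalence as a fraction of the population
    for _ in range(steps):
        di = beta * i * (1.0 - i) - gamma * i
        i += dt * di
    return i

# Hypothetical rates; endemic prevalence should approach 1 - gamma/beta.
prev = sis_frequency_dependent(beta=0.5, gamma=0.2, prev0=0.01, steps=20000)
print(round(prev, 3))  # close to 1 - 0.2/0.5 = 0.6
```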

  15. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or their structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS) and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
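
    The area under the ROC curve used to compare MARS and ANN can be computed via the rank (Mann-Whitney) formulation; the scores and labels below are toy values, not the Mumbai data.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank formulation: the probability
    that a randomly chosen positive case outranks a random negative one,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical urban-growth probabilities against observed change cells.
scores = [0.9, 0.8, 0.35, 0.7, 0.2, 0.6, 0.4, 0.1]
labels = [1,   1,   0,    1,   0,   1,   0,   0]
print(roc_auc(scores, labels))  # 1.0: every changed cell outranks every unchanged one
```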

  16. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.

  17. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high-speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.
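
    The single-control-volume idealization used by network flow codes can be sketched as a lumped, isothermal mass balance between two vessels. The linear valve law and all constants here are illustrative assumptions, not the formulations used in NETFLOW.

```python
def transfer(m1, m2, v1, v2, rt, c, steps, dt=1e-4):
    """Lumped (control-volume) model of isothermal gas transfer between
    two vessels through a flow restriction, here idealized as a linear
    valve law mdot = c * (P1 - P2). Each vessel is a single control
    volume obeying the ideal gas law P = m * R * T / V."""
    for _ in range(steps):
        p1, p2 = m1 * rt / v1, m2 * rt / v2
        mdot = c * (p1 - p2)     # mass flow from vessel 1 to vessel 2
        m1 -= dt * mdot
        m2 += dt * mdot
    return m1, m2

# Hypothetical vessels: masses in kg, volumes in m^3, rt = R*T in J/kg.
m1, m2 = transfer(m1=1.0, m2=0.1, v1=0.01, v2=0.02,
                  rt=287.0 * 300.0, c=1e-9, steps=20000)
print(round(m1 + m2, 6))  # total mass is conserved: 1.1
```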

  18. Comparison of chiller models for use in model-based fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya; Haves, Philip

    Selecting the model is an essential step in model-based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools™, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
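
    The practical advantage of a model that is linear in its parameters, as noted for the Gordon-Ng model, is that ordinary least squares yields a closed-form fit with no iteration. The sketch below uses a generic one-regressor model with made-up numbers, not the actual Gordon-Ng functional form.

```python
from statistics import mean

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x in closed form. Any model
    that is linear in its parameters (as the Gordon-Ng chiller model is
    after algebraic rearrangement) can be fitted this way."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Illustrative data only, not actual chiller measurements:
xs = [275.0, 278.0, 280.0, 283.0, 285.0]
ys = [2.0 + 0.01 * x for x in xs]  # noise-free line for the sketch
a, b = fit_linear(xs, ys)
print(round(a, 6), round(b, 6))  # recovers intercept 2.0 and slope 0.01
```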

  19. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
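
    A non-random validation split of the kind described, where the held-out sites force extrapolation in an input variable, can be sketched with synthetic station records. The linear SWE rule and the precipitation-only "model" below are deliberate toys, not the study's data or models.

```python
from statistics import mean

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return mean((p - o) ** 2 for p, o in zip(pred, obs)) ** 0.5

# Hypothetical SNOTEL-like records: (winter temperature C, winter precip mm,
# April 1 SWE mm), with SWE generated from a simple linear rule for the sketch.
records = [(t, p, 0.5 * p - 10.0 * t) for t in (-8, -6, -4, -2, 0)
           for p in (400, 600, 800)]

# Non-random split: hold out the warmest sites, so the test set requires
# extrapolation in the temperature input.
train = [r for r in records if r[0] <= -4]
test = [r for r in records if r[0] > -4]

# A deliberately simple model: predict SWE from precipitation alone.
ratio = mean(s / p for _, p, s in train)
preds = [ratio * p for _, p, _ in test]
error = rmse(preds, [s for _, _, s in test])
print(round(error, 1))  # large error, because temperature was ignored
```

    A random split would hide this weakness, since warm sites would appear in both halves; the non-random split exposes it, which is the point of the validation design discussed above.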

  20. Geospace environment modeling 2008--2009 challenge: Dst index

    USGS Publications Warehouse

    Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wittberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.

    2013-01-01

    This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
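
    One common skill score for such model-observation comparisons is prediction efficiency; the abstract does not list the specific scores used, so treat this as a generic illustration with synthetic Dst traces rather than the challenge's actual metrics.

```python
from statistics import mean

def prediction_efficiency(model, obs):
    """Prediction efficiency skill score: 1 - MSE(model, obs) / Var(obs).
    A score of 1 is a perfect forecast; 0 means no more skillful than
    always predicting the observed mean."""
    mse = mean((m - o) ** 2 for m, o in zip(model, obs))
    obar = mean(obs)
    var = mean((o - obar) ** 2 for o in obs)
    return 1.0 - mse / var

# Hypothetical hourly Dst traces (nT) for one storm:
obs = [-10, -40, -90, -120, -80, -50, -30, -20]
model = [-15, -50, -80, -100, -90, -45, -35, -15]
print(round(prediction_efficiency(model, obs), 3))  # about 0.92 for this synthetic pair
```

    Because different scores reward different aspects of a forecast (timing, amplitude, variability), rankings can change with the metric, which is consistent with the paper's finding that model rankings vary widely by skill score.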
